Scout-AI uses named endpoints so you can refer to a configuration by a short name like `nano` or `deep`.
An endpoint is defined by a YAML file at `~/.scout/etc/AI/<endpoint>`.
Example:

```yaml
# ~/.scout/etc/AI/nano
backend: responses
model: gpt-5-nano
```
Then:

```shell
scout-ai llm ask -e nano "Say hi"
```
Typical keys
- `backend`: which backend adapter to use (e.g. `responses`, `openai`, `anthropic`, `ollama`, `vllm`, `openwebui`, `bedrock`)
- `model`: backend-specific model id
- `url`: server URL (for backends like `ollama`/`vllm`/`openwebui`)
Many additional keys are passed through to the backend.
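For a self-hosted backend, the `url` key points the endpoint at your own server. A minimal sketch, assuming an Ollama server on its default local address and a hypothetical locally pulled model named `llama3`:

```yaml
# ~/.scout/etc/AI/local
backend: ollama
url: http://localhost:11434   # assumed default Ollama address
model: llama3                 # hypothetical locally pulled model
```

You would then invoke it the same way as any other named endpoint, e.g. `scout-ai llm ask -e local "Say hi"`.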
Why this matters for Rbbt integrations
When you use workflows as tools, reproducibility improves if:
- the endpoint configuration is named and checked into team conventions
- the chat file is saved (instead of prompts being copied around by hand)