# AI

Cogno’s AI panel connects to a language model and gives it context from your active terminal: recent commands and their output, and optionally the running process tree. You can ask questions, get command suggestions, and run suggested commands directly.
## How it works

When you send a message, Cogno attaches a snapshot of the active terminal:
- The last N commands and their output (configurable)
- The current working directory
- The process tree of the active shell (optional)
The model responds with text and may suggest shell commands; suggested commands can be run immediately or inserted into the input line. Responses stream token by token as they arrive.
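Conceptually, the snapshot can be sketched as a small data structure with per-request truncation. This is a hypothetical Python sketch: the type and field names are assumptions, and only the truncation limits correspond to documented settings (`ai.request.max_commands`, `ai.request.max_output_chars`).

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CommandRecord:
    command: str
    output: str

@dataclass
class ContextSnapshot:
    cwd: str
    commands: List[CommandRecord]
    process_tree: Optional[str] = None  # only when ai.request.include_process_tree

def build_snapshot(history: List[CommandRecord], cwd: str,
                   max_commands: int = 8, max_output_chars: int = 4000,
                   process_tree: Optional[str] = None) -> ContextSnapshot:
    """Keep the last `max_commands` entries and cap total output characters."""
    recent = history[-max_commands:]
    budget = max_output_chars
    trimmed = []
    # Walk newest-first so the most recent output survives truncation.
    for rec in reversed(recent):
        out = rec.output[:budget]
        budget -= len(out)
        trimmed.append(CommandRecord(rec.command, out))
        if budget <= 0:
            break
    trimmed.reverse()
    return ContextSnapshot(cwd=cwd, commands=trimmed, process_tree=process_tree)
```

The character budget is spent newest-first, so when output is large the oldest commands are the ones dropped or truncated.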
## Providers

Cogno supports any OpenAI-compatible API and Ollama’s native API.
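The two API styles differ mainly in endpoint path. A minimal sketch of the request payloads, assuming the public OpenAI-compatible and Ollama chat endpoints (the function names are hypothetical; how Cogno actually builds its requests is not documented here):

```python
import json

def openai_chat_request(base_url: str, model: str, messages: list) -> tuple:
    """Chat request for any OpenAI-compatible server (e.g. LM Studio)."""
    url = f"{base_url}/v1/chat/completions"
    body = json.dumps({"model": model, "messages": messages, "stream": True})
    return url, body

def ollama_chat_request(base_url: str, model: str, messages: list) -> tuple:
    """Chat request for Ollama's native API."""
    url = f"{base_url}/api/chat"
    body = json.dumps({"model": model, "messages": messages, "stream": True})
    return url, body
```

With `"stream": True`, both APIs return the response incrementally, which is what enables the token-by-token streaming described above.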
On startup, Cogno automatically probes two local endpoints:
| Provider | Default URL |
|---|---|
| Ollama | `http://localhost:11434` |
| LM Studio | `http://localhost:1234` |
If a provider is found, it is added automatically and set as active. See the Config page for the full provider setup.
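The startup probe can be sketched as follows. This is a hypothetical sketch: `detect_providers`, the probe mechanics, and the timeout are assumptions; only the two default URLs come from the table above.

```python
import urllib.request

DEFAULT_ENDPOINTS = [
    ("ollama", "http://localhost:11434"),
    ("lmstudio", "http://localhost:1234"),
]

def probe(url, timeout=0.5, opener=urllib.request.urlopen):
    """Return True if something answers HTTP at the given base URL."""
    try:
        with opener(url, timeout=timeout):
            return True
    except OSError:
        return False

def detect_providers(endpoints=DEFAULT_ENDPOINTS, probe_fn=probe):
    """Return the first responding provider as (id, url), or None."""
    for name, url in endpoints:
        if probe_fn(url):
            return (name, url)
    return None
```

Injecting `probe_fn` keeps the detection logic testable without a live server listening on either port.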
## Configuration

| Setting | Default | Description |
|---|---|---|
| `ai.mode` | `auto` | `auto` activates AI when a usable provider is available |
| `ai.active_provider` | — | ID of the provider to use |
| `ai.request.include_process_tree` | `false` | Include the process tree in the context |
| `ai.request.max_commands` | `8` | Maximum number of recent commands sent as context |
| `ai.request.max_output_chars` | `4000` | Maximum number of terminal output characters sent as context |
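Taken together, these settings might appear in a config file like this. The TOML format and section layout are assumptions based on the dotted key names; see the Config page for the actual syntax and file location.

```toml
[ai]
mode = "auto"
active_provider = "ollama"

[ai.request]
include_process_tree = false
max_commands = 8
max_output_chars = 4000
```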
## Actions

| Action | Description |
|---|---|
open_ai_chat | Toggle the AI panel |