AI

Cogno’s AI panel connects to a language model and gives it context from your active terminal — recent commands, their output, and optionally the running process tree. You can ask questions, get command suggestions, and run suggested commands directly.

When you send a message, Cogno attaches a snapshot of the active terminal:

  • The last N commands and their output (configurable)
  • The current working directory
  • The process tree of the active shell (optional)

The model responds with text and optionally suggests shell commands. Suggested commands can be run immediately or inserted into the input line.

Responses stream token by token as they arrive.

Cogno supports any OpenAI-compatible API and Ollama’s native API.
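As a sketch of what "OpenAI-compatible" means here, the snippet below builds a `/v1/chat/completions`-style request body that embeds a terminal snapshot (recent commands, their output, and the working directory) as a system message. The payload shape follows the standard OpenAI chat API; the exact way Cogno formats the snapshot is an assumption.

```python
import json

def build_chat_request(model, question, commands, cwd, max_output_chars=4000):
    """Build an OpenAI-compatible chat-completions payload that embeds a
    terminal snapshot as a system message (snapshot format is illustrative)."""
    context_lines = [f"cwd: {cwd}"]
    for cmd, output in commands:
        context_lines.append(f"$ {cmd}")
        context_lines.append(output[:max_output_chars])  # cap output size
    return {
        "model": model,
        "stream": True,  # responses stream token by token
        "messages": [
            {"role": "system", "content": "\n".join(context_lines)},
            {"role": "user", "content": question},
        ],
    }

req = build_chat_request(
    "llama3",
    "why did this command fail?",
    [("make", "make: *** No rule to make target 'all'.  Stop.")],
    "/home/user/project",
)
print(json.dumps(req, indent=2))
```

The same body can be POSTed to any OpenAI-compatible endpoint (for example Ollama's `http://localhost:11434/v1/chat/completions`); only the base URL and model name change between providers.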

On startup, Cogno automatically probes two local endpoints:

| Provider  | Default URL            |
| --------- | ---------------------- |
| Ollama    | http://localhost:11434 |
| LM Studio | http://localhost:1234  |

If a provider is found, it is added automatically and set as active. See the Config page for the full provider setup.
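As a rough sketch, a manually configured provider entry might look like the following TOML fragment. The file format, table name, and every key here are assumptions for illustration; the Config page has the authoritative schema.

```toml
# Hypothetical provider entry — all field names are illustrative assumptions.
[[providers]]
id       = "ollama-local"            # referenced by ai.active_provider
api      = "openai-compatible"       # or Ollama's native API
base_url = "http://localhost:11434"
model    = "llama3"
```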

| Setting                         | Default | Description                                            |
| ------------------------------- | ------- | ------------------------------------------------------ |
| ai.mode                         | auto    | auto activates AI when a usable provider is available  |
| ai.active_provider              |         | ID of the provider to use                              |
| ai.request.include_process_tree | false   | Include the process tree in the context                |
| ai.request.max_commands         | 8       | Maximum recent commands sent as context                |
| ai.request.max_output_chars     | 4000    | Maximum terminal output characters sent as context     |
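Assuming a TOML-style config file (the format is an assumption, the keys come from the table above), these settings might be written as:

```toml
[ai]
mode            = "auto"           # activate AI when a usable provider is available
active_provider = "ollama-local"   # ID of the provider to use (example value)

[ai.request]
include_process_tree = false   # omit the process tree from the context
max_commands         = 8       # recent commands sent as context
max_output_chars     = 4000    # cap on terminal output characters
```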

| Action       | Description         |
| ------------ | ------------------- |
| open_ai_chat | Toggle the AI panel |
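The open_ai_chat action can presumably be bound to a key; the binding syntax and key choice below are illustrative assumptions, not the documented format.

```toml
# Hypothetical keybinding — syntax and key are assumptions.
[keybindings]
"ctrl+shift+a" = "open_ai_chat"   # toggle the AI panel
```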