🤖 Open Source · Python · AI

TermMind: Building an AI Terminal Assistant with 7 LLM Providers

March 27, 2026 · 11 min read

The Problem

As a developer who uses multiple AI providers daily, I was tired of switching between ChatGPT in the browser, Claude in another tab, and Ollama in a separate terminal. Each tool had its own interface, its own context limitations, and none of them understood my codebase the way I needed.

Existing tools like Aider and Continue.dev are great, but they can be complex to set up and lock you into specific workflows: Aider is built around Git, and Continue.dev lives inside your editor. I wanted something simpler — a single CLI tool that works with any AI provider, understands my code, and doesn't require a PhD to configure.

Enter TermMind

TermMind is an open-source AI terminal assistant that lets you chat with 7 different LLM providers directly from your terminal. It's written in pure Python with zero external runtime dependencies, installs via pip, and works out of the box.

Check out TermMind on GitHub

Architecture: Multi-Provider Design

The core design challenge was supporting 7 different API formats (OpenAI, Anthropic, Google, Groq, Together.ai, OpenRouter, Ollama) without creating a mess of if/else statements. I solved this with a provider registry pattern.

Each provider is a class implementing a common interface: chat(messages, model, **kwargs). The registry maps provider names to classes, and a factory function creates the right one from config. This makes adding new providers trivial.

Here's the provider registry pattern in action:

# termmind/providers/registry.py

class ProviderRegistry:
    """Dynamic registry for LLM providers. Add a new provider
    by subclassing BaseProvider and calling registry.register()."""

    def __init__(self):
        self._providers = {}

    def register(self, name: str, cls):
        self._providers[name.lower()] = cls

    def create(self, name: str, api_key: str = None, **config):
        name = name.lower()
        if name not in self._providers:
            raise ValueError(f"Unknown provider: {name}. "
                             f"Available: {list(self._providers.keys())}")
        return self._providers[name](api_key=api_key, **config)

registry = ProviderRegistry()

# --- Provider registration (done in each provider module) ---
from termmind.providers.base import BaseProvider

class OpenAIProvider(BaseProvider):
    def chat(self, messages, model="gpt-4o", **kwargs):
        response = self.client.chat.completions.create(
            model=model, messages=messages, **kwargs
        )
        return response.choices[0].message.content

registry.register("openai", OpenAIProvider)

class ClaudeProvider(BaseProvider):
    def chat(self, messages, model="claude-sonnet-4-20250514", **kwargs):
        kwargs.setdefault("max_tokens", 1024)  # required by the Messages API
        response = self.client.messages.create(
            model=model, messages=messages, **kwargs
        )
        return response.content[0].text

registry.register("claude", ClaudeProvider)

class OllamaProvider(BaseProvider):
    def chat(self, messages, model="llama3", **kwargs):
        response = self.client.chat(model=model, messages=messages)
        return response.message.content

registry.register("ollama", OllamaProvider)

# Adding a new provider is just 3 lines:
# class GroqProvider(BaseProvider): ...
# registry.register("groq", GroqProvider)

This pattern means adding a new provider doesn't require touching any existing code. A contributor can write a new provider file, register it, and it just works. The registry also enables runtime provider switching โ€” users can change providers mid-session without restarting.
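To make the mid-session switching concrete, here is a minimal sketch of how a session object might wrap the registry. The `Session` class and `switch_provider` name are illustrative assumptions, not taken from the TermMind source; the point is that only the provider object is swapped while conversation history survives.

```python
class Session:
    """Hypothetical sketch: a chat session that can swap providers
    mid-conversation using the registry shown above."""

    def __init__(self, registry, provider_name="openai", api_key=None):
        self.registry = registry
        self.provider = registry.create(provider_name, api_key=api_key)
        self.history = []  # messages survive provider switches

    def switch_provider(self, name, api_key=None):
        # Only the provider object is replaced; the conversation
        # history carries over to the new backend unchanged
        self.provider = self.registry.create(name, api_key=api_key)

    def ask(self, text, model=None):
        self.history.append({"role": "user", "content": text})
        kwargs = {"model": model} if model else {}
        reply = self.provider.chat(self.history, **kwargs)
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

Because every provider exposes the same chat() signature, the session never needs to know which backend it is talking to.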

Key Features

Code Memory Index — TermMind indexes your project's functions, classes, and imports. When you ask "what does this function do?", it has the context already. No copy-pasting code into chat.
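For Python sources, this kind of indexing can be done with the standard library's ast module. The sketch below is my own minimal version of the idea, not TermMind's actual indexer, and the `index_source` name is an assumption:

```python
import ast

def index_source(source: str) -> dict:
    """Hypothetical sketch of a code-memory indexer: collect the
    functions, classes, and imports from a Python source string."""
    tree = ast.parse(source)
    index = {"functions": [], "classes": [], "imports": []}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            index["functions"].append(node.name)
        elif isinstance(node, ast.ClassDef):
            index["classes"].append(node.name)
        elif isinstance(node, ast.Import):
            index["imports"].extend(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom):
            index["imports"].append(node.module or "")
    return index
```

Running this over each file at startup gives the assistant a symbol map it can consult before every query.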

Diff Engine — Shows exactly what changed in your files using unified diffs with color coding. Every AI edit is reviewable before committing.

# termmind/diff_engine.py — simplified diff generation

import difflib

def generate_unified_diff(original: str, modified: str,
                          filename: str = "file.py") -> str:
    """Generate a unified diff between two file states
    (color coding is applied when the diff is rendered)."""
    old_lines = original.splitlines()
    new_lines = modified.splitlines()
    diff = difflib.unified_diff(
        old_lines, new_lines,
        fromfile=f"a/{filename}", tofile=f"b/{filename}",
        lineterm=""
    )
    return "\n".join(diff)

# Output example:
# --- a/parser.py
# +++ b/parser.py
# @@ -12,6 +12,8 @@
#  def parse_command(text: str) -> tuple:
#      """Parse user input into (command, args)."""
#      parts = text.strip().split(maxsplit=1)
# +    if not parts:
# +        return ("help", [])
#      cmd = parts[0].lower()
#      args = parts[1] if len(parts) > 1 else ""

Session Recorder & Replay — Records entire coding sessions with timestamps. You can replay them step by step or export as HTML timelines. This is a feature I haven't seen in any competing tool.
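The core record-and-replay loop is simple enough to sketch. This is my own illustration of the idea, assuming timestamped events stored in order; the class and method names are hypothetical, and the real recorder's HTML export is omitted:

```python
import json
import time

class SessionRecorder:
    """Hypothetical sketch: append timestamped events, then replay
    them in order with the original inter-event delays."""

    def __init__(self):
        self.events = []

    def record(self, kind: str, payload: str):
        # kind might be "user", "assistant", or "file_edit"
        self.events.append({"t": time.time(), "kind": kind,
                            "payload": payload})

    def replay(self):
        # Yield (delay, kind, payload); a caller can sleep on the
        # delay to reproduce the session's original pacing
        for prev, cur in zip([None] + self.events[:-1], self.events):
            delay = 0.0 if prev is None else cur["t"] - prev["t"]
            yield delay, cur["kind"], cur["payload"]

    def export_json(self) -> str:
        # Serialized form that an HTML timeline could be built from
        return json.dumps(self.events, indent=2)
```

Storing raw timestamps rather than deltas keeps recording cheap and pushes all pacing logic into the replayer.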

ELI5 Mode — Toggle simplified explanations for when AI responses are too technical. Great for learning new concepts.

# termmind/features/eli5.py

ELI5_SYSTEM_PROMPT = """You are explaining a technical concept to a curious
10-year-old. Use simple words, relatable analogies, and avoid jargon.
If the user asks about recursion, explain it like a mirror facing another
mirror. If they ask about APIs, explain it like ordering food at a
restaurant. Keep it fun and accurate."""

def eli5_transform(provider, original_query: str) -> str:
    """Re-ask the question through the ELI5 lens."""
    # The ELI5 system prompt replaces the system message for this
    # call, causing the model to simplify its answer
    messages = [
        {"role": "system", "content": ELI5_SYSTEM_PROMPT},
        {"role": "user", "content": f"Explain this simply:\n\n{original_query}"}
    ]
    return provider.chat(messages)  # Returns the simplified version

Cost Optimizer — Tracks per-request and session costs across providers. Shows which provider is cheapest for your usage pattern. Budget warnings prevent unexpected bills.

# termmind/features/cost_optimizer.py

class CostTracker:
    """Track and optimize LLM spending across providers."""

    # Pricing per 1M tokens (as of 2026)
    PRICING = {
        "gpt-4o":      {"input": 2.50,  "output": 10.00},
        "gpt-4o-mini": {"input": 0.15,  "output": 0.60},
        "claude-sonnet-4-20250514": {"input": 3.00, "output": 15.00},
        "claude-haiku-4-20250414":  {"input": 0.80, "output": 4.00},
        "gemini-2.0-flash": {"input": 0.075, "output": 0.30},
        "llama3":       {"input": 0.00,  "output": 0.00},  # Local!
    }

    def __init__(self, budget_usd: float = 10.0):
        self.budget = budget_usd
        self.session_costs: dict[str, float] = {}
        self.total_spent = 0.0

    def record(self, model: str, input_tokens: int, output_tokens: int):
        pricing = self.PRICING.get(model, {"input": 0, "output": 0})
        cost = (input_tokens * pricing["input"] / 1_000_000
              + output_tokens * pricing["output"] / 1_000_000)
        self.session_costs[model] = self.session_costs.get(model, 0) + cost
        self.total_spent += cost
        if self.total_spent > self.budget * 0.8:
            print(f"⚠️  Budget warning: ${self.total_spent:.2f} "
                  f"of ${self.budget:.2f} spent")

    def cheapest_for(self, task_type: str = "chat") -> str:
        """Recommend the cheapest model for a given task type."""
        # Rank by combined input + output price; task_type is a hook
        # for future weighting (e.g. output-heavy generation tasks)
        return min(self.PRICING.items(),
                   key=lambda x: x[1]["input"] + x[1]["output"])[0]

Inline Doc Preview — Type /docs function_name and see the docstring, signature, parameters, and return type — parsed from Python, JavaScript, TypeScript, Go, Rust, and Java.
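For the Python path, the preview can again lean on the ast module. The sketch below is my own illustration covering only Python, with a hypothetical `doc_preview` helper; the multi-language support in TermMind would need per-language parsers:

```python
import ast

def doc_preview(source: str, name: str):
    """Hypothetical sketch of /docs for Python sources: find a
    function by name, show its signature line and docstring."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if (isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
                and node.name == name):
            args = ", ".join(a.arg for a in node.args.args)
            ret = f" -> {ast.unparse(node.returns)}" if node.returns else ""
            doc = ast.get_docstring(node) or "(no docstring)"
            return f"def {name}({args}){ret}\n{doc}"
    return None  # symbol not found in this source
```

Parsing the AST rather than importing the module means the preview works even on code with unmet dependencies or side effects at import time.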

Testing: 227 Tests, Zero Failures

I wrote 13 test files covering every module. The project has 227 passing tests with zero failures. Tests cover API clients (mocked), command parsing, file operations, context management, the diff engine, snippets, and more.

$ pytest tests/ -v --tb=short

tests/test_providers.py ...........                          [  5%]
tests/test_diff_engine.py ............                      [ 11%]
tests/test_commands.py ............................         [ 23%]
tests/test_context.py ................                      [ 29%]
tests/test_file_ops.py ...............                      [ 36%]
tests/test_cost_optimizer.py .........                     [ 40%]
tests/test_session_recorder.py .............                [ 47%]
tests/test_eli5.py ........                                [ 50%]
tests/test_code_memory.py ................................. [ 65%]
tests/test_snippets.py .....................                [ 73%]
tests/test_docs_preview.py ..................              [ 81%]
tests/test_config.py .............                         [ 88%]
tests/test_integration.py ........................         [100%]

============================ 227 passed in 3.42s ===============================

All external API calls are mocked using unittest.mock, so the test suite runs completely offline. Each test file targets a single module, following the testing pyramid: fast unit tests at the base, a handful of integration tests at the top.
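As an illustration of the mocking approach, here is a sketch of what an offline provider test might look like. The provider class below is a minimal stand-in mirroring the chat() method shown earlier, not the actual TermMind test code:

```python
from unittest.mock import MagicMock

class OpenAIProvider:
    """Minimal stand-in mirroring the provider shown earlier."""

    def __init__(self, client):
        self.client = client

    def chat(self, messages, model="gpt-4o", **kwargs):
        response = self.client.chat.completions.create(
            model=model, messages=messages, **kwargs
        )
        return response.choices[0].message.content

def test_chat_is_fully_mocked():
    # The SDK client is replaced by a MagicMock, so no network
    # request can ever be made during the test run
    client = MagicMock()
    choice = MagicMock()
    choice.message.content = "mocked reply"
    client.chat.completions.create.return_value.choices = [choice]

    provider = OpenAIProvider(client)
    out = provider.chat([{"role": "user", "content": "hi"}])

    assert out == "mocked reply"
    client.chat.completions.create.assert_called_once()
```

Injecting the client through the constructor is what makes this kind of test trivial: the production code path and the test path are identical except for the object passed in.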

What Makes It Different

Unlike Aider (which is tightly coupled to Git workflows) or Continue.dev (which lives inside VS Code and JetBrains), TermMind works in any terminal on any OS. It supports 7 providers out of the box, and features like session recording and cost optimization simply don't exist elsewhere.

Try It

Install with a single command:

pip install termmind
termmind

Pick a provider, enter your API key (or use Ollama for free, local inference), and start coding with AI. That's it.

github.com/rudra496/termmind

Connect

Back to Blog