# 🧰 Tools

`co_ai` ships with a suite of modular tools that agents access via dependency injection. These tools wrap shared functionality such as embeddings, prompt loading, logging, search, and evaluation.


## 🧠 Memory Tool

The `MemoryTool` manages pluggable stores, each handling a specific data type:

### Built-in Stores

| Store | Description |
|-------|-------------|
| `EmbeddingStore` | Caches and retrieves embeddings (via pgvector) |
| `HypothesisStore` | Stores and retrieves hypotheses per goal |
| `ContextStore` | Persists intermediate context state |
| `PromptLogger` | Logs and versions prompts |
| `ReportLogger` | Saves YAML reports per run |

Example usage:

```python
memory.hypotheses.store(goal, text, confidence, review, features)
memory.context.save(run_id, stage, context_dict)
```

You can access any store via:

```python
memory.get("hypotheses")
```
## 🔎 Web Search Tool

The `WebSearchTool` supports simple search using DuckDuckGo or a local instance of SearxNG.

### Example

```python
results = await WebSearchTool().search2("US debt ceiling history")
```

Each result is returned as a formatted string with title, snippet, and URL.

> ⚠️ DuckDuckGo is rate-limited. SearxNG is recommended for production use.


## 🧪 Prompt Loader

The `PromptLoader` loads prompts using one of four modes:

| Mode | Source |
|------|--------|
| `file` | Local prompt templates (`.txt`) |
| `static` | Hardcoded in YAML |
| `tuning` | Best version from memory tuning |
| `template` | Jinja2 templating with context injection |

```python
prompt = prompt_loader.load_prompt(cfg, context)
```

Prompts are formatted with context values, e.g. `{goal}`.
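The `{goal}` substitution behaves like standard Python string formatting; a minimal sketch, where the template text and goal value are invented for illustration:

```python
# An invented template, as a `static`-style prompt might define it.
template = "Generate three hypotheses for the goal: {goal}"

# Context values are substituted into the placeholders.
prompt = template.format(goal="reduce inference latency")
```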


โš–๏ธ Evaluation

The OllamaEvaluator scores refinements using an LLM (e.g. qwen2.5) running locally:

evaluation = evaluator.evaluate(original, proposal)
print(evaluation.score, evaluation.reason)

Used for tuning prompts and comparing generated text quality.
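The result exposes `score` and `reason`; a minimal sketch of how a tuning loop might compare two evaluations, assuming a simple dataclass shape (the `Evaluation` class and `pick_better` helper here are illustrative, not the co_ai API):

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    # Hypothetical result shape mirroring evaluation.score / evaluation.reason.
    score: float
    reason: str

def pick_better(original_eval: Evaluation, proposal_eval: Evaluation) -> Evaluation:
    # Keep whichever text the evaluator scored higher.
    return proposal_eval if proposal_eval.score > original_eval.score else original_eval

best = pick_better(
    Evaluation(6.0, "baseline"),
    Evaluation(8.5, "clearer and more specific"),
)
```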


## 📚 Template Utilities

Prompt templates live under `prompts/<agent>/filename.txt`.

You can render them using:

```python
from jinja2 import Template

Template(template_text).render(**context)
```

๐Ÿ” Embedding Tool

The get_embedding(text, cfg) helper uses the configured embedding model and caches results in the database.
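The cache-then-compute pattern can be sketched with an in-memory dict standing in for the pgvector-backed table; `get_embedding_cached` and the fake vector below are illustrative only, not the real helper:

```python
import hashlib

_EMBEDDING_CACHE: dict[str, list[float]] = {}

def _compute_embedding(text: str) -> list[float]:
    # Stand-in for the real model call: a deterministic fake 4-dim vector.
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:4]]

def get_embedding_cached(text: str) -> list[float]:
    # Compute once, then serve every later request for the same text from cache.
    if text not in _EMBEDDING_CACHE:
        _EMBEDDING_CACHE[text] = _compute_embedding(text)
    return _EMBEDDING_CACHE[text]

vec = get_embedding_cached("hello")
```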


๐Ÿ“ JSON Logger

Structured logging for every event in the pipeline:

logger.log("HypothesisStored", {"goal": goal, "text": text[:100]})

Each log is saved as a .jsonl file per run.
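The `.jsonl` convention is one JSON object per line; a self-contained sketch of the idea (the `log_event` helper is hypothetical, not the logger's API):

```python
import io
import json

def log_event(stream, event_type: str, data: dict) -> None:
    # One JSON object per line: the .jsonl convention.
    stream.write(json.dumps({"event": event_type, "data": data}) + "\n")

buf = io.StringIO()  # stands in for the per-run .jsonl file
log_event(buf, "HypothesisStored", {"goal": "g1", "text": "..."})
lines = buf.getvalue().splitlines()
parsed = json.loads(lines[0])
```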


## 🔧 Pluggable Stores

You can add custom stores via config:

```yaml
extra_stores:
  - co_ai.memory.MyCustomStore
```

Register them via:

```python
memory.register_store(MyCustomStore(...))
```
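The registry pattern behind `memory.get(...)` and `register_store(...)` can be sketched as follows; this is a minimal illustration, assuming each store exposes a `name` attribute, not the real `MemoryTool` implementation:

```python
class MemoryToolSketch:
    # Minimal sketch of a pluggable-store registry; the real MemoryTool
    # also wires up configuration and database sessions.
    def __init__(self):
        self._stores = {}

    def register_store(self, store) -> None:
        # Index the store by its declared name so agents can look it up.
        self._stores[store.name] = store

    def get(self, name: str):
        return self._stores[name]

class MyCustomStore:
    name = "custom"

memory = MemoryToolSketch()
store = MyCustomStore()
memory.register_store(store)
```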

## 🧩 Adding Your Own Tools

You can pass any tool to agents by extending the agent constructor and updating the supervisor.