# co_ai: Collaborative AI Hypothesis Engine
Welcome to the documentation for co_ai, a modular LLM-powered framework designed to assist in scientific hypothesis generation, evaluation, and refinement. This project is inspired by the SAGE architecture proposed in arXiv:2502.18864 and aims to simulate a collaborative AI research team.
## 🔍 What is co_ai?

`co_ai` is an extensible agent-based pipeline framework built around a central Supervisor and a suite of intelligent agents. Each agent performs a distinct role, such as generating hypotheses, ranking them, reflecting on their quality, or evolving better ones, all while sharing state through a common memory and logging system. A minimal code sketch of this flow follows the list below.
The system is designed to:
- Generate high-quality hypotheses using goal-driven prompts
- Evaluate and refine outputs using ranked feedback and few-shot learning
- Tune itself over time using embedded prompt evaluations
- Persist context and decisions for future runs
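To make the architecture concrete, here is a minimal sketch of the Supervisor/agent pattern described above. All names here (`BaseAgent`, `Supervisor`, the `context` dict) are illustrative assumptions, not the actual `co_ai` API:

```python
# Minimal sketch of the Supervisor/agent pattern; class and method
# names are illustrative assumptions, not co_ai's real interfaces.

class BaseAgent:
    """An agent transforms the shared pipeline context and returns it."""
    def run(self, context: dict) -> dict:
        raise NotImplementedError


class GenerationAgent(BaseAgent):
    def run(self, context: dict) -> dict:
        goal = context["goal"]
        # Placeholder for a goal-driven LLM call.
        context["hypotheses"] = [f"Hypothesis about: {goal}"]
        return context


class RankingAgent(BaseAgent):
    def run(self, context: dict) -> dict:
        # Placeholder ranking: keep insertion order.
        context["ranked"] = list(context["hypotheses"])
        return context


class Supervisor:
    """Runs agents in order, threading shared state through each one."""
    def __init__(self, agents: list[BaseAgent]):
        self.agents = agents

    def run(self, goal: str) -> dict:
        context = {"goal": goal}
        for agent in self.agents:
            context = agent.run(context)
        return context


if __name__ == "__main__":
    result = Supervisor([GenerationAgent(), RankingAgent()]).run(
        "The USA is on the verge of defaulting on its debt"
    )
    print(result["ranked"])
```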
## 🧠 Key Features
- 🧩 Modular agent architecture (Generation, Ranking, Reflection, Evolution)
- 🧠 Vector memory store powered by PostgreSQL + pgvector
- 🔁 Context preservation across agents via memory tools
- 🎯 Prompt tuning via DSPy or Ollama-based evaluations
- ⚙️ Hydra configuration system for flexible runtime setups
- 📝 Logging with structured JSONL + emoji-tagged stages
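As a sketch of what the pgvector-backed memory store could look like, the snippet below persists hypothesis embeddings and retrieves nearest neighbours by cosine distance. The table name, schema, embedding dimension, and connection string are assumptions for illustration, not `co_ai`'s actual schema:

```python
# Illustrative pgvector-backed memory store. The table name, schema,
# embedding size, and DSN are assumptions, not co_ai's real schema.
import psycopg2

conn = psycopg2.connect("dbname=co_ai user=postgres")  # assumed DSN


def _to_vec(embedding: list[float]) -> str:
    """Format a Python list as a pgvector literal, e.g. '[0.1,0.2]'."""
    return "[" + ",".join(str(x) for x in embedding) + "]"


def init_schema() -> None:
    with conn.cursor() as cur:
        cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
        cur.execute(
            "CREATE TABLE IF NOT EXISTS hypotheses ("
            "  id SERIAL PRIMARY KEY,"
            "  text TEXT NOT NULL,"
            "  embedding vector(768))"
        )
    conn.commit()


def store(text: str, embedding: list[float]) -> None:
    """Persist a hypothesis together with its embedding."""
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO hypotheses (text, embedding) VALUES (%s, %s::vector)",
            (text, _to_vec(embedding)),
        )
    conn.commit()


def nearest(embedding: list[float], k: int = 5) -> list[str]:
    """Return the k stored hypotheses closest by cosine distance (<=>)."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT text FROM hypotheses "
            "ORDER BY embedding <=> %s::vector LIMIT %s",
            (_to_vec(embedding), k),
        )
        return [row[0] for row in cur.fetchall()]
```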
## 🚀 Example Use Case
You define a research goal (e.g., "The USA is on the verge of defaulting on its debt"). `co_ai` then spins up a pipeline to:
- Generate multiple hypotheses
- Reflect on their quality
- Rank and evolve them using internal feedback
- Store results, logs, prompts, and evaluations
- Optionally tune the prompts used in the process for the next iteration
Everything is modular and can be extended with custom agents, tools, and storage plugins.
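For instance, a custom agent might plug in like this; `BaseAgent`, `Supervisor`, and the agent classes are the illustrative ones from the sketch earlier on this page, not real `co_ai` interfaces:

```python
# Hypothetical custom agent; builds on the BaseAgent/Supervisor
# sketch from "What is co_ai?" above (illustrative names only).

class WebSearchAgent(BaseAgent):
    """Adds (stubbed) web evidence to the shared pipeline context."""
    def run(self, context: dict) -> dict:
        # A real implementation would call a utility from tools/.
        context["evidence"] = [f"Search result for: {context['goal']}"]
        return context

# The Supervisor is unchanged; the new agent simply joins the list:
pipeline = Supervisor([GenerationAgent(), WebSearchAgent(), RankingAgent()])
result = pipeline.run("The USA is on the verge of defaulting on its debt")
```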
## 📦 Project Structure
```
co_ai/
├── agents/          # Agent classes (generation, reflection, etc.)
├── memory/          # Memory and store definitions
├── logs/            # Structured logging system
├── tuning/          # Prompt tuning tools
├── tools/           # External API utilities (e.g., web search)
├── main.py          # Entry point
└── supervisor.py    # Pipeline orchestration
config/
prompts/
```
---
## 📚 Resources
* [GitHub Repository](https://github.com/ernanhughes/co-ai)
* [The SAGE Paper (arXiv)](https://arxiv.org/abs/2502.18864)
* [Prompt Tuning Overview](prompt_tuning.md)
* [Configuration Guide](configuration.md)
---
## 👨‍🔬 Why Use This?
`co_ai` isn't just another LLM wrapper; it's a framework designed to **amplify human creativity and reasoning** through a configurable, extensible AI assistant team. Whether you're testing theories, validating hypotheses, or generating structured research output, `co_ai` turns prompts into pipelines, and pipelines into progress.