# Installation Guide
This guide walks you through setting up and running the co_ai
framework locally.
## Requirements
- Python 3.8+
- PostgreSQL with the pgvector extension
- Ollama (for local LLM inference)
- Poetry, or standard `pip` + `venv`
- Optional: Docker (for running PostgreSQL locally)
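A quick throwaway check (not part of the framework) to confirm your interpreter meets the Python requirement:

```python
import sys

# co_ai targets Python 3.8+; fail fast if the interpreter is older.
if sys.version_info < (3, 8):
    raise SystemExit(f"Python 3.8+ required, found {sys.version.split()[0]}")
print("Python version OK:", sys.version.split()[0])
```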
## 1. Clone the Repository

```bash
git clone https://github.com/ernanhughes/co-ai.git
cd co-ai
```
---
## 2. Create a Virtual Environment
**Using `venv`:**
```bash
python3 -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
```

**Or with Poetry:**

```bash
poetry install
poetry shell
```
## 3. Install Dependencies

```bash
pip install -r requirements.txt
```

Or using `pyproject.toml`:

```bash
pip install .
```
## 4. Set Up PostgreSQL + pgvector

**Option A: Using Docker**

```bash
docker run --name coai-db -p 5432:5432 \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_DB=coai \
  ankane/pgvector
```
**Option B: Local Installation**

- Install PostgreSQL.
- Install pgvector.
- Create the database:

  ```bash
  createdb coai
  ```

- Enable the extension inside the database:

  ```sql
  CREATE EXTENSION vector;
  ```

- Run the schema:

  ```bash
  psql -U postgres -d coai -f schema.sql
  ```
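Whichever option you chose, the app needs matching connection details. A sketch of a standard PostgreSQL DSN built from the values above (the environment-variable names here are illustrative defaults, not necessarily what co_ai reads; check `config/` for the actual keys):

```python
import os

# Defaults mirror the Docker/local setup above; override via env vars if yours differ.
# NOTE: these variable names are illustrative, not confirmed co_ai config keys.
user = os.getenv("POSTGRES_USER", "postgres")
password = os.getenv("POSTGRES_PASSWORD", "postgres")
host = os.getenv("POSTGRES_HOST", "localhost")
port = os.getenv("POSTGRES_PORT", "5432")
db = os.getenv("POSTGRES_DB", "coai")

dsn = f"postgresql://{user}:{password}@{host}:{port}/{db}"
print(dsn)
```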
## 5. Install & Run Ollama

```bash
ollama run qwen:latest
```

Or for smaller models:

```bash
ollama run llama2
```

Make sure Ollama is running on `http://localhost:11434`.
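To verify the server is up, you can query Ollama's `/api/tags` endpoint, which lists locally pulled models. A standalone check that returns `None` when the server is unreachable:

```python
import json
import urllib.request
from urllib.error import URLError

def list_ollama_models(base_url="http://localhost:11434", timeout=5):
    """Return the names of locally pulled models, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            return [m["name"] for m in json.load(resp).get("models", [])]
    except (URLError, OSError):
        return None

models = list_ollama_models()
if models is None:
    print("Ollama is not reachable on http://localhost:11434")
else:
    print("Available models:", models)
```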
## 6. Run the App

```bash
python co_ai/main.py goal="The USA is on the verge of defaulting on its debt"
```

Or with CLI args:

```bash
python co_ai/main.py --config-name=config goal="My research goal here"
```
## Notes

- Logs are stored in `logs/` as structured JSONL.
- Prompts are saved in `prompts/` and tracked in the database.
- You can inspect all configuration using Hydra, or customize each agent via `config/`.
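Since the logs are line-delimited JSON, they are easy to inspect programmatically. A minimal reader sketch (the field names inside each event are whatever co_ai emits, and the log filename below is hypothetical):

```python
import json
from pathlib import Path

def read_jsonl(path):
    """Parse a JSONL file into a list of dicts, one per non-empty line."""
    return [
        json.loads(line)
        for line in Path(path).read_text().splitlines()
        if line.strip()
    ]

# Example (hypothetical filename): events = read_jsonl("logs/run.jsonl")
```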
## You're Ready!
You now have a full pipeline for running research-style hypothesis generation, evaluation, and prompt tuning.