REEN is a technology platform for scientific research and engineering development. The platform has three sections: Radar (a scientific paper feed), Library (document analysis with AI-generated knowledge graphs), and Engineering (Gantt planning, AI Conference, Ex-Help). AI Conferences enable multi-party conversations between users and multiple AI models, powered by local model subscriptions.
Version: 2.3.0 | Last Updated: 2026-03-05
Base URL: https://backend.reen.tech
Authentication: Authorization: Bearer <token> header
Always use the backend domain (backend.reen.tech, not reen.tech): the frontend domain does not proxy API calls and will return 500 errors.
REEN provides a native MCP (Model Context Protocol) server that gives AI agents direct access to plans, tasks, and progress tracking.
# Clone and build
git clone https://github.com/rhoe-llc-fz/reen-mcp-server.git
cd reen-mcp-server
npm install && npm run build
Add to .mcp.json in your project root:
{
  "mcpServers": {
    "reen": {
      "command": "node",
      "args": ["/path/to/reen/mcp-server/dist/index.js"],
      "env": {
        "REEN_API_TOKEN": "reen_YOUR_TOKEN_HERE"
      }
    }
  }
}
| Tool | Description |
|---|---|
| whoami | Get current authenticated user info |
| list_plans | List all plans (summary or full detail) |
| get_plan | Get a plan by ID (summary: narrative + compact tree, or full with all details) |
| create_plan | Create a new Gantt plan |
| update_plan | Update plan title, description, status, progress (+ change_reason, change_evidence for audit log) |
| delete_plan | Delete a plan |
| create_task | Create a top-level task (phase) |
| create_subtask | Create a subtask under a task |
| update_task | Update task title, status, description, progress (+ change_reason, change_evidence for audit log) |
| update_task_dates | Update task start/due dates |
| delete_task | Delete a task |
| reorder_task | Move a task to a new position within its sibling group |
| get_plan_progress | Get progress for all tasks in a plan |
| get_narrative | Get the narrative text content of a plan |
| update_narrative | Update the narrative text content of a plan |
| list_exhelp | List all Ex-Help requests for a plan |
| create_exhelp | Create a new Ex-Help request |
| update_exhelp | Update Ex-Help request (title, problem, answer, status) |
| get_exhelp_pack | Generate a context pack for an Ex-Help request |
| delete_exhelp | Delete an Ex-Help request |
| share_exhelp | Generate a public share link (7-day TTL) |
| list_plan_files | List files attached to a plan (narrative or exhelp) |
| list_conferences | List all conferences for the current user |
| get_conference_initial_prompt | Get the initial system prompt of a conference |
| update_conference_initial_prompt | Update the initial system prompt of a conference |
| list_artifacts | List all artifacts for a plan |
| create_artifact | Create a new artifact (note/file) in a plan |
| update_artifact | Update an artifact's title or content |
| delete_artifact | Delete an artifact (soft delete) |
| Strategic Plans | |
| list_strategic_plans | List all strategic plans for the current user |
| create_strategic_plan | Create a new strategic plan (free-form canvas) |
| get_strategic_plan | Get a strategic plan with all cards and edges |
| update_strategic_plan | Update title or description |
| delete_strategic_plan | Delete a strategic plan (soft delete) |
| add_strategic_plan_card | Add a plan card to the canvas |
| remove_strategic_plan_card | Remove a card from the canvas |
| add_strategic_plan_edge | Add an edge (arrow) between two cards |
| Research (Library) | |
| research_upload_book | Upload a document for analysis (title, text, author, domain, language) |
| research_list_books | List all documents in the user's library with processing status |
| research_get_book | Get a document's knowledge graph (cards + edges). Only when status = completed |
| research_get_book_status | Check processing status and progress |
| research_get_segments | Get text segments + expected output schema. Only when status = segments_ready |
| research_submit_analysis | Submit analysis results (cards + edges) from local AI model |
| research_delete_book | Delete a document from the library |
| Conferences | |
| create_conference | Create a new AI conference |
| read_conference_messages | Read recent messages from a conference (default: last 50) |
| send_conference_message | Send a message to a conference (supports @mentions) |
| update_conference_agents | Set active AI models (claude, gpt, gemini, grok) |
| Variable | Required | Description |
|---|---|---|
| REEN_API_TOKEN | Yes | API token from Settings → API Tokens |
| REEN_API_URL | No | Backend URL (default: https://backend.reen.tech) |
Once configured, AI agents can use REEN tools natively:
# In Claude Code conversation:
"List my plans" → calls list_plans
"Create a new plan" → calls create_plan
"Mark task X as done" → calls update_task with status: "done"
"What's my progress?" → calls get_plan_progress
Get your API token via the REEN web interface (Settings → API Tokens).
Using the token:
curl -H "Authorization: Bearer reen_YOUR_TOKEN_HERE" \
https://backend.reen.tech/api/gant/plans
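The same authenticated call can be sketched in Python using only the standard library. The helper name is hypothetical; the token placeholder and endpoint are the ones shown above, and sending is left to `urllib.request.urlopen`:

```python
import json
import urllib.request

BASE_URL = "https://backend.reen.tech"

def build_request(path, token, method="GET", body=None):
    """Build an authenticated urllib Request for the REEN API."""
    data = json.dumps(body).encode("utf-8") if body is not None else None
    req = urllib.request.Request(BASE_URL + path, data=data, method=method)
    req.add_header("Authorization", f"Bearer {token}")
    if body is not None:
        req.add_header("Content-Type", "application/json")
    return req

# List plans; pass req to urllib.request.urlopen(req) to actually send it.
req = build_request("/api/gant/plans", "reen_YOUR_TOKEN_HERE")
```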
A project is a container for plans. Path structure: {access}/{username}/{project_name}
GET /api/gant/projects
Response:
{
  "projects": [
    {
      "access": "private",
      "username": "john.smith",
      "project_name": "my-project",
      "created_at": "2026-01-31T10:00:00Z"
    }
  ]
}
POST /api/gant/projects
Request Body:
{
"access": "private",
"project_name": "my-new-project",
"description": "Project description (optional)"
}
Response:
{
"success": true,
"project_path": "private/john.smith/my-new-project"
}
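The `project_path` returned above follows the `{access}/{username}/{project_name}` structure; a trivial helper (the function name is hypothetical, for illustration only):

```python
def project_path(access, username, project_name):
    """Compose the {access}/{username}/{project_name} path used by the API."""
    return f"{access}/{username}/{project_name}"
```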
DELETE /api/gant/projects/{access}/{username}/{project_name}
A plan is a structured set of tasks and subtasks that follows the Gant protocol.
GET /api/gant/plans?project_path={project_path}
Query Parameters:
project_path (optional) — filter by project

POST /api/gant/plans
Minimal Request Body:
{
  "project_path": "private/john.smith/my-project",
  "title": "Q1 Development Plan",
  "description": "Development plan for Q1 2026",
  "phases": [
    {
      "id": "phase-1",
      "title": "Phase 1: Setup",
      "description": "Infrastructure setup",
      "status": "planned",
      "subtasks": [
        {
          "title": "Setup CI/CD pipeline",
          "description": "Configure GitHub Actions",
          "status": "planned"
        }
      ]
    }
  ]
}
Each phase/subtask also accepts an optional briefing field:
"briefing": "What: CI/CD setup\nHow: GitHub Actions config\nWhy: Automated deployment"
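A small helper for composing that briefing string (the helper name is hypothetical):

```python
def briefing(what, how, why):
    """Compose the What/How/Why briefing string for a phase or subtask."""
    return f"What: {what}\nHow: {how}\nWhy: {why}"
```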
GET /api/gant/timeline/{plan_id}
GET /api/gant/progress/{plan_id}/{task_id}
Response:
{
  "task_id": "phase-1",
  "status": "in-progress",
  "progress": 0.5,
  "notes": [
    "[2026-01-31 10:00] phase-1: 30% — Started CI/CD setup",
    "[2026-01-31 14:00] phase-1: 50% — Docker config ready"
  ]
}
DELETE /api/gant/plans/{plan_id}
POST /api/gant/task
Request Body:
{
"plan_id": "plan-2026-01-31-abc123",
"title": "Configure Docker",
"start_date": "2026-02-01",
"end_date": "2026-02-15",
"status": "planned",
"position": 0
}
position is optional (0-based index). Omit to append at the end. If specified, existing tasks at that position and below are shifted.
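The shifting behavior can be modeled locally with a plain list of sibling tasks (a sketch of the semantics, not the server implementation):

```python
def insert_at(siblings, new_task, position=None):
    """Omit position to append; otherwise insert at the 0-based index,
    shifting existing tasks at that position and below."""
    if position is None:
        return siblings + [new_task]
    return siblings[:position] + [new_task] + siblings[position:]
```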
POST /api/gant/subtask
Request Body:
{
"plan_id": "plan-2026-01-31-abc123",
"task_id": "task-abc12345",
"title": "Write Dockerfile",
"start_date": "2026-02-01",
"end_date": "2026-02-05",
"status": "planned",
"position": 0
}
PATCH /api/gant/tasks/{task_id}
Request Body (all fields optional):
{
"title": "New title",
"description": "Updated description",
"status": "in-progress",
"progress": 0.5,
"position": 2,
"change_reason": "Why this change was made (recorded in audit log)",
"change_evidence": ["ref-1", "ref-2"]
}
Valid statuses: planned, in-progress, done, blocked, cancelled. All changes are automatically recorded in the audit log via PostgreSQL triggers.
GET /api/gant/plans/{plan_id}/events
Returns chronological audit log of all plan and task changes. Events are automatically generated by PostgreSQL AFTER UPDATE triggers — no manual logging needed.
Query Parameters:
task_id — filter by specific task
event_type — status_change or field_change
field — filter by field name (title, status, description, etc.)
actor — filter by author
limit (default 200), offset (default 0)

Response:
{
  "events": [
    {
      "id": "evt_abc123",
      "plan_id": "argus-...",
      "task_id": "task-...",
      "entity_type": "task",
      "event_type": "status_change",
      "field": "status",
      "old_value": "planned",
      "new_value": "in-progress",
      "actor": "john.smith",
      "actor_type": "human",
      "reason": "Starting work",
      "task_title": "Task Name",
      "created_at": "2026-03-05T12:00:00Z"
    }
  ],
  "count": 42
}
POST /api/gant/tasks/reorder
Move a task to a new position within its sibling group (same parent). Sibling tasks are shifted automatically.
Request Body:
{
"task_id": "task-abc12345",
"position": 0
}
Response:
{
"success": true,
"task": { "id": "task-abc12345", "position": 0, ... }
}
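The reorder semantics can be modeled the same way as task insertion (a sketch of the behavior, not the server's code):

```python
def reorder(siblings, task_id, position):
    """Move task_id to a new 0-based position within its sibling group;
    the remaining siblings shift automatically."""
    rest = [t for t in siblings if t != task_id]
    return rest[:position] + [task_id] + rest[position:]
```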
When executing a plan through an AI agent (Claude Code, GPT-5 Pro):
Starting a task: set status: "in-progress", progress: 0.0
Completing a task: set status: "done", progress: 1.0
Progress note format: [YYYY-MM-DD HH:mm] task_id: N% — Brief description
Example:
[2026-01-31 10:00] phase-1: 30% — Dockerfile created
[2026-01-31 11:00] phase-1: 50% — Docker compose ready
[2026-01-31 12:00] phase-1: 100% — CI/CD pipeline working
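Notes in this format can be generated with a small helper (the function name is hypothetical):

```python
from datetime import datetime

def progress_note(task_id, percent, description, ts):
    """Format a note as [YYYY-MM-DD HH:mm] task_id: N% — description."""
    return f"[{ts.strftime('%Y-%m-%d %H:%M')}] {task_id}: {percent}% — {description}"
```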
Task statuses:
planned — Scheduled
in-progress — In progress
done — Completed
blocked — Blocked

Progress values:
0.0 — 0% (start)
0.3 — 30% (first milestone)
0.5 — 50% (half done)
1.0 — 100% (completed)

# Step 1: Create project
curl -X POST \
-H "Authorization: Bearer reen_YOUR_TOKEN" \
-H "Content-Type: application/json" \
-d '{"access":"private","project_name":"q1-2026"}' \
https://backend.reen.tech/api/gant/projects
# Step 2: Create plan
curl -X POST \
-H "Authorization: Bearer reen_YOUR_TOKEN" \
-H "Content-Type: application/json" \
  -d '{
    "project_path": "private/john.smith/q1-2026",
    "title": "Q1 Development",
    "phases": [{
      "id": "setup",
      "title": "Setup Infrastructure",
      "status": "planned",
      "subtasks": [{
        "title": "Configure CI/CD",
        "status": "planned"
      }]
    }]
  }' \
https://backend.reen.tech/api/gant/plans
# Get plan timeline
curl -H "Authorization: Bearer reen_YOUR_TOKEN" \
https://backend.reen.tech/api/gant/timeline/plan-2026-01-31-abc123
# Get specific task progress
curl -H "Authorization: Bearer reen_YOUR_TOKEN" \
https://backend.reen.tech/api/gant/progress/plan-2026-01-31-abc123/phase-1
Each plan can have a narrative — a free-form text (Markdown) describing the plan's context, goals, and notes.
GET /api/gant/plans/{plan_id}/narrative
Response:
{
"narrative": "# Project Overview\n\nThis plan covers..."
}
PUT /api/gant/plans/{plan_id}/narrative
{
"narrative": "# Updated narrative\n\nNew content here..."
}
Response:
{
"success": true,
"narrative": "# Updated narrative\n\nNew content here..."
}
Ex-Help provides AI consultation requests linked to plans. Create a request describing a problem, generate a context pack, and paste back the AI answer.
GET /api/gant/exhelp/{plan_id}
Response:
{
  "exhelp": [
    {
      "id": "exh_abc123",
      "title": "Layout issue in dashboard",
      "problem": "The sidebar overlaps...",
      "answer": "",
      "status": "draft",
      "created_at": "2026-02-15T10:00:00Z"
    }
  ]
}
POST /api/gant/exhelp/{plan_id}
{
"title": "New request",
"problem": "Describe the problem here..."
}
PATCH /api/gant/exhelp/{exhelp_id}
{
"title": "Updated title",
"problem": "Updated problem description",
"answer": "AI response pasted here",
"initial_prompt": "System prompt for external AI...",
"status": "answered"
}
The initial_prompt field provides system-level instructions for external AI. When a share link is generated, this field becomes the primary "# Task" section in the public pack.
Statuses:
draft — Initial state, being composed
sent — Sent for AI consultation
answered — AI answer received

GET /api/gant/exhelp/{exhelp_id}/pack?format=md
Generates a context pack containing the problem description, plan tasks, and relevant files. Available in md (Markdown) or json format.
Generate a time-limited public URL for sharing an Ex-Help request with external AI agents (no authentication required).
POST /api/gant/exhelp/{exhelp_id}/share
Response:
{
"share_url": "https://backend.reen.tech/api/public/exhelp/TOKEN",
"expires_at": "2026-02-23T10:00:00Z"
}
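A client can check the 7-day TTL by parsing expires_at (ISO 8601 with a Z suffix, as in the response above); a minimal sketch:

```python
from datetime import datetime, timezone

def is_expired(expires_at, now):
    """Compare a Z-suffixed ISO 8601 expires_at (UTC) against now."""
    exp = datetime.fromisoformat(expires_at.replace("Z", "+00:00"))
    return now >= exp
```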
GET /api/public/exhelp/{share_token}
Returns a Markdown document optimized for AI consumption.
Artifacts provide per-plan note and file storage. Store research notes, implementation details, and reference files as the plan progresses.
MCP tools: list_artifacts, create_artifact, update_artifact, delete_artifact.
GET /api/gant/artifacts/{plan_id}
Response:
{
  "artifacts": [
    {
      "id": "art_abc123def456",
      "plan_id": "argus-20260218-...",
      "title": "Research notes",
      "content": "## Key findings\n...",
      "file_count": 2,
      "created_at": "2026-02-18T10:00:00Z",
      "updated_at": "2026-02-18T12:00:00Z"
    }
  ],
  "count": 1
}
POST /api/gant/artifacts/{plan_id}
{
"title": "New Artifact",
"content": ""
}
PATCH /api/gant/artifacts/{artifact_id}
{
"title": "Updated title",
"content": "## Updated content\n..."
}
DELETE /api/gant/artifacts/{artifact_id}
Soft-deletes the artifact. Associated files are preserved but hidden.
Use the standard Plan Files API with context=artifact:
POST /api/gant/plans/{plan_id}/files (multipart form: file, context=artifact, artifact_id)
GET /api/gant/plans/{plan_id}/files?context=artifact&artifact_id={id}
Strategic Plans provide a free-form canvas for organizing multiple Gantt plans visually. Each plan appears as a draggable card on an infinite React Flow canvas, with edges (arrows) connecting related plans.
MCP tools: list_strategic_plans, create_strategic_plan, get_strategic_plan, update_strategic_plan, delete_strategic_plan, add_strategic_plan_card, remove_strategic_plan_card, add_strategic_plan_edge.
GET /api/gant/strategic-plans
Response:
{
  "strategic_plans": [
    {
      "id": "sp-a1b2c3d4e5f6g7h8",
      "user_id": "john.smith",
      "title": "Q4 2026 Strategy",
      "description": "Long-term strategic initiatives",
      "project": "main-project",
      "viewport_x": 0,
      "viewport_y": 0,
      "viewport_zoom": 1.0,
      "created_at": "2026-02-27T10:30:00Z",
      "updated_at": "2026-02-27T15:45:00Z"
    }
  ],
  "count": 1
}
POST /api/gant/strategic-plans
{
"title": "Q4 2026 Strategy",
"description": "Long-term strategic initiatives",
"project": "main-project"
}
project is optional. SSE event strategic_plan.created is published.
GET /api/gant/strategic-plans/{sp_id}
Returns the plan metadata, all cards (with live plan title, status, progress), and edges:
{
  "strategic_plan": { "id": "sp-...", "title": "...", ... },
  "cards": [
    {
      "id": "spc-xyz123",
      "plan_id": "p-001",
      "position_x": 0, "position_y": 0,
      "width": 280, "height": 140,
      "plan_title": "Phase 1: Research",
      "plan_status": "in-progress",
      "plan_progress": 0.45
    }
  ],
  "edges": [
    {
      "id": "spe-abc789",
      "source_card_id": "spc-xyz123",
      "target_card_id": "spc-def456",
      "label": "feeds into",
      "edge_type": "default"
    }
  ]
}
PATCH /api/gant/strategic-plans/{sp_id}
{
"title": "New Title",
"description": "Updated description",
"project": "different-project",
"viewport_x": 100.5,
"viewport_y": 200.3,
"viewport_zoom": 1.5
}
All fields are optional.
DELETE /api/gant/strategic-plans/{sp_id}
Soft-deletes the strategic plan. SSE event strategic_plan.deleted is published.
Cards represent Gantt plans placed on the strategic canvas. Each plan can appear once per canvas (unique constraint).
POST /api/gant/strategic-plans/{sp_id}/cards — Add a plan card
{
"plan_id": "p-001",
"position_x": 0,
"position_y": 0
}
PATCH /api/gant/strategic-plans/{sp_id}/cards/{card_id} — Update position/size
{
"position_x": 150.5,
"position_y": -200.3,
"width": 300,
"height": 160
}
DELETE /api/gant/strategic-plans/{sp_id}/cards/{card_id} — Remove card (cascading: connected edges are also removed)
Edges connect cards with typed arrows. Self-loops are prevented by a database constraint.
POST /api/gant/strategic-plans/{sp_id}/edges — Add edge
{
"source_card_id": "spc-xyz123",
"target_card_id": "spc-def456",
"label": "feeds into",
"edge_type": "feeds"
}
Edge types: default, dependency, feeds, blocks. Each type renders with a distinct color.
DELETE /api/gant/strategic-plans/{sp_id}/edges/{edge_id} — Remove edge
PATCH /api/gant/strategic-plans/{sp_id}/bulk-update — Update viewport and multiple card positions in a single request
{
  "viewport_x": 150.5,
  "viewport_y": -200.3,
  "viewport_zoom": 1.2,
  "cards": [
    { "id": "spc-xyz123", "position_x": 0, "position_y": 50 },
    { "id": "spc-def456", "position_x": 350, "position_y": 50 }
  ]
}
Upload documents and build interactive knowledge graphs. The pipeline extracts text, segments it into chapters, then pauses at segments_ready for your local AI to analyze via MCP.
Pipeline: pending → extracting → segmenting → segments_ready (STOP) → local AI analyzes via MCP → submit analysis → completed
POST /api/research/books/json
{
"title": "Introduction to TRIZ",
"text": "Full text content of the document...",
"author": "G. Altshuller",
"domain": "engineering",
"language": "ru"
}
Response:
{
"id": "rb-abc123",
"status": "pending",
"message": "Book uploaded, processing started"
}
Also supports multipart form upload via POST /api/research/books (accepts PDF files).
GET /api/research/books
Query params: ?domain=, ?status=
GET /api/research/books/{book_id}
Returns the document metadata, all cards (one per chapter), and edges (relationships between chapters). Only meaningful when processing_status = "completed".
GET /api/research/books/{book_id}/status
Response:
{
"processing_status": "segments_ready",
"processing_stage": "segmenting",
"processing_progress": 1.0,
"total_chapters": 12
}
| Status | Description |
|---|---|
| pending | Queued for processing |
| extracting | Extracting text from PDF |
| segmenting | Splitting into chapters |
| segments_ready | Ready for local AI analysis (pipeline paused) |
| completed | Knowledge graph built |
| failed | Processing error |
| needs_ocr | Low text quality — upload text version instead |
GET /api/research/books/{book_id}/segments
Only works when status is segments_ready. Returns raw text segments and the expected JSON schema for analysis output.
Response:
{
  "book_id": "rb-abc123",
  "title": "Introduction to TRIZ",
  "language": "ru",
  "total_segments": 12,
  "segments": [
    { "heading": "Chapter 1: Inventive Problems", "text": "...", "start_line": 1 },
    { "heading": "Chapter 2: Contradictions", "text": "...", "start_line": 45 }
  ],
  "expected_output_schema": { ... }
}
POST /api/research/books/{book_id}/analyze
{
  "book_summary": "A foundational text on systematic invention methodology...",
  "cards": [
    {
      "chapter_number": 1,
      "title": "Inventive Problems",
      "essence": "Inventive problems contain contradictions that cannot be resolved by compromise",
      "summary_simple": "Introduces the concept of inventive problems...",
      "summary_technical": "Defines inventive problems as those containing technical or physical contradictions...",
      "importance": 5,
      "key_terms": ["inventive problem", "contradiction", "compromise"],
      "evidence_quotes": [{ "text": "An inventive problem arises when...", "location": "p.12" }]
    }
  ],
  "edges": [
    {
      "from_chapter": 1,
      "to_chapter": 2,
      "type": "depends_on",
      "confidence": 0.95,
      "why": "Chapter 2 builds on the contradiction concept from Chapter 1"
    }
  ]
}
Edge types: depends_on, extends, illustrates
PATCH /api/research/cards/{card_id}/position
{ "x": 250.5, "y": 100.0 }
POST /api/research/edges — Create a manual edge
PATCH /api/research/edges/{edge_id}/verify — Verify or reject an edge
DELETE /api/research/edges/{edge_id} — Delete an edge
DELETE /api/research/books/{book_id}
Call research_get_segments to retrieve chapters, analyze them locally with your AI model, then call research_submit_analysis with the resulting cards and edges. The server calculates final importance scores using a hybrid formula: 0.6 × LLM + 0.4 × normalized(indegree).
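The hybrid scoring formula can be reproduced locally. Normalizing indegree by the graph's maximum indegree is an assumption, since the document does not specify the normalization:

```python
def final_importance(llm_score, indegree, max_indegree):
    """Hybrid score: 0.6 * LLM importance + 0.4 * normalized indegree.
    Normalizing by the graph's maximum indegree is an assumption."""
    norm = indegree / max_indegree if max_indegree else 0.0
    return 0.6 * llm_score + 0.4 * norm
```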
AI Conference provides a messenger-style chat for multi-party conversations between users and AI models (Claude, GPT, Gemini, Grok).
GET /api/conferences
Response:
{
  "conferences": [
    {
      "id": "conf_b1e548af2763",
      "title": "Architecture Review",
      "owner": "john.smith",
      "created_at": "2026-02-08T10:00:00Z",
      "message_count": 15,
      "agents": ["reen-cli-user"]
    }
  ]
}
POST /api/conferences
{
"title": "My Conference",
"description": "Optional description"
}
GET /api/conferences/{id}?limit=100
Returns conference metadata and the last N messages.
DELETE /api/conferences/{id}
Each conference can have a system prompt that sets the context for all AI models.
GET /api/conferences/{id}/initial-prompt
Response:
{
"initial_prompt": "You are an expert architect..."
}
PUT /api/conferences/{id}/initial-prompt
{
"initial_prompt": "You are an expert architect reviewing our system design..."
}
Connect to a conference in real-time:
wss://backend.reen.tech/ws/conference/{id}?token=reen_YOUR_TOKEN
Send a message:
{"type": "message", "content": "@claude explain this code", "mentions": ["claude"]}
Receive messages:
{"type": "message", "id": "msg_abc", "author": "claude", "role": "assistant", "content": "...", "ts": "..."}
| Mention | Effect |
|---|---|
| @claude | Routes to Claude |
| @gpt | Routes to GPT |
| @gemini | Routes to Gemini |
| @grok | Routes to Grok |
| @all | Routes to all enabled models |
The UI provides a Go/Stop toggle to control AI conversation flow:
| State | Button | Action |
|---|---|---|
| Stopped / Paused | ▶ Go | Resume model routing |
| Playing | ⏹ Stop | Stop + cancel all running generations |
WebSocket control messages:
{"type": "control", "action": "playing"} // Resume (Go)
{"type": "control", "action": "stopped"} // Stop + cancel all generations
Models respond sequentially within each round: Claude → GPT → Gemini → Grok. Each model sees the responses from previous models in the same round.
When a model uses @mention in its response, it triggers a follow-up round automatically:
For example: @all discuss X → Claude responds → GPT sees Claude's answer → Gemini sees both → Grok sees all. If Round 1 responses contain @mentions, the mentioned models respond in the next round (with full context). Rounds continue while new @mentions exist and stop when no @mentions remain or the 5-round maximum is reached.

| Pattern | Effect |
|---|---|
| @claude, @gpt, @gemini, @grok | Triggers that model to respond |
| @all | Triggers all models |
| Name without @ ("Claude said...") | Reference only, no trigger |
Models are instructed to end each response with a direct question using @ to keep the conversation going.
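A minimal sketch of the trigger rule above (the agent names come from this document; the regex-based parsing is an assumption about the server's behavior):

```python
import re

AGENTS = {"claude", "gpt", "gemini", "grok", "all"}

def extract_mentions(text):
    """Return triggering @mentions; bare names without @ do not trigger."""
    return [m for m in re.findall(r"@(\w+)", text) if m.lower() in AGENTS]
```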
Example flow:
User: "@all discuss the future of AI"
Round 1 (automatic):
→ Claude responds (sees user message)
→ GPT responds (sees user + Claude)
→ Gemini responds (sees user + Claude + GPT)
→ Grok responds (sees all above)
Round 2 (triggered by @mentions in Round 1):
→ @mentioned models respond with full context
... up to 5 rounds, then auto-pause
When users upload text files (.md, .txt, .json, .py, .js, etc.) to a conference, the file content is included in the model's prompt context (up to 16KB per file). Models can analyze and discuss uploaded files.
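A sketch of the per-file budget described above; clipping the UTF-8 byte stream at 16KB is an assumption about how the limit is applied:

```python
MAX_FILE_CONTEXT = 16 * 1024  # 16KB per file, as stated above

def file_context(content):
    """Clip file content to the per-file prompt budget (byte-level clip
    of the UTF-8 encoding is an assumption)."""
    clipped = content.encode("utf-8")[:MAX_FILE_CONTEXT]
    return clipped.decode("utf-8", errors="ignore")
```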
POST /api/conferences/{id}/messages
{
"content": "Hello @claude, analyze this",
"author": "claude-code",
"mentions": ["claude"]
}
Allows external agents (Claude Code, scripts) to post messages into conferences.
Connect local AI models to conferences:
npm install -g reen-cli
# Connect to all conferences (recommended)
reen-cli daemon --token reen_YOUR_TOKEN
# Connect to a single conference
reen-cli connect conf_XXXX --token reen_YOUR_TOKEN
# List conferences
reen-cli list --token reen_YOUR_TOKEN
| Option | Description | Default |
|---|---|---|
| --token, -t | REEN API token | required |
| --models, -m | Comma-separated models | claude,gpt,gemini,grok |
| --context, -c | Context messages count | 20 |
| --server, -s | Server URL | https://backend.reen.tech |
| --poll, -p | Poll interval (daemon mode) | 15s |
Source: github.com/rhoe-llc-fz/reen-cli
| Code | Description |
|---|---|
| 200 | Success |
| 400 | Bad Request (validation error) |
| 401 | Unauthorized (invalid token) |
| 404 | Not Found |
| 429 | Rate Limit Exceeded |
| 500 | Internal Server Error |
Default rate limit per API token: 60 requests/minute
When exceeded:
{
"error": "Rate limit exceeded",
"retry_after": 30
}
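A client-side handler might honor retry_after like this (a sketch; the injectable sleep is for testing, and the fallback of 30 seconds mirrors the example response rather than a documented default):

```python
import time

def wait_for_retry(error_body, sleep=time.sleep):
    """Sleep for the server-suggested retry_after seconds before retrying."""
    delay = int(error_body.get("retry_after", 30))  # fallback is an assumption
    sleep(delay)
    return delay
```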
REEN Platform API v2.3.0 | © 2026 RHOE LLC FZ