Maestro by EvalOps


Maestro is a coding agent for real software work. It can inspect code, edit files, run shell commands, search large repos, and help across terminal, web, IDE, browser, Slack, and GitHub workflows.

This README is intentionally short. Use it to get running, then jump into the docs for the details.

What Maestro Covers

  • Terminal-first coding agent with both interactive TUI and one-shot CLI flows
  • Shared runtime across the web UI, VS Code, JetBrains, browser automation, Slack, and GitHub
  • Multi-provider model support, OAuth-based logins, and managed EvalOps routing
  • Hooks, MCP servers, context files, and headless automation for custom workflows
  • Visible tool use with approvals, sandboxing, and firewall controls

Interfaces

  • Terminal (TUI/CLI): interactive coding sessions and one-shot repo tasks. Guide: Features
  • Web UI: browser-based Maestro sessions. Guide: Web UI Guide
  • Conductor: browser automation through a local Maestro server. Guide: Conductor Bridge
  • VS Code: inline chat and IDE-native workflows. Guide: VS Code extension
  • JetBrains: IntelliJ, WebStorm, PyCharm, and related IDEs. Guide: JetBrains plugin
  • Slack: chat-driven agent workflows with sandboxing. Guide: Slack agent
  • GitHub: issue-driven automation and PR generation. Guide: GitHub agent
  • Ambient Agent: long-running GitHub automation daemon. Guide: Ambient Agent design
  • Headless: embedding Maestro in CI, tools, and eval harnesses. Guide: Headless protocol

Install

Bun (recommended)

bun install -g @evalops/maestro

npm

npm install -g @evalops/maestro

Nix

nix run github:evalops/maestro

Quick Start

  1. Configure a model provider. Fast path:
export ANTHROPIC_API_KEY=sk-ant-...

Maestro also supports OpenAI, OpenAI Codex with ChatGPT login, Google, OpenRouter, Azure OpenAI, GitHub Copilot, Groq, xAI, Cerebras, and managed EvalOps auth. See Models for provider-specific setup and overrides.

For Codex subscription models, run maestro codex login or /login openai-codex, then select models under the openai-codex provider such as openai-codex/gpt-5.5.

  2. Launch the interface you want:
maestro
maestro "Audit this repository and suggest the next refactor"
maestro web

maestro web starts the browser UI on http://localhost:8080.

  3. Add project-specific behavior when needed:
  • Keys and config: ~/.maestro/keys.json, ~/.maestro/config.json
  • MCP servers: ~/.maestro/mcp.json or .maestro/mcp.json
  • Hooks: ~/.maestro/hooks.json or .maestro/hooks.json
  • Agent instructions: AGENT.md, .maestro/APPEND_SYSTEM.md, ~/.maestro/agent/AGENT.md
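
As a concrete sketch, a project-local .maestro/mcp.json might register a single MCP server. The server choice and the field names below (mcpServers, command, args) follow the common MCP client convention and are assumptions for illustration, not Maestro's confirmed schema; see the MCP Guide for the actual format.

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
    }
  }
}
```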

Safety Model

  • Approval modes let you choose how much confirmation Maestro needs before acting
  • Sandbox modes range from workspace containment to danger-full-access
  • Firewall rules, trusted paths, and CI/secrets protections reduce common footguns

See Safety and the Threat Model for the full behavior.
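
For illustration, approval and sandbox settings might be pinned in ~/.maestro/config.json so every session starts with the same posture. The key names and values below are assumptions for this sketch, not confirmed schema; the Safety guide documents the real options, including the danger-full-access sandbox mode mentioned above.

```json
{
  "approvalMode": "ask-before-edits",
  "sandboxMode": "workspace"
}
```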

Docs

  • Install, build, and first run: Quickstart
  • Learn TUI and CLI workflows: Features
  • Find slash commands and flags: Tools Reference
  • Configure providers and models: Models
  • Understand approvals and sandboxing: Safety
  • Run the browser interface: Web UI Guide
  • Set up MCP servers: MCP Guide
  • Work on the repo as a contributor: Contributor Runbook
  • Integrate Maestro headlessly: Headless protocol
  • Bring any coding agent into EvalOps: Any-Agent Control Plane
  • Browse the full docs map: Documentation index

Contributing

Fast path for local development:

git clone https://github.com/evalops/maestro.git
cd maestro
bun install
npx nx run maestro:build --skip-nx-cache
npx nx run maestro:test --skip-nx-cache
npx nx run maestro:evals --skip-nx-cache

For the browser UI without local API keys or Redis, use the local-only dev profile:

make web-local
curl http://localhost:8080/api/models

To prove Maestro works against a local Cerebro stack, keep sibling checkouts and run:

gh repo clone evalops/cerebro ../cerebro
make cerebro-e2e-doctor
make cerebro-e2e

To actually use the two repos together locally, start Cerebro from Maestro and export the same Cerebro/MCP env into the Maestro terminal:

make cerebro-dev

# In another Maestro terminal:
eval "$(make -s cerebro-env)"
make run-ts

That target delegates to Cerebro's make local-maestro-e2e with LOCAL_MAESTRO_REPO set to the current Maestro checkout. It builds and smokes Maestro, emits Maestro's canonical Platform replay, publishes it through local NATS, and verifies Cerebro graph projection plus MCP recall from the generated session traffic.

make cerebro-e2e-doctor runs before the self-contained smoke starts. It checks the Cerebro checkout, Docker Compose, the replay generator, and Cerebro's own local-E2E doctor. It also checks the effective local Cerebro URL, MCP URL, and workspace from .env or exported environment values, then verifies that the configured API port is free.

The default URL is http://localhost:18080; use LOCAL_BASE_URL/MAESTRO_CEREBRO_URL plus matching Cerebro LOCAL_HTTP_PORT overrides when that port is occupied. Set LOCAL_CEREBRO_REPO=/path/to/cerebro when the checkout is not a sibling directory. If your machine cannot surface OTEL collector debug logs, run LOCAL_ASSERT_OTEL=false make cerebro-e2e.

For direct local Maestro runs against an already-running Cerebro dev stack, make cerebro-env prints copyable exports derived from .env or shell overrides. The Makefile exports those vars to make targets so make run-ts, make web-local, and local smokes all see the same configuration.

Need Redis or PostgreSQL for a specific workflow? Start from docker-compose.yml and use the Contributor Runbook for the rest of the repo workflow.

Repository Layout

  • src/ - CLI entrypoints and shared application code
  • packages/core/ - agent loop, transport, types, and sandbox primitives
  • packages/ai/ - model registry, provider transport, and event streaming
  • packages/tui/ - TypeScript terminal UI
  • packages/tui-rs/ - native Rust TUI
  • packages/web/ - browser UI
  • packages/vscode-extension/, packages/jetbrains-plugin/, packages/slack-agent/, packages/github-agent/ - interface integrations

License

Business Source License 1.1. You can use Maestro for development, testing, and production, but not as a competing hosted or embedded product. On April 14, 2030, the license converts to Apache 2.0. See LICENSE for details.
