Open Beta · macOS only · Windows soon

Claude Sidecar

Fork. Work. Fold.

Great teams don't rely on a single voice. Neither should your AI. Sidecar opens alongside Claude Code and Cowork, giving any model the full context of your session. Keep working while it thinks. Fold the best results back.

$ npm install -g claude-sidecar
Docs
[Diagram: your main session forks to a Gemini 3 Pro sidecar. You ask "Is this approach correct?"; Gemini replies "There's a race condition when multiple requests hit token refresh. Use a mutex..." Claude Code keeps working, then receives the folded summary: "Race condition found. Recommend mutex."]

Any model via OpenRouter


How It Works

Three Steps.
Zero Friction.

Launch a sidecar, do your work, fold the results back. Context is shared automatically.

[Diagram: the Claude session's full context is auto-shared with the new sidecar.]

01 · Fork

Your conversation history is automatically passed to the new model. It starts with everything Claude already knows — no setup required.

[Diagram: Claude keeps working while the sidecar explores in parallel.]

02 · Work

Interact with the sidecar in a real window alongside Claude Code. Explore, debug, get a second opinion.

[Diagram: clicking Fold sends the sidecar's summary back to Claude.]

03 · Fold

Click Fold and a structured summary flows back. Findings, recommendations, and code changes. No noise.


Parallel Subagents

Spawn Subagents.
Claude Orchestrates.

Sidecar can also run headlessly as parallel subagents inside Claude Code or Cowork. Spawn multiple models at once. Each runs independently on its task. All results fold back into Claude.

[Diagram: Claude Code / Cowork spawns three headless (--no-ui) sidecars in parallel: Gemini 3 Pro reviews architecture, GPT-5 audits security, DeepSeek R1 generates tests. Their summaries (architecture verified, 2 vulnerabilities found, 47 tests generated) all fold back into Claude's context.]

Claude as orchestrator. Any model as a specialist.

From inside Claude Code or Cowork, ask Claude to spawn headless sidecars using the MCP tool. Each subagent gets your full session context, works autonomously on its assigned task, and returns a structured summary. No window switching, no prompting each model yourself.

Run multiple sidecars simultaneously to split work across specialized models. One reviews architecture. Another audits security. A third generates tests. Claude collects every summary and synthesizes the results back into your main context.

Native MCP tool works inside Claude Code and Cowork
Each subagent gets full conversation context automatically
Runs autonomously with a configurable timeout
Structured summaries fold back into Claude's context
Conflict detection prevents subagents from overwriting each other
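From the CLI, the same fan-out can be sketched with ordinary shell job control. The `--model`, `--prompt`, and `--no-ui` flags are documented on this page; the model aliases for GPT-5 and DeepSeek are assumptions, and in practice Claude issues these spawns through the MCP tool rather than your shell.

```shell
# Illustrative only: three headless sidecars launched in parallel,
# each inheriting the full session context automatically.
sidecar start --model gemini-pro --no-ui --prompt "Review architecture" &
sidecar start --model gpt        --no-ui --prompt "Audit security" &
sidecar start --model deepseek   --no-ui --prompt "Generate tests" &

# Each sidecar folds a structured summary back when it finishes.
wait
```

Inside Claude Code or Cowork you skip this entirely: ask Claude to spawn the subagents and it orchestrates the fan-out and collects the folds for you.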

Automatic Context Sharing

Your Claude Context.
Any Model.

Sidecar reads your active Claude Code session and passes it to whichever model you choose. No export, no copy-paste, no starting from scratch.

The only tool that bridges Claude Code sessions to other AI models. Every sidecar starts with everything Claude already knows.

Conversation history · File changes · Tool calls · Error output

[Diagram: a 42-turn, 18.2k-token Claude Code session, including a Read of src/auth.js, loads in full into Gemini 3 Pro, GPT-5, and DeepSeek R1. Every model gets your full Claude session automatically.]

Works With

Claude Code & Cowork.
Two Interfaces, One Tool.

Sidecar integrates natively with both Claude Code and Claude Cowork. Install once, use everywhere.

CLI

Claude Code

Use the sidecar command directly in your terminal alongside Claude Code. Desktop app and CLI both supported.

sidecar start --model gemini --prompt "Review auth"
MCP Server

Claude Cowork

Sidecar registers as an MCP server automatically. Cowork agents can spawn sidecars natively from their sandbox.

Tool: sidecar_start · model: gemini

See It In Action

One Command.
Full Context.

# Fact-check Claude's approach with Gemini
sidecar start --model gemini --prompt "Verify the auth refactor"

# Deep-dive without polluting your main context
sidecar start --model gemini-pro --prompt "Analyze the codebase"

# Autonomous background task
sidecar start --model gemini --no-ui --prompt "Generate tests"

> Started sidecar gk7x · google/gemini-3-flash-preview
> Context: 42 turns, 18.2k tokens
> Running autonomously... (15m timeout)

Use Cases

Built for How
You Actually Work.

Every sidecar shares your context automatically. Just pick a model and get to work.

[Diagram: Claude Code proposes "Use Redis for cache" in architecture.md; a forked Gemini 3 Pro sidecar fact-checks it: "Redis is correct for this workload. Consider adding TTL for session keys." Command: sidecar start --model gemini --prompt "Verify this approach"]

Fact-Check

Claude proposed an architecture? Send it to Gemini for a second opinion. Catch bad assumptions before they become bugs.

[Diagram: a TypeError stack trace (handleAuth:47, processReq:123) goes to GPT-5, which traces a race condition in token refresh: handleAuth() runs before the session is ready. Command: sidecar start --model gpt --prompt "Debug this error"]

Debug

Stuck on a bug? Bring in a different model for a second look. Fresh eyes often spot what familiarity misses.

[Diagram: for "Design the API", Gemini proposes REST with versioned endpoints and HATEOAS (approach A), GPT-5 proposes GraphQL with real-time subscriptions (approach B), DeepSeek proposes a hybrid of REST for CRUD and WebSockets for events (approach C). Claude synthesizes all three perspectives.]

Brainstorm

Get three different models thinking about the same problem in parallel. Claude collects and synthesizes the best ideas from each.

[Diagram: 200+ turns deep, a forked Gemini Flash sidecar offers a fresh perspective: "You've been refactoring in circles. The real issue is the schema design. Here's a simpler approach..." Command: sidecar start --model gemini-flash --prompt "Review our approach"]

Fresh Eyes

Deep in a session and losing perspective? Bring in a fresh model. It sees everything you've built, without the tunnel vision.

Compatible Models

Any Model.
Your Keys.

Use your existing API keys directly, or connect everything through OpenRouter with a single key.

Google: Gemini Flash, Pro, Ultra
OpenAI: GPT-5, o3, o4-mini
Anthropic: Claude Opus, Sonnet, Haiku
xAI: Grok 3, Grok 3 Mini
Meta: Llama 4 Scout, Maverick
DeepSeek: DeepSeek R1, V3, Coder
OpenRouter

200+ models from every provider. One API key for everything.

Direct API Keys

Already have a Google AI or OpenAI key? Use it directly. No middleman, no extra accounts.

OpenRouter (Recommended)

One key, every model. Automatic fallback, unified billing. Run sidecar setup to configure.
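The documented entry point is `sidecar setup`; the rest of this sketch is an assumption about how a typical OpenRouter-backed flow might look, since `OPENROUTER_API_KEY` is OpenRouter's standard environment variable but the page doesn't say whether Sidecar reads it directly.

```shell
# Documented: interactive setup walks you through provider and key config.
sidecar setup

# Assumed: OpenRouter's standard env var as an alternative to the wizard.
export OPENROUTER_API_KEY="sk-or-..."

# From there, any model is one flag away.
sidecar start --model gemini --prompt "Verify the auth refactor"
```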


Features

Everything You Need.
Nothing You Don't.

Interactive + Headless

Full GUI window for interactive work, or autonomous background mode for batch tasks like test generation.

Conflict Detection

Warns when files changed externally while the sidecar was running. Never accidentally overwrite work.

Session Persistence

Every sidecar is saved. List past sessions, resume them, or chain new investigations on previous findings.
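The subcommand names below are hypothetical: the page documents the capability (list, resume, chain) but not its exact syntax.

```shell
# Hypothetical syntax for the documented persistence features.
sidecar list            # past sessions, with ids like gk7x
sidecar resume gk7x     # reopen a saved sidecar session

# Chain a new investigation on a previous session's findings.
sidecar start --model gemini-pro \
  --prompt "Go deeper on the race condition found in session gk7x"
```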

Context Filtering

Filter by turns, time window, or token budget. Skip context entirely for standalone tasks.
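The flag names here are illustrative, not documented: the page says context can be filtered by turns, time window, or token budget, or skipped entirely, but does not give the exact switches.

```shell
# Illustrative flag names for the documented filtering options.
sidecar start --model gemini --last-turns 20   --prompt "Review recent changes"
sidecar start --model gemini --max-tokens 8000 --prompt "Summarize the session"
sidecar start --model gemini --no-context      --prompt "Standalone task"
```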

MCP Integration

Full MCP server for Claude Desktop and Cowork. Sidecar tools appear natively in Cowork's sandbox.

Agent Modes

Choose the mode that fits the task — from quick conversation to fully autonomous work.

Mid-Session Model Switching

Switch models without restarting. Start fast with Flash, then switch to Pro for complex analysis.

Auto-Update

Background update checks with zero latency. One-click updates from the toolbar or CLI.


Foundation

Built on OpenCode.

Sidecar builds on OpenCode, not around it. OpenCode does the heavy lifting.

OpenCode is the open-source AI coding engine that powers every sidecar session. It handles the conversation, tool use, native agent types, and the web UI. Sidecar adds context sharing from Claude Code, session history, the fold workflow, MCP support, and the Electron shell. We don't reinvent the wheel.
opencode.ai

Get Started

Install in 30 Seconds.

One npm install. Auto-configures Claude Code skill and MCP server.

$ npm install -g claude-sidecar
Read the Docs