Fork. Work. Fold.
Great teams don't rely on a single voice. Neither should your AI. Sidecar opens alongside Claude Code and Cowork, giving any model the full context of your session. Keep working while it thinks. Fold the best results back.
Any model via OpenRouter
Launch a sidecar, do your work, fold the results back. Context is shared automatically.
Your conversation history is automatically passed to the new model. It starts with everything Claude already knows — no setup required.
Interact with the sidecar in a real window alongside Claude Code. Explore, debug, get a second opinion.
Click Fold and a structured summary flows back. Findings, recommendations, and code changes. No noise.
Sidecar can also run headlessly as parallel subagents inside Claude Code or Cowork. Spawn multiple models at once. Each runs independently on its task. All results fold back into Claude.
From inside Claude Code or Cowork, ask Claude to spawn headless sidecars using the MCP tool. Each subagent gets your full session context, works autonomously on its assigned task, and returns a structured summary. No window switching, no prompting each model yourself.
Run multiple sidecars simultaneously to split work across specialized models. One reviews architecture. Another audits security. A third generates tests. Claude collects every summary and synthesizes the results back into your main context.
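The fan-out-and-fold flow above can be sketched in plain shell. The `run_sidecar` function below is a stand-in stub, not Sidecar's actual CLI; it only illustrates the pattern: independent tasks run concurrently, and their summaries are collected ("folded") at the end.

```shell
# Conceptual sketch of the fan-out/fold pattern. run_sidecar is a stub
# standing in for a headless sidecar subagent; it is NOT Sidecar's real CLI.
run_sidecar() {  # $1 = role, $2 = assigned task
  echo "[$1] summary: $2"
}

# Fan out: each subagent runs independently, in parallel.
run_sidecar "architecture" "review the proposed module layout" > /tmp/arch.txt &
run_sidecar "security"     "audit the auth flow"               > /tmp/sec.txt  &
run_sidecar "testing"      "generate unit tests"               > /tmp/tests.txt &
wait  # block until all three finish

# Fold: collect every structured summary back into one place.
cat /tmp/arch.txt /tmp/sec.txt /tmp/tests.txt
```

In the real workflow Claude issues these spawns through the MCP tool and synthesizes the collected summaries; the shell above only mirrors the concurrency shape.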
Sidecar reads your active Claude Code session and passes it to whichever model you choose. No export, no copy-paste, no starting from scratch.
The only tool that bridges Claude Code sessions to other AI models. Every sidecar starts with everything Claude already knows.
Sidecar integrates natively with both Claude Code and Claude Cowork. Install once, use everywhere.
Use the sidecar command directly in your terminal alongside Claude Code. Desktop app and CLI both supported.
Sidecar registers as an MCP server automatically. Cowork agents can spawn sidecars natively from their sandbox.
Every sidecar shares your context automatically. Just pick a model and get to work.
Claude proposed an architecture? Send it to Gemini for a second opinion. Catch bad assumptions before they become bugs.
Stuck on a bug? Bring in a different model for a second look. Fresh eyes often spot what familiarity misses.
Get three different models thinking about the same problem in parallel. Claude collects and synthesizes the best ideas from each.
Deep in a session and losing perspective? Bring in a fresh model. It sees everything you've built, without the tunnel vision.
Use your existing API keys directly, or connect everything through OpenRouter with a single key.
200+ models from every provider. One API key for everything.
Already have a Google AI or OpenAI key? Use it directly. No middleman, no extra accounts.
One key, every model. Automatic fallback, unified billing. Run sidecar setup to configure.
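The two setups above can be sketched as follows. `sidecar setup` is the command this page names; the environment variable names are common provider conventions and an assumption here, not confirmed Sidecar behavior.

```shell
# Option 1: one OpenRouter key for every model.
# OPENROUTER_API_KEY is the conventional variable name; whether Sidecar
# reads it directly is an assumption.
export OPENROUTER_API_KEY="sk-or-..."

# Option 2: use existing provider keys directly (variable names assumed).
export OPENAI_API_KEY="sk-..."
export GOOGLE_API_KEY="..."

# Interactive configuration, as named on this page.
sidecar setup
```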
Full GUI window for interactive work, or autonomous background mode for batch tasks like test generation.
Warns you if files changed on disk while the sidecar was running. Never accidentally overwrite work.

Every sidecar is saved. List past sessions, resume them, or chain new investigations on previous findings.
Filter by turns, time window, or token budget. Skip context entirely for standalone tasks.
Full MCP server for Claude Desktop and Cowork. Sidecar tools appear natively in Cowork's sandbox.
Choose the mode that fits the task — from quick conversation to fully autonomous work.
Switch models without restarting. Start fast with Flash, then switch to Pro for complex analysis.
Update checks run in the background and never slow you down. One-click updates from the toolbar or CLI.
Sidecar builds on OpenCode, not around it. OpenCode does the heavy lifting.
One npm install. Auto-configures Claude Code skill and MCP server.
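A minimal sketch of that install flow. The npm package name is an assumption inferred from the product name, not confirmed by this page.

```shell
# Assumed package name; installs the sidecar CLI globally.
npm install -g sidecar

# Per this page, install auto-configures the Claude Code skill and
# registers the MCP server; no further wiring is needed.
```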