MCPBundles CLI: Give Your AI Coding Agent Access to 10,000+ Production Tools
MCPBundles has always worked as an MCP server. You add it to Claude Desktop, Cursor, ChatGPT, or any MCP-compatible client, and your AI gets access to Stripe, HubSpot, Postgres, PostHog, Gmail, and every other service you've connected — with real credentials, real permissions, and real data.
The MCPBundles CLI is an alternative way to access those same tools. Instead of configuring MCPBundles as a remote MCP server in your client, you install a command-line tool and authenticate with an API key. The AI agent discovers and calls your tools through shell commands — the same 10,000+ tools, the same credentials, the same workspace permissions.
pip install mcpbundles
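Setup is a one-time step: install, then authenticate with a workspace API key. A sketch of what that might look like — the `connect` subcommand and flag names here are assumptions, not the CLI's confirmed surface (check the docs):

```shell
pip install mcpbundles

# Hypothetical: authenticate once with an API key generated from
# your MCPBundles workspace. Exact subcommand/flags may differ.
mcpbundles connect --api-key "$MCPBUNDLES_API_KEY"
```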
Two Ways to the Same Tools
Every AI client — Claude Desktop, Cursor, ChatGPT, Claude Code, Codex — supports MCP natively. You can always configure MCPBundles as a remote MCP server and it works. But MCP implementations vary across clients. Some have quirks with transport negotiation. Some don't support remote servers yet, or have rough edges around session management, reconnection, or credential handling. And every client needs its own config file.
The CLI sidesteps all of that. It gives you a second path to the exact same tools:
MCP server. Add MCPBundles as a remote MCP server in your AI client's config. The AI connects over streamable HTTP and uses tools through the standard MCP protocol. This works everywhere MCP is supported.
CLI. Run pip install mcpbundles and authenticate with your API key. Any AI agent with terminal access can discover, search, and call tools directly — no MCP config file, no per-project setup, no client-specific quirks. The agent writes and runs the commands itself.
Both paths hit the same backend, use the same credentials, respect the same workspace permissions, and return the same data. The CLI just removes the MCP client implementation from the equation.
Why the CLI Matters for AI Agents
Every AI coding agent has a terminal. That's how it runs git, npm, pytest, docker, and everything else. The CLI makes your production services available through that same terminal — Stripe, HubSpot, Postgres, PostHog, Gmail, Attio, and dozens more become as accessible as any other command-line tool.
The agent doesn't need to know tool names or argument schemas upfront. The CLI provides discovery commands that let it ask "what services are available?", "what tools does this service have?", and "what parameters does this tool accept?" — then construct and execute the right call. The AI writes the commands. You describe what you want in natural language.
Tell Cursor "look up this customer in Stripe and check if they have any failed payments." The agent discovers the Stripe bundle, finds the search and payment tools, constructs the calls, runs them, reads the JSON responses, and gives you the answer — without you opening the Stripe dashboard or writing a single command yourself.
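That discovery-to-execution loop might look like this in the terminal. The subcommand shapes and tool names below are illustrative assumptions (the doc's own examples suggest commands like `get_bundles`, but the full surface may differ):

```shell
# Hypothetical discovery session
mcpbundles get_bundles                         # what services are connected?
mcpbundles get_tools --bundle stripe           # what tools does Stripe have?
mcpbundles describe stripe.search_customers    # what parameters does it take?

# Then construct and run the call
mcpbundles call stripe.search_customers email="jane@example.com"
```

The agent reads the JSON each step returns and uses it to build the next command — no schema knowledge required up front.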
One API Key, Every Project
With the MCP server approach, each AI client needs its own config file — claude_desktop_config.json, .cursor/mcp.json, or equivalent — specifying the server URL, transport, and credentials. That config is per-client and often per-project.
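For comparison, a typical per-project MCP config looks something like this — the shape mirrors Cursor's `.cursor/mcp.json` convention, and the URL is a placeholder, not a real endpoint:

```json
{
  "mcpServers": {
    "mcpbundles": {
      "url": "https://<your-mcpbundles-mcp-endpoint>"
    }
  }
}
```

Multiply that by every client and every project, and the config sprawl adds up.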
The CLI works differently. You generate an API key from your MCPBundles workspace, run a connect command, and the CLI stores it encrypted on disk. From that point on, every mcpbundles command authenticates automatically — across every project, every IDE, every agent on your machine. One API key, not fifteen config files.
The tools you get through the CLI are the same tools you'd get through the MCP server. When the agent calls a Stripe tool, it's hitting your live Stripe account with your team's OAuth credentials. When it queries Postgres, it's your production database. Same authentication, same permission model, same data — just a different interface.
Teams and Workspaces
For teams, the CLI supports named connections — each one pointing to a different workspace with its own API key, its own set of enabled bundles, and its own credential bindings.
A developer might have a prod connection for the production workspace and a staging connection for testing. The agent can target either one. With one connection, it's automatic — no flags needed. With multiple, the agent just adds --as staging to any command.
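The `--as` flag is the documented way to target a named connection; the `call` subcommand and tool name below are illustrative:

```shell
# One connection configured: no flag needed, routing is automatic
mcpbundles call postgres.run_query sql="SELECT count(*) FROM users"

# Multiple connections: target the staging workspace explicitly
mcpbundles call postgres.run_query sql="SELECT count(*) FROM users" --as staging
```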
This means different team members can have different access levels, different workspaces can have different services enabled, and the CLI respects all of it. The agent inherits whatever permissions the API key grants — nothing more, nothing less.
What the Agent Can Actually Do
The CLI gives the agent four capabilities that matter:
Discovery. The agent can list every service your team has connected, browse the tools available in each one, search across all 10,000+ tools by description, and inspect the full parameter schema of any tool. This is the same discover-then-call workflow that MCP was designed for, but it happens over shell commands instead of inside a chat session.
Direct tool calls. The agent can call any individual tool and get back structured JSON. Look up a customer, query a database, fetch analytics events, send an email, create a CRM record. Each call is a single shell command with key-value arguments — the CLI handles type coercion, authentication, and session management.
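Because each call returns structured JSON on stdout, the agent can compose it with ordinary shell tooling. A hedged sketch — the command shape and tool name are assumptions, and `jq` is just one way to filter the output:

```shell
# Hypothetical: list a customer's recent payments, filter to failures
mcpbundles call stripe.list_payment_intents customer="cus_123" limit=10 \
  | jq '.data[] | select(.status != "succeeded")'
```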
Multi-tool workflows. When the agent needs to chain calls across services — pull data from Stripe, cross-reference it in Postgres, update a record in HubSpot — the CLI provides a Python execution sandbox with access to every enabled tool as an async function. The agent writes the Python, the CLI runs it.
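The chaining pattern the sandbox enables can be sketched in plain Python. The stub functions below stand in for the sandbox's real async tool bindings — their names, signatures, and return shapes are illustrative only:

```python
import asyncio

# Stubs simulating the sandbox's async tool functions.
# In the real sandbox, these would hit live services.
async def stripe_search_customers(email: str) -> dict:
    return {"id": "cus_123", "email": email}

async def postgres_query(sql: str) -> list:
    return [{"customer_id": "cus_123", "plan": "pro"}]

async def hubspot_update_contact(email: str, properties: dict) -> dict:
    return {"email": email, "updated": properties}

async def main() -> dict:
    # Chain across services: Stripe -> Postgres -> HubSpot
    customer = await stripe_search_customers("jane@example.com")
    rows = await postgres_query(
        f"SELECT plan FROM accounts WHERE customer_id = '{customer['id']}'"
    )
    return await hubspot_update_contact(
        customer["email"], {"plan": rows[0]["plan"]}
    )

result = asyncio.run(main())
print(result)
```

The agent writes code in this shape, the CLI executes it, and intermediate data never has to round-trip through the chat context.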
Self-correction. When a tool call fails, the CLI emits contextual hints — "this looks like a bundle tool, try adding --bundle" or "run get_bundles to see available services." The agent reads the hint, adjusts, and retries. No human intervention needed.
Teach the Agent the Workflow
Run mcpbundles init in any project directory and the CLI generates a skill file — a markdown document that describes the full discovery-to-execution workflow. AI agents that support skill files (Cursor, Claude Code, Codex) will automatically read it and know how to use the CLI from the first interaction.
Local MCP Proxy
The CLI can also act as a local MCP server. Run mcpbundles serve and it exposes an unauthenticated endpoint on localhost that any MCP client — Claude Desktop, Windsurf, or any other — can connect to without needing credentials. The proxy handles auth and routes to your MCPBundles connection behind the scenes.
You can even aggregate multiple connections into a single MCP surface. The proxy merges tool lists from all your connections and routes each tool call to the right upstream automatically.
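Starting the proxy is the documented `mcpbundles serve` command; the endpoint shown in the comment is a placeholder, since the actual host and port come from the CLI's own output:

```shell
# Start the local proxy (handles auth and upstream routing)
mcpbundles serve

# Then point any MCP client at the localhost endpoint it prints,
# e.g. something like http://localhost:<port> -- no credentials
# needed in the client config.
```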
Connect Once, Use Everywhere
MCP servers already work with Claude Desktop, ChatGPT, and other AI chat clients. But those connections are per-client and per-session. The CLI is different: connect once, and every AI agent on your machine — in every project, in every IDE — can reach the same set of services through the same credentials.
Your team's Stripe, HubSpot, Postgres, PostHog, Gmail, and every other connected service becomes as accessible to your AI agent as git is today.
pip install mcpbundles
Full docs at mcpbundles.com/docs/how-to/cli.