The MCP Server for Provisioning Real Backends

Cursor, Claude Code, and Windsurf speak the Model Context Protocol natively. MoonDB's MCP server gives them a 14-tool toolkit for the entire project lifecycle — creating projects, designing schemas, seeding data, querying rows, calling AI endpoints — plus an initialize.instructions primer that briefs the model on MoonDB conventions before its first call. No "ask the user for an API key" loops, no copy-pasted prompts. One config entry, and the agent operates.

Install in 30 seconds

Cursor — drop into .cursor/mcp.json

.cursor/mcp.json
{ "mcpServers": { "moondb": { "url": "https://moondb.ai/mcp", "headers": { "X-API-Key": "mk_..." } } } }

Claude Code — one CLI command

terminal
# add the MoonDB server (one-time, project- or user-scoped)
claude mcp add --transport http \
  moondb https://moondb.ai/mcp \
  --header "X-API-Key: mk_..."

# every future session has the tools available

Windsurf — add to ~/.codeium/windsurf/mcp_config.json

mcp_config.json
{ "mcpServers": { "moondb": { "serverUrl": "https://moondb.ai/mcp", "headers": { "X-API-Key": "mk_..." } } } }

Get an mk_… account key from the dashboard Account tab. The free tier covers 1 project, 500K reads/mo, 10K writes/mo — enough to ship a side project end-to-end.

What the agent gets

14 native tools, grouped by purpose

Category            Tools
Project management  create_project, list_projects, rotate_keys
Schema              set_schema, validate_schema (dry-run), get_schema, seed (with @table.index cross-refs), get_reference
Data CRUD           query (full operator set + sort + select), get_row, insert, update_row (PATCH semantics), delete_row
AI                  ai_call (text + image models, schema-defined endpoints)

An instructions primer that prevents wrong-column-type retries

MCP's initialize response includes an instructions field that clients inject into the model's system context. MoonDB ships a focused primer covering the schema gotchas every agent re-discovers the hard way: column types are MoonDB-specific (not SQL), built-in columns (id, created_at, updated_at, password_hash) are auto-managed and must NOT be declared, auth tables need auth_table: true, owner-scoped tables use owner_field not owner_column, and destructive changes need confirm_destructive.

Without this primer the agent typically burns 4-5 turns on retries ("let me try a different type", "let me remove created_at", "let me change owner_column to owner_field"). With it, the model gets the schema right on the first call.
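Put together, a schema that follows those conventions might look roughly like the sketch below. The payload shape and the column type names here are illustrative assumptions, not MoonDB's actual format; only the convention keywords the primer documents (auth_table: true, owner_field, and leaving id, created_at, updated_at, and password_hash undeclared) come from the primer itself.

```json
{
  "tables": {
    "users": {
      "auth_table": true,
      "columns": {
        "display_name": "text"
      }
    },
    "posts": {
      "owner_field": "author",
      "columns": {
        "title": "text",
        "body": "text"
      }
    }
  }
}
```

Note what is absent: no id, created_at, updated_at, or password_hash columns anywhere, since those are auto-managed.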

A live reference at get_reference

When the agent runs into something the tool descriptions don't cover (rare filter combinations, access rules, AI endpoint shape), it calls get_reference and gets the full canonical agent docs in one response — same source as /v1/llm-context.

The non-MCP path is fine, too

If your tool doesn't speak MCP yet, every project also publishes a .cursorrules template at GET /p/{id}/v1/cursor-rules and a CLAUDE.md template at GET /p/{id}/v1/claude-md. Drop the file into your project, prompt the agent in plain English, and it calls the REST API directly. You lose the live tool integration but the agent still has the schema, public_key, and endpoint reference for the project.

What MCP is NOT for

End-user authentication (/auth/*), file upload/download (/storage/*), and runtime AI calls from the user's app are intentionally not MCP tools. The agent's job there is to generate fetch() calls in the deployed application code — so they hit the project's REST endpoints with a per-user JWT, not the agent's account key. The initialize.instructions primer documents that contract so the agent doesn't try to wire end-user signup through MCP.
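The two auth contexts can be made concrete with a small helper. This is an illustrative sketch, not MoonDB client code: the X-API-Key header is documented above for MCP, while the Authorization: Bearer shape for end-user runtime calls is an assumption.

```typescript
// Illustrative sketch of the two auth contexts. The end-user header
// shape is an assumption; X-API-Key is documented for the MCP path.
type AuthContext =
  | { kind: "agent"; accountKey: string }  // provisioning via /mcp
  | { kind: "endUser"; jwt: string };      // runtime calls from app code

function authHeaders(ctx: AuthContext): Record<string, string> {
  // Account keys stay in the agent's MCP config; deployed app code
  // only ever sends the signed-in user's JWT.
  return ctx.kind === "agent"
    ? { "X-API-Key": ctx.accountKey }
    : { Authorization: `Bearer ${ctx.jwt}` };
}
```

Generated app code would then call something like fetch("/api/posts", { headers: authHeaders({ kind: "endUser", jwt }) }), keeping the account key out of the shipped bundle entirely.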

Example: from prompt to working backend

cursor
// you: "Use moondb to spin up a backend for a blog — users, posts, comments. Owner-scoped writes." // Cursor (under the hood): // 1. tools/call create_project { name: "blog" } // 2. tools/call validate_schema { ... } ← dry-run // 3. tools/call set_schema { ... } ← apply // 4. tools/call seed { users: [...], posts: [...] } // you (60 seconds later): "Now write the Next.js frontend hitting that API." // Cursor fetches /v1/llm-context for the project // and generates fetch() calls to /api/posts, /auth/login, etc.

FAQ

Is the MCP server free?

Yes — same as the rest of MoonDB. The free plan covers 1 project, 500K reads/mo, 10K writes/mo. Paid plans ($9 / $29 / $99/mo) lift the quotas.

What MCP transport do you use?

Streamable HTTP (the current MCP spec). POST /mcp with a JSON-RPC 2.0 envelope. Notifications (no id) get 202 Accepted; unknown methods return a JSON-RPC error envelope (-32601), never HTTP 404 — so clients don't treat the endpoint as missing.
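For a script or CI job, that means a plain fetch with a JSON-RPC 2.0 body. A minimal sketch, assuming the standard MCP tools/call method and a tool name from the table above; error handling is omitted:

```typescript
// Build a JSON-RPC 2.0 request envelope for the MCP tools/call method.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: unknown;
}

function toolCall(id: number, name: string, args: object): JsonRpcRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// POST the envelope with the account key. A request without an `id`
// is a notification, which the server answers with 202 Accepted.
async function callMoonDb(key: string, req: JsonRpcRequest) {
  const res = await fetch("https://moondb.ai/mcp", {
    method: "POST",
    headers: { "Content-Type": "application/json", "X-API-Key": key },
    body: JSON.stringify(req),
  });
  return res.json();
}
```

Usage: callMoonDb("mk_...", toolCall(1, "list_projects", {})).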

How is auth handled?

Pass your mk_… account key as X-API-Key (or Authorization: Bearer mk_…). All tools operate against projects owned by that account — ownership is verified on every call.

Can I self-host the MCP server?

The whole MoonDB stack is open-source and runs on Cloudflare Workers + D1 + R2. Clone, set the secrets, wrangler deploy — you get your own /mcp endpoint. See Deploy.

Does it work outside Cursor / Claude Code / Windsurf?

Any MCP-compatible client. The transport is plain HTTP + JSON-RPC 2.0, no client SDK needed. You can also POST to /mcp from scripts and CI for project management.

Wire it up

One config entry, 14 tools, the full backend lifecycle.

Get an API key