MoonDB gives you a database, REST API, auth, and file storage from a single JSON schema. Two API calls to a working backend:
- `POST /v1/projects` — you get an admin key (`sk_`) and a public key (`pk_`)
- `PUT /p/{project_id}/v1/schema` — REST API is live instantly

```bash
# 1. Create a project
curl -X POST https://moondb.ai/v1/projects \
  -H "X-API-Key: mk_..." \
  -d '{"name":"my-app"}'

# 2. Set schema
curl -X PUT https://moondb.ai/p/{project_id}/v1/schema \
  -H "X-Admin-Key: sk_..." \
  -H "Content-Type: application/json" \
  -d '{"tables":{"tasks":{"columns":{"title":"string required","done":"bool default false"}}}}'

# Done! CRUD is live:
curl https://moondb.ai/p/{project_id}/api/tasks -H "X-Public-Key: pk_..."
```
The fastest path: copy one prompt, paste it into your agent, and it handles everything — schema design, API calls, frontend wiring.
| Agent | File | Location |
|---|---|---|
| Claude Code | CLAUDE.md | Project root |
| Cursor | .cursorrules | Project root |
| Windsurf | .windsurfrules | Project root |
| Copilot | .github/copilot-instructions.md | Repo root |
| Any agent | System prompt | Paste directly |
The prompt contains everything: API base URL, keys, schema format reference, endpoint patterns, and auth instructions. Your agent can design and apply a schema, then build the frontend — all without reading docs.
The generated prompt also includes credentials ready to copy into `.env`.

```bash
# You say to the agent:
#   "Build a habit tracker. Users can create habits, mark them done
#    each day, and see streaks."

# The agent reads the MoonDB prompt from CLAUDE.md and:
# 1. Designs a schema (users, habits, completions)
# 2. PUTs it to /v1/schema
# 3. Builds the React/Vue/Svelte frontend
# 4. Wires up auth and API calls
```
After schema changes, re-fetch `GET /v1/llm-context` and update the prompt file so it stays in sync.
If you prefer working without a coding agent, store credentials in .env:
```bash
# .env
MOONDB_API_BASE=https://moondb.ai/p/{project_id}
MOONDB_ADMIN_KEY=sk_...
MOONDB_PUBLIC_KEY=pk_...
```
Use the public key for browser reads. MoonDB has CORS enabled so any origin can call the API.
```javascript
const API = 'https://moondb.ai/p/{project_id}';

// Read tasks
const res = await fetch(`${API}/api/tasks?sort=created_at.desc&limit=20`, {
  headers: { 'X-Public-Key': 'pk_...' }
});
const { data } = await res.json();

// Create task (requires auth token)
const createRes = await fetch(`${API}/api/tasks`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer ' + userToken
  },
  body: JSON.stringify({ title: 'Buy milk', done: false })
});
```
Use the admin key for schema changes and privileged operations. Never expose it to the client.
```javascript
// Node.js / server-side
const API = 'https://moondb.ai/p/{project_id}';

const res = await fetch(`${API}/v1/schema`, {
  method: 'PUT',
  headers: {
    'Content-Type': 'application/json',
    'X-Admin-Key': process.env.MOONDB_ADMIN_KEY
  },
  body: JSON.stringify({ tables: { ... } })
});
```
Three credential types, each with a different scope:
| Key | Prefix | Purpose | Header |
|---|---|---|---|
| API Key | mk_ | Platform account (manage projects) | X-API-Key |
| Admin Key | sk_ | Project management (schema, config) | X-Admin-Key |
| Public Key | pk_ | Client-side data access | X-Public-Key |
The admin key (`sk_`) is shown only once when you create a project. Save it immediately. If lost, rotate it via the dashboard. Never expose it in client-side code.
All keys can also be passed as `Authorization: Bearer {key}`. MoonDB auto-detects the key type from the prefix.
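As a sketch of the credential rules above, a request helper might pick one header per call. The function name and precedence order are illustrative, not part of MoonDB's API:

```javascript
// Hypothetical helper: choose one credential header per request.
// Precedence (token > admin > public) is an assumption for this sketch.
function authHeaders({ token, adminKey, publicKey } = {}) {
  if (token) return { Authorization: `Bearer ${token}` }; // end-user JWT
  if (adminKey) return { 'X-Admin-Key': adminKey };       // server-side only
  if (publicKey) return { 'X-Public-Key': publicKey };    // safe for browsers
  throw new Error('no credential supplied');
}
```

Spread the result into your `fetch` headers alongside `Content-Type`.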
After login (`POST /v1/accounts/login`), you get a JWT. Use it as `Authorization: Bearer {token}` for platform operations (project management, billing). The JWT expires after 1 hour by default.
The schema is a JSON object sent to `PUT /p/{id}/v1/schema`:

```json
{
  "tables": {
    "users": {
      "columns": {
        "email": "string required unique",
        "display_name": "string",
        "avatar": "file",
        "role": { "type": "enum", "values": ["user", "admin"], "default": "user" }
      },
      "auth_table": true
    },
    "posts": {
      "columns": {
        "user_id": "ref users required",
        "title": "string required",
        "body": "text",
        "published": "bool default false"
      },
      "access": { "read": "public", "create": "authenticated", "update": "owner", "delete": "owner" },
      "owner_field": "user_id"
    }
  }
}
```
Every table automatically gets these — never define them:
- `id` — UUID v4, primary key
- `created_at` — ISO 8601 timestamp, set on insert
- `updated_at` — ISO 8601 timestamp, updated on every change

| Option | Type | Description |
|---|---|---|
| `auth_table` | boolean | Enable email/password auth. Auto-creates hidden `password_hash`. |
| `verify_email` | boolean | Require email verification for auth users. |
| `access` | object | Per-operation access rules: `read`, `create`, `update`, `delete`. |
| `owner_field` | string | Column for owner access checks (must ref auth table). |
| `unique` | string[][] | Compound unique: `[["user_id","slug"]]`. |
| Type | SQLite | Options |
|---|---|---|
| `string` | TEXT | `max_length` |
| `text` | TEXT | Long-form content |
| `int` | INTEGER | `min`, `max` |
| `float` | REAL | `min`, `max` |
| `bool` | INTEGER | Stored as 0/1 |
| `enum` | TEXT+CHECK | `values` (required), `default` |
| `json` | TEXT | Stored as JSON string |
| `date` | TEXT | ISO 8601 date |
| `datetime` | TEXT | ISO 8601 datetime |
| `ref` | TEXT FK | Foreign key. `on_delete`: cascade / restrict / set_null |
| `file` | TEXT | R2 file reference (URL) |
Add after the type in short-form, or as object keys:
- `required` — NOT NULL
- `unique` — UNIQUE constraint
- `index` — creates an index for faster queries
- `default <value>` — default value

For more control, use the full object form instead of short-form strings:
```json
{
  "price": {
    "type": "float",
    "required": true,
    "min": 0,
    "max": 99999,
    "default": 0
  },
  "status": {
    "type": "enum",
    "values": ["draft", "published", "archived"],
    "default": "draft",
    "required": true
  },
  "author_id": {
    "type": "ref",
    "ref": "users",
    "required": true,
    "on_delete": "cascade"
  }
}
```
Write column definitions as space-separated strings for compact schemas:
```
# String with modifiers
"email": "string required unique"
# equivalent to:
"email": { "type": "string", "required": true, "unique": true }

# With default
"streak": "int default 0"
"is_active": "bool default true"

# Reference to another table
"user_id": "ref users required"

# Reference with cascade delete
"post_id": "ref posts required on_delete cascade"

# Indexed column
"slug": "string required index"
```
The `enum` type always requires the object form, because you need to specify the `values` array.
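The equivalence between short-form strings and the object form can be sketched as a small parser. The parsing rules here are inferred from the examples above (and defaults are kept as strings); the real server-side parser may differ:

```javascript
// Sketch: expand a short-form column string into the object form.
// Rules inferred from the documented examples — not the official parser.
function parseColumn(shortForm) {
  const tokens = shortForm.trim().split(/\s+/);
  const col = { type: tokens[0] };
  let i = 1;
  if (col.type === 'ref') col.ref = tokens[i++]; // "ref users ..."
  while (i < tokens.length) {
    const t = tokens[i++];
    if (t === 'required' || t === 'unique' || t === 'index') col[t] = true;
    else if (t === 'default') col.default = tokens[i++];   // kept as string here
    else if (t === 'on_delete') col.on_delete = tokens[i++];
  }
  return col;
}
```

For example, `parseColumn('ref users required on_delete cascade')` yields `{ type: 'ref', ref: 'users', required: true, on_delete: 'cascade' }`.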
```json
{
  "tables": {
    "users": {
      "columns": {
        "email": "string required unique",
        "name": "string required",
        "bio": "text",
        "avatar": "file",
        "streak": "int default 0"
      },
      "auth_table": true
    },
    "habits": {
      "columns": {
        "user_id": "ref users required on_delete cascade",
        "name": "string required",
        "color": "string default #3b82f6",
        "archived": "bool default false"
      },
      "access": { "read": "owner", "create": "authenticated", "update": "owner", "delete": "owner" },
      "owner_field": "user_id"
    },
    "completions": {
      "columns": {
        "habit_id": "ref habits required on_delete cascade",
        "user_id": "ref users required",
        "date": "date required"
      },
      "access": { "read": "owner", "create": "authenticated", "update": "owner", "delete": "owner" },
      "owner_field": "user_id",
      "unique": [["habit_id", "date"]]
    }
  }
}
```
Always send the full schema. MoonDB diffs it against the current version and generates migrations automatically.
Non-destructive changes (such as adding tables or columns) are applied immediately, no confirmation needed.

Destructive changes (such as dropping tables or columns) return a preview first. Add `"confirm_destructive": true` to apply:
```
PUT /p/{id}/v1/schema

{
  "tables": { ... },
  "confirm_destructive": true
}
```
```bash
GET  /p/{id}/v1/schema           # current schema + version
POST /p/{id}/v1/schema/validate  # dry-run without applying
```
Every table gets full CRUD automatically:
```bash
GET    /p/{id}/api/{table}           # List (with filters)
GET    /p/{id}/api/{table}/{row_id}  # Get one
POST   /p/{id}/api/{table}           # Create
PATCH  /p/{id}/api/{table}/{row_id}  # Update
DELETE /p/{id}/api/{table}/{row_id}  # Delete
POST   /p/{id}/api/{table}/bulk      # Bulk create (array body)
```
```
// List response
{ "data": [ { "id": "...", "title": "...", ... } ],
  "meta": { "total": 42, "limit": 20, "offset": 0 } }

// Single item
{ "data": { "id": "...", "title": "...", ... } }

// Error
{ "error": { "code": "...", "message": "...", "suggestion": "..." } }
```
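Since every response follows this envelope, client code can unwrap it uniformly. A minimal sketch (the helper name is illustrative):

```javascript
// Hypothetical helper: return `data` from a MoonDB response body,
// or throw a descriptive error when the body carries `error`.
function unwrap(body) {
  if (body.error) {
    const { code, message, suggestion } = body.error;
    throw new Error(`${code}: ${message}${suggestion ? ` (hint: ${suggestion})` : ''}`);
  }
  return body.data;
}
```

Usage: `const tasks = unwrap(await res.json());`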
Send an array of objects. Atomic — all succeed or all fail:
```
POST /p/{id}/api/tasks/bulk

[
  { "title": "Task 1", "done": false },
  { "title": "Task 2", "done": true }
]
```
Add filters as query params: `field=op.value`

```bash
# Equals
GET /api/posts?published=eq.true

# Comparison
GET /api/posts?created_at=gte.2025-01-01

# Pattern match
GET /api/posts?title=like.Hello%

# Multiple values
GET /api/posts?status=in.draft,published

# Null checks
GET /api/posts?deleted_at=is_null

# Combine filters
GET /api/posts?published=eq.true&created_at=gte.2025-01-01&sort=created_at.desc
```
| Operator | SQL | Example |
|---|---|---|
| `eq` | `=` | `name=eq.Alice` |
| `neq` | `!=` | `status=neq.draft` |
| `gt`, `gte` | `>`, `>=` | `age=gte.18` |
| `lt`, `lte` | `<`, `<=` | `price=lt.100` |
| `like` | `LIKE` | `name=like.A%` |
| `in` | `IN` | `role=in.user,admin` |
| `is_null` | `IS NULL` | `bio=is_null` |
| `not_null` | `IS NOT NULL` | `bio=not_null` |
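The filter grammar above is regular enough to generate. A small URL builder sketch (the helper and its options object are illustrative, not an official SDK; values are inserted verbatim to match the literal examples):

```javascript
// Hypothetical helper: assemble a list URL in the field=op.value format.
function buildListUrl(base, table, { filters = {}, sort, limit, offset } = {}) {
  const parts = [];
  for (const [field, [op, value]] of Object.entries(filters)) {
    // is_null / not_null take no value after the operator
    parts.push(value === undefined ? `${field}=${op}` : `${field}=${op}.${value}`);
  }
  if (sort) parts.push(`sort=${sort}`);
  if (limit != null) parts.push(`limit=${limit}`);
  if (offset != null) parts.push(`offset=${offset}`);
  return `${base}/api/${table}${parts.length ? '?' + parts.join('&') : ''}`;
}
```

For example, `buildListUrl(API, 'posts', { filters: { published: ['eq', 'true'] }, sort: 'created_at.desc', limit: 20 })` reproduces the combined-filter query shown above.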
```bash
GET /api/posts?sort=created_at.desc
GET /api/posts?sort=title.asc,created_at.desc
```
```bash
# Offset-based
GET /api/posts?limit=20&offset=40

# Cursor-based (use next_cursor from response meta)
GET /api/posts?limit=20&cursor=eyJ...
```
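A cursor loop can drain a listing page by page. In this sketch, `next_cursor` living in the response `meta` is an assumption based on the note above, and the page-fetching function is injected so the loop itself needs no network:

```javascript
// Sketch: collect all rows from a cursor-paginated listing.
// `fetchPage({ limit, cursor })` must resolve to a { data, meta } envelope.
async function fetchAll(fetchPage, limit = 20) {
  const rows = [];
  let cursor;
  do {
    const { data, meta } = await fetchPage({ limit, cursor });
    rows.push(...data);
    cursor = meta && meta.next_cursor; // undefined -> last page
  } while (cursor);
  return rows;
}
```

In practice `fetchPage` would wrap `fetch` with the `cursor` query param.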
```bash
# Select specific fields
GET /api/posts?select=id,title,created_at

# Include related records (via ref columns)
GET /api/posts?include=user_id
```
Mark a table with `auth_table: true` to get built-in email/password auth.
```bash
# Sign up
POST /p/{id}/auth/signup
{ "email": "user@example.com", "password": "secret123" }
# -> { token, refresh_token, user }

# Log in
POST /p/{id}/auth/login
{ "email": "user@example.com", "password": "secret123" }

# Refresh token
POST /p/{id}/auth/refresh
{ "refresh_token": "..." }

# Get current user
GET /p/{id}/auth/me
Authorization: Bearer {token}

# Logout
POST /p/{id}/auth/logout
Authorization: Bearer {token}
```
Pass the JWT in subsequent requests:
```
Authorization: Bearer eyJhbGci...
```
Add `"verify_email": true` to the auth table. Users receive a verification email on signup.
```bash
GET /p/{id}/auth/verify?token=...

POST /p/{id}/auth/resend-verification
Authorization: Bearer {token}
```
If your auth table has custom columns, pass them in the signup body:
```
POST /p/{id}/auth/signup

{
  "email": "user@example.com",
  "password": "secret123",
  "display_name": "Alice",
  "role": "user"
}
```
Set per-operation access rules in the schema:
| Level | Meaning | Required header |
|---|---|---|
| `public` | Anyone (with public key) | `X-Public-Key` |
| `authenticated` | Valid user JWT | `Authorization: Bearer` |
| `owner` | Only own rows | `Authorization: Bearer` |
| `admin` | Admin key only | `X-Admin-Key` |
```json
"access": {
  "read": "public",
  "create": "authenticated",
  "update": "owner",
  "delete": "admin"
},
"owner_field": "user_id"
```
To use `owner`, you must set `owner_field` to a column that references the auth table. MoonDB auto-filters queries so users only see their own rows, and blocks updates/deletes on rows they don't own.
| Use case | Access config |
|---|---|
| Public blog | read: public, create/update/delete: admin |
| Social feed | read: public, create: authenticated, update/delete: owner |
| Private notes | all: owner |
| Admin-only config | all: admin |
Upload files to R2 storage via multipart form:
```bash
# Upload
POST /p/{id}/storage/upload
Content-Type: multipart/form-data
Authorization: Bearer {token}
# -> { url, key, size, content_type }

# Download
GET /p/{id}/storage/{key}

# Delete
DELETE /p/{id}/storage/{key}
X-Admin-Key: sk_...
```
Use `file`-type columns in your schema. Upload the file first, then store the returned URL in the column:

```bash
# 1. Upload
POST /p/{id}/storage/upload
# -> { "url": "https://..." }

# 2. Save to record
PATCH /p/{id}/api/users/{user_id}
{ "avatar": "https://..." }
```
MoonDB includes built-in AI endpoints powered by Cloudflare Workers AI. Define endpoints in your schema, call them via the REST API, and pay with credits.
Add an `ai_endpoints` section to your schema:
```json
{
  "tables": { ... },
  "ai_endpoints": {
    "summarize": {
      "model": "gemma",
      "prompt": "Summarize this text in 2 sentences: {{text}}",
      "access": "auth"
    },
    "generate_avatar": {
      "model": "flux-schnell",
      "prompt": "A minimal avatar for a user named {{name}}, flat design",
      "access": "auth"
    }
  }
}
```
Template parameters (`{{name}}`) are filled from the request body at call time.
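The substitution can be pictured as a simple string replacement. This is an illustrative approximation — the server's exact rules (escaping, behavior on missing keys) are not documented:

```javascript
// Sketch: fill {{slot}} markers in a prompt template from a request body.
// Missing keys are replaced with the empty string in this approximation.
function fillTemplate(prompt, body) {
  return prompt.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    key in body ? String(body[key]) : '');
}
```

For example, calling `summarize` with `{ "text": "..." }` would yield the prompt with `{{text}}` replaced by that value.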
| Alias | Type | Credits | Description |
|---|---|---|---|
| `gemma` | Text | 1/2 per 1K in/out | Fast & cheap. 256K context, vision, reasoning. |
| `gpt-oss` | Text | 2/4 per 1K in/out | Most intelligent. Complex reasoning, code gen. |
| `flux-schnell` | Image | 10 per image | Fast image generation for prototyping. |
| `flux-dev` | Image | 100 per image | High quality, photorealistic images. |
```
POST /p/{project_id}/ai/{endpoint_name}
Authorization: Bearer {token}
Content-Type: application/json

{
  "text": "MoonDB is a DBaaS built for coding agents..."
}
```
Text models return:
```json
{ "type": "text", "result": "...", "model": "gemma", "credits_used": 3 }
```
Image models return base64-encoded data:
```json
{ "type": "image", "image": "data:image/png;base64,...", "model": "flux-schnell", "credits_used": 10 }
```
Each endpoint has an `access` field:

- `"public"` — no auth required.
- `"auth"` (default) — end-user JWT or API key required.
- `"admin"` — admin key only.

Each plan includes free credits per month. When exhausted, buy more via `POST /v1/billing/ai-credits`. Check your balance in Dashboard → AI.
MoonDB generates project-specific context files so your coding agent knows the exact schema, endpoints, and auth flow.
Dashboard → Overview → copy the Agent Prompt. Paste into your agent's context file. This is the fastest way.
Dashboard → Agent Files has downloadable files:
- `.cursorrules` — for Cursor
- `CLAUDE.md` — for Claude Code
- LLM context — for any agent

```bash
# Text format
GET /p/{id}/v1/llm-context

# JSON format
GET /p/{id}/v1/llm-context?format=json

# Agent-specific
GET /p/{id}/v1/cursor-rules
GET /p/{id}/v1/claude-md
```
- Store the admin key in `.env` as `MOONDB_ADMIN_KEY`
- Error `suggestion` fields are designed for agents to self-correct

MoonDB exposes an MCP endpoint at `POST /mcp` so AI agents can manage projects through a single JSON-RPC 2.0 interface instead of crafting individual REST calls.
Pass your API key as `X-API-Key: mk_...` or `Authorization: Bearer mk_...`. All tools operate on projects owned by the authenticated account.
Every request is a JSON-RPC 2.0 envelope:
```
POST /mcp
Content-Type: application/json
X-API-Key: mk_...

{ "jsonrpc": "2.0", "id": 1, "method": "tools/call",
  "params": { "name": "create_project", "arguments": { "name": "my-app" } } }
```
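A client only needs to stamp out this envelope for each tool call. A minimal sketch — the `envelope` helper and its module-level id counter are illustrative, not part of an official client:

```javascript
// Sketch: build a JSON-RPC 2.0 tools/call envelope with an incrementing id.
let nextId = 0;
function envelope(tool, args) {
  return {
    jsonrpc: '2.0',
    id: ++nextId,
    method: 'tools/call',
    params: { name: tool, arguments: args },
  };
}
```

You would then `POST` `JSON.stringify(envelope('create_project', { name: 'my-app' }))` to `/mcp` with your `X-API-Key` header.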
| Tool | Description | Required args |
|---|---|---|
| `create_project` | Create a new project | `name` |
| `set_schema` | Apply schema & run migrations | `project_id`, `schema` |
| `get_schema` | Get current schema + table info | `project_id` |
| `query` | Filtered SELECT on a table | `project_id`, `table` |
| `insert` | Insert one or many records | `project_id`, `table`, `data` |
| `ai_call` | Call an AI endpoint | `project_id`, `endpoint` |
- `initialize` — returns server info and capabilities.
- `tools/list` — returns the tool definitions above.
- `tools/call` — executes a tool with provided arguments.

```
# 1. Create project
{ "jsonrpc":"2.0", "id":1, "method":"tools/call",
  "params":{"name":"create_project","arguments":{"name":"todo-app"}} }

# 2. Set schema
{ "jsonrpc":"2.0", "id":2, "method":"tools/call",
  "params":{"name":"set_schema","arguments":{
    "project_id":"...",
    "schema":{"tables":{"tasks":{"columns":{"title":"string required","done":"bool default false"}}}}
  }} }

# 3. Insert data
{ "jsonrpc":"2.0", "id":3, "method":"tools/call",
  "params":{"name":"insert","arguments":{
    "project_id":"...","table":"tasks","data":{"title":"Buy milk"}
  }} }
```
MoonDB is the backend. Your frontend can be deployed anywhere.
```bash
# .env.local or Vercel dashboard
NEXT_PUBLIC_MOONDB_URL=https://moondb.ai/p/{project_id}
NEXT_PUBLIC_MOONDB_PUBLIC_KEY=pk_...
MOONDB_ADMIN_KEY=sk_...  # server-side only
```
```toml
# netlify.toml or dashboard
[build.environment]
VITE_MOONDB_URL = "https://moondb.ai/p/{project_id}"
VITE_MOONDB_PUBLIC_KEY = "pk_..."
```
```toml
# wrangler.toml or Pages dashboard
[vars]
VITE_MOONDB_URL = "https://moondb.ai/p/{project_id}"
VITE_MOONDB_PUBLIC_KEY = "pk_..."
```
```bash
# Set in platform dashboard:
MOONDB_URL=https://moondb.ai/p/{project_id}
MOONDB_PUBLIC_KEY=pk_...
MOONDB_ADMIN_KEY=sk_...
```
MoonDB has CORS enabled, so any static site can call the API from the browser:
```javascript
const API = 'https://moondb.ai/p/{project_id}';
const PK = 'pk_...';

const res = await fetch(API + '/api/tasks', {
  headers: { 'X-Public-Key': PK }
});
const { data } = await res.json();
```
Use Clerk, Auth0, Supabase, or any OIDC provider instead of MoonDB's built-in auth.
```
PUT /p/{id}/v1/auth-config
X-Admin-Key: sk_...

{
  "provider": "external",
  "jwks_url": "https://your-provider.com/.well-known/jwks.json",
  "user_id_claim": "sub",
  "audience": "your-audience",
  "issuer": "https://your-issuer"
}
```
After configuration, users authenticate with your provider and pass their JWT to MoonDB. MoonDB validates the token against the JWKS endpoint.
```
Authorization: Bearer {token}
```

Every error includes a `suggestion` field — actionable instructions for agents and developers.
```json
{
  "error": {
    "code": "FK_NOT_FOUND",
    "message": "Foreign key 'user_id' references row 'abc' which does not exist",
    "suggestion": "Create the referenced user first, then retry."
  }
}
```
| Prefix | Category | Example |
|---|---|---|
| `VALIDATION_*` | Invalid input | Missing required field |
| `AUTH_*` | Authentication | Invalid credentials |
| `ACCESS_*` | Authorization | Insufficient permissions |
| `FK_*` | Foreign keys | Referenced row not found |
| `SCHEMA_*` | Schema errors | Invalid column type |
| `RATE_*` | Rate limits | Too many requests |
| `STORAGE_*` | File storage | File too large |
| Status | Meaning |
|---|---|
| 200 | Success |
| 201 | Created |
| 400 | Validation error |
| 401 | Authentication required / invalid |
| 403 | Forbidden (insufficient permissions) |
| 404 | Not found |
| 409 | Conflict (duplicate unique value) |
| 429 | Rate limit or plan quota exceeded |
| | New Moon | Half Moon | Full Moon | Eclipse |
|---|---|---|---|---|
| Price | Free | $9/mo | $29/mo | $79/mo |
| Projects | 1 | 10 | 100 | Unlimited |
| Reads/mo | 1M | 50M | 500M | 2B |
| Writes/mo | 25K | 500K | 10M | 50M |
| Storage | 100 MB | 5 GB | 25 GB | 100 GB |
| Rate limit | 60 RPM | 1,200 RPM | 3,000 RPM | 10,000 RPM |
| Emails/mo | 100 | 5,000 | 50,000 | 500,000 |
Upgrade anytime from Dashboard → Billing. Downgrades take effect at the end of the billing period.