Quick Start

MoonDB gives you a database, REST API, auth, and file storage from a single JSON schema. Two API calls to a working backend:

  1. Create a project from the Dashboard or via POST /v1/projects — you get an admin key (sk_) and a public key (pk_)
  2. Set a schema via PUT /p/{project_id}/v1/schema — REST API is live instantly
# 1. Create a project
curl -X POST https://moondb.ai/v1/projects \
  -H "X-API-Key: mk_..." \
  -d '{"name":"my-app"}'

# 2. Set schema
curl -X PUT https://moondb.ai/p/{project_id}/v1/schema \
  -H "X-Admin-Key: sk_..." \
  -H "Content-Type: application/json" \
  -d '{"tables":{"tasks":{"columns":{"title":"string required","done":"bool default false"}}}}'

# Done! CRUD is live:
curl https://moondb.ai/p/{project_id}/api/tasks -H "X-Public-Key: pk_..."
Prefer a no-code start? Use the Prompt-first Setup — copy one prompt from the dashboard and let your coding agent do the rest.

Prompt-first Setup

The fastest path: copy one prompt, paste it into your agent, and it handles everything — schema design, API calls, frontend wiring.

  1. Go to the Dashboard and create a project
  2. On the Overview page, click copy on the Agent Prompt block
  3. Paste the prompt into your agent's context file:
| Agent | File | Location |
|---|---|---|
| Claude Code | CLAUDE.md | Project root |
| Cursor | .cursorrules | Project root |
| Windsurf | .windsurfrules | Project root |
| Copilot | .github/copilot-instructions.md | Repo root |
| Any agent | System prompt | Paste directly |

The prompt contains everything: API base URL, keys, schema format reference, endpoint patterns, and auth instructions. Your agent can design and apply a schema, then build the frontend — all without reading docs.

What the agent gets

The generated prompt includes:

  - The project's API base URL and keys
  - A schema format reference
  - Endpoint patterns
  - Auth instructions

Example workflow

# You say to the agent:
"Build a habit tracker. Users can create habits, mark them
done each day, and see streaks."

# The agent reads the MoonDB prompt from CLAUDE.md and:
# 1. Designs a schema (users, habits, completions)
# 2. PUTs it to /v1/schema
# 3. Builds the React/Vue/Svelte frontend
# 4. Wires up auth and API calls
After the agent changes the schema, tell it to re-fetch GET /v1/llm-context and update the prompt file so it stays in sync.

Manual Setup

If you prefer working without a coding agent, store credentials in .env:

# .env
MOONDB_API_BASE=https://moondb.ai/p/{project_id}
MOONDB_ADMIN_KEY=sk_...
MOONDB_PUBLIC_KEY=pk_...

Client-side usage

Use the public key for browser reads. MoonDB has CORS enabled so any origin can call the API.

// Project API base (from your project's Overview page)
const API = 'https://moondb.ai/p/{project_id}';

// Read tasks
const listRes = await fetch(
  `${API}/api/tasks?sort=created_at.desc&limit=20`,
  { headers: { 'X-Public-Key': 'pk_...' } }
);
const { data } = await listRes.json();

// Create task (requires auth token)
const createRes = await fetch(`${API}/api/tasks`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer ' + userToken
  },
  body: JSON.stringify({ title: 'Buy milk', done: false })
});

Server-side usage

Use the admin key for schema changes and privileged operations. Never expose it to the client.

// Node.js / server-side
const res = await fetch(`${process.env.MOONDB_API_BASE}/v1/schema`, {
  method: 'PUT',
  headers: {
    'Content-Type': 'application/json',
    'X-Admin-Key': process.env.MOONDB_ADMIN_KEY
  },
  body: JSON.stringify({ tables: { ... } })
});

Keys & Auth

Three credential types, each with a different scope:

| Key | Prefix | Purpose | Header |
|---|---|---|---|
| API Key | mk_ | Platform account (manage projects) | X-API-Key |
| Admin Key | sk_ | Project management (schema, config) | X-Admin-Key |
| Public Key | pk_ | Client-side data access | X-Public-Key |

The admin key (sk_) is shown only once when you create a project. Save it immediately. If lost, rotate it via the dashboard. Never expose it in client-side code.

Alternative: Bearer token

All keys can also be passed as Authorization: Bearer {key}. MoonDB auto-detects the key type from the prefix.
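Since the prefix encodes the key type, client code can route a credential without extra configuration. An illustrative helper (not a MoonDB SDK function) mirroring the key table above:

```javascript
// Classify a MoonDB credential by its prefix.
// Returns 'api', 'admin', 'public', or 'unknown'.
function keyType(key) {
  if (key.startsWith('mk_')) return 'api';
  if (key.startsWith('sk_')) return 'admin';
  if (key.startsWith('pk_')) return 'public';
  return 'unknown';
}
```

This is the same detection MoonDB performs server-side when a key arrives via the Bearer header.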

Platform JWT

After login (POST /v1/accounts/login), you get a JWT. Use it as Authorization: Bearer {token} for platform operations (project management, billing). The JWT expires after 1 hour by default.
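Because the JWT expires, clients often inspect the exp claim locally to decide when to re-authenticate. A sketch using standard base64url decoding of the payload (this does not verify the signature; the server still does that):

```javascript
// Check whether a JWT is expired (or about to expire, within skewSeconds).
// Reads the payload only; signature verification stays server-side.
function isTokenExpired(token, skewSeconds = 30) {
  const parts = token.split('.');
  if (parts.length !== 3) throw new Error('not a JWT');
  const payload = JSON.parse(Buffer.from(parts[1], 'base64url').toString('utf8'));
  if (typeof payload.exp !== 'number') return false; // no expiry claim
  return payload.exp <= Date.now() / 1000 + skewSeconds;
}
```

A client might call this before each platform request and re-login when it returns true.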

Defining Schema

The schema is a JSON object sent to PUT /p/{id}/v1/schema:

{
  "tables": {
    "users": {
      "columns": {
        "email": "string required unique",
        "display_name": "string",
        "avatar": "file",
        "role": { "type": "enum", "values": ["user", "admin"], "default": "user" }
      },
      "auth_table": true
    },
    "posts": {
      "columns": {
        "user_id": "ref users required",
        "title": "string required",
        "body": "text",
        "published": "bool default false"
      },
      "access": { "read": "public", "create": "authenticated", "update": "owner", "delete": "owner" },
      "owner_field": "user_id"
    }
  }
}
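Before sending a schema, a client can sanity-check it, for example that every ref column points at a table defined in the same document. A hypothetical pre-flight check (the server's validator, and POST /v1/schema/validate, remain authoritative):

```javascript
// Return a list of problems: short-form "ref X" or object-form
// { type: "ref", ref: "X" } columns whose target table is missing.
function checkRefs(schema) {
  const tables = Object.keys(schema.tables);
  const problems = [];
  for (const [tname, tdef] of Object.entries(schema.tables)) {
    for (const [cname, cdef] of Object.entries(tdef.columns)) {
      const target =
        typeof cdef === 'string'
          ? (cdef.split(/\s+/)[0] === 'ref' ? cdef.split(/\s+/)[1] : null)
          : (cdef.type === 'ref' ? cdef.ref : null);
      if (target && !tables.includes(target)) {
        problems.push(`${tname}.${cname} references unknown table '${target}'`);
      }
    }
  }
  return problems;
}
```

Running this on the example above returns an empty list, since posts.user_id targets the defined users table.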

Built-in columns

Every table automatically gets these — never define them:

  - id: primary key, generated on insert
  - created_at: timestamp set when the row is created
  - updated_at: timestamp set when the row is last modified

Table options

| Option | Type | Description |
|---|---|---|
| auth_table | boolean | Enable email/password auth. Auto-creates hidden password_hash. |
| verify_email | boolean | Require email verification for auth users. |
| access | object | Per-operation access rules: read, create, update, delete. |
| owner_field | string | Column for owner access checks (must ref auth table). |
| unique | string[][] | Compound unique: [["user_id","slug"]]. |

Column Types

| Type | SQLite | Options |
|---|---|---|
| string | TEXT | max_length |
| text | TEXT | Long-form content |
| int | INTEGER | min, max |
| float | REAL | min, max |
| bool | INTEGER | Stored as 0/1 |
| enum | TEXT + CHECK | values (required), default |
| json | TEXT | Stored as JSON string |
| date | TEXT | ISO 8601 date |
| datetime | TEXT | ISO 8601 datetime |
| ref | TEXT (FK) | Foreign key. on_delete: cascade / restrict / set_null |
| file | TEXT | R2 file reference (URL) |

Column modifiers

Add after the type in short-form, or as object keys:

  - required: the value must be present on create
  - unique: enforce a unique index on the column
  - default <value>: value used when the field is omitted
  - index: create an index on the column
  - on_delete <action>: ref columns only; cascade, restrict, or set_null

Object form

For more control, use the full object form instead of short-form strings:

{
  "price": {
    "type": "float",
    "required": true,
    "min": 0,
    "max": 99999,
    "default": 0
  },
  "status": {
    "type": "enum",
    "values": ["draft", "published", "archived"],
    "default": "draft",
    "required": true
  },
  "author_id": {
    "type": "ref",
    "ref": "users",
    "required": true,
    "on_delete": "cascade"
  }
}

Short-form Syntax

Write column definitions as space-separated strings for compact schemas:

# String with modifiers
"email": "string required unique"
# equivalent to:
"email": { "type": "string", "required": true, "unique": true }

# With default
"streak": "int default 0"
"is_active": "bool default true"

# Reference to another table
"user_id": "ref users required"

# Reference with cascade delete
"post_id": "ref posts required on_delete cascade"

# Indexed column
"slug": "string required index"
enum type always requires object form because you need to specify the values array.
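The short-form grammar expands mechanically into the object form. A hypothetical sketch of that expansion, covering only the modifiers shown in this section (the server's real parser is authoritative):

```javascript
// Expand a short-form column string, e.g. "ref posts required on_delete cascade",
// into the equivalent object form.
function parseShortForm(spec) {
  const tokens = spec.split(/\s+/);
  const col = { type: tokens[0] };
  // For refs, the token after "ref" is the target table.
  if (col.type === 'ref') col.ref = tokens.splice(1, 1)[0];
  for (let i = 1; i < tokens.length; i++) {
    const t = tokens[i];
    if (t === 'required' || t === 'unique' || t === 'index') col[t] = true;
    else if (t === 'default') col.default = coerce(tokens[++i]);
    else if (t === 'on_delete') col.on_delete = tokens[++i];
  }
  return col;
}

// Coerce default values: booleans and numbers, else keep the raw string.
function coerce(v) {
  if (v === 'true') return true;
  if (v === 'false') return false;
  return isNaN(Number(v)) ? v : Number(v);
}
```

For example, parseShortForm('string required unique') yields the same object shown in the equivalence above.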

Full example

{
  "tables": {
    "users": {
      "columns": {
        "email": "string required unique",
        "name": "string required",
        "bio": "text",
        "avatar": "file",
        "streak": "int default 0"
      },
      "auth_table": true
    },
    "habits": {
      "columns": {
        "user_id": "ref users required on_delete cascade",
        "name": "string required",
        "color": "string default #3b82f6",
        "archived": "bool default false"
      },
      "access": { "read": "owner", "create": "authenticated", "update": "owner", "delete": "owner" },
      "owner_field": "user_id"
    },
    "completions": {
      "columns": {
        "habit_id": "ref habits required on_delete cascade",
        "user_id": "ref users required",
        "date": "date required"
      },
      "access": { "read": "owner", "create": "authenticated", "update": "owner", "delete": "owner" },
      "owner_field": "user_id",
      "unique": [["habit_id", "date"]]
    }
  }
}

Schema Updates

Always send the full schema. MoonDB diffs it against the current version and generates migrations automatically.

Non-destructive changes

Applied immediately, no confirmation needed. Examples: adding a table, adding a column, adding an index.

Destructive changes

Changes that can lose data, such as dropping a table or column or changing a column type, return a preview first. Add "confirm_destructive": true to apply:

PUT /p/{id}/v1/schema
{
  "tables": { ... },
  "confirm_destructive": true
}

Useful endpoints

GET  /p/{id}/v1/schema           # current schema + version
POST /p/{id}/v1/schema/validate   # dry-run without applying

REST Endpoints

Every table gets full CRUD automatically:

GET    /p/{id}/api/{table}              # List (with filters)
GET    /p/{id}/api/{table}/{row_id}     # Get one
POST   /p/{id}/api/{table}              # Create
PATCH  /p/{id}/api/{table}/{row_id}     # Update
DELETE /p/{id}/api/{table}/{row_id}     # Delete
POST   /p/{id}/api/{table}/bulk         # Bulk create (array body)

Response format

// List response
{
  "data": [ { "id": "...", "title": "...", ... } ],
  "meta": { "total": 42, "limit": 20, "offset": 0 }
}

// Single item
{ "data": { "id": "...", "title": "...", ... } }

// Error
{ "error": { "code": "...", "message": "...", "suggestion": "..." } }
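Because every response is wrapped in either data or error, client code can normalize handling in one place. A sketch of such a helper (not part of MoonDB; the fields mirror the formats above):

```javascript
// Unwrap a MoonDB-style response body: return `data` on success,
// throw an Error carrying the code and suggestion on failure.
function unwrap(body) {
  if (body && body.error) {
    const { code, message, suggestion } = body.error;
    const err = new Error(`${code}: ${message}${suggestion ? ` (hint: ${suggestion})` : ''}`);
    err.code = code;
    throw err;
  }
  return body.data;
}
```

Typical use: const tasks = unwrap(await res.json());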

Bulk insert

Send an array of objects. Atomic — all succeed or all fail:

POST /p/{id}/api/tasks/bulk
[
  { "title": "Task 1", "done": false },
  { "title": "Task 2", "done": true }
]

Filtering & Sorting

Add filters as query params: field=op.value

# Equals
GET /api/posts?published=eq.true

# Comparison
GET /api/posts?created_at=gte.2025-01-01

# Pattern match
GET /api/posts?title=like.Hello%

# Multiple values
GET /api/posts?status=in.draft,published

# Null checks
GET /api/posts?deleted_at=is_null

# Combine filters
GET /api/posts?published=eq.true&created_at=gte.2025-01-01&sort=created_at.desc

| Operator | SQL | Example |
|---|---|---|
| eq | = | name=eq.Alice |
| neq | != | status=neq.draft |
| gt, gte | >, >= | age=gte.18 |
| lt, lte | <, <= | price=lt.100 |
| like | LIKE | name=like.A% |
| in | IN | role=in.user,admin |
| is_null | IS NULL | bio=is_null |
| not_null | IS NOT NULL | bio=not_null |
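Filter strings compose mechanically from field, operator, and value, so a client helper can assemble them. An illustrative sketch (the helper and its [op, value] input shape are my own convention, not a MoonDB API):

```javascript
// Build a list-endpoint query string from filters and options.
// Filters follow the `field=op.value` convention; operators without a
// value (is_null, not_null) are passed as a one-element array.
function buildQuery(filters = {}, { sort, limit, offset } = {}) {
  const params = new URLSearchParams();
  for (const [field, [op, value]] of Object.entries(filters)) {
    params.set(field, value === undefined ? op : `${op}.${value}`);
  }
  if (sort) params.set('sort', sort);
  if (limit !== undefined) params.set('limit', String(limit));
  if (offset !== undefined) params.set('offset', String(offset));
  return params.toString();
}
```

For example, buildQuery({ published: ['eq', true] }, { sort: 'created_at.desc' }) produces the combined-filter style shown above.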

Sorting

GET /api/posts?sort=created_at.desc
GET /api/posts?sort=title.asc,created_at.desc

Pagination

# Offset-based
GET /api/posts?limit=20&offset=40

# Cursor-based (use next_cursor from response meta)
GET /api/posts?limit=20&cursor=eyJ...
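For offset pagination, the meta block of each response tells you whether another page exists. A sketch of that arithmetic, using the total, limit, and offset fields from the response format above:

```javascript
// Given the `meta` of the page just fetched, return the params for the
// next page, or null when all rows have been retrieved.
function nextPage(meta) {
  const next = meta.offset + meta.limit;
  return next < meta.total ? { limit: meta.limit, offset: next } : null;
}
```

A client loops, fetching with the returned params, until nextPage yields null.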

Field selection & includes

# Select specific fields
GET /api/posts?select=id,title,created_at

# Include related records (via ref columns)
GET /api/posts?include=user_id

End-user Auth

Mark a table as auth_table: true to get built-in email/password auth.

# Sign up
POST /p/{id}/auth/signup
{ "email": "user@example.com", "password": "secret123" }
# -> { token, refresh_token, user }

# Log in
POST /p/{id}/auth/login
{ "email": "user@example.com", "password": "secret123" }

# Refresh token
POST /p/{id}/auth/refresh
{ "refresh_token": "..." }

# Get current user
GET /p/{id}/auth/me
Authorization: Bearer {token}

# Logout
POST /p/{id}/auth/logout
Authorization: Bearer {token}

Using the token

Pass the JWT in subsequent requests:

Authorization: Bearer eyJhbGci...

Email verification

Add "verify_email": true to the auth table. Users receive a verification email on signup.

GET  /p/{id}/auth/verify?token=...
POST /p/{id}/auth/resend-verification
     Authorization: Bearer {token}

Signup with extra fields

If your auth table has custom columns, pass them in the signup body:

POST /p/{id}/auth/signup
{
  "email": "user@example.com",
  "password": "secret123",
  "display_name": "Alice",
  "role": "user"
}

Access Control

Set per-operation access rules in the schema:

| Level | Meaning | Required header |
|---|---|---|
| public | Anyone (with public key) | X-Public-Key |
| authenticated | Valid user JWT | Authorization: Bearer |
| owner | Only own rows | Authorization: Bearer |
| admin | Admin key only | X-Admin-Key |

"access": {
  "read": "public",
  "create": "authenticated",
  "update": "owner",
  "delete": "admin"
},
"owner_field": "user_id"
When using owner, you must set owner_field to a column that references the auth table. MoonDB auto-filters queries so users only see their own rows, and blocks updates/deletes on rows they don't own.
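The owner rules can be pictured as an implicit filter added to every query. A simplified model of the semantics (MoonDB enforces this server-side; this sketch only illustrates the behavior):

```javascript
// Owner-scoped reads: only rows whose owner_field matches the
// authenticated user's id are visible.
function visibleRows(rows, ownerField, userId) {
  return rows.filter((row) => row[ownerField] === userId);
}

// Owner-scoped writes: updates and deletes are rejected unless the
// row belongs to the requesting user.
function canModify(row, ownerField, userId) {
  return row[ownerField] === userId;
}
```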

Common patterns

| Use case | Access config |
|---|---|
| Public blog | read: public, create/update/delete: admin |
| Social feed | read: public, create: authenticated, update/delete: owner |
| Private notes | all: owner |
| Admin-only config | all: admin |

File Storage

Upload files to R2 storage via multipart form:

# Upload
POST /p/{id}/storage/upload
Content-Type: multipart/form-data
Authorization: Bearer {token}

# -> { url, key, size, content_type }

# Download
GET /p/{id}/storage/{key}

# Delete
DELETE /p/{id}/storage/{key}
X-Admin-Key: sk_...

Using with file columns

Use file type columns in your schema. Upload the file first, then store the returned URL in the column:

# 1. Upload
POST /p/{id}/storage/upload  ->  { "url": "https://..." }

# 2. Save to record
PATCH /p/{id}/api/users/{user_id}
{ "avatar": "https://..." }

AI Endpoints

MoonDB includes built-in AI endpoints powered by Cloudflare Workers AI. Define endpoints in your schema, call them via the REST API, and pay with credits.

Defining AI endpoints

Add an ai_endpoints section to your schema:

{
  "tables": { ... },
  "ai_endpoints": {
    "summarize": {
      "model": "gemma",
      "prompt": "Summarize this text in 2 sentences: {{text}}",
      "access": "auth"
    },
    "generate_avatar": {
      "model": "flux-schnell",
      "prompt": "A minimal avatar for a user named {{name}}, flat design",
      "access": "auth"
    }
  }
}

Template parameters ({{name}}) are filled from the request body at call time.
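Parameter substitution behaves like plain string templating: each {{param}} is replaced by the matching field of the request body. An illustrative sketch (the server's actual templating may handle escaping differently):

```javascript
// Fill {{param}} placeholders in a prompt template from a request body.
// Unknown placeholders are left intact so missing params stay visible.
function renderPrompt(template, body) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in body ? String(body[key]) : match
  );
}
```

So calling the summarize endpoint with { "text": "..." } effectively renders "Summarize this text in 2 sentences: ..." before it reaches the model.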

Available models

| Alias | Type | Credits | Description |
|---|---|---|---|
| gemma | Text | 1/2 per 1K in/out | Fast & cheap. 256K context, vision, reasoning. |
| gpt-oss | Text | 2/4 per 1K in/out | Most intelligent. Complex reasoning, code gen. |
| flux-schnell | Image | 10 per image | Fast image generation for prototyping. |
| flux-dev | Image | 100 per image | High quality, photorealistic images. |
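As a worked example of the text-model rates: a gemma call with 1,000 input and 500 output tokens costs 1 + 1 = 2 credits. A sketch of that arithmetic (rounding up to whole credits is an assumption; actual billing granularity may differ):

```javascript
// Per-1K-token credit rates for the text models listed above.
const RATES = {
  gemma: { in: 1, out: 2 },
  'gpt-oss': { in: 2, out: 4 },
};

// Estimate credits for a text call. Image models are flat-rate per image.
function estimateCredits(model, tokensIn, tokensOut) {
  const r = RATES[model];
  if (!r) throw new Error(`unknown text model: ${model}`);
  return Math.ceil((tokensIn / 1000) * r.in + (tokensOut / 1000) * r.out);
}
```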

Calling an endpoint

POST /p/{project_id}/ai/{endpoint_name}
Authorization: Bearer {token}
Content-Type: application/json

{
  "text": "MoonDB is a DBaaS built for coding agents..."
}

Response

Text models return:

{ "type": "text", "result": "...", "model": "gemma", "credits_used": 3 }

Image models return base64-encoded data:

{ "type": "image", "image": "data:image/png;base64,...", "model": "flux-schnell", "credits_used": 10 }

Access control

Each endpoint declares an access field ("auth" in the examples above) that controls who can call it.

Credits

Each plan includes free credits per month. When exhausted, buy more via POST /v1/billing/ai-credits. Check your balance in Dashboard → AI.

Agent Integration

MoonDB generates project-specific context files so your coding agent knows the exact schema, endpoints, and auth flow.

Option 1: Dashboard prompt (recommended)

Dashboard → Overview → copy the Agent Prompt. Paste into your agent's context file. This is the fastest way.

Option 2: Download context files

Dashboard → Agent Files has downloadable context files, the same content served by the runtime endpoints in Option 3.

Option 3: Fetch at runtime

# Text format
GET /p/{id}/v1/llm-context

# JSON format
GET /p/{id}/v1/llm-context?format=json

# Agent-specific
GET /p/{id}/v1/cursor-rules
GET /p/{id}/v1/claude-md

MCP Server

MoonDB exposes an MCP endpoint at POST /mcp so AI agents can manage projects through a single JSON-RPC 2.0 interface instead of crafting individual REST calls.

Authentication

Pass your API key as X-API-Key: mk_... or Authorization: Bearer mk_.... All tools operate on projects owned by the authenticated account.

Protocol

Every request is a JSON-RPC 2.0 envelope:

POST /mcp
Content-Type: application/json
X-API-Key: mk_...

{ "jsonrpc": "2.0", "id": 1, "method": "tools/call",
  "params": { "name": "create_project", "arguments": { "name": "my-app" } } }
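Building envelopes by hand is error-prone, so a small helper can keep the id and structure consistent. A sketch of one (standard JSON-RPC 2.0 construction, not a MoonDB SDK):

```javascript
// Build a JSON-RPC 2.0 tools/call envelope for the MCP endpoint.
// Each call gets a fresh incrementing id.
let nextId = 0;
function toolCall(name, args) {
  return {
    jsonrpc: '2.0',
    id: ++nextId,
    method: 'tools/call',
    params: { name, arguments: args },
  };
}
```

POST the returned object as the JSON body to /mcp with your X-API-Key header.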

Available tools

| Tool | Description | Required args |
|---|---|---|
| create_project | Create a new project | name |
| set_schema | Apply schema & run migrations | project_id, schema |
| get_schema | Get current schema + table info | project_id |
| query | Filtered SELECT on a table | project_id, table |
| insert | Insert one or many records | project_id, table, data |
| ai_call | Call an AI endpoint | project_id, endpoint |

Supported methods

The endpoint supports the standard MCP JSON-RPC methods: initialize, tools/list (enumerate the tools above), and tools/call (invoke a tool).

Example: full flow

# 1. Create project
{ "jsonrpc":"2.0", "id":1, "method":"tools/call",
  "params":{"name":"create_project","arguments":{"name":"todo-app"}} }

# 2. Set schema
{ "jsonrpc":"2.0", "id":2, "method":"tools/call",
  "params":{"name":"set_schema","arguments":{
    "project_id":"...",
    "schema":{"tables":{"tasks":{"columns":{"title":"string required","done":"bool default false"}}}}
  }} }

# 3. Insert data
{ "jsonrpc":"2.0", "id":3, "method":"tools/call",
  "params":{"name":"insert","arguments":{
    "project_id":"...","table":"tasks","data":{"title":"Buy milk"}
  }} }

Deploy Frontend

MoonDB is the backend. Your frontend can be deployed anywhere.

Vercel

# .env.local or Vercel dashboard
NEXT_PUBLIC_MOONDB_URL=https://moondb.ai/p/{project_id}
NEXT_PUBLIC_MOONDB_PUBLIC_KEY=pk_...
MOONDB_ADMIN_KEY=sk_...  # server-side only

Netlify

# netlify.toml or dashboard
[build.environment]
  VITE_MOONDB_URL = "https://moondb.ai/p/{project_id}"
  VITE_MOONDB_PUBLIC_KEY = "pk_..."

Cloudflare Pages

# wrangler.toml or Pages dashboard
[vars]
VITE_MOONDB_URL = "https://moondb.ai/p/{project_id}"
VITE_MOONDB_PUBLIC_KEY = "pk_..."

Render / Railway / Fly.io

# Set in platform dashboard:
MOONDB_URL=https://moondb.ai/p/{project_id}
MOONDB_PUBLIC_KEY=pk_...
MOONDB_ADMIN_KEY=sk_...

Static hosting (any)

MoonDB has CORS enabled, so any static site can call the API from the browser:

const API = 'https://moondb.ai/p/{project_id}';
const PK = 'pk_...';

const res = await fetch(API + '/api/tasks', {
  headers: { 'X-Public-Key': PK }
});
const { data } = await res.json();

External Auth

Use Clerk, Auth0, Supabase, or any OIDC provider instead of MoonDB's built-in auth.

PUT /p/{id}/v1/auth-config
X-Admin-Key: sk_...

{
  "provider": "external",
  "jwks_url": "https://your-provider.com/.well-known/jwks.json",
  "user_id_claim": "sub",
  "audience": "your-audience",
  "issuer": "https://your-issuer"
}

After configuration, users authenticate with your provider and pass their JWT to MoonDB. MoonDB validates the token against the JWKS endpoint.

How it works

  1. User logs in via Clerk/Auth0/etc.
  2. Your frontend gets a JWT from the provider
  3. Pass that JWT to MoonDB as Authorization: Bearer {token}
  4. MoonDB validates it via JWKS and extracts the user ID

Error Handling

Every error includes a suggestion field — actionable instructions for agents and developers.

{
  "error": {
    "code": "FK_NOT_FOUND",
    "message": "Foreign key 'user_id' references row 'abc' which does not exist",
    "suggestion": "Create the referenced user first, then retry."
  }
}

| Prefix | Category | Example |
|---|---|---|
| VALIDATION_* | Invalid input | Missing required field |
| AUTH_* | Authentication | Invalid credentials |
| ACCESS_* | Authorization | Insufficient permissions |
| FK_* | Foreign keys | Referenced row not found |
| SCHEMA_* | Schema errors | Invalid column type |
| RATE_* | Rate limits | Too many requests |
| STORAGE_* | File storage | File too large |

HTTP status codes

| Status | Meaning |
|---|---|
| 200 | Success |
| 201 | Created |
| 400 | Validation error |
| 401 | Authentication required / invalid |
| 403 | Forbidden (insufficient permissions) |
| 404 | Not found |
| 409 | Conflict (duplicate unique value) |
| 429 | Rate limit or plan quota exceeded |

Plans & Limits

| | New Moon | Half Moon | Full Moon | Eclipse |
|---|---|---|---|---|
| Price | Free | $9/mo | $29/mo | $79/mo |
| Projects | 1 | 10 | 100 | Unlimited |
| Reads/mo | 1M | 50M | 500M | 2B |
| Writes/mo | 25K | 500K | 10M | 50M |
| Storage | 100 MB | 5 GB | 25 GB | 100 GB |
| Rate limit | 60 RPM | 1,200 RPM | 3,000 RPM | 10,000 RPM |
| Emails/mo | 100 | 5,000 | 50,000 | 500,000 |

Upgrade anytime from Dashboard → Billing. Downgrades take effect at the end of the billing period.

What happens when limits are exceeded?