
Build for Production

Build your app for production using the sunpeak CLI:
sunpeak build
This generates a dist/ directory with built resources and compiled tools:
dist/
├── albums/
│   ├── albums.html    # Self-contained HTML (JS + CSS inlined)
│   └── albums.json    # Resource metadata (_meta, name, uri)
├── tools/
│   ├── show-albums.js # Compiled tool handler + schema
│   └── ...
├── server.js          # Compiled server entry (if src/server.ts exists)
└── ...
Each .html file is a complete, self-contained app bundle — no external scripts or stylesheets. Tool files are compiled to standard ESM JavaScript, with dependencies resolved from node_modules at runtime.

Start the Production Server

sunpeak start
This loads your compiled tool handlers, Zod schemas, and optional auth from dist/, registers them with the MCP protocol, and starts a Streamable HTTP server on port 8000.
sunpeak start --port 3000               # Custom port
sunpeak start --host 127.0.0.1          # Bind to localhost only
sunpeak start --json-logs               # Structured JSON logging
sunpeak start --sse                     # Use SSE streaming instead of JSON responses
sunpeak start --stateless               # Stateless mode (no session tracking)
PORT=3000 HOST=127.0.0.1 sunpeak start  # Or via environment variables
The production server:
  • Registers each tool from dist/tools/ with its real handler and Zod schema (input validation)
  • Serves pre-built resource HTML from dist/{name}/{name}.html
  • Calls your auth() function from src/server.ts on every request (if present)
  • Uses MCP Streamable HTTP transport on a single /mcp endpoint
  • Provides a /health endpoint for load balancer probes and uptime monitoring
  • Handles graceful shutdown on SIGTERM/SIGINT

Testing Locally

Use sunpeak build and sunpeak start to test production behavior locally:
sunpeak build && sunpeak start
Then expose the MCP server with a tunnel (e.g., ngrok http 8000) and connect from ChatGPT.

Custom Server Setup

For full control over the HTTP server (custom middleware, CORS, health checks, etc.), use createMcpHandler to mount the MCP protocol handler on your own server. Unlike sunpeak start, which auto-discovers from dist/, these handlers accept pre-loaded config objects — you load tools, resources, and auth yourself. See Production Server API for config types.
import express from 'express';
import cors from 'cors';
import { createMcpHandler } from 'sunpeak/mcp';

const app = express();

// Your middleware
app.use(cors({ origin: 'https://chatgpt.com' }));
app.get('/health', (req, res) => res.json({ ok: true }));

// tools: ProductionTool[], resources: ProductionResource[], auth: AuthFunction
const mcpHandler = createMcpHandler({ tools, resources, auth });
app.use((req, res, next) => {
  mcpHandler(req, res)
    .then(() => {
      if (!res.headersSent) next();
    })
    .catch(next); // surface handler errors to Express error middleware
});

app.listen(3000);
The handler manages Streamable HTTP sessions, auth propagation via req.auth, and CORS preflights for the /mcp path. For unmatched routes it does nothing, so your server handles them.

Serverless / Edge Deployment

For serverless and edge runtimes (Cloudflare Workers, Deno, Bun, Vercel Edge), use createHandler with stateless: true:
// Cloudflare Worker
import { createHandler } from 'sunpeak/mcp';

const handler = createHandler({
  tools,
  resources,
  stateless: true,
});
export default { fetch: handler };
// Hono (any runtime)
import { Hono } from 'hono';
import { createHandler } from 'sunpeak/mcp';

const app = new Hono();
const handler = createHandler({
  tools,
  resources,
  stateless: true,
});
app.all('/mcp', (c) => handler(c.req.raw));
export default app;
Unlike createMcpHandler, the web handler does not do path matching — it handles every request it receives. Mount it behind your own router.

Stateless vs Stateful Mode

By default, both createHandler and createMcpHandler use stateful mode: they track MCP sessions in an in-memory map and route requests by the mcp-session-id header. This works for single-instance servers and for multi-instance deployments with sticky sessions (see Horizontal Scaling).

Stateless mode (stateless: true) creates a fresh MCP server and transport for every request: no session map, no session ID validation. Use it when:
  • you run on serverless platforms (Lambda, Workers, Vercel Edge), where each request may hit a different instance with no shared memory
  • you scale horizontally and configuring sticky sessions isn’t practical
In stateless mode, only POST is supported (no SSE streaming via GET). JSON responses are used by default, which is the right choice for serverless anyway.
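The stateful/stateless distinction can be sketched as follows. This is an illustrative model of the dispatch logic described above, not sunpeak's actual implementation — the session store and transport values here are stand-ins.

```javascript
// Illustrative sketch of stateful vs. stateless dispatch. Not sunpeak's
// real code; `sessions` and the transport strings are stand-ins.
const sessions = new Map(); // sessionId -> transport (stateful mode only)

function dispatch(req, { stateless }) {
  if (stateless) {
    // Stateless: a fresh server + transport per request; nothing retained.
    return { transport: 'fresh', sessionTracked: false };
  }
  // Stateful: route by the mcp-session-id header to a stored transport.
  const id = req.headers['mcp-session-id'];
  if (id && sessions.has(id)) {
    return { transport: sessions.get(id), sessionTracked: true, sessionId: id };
  }
  // No session yet: an initialize request would create one.
  const newId = `s-${sessions.size + 1}`;
  sessions.set(newId, `transport-${newId}`);
  return { transport: sessions.get(newId), sessionTracked: true, sessionId: newId };
}
```

The in-memory Map is exactly why stateful mode needs sticky sessions once you scale out: a second instance has its own empty map and rejects the session ID.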

Domain Resolution in Stateless Mode

sunpeak auto-computes resource domains from serverUrl based on which host is connecting (ChatGPT and Claude use different sandbox domain schemes). This requires knowing the host’s identity. ChatGPT sends user-agent: openai-mcp/1.0.0 and Claude sends user-agent: Claude-User on every MCP HTTP request. sunpeak detects these automatically, so domain resolution works in stateless mode without any extra configuration:
const handler = createHandler({
  tools,
  resources,
  stateless: true,
  serverUrl: 'https://my-app.example.com/mcp',
});
For custom MCP clients that don’t send identifiable headers, detectClientFromHeaders is exported from sunpeak/mcp so you can build your own detection logic in a custom handler.

Both createMcpHandler and createHandler use JSON responses by default (enableJsonResponse: true), which works reliably across all environments including serverless. If you need SSE streaming (e.g., for progress updates on long-running tool calls), set enableJsonResponse: false in your config. See Production Server API for details.
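For intuition, the user-agent-based detection described above could be approximated like this. This is a hand-rolled sketch, not sunpeak's detectClientFromHeaders — the library's actual matching rules may differ.

```javascript
// Hand-rolled approximation of user-agent-based client detection.
// Not sunpeak's detectClientFromHeaders; its matching rules may differ.
function detectClient(headers) {
  const ua = headers['user-agent'] ?? '';
  if (ua.startsWith('openai-mcp/')) return 'chatgpt'; // e.g. openai-mcp/1.0.0
  if (ua.startsWith('Claude-User')) return 'claude';
  return 'unknown'; // custom clients: supply your own detection logic
}
```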
For fully custom MCP server setups, sunpeak/mcp also re-exports registerAppTool, registerAppResource, and RESOURCE_MIME_TYPE from the MCP Apps SDK. See Server Helpers for the full list.

Production Operations

Health Checks

The built-in server exposes a GET /health endpoint that returns:
{ "status": "ok", "uptime": 3600 }
Use this for load balancer probes, Kubernetes liveness/readiness checks, and uptime monitoring. The endpoint is always unauthenticated.
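If you build a custom server with createMcpHandler, you provide your own /health route; a minimal payload matching the shape shown above might look like this (the startedAt bookkeeping is our own, not sunpeak's internal uptime tracking):

```javascript
// Sketch of the /health payload shape documented above. `startedAt` is
// our own bookkeeping; sunpeak's internal uptime tracking may differ.
const startedAt = Date.now();

function healthPayload(now = Date.now()) {
  return {
    status: 'ok',
    uptime: Math.floor((now - startedAt) / 1000), // seconds since start
  };
}
```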

Structured Logging

Pass --json-logs to output structured JSON lines instead of human-readable logs:
sunpeak start --json-logs
Each line is a JSON object with ts, level, and msg fields, plus contextual data:
{"ts":"2025-01-15T10:30:00.000Z","level":"info","msg":"Session started: a1b2c3d4...","sessionId":"a1b2c3d4-...","active":1}
{"ts":"2025-01-15T10:30:01.000Z","level":"info","msg":"CallTool: show-albums{artist}"}
This format is compatible with log aggregation tools like Datadog, CloudWatch, Loki, and Elasticsearch. Errors go to stderr, everything else to stdout.
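Because each line is standalone JSON, ad-hoc filtering is straightforward before the logs ever reach an aggregator. A small sketch that pulls error-level entries out of captured output (field names follow the ts/level/msg format documented above):

```javascript
// Filter error-level entries from JSON-lines log output.
// The ts/level/msg field names match the documented log format.
function errorLines(raw) {
  return raw
    .split('\n')
    .filter(Boolean)            // drop trailing empty line
    .map((line) => JSON.parse(line))
    .filter((entry) => entry.level === 'error');
}
```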

Binding Interface

By default, the server binds to 0.0.0.0 (all interfaces). To restrict it to localhost only:
sunpeak start --host 127.0.0.1     # CLI flag
HOST=127.0.0.1 sunpeak start       # Environment variable

Process Management

sunpeak start handles SIGTERM and SIGINT for graceful shutdown (5-second drain timeout). It works with any process manager:
# Docker
CMD ["sunpeak", "start", "--json-logs"]

# PM2
pm2 start "sunpeak start --json-logs" --name sunpeak-app

# systemd
ExecStart=/usr/bin/sunpeak start --json-logs
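Expanding the Docker one-liner above into a full Dockerfile might look like the following sketch. The node:22-slim base image and file layout are assumptions — adjust to your project.

```dockerfile
# Minimal sketch — base image and file layout are assumptions.
FROM node:22-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npx sunpeak build
EXPOSE 8000
# SIGTERM from `docker stop` triggers sunpeak's graceful shutdown
CMD ["npx", "sunpeak", "start", "--json-logs"]
```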

Reverse Proxy

For TLS, rate limiting, and request size limits, put sunpeak start behind a reverse proxy. The server’s CORS headers allow all origins by default, which is correct for MCP servers called by AI hosts from various origins.
# nginx
location /mcp {
    proxy_pass http://127.0.0.1:8000;
    proxy_http_version 1.1;
    proxy_set_header Connection '';
    proxy_buffering off;       # Required if enableJsonResponse is false (SSE mode)
}

location /health {
    proxy_pass http://127.0.0.1:8000;
}

Horizontal Scaling

MCP uses session-based communication by default. Each initialize handshake creates a session, and all subsequent requests include an mcp-session-id header that must route back to the same server instance. For multi-instance deployments behind a load balancer, you have two options.

Sticky sessions (recommended for long-lived servers) — Configure your load balancer to route requests with the same mcp-session-id header to the same backend instance:
  • AWS ALB: Enable target group stickiness
  • nginx: Use ip_hash or sticky cookie
  • Cloudflare: Enable session affinity
  • Kubernetes: Set session affinity annotations on your Ingress
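For nginx specifically, hashing on the session header (rather than ip_hash) pins each MCP session to one backend. A sketch, with placeholder backend addresses:

```nginx
# Sketch: route by mcp-session-id so each session sticks to one backend.
# Backend addresses are placeholders.
upstream sunpeak_backends {
    hash $http_mcp_session_id consistent;
    server 10.0.0.1:8000;
    server 10.0.0.2:8000;
}

server {
    location /mcp {
        proxy_pass http://sunpeak_backends;
        proxy_http_version 1.1;
    }
}
```

Note that requests without the header (the initial initialize) all hash on an empty key and land on one backend; once a session ID is issued, subsequent requests stick to their hashed instance.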
Stateless mode (recommended for serverless) — Disable session tracking entirely. Each request creates a fresh server instance, so any backend can handle any request:
sunpeak start --stateless
Or with a custom server:
const handler = createMcpHandler({
  tools,
  resources,
  stateless: true,
});
See Serverless / Edge Deployment for details on domain resolution in stateless mode.

What Not to Worry About

These are handled at the infrastructure layer, not by sunpeak start:
  • TLS/HTTPS — Use a reverse proxy (nginx, Caddy, Cloudflare)
  • Rate limiting — Configure at the proxy or CDN level
  • Request body limits — Handled by the proxy or platform
For full control over the HTTP server (custom middleware, metrics, etc.), use createMcpHandler to mount the MCP handler on your own Express, Fastify, or http server.

Publish to ChatGPT

Publish your MCP App to ChatGPT via the OpenAI Platform.

Prerequisites

  • A ChatGPT App created on the OpenAI Platform
  • Your MCP server deployed and accessible via HTTPS
  • Built resources (run sunpeak build)

Production Checklist

  • Set appropriate CSP (Content Security Policy) for your resources
  • Configure CORS if serving from a different domain
  • Set up HTTPS for secure communication