Build for Production
Build your app for production using the sunpeak CLI. Running `sunpeak build` creates a `dist/` directory with built resources and compiled tools:
Each resource's `.html` file is a complete, self-contained app bundle — no external scripts or stylesheets. Tool files are compiled to standard ESM JavaScript with `node_modules` resolved at runtime.
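Based on the paths the production server reads (resource names are placeholders), the output layout looks roughly like:

```
dist/
  tools/              # compiled ESM tool modules
  {name}/{name}.html  # one self-contained bundle per resource
```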
Start the Production Server
`sunpeak start` loads the built tools and resources from `dist/`, registers them with the MCP protocol, and starts a Streamable HTTP server on port 8000.
- Registers each tool from `dist/tools/` with its real handler and Zod schema (input validation)
- Serves pre-built resource HTML from `dist/{name}/{name}.html`
- Calls your `auth()` function from `src/server.ts` on every request (if present)
- Uses MCP Streamable HTTP transport on a single `/mcp` endpoint
- Provides a `/health` endpoint for load balancer probes and uptime monitoring
- Handles graceful shutdown on SIGTERM/SIGINT
Testing Locally
Use `sunpeak build` and `sunpeak start` to test production behavior locally. To test with a real host, expose the local server through a tunnel (e.g. `ngrok http 8000`) and connect from ChatGPT.
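Using only the commands named above, a local test run might look like:

```shell
sunpeak build   # compile resources and tools into dist/
sunpeak start   # serve dist/ on port 8000

# In another terminal: expose the server to a host like ChatGPT
ngrok http 8000
```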
Custom Server Setup
For full control over the HTTP server (custom middleware, CORS, health checks, etc.), use `createMcpHandler` to mount the MCP protocol handler on your own server.
Unlike `sunpeak start`, which auto-discovers from `dist/`, these handlers accept pre-loaded config objects — you load tools, resources, and auth yourself. See Production Server API for config types.
The handler manages MCP sessions, `req.auth`, and CORS preflights for the `/mcp` path. For unmatched routes it does nothing, so your server handles them.
Serverless / Edge Deployment
For serverless and edge runtimes (Cloudflare Workers, Deno, Bun, Vercel Edge), use `createHandler` with `stateless: true`:
Unlike `createMcpHandler`, the web handler does not do path matching — it handles every request it receives. Mount it behind your own router.
Stateless vs Stateful Mode
By default, both `createHandler` and `createMcpHandler` use stateful mode: they track MCP sessions in an in-memory map and route requests by the `mcp-session-id` header. This works for single-instance servers and multi-instance deployments with sticky sessions (see Horizontal Scaling).
Stateless mode (`stateless: true`) creates a fresh MCP server and transport for every request. No session map, no session ID validation. Use this when:
- Serverless (Lambda, Workers, Vercel Edge) where each request may hit a different instance with no shared memory
- Horizontally scaled deployments where configuring sticky sessions isn’t practical
In stateless mode, only POST is supported (no SSE streaming via GET). JSON responses are used by default, which is the right choice for serverless anyway.
Domain Resolution in Stateless Mode
sunpeak auto-computes resource domains from `serverUrl` based on which host is connecting (ChatGPT and Claude use different sandbox domain schemes). This requires knowing the host’s identity.
ChatGPT sends `user-agent: openai-mcp/1.0.0` and Claude sends `user-agent: Claude-User` on every MCP HTTP request. sunpeak detects these automatically, so domain resolution works in stateless mode without any extra configuration.
`detectClientFromHeaders` is exported from `sunpeak/mcp` so you can build your own detection logic in a custom handler.
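Custom detection logic along these lines is straightforward. The function name and return values below are illustrative, not sunpeak's API — in real code, use the exported `detectClientFromHeaders`:

```typescript
// Minimal user-agent-based client detection, per the headers described above.
type McpClient = "chatgpt" | "claude" | "unknown";

function detectClient(headers: Record<string, string | undefined>): McpClient {
  const ua = headers["user-agent"] ?? "";
  if (ua.startsWith("openai-mcp/")) return "chatgpt"; // e.g. openai-mcp/1.0.0
  if (ua.startsWith("Claude-User")) return "claude";
  return "unknown";
}

console.log(detectClient({ "user-agent": "openai-mcp/1.0.0" })); // chatgpt
```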
Both `createMcpHandler` and `createHandler` use JSON responses by default (`enableJsonResponse: true`), which works reliably across all environments including serverless. If you need SSE streaming (e.g., for progress updates on long-running tool calls), set `enableJsonResponse: false` in your config. See Production Server API for details.
For fully custom MCP server setups, `sunpeak/mcp` also re-exports `registerAppTool`, `registerAppResource`, and `RESOURCE_MIME_TYPE` from the MCP Apps SDK. See Server Helpers for the full list.

Production Operations
Health Checks
The built-in server exposes a `GET /health` endpoint that returns:
Structured Logging
Pass `--json-logs` to output structured JSON lines instead of human-readable logs. Each line includes `ts`, `level`, and `msg` fields, plus contextual data:
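For illustration only (the field values below are made up; `ts`, `level`, and `msg` are the documented fields, `port` stands in for contextual data), a line might look like:

```json
{"ts":"2025-01-01T12:00:00.000Z","level":"info","msg":"server started","port":8000}
```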
Binding Interface
By default, the server binds to `0.0.0.0` (all interfaces). To restrict it to localhost only:
Process Management
`sunpeak start` handles SIGTERM and SIGINT for graceful shutdown (5-second drain timeout). It works with any process manager:
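The shutdown pattern can be sketched in plain Node.js. This mirrors the described behavior (stop accepting connections, drain, force-exit after 5 seconds); it is not sunpeak's actual implementation:

```typescript
// Graceful shutdown on SIGTERM/SIGINT with a 5-second drain timeout.
import { createServer } from "node:http";

const server = createServer((_req, res) => res.end("ok"));
server.listen(8000);

function shutdown() {
  server.close(() => process.exit(0));             // exit once in-flight requests drain
  setTimeout(() => process.exit(1), 5000).unref(); // hard stop if draining stalls
}

process.on("SIGTERM", shutdown);
process.on("SIGINT", shutdown);
```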
Reverse Proxy
For TLS, rate limiting, and request size limits, put `sunpeak start` behind a reverse proxy. The server’s CORS headers allow all origins by default, which is correct for MCP servers called by AI hosts from various origins.
Horizontal Scaling
MCP uses session-based communication by default. Each `initialize` handshake creates a session, and all subsequent requests include an `mcp-session-id` header that must route back to the same server instance.
For multi-instance deployments behind a load balancer, you have two options:
Sticky sessions (recommended for long-lived servers) — Configure your load balancer to route requests with the same `mcp-session-id` header to the same backend instance:
- AWS ALB: Enable target group stickiness
- nginx: Use `ip_hash` or `sticky cookie`
- Cloudflare: Enable session affinity
- Kubernetes: Set session affinity annotations on your Ingress
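For nginx specifically, besides `ip_hash` you can hash on the session header itself, so requests carrying the same `mcp-session-id` always land on the same backend. A sketch (upstream addresses are placeholders for your instances):

```nginx
upstream mcp_backends {
    # Consistent-hash on the mcp-session-id header (nginx exposes it
    # as $http_mcp_session_id); same session id -> same backend.
    hash $http_mcp_session_id consistent;
    server 10.0.0.1:8000;
    server 10.0.0.2:8000;
}

server {
    listen 443 ssl;
    location /mcp {
        proxy_pass http://mcp_backends;
    }
}
```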
Stateless mode — pass `stateless: true` so every request is self-contained and no session routing is needed; see Stateless vs Stateful Mode above.
What Not to Worry About
These are handled at the infrastructure layer, not by `sunpeak start`:
- TLS/HTTPS — Use a reverse proxy (nginx, Caddy, Cloudflare)
- Rate limiting — Configure at the proxy or CDN level
- Request body limits — Handled by the proxy or platform
If you need more control, use `createMcpHandler` to mount the MCP handler on your own Express, Fastify, or `http` server.
Publish to ChatGPT
Publish your MCP App to ChatGPT via the OpenAI Platform.

Prerequisites
- A ChatGPT App created on the OpenAI Platform
- Your MCP server deployed and accessible via HTTPS
- Built resources (run `sunpeak build`)
Production Checklist
- Set appropriate CSP (Content Security Policy) for your resources
- Configure CORS if serving from a different domain
- Set up HTTPS for secure communication