Beacon is Alpic’s automated audit for remote MCP servers and MCP Apps. Point it at a URL and it reports whether the server is ready to ship to ChatGPT and Claude.ai — including protocol conformance, tool and resource metadata, app rendering in each client, and how the server handles unexpected inputs. Beacon doesn’t just static-check your manifest: it launches your server inside a real ChatGPT and Claude.ai conversation in a headless browser, triggers a tool that exposes an MCP App, and asserts that the app actually renders end-to-end. Screenshots from each run are attached to the report so you can see what your users will see. Use Beacon before publishing a new server, after every significant change, or as a CI gate on your deployments.

What Beacon checks

Beacon evaluates your server and any MCP Apps it exposes against the specifications and platform requirements that ChatGPT and Claude.ai apply when reviewing an app:
  • MCP server specs — protocol conformance and tool/resource shape.
  • MCP Apps specs — requirements that apply once your server ships an MCP App (a view resource rendered by the client).
Every check carries a severity: errors must be fixed before a platform will accept the server, warnings should be fixed before submission, and info is surfaced for awareness. The report also outputs a per-platform readiness verdict (ChatGPT / Claude.ai) derived from the checks that apply to each platform.
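As a rough sketch of how severities roll up into a per-platform verdict, consider the following. The field names (`severity`, `platforms`) and the rollup rule are illustrative assumptions, not Beacon's actual report schema:

```python
# Illustrative sketch: only error-severity checks block a platform's
# readiness verdict. The check structure below is an assumption for
# illustration, not Beacon's real report format.
checks = [
    {"id": "tools/valid-schema",  "severity": "error",   "platforms": ["chatgpt"]},
    {"id": "resources/mime-type", "severity": "warning", "platforms": ["chatgpt", "claude"]},
    {"id": "app/render",          "severity": "info",    "platforms": ["claude"]},
]

def ready(platform: str, results: list[dict]) -> bool:
    # A platform is "ready" only when no error-severity check applies to it;
    # warnings and info never block.
    return not any(
        c["severity"] == "error" and platform in c["platforms"] for c in results
    )
```

With the sample data above, `ready("chatgpt", checks)` is `False` (a blocking error applies) while `ready("claude", checks)` is `True` despite the warning.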
Beacon currently only supports unauthenticated MCP servers. If your server responds with HTTP 401, Beacon reports that authentication is required and skips the rest of the checks. Support for authenticated audits is on the roadmap.
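If you want to check ahead of time whether Beacon will skip your server, a quick preflight probe mirroring the documented 401 behavior might look like this. The `requires_auth` helper is hypothetical, not part of any Alpic tooling:

```python
import urllib.request
import urllib.error

def requires_auth(url: str, opener=urllib.request.urlopen) -> bool:
    """Preflight mirroring Beacon's documented behavior: an HTTP 401 from
    the MCP endpoint means authentication is required, and the audit would
    report that and skip the remaining checks. `opener` is injectable so
    the function can be exercised without a live server."""
    try:
        opener(url, timeout=10)
    except urllib.error.HTTPError as exc:
        return exc.code == 401  # 401 → auth required; other errors are not
    return False
```

Any other failure mode (timeouts, 5xx responses) is a different problem and would surface through Beacon's normal checks rather than the auth skip.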

Running an audit

From the dashboard

Open your team’s Beacon tab, paste an HTTPS URL, and hit Run. The page streams progress as each collector finishes and opens a detailed report when the audit completes. Past audits for the team are listed so you can revisit or compare them.
Beacon audit report

From the CLI

Use `alpic audit` to run Beacon from your terminal or CI pipeline:

```shell
alpic audit --url https://my-server.example.com/mcp
```
When run inside a linked project, Beacon targets the project’s deployed MCP URL automatically. Pass --json to get the full report for further processing in CI.
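A minimal CI gate over the `--json` output might look like the sketch below. The report schema assumed here, a top-level `issues` list with `severity` and `message` fields, is an illustration only; inspect the actual JSON your CLI version emits before relying on specific field names:

```python
import json

def gate(report_json: str) -> int:
    """Return a nonzero exit code when the report contains error-severity
    issues. The "issues"/"severity"/"message" schema is an assumption for
    illustration, not Beacon's documented output format."""
    report = json.loads(report_json)
    errors = [i for i in report.get("issues", []) if i.get("severity") == "error"]
    for issue in errors:
        print(f"blocking: {issue.get('message', '')}")
    # Returning 1 from the process fails the CI job; 0 lets it pass.
    return 1 if errors else 0
```

In a pipeline you would pipe `alpic audit --json` into a small script that calls `sys.exit(gate(sys.stdin.read()))`, so error-severity findings fail the build while warnings and info pass through.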
The CLI skips the end-to-end app rendering category by default because it takes several minutes per platform. Run the full audit (including live app rendering) from the dashboard.

Reading the report

The report groups results by severity and surfaces:
  • A readiness badge for ChatGPT and Claude.ai — green when no blocking errors remain and, for servers that ship an MCP App, when at least one app check has succeeded on that platform.
  • A list of issues, each with a short message, the affected tool or resource, and a one-line hint explaining how to fix it.
  • App screenshots captured from the real ChatGPT and Claude.ai browser sessions Beacon ran, so you can visually confirm what your MCP App looks like in each client.
  • A Fix with AI action that packages the failing checks into a prompt you can paste into Claude Code, Cursor, or any other coding agent. The prompt references the same specs listed above so the agent can reason against the source of truth.
If a check is skipped, it’s because its required artifact wasn’t available — for example, a tool-level check has no tools to run against, or an app-rendering check has no MCP App resources to render. Skips are informational and never block platform readiness.