Using Ayzeo

Connect AI Clients to Ayzeo (MCP Server)

Step-by-step guide to connecting Claude Desktop, Cursor, Gemini CLI, and other AI clients to your Ayzeo project via the Model Context Protocol (MCP).

Ayzeo Team · April 20, 2026 · 9 min read

Key Takeaways

  • Ayzeo exposes a remote MCP server at https://ayzeo.com/mcp/v1 on all Pro and Enterprise plans
  • Connect using any MCP-capable client: Claude Desktop, Claude Code, Cursor, Gemini CLI, Windsurf, or the OpenAI Responses API
  • Authentication uses a per-project API token; each token is scoped to a single project and can be revoked anytime
  • Your assistant can query AI visibility data, GSC opportunities by ACOS, GA AI-referrer traffic, and trigger content-page generation

What is the Ayzeo MCP Server?

The Model Context Protocol (MCP) is the open standard that Anthropic, OpenAI, and Google are converging on for letting AI assistants call external tools and data sources. Ayzeo's MCP server puts your full GEO data layer directly inside any MCP-capable AI client — so instead of switching to the Ayzeo dashboard to check opportunities, you ask your assistant in natural language and it fetches the answer.

Ask "based on our ACOS, what's the next best content page to create?" and your assistant will query the top-scoring GSC opportunities, cross-reference GA4 to see which landing pages already convert from AI referrals, check whether a dedicated content page already exists, and recommend one with the 9-dimension score breakdown as justification. If you agree, it can call Ayzeo's content generator directly — all without leaving the chat.

Prerequisites

  • An active Pro or Enterprise subscription. Free and Starter plans do not have MCP access.
  • An MCP-capable AI client installed (see supported clients).
  • Optional but recommended: connect Google Search Console and Google Analytics for full tool coverage. Tools that need these return a clear setup URL if they're missing, so nothing breaks if you skip this step.

Setting It Up

Step 1: Create a project API token

Open Project Settings → API Tokens, click Create Token, and give it a descriptive name (e.g. claude-desktop or cursor-work-laptop). Copy the token immediately — you won't be able to see it again. One token per client keeps auditing and revocation clean.

Working across several Ayzeo projects? Create one token per project and add each as its own named entry in the client config — ayzeo-brand-a, ayzeo-brand-b, ayzeo-agency-client-3. All major MCP clients support multiple server entries side by side, so your assistant can reach all projects at once. Organisation-scoped tokens are planned; this workaround covers the same use case today.
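As an illustrative sketch, a Cursor-style mcp.json with two project entries side by side could look like this (project names and token values are placeholders):

```json
{
  "mcpServers": {
    "ayzeo-brand-a": {
      "type": "http",
      "url": "https://ayzeo.com/mcp/v1",
      "headers": { "Authorization": "Bearer <brand-a-token>" }
    },
    "ayzeo-brand-b": {
      "type": "http",
      "url": "https://ayzeo.com/mcp/v1",
      "headers": { "Authorization": "Bearer <brand-b-token>" }
    }
  }
}
```

Both entries point at the same endpoint; the token alone determines which project each server answers for.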

Step 2: Copy the MCP endpoint config

Scroll to the Connect to AI (MCP) card on the same settings page. You'll find the endpoint URL (https://ayzeo.com/mcp/v1) and a ready-to-paste configuration snippet for Claude Desktop. A collapsible section lists all 14 available tools.

Step 3: Add the server to your AI client

Paste the URL and token into your client of choice. See the client configuration section below for exact instructions for Claude Desktop, Cursor, and Gemini CLI. Restart the client after editing its config file.

Step 4: Ask your first question

Start with something like "Using the Ayzeo MCP server, list my top 5 ACOS opportunities". The assistant should invoke get_gsc_opportunities and return a ranked list with per-dimension score breakdowns. If it works, you're done.
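Under the hood, the assistant issues a standard MCP tools/call JSON-RPC request over the HTTP connection — roughly the following sketch (the limit argument name is an illustrative assumption; your client shows the exact arguments inline):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_gsc_opportunities",
    "arguments": { "limit": 5 }
  }
}
```

Most clients let you expand the call to inspect these arguments and the raw result payload.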

Supported AI Clients

Any client that supports remote MCP servers over Streamable HTTP with a custom Authorization header works. Below are the exact, verified configurations for each major client. All snippets use the same token you created in step 1.

Claude Code CLI (easiest)

One command adds the server. Use --scope user to persist it across projects, or leave it off for project-local.

claude mcp add --transport http --scope user ayzeo \
  https://ayzeo.com/mcp/v1 \
  --header "Authorization: Bearer <your-token>"

Verify with claude mcp list. Remove with claude mcp remove ayzeo.

Cursor

Open Settings → Tools & MCP → New MCP Server, which opens ~/.cursor/mcp.json (user-level) or .cursor/mcp.json (project-level). Paste:

{
  "mcpServers": {
    "ayzeo": {
      "type": "http",
      "url": "https://ayzeo.com/mcp/v1",
      "headers": { "Authorization": "Bearer <your-token>" }
    }
  }
}

The panel shows a green dot when the server connects and a red dot on failure. Check Output → MCP Logs if you hit auth errors. Cursor accepts the config without "type": "http" too, but including it matches the Claude Code canonical shape and makes the intent explicit.

Windsurf

Edit ~/.codeium/windsurf/mcp_config.json with the same type + url + headers shape as Cursor (above). Windsurf auto-reloads MCP servers on file save; no restart needed.

Claude Desktop

Claude Desktop's Custom Connector UI (Settings → Connectors) only supports OAuth client id/secret auth, not bearer-token headers. Since Ayzeo uses bearer tokens, the supported path is the mcp-remote stdio bridge. Edit your claude_desktop_config.json:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "ayzeo": {
      "command": "npx",
      "args": [
        "-y", "mcp-remote",
        "https://ayzeo.com/mcp/v1",
        "--header", "Authorization: Bearer <your-token>"
      ]
    }
  }
}

Restart Claude Desktop fully (Cmd/Ctrl+Q, not just close the window). The hammer icon in the compose bar lists all 14 Ayzeo tools once it connects.

Gemini CLI

Install with npm install -g @google/gemini-cli. Add the server via CLI:

gemini mcp add --transport http ayzeo \
  https://ayzeo.com/mcp/v1 \
  --header "Authorization: Bearer <your-token>"

Or edit ~/.gemini/settings.json directly. Note that Gemini CLI uses httpUrl (not url) for Streamable HTTP servers:

{
  "mcpServers": {
    "ayzeo": {
      "httpUrl": "https://ayzeo.com/mcp/v1",
      "headers": { "Authorization": "Bearer <your-token>" },
      "timeout": 30000,
      "trust": false
    }
  }
}

OpenAI Responses API / Agents SDK

For programmatic use from an OpenAI-powered agent, pass Ayzeo as an MCP tool in the /v1/responses call. OpenAI’s API accepts a bearer token directly in the authorization field (it is not stored server-side):

{
  "model": "gpt-5.1",
  "tools": [{
    "type": "mcp",
    "server_label": "ayzeo",
    "server_url": "https://ayzeo.com/mcp/v1",
    "authorization": "Bearer <your-token>",
    "require_approval": "never"
  }],
  "input": "Based on our ACOS, what is the next best content page to create?"
}

The OpenAI Agents SDK (Python / TypeScript) wraps this pattern with hosted_mcp_tool. The same token and endpoint work there.

ChatGPT.com (end-user app) currently only ships ready-made Connectors for specific partners and restricts custom remote MCP to a “Developer mode” beta. For general third-party tool servers like Ayzeo, the reliable path today is the Responses API snippet above. Gemini (gemini.google.com) similarly has no public MCP client yet — the Gemini CLI is the Google integration path.

The 14 Tools

Your assistant discovers these automatically via MCP's tools/list handshake — no manual wiring needed. Grouped by theme:
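The handshake is plain JSON-RPC over the Streamable HTTP endpoint: the client POSTs `{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}` and receives the tool catalogue back. An abridged sketch of the response shape (descriptions and schemas elided):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_project_info",
        "description": "...",
        "inputSchema": { "type": "object" }
      }
    ]
  }
}
```

Each entry's inputSchema tells the assistant which arguments the tool accepts, which is why no manual wiring is needed.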

Project & Analysis

  • get_project_info — name, domain, plan tier, dashboard URL
  • get_analysis_results — latest advanced website analysis (SEO health, schema, AI readiness)
  • run_website_analysis — trigger a fresh site audit
  • get_project_stats — aggregated citation and mention counts by model

AI Visibility

  • list_prompts — saved prompts with latest runs per model (answer, sentiment, mentioned brands, cited URL)
  • run_prompt — re-run a saved prompt (enables before/after comparison)
  • run_ai_visibility_check — ad-hoc AI search for any query
  • get_citation_history — recent per-model citation results
  • get_brand_mentions — aggregated brand mentions with sentiment

GSC & GA Data

  • get_gsc_opportunities — top-N high-ACOS queries with full 9-dimension breakdown (headline tool)
  • get_gsc_queries — paginated raw GSC data for custom filtering
  • get_ga_ai_traffic — GA4 traffic by AI referrer source with top landing pages

Content

  • get_content_pages — list already-generated pages with edit URLs
  • create_content_page_from_query — generate a new AI-optimised page for a query (counts against 5/day quota)

Example Prompts to Try

"Using Ayzeo, show me my top 10 ACOS opportunities and explain why the number-one query scored so high."

"Which of my prompts have zero brand mentions in ChatGPT? What do competitors rank for those prompts?"

"Cross-reference my top ACOS opportunities with GA AI-referrer traffic. Pick the single best page to create and generate it."

"Compare sentiment across all tracked brands for my project. Which competitors have the strongest positive sentiment, and where are the sentiment gaps for Ayzeo?"

Troubleshooting

"Pro subscription required" error

The MCP server gates requests behind an active Pro or Enterprise subscription. Trial and Free/Starter accounts get this error. Upgrade from the pricing page to enable it.

gsc_not_connected / ga_not_connected

The tool you invoked requires a connected integration. The error payload includes a setupUrl — open it to finish the OAuth flow, then retry. After connecting, allow a few hours for the first sync to populate data.
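The error payload looks roughly like this sketch (setupUrl is the documented field; the other field names and the placeholder URL are illustrative assumptions):

```json
{
  "error": {
    "code": "gsc_not_connected",
    "message": "Google Search Console is not connected for this project.",
    "setupUrl": "<integration-setup-url>"
  }
}
```

Assistants generally surface the setupUrl directly in chat, so you can click through without opening the dashboard.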

gsc_sync_pending / ga_sync_pending

The integration is connected but the initial data sync hasn't populated yet. GSC imports up to 90 days of history on first sync, which can take anywhere from a few minutes to a few hours. Wait and retry.

"Rate limit exceeded"

The shared API rate limit is 100 requests per 5 minutes per token. An aggressive agent loop can burn through this. Wait 5 minutes for the window to reset, or create an additional token and split traffic.

Assistant can't see the tools

Restart the AI client after editing its config file. Verify the endpoint URL is exactly https://ayzeo.com/mcp/v1 (no trailing slash). Check that the bearer header is formatted Authorization: Bearer <token> with a space between Bearer and the token. If it still fails, probe the endpoint with the MCP Inspector (npx @modelcontextprotocol/inspector) to isolate whether it's a server or client issue.

Claude Desktop: ReferenceError: File is not defined / Server transport closed unexpectedly

mcp-remote pulls in undici ≥ 7, which requires Node ≥ 20.18.1. If your OS launch PATH puts an older Node first (e.g. a stale nvm default), Claude Desktop's npx process loads undici under that old Node and crashes before the MCP handshake completes. Invoking a newer npx by absolute path isn't enough on its own: the #!/usr/bin/env node shebang inside npx re-resolves node via the same stale PATH. The fix has three parts:

Step 1. Find the absolute path to a Node ≥ 20 install you already have. Pick whichever matches your setup:

  • nvm: nvm use 20 (or 22/24), then which node. Typical output: ~/.nvm/versions/node/v<VERSION>/bin/node.
  • Homebrew (macOS): brew --prefix node → typically /opt/homebrew/opt/node; binary at <prefix>/bin/node.
  • fnm / volta / asdf: fnm which 20, volta which node, or asdf which node.
  • System install: which node (after confirming node --version prints 20+).
  • Windows: where.exe node in PowerShell (or where node in Command Prompt); typically C:\Program Files\nodejs\node.exe.

Verify: run <node-path> --version and confirm it prints v20.18.1 or later.

Step 2. Edit claude_desktop_config.json — pin the absolute path to npx in command, and pin PATH in env so the shebang re-resolves to the same Node. Replace <NODE_BIN_DIR> with the directory containing your Node binary from Step 1 (e.g. ~/.nvm/versions/node/v22.11.0/bin).

{
  "mcpServers": {
    "ayzeo": {
      "command": "<NODE_BIN_DIR>/npx",
      "args": [
        "-y", "mcp-remote",
        "https://ayzeo.com/mcp/v1",
        "--header", "Authorization: Bearer <your-token>"
      ],
      "env": {
        "PATH": "<NODE_BIN_DIR>:/usr/local/bin:/opt/homebrew/bin:/usr/bin:/bin"
      }
    }
  }
}

On Windows, use the full path to npx.cmd for command and set env.PATH to the directory containing node.exe plus your usual %PATH% entries (semicolon-separated).
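Put together, a Windows variant of the config could look like the following sketch (placeholders as in Step 2 above; extend env.PATH with your usual %PATH% entries, and note that backslashes must be doubled in JSON):

```json
{
  "mcpServers": {
    "ayzeo": {
      "command": "<NODE_BIN_DIR>\\npx.cmd",
      "args": [
        "-y", "mcp-remote",
        "https://ayzeo.com/mcp/v1",
        "--header", "Authorization: Bearer <your-token>"
      ],
      "env": {
        "PATH": "<NODE_BIN_DIR>;C:\\Windows\\System32"
      }
    }
  }
}
```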

Step 3. Clear the npx cache if you ever launched mcp-remote under the broken Node — the cached install still has incompatible bits that the new Node will re-use if it finds them:

  • macOS / Linux: rm -rf ~/.npm/_npx
  • Windows: Remove-Item -Recurse -Force $env:APPDATA\npm-cache\_npx

Then fully quit Claude Desktop (Cmd+Q on macOS, right-click tray → Quit on Windows — not just close the window) and relaunch. If it still fails, reopen the log file referenced by Claude Desktop → Settings → Developer and confirm the new command path is actually being used.

Security & Limits

  • Each API token is scoped to a single Ayzeo project. A compromised token exposes only that project's data — nothing account-wide.
  • All traffic runs over HTTPS. The server is stateless — no session caching of credentials.
  • Revoke tokens from Project Settings → API Tokens at any time. Revocation is immediate.
  • Rate limit: 100 requests per 5 minutes per token, shared with the WordPress API.
  • create_content_page_from_query counts against the 5-per-day content-page quota per project.

Frequently Asked Questions

Q: Can multiple AI clients share one token?

A: Yes, but creating one token per client is better for auditing and revocation. If a specific client leaks the token, you can revoke just that token without breaking the others.

Q: Does the MCP server work across multiple Ayzeo projects at once?

A: Each token (and therefore each MCP connection) is scoped to a single project. To work across multiple projects in the same AI client, register multiple MCP servers — one per project — with distinct names like ayzeo-brand-a and ayzeo-brand-b.

Q: Can I see which MCP tools the assistant called?

A: Most AI clients (Claude Desktop, Cursor, Claude Code) surface tool calls inline in the conversation. You can inspect each call, its arguments, and the returned payload. On the Ayzeo side, every call updates the token's lastUsedAt timestamp visible on the API Tokens page.

Q: Will the assistant hallucinate data?

A: All tools return real Ayzeo data — the MCP server never generates numbers. The assistant's narration ("this query scored 87 because...") is derived from the actual dimension scores returned by get_gsc_opportunities. If the assistant ever cites a number you can't find in the data, ask it to re-run the tool call with the filter you care about.

Q: Is there usage metering?

A: MCP calls share the WordPress API quota (100 requests per 5 minutes per token). Content-page generation counts against the project's 5-per-day content quota. Heavy AI-visibility prompt runs consume credits from your project's prompt allowance, same as running them from the dashboard.