Connect your documentation to AI tools with a hosted MCP server.
Doccupine automatically generates a Model Context Protocol (MCP) server from your documentation, making your content accessible to AI applications like Claude, Cursor, VS Code, and other MCP-compatible tools. Your MCP server exposes semantic search capabilities, allowing AI tools to query your documentation directly and provide accurate, context-aware answers.
The Model Context Protocol (MCP) is an open protocol that creates standardized connections between AI applications and external services, like documentation. Doccupine generates an MCP server from your documentation, preparing your content for the broader AI ecosystem where any MCP client can connect to your documentation.
Your MCP server exposes search and retrieval tools for AI applications to query your documentation. Your users must connect your MCP server to their preferred AI tools to access your documentation.
When your MCP server is connected to an AI tool, the tool can search your documentation directly instead of making a generic web search in response to a user's prompt. Your MCP server provides access to all indexed content from your documentation site.
AI tools can search the web, but MCP provides distinct advantages for documentation.
Doccupine automatically generates an MCP server for your documentation and hosts it at your documentation URL with the /api/mcp path. For example, if your documentation is hosted at https://example.com, your MCP server is available at https://example.com/api/mcp.
The MCP server provides both a GET endpoint to discover available tools and a POST endpoint to execute tool calls.
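As a sketch, both endpoints can be exercised with plain `fetch` calls. The `/api/mcp` path and the `{ tool, params }` request shape come from this page; the `https://example.com` base URL and the `buildToolCall` helper are assumptions for illustration, not part of Doccupine:

```typescript
// Minimal sketch of calling the MCP endpoints. Base URL is a placeholder.
const MCP_URL = "https://example.com/api/mcp";

// Build the JSON body for a POST tool call.
function buildToolCall(tool: string, params: Record<string, unknown>): string {
  return JSON.stringify({ tool, params });
}

// Discover available tools and index status (network call, shown commented):
// const discovery = await fetch(MCP_URL).then((r) => r.json());

// Execute a tool call (network call, shown commented):
// const results = await fetch(MCP_URL, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: buildToolCall("search_docs", { query: "how to deploy", limit: 6 }),
// }).then((r) => r.json());
```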
GET /api/mcp returns information about available tools and the current index status.
Response:

```json
{
  "tools": [
    {
      "name": "search_docs",
      "description": "Search through the documentation content using semantic search...",
      "inputSchema": { ... }
    },
    ...
  ],
  "index": {
    "ready": true,
    "chunkCount": 150
  }
}
```

POST /api/mcp executes an MCP tool call.
Request Body:

```json
{
  "tool": "search_docs",
  "params": {
    "query": "how to deploy",
    "limit": 6
  }
}
```

Response:
```json
{
  "content": [
    {
      "path": "app/deployment/page.tsx",
      "uri": "docs://deployment",
      "score": "0.892",
      "text": "Deploy your Doccupine site as a Next.js application..."
    },
    ...
  ]
}
```

Your MCP server exposes three tools for interacting with your documentation:
Search through the documentation content using semantic search. Returns relevant chunks of documentation based on the query using vector embeddings and cosine similarity.
Parameters:

- query (required): The search query to find relevant documentation
- limit (optional): Maximum number of results to return (default: 6)

Example:

```json
{
  "tool": "search_docs",
  "params": {
    "query": "how to configure AI assistant",
    "limit": 5
  }
}
```

Get the full content of a specific documentation page by its path.
Parameters:

- path (required): The file path to the documentation page (e.g., app/getting-started/page.tsx)

Example:

```json
{
  "tool": "get_doc",
  "params": {
    "path": "app/deployment/page.tsx"
  }
}
```

List all available documentation pages, optionally filtered by directory.
Parameters:

- directory (optional): Directory to filter results by (e.g., components)

Example:

```json
{
  "tool": "list_docs",
  "params": {
    "directory": "configuration"
  }
}
```

Doccupine's MCP server uses semantic search powered by vector embeddings to provide accurate, context-aware search results.
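The scoring step behind search_docs can be sketched as follows. This is an illustrative implementation of cosine similarity over embedding vectors with a query/limit ranking step, not Doccupine's actual code:

```typescript
// Cosine similarity between two embedding vectors:
// dot(a, b) / (|a| * |b|). Scores near 1 mean the chunk is
// semantically close to the query.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank indexed chunks against a query embedding and keep the top `limit`
// results, mirroring search_docs' query/limit parameters.
function rankChunks(
  queryEmbedding: number[],
  chunks: { path: string; embedding: number[] }[],
  limit = 6,
): { path: string; score: number }[] {
  return chunks
    .map((c) => ({
      path: c.path,
      score: cosineSimilarity(queryEmbedding, c.embedding),
    }))
    .sort((x, y) => y.score - x.score)
    .slice(0, limit);
}
```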
The indexer scans your app/ directory for all page.tsx, page.ts, page.jsx, and page.js files and extracts content from const content = declarations in your page files. The index is built automatically when the first search is performed. It is stored in memory and persists for the lifetime of the server process. If you update your documentation, restart the server to rebuild the index.
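The extraction step can be sketched like this. The regex and the extractContent function are assumptions for illustration; a real implementation would likely use a proper parser:

```typescript
// Pull the string assigned in a `const content = \`...\`` declaration
// out of a page file's source text. Assumes the template-literal form
// shown in this page's example.
function extractContent(source: string): string | null {
  const match = source.match(/const content = `([\s\S]*?)`/);
  return match ? match[1] : null;
}

const page = "export const content = `\n# Getting Started\nWelcome to the documentation...\n`;";
console.log(extractContent(page));
```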
Your users must connect your MCP server to their preferred AI tools.
These are some of the ways you can help your users connect to your MCP server:
- Share your MCP server URL, https://your-domain.com/api/mcp, with your users.
- Create a guide for your users that includes your MCP server URL and the steps to connect it to Claude.
- Point users to the Model Context Protocol documentation for more details.
The MCP server requires an LLM provider to be configured for generating embeddings; it uses the same LLM configuration as the AI Assistant. Make sure you have set up your AI Assistant with a valid API key before using the MCP server.
See the AI Assistant documentation for configuration details.
Your MCP server searches content extracted from your page files. The server automatically discovers and indexes all page.tsx, page.ts, page.jsx, and page.js files in your app/ directory.
The server extracts content from const content = declarations in your page files. Make sure your documentation pages export a content constant with your markdown or MDX content.
Example:
```typescript
export const content = `
# Getting Started
Welcome to the documentation...
`;
```

The following directories are automatically excluded from indexing:

- node_modules
- .next
- .git
- api

If the index is not building, check:
- Your LLM provider API key is set in your .env file
- Your documentation pages export a content constant

If searches return no results:
- Your page files are in the app/ directory
- Your pages export a content constant
- The index is ready (check the index.ready status via GET /api/mcp)

The first search may be slower as it builds the index. Subsequent searches are fast because they use the in-memory index. If performance is consistently slow: