
RAG server providing documentation search and retrieval via vector embeddings, enabling context-aware AI responses using OpenAI and Qdrant.
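To make the retrieval mechanism concrete, here is a minimal sketch of how embedding-based search works in general: each documentation chunk is stored as a vector, and a query returns the chunks whose vectors are most similar. This is illustrative only and is not the server's actual code; the toy 3-d vectors stand in for real OpenAI embeddings.

```python
# Illustrative sketch of embedding-based retrieval (NOT the server's code):
# documentation chunks are stored as vectors, and a query is answered by
# returning the chunks whose vectors score highest on cosine similarity.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, index, top_k=2):
    """Rank stored (text, vector) pairs by similarity to the query vector."""
    scored = [(cosine_similarity(query_vec, vec), text) for text, vec in index]
    scored.sort(reverse=True)
    return [text for _, text in scored[:top_k]]

# Toy index: hand-made 3-d vectors standing in for real embeddings.
index = [
    ("install guide", [0.9, 0.1, 0.0]),
    ("api reference", [0.1, 0.9, 0.0]),
    ("changelog",     [0.0, 0.1, 0.9]),
]
print(search([0.8, 0.2, 0.0], index, top_k=1))  # → ['install guide']
```

In the real server, Qdrant performs this nearest-neighbor ranking at scale and OpenAI produces the embedding vectors.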
This server provides both read and write capabilities related to documentation. While API keys are used for authentication, the ability to add and remove documentation sources, combined with the reliance on external services like OpenAI and Qdrant, introduces moderate risk. It is safe to use for information retrieval, but caution should be exercised when adding or removing documentation sources, and API keys should be securely managed.
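On the point about securely managing API keys, one common pattern is to read them from the environment at startup and fail fast if any are missing, rather than hardcoding them in config files. The sketch below is a generic illustration, not part of the server; the variable names match the `env` block in the configuration.

```python
# Generic fail-fast pattern for required secrets (illustrative, not part of
# the server): read each key from the environment and raise if it is unset.
import os

def require_env(name):
    """Return the value of an environment variable, or raise if unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value

# Usage (uncomment once the variables are set):
# openai_key = require_env("OPENAI_API_KEY")
# qdrant_url = require_env("QDRANT_URL")
```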
Performance depends on the size of the documentation, the complexity of the queries, and the performance of the OpenAI and Qdrant services. Large documentation sets and complex queries may result in slower response times.
Cost is primarily driven by OpenAI API usage for embeddings generation and Qdrant resource consumption. Monitor API usage and optimize query strategies to minimize costs.
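A back-of-the-envelope estimate can help with cost monitoring. The per-token price below is a hypothetical placeholder, not an official figure; check OpenAI's current pricing before relying on it.

```python
# Rough embedding cost estimate. The rate is an ASSUMED placeholder,
# not an official OpenAI price -- substitute the current published rate.
PRICE_PER_1K_TOKENS = 0.0001  # hypothetical USD per 1,000 tokens

def estimate_embedding_cost(num_chunks, avg_tokens_per_chunk,
                            price_per_1k=PRICE_PER_1K_TOKENS):
    """Rough USD cost to embed a documentation set once."""
    total_tokens = num_chunks * avg_tokens_per_chunk
    return total_tokens / 1000 * price_per_1k

# e.g. 5,000 chunks of ~500 tokens each:
print(f"${estimate_embedding_cost(5000, 500):.2f}")  # → $0.25
```

Re-indexing a source re-embeds its chunks, so repeated `run_queue` passes over large sets multiply this figure.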
{
  "mcpServers": {
    "rag-docs": {
      "command": "npx",
      "args": [
        "-y",
        "@hannesrudolph/mcp-ragdocs"
      ],
      "env": {
        "OPENAI_API_KEY": "",
        "QDRANT_URL": "",
        "QDRANT_API_KEY": ""
      }
    }
  }
}

search_documentation: Searches the stored documentation and returns relevant excerpts based on a text query.
Read-only operation, no data modification.
list_sources: Lists all documentation sources currently stored in the system.
Read-only operation, no data modification.
extract_urls: Extracts URLs from a given webpage and optionally adds them to the processing queue.
Adds URLs to a queue for later processing; potential for unintended data ingestion.
remove_documentation: Removes specific documentation sources from the system by their URLs.
Deletes data, impacting future search results.
list_queue: Lists all URLs currently waiting in the documentation processing queue.
Read-only operation, no data modification.
run_queue: Processes and indexes all URLs currently in the documentation queue.
Processes URLs, potentially ingesting and indexing data.
clear_queue: Removes all pending URLs from the documentation processing queue.
Removes URLs from the queue, preventing them from being processed.
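The queue tools above compose into a simple workflow: extract URLs, inspect the queue, process it, or clear it. The sketch below models that workflow with an in-memory class for illustration only; the real server persists its queue and performs actual fetching, embedding, and indexing.

```python
# Minimal in-memory model of the queue workflow (illustrative only --
# the real server persists state and does real fetching and indexing).
class DocQueue:
    def __init__(self):
        self.pending = []   # URLs waiting to be processed
        self.indexed = []   # URLs that have been "indexed"

    def extract_urls(self, urls):
        """Add URLs to the processing queue (mirrors extract_urls)."""
        self.pending.extend(urls)

    def list_queue(self):
        """Return the URLs currently waiting (mirrors list_queue)."""
        return list(self.pending)

    def run_queue(self):
        """Process every pending URL in order (mirrors run_queue)."""
        while self.pending:
            self.indexed.append(self.pending.pop(0))

    def clear_queue(self):
        """Drop all pending URLs unprocessed (mirrors clear_queue)."""
        self.pending.clear()

q = DocQueue()
q.extract_urls(["https://example.com/docs/a", "https://example.com/docs/b"])
print(q.list_queue())          # both URLs waiting
q.run_queue()
print(q.list_queue(), q.indexed)  # queue empty, both URLs indexed
```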
Authentication: API Key
Hosting: cloud
Autonomy is limited by the API keys and queue management. No sandboxing is implemented, so caution is advised when adding external URLs to the queue.
Production Tip
Monitor the queue size and processing times to ensure efficient operation and prevent backlogs.
The server provides documentation search and retrieval using vector embeddings, giving AI assistants access to relevant context.
You need an OpenAI API key for embeddings generation and a Qdrant API key for accessing the vector database.
You can add URLs to the processing queue using the `extract_urls` tool, and then process the queue using the `run_queue` tool.
You can remove specific documentation sources by their URLs using the `remove_documentation` tool.
The queue allows you to batch documentation processing and manage the order in which documentation sources are indexed.
You can use the `list_queue` tool to view the URLs currently waiting in the queue.
The server includes error handling and retry logic, but specific details are not documented. Check server logs for details.