Integrates any OpenAI SDK-compatible chat completion API (OpenAI, Perplexity, Groq, etc.) with Claude via the Model Context Protocol.
This server is relatively safe when used with trusted AI providers and when API keys are securely managed. However, it becomes risky if used with untrusted providers or if environment variables are not properly protected, potentially leading to data exposure or prompt injection vulnerabilities.
Performance is primarily determined by the latency and throughput of the configured AI chat provider. Network connectivity and API rate limits can also impact performance.
Cost depends on the pricing models of the AI chat providers used. Consider token usage, API call frequency, and any subscription fees associated with the providers.
To build from source, install dependencies with `npm install`. Otherwise, the published package can be run directly via `npx`, as in the configuration below.
```json
{
  "mcpServers": {
    "chat-openai": {
      "command": "npx",
      "args": [
        "@pyroprompts/any-chat-completions-mcp"
      ],
      "env": {
        "AI_CHAT_KEY": "OPENAI_KEY",
        "AI_CHAT_NAME": "OpenAI",
        "AI_CHAT_MODEL": "gpt-4o",
        "AI_CHAT_BASE_URL": "https://api.openai.com/v1"
      }
    }
  }
}
```

`OPENAI_KEY` is a placeholder for your actual API key.

`chat`: Relays a user's question to a configured AI chat provider and returns the response.
- Risk: relays user input to an external service; potential for data leakage or prompt injection.
- Credential storage: Environment Variable
- Deployment: local
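Under MCP, a client invokes this tool with a JSON-RPC `tools/call` request. A sketch of such a request is shown below; the argument key `content` and the question text are assumptions for illustration, as the server's exact input schema is not documented here:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "chat",
    "arguments": {
      "content": "Summarize the latest developments in battery technology."
    }
  }
}
```

The server forwards the question to the configured provider's chat completion endpoint and returns the provider's reply in the tool result.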
The server relies on external AI providers, so autonomy is limited by the capabilities and safety measures of those providers.
**Production Tip:** Monitor API usage and costs for each AI provider to optimize performance and budget.
**Which providers are supported?** Any provider with an OpenAI SDK-compatible chat completion API, including OpenAI, Perplexity, Groq, and others.
**How should API keys be stored?** Store API keys as environment variables and ensure your environment is properly secured to prevent unauthorized access.
**Can I use multiple providers at once?** Yes, you can configure multiple providers by referencing the MCP server multiple times with different environment variables.
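For example, registering two providers as separate server entries might look like the configuration below. The Perplexity base URL and model name are assumptions; check your provider's documentation for the correct values:

```json
{
  "mcpServers": {
    "chat-openai": {
      "command": "npx",
      "args": ["@pyroprompts/any-chat-completions-mcp"],
      "env": {
        "AI_CHAT_KEY": "OPENAI_KEY",
        "AI_CHAT_NAME": "OpenAI",
        "AI_CHAT_MODEL": "gpt-4o",
        "AI_CHAT_BASE_URL": "https://api.openai.com/v1"
      }
    },
    "chat-perplexity": {
      "command": "npx",
      "args": ["@pyroprompts/any-chat-completions-mcp"],
      "env": {
        "AI_CHAT_KEY": "PERPLEXITY_KEY",
        "AI_CHAT_NAME": "Perplexity",
        "AI_CHAT_MODEL": "sonar",
        "AI_CHAT_BASE_URL": "https://api.perplexity.ai"
      }
    }
  }
}
```

Each entry runs its own server process, so Claude sees one `chat` tool per configured provider.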
**What does the `chat` tool do?** The `chat` tool relays user questions to the configured AI chat provider and returns the response.
**How do I debug the server?** Use the MCP Inspector for debugging, which provides a URL to access debugging tools in your browser.
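One way to launch the Inspector is to wrap the server command with it; this sketch assumes the standard `@modelcontextprotocol/inspector` package and the published package name shown earlier:

```shell
# Start the MCP Inspector, which spawns the server as a child process
# and prints a local URL for the browser-based debugging UI.
npx @modelcontextprotocol/inspector npx @pyroprompts/any-chat-completions-mcp
```

Remember to set the `AI_CHAT_*` environment variables in the shell before launching, since the Inspector passes your environment through to the spawned server.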
**Is streaming supported?** The server relays responses from the AI provider, so streaming support depends on the capabilities of the underlying provider's API.
**Does the server validate or sanitize input?** No, the server relies on the external AI provider for input validation and sanitization.