
Queries multiple Ollama models to provide diverse AI perspectives, then synthesizes their responses into a comprehensive answer, enhancing Claude's advisory capabilities.
This MCP server is relatively safe for querying local models. However, the lack of authentication and potential for prompt injection require careful configuration and monitoring. It's safest when used with trusted models and well-defined system prompts.
Performance depends on the number of models queried, the size of the models, and the available hardware resources. Expect slower response times when querying multiple large models.
Cost is primarily related to the computational resources required to run the Ollama models. Larger models and concurrent queries will increase resource consumption.
Install via Smithery:

```
npx -y @smithery/cli install @YuChenSSR/multi-ai-advisor-mcp --client claude
```

Or add the server to your Claude Desktop configuration manually:

```json
{
  "mcpServers": {
    "multi-model-advisor": {
      "command": "node",
      "args": ["/absolute/path/to/multi-ai-advisor-mcp/build/index.js"]
    }
  }
}
```

Configuration is controlled through environment variables: `SERVER_NAME`, `SERVER_VERSION`, `DEBUG`, `OLLAMA_API_URL`, `DEFAULT_MODELS`, `GEMMA_SYSTEM_PROMPT`, `LLAMA_SYSTEM_PROMPT`, `DEEPSEEK_SYSTEM_PROMPT`.

Tools:

`list-available-models` — Lists all available Ollama models on the system. Read-only operation, no side effects.
`query-models` — Queries multiple specified Ollama models with a given question. Queries models, potentially leading to resource exhaustion or unexpected outputs if prompts are malicious.
None
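The environment variables listed above can be set directly in the Claude Desktop config entry for the server. A hedged example — the model names, prompt text, and `DEBUG` value here are illustrative placeholders, not the project's actual defaults (only the Ollama URL reflects Ollama's standard local port):

```json
{
  "mcpServers": {
    "multi-model-advisor": {
      "command": "node",
      "args": ["/absolute/path/to/multi-ai-advisor-mcp/build/index.js"],
      "env": {
        "OLLAMA_API_URL": "http://localhost:11434",
        "DEFAULT_MODELS": "gemma:2b,llama3:8b,deepseek-r1:7b",
        "GEMMA_SYSTEM_PROMPT": "You are a creative advisor.",
        "DEBUG": "true"
      }
    }
  }
}
```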
Autonomy is limited by the capabilities of the underlying Ollama models and the system prompts used. Ensure system prompts are carefully crafted to prevent unintended actions.
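To make the model fan-out concrete, here is a minimal Python sketch of querying several Ollama models with per-model system prompts against Ollama's standard `/api/chat` endpoint. This is an illustration of the pattern, not the server's actual code; the function names and model identifiers are assumptions.

```python
import json
import urllib.request

OLLAMA_API_URL = "http://localhost:11434"  # Ollama's default local endpoint


def build_query(model: str, question: str, system_prompt: str = "") -> dict:
    """Build an Ollama /api/chat payload for a single model."""
    messages = []
    if system_prompt:
        # The system prompt shapes this model's "perspective" on the question.
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": question})
    return {"model": model, "messages": messages, "stream": False}


def query_models(models, question, prompts=None):
    """Query each model in turn and collect the labelled answers."""
    prompts = prompts or {}
    answers = {}
    for model in models:
        payload = build_query(model, question, prompts.get(model, ""))
        req = urllib.request.Request(
            f"{OLLAMA_API_URL}/api/chat",
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
        # /api/chat returns the reply under message.content
        answers[model] = body["message"]["content"]
    return answers
```

Claude then receives the dictionary of labelled answers and synthesizes them into a single response.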
Production Tip
Monitor Ollama server resource usage to prevent performance bottlenecks when querying multiple models concurrently.
Q: How do I add more models?
A: Install the desired Ollama models using `ollama pull`, then add them to the `DEFAULT_MODELS` environment variable.

Q: Can I choose which models answer a given question?
A: Yes, you can specify the models to use in the query to Claude.

Q: How do I customize each model's system prompt?
A: Modify the corresponding environment variables (e.g., `GEMMA_SYSTEM_PROMPT`, `LLAMA_SYSTEM_PROMPT`).

Q: What happens if a configured model is unavailable?
A: The server will likely return an error, and Claude may not be able to synthesize a complete response.

Q: Does the server support authentication?
A: No, there is no built-in authentication. It's recommended to run this server in a secure environment.

Q: How can I debug issues?
A: Enable the `DEBUG` environment variable to view logs. You can also monitor the resource usage of the Ollama server.

Q: Does it work with MCP clients other than Claude Desktop?
A: While primarily designed for Claude Desktop, it should be compatible with other MCP clients, but testing may be required.