
Provides intelligent web search capabilities using OpenAI's reasoning models, ideal for AI assistants needing up-to-date information.
The server is relatively safe for read-only operations, but the dependency on an external API and the potential for unpredictable model outputs introduce moderate risks. It is safe to use with proper API key management and monitoring of query content. Risky scenarios include using a compromised API key or performing sensitive queries without adequate safeguards.
Performance depends on the OpenAI API's response times and the complexity of the search query. Reasoning models can increase latency.
Cost is primarily driven by OpenAI API usage, which is based on token consumption. More complex queries and higher reasoning effort will increase costs.
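Token-based billing makes per-query cost easy to estimate once you know the token counts. A minimal sketch follows; the per-million-token prices are placeholders, not real OpenAI rates, so substitute current pricing before relying on the numbers.

```python
# Back-of-envelope cost estimator for token-billed API usage.
# The per-million-token prices below are PLACEHOLDERS, not actual
# OpenAI pricing -- look up current rates before using in earnest.
PRICE_PER_MTOK = {
    "gpt-5-mini": {"input": 0.25, "output": 2.00},  # hypothetical USD rates
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate a single query's cost in USD from its token counts."""
    p = PRICE_PER_MTOK[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
```

Higher reasoning effort tends to produce more output tokens, which is why complex searches cost more under this model.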
pip install openai-websearch-mcp

{
  "mcpServers": {
    "openai-websearch-mcp": {
      "command": "uvx",
      "args": ["openai-websearch-mcp"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here",
        "OPENAI_DEFAULT_MODEL": "gpt-5-mini"
      }
    }
  }
}

OPENAI_API_KEY

openai_web_search
Performs intelligent web searches using OpenAI's reasoning models.
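A client invokes the tool via a standard MCP tools/call request. The sketch below assumes a `query` argument name, which is not confirmed by this listing; `reasoning_effort` is the parameter documented further down the page.

```json
{
  "name": "openai_web_search",
  "arguments": {
    "query": "latest stable Python release",
    "reasoning_effort": "low"
  }
}
```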
Primarily a read-only operation, retrieving information from the web.
API Key
The tool's autonomy is limited by the need for an OpenAI API key and the inherent risks of web search. Sandboxing helps mitigate some risks, but careful monitoring is still recommended.
Production Tip
Monitor API usage and costs closely to avoid unexpected charges. Implement robust error handling to gracefully handle API failures.
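One common way to handle transient API failures gracefully is a retry wrapper with exponential backoff. This is a generic sketch, not part of the server itself; wrap your outbound OpenAI call in it.

```python
import time

def call_with_retries(fn, attempts=3, base_delay=0.5):
    """Call fn(), retrying with exponential backoff on any exception.

    Re-raises the last exception once all attempts are exhausted, so
    callers still see a clear failure instead of a silent None.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Usage: `call_with_retries(lambda: client.do_search(query))`, where `do_search` stands in for whatever function issues the API request.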
The server supports gpt-4o, gpt-4o-mini, gpt-5, gpt-5-mini, gpt-5-nano, o3, and o4-mini.
Set the OPENAI_API_KEY environment variable to your API key.
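Since the server fails at request time without a key, it helps to fail fast at startup with a clear message. A minimal sketch (the helper name is illustrative, not part of the package):

```python
import os

def require_api_key() -> str:
    """Return the OpenAI API key, or fail fast with an actionable error."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; export it before starting the server"
        )
    return key
```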
The default reasoning model is gpt-5-mini.
Set the reasoning_effort parameter to 'low', 'medium', or 'high'.
Yes, you can use the user_location parameter to specify a location for localized results.
gpt-5 is a more powerful model suitable for deep research, while gpt-5-mini is faster and more cost-effective for quick searches.
The server will return an error, and the search will fail. Implement error handling to gracefully handle API failures.