
Interactive MCP server enabling LLMs to request user input, send notifications, and initiate command-line chat sessions on the user's local machine.
This server is relatively safe when used with trusted LLMs that adhere to the guiding principles. However, the ability to execute commands on the local machine introduces a moderate risk if the LLM is compromised or poorly configured. Exercise caution when using with untrusted LLMs.
Performance is limited by the speed of user interaction and the execution of commands on the local machine. Expect delays when waiting for user input.
No direct cost implications. Cost is primarily associated with the LLM client's usage (e.g., API calls, token consumption).
npm install -g interactive-mcp

{
  "mcpServers": {
    "interactive": {
      "command": "npx",
      "args": ["-y", "interactive-mcp"]
    }
  }
}

Tools

request_user_input
Asks the user a question and returns their answer, optionally displaying predefined options.
User input can be manipulated, but the tool itself doesn't directly cause harm.
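An LLM client would invoke this tool with a standard MCP tools/call request. A minimal sketch follows; the argument names (message, predefinedOptions) are assumptions about the tool's schema, not confirmed parameters — check the server's tool listing for the exact shape.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "request_user_input",
    "arguments": {
      "message": "Which environment should I deploy to?",
      "predefinedOptions": ["staging", "production"]
    }
  }
}
```

The server blocks until the user answers (or the configured timeout elapses) and returns the answer as the tool result.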
message_complete_notification
Sends a simple OS notification to the user.
Notifications are generally harmless.
start_intensive_chat
Initiates a persistent command-line chat session with the user.
Opens a command-line interface, potentially allowing for execution of commands.
ask_intensive_chat
Asks a question within an active intensive chat session.
Relies on user input within a command-line context.
stop_intensive_chat
Closes an active intensive chat session.
Simply closes a command-line session.
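The three intensive-chat tools form a start/ask/stop lifecycle. The sequence below is an illustrative sketch as MCP tools/call requests; the argument names (sessionTitle, sessionId, question) are assumptions about the tool schema, and the session identifier placeholder is left unresolved on purpose.

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/call",
 "params": {"name": "start_intensive_chat",
            "arguments": {"sessionTitle": "Project setup questions"}}}

{"jsonrpc": "2.0", "id": 2, "method": "tools/call",
 "params": {"name": "ask_intensive_chat",
            "arguments": {"sessionId": "<id from start response>",
                          "question": "Use TypeScript or JavaScript?"}}}

{"jsonrpc": "2.0", "id": 3, "method": "tools/call",
 "params": {"name": "stop_intensive_chat",
            "arguments": {"sessionId": "<id from start response>"}}}
```

Keeping one session open across several questions avoids the overhead of spawning a fresh prompt for each one.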
Autonomy is dependent on the LLM client configuration. The server itself does not enforce any autonomy restrictions.
Production Tip
Carefully validate LLM prompts and user inputs to prevent unintended command execution.
FAQ

What does this server do?
It allows LLMs to interact with users on their local machines: requesting input, sending notifications, and initiating command-line chat sessions.

Is it safe to use?
It's relatively safe when used with trusted LLMs, but caution is advised due to the ability to execute commands locally.

How do I set it up?
Add a configuration block to your MCP client's settings file, specifying the command and arguments used to launch the server.

What can be configured?
You can configure the timeout for user input prompts and disable specific tools.
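One way such options could be passed is as extra CLI arguments in the client configuration. The flag names below (--timeout, --disable-tools) are assumptions for illustration, not confirmed options — consult the server's own documentation or its --help output for the real flags.

```json
{
  "mcpServers": {
    "interactive": {
      "command": "npx",
      "args": ["-y", "interactive-mcp",
               "--timeout", "60",
               "--disable-tools", "start_intensive_chat"]
    }
  }
}
```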
Is it suitable for fully automated tasks?
No. Because it requires user interaction, it's not ideal for fully automated workflows.

What is the main security risk?
The main risk is that a malicious or compromised LLM could use the server to execute harmful commands on your system.

Does it provide sandboxing?
No, it does not have built-in sandboxing capabilities.