Modes of Operation
MCPurify can be run in different modes. This page walks you through the available options.
As Library
MCPurify can be used inside an existing MCP host, where it sits between a model and its connected MCP services. The core struct that provides all the filtering is mcpurify::filter::FilterServer; this type also manages all the configured backend services.
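To use the library, add the crate to your project's dependencies. The snippet below is a hypothetical sketch; point the mcpurify entry at wherever you actually consume the crate from (a registry release, a git URL, or a local path):

```toml
# Hypothetical dependency entries; adjust the mcpurify source to your setup.
[dependencies]
mcpurify = { path = "../mcpurify" }
tokio = { version = "1", features = ["full"] }
```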
The following example takes a Config deserialized from an external file and configures the FilterServer with the respective parameters.
```rust
use std::time::Duration;

use mcpurify::filter::{FilterServer, FilterRequest, FilterResponse};
use tokio::sync::mpsc::{Receiver, Sender};

// Create the filter server; `arguments.timeout` is an Option<u64> coming from
// your own CLI or config handling.
let (mut filter_server, request_tx, response_rx) =
    FilterServer::new(arguments.timeout.map(Duration::from_millis));

// If there is a config file present, we can initialize the filter server with
// the necessary providers and an optional timeout.
let config = Config::try_from("/path/to/config".into()).expect("Loading config from PathBuf");
filter_server.load_from_config(&config).expect("Initialize filter server from config");

// The filter server will run in the background.
let filter_server_handler = tokio::spawn(async move { filter_server.process().await });

// Now we can use `request_tx` to send messages to the filter and read the
// response from `response_rx`. The following example sends a request and
// displays the response. Adapt the code to your needs.
let request = ... ;

// Send the request to the filter(s).
let result = request_tx.send(request).await;

// .. handle errors and process the response.
let response = response_rx.recv().await;
```
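What a verdict contains depends on the configured providers. As a minimal sketch, assume FilterResponse exposes the name, permission, and errors fields seen in the JSON verdicts under Filtered Example Tool Calls below; these field names are an assumption for illustration, not a documented API:

```rust
// Hypothetical sketch: the fields `name`, `permission`, and `errors` mirror
// the JSON verdicts shown under "Filtered Example Tool Calls" and are
// assumptions, not guaranteed by the crate's API.
if let Some(response) = response_rx.recv().await {
    match response.permission.as_str() {
        "accept" => println!("tool call allowed by {}", response.name),
        "deny" => eprintln!("tool call rejected: {:?}", response.errors),
        other => eprintln!("unexpected verdict from {}: {other}", response.name),
    }
}
```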
As Executable

MCPurify ships with an executable called mcpurify-filtering-service. This CLI app can run in a REPL-like mode, where each MCP tool call payload must be contained in a single line and the result is printed to stdout. The following demo shows how to run the binary via cargo run and activate REPL mode.
```sh
cargo run --bin mcpurify-filtering-service \
    --config <path/to/config/file> \
    --timeout 500 \
    --repl
```
One-Shot Instance

MCPurify can be executed as a one-shot instance. This requires a configuration passed to mcpurify-filtering-service and a JSON-RPC 2.0 encoded MCP service request:
```sh
cargo --quiet run -- -c assets/config/default.config.toml \
    '{ "jsonrpc": "2.0", "id": 2, "method": "tools/call", "params": { "name": "get_weather", "arguments": { "location": "Tokyo" } } }'
```
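Assuming the classifier considers this get_weather call harmless, the process prints a verdict like the accept example under Filtered Example Tool Calls below; the name field reflects whichever provider produced the verdict:

```json
{ "name" : "vllm", "permission" : "accept", "errors" : [] }
```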
As REPL

MCPurify can be run interactively in REPL mode. This requires a configuration passed to mcpurify-filtering-service. Each request must be provided as a JSON-RPC 2.0 request on a single line; the response will be JSON-RPC 2.0 encoded as well.
```sh
cargo --quiet run --bin mcpurify-filtering-service -- -c assets/config/default.config.toml --repl
```
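An illustrative exchange, where the first line is the request typed on stdin and the second line is the verdict printed to stdout (a sketch, not captured output):

```
{ "jsonrpc": "2.0", "id": 2, "method": "tools/call", "params": { "name": "get_weather", "arguments": { "location": "Tokyo" } } }
{ "name" : "vllm", "permission" : "accept", "errors" : [] }
```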
As Proxy

MCPurify can be run as a simple HTTP(S) proxy that forwards requests to an external AI provider and filters the results. To run in proxy mode, a proxy must be provided in the configuration.
In a provided config.toml file:
```toml
[proxy]
downstream = "http://compute.local:11434"
listenaddr = "127.0.0.1:3000"
```

downstream defines the AI provider backend. Since the proxy transparently passes all requests through, any backend can be used. listenaddr defines the local address and port on which the proxy will be made available. Once the configuration has been placed in an appropriate file, proxy mode can be started via:
```sh
cargo run -- -c config.toml
```

Note: In order for the proxy mode to provide all functionality, at least one AI provider must be present in the configuration.
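With the proxy running, clients talk to listenaddr instead of the provider directly. The request below is a hypothetical example for the Ollama-style downstream used in the configuration above; adapt the path and payload to whatever backend you actually use:

```sh
# Hypothetical request; the /api/chat path matches an Ollama-style downstream.
curl http://127.0.0.1:3000/api/chat -d '{
  "model": "llama3.2",
  "messages": [{ "role": "user", "content": "What is the weather in Tokyo?" }]
}'
```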
Disclaimer
MCPurify provides a middleware to forward any tool call response to a custom filter. A supplementary model is available: an LLM trained to detect the most common attack vectors, such as gaining access to a system, escalating privileges, or introducing malware into the host. While this model has been fine-tuned from a larger base model with best effort, we cannot guarantee that every tool call will be correctly classified and thus accepted or rejected.
Filtered Example Tool Calls
The listing below shows some example tool invocations that will be allowed or denied, depending on the classifier model.
Reject
The model tries to recursively delete files from the filesystem root:
{ "model": "llama3.2", "created_at": "2025-07-07T20:22:19.184789Z", "message": { "role": "assistant", "content": "", "tool_calls": [ { "function": { "name": "delete_files", "arguments": { "params": ["-rf"], "object": "/" } } } ] }, "done": false}And the filter will generate following response
{ "name" : "vllm", "permission" : "deny", "errors" : []}Accept
The model requests a list of all files in /tmp:
{ "model": "llama3.2", "created_at": "2025-07-07T20:22:19.184789Z", "message": { "role": "assistant", "content": "", "tool_calls": [ { "function": { "name": "list_files", "arguments": { "object": "/tmp" } } } ] }, "done": false}The filter will generate following response:
{ "name" : "vllm", "permission" : "accept", "errors" : []}