Insecure credential storage plagues MCP

This fourth post in our series on Model Context Protocol (MCP) security examines a vulnerability distinct from the protocol-level weaknesses discussed in our previous posts: many MCP environments store long-term API keys for third-party services in plaintext on the local filesystem, often with insecure, world-readable permissions. Exploitation of this vulnerability could touch every system connected to your LLM app; the more powerful your MCP environment, the greater the risk posed by insecurely stored credentials.

This practice is widespread within the MCP ecosystem. We observed it in multiple MCP tools, from official servers connecting to GitLab, Postgres, and Google Maps, to third-party tools like the Figma connector and the Superargs wrapper. While these are only examples, they illustrate a concerning trend that leaves attackers one file disclosure vulnerability away from stealing your API keys and compromising the entirety of your data in the third-party service. There’s no need for complex exploits, and there are many different ways the attacker could read the API keys from your system:

  • Local malware: User-level malware designed to steal information can scan predictable file paths (e.g., ~/Library/Application Support/, ~/.config/, or application logs) and exfiltrate discovered credentials, as sketched after this list.
  • Exploitation of other vulnerabilities: Arbitrary file read vulnerabilities in unrelated software on the same system become direct pathways to stealing these plaintext secrets.
  • Multi-user systems: On shared workstations or servers, other users with file system access could read credentials stored in world-readable files.
  • Cloud backups: Automated backup tools may synchronize server configuration files to cloud storage, exposing them to the provider or even to other users if the backup storage system is misconfigured.
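To make the first vector concrete, the sketch below shows how little effort such theft requires. It is a minimal Node.js illustration; the candidate paths are examples drawn from this post plus one hypothetical entry, not an exhaustive list:

import { promises as fs } from "node:fs";
import os from "node:os";
import path from "node:path";

// Locations where MCP hosts are known to leave plaintext credentials.
// Illustrative examples only; a real infostealer carries a long list.
const candidates = [
  "Library/Application Support/Claude Desktop/claude_desktop_config.json",
  ".config/gitlab-mcp/config.json", // hypothetical path for illustration
  ".cursor/logs/conversations/conversation_20240415.json",
];

for (const rel of candidates) {
  const file = path.join(os.homedir(), rel);
  try {
    // A plain readFile succeeds because the files are world-readable;
    // no elevated privileges are involved.
    const contents = await fs.readFile(file, "utf8");
    console.log(`readable: ${file} (${contents.length} bytes)`);
  } catch {
    // File absent or locked down; a real stealer simply moves on.
  }
}

Run as an ordinary user, a loop like this surfaces every world-readable secret on the machine in seconds.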

This post dissects the insecure ways MCP software handles credentials and the paths those practices open for attackers to reach your data. We also discuss the improved security practices that developers of both MCP servers and the third-party services they connect you to can apply to address these risks.

Stealing long-term credentials stored by MCP servers

Because many MCP servers exist to connect an LLM to a third-party API, such as a knowledge management system or cloud infrastructure service, they often need credentials to read or modify data. To that end, MCP integrated OAuth 2.1 in its March 2025 protocol revision. If implemented correctly, OAuth’s token-based approach provides an easy and secure way for servers to obtain short-term credentials with a limited scope.

However, not every downstream service that users want to connect to their LLM supports OAuth, so many MCP tools require users to provide the server with API keys. These long-term credentials typically arrive at the MCP server through one of two pathways, both of which create vectors for credential theft:

Pathway 1: Insecure configuration files

Most MCP servers obtain credentials via command-line arguments or environment variables, often sourced from configuration files managed by the host AI application. We observed this pattern with the official MCP servers for Google Maps, Postgres, and GitLab.

The security risk emerges when the host application stores this configuration insecurely. For example, Claude Desktop stores its configuration in a claude_desktop_config.json file under the user’s home directory. On macOS, we found this file has world-readable permissions:

$ ls -la ~/Library/Application\ Support/Claude\ Desktop/claude_desktop_config.json
-rw-r--r--  1 user  staff  2048 Apr 12 10:45 claude_desktop_config.json

This -rw-r--r-- permission set allows any process or user on the system to read the file’s contents, including any plaintext API keys stored within, using standard file access operations. No special privileges are required.
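For illustration, a representative entry in such a configuration looks like the following, with an obvious placeholder standing in for a real GitLab personal access token (the server name and token value are hypothetical):

{
  "mcpServers": {
    "gitlab": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-gitlab"],
      "env": {
        "GITLAB_PERSONAL_ACCESS_TOKEN": "glpat-REDACTED-PLACEHOLDER"
      }
    }
  }
}

Anything that can read this file holds the same long-term, broadly scoped access to GitLab that the MCP server does.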

Pathway 2: Credentials leaked via chat logs

Another common pattern involves users inputting credentials directly into the AI chat interface, relying on the model to pass them to the appropriate MCP server. Supercorp’s Superargs wrapper explicitly facilitates this for servers that expect configuration information in arguments or environment variables.

This method presents two distinct risks. First, as detailed in our previous post, a malicious MCP server can simply steal the credentials directly from the conversation history. Second, the host AI application itself often logs the entire conversation history—including any embedded credentials—to local files for debugging or history features.

During our testing, we found applications like Cursor and Windsurf store these conversation logs with world-readable permissions:

$ ls -la ~/.cursor/logs/conversations/
-rw-r--r--  1 user  staff  15482 Apr 15 12:23 conversation_20240415.json

Similar to configuration files, these insecurely permissioned logs provide another easily accessible source of plaintext credentials for local attackers or malware.

Compounding the risk: The Figma example

Some implementations expose credentials through both pathways simultaneously. The community-provided MCP server for Figma allows users to set their API token via a tool call. However, the server then saves this credential to a configuration file in the user’s home directory using Node.js’s fs.writeFileSync function. By default, this function requests 0666 permissions (-rw-rw-rw-), which the process umask then reduces: under the common 022 umask, the stored Figma token is still world-readable, and under a permissive umask it is world-writable as well.
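The server-side fix is essentially a one-line change. The sketch below, using a hypothetical config path and a placeholder key, contrasts the insecure default with an explicit owner-only mode:

import fs from "node:fs";
import os from "node:os";
import path from "node:path";

// Hypothetical location of the server's config file, for illustration.
const configDir = path.join(os.homedir(), ".figma-mcp");
const configPath = path.join(configDir, "config.json");
const apiKey = "figd_example_only"; // placeholder, never hard-code real keys

// Keep the containing directory private as well.
fs.mkdirSync(configDir, { recursive: true, mode: 0o700 });

// Insecure default: writeFileSync requests mode 0666, and a typical
// 022 umask still leaves the file world-readable (0644).
// fs.writeFileSync(configPath, JSON.stringify({ apiKey }));

// Safer: request owner-only permissions at creation time.
fs.writeFileSync(configPath, JSON.stringify({ apiKey }), { mode: 0o600 });

// The mode option applies only when the file is created, so also
// tighten any copy that already exists on disk.
fs.chmodSync(configPath, 0o600);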

Writing to the configuration file enables attacks similar to session fixation, where the victim unknowingly logs into an attacker-controlled account. In the case of a design tool like Figma, the victim will likely save trade secrets or other private information in the account, immediately disclosing them to the attacker. If the downstream service is a bank or cryptocurrency exchange, the user could be tricked into depositing or transferring assets directly into the attacker’s accounts.

The steps to safer credential handling

Replacing these leaky credential stores with better authentication methods will not happen overnight, but multiple stakeholders can help move the ecosystem forward. All web services with public-facing APIs should add OAuth support, including short-lived tokens with narrow scopes. In addition to helping clients minimize the risk of credential theft, OAuth also provides the best user experience, as signing in through a browser is usually much simpler than tinkering with a configuration file.
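As a rough sketch of what this buys clients, the OAuth 2.1 authorization-code exchange below (the endpoint, client ID, and redirect URI are all hypothetical) yields a token that expires on its own and never needs to be written to disk:

// Exchange an authorization code (obtained via a browser sign-in) for a
// short-lived, narrowly scoped access token. All names are illustrative.
async function exchangeCode(code: string, pkceVerifier: string) {
  const response = await fetch("https://auth.example.com/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      code,
      code_verifier: pkceVerifier, // PKCE is mandatory in OAuth 2.1
      client_id: "example-mcp-server",
      redirect_uri: "http://127.0.0.1:8976/callback",
    }),
  });
  // A short-lived token with a narrow scope: nothing long-term to store.
  const { access_token, expires_in } = await response.json();
  return { accessToken: access_token, expiresInSeconds: expires_in };
}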

Even if the third-party service does not support OAuth, MCP server developers can choose more secure methods for storing credentials locally. Modern desktop operating systems have purpose-built APIs for credential storage with automatic encryption, such as Windows’ Credentials Management API and macOS’s Keychain Services API. These APIs are far preferable to plaintext file storage, even if the file in question is readable only by its owner.
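In the Node.js ecosystem, for example, the keytar package wraps these OS credential stores behind a single interface. A minimal sketch (the service and account names are arbitrary labels we chose for illustration):

import keytar from "keytar";

// Arbitrary labels identifying the secret in the OS credential store
// (Keychain on macOS, Credential Manager on Windows, libsecret on Linux).
const SERVICE = "figma-mcp";
const ACCOUNT = "api-token";

// Store the token; the OS encrypts it at rest and scopes it to the user.
export async function saveToken(token: string): Promise<void> {
  await keytar.setPassword(SERVICE, ACCOUNT, token);
}

// Read it back at startup; the secret never touches a plaintext file.
export async function loadToken(): Promise<string | null> {
  return keytar.getPassword(SERVICE, ACCOUNT);
}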

As for users, the best they can do is carefully review the software they install in their environments and only use MCP servers that either use OAuth or store credentials via a secure operating system API. Alternatively, and only as a final stopgap, users can manually tighten the permissions on any sensitive files their AI software leaves behind (e.g., chmod 600 on the configuration and log files shown above).

When a field of technology evolves rapidly in the way AI and MCP are, it is easy for developers to focus on rapid delivery and leave security as an afterthought. But with MCP becoming the foundation for increasingly powerful AI systems, we need to reverse this trend and make secure credential handling a top priority from the start.

See the other posts in our series on MCP security.

Thank you to our AI/ML security team for their work investigating this attack technique!