API keys are one of the simplest and most common forms of API authentication -- and one of the easiest credentials to leak. A single exposed key in a public repository can be discovered by automated scanners within seconds, leading to unauthorized access, unexpected charges, and compromised data. This guide covers five progressively more secure methods for storing and managing API keys in Python, from quick local development setups to production-grade secrets management.
API key authentication works by including a unique string -- the key -- with each request your application makes to an API. The server checks this string against its records and either grants or denies access. Unlike OAuth 2.0, which involves a multi-step token exchange, API keys are sent directly with every request. This simplicity is their strength and their weakness: there is no authorization code flow, no scopes, and no token expiration unless the provider builds those features in separately. That means the burden of protecting the key falls entirely on you.
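On the server side, that check amounts to a string comparison against a set of stored keys. A minimal sketch of the idea -- the key values are hypothetical placeholders, and `secrets.compare_digest` is used so the comparison runs in constant time:

```python
# Minimal sketch of the server-side check described above.
# The key values are hypothetical placeholders.
import secrets

VALID_KEYS = {"sk-live-abc123", "sk-live-def456"}

def is_authorized(presented_key: str) -> bool:
    # compare_digest runs in constant time, so an attacker cannot
    # learn the key character by character from response timing.
    return any(secrets.compare_digest(presented_key, k) for k in VALID_KEYS)
```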
Why API Key Security Matters
A leaked API key gives an attacker the same level of access your application has. Depending on the service, that could mean reading private data, making purchases, sending messages on your behalf, or consuming your entire usage quota. Automated bots continuously scan public repositories, paste sites, and even client-side JavaScript for exposed credentials. Once a key is found, it can be exploited within minutes.
The problem is especially acute because API keys are bearer credentials -- whoever holds the key can use it. There is no additional verification step. If you accidentally commit a key to a public Git repository, simply deleting the file in a later commit does not help. The key remains visible in the repository's history and can be recovered by anyone who knows where to look.
If you have already committed an API key to a repository -- even a private one -- revoke that key immediately and generate a new one. Removing the file from the current branch does not erase it from Git history.
Method 1: Environment Variables with os.environ
The simplest approach to keeping keys out of your code is to store them as environment variables on your operating system and read them at runtime with Python's built-in os module. This separates your credentials from your source code entirely, so nothing sensitive appears in your files or repository.
```python
# Set the variable in your terminal first:
#   export WEATHER_API_KEY="your-secret-key-here"   # macOS/Linux
#   set WEATHER_API_KEY=your-secret-key-here        # Windows CMD
import os

import requests

api_key = os.environ["WEATHER_API_KEY"]
# Using os.environ[] raises KeyError if the variable is missing,
# which is preferable to silently proceeding with None.

response = requests.get(
    "https://api.weather.example.com/v1/current",
    headers={"X-API-Key": api_key},
)
print(response.json())
```
The advantage of this method is that it requires no additional libraries and works across every operating system. The drawback is that environment variables set in the terminal do not persist across sessions or reboots unless you add them to a shell profile file like .bashrc or .zshrc. For projects with multiple keys, managing them all as shell exports becomes cumbersome quickly.
Method 2: The python-dotenv Library
The python-dotenv library solves the persistence problem by letting you define all your environment variables in a single .env file that sits in your project directory. At runtime, a single function call loads those variables into os.environ so the rest of your code can access them the same way it would access any other environment variable.
```python
# 1. Install the library:
#    pip install python-dotenv
#
# 2. Create a .env file in your project root:
#    WEATHER_API_KEY=your-secret-key-here
#    MAPS_API_KEY=another-secret-key
#    DEBUG=false
#
# 3. CRITICAL: add .env to your .gitignore file:
#    echo ".env" >> .gitignore
#
# 4. Load and use the variables in your Python code:
import os

from dotenv import load_dotenv

load_dotenv()  # Reads .env into os.environ

weather_key = os.getenv("WEATHER_API_KEY")
maps_key = os.getenv("MAPS_API_KEY")

if not weather_key:
    raise RuntimeError(
        "WEATHER_API_KEY is not set. Check your .env file."
    )
```
This is the most popular approach for local development because it is simple, portable, and keeps all of your secrets in one place. The critical rule is that the .env file must be listed in your .gitignore before you ever make your first commit. If the file is tracked by Git even once, its contents are preserved in history permanently.
Create a .env.example file that lists the required variable names with placeholder values (like WEATHER_API_KEY=your-key-here) and commit that to your repository. This documents which variables the project needs without exposing real credentials.
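One way to enforce that convention is a small startup check that compares the environment against .env.example. A sketch using only the standard library -- the helper name and the deliberately naive KEY=value parsing are this sketch's own:

```python
import os

def missing_vars(example_path=".env.example"):
    """Return names listed in .env.example that are not set
    in the current environment."""
    missing = []
    with open(example_path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and anything that is not KEY=value
            if not line or line.startswith("#") or "=" not in line:
                continue
            name = line.split("=", 1)[0].strip()
            if not os.environ.get(name):
                missing.append(name)
    return missing
```

Run it after `load_dotenv()` and raise if the returned list is non-empty, and new contributors get an immediate, specific error instead of a failed API call later.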
Method 3: Pydantic Settings for Validated Configuration
As projects grow, scattered os.getenv() calls become difficult to maintain. The pydantic-settings library provides a structured approach: you define your configuration as a typed Python class, and Pydantic automatically loads values from environment variables and .env files, validates their types, and raises clear errors if anything is missing.
```python
# pip install pydantic-settings
from pydantic import Field, SecretStr
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(
        env_file=".env",
        extra="ignore",
    )

    weather_api_key: SecretStr = Field(min_length=1)
    maps_api_key: SecretStr = Field(min_length=1)
    debug: bool = False


# Pydantic loads from .env and validates on instantiation
settings = Settings()

# SecretStr prevents accidental logging of the key.
# Access the raw value only when you need it:
weather_key = settings.weather_api_key.get_secret_value()

# This prints the masked value **********, not the actual key:
print(settings.weather_api_key)
```
The SecretStr type is especially valuable. It wraps the key so that printing the settings object, logging it, or serializing it to JSON all produce a masked value instead of the raw secret. You only access the underlying string by explicitly calling .get_secret_value(), which makes accidental exposure in logs and error traces far less likely.
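The masking behavior can be illustrated with a minimal stand-in class -- a sketch of the idea, not Pydantic's actual implementation:

```python
class MaskedSecret:
    """Minimal illustration of the SecretStr idea: mask by default,
    expose the raw value only on explicit request."""

    def __init__(self, value: str):
        self._value = value

    def __repr__(self):
        return "MaskedSecret('**********')"

    def __str__(self):
        return "**********"

    def get_secret_value(self) -> str:
        return self._value


key = MaskedSecret("sk-live-abc123")   # hypothetical key value
print(key)                  # **********
print(f"loaded: {key}")     # loaded: **********
```

Because masking is the default and exposure requires an explicit method call, a stray `print`, log statement, or exception trace shows only asterisks.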
Method 4: The keyring Library for OS-Level Storage
The keyring library stores credentials in your operating system's native secure storage -- Keychain on macOS, Windows Credential Locker on Windows, and the Secret Service API (GNOME Keyring or KWallet) on Linux. This is more secure than a plain-text .env file because the operating system encrypts the stored values and restricts access to the user who created them.
```python
# pip install keyring
import keyring
import requests

# Store a key once (run this separately, not in your app code):
#   keyring.set_password("myproject", "weather_api", "your-key")

# Retrieve it at runtime:
api_key = keyring.get_password("myproject", "weather_api")
if not api_key:
    raise RuntimeError(
        "API key not found in keyring. Run: "
        "keyring.set_password('myproject', 'weather_api', 'your-key')"
    )

response = requests.get(
    "https://api.weather.example.com/v1/current",
    headers={"X-API-Key": api_key},
)
print(response.json())
```
The keyring approach is a strong middle ground between the convenience of .env files and the infrastructure overhead of a cloud secrets manager. It works well for individual developers, desktop applications, and scripts running on dedicated machines. The main limitation is that keyring requires an interactive session to initially store the secret, which makes it less practical for headless server deployments or containerized environments.
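For code that must run both on a workstation and on a headless server, a common compromise is to try the keyring first and fall back to an environment variable. A sketch, reusing the hypothetical service and account names from the example above:

```python
# Sketch: prefer the OS keyring, fall back to an environment
# variable for headless or containerized environments.
import os

def load_api_key(service="myproject", username="weather_api",
                 env_var="WEATHER_API_KEY"):
    try:
        import keyring
        key = keyring.get_password(service, username)
        if key:
            return key
    except Exception:
        # keyring not installed, or no usable backend
        # (e.g., a headless server without a desktop keyring)
        pass
    return os.environ.get(env_var)
```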
Method 5: Cloud Secrets Managers for Production
For production deployments, dedicated secrets management services provide the strongest security guarantees. Services like AWS Secrets Manager, Google Cloud Secret Manager, and Azure Key Vault encrypt secrets at rest using hardware security modules, enforce fine-grained access control through IAM policies, maintain audit logs of every access event, and support automated key rotation.
```python
# Example: retrieving a secret from AWS Secrets Manager
# pip install boto3
import json

import boto3


def get_secret(secret_name, region="us-east-1"):
    """Retrieve a secret from AWS Secrets Manager."""
    client = boto3.client(
        "secretsmanager",
        region_name=region,
    )
    response = client.get_secret_value(SecretId=secret_name)
    # Secrets can be stored as plain strings or JSON
    secret = response["SecretString"]
    try:
        return json.loads(secret)
    except json.JSONDecodeError:
        return secret


# Usage
credentials = get_secret("myproject/api-keys")
weather_key = credentials["weather_api_key"]
```
The trade-off with cloud secrets managers is added complexity and cost. You need to configure IAM roles, manage network access, and handle the possibility that the secrets service itself is temporarily unavailable. For production systems, however, this trade-off is well worth it. Your secrets are encrypted with keys you control, access is restricted to specific services and roles, and every retrieval is logged for auditing and compliance purposes.
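One common mitigation for both the cost and the availability concern is to cache the secret in-process, so the service is called once per process rather than once per request. A sketch using functools.lru_cache, with get_secret stubbed out so the example is self-contained:

```python
from functools import lru_cache

def get_secret(secret_name, region="us-east-1"):
    # Stand-in for the AWS Secrets Manager lookup above; returns a
    # hypothetical payload so this sketch runs without AWS access.
    return {"weather_api_key": "sk-demo"}

@lru_cache(maxsize=None)
def cached_secret(secret_name: str):
    # First call hits the secrets service; repeats are served
    # from the in-process cache.
    return get_secret(secret_name)
```

Note that lru_cache holds the value for the life of the process; if you rely on rotation, use a time-based expiry instead of caching forever.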
Key Rotation and Lifecycle Management
Storing keys securely is only half of the equation. Keys also need to be rotated -- replaced with new ones on a regular schedule -- to limit the damage window if a key is compromised without your knowledge. Industry recommendations suggest rotating keys every 90 days for standard environments and every 30 to 60 days for systems handling sensitive data or subject to compliance requirements like PCI DSS.
A zero-downtime rotation follows a specific pattern: create the new key first, update your application to use the new key, verify that everything works, and only then revoke the old key. If your provider supports multiple active keys simultaneously, the transition is seamless. Here is a simple pattern for managing this in code:
```python
import os

import requests


def make_api_request(url):
    """Try the primary key, fall back to the secondary."""
    keys = [
        os.environ.get("API_KEY_PRIMARY"),
        os.environ.get("API_KEY_SECONDARY"),
    ]
    for key in keys:
        if not key:
            continue
        response = requests.get(
            url,
            headers={"X-API-Key": key},
        )
        if response.status_code != 401:
            return response
    raise RuntimeError("All API keys failed authentication.")
```
Never print or log API keys during debugging. If you need to verify that a key loaded correctly, check its length or log only the first few characters (e.g., print(f"Key loaded: {api_key[:4]}...")).
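That debug pattern can be wrapped in a small helper so it is applied consistently everywhere a key might appear in output:

```python
def mask_key(key, visible=4):
    """Return a debug-safe representation of a key, showing only
    the first few characters."""
    if not key:
        return "<missing>"
    return key[:visible] + "..."

# Hypothetical key value, for illustration:
print(mask_key("sk-live-abc123xyz"))  # sk-l...
```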
Preventing Accidental Leaks with Pre-Commit Scanning
Human error is the leading cause of credential leaks. Even with a .gitignore in place, a developer might paste a key into a source file while testing and forget to remove it. Pre-commit hooks provide an automated safety net by scanning every commit for patterns that look like secrets and blocking the push if any are found.
```yaml
# Install the pre-commit framework:
#   pip install pre-commit

# Create a .pre-commit-config.yaml file:
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.21.2
    hooks:
      - id: gitleaks

# Then run:
#   pre-commit install
#
# From this point forward, every commit is scanned automatically.
# If a secret pattern is detected, the commit is rejected with a
# clear error message.
```
Tools like gitleaks and trufflehog detect common secret patterns including API keys, tokens, private keys, and connection strings. GitHub also offers built-in push protection that can block pushes containing recognized secret formats before they ever reach the remote repository. Combining local pre-commit scanning with server-side push protection creates two layers of defense against accidental exposure.
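The core idea behind these scanners is pattern matching over text. A toy sketch of that idea -- the regexes here are illustrative and far simpler than the rule sets gitleaks or trufflehog actually ship:

```python
# Toy sketch of the pattern-matching idea behind secret scanners.
# These regexes are illustrative only, not the tools' real rules.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]{16,}['\"]"),
]

def find_suspect_lines(text: str):
    """Return (line number, line) pairs that match a secret pattern."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits
```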
Comparing Storage Methods
Each storage method suits a different stage of your project's lifecycle. The right choice depends on your deployment environment, team size, and security requirements.
| Method | Best For | Security Level | Trade-Off |
|---|---|---|---|
| `os.environ` | Quick scripts, CI/CD pipelines | Basic | Variables do not persist across sessions |
| `python-dotenv` | Local development, small teams | Basic | Plain-text file on disk; requires `.gitignore` discipline |
| `pydantic-settings` | Growing projects with multiple configs | Basic+ | Still reads from `.env`; adds validation and masking |
| `keyring` | Desktop apps, single-developer machines | Medium | Requires interactive setup; not ideal for containers |
| Cloud secrets manager | Production deployments, regulated industries | High | Added infrastructure complexity and cost |
Key Takeaways
- Never hard-code API keys in source code: Even in private repositories, hard-coded credentials can leak through backups, code reviews, or breaches. Always externalize secrets into environment variables, files, or dedicated stores.
- Start with python-dotenv for local development: A .env file combined with a .gitignore entry is the fastest way to keep keys out of your codebase. Add pydantic-settings when you need validation and masking.
- Use a cloud secrets manager in production: Services like AWS Secrets Manager, Azure Key Vault, and Google Cloud Secret Manager provide encryption at rest, access control, audit logging, and automated rotation that no local file can match.
- Rotate keys on a regular schedule: Aim for every 90 days at minimum, with shorter cycles for sensitive systems. Always create the new key before revoking the old one to avoid downtime.
- Add automated leak prevention: Install pre-commit hooks like gitleaks to scan for secrets before they enter your repository. Enable GitHub push protection as an additional server-side safety net.
API key security is not a single decision but a set of habits applied throughout a project's lifecycle. Start with the method that fits your current stage -- a .env file for a weekend project, a cloud secrets manager for a production service -- and build on it as the project grows. The goal at every stage is the same: keep your credentials out of places where they do not belong and treat them with the same care you would give to any password.