API authentication can feel overwhelming when you are starting out. API keys, OAuth 2.0, JWT -- these terms show up in nearly every API's documentation, and it is not always obvious which one you need or how they relate to each other. The short answer is that they solve different problems. API keys identify your application. OAuth 2.0 handles delegated access on behalf of a user. JWT is a token format that carries signed data. This guide explains each one in plain language, shows you what they look like in Python code, and gives you a clear decision framework for choosing the right method.
Before comparing the three methods, it helps to understand a distinction that trips up even experienced developers: the difference between authentication and authorization. These terms are used interchangeably in casual conversation, but they mean different things in security. Getting this distinction right makes the rest of API authentication much easier to understand.
Authentication vs Authorization: The Core Difference
Authentication answers the question: "Who are you?" It is the process of proving your identity -- you are the user you claim to be, or your application is the application it claims to be. A username and password is authentication. An API key is authentication. Presenting a valid token is authentication.
Authorization answers a different question: "What are you allowed to do?" Once your identity is established, the server decides what resources you can access and what actions you can take. A user might be authenticated (logged in) but not authorized to delete another user's data.
API keys handle authentication. OAuth 2.0 handles both authentication and authorization. JWT is a format that can carry either -- it depends on what claims are inside the token and how the server validates them.
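The distinction is easier to hold onto in code. Here is a minimal sketch (all names and data hypothetical): authentication maps a credential to an identity, and authorization checks what that identity may do -- two separate steps that can fail independently.

```python
# Sketch: authentication resolves *who*; authorization decides *what*.
# The credential store and permission table below are hypothetical.

USERS = {"token-abc": {"id": "user_42", "role": "editor"}}       # credential -> identity
PERMISSIONS = {"editor": {"read", "write"}, "viewer": {"read"}}  # role -> allowed actions

def authenticate(token):
    """Authentication: prove who the caller is, or fail with 401."""
    user = USERS.get(token)
    if user is None:
        raise PermissionError("401 Unauthorized: unknown credential")
    return user

def authorize(user, action):
    """Authorization: decide what this identity may do, or fail with 403."""
    if action not in PERMISSIONS.get(user["role"], set()):
        raise PermissionError("403 Forbidden: action not allowed")

user = authenticate("token-abc")  # succeeds: identity established
authorize(user, "write")          # succeeds: editors may write
# authorize(user, "delete")       # would raise: authenticated but NOT authorized
```

A logged-in user hitting a `delete` endpoint they lack permission for fails the second check, not the first -- exactly the distinction the paragraphs above describe.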
API Keys: The Simplest Method
An API key is a long, randomly generated string that acts as a password for your application. You get the key from the API provider (usually from a developer dashboard), include it in your requests, and the server checks whether the key is valid. There is no user involved, no token exchange, and no expiration unless the provider builds that in separately.
```python
import requests
import os

API_KEY = os.environ["WEATHER_API_KEY"]

# API key sent as a header
response = requests.get(
    "https://api.weather.example.com/v1/forecast",
    headers={"X-API-Key": API_KEY},
    params={"city": "Austin"},
)
print(response.json())

# Some APIs expect the key as a query parameter instead
response = requests.get(
    "https://api.maps.example.com/v1/directions",
    params={
        "origin": "Austin,TX",
        "destination": "Dallas,TX",
        "key": API_KEY,  # Less secure -- appears in logs
    },
)
```
API keys are popular because they are simple to generate, simple to use, and work well for scenarios where you need to identify which application is making a request rather than which user is behind it. The trade-off is limited security: API keys are static (they do not expire unless you manually rotate them), they have no built-in concept of permissions or scopes, and if one is stolen, the attacker has the same access your application has until you revoke it.
Always send API keys in the Authorization header or a custom header like X-API-Key. Sending keys as URL query parameters exposes them in server logs, browser history, and referrer headers.
There is a subtler point worth internalizing here: API keys create an identity vacuum. When a request arrives with only an API key, the server knows which application is asking but has no idea who inside that application triggered the request or why. This makes forensic investigation difficult after an incident. If three developers share the same API key and one of them leaks it, the server logs cannot distinguish between legitimate and malicious requests. This is why organizations with mature security practices assign individual API keys per developer, per environment, and per service -- not because each key does anything differently, but because attribution matters when things go wrong.
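The attribution point can be made concrete with a small server-side sketch. Assuming a key registry that records who each key was issued to (the registry and key values here are hypothetical), every request can be logged with an owner attached:

```python
# Sketch: one key per developer/environment so server logs carry attribution.
# Keys, names, and the registry itself are hypothetical.

KEY_REGISTRY = {
    "key-a1b2": {"developer": "alice", "environment": "production"},
    "key-c3d4": {"developer": "bob",   "environment": "staging"},
}

def log_request(api_key, endpoint):
    """Resolve the key to its owner before writing the access log line."""
    owner = KEY_REGISTRY.get(
        api_key, {"developer": "unknown", "environment": "unknown"}
    )
    return f"{owner['developer']}@{owner['environment']} -> {endpoint}"

print(log_request("key-a1b2", "/v1/forecast"))
# alice@production -> /v1/forecast
```

With a single shared key, every log line would read the same; with per-developer keys, the log answers "who" as well as "what" after an incident.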
OAuth 2.0: Delegated Access for Users
OAuth 2.0 is an authorization framework designed for a specific scenario: your application needs to access a user's data on another service without seeing that user's password. Originally specified in RFC 6749 (October 2012), OAuth 2.0 has become the industry standard for delegated authorization. When you click "Sign in with Google" on a website, OAuth 2.0 is the protocol that lets that website request limited access to your Google account. Your password never leaves Google -- the website receives a temporary access token instead.
OAuth 2.0 involves multiple parties working together: the user who owns the data, the client (your application), the authorization server (like Google or GitHub), and the resource server that hosts the data. The process follows specific flows (called "grant types") depending on the type of application. The two you will encounter frequently are the Authorization Code flow (for web and mobile apps) and the Client Credentials flow (for machine-to-machine communication).
```python
# OAuth 2.0 Client Credentials flow (machine-to-machine)
import requests
import os

# Step 1: Exchange credentials for an access token
token_response = requests.post(
    "https://auth.example.com/oauth/token",
    data={
        "grant_type": "client_credentials",
        "scope": "read:data",
    },
    auth=(
        os.environ["CLIENT_ID"],
        os.environ["CLIENT_SECRET"],
    ),
)
access_token = token_response.json()["access_token"]

# Step 2: Use the token to call the API
response = requests.get(
    "https://api.example.com/v1/resources",
    headers={"Authorization": f"Bearer {access_token}"},
)
print(response.json())
```
OAuth 2.0 is more complex than API keys because it involves a multi-step token exchange. But this complexity buys you significant security advantages: tokens are short-lived (limiting damage from theft), scoped (restricting what the application can do), and revocable (allowing immediate access removal without changing passwords).
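Because tokens are short-lived, well-behaved clients cache them and re-fetch only when expiry approaches, rather than hitting the token endpoint on every request. A sketch under the assumption that the token endpoint returns the standard `access_token` and `expires_in` fields; `fetch_token` stands in for the POST shown above:

```python
import time

class TokenCache:
    """Cache an OAuth access token and re-fetch shortly before it expires."""

    def __init__(self, fetch_token, skew=60):
        self._fetch = fetch_token  # callable returning the token endpoint's JSON
        self._skew = skew          # refresh this many seconds before expiry
        self._token = None
        self._expires_at = 0.0

    def get(self):
        if self._token is None or time.time() >= self._expires_at - self._skew:
            data = self._fetch()   # e.g. the requests.post(...) call shown earlier
            self._token = data["access_token"]
            self._expires_at = time.time() + data["expires_in"]
        return self._token

# Hypothetical stand-in for the real token request:
cache = TokenCache(lambda: {"access_token": "tok-1", "expires_in": 3600})
print(cache.get())  # "tok-1" -- fetched once; later calls reuse the cached token
```

The `skew` margin avoids the race where a token expires between the client's check and the server's validation.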
JWT: A Token Format, Not an Auth Method
JWT (JSON Web Token) is the piece that confuses beginners the most, because it is not an authentication method in the same category as API keys or OAuth 2.0. JWT is a token format -- a way of packaging information into a compact, signed string. Defined in RFC 7519 (May 2015), JWT provides a standardized, URL-safe means of representing claims to be transferred between two parties. OAuth 2.0 often uses JWTs as the format for its access tokens, but you can also use JWTs independently of OAuth.
A JWT has three parts separated by dots: a header (declares the signing algorithm), a payload (carries claims like user ID, role, and expiration time), and a signature (proves the token has not been tampered with). RFC 7519 defines seven registered claim names -- iss (issuer), sub (subject), aud (audience), exp (expiration time), nbf (not before), iat (issued at), and jti (JWT ID) -- as standard fields for interoperability. The payload is Base64URL-encoded but not encrypted -- anyone can decode and read it. The signature is what makes it trustworthy.
```python
# Creating and verifying a JWT with PyJWT
import jwt
import datetime

SECRET_KEY = "your-secret-key"  # demo only -- load from the environment in real code

# Create a token with claims
token = jwt.encode(
    {
        "sub": "user_42",
        "role": "editor",
        "exp": datetime.datetime.now(datetime.timezone.utc)
        + datetime.timedelta(hours=1),
    },
    SECRET_KEY,
    algorithm="HS256",
)
print(token)
# eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...

# Verify and decode the token
payload = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
print(payload["sub"])   # "user_42"
print(payload["role"])  # "editor"
```
The key advantage of JWTs is that they are self-contained. The server does not need to look up session data in a database to identify the user -- all the information is right there in the token. This makes JWTs a natural fit for stateless API authentication, microservices architectures, and any system where you want to avoid server-side session storage.
But self-containment introduces a tension that is important to understand early: the more information a token carries, the more damage its exposure causes. A session ID stored in a cookie is meaningless to an attacker without access to the server's session store. A JWT, by contrast, tells the attacker exactly who the user is, what role they have, when the token expires, and what scopes they can access -- all without touching the server. This is the trade-off of statelessness: you eliminate the database lookup, but you broadcast the user's authorization context in every request. The signature ensures the token was not tampered with, but it does nothing to prevent the token from being read, replayed, or used by someone who intercepted it.
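You can verify this readability claim with nothing but the standard library. The sketch below builds a sample token (with a fake signature and hypothetical claims) and then decodes its payload without any key -- exactly what an attacker who intercepts a real token can do:

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64URL-encode without padding, as JWT segments are encoded."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Build a sample token (header.payload.signature); claims are hypothetical.
header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "user_42", "role": "editor"}).encode())
token = f"{header}.{payload}.fake-signature"

# Anyone holding the token can read the payload -- no secret key required.
payload_b64 = token.split(".")[1]
padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore Base64 padding
claims = json.loads(base64.urlsafe_b64decode(padded))
print(claims)  # {'sub': 'user_42', 'role': 'editor'}
```

Nothing in this snippet checks the signature; signature verification protects integrity, not confidentiality.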
JWTs are signed, not encrypted. Never store sensitive data like passwords, credit card numbers, or personal identification numbers in a JWT payload. The signature guarantees integrity (no tampering), not confidentiality (no reading). If you need both integrity and confidentiality, RFC 7519 defines a Nested JWT structure where a signed JWT is used as the plaintext of a JSON Web Encryption (JWE, RFC 7516) structure -- but this adds significant complexity and is not common in API authentication scenarios.
How They Work Together
These three methods are not mutually exclusive -- they often work together in the same system. A common pattern looks like this: your application uses OAuth 2.0 to obtain an access token from an authorization server. That access token is formatted as a JWT containing claims about the user's identity and permissions. Your application sends this JWT as a Bearer token in the Authorization header of each API request. Meanwhile, your internal microservices might use API keys for simple service-to-service calls where user context is not needed.
Understanding that JWT is a token format (not a competing auth method) and that OAuth 2.0 is a framework (not a single protocol) clears up the confusion that leads beginners to ask "should I use OAuth or JWT?" The answer is usually "both -- OAuth handles the flow, JWT handles the token."
The Trust Spectrum: A Mental Model
Here is a framework that makes the relationship between these three methods easier to internalize: think of API authentication as a spectrum of trust, where each method occupies a different position based on how much the server trusts the caller and how much proof the caller provides.
At one end of the spectrum sits the API key. It represents maximum trust in the caller and minimum proof of identity. The server says: "If you have this key, I believe you are who you say you are, and I will give you the same access every time, no questions asked." This is the model of a building master key -- powerful, simple, and dangerous if it falls into the wrong hands. There is no concept of "what this specific person is allowed to do" because the key does not represent a person. It represents an application, and the server trusts that application completely.
In the middle sits JWT as a standalone token. The server extends conditional trust: "I trust the entity that signed this token, and I will honor whatever claims are inside it until it expires." The server does not need to phone home to verify the token -- it can validate the signature locally. This is the model of a notarized letter of introduction. The letter carries your identity and permissions, it is tamper-evident (the notary's seal), but it is not private (anyone can read the letter). Trust is bounded by time (expiration) and by the claims inside.
At the other end sits OAuth 2.0. It represents minimum assumed trust and maximum proof of identity. The server says: "I do not trust your application at all. I trust the user, and I trust the authorization server. Your application must prove, through a multi-step verified exchange, that the user explicitly granted you limited, temporary permission to act on their behalf." This is the model of a power of attorney -- legally specific, time-limited, revocable, and requiring the direct involvement of the person granting the authority.
The further you move along the trust spectrum from API keys toward OAuth 2.0, the more ceremony is involved -- but also the more control you gain over who can do what, for how long, and with what recourse if something goes wrong.
This spectrum is not about "better or worse." It is about matching the level of trust to the level of risk. A cron job pulling weather data from your own server has a different trust profile than a third-party mobile app requesting access to a user's bank transactions. The trust spectrum helps you see that the choice is not arbitrary -- it follows directly from the threat model of each scenario.
Refresh Tokens: Keeping Users Logged In Securely
If access tokens expire in 15 to 30 minutes, how do users stay logged in for hours or days without constantly re-entering their password? This is the problem refresh tokens solve, and it is a pattern that comes up in nearly every OAuth 2.0 implementation.
When your application first authenticates through OAuth 2.0, the authorization server typically returns two tokens: a short-lived access token and a longer-lived refresh token. The access token is what your application sends with each API request. The refresh token stays stored securely and is only used for one purpose -- requesting a new access token when the current one expires.
```python
import requests
import os

# When the access token expires, use the refresh token to get a new one
refresh_response = requests.post(
    "https://auth.example.com/oauth/token",
    data={
        "grant_type": "refresh_token",
        "refresh_token": os.environ["REFRESH_TOKEN"],
        "client_id": os.environ["CLIENT_ID"],
    },
)
tokens = refresh_response.json()
new_access_token = tokens["access_token"]
new_refresh_token = tokens.get("refresh_token")  # The server may issue a new one

# Use the new access token for API calls
response = requests.get(
    "https://api.example.com/v1/resources",
    headers={"Authorization": f"Bearer {new_access_token}"},
)
print(response.json())
```
The security advantage of this pattern is containment. The access token is exposed on every API request, so keeping its lifespan short (15 to 30 minutes) limits the window an attacker has if they intercept it. The refresh token is only sent to the authorization server's token endpoint, reducing its exposure surface. If the refresh token is compromised, the authorization server can revoke it without affecting other users or requiring a password change.
Modern best practice goes a step further with refresh token rotation. Each time your application uses a refresh token to get a new access token, the authorization server also issues a brand-new refresh token and invalidates the old one. If an attacker tries to use a stolen refresh token that has already been rotated, the authorization server detects the reuse and can revoke the entire token family -- immediately logging out all sessions for that user. This pattern is recommended by the OAuth 2.0 Security Best Current Practice (RFC 9700, Section 4.14.2, published January 2025) and is a requirement in the upcoming OAuth 2.1 specification for public clients.
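The server side of rotation can be sketched in a few lines. In this simplified model (an in-memory dict stands in for the authorization server's database), each refresh token belongs to a "family," only the most recently issued member is valid, and presenting a stale member revokes the whole family:

```python
import secrets

# In-memory stand-in for the authorization server's token store:
# family_id -> currently valid refresh token
families = {}

def issue_refresh_token(family_id):
    token = secrets.token_urlsafe(32)
    families[family_id] = token
    return token

def rotate(family_id, presented_token):
    """Honor the current token once; treat reuse of an old token as theft."""
    current = families.get(family_id)
    if current is None:
        raise PermissionError("unknown or already-revoked family")
    if presented_token != current:
        # Reuse of a rotated token: revoke the family (log out everywhere).
        del families[family_id]
        raise PermissionError("refresh token reuse detected; family revoked")
    return issue_refresh_token(family_id)  # rotate: the old token is now invalid

first = issue_refresh_token("user_42-session-1")
second = rotate("user_42-session-1", first)  # normal rotation succeeds
# rotate("user_42-session-1", first)         # replaying the old token revokes the family
```

A production implementation would persist families durably and log the reuse event, but the detection logic is this simple at its core.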
PKCE: Why Every OAuth Flow Needs It Now
The Authorization Code flow introduced earlier is almost always paired with PKCE (pronounced "pixie"), short for Proof Key for Code Exchange. Defined in RFC 7636 (September 2015), PKCE closes a specific security hole in the OAuth 2.0 authorization code flow that affects every type of application.
Here is the problem PKCE solves. In the standard authorization code flow, the authorization server redirects the user back to your application with an authorization code in the URL. Your application then exchanges that code for an access token. But if an attacker intercepts that authorization code during the redirect -- through a malicious app on the same device, a compromised browser extension, or a man-in-the-middle attack -- they can exchange it for a token before your application does. The attacker now has access to the user's data.
PKCE prevents this by adding a verification step that ties the token exchange to the specific client that started the flow. Before redirecting the user to the authorization server, your application generates a random string called a code verifier and creates a hashed version called a code challenge. The code challenge is sent with the initial authorization request. When your application later exchanges the authorization code for a token, it sends the original code verifier. The authorization server hashes the verifier and compares it to the challenge it received earlier. If they do not match, the exchange is rejected.
```python
import hashlib
import base64
import os
import requests

# Step 1: Generate a random code verifier (43-128 characters)
code_verifier = base64.urlsafe_b64encode(os.urandom(32)).rstrip(b"=").decode()

# Step 2: Create the code challenge by hashing the verifier
code_challenge = (
    base64.urlsafe_b64encode(
        hashlib.sha256(code_verifier.encode()).digest()
    )
    .rstrip(b"=")
    .decode()
)

# Step 3: Include the challenge in the authorization request
auth_url = (
    "https://auth.example.com/authorize"
    "?response_type=code"
    f"&client_id={os.environ['CLIENT_ID']}"
    "&redirect_uri=http://localhost:8080/callback"
    f"&code_challenge={code_challenge}"
    "&code_challenge_method=S256"
    "&scope=read:data"
)

# Step 4: After receiving the authorization code, exchange it with the verifier
token_response = requests.post(
    "https://auth.example.com/oauth/token",
    data={
        "grant_type": "authorization_code",
        "code": "AUTHORIZATION_CODE_FROM_CALLBACK",
        "redirect_uri": "http://localhost:8080/callback",
        "client_id": os.environ["CLIENT_ID"],
        "code_verifier": code_verifier,  # Proves we started this flow
    },
)
print(token_response.json())
```
An attacker who intercepts the authorization code cannot complete the exchange because they never had the original code verifier. The verifier is generated and stored locally by your application and is never sent over the network until the token exchange step, where it goes directly to the authorization server over HTTPS.
PKCE was originally designed for mobile and single-page applications that cannot securely store a client secret (RFC 7636). The upcoming OAuth 2.1 specification (draft-ietf-oauth-v2-1-15) makes PKCE mandatory for all clients, including traditional server-side web applications. RFC 9700 Section 2.1.1 already recommends PKCE for all clients as a best current practice. If you are implementing OAuth today, use PKCE regardless of your application type.
What OAuth 2.1 Consolidates
OAuth 2.1 is not a new protocol. It is a consolidation -- a single document that absorbs a decade of security lessons, extensions, and best practices that accumulated around OAuth 2.0 since its original publication in 2012. Aaron Parecki, co-author of the OAuth 2.1 specification and Director of Identity Standards at Okta, has described the effort as primarily aimed at making the spec more approachable by drastically cutting the number of documents implementers need to read (source: Identity, Unlocked podcast). Understanding what OAuth 2.1 changes helps you see not just what to implement, but why the ecosystem evolved this way.
The OAuth 2.0 Security Best Current Practice (RFC 9700, published January 2025) laid the groundwork. It formalized threats that practitioners had been mitigating informally for years -- authorization code interception, redirect URI manipulation, refresh token theft, and credential leakage through browser history and referrer headers. OAuth 2.1 (draft-ietf-oauth-v2-1-15, dated March 2, 2026) takes those recommendations and bakes them directly into the core specification, so implementers no longer need to cross-reference multiple RFCs to build a secure flow.
Three changes define the shift. First, PKCE becomes mandatory for all authorization code flows, not just public clients. This extends the protection originally specified in RFC 7636 to every client type. Second, the implicit grant and the Resource Owner Password Credentials (ROPC) grant are formally removed. The implicit grant returned tokens directly in URL fragments, exposing them in browser history and referrer headers. ROPC required applications to collect user passwords directly, violating the core principle that OAuth exists to avoid sharing credentials. Both patterns had been deprecated in RFC 9700 Section 2.1.2; OAuth 2.1 makes the removal official. Third, refresh tokens for public clients must be either sender-constrained or one-time use, closing the window where a stolen refresh token could be silently reused indefinitely.
The practical takeaway is reassuring. If you implement OAuth today following the PKCE and refresh token rotation patterns described in this article, you are already aligned with where the specification is headed. OAuth 2.1 does not ask you to learn something new -- it asks you to stop doing the things that were never safe to begin with.
What Happens When a JWT Gets Stolen?
This is the question the comparison table hints at but does not fully answer. JWTs are stateless -- the server does not track which tokens it has issued. That means there is no session record to delete, no database row to remove. A stolen JWT remains valid until it expires. So what are your options?
The first and simplest strategy is short expiration times. If your access tokens expire in 15 minutes, a stolen token is useful for at most 15 minutes. This is why the refresh token pattern matters so much. Short-lived access tokens combined with longer-lived refresh tokens (which can be revoked server-side) give you both stateless performance and a revocation path.
The second strategy is a token blocklist (sometimes called a denylist). When a user logs out or you detect a compromise, you add the token's unique identifier (the jti claim) to a blocklist. Every request then checks whether the incoming token's jti appears on the list. Redis is a common choice for this because it provides fast lookups and supports automatic expiration -- you can set each blocklist entry to expire when the original token would have expired, so the list stays small.
```python
import redis
import jwt
import datetime

redis_client = redis.StrictRedis(host="localhost", port=6379, db=0)
SECRET_KEY = "your-secret-key"

def revoke_token(token: str):
    """Add a token to the blocklist until it would have expired."""
    payload = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
    jti = payload["jti"]
    exp = datetime.datetime.fromtimestamp(
        payload["exp"], tz=datetime.timezone.utc
    )
    remaining = exp - datetime.datetime.now(datetime.timezone.utc)
    # Store in Redis with a TTL matching the token's remaining lifetime
    redis_client.setex(f"blocklist:{jti}", int(remaining.total_seconds()), "1")

def is_token_revoked(token: str) -> bool:
    """Check if a token has been revoked."""
    payload = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
    return redis_client.exists(f"blocklist:{payload['jti']}") > 0
```
The third strategy is token versioning. Each user record in your database includes a token version number. Every JWT you issue contains this version in its claims. When you need to revoke all tokens for a user (account compromise, password change, suspicious activity), you increment the version number. Any JWT carrying the old version fails validation. This approach handles the "log out everywhere" scenario without maintaining a per-token blocklist.
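Token versioning can be sketched without any JWT library at all, since the mechanism is just a version comparison. In this simplified model, an in-memory dict stands in for the user database and plain dicts stand in for verified token claims:

```python
# Sketch of token versioning. In a real system the version lives in the user
# database and the claims come from a signature-verified JWT; plain dicts
# stand in for both here.

users = {"user_42": {"token_version": 1}}

def issue_claims(user_id):
    """Stamp the user's current version into every token issued."""
    return {"sub": user_id, "ver": users[user_id]["token_version"]}

def validate_claims(claims):
    """Reject tokens carrying a stale version."""
    return claims["ver"] == users[claims["sub"]]["token_version"]

def revoke_all_tokens(user_id):
    """'Log out everywhere': bump the version so every old token fails."""
    users[user_id]["token_version"] += 1

claims = issue_claims("user_42")
print(validate_claims(claims))  # True
revoke_all_tokens("user_42")
print(validate_claims(claims))  # False -- every previously issued token is now invalid
```

The cost is one database read per request (to fetch the current version), which is why this strategy is usually reserved for the emergency "revoke everything" path rather than routine logout.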
In practice, a well-designed system combines these strategies. Short-lived access tokens limit the exposure window for routine cases. A Redis-backed blocklist handles targeted revocation (single session logout). Token versioning provides the emergency "revoke everything" capability. The combination gives you defense in depth without sacrificing the performance benefits of stateless token validation.
Think Like an Attacker: Cascading Failures
Understanding authentication methods in isolation is useful. Understanding how they fail together is what separates a secure system from one that looks secure until it does not. Consider a scenario that traces one compromised credential through an entire system to show how authentication failures cascade.
An internal microservice uses a static API key to call a data service. That key is stored in an environment variable on a container, and the container image is pushed to a registry with overly broad read access. An attacker who gains access to the registry now has the API key. Because the key has no expiration, no scopes, and no per-request logging tied to a user identity, the attacker can make the same calls the microservice makes -- and nobody notices because the traffic looks identical to legitimate service-to-service communication.
But the data service also accepts OAuth 2.0 bearer tokens for its user-facing endpoints. The attacker notices that the API key grants access to an internal endpoint that returns user profile data, including email addresses. With those emails, the attacker crafts a phishing campaign targeting users of the application. Some users click through, authenticate on a spoofed login page, and the attacker harvests their refresh tokens. Because the system does not implement refresh token rotation, the stolen refresh tokens remain valid indefinitely. The attacker now has persistent access to user accounts.
Now trace this backward. The root cause was not any single weakness. It was the combination of a static API key with no rotation policy, a container registry with insufficient access controls, an internal endpoint that returned user data without requiring user-level authentication, and a refresh token implementation that did not enforce one-time use. Each of these is a beginner mistake described elsewhere in this article. Together, they form a kill chain.
Attackers do not evaluate authentication methods one at a time. They look for the seams -- the places where one method hands off trust to another without verifying the trust is still justified. The transition from API key access to user-level data, or from a valid JWT to an unrotated refresh token, is where the architecture is weakest. When you design your authentication strategy, examine every boundary where one trust model meets another.
This kind of thinking is what threat modeling provides. Before you write a single line of authentication code, map the trust boundaries in your system. Where does an API key's implicit trust intersect with a user-scoped resource? Where does a stateless JWT's lack of revocation create a gap that a refresh token must cover? Where does a credential stored in one environment get exposed in another? The answers to these questions shape your authentication architecture far more than any individual method choice.
Storing Credentials: Development vs Production
Knowing how API keys, OAuth tokens, and JWT secrets work is only half the picture. Where you store these credentials determines how quickly a mistake turns into a breach. The storage strategy looks different depending on whether you are developing locally or running in production.
During local development, the standard practice is environment variables loaded from a .env file that is excluded from version control. The python-dotenv package handles this in Python. Create a .env file in your project root, add it to .gitignore, and load it at the start of your script.
```
# .env file (add this to .gitignore -- never commit it)
WEATHER_API_KEY=sk-abc123def456
CLIENT_ID=my-app-client-id
CLIENT_SECRET=my-app-client-secret
JWT_SECRET=a-long-random-string-used-for-signing
```

```python
# Load environment variables from .env
from dotenv import load_dotenv
import os

load_dotenv()
api_key = os.environ["WEATHER_API_KEY"]
jwt_secret = os.environ["JWT_SECRET"]
```
In production, a .env file on a server is a liability. Files can be read by other processes, accidentally included in container images, or exposed through misconfigured web servers. Production environments should use a dedicated secrets manager -- a service designed specifically for storing, rotating, and auditing access to sensitive credentials. AWS Secrets Manager, Google Cloud Secret Manager, HashiCorp Vault, and Azure Key Vault are the common options. These services encrypt credentials at rest, control access through IAM policies, log every access event, and support automatic rotation.
The principle is the same regardless of which authentication method you are using: credentials should never appear in source code, never be committed to a repository, and never be stored in plain text on a production system. Environment variables are the minimum baseline. A secrets manager is the production standard. For a broader look at defensive coding practices beyond authentication, see our guide to secure Python coding.
Side-by-Side Comparison
| Feature | API Keys | OAuth 2.0 | JWT |
|---|---|---|---|
| What it is | A static credential string | An authorization framework with token exchange flows | A signed token format carrying claims |
| Identifies | An application | A user (or application via client credentials) | Whoever or whatever the claims describe |
| Expiration | None (unless manually set) | Built-in; access tokens are short-lived | Via the exp claim |
| Scopes / permissions | No | Yes -- scopes limit what the token can access | Can carry role/permission claims |
| Revocation | Revoke the key | Revoke the token or refresh token | Difficult (stateless); requires a blocklist |
| Complexity | Low | High (multi-step flows, redirect URIs) | Medium (signing, verification, claims) |
| Best for | Server-to-server, internal APIs, simple scripts | User-facing apps, third-party integrations | Stateless auth, microservices, token format within OAuth 2.0 |
Decision Framework: Which Method Should You Use?
The right choice depends on who is calling your API and what level of security you need. Here is a practical decision tree, followed by a deeper look at the trade-offs that generic recommendations tend to skip.
Is a user involved? If your application acts on behalf of a human user (accessing their data, performing actions in their name), use OAuth 2.0 with JWT access tokens. The Authorization Code flow with PKCE is the recommended grant type for web, mobile, and desktop applications.
Is it machine-to-machine only? If no user is involved -- for example, a backend service calling another service, a scheduled job, or a CI/CD pipeline -- you have two solid options. Use API keys if simplicity is the priority and you trust the environment. Use OAuth 2.0 Client Credentials if you need scoped, time-limited access tokens with built-in expiration.
Are you building a public API for third-party developers? Use OAuth 2.0. Third-party developers should never see your users' credentials. OAuth's delegated access model is designed precisely for this scenario. If you are still in the design phase, our guide on how to build an API with Python covers the framework choices and security decisions that precede authentication implementation.
Do you need stateless authentication for microservices? Use JWTs as your token format. Each service can verify the token independently using the signing key, without calling back to a central authentication server on every request.
Going Deeper: The Questions Behind the Questions
The decision tree above gives you a starting point, but real-world systems demand more nuance. Here are the deeper evaluations that separate a secure architecture from one that just looks like one on a whiteboard.
Evaluate your revocation latency tolerance. The standard advice is "use short-lived JWTs." But how short depends on how much damage a compromised token can do in your specific system before it expires. A read-only analytics dashboard can tolerate a 30-minute window. A financial transfer endpoint cannot. If your system processes irreversible actions -- money transfers, data deletion, permission escalation -- you need either sub-minute token lifetimes (which puts real pressure on your refresh token infrastructure) or a synchronous revocation check on every request, which reintroduces the statefulness you were trying to avoid. Map your endpoints by consequence severity, then set expiration policies per-endpoint or per-scope rather than applying one blanket lifetime across your entire API.
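Per-scope expiration policies can be expressed as data rather than scattered conditionals. A sketch under hypothetical scope names and lifetimes -- the point is the structure, not the specific numbers:

```python
import datetime

# Hypothetical mapping from consequence severity to token lifetime.
SCOPE_LIFETIMES = {
    "read:analytics": datetime.timedelta(minutes=30),     # low consequence
    "write:billing": datetime.timedelta(minutes=5),       # irreversible actions
    "admin:delete_account": datetime.timedelta(seconds=60),
}

def expiry_for(scopes):
    """A token carrying multiple scopes inherits the strictest lifetime."""
    lifetime = min(SCOPE_LIFETIMES[s] for s in scopes)
    return datetime.datetime.now(datetime.timezone.utc) + lifetime

# A token with both scopes gets the stricter 5-minute billing lifetime.
exp = expiry_for(["read:analytics", "write:billing"])
```

The `exp` value would become the `exp` claim at token issuance time, so the policy lives in one table instead of being re-derived per endpoint.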
Evaluate your key distribution problem before your authentication method. Every method shares the same upstream dependency: secrets need to reach the right place securely. API keys need to reach application servers. Client secrets need to reach your OAuth configuration. JWT signing keys need to reach every service that validates tokens. The method you choose inherits the security posture of your deployment pipeline. If your CI/CD system stores secrets in plaintext environment variables on build runners, switching from API keys to OAuth does not improve your security -- it just moves the exposure to a different credential. Before choosing an authentication method, audit the path each secret takes from creation to use. If that path crosses untrusted networks, shared storage, or environments with overly broad access, fix the path first.
Evaluate whether your "internal" traffic will stay internal. Many teams use API keys for service-to-service communication because the services are "internal." But internal is a network topology, not a security guarantee. Container orchestration platforms, cloud VPCs, and service meshes all have blast radii. If an attacker gains a foothold inside your network boundary -- through a compromised dependency, a misconfigured ingress rule, or a supply chain attack -- every API key-authenticated service becomes accessible. The deeper question is not "is this traffic internal?" but "what is the cost if an attacker reaches this endpoint from inside my network?" If the answer involves user data, financial records, or administrative actions, the endpoint deserves scoped, time-limited tokens even for service-to-service calls. The Client Credentials flow exists precisely for this scenario, and its operational overhead is small compared to the incident response cost of a lateral movement attack through API key-authenticated internal services.
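For reference, a Client Credentials token request is small. This sketch builds the headers and form body defined by RFC 6749 section 4.4 but deliberately stops short of sending them (you would POST these to your provider's token endpoint, e.g. with `requests`); the client ID, secret, and scope are placeholders:

```python
import base64

TOKEN_URL = "https://auth.internal.example/oauth/token"  # hypothetical endpoint

def client_credentials_request(client_id: str, client_secret: str, scope: str):
    """Build headers and body for an RFC 6749 section 4.4 token request.

    Returns the pieces only -- actually sending them is left to the caller,
    so this sketch stays network-free.
    """
    basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": f"Basic {basic}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = {"grant_type": "client_credentials", "scope": scope}
    return headers, body

headers, body = client_credentials_request("svc-payments", "s3cr3t", "read:ledger")
```

The response is a short-lived, scoped access token -- which is the whole upgrade over a static API key: the credential an attacker steals in transit expires on its own.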
Evaluate scope granularity as an architectural decision, not an afterthought. OAuth 2.0 scopes are the mechanism that enforces least privilege, but many implementations define scopes too broadly (e.g., read and write) and then rely on application-level logic to restrict access further. This creates a gap between what the token permits and what the application enforces -- and that gap is exactly where privilege escalation vulnerabilities live. Design scopes around resources and actions: read:user_profile, write:billing, admin:delete_account. If a compromised token with a write scope can modify billing records, user settings, and admin configurations equally, your scopes are not protecting you -- they are giving you a false sense of granularity. The cost of fine-grained scopes is upfront design work. The cost of coarse scopes is invisible until an incident forces you to explain why a token meant for reading notifications could also approve wire transfers.
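Fine-grained scopes only protect you if they are enforced at the endpoint, not just minted into the token. A common pattern is a small decorator that rejects any request whose token lacks the required scope; the claim shape below (a space-delimited `scope` string, as OAuth conventionally uses) and the handler are illustrative:

```python
from functools import wraps

class Forbidden(Exception):
    """Raised when the token's scopes do not cover the requested action."""

def require_scope(scope: str):
    """Decorator: reject calls whose token claims lack the given scope."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(claims: dict, *args, **kwargs):
            granted = claims.get("scope", "").split()
            if scope not in granted:
                raise Forbidden(f"missing scope: {scope}")
            return fn(claims, *args, **kwargs)
        return wrapper
    return decorator

@require_scope("write:billing")
def update_billing(claims: dict, record_id: str) -> str:
    # hypothetical handler -- only reachable with the write:billing scope
    return f"updated {record_id}"
```

With enforcement living next to the endpoint, the gap between "what the token permits" and "what the application allows" closes: a token scoped for read:notifications simply cannot reach the billing handler.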
Evaluate token propagation across service boundaries. In microservices architectures, a common pattern is to forward the user's JWT from the gateway to every downstream service. This is convenient but creates a propagation problem: every service in the chain now holds a token that grants the full set of the original user's permissions, even if that service only needs a narrow slice of access. A payment service receiving a token scoped for read:profile and write:payments can technically make profile-related API calls that have nothing to do with its function. The more rigorous approach is token exchange (RFC 8693), where each service exchanges the incoming token for a new, narrowly scoped token before calling the next service downstream. This adds latency and complexity, but it enforces least privilege at every hop rather than only at the gateway. For systems where a single compromised microservice should not grant an attacker access to unrelated resources, token exchange is worth the overhead.
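The shape of an RFC 8693 token exchange request is worth seeing, because the parameter names are standardized even though every provider's endpoint differs. This sketch only assembles the form parameters (the `audience` value and downstream scope are assumptions for illustration); sending them to your authorization server's token endpoint is provider-specific:

```python
def token_exchange_request(subject_token: str, requested_scope: str) -> dict:
    """Build RFC 8693 token-exchange form parameters (no network call).

    The incoming user token becomes the subject_token; the result, once
    the authorization server responds, is a new token narrowed to
    requested_scope for the named audience.
    """
    return {
        # grant_type and subject_token_type URNs are fixed by RFC 8693
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "scope": requested_scope,
        "audience": "payments-service",  # hypothetical downstream service name
    }

params = token_exchange_request("eyJhbGciOi...", "write:payments")
```

Each hop in the call chain repeats this exchange, so a compromised service holds only the narrow token minted for it, never the user's full grant.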
You can combine methods in the same project. Use API keys for internal service calls and OAuth 2.0 + JWT for user-facing endpoints. The key is choosing the right method for each endpoint based on who is calling it and what security level is required. But go further: within a single authentication method, differentiate by consequence. Not all endpoints deserve the same token lifetime, the same scope breadth, or the same revocation strategy. The decision is not just which method, but how tightly configured that method needs to be for each specific trust boundary.
Common Mistakes Beginners Make
Treating API keys like user authentication. API keys identify an application, not a user. If you need to know which specific person is making a request, API keys alone are not sufficient -- you need OAuth 2.0 or session-based authentication.
Choosing OAuth 2.0 when API keys would suffice. OAuth 2.0 adds significant complexity. If you are writing a script that calls a weather API from your own server, an API key stored in an environment variable is perfectly appropriate. You do not need an authorization code flow for a cron job.
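For that cron-job case, "an API key in an environment variable" is all the code you need. The variable name and header name below are placeholders -- use whatever your provider's documentation specifies, and never hardcode the key itself:

```python
import os

def weather_headers() -> dict:
    """Read the API key from the environment and build request headers.

    WEATHER_API_KEY and X-API-Key are hypothetical names; check your
    provider's docs for the real ones.
    """
    api_key = os.environ.get("WEATHER_API_KEY")
    if not api_key:
        raise RuntimeError("WEATHER_API_KEY is not set")
    return {"X-API-Key": api_key}
```

Failing loudly when the variable is missing beats sending an unauthenticated request and debugging a 401 later.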
Confusing JWT with an authentication method. JWT is a token format. Saying "I use JWT for authentication" is like saying "I use JSON for my database" -- it describes the format, not the system. JWTs need to be created, distributed, and verified within an authentication system, whether that is OAuth 2.0, a custom login endpoint, or something else.
Storing JWTs in browser localStorage. This exposes tokens to cross-site scripting (XSS) attacks. For web applications, store JWTs in HTTP-only, secure cookies with the SameSite attribute set, which makes them inaccessible to JavaScript.
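What that cookie looks like is easy to show with the standard library's `http.cookies` module; a web framework would set the same attributes through its own response API. The cookie name and 15-minute lifetime are illustrative:

```python
from http.cookies import SimpleCookie

def jwt_cookie_header(token: str) -> str:
    """Build a Set-Cookie header value that keeps the JWT out of JavaScript's reach."""
    cookie = SimpleCookie()
    cookie["access_token"] = token   # "access_token" is a hypothetical cookie name
    morsel = cookie["access_token"]
    morsel["httponly"] = True        # invisible to document.cookie, blunting XSS theft
    morsel["secure"] = True          # sent over HTTPS only
    morsel["samesite"] = "Strict"    # not attached to cross-site requests
    morsel["max-age"] = 900          # matches a 15-minute access token lifetime
    return morsel.OutputString()

header = jwt_cookie_header("eyJhbGciOi...")
```

An XSS payload running in the page can still make requests (the browser attaches the cookie), but it can no longer read the token and exfiltrate it for reuse elsewhere -- which is the specific attack localStorage leaves open.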
Not setting expiration on anything. API keys should be rotated regularly (at least every 90 days). JWT access tokens should expire in 15 to 30 minutes. Refresh tokens should expire in days, not months. If nothing expires, a single compromised credential grants permanent access.
Key Takeaways
- API keys identify applications, not users: They are best for server-to-server communication, internal APIs, and simple scripts where you trust the caller. They are static, have no built-in expiration or scopes, and must be stored as securely as passwords. Be aware of the identity vacuum they create -- when a key is shared or leaked, the server cannot attribute requests to a specific person or context.
- OAuth 2.0 handles delegated access for user-facing applications: When your application needs to act on behalf of a user -- especially with third-party services -- OAuth 2.0 provides scoped, time-limited tokens without exposing user credentials to your application.
- JWT is a token format, not a competing auth method: It packages identity and permission claims into a compact, signed string. OAuth 2.0 frequently uses JWT as its access token format. Understanding that JWT is a "what" (format) while OAuth 2.0 is a "how" (framework) resolves the confusion between them. Remember the trade-off of self-containment: JWTs eliminate database lookups but broadcast the user's authorization context in every request.
- Refresh tokens enable long sessions without long-lived access tokens: Short-lived access tokens (15 to 30 minutes) combined with refresh token rotation give you both security and session continuity. OAuth 2.1 formalizes this pattern and requires sender-constrained or one-time-use refresh tokens for public clients.
- PKCE is mandatory, not optional: Every OAuth authorization code flow should use PKCE to prevent authorization code interception attacks. OAuth 2.1 requires it for all client types -- public and confidential.
- OAuth 2.1 consolidates, it does not reinvent: If you already implement PKCE, refresh token rotation, and avoid the implicit and password grants, you are aligned with where the specification is headed. OAuth 2.1 removes what was never safe and formalizes what practitioners have been doing for years.
- JWT revocation requires a deliberate strategy: Short expiration times limit exposure. Token blocklists (backed by Redis) handle targeted revocation. Token versioning handles the "revoke everything for this user" scenario. A production system typically combines all three.
- Think in trust boundaries, not isolated methods: Attackers exploit the seams where one trust model meets another. A static API key granting access to user-scoped data, or an unrotated refresh token surviving a phished credential -- cascading failures happen at the intersections. Threat model your trust boundaries before you choose your authentication methods.
- Combine methods where it makes sense: Use API keys for internal machine-to-machine calls and OAuth 2.0 with JWT for user-facing endpoints. The right method depends on who is calling, what level of security the endpoint requires, and where that scenario falls on the trust spectrum.
- Always set expiration and rotate credentials: Short-lived tokens, regular key rotation, and secure storage (environment variables for local development, secrets managers for production) apply to every method regardless of which one you choose.
API authentication is not about picking one method and applying it everywhere. It is about understanding the trust relationships in your system and selecting the method that matches each relationship's risk profile. API keys for high-trust, low-ceremony scenarios. OAuth 2.0 for delegated user access where trust must be earned through a verified exchange. JWT as the token format that makes stateless verification possible. In practice, a well-designed system uses a combination of all three -- and the lines between them are not boundaries but seams that require their own security analysis. Add PKCE to every OAuth flow, build a revocation strategy before you need one, store credentials in a secrets manager from day one, map your trust boundaries before you write your first authentication middleware, and you will have a security posture that holds up well beyond a beginner project.