Decorators are one of the features that makes Python feel distinctly different from other languages. They let you wrap a function inside another function, adding behavior before or after the original runs, without ever touching its source code. But knowing how decorators work and knowing when to reach for one are two separate skills. This article walks through the specific, real-world situations where a decorator is the right tool, the situations where it is not, and the patterns that separate clean decorator usage from code that just looks clever.
The @decorator syntax was introduced in Python 2.4 through PEP 318. Before that, developers had to manually reassign function names after transforming them, a process that was both error-prone and difficult to read. The @ symbol solved this by placing the transformation declaration directly above the function it applies to. That syntactic improvement was the entire point: making intent visible at the point of declaration, rather than buried after the function body.
But the syntax being convenient does not mean every function needs a decorator. The question to ask is not "can I make this a decorator?" but "does this behavior belong on multiple functions, and is it truly separate from those functions' core logic?" If the answer to both is yes, a decorator is likely the right pattern. If either answer is no, a plain function call or a class method will usually be simpler and easier to follow.
The Decision Test: Should This Be a Decorator?
Before writing a custom decorator, run the logic through three criteria. First, the behavior should be cross-cutting, meaning it applies to more than one function and is not tied to any single function's business logic. Logging, timing, authentication checks, and caching all qualify because they are concerns that span an entire application rather than living inside one specific operation.
Second, the behavior should operate on the boundary of the function, meaning it cares about the function's inputs, outputs, or execution lifecycle rather than the function's internal state. A decorator that measures how long a function takes to run does not need to know anything about what the function does internally. It only needs to record timestamps before and after calling it. That boundary-level operation is what makes decorators clean.
Third, the behavior should be transparent. The decorated function should still feel like the same function to its callers. If applying a decorator fundamentally changes what a function returns or requires callers to handle new exception types, that is no longer transparent wrapping. That is a new function pretending to be the old one.
Every custom decorator should use functools.wraps. Without it, the wrapped function loses its original __name__, __doc__, and __qualname__ attributes. This breaks debugging tools, documentation generators, and introspection. Apply @functools.wraps(func) to the inner wrapper function as a non-negotiable habit.
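To see exactly what breaks, compare two otherwise identical decorators, one with and one without functools.wraps (the decorator and function names here are illustrative):

```python
import functools

def without_wraps(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

def with_wraps(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@without_wraps
def greet():
    """Say hello."""

@with_wraps
def greet_properly():
    """Say hello."""

print(greet.__name__)           # wrapper  <- original name lost
print(greet_properly.__name__)  # greet_properly
print(greet.__doc__)            # None     <- docstring lost
print(greet_properly.__doc__)   # Say hello.
```

The undecorated-looking call sites behave the same either way; the difference only surfaces in debuggers, help(), and documentation tools, which is precisely why the omission is so easy to miss.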
Here is the base pattern that every custom decorator should follow:
```python
import functools

def my_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # behavior before the function runs
        result = func(*args, **kwargs)
        # behavior after the function runs
        return result
    return wrapper
```
The *args and **kwargs in the wrapper signature ensure the decorator works with any function regardless of its parameter list. The @functools.wraps(func) line preserves the original function's metadata. And the explicit return result ensures the caller receives whatever the original function returned. Omitting that return statement is a common mistake that causes decorated functions to silently return None.
Six Real-World Decorator Use Cases
1. Logging Function Calls
Logging is the textbook use case for decorators. When you need to record what functions are being called, what arguments they receive, and what they return, placing that logging code inside every function creates massive duplication. A logging decorator centralizes that logic in one place.
```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def log_calls(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logger.info(
            "Calling %s with args=%s kwargs=%s",
            func.__name__, args, kwargs
        )
        result = func(*args, **kwargs)
        logger.info(
            "%s returned %s",
            func.__name__, result
        )
        return result
    return wrapper

@log_calls
def calculate_discount(price, percentage):
    return price * (percentage / 100)

calculate_discount(200, 15)
# INFO:__main__:Calling calculate_discount with args=(200, 15) kwargs={}
# INFO:__main__:calculate_discount returned 30.0
```
Every function decorated with @log_calls now produces structured log output without a single line of logging code inside the function body. The function's core responsibility stays focused on its own logic.
2. Caching Expensive Results
Python's standard library includes two built-in caching decorators in the functools module: lru_cache and cache. The lru_cache decorator stores results in a dictionary keyed by the function's arguments. When the same arguments appear again, the cached result is returned instead of re-executing the function. The cache decorator, added in Python 3.9, works identically but has no size limit, making it equivalent to lru_cache(maxsize=None).
```python
from functools import lru_cache

@lru_cache(maxsize=256)
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(100))
# 354224848179261915075

print(fibonacci.cache_info())
# CacheInfo(hits=98, misses=101, maxsize=256, currsize=101)
```
Without @lru_cache, this recursive Fibonacci implementation has exponential time complexity. With the decorator, each unique input is computed exactly once, and subsequent calls for the same value are dictionary lookups. The cache_info() method exposes hit and miss counts, which is useful for verifying that the cache is working as expected in production.
Avoid using @lru_cache on instance methods. Because self is included in the cache key, cached entries hold references to the instance and prevent garbage collection for the lifetime of the process. Use @lru_cache on module-level functions, class methods, or static methods instead.
For cases where you need caching with time-based expiration, the third-party cachetools library provides a TTLCache class that, paired with its @cached decorator, evicts entries after a configurable number of seconds. This is particularly useful in web applications where cached data needs to stay fresh.
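As a rough illustration of how time-based expiration works, here is a minimal stdlib-only sketch; the ttl_cache name and implementation are hypothetical, not the cachetools API:

```python
import functools
import time

def ttl_cache(seconds):
    """Cache results, evicting any entry older than `seconds`.
    A minimal sketch for illustration; use cachetools in real code."""
    def decorator(func):
        store = {}  # key -> (timestamp, result)
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            now = time.monotonic()
            if key in store:
                ts, result = store[key]
                if now - ts < seconds:
                    return result  # still fresh
            result = func(*args, **kwargs)
            store[key] = (now, result)
            return result
        return wrapper
    return decorator

@ttl_cache(seconds=0.1)
def current_data():
    return time.monotonic()

a = current_data()
b = current_data()  # within the TTL window: served from cache
print(a == b)       # True
```

This sketch omits size limits and thread safety, both of which cachetools handles for you.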
3. Retry Logic for Unreliable Operations
Network requests, database connections, and external API calls fail intermittently. Rather than wrapping every call site in a try/except loop with a retry counter, a retry decorator encapsulates that pattern once.
```python
import functools
import time

def retry(max_attempts=3, delay=1.0, exceptions=(Exception,)):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_exception = None
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions as e:
                    last_exception = e
                    if attempt < max_attempts:
                        time.sleep(delay * attempt)
            raise last_exception
        return wrapper
    return decorator

@retry(max_attempts=3, delay=2.0, exceptions=(ConnectionError,))
def fetch_user_profile(user_id):
    # simulating an unreliable API call
    import random
    if random.random() < 0.7:
        raise ConnectionError("Service unavailable")
    return {"id": user_id, "name": "Example User"}
```
This decorator accepts parameters that control how many times to retry, how long to wait between attempts (with linear backoff built in via delay * attempt), and which exception types should trigger a retry. The decorated function looks and behaves identically to its undecorated version from the caller's perspective. Callers do not need to know that retry logic exists.
The retry decorator above demonstrates a parameterized decorator, which requires three levels of nesting: the outer function accepts configuration, the middle function accepts the target function, and the inner function replaces the target function. This three-layer structure is necessary whenever a decorator needs to accept arguments beyond the function it wraps.
4. Authentication and Authorization
Web frameworks like Flask and Django use decorators extensively for route protection. The pattern works for any application where certain functions should only execute after verifying that the caller has permission.
```python
import functools

def require_role(role):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(user, *args, **kwargs):
            if role not in user.get("roles", []):
                raise PermissionError(
                    f"User lacks required role: {role}"
                )
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def delete_all_records(user, table_name):
    return f"Deleted all records from {table_name}"

admin_user = {"name": "Kandi", "roles": ["admin", "editor"]}
viewer_user = {"name": "Guest", "roles": ["viewer"]}

print(delete_all_records(admin_user, "logs"))
# Deleted all records from logs

try:
    delete_all_records(viewer_user, "logs")
except PermissionError as e:
    print(e)
# User lacks required role: admin
```
The @require_role decorator checks the user's role list before the function body runs. If the role check fails, the function never executes. This pattern is particularly powerful because it makes the access requirement visible at the function definition. Anyone reading the code can immediately see that delete_all_records requires admin access, without reading the function body.
5. Input Validation
Validating function arguments is another cross-cutting concern that benefits from decorator extraction. Rather than starting every function with a block of type checks and value range assertions, a validation decorator can enforce constraints declaratively.
```python
import functools
import inspect

def validate_types(**expected_types):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            sig = inspect.signature(func)
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            for param_name, expected_type in expected_types.items():
                if param_name in bound.arguments:
                    value = bound.arguments[param_name]
                    if not isinstance(value, expected_type):
                        raise TypeError(
                            f"Parameter '{param_name}' expected "
                            f"{expected_type.__name__}, "
                            f"got {type(value).__name__}"
                        )
            return func(*args, **kwargs)
        return wrapper
    return decorator

@validate_types(name=str, age=int)
def create_user(name, age, email=None):
    return {"name": name, "age": age, "email": email}

print(create_user("Kandi", 30, email="kandi@example.com"))
# {'name': 'Kandi', 'age': 30, 'email': 'kandi@example.com'}

try:
    create_user("Kandi", "thirty")
except TypeError as e:
    print(e)
# Parameter 'age' expected int, got str
```
This decorator uses inspect.signature to bind the arguments to their parameter names, then checks each specified parameter against its expected type. The validation happens entirely outside the function body, keeping create_user focused on its actual job: creating a user dictionary.
6. Rate Limiting
When interacting with external APIs or exposing your own endpoints, controlling how frequently a function can be called prevents abuse and avoids hitting provider rate limits.
```python
import functools
import time

def rate_limit(max_calls, period):
    def decorator(func):
        calls = []
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            now = time.time()
            # remove calls outside the current window
            while calls and calls[0] <= now - period:
                calls.pop(0)
            if len(calls) >= max_calls:
                wait_time = period - (now - calls[0])
                raise RuntimeError(
                    f"Rate limit exceeded. "
                    f"Try again in {wait_time:.1f} seconds."
                )
            calls.append(now)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@rate_limit(max_calls=5, period=60)
def query_external_api(endpoint):
    return f"Response from {endpoint}"

# First 5 calls succeed
for i in range(5):
    print(query_external_api(f"/data/{i}"))

# 6th call within the same 60-second window raises an error
try:
    query_external_api("/data/5")
except RuntimeError as e:
    print(e)
```
The calls list acts as a sliding window, tracking timestamps of recent invocations. Expired entries are pruned on each call. If the number of calls within the current window exceeds max_calls, the decorator raises an exception rather than allowing the function to execute. This pattern is used heavily in web scraping tools and API client libraries where exceeding rate limits results in temporary bans or throttled responses.
Decorators can be stacked. For example, applying both @retry and @rate_limit to the same function gives you rate-controlled API calls with automatic retry on failure. When stacking, the order matters: decorators apply from bottom to top, so the decorator closest to the def line wraps the function first.
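The bottom-to-top order is easy to verify with two toy decorators (the names shout and exclaim here are purely illustrative):

```python
import functools

def shout(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

def exclaim(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs) + "!"
    return wrapper

@shout
@exclaim
def greet(name):
    return f"hello, {name}"

# exclaim (closest to def) wraps first and appends "!",
# then shout wraps that result and uppercases it
print(greet("kandi"))  # HELLO, KANDI!
```

Swapping the two decorators would uppercase first and append the "!" afterwards; here the result happens to look the same, but with transformations that do not commute (say, retry and rate limiting) the order changes observable behavior.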
When Not to Use Decorators
Decorators are not universally appropriate. There are clear situations where reaching for a decorator introduces complexity without corresponding benefit.
Single-use behavior. If the logic you are wrapping applies to exactly one function and will never be reused, a decorator adds indirection without saving any duplication. A simple function call or an inline block of code will be clearer. The point of decorators is reuse across multiple functions. Without that reuse, the abstraction is overhead.
Deep internal access. Decorators operate on the boundary of a function: they see inputs and outputs. If your added behavior needs to reach into the middle of a function's logic, control its internal flow, or access local variables, a decorator cannot do this cleanly. Refactoring the function itself, or using a context manager, is usually the right approach in those cases.
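For instance, a small context manager, sketched here with hypothetical names, can time just one region inside a function, something a boundary-level decorator cannot do:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # times only the code inside the `with` block, not the whole function
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        print(f"{label}: {elapsed:.4f}s")

def process(items):
    cleaned = [i.strip() for i in items]  # not timed
    with timed("dedupe"):                 # only this region is timed
        result = sorted(set(cleaned))
    return result

print(process([" b", "a ", "c"]))  # prints the timing line, then the list
```

A @timed decorator could only measure process as a whole; the context manager lets the function author choose exactly which statements fall inside the measurement.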
Unclear side effects. A well-designed decorator should not surprise the caller. If applying a decorator changes what a function returns, modifies its arguments before passing them through, or introduces exceptions that the original function never raised, the decorator is violating the transparency principle. Callers should be able to reason about a function's behavior from its signature and docstring without needing to inspect every decorator in the chain.
Excessive stacking. Stacking three, four, or five decorators on a single function makes the execution order difficult to trace. Each decorator wraps the one below it, creating nested layers that must be mentally unwound to understand what happens when the function is called. If you find yourself stacking more than two or three decorators routinely, consider whether a class-based approach or a middleware pipeline would express the same logic more clearly.
| Use Case | Decorator Appropriate? | Alternative |
|---|---|---|
| Logging calls across 20+ functions | Yes | -- |
| Caching pure function results | Yes | -- |
| Retry logic for API calls | Yes | -- |
| One-off validation for a single endpoint | No | Inline validation |
| Modifying a function's return type | No | Explicit wrapper function |
| Managing database transactions | Maybe | Context manager |
| Timing a single function during debugging | No | Inline timing or profiler |
The "Maybe" entry for database transactions reflects a common debate. Both decorators and context managers can handle transactions. Context managers (with blocks) make the scope of the transaction explicit in the calling code, while decorators hide it. In web frameworks, decorator-based transaction management is standard because the scope is always the entire request handler. In library code, context managers are often preferred because the caller retains control over when the transaction begins and ends.
Key Takeaways
- Apply the three-criteria test. A behavior belongs in a decorator when it is cross-cutting (applies to multiple functions), operates on boundaries (inputs, outputs, and lifecycle rather than internal state), and is transparent (does not change the function's contract with its callers).
- Use functools.wraps in every custom decorator. Without it, debugging and introspection break because the wrapper function replaces the original function's metadata. This is not optional.
- Favor built-in decorators before writing custom ones. Python's standard library includes @property, @staticmethod, @classmethod, @functools.lru_cache, @functools.cache, @functools.cached_property, and @functools.wraps. These are battle-tested and optimized. Check whether a built-in decorator solves your problem before implementing a custom version.
- Know when to reach for something else. Context managers, middleware pipelines, class inheritance, and simple function calls are all valid alternatives. Decorators are a tool, not a philosophy. Use them when they clarify intent and reduce duplication, skip them when they add hidden complexity.
The best use of decorators is invisible. When a reader sees @retry(max_attempts=3) above a function, they understand immediately what that function gains, without reading the decorator's implementation. When they see @lru_cache(maxsize=128), they know the function's results are being cached. That instant readability is the real value of the pattern. Every decorator you write should aim for the same clarity: a single line above a function that communicates exactly what additional behavior is being applied, with no surprises waiting underneath.