Python's decorator syntax is a surface-level entry point into one of the language's richest design spaces. Beyond the basic @property and @staticmethod that appear in tutorials, the standard library ships a collection of specialized utility decorators built for performance optimization, type dispatch, comparison generation, resource management, and production-grade reliability patterns. This article covers those decorators in depth — how they work internally, when to reach for each one, and how to build custom variants that solve real problems.
A decorator in Python is any callable that accepts a function (or class) as its argument and returns a replacement callable. That simplicity is intentional — it means decorators compose cleanly, can carry state, and can be parameterized using factory functions that return the actual decorator. The utility decorators covered here take that composability seriously. Several of them come from functools, Python's module for higher-order function support. Others come from contextlib. A final group is custom-built, but follows patterns so common in production codebases that they belong in any working Python developer's toolkit.
Standard Library Utility Decorators
The functools module is the primary home of Python's specialized utility decorators. Each one solves a distinct class of problem, and understanding them individually before combining them is the right approach.
functools.lru_cache — Memoization with Bounded Memory
Think of lru_cache as a sticky notepad stapled to the function. The first time you ask a question, the answer gets written down. Every time after that, the notepad is checked first. If the notepad is full, the oldest note gets thrown away to make room. Concretely, @functools.lru_cache memoizes up to maxsize recent results. The underlying storage is a dictionary keyed by the argument tuple, which means all positional and keyword arguments must be hashable. The "LRU" in the name stands for Least Recently Used — when the cache reaches capacity, the result that was accessed least recently is evicted first.
from functools import lru_cache
@lru_cache(maxsize=128)
def fibonacci(n: int) -> int:
if n < 2:
return n
return fibonacci(n - 1) + fibonacci(n - 2)
# First call computes recursively; subsequent calls hit cache
print(fibonacci(40)) # 102334155
# Inspect cache behavior
info = fibonacci.cache_info()
print(f"Hits: {info.hits}, Misses: {info.misses}, Size: {info.currsize}")
# Hits: 38, Misses: 41, Size: 41
# Clear cache when needed
fibonacci.cache_clear()
Setting maxsize=None removes the size limit entirely, turning the decorator into an unbounded memoization cache. Python 3.9 introduced @functools.cache as a shorthand for exactly this configuration — it is equivalent to lru_cache(maxsize=None), which is smaller and faster than a bounded cache because it never evicts and skips the LRU bookkeeping. Use @cache when you know the argument space is finite and manageable; use @lru_cache(maxsize=N) when you need memory-bounded caching in a long-running process.
The typed parameter (available since Python 3.3) controls whether arguments of different types are treated as distinct cache entries. With typed=True, f(3) and f(3.0) are cached separately even though they compare as equal. This matters when your function's return value differs based on the argument type — for example, a serializer that produces different output for int versus float. One additional precision point: the official Python documentation notes that f(a=1, b=2) and f(b=2, a=1) are considered distinct cache entries because they differ in keyword argument order, not just value. Design cached functions to use positional arguments wherever possible to avoid unintended cache misses from argument reordering.
"Since a dictionary is used to cache results, the positional and keyword arguments to the function must be hashable. Distinct argument patterns may be considered to be distinct calls with separate cache entries."
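The type-separation behavior of typed=True is easy to observe through cache_info(). A minimal sketch (the describe function is illustrative):

```python
from functools import lru_cache

@lru_cache(maxsize=None, typed=True)
def describe(x):
    # Tag the result with the argument's type; int and float inputs
    # produce different strings, so sharing one cache entry would be wrong.
    return f"{type(x).__name__}:{x}"

print(describe(3))    # int:3
print(describe(3.0))  # float:3.0
print(describe.cache_info().misses)  # 2 — typed=True keeps 3 and 3.0 separate
```

With typed=False (the default), the second call would be a cache hit and incorrectly return "int:3".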
from functools import cache
@cache
def count_paths(rows: int, cols: int) -> int:
"""Count unique paths in a grid from top-left to bottom-right."""
if rows == 1 or cols == 1:
return 1
return count_paths(rows - 1, cols) + count_paths(rows, cols - 1)
print(count_paths(10, 10)) # 48620 — computed once, reused for all sub-problems
print(count_paths(20, 20)) # 35345263800
The cache is thread-safe in the sense that the underlying dictionary remains coherent during concurrent updates. However, a wrapped function may be called more than once for the same arguments if a second thread makes an additional call before the first completes and caches its result. Design your cached functions to be idempotent.
Recursive algorithms that recalculate overlapping subproblems — Fibonacci, shortest paths, combinatorial counters — go from exponential to linear time with a single decorator. But in long-running services, an unbounded @cache on a function that takes user-supplied inputs can become a memory leak. The maxsize parameter on lru_cache is your pressure-release valve. Set it to the largest number of concurrent unique inputs you expect, not to None.
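The eviction mechanics can be watched through cache_info() with a deliberately tiny cache (the lookup function is illustrative):

```python
from functools import lru_cache

@lru_cache(maxsize=2)
def lookup(key: str) -> str:
    return key.upper()

lookup("a")   # miss — cached
lookup("b")   # miss — cache now holds a, b
lookup("a")   # hit — "a" becomes most recently used
lookup("c")   # miss — cache full, evicts "b" (least recently used)
lookup("b")   # miss — "b" was evicted and must be recomputed
info = lookup.cache_info()
print(info.hits, info.misses, info.currsize)  # 1 4 2
```

Note that the hit on "a" changed the eviction order: without it, "a" would have been evicted instead of "b".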
functools.cached_property — Instance-Level Lazy Computation
cached_property is lazy initialization as a first-class citizen. The value does not exist until someone asks for it. The moment someone does, it springs into existence and parks itself directly on the object — not on the class, not in a side-cache — and the class has no involvement in subsequent reads. Mechanically, @functools.cached_property combines property-style access with one-time computation. On first access, it calls the method and writes the result to the instance's __dict__ under the same attribute name. Every subsequent access reads directly from the instance dictionary, bypassing the descriptor entirely. This makes it significantly cheaper than @property for expensive computations that do not change after initialization.
from functools import cached_property
import statistics
class SalesReport:
def __init__(self, transactions: list[float]):
self.transactions = transactions
@cached_property
def mean(self) -> float:
print("Computing mean...")
return statistics.mean(self.transactions)
@cached_property
def stdev(self) -> float:
print("Computing standard deviation...")
return statistics.stdev(self.transactions)
@cached_property
def summary(self) -> dict:
# Both mean and stdev are already cached when this runs
return {"mean": self.mean, "stdev": self.stdev, "n": len(self.transactions)}
data = [12.5, 14.3, 11.8, 15.0, 13.6, 12.9, 14.7]
report = SalesReport(data)
print(report.mean) # Computing mean... -> 13.542...
print(report.mean) # No recomputation — reads from instance dict
print(report.summary) # No recomputation for mean or stdev
To invalidate the cache for a specific instance, delete the attribute: del report.mean. The next access will recompute and re-cache. This is more targeted than class-level cache invalidation and suits scenarios where individual records need to be refreshed without affecting others.
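Invalidation — and the staleness that motivates it — in a minimal sketch (the Measurement class is illustrative):

```python
from functools import cached_property

class Measurement:
    def __init__(self, samples: list[float]):
        self.samples = samples

    @cached_property
    def total(self) -> float:
        return sum(self.samples)

m = Measurement([1.0, 2.0])
print(m.total)          # 3.0 — computed and stored in m.__dict__
m.samples.append(4.0)
print(m.total)          # 3.0 — stale: the cached value is returned, not recomputed
del m.total             # invalidate: removes the cached entry from m.__dict__
print(m.total)          # 7.0 — recomputed on next access
```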
cached_property requires that the instance's __dict__ attribute exists and is a mutable mapping. It will not work with classes that define __slots__ without explicitly including '__dict__' as one of the slots. Additionally, the Python documentation notes that cached_property does not guarantee thread safety — if multiple threads access an uncomputed property simultaneously, the method may be called more than once before the result is written to the instance dictionary. (Python 3.12 removed the lock that earlier versions used, which had serialized first access across all instances of a class.) For thread-safe lazy initialization in concurrent code, protect the first access with a threading.Lock or use a different caching strategy.
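For one way to make lazy initialization thread-safe, a minimal double-checked-locking sketch (the Config class and its attributes are illustrative, not part of the standard library):

```python
import threading

class Config:
    """Illustrative class: thread-safe lazy initialization with a lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._settings = None   # sentinel: not yet computed

    @property
    def settings(self) -> dict:
        # Double-checked locking: the fast path skips the lock once the
        # value exists; the re-check under the lock guarantees the
        # expensive load runs exactly once across threads.
        if self._settings is None:
            with self._lock:
                if self._settings is None:
                    self._settings = self._load()
        return self._settings

    def _load(self) -> dict:
        # Stand-in for an expensive one-time computation.
        return {"debug": False, "workers": 4}

cfg = Config()
print(cfg.settings["workers"])  # 4
```

The sentinel approach assumes None is never a legitimate computed value; use a dedicated sentinel object otherwise.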
functools.singledispatch — Type-Based Function Dispatch
@functools.singledispatch (added in Python 3.4 via PEP 443) transforms a function into a generic function that dispatches to different implementations based on the type of the first argument. The base function serves as the fallback for any type without a registered implementation. Additional implementations are registered using the .register() decorator on the generic function object. Starting in Python 3.7, the .register() attribute supports using type annotations directly rather than passing the type as an argument. Python 3.11 further extended this to accept typing.Union as a type annotation, enabling registration across multiple types in a single declaration.
from functools import singledispatch
from datetime import date, datetime
from decimal import Decimal
@singledispatch
def serialize(value) -> str:
"""Fallback: convert to string representation."""
return str(value)
@serialize.register
def _(value: int) -> str:
return f"INT:{value}"
@serialize.register
def _(value: float) -> str:
return f"FLOAT:{value:.6f}"
@serialize.register
def _(value: Decimal) -> str:
return f"DECIMAL:{value:.10f}"
@serialize.register
def _(value: date) -> str:
return value.strftime("%Y-%m-%d")
@serialize.register
def _(value: datetime) -> str:
return value.isoformat()
@serialize.register(list)
def _(value) -> str:
return "[" + ", ".join(serialize(item) for item in value) + "]"
print(serialize(42)) # INT:42
print(serialize(3.14159)) # FLOAT:3.141590
print(serialize(Decimal("1.0000000001"))) # DECIMAL:1.0000000001
print(serialize(date(2026, 3, 29))) # 2026-03-29
print(serialize([1, 2.5, date(2026, 1, 1)])) # [INT:1, FLOAT:2.500000, 2026-01-01]
Python resolves dispatch by walking the MRO (Method Resolution Order) of the argument's type. If no exact type match is found, it looks for a match on the nearest ancestor class. This means registering an implementation for numbers.Number will handle any numeric subtype that does not have its own registration. Introspecting registered implementations is straightforward via serialize.registry.
"A generic function is composed of multiple functions implementing the same operation for different types. Which implementation should be used during a call is determined by the dispatch algorithm. When the implementation is chosen based on the type of a single argument, this is known as single dispatch."
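The ancestor-matching behavior is worth seeing directly. A self-contained sketch, assuming registration on the numbers.Number ABC (the tag function is illustrative):

```python
from functools import singledispatch
import numbers

@singledispatch
def tag(value) -> str:
    return "other"

@tag.register(numbers.Number)
def _(value) -> str:
    # Handles int, float, complex, Fraction... via the ABC's virtual subclassing
    return "number"

@tag.register(bool)
def _(value) -> str:
    # An exact registration beats the ancestor match
    return "bool"

print(tag(10))    # number — int has no exact registration; falls back to Number
print(tag(2.5))   # number
print(tag(True))  # bool — exact match wins over the ancestor registration
print(tag("hi"))  # other — no match anywhere, fallback runs
```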
For dispatch on method arguments inside a class, use functools.singledispatchmethod (added in Python 3.8). It handles the implicit self argument correctly and integrates with @classmethod when needed.
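A brief sketch of singledispatchmethod, dispatching on the first argument after self (the Formatter class is illustrative):

```python
from functools import singledispatchmethod

class Formatter:
    @singledispatchmethod
    def format(self, value) -> str:
        # Fallback for unregistered types
        return str(value)

    @format.register
    def _(self, value: int) -> str:
        return f"{value:,}"     # thousands separators for ints

    @format.register
    def _(self, value: float) -> str:
        return f"{value:.2f}"   # two decimal places for floats

f = Formatter()
print(f.format(1234567))  # 1,234,567
print(f.format(3.14159))  # 3.14
print(f.format("text"))   # text
```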
The open/closed principle says code should be open for extension but closed for modification. An isinstance chain violates this — adding a new type requires editing the original function. A singledispatch generic function is open for extension: any module can register a new implementation without touching the base. This is how serializers, formatters, and converters stay maintainable as a codebase grows.
functools.total_ordering — Generating Rich Comparison Methods
Implementing all six comparison operators (__eq__, __lt__, __le__, __gt__, __ge__, __ne__) for a custom class is repetitive. @functools.total_ordering fills in the missing methods from a minimum set. You provide __eq__ and one of __lt__, __le__, __gt__, or __ge__; the decorator derives the rest.
from functools import total_ordering
@total_ordering
class SemanticVersion:
def __init__(self, major: int, minor: int, patch: int):
self.major = major
self.minor = minor
self.patch = patch
def _key(self):
return (self.major, self.minor, self.patch)
def __eq__(self, other) -> bool:
if not isinstance(other, SemanticVersion):
return NotImplemented
return self._key() == other._key()
def __lt__(self, other) -> bool:
if not isinstance(other, SemanticVersion):
return NotImplemented
return self._key() < other._key()
def __repr__(self) -> str:
return f"v{self.major}.{self.minor}.{self.patch}"
v1 = SemanticVersion(1, 9, 3)
v2 = SemanticVersion(2, 0, 0)
v3 = SemanticVersion(1, 9, 3)
print(v1 < v2) # True
print(v2 > v1) # True — derived by total_ordering
print(v1 <= v3) # True — derived by total_ordering
print(v1 >= v2) # False — derived by total_ordering
print(sorted([v2, v1, SemanticVersion(1, 10, 0)]))
# [v1.9.3, v1.10.0, v2.0.0]
The documentation notes that total_ordering does not override methods already declared in the class or its superclasses. Return NotImplemented (not False) when the other operand is not a recognized type — this allows Python to try the reflected operation on the right-hand operand before raising TypeError.
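The reflected-operation fallback can be observed directly. A minimal sketch (the Celsius class is illustrative):

```python
import functools

@functools.total_ordering
class Celsius:
    def __init__(self, value: float):
        self.value = value

    def __eq__(self, other):
        if not isinstance(other, Celsius):
            return NotImplemented   # defer to the other operand's reflected method
        return self.value == other.value

    def __lt__(self, other):
        if not isinstance(other, Celsius):
            return NotImplemented
        return self.value < other.value

print(Celsius(10) < Celsius(20))   # True
print(Celsius(10) == 10)           # False — both sides decline; == falls back to identity
try:
    Celsius(10) < 10               # both sides decline an ordering
except TypeError as exc:
    print(type(exc).__name__)      # TypeError — raised only after the reflected attempt
```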
functools.wraps — Preserving Function Identity
Every decorator that wraps a function should apply @functools.wraps(func) to the inner wrapper. Without it, the wrapper function replaces the original's __name__, __doc__, __module__, __qualname__, and __annotations__ with its own values, and the wrapper's __dict__ is not updated with entries from the original function's __dict__. To be precise: functools.wraps is a convenience wrapper around functools.update_wrapper, which assigns the attributes listed in WRAPPER_ASSIGNMENTS directly and merges (updates) the wrapper's __dict__ with entries from the original's — it does not replace the wrapper's dictionary wholesale. Omitting @functools.wraps breaks introspection tools, documentation generators, debuggers, and type checkers.
import functools
def without_wraps(func):
def wrapper(*args, **kwargs):
return func(*args, **kwargs)
return wrapper
def with_wraps(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
return func(*args, **kwargs)
return wrapper
@without_wraps
def greet_bad(name: str) -> str:
"""Return a greeting string."""
return f"Hello, {name}"
@with_wraps
def greet_good(name: str) -> str:
"""Return a greeting string."""
return f"Hello, {name}"
print(greet_bad.__name__) # wrapper
print(greet_bad.__doc__) # None
print(greet_good.__name__) # greet_good
print(greet_good.__doc__) # Return a greeting string.
print(greet_good.__wrapped__) # <function greet_good at 0x...>
@functools.wraps also sets __wrapped__ on the wrapper, giving direct access to the original unwrapped function. This is how tools like inspect.unwrap() traverse decorator chains, and how test frameworks can patch the underlying callable without removing the decorator.
"To allow access to the original function for introspection and other purposes (e.g. bypassing a caching decorator such as lru_cache()), this function automatically adds a __wrapped__ attribute to the wrapper that refers to the function being wrapped."
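A short sketch of traversing a decorator chain with inspect.unwrap (the logged decorator is illustrative):

```python
import functools
import inspect

def logged(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@logged
def add(a: int, b: int) -> int:
    return a + b

original = inspect.unwrap(add)      # follows __wrapped__ back to the bare function
print(original is add.__wrapped__)  # True
print(original(2, 3))               # 5 — no "calling add" line: the wrapper is bypassed
```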
Quick Checks
Q: You decorate a recursive fib with @lru_cache(maxsize=128) and call fib(40) twice. What happens on the second call?
A: The lru_cache wrapper stores results in an internal dictionary keyed by the argument tuple. On the second call, the key (40,) is already present, so the cached value is returned immediately — the recursive function body is never touched. Verify this by checking fib.cache_info().hits. Note that lru_cache has no effect on Python's recursion depth limit, though caching does eliminate redundant recursive calls: once a value is computed, it is never recomputed for the same input.
Q: A class implements __eq__ and __lt__ and is decorated with @total_ordering. Which methods does the decorator generate?
A: The decorator derives __le__, __gt__, and __ge__. It does not generate __ne__ (Python 3 derives that automatically from __eq__), and it has no knowledge of __hash__ or __repr__ — those are entirely outside its scope. It also never overrides methods already defined in the class: if you had written __le__ yourself, the decorator would leave it alone.
Q: You omit @functools.wraps(func) from a custom decorator. What is the most immediate concrete consequence?
A: The inner wrapper function becomes the visible identity of the decorated function. Any code that inspects __name__ — logging formatters, stack traces, pytest output, Sphinx autodoc — will see 'wrapper' instead of the original name, and the docstring is gone entirely. Caching machinery like lru_cache is unaffected (it keys on the argument tuple, not metadata); the damage is confined to introspection: __name__, __doc__, __module__, __qualname__, and __annotations__ all report the wrapper's values instead of the original's.
Context Management and Resource Control
The contextlib module provides decorators that bridge the gap between context managers and function decorators, enabling clean resource management patterns without writing full __enter__/__exit__ classes.
contextlib.contextmanager — Generator-Based Context Managers
@contextlib.contextmanager converts a generator function into a context manager. Everything before the single yield statement executes as __enter__; everything after the yield executes as __exit__ — wrapping the yield in try/finally ensures the cleanup runs even when the with body raises. The yielded value becomes the target of the as clause in the with statement.
from contextlib import contextmanager
import os
import tempfile
@contextmanager
def temp_directory():
"""Create a temporary directory and clean it up on exit."""
import shutil
tmpdir = tempfile.mkdtemp()
try:
yield tmpdir
finally:
shutil.rmtree(tmpdir, ignore_errors=True)
@contextmanager
def patched_env(key: str, value: str):
"""Temporarily set an environment variable."""
original = os.environ.get(key)
os.environ[key] = value
try:
yield
finally:
if original is None:
del os.environ[key]
else:
os.environ[key] = original
# Usage as a context manager
with temp_directory() as tmpdir:
path = os.path.join(tmpdir, "output.txt")
with open(path, "w") as f:
f.write("temporary data")
print(os.path.exists(tmpdir)) # True
print(os.path.exists(tmpdir)) # False — cleaned up
# Usage as a function decorator — every call gets its own context
with patched_env("APP_ENV", "testing"):
print(os.environ["APP_ENV"]) # testing
print(os.environ.get("APP_ENV")) # None (or original value)
Because contextmanager builds on ContextDecorator, the resulting context manager can also be used directly as a function decorator using the @ctx_manager() syntax. A new generator instance is created on each function call, so the context manager remains reusable across multiple invocations.
from contextlib import contextmanager
import time
@contextmanager
def timed_block(label: str):
start = time.perf_counter()
try:
yield
finally:
elapsed = time.perf_counter() - start
print(f"[{label}] completed in {elapsed:.4f}s")
# As a context manager
with timed_block("matrix multiply"):
result = sum(i * j for i in range(500) for j in range(500))
# As a decorator — note the call syntax ()
@timed_block("sort benchmark")
def run_sort():
data = list(range(10_000, 0, -1))
data.sort()
run_sort() # [sort benchmark] completed in 0.0008s
contextlib.asynccontextmanager — Async Resource Management
For async code, @contextlib.asynccontextmanager does the same job with an asynchronous generator. It was added in Python 3.7. Support for using the resulting context manager as a function decorator (via the @ctx_manager() syntax) was added in Python 3.10. The decorated function must be an async def generator containing exactly one yield.
import asyncio
from contextlib import asynccontextmanager
@asynccontextmanager
async def managed_connection(host: str, port: int):
"""Simulate acquiring and releasing an async database connection."""
print(f"Connecting to {host}:{port}...")
conn = {"host": host, "port": port, "active": True}
try:
yield conn
finally:
conn["active"] = False
print(f"Connection to {host}:{port} closed.")
async def fetch_users():
async with managed_connection("db.internal", 5432) as conn:
print(f"Running query on {conn['host']}")
await asyncio.sleep(0.01) # Simulate I/O
return ["alice", "bob", "charlie"]
asyncio.run(fetch_users())
# Connecting to db.internal:5432...
# Running query on db.internal
# Connection to db.internal:5432 closed.
Comparison: Decorator Categories and Use Cases
| Decorator | Module | Primary Use Case |
|---|---|---|
| `@lru_cache` | `functools` | Bounded memoization for pure functions with hashable args |
| `@cache` | `functools` | Unbounded memoization; equivalent to `lru_cache(maxsize=None)` |
| `@cached_property` | `functools` | One-time computation cached on the instance |
| `@singledispatch` | `functools` | Type-based dispatch without `isinstance` chains |
| `@total_ordering` | `functools` | Auto-generate comparison methods from `__eq__` + one other |
| `@wraps` | `functools` | Preserve wrapped function metadata in custom decorators |
| `@contextmanager` | `contextlib` | Generator-based context managers without class boilerplate |
| `@asynccontextmanager` | `contextlib` | Async context managers for coroutine-based code |
Custom Utility Decorator Patterns
Standard library decorators cover general-purpose needs, but production codebases frequently require custom decorators for cross-cutting concerns such as retry logic, rate limiting, input validation, timing instrumentation, and thread-safe execution guards. The patterns below are written with @functools.wraps throughout and designed to be composable.
Parameterized Retry with Exponential Backoff
Network calls, file I/O operations, and external API requests fail intermittently. A retry decorator with configurable attempts, delay, and backoff factor handles transient failures without cluttering business logic.
import functools
import time
import logging
logger = logging.getLogger(__name__)
def retry(
max_attempts: int = 3,
delay: float = 1.0,
backoff: float = 2.0,
exceptions: tuple = (Exception,)
):
"""Retry a function on specified exceptions with exponential backoff."""
def decorator(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
current_delay = delay
for attempt in range(1, max_attempts + 1):
try:
return func(*args, **kwargs)
except exceptions as exc:
if attempt == max_attempts:
logger.error(
"%s failed after %d attempts: %s",
func.__name__, max_attempts, exc
)
raise
logger.warning(
"%s attempt %d/%d failed: %s. Retrying in %.1fs...",
func.__name__, attempt, max_attempts, exc, current_delay
)
time.sleep(current_delay)
current_delay *= backoff
return wrapper
return decorator
# Usage — only retries on connection-related errors
@retry(max_attempts=4, delay=0.5, backoff=2.0, exceptions=(ConnectionError, TimeoutError))
def fetch_config(endpoint: str) -> dict:
import urllib.request
with urllib.request.urlopen(endpoint, timeout=5) as resp:
import json
return json.loads(resp.read())
# Stacking with other decorators
@retry(max_attempts=3, delay=1.0)
@functools.lru_cache(maxsize=64)
def get_user_profile(user_id: int) -> dict:
# Cached results never trigger retry; only uncached calls can fail
return {"id": user_id, "name": "example"}
Rate Limiter Decorator
Rate limiting is a common requirement when calling external APIs or protecting shared resources. The decorator below uses a sliding-window log: a deque records the timestamps of recent calls, invocations are allowed up to calls times per period seconds, and excess calls block until the oldest timestamp ages out of the window.
import functools
import time
import threading
from collections import deque
def rate_limit(calls: int, period: float):
"""Allow at most `calls` invocations per `period` seconds."""
def decorator(func):
lock = threading.Lock()
call_times: deque = deque()
@functools.wraps(func)
def wrapper(*args, **kwargs):
with lock:
now = time.monotonic()
# Remove timestamps outside the current window
while call_times and call_times[0] <= now - period:
call_times.popleft()
if len(call_times) >= calls:
sleep_for = period - (now - call_times[0])
time.sleep(sleep_for)
call_times.append(time.monotonic())
return func(*args, **kwargs)
return wrapper
return decorator
@rate_limit(calls=5, period=1.0)
def call_api(endpoint: str) -> str:
return f"response from {endpoint}"
# Will process 5 calls immediately, then wait before the 6th
for i in range(8):
result = call_api(f"/api/resource/{i}")
print(f"Call {i + 1}: {result}")
Type Enforcement Decorator
Python's type hints are annotations, not runtime constraints. A type enforcement decorator bridges that gap — it inspects the function signature at decoration time and validates argument types on every call, raising TypeError with a descriptive message when a mismatch is detected.
import functools
import inspect
def enforce_types(func):
"""Validate argument types against the function's annotations at call time."""
sig = inspect.signature(func)
hints = func.__annotations__
@functools.wraps(func)
def wrapper(*args, **kwargs):
bound = sig.bind(*args, **kwargs)
bound.apply_defaults()
for param_name, value in bound.arguments.items():
if param_name in hints and param_name != "return":
expected = hints[param_name]
if not isinstance(value, expected):
raise TypeError(
f"{func.__name__}() argument '{param_name}' "
f"must be {expected.__name__}, got {type(value).__name__}"
)
return func(*args, **kwargs)
return wrapper
@enforce_types
def calculate_discount(price: float, rate: float, label: str) -> float:
"""Apply a discount rate to a price."""
return price * (1.0 - rate)
print(calculate_discount(99.99, 0.15, "member")) # 84.9915
try:
calculate_discount("free", 0.10, "promo")
except TypeError as e:
print(e)
# calculate_discount() argument 'price' must be float, got str
This pattern works well for boundary enforcement in library APIs and data pipelines. For full runtime type checking in large codebases, consider beartype or typeguard, which handle union types, generics, and Optional transparently and with better performance.
Class-Based Stateful Decorators
When a decorator needs to carry state across calls — call counts, cumulative timing, call history — a class-based implementation is cleaner than a closure with mutable variables. The class implements __init__ to store the function and __call__ to act as the wrapper.
import functools
import time
class Profiler:
"""Track cumulative call count and total execution time."""
def __init__(self, func):
functools.update_wrapper(self, func)
self.func = func
self.call_count = 0
self.total_time = 0.0
def __call__(self, *args, **kwargs):
start = time.perf_counter()
result = self.func(*args, **kwargs)
elapsed = time.perf_counter() - start
self.call_count += 1
self.total_time += elapsed
return result
@property
def average_time(self) -> float:
if self.call_count == 0:
return 0.0
return self.total_time / self.call_count
def stats(self) -> dict:
return {
"function": self.func.__name__,
"calls": self.call_count,
"total_s": round(self.total_time, 6),
"avg_s": round(self.average_time, 6),
}
@Profiler
def process_record(record: dict) -> dict:
"""Simulate record processing with variable work."""
time.sleep(0.001)
return {k: str(v).upper() for k, v in record.items()}
records = [{"id": i, "name": f"user_{i}"} for i in range(20)]
for r in records:
process_record(r)
print(process_record.stats())
# {'function': 'process_record', 'calls': 20, 'total_s': 0.02..., 'avg_s': 0.001...}
Universal Async/Sync Timer
A decorator that works on both synchronous and asynchronous functions requires runtime detection of whether the wrapped callable is a coroutine function. Using asyncio.iscoroutinefunction() at decoration time (deprecated in Python 3.14 in favor of inspect.iscoroutinefunction()), the factory returns the correct wrapper variant.
import asyncio
import functools
import time
def timer(func):
"""Measure and report execution time for sync and async functions."""
if asyncio.iscoroutinefunction(func):
@functools.wraps(func)
async def async_wrapper(*args, **kwargs):
start = time.perf_counter()
result = await func(*args, **kwargs)
print(f"[async] {func.__name__} -> {time.perf_counter() - start:.4f}s")
return result
return async_wrapper
else:
@functools.wraps(func)
def sync_wrapper(*args, **kwargs):
start = time.perf_counter()
result = func(*args, **kwargs)
print(f"[sync] {func.__name__} -> {time.perf_counter() - start:.4f}s")
return result
return sync_wrapper
@timer
def compute_sum(limit: int) -> int:
return sum(range(limit))
@timer
async def fetch_data(url: str) -> str:
await asyncio.sleep(0.05)
return f"data from {url}"
compute_sum(10_000_000) # [sync] compute_sum -> 0.2133s
asyncio.run(fetch_data("https://api.example.com/data")) # [async] fetch_data -> 0.0501s
Decorator Stacking Order
When multiple decorators are stacked on a function, they apply from innermost (closest to the function) to outermost (furthest from the function) during decoration, but execute from outermost to innermost at call time. Understanding this order prevents subtle bugs when combining decorators like @retry and @timer.
# Decoration order (bottom-up): enforce_types applied first, then timer, then retry
# Call order (top-down): retry wraps timer wraps enforce_types wraps the function
@retry(max_attempts=3, delay=0.1)
@timer
@enforce_types
def unstable_compute(value: float, scale: float) -> float:
import random
if random.random() < 0.4:
raise ConnectionError("simulated transient failure")
return value * scale
# retry sees the timer-wrapped version
# timer sees the enforce_types-wrapped version
# enforce_types sees the original function
# Equivalent to:
# unstable_compute = retry(3, 0.1)(timer(enforce_types(unstable_compute)))
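The full order can be verified with two trivial decorators that record events instead of doing real work (a self-contained sketch, independent of the retry/timer/enforce_types definitions above):

```python
import functools

events: list[str] = []

def announce(label):
    def decorator(func):
        events.append(f"decorate:{label}")      # decoration time: bottom-up
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            events.append(f"call:{label}")      # call time: top-down
            return func(*args, **kwargs)
        return wrapper
    return decorator

@announce("outer")
@announce("inner")
def target():
    return "done"

target()
print(events)
# ['decorate:inner', 'decorate:outer', 'call:outer', 'call:inner']
```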
Debugging Scenario: A retry decorator works as intended, but every decorated function reports its name as 'wrapper' in log output and stack traces. What is wrong?
import time
def retry(max_attempts=3, delay=1.0):
def decorator(func):
def wrapper(*args, **kwargs):
for attempt in range(1, max_attempts + 1):
try:
return func(*args, **kwargs)
except Exception:
if attempt == max_attempts:
raise
time.sleep(delay)
return wrapper
return decorator
@retry(max_attempts=4, delay=0.5)
def fetch_data(url: str) -> dict:
"""Fetch JSON from the given URL."""
import urllib.request, json
with urllib.request.urlopen(url) as r:
return json.loads(r.read())
Diagnosis: @functools.wraps(func) is missing from the inner wrapper, so the wrapper function's own name and empty docstring replace the original function's metadata. Every log line, stack trace, monitoring dashboard, and Sphinx autodoc page that reads __name__ shows 'wrapper' instead of 'fetch_data' — making production debugging significantly harder and documentation meaningless.
The fix: add @functools.wraps(func) to every inner wrapper in every decorator you write. It costs one line and pays for itself the first time you need to read a stack trace.
Debugging Scenario: A cached_property is supposed to cache an expensive computation per instance, but the very first access fails outright. What is the structural cause?
from functools import cached_property
class DataPipeline:
    __slots__ = ('_data',)
    def __init__(self, data: list[float]):
        self._data = data
    @cached_property
    def summary(self) -> dict:
        print("computing summary...")
        return {
            'count': len(self._data),
            'total': sum(self._data),
            'mean': sum(self._data) / len(self._data),
        }
pipeline = DataPipeline([1.0, 2.0, 3.0])
print(pipeline.summary)
# TypeError: No '__dict__' attribute on 'DataPipeline' instance to cache 'summary' property.
Diagnosis: __slots__ is defined without including '__dict__'. cached_property works by writing the computed result directly into the instance's __dict__ on first access. When __slots__ is declared without '__dict__', the instance has no __dict__ at all, so the descriptor raises TypeError on access instead of caching.
The fix: adding '__dict__' to the slot list restores the mutable mapping that cached_property requires. If you need the memory savings of __slots__ without __dict__, drop cached_property entirely and use a regular property with a manual _cache attribute stored in a named slot.
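The manual-slot alternative can be sketched like this (the SlottedPipeline class is illustrative):

```python
class SlottedPipeline:
    __slots__ = ('_data', '_summary')   # a named slot replaces __dict__ storage

    def __init__(self, data: list[float]):
        self._data = data
        self._summary = None            # sentinel: not yet computed

    @property
    def summary(self) -> dict:
        # Lazy one-time computation cached in a slot, not in __dict__
        if self._summary is None:
            self._summary = {
                'count': len(self._data),
                'total': sum(self._data),
                'mean': sum(self._data) / len(self._data),
            }
        return self._summary

p = SlottedPipeline([1.0, 2.0, 3.0])
print(p.summary['mean'])       # 2.0 — computed on first access
print(p.summary is p.summary)  # True — the same cached dict is returned
```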
Debugging Scenario: A singledispatch function registers a handler for int and expects bool values to hit the fallback, but the output for True is "integer: True". What is happening?
from functools import singledispatch
@singledispatch
def process(value):
    return f"unknown: {value}"
@process.register
def _(value: int) -> str:
    return f"integer: {value}"
print(process(True)) # expected: unknown: True
# actual: integer: True
Diagnosis: bool is a subclass of int. When singledispatch resolves dispatch for True, it walks the MRO — [bool, int, object] — finds no exact registration for bool, and matches the nearest registered ancestor, int. The fallback never runs because an ancestor class matched first. An explicit registration for bool would take precedence: exact registry matches always beat ancestor matches, regardless of registration order.
The fix: register a bool handler explicitly, or inspect the resolution with process.dispatch(bool) — it returns the function that will be invoked, making it easy to catch this mistake before it reaches production.
Key Takeaways
- Match the caching decorator to the data lifetime: Use @lru_cache(maxsize=N) for long-running processes where memory must be bounded, @cache for finite computation trees, and @cached_property for per-instance computed attributes that do not change after first access.
- Prefer @singledispatch over isinstance chains: Type dispatch via singledispatch is more maintainable, follows the open/closed principle, and allows extension from outside the original module without modifying the base function.
- Always apply @functools.wraps in custom decorators: Omitting it silently corrupts function metadata, breaks introspection, and causes hard-to-trace failures in documentation generators, test frameworks, and type checkers. It costs nothing to include.
- Use @contextmanager for simple resource management; reach for a class when managing complex state: Generator-based context managers are concise for setup/teardown with minimal branching. When __exit__ logic becomes complex or the context manager needs to store mutable state across uses, a class with explicit methods is clearer.
- Design custom utility decorators with production concerns in mind from the start: Thread safety (rate limiter), observable state (profiler), and clean stacking behavior (sync/async timer) should be first-class requirements, not afterthoughts.
Decorators are at their strongest when they encapsulate a single, well-defined cross-cutting concern — caching, validation, retry, timing — and leave the decorated function free to express pure business logic. The standard library's utility decorators embody this principle, and well-designed custom decorators follow the same contract: transparent, composable, and respectful of the original function's identity through @functools.wraps.