Core Standard Library Decorators

Python ships with a small set of production-grade decorators inside functools that handle caching, metadata preservation, generic dispatch, and comparison ordering. Understanding them precisely — not just knowing they exist — changes how you design functions and classes at the architectural level.

The functools module was designed to support functional-style programming in Python. It provides higher-order functions — functions that either accept functions as arguments or return them as results. Several of its tools are decorators in their own right, and others are utilities that make writing correct decorators possible in the first place. This article covers the decorators that ship with the standard library and are used widely in professional Python codebases: @wraps, @lru_cache, @cache, @cached_property, @singledispatch, and @total_ordering.

Each section below treats the decorator as a working tool rather than a concept to be introduced. The focus is on mechanics, edge cases, and patterns that show up in real code — not toy examples that disappear when you close the tab.

Metadata Preservation: @wraps and update_wrapper

Every decorator you write faces the same structural problem: the wrapper function replaces the original, and the original's identity disappears. When Python evaluates @my_decorator above a function definition, it executes my_function = my_decorator(my_function). The result stored under that name is the wrapper, not the function you defined. That means __name__, __doc__, __module__, __qualname__, and __annotations__ all point to the wrapper's values unless you explicitly transfer them.

This is not a cosmetic issue. Tools like help(), pydoc, Sphinx, pytest, and any framework that inspects signatures at runtime will see the wrong information. Debugging tracebacks will show the wrapper's name instead of the function you wrote. Type checkers and linters will misread the signature. The problem compounds when decorators are stacked — each layer erases the layer below it.

The problem without @wraps

def log_call(func):
    def wrapper(*args, **kwargs):
        """Wrapper for logging."""
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_call
def compute_total(price: float, tax_rate: float) -> float:
    """Return the total price including tax."""
    return price * (1 + tax_rate)

print(compute_total.__name__)   # wrapper
print(compute_total.__doc__)    # Wrapper for logging.
print(compute_total.__annotations__)  # {}  — annotation data gone

The decorated function has lost its name, its docstring, and its type annotations. Any code that relies on those attributes — including runtime type checkers, documentation generators, and test introspection — will behave incorrectly or fail silently.

Fixing it with @functools.wraps

from functools import wraps

def log_call(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        """Wrapper for logging."""
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_call
def compute_total(price: float, tax_rate: float) -> float:
    """Return the total price including tax."""
    return price * (1 + tax_rate)

print(compute_total.__name__)        # compute_total
print(compute_total.__doc__)         # Return the total price including tax.
print(compute_total.__annotations__) # {'price': float, 'tax_rate': float, 'return': float}
print(compute_total.__wrapped__)     # <function compute_total at 0x...>

@wraps copies __module__, __name__, __qualname__, __doc__, __annotations__, and __dict__ from the wrapped function to the wrapper. Since Python 3.12, it also copies __type_params__, which is required for generic functions defined with the PEP 695 type parameter syntax (Python docs: functools). It also sets a __wrapped__ attribute on the wrapper that points back to the original function, giving introspection tools a way to traverse the decorator chain. Tools like inspect.signature() unwrap this chain automatically, which means a correctly wrapped decorator is invisible to signature inspection by default.

Pro Tip

The __wrapped__ attribute added by @wraps lets you access the original unwrapped function directly: compute_total.__wrapped__(10.0, 0.08). This is useful for testing the raw function without decorator side effects, and for tools like inspect.signature() that unwrap decorator chains automatically.
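A quick sketch of that behaviour, reusing the log_call decorator from above: inspect.signature() follows the __wrapped__ chain automatically, and inspect.unwrap() walks it explicitly.

```python
import inspect
from functools import wraps

def log_call(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_call
def compute_total(price: float, tax_rate: float) -> float:
    """Return the total price including tax."""
    return price * (1 + tax_rate)

# inspect.signature() follows the __wrapped__ chain automatically
print(inspect.signature(compute_total))  # (price: float, tax_rate: float) -> float

# inspect.unwrap() walks the chain explicitly and returns the original
original = inspect.unwrap(compute_total)
print(original is compute_total.__wrapped__)  # True
print(original(10.0, 0.08))  # calls the raw function — no "Calling ..." log line
```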

update_wrapper: the lower-level primitive

@wraps is implemented as a convenience wrapper around functools.update_wrapper(). You need update_wrapper directly when you implement a decorator as a class rather than a closure, because class-based decorators do not use a nested function as the wrapper.

Both functions accept optional assigned and updated parameters that let you control exactly which attributes are copied. The defaults are the module-level constants functools.WRAPPER_ASSIGNMENTS (which controls which attributes are assigned directly) and functools.WRAPPER_UPDATES (which controls which wrapper attributes are updated by merging). Customising these is useful when your wrapper intentionally changes certain attributes — for example, when you want to preserve everything from the original except its docstring, which you're replacing with generated documentation.

import functools

# WRAPPER_ASSIGNMENTS default: ('__module__', '__name__', '__qualname__',
#   '__annotations__', '__type_params__', '__doc__')
# WRAPPER_UPDATES default: ('__dict__',)

# Selectively skip __doc__ so the wrapper can provide its own docstring
def timed(func):
    assigned = tuple(a for a in functools.WRAPPER_ASSIGNMENTS if a != '__doc__')

    def timed_wrapper(*args, **kwargs):
        """Wraps the function and prints elapsed time in milliseconds."""
        import time
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = (time.perf_counter() - start) * 1000
        print(f"{func.__name__} took {elapsed:.3f} ms")
        return result

    functools.update_wrapper(timed_wrapper, func, assigned=assigned)
    return timed_wrapper

@timed
def load_config(path: str) -> dict:
    """Load configuration from a TOML file."""
    import time; time.sleep(0.001)
    return {}

print(load_config.__name__)  # load_config — preserved
print(load_config.__doc__)   # Wraps the function and prints elapsed time in milliseconds.

When the decorator is a class rather than a closure, there is no nested wrapper function to decorate with @wraps. Instead, call update_wrapper(self, func) inside __init__ so the instance itself carries the original function's metadata:

import functools
import json
import urllib.request

class retry:
    """Decorator that retries a function up to `times` times on exception.

    Supports both bare usage (@retry) and parameterised usage (@retry(times=5)).
    """

    def __init__(self, func=None, *, times: int = 3) -> None:
        self.times = times
        self._func = None
        if func is not None:
            functools.update_wrapper(self, func)
            self._func = func

    def __call__(self, *args, **kwargs):
        # Parameterised form: @retry(times=5) — first call receives the function
        if self._func is None:
            func = args[0]
            wrapper = retry(func, times=self.times)
            return wrapper
        last_exc: Exception | None = None
        for attempt in range(self.times):
            try:
                return self._func(*args, **kwargs)
            except Exception as exc:
                last_exc = exc
                print(f"Attempt {attempt + 1} failed: {exc}")
        raise last_exc  # type: ignore[misc]

@retry
def fetch_data(url: str) -> dict:
    """Fetch JSON from a remote URL."""
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read())

@retry(times=5)
def fetch_data_resilient(url: str) -> dict:
    """Fetch JSON from a remote URL with five retry attempts."""
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read())

print(fetch_data.__name__)           # fetch_data
print(fetch_data.__doc__)            # Fetch JSON from a remote URL.
print(fetch_data_resilient.__name__) # fetch_data_resilient

Without update_wrapper, the class instance would have no __name__ attribute at all — attribute access would raise AttributeError. The call to update_wrapper(self, func) in __init__ transfers all the standard attributes onto the class instance, making it behave transparently as the original function.

Python Pop Quiz
What does @functools.wraps add to the wrapper function that allows introspection tools to access the original unwrapped function?

Caching Decorators: @lru_cache, @cache, and @cached_property

Python's standard library provides three caching decorators that operate at different scopes and with different eviction semantics. Understanding which one to reach for — and when — requires knowing exactly how each one stores and retrieves values.

@lru_cache: bounded memoization with LRU eviction

@lru_cache wraps a function with a memoizing layer that stores results keyed by the function's positional and keyword arguments. When the number of unique call signatures in the cache reaches maxsize, the entry that was used least recently is evicted to make room for the new one. The default maxsize is 128.

from functools import lru_cache

def levenshtein(s1: str, s2: str) -> int:
    """Compute edit distance between two strings using cached index recursion."""

    @lru_cache(maxsize=None)
    def dp(i: int, j: int) -> int:
        if i == 0:
            return j
        if j == 0:
            return i
        if s1[i - 1] == s2[j - 1]:
            return dp(i - 1, j - 1)
        return 1 + min(
            dp(i - 1, j),      # deletion
            dp(i, j - 1),      # insertion
            dp(i - 1, j - 1),  # substitution
        )

    return dp(len(s1), len(s2))

print(levenshtein("kitten", "sitting"))  # 3
print(levenshtein("saturday", "sunday")) # 3

Why index-based recursion

Slicing strings in the cache key (e.g. lev(s1[1:], s2)) creates a new string object on every recursive call. For a pair of strings of length m and n, that is O(m×n) distinct string allocations used as cache keys. Using integer indices instead means the cache keys are small tuples of ints — no allocations, no redundant string copies, and the inner dp closure captures the strings by reference once. The cache is defined inside the function so each call to levenshtein gets a fresh cache scoped to that pair of strings, avoiding unbounded memory accumulation across repeated calls.
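For contrast, here is a hypothetical slicing-based variant of the same algorithm — correct, but every recursive call allocates fresh substrings that then serve as cache keys:

```python
from functools import lru_cache

# Slicing-based variant, shown only for contrast with the index-based
# dp() above: each recursive call creates new substring objects that
# become part of the cache key.
@lru_cache(maxsize=None)
def lev_slicing(s1: str, s2: str) -> int:
    if not s1:
        return len(s2)
    if not s2:
        return len(s1)
    if s1[0] == s2[0]:
        return lev_slicing(s1[1:], s2[1:])
    return 1 + min(
        lev_slicing(s1[1:], s2),      # deletion
        lev_slicing(s1, s2[1:]),      # insertion
        lev_slicing(s1[1:], s2[1:]),  # substitution
    )

print(lev_slicing("kitten", "sitting"))  # 3 — same answer, costlier cache keys
```

Note also that this cache lives at module level and persists across calls, unlike the per-call dp cache above.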

Arguments must be hashable because the cache uses a dictionary internally. Calling the same function with keyword arguments in a different order produces separate cache entries: f(a=1, b=2) and f(b=2, a=1) are treated as distinct signatures. The cache is thread-safe — the underlying data structure stays coherent under concurrent writes — but it is possible for a cache miss to be computed more than once if two threads call the function with the same arguments before either result is stored (Python docs: lru_cache).
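Both behaviours are easy to verify directly. A small sketch (the two-misses result reflects how CPython builds cache keys from keyword order):

```python
from functools import lru_cache

@lru_cache(maxsize=32)
def combine(a: int = 0, b: int = 0) -> int:
    return a + b

combine(a=1, b=2)
combine(b=2, a=1)  # same values, different keyword order
print(combine.cache_info().misses)  # 2 — two distinct cache entries in CPython

@lru_cache(maxsize=32)
def total(values: tuple) -> int:
    return sum(values)

print(total((1, 2, 3)))  # 6 — tuples are hashable and make valid keys
try:
    total([1, 2, 3])     # lists are not hashable
except TypeError:
    print("unhashable arguments raise TypeError at call time")
```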

The typed parameter and cache_parameters()

By default, @lru_cache treats arguments of equal value as the same cache key regardless of their type. Setting typed=True causes arguments of different types to be cached separately, even when their values are numerically equal. This distinction is relevant any time a function's return value depends on the concrete type of its inputs rather than just their value — for example, a function that formats output differently for int versus float.

from functools import lru_cache

@lru_cache(maxsize=128, typed=True)
def describe_value(x: int | float) -> str:
    """Return a description that includes the runtime type of x."""
    return f"{type(x).__name__}: {x!r}"

print(describe_value(3))      # int: 3
print(describe_value(3.0))    # float: 3.0

# Both calls were cache misses — different types, different keys
print(describe_value.cache_info())
# CacheInfo(hits=0, misses=2, maxsize=128, currsize=2)

# Inspect the cache configuration (added in Python 3.9)
print(describe_value.cache_parameters())
# {'maxsize': 128, 'typed': True}

The cache_parameters() method, added in Python 3.9, returns the maxsize and typed settings that were used to create the cache. This is useful for serialization, logging, and debugging — scenarios where you need to inspect a cached function's configuration without re-reading the source. Before 3.9, the typed value could only be recovered by reading the source directly.

Performance Note

The @lru_cache documentation notes that the LRU feature performs best when maxsize is a power of two, so sizes like 64, 128, 256, or 512 are preferable to arbitrary round numbers. For most applications the difference is negligible, but for hot paths with high cache turnover it is worth knowing.

Note

When you cache a method on a class with @lru_cache, the self instance is part of the cache key. This prevents the cache from confusing results across instances, but it also prevents the instance from being garbage-collected as long as the cache holds a reference to it. For per-instance caching, prefer @cached_property.

@cache: unbounded memoization (Python 3.9+)

Introduced in Python 3.9, @cache is equivalent to @lru_cache(maxsize=None) but implemented as a simpler dictionary lookup without any eviction bookkeeping. Because it never needs to track access order, it is faster and uses less memory per entry than a bounded LRU cache when the total number of unique inputs is small and known. Like @lru_cache, it is thread-safe in that the underlying dictionary stays coherent under concurrent writes, but a cache miss may still be computed more than once if two threads call the function with the same arguments simultaneously before either result is stored (Python docs: cache).

from functools import cache

@cache
def factorial(n: int) -> int:
    if n == 0:
        return 1
    return n * factorial(n - 1)

print(factorial(10))   # 3628800  — 11 recursive calls, all cached
print(factorial(5))    # 120      — no new calls, pure cache hit
print(factorial(12))   # 479001600 — only two new calls needed

Because @cache never evicts, it is unsuitable for functions called with an unbounded range of arguments over the lifetime of a long-running process. Every unique input adds a permanent entry to the cache dictionary. For anything with an open input space — parsing arbitrary user strings, for example — use @lru_cache with a reasonable maxsize. Reserve @cache for pure functions with a small, finite domain: recursive algorithms over bounded inputs, configuration lookups, compiled regular expressions keyed by pattern strings, and similar cases where the total number of distinct inputs is predictable.

import re
from functools import cache

@cache
def get_pattern(pattern_str: str) -> re.Pattern[str]:
    """Compile and cache a regex pattern by its string."""
    return re.compile(pattern_str)

# In a hot loop, pattern compilation is paid once per unique string
lines = ["2026-03-29 INFO  server started", "2026-03-29 ERROR timeout"]
date_pattern = get_pattern(r"\d{4}-\d{2}-\d{2}")
for line in lines:
    match = date_pattern.search(line)
    if match:
        print(match.group())

@cached_property: per-instance computed attributes (Python 3.8+)

@cached_property occupies a different niche from the two caching decorators above. It is a descriptor that behaves like @property on the first access and like a plain attribute on every subsequent access. The first time the attribute is read on an instance, the method is called, and the result is stored directly in the instance's __dict__ under the same name. On all later reads, normal attribute lookup finds the stored value before the descriptor protocol is invoked, so the method is never called again.

from functools import cached_property
import statistics

class SensorReading:
    def __init__(self, samples: list[float]):
        self._samples = samples

    @cached_property
    def mean(self) -> float:
        print("computing mean...")
        return statistics.mean(self._samples)

    @cached_property
    def stdev(self) -> float:
        print("computing stdev...")
        return statistics.stdev(self._samples)

reading = SensorReading([12.4, 13.1, 11.9, 12.7, 13.0])

print(reading.mean)   # computing mean... then 12.62
print(reading.mean)   # no print, returns 12.62 from __dict__
print(reading.stdev)  # computing stdev... then 0.4817...
print(reading.stdev)  # no print, returns cached value

# Invalidate by deleting the attribute
del reading.mean
print(reading.mean)   # computing mean... runs again

There are three constraints worth noting. First, the class must have a mutable instance __dict__. Classes that define __slots__ without including __dict__ as one of the slots cannot use @cached_property because there is nowhere to store the cached value. Second, the Python documentation notes that @cached_property interferes with PEP 412 key-sharing dictionaries, which means instance dictionaries on classes using this decorator can occupy more memory than usual — a relevant consideration when creating large numbers of instances. Third, @cached_property does not prevent a race condition in multi-threaded code: two threads may both find the attribute missing and both compute the value before either has written it.
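The __slots__ constraint is easy to trip over. A minimal sketch with a hypothetical Slotted class shows the failure:

```python
from functools import cached_property

class Slotted:
    __slots__ = ("x",)  # no __dict__ slot — nowhere to store the cached value

    def __init__(self, x: float) -> None:
        self.x = x

    @cached_property
    def doubled(self) -> float:
        return self.x * 2

s = Slotted(3.0)
try:
    s.doubled  # cached_property needs instance.__dict__ and fails to find one
except TypeError as exc:
    print(f"TypeError: {exc}")
```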

When double-computation is harmful — because the computation has side effects, acquires a resource, or is expensive enough that racing wastes real time — the solution is to use a threading.Lock stored on the instance. The pattern below ensures the computation runs exactly once per instance even under concurrent access, without imposing serialization on reads after the first computation:

import threading
from functools import cached_property

class DatabaseConnection:
    def __init__(self, dsn: str) -> None:
        self._dsn = dsn
        self._schema_lock = threading.Lock()

    @cached_property
    def schema(self) -> dict:
        """Fetch the schema once; thread-safe via double-checked locking."""
        # cached_property writes to __dict__ only after this method returns,
        # so store the result ourselves inside the lock — otherwise a second
        # thread could slip in between our return and that write.
        with self._schema_lock:
            # Check again inside the lock — another thread may have
            # written to __dict__ while we were waiting.
            if "schema" in self.__dict__:
                return self.__dict__["schema"]
            result = self._fetch_schema()
            self.__dict__["schema"] = result
            return result

    def _fetch_schema(self) -> dict:
        # Simulate an expensive database round-trip
        return {"tables": ["users", "orders", "products"]}

conn = DatabaseConnection("postgresql://localhost/mydb")
print(conn.schema)  # fetched exactly once, even with concurrent access
print(conn.schema)  # returns __dict__ value, no lock needed

Warning

@cached_property is not compatible with @classmethod or @staticmethod, and it cannot be stacked directly with @property. It works exclusively on instance methods in classes with a writable __dict__.

The comparison table below summarizes how the three caching tools relate to each other:

@lru_cache
  Scope          Function-level, shared across all callers
  Eviction       LRU when maxsize reached
  Hashable args  Yes — all arguments must be hashable

@cache
  Scope          Function-level, shared across all callers
  Eviction       None — unbounded, entries are never evicted
  Hashable args  Yes — all arguments must be hashable

@cached_property
  Scope          Per-instance attribute stored in __dict__
  Eviction       Manual — delete the attribute: del obj.attr
  Hashable args  No — takes no arguments (property access only)

Python Pop Quiz
Which of the following correctly explains why @cache can be faster than @lru_cache(maxsize=128) per lookup?

Generic Dispatch and Ordering: @singledispatch and @total_ordering

@singledispatch: type-based function overloading

Python does not have native function overloading. When you need a function to behave differently based on the type of its first argument, the conventional approach is a chain of isinstance checks. This works but it couples type-specific logic into a single function body, making it harder to extend without modifying the original code.
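For contrast, a sketch of the isinstance-chain style that @singledispatch replaces (a hypothetical serialize_chain handling three types; note that bool must be checked before int because bool subclasses int):

```python
import json

def serialize_chain(obj) -> str:
    # All type-specific logic lives in one body; supporting a new type
    # means editing this function.
    if isinstance(obj, bool):  # must precede the int check
        return json.dumps({"type": "bool", "value": obj})
    if isinstance(obj, int):
        return json.dumps({"type": "int", "value": obj})
    if isinstance(obj, str):
        return json.dumps({"type": "str", "value": obj})
    raise TypeError(f"Cannot serialize type {type(obj).__name__!r}")

print(serialize_chain(42))    # {"type": "int", "value": 42}
print(serialize_chain(True))  # {"type": "bool", "value": true}
```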

@singledispatch solves this by transforming a function into a generic function. The decorated function serves as the fallback implementation for unrecognized types. Type-specific implementations are registered separately using the .register() method, which is itself a decorator. The dispatcher selects which implementation to call based on the type of the first positional argument at call time.

import json
from functools import singledispatch

@singledispatch
def serialize(obj) -> str:
    """Fallback: raise for unrecognized types."""
    raise TypeError(f"Cannot serialize type {type(obj).__name__!r}")

@serialize.register
def _(obj: int) -> str:
    return json.dumps({"type": "int", "value": obj})

@serialize.register
def _(obj: float) -> str:
    return json.dumps({"type": "float", "value": obj})

@serialize.register
def _(obj: str) -> str:
    return json.dumps({"type": "str", "value": obj})

@serialize.register
def _(obj: list) -> str:
    items = [json.loads(serialize(item)) for item in obj]
    return json.dumps({"type": "list", "items": items})

print(serialize(42))
# {"type": "int", "value": 42}

print(serialize("hello"))
# {"type": "str", "value": "hello"}

print(serialize([1, "hello", 3.14]))
# {"type": "list", "items": [{"type": "int", ...}, ...]}

# Inspect the dispatch registry
print(serialize.registry.keys())
# dict_keys([<class 'object'>, <class 'int'>, <class 'float'>, <class 'str'>, <class 'list'>])

# Manually look up the implementation for a type
print(serialize.dispatch(int))
# <function _ at 0x...> — the registered int implementation

When the type of the argument does not match any registered implementation exactly, Python uses the method resolution order of the argument's class to find the closest registered type. If none is found, the fallback registered to object (the original decorated function) is used. This means you can register implementations for abstract base classes from collections.abc and they will apply to all concrete subclasses automatically.

The deeper architectural value of @singledispatch is that registrations are not limited to the module where the generic function is defined. Any module that imports the function can call .register() on it, adding support for new types without modifying the original file. This is the open/closed principle in its most literal form: the function is open for extension (new type handlers) but closed for modification (the core logic is untouched). This is the pattern that makes @singledispatch more than a style preference over isinstance chains — it makes type-specific behaviour pluggable across module boundaries.

# core.py — defines the generic function
from functools import singledispatch

@singledispatch
def render(obj) -> str:
    raise TypeError(f"No renderer for {type(obj).__name__!r}")

@render.register(str)
def _(obj: str) -> str:
    return obj

@render.register(int)
def _(obj: int) -> str:
    return str(obj)

# --- extensions.py — adds support for a third-party type without touching core.py
import decimal
from core import render  # import the generic function, then extend it

@render.register(decimal.Decimal)
def _(obj: decimal.Decimal) -> str:
    return f"{obj:.2f}"

# The Decimal handler is now active everywhere render() is called,
# including code that imported render before this module was loaded.
print(render(decimal.Decimal("3.14159")))  # 3.14
print(render("hello"))                     # hello
print(render(42))                          # 42

Registering implementations against abstract base classes from collections.abc covers whole families of concrete types at once:

from functools import singledispatch
from collections.abc import Mapping, Sequence

@singledispatch
def describe(obj) -> str:
    return f"unknown: {obj!r}"

@describe.register(Mapping)
def _(obj) -> str:
    return f"mapping with {len(obj)} keys"

@describe.register(Sequence)
def _(obj) -> str:
    return f"sequence of length {len(obj)}"

print(describe({"a": 1, "b": 2}))    # mapping with 2 keys
print(describe([10, 20, 30]))         # sequence of length 3
print(describe((1, 2)))               # sequence of length 2
print(describe("hello"))              # sequence of length 5
print(describe(42))                   # unknown: 42

Note that str inherits from Sequence, so it dispatches to the sequence implementation unless you register a more specific handler for str first. Specificity is resolved through the MRO, and the most specific registered ancestor wins.
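A short sketch of that fix: registering str explicitly makes it win over the broader Sequence handler.

```python
from functools import singledispatch
from collections.abc import Sequence

@singledispatch
def describe(obj) -> str:
    return f"unknown: {obj!r}"

@describe.register(Sequence)
def _(obj) -> str:
    return f"sequence of length {len(obj)}"

@describe.register(str)
def _(obj: str) -> str:
    return f"string: {obj!r}"

print(describe("hello"))  # string: 'hello' — the exact match beats the ABC
print(describe([1, 2]))   # sequence of length 2
```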

@singledispatchmethod: dispatch on instance and class methods

For method dispatch inside a class, singledispatch does not work directly because the first argument is always self. Python 3.8 added singledispatchmethod specifically for this case. It dispatches on the second argument (the first non-self argument). Note: a caching regression was introduced in singledispatchmethod in Python 3.11 and removed in Python 3.13.5 — if you are running 3.11 through 3.13.4 with singledispatchmethod in performance-critical code, upgrade to 3.13.5 or later (Python 3.13.5 changelog).

from functools import singledispatchmethod

class Converter:
    @singledispatchmethod
    def to_string(self, value) -> str:
        raise TypeError(f"No handler for {type(value).__name__!r}")

    @to_string.register(int)
    def _(self, value: int) -> str:
        return f"int({value})"

    @to_string.register(float)
    def _(self, value: float) -> str:
        return f"float({value:.4f})"

    @to_string.register(bool)
    def _(self, value: bool) -> str:
        return "true" if value else "false"

c = Converter()
print(c.to_string(42))       # int(42)
print(c.to_string(3.14159))  # float(3.1416)
print(c.to_string(True))     # true

Note

bool is a subclass of int in Python. Without a specific registration for bool, boolean arguments would be dispatched to the int handler. When type specificity matters, always register the more specific type explicitly.

@total_ordering: filling in missing comparison methods

Python's data model defines six rich comparison methods: __lt__, __le__, __gt__, __ge__, __eq__, and __ne__. Writing all six for every comparable class is mechanical and error-prone. @total_ordering reduces the requirement: provide __eq__ and exactly one of __lt__, __le__, __gt__, or __ge__, and the decorator fills in the remaining four.

from functools import total_ordering

@total_ordering
class SemanticVersion:
    """A comparable semantic version number."""

    def __init__(self, major: int, minor: int, patch: int):
        self.major = major
        self.minor = minor
        self.patch = patch

    def __eq__(self, other: object) -> bool:
        if not isinstance(other, SemanticVersion):
            return NotImplemented
        return (self.major, self.minor, self.patch) == (other.major, other.minor, other.patch)

    def __lt__(self, other: object) -> bool:
        if not isinstance(other, SemanticVersion):
            return NotImplemented
        return (self.major, self.minor, self.patch) < (other.major, other.minor, other.patch)

    def __repr__(self) -> str:
        return f"SemanticVersion({self.major}, {self.minor}, {self.patch})"

v1 = SemanticVersion(1, 2, 0)
v2 = SemanticVersion(1, 3, 0)
v3 = SemanticVersion(2, 0, 0)

print(v1 < v2)    # True
print(v2 > v1)    # True  — derived from __lt__
print(v1 <= v1)   # True  — derived from __eq__ and __lt__
print(v3 >= v2)   # True  — derived
print(sorted([v3, v1, v2]))
# [SemanticVersion(1, 2, 0), SemanticVersion(1, 3, 0), SemanticVersion(2, 0, 0)]

Returning NotImplemented (not raising it) from comparison methods is the correct pattern when the other operand's type is unrecognized. This allows Python to try the reflected operation on the other operand. Raising TypeError directly would suppress that fallback and break comparisons between compatible types that happen to be defined in different modules.
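A sketch with a hypothetical Meters class shows the fallback in action — equality degrades gracefully to False across unrelated types, while ordering raises a clear TypeError only after both operands have declined:

```python
from functools import total_ordering

@total_ordering
class Meters:
    def __init__(self, value: float) -> None:
        self.value = value

    def __eq__(self, other: object) -> bool:
        if not isinstance(other, Meters):
            return NotImplemented
        return self.value == other.value

    def __lt__(self, other: object) -> bool:
        if not isinstance(other, Meters):
            return NotImplemented
        return self.value < other.value

# Both sides return NotImplemented, so == falls back to identity: False
print(Meters(1.0) == "one metre")  # False

try:
    Meters(1.0) < "one metre"
except TypeError as exc:
    print(exc)  # '<' not supported between instances of 'Meters' and 'str'
```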

One meaningful trade-off: the derived comparison methods produced by @total_ordering are slower than hand-written implementations because they are computed from the provided methods at call time. The official documentation notes this cost explicitly, and recommends implementing all six manually if benchmarking shows the derived methods are a bottleneck in performance-critical code. In practice, this matters almost exclusively in tight sorting loops over very large collections.

When you do need to hand-implement all six, the pattern is mechanical but exact. The key discipline is that every method must return NotImplemented — not raise an exception — for unrecognised types, and the six methods must be mutually consistent so that a > b always agrees with not (a <= b). Here is the idiomatic form:

class FastVersion:
    """SemanticVersion with all six comparison methods hand-implemented.

    Use this form when this class appears in a tight sorting loop over
    tens of thousands of objects and profiling has identified the derived
    @total_ordering methods as measurable overhead.
    """

    def __init__(self, major: int, minor: int, patch: int) -> None:
        self._v = (major, minor, patch)

    def __eq__(self, other: object) -> bool:
        if not isinstance(other, FastVersion):
            return NotImplemented
        return self._v == other._v

    def __ne__(self, other: object) -> bool:
        if not isinstance(other, FastVersion):
            return NotImplemented
        return self._v != other._v

    def __lt__(self, other: object) -> bool:
        if not isinstance(other, FastVersion):
            return NotImplemented
        return self._v < other._v

    def __le__(self, other: object) -> bool:
        if not isinstance(other, FastVersion):
            return NotImplemented
        return self._v <= other._v

    def __gt__(self, other: object) -> bool:
        if not isinstance(other, FastVersion):
            return NotImplemented
        return self._v > other._v

    def __ge__(self, other: object) -> bool:
        if not isinstance(other, FastVersion):
            return NotImplemented
        return self._v >= other._v

    def __repr__(self) -> str:
        return f"FastVersion{self._v}"

# Tuple comparison handles all version semantics correctly —
# (2, 0, 0) > (1, 9, 9) as expected.
versions = [FastVersion(1, 3, 0), FastVersion(2, 0, 0), FastVersion(1, 2, 9)]
print(sorted(versions))
# [FastVersion(1, 2, 9), FastVersion(1, 3, 0), FastVersion(2, 0, 0)]

Note that storing the version as a single tuple self._v rather than three separate attributes is itself the deeper optimisation here. It reduces attribute lookups in every comparison from three to one, which matters more than whether @total_ordering is used. Profile before rewriting.

Python Pop Quiz
When @singledispatch receives an argument whose type has no registered handler, what does the dispatcher do?

Common Pitfalls That Do Not Show Up in the Docs

The official Python documentation describes correct behavior. It rarely describes what goes wrong in practice. The following are failure modes that appear in real codebases but are not prominently labeled anywhere in the functools reference.

Forgetting @wraps on stacked decorators

When decorators are stacked, each layer must apply @wraps to preserve the original function's identity. A single unwrapped layer in the middle of the stack will overwrite the metadata that the outer layers worked to preserve.

from functools import wraps

def outer(func):
    @wraps(func)  # correct
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

def inner(func):
    # @wraps omitted — this breaks the chain for anything outer sees
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@outer
@inner
def my_function():
    """My docstring."""
    pass

print(my_function.__name__)  # wrapper — inner broke it
print(my_function.__doc__)   # None — docstring gone

The fix is simple: add @wraps(func) inside inner as well. But the failure mode is silent — there is no runtime error, only corrupted metadata.

Using @lru_cache on methods when memory matters

When an instance method is decorated with @lru_cache, self is part of the cache key, which means the cache dictionary holds a strong reference to every instance it has seen. Those instances cannot be garbage-collected until the cache is cleared, regardless of whether any other reference to them exists. In applications that create many short-lived objects per request — web handlers, task workers, parser nodes — this produces a steady memory leak that grows until the process restarts or the cache is manually cleared.
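The retention is directly observable with a weakref. The Handler class below is illustrative; the mechanism is the same for any @lru_cache-decorated instance method:

```python
import gc
import weakref
from functools import lru_cache

class Handler:
    @lru_cache(maxsize=None)
    def process(self, x: int) -> int:
        return x * 2

h = Handler()
h.process(1)                      # self is now embedded in a cache key
ref = weakref.ref(h)
del h
gc.collect()
still_cached = ref() is not None  # True — the class-level cache keeps h alive
print(still_cached)

Handler.process.cache_clear()
gc.collect()
collected = ref() is None         # True — clearing the cache releases it
print(collected)
```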

There are three patterns to address this. The first is to use @cached_property instead, which stores the result in the instance's own __dict__ and is released when the instance is collected. The second is to scope the cache to the instance's lifetime by using a method-level @lru_cache that is created fresh per instance rather than at class definition time. The third is to clear the class-level cache explicitly at a point where you know old instances are done.

from functools import lru_cache

class QueryBuilder:
    def __init__(self, table: str) -> None:
        self.table = table
        # Per-instance cache: bind a fresh lru_cache to this instance so
        # the cache's lifetime matches the instance rather than the class
        self._build_query = lru_cache(maxsize=64)(self._build_query_impl)

    def _build_query_impl(self, columns: tuple[str, ...], where: str) -> str:
        col_str = ", ".join(columns)
        return f"SELECT {col_str} FROM {self.table} WHERE {where}"

# Simpler pattern: just use @cached_property for single computed values
from functools import cached_property

class Report:
    def __init__(self, rows: list[dict]) -> None:
        self._rows = rows

    @cached_property
    def summary(self) -> dict:
        """Compute once; released when the Report instance is collected."""
        total = sum(r.get("amount", 0) for r in self._rows)
        return {"count": len(self._rows), "total": total}

r = Report([{"amount": 100}, {"amount": 250}])
print(r.summary)  # {'count': 2, 'total': 350}
# When r goes out of scope, r.summary is also released — no cache leak

Forgetting that str is a Sequence in @singledispatch

Because str is a virtual subclass of Sequence in Python's ABC hierarchy, a generic Sequence handler will match string arguments unless a more specific handler for str is also registered. Registration order does not matter — the dispatcher resolves the most specific registered match for the argument's type, consulting its MRO and any registered ABCs — but the str handler must be registered before the call is made. If you want different behavior for strings versus lists, register str explicitly.

from functools import singledispatch
from collections.abc import Sequence

@singledispatch
def describe(obj) -> str:
    return f"unknown: {obj!r}"

@describe.register(Sequence)
def _(obj) -> str:
    return f"sequence of length {len(obj)}"

# Without a str registration, strings match Sequence
print(describe("hello"))      # sequence of length 5
print(describe([1, 2, 3]))    # sequence of length 3

# Add a specific str handler
@describe.register(str)
def _(obj) -> str:
    return f"string: {obj!r}"

print(describe("hello"))      # string: 'hello'  — now distinct
print(describe([1, 2, 3]))    # sequence of length 3  — unchanged

Applying @total_ordering without returning NotImplemented

The derived methods that @total_ordering generates depend on your provided methods returning NotImplemented (not raising TypeError) when the operand's type is not recognized. If your __eq__ or __lt__ raises TypeError directly instead, the derived methods will propagate that exception rather than giving Python's operator dispatch mechanism a chance to try the reflected operation on the other operand. Always return NotImplemented as a value, not as a raised exception.

from functools import total_ordering

@total_ordering
class Priority:
    """Task priority with named levels."""
    _ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

    def __init__(self, level: str):
        if level not in self._ORDER:
            raise ValueError(f"Unknown priority level: {level!r}")
        self.level = level

    def __eq__(self, other: object) -> bool:
        if not isinstance(other, Priority):
            return NotImplemented
        return self._ORDER[self.level] == self._ORDER[other.level]

    def __lt__(self, other: object) -> bool:
        if not isinstance(other, Priority):
            return NotImplemented
        return self._ORDER[self.level] < self._ORDER[other.level]

    def __repr__(self) -> str:
        return f"Priority({self.level!r})"

tasks = [Priority("high"), Priority("low"), Priority("critical"), Priority("medium")]
print(sorted(tasks))
# [Priority('low'), Priority('medium'), Priority('high'), Priority('critical')]

print(Priority("high") > Priority("medium"))   # True
print(Priority("low") <= Priority("low"))      # True

Combining @total_ordering with @dataclass

Python dataclasses can generate comparison methods automatically when order=True is passed to @dataclass, comparing instances as tuples of their fields in declaration order. @total_ordering gives you explicit control over the comparison logic when that natural field order does not capture the semantics you need. Do not combine the two: order=True already generates all four ordering methods, so stacking @total_ordering on top is redundant at best and misleading at worst — pick one.

from dataclasses import dataclass
from functools import total_ordering

# @dataclass handles __init__ and __repr__; eq=False keeps __eq__ ours.
# order=False (default) means no auto comparison methods — @total_ordering fills those in.
@total_ordering
@dataclass(eq=False)
class FileSize:
    size_bytes: int
    label: str = ""

    def __eq__(self, other: object) -> bool:
        if not isinstance(other, FileSize):
            return NotImplemented
        return self.size_bytes == other.size_bytes

    def __lt__(self, other: object) -> bool:
        if not isinstance(other, FileSize):
            return NotImplemented
        return self.size_bytes < other.size_bytes

small  = FileSize(1_024,           "1 KB")
medium = FileSize(1_048_576,       "1 MB")
large  = FileSize(1_073_741_824,   "1 GB")

print(small < medium)    # True
print(large >= medium)   # True
print(sorted([large, small, medium]))
# [FileSize(size_bytes=1024, label='1 KB'), FileSize(size_bytes=1048576, ...), ...]

Python Pop Quiz
Why does applying @lru_cache to an instance method cause a memory leak in applications that create many short-lived objects?

Key Takeaways

  1. Always apply @wraps inside custom decorators. Skipping it silently corrupts function metadata across every tool that reads __name__, __doc__, or __annotations__ at runtime — including test frameworks, documentation generators, and type checkers. For class-based decorators, use functools.update_wrapper(self, func) in __init__. When stacking decorators, every layer in the stack must apply @wraps independently. When your wrapper intentionally changes a specific attribute like __doc__, pass a custom assigned tuple to update_wrapper that excludes it — WRAPPER_ASSIGNMENTS is a tuple constant you can filter directly.
  2. Choose the right caching tool for the right scope. Use @lru_cache when inputs are unbounded and a fixed memory budget is required. Use @cache for pure functions with a small, finite domain. Use @cached_property for expensive computed instance attributes — and if thread safety is required, use a per-instance lock with double-checked locking inside the method body. Use typed=True with @lru_cache when the return value depends on the concrete type of the arguments; inspect active settings with cache_parameters(). Never apply @lru_cache to instance methods in classes that create many short-lived objects — use @cached_property instead to keep cache lifetime scoped to the instance.
  3. Use @singledispatch to eliminate isinstance chains and keep type-specific logic open for extension. Registering against abstract base classes from collections.abc extends coverage to all virtual subclasses without modifying the dispatch function. The deeper value is that .register() can be called from any module that imports the generic function, making type handlers pluggable across module boundaries without touching the original file. For method dispatch inside a class, use singledispatchmethod instead. If running Python 3.11 through 3.13.4, be aware of the singledispatchmethod caching regression fixed in 3.13.5.
  4. @total_ordering removes boilerplate but has a measurable performance cost on hot paths. For domain objects compared rarely or in small collections, it is the correct default. For classes sorted in tight loops over large data sets, hand-implement all six comparison methods and store the comparison key as a single tuple attribute — this reduces attribute lookups from N to 1 per comparison and removes the derived-method overhead entirely. Always return NotImplemented rather than raising TypeError.
  5. The pitfalls are structural, not syntactic. Silent metadata corruption from an unwrapped inner decorator, memory retention from caching instance methods, string-as-sequence dispatch surprises in @singledispatch, and TypeError-raising comparison methods that break @total_ordering — none of these produce immediate exceptions. Understanding the mechanisms makes the failure modes predictable, and knowing the deeper solution patterns (scoped caches, selective attribute copying, pluggable dispatch registrations, tuple-keyed comparisons) gives you tools that address the root cause rather than the symptom.
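As a concrete sketch of the selective-copy pattern from the first takeaway — with_new_doc and greet are hypothetical names — a wrapper can replace __doc__ deliberately while still copying everything else:

```python
from functools import update_wrapper, WRAPPER_ASSIGNMENTS

def with_new_doc(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    wrapper.__doc__ = f"Wrapped variant of {func.__name__}."
    # Copy every standard attribute except __doc__, which we set ourselves
    assigned = tuple(a for a in WRAPPER_ASSIGNMENTS if a != "__doc__")
    update_wrapper(wrapper, func, assigned=assigned)
    return wrapper

@with_new_doc
def greet(name: str) -> str:
    """Original docstring."""
    return f"hello {name}"

print(greet.__name__)  # greet — copied from the original
print(greet.__doc__)   # Wrapped variant of greet. — deliberately replaced
```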

The standard library decorators in functools represent patterns that recur in nearly every non-trivial Python codebase. Learning to apply them precisely — knowing their constraints, their thread-safety characteristics, their version requirements, and their interaction with other language features — produces code that is both more correct and substantially easier for other engineers to read and maintain.

Frequently Asked Questions

What is the difference between @cache and @lru_cache in Python?

@cache is an unbounded memoization decorator introduced in Python 3.9, equivalent to @lru_cache(maxsize=None) but without the eviction bookkeeping, making it slightly faster per lookup. Use @cache when the input domain is small and bounded. Use @lru_cache when you need a fixed memory ceiling — it evicts the least recently used entries once maxsize is reached. The default maxsize is 128, and the functools documentation notes that the LRU machinery performs best when maxsize is a power of two. Source: Python docs: functools.cache.
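A tiny maxsize makes the eviction visible — square here is a stand-in for any cacheable function:

```python
from functools import lru_cache

@lru_cache(maxsize=2)
def square(n: int) -> int:
    return n * n

square(1); square(2); square(3)  # inserting 3 evicts 1 (least recently used)
square(1)                        # miss — 1 was evicted, so it is recomputed
print(square.cache_info())
# CacheInfo(hits=0, misses=4, maxsize=2, currsize=2)
```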

Why should I always use @functools.wraps in my custom decorators?

When a decorator wraps a function, the wrapper replaces the original function object. Without @functools.wraps, attributes like __name__, __doc__, __module__, __qualname__, and __annotations__ all reflect the wrapper rather than the original function. Since Python 3.12, __type_params__ is also copied when @wraps is used. This breakage affects introspection tools, IDEs, doctest, Sphinx, pytest fixture discovery, and any runtime code that reads function metadata. The @wraps decorator also sets __wrapped__, which lets inspect.signature() traverse the decorator chain correctly.
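A quick sketch (logged and area are illustrative names) showing the __wrapped__ chain in action:

```python
import inspect
from functools import wraps

def logged(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@logged
def area(width: float, height: float) -> float:
    return width * height

# @wraps set __wrapped__, and inspect.signature() follows that chain,
# so the reported signature is the original one, not (*args, **kwargs)
print(inspect.signature(area))     # (width: float, height: float) -> float
print(area.__wrapped__(3.0, 2.0))  # 6.0
```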

What does @singledispatch do in Python?

@singledispatch transforms a function into a generic function that dispatches to different implementations based on the type of its first positional argument. You register type-specific implementations using the .register() method. The dispatcher resolves the implementation using the argument type's MRO, so registering against abstract base classes from collections.abc covers all concrete subclasses automatically. For dispatch inside a class, use singledispatchmethod instead, which dispatches on the type of the first argument after self. Source: Python docs: functools.singledispatch.
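A minimal singledispatchmethod sketch — the Formatter class is hypothetical, but the registration pattern is the standard one:

```python
from functools import singledispatchmethod

class Formatter:
    @singledispatchmethod
    def render(self, value) -> str:
        return str(value)          # fallback for unregistered types

    @render.register
    def _(self, value: int) -> str:
        return f"{value:,}"

    @render.register
    def _(self, value: float) -> str:
        return f"{value:.2f}"

f = Formatter()
print(f.render(1234567))  # 1,234,567
print(f.render(3.14159))  # 3.14
print(f.render([1, 2]))   # [1, 2] — falls back to the default
```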

When should I use @cached_property instead of @property?

Use @cached_property when the computed value is expensive to produce and will not change for the lifetime of the instance. The decorator computes the value once on first access and stores it directly in the instance __dict__ under the same name, bypassing the descriptor on all subsequent reads. Use @property when you need the value recomputed on every access or when you need setter and deleter support. Note that @cached_property requires a mutable instance __dict__ and interferes with PEP 412 key-sharing dictionaries, which can increase per-instance memory usage. Source: Python docs: functools.cached_property.
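A small sketch (Dataset is an illustrative name) showing the compute-once behavior, and that the cached value lives in the instance __dict__ and can be deleted to force recomputation:

```python
from functools import cached_property

class Dataset:
    def __init__(self, values: list[float]) -> None:
        self._values = values
        self.compute_calls = 0

    @cached_property
    def mean(self) -> float:
        self.compute_calls += 1
        return sum(self._values) / len(self._values)

d = Dataset([2, 4, 6])
print(d.mean)           # 4.0 — computed on first access
print(d.mean)           # 4.0 — read straight from d.__dict__
print(d.compute_calls)  # 1
del d.mean              # deleting the cached entry forces recomputation
print(d.mean)           # 4.0
print(d.compute_calls)  # 2
```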

How does @total_ordering reduce boilerplate in Python classes?

@total_ordering fills in the missing rich comparison methods from the two you provide: __eq__ plus any one of __lt__, __le__, __gt__, or __ge__. Without it, a fully comparable class requires six method definitions. With it, you write two and the decorator derives the rest. The trade-off is a per-call overhead for the derived methods; in tight sorting loops over large collections, hand-implementing all six is faster. Always return NotImplemented rather than raising TypeError from comparison methods — the derived implementations rely on this. Source: Python docs: functools.total_ordering.

Does @lru_cache work with methods on a class?

Yes, but with an important caveat. When @lru_cache is applied to an instance method, self becomes part of the cache key. This prevents result confusion across instances, but it also prevents the instance from being garbage-collected as long as the cache holds a reference to it. In applications that create many short-lived instances — web servers, task queues, per-request objects — this causes steady memory growth unless the cache is cleared explicitly. For per-instance memoization of a computed attribute, @cached_property is the right tool. Source: Python FAQ: How do I cache method calls?

What is the typed parameter in @lru_cache?

When typed=True is passed to @lru_cache, arguments of different types are cached under separate keys even when their values are equal. For example, f(3) and f(3.0) produce two distinct cache entries with typed=True, because int and float are different types. The default is typed=False. To inspect the current maxsize and typed settings on a cached function at runtime, call cache_parameters(), which was added in Python 3.9. Source: Python docs: functools.lru_cache.
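An illustrative sketch of both typed=True and cache_parameters() together:

```python
from functools import lru_cache

@lru_cache(maxsize=None, typed=True)
def double(x):
    return x * 2

double(3)     # cached under an int key
double(3.0)   # cached separately under a float key
print(double.cache_info().currsize)  # 2 — two entries for equal values
print(double.cache_parameters())     # {'maxsize': None, 'typed': True}
```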