The functools module is part of Python's standard library and exists for one purpose: providing higher-order functions, which are functions that act on or return other functions. It ships with every Python installation, requires no external dependencies, and contains tools that range from simple decorator helpers to full caching systems and function overloading mechanisms. This article covers every public function in the module with working code examples, explains when each one is useful, and includes the Placeholder sentinel added in Python 3.14.
At the time of writing, the module's __all__ list contains 15 names: update_wrapper, wraps, WRAPPER_ASSIGNMENTS, WRAPPER_UPDATES, total_ordering, cache, cmp_to_key, lru_cache, reduce, partial, partialmethod, singledispatch, singledispatchmethod, cached_property, and Placeholder. These fall into six categories: caching, partial application, cumulative operations, decorator utilities, comparison helpers, and function dispatch. Understanding these categories makes the module far easier to navigate than trying to memorize each function individually.
Caching: lru_cache, cache, and cached_property
The caching tools in functools are among the most frequently used features in the entire standard library. They store the return values of function calls so that repeated calls with the same arguments skip the computation and return the stored result directly.
lru_cache
lru_cache is a decorator that wraps a function with a memoizing callable. It stores up to maxsize results (defaulting to 128) and uses a Least Recently Used eviction strategy. When the cache is full, the entry that has gone unused the longest gets discarded to make room for new results.
from functools import lru_cache

@lru_cache(maxsize=128)
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(50))
# 12586269025
print(fibonacci.cache_info())
# CacheInfo(hits=48, misses=51, maxsize=128, currsize=51)
Without caching, this recursive Fibonacci function has exponential time complexity. With @lru_cache, each unique input is computed once. The cache_info() method reports hits, misses, the maxsize setting, and how many entries are currently stored. The cache_clear() method empties the cache entirely, which is useful for testing or when the underlying data changes.
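As a quick sketch of that invalidation workflow, using a hypothetical lookup function standing in for a computation whose underlying data can change:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def lookup(key):
    # Stand-in for an expensive computation or query
    return key * 2

lookup(10)
lookup(10)                  # second call is a cache hit
print(lookup.cache_info())  # CacheInfo(hits=1, misses=1, maxsize=128, currsize=1)

lookup.cache_clear()        # discard every stored result
print(lookup.cache_info())  # CacheInfo(hits=0, misses=0, maxsize=128, currsize=0)
```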
Setting typed=True forces the cache to treat arguments of different types as distinct entries. By default, f(3) and f(3.0) are treated as the same call because 3 == 3.0 is True. With typed=True, they get separate cache entries.
@lru_cache(maxsize=64, typed=True)
def convert_temperature(value, scale="celsius"):
    if scale == "celsius":
        return value * 9 / 5 + 32
    return (value - 32) * 5 / 9

# These are cached as separate entries with typed=True
print(convert_temperature(100))    # 212.0
print(convert_temperature(100.0))  # 212.0 (separate cache entry)
All arguments to a cached function must be hashable because the cache uses them as dictionary keys. Passing a list or dictionary as an argument will raise a TypeError. Convert mutable inputs to immutable forms (like tuple or frozenset) before calling the cached function.
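A minimal sketch of that workaround, using a hypothetical total function: convert the list to a tuple at the call site, since tuples of hashable items are themselves hashable.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def total(values):
    # Arguments must be hashable, so callers pass a tuple rather than a list
    return sum(values)

data = [3, 1, 4, 1, 5]
print(total(tuple(data)))  # 14

try:
    total(data)  # a list is unhashable
except TypeError as err:
    print("unhashable argument:", err)
```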
cache
Added in Python 3.9, cache is a lightweight alternative to lru_cache. It is functionally identical to lru_cache(maxsize=None), meaning it stores every unique call result without ever evicting entries. Because it skips eviction logic, it is slightly smaller and faster than a bounded lru_cache.
from functools import cache

@cache
def factorial(n):
    return n * factorial(n - 1) if n else 1

print(factorial(10))  # 3628800
print(factorial(5))   # 120 (cache hit, no computation)
print(factorial(12))  # 479001600 (only 2 new recursive calls)
Because @cache never evicts entries, use it only when the set of unique inputs is bounded and small enough to fit in memory. For long-running processes with unpredictable input variety, @lru_cache with an explicit maxsize is safer.
cached_property
cached_property works like the built-in @property decorator, but the result is computed once and then stored as a normal instance attribute. Subsequent accesses read the cached value directly from the instance dictionary without re-executing the getter method.
from functools import cached_property
import statistics

class DataSet:
    def __init__(self, values):
        self._values = values

    @cached_property
    def standard_deviation(self):
        print("Computing standard deviation...")
        # Population standard deviation of the stored values
        return statistics.pstdev(self._values)

data = DataSet([2, 4, 4, 4, 5, 5, 7, 9])
print(data.standard_deviation)  # Computing standard deviation... 2.0
print(data.standard_deviation)  # 2.0 (no recomputation)

# Invalidate the cache by deleting the attribute
del data.standard_deviation
print(data.standard_deviation)  # Computing standard deviation... 2.0
Deleting the attribute clears the cached value, forcing the next access to recompute it. This invalidation mechanism is useful when the underlying data changes after initial computation. Note that cached_property requires the instance to have a mutable __dict__, so it will not work on classes that define __slots__ without including __dict__.
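A short sketch of that __slots__ limitation, with a hypothetical Slotted class: the attribute access fails because the instance has no __dict__ for the computed value to be stored in.

```python
from functools import cached_property

class Slotted:
    __slots__ = ("n",)  # no per-instance __dict__, so there is nowhere to cache

    def __init__(self, n):
        self.n = n

    @cached_property
    def doubled(self):
        return self.n * 2

try:
    Slotted(5).doubled  # raises TypeError: no __dict__ to cache into
except TypeError as err:
    print("cached_property failed:", err)
```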
Partial Application and Reduce
partial
partial freezes some arguments of an existing function, producing a new callable with a simplified signature. The resulting partial object stores the original function, the frozen positional arguments, and the frozen keyword arguments as read-only attributes: .func, .args, and .keywords.
from functools import partial

def power(base, exponent):
    return base ** exponent

square = partial(power, exponent=2)
cube = partial(power, exponent=3)
print(square(7))  # 49
print(cube(4))    # 64

# Inspect the partial object
print(square.func)      # <function power at 0x...>
print(square.keywords)  # {'exponent': 2}
A common real-world use for partial is creating specialized versions of general-purpose functions for use as callbacks, where the callback interface expects a function with fewer arguments than the full function requires.
from functools import partial

# Create base converters from the built-in int()
from_binary = partial(int, base=2)
from_hex = partial(int, base=16)
print(from_binary("10110"))  # 22
print(from_hex("ff"))        # 255
Placeholder (Python 3.14+)
Before Python 3.14, partial could only freeze positional arguments from left to right. If you needed to freeze the second positional argument while leaving the first open, you had to use keyword arguments, which does not work for positional-only parameters. Python 3.14 introduced the Placeholder sentinel to solve this. It reserves a slot in the positional arguments that gets filled when the partial object is called.
# Python 3.14+
from functools import partial, Placeholder

# str.replace(self, old, new): all parameters are positional-only
# Freeze the replacement as the empty string; leave self and old open
remove = partial(str.replace, Placeholder, Placeholder, "")
message = "Hello, dear dear world!"
print(remove(message, " dear"))
# Hello, world!

# Chain partials: freeze " dear" as the old string
remove_dear = partial(remove, Placeholder, " dear")
print(remove_dear(message))
# Hello, world!
Placeholder is a singleton, similar to None. It is not a class you instantiate. You can alias it for brevity, such as from functools import Placeholder as _P, though using a bare _ is risky since _ is conventionally used as a throwaway variable.
partialmethod
partialmethod works like partial but is designed for use inside class definitions. It produces a descriptor rather than a direct callable, which means it handles self (or cls) correctly when accessed as a method.
from functools import partialmethod

class Cell:
    def __init__(self):
        self.alive = False

    def set_state(self, state):
        self.alive = state

    revive = partialmethod(set_state, True)
    kill = partialmethod(set_state, False)

cell = Cell()
cell.revive()
print(cell.alive)  # True
cell.kill()
print(cell.alive)  # False
reduce
reduce applies a two-argument function cumulatively to the items of an iterable, reducing the sequence to a single value. It was a built-in function in Python 2 and was moved into functools in Python 3.
from functools import reduce
from operator import mul

numbers = [1, 2, 3, 4, 5]

# Product of all elements
product = reduce(mul, numbers)
print(product)  # 120

# With an initial value
product_with_start = reduce(mul, numbers, 10)
print(product_with_start)  # 1200

# Flatten a list of lists
nested = [[1, 2], [3, 4], [5, 6]]
flat = reduce(lambda acc, lst: acc + lst, nested)
print(flat)  # [1, 2, 3, 4, 5, 6]
The optional third argument to reduce is the initial value, which is placed before the items of the iterable in the calculation. If the iterable is empty and no initial value is provided, reduce raises a TypeError.
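Both behaviors are easy to demonstrate with operator.add:

```python
from functools import reduce
from operator import add

# With an initializer, reduce is safe on an empty iterable
print(reduce(add, [], 0))  # 0

# Without one, an empty iterable raises TypeError
try:
    reduce(add, [])
except TypeError as err:
    print("empty reduce:", err)
```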
For simple aggregations like sums and products, Python's built-in sum() and math.prod() are clearer and faster. Use reduce when you need a custom accumulation operation that does not have a built-in equivalent, such as merging dictionaries or composing functions.
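Both of those custom accumulations can be written directly with reduce. The compose helper below is a hypothetical name for illustration, not part of functools:

```python
from functools import reduce

# Merge dictionaries left to right; later keys win
configs = [{"debug": False}, {"debug": True, "port": 8000}, {"host": "localhost"}]
merged = reduce(lambda acc, d: {**acc, **d}, configs, {})
print(merged)  # {'debug': True, 'port': 8000, 'host': 'localhost'}

# Compose single-argument functions: compose(f, g)(x) == f(g(x))
def compose(*funcs):
    return reduce(lambda f, g: lambda x: f(g(x)), funcs)

add_one = lambda x: x + 1
double = lambda x: x * 2
print(compose(add_one, double)(10))  # add_one(double(10)) == 21
```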
Decorator Utilities, Comparison Helpers, and Dispatch
wraps and update_wrapper
functools.wraps is a decorator factory that preserves the original function's metadata when writing custom decorators. Without it, the wrapper function replaces the decorated function's __name__, __doc__, __qualname__, and __annotations__, which causes issues with debugging tools, help(), and documentation generators.
import time
from functools import wraps

def timer(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.4f}s")
        return result
    return wrapper

@timer
def process_data(items):
    """Process a list of items and return the result."""
    return [x * 2 for x in items]

print(process_data.__name__)     # process_data (not "wrapper")
print(process_data.__doc__)      # Process a list of items and return the result.
print(process_data.__wrapped__)  # <function process_data at 0x...>
wraps is a convenience wrapper around update_wrapper. The two are functionally identical, but wraps is used as a decorator while update_wrapper is called as a regular function. In practice, wraps is used almost exclusively because the decorator form is cleaner.
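For completeness, here is a timing decorator written with a direct update_wrapper call instead of @wraps; the timed and greet names are just for illustration:

```python
import time
from functools import update_wrapper

def timed(func):
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__}: {time.perf_counter() - start:.4f}s")
        return result
    update_wrapper(wrapper, func)  # same effect as decorating wrapper with @wraps(func)
    return wrapper

@timed
def greet(name):
    """Return a greeting."""
    return f"Hello, {name}!"

print(greet.__name__)  # greet
```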
wraps also adds a __wrapped__ attribute to the wrapper function that points to the original function. This is useful for bypassing the decorator entirely during testing or for introspection tools that need access to the unwrapped implementation.
total_ordering
When building classes that support comparison operators, Python requires you to implement each comparison method individually: __eq__, __lt__, __le__, __gt__, and __ge__. The @total_ordering decorator reduces this to two: you define __eq__ and any one of the other four, and the decorator supplies the rest.
from functools import total_ordering

@total_ordering
class Version:
    def __init__(self, major, minor, patch):
        self.major = major
        self.minor = minor
        self.patch = patch

    def __eq__(self, other):
        return (self.major, self.minor, self.patch) == \
               (other.major, other.minor, other.patch)

    def __lt__(self, other):
        return (self.major, self.minor, self.patch) < \
               (other.major, other.minor, other.patch)

    def __repr__(self):
        return f"Version({self.major}.{self.minor}.{self.patch})"

v1 = Version(2, 1, 0)
v2 = Version(2, 3, 1)
v3 = Version(2, 1, 0)
print(v1 < v2)   # True
print(v1 >= v3)  # True (auto-generated by total_ordering)
print(v2 > v1)   # True (auto-generated)
print(v1 <= v2)  # True (auto-generated)
The auto-generated methods derive their logic from the two you provide. Defining __eq__ and __lt__ means the decorator can infer __gt__ as not (__lt__ or __eq__), __le__ as __lt__ or __eq__, and __ge__ as not __lt__.
cmp_to_key
cmp_to_key converts an old-style comparison function (which returns -1, 0, or 1) into a key function compatible with Python 3's sorted(), min(), max(), and similar functions. This is primarily a migration tool for Python 2 codebases that used cmp parameters.
from functools import cmp_to_key

def compare_by_last_char(a, b):
    if a[-1] > b[-1]:
        return 1
    elif a[-1] < b[-1]:
        return -1
    return 0

words = ["apple", "zoo", "banana", "kiwi"]
sorted_words = sorted(words, key=cmp_to_key(compare_by_last_char))
print(sorted_words)
# ['banana', 'apple', 'kiwi', 'zoo']
singledispatch and singledispatchmethod
singledispatch transforms a function into a generic function that dispatches to different implementations based on the type of its first argument. This provides a form of function overloading that Python does not natively support through method signatures.
from functools import singledispatch
from datetime import datetime
from decimal import Decimal

@singledispatch
def format_value(value):
    """Default: convert to string."""
    return str(value)

@format_value.register
def _(value: int):
    return f"{value:,}"

@format_value.register
def _(value: float):
    return f"{value:,.2f}"

@format_value.register
def _(value: Decimal):
    return f"${value:,.2f}"

@format_value.register
def _(value: datetime):
    return value.strftime("%Y-%m-%d %H:%M:%S")

@format_value.register
def _(value: list):
    return "[" + ", ".join(format_value(item) for item in value) + "]"

print(format_value(1500000))                       # 1,500,000
print(format_value(3.14159))                       # 3.14
print(format_value(Decimal("99.95")))              # $99.95
print(format_value(datetime(2026, 3, 29, 14, 0)))  # 2026-03-29 14:00:00
print(format_value([42, 3.14, "hello"]))           # [42, 3.14, hello]
The .register method can infer the type from the function's type annotation (as shown above) or accept the type explicitly: @format_value.register(int). The dispatch happens on the type of the first argument only. If no registered implementation matches the argument type, Python walks the type's method resolution order (MRO) to find the closest registered type, falling back to the base object implementation if nothing else matches.
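A small sketch of both points, using a hypothetical describe function: the int handler is registered by passing the type explicitly rather than via an annotation, and a bool argument falls through the MRO to that handler because bool subclasses int.

```python
from functools import singledispatch

@singledispatch
def describe(value):
    return f"object: {value!r}"

@describe.register(int)  # explicit type instead of an annotation
def _(value):
    return f"int: {value}"

print(describe(7))     # int: 7
print(describe("hi"))  # object: 'hi'
# No handler is registered for bool, so dispatch walks the MRO
# (bool -> int -> object) and lands on the int implementation
print(describe(True))  # int: True
```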
singledispatchmethod provides the same behavior for class methods. It can be stacked with @classmethod or @staticmethod, but the singledispatchmethod decorator must be the outermost (topmost) decorator in the stack.
from functools import singledispatchmethod

class Formatter:
    @singledispatchmethod
    def format(self, value):
        return str(value)

    @format.register
    def _(self, value: int):
        return f"Integer: {value:,}"

    @format.register
    def _(self, value: float):
        return f"Float: {value:.4f}"

f = Formatter()
print(f.format(42000))    # Integer: 42,000
print(f.format(3.14))     # Float: 3.1400
print(f.format("hello"))  # hello
| Function | Category | Added In |
|---|---|---|
| `lru_cache` | Caching | Python 3.2 |
| `cache` | Caching | Python 3.9 |
| `cached_property` | Caching | Python 3.8 |
| `partial` | Partial Application | Python 2.5 |
| `partialmethod` | Partial Application | Python 3.4 |
| `Placeholder` | Partial Application | Python 3.14 |
| `reduce` | Cumulative Operations | Python 3.0 (moved from builtins) |
| `wraps` | Decorator Utilities | Python 2.5 |
| `update_wrapper` | Decorator Utilities | Python 2.5 |
| `total_ordering` | Comparison Helpers | Python 2.7 |
| `cmp_to_key` | Comparison Helpers | Python 3.2 |
| `singledispatch` | Function Dispatch | Python 3.4 |
| `singledispatchmethod` | Function Dispatch | Python 3.8 |
Key Takeaways
- The functools module is a toolbox for higher-order functions. It provides decorators and utilities that operate on callables, covering caching (`lru_cache`, `cache`, `cached_property`), partial application (`partial`, `partialmethod`, `Placeholder`), decorator metadata preservation (`wraps`, `update_wrapper`), comparison boilerplate reduction (`total_ordering`, `cmp_to_key`), function overloading (`singledispatch`, `singledispatchmethod`), and cumulative operations (`reduce`).
- Use `lru_cache` for bounded caching and `cache` for unbounded caching. The bounded version with an explicit `maxsize` is safer for long-running processes. The unbounded version is simpler and slightly faster when the number of unique inputs is small and predictable. Both require hashable arguments.
- Use `wraps` in every custom decorator. It preserves function identity, which is critical for debugging, introspection, and documentation. The `__wrapped__` attribute it creates also gives you direct access to the original function for testing.
- Use `partial` to create specialized functions from general ones. It is particularly valuable for callback-driven APIs where you need to bind arguments ahead of time. With `Placeholder` in Python 3.14, you can now freeze any positional argument, not just leading ones.
- Use `singledispatch` for type-based function overloading. It provides a clean alternative to chains of `isinstance` checks, and new types can register their own implementations without modifying the original function.
The functools module has grown steadily since its introduction in Python 2.5, with each new release adding tools that address real patterns Python developers encounter in production code. From the original partial and wraps through to the Placeholder sentinel in Python 3.14, the module continues to expand what Python can express at the function level. Knowing what each tool does and when to reach for it saves time, reduces boilerplate, and makes code that is both more readable and more efficient.