Python Decorator: Ways to Speed Up Recursions Using Built-in Caching

Recursive functions are elegant, but they can be brutally slow. A naive recursive Fibonacci implementation makes on the order of 2^n function calls for an input of n. Python's standard library provides two caching decorators — functools.lru_cache and functools.cache — that store results from previous calls and return them instantly on repeats. Adding one line of code can transform an exponential algorithm into a linear one. This article covers every built-in caching approach for speeding up recursion in Python, with benchmarks, management methods, and the gotchas you need to know.

Why Recursion Is Slow Without Caching

The classic recursive Fibonacci function illustrates the problem. Each call branches into two more calls, and the same subproblems get recomputed over and over:

import time


def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)


start = time.perf_counter()
result = fib(35)
elapsed = time.perf_counter() - start

print(f"fib(35) = {result}")
print(f"Time: {elapsed:.3f}s")
Output
fib(35) = 9227465
Time: 2.847s

Computing fib(35) takes nearly 3 seconds because the function makes approximately 29 million calls. fib(3) alone gets called over 3.5 million times. Every one of those redundant calls does the same work and produces the same result. Caching eliminates the redundancy by storing each result the first time it is computed.

Manual Memoization with a Dictionary

Before reaching for a built-in decorator, it helps to understand the pattern. Memoization stores results in a dictionary keyed by the function's arguments:

def fib_memo(n, memo={}):
    if n in memo:
        return memo[n]
    if n < 2:
        return n
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]


start = time.perf_counter()
result = fib_memo(35)
elapsed = time.perf_counter() - start

print(f"fib_memo(35) = {result}")
print(f"Time: {elapsed:.6f}s")
Output
fib_memo(35) = 9227465
Time: 0.000021s

From nearly 3 seconds to 21 microseconds. The dictionary stores each computed value, and subsequent calls with the same n return the stored result immediately. This is exactly what the built-in decorators automate.

Note

The mutable default argument memo={} is a well-known Python idiom for persistent state across calls. While mutable defaults are usually a source of bugs, they are intentional here. The built-in decorators remove the need for this pattern entirely.
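The persistence of the default dictionary is easy to verify: it is created once, at function definition time, and the same object is reused on every call. A minimal sketch (the cached_square function here is purely illustrative):

```python
def cached_square(n, memo={}):
    # The default dict is created once, when the def statement runs,
    # so entries added here persist across calls.
    if n not in memo:
        memo[n] = n * n
    return memo[n]


cached_square(4)
cached_square(5)

# The shared default dict now holds both results.
print(cached_square.__defaults__[0])  # {4: 16, 5: 25}
```

This is exactly the behavior that makes mutable defaults a bug when the persistence is accidental, and a feature when it is intentional, as in fib_memo above.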

functools.lru_cache: Bounded Caching

functools.lru_cache wraps a function with a Least Recently Used cache. When the cache reaches its maxsize, the entry that was accessed least recently gets evicted to make room:

from functools import lru_cache
import time


@lru_cache(maxsize=128)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)


start = time.perf_counter()
result = fib(100)
elapsed = time.perf_counter() - start

print(f"fib(100) = {result}")
print(f"Time: {elapsed:.6f}s")
Output
fib(100) = 354224848179261915075
Time: 0.000089s

Without caching, fib(100) would require approximately 10^20 function calls and would never finish. With @lru_cache, it completes in under a millisecond because each value from fib(0) through fib(100) is computed exactly once.

The maxsize parameter defaults to 128. Setting it to None disables the size limit, making the cache grow without bound. Since Python 3.8, you can also apply @lru_cache without parentheses to use the default maxsize:

@lru_cache
def factorial(n):
    return n * factorial(n - 1) if n else 1


print(factorial(10))
Output
3628800

functools.cache: Unbounded Caching

Python 3.9 introduced functools.cache as a simpler alternative to lru_cache(maxsize=None). It is a thin wrapper around a dictionary lookup with no eviction logic, making it slightly faster for unbounded use cases:

from functools import cache


@cache
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)


print(fib(50))
print(fib(100))
Output
12586269025
354224848179261915075
Feature           | lru_cache                           | cache
Available since   | Python 3.2                          | Python 3.9
Size limit        | Configurable (maxsize)              | Unbounded (no limit)
Eviction policy   | Least Recently Used                 | None
Overhead          | Slightly higher (LRU bookkeeping)   | Slightly lower (dict lookup only)
Equivalent to     | N/A                                 | lru_cache(maxsize=None)
Best for          | Long-running processes, web servers | Recursive algorithms, bounded input space
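The eviction difference is easy to observe with a deliberately tiny maxsize. In this sketch (the square function is just an illustration), the third distinct call pushes out the least recently used entry, so repeating the first call is a miss rather than a hit:

```python
from functools import lru_cache


@lru_cache(maxsize=2)
def square(n):
    return n * n


square(1)  # miss: cached
square(2)  # miss: cached
square(3)  # miss: cache is full, so square(1) (least recently used) is evicted
square(1)  # miss again, because it was evicted

print(square.cache_info())  # CacheInfo(hits=0, misses=4, maxsize=2, currsize=2)
```

With functools.cache (or maxsize=None) there is no eviction, so the repeated call would have been a hit.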

cache_info() and cache_clear()

Both decorators add two management methods to the decorated function. cache_info() returns a named tuple with performance statistics, and cache_clear() empties the cache:

from functools import lru_cache


@lru_cache(maxsize=64)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)


fib(50)
print(fib.cache_info())

fib.cache_clear()
print(fib.cache_info())
Output
CacheInfo(hits=48, misses=51, maxsize=64, currsize=51)
CacheInfo(hits=0, misses=0, maxsize=64, currsize=0)

The hits field shows how many calls returned a cached result. The misses field shows how many calls computed a new result. For fib(50), there are 51 unique inputs (0 through 50), each computed once (51 misses), and 48 calls that hit the cache (the recursive second branch for each n >= 2).

You can also access the original unwrapped function through __wrapped__, which is useful for testing or benchmarking without the cache:

# Call the original uncached function directly
print(fib.__wrapped__(10))
Output
55

The typed Parameter

By default, lru_cache treats arguments of different types as equivalent if they compare equal. This means fib(3) and fib(3.0) share the same cache entry. Setting typed=True separates them:

from functools import lru_cache


@lru_cache(maxsize=128, typed=True)
def compute(x):
    print(f"  Computing for {x!r} (type: {type(x).__name__})")
    return x * 2


print(compute(3))
print(compute(3.0))
print(compute(3))
Output
  Computing for 3 (type: int)
6
  Computing for 3.0 (type: float)
6.0
6

With typed=True, the integer 3 and the float 3.0 are cached separately. The third call returns the cached integer result without recomputing.

Recursion Depth and sys.setrecursionlimit

Python's default recursion limit is 1000. Even with caching, a deeply recursive first call can exceed this limit because the cache is empty and every level recurses before any result is stored. You can increase the limit with sys.setrecursionlimit, but the safer approach is to warm the cache incrementally:

from functools import lru_cache


@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)


# Warm the cache in safe increments
for i in range(0, 5001, 500):
    fib(i)

# Now fib(5000) hits cached values, no deep recursion
print(f"fib(5000) has {len(str(fib(5000)))} digits")
Output
fib(5000) has 1045 digits
Recursion Limit

Raising the limit with sys.setrecursionlimit() to a very high value can crash the Python process by overflowing the C stack. Incremental cache warming is the safer approach for deeply recursive functions. Build up cached values in small batches so that subsequent calls never need to recurse more than a few hundred levels.

Caching Beyond Fibonacci: Dynamic Programming

The caching decorators apply to any recursive algorithm with overlapping subproblems. Here are two additional examples:

Stair Climbing Problem

from functools import cache


@cache
def climb_stairs(n):
    """Count ways to climb n stairs taking 1, 2, or 3 steps at a time."""
    if n < 0:
        return 0
    if n == 0:
        return 1
    return climb_stairs(n - 1) + climb_stairs(n - 2) + climb_stairs(n - 3)


print(f"Ways to climb 10 stairs: {climb_stairs(10)}")
print(f"Ways to climb 30 stairs: {climb_stairs(30)}")
Output
Ways to climb 10 stairs: 274
Ways to climb 30 stairs: 53798080

Minimum Edit Distance (Levenshtein)

from functools import lru_cache


@lru_cache(maxsize=None)
def edit_distance(s, t):
    """Compute the minimum edit distance between two strings."""
    if not s:
        return len(t)
    if not t:
        return len(s)
    if s[-1] == t[-1]:
        return edit_distance(s[:-1], t[:-1])
    return 1 + min(
        edit_distance(s[:-1], t),        # deletion
        edit_distance(s, t[:-1]),         # insertion
        edit_distance(s[:-1], t[:-1]),    # substitution
    )


print(edit_distance("kitten", "sitting"))
print(edit_distance("saturday", "sunday"))
print(edit_distance.cache_info())
Output
3
3
CacheInfo(hits=63, misses=88, maxsize=None, currsize=88)
Pro Tip

For string-based recursive algorithms like edit distance, the cache key is the tuple of all arguments. Strings are hashable in Python, so they work directly with lru_cache. If your function takes a list, convert it to a tuple before caching.
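The list-to-tuple conversion can be packaged as a wrapper so callers are free to pass lists. A sketch, assuming the decorator name tuple_args_cache and positional-only arguments (both are inventions for this example, not standard library features):

```python
from functools import lru_cache, wraps


def tuple_args_cache(func):
    """Hypothetical decorator: convert list arguments to tuples so the
    call can be memoized by an inner lru_cache-wrapped function."""
    @lru_cache(maxsize=None)
    def cached(*args):
        # Note: func receives the tuple versions of the arguments.
        return func(*args)

    @wraps(func)
    def wrapper(*args):
        hashable = tuple(tuple(a) if isinstance(a, list) else a for a in args)
        return cached(*hashable)

    wrapper.cache_info = cached.cache_info    # expose cache statistics
    wrapper.cache_clear = cached.cache_clear
    return wrapper


@tuple_args_cache
def total(items):
    return sum(items)


# The list argument is converted to a tuple internally, so caching works
print(total([1, 2, 3]))  # 6
```

The trade-off is a conversion cost on every call, and the wrapped function sees tuples rather than the original lists, which matters if it relies on list-specific methods.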

When Not to Cache

Caching is not always appropriate. Avoid caching functions that:

  1. Are non-deterministic: their output depends on time, randomness, or external state.
  2. Have side effects: writing to files, sending network requests, printing output.
  3. Accept unhashable arguments: lists, dictionaries, sets.
  4. Are called with a huge number of unique inputs and no repetition: the cache grows without benefit.

from functools import lru_cache


@lru_cache
def bad_candidate(items):
    """This will fail because lists are not hashable."""
    return sum(items)


try:
    bad_candidate([1, 2, 3])
except TypeError as e:
    print(f"Error: {e}")

# Fix: convert to tuple before caching
@lru_cache
def good_candidate(items):
    return sum(items)

print(good_candidate((1, 2, 3)))
Output
Error: unhashable type: 'list'
6

Key Takeaways

  1. Memoization eliminates redundant computation: Recursive functions with overlapping subproblems compute the same values repeatedly. Caching stores each result once and returns it on subsequent calls, reducing exponential time complexity to linear.
  2. functools.lru_cache is the standard tool: Available since Python 3.2, it provides a configurable bounded cache with LRU eviction. The default maxsize is 128. Set it to None for unbounded caching.
  3. functools.cache is the simpler option: Available since Python 3.9, it is equivalent to lru_cache(maxsize=None) but faster because it skips eviction bookkeeping. Use it for recursive algorithms where you know the input space fits in memory.
  4. Use cache_info() to monitor performance: The hits, misses, maxsize, and currsize fields tell you how effectively the cache is working. High miss rates suggest the cache is too small or the function is not being called with repeated arguments.
  5. Warm the cache to avoid hitting the recursion limit: For deeply recursive functions, call the function with incrementally larger inputs so that the cache builds up layer by layer. This avoids RecursionError without increasing sys.setrecursionlimit.
  6. Only cache deterministic, pure functions: Functions with side effects, non-deterministic output, or unhashable arguments are not suitable candidates for caching. The cached result must be correct for every future call with the same inputs.

Python's built-in caching decorators are the simplest and most reliable way to speed up recursive functions. One decorator, one import, and the algorithm's time complexity drops by orders of magnitude.

Frequently Asked Questions

What is the difference between functools.cache and functools.lru_cache?

functools.cache (Python 3.9+) is an unbounded cache with no size limit. It is equivalent to lru_cache(maxsize=None) but slightly faster because it skips the LRU eviction bookkeeping. functools.lru_cache has a configurable maxsize (default 128) and evicts least recently used entries when the cache is full. Use cache for recursive functions with a bounded input space, and lru_cache when you need to control memory usage.

How does lru_cache speed up recursive functions?

Recursive functions like Fibonacci recompute the same subproblems repeatedly, resulting in exponential time complexity. lru_cache stores the result of each unique input the first time it is computed. On subsequent calls with the same input, the cached result is returned immediately without re-executing the function, reducing the time complexity to linear.

Does lru_cache work with unhashable arguments?

No. Both lru_cache and cache use a dictionary internally, so all function arguments must be hashable. Passing mutable types like lists, dictionaries, or sets as arguments raises a TypeError. Convert them to tuples or frozensets before passing them to a cached function.

How do I clear the cache of a decorated function?

Call the cache_clear() method on the decorated function. For example, fibonacci.cache_clear() removes all cached entries. You can also inspect cache performance with cache_info(), which returns a named tuple showing hits, misses, maxsize, and currsize.