Every function-based decorator in Python is a closure. The wrapper function that a decorator returns captures the original function -- and any configuration arguments -- as free variables that persist long after the decorator's outer function has finished executing. Understanding this closure mechanism is what turns decorators from memorized syntax into a tool you can reason about, debug, and extend. This article traces the connection from basic closures through the __closure__ attribute, the nonlocal keyword, into the nested closure architecture of parameterized decorators, through the collision between closures and the descriptor protocol, and into the memory and concurrency implications that Python 3.14's free-threaded mode makes urgent.
What a Closure Is
A closure is a function that retains access to variables from the scope where it was defined, even after that scope has finished executing. The retained variables are called free variables -- they are used inside the inner function but defined in an enclosing function, not in the global scope.
def make_greeter(greeting):
    def greet(name):
        return f"{greeting}, {name}!"
    return greet

hello = make_greeter("Hello")
howdy = make_greeter("Howdy")
print(hello("Alice"))  # Hello, Alice!
print(howdy("Bob"))    # Howdy, Bob!
When make_greeter("Hello") executes, it creates a local variable greeting with the value "Hello", defines the inner function greet, and returns it. At this point, make_greeter has finished executing and its local scope would normally be destroyed. But greet references greeting, so Python keeps greeting alive inside a cell object attached to greet. The function hello is a closure -- it carries its enclosing environment with it.
hello and howdy are independent closures. Each one captured a different value of greeting at the time it was created. They do not share state. This independence is the foundation that makes decorators work -- each decorated function gets its own private closure scope.
This variable capture follows Python's LEGB rule: Local, Enclosing, Global, Built-in. Closures operate at the Enclosing level. When the inner function looks up greeting, it first checks its own local scope (not found), then checks the enclosing scope (found in the cell object), and stops. The global and built-in scopes are never consulted for that name.
The four levels, applied to the make_greeter example:
- Local: the name parameter is local to greet. It exists only for the duration of each call. Python checks this scope first. Since greeting is not here, it moves to the next level.
- Enclosing: greeting, captured in a cell object. greeting was local to make_greeter, but because greet references it, Python stores it in a cell object inside greet.__closure__. The enclosing scope survives the outer function's return.
- Global: make_greeter, hello, howdy. If greeting were defined at module scope instead of inside make_greeter, it would not be a free variable and no closure would form. The function would simply read from the global scope on every call.
- Built-in: print, len, range, etc. If a name is not found in Local, Enclosing, or Global, Python checks Built-in last. If it is not found here either, you get a NameError.
Inspecting Closures With __closure__
Every function object in Python has a __closure__ attribute. For regular functions that are not closures, it is None. For closures, it is a tuple of cell objects. Each cell wraps one captured free variable:
def make_greeter(greeting):
    def greet(name):
        return f"{greeting}, {name}!"
    return greet

hello = make_greeter("Hello")

# The names of the free variables
print(hello.__code__.co_freevars)
# ('greeting',)

# The cell objects holding the captured values
print(hello.__closure__)
# (<cell at 0x...: str object at 0x...>,)

# The captured value itself
print(hello.__closure__[0].cell_contents)
# Hello
__code__.co_freevars lists the names of variables the function captures. __closure__ is a tuple of cell objects in the same order. cell_contents gives you the live value inside each cell. This is not a copy -- it is a reference to the same object the closure accesses when it runs.
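To see that cell_contents is a live reference rather than a snapshot, you can mutate a captured object through the cell and watch the closure observe the change. A small sketch (make_appender is a hypothetical helper, not from the article's earlier examples):

```python
def make_appender():
    items = []  # captured by the inner function below

    def append(x):
        items.append(x)
        return items

    return append

app = make_appender()
app(1)
cell = app.__closure__[0]
print(cell.cell_contents)     # [1] -- the very list the closure uses
cell.cell_contents.append(2)  # mutate the captured object via the cell
print(app(3))                 # [1, 2, 3] -- the closure sees the mutation
```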
A function that uses variables only from the global scope or its own local scope is not a closure, even if it is defined inside another function. Closures specifically require free variables -- variables from an enclosing function's scope that are neither local nor global.
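A quick check makes this concrete (outer and inner are illustrative names): a nested function that references no enclosing variables gets no cells at all.

```python
def outer():
    x = 10  # never referenced by inner, so no cell is created

    def inner():
        return 42  # uses only its own constant, no free variables

    return inner

f = outer()
print(f.__closure__)           # None -- nested, but not a closure
print(f.__code__.co_freevars)  # () -- no free variables
```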
Every Decorator Creates a Closure
When you write a decorator, the outer function receives the decorated function as a parameter. The inner wrapper captures it as a free variable. The wrapper is returned and replaces the original function. The original function survives inside the wrapper's closure:
import functools

def log_calls(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_calls
def add(a, b):
    return a + b

# add is now the wrapper function, which is a closure
print(add.__closure__)
# (<cell at 0x...: function object at 0x...>,)

# The captured free variable is the original add function
print(add.__code__.co_freevars)
# ('func',)
print(add.__closure__[0].cell_contents)
# <function add at 0x...>

# Calling add invokes wrapper, which accesses func through the closure
print(add(3, 4))
# Calling add
# 7
The sequence is: log_calls(add) executes. Inside log_calls, func is a local variable pointing to the original add. wrapper is defined and references func. wrapper is returned. log_calls finishes, but func is not garbage collected because wrapper.__closure__ holds a cell pointing to it. The name add is rebound to wrapper. Every call to add(3, 4) invokes wrapper(3, 4), which reads func from its closure and calls it.
functools.wraps also copies a __wrapped__ attribute onto the wrapper, giving you a direct reference to the original function. This is separate from the closure mechanism but serves a similar purpose for introspection: add.__wrapped__ returns the original add function.
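Both routes lead to the same object, and inspect.unwrap follows the __wrapped__ chain for you. A sketch (restating the log_calls decorator from above so the snippet stands alone):

```python
import functools
import inspect

def log_calls(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_calls
def add(a, b):
    return a + b

# Two routes to the original function: the closure cell and __wrapped__
original_via_closure = add.__closure__[0].cell_contents
original_via_wrapped = add.__wrapped__
print(original_via_closure is original_via_wrapped)  # True

# inspect.unwrap follows __wrapped__ through stacked decorators
print(inspect.unwrap(add)(3, 4))  # 7 -- no "Calling add": wrapper bypassed
```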
After applying @log_calls to add, you inspect add.__closure__[0].cell_contents and get back the original add function. What is the mechanism that keeps the original function alive?
Mutable State With nonlocal
A closure can read its free variables without any special syntax. But if you need to reassign a free variable that points to an immutable value (like an integer or string), you must declare it nonlocal. Without nonlocal, an assignment inside the inner function creates a new local variable that shadows the free variable:
import functools

def count_calls(func):
    call_count = 0
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        nonlocal call_count  # required to modify the integer
        call_count += 1
        print(f"{func.__name__} has been called {call_count} time(s)")
        return func(*args, **kwargs)
    return wrapper

@count_calls
def greet(name):
    return f"Hello, {name}"

greet("Alice")  # greet has been called 1 time(s)
greet("Bob")    # greet has been called 2 time(s)
greet("Carol")  # greet has been called 3 time(s)

# The closure carries both func and call_count
print(greet.__code__.co_freevars)
# ('call_count', 'func')
Without nonlocal call_count, the line call_count += 1 would try to read call_count before assigning to it, producing an UnboundLocalError. This is because Python sees the assignment and treats call_count as a local variable for the entire function body, even before the assignment executes. The nonlocal declaration tells Python that call_count belongs to the enclosing scope and should be modified there.
The distinction between reading and writing matters here. Python's bytecode compiler makes the local-vs-free decision at compile time, not at runtime. If there is any assignment to a name anywhere in the function body, that name is treated as local for the entire function. The nonlocal keyword overrides this decision. This is the same mechanism that global uses for module-scope variables, but nonlocal targets the enclosing function scope specifically.
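The compile-time nature of this decision is easy to demonstrate: the read on the first line of bump fails even though the assignment that makes count local appears later in the body (make_counter and bump are hypothetical names):

```python
def make_counter():
    count = 0

    def bump():
        print(count)  # looks like a read of the enclosing variable...
        count = 1     # ...but this assignment makes count local to bump

    return bump

try:
    make_counter()()
except UnboundLocalError as e:
    print(e)  # the read fails: count is local for the whole body
```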
When the free variable points to a mutable object (like a list or dictionary), you do not need nonlocal because you are modifying the object in place rather than reassigning the variable:
import functools
import time

def track_timing(func):
    timings = []  # mutable -- no nonlocal needed
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        timings.append(time.perf_counter() - start)  # in-place mutation
        return result
    wrapper.get_timings = lambda: list(timings)
    return wrapper

@track_timing
def process(n):
    return sum(range(n))

process(1_000_000)
process(2_000_000)
print(process.get_timings())
# e.g. [0.021234, 0.043567]
Closure state is not thread-safe. The call_count and timings variables in the examples above are shared across all calls to the decorated function, but they have no synchronization. In a multithreaded program, concurrent calls can produce race conditions -- call_count += 1 is not atomic, and timings.append() can interleave with reads. If your decorator manages mutable state and runs in threaded code, protect the state with a threading.Lock. This becomes particularly critical in Python 3.14's free-threaded mode, where the GIL no longer serializes bytecode execution. See the Thread Safety Under Free-Threading section.
Parameterized Decorators: Nested Closures
A parameterized decorator adds an extra level of nesting, creating a chain of closures. The outermost function accepts configuration. The middle function accepts the decorated function. The innermost function is the wrapper. Each level captures variables from its enclosing scope:
import functools

def retry(times=3, exceptions=(Exception,)):
    """Parameterized decorator: outermost function accepts config."""
    def decorator(func):
        """Middle function: accepts the function to decorate."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            """Innermost function: the actual wrapper (closure)."""
            last_exception = None
            for attempt in range(times):
                try:
                    return func(*args, **kwargs)
                except exceptions as e:
                    last_exception = e
            raise last_exception
        return wrapper
    return decorator

@retry(times=5, exceptions=(ConnectionError,))
def fetch_data():
    print("Attempting fetch...")
    raise ConnectionError("timeout")

# Inspect the closure chain:
# wrapper captures 'func', 'times', and 'exceptions'
print(fetch_data.__code__.co_freevars)
# ('exceptions', 'func', 'times')
print(fetch_data.__closure__[0].cell_contents)
# (<class 'ConnectionError'>,)
print(fetch_data.__closure__[2].cell_contents)
# 5
When @retry(times=5, exceptions=(ConnectionError,)) is evaluated, Python calls retry(times=5, exceptions=(ConnectionError,)), which returns decorator. decorator is a closure over times and exceptions. Python then calls decorator(fetch_data), which returns wrapper. wrapper is a closure over func, times, and exceptions. All three captured variables are available inside wrapper every time it is called.
The key insight is that the closure at each level flattens upward. wrapper does not capture decorator's scope and then delegate to retry's scope. Instead, Python's compiler determines at definition time that wrapper uses func, times, and exceptions, and creates cells for all three in wrapper.__closure__ directly. The nesting defines which scopes are available, but the resulting closure is a flat tuple of cells.
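You can observe the flattening by inspecting the intermediate decorator function as well as the final wrapper. A sketch (restating the retry decorator from above so the snippet runs on its own):

```python
import functools

def retry(times=3, exceptions=(Exception,)):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_exception = None
            for attempt in range(times):
                try:
                    return func(*args, **kwargs)
                except exceptions as e:
                    last_exception = e
            raise last_exception
        return wrapper
    return decorator

deco = retry(times=5, exceptions=(ConnectionError,))

# The middle function captures only the configuration...
print(deco.__code__.co_freevars)     # ('exceptions', 'times')

# ...while the final wrapper gets one flat tuple for everything it uses
wrapper = deco(lambda: None)
print(wrapper.__code__.co_freevars)  # ('exceptions', 'func', 'times')
print(len(wrapper.__closure__))      # 3
```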
Given the retry decorator above, what does len(fetch_data.__closure__) return?
The Late Binding Pitfall
Closures capture references to variables, not copies of their values at the time the closure is created. This matters when a closure is created inside a loop:
# THE PITFALL: all closures share the same variable
def make_multipliers():
    multipliers = []
    for i in range(5):
        def multiply(x):
            return x * i  # 'i' is a free variable
        multipliers.append(multiply)
    return multipliers

fns = make_multipliers()
print([fn(10) for fn in fns])
# [40, 40, 40, 40, 40] -- all use i=4, not 0,1,2,3,4!
Every multiply function captures the same i variable. By the time the closures are called, the loop has finished and i is 4 for all of them. The fix is to capture the current value by using a default argument, which evaluates at function definition time rather than call time:
# THE FIX: capture the current value via a default argument
def make_multipliers():
    multipliers = []
    for i in range(5):
        def multiply(x, _i=i):  # _i captures current value of i
            return x * _i
        multipliers.append(multiply)
    return multipliers

fns = make_multipliers()
print([fn(10) for fn in fns])
# [0, 10, 20, 30, 40] -- correct!
An alternative fix that preserves the closure-based approach is to use functools.partial, which binds the current value of i at the time partial is called:
import functools

def make_multipliers():
    def multiply(i, x):
        return x * i
    return [functools.partial(multiply, i) for i in range(5)]

fns = make_multipliers()
print([fn(10) for fn in fns])
# [0, 10, 20, 30, 40] -- correct!
The partial approach makes the intent clearer: you are explicitly binding a value at creation time. Under the hood, partial stores the bound argument in its own args tuple, sidestepping the late-binding issue entirely because no free variable is involved.
This pitfall rarely affects decorators directly (because each @decorator application creates its own closure scope), but it appears in decorator-adjacent patterns like dynamically wrapping methods in a loop. If you are iterating over methods and creating wrappers, use the default-argument technique to avoid late binding.
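Here is how that decorator-adjacent case looks in practice, with the default-argument fix applied. The Service class and add_logging helper are illustrative names, not from the article's earlier examples:

```python
import functools

class Service:
    def alpha(self):
        return "alpha"

    def beta(self):
        return "beta"

def add_logging(cls, method_names):
    """Wrap each named method; the default argument pins each method."""
    for name in method_names:
        original = getattr(cls, name)

        # Without _original=original, every wrapper would share the loop
        # variable and all calls would hit the last method wrapped.
        def wrapper(self, *args, _original=original, **kwargs):
            print(f"calling {_original.__name__}")
            return _original(self, *args, **kwargs)

        functools.update_wrapper(wrapper, original)
        setattr(cls, name, wrapper)
    return cls

add_logging(Service, ["alpha", "beta"])
svc = Service()
print(svc.alpha())  # prints "calling alpha", then "alpha"
print(svc.beta())   # prints "calling beta", then "beta" -- not alpha twice
```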
This decorator is supposed to count how many times each decorated function is called. But every decorated function shares the same counter. What is the bug?
import functools
call_count = 0  # module scope -- one counter shared by every decorated function
def count_calls(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        global call_count  # 'nonlocal' would be a SyntaxError: no enclosing binding exists
        call_count += 1
        print(f"{func.__name__}: call #{call_count}")
        return func(*args, **kwargs)
    return wrapper

@count_calls
def foo(): pass

@count_calls
def bar(): pass

foo()  # foo: call #1
bar()  # bar: call #2 <-- should be #1
Stacking Decorators: Closure Chains
When you stack multiple decorators on a single function, each decorator wraps the result of the one below it. The bottom decorator receives the original function. The next one up receives the wrapper returned by the bottom decorator. Each wrapper is its own closure, capturing a different func reference:
import functools

def bold(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return f"<b>{func(*args, **kwargs)}</b>"
    return wrapper

def italic(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return f"<i>{func(*args, **kwargs)}</i>"
    return wrapper

@bold
@italic
def greet(name):
    return f"Hello, {name}"

# Equivalent to: greet = bold(italic(greet))
print(greet("Alice"))
# <b><i>Hello, Alice</i></b>

# The outermost wrapper (bold's) captures italic's wrapper
print(greet.__closure__[0].cell_contents)
# <function greet at 0x...> (this is italic's wrapper)

# italic's wrapper captures the original greet
inner = greet.__closure__[0].cell_contents
print(inner.__closure__[0].cell_contents)
# <function greet at 0x...> (the original function)
Execution flows top-down through the decorator stack but application happens bottom-up. Python evaluates @italic first, passing the original greet to italic and getting back italic's wrapper. Then Python passes that wrapper to bold, getting back bold's wrapper. The final greet name points to bold's wrapper, which is a closure over italic's wrapper, which is itself a closure over the original function. Each layer in the chain is an independent closure with its own __closure__ tuple.
The order of stacking matters. @bold on top of @italic produces <b><i>...</i></b>. Reversing the order produces <i><b>...</b></i>. Each decorator only knows about the function it received -- it has no awareness of what other decorators are in the stack. This independence is a direct consequence of the closure mechanism: each wrapper captures exactly one func reference, and that reference is whatever was passed to the decorator at application time.
Closures vs. the Descriptor Protocol
Function-based decorators have a subtle limitation that becomes visible when you try to decorate descriptors like @classmethod or @staticmethod. A standard closure-based decorator calls the wrapped object directly through its __call__ method. But descriptors depend on their __get__ method being invoked by the attribute access machinery to produce the correct callable:
import functools

def log_calls(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

class MyClass:
    @log_calls
    @classmethod
    def from_config(cls, path):
        return cls()

# Calling MyClass.from_config("...") fails:
# TypeError: 'classmethod' object is not callable
# The closure calls func(*args, **kwargs) directly,
# but @classmethod has no __call__ -- it relies on __get__.
# (Python 3.10 made staticmethod objects callable as plain
# functions, but classmethod objects still are not, so this
# failure persists on current versions.)
The root issue is that a closure-based wrapper is a plain function. When Python looks up a method on a class, it invokes the descriptor protocol: it calls __get__ on the attribute to produce a bound method. A plain function implements __get__ (which is why regular methods work as descriptors), but the closure wrapper bypasses the descriptor protocol of the wrapped object. It calls func(*args, **kwargs) directly instead of letting func.__get__(instance, owner) produce the correct callable first.
For regular instance methods, this bypass happens to work because the function's own __call__ method accepts self as an explicit argument. But for @classmethod, @staticmethod, or custom descriptors, the bypass fails because these descriptors have no standalone __call__ -- they produce their callable only through __get__.
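You can see this directly by pulling the raw classmethod object out of the class dictionary, which skips the attribute-access machinery that would normally invoke __get__ (C and make are illustrative names):

```python
class C:
    @classmethod
    def make(cls):
        return cls.__name__

raw = C.__dict__["make"]  # the raw classmethod object, no binding applied
print(type(raw))          # <class 'classmethod'>

try:
    raw()                 # what a naive closure wrapper effectively does
except TypeError as e:
    print(e)              # 'classmethod' object is not callable

bound = raw.__get__(None, C)  # the descriptor protocol produces the callable
print(bound())                # C
```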
The solution is to make the decorator itself a descriptor. The wrapt library implements this pattern: its wrapper class implements __get__, so it correctly delegates to the descriptor protocol of the wrapped object. If you are writing decorators that need to work with arbitrary descriptors, consider either using wrapt or implementing __get__ on a class-based wrapper:
import functools

class DescriptorAwareDecorator:
    """A decorator that honors the descriptor protocol."""
    def __init__(self, func):
        self.func = func
        functools.update_wrapper(self, func)

    def __call__(self, *args, **kwargs):
        print(f"Calling {self.func.__name__}")
        return self.func(*args, **kwargs)

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        # Delegate to the wrapped object's descriptor protocol
        bound = self.func.__get__(obj, objtype)
        return self.__class__(bound)

# This works correctly with instance methods
class MyClass:
    @DescriptorAwareDecorator
    def regular_method(self):
        return "instance method"

obj = MyClass()
print(obj.regular_method())
# Calling regular_method
# instance method
Understanding the closure-descriptor tension is what separates writing decorators that work in isolation from writing decorators that compose correctly with the rest of Python's object model. Class-based decorators that implement __get__ solve the composition problem, but they trade the simplicity of closures for the explicitness of storing state as instance attributes. In practice, function-based closures work correctly for the vast majority of decorator use cases. The descriptor protocol matters when your decorator needs to wrap @classmethod, @staticmethod, @property, or custom descriptors.
Closures and Memory
Because a closure holds a reference to each of its free variables through cell objects, those variables cannot be garbage collected as long as the closure exists. For a decorated function at module scope, this means the original function and any decorator state live for the lifetime of the process. This is usually fine -- you want the original function to stay alive. But there are situations where closure references cause unintended memory retention.
The most common case involves decorating instance methods with caching decorators like functools.lru_cache. The cache stores its arguments as keys, and self is one of those arguments. This means the cache holds a reference to every instance that ever called the method, preventing garbage collection of those instances even after all other references are gone:
import functools

class DataProcessor:
    def __init__(self, data):
        self.data = data  # could be large

    # PROBLEM: lru_cache captures self as a cache key
    @functools.lru_cache
    def expensive_calc(self, param):
        return sum(x * param for x in self.data)

# Every instance that calls expensive_calc stays in memory
# because the cache holds a reference to self
proc = DataProcessor(range(1_000_000))
proc.expensive_calc(42)
del proc  # NOT garbage collected -- cache still references it
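You can verify the retention with a weak reference: the instance stays alive until the cache entry that holds self is discarded. A sketch on CPython, restating the class above with a smaller data set:

```python
import functools
import gc
import weakref

class DataProcessor:
    def __init__(self, data):
        self.data = data

    @functools.lru_cache
    def expensive_calc(self, param):
        return sum(x * param for x in self.data)

proc = DataProcessor(range(1_000))
proc.expensive_calc(2)
ref = weakref.ref(proc)

del proc
gc.collect()
print(ref() is None)  # False -- the cache still holds self as a key

DataProcessor.expensive_calc.cache_clear()
gc.collect()
print(ref() is None)  # True -- dropping the entry frees the instance
```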
Solution 1: Per-Instance Caches
The fix is to make the cache local to each instance rather than global to the class. Move the lru_cache application into __init__ so the cache is tied to the instance's lifecycle:
import functools

class DataProcessor:
    def __init__(self, data):
        self.data = data
        # Cache is per-instance -- dies with the instance
        self.expensive_calc = functools.lru_cache()(self._expensive_calc)

    def _expensive_calc(self, param):
        return sum(x * param for x in self.data)

proc = DataProcessor(range(1_000_000))
proc.expensive_calc(42)
del proc  # garbage collected normally
Solution 2: weakref-Based Cleanup
For cases where you want a class-level cache but cannot afford to pin instances, use weakref.finalize to register a cleanup callback that clears the cache when the instance is about to be collected:
import weakref
import functools

def instance_lru_cache(maxsize=128):
    """Per-instance lru_cache with weakref cleanup."""
    def decorator(method):
        cache_attr = f"_cache_{method.__name__}"
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            cache = getattr(self, cache_attr, None)
            if cache is None:
                # Close over a weak reference, not self itself: the finalize
                # registry keeps cache_clear (and therefore everything the
                # cached closure references) alive, so a strong reference to
                # self here would pin the instance forever.
                self_ref = weakref.ref(self)
                @functools.lru_cache(maxsize=maxsize)
                def cached(*a, **kw):
                    return method(self_ref(), *a, **kw)
                weakref.finalize(self, cached.cache_clear)
                object.__setattr__(self, cache_attr, cached)
                cache = cached
            return cache(*args, **kwargs)
        return wrapper
    return decorator

class DataProcessor:
    def __init__(self, data):
        self.data = data

    @instance_lru_cache(maxsize=64)
    def expensive_calc(self, param):
        return sum(x * param for x in self.data)

proc = DataProcessor(range(1_000_000))
proc.expensive_calc(42)
del proc  # garbage collected; finalize clears the cache
Solution 3: Bounded State With collections.deque
Beyond caching, any decorator that stores references to its arguments or return values in a long-lived data structure (a registry list, a global dictionary, an event subscriber list) creates the same pattern. The closure itself keeps the decorated function alive, and whatever the closure appends to keeps growing. When writing decorators that accumulate state, consider whether that state should have a bounded size or a cleanup mechanism:
import collections
import functools

def bounded_history(maxlen=100):
    """Decorator that records call history with bounded memory."""
    def decorator(func):
        history = collections.deque(maxlen=maxlen)  # auto-evicts oldest
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            history.append((args, kwargs, result))
            return result
        wrapper.get_history = lambda: list(history)
        wrapper.clear_history = history.clear
        return wrapper
    return decorator

@bounded_history(maxlen=50)
def process(x):
    return x ** 2

# History never exceeds 50 entries regardless of call volume
Class-based decorators that use __call__ instead of a closure avoid the closure mechanism entirely. They store the wrapped function as an instance attribute rather than a free variable. This can make memory behavior more explicit, but introduces its own complications with descriptor protocol interactions when decorating methods. For stateful decorators, classes are sometimes clearer than closures with nonlocal, but function-based closures remain the dominant pattern in the Python ecosystem.
Thread Safety Under Free-Threading
Python 3.14 (stable as of October 2025) ships with a significantly improved free-threaded mode (PEP 703), where the Global Interpreter Lock is disabled and threads can execute Python bytecode in true parallel. The specializing adaptive interpreter (PEP 659) is re-enabled in free-threaded mode in 3.14, closing the performance gap that existed in 3.13's experimental free-threaded build.
This changes the calculus for decorator closure state. Under the GIL, mutable closure state was incidentally serialized -- two threads could not execute Python bytecode simultaneously, so simple operations like call_count += 1 were unlikely to produce visible corruption (though they were never guaranteed to be safe). With the GIL disabled, two threads can execute the increment on the same cell object's value at the same time, producing lost updates or corrupted state.
import functools
import threading

def thread_safe_counter(func):
    """Thread-safe call counter using a Lock."""
    call_count = 0
    lock = threading.Lock()
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        nonlocal call_count
        with lock:
            call_count += 1
            current = call_count
        print(f"{func.__name__}: call #{current}")
        return func(*args, **kwargs)
    return wrapper

@thread_safe_counter
def process(x):
    return x * 2
The lock is itself a free variable in the closure, captured alongside call_count and func. The with lock: block ensures that the read-modify-write of call_count is atomic. The function call func(*args, **kwargs) happens outside the lock to avoid holding it longer than necessary.
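A stress test makes the contrast concrete: the locked counter always lands on the exact total, while the unlocked one is only accidentally correct under the GIL and can lose updates on a free-threaded build. The counter_pair helper below is a hypothetical sketch:

```python
import threading

def counter_pair():
    """Build two closures over shared counters: one locked, one not."""
    unsafe_count = 0
    safe_count = 0
    lock = threading.Lock()

    def bump_unsafe():
        nonlocal unsafe_count
        for _ in range(50_000):
            unsafe_count += 1  # non-atomic read-modify-write on the cell

    def bump_safe():
        nonlocal safe_count
        for _ in range(50_000):
            with lock:
                safe_count += 1  # the lock serializes the update

    return bump_unsafe, bump_safe, lambda: (unsafe_count, safe_count)

bump_unsafe, bump_safe, totals = counter_pair()
threads = [threading.Thread(target=f)
           for f in (bump_unsafe, bump_safe) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

unsafe_total, safe_total = totals()
print(safe_total)    # 200000, always
print(unsafe_total)  # <= 200000; lost updates become likely without the GIL
```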
For decorators where each thread should have its own independent state (like per-thread request context), use threading.local() instead of a shared closure variable:
import functools
import threading

def per_thread_state(func):
    """Decorator where each thread gets independent state."""
    local = threading.local()
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if not hasattr(local, 'call_count'):
            local.call_count = 0
        local.call_count += 1
        return func(*args, **kwargs)
    wrapper.get_local = lambda: local
    return wrapper
Python 3.14's free-threaded build uses biased reference counting to reduce the overhead of removing the GIL. Objects track their owning thread and use fast non-atomic operations when only the owner accesses them. This means the performance cost of thread safety in decorators is largely limited to the explicit synchronization you add (the Lock), not to hidden overhead in every cell object access. The single-threaded performance penalty of free-threaded mode has narrowed considerably from Python 3.13's approximately 40% overhead to roughly 9% in 3.14, making the trade-off more practical for production code.
Under Python 3.14's free-threaded mode, two threads call a decorated function simultaneously. The decorator uses nonlocal call_count and increments it without a lock. Which outcome is possible?
Key Takeaways
- Every function-based decorator is a closure. The wrapper function returned by a decorator captures the original function as a free variable. This free variable is stored in a cell object in the wrapper's __closure__ tuple, keeping the original function alive after the decorator's outer function has returned.
- Free variables are variables used inside a function but defined in an enclosing scope, not locally or globally. You can inspect them with func.__code__.co_freevars (the names) and func.__closure__ (the cell objects holding the values). They follow the LEGB lookup chain at the Enclosing level.
- Use nonlocal to modify immutable free variables. Without nonlocal, assigning to a free variable inside the inner function creates a new local variable. The nonlocal declaration tells Python the variable belongs to the enclosing scope. Mutable objects (lists, dicts) can be modified in place without nonlocal.
- Parameterized decorators create nested closures with flattened cell tuples. The outermost function captures configuration. The middle function captures the decorated function. The innermost wrapper captures all of the above in a single flat __closure__ tuple.
- Closures capture references, not values. All closures over the same variable see the same current value. In loops, use default arguments (_i=i) or functools.partial to bind the value at definition time rather than lookup time.
- Stacking decorators creates a chain of closures. Each decorator wraps the result of the one below it. The bottom decorator captures the original function, the next captures the first wrapper, and so on. The order of stacking determines execution order.
- Closure-based decorators bypass the descriptor protocol. A plain function wrapper calls the wrapped object directly without invoking __get__. This breaks @classmethod, @staticmethod, and custom descriptors. Use a class-based wrapper with __get__ or the wrapt library for descriptor-safe decorators.
- Closures keep their free variables alive. The original function and any state captured by a decorator closure cannot be garbage collected while the closure exists. Use per-instance caching, weakref.finalize, or bounded data structures like collections.deque to control memory growth.
- Closure state is not thread-safe, and the GIL removal makes this urgent. In Python 3.14's free-threaded mode, mutable closure variables require explicit synchronization. Use threading.Lock for shared state or threading.local() for per-thread state.
Closures are the mechanism that makes decorators possible. The @ syntax is just shorthand for passing a function to another function and rebinding the name. The closure is what keeps the original function -- and any configuration -- alive inside the wrapper. Once this mechanism is clear, writing decorators becomes an exercise in deciding what to capture, what to protect, and how the wrapper interacts with the rest of Python's object model.
Frequently Asked Questions
What is the relationship between closures and decorators in Python?
Every function-based decorator creates a closure. The decorator's outer function receives the decorated function as an argument. The inner wrapper function captures that argument as a free variable, keeping it alive in a cell object stored in the wrapper's __closure__ attribute. When the wrapper is called, it accesses the captured function through this closure mechanism.
What is a free variable in a Python closure?
A free variable is a variable used inside a function but not defined there or in the global scope. It is defined in an enclosing function's local scope. When the inner function is returned and the outer function finishes, the free variable survives because the inner function's __closure__ holds a reference to it through a cell object.
How can you inspect the closure of a decorated function?
Every function has a __closure__ attribute that is either None (no closure) or a tuple of cell objects. Each cell has a cell_contents attribute that holds the captured value. The __code__.co_freevars attribute lists the names of the free variables. Together, these two attributes let you see exactly what the closure captured and under what names.
Why is nonlocal needed in decorator closures?
Without nonlocal, assigning to a variable inside the inner function creates a new local variable that shadows the enclosing variable. The nonlocal keyword tells Python that the variable belongs to the enclosing scope, allowing the inner function to modify it. This is necessary when a decorator needs mutable state, such as a call counter or accumulated timing data.
How do parameterized decorators use closures?
A parameterized decorator adds an extra layer of nesting. The outermost function accepts the configuration arguments. It returns a middle function (the actual decorator) that accepts the function to decorate. The middle function returns the inner wrapper. The wrapper is a closure that captures both the configuration arguments and the decorated function as free variables.
What happens to closures when you stack multiple decorators?
Each decorator in a stack creates its own independent closure. The bottom decorator receives the original function and returns a wrapper. The next decorator up receives that wrapper as its func argument and returns its own wrapper. The result is a chain of closures, where each wrapper's __closure__ holds a reference to the wrapper below it. Decorators are applied bottom-up but execute top-down.
Can decorator closures cause memory leaks?
Closures keep their free variables alive by holding references through cell objects, which prevents garbage collection of those variables. This normally is not a problem for module-level decorated functions. However, decorators that cache arguments -- such as functools.lru_cache applied to instance methods -- can prevent instances from being garbage collected because self is stored as a cache key. Solutions include making the cache per-instance by applying the decorator inside __init__, using weakref.finalize to register cleanup callbacks, or using bounded data structures like collections.deque for accumulated state.
Is closure state in decorators thread-safe?
No. Mutable variables captured by a decorator's closure are shared across all calls to the decorated function, with no synchronization. Operations like incrementing a counter or appending to a list can produce race conditions when multiple threads call the decorated function concurrently. To make closure state thread-safe, protect it with a threading.Lock acquired before reading or writing the shared variable. In Python 3.14's free-threaded mode, where the GIL is disabled, this becomes even more critical because threads can truly execute Python bytecode in parallel.
Why do closure-based decorators break when wrapping descriptors like @classmethod?
A standard function-closure decorator calls the wrapped object directly via its __call__ method. But descriptors like @classmethod depend on the descriptor protocol -- their __get__ method must be invoked to produce the bound callable. A closure-based wrapper bypasses this protocol because it is a plain function, not a descriptor. The solution is to make the decorator itself a descriptor by implementing __get__ on a class-based wrapper, or to use a library like wrapt that handles this transparently.
How does Python 3.14 free-threaded mode affect decorator closures?
In Python 3.14's free-threaded mode (PEP 703), the Global Interpreter Lock is disabled, meaning threads can truly execute Python bytecode in parallel. This makes closure state that was previously incidentally safe under the GIL now genuinely vulnerable to race conditions. The specializing adaptive interpreter (PEP 659) is re-enabled in free-threaded mode in 3.14, improving performance, but decorators with mutable closure state must use explicit synchronization like threading.Lock or threading.local to remain correct.