Why Do Lambdas in a Loop All Return the Same Result?

You write a loop, create a lambda inside it for each iteration, store them all in a list, and then call them. Every single one returns the same number. Not the value from its iteration. The last one. This is one of the most reliably surprising behaviors Python has to offer, and understanding it will permanently change how you think about variables and functions.

This behavior trips up experienced developers, not just beginners. It shows up in GUI event handlers, list comprehensions with callbacks, dynamically generated test cases, and anywhere else functions get built inside a loop. The fix is a single line once you know what is happening, but the explanation requires understanding something fundamental about how Python resolves variable names at runtime.

The Problem, Reproduced

Here is the classic form. Nothing unusual-looking about it:

functions = []

for i in range(5):
    functions.append(lambda: i)

print([f() for f in functions])
# Output: [4, 4, 4, 4, 4]

The expectation is reasonable: the first lambda should return 0, the second 1, and so on up to 4. Instead, every function returns 4, which was the final value of i when the loop ended. The loop ran correctly. The values of i were 0, 1, 2, 3, and 4 at each step. So why does calling the lambdas after the loop always give back 4?

Note

This behavior is not a bug. It is intentional, consistent, and documented. Python's scoping rules work exactly as designed here. The surprise comes from a mismatch between what programmers expect and what the language actually guarantees.

To understand the output, you need two concepts: closures and late binding. They work together to produce this result.

What a Closure Actually Is

When a function references a variable that is defined in an enclosing function's scope, Python creates what is called a closure. The function carries a reference to the surrounding scope so it can look up that variable later. This is what allows a function built inside a loop to keep referring to i long after the loop that created it has finished.

The important word there is reference. The lambda does not copy the value of i at the moment it is created. It holds a reference to the variable i itself, stored in the enclosing scope. One caveat: in the module-level example above, i lives in the global namespace, so each lambda looks it up as a global rather than through a closure cell. Put the same loop inside a function and Python's built-in inspection tools make the closure visible:

def build():
    functions = []
    for i in range(5):
        functions.append(lambda: i)
    return functions

f = build()[0]

# Look at what variables the closure holds onto
print(f.__code__.co_freevars)          # ('i',)
print(f.__closure__[0].cell_contents)  # 4  (after the loop)

The closure cell does not say "remember the value 0 from when I was created." It says "I am connected to the variable named i in the enclosing scope." When the loop finishes, i is 4, and that is the only value any of the lambdas will ever see when they are called.

Python creates a data structure called a cell that references the non-local variables the closure's function accesses. This way, the variables are preserved after the outer function has terminated.

Source: Python Closures Clearly Explained, Saurus AI

At the CPython implementation level, this cell is a concrete object — an instance of the internal cell type, available as types.CellType. All the lambdas from the loop share one cell object pointing to i. There is no copying of values into separate cells per iteration. The same cell gets written and overwritten as the loop runs, and all functions hold a reference to that same cell. When you call any of them after the loop, they all read from the same cell, which now contains 4.
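That shared-cell claim is easy to verify. Here is a minimal sketch (the make_funcs helper is ours, and the loop must run inside a function so that a closure cell exists at all):

```python
import types

def make_funcs():
    funcs = []
    for i in range(5):
        funcs.append(lambda: i)
    return funcs

funcs = make_funcs()
cells = [f.__closure__[0] for f in funcs]

# All five lambdas reference the very same cell object
print(isinstance(cells[0], types.CellType))  # True
print(all(c is cells[0] for c in cells))     # True
print(cells[0].cell_contents)                # 4
```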

Think of it this way: every lambda you created has a piece of string tied to the same sign post. The sign post says i. When the loop is done, the value written on that sign post is 4. Every lambda walks over to the same sign post and reads the same value.

Late Binding: The Real Culprit

The technical name for this behavior is late binding. Python resolves the values of variables inside a function body at the time the function is called, not at the time it is defined. This is a deliberate design choice and it is consistent throughout the language.

Late binding is actually useful in the majority of cases. Consider a function that calls another function by name:

def process(data):
    return transform(data)  # 'transform' is looked up when process() runs

If Python used early binding, you could never redefine or mock transform after process was defined. Late binding gives you flexibility. But in the loop scenario, that same flexibility produces the counterintuitive result.
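A small sketch makes that flexibility concrete: rebind transform after process has been defined, and the next call to process picks up the new definition, because the name is resolved at call time.

```python
def transform(data):
    return data.upper()

def process(data):
    return transform(data)  # 'transform' is looked up when process() runs

print(process("abc"))  # ABC

def transform(data):  # rebind the name after process was defined
    return data[::-1]

print(process("abc"))  # cba
```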

The sequence of events for the lambda loop looks like this at runtime:

# What Python actually does, step by step:

# Loop iteration 0: i = 0, lambda is created and stored.
#   The lambda contains a reference to the variable 'i', not the value 0.

# Loop iteration 1: i = 1, same thing. The variable i now holds 1.
#   The first lambda still references the variable 'i', which is now 1.

# Loop iteration 2: i = 2 ...
# Loop iteration 3: i = 3 ...
# Loop iteration 4: i = 4. Loop ends. 'i' stays at 4.

# NOW you call the lambdas:
functions[0]()  # looks up 'i' right now -> 4
functions[1]()  # looks up 'i' right now -> 4
# etc.

Watch Out

The same problem occurs with regular def functions defined inside a loop, not just lambdas. The word "lambda" is not the cause. The cause is that any function referencing a loop variable will close over the variable itself, not a snapshot of its value.
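To confirm it, here is the same loop with def in place of the lambda, a minimal sketch mirroring the example above:

```python
functions = []

for i in range(5):
    def f():
        return i  # closes over the variable 'i', just like the lambda did
    functions.append(f)

print([g() for g in functions])  # [4, 4, 4, 4, 4]
```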

Why the Loop Variable Persists

There is a deeper wrinkle here that surprises developers coming from other languages: in Python, the loop variable is not scoped to the loop. After for i in range(5) finishes, i still exists in the enclosing scope, still holding its final value. Python has no block scoping; name lookup walks only local (function) scope, enclosing-function scope, global (module) scope, and built-in scope (the LEGB rule).

This means the lambdas do not merely "see the last value before the loop ended." They see a live variable that continues to exist in scope after the loop, accessible and modifiable by anything in the same function. If you reassign i after the loop and then call a lambda, the lambda will return the new value:

functions = []
for i in range(5):
    functions.append(lambda: i)

i = 99  # reassign after the loop

print(functions[0]())  # 99, not 4

This demonstrates that the lambdas are not reading a snapshot frozen at loop-end. They are reading a live variable that happens to be in the same scope. The loop variable outlives the loop entirely, which is different from languages like C++, Java, or Rust where block-scoped variables disappear when the block exits.

Late binding is good in lots of situations. Looping to create unique functions is unfortunately a case where it can cause hiccups.

Source: The Hitchhiker's Guide to Python — Common Gotchas

Four Ways to Fix It

Once you understand that the lambda holds a reference to the variable rather than the value, the fix becomes obvious: you need to capture the value at the time of creation. There are four clean ways to do this, each with different tradeoffs.

Option 1: Default Argument Capture (Most Common)

Default argument values in Python are evaluated at function definition time, not at call time. This is the opposite of late binding, and you can exploit it to snapshot a value:

functions = []

for i in range(5):
    functions.append(lambda i=i: i)
    #                      ^^^
    # This is a default argument named 'i' with a default value of i.
    # The right-hand 'i' is evaluated NOW, at definition time.

print([f() for f in functions])
# Output: [0, 1, 2, 3, 4]

The lambda i=i: i syntax looks odd at first glance. There are two different things named i here: the parameter i (left of the equals sign) and the loop variable i (right of the equals sign). The default value of the parameter is computed when the lambda is defined, so it captures the current value of the loop variable at that iteration. The parameter name does not have to be i — many developers use a different name to make this clearer:

functions = []

for i in range(5):
    functions.append(lambda val=i: val)

print([f() for f in functions])
# Output: [0, 1, 2, 3, 4]

Pro Tip

Using a different parameter name like val instead of i is worth the extra characters. It makes the intent explicit and removes any confusion about which i is which when someone reads the code later.

Option 2: A Factory Function

A factory function wraps the lambda in another function call. Each call to the factory creates a fresh local scope, and the value passed in is bound to a parameter in that scope, not the loop variable:

def make_lambda(n):
    return lambda: n

functions = []

for i in range(5):
    functions.append(make_lambda(i))

print([f() for f in functions])
# Output: [0, 1, 2, 3, 4]

Here, n is a local variable inside make_lambda. Each call to make_lambda(i) passes the current value of i as an argument. The returned lambda closes over n in that specific invocation's scope, and since n never changes after the function returns, the captured value is stable. This is the most readable and explicit approach, and it works well when the logic inside the lambda is complex enough to warrant its own function anyway.

Option 3: functools.partial

If the goal is to partially apply arguments rather than wrap logic in a new lambda, functools.partial is clean and idiomatic:

from functools import partial

def return_value(x):
    return x

functions = [partial(return_value, i) for i in range(5)]

print([f() for f in functions])
# Output: [0, 1, 2, 3, 4]

partial binds the argument at the time it is created, so there is no late binding issue. This pattern is particularly useful when working with callback-based frameworks where you need to pass a pre-configured function rather than a lambda.

Option 4: Immediately Invoked Lambda Expression (IIFE)

An immediately invoked function expression — borrowed from JavaScript terminology — uses an outer lambda that is called right away to create a new, isolated scope. The inner lambda closes over a local variable in that scope instead of the loop variable:

functions = []

for i in range(5):
    functions.append((lambda val: lambda: val)(i))
    #                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    # The outer lambda is called immediately with the current value of i.
    # It creates a new scope where val = i (captured now, not later).
    # The inner lambda closes over val in that private scope.

print([f() for f in functions])
# Output: [0, 1, 2, 3, 4]

This pattern achieves the same early binding as a factory function, but without naming the outer function. The tradeoff is readability: the double-lambda syntax is genuinely confusing to most readers and, as Real Python notes, Python does not encourage immediately invoked lambdas as a general pattern. It is worth knowing the pattern exists — you will encounter it in codebases — but the factory function approach communicates intent far more clearly. Use this option when you are writing deliberately compact, functional-style code where the context makes the structure apparent.

Choosing Between the Four Options

Default argument capture (lambda val=i: val) is the go-to for one-liners and list comprehensions. Factory functions are the right choice when readability or testability matters. functools.partial fits best when you are partially applying an existing named function. IIFEs are a curiosity worth recognizing but rarely the clearest choice in production code.

The list-of-lambdas example is clean for illustration, but this problem surfaces in several practical patterns that are worth recognizing by name.

When This Bites You in Real Code

GUI Button Callbacks

This is one of the places developers hit this bug hardest, because the time gap between definition and execution is long and obvious. Creating multiple buttons in a loop, each with a handler that should reference a different index or item:

import tkinter as tk

root = tk.Tk()
buttons = []

# BUG: all buttons will print 4 when clicked
for i in range(5):
    btn = tk.Button(root, text=f"Button {i}",
                    command=lambda: print(i))
    buttons.append(btn)
    btn.pack()

# FIX: capture i at definition time
for i in range(5):
    btn = tk.Button(root, text=f"Button {i}",
                    command=lambda val=i: print(val))
    buttons.append(btn)
    btn.pack()

root.mainloop()

Every button in the buggy version prints 4 when clicked, regardless of which button it is, because the loop has long since finished by the time any button is clicked. This scenario makes the time-gap problem visceral: the code that creates the buttons runs once at startup, but the lambdas run later — whenever a user clicks. That delay is why the bug is so hard to spot during a quick code review.

Dynamically Built Dispatch Tables

When building a mapping from keys to handler functions programmatically:

handlers = {}
actions = ["open", "save", "close"]

# BUG: all handlers reference the same final value of 'action'
for action in actions:
    handlers[action] = lambda: f"Handling {action}"

print(handlers["open"]())   # "Handling close"
print(handlers["save"]())   # "Handling close"

# FIX
for action in actions:
    handlers[action] = lambda a=action: f"Handling {a}"

print(handlers["open"]())   # "Handling open"
print(handlers["save"]())   # "Handling save"

List Comprehensions with Callable Results

List comprehensions that produce lambdas have the same scoping rules as a for loop. The comprehension variable is a free variable in the same sense:

# BUG
multipliers = [lambda x: x * n for n in range(1, 6)]
print(multipliers[0](10))  # 50, not 10

# FIX
multipliers = [lambda x, n=n: x * n for n in range(1, 6)]
print(multipliers[0](10))  # 10
print(multipliers[2](10))  # 30

Dynamically Generated Test Cases

This scenario is less commonly discussed but just as dangerous: generating test functions programmatically and storing them in a loop. This comes up when you want to build a set of tests at collection time based on configuration data or dynamically discovered test cases.

# BUG: every generated test closes over 'expr' and 'expected', so all of
# them check the values from the last iteration. (A lambda body cannot
# contain an assert statement, so in practice this bug appears with a
# nested def, as below.)
test_cases = [("add", 1 + 1, 2), ("mul", 2 * 3, 6), ("sub", 5 - 3, 2)]

test_funcs = []
for name, expr, expected in test_cases:
    def test():
        # 'expr' and 'expected' hold whatever the loop left them at: the
        # "sub" case's values. Every test silently checks the same case,
        # so a wrong result in "add" or "mul" would go unnoticed.
        assert expr == expected
    test_funcs.append(test)

# FIX: use a factory function or default args to snapshot the values
def make_test(expr, expected):
    def test():
        assert expr == expected
    return test

test_funcs = [make_test(expr, expected) for name, expr, expected in test_cases]

With pytest specifically, the right solution is to use @pytest.mark.parametrize rather than generating functions in a loop — parametrize handles the value capture correctly. But when you are building test utilities outside of pytest's machinery, or generating callable test objects for another test runner, the late binding trap is waiting.
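For reference, the parametrized version might look like this sketch (pytest passes each case's values in as function arguments, so nothing closes over a loop variable; the test name and cases here are illustrative):

```python
import pytest

@pytest.mark.parametrize(
    "expr, expected",
    [(1 + 1, 2), (2 * 3, 6), (5 - 3, 2)],
    ids=["add", "mul", "sub"],
)
def test_arithmetic(expr, expected):
    # pytest binds expr/expected per case; no shared variable to capture
    assert expr == expected
```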

How Other Languages Handle This

Understanding that Python is not alone here — and that some languages handle this differently by design — sharpens your mental model of what Python is actually choosing to do.

JavaScript (pre-ES6) has exactly the same problem with var-scoped variables in loops. The classic fix, before let was introduced, was also the factory function pattern or an IIFE. ES6's let keyword solved it by giving each loop iteration its own block-scoped binding, so closures over let variables each capture a distinct variable. Python has not made an equivalent change and has no plans to — the proposal to make loop variables iteration-scoped has been discussed in the Python community and consistently rejected in favor of keeping the current simple, consistent scoping rules.

Rust sidesteps the issue at the language level. A Rust closure captures each variable by reference, by mutable reference, or by value, with the mode inferred from how the closure uses it, and the move keyword forces the closure to take ownership of everything it captures. The borrow checker then rejects any code that would mutate a variable while a closure still borrows it, so there is no ambiguity about what is captured or when.

Java took a different approach: lambdas can only close over variables that are effectively final — meaning the variable is never reassigned after its initial assignment. This prevents the loop variable capture problem by making it a compile-time error to close over a mutable loop variable at all. You must explicitly copy the value into a new final variable before the lambda can reference it.

Python's approach — unrestricted late binding with mutable free variables — is more flexible and consistent, but it places the responsibility for correctness entirely on the programmer.

Does This Affect Async Code?

Yes, and in a way that can be harder to diagnose because the gap between definition and execution is even wider. An async def function is still a closure when it references variables from an outer scope. Coroutines created in a loop will exhibit exactly the same late binding behavior:

import asyncio

async def run_all():
    tasks = []

    # BUG: all coroutines will use i = 4 when they run
    for i in range(5):
        async def task():
            return i
        tasks.append(asyncio.create_task(task()))

    results = await asyncio.gather(*tasks)
    print(results)  # [4, 4, 4, 4, 4]

# FIX: factory coroutine captures the value in its own scope
async def run_all_fixed():
    async def make_task(val):
        return val

    tasks = [asyncio.create_task(make_task(i)) for i in range(5)]
    results = await asyncio.gather(*tasks)
    print(results)  # [0, 1, 2, 3, 4]

asyncio.run(run_all())        # [4, 4, 4, 4, 4]
asyncio.run(run_all_fixed())  # [0, 1, 2, 3, 4]

The async case is particularly tricky because the coroutine body does not run until the event loop schedules it — which is always after the loop creating it has finished. By the time task() runs and looks up i, the loop is long done and i is at its final value. The fix is the same: capture the value at creation time using a factory coroutine or default argument.

Async and the Event Loop Make This Worse

With synchronous code you might catch the bug quickly if you call the functions right after the loop in a test. With async code, coroutines are scheduled by the event loop and may not run until much later, making the time gap between definition and execution even harder to reason about.

Key Takeaways

  1. Lambdas close over variables, not values. When a lambda references a variable from an enclosing scope, it holds a reference to that variable. Whatever the variable holds when the lambda is called is what the lambda returns, regardless of what it held when the lambda was defined.
  2. Late binding is the mechanism, not the bug. Python resolves variable names at call time by design. This is what makes the language dynamic and flexible. The loop case simply exposes this behavior in an unexpected context.
  3. The loop variable persists after the loop. Python does not scope variables to loops. After for i in range(5) finishes, i still lives in the enclosing scope at its final value. Anything in that scope — including code you add after the loop — can read or change it, and any lambdas will reflect that.
  4. Default arguments are evaluated early. The idiomatic fix, lambda val=i: val, works because default argument values are computed when the function is defined, not when it is called. This is one of the few places in Python where you get early binding behavior.
  5. Factory functions are the clearest solution. When readability matters more than brevity, wrapping the lambda inside a factory function makes the intent explicit and eliminates any ambiguity about what is being captured. It is also easiest to unit test in isolation.
  6. This applies to regular functions, async functions, and comprehensions too. Any function — lambda, def, or async def — defined inside a loop with a reference to the loop variable will exhibit the same behavior. Async code is particularly risky because coroutines may not run until long after the loop completes.
  7. Other languages made different choices. JavaScript's let gives each iteration its own binding. Rust forces explicit capture mode declarations. Java requires effectively-final variables. Python's single, consistent scoping rule is simpler, but it places full responsibility for correctness on the programmer.

Late binding is one of those Python behaviors that looks like a trap until you understand the underlying model. Once you do, the fix is mechanical and the reasoning behind it gives you sharper intuition for how Python resolves names throughout the rest of the language. Every time a function runs and looks up a name, it is asking: what does this name currently point to in the enclosing scope? In a loop, the answer at call time is almost always the last value the loop variable held — or whatever was assigned to that name after the loop. Knowing that, you can spot this entire class of bug on sight, whether it shows up in a GUI callback, a generated test, or an async task.
