Here is a function that doubles a number, written two ways:

With def:

def double(x):
    return x * 2

With lambda:

double = lambda x: x * 2

Both produce a callable object. Both accept the same arguments. Both return the same result. If you disassemble them with Python's dis module, you will find that CPython compiles them to identical bytecode instructions. So what is the difference, and when should you use one over the other?

The short answer is: lambda creates an anonymous, single-expression function meant to be used inline and then discarded. def creates a named, multi-statement function meant to be referenced by name. But the full answer involves the history of the lambda calculus, a near-death experience for the keyword itself, a specific PEP 8 rule that tells you when you are using lambda wrong, a traceback behavior that can make debugging significantly harder, and the question of what Python 3.14's deferred annotations mean for the divide between these two constructs. Let's cover all of it.

Where lambda Comes From

The keyword lambda in Python is not arbitrary. It comes from the lambda calculus, a formal system for expressing computation invented by mathematician Alonzo Church at Princeton University. Church's foundational paper, "An Unsolvable Problem of Elementary Number Theory," appeared in the American Journal of Mathematics, volume 58 (1936), pages 345-363. The lambda calculus uses the Greek letter λ to denote function abstraction: the expression λx.x+1 means "a function that takes x and returns x+1." In this system, all functions are anonymous -- they have no names, only definitions. The lambda calculus was later proven equivalent in computational power to Alan Turing's machine model; the claim that these formalisms capture everything effectively computable is now known as the Church-Turing thesis.

Church's ideas profoundly influenced programming language design. John McCarthy's Lisp (1958) adopted lambda directly as a core language construct. In late 1993, a contributor named Amrit Prem submitted working patches adding lambda, map(), filter(), and reduce() to Python. Guido van Rossum later confirmed this attribution in CPython's Misc/HISTORY file, and recalled in his Python History blog (April 2009) that while he initially could not recall the author, the patches represented a significant, early chunk of contributed code. He also noted his discomfort with the "lambda" terminology, but accepted it for lack of a better alternative. Lambda shipped as part of Python 1.0 on January 26, 1994.

But the language's creator had mixed feelings about this inheritance from the start.

Guido Wanted to Kill It

In a March 2005 blog post titled "The fate of reduce() in Python 3000" (Python 3000 was the working name for Python 3), Guido van Rossum announced his intention to remove lambda from the language entirely. His argument was twofold: the name "lambda" is confusing to anyone without a Lisp or Scheme background, and there is a widespread misunderstanding that lambda can do things that a nested def cannot. He recounted demonstrating to fellow developer Laura Creighton that a nested def can do everything a lambda can -- and she had what he described as an "Aha!-erlebnis" upon seeing it.

Offering two choices side by side forces a decision that is irrelevant to your program.

-- Guido van Rossum, paraphrased from "The fate of reduce() in Python 3000", Artima Weblogs, March 10, 2005

His core claim was that providing two syntaxes for the same operation forces a decision that adds no value to the resulting program. He also argued that once map(), filter(), and reduce() were gone, there would not be many places where short anonymous functions were genuinely needed.

The Python community pushed back. Hard. Ultimately, lambda survived. Guido updated the blog post with an addendum confirming that lambda, filter, and map would stay, and only reduce would be moved out of the built-in namespace into functools. About a year later -- as documented by the Python Course EU tutorial on functional features -- he wrote with visible exasperation that after so many failed attempts to find an alternative syntax, the right answer was simply to keep lambda and stop wasting community time on the question. The keyword has remained unchanged in the language ever since.

Under the Hood: Identical Bytecode

The single most important technical fact about lambda vs def is this: they produce the same bytecode. You can verify this yourself using the dis module, Python's built-in bytecode disassembler:

Python REPL: disassembling both forms (CPython 3.11+)
>>> import dis

>>> def square_def(x):
...     return x * x

>>> square_lambda = lambda x: x * x

>>> dis.dis(square_def)
  2           LOAD_FAST                0 (x)
              LOAD_FAST                0 (x)
              BINARY_OP                5 (*)
              RETURN_VALUE

>>> dis.dis(square_lambda)
  1           LOAD_FAST                0 (x)
              LOAD_FAST                0 (x)
              BINARY_OP                5 (*)
              RETURN_VALUE

The instructions are identical: LOAD_FAST, LOAD_FAST, BINARY_OP, RETURN_VALUE. The only visible difference is the source line number in the first column (2 vs 1). This means there is zero performance difference between a lambda function and an equivalent def function at runtime. The Python interpreter does not treat them differently once they are compiled.

A Note on Bytecode Versions

The output shown here reflects CPython 3.11+, which unified all binary operations under BINARY_OP with a numeric operand. In Python 3.10 and earlier, you would see BINARY_MULTIPLY instead. The core point is the same in every CPython version: both forms produce identical instructions. The Python documentation explicitly states that no guarantees are made about bytecode stability across versions -- it is an implementation detail of CPython specifically.

Why the Single-Expression Restriction Exists

Python's restriction that lambda bodies can only contain a single expression is ideological rather than technical. There is no bytecode-level reason for the limitation. Guido van Rossum has stated that multi-statement lambdas will not be added to Python because they would require indentation within expressions -- which he considers incompatible with Python's significant-whitespace philosophy. The restriction is a deliberate design fence, not a compiler constraint.

Both forms produce a function object. Both support closures, default arguments, *args, and **kwargs. Both are first-class objects that can be passed as arguments, returned from functions, and stored in data structures. The differences are entirely at the level of syntax, naming, and developer experience.
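To make the feature-parity claim concrete, here is a small sketch (the make_adder helpers are hypothetical names) showing that closures, default arguments, *args, and **kwargs behave identically in both forms:

```python
# Both forms support closures (over n), *args, keyword-only defaults,
# and **kwargs. The two factories below are interchangeable.

def make_adder_def(n):
    def add(x, *extra, scale=1, **_kwargs):
        return (x + n + sum(extra)) * scale
    return add

def make_adder_lambda(n):
    # Same closure over n, same parameter features, one expression
    return lambda x, *extra, scale=1, **_kwargs: (x + n + sum(extra)) * scale

f = make_adder_def(10)
g = make_adder_lambda(10)

print(f(1, 2, 3, scale=2))  # (1 + 10 + 5) * 2 = 32
print(g(1, 2, 3, scale=2))  # identical result: 32
```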

The Eight Real Differences

1. Name vs Anonymity (and __qualname__)

A def statement binds the function to a name in the current scope. A lambda expression creates a function object and returns it as a value -- it does not bind it to any name. If you assign a lambda to a variable, you are doing the binding yourself through a separate assignment statement.

This difference is reflected in two attributes: __name__ and __qualname__:

Python REPL: the __name__ and __qualname__ attributes
>>> def add_def(a, b):
...     return a + b

>>> add_lambda = lambda a, b: a + b

>>> add_def.__name__
'add_def'

>>> add_lambda.__name__
'<lambda>'

>>> class Calculator:
...     add = lambda self, a, b: a + b
...     def subtract(self, a, b): return a - b

>>> Calculator.add.__qualname__
'Calculator.<lambda>'

>>> Calculator.subtract.__qualname__
'Calculator.subtract'

The def version knows its name is add_def. The lambda version only knows it is <lambda>, regardless of what variable you assigned it to. The __qualname__ attribute (qualified name) is particularly revealing: it shows the full dotted path to the function, which is used by pickle, profilers, and logging tools. A lambda inside a class shows Calculator.<lambda> -- if you have three lambdas in the same class, all three share that same qualified name, making them indistinguishable in any tool that relies on it.

2. Statements vs Expressions

def is a statement. It must appear on its own line (or lines) and cannot be embedded inside another expression. lambda is an expression. It can appear anywhere a value is expected: inside a function call, inside a list, as a dictionary value, as a default argument.

This is the entire reason lambda exists. It lets you create a function at the exact point where you need it, without interrupting the flow of an expression:

Python: lambda used inline where def cannot go
# Sorting by the second element of each tuple
pairs = [("alice", 32), ("bob", 25), ("carol", 28)]
pairs.sort(key=lambda pair: pair[1])

# A dictionary of strategy functions
strategies = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
    "max": lambda a, b: a if a > b else b,
}

# An immediately invoked function expression (IIFE)
result = (lambda x: x ** 2 + 1)(5)  # 26

You cannot write def in any of these positions. It is syntactically impossible to embed a def statement inside a function call, a list literal, or a dictionary literal. This is the "sole benefit" that PEP 8 refers to.

3. Single Expression vs Multiple Statements

A lambda body is restricted to a single expression. It cannot contain assignments, for loops, while loops, if/elif/else blocks (though it can use ternary if/else expressions), try/except, with, raise, return, yield, assert, or import. Attempting to use any statement inside a lambda produces a SyntaxError.

A def function has no such limitation. Its body can contain any number of statements of any kind. This is the fundamental trade-off: lambda gains embeddability at the cost of being limited to a single expression.
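A quick way to see both halves of this trade-off in one place -- using compile() so the SyntaxError can be caught rather than crashing at import:

```python
# A statement inside a lambda body is rejected at compile time, while
# the ternary if/else *expression* is allowed, because it is an
# expression. compile() lets us observe the SyntaxError safely.

try:
    compile("lambda x: (y = x + 1)", "<demo>", "eval")
except SyntaxError as exc:
    print("statement in lambda body:", type(exc).__name__)  # SyntaxError

# Ternary expression: allowed
classify = lambda n: "even" if n % 2 == 0 else "odd"
print(classify(4), classify(7))  # even odd
```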

4. Decorators and Annotations

A def function supports decorators (@staticmethod, @cache, @property, etc.) and type annotations (PEP 3107). A lambda supports neither.

def: full feature support
from functools import cache

@cache
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

lambda: none of this is possible
# No decorator syntax
# No type annotations
# No docstring
# No multi-line body

fib = lambda n: n if n < 2 \
    else fib(n-1) + fib(n-2)

You can technically apply a decorator to a lambda manually (fib = cache(lambda n: ...)), but you lose the clean @decorator syntax and readability. Type annotations are syntactically impossible on lambda parameters because the colon after a lambda parameter would be ambiguous with the colon that separates parameters from the body.
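For completeness, here is what manual decorator application looks like in practice -- a sketch using functools.cache, where the lambda must recurse through the name it was assigned to:

```python
# Manually wrapping a lambda in a decorator. It works, but there is
# no @ syntax, no annotations, and the recursion goes through the
# assigned name rather than a definition-time name.
from functools import cache

fib = cache(lambda n: n if n < 2 else fib(n - 1) + fib(n - 2))

print(fib(30))               # 832040
print(fib.cache_info().hits) # nonzero: the cache really is in play
```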

5. Docstrings

A def function can have a docstring -- a string literal as the first statement of its body that Python stores in the function's __doc__ attribute and that tools like help(), Sphinx, and IDE tooltips use:

Python: docstrings on def vs lambda
def celsius_to_fahrenheit(c):
    """Convert a temperature from Celsius to Fahrenheit."""
    return c * 9 / 5 + 32

help(celsius_to_fahrenheit)
# Help on function celsius_to_fahrenheit:
# celsius_to_fahrenheit(c)
#     Convert a temperature from Celsius to Fahrenheit.

c_to_f = lambda c: c * 9 / 5 + 32

help(c_to_f)
# Help on function <lambda>:
# <lambda>(c)

A lambda has no place for a docstring. You can assign one manually after the fact (c_to_f.__doc__ = "..."), but this defeats the purpose of inline anonymous functions and is universally considered bad practice.

6. Pickling

Python's pickle module serializes objects by referencing them by name. A def function defined at module level can be pickled because pickle can look it up by its module and name. A lambda assigned to a variable typically cannot be pickled, because its internal name is <lambda> and pickle cannot resolve that back to a specific object:

Python REPL: lambda breaks pickle
>>> import pickle

>>> def square_def(x): return x * x
>>> pickle.dumps(square_def)  # Works fine

>>> square_lambda = lambda x: x * x
>>> pickle.dumps(square_lambda)
# PicklingError: Can't pickle <function <lambda> ...>

This matters in production contexts like multiprocessing (which uses pickle to send functions to worker processes), distributed computing frameworks like Dask and Ray, and caching libraries that serialize function objects.

There are real workarounds if you genuinely need to serialize a lambda. The cloudpickle library (used internally by Apache Spark and Dask) serializes functions by value rather than by reference, which means it can handle lambdas that standard pickle cannot. The dill library is another option; the multiprocess package (a fork of the standard multiprocessing module) substitutes dill for pickle transparently, so you can use lambdas with Pool.map() without changing anything else. These are the real solutions -- not workarounds, but intentional tools built for exactly this problem. The tradeoff is that cloudpickle and dill are slower than standard pickle for large objects, and they should never be used for long-term storage or to unpickle data from untrusted sources.
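If you want to stay inside the standard library, a functools.partial over a named callable is often the simplest picklable substitute for a lambda. A minimal sketch:

```python
# Where a module-level lambda fails to pickle, functools.partial
# wrapping a *named* function (here operator.mul) round-trips with
# standard pickle, because pickle can resolve "mul" by name.
import pickle
from functools import partial
from operator import mul

double_lambda = lambda x: x * 2

try:
    pickle.dumps(double_lambda)
except (pickle.PicklingError, AttributeError) as exc:
    print("lambda:", type(exc).__name__)  # pickling fails

double_partial = partial(mul, 2)  # picklable: mul has a resolvable name
restored = pickle.loads(pickle.dumps(double_partial))
print(restored(21))  # 42
```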

7. Python 3.12+ Annotation Scope Interaction

There is one recent difference worth knowing if you work with Python's typing system. Python 3.12 (PEP 695) introduced annotation scopes for generic type parameters, and the Python 3.13 release notes explicitly confirm that annotation scopes within class scopes can now contain lambdas and comprehensions. This means the interaction between lambda and Python's evolving type annotation system is now more precisely defined than it was in earlier versions. In practice this matters if you write generic classes with type parameters and use lambdas inside them -- the scoping now behaves predictably rather than inheriting quirks from the enclosing class scope.

8. The Walrus Operator (:=) and Lambda Scope

Since Python 3.8, the walrus operator (:=) allows assignment within expressions -- the same syntactic space where lambdas live. This creates a subtle interaction: a lambda body counts as its own scope for walrus operator purposes. PEP 572 explicitly states that a lambda, being a function definition, establishes its own scope boundary. That means an assignment expression inside a lambda binds to the lambda's scope, not to the enclosing scope:

Python: walrus operator scope inside lambda
# This works: walrus binds inside lambda scope
f = lambda x: (y := x * 2, y + 1)[1]

# The variable y does NOT leak into the enclosing scope
f(5)  # Returns 11
# print(y)  would raise NameError

This scoping rule is worth knowing because it affects how walrus-equipped lambdas interact with comprehensions and generator expressions. If you need to capture an intermediate result inside a lambda, the walrus operator now makes that possible -- but the scope isolation means you cannot use this to communicate values outward, which is a deliberate safety rail.

The Traceback Problem

This is the single most practical reason to prefer def over lambda for any function you plan to keep. When an exception occurs inside a lambda, the traceback identifies it only as <lambda>:

lambda traceback:

process = lambda x: x / 0
process(5)

# File "app.py", line 1, in <lambda>
# ZeroDivisionError: division by zero

def traceback:

def process(x):
    return x / 0
process(5)

# File "app.py", line 2, in process
# ZeroDivisionError: division by zero

If you have one lambda in your code, this is not a problem. But if you have twenty -- say, as callbacks in a GUI framework, or as key functions in a data pipeline -- and one of them throws an error, the traceback says in <lambda> for all of them. You cannot tell from the traceback alone which lambda failed. With def, each function has a distinct name, and the traceback points you directly to it.

This problem compounds in profiling tools as well. cProfile, py-spy, and similar profilers aggregate time by function name. If you have multiple lambdas in a hot code path, the profiler merges them all under <lambda>, making it impossible to identify which anonymous function is the bottleneck. This is a practical cost that rarely appears in tutorials but routinely appears in production debugging sessions.
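You can watch the <lambda> name surface programmatically with the traceback module. A small sketch (risky_lambda and frame_names are illustrative names):

```python
# traceback.extract_tb reports each frame by its code object's name,
# which is where "<lambda>" comes from in real error output.
import traceback

risky_lambda = lambda x: x / 0

def risky_def(x):
    return x / 0

def frame_names(fn):
    try:
        fn(1)
    except ZeroDivisionError as exc:
        return [frame.name for frame in traceback.extract_tb(exc.__traceback__)]

print(frame_names(risky_lambda))  # last frame: '<lambda>'
print(frame_names(risky_def))     # last frame: 'risky_def'
```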

This is the primary motivation behind the PEP 8 rule on lambda assignment.

What PEP 8 Actually Says

PEP 8, Python's official style guide, contains a specific recommendation about lambda in its "Programming Recommendations" section. It instructs developers to prefer a def statement over directly binding a lambda expression to a name.

Prefer def over assigning a lambda to an identifier.

-- PEP 8, "Programming Recommendations" (van Rossum, Warsaw, Coghlan) -- paraphrased

This recommendation is enforced by linters as rule E731. Flake8, Ruff, and pycodestyle will all flag my_func = lambda x: x + 1 as a violation. The rationale is that assigning a lambda to a name eliminates the one advantage lambda has over a def: the ability to appear inline inside an expression. If you're going to bind it to a name anyway, a def statement gives you a function whose __name__ reflects the actual name you chose -- which means better tracebacks, better introspection, and no ambiguity about intent.

PEP 8
Style Guide for Python Code — Authors: Guido van Rossum, Barry Warsaw, Alyssa Coghlan. Status: Active. First created July 5, 2001. The foundational style guide for all Python code. The lambda-assignment recommendation was added to emphasize that lambda is for anonymous inline use, and def is for named reusable functions.

Note that PEP 8 does not say "never use lambda." It says do not assign a lambda to a name. Using lambda inline -- as a key argument, a callback, an element in a data structure -- is perfectly fine and idiomatic. The rule is about the assignment pattern specifically.
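Here is the E731 pattern next to its PEP 8-preferred rewrite; normalize is a hypothetical example name, and both behave identically at runtime:

```python
# Flagged by flake8/ruff/pycodestyle as E731:
normalize = lambda s: s.strip().lower()

# The PEP 8-preferred form -- same behavior, but the function knows
# its own name, which carries into tracebacks, repr(), and profilers:
def normalize_pep8(s):
    return s.strip().lower()

print(normalize.__name__)       # <lambda>
print(normalize_pep8.__name__)  # normalize_pep8
print(normalize("  Hello "), normalize_pep8("  Hello "))  # hello hello
```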

When to Use lambda

Lambda has a clear, well-defined role in Python: short, throwaway functions used exactly once at the point of definition. These are the idiomatic use cases:

Sorting with key

Python: the single most common lambda use case
students = [
    {"name": "Alice", "gpa": 3.8},
    {"name": "Bob", "gpa": 3.2},
    {"name": "Carol", "gpa": 3.9},
]

# Sort by GPA descending
students.sort(key=lambda s: s["gpa"], reverse=True)

# Get the student with the highest GPA
top = max(students, key=lambda s: s["gpa"])

The key parameter in sorted(), min(), max(), and list.sort() is the single most common use case for lambda in Python. It needs a function, the function is one expression, and you will never reference it again. Lambda is perfect here.

Quick map() and filter() Operations

Python: functional transforms
# Extract names from a list of dicts
names = list(map(lambda s: s["name"], students))

# Keep only passing students
passing = list(filter(lambda s: s["gpa"] >= 3.5, students))

Note that Guido himself argued these are almost always clearer as list comprehensions: [s["name"] for s in students] and [s for s in students if s["gpa"] >= 3.5]. Use whichever you find more readable, but be aware that the comprehension form is the more Pythonic style and -- in CPython -- typically also faster, both because it avoids a Python-level function call per element and because comprehension bytecode was inlined by PEP 709 starting in Python 3.12.

Simple Callbacks and Event Handlers

Python: GUI and framework callbacks
import tkinter as tk
from collections import defaultdict

# Tkinter button callback (assumes an existing root = tk.Tk() window)
button = tk.Button(root, text="Quit", command=lambda: root.destroy())

# Default dictionary with factory
# defaultdict(int) does the same thing here, because int() returns 0.
# The lambda form is useful when the default is not a type's
# zero-value -- e.g., defaultdict(lambda: "unknown") or
# defaultdict(lambda: [0] * 10) -- but for 0 specifically,
# defaultdict(int) is shorter, faster (no Python function call),
# and the conventional choice.
counts = defaultdict(lambda: 0)

# Deferred evaluation (calculate_timeout and get_optimal_batch are
# placeholder helpers, not real APIs)
config = {
    "timeout": lambda: calculate_timeout(retries),
    "batch_size": lambda: get_optimal_batch(),
}

When to Use def

Use def for everything else. Specifically:

If you find yourself assigning a lambda to a variable name, use def. If the logic requires more than one expression, use def. If you need error handling, loops, or intermediate variables, use def. If the function will be called from multiple places, use def. If you need a docstring, type annotations, or a decorator, use def. If debugging matters (and it always does in production), prefer def.

The decision framework is simple: if the function is small enough that giving it a name adds more visual noise than it removes, use lambda. If there is any reason at all to give it a name, use def.

When Not to Use lambda: The Anti-Patterns

Knowing when to use lambda is helpful. Knowing when not to use it is more helpful, because the anti-patterns appear far more frequently in real codebases than the correct uses do. Here are the patterns that signal a lambda is the wrong tool:

Assigning it to a name and reusing it. If you write transform = lambda x: x.strip().lower() and then call transform() in three places, that is a def. You have given it a name, you are reusing it, and every traceback will say <lambda> instead of transform.

Nesting it beyond readability. A lambda inside a lambda inside a map() is not clever code; it is a cognitive trap. The reader must simultaneously parse the scope of two anonymous closures and a higher-order function. Use named functions. Each name is a comment that the reader does not have to write themselves.

Using it to avoid importing a named alternative. If you write key=lambda x: x[2] and the same pattern appears five times in the same module, from operator import itemgetter and key=itemgetter(2) is shorter, faster, and picklable.

Writing multi-line "lambda" with backslash continuations. If the expression is too long for one line, it is too complex for lambda. The backslash is a smell. Use def and give the logic room to breathe.

Passing it to multiprocessing. If the function will be pickled -- as with multiprocessing.Pool.map() -- a lambda will fail at runtime. Use a module-level def function, or switch to cloudpickle/dill if you understand the tradeoffs.

Complete Comparison

Feature | lambda | def
Type | Expression (returns a value) | Statement (binds a name)
Body | Single expression only | Any number of statements
__name__ | <lambda> | The function's actual name
__qualname__ | Enclosing.<lambda> | Full dotted path to function
Tracebacks | Shows <lambda> | Shows the function name
Profiling | All lambdas merge under one label | Each function separately identified
Docstrings | Not supported | Fully supported
Type annotations | Not supported | Fully supported (PEP 3107)
Decorators | Not supported syntactically | Fully supported
return | Implicit (the expression's value) | Explicit (return statement)
Closures | Yes | Yes
Default args | Yes | Yes
*args/**kwargs | Yes | Yes
Walrus operator | Yes (lambda is its own scope) | Yes
Pickling | Fails with standard pickle; works with cloudpickle or dill | Works at module level with standard pickle
Bytecode | Identical to equivalent def | Identical to equivalent lambda
Performance | No difference | No difference
Inline in expressions | Yes (this is the whole point) | No (syntax error)
PEP 8 assignment | E731 violation if assigned to name | Always acceptable

The Classic Lambda-in-a-Loop Gotcha

No article on lambda would be complete without the single most famous lambda bug in Python -- creating lambdas inside a loop:

Bug: late-binding closure
functions = []
for i in range(5):
    functions.append(lambda: i)

print([f() for f in functions])
Output
[4, 4, 4, 4, 4]

You might expect [0, 1, 2, 3, 4], but every lambda returns 4. The reason is that Python closures capture variables, not values. All five lambdas share the same variable i, and by the time you call them, the loop has finished and i is 4. This is not specific to lambda -- the same behavior occurs with def -- but it appears frequently with lambda because lambda is what people use inside loops.

There are three standard fixes, each with different strengths. Understanding why each one works is more valuable than memorizing the pattern.

Fix 1: Default Argument Capture

Fix: capture the value with a default argument
functions = []
for i in range(5):
    functions.append(lambda i=i: i)  # Default arg captures current value

print([f() for f in functions])
Output
[0, 1, 2, 3, 4]

The i=i default argument evaluates at function definition time, not at call time, so each lambda captures the current value of i as its own local default. The limitation is that the captured value is exposed as an overridable parameter -- someone calling f(99) would get 99, not the original captured value. This is the fastest fix, but the least defensive.

Fix 2: functools.partial

Fix: a cleaner alternative, functools.partial
from functools import partial
from operator import mul

functions = [partial(mul, i) for i in range(5)]

print([f(1) for f in functions])
Output
[0, 1, 2, 3, 4]

functools.partial creates a new callable with certain arguments pre-filled at the time of creation. Unlike the i=i workaround, it does not expose the captured value as an overridable parameter, which makes calling the resulting function less error-prone. It also produces a function object that is picklable with standard pickle -- a meaningful advantage in multiprocessing contexts where the i=i lambda approach would still fail serialization.

Fix 3: Factory Function

Wrap the lambda-creation in a factory function. Each call to the outer function creates a new, isolated scope, so each returned closure references its own copy of the loop variable. This is the cleanest and most debuggable form, and works identically whether you use lambda or def for the inner function:

Fix: factory function, the cleanest form
def make_returner(val):
    return lambda: val  # a nested "def inner(): return val" works identically

functions = [make_returner(i) for i in range(5)]

print([f() for f in functions])
Output
[0, 1, 2, 3, 4]

The Alternative Nobody Mentions: The operator Module

There is a third option that both beginner and intermediate tutorials skip entirely: Python's built-in operator module. It provides pre-built, picklable, named callable objects for every standard Python operator. This matters when you reach for a lambda only to wrap a simple operation:

Python: operator module vs lambda
from collections import namedtuple
from operator import itemgetter, attrgetter, methodcaller

students = [
    {"name": "Alice", "gpa": 3.8},
    {"name": "Bob",   "gpa": 3.2},
    {"name": "Carol", "gpa": 3.9},
]

# Lambda approach (common)
students.sort(key=lambda s: s["gpa"])

# operator.itemgetter (faster, picklable, self-documenting)
students.sort(key=itemgetter("gpa"))

# Multi-key sort without a lambda
students.sort(key=itemgetter("gpa", "name"))

# Attribute access works on objects, not dicts:
# lambda s: s.gpa  becomes attrgetter("gpa")
Student = namedtuple("Student", ["name", "gpa"])
roster = [Student("Alice", 3.8), Student("Bob", 3.2)]
roster.sort(key=attrgetter("gpa"))

# Method call: lambda s: s.lower()  becomes methodcaller("lower")
words = ["Gamma", "alpha", "Beta"]
words.sort(key=methodcaller("lower"))

The operator module versions are picklable (they work with multiprocessing and distributed frameworks where lambda fails), faster than equivalent lambdas in CPython's implementation, and more readable for anyone unfamiliar with lambda syntax. itemgetter("gpa") reads as "get the item 'gpa'" -- which is exactly what it does. The lambda equivalent requires parsing the entire expression to understand the intent.

When you need to combine multiple sort criteria, a tuple-returning lambda or functools.cmp_to_key() is the usual answer; but for simple item- or attribute-based sorting, the operator module is the cleanest option, and it almost never appears in the standard lambda vs def debate.
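One case where a tuple-returning lambda still beats itemgetter is mixed sort directions -- descending on one key, ascending on another -- since itemgetter cannot negate a value. A sketch:

```python
# Sort by GPA descending, then name ascending. The lambda builds a
# tuple key whose numeric component is negated to flip its direction.
students = [
    {"name": "Alice", "gpa": 3.8},
    {"name": "Bob",   "gpa": 3.8},
    {"name": "Carol", "gpa": 3.9},
]

ranked = sorted(students, key=lambda s: (-s["gpa"], s["name"]))
print([s["name"] for s in ranked])  # ['Carol', 'Alice', 'Bob']
```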

Can You Test a Lambda?

This question reveals a genuine design consideration that tutorials typically skip. You can call a lambda in a test, but there are practical limits:

Python: testing lambda vs def
# You can test a lambda by calling it
double_lambda = lambda x: x * 2
assert double_lambda(5) == 10  # Works

# But if an exception is raised inside the lambda during a test,
# the traceback frame reads <lambda>, not the variable name:
# File "test_app.py", line 3, in <lambda>

# And you cannot attach a doctest to a lambda
def double(x):
    """
    >>> double(5)
    10
    """
    return x * 2

# This works with doctest. The lambda version has no place
# for a doctest to live, because it has no docstring.

In practice, any function you write a test for should be a def. The act of wanting to write a test is itself a signal that the function deserves a name. This is a useful heuristic: if you find yourself thinking "I should test this," reach for def first.

Closures: A Gap in the Standard Explanation

Both lambda and def support closures -- functions that capture variables from the enclosing scope. The comparison table marks this as equal, which is correct. But the behavior that leads to the lambda-in-a-loop bug is worth stating as its own principle, because it applies to all closures in Python, not just lambdas:

The Closure Principle

Python closures capture variables, not values. A closure holds a reference to the cell object that contains the variable, not a snapshot of the variable's value at the moment the closure was created. This is true of both lambda and def. The "lambda in a loop" bug is a closure behavior, not a lambda behavior.

Understanding this distinction prevents a class of bugs that appear in event-driven code, GUI frameworks, async callbacks, and data pipelines -- all contexts where functions are defined in one place and called in another. The fix (default argument capture, functools.partial, or a factory function) is the same regardless of whether you used lambda or def to define the inner function.
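The shared-cell behavior can be observed directly. A minimal sketch showing a lambda and a def closing over the same variable, and therefore over the very same cell object:

```python
# Both closures capture the variable n, not its value at definition
# time. Rebinding n afterwards changes what both of them see.
def make_pair():
    n = 1
    get_lambda = lambda: n
    def get_def():
        return n
    n = 99  # rebind the shared variable after both closures exist
    return get_lambda, get_def

get_lambda, get_def = make_pair()
print(get_lambda(), get_def())  # 99 99 -- both see the rebound value

# CPython gives both closures a reference to the same cell object:
print(get_lambda.__closure__[0] is get_def.__closure__[0])  # True
```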

Python 3.14: Deferred Annotations and Lambda's Place

Python 3.14, released October 2025, introduced deferred evaluation of annotations via PEP 649 (implemented through PEP 749). This is relevant to the lambda-vs-def question for a subtle reason: annotations on def functions are now compiled into special-purpose __annotate__ functions and evaluated lazily, on first access, rather than eagerly at definition time. This means the runtime cost of writing type annotations on def functions has dropped to near zero.

Lambda, of course, still cannot carry annotations syntactically. But the practical argument that "annotations add overhead at import time, so skip them for hot-path functions" has now been eliminated. There is no longer even a performance-based excuse for choosing lambda over def when annotations would be useful. If you would benefit from expressing parameter types, return types, or forward references, def is now strictly superior with no cost to pay for the annotation machinery.
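For reference, this is what annotation introspection looks like on a def; the access pattern is unchanged by PEP 649, only the evaluation timing moved. The manual workaround for a lambda is shown as a contrast, not a recommendation:

```python
# Annotations attach only to def. Reading __annotations__ works the
# same before and after PEP 649 -- the 3.14 change is *when* the
# annotation expressions get evaluated, not how you access them.
def scale(value: float, factor: float = 2.0) -> float:
    return value * factor

print(scale.__annotations__)
# e.g. {'value': <class 'float'>, 'factor': <class 'float'>, 'return': <class 'float'>}

# A lambda cannot carry annotations syntactically; the nearest
# workaround is manual assignment, which defeats the inline purpose:
scale_lambda = lambda value, factor=2.0: value * factor
scale_lambda.__annotations__ = {"value": float, "factor": float, "return": float}
```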

Real Python's guide to Python 3.14 annotations notes that function annotations apply to functions and methods but explicitly exclude lambdas. As the type system continues to mature in Python, lambda's inability to participate in the annotation ecosystem becomes an increasingly meaningful limitation for any function that lives longer than a single expression.

The Decision Framework

Here is the principle, stated as simply as possible:

The Rule

Use lambda when you need a function for a moment. Use def when you need a function for a name.

If you are passing a one-expression function directly into sorted(), map(), filter(), min(), max(), reduce(), or a callback parameter, lambda is the right tool. The function exists only for that one call and does not need a name, a docstring, or a traceback identity.

If you are defining a function that will be referenced by name -- called from multiple places, tested independently, documented, decorated, or visible in error logs -- use def. The two extra lines of code buy you debuggability, readability, and every feature Python offers for functions.

And if you are wrapping a single standard operation -- extracting a dictionary key, calling a method, comparing an attribute -- check the operator module first. It may well be that neither lambda nor def is the right tool, because a purpose-built, picklable, optimized callable already exists in the standard library.

Guido van Rossum's own position evolved from wanting to remove lambda entirely to accepting that the community debate itself was the larger waste of energy. His original observation remains sound: a nested def can do everything a lambda can, and more. Lambda's value is purely syntactic convenience -- it lets you write a function where a value is expected, nothing more. Use it for that purpose, and only that purpose, and you will never go wrong.