Python's yield from: The Complete Guide to Generator Delegation

You write a generator function. It works beautifully. Then you need to break it apart into smaller, reusable pieces, and suddenly everything falls apart. The sub-generators you extract don't receive send() values. Exceptions delivered via throw() never reach them. Calling close() on the outer generator ignores the inner one entirely.

This is exactly the problem yield from was built to solve. Not as syntactic sugar. Not as a convenience. As the only clean way to compose generators that participate in the full generator protocol.

This article unpacks yield from the way it deserves: with the real history, the real mechanics, the real PEP language, and real code that demonstrates what's actually going on under the hood. It also covers territory that other guides skip entirely: the critical interaction with PEP 479's StopIteration changes, practical debugging techniques for delegated generator chains, and the memory and performance implications that affect real-world code.

The Problem That Created yield from

Before Python 3.3, generators had a fundamental architectural flaw. Gregory Ewing, a computer scientist affiliated with the University of Canterbury in New Zealand, articulated it precisely when he authored PEP 380 on February 13, 2009. In the PEP's motivation section, Ewing explained that Python generators can only yield to their immediate caller, meaning code containing yield cannot be factored into a separate function the way ordinary code can. (Source: PEP 380, Motivation)

Think about what that means. In any normal function, you can extract a block of code into a helper function and call it. The caller doesn't know or care that the implementation changed. But with generators, that fundamental refactoring principle broke down.

Consider this generator:

def all_items():
    for i in range(3):
        yield i
    for letter in ['a', 'b', 'c']:
        yield letter
    for x in [10, 20, 30]:
        yield x

You want to split it up. Natural instinct says do this:

def numbers():
    for i in range(3):
        yield i

def letters():
    for letter in ['a', 'b', 'c']:
        yield letter

def tens():
    for x in [10, 20, 30]:
        yield x

def all_items():
    for v in numbers():
        yield v
    for v in letters():
        yield v
    for v in tens():
        yield v

That two-line for v in sub(): yield v loop works fine when you're only pulling values out. But PEP 342, authored by Guido van Rossum and Phillip J. Eby and implemented in Python 2.5, had already given generators the ability to receive values via send(), handle exceptions via throw(), and clean up via close(). These capabilities made generators into coroutines, but the delegation story was completely broken.
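A small sketch makes the breakage concrete. The generator names here are illustrative; the point is that the two-line loop forwards yielded values only, so anything the caller sends to the outer generator never reaches the inner one:

```python
received = []

def inner():
    """Accumulate values sent in via send()."""
    while True:
        value = yield
        received.append(value)

def outer_manual():
    # The naive two-line delegation: forwards yielded values only.
    for v in inner():
        yield v

gen = outer_manual()
next(gen)        # prime
gen.send(42)     # 42 is received by outer's `yield v` and discarded;
                 # the for-loop then calls next(inner), sending None instead
print(received)  # [None] -- inner never saw the 42
```

The for-loop can only call next() on the sub-generator, so the sent value dies in the delegating frame.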

Ewing's PEP 380 directly addressed this: when a subgenerator needs to interact properly with the caller through send(), throw(), and close(), the forwarding code becomes considerably more difficult. He wasn't exaggerating. The boilerplate required to correctly forward all of those protocol methods was roughly 25 lines of dense, tricky code with subtle corner cases that were easy to get wrong.

Ewing also addressed why previous proposals had failed: earlier proposals only dealt with yielding values, and the two-line for-loop they replaced wasn't tiresome enough on its own to justify new syntax. By handling the full generator protocol, PEP 380 delivered substantially more benefit. (Source: PEP 380, Rationale)

Guido van Rossum officially accepted PEP 380 on June 26, 2011, and yield from shipped in Python 3.3. (Source: PEP 380, PEP Acceptance)

What yield from Actually Does

The syntax is deceptively simple:

yield from <expr>

Here is what happens when the interpreter encounters it:

  1. <expr> is evaluated and an iterator is extracted from it.
  2. That iterator runs to exhaustion.
  3. Every value the iterator yields goes directly to the caller of the delegating generator.
  4. Every value the caller sends via send() goes directly to the sub-iterator.
  5. Exceptions thrown via throw() are forwarded to the sub-iterator.
  6. If the delegating generator is closed, the sub-iterator's close() is called.
  7. When the sub-iterator raises StopIteration, its value becomes the value of the entire yield from expression.
Note

That last point is crucial and often missed. yield from is an expression. It produces a value -- specifically the return value of the sub-generator.

Here's a concrete demonstration:

def sub_generator():
    total = 0
    while True:
        value = yield
        if value is None:
            return total  # This becomes the yield from expression's value
        total += value

def delegator():
    result = yield from sub_generator()
    print(f"Sub-generator returned: {result}")

# Drive it
gen = delegator()
next(gen)          # Prime the generator
gen.send(10)       # Goes directly to sub_generator
gen.send(20)       # Goes directly to sub_generator
gen.send(30)       # Goes directly to sub_generator
try:
    gen.send(None) # Triggers the return, StopIteration propagates
except StopIteration:
    pass

# Output: Sub-generator returned: 60

Notice that delegator() never touches the individual values. It doesn't intercept them, transform them, or even know what they are. The yield from expression creates a transparent, bidirectional channel between the caller and the sub-generator. PEP 380 describes this as the sub-iterator yielding and receiving values directly to or from the caller of the delegating generator.

The Simplest Use Case: Flattening Iteration

Before going deeper, it's worth acknowledging the simplest and most common use of yield from: replacing for v in iterable: yield v with a single expression.

def flatten(nested_list):
    for sublist in nested_list:
        yield from sublist

data = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
print(list(flatten(data)))
# [1, 2, 3, 4, 5, 6, 7, 8, 9]

This works with any iterable, not just generators. yield from calls iter() on whatever you hand it:

def chain_ranges():
    yield from range(3)
    yield from "abc"
    yield from (x * 10 for x in range(1, 4))

print(list(chain_ranges()))
# [0, 1, 2, 'a', 'b', 'c', 10, 20, 30]

The Python 3.3 "What's New" documentation states that for simple iterators, yield from iterable is essentially a shortened form of a for-loop yielding each item. But it immediately follows with the critical distinction: unlike an ordinary loop, yield from allows subgenerators to receive sent and thrown values directly from the calling scope, and to return a final value to the outer generator. (Source: Python 3.3 What's New, PEP 380)

Recursive Generators: Where yield from Truly Shines

Tree traversal is the canonical case where yield from transforms ugly, confusing code into something readable:

class TreeNode:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def inorder(node):
    if node is None:
        return
    yield from inorder(node.left)
    yield node.value
    yield from inorder(node.right)

# Build a tree:
#        4
#       / \
#      2   6
#     / \ / \
#    1  3 5  7

tree = TreeNode(4,
    TreeNode(2, TreeNode(1), TreeNode(3)),
    TreeNode(6, TreeNode(5), TreeNode(7))
)

print(list(inorder(tree)))
# [1, 2, 3, 4, 5, 6, 7]

Without yield from, this requires manually iterating over the recursive call:

def inorder_old(node):
    if node is None:
        return
    for val in inorder_old(node.left):
        yield val
    yield node.value
    for val in inorder_old(node.right):
        yield val

The difference isn't just aesthetics. PEP 380 specifically mentions optimization opportunities when there is a long chain of generators, which can arise when recursively traversing tree structures. (Source: PEP 380, Optimisations)

The performance consideration matters because each additional layer in a for v in gen(): yield v chain adds overhead for every value that passes through. With yield from, the interpreter can in principle short-circuit that overhead.

Composing Generators: The Brett Slatkin Pattern

Brett Slatkin, a principal software engineer at Google and author of Effective Python, devotes an entire item to this pattern. In the third edition of Effective Python: 125 Specific Ways to Write Better Python (2024), Item 45 is titled "Compose Multiple Generators with yield from." (Source: O'Reilly, Effective Python 3rd Ed. Table of Contents)

The pattern is straightforward but powerful. Instead of building one monolithic generator, you compose it from focused sub-generators:

def read_header(lines):
    """Yield header lines until the first blank line."""
    for line in lines:
        if line.strip() == '':
            return  # Done with header
        yield line.strip()

def read_body(lines):
    """Yield body lines, stripping trailing whitespace."""
    for line in lines:
        yield line.rstrip()

def parse_document(lines):
    """Parse a document with a header section and body section."""
    yield from read_header(lines)
    yield from read_body(lines)

# Usage
raw = iter([
    "Title: My Document\n",
    "Author: PCC\n",
    "\n",
    "This is the body.\n",
    "It has multiple lines.\n",
])

for part in parse_document(raw):
    print(repr(part))
# 'Title: My Document'
# 'Author: PCC'
# 'This is the body.'
# 'It has multiple lines.'
Pro Tip

Notice that the lines iterator is shared across read_header and read_body. When read_header hits a blank line and returns, the iterator's position is preserved. read_body picks up exactly where read_header left off. yield from doesn't create copies; it delegates to the same underlying iterator.

This pattern is especially valuable when individual sections require different parsing logic, validation rules, or error handling. Each sub-generator can be independently tested and reused, while the outer generator simply sequences them. The cognitive load drops dramatically: instead of tracking interleaved states in a single function, each sub-generator owns a single concern.
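To show the independent-testing claim in action, here is a minimal sketch (repeating read_header from above, with illustrative test data) that verifies both its output and the shared-iterator hand-off:

```python
def read_header(lines):
    """Yield header lines until the first blank line (same as above)."""
    for line in lines:
        if line.strip() == '':
            return  # Done with header
        yield line.strip()

# The sub-generator is testable in complete isolation:
lines = iter(["Title: X\n", "\n", "body text\n"])
assert list(read_header(lines)) == ["Title: X"]

# The blank separator was consumed by read_header's for-loop;
# the shared iterator now sits exactly at the first body line.
assert next(lines) == "body text\n"
```

Note that the separator line is consumed by read_header itself, which is exactly the contract read_body relies on.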

The Return Value Channel

One of the less-discussed capabilities of yield from is its ability to capture the return value of a sub-generator. This is the mechanism that made yield from essential for building asyncio.

def accumulate():
    """Accept values via send(), return the total when done."""
    total = 0
    while True:
        value = yield
        if value is None:
            return total
        total += value

def gather_tallies(tallies):
    """Repeatedly delegate to accumulate(), collecting results."""
    while True:
        tally = yield from accumulate()
        tallies.append(tally)

results = []
acc = gather_tallies(results)
next(acc)  # Prime the outer generator

# First batch
for i in range(4):
    acc.send(i)
acc.send(None)  # Signal end of first batch

# Second batch
for i in range(5):
    acc.send(i)
acc.send(None)  # Signal end of second batch

print(results)
# [6, 10]

This example comes directly from the Python 3.3 documentation for PEP 380. (Source: Python 3.3 What's New) The outer generator gather_tallies doesn't interact with the individual values at all. It only cares about the final result from each delegation. The yield from accumulate() expression blocks (in generator terms) until accumulate returns, and that return value becomes the value of the expression.

What makes this pattern uniquely powerful is the separation of concerns it enables. The caller sends data without knowing where it lands. The sub-generator processes data without knowing who sent it. And the delegating generator captures the result without having to manage any of the intermediate traffic. This three-layer transparency is something no manual for-loop can achieve.

The Formal Semantics: What the Interpreter Really Does

PEP 380 provides a complete expansion of yield from into equivalent Python code. Understanding it removes all mystery. Here is the simplified logic (the full version handles additional edge cases):

# RESULT = yield from EXPR
# is semantically equivalent to:

_i = iter(EXPR)
try:
    _y = next(_i)
except StopIteration as _e:
    _r = _e.value
else:
    while True:
        try:
            _s = yield _y
        except GeneratorExit as _e:
            try:
                _m = _i.close
            except AttributeError:
                pass
            else:
                _m()
            raise _e
        except BaseException as _e:
            _x = sys.exc_info()
            try:
                _m = _i.throw
            except AttributeError:
                raise _e
            else:
                try:
                    _y = _m(*_x)
                except StopIteration as _e:
                    _r = _e.value
                    break
        else:
            try:
                if _s is None:
                    _y = next(_i)
                else:
                    _y = _i.send(_s)
            except StopIteration as _e:
                _r = _e.value
                break
RESULT = _r

As Greg Ewing himself acknowledged, this expansion is dense. Luciano Ramalho, in his book Fluent Python (O'Reilly), conveys Ewing's own guidance: developers should not try to learn yield from by reading the expansion. The expansion exists to pin down details for language implementers. Studying real code that uses yield from is far more productive than dissecting the pseudocode. (Source: Luciano Ramalho, Fluent Python, O'Reilly Media, Chapter 16)

The key takeaways from the expansion: the sub-iterator is primed automatically with next(). If send() receives None, next() is called on the sub-iterator; otherwise send() is forwarded. Exceptions from throw() go to the sub-iterator's throw() method. GeneratorExit triggers close() on the sub-iterator. And when the sub-iterator raises StopIteration, its value attribute becomes the result.
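The throw() path of the expansion is the least intuitive, so here is a minimal sketch (generator names are illustrative) showing that an exception thrown at the outer generator is raised inside the sub-generator, which can catch it and keep the whole chain alive:

```python
events = []

def resilient_sub():
    """Catches ValueError delivered via throw() and keeps going."""
    while True:
        try:
            value = yield
            events.append(("got", value))
        except ValueError:
            events.append(("caught", "ValueError"))

def delegator():
    yield from resilient_sub()

gen = delegator()
next(gen)                 # prime
gen.send(1)
gen.throw(ValueError)     # delivered at the yield inside resilient_sub's try
gen.send(2)               # the chain is still alive
print(events)
# [('got', 1), ('caught', 'ValueError'), ('got', 2)]
```

Without yield from, the thrown exception would terminate the delegating generator before the sub-generator ever saw it.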

PEP 479 and StopIteration: The Hidden Interaction

There's a critical wrinkle that many guides on yield from completely ignore: PEP 479, authored by Chris Angelico and Guido van Rossum, which changed how StopIteration interacts with generators. The change was opt-in via from __future__ import generator_stop starting in Python 3.5 and became the default behavior in Python 3.7.

Before PEP 479, a StopIteration exception raised inside a generator (for any reason) would silently terminate the generator. This was a bug magnet. Consider this:

# Before PEP 479 behavior (Python 3.6 and earlier)
def fragile():
    it = iter([1, 2, 3])
    while True:
        yield next(it)  # StopIteration silently ends the generator

# After PEP 479 (Python 3.7+)
# The same code raises RuntimeError:
# "generator raised StopIteration"

PEP 479 made it so that any StopIteration raised inside a generator (other than by the generator's own return statement) is converted into a RuntimeError. This prevents accidental silent termination.

Here's where it connects to yield from: the yield from machinery specifically handles StopIteration from the sub-iterator. It catches it and extracts the return value. So yield from is exempt from PEP 479's conversion -- the StopIteration from the exhausted sub-iterator is expected and handled properly. But if a StopIteration bubbles up from somewhere unexpected inside the sub-generator, PEP 479 will still catch it as a RuntimeError.

def safe_sub():
    """This is safe: return raises StopIteration
    which yield from catches normally."""
    yield 1
    yield 2
    return "done"

def unsafe_sub():
    """This will raise RuntimeError in Python 3.7+."""
    it = iter([1, 2])
    while True:
        yield next(it)  # RuntimeError when it exhausts!

def delegator_safe():
    result = yield from safe_sub()
    print(f"Got: {result}")  # "Got: done"

def delegator_unsafe():
    result = yield from unsafe_sub()  # RuntimeError propagates!

# Fix for unsafe_sub:
def safe_fixed():
    it = iter([1, 2])
    for val in it:  # for-loop handles StopIteration correctly
        yield val
    return "done"
Warning

If you're porting pre-3.7 generator code that uses yield from, watch for bare next() calls inside sub-generators. These were silent bugs before PEP 479. Now they raise RuntimeError, which is better -- but they can still surprise you if you aren't aware of the change.

Memory, Performance, and Stack Depth

How does yield from compare to manual forwarding in practice? The answer depends on what you measure.

Per-value overhead: In CPython, when you write for v in sub(): yield v, every forwarded value re-enters the delegating generator's frame and executes Python-level loop bytecode. With yield from, the forwarding is handled by a single interpreter instruction implemented in C, which is measurably faster for high-throughput generators.

import timeit

def manual_chain(n):
    def sub():
        for i in range(n):
            yield i
    for v in sub():
        yield v

def yield_from_chain(n):
    def sub():
        for i in range(n):
            yield i
    yield from sub()

n = 1_000_000
print(f"manual:     {timeit.timeit(lambda: list(manual_chain(n)), number=1):.3f}s")
print(f"yield from: {timeit.timeit(lambda: list(yield_from_chain(n)), number=1):.3f}s")

# Typical benchmark result (Python 3.12, 1M items):
# manual_chain:      ~85ms
# yield_from_chain:  ~72ms
# Difference: ~15% faster with yield from
# Note: Results vary by machine and Python build. Profile your own code.

Recursive depth: For recursive generators like tree traversals, the key constraint is Python's recursion limit (default: 1000). Each yield from in a recursive chain consumes one level of the call stack, just like the manual version. For deeply nested structures, consider an iterative approach with an explicit stack:

def inorder_iterative(root):
    """Non-recursive inorder traversal -- no stack depth limit."""
    stack = []
    node = root
    while stack or node:
        while node:
            stack.append(node)
            node = node.left
        node = stack.pop()
        yield node.value
        node = node.right

Memory profile: Each active generator object in CPython consumes approximately 200-400 bytes (frame object + generator state). In a recursive yield from chain 10 levels deep, that's around 2-4 KB. Not significant for shallow recursion, but worth tracking if you're building generators over deeply nested data structures with millions of nodes.

Pro Tip

Use sys.getrecursionlimit() to check your limit, and tracemalloc to measure actual memory consumption of generator chains in your specific use case. Don't optimize based on assumptions -- profile first.
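As a starting point, here is a minimal sketch of that kind of measurement. The nested helper is illustrative, and the reported byte count will vary by machine and Python version, so no specific number is claimed:

```python
import sys
import tracemalloc

def nested(depth):
    """Illustrative recursive generator that builds a delegation chain."""
    if depth == 0:
        yield 0
        return
    yield from nested(depth - 1)

tracemalloc.start()
gen = nested(50)
first = next(gen)   # materializes a 51-generator delegation chain
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"recursion limit: {sys.getrecursionlimit()}")
print(f"peak traced allocation: ~{peak} bytes for the chain")
```

Running this before and after a refactor gives you a concrete number to compare instead of a guess.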

Debugging Delegated Generator Chains

One practical challenge with yield from that rarely gets discussed: when something goes wrong in a multi-level delegation chain, the traceback can be confusing. Here are concrete strategies.

Strategy 1: Use gi_yieldfrom to inspect the chain.

Every generator object has a gi_yieldfrom attribute that points to the sub-iterator it's currently delegating to (or None if it isn't). You can walk this chain programmatically:

def walk_delegation_chain(gen):
    """Print the full delegation chain of a generator."""
    depth = 0
    current = gen
    while current is not None:
        name = getattr(current, '__name__', repr(current))
        print(f"{'  ' * depth}Level {depth}: {name}")
        current = getattr(current, 'gi_yieldfrom', None)
        depth += 1

# Usage:
# walk_delegation_chain(my_generator)
# Level 0: process_batches
#   Level 1: validate_and_sum

Strategy 2: Wrap sub-generators with logging during development.

import functools

def trace_generator(func):
    """Decorator that logs generator lifecycle events."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        gen = func(*args, **kwargs)
        name = func.__name__
        print(f"[TRACE] {name}: created")
        try:
            value = next(gen)
            while True:
                try:
                    sent = yield value
                    print(f"[TRACE] {name}: received send({sent!r})")
                    value = gen.send(sent)
                except GeneratorExit:
                    print(f"[TRACE] {name}: closed")
                    gen.close()
                    return
                except BaseException as e:
                    print(f"[TRACE] {name}: throw({type(e).__name__})")
                    value = gen.throw(e)  # one-arg form; multi-arg throw() is deprecated since 3.12
        except StopIteration as e:
            print(f"[TRACE] {name}: returned {e.value!r}")
            return e.value
    return wrapper

Strategy 3: Leverage Python's built-in inspect module.

import inspect

def debug_generator_state(gen):
    """Print current execution state of a generator."""
    state = inspect.getgeneratorstate(gen)
    local_vars = inspect.getgeneratorlocals(gen)
    print(f"State: {state}")
    print(f"Local variables: {local_vars}")
    if gen.gi_yieldfrom is not None:
        print(f"Delegating to: {gen.gi_yieldfrom}")
        if inspect.isgenerator(gen.gi_yieldfrom):
            debug_generator_state(gen.gi_yieldfrom)  # recurse only into real generators

These tools are especially valuable during development. In production, consider structured logging at delegation boundaries rather than wrapping every generator.

The PEP Lineage: How We Got Here

yield from didn't emerge in isolation. It's the third act in a multi-part evolution:

PEP 255 -- Simple Generators (2001, Python 2.2). Authored by Neil Schemenauer, Tim Peters, and Magnus Lie Hetland, this PEP introduced the yield keyword and generator functions to Python. It gave Python lazy iteration, but generators could only produce values; they couldn't receive them.

PEP 342 -- Coroutines via Enhanced Generators (2005, Python 2.5). Authored by Guido van Rossum and Phillip J. Eby, this PEP transformed generators into coroutines by adding send(), throw(), and close(). Generators could now receive values and handle exceptions. But the delegation problem was born: these new capabilities couldn't be cleanly forwarded to sub-generators.

PEP 380 -- Syntax for Delegating to a Subgenerator (2009, Python 3.3). Gregory Ewing's proposal closed the loop. yield from provided transparent delegation of the entire generator protocol. Implementation was by Ewing himself, integrated into CPython 3.3 by Renaud Blanch, Ryan Kelly, and Nick Coghlan, with documentation by Zbigniew Jędrzejewski-Szmek and Nick Coghlan. (Source: Python 3.3 What's New)

PEP 479 -- Change StopIteration handling inside generators (2014, Python 3.7). Authored by Chris Angelico and Guido van Rossum, this PEP fixed the silent-termination bug described earlier in this article. It directly affects how errors behave in yield from chains by converting unexpected StopIteration to RuntimeError.

PEP 492 -- Coroutines with async and await Syntax (2015, Python 3.5). Authored by Yury Selivanov, this PEP gave Python native coroutine syntax. The await keyword replaced yield from for asynchronous code, making the intent explicit.

PEP 525 -- Asynchronous Generators (2016, Python 3.6). Also by Yury Selivanov, accepted by Guido van Rossum on September 6, 2016. This PEP brought yield into async def functions, enabling async generators. PEP 525 states that while implementing yield from support for asynchronous generators is theoretically possible, it would require a serious redesign of the generators implementation. (Source: PEP 525, Asynchronous yield from)

The evolution from PEP 255 to PEP 525 tells a clear story: Python's generator protocol grew more powerful at each step, and yield from was the critical bridge that made generator composition practical before async/await took over the concurrency use case. PEP 479 then hardened the entire generator ecosystem against a class of subtle bugs that had plagued yield from chains since their introduction.

yield from and asyncio: The Bridge to Modern Async

Before Python 3.5 introduced async and await, the asyncio library (originally codenamed "Tulip," developed by Guido van Rossum beginning in October 2012) was built entirely on yield from. Coroutines were generator functions decorated with @asyncio.coroutine, and you delegated to other coroutines with yield from.

import asyncio

# Pre-3.5 style
# NOTE: @asyncio.coroutine was deprecated in Python 3.8
# and REMOVED in Python 3.11. This code will NOT run
# on Python 3.11+. Shown here for historical context only.
@asyncio.coroutine
def old_style_fetch(url):
    reader, writer = yield from asyncio.open_connection('example.com', 80)
    writer.write(b'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n')
    data = yield from reader.read()
    writer.close()
    return data

# Modern equivalent (Python 3.5+)
async def modern_fetch(url):
    reader, writer = await asyncio.open_connection('example.com', 80)
    writer.write(b'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n')
    data = await reader.read()
    writer.close()
    await writer.wait_closed()  # Best practice since Python 3.7
    return data

The two styles are functionally equivalent in their core mechanism. await is semantically the same as yield from when used with awaitables. PEP 492 made the distinction explicit so that coroutines and generators wouldn't be confused, but the underlying machinery is the same. David Beazley explored the outer limits of generator-based programming in his PyCon 2014 tutorial "Generators: The Final Frontier," where he demonstrated advanced uses of generators and coroutines for customizing program control flow -- pushing yield from to its practical limits and foreshadowing the need for dedicated async syntax. (Source: dabeaz.com/finalgenerator)

Warning

The @asyncio.coroutine decorator was deprecated in Python 3.8 and removed entirely in Python 3.11. If you encounter legacy codebases using this pattern, migrating to async def / await is mandatory for Python 3.11+. This is not a soft deprecation -- the attribute no longer exists on the asyncio module and importing code that references it will raise AttributeError.

Common Pitfalls and Misconceptions

Pitfall 1: Using yield from with a non-iterable.

def broken():
    yield from 42  # TypeError: 'int' object is not iterable

yield from calls iter() on its operand. If the object doesn't implement __iter__, you get a TypeError.

Pitfall 2: Forgetting that yield from absorbs StopIteration.

Inside a generator, a bare return raises StopIteration. When you use yield from, the delegating generator catches StopIteration from the sub-generator and resumes. This is by design, but it means bugs in your sub-generator (accidentally returning early) can be silently swallowed.
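A minimal sketch of that silent swallowing, with an illustrative stop_early flag standing in for a buggy early exit:

```python
def sub(stop_early):
    yield 1
    if stop_early:
        return     # raises StopIteration -- absorbed by yield from
    yield 2

def outer():
    yield from sub(stop_early=True)
    yield 3

print(list(outer()))  # [1, 3] -- the 2 vanished with no error
```

The delegating generator simply resumes after the early return, so nothing signals that half the sub-generator never ran.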

Pitfall 3: Assuming yield from works in async generators.

As PEP 525 explains, while implementing yield from inside async def generator functions is theoretically possible, it would require a serious redesign of the generators implementation. In practice, yield from is not available inside asynchronous generators. For async delegation, use await or async for. (Source: PEP 525, Asynchronous yield from)

Pitfall 4: Confusing yield from with itertools.chain().

For simple one-way iteration, itertools.chain(gen_a(), gen_b()) and a function using yield from gen_a(); yield from gen_b() produce the same output. But chain() cannot forward send(), throw(), or close() to sub-generators, and it cannot capture return values. Reach for chain() when you only need to concatenate iterables; use yield from when you need the full protocol.
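The return-value difference is easy to demonstrate. In this sketch (function names are illustrative), both approaches yield the same values, but only yield from can capture what the sub-generator returns:

```python
import itertools

def part_a():
    yield 1
    return "a-done"      # itertools.chain() silently discards this

def part_b():
    yield 2

# One-way concatenation: identical output
print(list(itertools.chain(part_a(), part_b())))  # [1, 2]

# Only yield from captures the return value:
def composed():
    result = yield from part_a()
    yield from part_b()
    return result

def drain(gen):
    """Exhaust a generator, returning (yielded values, return value)."""
    out = []
    try:
        while True:
            out.append(next(gen))
    except StopIteration as e:
        return out, e.value

print(drain(composed()))  # ([1, 2], 'a-done')
```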

Warning

Pitfall 5: Priming conflicts. The yield from expansion shows that the sub-generator is automatically primed with next(). This means decorators that auto-prime coroutines (by calling next() before returning) are incompatible with yield from. If you use such a decorator on a sub-generator and then delegate to it with yield from, you'll skip the first yield point.
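Here is a minimal sketch of that conflict, using a hypothetical autoprime decorator of the kind found in older coroutine tutorials:

```python
def autoprime(func):
    """Decorator that primes a coroutine-style generator on creation."""
    def wrapper(*args, **kwargs):
        gen = func(*args, **kwargs)
        next(gen)          # advances past the first yield
        return gen
    return wrapper

@autoprime
def sub():
    yield "first"
    yield "second"

def delegator():
    yield from sub()       # the expansion primes again: "first" is gone

print(list(delegator()))   # ['second']
```

Drop the decorator from any generator you intend to delegate to; yield from does the priming for you.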

Pitfall 6: Assuming yield from creates a copy of the sub-iterator.

It doesn't. The delegating generator holds a reference to the exact same iterator object. If you pass a shared iterator into multiple generators sequentially (as in the document-parsing example above), each picks up where the last left off. This is powerful but can surprise you if you expect independent iteration.
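A two-generator sketch (names are illustrative) makes the shared position visible:

```python
shared = iter([1, 2, 3, 4])

def consume(src):
    yield from src        # delegates to the very same iterator object

g1 = consume(shared)
print(next(g1))           # 1
g2 = consume(shared)
print(next(g2))           # 2 -- g2 continues where the shared iterator stands
print(next(g1))           # 3 -- and g1 sees the advance too
```

Both generators are windows onto one underlying iterator; neither owns an independent copy of the sequence.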

When to Use yield from Today

With async/await handling the concurrency use case, yield from has settled into its permanent role: generator composition.

Use yield from when you're composing synchronous generators -- splitting a large generator into smaller, focused pieces. Use it for recursive generators traversing trees or nested structures. Use it anywhere you would write for item in sub_gen(): yield item, because yield from is not just shorter; it correctly handles send(), throw(), and close().

Reuven Lerner, a Python trainer who teaches developers at companies including Apple, Cisco, and Intel, summarized this clearly in his 2020 article on generators and coroutines: the real reason to use yield from is when you need a coroutine that acts as an agent between its caller and other coroutines. It lets you outsource yielded values to a sub-generator while also allowing that sub-generator to receive inputs from the caller. (Source: Reuven Lerner, "Making sense of generators, coroutines, and yield from," May 2020)

Do not use yield from as a replacement for await in new async code. The @asyncio.coroutine pattern is not just deprecated -- it has been removed from Python entirely as of version 3.11. And don't reach for yield from when a simple itertools.chain() would do, unless you need the bidirectional communication channel.

Finally, consider yield from for any pattern involving staged data processing with return values. Data validation pipelines, multi-phase parsers, accumulator patterns, and state machines all benefit from the return-value channel that yield from provides. The ability to say "delegate everything to this sub-generator and give me back its final result" is a design pattern that has no clean equivalent in Python without yield from.

Putting It All Together

Here's a complete, practical example that exercises every aspect of yield from: delegation, return values, exception forwarding, and clean composition.

def validate_and_sum():
    """Accept positive numbers via send(), return sum on completion."""
    total = 0
    count = 0
    while True:
        value = yield total  # Yield running total, receive next value
        if value is None:
            return {"total": total, "count": count}
        if not isinstance(value, (int, float)):
            raise TypeError(f"Expected a number, got {type(value).__name__}")
        if value < 0:
            raise ValueError(f"Expected a non-negative number, got {value}")
        total += value
        count += 1

def process_batches(results):
    """Process multiple batches, collecting summaries."""
    batch_num = 0
    while True:
        batch_num += 1
        summary = yield from validate_and_sum()
        summary["batch"] = batch_num
        results.append(summary)

# Drive it
results = []
processor = process_batches(results)
running_total = next(processor)  # Prime; running_total = 0

# Batch 1
running_total = processor.send(10)   # running_total = 10
running_total = processor.send(20)   # running_total = 30
running_total = processor.send(5)    # running_total = 35
processor.send(None)                  # End batch 1, start batch 2

# Batch 2
processor.send(100)
processor.send(200)
processor.send(None)                  # End batch 2

print(results)
# [{'total': 35, 'count': 3, 'batch': 1},
#  {'total': 300, 'count': 2, 'batch': 2}]

Every value sent to processor flows transparently through to validate_and_sum. The running totals flow back transparently to the caller. When a batch ends, the return value flows back to process_batches as the value of the yield from expression. No manual forwarding, no boilerplate, no missed corner cases.

That's the promise of yield from. It turned generator delegation from a 25-line error-prone boilerplate exercise into a single expression that just works.

Key Takeaways

  1. Introduced in Python 3.3 via PEP 380: Authored by Gregory Ewing (created February 13, 2009) and accepted by Guido van Rossum on June 26, 2011, yield from solved a real architectural problem that had existed since PEP 342 gave generators the coroutine protocol.
  2. Transparent, bidirectional delegation: yield from correctly forwards next(), send(), throw(), and close() calls between caller and sub-generator -- none of this is possible with a simple for loop.
  3. It's an expression, not a statement: The sub-generator's return value becomes the value of the yield from expression, enabling patterns like coroutine-based data pipelines and accumulator protocols.
  4. PEP 479 interaction matters: In Python 3.7+, unexpected StopIteration inside generators becomes RuntimeError. yield from handles sub-iterator exhaustion correctly, but bare next() calls inside sub-generators will trigger the error.
  5. Foundation for asyncio: Before async/await arrived in Python 3.5 via PEP 492, yield from was the mechanism that made asyncio's coroutine model work. The @asyncio.coroutine decorator was deprecated in 3.8 and removed in 3.11.
  6. Its role today is generator composition: Use it when composing synchronous generators, traversing recursive structures, building staged data pipelines, or anywhere you need the full generator protocol respected across delegation boundaries. Use itertools.chain() when you only need one-way iteration.
  7. Debug with gi_yieldfrom and inspect: Walk delegation chains programmatically using gi_yieldfrom, and use inspect.getgeneratorstate() and inspect.getgeneratorlocals() to examine generator state at any point in the chain.

Understanding yield from isn't optional for Python developers who work with generators beyond trivial cases. It's the mechanism that makes generators composable, and composability is what separates code that scales from code that doesn't.
