The Difference Between Duck Typing and Structural Typing in Python

Python has always prided itself on caring about what an object can do rather than what it is. Duck typing and structural typing both live by that principle, but they enforce it at entirely different points in your program's life—and understanding that gap is what separates code that merely works from code that is genuinely maintainable.

If you search for the difference between duck typing and structural typing, you will find plenty of articles that treat them as synonyms or that only gesture vaguely at a distinction. They are not synonyms. The Python documentation itself describes structural subtyping as a "static equivalent" of duck typing—which tells you something important: the two mechanisms share a philosophy but occupy opposite ends of the static-versus-dynamic spectrum. Getting that wrong in a large codebase has real consequences.

Where the Confusion Starts

Both duck typing and structural typing reject the idea that an object must inherit from a specific class or explicitly declare that it implements a particular interface. In both systems, compatibility is determined by the methods and attributes an object exposes rather than by its name in a class hierarchy. That shared premise is why the two are routinely conflated.

The confusion is compounded by the fact that Python's own documentation and tooling use the phrase "static duck typing" as a near-synonym for structural subtyping. The PEP 544 specification, authored by Ivan Levkivskyi, Jukka Lehtosalo, and Lukasz Langa and accepted into Python 3.8, formally names its feature "Protocols: Structural subtyping (static duck typing)." The parenthetical is not decorative—it is an acknowledgment that these two concepts are related enough to need a clear label that distinguishes them.

The Core Distinction

Duck typing is a runtime behavior: Python calls the method, and if it doesn't exist, you get an AttributeError or TypeError then and there. Structural typing is a static analysis behavior: a type checker like mypy or Pyright evaluates whether the method exists before your code ever runs.

Build the Mental Model First

Think of duck typing as a trust-on-delivery system: you ship the package first and only discover the address was wrong when the courier returns it. Structural typing is a pre-flight checklist: you verify the destination is valid before the plane leaves the gate. Same goal, getting the cargo where it needs to go, but entirely different failure modes. Keep that image in mind as you read; it will make the code examples land faster.

To understand the practical difference, you need to look at how each mechanism operates under the hood, where each one can fail you, and what Python's modern type system offers to bridge them.

Duck Typing: Behavior Checked at Runtime

Duck typing takes its name from a phrase popularized in the Python community by Alex Martelli, who wrote in a July 2000 post to the comp.lang.python newsgroup (now archived on Google Groups) that you should not check whether an object IS-a duck; instead you should check whether it BEHAVES-LIKE-A-duck "depending on exactly what subset of duck-like behaviour you need." That framing shifted the conversation from type identity to behavioral contract.

“check whether it QUACKS-like-a duck, WALKS-like-a duck”

Alex Martelli, comp.lang.python, July 2000

In practice, Python's duck typing means the interpreter makes no upfront promises about what an object can do. It simply attempts to call the method at the moment the code executes. If the method exists, execution proceeds. If it does not, Python raises an error at that moment. There is no earlier checkpoint.

# Pure duck typing: no type hints, no Protocol, no ABC
class Duck:
    def quack(self):
        return "Quack!"

    def walk(self):
        return "Waddle waddle"

class Person:
    def quack(self):
        return "I'm quacking like a duck!"

    def walk(self):
        return "Step step step"

class Cat:
    def meow(self):
        return "Meow!"

def make_it_quack(thing):
    # No type annotation, no inheritance check.
    # Python will try to call .quack() right now.
    return thing.quack()

duck   = Duck()
person = Person()
cat    = Cat()

print(make_it_quack(duck))    # "Quack!"
print(make_it_quack(person))  # "I'm quacking like a duck!"
print(make_it_quack(cat))     # AttributeError: 'Cat' object has no attribute 'quack'

Notice that the Cat error only surfaces when make_it_quack(cat) actually executes. If that line only ran under a particular branch condition or only during certain test scenarios, the bug could ship undetected. That is the defining characteristic of duck typing: it is fully dynamic, completely flexible, and entirely trusting until the moment of execution.

Why This Matters in Production

Imagine make_it_quack(cat) is inside an error-handling branch that only fires when a payment gateway times out. Your test suite never hits that path. The bug lives in your codebase for months. One night, the gateway goes down, the branch fires, and instead of a graceful fallback you get an AttributeError in production. Duck typing did not cause the bug—a bad object did—but duck typing ensured the problem was invisible until the worst possible moment. That is the trade-off you are accepting every time you skip annotations.

Duck Typing and the EAFP Principle

Experienced Python developers pair duck typing with the EAFP style (Easier to Ask Forgiveness than Permission), wrapping calls in try/except blocks rather than using hasattr guards before the call. This is idiomatic Python and it works well at small scale. The issue arrives when a codebase grows and you need to reason about compatibility across modules written by different people at different times.

# EAFP approach to duck typing
def make_it_quack_safely(thing):
    try:
        return thing.quack()
    except AttributeError:
        return f"{type(thing).__name__} cannot quack"

print(make_it_quack_safely(cat))  # "Cat cannot quack" — no crash
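
For contrast, the LBYL (Look Before You Leap) style mentioned above guards with hasattr before the call instead of catching the exception afterward. A minimal sketch, using the same illustrative Cat class:

```python
# LBYL alternative to EAFP: check for the method before calling it.
# Less idiomatic in Python, but useful when the check is cheaper than
# the exception, or when you check once and call many times.
class Cat:
    def meow(self) -> str:
        return "Meow!"

def make_it_quack_lbyl(thing):
    if hasattr(thing, "quack"):
        return thing.quack()
    return f"{type(thing).__name__} cannot quack"

print(make_it_quack_lbyl(Cat()))  # "Cat cannot quack"
```
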

Runtime Risk

Duck typing errors surface at runtime, not before. In production systems, this means a method-name typo or a refactored class can pass every static check and fail only when a specific code path is exercised by a user. Without Protocol annotations, tools like mypy have nothing to verify and cannot catch these errors.


Structural Typing: Behavior Checked at Analysis Time

Structural typing shares duck typing's core premise—that what matters is whether an object has the right shape, not what class it comes from—but it enforces that premise at static analysis time rather than at runtime. According to Wikipedia's definition, structural typing is "a static typing system that determines type compatibility and equivalence by a type's structure," in contrast to duck typing, which checks only the methods exercised along a particular execution path at runtime.

That last clause is worth pausing on: duck typing checks only the methods that are actually called in a given execution path. Structural typing checks the entire declared interface up front, regardless of which paths execute. This is a meaningful difference when your code has branching logic.
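
A minimal sketch of that branching hazard, with illustrative names (Logger, handle):

```python
# With plain duck typing, only the branch that actually executes is checked.
class Logger:
    def info(self, msg: str) -> None:
        print(f"INFO: {msg}")
    # NOTE: no .error() method

def handle(logger, failed: bool) -> None:
    if failed:
        logger.error("something broke")  # fails, but only when failed=True
    else:
        logger.info("all good")

handle(Logger(), failed=False)   # prints "INFO: all good"; the bug is invisible
# handle(Logger(), failed=True)  # would raise AttributeError at runtime
```

A Protocol requiring both info() and error() would flag Logger before either branch ever ran.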

Languages like Go, Scala, OCaml, Elm, and TypeScript use structural typing as a first-class feature of their type systems. Go's interface system is a canonical example: a type satisfies an interface simply by having the right methods, without any declaration that it does so—but the Go compiler verifies this at compile time, not at runtime.

Python gained its own structural typing mechanism through the typing.Protocol class introduced in Python 3.8 via PEP 544. The mypy documentation describes this directly: structural subtyping "can be thought of as 'static duck typing'."

# Structural typing via typing.Protocol (Python 3.8+)
from typing import Protocol

class Quackable(Protocol):
    def quack(self) -> str: ...
    def walk(self) -> str: ...

class Duck:
    def quack(self) -> str:
        return "Quack!"
    def walk(self) -> str:
        return "Waddle waddle"

class Person:
    def quack(self) -> str:
        return "I'm quacking like a duck!"
    def walk(self) -> str:
        return "Step step step"

class Cat:
    def meow(self) -> str:
        return "Meow!"
    # NOTE: Cat has no .quack() or .walk()

def make_it_quack(thing: Quackable) -> str:
    return thing.quack()

duck   = Duck()
person = Person()
cat    = Cat()

make_it_quack(duck)    # mypy: OK — Duck has quack() and walk()
make_it_quack(person)  # mypy: OK — Person has quack() and walk()
make_it_quack(cat)     # mypy: ERROR — Cat is incompatible with Quackable
                       # Argument 1 to "make_it_quack" has incompatible type
                       # "Cat"; expected "Quackable"

The critical point: Cat does not inherit from Quackable and does not need to. mypy identifies the incompatibility purely by comparing the structure of Cat against the interface declared in Quackable. This is structural subtyping. Duck and Person are compatible with Quackable not because they said so, but because they structurally satisfy it.

Structural Typing in One Sentence

The Protocol defines the shape of a contract. Any class that has the right shape automatically satisfies it, the way a standard power outlet accepts any plug that matches its socket geometry—no brand registration required. The socket is the Protocol; the plug's shape is the class's structure; and the electrical panel inspector (mypy/Pyright) checks the fit before you ever touch the switch.

Pro Tip

Neither Duck nor Person needs to declare class Duck(Quackable). That's the whole point of structural typing: implicit satisfaction of an interface, verified statically. This keeps your classes decoupled from the protocol definition while still giving type checkers something concrete to verify.

How PEP 544 Formalized the Bridge

Before Python 3.8, developers who wanted any kind of interface enforcement had two options: use Abstract Base Classes (ABCs) with explicit inheritance, or use plain duck typing and trust that callers passed the right objects. Both had drawbacks. ABCs required explicit subclassing, which created coupling between the implementing class and the ABC definition. Pure duck typing provided no tooling support whatsoever.

PEP 544 solved this by introducing typing.Protocol. The PEP's stated motivation is that structural subtyping maps naturally to Python's existing duck typing semantics, allowing objects to be evaluated by their properties independently of their runtime class—without requiring the explicit opt-in declarations that ABCs demand. The Protocol mechanism gives static type checkers a way to reason about duck-typed code without forcing developers to abandon Python's flexible, non-hierarchical style.

A class is structurally compatible with a Protocol if it defines the required methods and attributes with matching signatures. No inheritance declaration is required. The type checker handles the matching. The PEP specifies that protocol classes are "implicitly" satisfied—a class is a structural subtype of a protocol simply by having the right shape, without any opt-in declaration at the class definition site.

# The PEP 544 canonical example: Bucket satisfies SizedIterable
# without explicitly inheriting from either.
# Python 3.9+: use collections.abc.Iterator instead of typing.Iterator (PEP 585)
from collections.abc import Iterator
from typing import Protocol

class SizedIterable(Protocol):
    def __len__(self) -> int: ...
    def __iter__(self) -> Iterator[int]: ...

class Bucket:
    # No inheritance from SizedIterable, no ABC registration.
    # Bucket simply implements the required methods.
    def __init__(self) -> None:
        self._data: list[int] = [1, 2, 3]
        self._size: int = len(self._data)

    def __len__(self) -> int:
        return self._size

    def __iter__(self) -> Iterator[int]:
        return iter(self._data)

def collect(items: SizedIterable) -> int:
    total = 0
    for item in items:
        total += item
    return total

b = Bucket()
result = collect(b)  # mypy: passes type check
print(result)        # 6

Runtime Protocol Checks with @runtime_checkable

By default, Protocol classes cannot be used with isinstance() at runtime—they are purely a static analysis tool. If you want runtime isinstance checks to work as well, you apply the @runtime_checkable decorator. The Python documentation notes that runtime-checkable protocols "check only the presence of given attributes, ignoring their type signatures." That means a runtime check is a weaker guarantee than a static type-checker check, which validates full method signatures.

from typing import Protocol, runtime_checkable

@runtime_checkable
class Quackable(Protocol):
    def quack(self) -> str: ...

class Duck:
    def quack(self) -> str:
        return "Quack!"

class Cat:
    def meow(self) -> str:
        return "Meow!"

duck = Duck()
cat  = Cat()

print(isinstance(duck, Quackable))  # True  — has .quack()
print(isinstance(cat,  Quackable))  # False — no .quack()

# Important: runtime_checkable only checks attribute presence,
# not return type or argument signatures. A method named quack
# that returns None would still pass the isinstance check.

This asymmetry between static and runtime checks is an important nuance. When you use @runtime_checkable, you are getting something closer to a duck typing check (does the method exist?) rather than a full structural typing check (does the method have the correct signature?). The static analysis layer is what provides the stronger guarantee.

The Two-Layer Picture

Think of it as two separate inspection stations at a border crossing. The runtime isinstance() check is the first station: it confirms you are carrying something labeled "passport." The static type checker is the second station: it opens the passport, verifies the photo, checks the expiry date, and confirms the visa page matches your destination. The first station can wave through a convincing forgery. The second station cannot. Using @runtime_checkable alone is like installing only the first station and calling it security.

Performance Warning: @runtime_checkable is Slow

The Python documentation explicitly warns that isinstance() checks against runtime-checkable Protocols are significantly slower than checks against non-protocol classes, making them unsuitable for performance-critical paths. In performance-sensitive code, prefer hasattr() calls for structural checks. In Python 3.12, the internal implementation was updated to use inspect.getattr_static() instead of hasattr(), which eliminates the risk of isinstance() accidentally triggering expensive properties or raising exceptions—and the Python 3.12 release notes document speed improvements of 2x to 20x for protocols with fewer members. However, the general performance caveat compared to nominal class checks remains, and protocols with fourteen or more members can still be slower on 3.12 than on 3.11.
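
The hot-path alternative the warning suggests can be sketched like this; both checks verify only that .quack() exists:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Quackable(Protocol):
    def quack(self) -> str: ...

class Duck:
    def quack(self) -> str:
        return "Quack!"

duck = Duck()

# Both checks test only for the presence of .quack(); hasattr skips the
# Protocol machinery entirely and is the cheaper option in hot loops.
print(isinstance(duck, Quackable))  # True (slower: walks the protocol members)
print(hasattr(duck, "quack"))       # True (faster: a single attribute lookup)
```
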

The False-Positive Trap: Attribute Type Mismatches

There is a subtle and practical pitfall with @runtime_checkable that trips up developers who assume it works like a thorough structural check. Because runtime checks only verify attribute presence—not attribute type—you can get false positives that are hard to debug.

import dataclasses
from typing import Protocol, runtime_checkable

@runtime_checkable
class HasIntScore(Protocol):
    score: int

@dataclasses.dataclass
class PlayerWithString:
    score: str  # Wrong type: str, not int

player = PlayerWithString(score="ninety-five")

# This returns True — even though score is a str, not an int.
# @runtime_checkable only checks that the attribute *exists*.
print(isinstance(player, HasIntScore))  # True (misleading!)

# A static type checker (mypy, Pyright) WOULD catch this mismatch.
# The runtime check cannot.

This false positive is not a bug—it is documented behavior. The Python documentation notes that isinstance() with protocols is not fully reliable at runtime because attribute types are not verified, and the mypy documentation is explicit that method signatures are not inspected during runtime protocol checks. The lesson: @runtime_checkable is appropriate for guard-rail checks on method presence, not for enforcing full structural contracts. If you need the full structural guarantee, you need the static analysis layer.

Can You Explicitly Inherit from a Protocol?

The article so far has emphasized that Protocol satisfaction is implicit—you never have to declare class Duck(Quackable). But what if you want to? Explicit inheritance from a Protocol is allowed and has a specific meaning: it signals to the type checker that your class is asserting it satisfies the Protocol, which causes the type checker to verify the implementation immediately rather than waiting until the class is used as an argument. This is useful as a deliberate self-check at definition time, especially when writing a class that is intended to satisfy a Protocol contract for distribution or documentation purposes.

from typing import Protocol

class Quackable(Protocol):
    def quack(self) -> str: ...
    def walk(self) -> str: ...

# Explicit inheritance: the class asserts that it satisfies Quackable,
# so type checkers verify the implementation up front. Runtime behavior
# is identical to implicit satisfaction.
class Duck(Quackable):
    def quack(self) -> str:
        return "Quack!"
    # Forgot walk(): the unimplemented Protocol method is treated as
    # abstract, so mypy/Pyright flag the problem as soon as Duck() is
    # instantiated, rather than only when Duck is passed to make_it_quack().

# NOTE: inheriting from a Protocol without also listing Protocol in the
# bases produces a regular class, not a new Protocol. To define a Protocol
# that extends Quackable, the subclass must inherit from Protocol as well.

In practice, implicit satisfaction is the more common pattern because it preserves decoupling between the implementing class and the Protocol definition. Explicit inheritance is a style choice that trades decoupling for earlier error detection at the class definition site. Neither is wrong.

Protocol Composition

Protocols support multiple inheritance, which lets you compose narrower interfaces into broader ones without duplicating method definitions. This is the Pythonic way to build layered interface hierarchies without class coupling.

from typing import Protocol

class Readable(Protocol):
    def read(self) -> str: ...

class Writable(Protocol):
    def write(self, data: str) -> None: ...

# Compose two Protocols into one broader interface.
# ReadWritable requires all methods from both.
class ReadWritable(Readable, Writable, Protocol):
    pass

class FileBuffer:
    def __init__(self) -> None:
        self._data: str = ""

    def read(self) -> str:
        return self._data

    def write(self, data: str) -> None:
        self._data = data

# FileBuffer satisfies ReadWritable implicitly — no inheritance required.
def process(rw: ReadWritable) -> str:
    rw.write("hello")
    return rw.read()

buf = FileBuffer()
print(process(buf))  # mypy: OK — FileBuffer has read() and write()

Protocol in the Inheritance List

When composing Protocols, you must include Protocol itself in the class definition (class ReadWritable(Readable, Writable, Protocol)). Omitting it makes ReadWritable a regular abstract class, not a Protocol, and breaks implicit satisfaction. This is a common trip-up when composing multiple Protocols for the first time.

To see the trap in action, study the composition below. ReadWriteClosable inherits from three Protocols but omits Protocol itself from its bases, so a type checker refuses to treat ReadWriteClosable as a Protocol: it becomes a regular class, and FileStream no longer satisfies it implicitly.

from typing import Protocol

class Readable(Protocol):
    def read(self) -> str: ...

class Writable(Protocol):
    def write(self, data: str) -> None: ...

class Closable(Protocol):
    def close(self) -> None: ...

class ReadWriteClosable(Readable, Writable, Closable):
    pass

class FileStream:
    def __init__(self) -> None:
        self._buf: str = ""

    def read(self) -> str:
        return self._buf
    def write(self, data: str) -> None:
        self._buf = data
    def close(self) -> None:
        self._buf = ""

def process(rw: ReadWriteClosable) -> None:
    rw.write("hello")
    rw.close()

Side-by-Side Comparison

The comparison below captures the essential differences across the dimensions that matter most when choosing between these two approaches in production Python code.

  • When compatibility is checked
    Duck Typing: at runtime, on each call
    Structural Typing (Protocol): at static analysis time (mypy, Pyright)
  • Inheritance from the interface required
    Duck Typing: no
    Structural Typing (Protocol): no — the Protocol defines the shape; classes satisfy it implicitly
  • Type annotations required
    Duck Typing: no
    Structural Typing (Protocol): yes, for the type checker to reason about it
  • When errors surface
    Duck Typing: when the code path executes
    Structural Typing (Protocol): before the code runs
  • IDE and tooling support
    Duck Typing: limited or none
    Structural Typing (Protocol): full, based on the Protocol definition
  • Signature verification
    Duck Typing: none — method name only
    Structural Typing (Protocol): full — return type, argument types, arity
  • Availability
    Duck Typing: always
    Structural Typing (Protocol): Python 3.8 (PEP 544)
  • Explicit interface declaration
    Duck Typing: no
    Structural Typing (Protocol): no
  • Runtime isinstance() support
    Duck Typing: not applicable
    Structural Typing (Protocol): only with @runtime_checkable
  • Best suited for
    Duck Typing: rapid development, scripts, small codebases
    Structural Typing (Protocol): libraries, APIs, large team codebases

Choosing Between Them

The mypy project, which is the de facto standard Python static type checker, is candid about its design choices regarding these two approaches. Its documentation acknowledges that structural subtyping amounts to static duck typing and notes that some consider it better suited to dynamically typed languages like Python. It continues to note that mypy primarily uses nominal subtyping and leaves structural subtyping mostly opt-in, offering several practical reasons: nominal type systems produce shorter error messages, Python's built-in isinstance tests are widely used and nominally oriented, and more programmers are familiar with nominal typing from Java and C#.

That observation from the mypy team encapsulates the practical trade-off. Structural typing via Protocols is more expressive and catches errors earlier, but it requires more upfront work and discipline. Duck typing is effortless and maximally flexible, but it defers all incompatibility errors to runtime.

When Duck Typing Is the Right Choice

Duck typing remains appropriate and even preferable in several real-world scenarios. Scripts, automation tools, and small utilities that will be read and maintained by one or two people rarely benefit from the overhead of Protocol definitions. Python's standard library itself uses duck typing extensively—the built-in for loop works with any object that implements __iter__, without any static Protocol annotation in the loop construct itself.
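
That iteration example is worth seeing concretely. Any object exposing __iter__ works in a for loop, with no annotation, Protocol, or ABC registration required (the Countdown class here is illustrative):

```python
# The for loop duck-types: it simply calls __iter__ at runtime.
class Countdown:
    def __init__(self, start: int) -> None:
        self.start = start

    def __iter__(self):
        return iter(range(self.start, 0, -1))

for n in Countdown(3):
    print(n)  # prints 3, then 2, then 1
```
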

Duck typing is also the right tool when you are working with objects you do not control. If you are calling into a third-party library whose classes you cannot annotate, duck typing lets you interact with them freely without wrapping every call in an adapter or requiring the library author to implement your Protocol.

Two less-discussed scenarios where duck typing is genuinely the better engineering choice:

Incremental annotation of legacy codebases. When you are introducing type hints into an existing, unannotated project, attempting to retrofit Protocols across the entire codebase at once is counterproductive. Duck typing effectively serves as the baseline: mypy treats unannotated functions as having implicit Any types, which means they remain compatible with almost anything while you incrementally add structure at the boundaries that matter most. Trying to Protocol-ify everything in one pass typically creates merge conflicts, stalls the migration, and produces Protocols that are too broad to be useful. The correct pattern is to let duck typing do its job internally while you harden the external surface progressively.

Test doubles and mock objects. In unit tests, creating a full class hierarchy or a formal Protocol just to inject a mock is often unnecessary overhead. Duck typing lets you pass a lightweight object—or even a types.SimpleNamespace with the right attributes—directly into the function under test without any registration, inheritance, or Protocol decoration. This is faster to write, immediately obvious to read, and does not pollute your production code with test-oriented type scaffolding. The rule of thumb: if the mock only lives in a test file and is used by a single test, duck typing is the right call.
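
A minimal sketch of the SimpleNamespace pattern, with illustrative names (fetch_title, the client's .get() shape):

```python
from types import SimpleNamespace

# Function under test: expects "something with a .get(url) method that
# returns a dict". No type annotation on client — plain duck typing.
def fetch_title(client) -> str:
    response = client.get("https://example.com")
    return response["title"]

# Duck-typed test double: no unittest.mock, no Protocol, no inheritance.
fake_client = SimpleNamespace(get=lambda url: {"title": "Hello"})
assert fetch_title(fake_client) == "Hello"
```
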

When Structural Typing (Protocol) Is the Right Choice

Structural typing with Protocols earns its place in any codebase where multiple developers are contributing, where an API is being published for external consumption, or where the cost of a runtime type error in production is high. Defining a Protocol for a function's parameter type documents the expected contract explicitly and gives every IDE and type checker in the toolchain something concrete to validate against. If you are new to type annotations in Python or concerned about how they interact with dynamic typing, that is worth understanding before adding Protocols to a codebase.

The mypy team offers a useful heuristic: use nominal classes where possible, and Protocols where necessary. A more precise way to apply this in practice: use a Protocol when you want to accept objects from multiple unrelated class hierarchies, when you are building a plugin or extension interface, or when you want to enforce method signature correctness rather than just method name presence.

Several deeper cases where Protocol is not just convenient but specifically load-bearing:

Callback and strategy patterns across module boundaries. When your core module accepts a callable or a strategy object from a caller it knows nothing about, duck typing gives that caller no indication of what signature is expected. A Protocol makes the contract explicit and gives both sides a tool to verify compliance independently. This is particularly valuable when the core module is maintained by one team and the strategy implementations come from other teams or from plugin authors outside the organization. The Protocol becomes the shared contract that neither side needs to coordinate on directly.
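
A sketch of that shared contract, with illustrative names (RetryPolicy, run_with_retries, RetryTwice):

```python
from typing import Protocol

# The shared contract: the core module defines the Protocol; strategy
# authors satisfy it implicitly, with no coordination between teams.
class RetryPolicy(Protocol):
    def should_retry(self, attempt: int, error: Exception) -> bool: ...

# Core module (team A): knows only the Protocol.
def run_with_retries(task, policy: RetryPolicy, max_attempts: int = 3):
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception as exc:
            if attempt == max_attempts or not policy.should_retry(attempt, exc):
                raise

# Strategy implementation (team B): satisfies RetryPolicy by shape alone.
class RetryTwice:
    def should_retry(self, attempt: int, error: Exception) -> bool:
        return attempt < 2

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 2:
        raise ValueError("transient")
    return "ok"

print(run_with_retries(flaky, RetryTwice()))  # prints "ok" on the 2nd attempt
```
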

Generic functions that must work across unrelated hierarchies. Consider a serialization function that should accept anything with a to_dict(self) -> dict method—database models, configuration objects, API response wrappers, domain entities from separate packages. A Protocol defines that single-method contract once. Any future type that satisfies the shape is automatically compatible, including types written after the Protocol was defined. An ABC would require every one of those types to be aware of and register with your serialization layer. A Protocol requires nothing from the implementing side.
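
A sketch of that single-method contract, with illustrative names (SupportsToDict, User, AppConfig):

```python
from typing import Protocol

class SupportsToDict(Protocol):
    def to_dict(self) -> dict: ...

# Two unrelated classes, imagined as coming from separate packages.
class User:
    def __init__(self, name: str) -> None:
        self.name = name

    def to_dict(self) -> dict:
        return {"name": self.name}

class AppConfig:
    def to_dict(self) -> dict:
        return {"debug": True}

def serialize(obj: SupportsToDict) -> dict:
    # Neither class knows this layer exists; the shape alone suffices.
    return obj.to_dict()

print(serialize(User("Ada")))  # {'name': 'Ada'}
print(serialize(AppConfig()))  # {'debug': True}
```
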

Overload resolution and narrowing in complex type hierarchies. When a function accepts a Union of several types and uses isinstance checks to branch on behavior, type checkers need concrete structural information to narrow the type correctly inside each branch. A Protocol paired with @runtime_checkable can serve as both the static narrowing signal and the runtime guard in a single construct, replacing an ad-hoc combination of duck typing, hasattr checks, and Union annotations that would otherwise lose type information after the branch.
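
A sketch of that dual-purpose construct, with illustrative names (Closable, cleanup, Conn):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Closable(Protocol):
    def close(self) -> None: ...

def cleanup(resource: object) -> str:
    # One construct does both jobs: the runtime guard, and the static
    # narrowing (inside the branch, checkers treat resource as Closable).
    if isinstance(resource, Closable):
        resource.close()
        return "closed"
    return "nothing to close"

class Conn:
    def close(self) -> None:
        pass

print(cleanup(Conn()))  # "closed"
print(cleanup(42))      # "nothing to close"
```
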

# Practical Protocol example: a plugin interface for a report system
from typing import Protocol

class ReportRenderer(Protocol):
    def render(self, data: dict) -> str: ...
    def supports_format(self, fmt: str) -> bool: ...

# Any class with these two methods satisfies ReportRenderer.
# Neither needs to inherit from it or know it exists.

class HTMLRenderer:
    def render(self, data: dict) -> str:
        return f"<html>{data}</html>"

    def supports_format(self, fmt: str) -> bool:
        return fmt.lower() == "html"

class MarkdownRenderer:
    def render(self, data: dict) -> str:
        return f"# Report\n{data}"

    def supports_format(self, fmt: str) -> bool:
        return fmt.lower() in ("md", "markdown")

def generate_report(renderer: ReportRenderer, data: dict) -> str:
    # Type checker verifies renderer has correct shape at analysis time.
    # No isinstance check needed. No ABC required.
    return renderer.render(data)

html_r     = HTMLRenderer()
markdown_r = MarkdownRenderer()

print(generate_report(html_r,     {"title": "Q1"}))
print(generate_report(markdown_r, {"title": "Q1"}))

This pattern is particularly powerful when HTMLRenderer and MarkdownRenderer come from different packages written by different authors. As long as both satisfy the structural shape of ReportRenderer, the type checker accepts them and the runtime succeeds—without any coordination between the authors.

mypy vs Pyright: Protocol Strictness Differs

The article has named mypy and Pyright together, but they are not equivalent in their Protocol enforcement. Pyright (the engine behind Pylance in VS Code) applies stricter rules by default: it flags Protocol mismatches more aggressively and reports errors at definition time in more edge cases than mypy does with its default settings. mypy's strictness can be increased with --strict, but out of the box, Pyright will catch certain structural incompatibilities that default mypy silently accepts. If your team relies on VS Code for day-to-day development, the type errors you see from Pylance may not exactly match what mypy reports in CI—this is expected behavior, not a misconfiguration.

Goose Typing: A Middle Ground Worth Knowing

Goose typing refers to the practice of using isinstance checks against Abstract Base Classes (ABCs) rather than against concrete classes. It occupies the space between pure duck typing and full structural typing: it allows runtime checks while still enabling flexibility through ABC registration and the __subclasshook__ mechanism.

The term was coined by Alex Martelli and first appeared in print in Luciano Ramalho's Fluent Python, 1st edition (O'Reilly, 2015), in a chapter section Ramalho dedicated to Martelli titled "Alex Martelli's Waterfowl." Ramalho publicly credited the coinage to Martelli in a January 28, 2015 post, describing the term as Martelli's coinage for the practice of virtual subclassing of Python ABCs. Martelli himself served as a technical reviewer for Fluent Python and is credited with the phrase in that context.

“isinstance(obj, cls) is now just fine”

Alex Martelli, as quoted in Luciano Ramalho, Fluent Python, 1st ed., O'Reilly, 2015 — with the condition that cls must be an Abstract Base Class, not a concrete class

Goose typing predates Protocols and remains relevant for legacy codebases and for situations where you need runtime isinstance semantics with more flexibility than a concrete class check provides. However, for new code where type checker support is a priority, typing.Protocol with @runtime_checkable is the more modern and expressive option.
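
Both flavors of goose typing can be sketched briefly; Bucket, Serializer, and PlainSerializer are illustrative names:

```python
import abc
from collections.abc import Sized

# Flavor 1: ABCs with a __subclasshook__. Bucket never inherits from or
# registers with Sized, yet isinstance passes because Sized's subclass
# hook looks for __len__.
class Bucket:
    def __len__(self) -> int:
        return 3

print(isinstance(Bucket(), Sized))  # True

# Flavor 2: explicit virtual subclassing via ABC.register().
class Serializer(abc.ABC):
    @abc.abstractmethod
    def dumps(self, obj) -> str: ...

class PlainSerializer:
    def dumps(self, obj) -> str:
        return str(obj)

Serializer.register(PlainSerializer)  # virtual subclass; no inheritance
print(isinstance(PlainSerializer(), Serializer))  # True
```
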

The Spectrum, Not a Binary

It helps to picture duck typing, goose typing, structural typing, and nominal typing not as four separate categories but as four points on a single spectrum. At one end: pure duck typing, zero guarantees, maximum flexibility. At the other end: nominal subtyping with explicit inheritance, maximum guarantees, minimum flexibility. Goose typing (ABC + isinstance) and structural typing (Protocol + mypy) sit in the middle. Understanding where your codebase needs to sit on that spectrum—and why—is the actual skill. The syntax is just the implementation.

Key Takeaways

  1. Duck typing is a runtime phenomenon. Python makes no upfront contract about what an object can do. It calls the method when the code runs, and raises an error only at that moment if the method is absent. No annotations, no inheritance, no tooling required—but also no early warning.
  2. Structural typing is a static analysis phenomenon. Through typing.Protocol, you declare the shape an object must have. Type checkers verify compatibility before execution. Classes satisfy Protocols implicitly by having the right structure, with no inheritance declaration required.
  3. PEP 544 (Python 3.8) is the foundation. Python's static type system supports structural subtyping only through this mechanism. If you need static duck typing, you need from typing import Protocol and a type checker like mypy or Pyright.
  4. The two are complementary, not competing. The same codebase can use duck typing in low-stakes utility functions and structural typing at public API boundaries. The mypy documentation itself recommends a pragmatic approach: nominal classes by default, Protocols where interface flexibility matters.
  5. Runtime Protocol checks are weaker than static ones. Applying @runtime_checkable only validates attribute presence, not signature correctness. The full structural guarantee only exists at the static analysis layer.
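
Takeaways 1 and 2 can be seen side by side in a few lines. This is a minimal sketch; the Quacker Protocol and Duck class are hypothetical names chosen for the illustration.

```python
from typing import Protocol

class Quacker(Protocol):
    def quack(self) -> str: ...

def duck_typed(thing):                  # unannotated: mypy treats it as Any
    return thing.quack()                # failure surfaces only at runtime

def structurally_typed(thing: Quacker) -> str:
    return thing.quack()                # mypy/Pyright verify callers up front

class Duck:                             # satisfies Quacker with no inheritance
    def quack(self) -> str:
        return "quack"

print(structurally_typed(Duck()))       # quack
try:
    duck_typed(object())                # no warning until this line executes
except AttributeError:
    print("AttributeError at runtime")
```

Running the script shows the duck-typed failure only at the moment of the call, while a type checker would flag a bad argument to structurally_typed before the program ever ran.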

Python's type system has expanded considerably since 2015. Understanding where duck typing ends and structural typing begins is not just academic—it directly shapes how you design APIs, how your IDE assists you, and how many bugs you catch before they reach a user. The two concepts share a philosophical root in behavioral compatibility, but they operate at fundamentally different checkpoints in your development workflow. Knowing which checkpoint you need is the difference between reaching for typing.Protocol and simply writing a clean, unannotated function that trusts its callers. For more Python tutorials covering the language's type system, data structures, and core concepts, explore the rest of PythonCodeCrack.

Pop Quiz

Your team is building a public Python library. Three different third-party classes from packages you do not control all happen to implement a render(data: dict) -> str method. You want your library's core function to accept any of them. Which approach gives you the best combination of type-checker support and decoupling from those third-party classes?

How to Use typing.Protocol for Structural Typing

The following steps walk through defining and wiring a Protocol from scratch. Each step maps directly to a concept explained earlier in this article. If you are adding Protocols to an existing codebase, you can apply these steps incrementally at one function boundary at a time.

  1. Import Protocol from the typing module. At the top of your file, write from typing import Protocol. This is available in Python 3.8 and later as part of PEP 544. For the Iterator type and other ABCs, import from collections.abc rather than typing (PEP 585 best practice for Python 3.9+).
  2. Define a Protocol class with the required method signatures. Create a class that inherits from Protocol and declare the methods your interface requires, using ellipsis (...) as the body. Example: class Quackable(Protocol): def quack(self) -> str: ...
  3. Annotate your function parameter with the Protocol type. Use the Protocol as the type annotation for any function parameter that should accept structurally compatible objects: def make_it_quack(thing: Quackable) -> str: return thing.quack()
  4. Implement the required methods in any class — no inheritance needed. Any class that defines the required methods with matching signatures automatically satisfies the Protocol. No inheritance declaration, ABC registration, or modification to the implementing class is required.
  5. Run a static type checker to verify structural compatibility. Run mypy or Pyright against your codebase. The type checker will flag any call site where an incompatible object is passed to a Protocol-annotated parameter, before the code ever executes.
  6. Add @runtime_checkable if you need isinstance() support. If you need to use isinstance(obj, MyProtocol) at runtime, decorate the Protocol with @runtime_checkable. Remember that runtime checks only verify attribute presence, not full method signatures. The static type checker provides the stronger guarantee.
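
The six steps above combine into the following sketch. Renderer, HtmlRenderer, BadRenderer, and show are hypothetical names chosen for this illustration.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable                          # step 6: enable isinstance() support
class Renderer(Protocol):                   # step 2: declare the required shape
    def render(self, data: dict) -> str: ...

def show(r: Renderer, data: dict) -> str:   # step 3: annotate with the Protocol
    return r.render(data)

class HtmlRenderer:                         # step 4: no inheritance needed
    def render(self, data: dict) -> str:
        return f"<p>{data['text']}</p>"

class BadRenderer:
    def render(self) -> int:                # wrong signature, right attribute name
        return 0

print(show(HtmlRenderer(), {"text": "hi"}))     # <p>hi</p>
# Step 5: mypy or Pyright would reject show(BadRenderer(), ...) statically,
# but the runtime isinstance() check only sees that .render exists:
print(isinstance(BadRenderer(), Renderer))      # True
```

Note how BadRenderer passes the runtime isinstance check despite its incompatible signature, which is exactly why the static type checker in step 5 provides the stronger guarantee.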

Frequently Asked Questions

  • What is the difference between duck typing and structural typing in Python? Duck typing checks whether an object has the required methods at runtime — the error only surfaces when the incompatible code path actually executes. Structural typing checks the same thing before your code runs, using a static type checker like mypy or Pyright against a typing.Protocol definition. Both approaches determine compatibility by what an object can do rather than what class it inherits from, but they operate at opposite ends of the static-versus-dynamic spectrum.

  • Is structural typing the same thing as static duck typing? Yes. PEP 544 names its own feature "static duck typing" in its title, and the parenthetical is not decorative — it is the specification's own acknowledgment that structural typing and static duck typing describe the same mechanism. In Python, both terms refer to the ability to verify interface compatibility at analysis time without requiring explicit inheritance, using typing.Protocol and a type checker like mypy or Pyright.

  • When should I reach for typing.Protocol instead of plain duck typing? Use typing.Protocol at public API boundaries, in larger codebases where multiple developers work on interconnected modules, and anywhere a type mismatch caught before runtime is worth more than the overhead of writing a Protocol definition. Plain duck typing is appropriate for small utility functions, scripts, and internal helper code where Protocol annotations are unnecessary. The two approaches are complementary — the same codebase can use both.

  • When was typing.Protocol added to Python? typing.Protocol was introduced in Python 3.8 via PEP 544, authored by Ivan Levkivskyi, Jukka Lehtosalo, and Lukasz Langa. Before Python 3.8, achieving interface-like behavior required either explicit ABC inheritance or plain duck typing with no static tooling support.

  • Does @runtime_checkable give the same guarantees as a static type checker? No. The @runtime_checkable decorator allows isinstance() checks against a Protocol at runtime, but those checks only verify that the required attributes exist — they do not validate method signatures or attribute types. A static type checker verifies full structural compatibility including return types and argument types. An object can pass an isinstance() check against a @runtime_checkable Protocol while still being rejected by mypy or Pyright.

  • Can a class satisfy a Protocol without inheriting from it? Yes. That is the defining characteristic of structural typing in Python. A class satisfies a Protocol implicitly simply by implementing the required methods and attributes with matching signatures. No inheritance declaration, no ABC registration, and no modification to the implementing class are required. This keeps classes decoupled from the Protocol definition.

  • What is goose typing? Goose typing is a term coined by Alex Martelli, first published in Luciano Ramalho's Fluent Python (O'Reilly, 2015). It refers to using isinstance() checks against Abstract Base Classes (ABCs) rather than concrete classes, allowing runtime flexibility through ABC registration and the __subclasshook__ mechanism. Goose typing sits between pure duck typing and full structural typing — it predates typing.Protocol and remains relevant for legacy codebases, but for new code where type checker support matters, typing.Protocol with @runtime_checkable is the more modern option.

  • How does a Protocol differ from an Abstract Base Class? The core difference is how compatibility is declared. With an Abstract Base Class, a class must explicitly inherit from the ABC (or be registered via register()) before it is considered compatible. That is nominal subtyping: the class's name in a hierarchy determines acceptance. With a typing.Protocol, a class needs no declaration at all — if its methods match the Protocol's shape, a type checker treats it as compatible. That is structural subtyping. ABCs were also designed to be mixed with runtime isinstance checks from the start; Protocols support isinstance only when decorated with @runtime_checkable, and even then the check is weaker. For new code, Protocols are the preferred choice when you want flexibility without coupling; ABCs remain appropriate when you want explicit opt-in declarations and shared implementation through @abstractmethod.

  • Can duck typing and structural typing coexist in one codebase? Yes, and this is the recommended approach for many real codebases. Duck typing and structural typing are not mutually exclusive. A common pattern is to use plain, unannotated duck typing in private helper functions and internal module code where the callers are few and well-understood, and to use typing.Protocol at the boundaries that other modules or external consumers interact with. Type checkers like mypy treat unannotated functions as having implicit Any types, which means they are compatible with almost anything — this is the tool's way of deferring to duck typing for code you have not yet annotated. You can introduce Protocols incrementally, starting at the most exposed interfaces, without needing to annotate every function in the codebase at once.
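
As a closing illustration of the nominal-versus-structural contrast discussed above, here is a minimal sketch. NominalRenderer, StructuralRenderer, Plain, and render_with are hypothetical names invented for this example.

```python
from abc import ABC, abstractmethod
from typing import Protocol

class NominalRenderer(ABC):              # nominal: acceptance by declared lineage
    @abstractmethod
    def render(self, data: dict) -> str: ...

class StructuralRenderer(Protocol):      # structural: acceptance by shape
    def render(self, data: dict) -> str: ...

class Plain:                             # no base classes at all
    def render(self, data: dict) -> str:
        return str(data)

print(isinstance(Plain(), NominalRenderer))   # False: no inheritance, no register()
NominalRenderer.register(Plain)               # explicit opt-in via registration
print(isinstance(Plain(), NominalRenderer))   # True only after registration

def render_with(r: StructuralRenderer) -> str:
    # A type checker accepts Plain here with no declaration of any kind,
    # because its render() matches the Protocol's shape.
    return r.render({"k": 1})

print(render_with(Plain()))              # {'k': 1}
```

The ABC demands an explicit opt-in before isinstance succeeds; the Protocol demands nothing of Plain beyond having the right method.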

Sources and Further Reading

PEP 544 — Protocols: Structural subtyping (static duck typing): peps.python.org/pep-0544
Python typing module documentation: docs.python.org/3/library/typing.html
mypy FAQ — Structural subtyping: mypy.readthedocs.io/en/latest/faq.html
Python typing reference — Protocols: typing.python.org/en/latest/reference/protocols.html
Ramalho, Luciano. Fluent Python, 1st ed. O'Reilly Media, 2015; 2nd ed., 2022.
Martelli, Alex. "polymorphism (was Re: Type checking in python?)." comp.lang.python, Google Groups, July 26, 2000.
Devopedia. "Duck Typing." Version 9, June 10, 2024. devopedia.org/duck-typing