Learn How to Write Fewer Conditional Statements in Python: Absolute Beginners Tutorial

Final Exam & Certification

Complete this tutorial and pass the 10-question final exam to earn a downloadable certificate of completion.


When your Python code starts growing a long chain of if, elif, and else branches, something has usually gone wrong at the design level. This tutorial teaches a different way to think about that problem — one that uses dictionaries, functions, and objects instead of stacking conditions.

What's in this Python Tutorial

A beginner's first instinct when handling multiple conditions is to reach for if-elif-else. That instinct is not wrong for small problems. The issue appears when the chain keeps growing. Every new case requires opening the same block of code and adding another branch. Tests become harder to write, bugs hide in the middle of long chains, and adding new behavior always means touching existing code. Across Python tutorials at every level, this growing chain pattern comes up as one of the first signs that a refactor is overdue. This tutorial introduces two core techniques for avoiding that trap.

The Problem with Growing if-else Chains

Consider a simple scenario: a program that applies a discount based on a customer type. Here is what that commonly looks like when written with conditionals:

python
def apply_discount(price, customer_type):
    if customer_type == "student":
        return price * 0.80
    elif customer_type == "senior":
        return price * 0.75
    elif customer_type == "employee":
        return price * 0.60
    elif customer_type == "vip":
        return price * 0.50
    else:
        return price

This works. But now imagine the business adds a "military" type, then a "loyalty" type, then a "press" type. Each addition requires re-opening this function and inserting a new branch. The function grows longer and more complex over time, and every change is a risk — editing one branch could accidentally affect another.

Warning

When a function needs to change every time you add a new case, that function is fragile. The design principle at stake is the Open/Closed Principle (OCP): software entities should be open for extension but closed for modification. Coined by Bertrand Meyer in his 1988 book Object-Oriented Software Construction and later popularized as part of Robert C. Martin's SOLID principles, the OCP is exactly what long if-else chains tend to violate.

The deeper issue is that the data (which discount applies to which customer type) and the logic (how to apply it) are tangled together inside the conditional. Separating them is the key move.

code builder click a token to place it

Build the correct Python expression to look up a value in a dictionary and call whatever function is stored there, passing price as the argument:

Tokens: handler · = · dispatch.get · ( · customer_type · , · default_handler · ) · handler · ( · price · ) — distractors: dispatch[customer_type] · return
Solution: handler = dispatch.get(customer_type, default_handler), then handler(price)
Why: First retrieve the function stored at the key using dispatch.get(key, default), assign it to a variable, then call that variable with the argument. Using .get() with a default prevents a KeyError when the key is not found. The distractor dispatch[customer_type] would raise an error for unknown types, and return is not part of this lookup expression.

Dictionaries as Dispatch Tables

A dispatch table is a dictionary where each key maps to a callable — a function or method. Instead of a chain of elif branches deciding which function to run, you look up the key and call whatever is stored there. The dictionary holds the decision; your code just retrieves and executes it.

Here is the discount example rewritten using a dispatch table:

python
def student_discount(price):   return price * 0.80
def senior_discount(price):    return price * 0.75
def employee_discount(price):  return price * 0.60
def vip_discount(price):       return price * 0.50
def no_discount(price):        return price

DISCOUNTS = {
    "student":  student_discount,
    "senior":   senior_discount,
    "employee": employee_discount,
    "vip":      vip_discount,
}

def apply_discount(price, customer_type):
    handler = DISCOUNTS.get(customer_type, no_discount)
    return handler(price)

Notice that apply_discount is now just two lines. Adding a new customer type means adding one entry to the DISCOUNTS dictionary and one new function. The existing apply_discount function never needs to change. That is the Open/Closed Principle in practice.

Mental model: routing vs. deciding

The key shift happening here is not about syntax — it is about responsibility. In the if-else version, apply_discount both decides which behavior to run and runs it. In the dispatch table version, apply_discount only routes. The decision has been pushed into the data. Once you internalise this split, you will start seeing routing-vs-deciding as a design question in every function you write.

before / after drag the divider to compare
if-else chain
def apply_discount(price, customer_type):
    if customer_type == "student":
        return price * 0.80
    elif customer_type == "senior":
        return price * 0.75
    elif customer_type == "employee":
        return price * 0.60
    elif customer_type == "vip":
        return price * 0.50
    else:
        return price

# Adding "military" means editing this function
dispatch table
def student_discount(p):  return p * 0.80
def senior_discount(p):   return p * 0.75
def employee_discount(p): return p * 0.60
def vip_discount(p):      return p * 0.50
def no_discount(p):       return p

DISCOUNTS = {
    "student":  student_discount,
    "senior":   senior_discount,
    "employee": employee_discount,
    "vip":      vip_discount,
}

def apply_discount(price, customer_type):
    handler = DISCOUNTS.get(customer_type, no_discount)
    return handler(price)

# Adding "military": one new function + one dict entry
code trace step through the execution

Watch exactly what Python does when apply_discount(100, "senior") runs. Click next step to advance.

Step 1: DISCOUNTS = {"student": student_discount, "senior": senior_discount, ...}
Step 2: apply_discount(100, "senior") ← call begins
Step 3: handler = DISCOUNTS.get("senior", no_discount) ← hash("senior") → bucket → senior_discount
Step 4: handler ← is now the function object senior_discount (not its return value)
Step 5: return handler(100) ← calls senior_discount(100)
Step 6: return 100 * 0.75 ← evaluates to 75.0
Step 7: 75.0 ← final result returned to caller

The dispatch table is built once at module load time. Each value is a reference to a function object — nothing has been called yet.
Pro Tip

The dictionary stores references to functions, not the result of calling them. Notice there are no parentheses after the function names inside the dictionary — writing student_discount stores the function itself, while student_discount(price) would call it immediately and store the return value.

You can also use lambda expressions for very short operations, or even store methods from class instances. The key insight is that functions in Python are objects. They can be stored in dictionaries, passed as arguments, and returned from other functions.
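To make that concrete, here is a small sketch — the Logger class and the ACTIONS table are illustrative names, not part of the tutorial's running example — showing a lambda and a bound method from a class instance stored side by side in one dispatch table:

```python
class Logger:
    def __init__(self, prefix):
        self.prefix = prefix

    def info(self, msg):
        return f"{self.prefix}: {msg}"

log = Logger("INFO")

ACTIONS = {
    "shout":   lambda s: s.upper(),  # lambda for a one-line operation
    "whisper": lambda s: s.lower(),
    "log":     log.info,             # bound method stored like any function
}

print(ACTIONS["shout"]("hello"))  # HELLO
print(ACTIONS["log"]("ready"))    # INFO: ready
```

A bound method carries its instance with it, so log.info in the table remembers which Logger it belongs to — no extra argument needed at call time.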

Going Further: functools.singledispatch

Python's standard library includes a tool that formalises dispatch at the type level: functools.singledispatch. Rather than keying on a string, it keys on the type of the first argument. You register handler functions per type and Python routes calls automatically. Dispatch tables of this kind appear inside CPython itself — the pprint module, for example, routes formatting of different container types through a dispatch table rather than a chain of isinstance checks in the calling code.

python
from functools import singledispatch

@singledispatch
def describe(value):
    return f"unknown type: {type(value).__name__}"

# Python 3.7+: annotate the parameter instead of passing the type to register.
# This is now the idiomatic style — the type is read from the annotation.
@describe.register
def _(value: int):
    return f"integer: {value}"

@describe.register
def _(value: str):
    return f"string of length {len(value)}"

@describe.register
def _(value: list):
    return f"list with {len(value)} items"

print(describe(42))          # integer: 42
print(describe("hello"))     # string of length 5
print(describe([1, 2, 3]))   # list with 3 items
print(describe(3.14))        # unknown type: float

# Subclasses are handled automatically via MRO — bool dispatches to int:
print(describe(True))        # integer: True

The @singledispatch decorator registers the base (fallback) implementation on the function itself. Since Python 3.7, the preferred way to register handlers is to annotate the parameter with its type and use @describe.register with no argument — Python reads the type from the annotation. This avoids repeating the type twice and keeps the registration declaration self-documenting. Adding support for a new type means writing a new registered function — the calling code never changes. This is the same Open/Closed Principle applied at the type level rather than at the string-key level. Python 3.8+ also provides functools.singledispatchmethod for use inside classes.
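For use inside a class, functools.singledispatchmethod (Python 3.8+) dispatches on the type of the first argument after self. A minimal sketch — the Formatter class and its handlers are made-up examples, not stdlib API:

```python
from functools import singledispatchmethod

class Formatter:
    @singledispatchmethod
    def format(self, value):
        # Base fallback for unregistered types
        return f"<{value!r}>"

    @format.register
    def _(self, value: int):
        return f"int({value})"

    @format.register
    def _(self, value: str):
        return value.title()

f = Formatter()
print(f.format(7))        # int(7)
print(f.format("hello"))  # Hello
print(f.format(2.5))      # <2.5>
```

Registration works exactly as with singledispatch — annotate the parameter and Python reads the type — but dispatch skips over self.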

Pro Tip

Using _ as the name for every registered handler is idiomatic Python — only the type annotation determines which function runs, not the name. You can also stack multiple type decorators on a single handler to cover several types at once: @describe.register on a function annotated value: int, then another @describe.register for value: float, or in Python 3.11+ use a union type directly: @fn.register(int | float).
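A minimal sketch of the stacking technique — the describe function mirrors the earlier example, and this stacked-decorator form works on Python 3.7+ without needing union types:

```python
from functools import singledispatch

@singledispatch
def describe(value):
    return f"unknown: {type(value).__name__}"

# Stacking register() calls routes several types to one handler:
@describe.register(int)
@describe.register(float)
def _(value):
    return f"number: {value}"

print(describe(3))     # number: 3
print(describe(2.5))   # number: 2.5
print(describe("hi"))  # unknown: str
```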

The __missing__ Hook: Dispatch With a Computed Fallback

A lesser-known dispatch technique involves subclassing dict and overriding its __missing__ method. Normally, accessing a missing key raises a KeyError. When you define __missing__, Python calls it instead, handing you the key so you can decide dynamically what to return. This is how collections.defaultdict works internally, but the approach generalises to any computed fallback.

python
class FallbackDispatch(dict):
    def __missing__(self, key):
        print(f"No handler for '{key}' - using generic fallback")
        return self.setdefault(key, lambda x: f"generic: {x}")

handlers = FallbackDispatch({
    "student":  lambda p: p * 0.80,
    "employee": lambda p: p * 0.60,
})

print(handlers["student"](100))   # 80.0
handler = handlers["vip"]         # __missing__ fires, stores a fallback
print(handler(100))               # generic: 100
print(handlers["vip"](200))       # generic: 200 (cached now)

This pattern is useful when the set of valid keys is not fully known at startup, or when generating handlers programmatically. The key distinction from defaultdict is that __missing__ lets you inspect the key itself before deciding what to return, rather than calling a fixed factory function blindly.

Critical: __missing__ is not called by .get() or the in operator

__missing__ is only triggered by bracket access — d[key] — which calls __getitem__ at the Python level. It is not triggered by d.get(key) or key in d. The reason is that dict.get() is implemented directly in C (dict_get() in Objects/dictobject.c) and bypasses __getitem__ entirely. This is a counterintuitive but intentional CPython implementation detail: dict.get() goes straight to the hash table lookup and returns None (or your default) for missing keys without ever calling your __missing__ method. The in operator goes through __contains__, which similarly bypasses __missing__. If you subclass collections.UserDict instead of dict, the behavior is different: UserDict.get() is implemented in pure Python and calls self[key] internally, so __missing__ does fire. This inconsistency across base classes is a genuine sharp edge when building dispatch tables that rely on fallback computation.
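A quick demonstration of this sharp edge — the class names here are illustrative:

```python
from collections import UserDict

class DictDispatch(dict):
    def __missing__(self, key):
        return f"computed:{key}"

class UserDictDispatch(UserDict):
    def __missing__(self, key):
        return f"computed:{key}"

d = DictDispatch()
print(d["x"])       # computed:x — bracket access goes through __getitem__
print(d.get("x"))   # None — dict.get bypasses __missing__ entirely
print("x" in d)     # False — __contains__ also bypasses it

ud = UserDictDispatch()
print(ud.get("x"))  # computed:x — UserDict.get calls self[key] internally
```

If your dispatch code relies on __missing__, either use bracket access consistently or base the class on UserDict.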

Here is a comparison of both approaches side by side:

if-else chain
  Adding a new case: edit the existing function and insert a new elif branch
  Lookup time: linear — each condition is checked in order from top to bottom
  Best suited for: two or three conditions that are unlikely to grow; complex conditions involving ranges or multiple variables

Dispatch table
  Adding a new case: add a new dictionary entry and a new function — no existing code changes
  Lookup time: constant — a hash table lookup takes the same time regardless of how many entries exist
  Best suited for: a known set of discrete keys that maps cleanly to distinct behaviors; any situation where the list of cases is expected to grow
spot the bug click the line that contains the bug

The function below attempts to use a dispatch table to greet a user in different languages. One line contains a bug that causes the wrong result to always be returned. Click the line you think is wrong, then hit check.

1 def greet_english(): return "Hello"
2 def greet_spanish(): return "Hola"
3 def greet_default(): return "Hi"
4 GREETINGS = {"en": greet_english, "es": greet_spanish}
5 handler = GREETINGS.get(lang, greet_default())
6 return handler()
The fix: Change greet_default() to greet_default on line 5. Writing greet_default() calls the function immediately and stores its return value ("Hi") as the default. When the key is missing, the code then tries to call "Hi"(), which raises a TypeError. The dictionary default must be a function reference, not the result of calling the function.

Polymorphism: Letting Objects Decide

Mental model: moving the conditional into the object

A dispatch table moves the conditional into data — a dictionary. Polymorphism takes that one step further and moves it into the objects themselves. In both cases the calling code ends up condition-free, but the mechanism is different: dictionary lookup versus method dispatch through the object's type. Recognising which mechanism fits a given problem is the core skill this section builds.

Dispatch tables are a great tool when you have a single variable that selects a behavior. But sometimes the logic is more entangled with what kind of thing you are working with. That is where polymorphism becomes the cleaner tool.

Polymorphism means that different objects respond to the same method call in their own way. You do not need to know which type you have — you just call the method and the object handles the details. The classic example in Python is a shape hierarchy:

python
# Without polymorphism — every new shape requires editing this function
def calculate_area(shape):
    if shape["type"] == "circle":
        return 3.14159 * shape["radius"] ** 2
    elif shape["type"] == "rectangle":
        return shape["width"] * shape["height"]
    elif shape["type"] == "triangle":
        return 0.5 * shape["base"] * shape["height"]
python
# With polymorphism — each class owns its own area logic
import math

class Circle:
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return math.pi * self.radius ** 2

class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height
    def area(self):
        return self.width * self.height

class Triangle:
    def __init__(self, base, height):
        self.base = base
        self.height = height
    def area(self):
        return 0.5 * self.base * self.height

# The calling code never needs to know which shape it has
shapes = [Circle(5), Rectangle(4, 6), Triangle(3, 8)]
for shape in shapes:
    print(shape.area())

The calling code — the loop at the bottom — has no conditionals at all. Adding a Pentagon class means writing a new class with its own area() method. The loop stays exactly the same.

spot the bug click the line that contains the bug

This code tries to use polymorphism to print the area of each shape. One line breaks the pattern and reintroduces a conditional. Find it.

1 class Circle:
2 def area(self): return 3.14159 * self.radius ** 2
3 class Rectangle:
4 def area(self): return self.width * self.height
5 class Triangle:
6 def area(self): return 0.5 * self.base * self.height
7 shapes = [Circle(), Rectangle(), Triangle()]
8 for shape in shapes:
9 if isinstance(shape, Circle): print(shape.area())
10 else: print(shape.area())
The fix: Replace lines 9 and 10 with the single line print(shape.area()). The isinstance check is redundant — both branches call shape.area() identically, so the conditional does nothing. The whole point of polymorphism is that the same call works for every type. Adding an isinstance check here is the exact pattern polymorphism is designed to eliminate.
Kent Beck's guiding sequence for software quality — commonly quoted as "make it work, make it right, make it fast" — moves from making code functional, to making it structurally sound, to making it performant. (Beck's documented phrasing uses "run" rather than "work" for the first step; both variants appear in attributions.)

That sequence applies directly here. Getting rid of conditionals is part of making code right — structuring it so that growth is safe and changes are local. The polymorphic version works because each class takes responsibility for its own behavior rather than delegating that responsibility to a central conditional block.

Note

You do not always need a formal base class or inheritance to use polymorphism in Python. Python uses duck typing — if two objects both have an area() method, they can be used interchangeably in code that calls .area(), regardless of whether they share a parent class. This is sometimes called structural polymorphism.
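A minimal duck-typing sketch — Square and Floor are made-up classes for illustration. They share no base class, only a method name, and the calling code neither knows nor cares:

```python
class Square:
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

class Floor:  # unrelated to Square — no shared parent
    def __init__(self, width, length):
        self.width = width
        self.length = length

    def area(self):
        return self.width * self.length

def total_area(things):
    # Works for ANY object with an area() method — that is duck typing
    return sum(t.area() for t in things)

print(total_area([Square(3), Floor(2, 5)]))  # 19
```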

"Special cases aren't special enough to break the rules." — Tim Peters, The Zen of Python (PEP 20, 2004)

This principle sits at the core of why polymorphism works so well in Python. When each shape class implements area() in its own way, there are no special cases — the calling code treats every object identically, and the language's dispatch mechanism handles the rest. The rule holds for every type, with no exceptions carved out in the calling code.

Enforcing the Interface: Abstract Base Classes

Python's duck typing means you never have to declare an interface. But on larger teams, or in any codebase where you want Python to catch missing method implementations up front rather than deep at runtime, abc.ABC and @abstractmethod give you that guarantee. Any class that inherits from an abstract base class but does not implement every abstract method will raise a TypeError the moment you try to instantiate it — before a single line of application logic runs. (The class definition itself still succeeds; enforcement happens at instantiation.)

python
from abc import ABC, abstractmethod
import math

class Shape(ABC):
    @abstractmethod
    def area(self) -> float:
        ...

    @abstractmethod
    def perimeter(self) -> float:
        ...

class Circle(Shape):
    def __init__(self, radius: float):
        self.radius = radius

    def area(self) -> float:
        return math.pi * self.radius ** 2

    def perimeter(self) -> float:
        return 2 * math.pi * self.radius

# Defining this class succeeds; instantiating it is what raises the
# error — perimeter() is not implemented
class BrokenShape(Shape):
    def area(self):
        return 0

BrokenShape()
# TypeError: Can't instantiate abstract class BrokenShape
# without an implementation for abstract method 'perimeter'
# (exact message wording varies by Python version)

The important point is that ABC does not slow down your code or change how Python resolves method calls at runtime. It is purely a development-time contract. When you call shape.area() on any Shape subclass, Python dispatches to that class's own implementation — the same polymorphic dispatch described above — but now you have a guarantee that the method exists, enforced at instantiation time rather than discovered deep in production code.

Note

ABCs are not required for polymorphism to work in Python — the duck-typed Circle, Rectangle, and Triangle classes above work fine without them. ABCs are a tool for when you want the language to enforce a contract explicitly, rather than relying on convention and documentation alone.

Deeper Details Worth Knowing

The following material goes beyond what you will find in introductory treatments. Each item addresses a question that comes up in real codebases but rarely appears in beginner resources.

Inspecting a singledispatch registry at runtime

functools.singledispatch exposes its internal type registry through two methods that are easy to miss. Calling describe.dispatch(int) returns the exact registered handler that would be called for an int argument, without actually calling it. Calling describe.registry gives you the full dictionary of type-to-function mappings. Both are useful in tests and in debugging tools that need to inspect dispatch behavior without triggering side effects.

python
from functools import singledispatch

@singledispatch
def describe(value):
    return f"unknown: {type(value).__name__}"

@describe.register
def _(value: int):
    return f"integer: {value}"

@describe.register
def _(value: str):
    return f"string: {value}"

# dispatch(Type) returns the handler that WOULD be called — without calling it.
# Useful in tests and debugging tools.
print(describe.dispatch(int))    # <function _ at 0x...>  — the registered int handler
print(describe.dispatch(float))  # <function describe at 0x...>  — falls back to base (object)
print(describe.dispatch(bool))   # <function _ at 0x...>  — MRO: bool → int, picks int handler

# The full registry: every registered type → its handler function.
# Note: the base fallback is stored under `object`, not under `describe`.
for typ, fn in describe.registry.items():
    print(typ.__name__, "->", fn.__name__)
# object -> describe
# int    -> _
# str    -> _

The dispatch() method follows the method resolution order (MRO) when looking up a type that has no direct registration. If you register a handler for list but call describe with a UserList, Python walks the MRO of UserList until it finds the nearest registered ancestor. This means subclass behavior is automatic — bool dispatches to int, for example, because bool is a subclass of int. Registering a handler for a very general type like object will shadow the base fallback and catch every type that has no more specific registration. One subtlety: describe.dispatch(float) does not return the describe wrapper object — it returns the original unwrapped function that was decorated, which happens to have __name__ == 'describe'. The two are not the same object; the wrapper is describe itself, while the registry stores the inner function under object.

Expert Detail: the singledispatch dispatch cache

Every singledispatch function maintains an internal dispatch_cache stored as a weakref.WeakKeyDictionary keyed on type objects. The first time a given type is dispatched, CPython runs the MRO walk via _find_impl() and stores the result in the cache. Every subsequent call for the same type is a direct cache hit — the MRO walk does not run again. The catch: every call to .register() on any ABC anywhere in the process increments a global ABCMeta._abc_invalidation_counter. On the next dispatch call, singledispatch detects that this counter has advanced and wipes the entire dispatch_cache completely before re-running the lookup. Registering many ABC handlers at module load time is safe; registering them in a hot loop at runtime forces repeated full-cache invalidations and re-walks, which is measurably slower. Register handlers at import time, not dynamically during request handling.
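You can observe the invalidation token directly via abc.get_cache_token — the Serializer and PlainPayload names below are hypothetical, used only for the demonstration:

```python
from abc import ABC, get_cache_token

class Serializer(ABC):  # hypothetical ABC for demonstration
    pass

class PlainPayload:     # unrelated concrete class
    pass

before = get_cache_token()
Serializer.register(PlainPayload)  # any ABC registration, anywhere...
after = get_cache_token()

# ...advances the process-wide token, which is what tells every
# singledispatch function to rebuild its cache on the next call:
print(before != after)  # True
```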

Expert Detail: ambiguous dispatch raises RuntimeError

When a type is registered as a virtual subclass of two different ABCs — for example, both collections.abc.Iterable and collections.abc.Container — and both ABCs have registered handlers on the same singledispatch function, CPython cannot determine which handler is more specific. In this situation, singledispatch does not silently pick one; it raises RuntimeError: Ambiguous dispatch, naming both competing types. PEP 443 calls this out as an explicit design choice, and the CPython implementation's own source comment describes it as refusing "the temptation to guess." The fix is to register a handler for the concrete type directly, which always wins over ABC-based registrations.
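A sketch that reproduces the ambiguity and the fix — the classify function is a made-up example:

```python
from functools import singledispatch
from collections.abc import Container, Iterable

@singledispatch
def classify(x):
    return "object"

@classify.register(Iterable)
def _(x):
    return "iterable"

@classify.register(Container)
def _(x):
    return "container"

# A list is a (virtual) subclass of BOTH ABCs, and neither ABC is more
# specific than the other — so singledispatch refuses to pick one:
try:
    classify([1, 2])
except RuntimeError as exc:
    print(exc)  # Ambiguous dispatch, naming Iterable and Container

# Registering the concrete type resolves the conflict — exact types win:
@classify.register(list)
def _(x):
    return "list"

print(classify([1, 2]))  # list
print(classify(5))       # object — int matches neither ABC
```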

Pro Tip

You can register handlers after the function is defined by calling describe.register(SomeType, some_function) as a plain method call rather than using it as a decorator. This is useful when the handler lives in a different module from the dispatch function, or when the type you want to register for is defined at runtime.

How dictionary hash randomisation affects dispatch tables

Python dictionaries are O(1) average-case because of hash tables — but the word "average" matters. Since Python 3.3, CPython randomises the hash seed for strings and bytes at interpreter startup using PYTHONHASHSEED. This means two different Python processes will hash the same string to different internal bucket positions. In practice, collisions are so rare that the constant-time claim holds for every dispatch table you are likely to write. The important implication is for security: hash randomisation prevents an attacker from crafting keys that all collide into the same bucket, which would degrade lookup time to O(n) and create a denial-of-service vector. Before Python 3.3, this attack was possible against web frameworks that used user-supplied dictionary keys.

If you need deterministic hashing across processes — for example when serialising a dispatch table's structure for a cache — set PYTHONHASHSEED to a fixed integer in your environment. For most application code, leave it at its default of random.
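A small sketch of the cross-process behaviour, assuming an environment where subprocess can launch sys.executable — the helper name hash_in_subprocess is ours:

```python
import os
import subprocess
import sys

code = 'print(hash("dispatch"))'

def hash_in_subprocess(extra_env):
    # Run a fresh interpreter and capture the hash it computes
    env = {**os.environ, **extra_env}
    result = subprocess.run([sys.executable, "-c", code],
                            capture_output=True, text=True, env=env)
    return result.stdout.strip()

# With a fixed seed, two separate interpreter processes agree:
a = hash_in_subprocess({"PYTHONHASHSEED": "0"})
b = hash_in_subprocess({"PYTHONHASHSEED": "0"})
print(a == b)  # True

# With randomisation (the default), they usually differ:
c = hash_in_subprocess({"PYTHONHASHSEED": "random"})
d = hash_in_subprocess({"PYTHONHASHSEED": "random"})
print(c, d)    # two (almost always different) integers
```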

Expert Detail: how the compact dict layout works internally

Before Python 3.6, CPython's dictionary stored each entry — hash code, key pointer, value pointer — as a single row in one sparse array, with at least one-third of rows left empty to keep collision rates low. A 64-bit Python dict with four entries occupied 192 bytes. CPython 3.6 adopted the "compact dict" layout — proposed by Raymond Hettinger in 2012 and first shipped by PyPy in 2015: the single sparse array was split into two — a dense dk_entries array that stores entries in insertion order, and a much smaller separate dk_indices sparse array that maps hash bucket positions to offsets in dk_entries. The index array uses the smallest integer type that fits the number of entries: signed bytes for dicts with up to 128 entries, then 16-bit, 32-bit, and 64-bit integers for larger ones. The same four-entry dict now takes 104 bytes — a reduction of roughly 46%. Insertion order preservation was a direct side effect of the dense entries array, not a separate feature added for that purpose. The ordering behaviour was a CPython implementation detail in 3.6 and became a language-level guarantee in Python 3.7, meaning all conforming Python implementations must now preserve it.
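The most visible consequence of the dense entries array can be checked in a few lines — insertion order survives, and a deleted-then-re-added key moves to the end:

```python
d = {}
for word in ["zebra", "apple", "mango", "kiwi"]:
    d[word] = len(word)

# Insertion order, not hash order — a language guarantee since 3.7:
print(list(d))  # ['zebra', 'apple', 'mango', 'kiwi']

d.pop("apple")
d["apple"] = 5  # deleting and re-inserting moves the key to the end
print(list(d))  # ['zebra', 'mango', 'kiwi', 'apple']
```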

Virtual subclasses and __subclasshook__

ABCs have a second mechanism for polymorphism that is rarely covered in beginner material: virtual subclasses. Using Shape.register(SomeClass), you can declare that SomeClass counts as a subclass of Shape for the purposes of isinstance() and issubclass() checks — even if SomeClass does not inherit from Shape and does not implement its abstract methods. This is how the standard library makes built-in types like list and tuple appear to implement collections.abc.Sequence without actually inheriting from it.
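A minimal sketch of explicit registration — Shape and LegacyRect are illustrative names:

```python
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self): ...

class LegacyRect:  # does NOT inherit from Shape
    def area(self):
        return 12

Shape.register(LegacyRect)  # declare LegacyRect a virtual subclass

print(isinstance(LegacyRect(), Shape))  # True
print(issubclass(LegacyRect, Shape))    # True

# Caveat: register() performs no checking — a class with no area() method
# at all would also pass isinstance() once registered. The contract is on trust.
```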

python
from abc import ABC, abstractmethod

class Drawable(ABC):
    @abstractmethod
    def draw(self) -> None: ...

    @classmethod
    def __subclasshook__(cls, C):
        # Any class that defines a draw() method counts as Drawable
        if cls is Drawable:
            if any("draw" in B.__dict__ for B in C.__mro__):
                return True
        return NotImplemented

class LegacyWidget:
    # Does not inherit from Drawable
    def draw(self):
        print("drawing legacy widget")

# isinstance returns True because LegacyWidget has draw()
print(isinstance(LegacyWidget(), Drawable))   # True
print(issubclass(LegacyWidget, Drawable))     # True

__subclasshook__ is a classmethod that Python calls inside issubclass(). If it returns True, the class is treated as a virtual subclass. If it returns NotImplemented, the normal subclass check runs. Returning False explicitly prevents a class from being considered a subclass even if it was registered. This mechanism is what lets duck typing and formal ABCs coexist: you can write code that does isinstance(obj, Drawable) as a guard, and that guard will pass for any object that genuinely has a draw() method — regardless of its inheritance chain.

"Program to an interface, not an implementation." — Erich Gamma, Richard Helm, Ralph Johnson & John Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software (1994)

This is precisely what __subclasshook__ enables at the language level: code that depends on Drawable is programming to the interface — the presence of a draw() method — rather than to any specific implementation or inheritance chain. New types slot in automatically as long as they satisfy the structural contract.

Expert Detail: how ABCMeta caches isinstance() results

Every ABC created with ABCMeta (or by inheriting from abc.ABC) maintains two internal weakref.WeakSet caches: _abc_cache for classes that have already passed the subclass check, and _abc_negative_cache for classes that have already failed it. After the first isinstance(obj, SomeABC) call for a given type, subsequent calls for the same type go straight to one of these caches without re-running __subclasshook__ or the MRO walk. The negative cache carries a version stamp: it is compared against a global ABCMeta._abc_invalidation_counter integer on every check, and the entire negative cache is discarded whenever any ABC.register() call happens anywhere in the process (because registration can turn a previous negative result into a positive one). The positive cache is never discarded — registered or structurally-matching classes remain cached until they are garbage collected, which is why WeakSet is used rather than a plain set. Understanding this dual-cache design explains why the first isinstance check against an ABC is always slower than subsequent ones, and why heavy use of ABC.register() at runtime can cause repeated negative-cache invalidations.

How to Replace an if-else Chain with a Dispatch Table in Python

The following steps apply to any situation where a variable controls which of several behaviors runs. Work through them in order the first time you attempt this refactor.

  1. Identify the condition variable

    Find the if-else chain and locate the single variable that determines which branch runs. This is the value that will become the key in your dispatch dictionary. If the condition involves a range check (for example, if x > 100) or multiple variables at once, a dispatch table may not be the right fit — stick with the conditional in that case.

  2. Move each branch into its own function

    Take the code inside each elif branch and give it a dedicated function. Name the function after what it does, not after the condition that triggers it. For example, apply_student_discount is a better name than handle_student_case because the name describes the behavior, not the trigger.

  3. Build the dispatch dictionary

    Create a dictionary mapping each condition value to its corresponding function. Write the function name without parentheses — you are storing a reference to the function, not calling it. Also write a default function for cases where the key is not found, and use it as the second argument to .get().

  4. Look up and call the function

    Replace the entire if-else chain with two lines: one that retrieves the function from the dictionary using .get(key, default), and one that calls it. The original function that contained the chain now simply routes to the right handler without knowing any specifics about what each handler does.
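The four steps above can be sketched end to end on a tiny, hypothetical command router — all names here are made up for illustration:

```python
# Step 1: the condition variable is the command name (a discrete key).
# Step 2: each former branch becomes its own named function.
def cmd_start():
    return "starting"

def cmd_stop():
    return "stopping"

def cmd_unknown():  # the default for unrecognised keys
    return "unknown command"

# Step 3: the dispatch dictionary — function references, no parentheses.
COMMANDS = {
    "start": cmd_start,
    "stop":  cmd_stop,
}

# Step 4: look up and call — the old if-else chain is now two lines.
def run_command(name):
    handler = COMMANDS.get(name, cmd_unknown)
    return handler()

print(run_command("start"))  # starting
print(run_command("oops"))   # unknown command
```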

Python Learning Summary Points

  1. A growing chain of if-elif-else statements is a signal that data and behavior have been tangled together. Separating them — by moving each branch into its own function and mapping keys to functions in a dictionary — makes the code easier to extend without modifying existing logic.
  2. A dispatch table is a dictionary where values are callable objects. Retrieving and calling a value from a dispatch table replaces the entire conditional chain with a single lookup, and adding a new case never requires editing the function that performs the lookup. Dictionary lookups are O(1) average case, compared to O(n) worst case for a long if-else chain.
  3. functools.singledispatch formalises dispatch at the type level. It routes calls to the correct registered handler based on the type of the first argument, with no if/isinstance checks in the calling code.
  4. Subclassing dict and overriding __missing__ lets you compute fallback handlers dynamically when a key is not found — useful when the full set of keys is not known at startup.
  5. Polymorphism achieves the same goal at the class level. When each class implements the same method name with its own logic, calling code can work with any object without knowing its type. New types are added by writing new classes, not by editing existing conditional branches.
  6. Adding abc.ABC and @abstractmethod to a class hierarchy gives you a development-time guarantee that every subclass implements the required interface, raising a TypeError at instantiation time rather than at runtime deep in production code.
  7. functools.singledispatch exposes .dispatch(Type) and .registry for runtime introspection of the handler table. Handlers can also be registered after the fact by calling function.register(Type, handler) as a plain method call, which is useful when types or handlers are defined in separate modules.
  8. Python dictionaries are hash tables, so dispatch-table lookup stays O(1) on average no matter how many entries are added. Hash randomisation (PYTHONHASHSEED, random by default since Python 3.3) additionally removes the hash-flooding denial-of-service attack vector, so adversarial keys cannot degrade lookup speed either.
  9. ABCs support virtual subclasses via ABC.register(SomeClass) and the __subclasshook__ classmethod, letting isinstance() checks pass for any class that structurally satisfies an interface — regardless of inheritance. This is how the standard library makes built-in types appear to implement abstract collection interfaces without modifying their class definitions.
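
Point 9 can be sketched in a few lines (the Serializer and JsonLike classes are invented for the example):

```python
from abc import ABC, abstractmethod

class Serializer(ABC):
    @abstractmethod
    def dumps(self, obj): ...

class JsonLike:
    # No inheritance from Serializer anywhere.
    def dumps(self, obj):
        return str(obj)

# Register JsonLike as a virtual subclass: isinstance and issubclass
# checks now pass, even though JsonLike never inherits from Serializer.
Serializer.register(JsonLike)

assert issubclass(JsonLike, Serializer)
assert isinstance(JsonLike(), Serializer)
```

Note that register() does not verify that the class actually implements the abstract methods; it is a promise, not a check.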

All the techniques covered — plain dispatch tables, singledispatch, __missing__, polymorphism, ABCs, and virtual subclasses — express the same underlying idea: behavior should live close to the data it belongs to, and the calling code should not need to know which specific behavior will run. When that principle is followed, code grows in predictable, safe ways rather than accumulating in a single overloaded function.

Unifying principle: keep calling code ignorant

Every technique in this article — dispatch tables, singledispatch, polymorphism, ABCs — can be understood as one pattern expressed at different levels. The pattern is: make the calling code ignorant of which specific behavior runs. A dispatch table does this through dictionary lookup. Polymorphism does it through the object's type. singledispatch does it through the argument's type. ABCs guarantee the contract that makes ignorance safe. If you remember only one thing from this article, make it this principle — it transfers to every language and every codebase.

Which technique fits? A quick decision guide

The right technique depends on your answers to four questions:

  1. What drives the branching in your conditional?
  2. Do the different behaviors also carry their own state — attributes, multiple related methods?
  3. Is the full set of valid keys known at development time?
  4. Is this inside a class method or a standalone function?

The recommendations below cover the most common combinations of answers.

Recommended: Dispatch table

Build a dictionary mapping each key to a function reference. Use dict.get(key, default_fn) for safe lookup. Define handlers as named functions — avoid lambdas for anything beyond a one-line expression. This is the most readable and extensible pattern for your situation.

Recommended: dict subclass with __missing__

Subclass dict and define __missing__(self, key) to compute or generate a handler when an unknown key arrives. Remember: __missing__ is only triggered by bracket access (d[key]), not by d.get(). Use this when the handler table needs to grow dynamically at runtime.

Recommended: Polymorphism

Create a class for each case. Give each class the same method name(s). Let calling code work with any object through that shared interface. Add abc.ABC and @abstractmethod if you want Python to enforce the contract at instantiation time. New cases are new classes — existing code never changes.

Recommended: functools.singledispatch

Decorate your base function with @singledispatch and register handlers per type with @fn.register(Type). The dispatch cache means repeat calls for the same type are fast. Remember that every .register() call invalidates the cache — register all handlers at import time, not in a hot loop.

Recommended: functools.singledispatchmethod

Use @singledispatchmethod (Python 3.8+) for dispatch inside a class. It skips self and dispatches on the type of the next argument. Using plain @singledispatch inside a class will always route to the base implementation because self is the first argument.

Recommended: match / case (Python 3.10+)

Use structural pattern matching when you need to destructure data, capture variables, and apply guard conditions in a single step. Dispatch tables cannot express guard clauses or variable capture. match is purpose-built for parsing nested structures — JSON, command tuples, AST nodes.


Frequently Asked Questions

Are if-else statements considered bad practice in Python?

There is nothing wrong with a small number of if-else statements. The problem appears when you stack many branches together to handle growing lists of conditions. Each new case requires editing the same block of code, making it harder to test, extend, and reason about. This pattern is sometimes called a code smell because it signals that behavior and data are tangled together in a way that creates fragility.

What is duck typing in Python?

Duck typing is the idea that Python does not check what type an object is before calling a method on it — it just calls the method and trusts the object to know what to do. The name comes from the phrase "if it walks like a duck and quacks like a duck, it is a duck." In Python terms: if the object has the method you want to call, that is all that matters.

In the shape example, the loop calls shape.area() on every object in the list. Python does not verify that each shape is a Circle or a Rectangle before making the call — it just calls .area() and lets the object respond. If you dropped in a completely unrelated LandPlot class that also happened to define area(), it would work in that loop without any modification.

This is why the polymorphic approach eliminates the conditional: instead of the calling code asking "what type is this and which function should I call?", it simply calls the method and each object answers for itself. Duck typing is what makes that possible without requiring every class to formally declare that it implements an interface.
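
A minimal sketch of that shape loop (the class internals here are assumptions; the tutorial's own shape code may differ):

```python
import math

class Circle:
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return math.pi * self.radius ** 2

class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height
    def area(self):
        return self.width * self.height

class LandPlot:
    # Completely unrelated class -- it just happens to define area().
    def __init__(self, acres):
        self.acres = acres
    def area(self):
        return self.acres * 4046.86  # square metres per acre

shapes = [Circle(1), Rectangle(2, 3), LandPlot(0.5)]
areas = [shape.area() for shape in shapes]  # no isinstance checks anywhere
```

The loop never asks what type each object is; every object answers .area() for itself.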

What is a dispatch table?

A dispatch table is a plain Python dictionary where keys map to callable objects such as functions or methods. Instead of writing an if-else chain to choose which function to run, you look up the key in the dictionary and call the result. The dictionary acts as a routing table for behavior.

What is polymorphism?

Polymorphism is an object-oriented concept where different classes share the same method name but each class implements the method differently. Instead of writing if type == 'x': do_x() else: do_y(), you call the same method on any object and the object itself handles the correct behavior. The conditional logic moves into the class definition and away from the calling code.

Should I avoid if-else statements entirely?

No. A single if or a simple two-branch if-else is completely normal and readable. The techniques in this tutorial are useful specifically when you find yourself adding a new elif every time a new condition arises. That growth pattern is the signal to consider a different architecture.

What is the Open/Closed Principle?

The Open/Closed Principle (OCP) states that software entities should be open for extension but closed for modification. The term was coined by Bertrand Meyer in his 1988 book Object-Oriented Software Construction and later became the "O" in Robert C. Martin's SOLID principles. In practical terms, adding new behavior should not require editing existing working code. Dispatch tables and polymorphism both support this idea: adding a new case means adding a new dictionary entry or a new class, not rewriting a conditional chain.

Why is a dictionary lookup faster than a long if-else chain?

Python dictionaries use hash tables internally. According to the CPython Time Complexity wiki, key lookup is O(1) on average — constant time regardless of how many entries exist. (Worst-case is O(n) due to hash collisions, but collisions are rare in practice, and hash randomisation, enabled by default since Python 3.3, prevents an attacker from deliberately constructing colliding keys.) An if-else chain, by contrast, checks each condition in order from top to bottom. For long chains, entries near the end are reached only after every earlier condition has been evaluated — O(n) in the worst case for every lookup.

What happens if a key is missing from a dispatch table?

A missing key raises a KeyError by default. You can handle this with dict.get(key, default_function) to return a fallback callable, or with a try-except block. Providing a sensible default or raising a descriptive error keeps the behavior explicit and avoids silent failures.
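
Both options side by side, in a hypothetical event-handler table:

```python
def handle_default(event):
    return "unhandled"

def handle_click(event):
    return "clicked: " + event["target"]

HANDLERS = {"click": handle_click}

# Option 1: dict.get() with an explicit fallback callable.
handler = HANDLERS.get("hover", handle_default)
assert handler({"target": "button"}) == "unhandled"

# Option 2: try/except, when a missing key should fail loudly with a
# descriptive error instead of a bare KeyError.
def dispatch(event_type, event):
    try:
        handler = HANDLERS[event_type]
    except KeyError:
        raise ValueError(f"no handler registered for {event_type!r}") from None
    return handler(event)

assert dispatch("click", {"target": "button"}) == "clicked: button"
```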

How is functools.singledispatch different from a plain dispatch table?

functools.singledispatch is a decorator in Python's standard library that implements type-based dispatch. Instead of keying a dictionary on a string, you register handler functions per type and Python automatically routes calls to the right handler based on the type of the first argument. A plain dispatch table keys on any hashable value (usually a string or integer); singledispatch keys specifically on Python types. Both eliminate if-else chains, but singledispatch is the right tool when you are branching on what type an object is rather than on a discrete string key.
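
A minimal sketch of type-based dispatch (the describe function is invented for the example):

```python
from functools import singledispatch

@singledispatch
def describe(value):
    # Base implementation: runs when no registered type matches.
    return f"something else: {value!r}"

@describe.register(int)
def _(value):
    return f"an integer: {value}"

@describe.register(list)
def _(value):
    return f"a list of {len(value)} items"

assert describe(42) == "an integer: 42"
assert describe([1, 2, 3]) == "a list of 3 items"
assert describe(3.5) == "something else: 3.5"  # float is not registered
```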

What does the __missing__ method do in a dict subclass?

When you subclass dict and define a __missing__(self, key) method, Python calls that method automatically whenever a key lookup fails rather than raising a KeyError. This allows a dispatch table to compute or generate a fallback handler dynamically based on the key that was requested. The standard library's collections.defaultdict uses the same hook — the difference is that __missing__ in a custom subclass can inspect the key before deciding what to return, rather than calling a fixed factory function.
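
A sketch of a dispatch table that generates fallback handlers on demand (HandlerTable is a made-up name):

```python
class HandlerTable(dict):
    def __missing__(self, key):
        # Inspect the key and build a fallback handler on the fly.
        def fallback(payload):
            return f"no handler for {key!r}, got {payload!r}"
        self[key] = fallback  # cache it so the next lookup is a plain hit
        return fallback

table = HandlerTable()
table["greet"] = lambda payload: f"hello, {payload}"

assert table["greet"]("world") == "hello, world"
assert table["unknown"]("data") == "no handler for 'unknown', got 'data'"
# Note: table.get("other") would return None -- __missing__ only fires
# on bracket access, never through .get().
```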

When should I use abc.ABC and @abstractmethod?

Use abc.ABC and @abstractmethod when you want Python to enforce that every subclass implements a required method. Without them, a missing method raises an AttributeError only when the method is actually called at runtime. With them, trying to instantiate a subclass that is missing an abstract method raises a TypeError immediately — before any business logic runs. This is not required for polymorphism to work in Python (duck typing handles that), but it provides an explicit contract that is valuable in larger codebases or team environments.
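
A short sketch of the difference (class names are invented):

```python
from abc import ABC, abstractmethod

class Exporter(ABC):
    @abstractmethod
    def export(self, data): ...

class BrokenExporter(Exporter):
    pass  # forgot to implement export()

class CsvExporter(Exporter):
    def export(self, data):
        return ",".join(map(str, data))

try:
    BrokenExporter()  # fails immediately, before any business logic runs
except TypeError as exc:
    print(exc)  # the message names the missing abstract method

assert CsvExporter().export([1, 2, 3]) == "1,2,3"
```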

Can I build a dispatch table of instance methods inside a class?

Yes, and it is a common pattern. The key detail is where you build the dictionary. If you define it as a class-level attribute at class definition time, you cannot reference self yet — so the values must be plain function references (bare names inside the class body), which you then have to call as handler(self, arg). The more natural approach is to build the dispatch dictionary inside __init__, where self is available, so each value is already a bound method:

class Formatter:
    def __init__(self):
        self._dispatch = {
            "json": self._to_json,
            "csv":  self._to_csv,
            "text": self._to_text,
        }

    def format(self, data, fmt):
        handler = self._dispatch.get(fmt, self._to_text)
        return handler(data)

    def _to_json(self, data): ...
    def _to_csv(self, data):  ...
    def _to_text(self, data): ...

Because the dictionary is built in __init__, each value is already a bound method — calling handler(data) is all you need. The self is already captured. This approach also makes it easy to override the dispatch dictionary in a subclass by reassigning entries in the subclass's own __init__.

How do I decide between a dispatch table and polymorphism?

The deciding question is whether the different behaviors are fundamentally about data routing or about object identity.

Use a dispatch table when you have a single discrete key — a string, integer, or enum value — that selects a behavior, and the behaviors themselves are short, stateless functions that do not need to carry their own data. The discount calculator is a good example: each handler is a pure calculation, and the key is just a string label.

Use polymorphism when each "case" has its own state, its own initialization logic, or several related methods that naturally belong together. A Circle needs to store a radius, expose an area(), and potentially a perimeter() and a bounding_box(). Grouping all of that into a class is cleaner than managing parallel dictionaries for each method. A rough rule: if you find yourself building multiple dispatch tables that all key on the same value, you probably want a class hierarchy instead.

Both patterns can coexist. A common real-world combination is a dispatch table whose values are instances of polymorphic classes — the table handles routing, and each class handles its own behavior.
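
That combination might look like this sketch (the renderer classes are invented for the example):

```python
import json

class JsonRenderer:
    def render(self, data):
        return json.dumps(data)

class CsvRenderer:
    def render(self, data):
        return ",".join(str(v) for v in data.values())

# The dictionary handles routing; each value is a polymorphic instance
# that handles its own behavior (and could carry state such as options).
RENDERERS = {
    "json": JsonRenderer(),
    "csv": CsvRenderer(),
}

def render(fmt, data):
    renderer = RENDERERS.get(fmt, RENDERERS["json"])
    return renderer.render(data)
```

Adding a new output format means writing one new class and one new dictionary entry.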

Can dispatch table handlers take more than one argument?

Dispatch tables work fine with any number of arguments — the dictionary just stores the function reference, and you call it with however many arguments you need. All handlers should accept the same signature so the calling code stays uniform:

def apply_student(price, tax_rate, member_years):
    base = price * 0.80
    return base * (1 + tax_rate) * (0.99 ** member_years)

def apply_senior(price, tax_rate, member_years):
    return price * 0.75 * (1 + tax_rate)

DISCOUNTS = {
    "student": apply_student,
    "senior":  apply_senior,
}

handler = DISCOUNTS.get(customer_type, lambda p, t, y: p)
result  = handler(price, tax_rate, member_years)

If the handlers genuinely need different signatures, the usual solution is to pass a context object or a dictionary of named parameters instead of positional arguments, keeping the call site as handler(context) regardless of what each handler unpacks from it.

Does replacing conditionals with these techniques improve testability?

Significantly. A long if-elif-else chain forces you to test one monolithic function by exercising every branch through its input. Adding a new branch means adding new test cases to an already large test function, and a mistake in one branch can mask failures in others.

When each branch is its own function, you test each handler in complete isolation — one test function per handler, with no need to thread a value through a conditional to reach the code you care about. The routing function itself (the two-line dispatcher) needs only a handful of tests: one for a known key, one for an unknown key hitting the default, and one verifying the return value is passed through correctly. That is three tests covering what was previously one untestable tangle.

Polymorphism gives the same benefit at the class level: each class is tested independently, and the calling code that iterates over objects and calls .area() is tested separately from the area calculations themselves. Adding a new shape class does not invalidate any existing test.
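
The three dispatcher tests described above might look like this, written as plain assert-style test functions against a minimal version of the discount dispatcher:

```python
def apply_student_discount(price):
    return price * 0.80

def apply_no_discount(price):
    return price

DISCOUNTS = {"student": apply_student_discount}

def apply_discount(price, customer_type):
    handler = DISCOUNTS.get(customer_type, apply_no_discount)
    return handler(price)

# 1. A known key routes to the right handler.
def test_known_key():
    assert apply_discount(100, "student") == 80.0

# 2. An unknown key hits the default.
def test_unknown_key():
    assert apply_discount(100, "press") == 100

# 3. Handlers are tested in isolation, without going through the dispatcher.
def test_student_handler():
    assert apply_student_discount(50) == 40.0

test_known_key()
test_unknown_key()
test_student_handler()
```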

Can I use lambdas as dispatch table values?

Yes, but only for very short expressions. A lambda is an anonymous function, so {"student": lambda p: p * 0.80} is valid Python. The problem is readability: lambda bodies cannot span multiple lines, cannot contain statements like if or return with early exits, and produce opaque tracebacks when something goes wrong. For anything more than a one-expression calculation, a named function is almost always the better choice.

A practical rule: if the lambda fits on the same line as the dictionary key and you can read it at a glance, it is fine. If you find yourself nesting logic inside a lambda, extract a named function instead. The dispatch table pattern is specifically designed to make behavior visible — a wall of lambdas defeats that purpose.

Why doesn't functools.singledispatch work on instance methods?

functools.singledispatch dispatches on the type of the first positional argument. When you use it inside a class, the first argument is self — which is always an instance of the class — so every call routes to the base implementation, never to a registered type handler. The dispatch never fires as intended.

functools.singledispatchmethod, added in Python 3.8, is designed specifically for use inside a class. It skips self (or cls for class methods) and dispatches on the type of the next argument instead. Use singledispatch for standalone functions and singledispatchmethod for methods inside a class definition. Confusing the two is one of the most reliable sources of silent misbehavior when first learning type-based dispatch.
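
A minimal sketch of singledispatchmethod (requires Python 3.8+; the Stats class is invented for the example):

```python
from functools import singledispatchmethod

class Stats:
    @singledispatchmethod
    def add(self, value):
        # Base implementation: unregistered types fail loudly.
        raise TypeError(f"unsupported type: {type(value).__name__}")

    @add.register
    def _(self, value: int):
        # Dispatch is on the type of the argument AFTER self.
        return f"added int {value}"

    @add.register
    def _(self, value: list):
        return f"added {len(value)} values"

s = Stats()
assert s.add(5) == "added int 5"
assert s.add([1, 2]) == "added 2 values"
```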

Should I use enum members instead of strings as dispatch keys?

Enums are a better choice than plain strings whenever the set of valid keys is fixed and known at development time. A string key lets a typo slip through silently — "stuednt" simply misses the dispatch table and hits the default. An enum lookup fails loudly at the point where the invalid value is created (a bad value lookup such as CustomerType("stuednt") raises a ValueError; a bad name lookup such as CustomerType["STUEDNT"] raises a KeyError), not deep inside the routing logic.

from enum import Enum, auto

class CustomerType(Enum):
    STUDENT  = auto()
    SENIOR   = auto()
    EMPLOYEE = auto()
    VIP      = auto()

DISCOUNTS = {
    CustomerType.STUDENT:  lambda p: p * 0.80,
    CustomerType.SENIOR:   lambda p: p * 0.75,
    CustomerType.EMPLOYEE: lambda p: p * 0.60,
    CustomerType.VIP:      lambda p: p * 0.50,
}

handler = DISCOUNTS.get(customer_type, lambda p: p)
result  = handler(price)

Enum members are also hashable, so they work as dictionary keys without any additional setup. In a codebase where customer types come from a configuration file or user input as strings, you can convert the string to an enum early — at the boundary of your system — and let the rest of the code work with the safer enum representation throughout.

How should I handle nested if-else chains?

It depends on what the nesting is doing. If the outer condition selects a category and the inner condition selects a sub-behavior within that category, a two-level dispatch table handles this cleanly — the outer key maps to a second dictionary, and that dictionary maps the inner key to the handler:

HANDLERS = {
    "image": {
        "resize":  resize_image,
        "convert": convert_image,
    },
    "video": {
        "trim":    trim_video,
        "convert": convert_video,
    },
}

handler = HANDLERS.get(file_type, {}).get(action, unknown_action)
handler(file)

If the nesting involves unrelated conditions — for example, an outer check on a range value and an inner check on a string key — a flat dispatch table may not fit neatly. In that case, consider whether the structure is better expressed with a polymorphic class hierarchy, where each class handles its own internal complexity rather than exposing it to the calling code.

Nesting dispatch tables more than two levels deep is usually a warning sign that the problem needs to be reframed, not that you need deeper nesting.

When is match / case a better choice than a dispatch table?

A dispatch table is the right choice when routing is based on a single discrete key and the number of cases is expected to grow. A match statement (introduced in Python 3.10 via PEP 634) is better suited for a different class of problem: when you need to destructure data and match against its shape, type, and content simultaneously.

Consider parsing a command that might arrive as a tuple in different forms:

match command:
    case ("quit",):
        quit_game()
    case ("go", direction):
        go(direction)
    case ("pick", item) if item in inventory:
        pick_up(item)
    case _:
        print("Unknown command")

A dispatch table cannot express the guard clause if item in inventory or capture the direction variable in a single step. For matching against nested data structures, unpacking values, or applying guard conditions, match is more expressive. For routing to one of several handlers based on a simple string or enum key — particularly when new handlers will be added over time — a dispatch table is the cleaner architecture because it does not require touching existing code to extend it.

Is structural pattern matching the same thing as a dispatch table?

They solve related problems but are not the same thing. Python's match statement was introduced in Python 3.10 via PEP 634 (authored by Brandt Bucher and Guido van Rossum). It is structural pattern matching that can inspect the shape, type, and content of data — not just compare values. Dispatch tables and polymorphism are code organization strategies that move conditional logic into data structures or class hierarchies. Both can reduce explicit if-else branches, but match is best suited for deconstructing nested data structures (like JSON or ASTs), while dispatch tables shine for routing on a single discrete key.

Sources and References
  1. Python Software Foundation. Python Data Structures — Official Documentation. docs.python.org/3/tutorial/datastructures.html
  2. Python Software Foundation. Time Complexity — CPython Wiki. wiki.python.org/moin/TimeComplexity
  3. Bucher, B. & van Rossum, G. (2020). PEP 634 — Structural Pattern Matching: Specification. Python Enhancement Proposals. peps.python.org/pep-0634/
  4. Meyer, B. (1988). Object-Oriented Software Construction. Prentice Hall. (Origin of the Open/Closed Principle.)
  5. Martin, R. C. (1996). The Open-Closed Principle. The C++ Report. (Restated OCP as part of the SOLID principles.)
  6. Beck, K. Make It Run, Make It Right II. Tidy First (Substack). tidyfirst.substack.com (Attribution for the "make it work/run, make it right, make it fast" principle.)
  7. Python Software Foundation. functools — Higher-order functions and operations on callable objects. docs.python.org/3/library/functools.html (singledispatch documentation.)
  8. Python Software Foundation. abc — Abstract Base Classes. docs.python.org/3/library/abc.html
  9. Peters, T. (2004). PEP 20 — The Zen of Python. Python Enhancement Proposals. peps.python.org/pep-0020/ (Source for the "Special cases aren't special enough" quote.)
  10. Gamma, E., Helm, R., Johnson, R. & Vlissides, J. (1994). Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley. (Source for the "Program to an interface" principle.)
  11. van Rossum, G. & Talin (2007). PEP 3119 — Introducing Abstract Base Classes. Python Enhancement Proposals. peps.python.org/pep-3119/ (Design rationale for ABC, virtual subclasses, and __subclasshook__.)
  12. Python Software Foundation. PYTHONHASHSEED — Command line and environment. docs.python.org/3/using/cmdline.html (Hash randomisation behaviour since Python 3.3.)