Writing code that works is one thing. Writing code you can use again and again across different projects without rewriting it from scratch is a completely different skill. Python gives you several powerful mechanisms for building reusable patterns, from decorators and context managers to Protocol classes and generator pipelines. This article walks through each one with practical examples you can start using today, along with the decision frameworks, anti-patterns, and design thinking that separate throwaway scripts from professional-grade codebases.
Every developer eventually hits the same wall. You solve a problem in one project, then find yourself solving the exact same problem in the next one. Copy-pasting code between files and repositories creates maintenance nightmares. When you fix a bug in one copy, you have to remember to fix it everywhere else. Reusable code patterns solve this by giving you well-structured, tested building blocks that you write once and apply anywhere.
Python is especially well-suited for this kind of work. Its support for first-class functions, decorators, protocols, and generators means you can build abstractions that are both flexible and readable. As Martin Fowler wrote in Refactoring, good programmers write code that humans can understand (Fowler et al., Refactoring: Improving the Design of Existing Code, Addison-Wesley, 1999). The patterns covered here are the practical tools that experienced Python developers use every day to keep their codebases clean, maintainable, and built for other humans to read.
Why Reusable Patterns Matter
Reusable code patterns are not about writing clever code. They are about writing predictable code. When you establish a pattern for how your application handles logging, database connections, data validation, or error recovery, every developer on the team knows what to expect. New features follow the same structure. Bugs become easier to find because the code behaves consistently.
The core principles behind reusable patterns are straightforward. Each function or class should have a single responsibility. Robert C. Martin formalized this as the Single Responsibility Principle: a class should have only one reason to change (Martin, Agile Software Development, Prentice Hall, 2003). Logic that appears in more than one place should be extracted into a shared component. Interfaces should be defined clearly so that components can be swapped without rewriting the code that depends on them.
But there is a deeper cognitive reason these patterns matter. When you encounter a decorator, your brain immediately recognizes the shape: "this wraps behavior around something else." When you see a context manager, you know: "this handles setup and teardown." These patterns create a shared mental vocabulary that compresses complex ideas into recognizable structures. You stop reading individual lines and start reading intentions.
The DRY principle (Don't Repeat Yourself) is a guideline, not an absolute law. Extracting shared code too aggressively can create tight coupling between unrelated parts of your application. The goal is to eliminate meaningful duplication while keeping components independent. A useful heuristic: if you find yourself changing two pieces of code for the same reason, they probably belong together. If they change for different reasons, duplication may actually be preferable to coupling.
Decorator Patterns
Decorators are one of the tools Python offers for reusable code. A decorator wraps a function or method with additional behavior without changing the original function's code. This makes decorators ideal for cross-cutting concerns like logging, timing, caching, authentication, and retry logic. The underlying mechanism is straightforward: since Python treats functions as first-class objects, you can pass them as arguments, return them from other functions, and wrap them in new behavior. Decorators formalize this pattern with clean syntax.
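To see the mechanism stripped of everything else, here is a minimal sketch (the `shout` decorator and `greet` function are illustrative, not from any library) showing that the `@` syntax is just sugar for passing a function through another function and rebinding the name:

```python
import functools

def shout(func):
    """Wrap a function so its string result is upper-cased."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()
    return wrapper

@shout
def greet(name: str) -> str:
    return f"hello, {name}"

# The @shout line is exactly equivalent to writing:
#   greet = shout(greet)
print(greet("alice"))  # HELLO, ALICE
```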
A Reusable Retry Decorator
Network calls fail. APIs return errors. Databases time out. Instead of scattering retry logic throughout your codebase, you can write it once as a decorator and apply it wherever you need it.
import time
import functools
import json
import logging
import urllib.request

logger = logging.getLogger(__name__)

def retry(
    max_attempts: int = 3,
    delay: float = 1.0,
    backoff: float = 2.0,
    exceptions: tuple[type[Exception], ...] = (Exception,)
):
    """Retry a function on failure with exponential backoff."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            current_delay = delay
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions as e:
                    if attempt == max_attempts:
                        raise
                    logger.warning(
                        f"{func.__qualname__} attempt {attempt}/{max_attempts} "
                        f"failed: {e}. Retrying in {current_delay:.1f}s..."
                    )
                    time.sleep(current_delay)
                    current_delay *= backoff
        return wrapper
    return decorator

@retry(max_attempts=3, delay=0.5, exceptions=(ConnectionError, TimeoutError))
def fetch_api_data(url: str) -> dict:
    """Fetch data from an external API."""
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.loads(response.read())  # parse bytes into the promised dict
This decorator accepts configuration parameters, which means you can customize the retry behavior for each function that uses it. The functools.wraps call preserves the original function's name, docstring, and other metadata. According to the Python standard library documentation, functools.wraps copies __module__, __name__, __qualname__, __annotations__, __type_params__, and __doc__ from the wrapped function to the wrapper, and also sets a __wrapped__ attribute pointing back to the original (Python docs, functools). Without it, debugging tools, documentation generators, and introspection will all see the wrapper function's identity instead of the original.
A Reusable Timing Decorator
Performance monitoring is another common need. Rather than adding timing code inside each function, you can attach it from the outside.
import time
import functools
import logging

logger = logging.getLogger(__name__)

def timed(func):
    """Log the execution time of a function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            logger.info(f"{func.__qualname__} completed in {elapsed:.4f}s")
    return wrapper

@timed
def process_records(records: list[dict]) -> list[dict]:
    """Process a batch of records."""
    return [transform(record) for record in records]
Common Decorator Anti-Patterns
Understanding what not to do is as valuable as knowing the correct approach. Here are the mistakes that cause the hardest-to-diagnose bugs:
Swallowing exceptions silently. A decorator that catches exceptions and returns None or a default value without logging or re-raising hides bugs. The calling code has no way to know something went wrong. If your decorator catches exceptions, always log them and either re-raise or return a clearly distinguishable sentinel value.
Decorators that break the function's signature. If your decorator accepts *args, **kwargs but does not forward them correctly, tools like inspect.signature() will report the wrong parameter list. This breaks IDE autocompletion and confuses anyone reading the code. Libraries like wrapt solve this by preserving the full argument specification.
Stacking order confusion. When multiple decorators are applied to a single function, they execute from bottom to top during wrapping but from top to bottom when the function is called. Getting this wrong leads to subtle bugs where, for example, a caching decorator stores results before a validation decorator has a chance to reject bad inputs.
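The stacking behavior is easy to verify directly. In this sketch (the `trace` decorator is illustrative), each wrapper records when it runs, confirming that the decorator listed first in the source executes first on each call:

```python
import functools

calls: list[str] = []

def trace(label: str):
    """Record the order in which stacked wrappers execute."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            calls.append(label)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@trace("outer")  # applied last during wrapping, runs first on each call
@trace("inner")  # applied first during wrapping, runs second on each call
def compute() -> int:
    return 42

compute()
print(calls)  # ['outer', 'inner']
```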
Always use functools.wraps when writing decorators. Without it, the decorated function loses its original __name__, __doc__, and __module__ attributes, which breaks introspection tools, documentation generators, and debugging. The __wrapped__ attribute added by functools.wraps also lets you bypass the decorator entirely when you need to test or inspect the original function.
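A small sketch of the `__wrapped__` escape hatch (the `suppress_value_errors` decorator and `parse_port` function are illustrative; a real decorator should also log the exception, as discussed above):

```python
import functools

def suppress_value_errors(func):
    """Illustrative decorator: converts ValueError to None (logging omitted)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except ValueError:
            return None
    return wrapper

@suppress_value_errors
def parse_port(text: str) -> int:
    return int(text)

print(parse_port.__name__)             # parse_port -- metadata preserved by wraps
print(parse_port("not a number"))      # None -- the wrapped behavior
print(parse_port.__wrapped__("8080"))  # 8080 -- the original, undecorated function
```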
Context Manager Patterns
Context managers handle setup and teardown logic automatically. You use them every time you write with open('file.txt') as f:. But they are far more useful than just file handling. Any time your code needs to acquire a resource, do something, and then reliably release the resource regardless of whether an error occurred, a context manager is the right tool. The underlying mechanism is Python's __enter__ and __exit__ protocol, which guarantees that cleanup code runs even if an exception is raised inside the with block.
A Reusable Database Transaction Manager
from contextlib import contextmanager
from typing import Generator

@contextmanager
def db_transaction(connection) -> Generator:
    """Manage a database transaction with automatic commit/rollback."""
    cursor = connection.cursor()
    try:
        yield cursor
        connection.commit()
    except Exception:
        connection.rollback()
        raise
    finally:
        cursor.close()

# Usage
with db_transaction(conn) as cursor:
    cursor.execute("INSERT INTO users (name, email) VALUES (?, ?)",
                   ("Alice", "[email protected]"))
    cursor.execute("UPDATE accounts SET balance = balance - 100 WHERE id = ?",
                   (sender_id,))
If anything inside the with block raises an exception, the transaction is rolled back automatically. If everything succeeds, it commits. The cursor is always closed. This pattern eliminates an entire category of bugs where developers forget to handle rollbacks or close cursors.
A Reusable Temporary Directory Manager
import tempfile
import shutil
from pathlib import Path
from contextlib import contextmanager
from typing import Generator

@contextmanager
def working_directory(prefix: str = "work_") -> Generator[Path, None, None]:
    """Create a temporary working directory that is cleaned up automatically."""
    tmp_dir = Path(tempfile.mkdtemp(prefix=prefix))
    try:
        yield tmp_dir
    finally:
        shutil.rmtree(tmp_dir, ignore_errors=True)

# Usage
with working_directory(prefix="data_processing_") as work_dir:
    input_file = work_dir / "raw_data.csv"
    output_file = work_dir / "processed_data.csv"
    # Do your processing here
# The directory and all its contents are deleted when the block exits
The contextlib.contextmanager decorator lets you write context managers as generator functions, which is often more readable than implementing the full __enter__ and __exit__ protocol on a class.
When to Use a Class-Based Context Manager Instead
The generator-based approach with @contextmanager works well for simple cases, but there are situations where a class-based context manager is the better choice. If your manager needs to maintain state across multiple uses, if it needs to be reentrant (used in nested with blocks), or if the setup and teardown logic is complex enough to warrant separate methods, a class gives you more structure. The class-based approach also makes unit testing easier, since you can instantiate the manager and call its methods independently.
class ConnectionPool:
    """A reusable context manager that manages a pool of database connections."""

    def __init__(self, dsn: str, pool_size: int = 5):
        self.dsn = dsn
        self.pool_size = pool_size
        self._connections: list = []

    def __enter__(self):
        self._connections = [create_connection(self.dsn) for _ in range(self.pool_size)]
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        for conn in self._connections:
            conn.close()
        self._connections.clear()
        return False  # Don't suppress exceptions

    def get_connection(self):
        if not self._connections:
            raise RuntimeError("Pool exhausted")
        return self._connections.pop()
Context managers created with @contextmanager are single-use by design. If you need a reusable context manager (one that can be entered multiple times), use the class-based approach with __enter__ and __exit__. The standard library's contextlib.ExitStack is also worth knowing about -- it lets you programmatically combine multiple context managers, which is useful when the number of resources to manage is not known until runtime.
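Here is a small sketch of ExitStack handling a runtime-determined number of resources (the file names are illustrative; every file is closed on exit even if one `open()` fails partway through):

```python
import tempfile
from contextlib import ExitStack
from pathlib import Path

# Create a few files; in real code the count would come from user input,
# a config file, or a directory listing.
tmp = Path(tempfile.mkdtemp())
paths = [tmp / f"part_{i}.txt" for i in range(3)]
for p in paths:
    p.write_text("data\n")

with ExitStack() as stack:
    # enter_context registers each file for cleanup as it is opened
    files = [stack.enter_context(open(p)) for p in paths]
    merged = "".join(f.read() for f in files)

print(merged)
```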
Protocol Classes for Flexible Interfaces
One of the challenges with reusable code is defining interfaces. You want your functions and classes to work with different types of objects, but you also want type safety. Python's Protocol class solves this elegantly through structural subtyping, sometimes called static duck typing. Introduced in PEP 544 and available since Python 3.8, Protocols let you define what methods or attributes an object must have. Any class that has those methods automatically satisfies the Protocol without needing to inherit from it or declare anything.
As the PEP explains, structural subtyping is natural for Python because it matches how duck typing already works at runtime: an object is treated based on what it can do, not on what class it inherits from (Levkivskyi, Lehtosalo, and Langa, PEP 544). The difference is that Protocols bring this flexibility into the realm of static type checking, so tools like mypy can catch interface mismatches before your code runs.
from typing import Protocol, runtime_checkable
from dataclasses import dataclass

@runtime_checkable
class Serializable(Protocol):
    """Any object that can convert itself to a dictionary."""
    def to_dict(self) -> dict:
        ...

@runtime_checkable
class Identifiable(Protocol):
    """Any object that has a unique identifier."""
    @property
    def id(self) -> str:
        ...

def save_to_database(entity: Serializable) -> None:
    """Save any serializable entity to the database."""
    data = entity.to_dict()
    # database insertion logic here
    print(f"Saved: {data}")

def log_entity(entity: Identifiable) -> None:
    """Log any identifiable entity."""
    print(f"Processing entity: {entity.id}")

# These classes satisfy the protocols WITHOUT inheriting from them
@dataclass
class User:
    user_id: str
    name: str
    email: str

    @property
    def id(self) -> str:
        return self.user_id

    def to_dict(self) -> dict:
        return {"user_id": self.user_id, "name": self.name, "email": self.email}

@dataclass
class Product:
    sku: str
    title: str
    price: float

    @property
    def id(self) -> str:
        return self.sku

    def to_dict(self) -> dict:
        return {"sku": self.sku, "title": self.title, "price": self.price}

# Both work with the same functions
user = User(user_id="u-001", name="Alice", email="[email protected]")
product = Product(sku="SKU-999", title="Widget", price=29.99)

save_to_database(user)     # Works
save_to_database(product)  # Works
log_entity(user)           # Works
log_entity(product)        # Works
The key advantage here is decoupling. The save_to_database function does not know or care about User or Product specifically. It works with anything that has a to_dict method. You can add new entity types in the future without changing the function at all.
Protocols vs. Abstract Base Classes: Which to Choose
Python offers two approaches to defining interfaces: Protocols (structural subtyping) and Abstract Base Classes (nominal subtyping via the abc module). The choice matters because it affects how tightly your code is coupled to specific class hierarchies.
Use Protocols when you are writing library code that should work with any object having the right shape, when you want to accept third-party types you cannot modify, or when you want to keep coupling as loose as possible. Use ABCs when you want to enforce that implementers explicitly opt in to your interface, when you need to provide shared default method implementations, or when you want runtime isinstance checks that verify full method signatures (Protocols' runtime checks only verify that methods exist as attributes, not that their signatures match).
The @runtime_checkable decorator allows you to use isinstance() checks with Protocol classes. Without it, Protocols only work with static type checkers like mypy. Keep in mind that runtime checks only verify that the required methods exist as attributes. They do not validate argument types or return types. The Python standard library documentation notes that isinstance() checks against runtime-checkable protocols can be surprisingly slow compared to checks against non-protocol classes (Python docs, typing.runtime_checkable).
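For contrast, here is a minimal ABC version of the same serialization interface (the class names are illustrative). The key behavioral difference from a Protocol: an implementer that forgets a required method fails at instantiation time rather than at the first type check:

```python
from abc import ABC, abstractmethod

class SerializableBase(ABC):
    """Nominal interface: implementers must inherit and override explicitly."""
    @abstractmethod
    def to_dict(self) -> dict: ...

class Order(SerializableBase):  # explicit opt-in via inheritance
    def __init__(self, order_id: str):
        self.order_id = order_id

    def to_dict(self) -> dict:
        return {"order_id": self.order_id}

class Broken(SerializableBase):  # forgot to implement to_dict
    pass

print(Order("o-1").to_dict())  # {'order_id': 'o-1'}
try:
    Broken()
except TypeError as e:
    print(f"Rejected at instantiation: {e}")
```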
Dataclass Patterns for Structured Data
Dataclasses reduce boilerplate for classes that primarily hold data. Introduced in Python 3.7 via PEP 557, they auto-generate __init__, __repr__, and __eq__ methods from type-annotated field definitions. But they also support patterns that make your data structures more robust and reusable across a codebase.
Immutable Value Objects
When you want data that cannot be changed after creation, use frozen=True combined with slots=True for memory efficiency and faster attribute access. The slots=True option (available since Python 3.10) prevents the creation of a per-instance __dict__, which reduces memory usage. Note that PEP 557 acknowledges that there is a small performance cost to frozen dataclasses during initialization, because __init__ must use object.__setattr__ instead of simple assignment (PEP 557). For the vast majority of use cases, this overhead is negligible, and the safety benefits of immutability far outweigh it.
from dataclasses import dataclass

@dataclass(frozen=True, slots=True)
class Coordinate:
    latitude: float
    longitude: float

    def distance_to(self, other: "Coordinate") -> float:
        """Calculate approximate distance in kilometers using the Haversine formula."""
        from math import radians, sin, cos, sqrt, atan2
        R = 6371.0  # Earth's mean radius in km
        lat1, lon1 = radians(self.latitude), radians(self.longitude)
        lat2, lon2 = radians(other.latitude), radians(other.longitude)
        dlat = lat2 - lat1
        dlon = lon2 - lon1
        a = sin(dlat / 2)**2 + cos(lat1) * cos(lat2) * sin(dlon / 2)**2
        return R * 2 * atan2(sqrt(a), sqrt(1 - a))

@dataclass(frozen=True, slots=True)
class Money:
    amount: int  # Store as cents to avoid float precision issues
    currency: str

    def __post_init__(self):
        if self.amount < 0:
            raise ValueError(f"Amount cannot be negative, got {self.amount}")
        if len(self.currency) != 3 or not self.currency.isalpha():
            raise ValueError(f"Currency must be a 3-letter ISO 4217 code, got '{self.currency}'")

    def __add__(self, other: "Money") -> "Money":
        if self.currency != other.currency:
            raise ValueError(f"Cannot add {self.currency} and {other.currency}")
        return Money(amount=self.amount + other.amount, currency=self.currency)

    def display(self) -> str:
        return f"{self.amount / 100:.2f} {self.currency}"

# Immutable, hashable, and safe to use as dict keys or in sets
price_a = Money(amount=1999, currency="USD")
price_b = Money(amount=500, currency="USD")
total = price_a + price_b
print(total.display())  # 24.99 USD
Dataclass with Validation
Dataclasses do not validate their fields by default, but you can add validation in the __post_init__ method to catch bad data early. This is a critical pattern for reusable code: if your data structures validate themselves, every consumer of those structures is protected from invalid state without needing to write their own checks.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DateRange:
    start: date
    end: date

    def __post_init__(self):
        if self.start > self.end:
            raise ValueError(
                f"start ({self.start}) must not be after end ({self.end})"
            )

    @property
    def days(self) -> int:
        return (self.end - self.start).days

    def overlaps(self, other: "DateRange") -> bool:
        return self.start <= other.end and other.start <= self.end

    def contains(self, d: date) -> bool:
        """Check if a single date falls within this range."""
        return self.start <= d <= self.end

@dataclass
class Config:
    host: str
    port: int
    debug: bool = False
    allowed_origins: list[str] = field(default_factory=list)

    def __post_init__(self):
        if not 1 <= self.port <= 65535:
            raise ValueError(f"Port must be between 1 and 65535, got {self.port}")
        if not self.host:
            raise ValueError("Host cannot be empty")
When using frozen=True and slots=True together, your dataclass instances become both immutable and memory-efficient. If you need to set attributes inside __post_init__ on a frozen dataclass (for computed fields or normalization), use object.__setattr__(self, 'field_name', value) to bypass the freeze during initialization. Be aware that freezing only prevents reassignment of the instance's own attributes -- if a frozen dataclass contains a mutable object like a list, that inner object can still be modified. Use tuple instead of list for truly immutable nested collections.
Generator Pipelines
Generators let you build data processing pipelines where each step processes one item at a time without loading the entire dataset into memory. This pattern is especially valuable when working with large files, API responses, or database result sets. The key insight is that generators are lazy: no computation happens until you actually iterate over them. This lets you compose arbitrarily complex pipelines that still use O(1) memory regardless of dataset size.
from typing import Generator, Iterable
from pathlib import Path

def read_lines(filepath: Path) -> Generator[str, None, None]:
    """Read lines from a file one at a time."""
    with open(filepath, "r") as f:
        for line in f:
            yield line.rstrip("\n")

def filter_nonempty(lines: Iterable[str]) -> Generator[str, None, None]:
    """Skip blank lines."""
    for line in lines:
        if line.strip():
            yield line

def parse_csv_row(lines: Iterable[str], delimiter: str = ",") -> Generator[list[str], None, None]:
    """Split each line into fields."""
    for line in lines:
        yield line.split(delimiter)

def filter_by_column(
    rows: Iterable[list[str]],
    column: int,
    value: str
) -> Generator[list[str], None, None]:
    """Keep only rows where a specific column matches a value."""
    for row in rows:
        if len(row) > column and row[column] == value:
            yield row

# Compose the pipeline
log_file = Path("server_access.log")
pipeline = filter_by_column(
    parse_csv_row(
        filter_nonempty(
            read_lines(log_file)
        )
    ),
    column=3,
    value="ERROR"
)

# Nothing runs until you iterate
for error_row in pipeline:
    print(error_row)
Each function in this pipeline is independent and reusable. You can compose them in different orders, swap one step for another, or add new steps without modifying any of the existing functions. The entire pipeline is lazy, meaning it only processes data as you consume it. This is the same philosophy behind Unix pipes, applied at the function level.
A Reusable Batching Generator
from typing import Generator, Iterable, TypeVar
from itertools import islice

T = TypeVar("T")

def batched(iterable: Iterable[T], batch_size: int) -> Generator[list[T], None, None]:
    """Yield successive batches from an iterable."""
    iterator = iter(iterable)
    while True:
        batch = list(islice(iterator, batch_size))
        if not batch:
            break
        yield batch

# Process 1000 records at a time
for batch in batched(read_lines(Path("large_dataset.csv")), batch_size=1000):
    process_batch(batch)
Python 3.12 introduced itertools.batched as a built-in. If you are running Python 3.12 or later, you can use from itertools import batched directly instead of writing your own. Note that the standard library version yields tuples, not lists, and Python 3.13 added a strict parameter that raises ValueError if the final batch is shorter than n (Python docs, itertools).
Generator Pipelines vs. List Comprehension Chains
A question that comes up frequently: when should you use a generator pipeline versus chaining list comprehensions or using map/filter? The answer comes down to two factors: dataset size and composability. If your entire dataset fits comfortably in memory and you do not need to reuse individual steps, a list comprehension is simpler and often faster due to lower per-element overhead. But once your dataset exceeds available memory, or once you need to mix and match processing steps across different contexts, generator pipelines become essential. The lazy evaluation model means you can process a 50 GB log file with the same pipeline that processes a 50 KB one, without changing any code.
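The trade-off is easy to see side by side. In this sketch (the data is illustrative), both styles compute the same result, but the eager version builds two intermediate lists while the lazy version holds only one item at a time:

```python
lines = ["10", "", "20", "x", "30"]

# Eager: each step materializes a full intermediate list in memory.
nonempty = [s for s in lines if s.strip()]
numeric = [s for s in nonempty if s.isdigit()]
values = [int(s) for s in numeric]

# Lazy: chained generator expressions hold no intermediate lists;
# nothing runs until sum() consumes the pipeline.
stripped = (s for s in lines if s.strip())
digits = (s for s in stripped if s.isdigit())
total = sum(int(s) for s in digits)

print(values)  # [10, 20, 30]
print(total)   # 60
```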
Mixin Classes for Composable Behavior
Mixins are small classes that provide a specific piece of functionality. They are not meant to stand alone. Instead, you combine them with other classes through multiple inheritance to compose the behavior you need.
import json
from datetime import datetime, timezone

class TimestampMixin:
    """Adds created_at and updated_at tracking."""
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        original_init = cls.__init__

        def new_init(self, *args, **kw):
            original_init(self, *args, **kw)
            now = datetime.now(timezone.utc)
            if not hasattr(self, "created_at"):
                self.created_at = now
            self.updated_at = now

        cls.__init__ = new_init

    def touch(self):
        """Update the modified timestamp."""
        self.updated_at = datetime.now(timezone.utc)

class JsonMixin:
    """Adds JSON serialization and deserialization."""
    def to_json(self, indent: int = 2) -> str:
        data = {}
        for key, value in self.__dict__.items():
            if isinstance(value, datetime):
                data[key] = value.isoformat()
            else:
                data[key] = value
        return json.dumps(data, indent=indent)

    @classmethod
    def from_json(cls, json_str: str):
        data = json.loads(json_str)
        return cls(**data)

class ValidatedMixin:
    """Adds a validate method that checks required fields are set and non-empty."""
    required_fields: list[str] = []

    def validate(self) -> list[str]:
        errors = []
        for field_name in self.required_fields:
            value = getattr(self, field_name, None)
            if value is None or (isinstance(value, str) and not value.strip()):
                errors.append(f"{field_name} is required")
        return errors

# Compose mixins into a concrete class
class AuditLog(TimestampMixin, JsonMixin, ValidatedMixin):
    required_fields = ["action", "user_id"]

    def __init__(self, action: str, user_id: str, details: str = ""):
        self.action = action
        self.user_id = user_id
        self.details = details

entry = AuditLog(action="login", user_id="u-001", details="From mobile app")
print(entry.to_json())
print(entry.validate())  # [] (no errors)
print(entry.created_at)  # Automatically set
Each mixin handles one concern. TimestampMixin tracks when objects are created and modified. JsonMixin adds serialization. ValidatedMixin adds field validation. You pick the ones you need and compose them together. This approach is far more flexible than a deep inheritance hierarchy because you can mix and match capabilities without creating a rigid class tree.
Multiple inheritance in Python follows the Method Resolution Order (MRO), which uses the C3 linearization algorithm to create a predictable lookup sequence. When combining mixins, be aware that the order of base classes matters. If two mixins define a method with the same name, the one listed first in the class definition takes priority. Use ClassName.__mro__ or ClassName.mro() to inspect the resolution order if you encounter unexpected behavior. The C3 algorithm guarantees that child classes always precede their parents and that the order specified in your class definition is respected (Python docs, MRO).
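A quick way to see C3 linearization in action (minimal stand-in classes, not the mixins above):

```python
class LoggingMixin:
    pass

class CacheMixin:
    pass

class Service(LoggingMixin, CacheMixin):
    pass

# C3 linearization: the class itself, then its bases in declaration
# order, then object last. Swapping the base order in the class
# definition would swap the two mixins here as well.
print([c.__name__ for c in Service.__mro__])
# ['Service', 'LoggingMixin', 'CacheMixin', 'object']
```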
When Not to Abstract: The Anti-Pattern Trap
Every pattern in this article has a shadow: the premature abstraction. It is tempting, once you learn these tools, to create decorators and Protocols and mixin classes for everything. Resist this urge. The wrong abstraction is far more costly than duplicated code.
Fowler's advice in Refactoring is to extract code into a named function whenever you have to pause and think about what a block of code is doing (Fowler et al., Refactoring, Addison-Wesley, 1999). But the flip side is equally important: do not extract code into an abstraction until you understand the pattern well enough to name it. Here are the warning signs that you are abstracting too early:
You have only one consumer. A reusable pattern that is only used once is not reusable. It is indirection. Wait until you see the same logic appear in at least two genuinely independent contexts before extracting it.
Your abstraction has more configuration than logic. If a decorator requires eight parameters to handle all the edge cases it might encounter, the abstraction is probably trying to do too much. Simpler, more focused patterns are easier to understand and more likely to be reused.
You are fighting the abstraction to add new features. When every new requirement forces you to modify the shared pattern and re-test all of its consumers, the abstraction is creating coupling rather than reducing it. Kent Beck captured this idea in Implementation Patterns: group logic that changes at the same rate together, and separate logic that changes at different rates (Beck, Implementation Patterns, Addison-Wesley, 2007).
The "Rule of Three" heuristic. A useful guideline is to wait until you see the same pattern three times before abstracting. The first time, you write it. The second time, you notice the similarity. The third time, you have enough context to understand what varies and what stays the same, and you can design an abstraction that captures the right level of generality.
Choosing the Right Pattern: A Decision Framework
With six different reusable patterns available, how do you choose which one to apply? Here is a decision framework based on the nature of the problem you are solving:
Is the reusable behavior something that wraps around existing functions without modifying them? Use a decorator. Logging, timing, retries, caching, authentication, and rate limiting are all decorator territory. The function's core logic stays untouched.
Does the behavior involve acquiring and releasing a resource? Use a context manager. Database connections, file handles, locks, temporary directories, and network sockets all need guaranteed cleanup. If you find yourself writing try/finally blocks, you probably want a context manager.
Are you defining an interface that multiple unrelated classes should conform to? Use a Protocol. If the implementing classes come from third-party code you cannot modify, Protocols are the only option. If you control all implementers and want enforcement, consider an ABC.
Are you modeling structured data that needs validation, equality comparison, or immutability? Use a dataclass. Configuration objects, API request/response models, value objects like money or coordinates, and domain entities are all natural fits. Use frozen=True when the data should not change after creation.
Are you processing data in a series of independent, composable steps? Use a generator pipeline. ETL workflows, log processing, data cleaning, and streaming transformations benefit from lazy evaluation and composable steps.
Do you need to add a specific capability to multiple unrelated classes? Use a mixin. Serialization, timestamp tracking, validation, and audit logging are the kinds of cross-cutting capabilities that mixins handle well. Keep each mixin focused on a single responsibility and avoid deep mixin hierarchies.
Key Takeaways
- Decorators let you wrap reusable behavior like retries, timing, logging, and caching around any function without modifying its internal code. Always use functools.wraps to preserve function metadata. Be aware of stacking order and avoid silently swallowing exceptions.
- Context managers handle resource acquisition and cleanup reliably. The contextlib.contextmanager decorator makes writing them straightforward for simple cases, while class-based managers give you more control for complex or reentrant scenarios.
- Protocol classes define interfaces through structural subtyping as specified in PEP 544. Functions that accept Protocol types work with any object that has the right methods, which makes your code flexible without sacrificing type safety. Choose between Protocols and ABCs based on whether you need opt-in enforcement.
- Frozen dataclasses with slots give you immutable, memory-efficient value objects with built-in equality comparison and hashing. Add validation through __post_init__ to catch invalid data early, and be mindful that nested mutable objects can still be modified.
- Generator pipelines let you compose independent data processing steps that run lazily and use minimal memory, making them ideal for processing large datasets or streaming data. Use list comprehensions for small, in-memory work; switch to generators when data size or composability demands it.
- Mixin classes provide composable behavior that you can combine through multiple inheritance. Each mixin handles one responsibility, and you assemble them as needed without building rigid class hierarchies. Understand the MRO to predict method resolution.
- Premature abstraction is the real enemy. Wait for the pattern to emerge from real usage before extracting shared code. The wrong abstraction creates more problems than the duplication it was meant to eliminate.
The common thread across all of these patterns is that they separate what your code does from how it does it. A retry decorator does not know or care what function it wraps. A Protocol does not know which classes implement it. A generator pipeline does not know where its data comes from. This separation is what makes the patterns truly reusable. Build them once, test them thoroughly, and then apply them confidently across every project you work on.
As Tim Peters wrote in the Zen of Python: "Simple is better than complex. Complex is better than complicated." The patterns in this article sit at that boundary. They add just enough structure to make your code predictable without adding so much that it becomes opaque. That balance is the real skill.