Python Class Example: The Complete Guide to Writing Real Classes That Actually Do Something

Many "Python class example" tutorials hand you a Dog class with a bark() method and call it a day. You copy the code, it runs, and you still have no idea when or why you would use a class in your own projects. That ends here.

This article walks through Python classes the way they actually work under the hood -- grounded in the language specification, informed by the Python Enhancement Proposals (PEPs) that shaped the feature set, and demonstrated with code you could adapt for production use tomorrow. No toy examples. No hand-waving. If you came here searching for a Python class example, you are going to leave understanding the machinery.

What a Class Actually Is

Before writing a single line of code, it helps to understand what Python is doing when you type class. A class in Python is not merely a template for creating objects -- it is itself an object. It is an instance of its metaclass, which by default is type. In a 2002 interview with Linux Journal, Guido van Rossum described the Python 2.2 release as the first step toward a unified object model where built-in and user-defined classes have equal standing (source: Linux Journal, Issue 98, 2002). That unification is the foundation of everything modern Python classes can do.

In practical terms, when the interpreter encounters a class statement, it evaluates the class body as a code block, collects the resulting namespace into a dictionary, and then calls the metaclass (usually type) with the class name, base classes, and that namespace dictionary to produce the class object. This process was formalized and extended by PEP 3115 -- Metaclasses in Python 3000, authored by Talin, which introduced the __prepare__ method and changed the metaclass declaration syntax from the Python 2 __metaclass__ attribute to the keyword argument form class Foo(metaclass=MyMeta) (source: peps.python.org/pep-3115).
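
You can watch this two-step process directly by calling type yourself. The sketch below (class and method names are illustrative) builds the same class twice -- once with the class statement, once by handing type a name, a bases tuple, and a namespace dictionary:

```python
# A class statement is (roughly) sugar for a three-argument call to type().
class Greeter:
    prefix = "Hello"

    def greet(self, name):
        return f"{self.prefix}, {name}!"


# The same class, built by calling the metaclass explicitly:
def greet(self, name):
    return f"{self.prefix}, {name}!"

GreeterDynamic = type("GreeterDynamic", (), {"prefix": "Hello", "greet": greet})

print(Greeter().greet("Ada"))         # Hello, Ada!
print(GreeterDynamic().greet("Ada"))  # Hello, Ada!
print(type(Greeter) is type)          # True -- the default metaclass is type
```

The dictionary passed as the third argument is exactly the namespace the class body produced; the class statement just collects it for you.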

Note

Understanding this creation process is not academic trivia. It explains why class bodies execute at definition time, why you can put arbitrary statements inside them, and why the order you write things in can matter. Think of a class block as a function that runs once and whose local namespace becomes the class dictionary -- because that is literally what happens.

Your First Real Class: A Configuration Manager

Forget dogs and cats. Here is a class that solves an actual problem -- managing application configuration with defaults, validation, and the ability to export settings:

class AppConfig:
    """Manages application configuration with defaults and validation."""

    _defaults = {
        "debug": False,
        "log_level": "INFO",
        "max_retries": 3,
        "timeout_seconds": 30,
    }

    def __init__(self, **overrides):
        self._settings = dict(self._defaults)
        for key, value in overrides.items():
            self.set(key, value)

    def get(self, key):
        if key not in self._settings:
            raise KeyError(f"Unknown configuration key: {key!r}")
        return self._settings[key]

    def set(self, key, value):
        if key not in self._defaults:
            raise KeyError(
                f"Unknown configuration key: {key!r}. "
                f"Valid keys: {', '.join(self._defaults)}"
            )
        self._validate(key, value)
        self._settings[key] = value

    def _validate(self, key, value):
        expected_type = type(self._defaults[key])
        if not isinstance(value, expected_type):
            raise TypeError(
                f"{key!r} expects {expected_type.__name__}, "
                f"got {type(value).__name__}"
            )
        if key == "log_level" and value not in ("DEBUG", "INFO", "WARNING", "ERROR"):
            raise ValueError(f"Invalid log level: {value!r}")
        if key == "max_retries" and value < 0:
            raise ValueError("max_retries cannot be negative")

    def to_dict(self):
        return dict(self._settings)

    def __repr__(self):
        changed = {
            k: v for k, v in self._settings.items()
            if v != self._defaults[k]
        }
        if changed:
            overrides = ", ".join(f"{k}={v!r}" for k, v in changed.items())
            return f"AppConfig({overrides})"
        return "AppConfig()"

config = AppConfig(debug=True, log_level="DEBUG")
print(config)              # AppConfig(debug=True, log_level='DEBUG')
print(config.get("timeout_seconds"))  # 30

config.set("max_retries", 5)
print(config.to_dict())
# {'debug': True, 'log_level': 'DEBUG', 'max_retries': 5, 'timeout_seconds': 30}

This class demonstrates several things that matter in practice: class-level attributes (_defaults) shared across all instances, instance-level attributes (_settings) unique to each object, input validation in a private method, and a useful __repr__ that shows only what the user changed. Notice how _defaults uses a single leading underscore -- this follows the convention documented in PEP 8 -- Style Guide for Python Code, which defines single leading underscores as indicating "internal use."

Ask yourself: why is _defaults a class attribute and not an instance attribute? Because the defaults never change per instance -- they define the schema. Putting them on the class saves memory (one dictionary instead of one per instance), signals intent ("this is shared configuration"), and makes subclassing easier: a child class can override _defaults to add new keys without touching the parent.
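
Here is what that subclassing looks like in practice. This sketch uses a condensed stand-in for the AppConfig class above (AppConfigLite and WebAppConfig are illustrative names):

```python
class AppConfigLite:
    """Condensed version of AppConfig, enough to show schema inheritance."""

    _defaults = {"debug": False, "timeout_seconds": 30}

    def __init__(self, **overrides):
        self._settings = dict(self._defaults)
        for key, value in overrides.items():
            if key not in self._defaults:
                raise KeyError(f"Unknown configuration key: {key!r}")
            self._settings[key] = value

    def get(self, key):
        return self._settings[key]


class WebAppConfig(AppConfigLite):
    """A child schema: merge the parent's defaults and add new keys."""

    _defaults = {**AppConfigLite._defaults, "port": 8000}


cfg = WebAppConfig(port=9000, debug=True)
print(cfg.get("port"))             # 9000
print(cfg.get("timeout_seconds"))  # 30 -- inherited default still applies
```

Because _settings is built from whichever _defaults the instance's class sees, the parent's methods work on the extended schema with no changes.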

The __init__ Method Is Not a Constructor

This is one of the more common misunderstandings about Python classes. The __init__ method does not create the object -- it initializes one that already exists. The actual constructor is __new__, which is called first and is responsible for creating and returning the new instance. In the vast majority of cases, you never need to touch __new__ because the default implementation inherited from object handles instance creation perfectly well. But understanding the distinction matters when you encounter patterns like singletons, immutable types, or cached instances.

Here is the actual call sequence when you write MyClass(arg1, arg2):

# What Python actually does (simplified):
# 1. Call MyClass.__new__(MyClass, arg1, arg2) -> returns instance
# 2. If isinstance(instance, MyClass), call instance.__init__(arg1, arg2)
# 3. Return instance

The mental model to carry: __new__ controls whether and what object exists. __init__ controls what state that object starts with. This two-phase creation is why you can subclass immutable types like int and str -- by the time __init__ runs, the value is already set in __new__, and since the type is immutable, __init__ cannot change it.

Pro Tip

The __new__/__init__ split was part of the Python 2.2 type/class unification -- the design decision that gave Python classes the same power as built-in types. You can override __new__ to implement patterns like the singleton or to subclass immutable built-ins like int and str. If your __new__ returns an instance of a different class, __init__ will not be called at all -- Python checks the return type.
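
Both patterns from the tip above can be sketched in a few lines. Inventory and Percent are illustrative names, and the clamping behavior is an assumption chosen for the demo:

```python
class Inventory:
    """Singleton via __new__: every call returns the same instance."""

    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance


a = Inventory()
b = Inventory()
print(a is b)  # True


class Percent(int):
    """Subclassing an immutable built-in: clamp in __new__, because
    by the time __init__ runs, the int value is already fixed."""

    def __new__(cls, value):
        return super().__new__(cls, max(0, min(100, value)))


print(Percent(150))      # 100
print(Percent(42) + 8)   # 50 -- still behaves like an int
```

Note that with the singleton, __init__ still runs on every call; if that matters, guard the initialization as well.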

Inheritance: The Mechanism and the Method Resolution Order

Inheritance in Python is straightforward in simple cases and surprisingly nuanced in complex ones. Here is a practical example -- a logging system where different log destinations inherit from a common interface:

import datetime


class LogHandler:
    """Base class for all log handlers."""

    def __init__(self, min_level="DEBUG"):
        self.min_level = min_level
        self._levels = ("DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL")

    def _should_log(self, level):
        return self._levels.index(level) >= self._levels.index(self.min_level)

    def emit(self, level, message):
        raise NotImplementedError("Subclasses must implement emit()")

    def log(self, level, message):
        if self._should_log(level):
            timestamp = datetime.datetime.now().isoformat(timespec="seconds")
            formatted = f"[{timestamp}] {level}: {message}"
            self.emit(level, formatted)


class ConsoleHandler(LogHandler):
    """Writes log messages to the console."""

    def emit(self, level, message):
        print(message)


class FileHandler(LogHandler):
    """Appends log messages to a file."""

    def __init__(self, filepath, min_level="DEBUG"):
        super().__init__(min_level)
        self.filepath = filepath

    def emit(self, level, message):
        with open(self.filepath, "a") as f:
            f.write(message + "\n")

The super().__init__(min_level) call in FileHandler uses the zero-argument form of super() introduced in Python 3 by PEP 3135 -- New Super, authored by Calvin Spealman, Tim Delaney, and Lie Ryan (source: peps.python.org/pep-3135). The PEP's rationale was direct: the old Python 2 syntax super(FileHandler, self).__init__(min_level) violated the DRY principle by requiring you to explicitly name the current class -- a problem when renaming classes. Under the hood, the compiler detects the use of super within a method and automatically provides a __class__ cell reference, so the zero-argument call resolves the correct class without you naming it.

Multiple Inheritance and the MRO

Python supports multiple inheritance, and the method resolution order (MRO) determines which parent's method gets called. Python uses the C3 linearization algorithm, which guarantees a consistent, predictable ordering. You can inspect it directly:

class Timestamped:
    """Mixin that adds timestamp tracking."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.created_at = datetime.datetime.now()


class Auditable:
    """Mixin that tracks who created an object."""

    def __init__(self, created_by="system", **kwargs):
        super().__init__(**kwargs)
        self.created_by = created_by


class AuditedConfig(Timestamped, Auditable, AppConfig):
    """Configuration with full audit trail."""
    pass


print(AuditedConfig.__mro__)
# (<class 'AuditedConfig'>, <class 'Timestamped'>,
#  <class 'Auditable'>, <class 'AppConfig'>, <class 'object'>)

Note

The **kwargs pattern in __init__ is critical for cooperative multiple inheritance. Each class in the MRO takes the keyword arguments it recognizes and passes the rest along via super().__init__(**kwargs). Without this pattern, the chain breaks and parent classes do not get initialized correctly. This is a design contract: every class that participates in cooperative inheritance must forward unknown keyword arguments.
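
A condensed, self-contained version of that contract (mirroring the Timestamped and Auditable mixins above) shows each __init__ consuming its own keyword and forwarding the rest:

```python
import datetime


class Timestamped:
    def __init__(self, **kwargs):
        super().__init__(**kwargs)   # forward everything we don't use
        self.created_at = datetime.datetime.now()


class Auditable:
    def __init__(self, created_by="system", **kwargs):
        super().__init__(**kwargs)   # consume created_by, forward the rest
        self.created_by = created_by


class Record(Timestamped, Auditable):
    """Each __init__ in the MRO runs exactly once, in MRO order."""


r = Record(created_by="alice")
print(r.created_by)               # alice
print(r.created_at is not None)   # True
```

Trace it against the MRO: Record delegates to Timestamped, which forwards created_by to Auditable, which consumes it and forwards an empty kwargs to object.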

Properties: Computed Attributes Without the Getter/Setter Ceremony

One of Python's strengths is that attribute access looks the same whether it is a simple value or a computed property. The @property decorator lets you define methods that behave like attributes:

class Temperature:
    """Temperature with automatic Fahrenheit conversion."""

    def __init__(self, celsius):
        self.celsius = celsius   # This triggers the setter

    @property
    def celsius(self):
        return self._celsius

    @celsius.setter
    def celsius(self, value):
        if value < -273.15:
            raise ValueError("Temperature below absolute zero is not physical")
        self._celsius = value

    @property
    def fahrenheit(self):
        return self._celsius * 9 / 5 + 32

    @fahrenheit.setter
    def fahrenheit(self, value):
        self.celsius = (value - 32) * 5 / 9

    def __repr__(self):
        return f"Temperature({self._celsius:.1f}C / {self.fahrenheit:.1f}F)"

t = Temperature(100)
print(t)                 # Temperature(100.0C / 212.0F)
t.fahrenheit = 32
print(t)                 # Temperature(0.0C / 32.0F)
t.celsius = -300         # ValueError: Temperature below absolute zero

Properties are implemented using Python's descriptor protocol, which is worth understanding on its own. Under the hood, property is a class with __get__, __set__, and __delete__ methods. When you access t.celsius, Python's attribute lookup machinery finds the property descriptor on the class and calls its __get__ method instead of returning the object directly. This mechanism is documented in the Python Data Model reference and is one of the more elegant parts of the language design.

The Descriptor Protocol: How Python Really Manages Attribute Access

Properties are the gateway to a much deeper mechanism: the descriptor protocol. Any object that defines __get__, __set__, or __delete__ is a descriptor, and Python will invoke those methods when the descriptor is accessed as a class attribute. This is how property, classmethod, staticmethod, and even plain functions work -- a function object has a __get__ method that returns a bound method when accessed through an instance.

Here is a practical descriptor -- a typed field validator that you can reuse across classes:

class Validated:
    """Descriptor that enforces a type constraint on assignment."""

    def __init__(self, expected_type, min_value=None, max_value=None):
        self.expected_type = expected_type
        self.min_value = min_value
        self.max_value = max_value

    def __set_name__(self, owner, name):
        # Called automatically when the class is created (PEP 487)
        self.public_name = name
        self.private_name = f"_{name}"

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, self.private_name, None)

    def __set__(self, obj, value):
        if not isinstance(value, self.expected_type):
            raise TypeError(
                f"{self.public_name!r} requires {self.expected_type.__name__}, "
                f"got {type(value).__name__}"
            )
        if self.min_value is not None and value < self.min_value:
            raise ValueError(f"{self.public_name!r} must be >= {self.min_value}")
        if self.max_value is not None and value > self.max_value:
            raise ValueError(f"{self.public_name!r} must be <= {self.max_value}")
        setattr(obj, self.private_name, value)


class ServerConfig:
    port = Validated(int, min_value=1, max_value=65535)
    host = Validated(str)
    workers = Validated(int, min_value=1)

    def __init__(self, host, port, workers=4):
        self.host = host
        self.port = port
        self.workers = workers

srv = ServerConfig("0.0.0.0", 8080, workers=8)
print(srv.port)   # 8080
srv.port = 70000  # ValueError: 'port' must be <= 65535
srv.host = 9001   # TypeError: 'host' requires str, got int

Notice the __set_name__ method. This hook, introduced alongside __init_subclass__ in PEP 487, is called automatically when the owning class is created. Before PEP 487, descriptors had no way to know what attribute name they were assigned to without metaclass intervention. Now, __set_name__ receives the owner class and the attribute name, making self-aware descriptors trivial to write.

Pro Tip

Descriptors divide into two categories: data descriptors (which define __set__ or __delete__) and non-data descriptors (which only define __get__). Data descriptors take priority over instance dictionaries, while non-data descriptors do not. This is why a property setter intercepts assignment: it is a data descriptor. Understanding this priority order is the key to debugging attribute access surprises.
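
The priority order is easy to verify. In this sketch (NonData, Data, and Demo are illustrative names), we write into the instance __dict__ directly so that __set__ is never triggered, then watch which lookup wins:

```python
class NonData:
    """Only __get__: the instance __dict__ takes priority over this."""

    def __get__(self, obj, objtype=None):
        return "descriptor value"


class Data:
    """__get__ plus __set__: a data descriptor beats the instance __dict__."""

    def __get__(self, obj, objtype=None):
        return "descriptor value"

    def __set__(self, obj, value):
        # Normal assignment (obj.d = ...) would land here.
        obj.__dict__["d_storage"] = value


class Demo:
    nd = NonData()
    d = Data()


obj = Demo()
print(obj.nd)                          # descriptor value
obj.__dict__["nd"] = "instance value"
print(obj.nd)                          # instance value -- dict beats non-data

print(obj.d)                           # descriptor value
obj.__dict__["d"] = "instance value"
print(obj.d)                           # descriptor value -- data descriptor wins
```

This is also why bound methods (non-data descriptors) can be shadowed per-instance, while properties cannot.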

Class Methods and Static Methods: When self Is Not What You Need

Regular methods receive the instance as their first argument. But sometimes you need a method that operates on the class itself, or a method that belongs logically to the class but does not need access to either the instance or the class:

class User:
    _registry = {}

    def __init__(self, username, email):
        self.username = username
        self.email = email
        User._registry[username] = self

    @classmethod
    def from_email(cls, email):
        """Create a user with username derived from email."""
        username = email.split("@")[0]
        return cls(username, email)

    @classmethod
    def find(cls, username):
        """Look up a user by username."""
        return cls._registry.get(username)

    @staticmethod
    def is_valid_email(email):
        """Basic email format check."""
        return "@" in email and "." in email.split("@")[-1]

    def __repr__(self):
        return f"User({self.username!r}, {self.email!r})"

# Regular construction
alice = User("alice", "alice@example.com")

# Factory method via @classmethod
bob = User.from_email("bob@example.com")
print(bob)  # User('bob', 'bob@example.com')

# Lookup
print(User.find("alice"))  # User('alice', 'alice@example.com')

# Validation utility (no instance or class needed)
print(User.is_valid_email("bad-address"))  # False

The @classmethod decorator is particularly important for inheritance. Because cls receives the actual class being called (which may be a subclass), factory methods like from_email correctly create instances of the subclass when called on one. If you hardcoded User(username, email) instead of cls(username, email), subclasses would always produce User instances instead of their own type. This is a subtle but critical difference -- it determines whether your class is truly extensible or just appears to be.
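
You can see this polymorphism with a minimal subclass. This sketch condenses the User class above; AdminUser is an illustrative name:

```python
class User:
    def __init__(self, username, email):
        self.username = username
        self.email = email

    @classmethod
    def from_email(cls, email):
        return cls(email.split("@")[0], email)   # cls, not a hardcoded User


class AdminUser(User):
    """No extra code needed: the factory already honors the subclass."""


admin = AdminUser.from_email("root@example.com")
print(type(admin).__name__)   # AdminUser
print(admin.username)         # root
```

Had from_email called User(...) directly, the line above would print User instead.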

Slots: Controlling Memory Layout and Attribute Access

By default, Python stores instance attributes in a per-instance __dict__ dictionary. For classes with a fixed set of attributes, especially those that create many instances, you can replace this dictionary with a fixed-size structure using __slots__:

class Point:
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x = x
        self.y = y

    def distance_to(self, other):
        return ((self.x - other.x) ** 2 + (self.y - other.y) ** 2) ** 0.5

    def __repr__(self):
        return f"Point({self.x}, {self.y})"

p = Point(3, 4)
print(p.x)            # 3
p.z = 5               # AttributeError: 'Point' object has no attribute 'z'
print(hasattr(p, '__dict__'))  # False

Slots offer three advantages: they reduce memory consumption (no per-instance dictionary), they produce slightly faster attribute access, and they prevent accidental attribute creation through typos. The trade-off is reduced flexibility -- you cannot dynamically add new attributes to slotted instances.

Watch Out

Slots do not participate in inheritance the way you might expect. If a parent class defines __slots__ but a child class does not, the child class will still have a __dict__ and lose the memory benefits. Both parent and child must define __slots__ for the optimization to carry through. Also, you cannot use __slots__ with multiple inheritance when more than one parent defines non-empty slots (you will get a layout conflict error).
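
A short demonstration of that inheritance caveat (class names here are illustrative):

```python
class SlottedParent:
    __slots__ = ("a",)


class LeakyChild(SlottedParent):
    # No __slots__ here, so instances regain a __dict__ -- memory win lost.
    pass


class TightChild(SlottedParent):
    __slots__ = ("b",)   # list only the *new* attributes; never repeat "a"


leaky = LeakyChild()
leaky.anything = 1                   # works again: __dict__ is back
print(hasattr(leaky, "__dict__"))    # True

tight = TightChild()
print(hasattr(tight, "__dict__"))    # False
tight.a = 1                          # slot inherited from the parent
tight.b = 2                          # slot added by the child
```

Repeating a parent's slot name in the child is legal but wastes memory by allocating the slot twice, so list only new attributes.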

In Python 3.10 and later, you can combine slots with dataclasses using @dataclass(slots=True), which automatically generates the __slots__ definition from your field annotations -- the best of both worlds.

Data Classes: The Modern Approach (PEP 557)

In 2017, Eric V. Smith authored PEP 557 -- Data Classes, which was accepted for Python 3.7 (source: peps.python.org/pep-0557). The PEP explicitly acknowledged the influence of Hynek Schlawack's attrs library on its design. Data classes eliminate the boilerplate of writing __init__, __repr__, and comparison methods for classes that are primarily data containers:

from dataclasses import dataclass, field


@dataclass
class NetworkRequest:
    url: str
    method: str = "GET"
    headers: dict = field(default_factory=dict)
    timeout: float = 30.0
    retries: int = 3

    def __post_init__(self):
        if not self.url.startswith(("http://", "https://")):
            raise ValueError(f"URL must start with http:// or https://: {self.url!r}")
        self.method = self.method.upper()

req = NetworkRequest("https://api.example.com/data", headers={"Auth": "Bearer xyz"})
print(req)
# NetworkRequest(url='https://api.example.com/data', method='GET',
#                headers={'Auth': 'Bearer xyz'}, timeout=30.0, retries=3)

# Equality comparison is auto-generated
req2 = NetworkRequest("https://api.example.com/data", headers={"Auth": "Bearer xyz"})
print(req == req2)  # True

Watch Out

The field(default_factory=dict) pattern is essential. If you wrote headers: dict = {} instead, the @dataclass decorator would reject the class outright, raising ValueError: mutable default <class 'dict'> for field headers is not allowed. This guard exists precisely because of the classic mutable default trap: a single dictionary object stored on the class would be shared by every instance. The default_factory parameter tells the decorator to call dict() fresh for each new instance.

The __post_init__ method runs after the generated __init__ completes, giving you a hook for validation and transformation. For cases where you want immutability, use @dataclass(frozen=True):

@dataclass(frozen=True)
class Coordinate:
    latitude: float
    longitude: float

    def distance_to(self, other):
        """Approximate distance in km using equirectangular projection."""
        import math
        lat1, lon1 = math.radians(self.latitude), math.radians(self.longitude)
        lat2, lon2 = math.radians(other.latitude), math.radians(other.longitude)
        x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
        y = lat2 - lat1
        return math.sqrt(x**2 + y**2) * 6371

Frozen data classes raise FrozenInstanceError if you try to modify any field after creation, and they automatically get a __hash__ method, making them usable as dictionary keys and set members.
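
Both behaviors are easy to confirm with the Coordinate class (redefined here in condensed form so the snippet stands alone):

```python
from dataclasses import dataclass, FrozenInstanceError


@dataclass(frozen=True)
class Coordinate:
    latitude: float
    longitude: float


home = Coordinate(52.52, 13.405)

# Usable as a dict key: frozen=True plus the generated __eq__ yields __hash__
distances = {home: 0.0}
print(distances[Coordinate(52.52, 13.405)])   # 0.0 -- equal fields, equal hash

try:
    home.latitude = 0.0
except FrozenInstanceError as e:
    print(f"Blocked: {e}")
```

Note that a separately constructed but field-equal Coordinate retrieves the same dictionary entry, which is exactly what value-style keys should do.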

Python 3.10+ Dataclass Features

Python 3.10 added two significant dataclass features worth knowing. The slots=True parameter automatically generates __slots__ from your type-annotated fields, combining the convenience of dataclasses with the memory efficiency of slots. The kw_only=True parameter makes all fields keyword-only in the generated __init__, preventing positional argument mistakes in classes with many fields:

@dataclass(slots=True, kw_only=True)
class APIResponse:
    status_code: int
    body: str
    headers: dict = field(default_factory=dict)
    elapsed_ms: float = 0.0

# Must use keyword arguments -- positional will raise TypeError
response = APIResponse(status_code=200, body='{"ok": true}')
print(hasattr(response, '__dict__'))  # False -- slots are active

Python 3.10 also added match_args=True (enabled by default), which generates a __match_args__ tuple for use with structural pattern matching (match/case statements). This means your dataclasses are automatically compatible with Python's pattern matching syntax without any extra work.

Abstract Base Classes: Enforcing Interfaces (PEP 3119)

Python's duck typing is powerful, but sometimes you want to guarantee that a subclass implements specific methods. Abstract base classes (ABCs), introduced by PEP 3119 and available via the abc module, provide this enforcement:

from abc import ABC, abstractmethod


class Serializer(ABC):
    """Interface for objects that can be serialized and deserialized."""

    @abstractmethod
    def serialize(self, data):
        """Convert data to a serialized format."""
        ...

    @abstractmethod
    def deserialize(self, raw):
        """Reconstruct data from serialized format."""
        ...

    def round_trip(self, data):
        """Verify serialization integrity (concrete method)."""
        return self.deserialize(self.serialize(data))


class JSONSerializer(Serializer):
    def serialize(self, data):
        import json
        return json.dumps(data)

    def deserialize(self, raw):
        import json
        return json.loads(raw)


# This would raise TypeError at instantiation time:
# s = Serializer()  # TypeError: Can't instantiate abstract class

The critical difference between ABCs and the raise NotImplementedError approach used in the LogHandler example above is when you discover the problem. With NotImplementedError, you find out at runtime, when the missing method is called -- possibly in production. With an ABC, you find out at instantiation time: Python refuses to create an instance of any class that has unimplemented abstract methods. This shifts the failure from "sometime later" to "immediately," which is a significant improvement for large codebases.
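
The difference is easy to demonstrate: a subclass that implements only one of the two abstract methods fails the moment you try to instantiate it. HalfDone is an illustrative name:

```python
from abc import ABC, abstractmethod


class Serializer(ABC):
    @abstractmethod
    def serialize(self, data): ...

    @abstractmethod
    def deserialize(self, raw): ...


class HalfDone(Serializer):
    def serialize(self, data):   # deserialize is still missing
        return str(data)


try:
    HalfDone()
except TypeError as e:
    print(f"Caught at instantiation: {e}")
```

The failure names the missing method, so the fix is obvious long before any serialization code runs.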

Note

ABCs can also define concrete methods (like round_trip above) that subclasses inherit for free. This makes ABCs more powerful than pure interfaces -- they let you provide shared behavior alongside the interface contract. You can also use @abstractmethod in combination with @property, @classmethod, and @staticmethod to require subclasses to implement specific attribute types.
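
For example, stacking @property on top of @abstractmethod (in that order -- abstractmethod must be the innermost decorator) forces subclasses to provide a read-only attribute. Shape and Square are illustrative names:

```python
from abc import ABC, abstractmethod


class Shape(ABC):
    @property
    @abstractmethod
    def area(self):
        """Subclasses must expose area as a property."""


class Square(Shape):
    def __init__(self, side):
        self.side = side

    @property
    def area(self):
        return self.side ** 2


print(Square(4).area)   # 16
# Shape() would raise TypeError: area is still abstract on the base class
```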

Context Managers: Resource Safety Through Protocol

The with statement relies on two dunder methods -- __enter__ and __exit__ -- to guarantee resource cleanup regardless of whether exceptions occur. This is one of the most practically important class protocols in Python:

class DatabaseConnection:
    """Manages a database connection lifecycle."""

    def __init__(self, connection_string):
        self.connection_string = connection_string
        self._connection = None

    def __enter__(self):
        print(f"Connecting to {self.connection_string}")
        self._connection = self._connect()
        return self._connection

    def __exit__(self, exc_type, exc_val, exc_tb):
        if self._connection is not None:
            if exc_type is not None:
                print(f"Rolling back due to {exc_type.__name__}: {exc_val}")
                self._connection.rollback()
            else:
                self._connection.commit()
            self._connection.close()
            self._connection = None
        return False  # Do not suppress exceptions

    def _connect(self):
        # Placeholder for actual connection logic
        class FakeConnection:
            def commit(self): print("Committed")
            def rollback(self): print("Rolled back")
            def close(self): print("Connection closed")
            def execute(self, sql): print(f"Executing: {sql}")
        return FakeConnection()

# Resource is guaranteed to be cleaned up
with DatabaseConnection("postgres://localhost/mydb") as conn:
    conn.execute("INSERT INTO users VALUES ('alice')")
# Output: Connecting to postgres://localhost/mydb
#         Executing: INSERT INTO users VALUES ('alice')
#         Committed
#         Connection closed

The return value of __exit__ controls exception propagation. Returning False (or any falsy value) re-raises any exception that occurred in the with block. Returning True suppresses it. In practice, you almost always want False -- suppressing exceptions silently is a recipe for hidden bugs. The three arguments (exc_type, exc_val, exc_tb) are None when the block completes without error, giving you a clean way to distinguish normal exit from error exit.
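
For simple cases, the standard library's contextlib.contextmanager decorator expresses the same protocol as a generator: code before the yield plays the role of __enter__, code after it plays __exit__. This sketch pairs it with a fake connection (an assumption standing in for a real driver) so the commit/rollback paths are visible:

```python
from contextlib import contextmanager


@contextmanager
def transaction(conn):
    """Generator-based equivalent of the __enter__/__exit__ pair above."""
    try:
        yield conn        # the body of the with block runs here
        conn.commit()     # reached only if the block raised nothing
    except Exception:
        conn.rollback()
        raise             # re-raise: like __exit__ returning False
    finally:
        conn.close()


class FakeConnection:
    def __init__(self):
        self.log = []

    def commit(self):
        self.log.append("commit")

    def rollback(self):
        self.log.append("rollback")

    def close(self):
        self.log.append("close")


conn = FakeConnection()
with transaction(conn) as c:
    pass
print(conn.log)   # ['commit', 'close']
```

An exception raised inside the with block surfaces at the yield, triggering the rollback branch before the finally clause closes the connection.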

Customizing Class Creation Without Metaclasses (PEP 487)

Metaclasses are powerful but notoriously difficult to combine. PEP 487 -- Simpler customisation of class creation, authored by Martin Teichmann and accepted for Python 3.6, addressed a key friction point: combining two base classes from different libraries can require manually creating a combined metaclass (source: peps.python.org/pep-0487). PEP 487 introduced __init_subclass__, a hook that runs whenever a class is subclassed without requiring a custom metaclass. This is the modern way to build plugin registries, enforce interface contracts, or run setup logic when new classes are defined:

class Plugin:
    """Base class that auto-registers plugins."""

    _registry = {}

    def __init_subclass__(cls, plugin_name=None, **kwargs):
        super().__init_subclass__(**kwargs)
        name = plugin_name or cls.__name__.lower()
        Plugin._registry[name] = cls

    @classmethod
    def create(cls, name, *args, **kwargs):
        if name not in cls._registry:
            raise ValueError(
                f"Unknown plugin: {name!r}. "
                f"Available: {', '.join(cls._registry)}"
            )
        return cls._registry[name](*args, **kwargs)


class MarkdownRenderer(Plugin, plugin_name="markdown"):
    def render(self, text):
        return f"<p>{text}</p>"


class JSONRenderer(Plugin, plugin_name="json"):
    def render(self, data):
        import json
        return json.dumps(data, indent=2)


# The registry is populated automatically at class definition time
renderer = Plugin.create("markdown")
print(renderer.render("Hello world"))  # <p>Hello world</p>

print(Plugin._registry)
# {'markdown': <class 'MarkdownRenderer'>, 'json': <class 'JSONRenderer'>}

Pro Tip

The __init_subclass__ method is implicitly a classmethod -- you do not need to apply the @classmethod decorator. PEP 487 also introduced __set_name__, which is called on descriptors when the class they belong to is created. This is what allows descriptors to know what attribute name they were assigned to, which was previously impossible without metaclass intervention.

Dunder Methods That Matter: Making Your Classes Pythonic

Python's data model defines a rich set of special methods (the "dunder" methods) that let your classes integrate seamlessly with the language's syntax and built-in functions. Here is a practical example -- a class representing a time duration that supports arithmetic and comparison:

class Duration:
    """Represents a time duration with arithmetic support."""

    def __init__(self, seconds=0, minutes=0, hours=0):
        self._total_seconds = seconds + minutes * 60 + hours * 3600

    @property
    def total_seconds(self):
        return self._total_seconds

    def __add__(self, other):
        if isinstance(other, Duration):
            return Duration(seconds=self._total_seconds + other._total_seconds)
        return NotImplemented

    def __sub__(self, other):
        if isinstance(other, Duration):
            return Duration(seconds=self._total_seconds - other._total_seconds)
        return NotImplemented

    def __mul__(self, factor):
        if isinstance(factor, (int, float)):
            return Duration(seconds=self._total_seconds * factor)
        return NotImplemented

    def __rmul__(self, factor):
        return self.__mul__(factor)

    def __eq__(self, other):
        if isinstance(other, Duration):
            return self._total_seconds == other._total_seconds
        return NotImplemented

    def __lt__(self, other):
        if isinstance(other, Duration):
            return self._total_seconds < other._total_seconds
        return NotImplemented

    def __le__(self, other):
        if isinstance(other, Duration):
            return self._total_seconds <= other._total_seconds
        return NotImplemented

    def __hash__(self):
        return hash(self._total_seconds)

    def __bool__(self):
        return self._total_seconds != 0

    def __repr__(self):
        hours, remainder = divmod(abs(self._total_seconds), 3600)
        minutes, seconds = divmod(remainder, 60)
        sign = "-" if self._total_seconds < 0 else ""
        parts = []
        if hours:
            parts.append(f"{hours}h")
        if minutes:
            parts.append(f"{minutes}m")
        if seconds or not parts:
            parts.append(f"{seconds}s")
        return f"Duration({sign}{' '.join(parts)})"

meeting = Duration(minutes=45)
break_time = Duration(minutes=15)
full_block = meeting + break_time
print(full_block)         # Duration(1h)

double = meeting * 2
print(double)             # Duration(1h 30m)
print(3 * break_time)     # Duration(45m) -- works because of __rmul__
print(bool(Duration()))   # False -- zero duration is falsy

Watch Out

Returning NotImplemented (the singleton, not the exception NotImplementedError) is critical. It tells Python that this particular type combination is not supported, allowing Python to try the other operand's reflected method. If you raise an exception or return None instead, you break Python's operator dispatch mechanism. Also note the __hash__ implementation -- in Python 3, if you define __eq__, your class becomes unhashable by default unless you also define __hash__. This is documented in the Data Model and catches many developers off guard.

When to Use a Class vs. When Not To

Guido van Rossum has expressed the idea that the joy of Python lies in short, readable classes that pack a lot of action into a small amount of clear code -- not in reams of trivial code (source: attributed to van Rossum, widely cited in the Python community). The operative word is short. Not every problem needs a class.

Use a class when you have state that multiple functions need to operate on together, when you need multiple instances of something with the same behavior, when you want to take advantage of polymorphism through inheritance, or when you are modeling a concept from your problem domain that has both data and behavior.

Do not force a class when a function or a simple dictionary would suffice. A class with only __init__ and one other method is almost always better expressed as a function. A class whose methods never reference self should probably be a module of functions. Python gives you the tools -- knowing when to reach for each one is part of the craft.

A useful heuristic: if you find yourself passing the same three or four arguments between several functions, that cluster of data and behavior probably wants to be a class. Conversely, if your class has a single public method called something generic like run() or execute() and no meaningful state beyond what gets passed in, a function is the right tool.
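The heuristic is easiest to see side by side. In this sketch, Connection and its host/port/timeout parameters are hypothetical names chosen for illustration:

```python
# Before: the same cluster of arguments travels through every call.
def connect(host, port, timeout):
    return f"connecting to {host}:{port} (timeout={timeout})"

def send(host, port, timeout, data):
    return f"sending {data!r} to {host}:{port} (timeout={timeout})"

# After: the cluster becomes instance state, and the functions
# that shared it become methods.
class Connection:
    def __init__(self, host, port, timeout=5.0):
        self.host = host
        self.port = port
        self.timeout = timeout

    def address(self):
        return f"{self.host}:{self.port}"

    def send(self, data):
        return f"sending {data!r} to {self.address()}"

conn = Connection("example.com", 8080)
print(conn.send("ping"))  # sending 'ping' to example.com:8080
```

The call sites shrink from four arguments to one, and the host/port/timeout trio can no longer drift out of sync between calls.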

The PEPs That Shaped Python's Class System

The class system you use today was shaped by a series of deliberate language design decisions, each documented in a PEP. Here are the ones that matter:

PEP 3115 -- Metaclasses in Python 3000 (Talin, 2007) changed how metaclasses are declared and introduced __prepare__, allowing metaclasses to control the namespace used during class body evaluation. This enabled use cases like preserving the declaration order of class members (source: peps.python.org/pep-3115).
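A minimal sketch of what __prepare__ enables -- OrderedMeta and Config are illustrative names, and note that since Python 3.7 a plain dict preserves insertion order anyway, so the modern value of __prepare__ lies in substituting richer custom mappings:

```python
from collections import OrderedDict

class OrderedMeta(type):
    @classmethod
    def __prepare__(mcls, name, bases, **kwargs):
        # The mapping returned here becomes the namespace the class
        # body executes in -- PEP 3115's key addition.
        return OrderedDict()

    def __new__(mcls, name, bases, namespace, **kwargs):
        cls = super().__new__(mcls, name, bases, dict(namespace))
        # Record the declaration order of non-dunder members.
        cls._member_order = [k for k in namespace if not k.startswith("__")]
        return cls

class Config(metaclass=OrderedMeta):
    host = "localhost"
    port = 8080
    debug = False

print(Config._member_order)  # ['host', 'port', 'debug']
```

Before PEP 3115, the metaclass received only a finished dict and had no way to know that host was declared before port.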

PEP 3119 -- Introducing Abstract Base Classes (Guido van Rossum, Talin, 2007) formalized interfaces in Python through the abc module, bridging the gap between duck typing and explicit contracts. It gave Python a way to enforce method implementation at instantiation time rather than at call time (source: peps.python.org/pep-3119).
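The instantiation-time enforcement is the part worth seeing in code. Serializer and its subclasses in this sketch are hypothetical:

```python
import json
from abc import ABC, abstractmethod

class Serializer(ABC):
    @abstractmethod
    def dumps(self, obj):
        """Convert obj to a string."""

class JsonSerializer(Serializer):
    def dumps(self, obj):
        return json.dumps(obj)

class BrokenSerializer(Serializer):
    pass  # forgot to implement dumps

print(JsonSerializer().dumps({"a": 1}))  # {"a": 1}

try:
    BrokenSerializer()  # fails here, at instantiation -- not later,
                        # when dumps() is eventually called
except TypeError as exc:
    print(type(exc).__name__)  # TypeError
```

Without the ABC, BrokenSerializer would construct happily and only blow up with an AttributeError deep in some call stack the first time dumps() was needed.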

PEP 3135 -- New Super (Calvin Spealman, Tim Delaney, Lie Ryan, 2007) introduced the zero-argument super() syntax in Python 3, eliminating the need to explicitly pass the current class and instance. The PEP identified the old syntax as a DRY violation that hindered class renaming (source: peps.python.org/pep-3135).
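The difference shows up even in a two-class hierarchy; Base and Child here are illustrative names:

```python
class Base:
    def greet(self):
        return "base"

class Child(Base):
    def greet(self):
        # Python 3: no arguments needed. The compiler supplies the
        # current class and instance through a hidden __class__ cell.
        return super().greet() + " -> child"

# The Python 2 equivalent repeated the class name -- the DRY violation
# PEP 3135 set out to remove:
#     return super(Child, self).greet() + " -> child"

print(Child().greet())  # base -> child
```

Because the class name no longer appears in the call, renaming Child touches one line instead of every super() call in its body.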

PEP 487 -- Simpler customisation of class creation (Martin Teichmann, 2016) added __init_subclass__ and __set_name__ in Python 3.6, providing lightweight hooks for common metaclass use cases without the complexity and compatibility risks of actual metaclasses (source: peps.python.org/pep-0487).
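The canonical use case the PEP targets is a plugin registry. In this sketch, Exporter and the format_name keyword are hypothetical:

```python
class Exporter:
    _registry = {}

    def __init_subclass__(cls, *, format_name, **kwargs):
        # Runs once per subclass, at class-definition time --
        # no metaclass required (PEP 487).
        super().__init_subclass__(**kwargs)
        Exporter._registry[format_name] = cls

class CsvExporter(Exporter, format_name="csv"):
    pass

class JsonExporter(Exporter, format_name="json"):
    pass

print(sorted(Exporter._registry))            # ['csv', 'json']
print(Exporter._registry["csv"] is CsvExporter)  # True
```

Each subclass registers itself simply by being defined; there is no decorator to forget and no metaclass to conflict with other base classes.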

PEP 557 -- Data Classes (Eric V. Smith, 2017) introduced the @dataclass decorator in Python 3.7, automating the generation of __init__, __repr__, __eq__, and other methods for data-oriented classes. It was designed with static type checker support as a primary goal (source: peps.python.org/pep-0557).

PEP 8 -- Style Guide for Python Code (Guido van Rossum, Barry Warsaw, Nick Coghlan, 2001) establishes the naming conventions that make class code readable: CapWords for class names, lowercase_with_underscores for methods and attributes, and a single leading underscore for internal-use attributes (source: peps.python.org/pep-0008).

Key Takeaways

  1. A class is an object: Python classes are instances of their metaclass -- understanding this explains why class bodies execute at definition time and why the declaration order of members can matter.
  2. __init__ initializes, __new__ constructs: The two-step object creation process is intentional. For many use cases __new__ stays invisible, but knowing it exists prevents confusion with patterns like singletons and immutable types.
  3. Use the right method type: Regular methods for instance behavior, @classmethod for factory methods and inheritance-aware logic, @staticmethod for utilities that belong to the class namespace but need neither instance nor class.
  4. Descriptors power everything: Properties, classmethods, staticmethods, and even plain functions are all built on the descriptor protocol. Understanding __get__, __set__, and __set_name__ unlocks a huge portion of Python's internal machinery.
  5. Prefer @dataclass for data-oriented classes: When a class's primary job is holding data, PEP 557's @dataclass eliminates boilerplate and integrates cleanly with type checkers. Use frozen=True for immutability, slots=True for memory efficiency, and kw_only=True for safer constructors.
  6. ABCs catch errors earlier: When you need subclasses to implement specific methods, ABC with @abstractmethod moves the failure from runtime method calls to instantiation time -- a significant advantage in large systems.
  7. Reach for __init_subclass__ before metaclasses: PEP 487 gives you a clean, composable hook for plugin registries and interface enforcement without the complexity of custom metaclasses.
  8. Context managers are a class protocol worth mastering: The __enter__/__exit__ pair guarantees resource cleanup. Any time your class acquires a resource that must be released, implementing the context manager protocol should be your default approach.

Python classes are a mechanism for bundling data and behavior into reusable, composable units. The language gives you a remarkable amount of control over how classes are created, how attributes are accessed, how operators behave, and how instances are represented -- all through a consistent protocol of special methods documented in the data model. Write classes when they earn their complexity, use dataclasses when your primary need is structured data, prefer composition over deep inheritance hierarchies, and remember that the simplest correct solution is usually the best one.
