Python Type Hints Without Breaking Dynamic Typing

Python's type hint system gives you the benefits of static analysis — cleaner documentation, better IDE support, fewer surprise bugs — without forcing Python to behave like a statically typed language. The key is understanding what type hints actually are: metadata for tools, not instructions to the interpreter.

Python has been dynamically typed since its earliest days. Variables don't carry fixed types — they hold references to objects, and those objects carry their own type information. When you write x = 42 and later x = "hello", Python does not object. That flexibility is a feature, not a bug, and it is one of the main reasons Python became the dominant language for rapid prototyping, data science, and scripting.
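That flexibility is directly observable — a name can be rebound to an object of a completely different type at any point:

```python
x = 42
print(type(x))   # <class 'int'>

x = "hello"      # same name, new object of a different type
print(type(x))   # <class 'str'>
```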

Type hints, introduced formally in Python 3.5 via PEP 484, sit on top of that dynamic system without replacing it. The annotations you write are stored as metadata — accessible through __annotations__ — and are otherwise invisible to the Python interpreter during execution. This is the fundamental fact that makes gradual typing work, and it is worth holding onto throughout everything that follows.

What Type Hints Actually Do at Runtime

The single most important thing to understand is that Python does not enforce type hints. None. Not even a little. The following code runs without error:

def add(a: int, b: int) -> int:
    return a + b

# Python does not raise an error here
result = add("hello", " world")
print(result)  # hello world

Python concatenates the strings, returns the result, and moves on. The annotations int and -> int are stored in add.__annotations__ and do nothing else. A static type checker like mypy would flag the call above as an error — but only when you run mypy separately, before or outside of Python itself. At runtime, dynamic typing governs everything.
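You can see this for yourself by inspecting the function object:

```python
def add(a: int, b: int) -> int:
    return a + b

# Annotations are stored as plain data; nothing reads them during a call
print(add.__annotations__)
# {'a': <class 'int'>, 'b': <class 'int'>, 'return': <class 'int'>}
```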

PEP 484

The specification explicitly states that no particular processing of annotations is required, and that other uses of annotations are not prevented. This design decision keeps type hints as opt-in metadata rather than enforced contracts. (PEP 484, python.org)

This design was intentional. The authors of PEP 484 wanted type hints to be a communication tool — between developers, between developers and their IDEs, and between developers and static analysis tools — without changing what Python does when it runs your code.

The authors have no desire to make type hints mandatory, even by convention. — Guido van Rossum, Jukka Lehtosalo, and Łukasz Langa, PEP 484
Note

Type hints are stored in the __annotations__ dictionary on functions, classes, and modules. You can inspect them at runtime using inspect.get_annotations() (Python 3.10+) or by accessing __annotations__ directly. They do not affect how Python resolves names, dispatches calls, or handles values.

Pop Quiz
You annotate a function as def greet(name: str) -> str, then call it with an integer: greet(42). What happens at runtime?
Correct
Python's runtime ignores type annotations entirely. The interpreter does not read __annotations__ during execution — it only stores them there. The call proceeds, and whatever greet does with 42 happens normally. A static checker like mypy would flag this before you ran anything, but the runtime never sees it as an error.
Not quite
This is the most common misconception about type hints. Python does not raise TypeError for annotation violations — that only happens for actual language-level type errors (like trying to add a string to an integer). Type hints are stored in __annotations__ as metadata and do nothing at runtime. Only static tools like mypy enforce them, and only when you run them separately.
Not quite
Python does not perform implicit type coercion based on annotations. The language has no mechanism for that — annotations are passive metadata, not instructions to the interpreter. If greet receives 42, it works with 42. Whether that causes a problem depends entirely on what the function body does with it, not on what the annotation says.

Writing Type Hints That Don't Constrain You

The practical challenge is writing annotations that are honest without becoming a straitjacket. Python's typing module gives you several tools specifically designed to handle ambiguity and dynamism. But the real leverage comes from thinking beyond individual annotations — toward enforcement strategies, CI integration, and annotation-driven architecture.

Pair static checking with runtime enforcement selectively

Static checkers catch violations before execution, but they cannot protect you at API boundaries where external callers — other services, user input, deserialized data — send values your types don't expect. The solution is not to abandon type hints; it is to add runtime enforcement at the entry points that actually need it, and leave the rest static-only.

Beartype, applied as a decorator, enforces parameter types at call time with near-zero overhead compared to validation libraries. It reads your existing annotations — no schema duplication — and raises a clean exception on violation. Apply it to the outermost boundary functions: HTTP handlers, CLI entry points, and public library APIs. Everything inside those boundaries runs under static checking alone.

from beartype import beartype

@beartype
def handle_webhook(payload: dict[str, str], event_type: str) -> bool:
    # Beartype enforces types on every call, not just statically
    return process(payload, event_type)

Pydantic's BaseModel is better suited when you need coercion alongside validation — converting a string "42" to an integer when the annotation says int — which is the standard pattern for parsing external JSON or form data. Beartype and Pydantic solve different problems; using both in the right places is not redundancy, it is layering.

Enforce type checking in CI, not just locally

Type hints have no value if nobody runs the checker. The most common failure mode is developers annotating code but never running mypy or Pyright outside of their IDE. Integrating the checker into your CI pipeline — as a required step that blocks merges on type errors — transforms annotations from optional documentation into enforced contracts.

A practical CI configuration runs mypy in strict mode only on new or recently modified files, using --follow-imports=silent to avoid failing on unannotated third-party libraries. This avoids the "fix everything at once" trap that kills gradual adoption. Tools like mypy-baseline can snapshot the current error count and only fail the build when new errors are introduced, letting you tighten coverage incrementally without blocking existing work.

Use annotation-driven architecture for framework code

Annotations are not just for static checkers. Libraries like FastAPI, SQLModel, and dependency injectors read __annotations__ at runtime to derive routing, schema generation, and dependency wiring from the types you declare. This is a fundamentally different use of type hints: they become the single source of truth for behavior, not just documentation.

If you are building internal frameworks or shared libraries, consider adopting this pattern deliberately. A function annotated with specific argument types can be introspected to generate CLI argument parsers, configuration validators, or API documentation automatically. The key is using inspect.get_annotations(func, eval_str=True) rather than accessing __annotations__ directly, so deferred annotations (from PEP 563 or 649) are resolved correctly before you work with them.

import inspect
import functools
from typing import get_origin

def auto_validate(func):
    """Decorator that uses annotations to validate inputs at runtime.

    Handles plain types (str, int) but not parameterized generics
    like list[int] — use Beartype or Pydantic for full generic support.
    """
    hints = inspect.get_annotations(func, eval_str=True)
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        sig = inspect.signature(func)
        bound = sig.bind(*args, **kwargs)
        bound.apply_defaults()
        for param_name, value in bound.arguments.items():
            expected = hints.get(param_name)
            if expected is None or param_name == "return":
                continue
            # Unwrap generic origins: list[int] -> list, dict[str, str] -> dict
            origin = get_origin(expected)
            check_type = origin if origin is not None else expected
            if isinstance(check_type, type) and not isinstance(value, check_type):
                raise TypeError(
                    f"{param_name}: expected {expected}, got {type(value).__name__}"
                )
        return func(*args, **kwargs)
    return wrapper

Write stub files for third-party code that lacks annotations

One of the sharpest edges in typed Python projects is calling into libraries that have no annotations. Mypy treats unannotated third-party functions as returning Any, which silently propagates unchecked types throughout your annotated code. The solution is stub files — .pyi files containing only signatures, with no implementation — that you maintain alongside your project.

The typeshed project maintains stubs for the standard library and many popular packages. For packages that are not covered, you can create a stubs/ directory in your project and configure mypy to find it with mypy_path = stubs. This is especially valuable for internal shared libraries: annotating just the public surface in a stub file gives you type safety at the call sites without requiring the library itself to be fully annotated.
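A stub for a hypothetical unannotated internal library might look like this (the module and function names here are invented for illustration):

```python
# stubs/legacylib.pyi
# Declaration-only: every body is `...`, and the file is never executed.
# Mypy reads these signatures in place of the unannotated implementation.
def fetch_orders(customer_id: int, limit: int = ...) -> list[dict[str, object]]: ...
def close_order(order_id: int) -> bool: ...
```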

Adopt an incremental annotation playbook for legacy code

Annotating an existing codebase all at once is rarely practical. A structured incremental approach produces real value faster and avoids the annotation debt that builds up when developers annotate mechanically without understanding what the types should actually be.

Start at the public API layer: annotate the functions and methods that other modules call. These provide the highest leverage because the checker can now validate every call site that reaches them. Next, annotate the data layer — classes, TypedDicts, and dataclasses that represent core domain objects. These propagate type information through a large portion of the codebase automatically once annotated. Leave internal helper functions and short private methods for last; the checker treats them as Any until annotated, which is safe as long as the boundaries above them are typed.

Run mypy with --strict on newly annotated modules only, using per-module configuration in mypy.ini to apply strict checking selectively. This lets you maintain a high standard for new code while not failing on legacy modules that have not been reached yet.
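A sketch of such a per-module configuration (the module names are illustrative; per-module sections use individual strictness flags, since mypy's blanket strict setting is not applied per module):

```ini
# mypy.ini
[mypy]
python_version = 3.11
ignore_missing_imports = True

# Newly annotated modules: hold them to a strict standard
[mypy-myapp.api.*]
disallow_untyped_defs = True
disallow_any_generics = True
warn_return_any = True

# Legacy modules not yet migrated: report nothing for now
[mypy-myapp.legacy.*]
ignore_errors = True
```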

Use Any as an explicit escape hatch

When a value genuinely can be anything — or when you are annotating legacy code incrementally — Any is the right tool. A value typed as Any is compatible with every other type in both directions: you can pass an Any where a str is expected, and vice versa. Static checkers treat unannotated functions as implicitly Any, so using it explicitly is a deliberate signal, not an admission of defeat.

from typing import Any

def process(data: Any) -> Any:
    # static checkers will not flag usage of this function
    return data

The important distinction, documented in the Python standard library, is between Any and object. Use object when you want to express "this can hold any type, but you may only use operations defined on all objects." Use Any when you want to opt a value out of type checking entirely. They look similar but communicate very different intentions to both readers and tools.
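The difference shows up in what the checker lets you do with the value:

```python
from typing import Any

def shout(value: Any) -> str:
    # Any opts out of checking: .upper() passes the checker
    # even though it would crash on a non-string at runtime
    return value.upper()

def describe(value: object) -> str:
    # object permits only universal operations like repr() and str();
    # calling value.upper() here would be a static error
    return repr(value)

print(shout("hi"))     # HI
print(describe(42))    # 42
```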

Union types for values that can be more than one thing

Since Python 3.10 (via PEP 604), you can express unions with the | operator, which is cleaner than importing Union from the typing module. Both approaches work; the newer syntax is preferred in code targeting Python 3.10 and later.

# Python 3.10+ syntax (PEP 604)
def square(n: int | float) -> float:
    return float(n * n)

# Equivalent older syntax
from typing import Union
def square_legacy(n: Union[int, float]) -> float:
    return float(n * n)

Optional for values that might be None

Optional[X] is exactly equivalent to X | None. Both tell the type checker that a value might be absent. Use whichever reads more clearly in context. On Python 3.10+, str | None is idiomatic. On older code, Optional[str] communicates the intent without requiring the newer syntax.

Structural subtyping with Protocols

One of the most powerful tools for preserving Python's duck-typing philosophy while still having meaningful annotations is Protocol, introduced in PEP 544. Rather than requiring a class to explicitly inherit from an interface, you define what methods or attributes an object must have, and any class providing those is considered compatible — just as duck typing works at runtime.

from typing import Protocol

class Drawable(Protocol):
    def draw(self) -> None:
        ...

class Circle:
    def draw(self) -> None:
        print("Drawing a circle")

class Square:
    def draw(self) -> None:
        print("Drawing a square")

def render(shape: Drawable) -> None:
    shape.draw()

# Both work — neither inherits from Drawable explicitly
render(Circle())
render(Square())

This is gradual typing at its most Pythonic. The annotation expresses a structural contract without demanding inheritance. At runtime, Python's attribute lookup does exactly what it always has. The type checker validates the contract statically. No behavior changes.
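If you also want the structural check at runtime, Protocol supports it via @runtime_checkable. The caveat: isinstance() verifies only that the named methods exist, not their signatures.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Closable(Protocol):
    def close(self) -> None: ...

class Resource:
    def close(self) -> None:
        print("closed")

# isinstance checks only for the presence of a 'close' attribute
print(isinstance(Resource(), Closable))  # True
print(isinstance("a string", Closable))  # False
```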

TypeVar for generic functions

When a function should accept a type and return the same type — but you don't know in advance which type — TypeVar maintains the relationship across the call without collapsing it to Any.

from typing import TypeVar

T = TypeVar("T")

def first(items: list[T]) -> T:
    return items[0]

# Checker knows the return is str here
name = first(["Alice", "Bob"])

# And int here
count = first([1, 2, 3])

Python 3.12 introduced a cleaner inline syntax for type parameters via PEP 695, removing the need to assign TypeVar objects manually in many cases. The older TypeVar form still works and remains the standard for code targeting versions below 3.12.

# Python 3.12+ syntax (PEP 695) — no manual TypeVar needed
def first[T](items: list[T]) -> T:
    return items[0]

# Equivalent to the TypeVar form above, but cleaner
name = first(["Alice", "Bob"])  # checker infers str

Literal for value-constrained parameters

Sometimes the type alone does not say enough. A function that only accepts the strings "left", "center", or "right" technically takes a str, but annotating it as str leaves the checker — and the reader — with no information about the valid range. Literal from the typing module solves this precisely. It tells the checker which specific values are permitted, not just which type they belong to.

from typing import Literal

def align(text: str, direction: Literal["left", "center", "right"]) -> str:
    ...

align("hello", "left")    # fine
align("hello", "middle")  # mypy: Argument 2 has incompatible type

At runtime this changes nothing — Python will accept any string. But static checkers narrow the valid inputs and will catch typos and bad values before the code ever runs. Literal works with integers, booleans, bytes, and None as well, and can be combined with unions to express multiple valid values from different types.
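For instance, a verbosity setting might accept named presets alongside numeric levels. This helper is hypothetical, but it shows Literal mixing value types in one annotation:

```python
from typing import Literal

def set_verbosity(level: Literal["quiet", "loud", 0, 1, 2]) -> int:
    # Named presets map to numeric levels; numbers pass through unchanged
    if level == "quiet":
        return 0
    if level == "loud":
        return 2
    return level
```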

TypedDict for structured dictionaries

Plain dictionaries are everywhere in Python, but annotating them as dict[str, Any] throws away almost all information about their structure. TypedDict, introduced in PEP 589, lets you define the exact keys and value types a dictionary is expected to carry. The checker can then verify that you are not accessing keys that do not exist or placing the wrong types in them.

from typing import TypedDict

class UserRecord(TypedDict):
    id: int
    username: str
    active: bool

def deactivate(user: UserRecord) -> UserRecord:
    return {**user, "active": False}

record: UserRecord = {"id": 1, "username": "kandi", "active": True}
deactivate(record)        # fine

# Checker flags this — "email" is not a defined key
bad: UserRecord = {"id": 2, "username": "bob", "active": True, "email": "[email protected]"}

This is still just a plain Python dictionary at runtime — no class instantiation, no wrapper overhead. The TypedDict class exists only for the type checker's benefit. If you want optional keys, the total=False parameter marks all keys as not required, or you can mix required and optional keys by defining them across a base TypedDict and a total=False subclass.
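A common layout puts required keys in a base class and optional keys in a total=False subclass (the names here are illustrative):

```python
from typing import TypedDict

class UserBase(TypedDict):
    id: int
    username: str

class User(UserBase, total=False):
    # Keys declared in this class are optional
    email: str

minimal: User = {"id": 1, "username": "kandi"}                 # fine
full: User = {"id": 2, "username": "bob", "email": "b@x.io"}   # also fine
```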

Spot the Bug
The code below tries to use TYPE_CHECKING to avoid a circular import. It imports correctly and the type checker is happy — but something will blow up at runtime. Which line is the problem?
from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from myapp.models import UserRecord   # line 5

def activate(user: UserRecord) -> None:   # line 7
    print(f"Activating {user['username']}")

def verify(user: UserRecord) -> bool:     # line 10
    if not isinstance(user, UserRecord):  # line 11
        raise TypeError("expected UserRecord")
    return user.get("active", False)
Bug found
Exactly. TYPE_CHECKING is False at runtime, so the if TYPE_CHECKING: block never executes when Python actually runs your code. The name UserRecord simply does not exist in the module's namespace. Lines 7 and 10 are fine because from __future__ import annotations makes all annotations lazy strings — they are never evaluated at runtime. But line 11 calls isinstance(), which is executed code, not an annotation. That call needs UserRecord to exist as an actual object. The fix is to move the import outside the TYPE_CHECKING guard, or restructure the code to avoid the circular dependency. (And even with the import restored, isinstance() against a TypedDict raises TypeError, because TypedDicts do not support instance checks; use a manual key check or a runtime-checkable Protocol instead.)
Close — but not that line
Line 7 looks suspicious, but it is actually safe here. Because the file starts with from __future__ import annotations, every annotation in the module is stored as a string and never evaluated at runtime. The type checker sees the annotation and resolves it; the interpreter never tries to look up UserRecord when the function is defined or called. Look at where UserRecord is used outside of an annotation context.
Not this one
Imports inside if blocks are perfectly valid Python. The if TYPE_CHECKING: pattern is a well-established idiom specifically because it works this way. The block simply never runs at runtime — which is the point. The problem is not the import syntax, but a different line that relies on UserRecord being available outside of an annotation.

@overload for functions with type-dependent return values

One situation that pushes people toward overusing Any is a function that legitimately returns different types depending on its input. Without @overload, the only honest annotation is a union — but then the checker loses the ability to narrow the return type at call sites. The @overload decorator from the typing module solves this by letting you declare multiple signatures for the same function, each describing a specific input-to-output relationship.

from typing import overload

@overload
def parse(value: str) -> list[str]: ...
@overload
def parse(value: bytes) -> list[bytes]: ...

def parse(value: str | bytes) -> list[str] | list[bytes]:
    if isinstance(value, bytes):
        return value.split(b",")
    return value.split(",")

# Checker knows result is list[str] here, not list[str] | list[bytes]
result = parse("a,b,c")

The overloaded signatures are visible only to the type checker — they are eliminated at runtime, so there is no performance or behavioral impact. The actual implementation signature is the one that runs. This pattern is common in standard library stubs and is the cleanest way to express input-conditional return types without reaching for Any.

Type hints as functional requirements: dataclasses

There is one context where type hints cross from optional documentation into functional requirements: @dataclass. The dataclasses module, introduced in Python 3.7, uses annotations to discover which fields a class should have and generates __init__, __repr__, and other methods from them. Without annotations, @dataclass generates nothing useful.

from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float
    label: str = ""

p = Point(1.0, 2.5)
print(p)  # Point(x=1.0, y=2.5, label='')
Important

Even here, Python does not enforce the annotated types at runtime. You can pass Point(x="oops", y=None) and Python will not object. The annotation drives code generation by the decorator, not type enforcement by the interpreter. If you want runtime enforcement on a dataclass, pair it with a library like Pydantic or Beartype.
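Running the wrong types through the generated __init__ demonstrates this:

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float
    label: str = ""

# No error: the dataclass machinery uses annotations to generate code,
# never to validate arguments
p = Point(x="oops", y=None)
print(p)  # Point(x='oops', y=None, label='')
```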

When Type Hints Pull Their Weight

Type hints are not free. They take time to write, they require maintenance as your code changes, and overusing them in the wrong contexts creates visual noise without adding clarity. Understanding where they genuinely earn their place helps you apply them deliberately rather than reflexively.

Annotations tend to provide high return in a few specific situations. Public API boundaries — functions and methods that other modules or external callers depend on — benefit most, since the type information lives exactly where readers need it. Long-lived shared code, where the original author will not be available to explain intent, benefits similarly. Codebases that run mypy or Pyright in CI get direct, measurable value: type errors caught before they reach a runtime environment. And anywhere you use @dataclass, Pydantic, or a similar annotation-driven library, accurate hints are load-bearing.

Annotations add cost with little payoff in several other contexts: small throwaway scripts, test helper functions called only once or twice, short private functions where the types are obvious from the code, and genuinely polymorphic code where the most honest annotation would be Any throughout anyway. Forcing annotations into these places rarely helps and can make code harder to read.

Practical Rule

A useful heuristic: annotate anything you would describe in a docstring. If a parameter's type is obvious from its name and the surrounding context, leaving it unannotated lets the static checker treat it as Any with no harm done. When the type is non-obvious or carries important constraints, write the hint.

The Annotation Timeline: PEP 563, PEP 649, and Python 3.14

Understanding what has changed in recent Python versions around annotation evaluation is important if you want to write forward-compatible code — and it directly affects how type hints interact with dynamic behavior.

Before Python 3.7, annotations were evaluated eagerly at class or function definition time. This caused two problems: forward references (referencing a class inside its own definition) had to be written as string literals, and circular imports involving annotated types could cause NameError at import time.

PEP 563 (Python 3.7) introduced from __future__ import annotations, which converted all annotations to strings at compile time and deferred their evaluation. This solved the forward reference problem but created a new one: libraries that read annotations at runtime — such as Pydantic and dataclasses — had to eval() those strings to use them, which restricted annotations to module-level names only and caused its own set of edge cases.

Compatibility Warning

Using from __future__ import annotations on Python 3.10+ can break the int | str union syntax for runtime uses of annotations. If you use Pydantic, dataclasses, or any library that inspects __annotations__ at runtime, test carefully before adding this import. See the documented issues for details.

PEP 649, accepted in 2023 and implemented in Python 3.14 (released October 7, 2025), takes a third approach. Rather than converting annotations to strings, it stores a special function called __annotate__ on each annotated object. This function computes and returns the annotations dictionary only when something asks for it. The result is lazy evaluation without the string-parsing problem.

Annotations are no longer evaluated at definition time; they are deferred on demand. — What's New In Python 3.14, docs.python.org

Python 3.14 also ships a new standard library module, annotationlib, which provides tools for inspecting deferred annotations in VALUE, VALUE_WITH_FAKE_GLOBALS, FORWARDREF, and STRING formats. For anyone writing libraries that read annotations at runtime — think serialization, dependency injection, or schema generation — this module is the right way to handle annotations in 3.14 and beyond.

What this means practically: on Python 3.14+, forward references in annotations no longer require string quoting. A class can reference itself directly:

# Python 3.14+ — no string quoting needed for self-reference
class Node:
    def __init__(self, value: int, next: Node | None = None) -> None:
        self.value = value
        self.next = next

# On Python 3.13 or earlier, you needed to quote the whole annotation:
# next: "Node | None" = None
# (note: "Node" | None would itself fail at runtime, since str does not
# support the | operator) — or: from __future__ import annotations
Pop Quiz
Which statement best describes the change PEP 649 made to Python annotations in 3.14?
Correct
PEP 649's approach is distinct from both eager evaluation (before Python 3.7) and the string-based approach of from __future__ import annotations. The __annotate__ function defers computation until needed — solving forward reference problems without requiring annotations to be stored as raw strings that must later be parsed with eval(). This makes the new annotationlib module the right tool for libraries that inspect annotations at runtime.
Not quite
Converting annotations to strings was the approach of PEP 563 (from __future__ import annotations), introduced back in Python 3.7. PEP 649 takes a different approach that avoids string conversion altogether. The two look similar on the surface — both defer annotation evaluation — but PEP 649 uses a callable (__annotate__) rather than string storage, which sidesteps the eval() problems that PEP 563 introduced for runtime-introspecting libraries.
Not quite
PEP 649 made no change to runtime type enforcement — that remains entirely outside Python's core interpreter. Annotations are still metadata. The change was purely about when and how that metadata is computed and stored. Runtime enforcement is still the domain of third-party tools like Beartype and Pydantic, which you opt into explicitly.

Tooling: Where the Real Value Lives

Type hints pay for themselves through static analysis tools. The annotations you write are the raw material; the tools are what turn them into actionable feedback.

mypy (written in Python; static checker)
The original Python type checker, started by Jukka Lehtosalo and adopted by the Python community as the reference implementation. Mypy has the best PEP compliance of the established checkers, strong gradual mode support that lets you check annotated functions without touching unannotated code, and the largest ecosystem of plugins (including integrations for Django, SQLAlchemy, and Pydantic). It is the most battle-tested option for CI integration and is the checker referenced in the Python documentation itself.
Pyright (written in TypeScript; static checker)
Microsoft's type checker, written in TypeScript, powers the Pylance extension in VS Code. Pyright is faster than mypy on initial checks, applies stricter type inference by default, and has particularly strong support for newer typing features. It is the most popular choice for developers who use VS Code as their primary editor, since Pylance provides real-time feedback without a separate CI step.
ty (written in Rust; static checker)
Released in beta by Astral (the team behind Ruff and uv) in December 2025, ty is consistently 10x to 60x faster than mypy and Pyright without caching. It introduces first-class intersection types, advanced type narrowing, and reachability analysis that catches unreachable code paths other checkers miss. Designed from the ground up as a language server, it provides real-time diagnostics in VS Code, Neovim, Zed, and PyCharm (2025.3+). Still in beta with 0.0.x versioning, but already used in production by Astral's own projects.
Pyrefly (written in Rust; static checker)
Meta's open-source type checker, written in Rust and released in beta in November 2025 as the successor to their earlier Pyre checker (OCaml). Pyrefly can check 1.85 million lines per second on large codebases and includes a VS Code extension with IDE features like code navigation and auto-completion. Designed for monorepo-scale codebases, it infers types for return values and local variables even on unannotated code. Typing compliance has reached 70% in the beta, with over 350 user-reported bugs resolved since the alpha release.
Beartype (written in Python; runtime enforcement)
A runtime type enforcement library that reads your existing annotations and validates them at call time via a decorator. Beartype raises clean exceptions when actual type violations occur, with near-zero overhead compared to full validation frameworks. Best suited for boundary enforcement: HTTP handlers, CLI entry points, and public library APIs where external callers may pass unexpected types. Does not replace static checking; use it alongside mypy or Pyright for full coverage.
Pydantic (Python with a Rust core; runtime validation)
The dominant runtime validation and serialization library for Python API development. Pydantic uses annotations to define data models, validates incoming data against those annotations, and performs type coercion (converting a string "42" to an integer when the annotation says int). Pydantic v2 uses a Rust core for validation speed and is the foundation of FastAPI's request parsing. Unlike static checkers, Pydantic operates at runtime and creates model instances rather than just checking types.

None of the static tools above modify Python's runtime behavior. They analyze your source code in a separate pass and report issues before you run anything. The Rust-based checkers — ty and Pyrefly — represent a new generation of tooling that emerged in 2025, both delivering orders-of-magnitude speed improvements over their predecessors while maintaining compatibility with existing type annotations. Beartype and Pydantic are different in that they operate at runtime, but they do so through explicit decoration or model inheritance — you opt in, and the mechanism is transparent.

Pro Tip

Start with mypy --ignore-missing-imports on an existing codebase. Mypy's gradual typing mode only checks functions with explicit annotations, so it will not flood you with errors on unannotated legacy code. You can tighten the configuration incrementally as coverage grows.

Patterns That Go Wrong

There are a handful of patterns that lead people to believe type hints are "breaking" their code when the real culprit is something adjacent.

Checking __annotations__ directly in production code

Accessing func.__annotations__ directly in runtime logic is fragile. Whenever from __future__ import annotations is in effect, all annotations appear as strings rather than types. Starting with 3.14, lazy evaluation changes when that dictionary is populated. The correct API is inspect.get_annotations(obj, eval_str=True) on Python 3.10+, or the new annotationlib module on 3.14+. Both handle deferred evaluation correctly.

import inspect

def greet(name: str) -> str:
    return f"Hello, {name}"

# Prefer this over __annotations__ in runtime logic
hints = inspect.get_annotations(greet, eval_str=True)
print(hints)  # {'name': <class 'str'>, 'return': <class 'str'>}

Confusing type aliases with runtime type checks

A type alias like UserId = int does not create a distinct type at runtime — it is just another name for int. isinstance(x, UserId) is the same as isinstance(x, int). If you need a truly distinct type for runtime checks with minimal overhead, use NewType — though note that NewType is a class since Python 3.10, with a small additional cost over a plain function call.

from typing import NewType

# Static checker treats UserId as distinct from int
# Runtime: UserId(42) just returns 42 — no wrapper object
UserId = NewType("UserId", int)

def get_user(user_id: UserId) -> str:
    return f"User {user_id}"

# Checker will flag this; runtime will not
get_user(42)  # mypy: Argument 1 has incompatible type "int"

Applying TYPE_CHECKING incorrectly

The TYPE_CHECKING constant from the typing module is False at runtime and True only when a static type checker is analyzing your code. It is used to import types needed only for annotations without incurring the runtime import cost or creating circular imports. If you import something inside if TYPE_CHECKING:, that name will not exist at runtime — so do not use it in any code path that actually executes.

from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from mymodule import HeavyClass  # only visible to type checkers

def process(obj: HeavyClass) -> None:
    # annotation is fine — evaluated lazily or as a string
    # but you cannot use isinstance(obj, HeavyClass) here at runtime
    pass

Using collections from the typing module when built-ins work

Prior to Python 3.9, type hints for collections required importing List, Dict, Tuple, and similar names from the typing module. Since PEP 585 (Python 3.9), the built-in types themselves support parameterization. Code still using typing.List[str] on Python 3.9+ is not wrong, but it is now unnecessary, and those names from the typing module will be removed no sooner than Python 3.9's end of life.

# Before Python 3.9
from typing import List, Dict
def old_style(names: List[str]) -> Dict[str, int]:
    return {name: len(name) for name in names}

# Python 3.9+ — no import needed
def modern(names: list[str]) -> dict[str, int]:
    return {name: len(name) for name in names}

Key Takeaways

  1. Type hints are metadata, not enforcement. Python's interpreter ignores them entirely. No runtime behavior changes when you add or remove annotations.
  2. Gradual adoption is by design. PEP 484 introduced a gradual type system intentionally: annotate only what you want checked, use Any for the rest, and tighten over time. Static checkers only inspect functions that carry explicit annotations.
  3. Python 3.14 changed how annotations are stored. PEP 649 makes annotation evaluation lazy by default. Forward references no longer need string quoting on 3.14+, and the new annotationlib module provides the correct API for runtime annotation inspection.
  4. Protocols preserve duck typing. Use Protocol from the typing module to annotate structural contracts rather than nominal inheritance hierarchies. This keeps annotations honest without forcing your classes to inherit from anything.
  5. Use the right API to read annotations at runtime. Access __annotations__ only for simple introspection. For production code that consumes annotation data, use inspect.get_annotations() on Python 3.10+, or annotationlib on 3.14+, to handle deferred and stringized forms correctly.
  6. The tooling landscape changed in 2025. Rust-based type checkers — ty from Astral and Pyrefly from Meta — joined mypy and Pyright, bringing 10x to 60x speed improvements and native language server support. The choice of checker now depends on your priorities: mypy for PEP compliance and plugin ecosystem, Pyright for VS Code integration, or the newer tools for raw speed and IDE-first development.

The contract Python type hints offer is unusual compared to languages like Java or Go: you get the documentation and tooling benefits of a typed system, but you keep the flexibility of a dynamic one. That tradeoff is not an accident. It reflects a deliberate design philosophy that has been refined across a decade of PEPs, and understanding it is what separates developers who use type hints well from those who fight them. For more Python tutorials covering type systems, static analysis, and core language features, explore the rest of the PythonCodeCrack library.

How to Add Type Hints to an Existing Python Codebase

Adding type hints to a codebase that has none is a gradual process. Trying to annotate everything at once leads to annotation debt and stalled pull requests. The steps below describe the order that produces the highest return with the least disruption.

  1. Run mypy in gradual mode on the existing code. Start with mypy --ignore-missing-imports on the project root. Mypy's gradual typing only checks annotated functions, so this will report zero errors on a fully unannotated codebase. It establishes the baseline and confirms the tool works in your environment.
  2. Annotate public API boundaries first. Functions and methods that other modules import and call provide the highest leverage. Once annotated, every call site that reaches them gets validated. Focus on parameter types and return types; leave internal variables for the checker to infer.
  3. Annotate core data structures. Classes, TypedDicts, and dataclasses that represent domain objects propagate type information through a large portion of the codebase automatically. Annotating these second gives the checker enough context to catch real errors in calling code.
  4. Add type checking to your CI pipeline. Integrate mypy or Pyright as a required check that blocks merges on new type errors. Use mypy-baseline or per-module configuration to only enforce strict checking on newly annotated modules, so existing unannotated code does not block the build.
  5. Add runtime enforcement at external boundaries. For functions that receive data from external sources — HTTP handlers, CLI entry points, deserialized JSON — apply Beartype or Pydantic to validate types at call time. This catches violations that static analysis cannot reach because the data originates outside your codebase.
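Steps 2 and 3 might look like this on a small module. The names are illustrative, not from any particular codebase: a dataclass carries type information for the domain object, and one annotated boundary function lets the checker validate every caller.

```python
from dataclasses import dataclass

# Step 3: a core domain object propagates types through calling code
@dataclass
class User:
    id: int
    name: str
    email: str

# Step 2: a public API boundary with annotated parameters and return;
# internal variables like `suffix` are left for the checker to infer
def find_by_domain(users: list[User], domain: str) -> list[User]:
    suffix = "@" + domain
    return [u for u in users if u.email.endswith(suffix)]

team = [
    User(1, "Ada", "ada@example.com"),
    User(2, "Bob", "bob@other.org"),
]
print([u.name for u in find_by_domain(team, "example.com")])  # ['Ada']
```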

Frequently Asked Questions

Do Python type hints affect runtime behavior?

No. Python does not enforce type hints at runtime. Annotations are stored as metadata in __annotations__ and are ignored by the interpreter during execution. A static type checker like mypy or Pyright must be run separately to catch type errors.

What changed about type annotations in Python 3.14?

Python 3.14 implements PEP 649, making annotation evaluation lazy by default. Annotations are stored in a special __annotate__ function and computed only when accessed. This eliminates the need to quote forward references and improves import performance. The new annotationlib module provides the recommended API for reading annotations at runtime.
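The `annotationlib` module is 3.14-only, but the resolution problem it solves can be observed on any supported version with a quoted forward reference: the raw `__annotations__` dictionary holds strings, and a resolving API turns them back into classes. A version-independent sketch:

```python
from typing import get_type_hints

class Node:
    # A quoted forward reference: before PEP 649, this is how you
    # annotated a type not yet defined at evaluation time
    def link(self, other: "Node") -> "Node":
        return other

# The raw dictionary keeps the strings...
print(Node.link.__annotations__)  # {'other': 'Node', 'return': 'Node'}

# ...while get_type_hints() resolves them to the real class, the same
# job annotationlib.get_annotations() performs on 3.14+
print(get_type_hints(Node.link))  # both values resolve to the Node class
```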

What is the difference between Any and object in Python type hints?

Any opts a value out of type checking entirely and is compatible with every type in both directions. object is the base of the type hierarchy, so it accepts any value, but the checker will only permit operations defined on all objects. Use Any to bypass checking; use object when you want to accept any type but still enforce that only universal operations are called.
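The difference shows up in what a checker lets you do with the value. Both functions below run identically; the comments describe how mypy would treat them, and the function names are illustrative.

```python
from typing import Any

def shout_any(value: Any) -> str:
    # Any disables checking: mypy accepts .upper() here even though
    # it would fail at runtime for non-string arguments
    return value.upper()

def shout_object(value: object) -> str:
    # object permits only universal operations; mypy rejects
    # value.upper() unless you narrow the type first
    if isinstance(value, str):
        return value.upper()
    return repr(value)

print(shout_any("hi"))     # HI
print(shout_object("hi"))  # HI
print(shout_object(42))    # 42
```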

Do Python type hints slow down my code?

Minimally. In Python 3.14+, annotations are evaluated lazily (PEP 649), so they carry almost no import-time cost. In earlier versions, annotations were evaluated eagerly at definition time, which added a small overhead per annotated function. For ordinary application code, type hints do not affect runtime speed in any meaningful way.

What is gradual typing in Python?

Gradual typing is Python's approach to type hints, introduced in PEP 484. It lets you add annotations incrementally. Static checkers only inspect functions that carry explicit annotations, so unannotated code is left unchecked. You can start by annotating a small part of a codebase and expand coverage over time.

What are the fastest Python type checkers in 2026?

The fastest Python type checkers are ty (from Astral, the team behind Ruff and uv) and Pyrefly (from Meta), both written in Rust. Astral reports that ty is 10x to 60x faster than mypy and Pyright without caching. Meta's Pyrefly checks 1.85 million lines per second on large monorepo-scale projects. Both are in beta as of early 2026 and include full language server support. The established options remain mypy (best PEP compliance) and Pyright (powers VS Code Pylance).

What is a Protocol in Python type hints?

Protocol, introduced in PEP 544, enables structural subtyping. Instead of requiring a class to inherit from an interface, you define a Protocol specifying the methods or attributes an object must have. Any class that provides them is considered compatible by the type checker, with no explicit inheritance needed. This preserves Python's duck-typing philosophy while giving static checkers enough information to validate structural contracts.
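A minimal sketch of the idea, with illustrative names: `FileWrapper` never inherits from `Closable`, yet a checker accepts it wherever `Closable` is expected because it has the right shape.

```python
from typing import Protocol

class Closable(Protocol):
    def close(self) -> None: ...

class FileWrapper:
    # No inheritance from Closable; structure alone makes it compatible
    def __init__(self) -> None:
        self.closed = False

    def close(self) -> None:
        self.closed = True

def shutdown(resource: Closable) -> None:
    resource.close()

f = FileWrapper()
shutdown(f)       # accepted statically: FileWrapper matches the protocol
print(f.closed)   # True
```

If you also need `isinstance()` checks against the protocol at runtime, decorate it with `typing.runtime_checkable`.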