Python type annotations let you attach type information directly to variables, function parameters, and return values. They do not change how Python runs your code at runtime—but they transform what your editor and your teammates can understand about it before a single line executes.
Python has always been dynamically typed—the interpreter figures out the type of every value at runtime, and nothing stops you from reassigning a variable from an integer to a string. That flexibility is part of the language's appeal, especially for rapid prototyping and scripting. The problem surfaces as projects grow: a function written six months ago returns something, but without looking at its body, a caller cannot know what. Type annotations address this directly, without sacrificing runtime flexibility. They sit alongside the code as structured, machine-readable documentation that tools can act on.
A Brief History: From PEP 3107 to PEP 747
Understanding why annotations work the way they do requires a quick look at the PEP trail that built them. PEP 3107 (drafted December 2006 for Python 3.0, which shipped in December 2008) introduced the general syntax for annotating function arguments and return values, but deliberately left semantics undefined—they were just arbitrary expressions attached to a function. PEP 484 (Python 3.5, 2015) gave those expressions a standard meaning by defining a type hinting system built around a new typing module. It introduced List, Dict, Optional, Union, and Callable, among others. PEP 526 (Python 3.6) extended that system to variables, so annotations were no longer limited to function signatures.
The syntax evolved further with each release. PEP 585 (Python 3.9) let you write list[str] instead of typing.List[str], eliminating the need to import a parallel type hierarchy just for annotations. PEP 604 (Python 3.10) replaced the verbose Union[X, Y] with the cleaner pipe operator X | Y, the same syntax used in pattern matching introduced in the same release. PEP 673 (Python 3.11) added Self for methods that return the current class, and PEP 695 (Python 3.12) introduced a new, lighter syntax for generic type parameters. PEP 747, which standardizes TypeForm—a way to annotate values that are themselves type expressions—was accepted by the Python Steering Council and is scheduled for Python 3.15. In the meantime it is available via the typing_extensions backport package.
Type annotations in Python have no runtime enforcement by default. The interpreter stores them in __annotations__ but does not check them during execution. Tools like mypy, Pyright, and Pydantic perform that checking separately, either statically (before running) or at runtime on demand.
PEP 484 (Python 3.5): typing module with List, Dict, Optional, Union, Callable
PEP 585 (Python 3.9): list[str] replaces typing.List[str]
PEP 604 (Python 3.10): X | Y replaces Union[X, Y]
PEP 673 (Python 3.11): Self type for methods that return the current class instance
PEP 563 (Python 3.7, opt-in): from __future__ import annotations
PEP 747: TypeForm: annotate values that are themselves type expressions; useful for runtime type checkers and metaprogramming frameworks. Available now via typing_extensions.
How Annotations Improve Code Readability
The clearest readability gain is at the function signature. Compare these two definitions:
# Without annotations
def get_user_info(user_id):
    return database.lookup(user_id)
# With annotations
def get_user_info(user_id: int) -> dict[str, str]:
    return database.lookup(user_id)
The second version answers three questions without opening the function body: what type of argument it accepts, what type it returns, and what the caller can expect to do with that return value. The same principle applies to variables, particularly in class definitions:
class UserSession:
    user_id: int
    token: str
    expires_at: float
    is_admin: bool = False
Without annotations, a reader scanning this class for the first time has no way to know what type each attribute holds unless they trace every assignment. With them, the class itself becomes a contract.
Annotations also do important work in larger-scale patterns. When a function accepts a list[int] rather than an untyped list, reviewers and collaborators immediately know that mixed-type lists will not work as intended. When a function returns str | None rather than just something that might be a string, callers are alerted that they need to handle the None case. The Optional[str] pattern (and its modern equivalent str | None) catches an entire class of bugs—calling a method on a None return—at the annotation level before any code runs.
The 2025 Typed Python Survey described type hints as serving as "in-code documentation"—making code clearer and easier to reason about for authors and collaborators alike, particularly in larger codebases. — Python Typing Survey 2025, Meta Engineering / JetBrains (published December 22, 2025)
There is a subtler readability benefit that comes from annotations as a forcing function. Writing -> dict[str, Any] and noticing that Any is everywhere often signals that a function is doing too much. Annotations do not just document the code—they surface design problems by making vague types visible. A function that honestly cannot be annotated without reaching for Any is a function worth refactoring.
The Union Pipe Operator Changed How Annotations Read
Before Python 3.10, expressing that a value could be one of two types required:
from typing import Union, Optional
def process(value: Union[int, str]) -> Optional[str]:
    ...
From Python 3.10 onward, PEP 604 reduces this to:
def process(value: int | str) -> str | None:
    ...
The verbosity objection to annotations—that they clutter the signature more than they clarify it—weakens substantially with the pipe syntax. The annotation now reads almost like English: "value is either an int or a string."
Literal, Final, and ClassVar: The Constructs You Will Reach for Early
Three constructs from the typing module come up repeatedly in real code and are worth knowing before you get far into annotation work. Literal lets you constrain a parameter or return value to a specific set of values rather than just a broad type like str. This is particularly useful for functions that accept a fixed set of string options—status codes, direction labels, mode flags—where a plain str annotation would be technically accurate but too permissive:
from typing import Literal
def set_alignment(direction: Literal["left", "center", "right"]) -> None:
    # IDEs and type checkers flag any value outside these three
    ...
Final signals that a variable should not be reassigned after its initial assignment. It is the annotation equivalent of a constant—a way to communicate intent without reaching for a class-level attribute or a naming convention like all-caps. Type checkers will flag any attempt to reassign a Final-annotated name. ClassVar serves a similar narrowing function specifically for class bodies: it tells the type checker that a particular attribute belongs to the class itself rather than to instances, which matters for static analysis and for tools that generate __init__ signatures (like dataclass).
from typing import Final, ClassVar
from dataclasses import dataclass
MAX_RETRIES: Final = 3  # reassignment is a type error
MAX_RETRIES = 5  # mypy: Cannot assign to final name "MAX_RETRIES"

@dataclass
class Connection:
    DEFAULT_TIMEOUT: ClassVar[float] = 30.0  # class-level, not per-instance
    host: str
    port: int
None of these constructs changes runtime behavior. Their value is entirely in what they communicate to the type checker and to the next developer reading the code. Literal in particular is effective at eliminating a class of bugs where a string parameter is passed with a valid type but an invalid value—something no amount of str annotation alone will catch.
When you annotate a function parameter as list[str] in Python, what happens at runtime if a caller passes a list[int] instead?
Annotations and dataclasses: More Than Documentation
So far the examples have treated annotations as information that tools observe passively. The @dataclass decorator is the clearest example of annotations that actually drive behavior. When you decorate a class with @dataclass, Python reads the class-level annotations at definition time and uses them to generate __init__, __repr__, and __eq__ methods automatically:
from dataclasses import dataclass, field
@dataclass
class Order:
    order_id: int
    items: list[str] = field(default_factory=list)
    total: float = 0.0
    fulfilled: bool = False

# Python generates this __init__ for you:
# def __init__(self, order_id: int, items: list[str] = ...,
#              total: float = 0.0, fulfilled: bool = False) -> None:

o = Order(order_id=42)
print(o)  # Order(order_id=42, items=[], total=0.0, fulfilled=False)
Without annotations, @dataclass has nothing to work from—the decorator depends entirely on the class-level annotations to know which attributes exist, in what order, and with what defaults. This makes dataclasses one of the most concrete arguments for annotating class bodies: the annotation is not just documentation, it is the specification from which executable code is generated at class definition time.
The same principle extends to Pydantic models. A BaseModel subclass reads its field annotations at import time to build validators, generate JSON schemas, and power FastAPI's automatic request parsing. Forgetting an annotation on a Pydantic model field does not just confuse the type checker—it means the field is not included in the model at all. In these frameworks, annotations graduate from optional metadata to required structural definitions.
This distinction matters when thinking about annotation strategy: for plain Python functions and scripts, annotations are genuinely optional and can be added incrementally. For code that uses @dataclass, Pydantic, or similar annotation-driven frameworks, they are load-bearing. The runtime behavior of those systems depends on them being present and correct.
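The load-bearing point can be demonstrated with a few lines of stdlib-only code: a toy validator (a hypothetical stand-in for what Pydantic does at scale) that reads class-level annotations through get_type_hints and checks instances against them:

```python
from typing import get_type_hints

def validate(obj: object) -> None:
    # Read the class-level annotations and check each attribute against them.
    # A sketch: handles only plain (non-generic) annotated types.
    for name, expected in get_type_hints(type(obj)).items():
        value = getattr(obj, name)
        if not isinstance(value, expected):
            raise TypeError(
                f"{name}: expected {expected.__name__}, got {type(value).__name__}"
            )

class Point:
    x: int
    y: int

    def __init__(self, x: int, y: int) -> None:
        self.x = x
        self.y = y

validate(Point(1, 2))  # passes silently
try:
    validate(Point(1, "2"))  # type: ignore[arg-type]
except TypeError as exc:
    print(exc)  # y: expected int, got str
```

Remove the annotations from Point and the validator has nothing to check, which is precisely the sense in which they are structural rather than decorative.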
Understanding Any and TypeVar
Two typing constructs trip up intermediate Python developers more than any others: Any and TypeVar. They seem like technical edge cases but come up constantly in real annotation work.
What Any actually means
Any is a special form in typing that is simultaneously compatible with every other type. A variable typed as Any can be assigned a value of any type without a type error, and it can be used anywhere any type is expected. Static checkers effectively opt out of checking it:
from typing import Any
def process(data: Any) -> Any:
    # mypy and Pyright will not flag anything done with `data`
    return data.whatever.you.like  # no error raised by type checker
Any is not the same as object. Annotating a parameter as object tells the type checker "any Python value is valid here, but I can only use the methods defined on object." Annotating it as Any tells the type checker "skip checking this entirely." The distinction matters: object is a real type constraint that causes errors downstream if you try to call type-specific methods; Any silences the checker completely.
There are legitimate reasons to reach for Any. When calling an untyped third-party library that has no stub files, the values coming back are implicitly typed as Any. During incremental annotation of a large codebase, using Any as a placeholder in functions you are not ready to annotate fully lets you commit the rest of the annotations without mypy errors blocking the CI pipeline. The problem arises when Any spreads: a function that returns Any passes that silence forward to every caller, gradually undermining the type system across the codebase. Running mypy --disallow-any-generics or reviewing for Any in code review helps keep its use intentional.
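The object-versus-Any distinction is easiest to see side by side. In this sketch (the function names are illustrative), the object-typed function must narrow before using str methods, while the Any-typed one compiles under the checker no matter what is done with the value:

```python
from typing import Any

def handle_object(value: object) -> str:
    # `object` is a real constraint: value.upper() here would be a
    # checker error, so we narrow before using str-specific methods
    if isinstance(value, str):
        return value.upper()
    return repr(value)

def handle_any(value: Any) -> str:
    # Any is the absence of a constraint: the checker accepts any
    # attribute access, and mistakes surface only at runtime
    return value.upper()

print(handle_object(42))   # "42"
print(handle_any("hi"))    # "HI"
```

Calling handle_any with a non-string crashes at runtime with an AttributeError the checker never saw coming; handle_object cannot be written that way in the first place.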
What TypeVar is for
Consider a function that takes a list and returns its first element. How do you annotate the return type? If the input is list[str], the output should be str. If the input is list[int], the output should be int. A return type of Any would work but would lose precision. This is exactly the problem TypeVar solves:
from typing import TypeVar
T = TypeVar("T")
def first(items: list[T]) -> T:
    return items[0]

result = first([1, 2, 3])  # type checker infers: int
name = first(["a", "b"])   # type checker infers: str
TypeVar creates a type variable that, within a single function call, must be consistent. The checker sees "whatever type T is on the input list, the return value is the same type" and infers accordingly. Python 3.12's PEP 695 syntax makes this cleaner by allowing def first[T](items: list[T]) -> T: directly in the function definition, without the separate TypeVar declaration. Both approaches do the same thing. For developers who encounter TypeVar in existing code or library stubs and are not sure what they are looking at, the short answer is: it is a placeholder that links two or more positions in a signature together by type.
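Beyond the plain placeholder, a TypeVar can also be constrained to a fixed set of types. The stdlib's own AnyStr pattern is the canonical example, and it shows the linked-positions idea clearly:

```python
from typing import TypeVar

# Constrained type variable: each call must resolve to str OR bytes,
# consistently across all three positions in the signature
AnyStr = TypeVar("AnyStr", str, bytes)

def concat(a: AnyStr, b: AnyStr) -> AnyStr:
    return a + b

print(concat("spam", "eggs"))    # str in, str out
print(concat(b"spam", b"eggs"))  # bytes in, bytes out
# concat("spam", b"eggs") is rejected by the checker: str and bytes mixed
```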
Annotating *args, **kwargs, and Overloaded Signatures
Two practical questions come up quickly once annotation work starts on real codebases: how to type variadic arguments, and how to handle functions with fundamentally different return types depending on how they are called.
Annotating *args and **kwargs
When annotating *args and **kwargs, the annotation describes the type of each individual item, not the container. Annotating *args: int means every positional argument passed is an int; inside the function, args has type tuple[int, ...]. Annotating **kwargs: str means every keyword argument value is a str; inside the function, kwargs has type dict[str, str]:
def log_event(event: str, *tags: str, **metadata: int) -> None:
    # tags is tuple[str, ...] inside this function
    # metadata is dict[str, int] inside this function
    print(event, tags, metadata)

log_event("login", "auth", "web", user_id=42, session_id=7)
When the arguments are genuinely heterogeneous and cannot be described with a single type, *args: Any and **kwargs: Any are the pragmatic choice. Unpack and TypeVarTuple, introduced by PEP 646 (Python 3.11), provide a more precise mechanism for functions that accept a fixed, typed sequence of positional arguments—particularly useful when wrapping other typed functions—though they are an advanced feature and the *args: type pattern covers the common cases cleanly.
When a function needs @overload
Some functions behave differently depending on how they are called, returning different types for different argument combinations. A simple example: a function that returns a string when passed decode=True and bytes otherwise. A union return type of str | bytes is technically correct but loses precision—the caller cannot tell from the annotation which type they will receive for a given call. @typing.overload solves this:
from typing import overload, Literal
@overload
def read_data(source: str, decode: Literal[True]) -> str: ...
@overload
def read_data(source: str, decode: Literal[False]) -> bytes: ...

def read_data(source: str, decode: bool) -> str | bytes:
    raw = _fetch(source)
    return raw.decode() if decode else raw
The overloaded signatures (decorated with @overload and containing only ... as their body) exist solely for the type checker. The actual implementation below them contains the real logic. When a caller passes decode=True, the checker resolves the return type as str. When they pass decode=False, it resolves as bytes. The caller gets a precise type rather than a union they have to narrow themselves.
@overload is a pattern you will encounter frequently in the standard library's own stub files and in well-typed third-party libraries. Knowing it exists saves time when annotating functions that genuinely have multiple distinct signatures, and helps decode what you are looking at when reading typed library source.
The most immediately felt effect of adding type annotations is what happens in your editor. IDE features that depend on type information include autocompletion, parameter hints, hover documentation, go-to-definition accuracy, and inline error highlighting. Without annotations, an IDE must infer what it can from variable assignments and usage patterns—often correctly, but never completely. With annotations, the IDE receives a direct contract from the developer.
The two most widely used Python IDEs handle this differently under the hood. VS Code delegates type analysis to Pylance, Microsoft's language server built on Pyright, which runs incrementally in the background and surfaces errors inline as you type. PyCharm ships with its own built-in type inference engine that has existed since before PEP 484 and has evolved alongside the typing spec; it provides autocompletion, parameter inspection, and type-mismatch highlighting without requiring a separate language server installation. The 2024 Typed Python Survey found VS Code to be the most popular IDE, with the most common individual configuration being VS Code plus mypy, followed by PyCharm plus mypy. Notably, PyCharm's built-in checker and Pyright/Pylance diverge on some edge cases—particularly around Protocol variance and generic constraints—so teams working across both editors occasionally encounter inconsistencies that are worth flagging in code review.
Consider a function that returns a dict[str, int]. Once annotated, VS Code via Pylance will offer autocompletion on the return value: the dictionary methods (.items(), .get(), .update()) with correct type signatures on each. Without the annotation, the IDE either guesses or offers a generic object completion. This distinction becomes stark when working with custom classes or dataclasses where the IDE needs the annotation to know the structure.
from dataclasses import dataclass
@dataclass
class Product:
    name: str
    price: float
    in_stock: bool

def get_product(product_id: int) -> Product:
    # IDE now knows the return type is Product
    # Autocompletion offers .name, .price, .in_stock
    return fetch_from_db(product_id)
The IDE can also catch type mismatches inline as you type. If a function expects a list[str] and you pass a list[int], the editor highlights the error immediately—no run, no test, no traceback. This feedback loop tightens the development cycle in a way that matters at scale.
Refactoring is the other major area. When a function signature changes, IDEs with type information can identify every call site where the old signature was used and flag the ones that no longer match. Without type annotations, a rename or parameter type change might pass through the codebase silently until a runtime error surfaces in a code path that tests never hit.
If you are starting to add annotations to an existing project, begin with the functions that other code calls most frequently. Annotating high-traffic interfaces gives the IDE the most leverage immediately and surfaces the most type errors on first analysis.
The 2025 Typed Python Survey found that over 400 respondents now use LLM chat tools to help with Python typing questions, and nearly 300 use in-editor LLM suggestions while annotating code. This matters practically: LLMs are particularly effective at suggesting correct TypedDict structures, filling in complex Callable signatures, and explaining error messages from mypy or Pyright. One limitation worth knowing: LLMs have training cut-off dates and may not know the latest PEP syntax (such as PEP 695 generics or the TypeForm construct from PEP 747). Always verify generated annotations against the official Python typing documentation for your target Python version.
Do Annotations Have a Runtime Cost? The from __future__ Question
The claim that type annotations have "no runtime cost" requires a qualifier. In Python 3.10 and earlier, annotation expressions are evaluated eagerly at module import time—they are executed as real Python expressions the moment the class or function definition is loaded. For lightweight annotations like str or int, this is trivial. For forward references (annotating a class before it has been fully defined) or for deeply nested generic expressions, eager evaluation can cause NameError at import time or add measurable overhead in modules imported frequently.
PEP 563, accepted in Python 3.7 and enabled via from __future__ import annotations, changed evaluation to be lazy: annotations are stored as strings and only evaluated on demand. This resolved the forward reference problem cleanly and reduced import overhead for annotation-heavy modules. PEP 563 was originally planned to become the default behavior in Python 3.10, but that plan was withdrawn after significant pushback from libraries—particularly Pydantic and other frameworks that rely on runtime annotation introspection. A lazy string cannot be introspected without an explicit evaluation step, which broke their APIs.
The replacement proposal, PEP 649, takes a different approach: annotations are deferred using a descriptor mechanism rather than stringification, preserving runtime introspection without the eager evaluation cost. PEP 649 was implemented in Python 3.14 (released October 2025), making deferred evaluation the default behavior without any import required. For projects still supporting Python 3.13 and earlier, the practical decision remains straightforward:
# Python 3.7 - 3.13: opt-in lazy evaluation (Python 3.14+ defers by default)
from __future__ import annotations
class Node:
    def __init__(self, next: Node | None = None) -> None:
        # Without the __future__ import on Python 3.13 and earlier,
        # "Node" would raise NameError here: the annotation is evaluated
        # when the def statement executes, before the class exists
        self.next = next
If you use Pydantic, dataclasses with field defaults, or any library that reads __annotations__ at runtime to construct objects, verify that from __future__ import annotations does not break anything before adding it globally. These libraries call typing.get_type_hints() to resolve string annotations, which requires all referenced names to be in scope at call time—a constraint that occasionally fails in complex import graphs.
Annotated: Attaching Metadata to Type Hints
typing.Annotated, introduced in Python 3.9 via PEP 593, lets you attach arbitrary metadata to a type annotation without changing the type itself from the checker’s perspective. The first argument is the type; subsequent arguments are metadata that tools, frameworks, or validators can read:
from typing import Annotated
from pydantic import BaseModel, Field
class Product(BaseModel):
    name: Annotated[str, Field(min_length=1, max_length=120)]
    price: Annotated[float, Field(gt=0)]
    sku: Annotated[str, Field(pattern=r"^[A-Z]{3}-\d{4}$")]
Static type checkers see Annotated[str, ...] as simply str—the metadata is transparent to mypy and Pyright. Pydantic v2 and FastAPI, however, read the Field metadata at runtime to apply validation constraints, generate OpenAPI schema properties, and build form validation logic. This is why Annotated has become central to FastAPI’s dependency injection system and Pydantic v2’s model definition style: it lets the same annotation simultaneously satisfy a static checker and carry runtime-enforceable constraints, without the two layers conflicting.
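The two-layer behavior can be reproduced with the stdlib alone. In this sketch, MaxLen is a hypothetical marker class standing in for pydantic's Field; the checker sees plain str, while runtime code opts in to the metadata with include_extras:

```python
from typing import Annotated, get_type_hints

class MaxLen:
    # Hypothetical marker object standing in for Field(...) metadata
    def __init__(self, limit: int) -> None:
        self.limit = limit

class Profile:
    bio: Annotated[str, MaxLen(200)]

# Checkers and plain get_type_hints() see only `str`...
print(get_type_hints(Profile))  # {'bio': <class 'str'>}

# ...while include_extras=True preserves the Annotated wrapper and metadata
rich = get_type_hints(Profile, include_extras=True)
print(rich["bio"].__metadata__[0].limit)  # 200
```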
How to Add Type Annotations to an Existing Project
The 2025 survey’s most repeated positive theme was optionality: annotations can be added incrementally, file by file, function by function. Python will not break if half a codebase is annotated and half is not. But having a deliberate strategy makes the process sustainable rather than overwhelming. The options below go deeper than the usual advice of "start with public APIs and use MonkeyType."
Use Pyright's reportUnknown* rules as a precision diagnostic
When you first enable Pyright on an unannotated codebase, the default output is noisy. A more targeted approach is to enable only the reportUnknownVariableType, reportUnknownParameterType, and reportUnknownMemberType rules in your pyrightconfig.json. These produce a focused list of every place where the checker has genuinely lost type information—not just where annotations are absent, but where Any is propagating from unannotated third-party code into your own. This is a better prioritization signal than counting unannotated functions: it tells you where missing annotations are causing cascading inference failures downstream.
// pyrightconfig.json
{
  "reportUnknownVariableType": "warning",
  "reportUnknownParameterType": "warning",
  "reportUnknownMemberType": "warning"
}
Annotate from the call graph outward, not from the file structure inward
The standard advice is to start with your most-imported modules. That is correct in principle but often leads teams to annotate utility code first and leave the domain layer—the models, services, and orchestration functions that call everything else—untyped the longest. A more effective approach is to map the call graph for a single user-facing feature from top to bottom, then annotate every function in that chain before moving to the next feature. This produces end-to-end type coverage for complete code paths instead of wide-but-shallow coverage across many modules. It also means each annotated chain is useful to the type checker immediately, rather than waiting for dependent layers to catch up.
Use reveal_type() to audit propagation before committing annotations
Before writing an annotation you are unsure about, insert a reveal_type(expr) call at the relevant point in the function body and run mypy or Pyright. The checker will print what type it has inferred for that expression. This is particularly useful when the inferred type is narrower or wider than expected—for instance, when a function that passes through values from a dictionary is inferring str | int | None when you expected str. Seeing the actual inferred type before writing the annotation avoids the common mistake of annotating a function with what you think it returns rather than what it actually returns.
def get_display_name(user: dict[str, str]) -> str:
    name = user.get("name") or user.get("username")
    reveal_type(name)  # checker: Revealed type is "str | None"
    return name or "Anonymous"
reveal_type() is understood by both mypy and Pyright. Since Python 3.11, it also exists as a real function in the typing module—when imported and called at runtime, it prints the type of the value and returns it. On Python 3.10 and earlier, calling reveal_type() without an import raises a NameError at runtime. In either case, remove it before committing; it is a diagnostic tool, not a permanent annotation construct.
Use TYPE_CHECKING to break import cycles without deferring everything
One of the friction points that slows annotation adoption in larger projects is circular import errors: annotating module A with types from module B causes import cycles when B also imports from A. The conventional workaround is string-based forward references, but these obscure intent and break runtime introspection. A cleaner solution for imports that exist purely to satisfy type annotations is the TYPE_CHECKING constant:
from __future__ import annotations
from typing import TYPE_CHECKING
if TYPE_CHECKING:
    from myapp.models import UserRecord  # only imported by the checker, not at runtime

def process(record: UserRecord) -> None:
    ...
Because TYPE_CHECKING is False at runtime, the import block executes only when a type checker is analyzing the file. Combined with from __future__ import annotations (which makes all annotation expressions lazy strings), this pattern resolves most circular import problems without restructuring the module graph.
Introduce mypy --strict incrementally using per-module overrides
Running mypy or Pyright in lenient mode first to understand the scope remains the right starting point for teams new to type checking:
# Start permissive: understand the shape of the problem first
mypy --ignore-missing-imports --no-strict-optional src/
# Graduate toward strict over time, module by module
mypy --strict src/core/models.py
The less commonly used approach is to encode this progression in mypy.ini using per-module overrides, so stricter checking is automatically applied to modules that have been fully annotated without requiring developers to remember which modules are ready. Note that strict itself is a command-line shortcut, not a per-module config option; in the config file, per-module sections enable the individual flags that make up strict mode. This also makes the strictness level part of the repository's source-controlled configuration rather than a flag passed on the command line:
# mypy.ini
[mypy]
ignore_missing_imports = True
strict_optional = False

[mypy-myapp.core.models]
disallow_untyped_defs = True
disallow_incomplete_defs = True
disallow_any_generics = True

[mypy-myapp.core.services]
disallow_untyped_defs = True
disallow_incomplete_defs = True
For large legacy codebases, use MonkeyType with targeted test selection
MonkeyType can observe your test suite at runtime and generate annotation stubs based on the types it actually sees. The output requires manual review—types observed in tests may not cover every code path—but it substantially shortens the initial annotation work on a project where functions number in the hundreds. A less commonly used refinement: rather than running MonkeyType over the entire test suite, run it against the tests for a single module at a time and commit those stubs before moving to the next. This keeps the generated stubs scoped and reviewable rather than producing hundreds of annotations at once.
Add py.typed when publishing, and use stub-only packages for untyped dependencies
If you are publishing a library and want downstream users and their IDEs to receive type information from your package, add an empty py.typed marker file to your package directory and declare it in your pyproject.toml. This signals to PEP 561-compliant tools—mypy, Pyright, Pylance—that your package ships inline types and should be fully checked rather than treated as untyped third-party code:
# pyproject.toml
[tool.setuptools.package-data]
mypackage = ["py.typed"]
The parallel problem—untyped third-party dependencies silently infecting your type coverage with Any—has a less commonly used solution: stub-only packages from the typeshed-client ecosystem or from types-* stub distributions on PyPI. Many popular packages that ship without py.typed have community-maintained stubs available separately. Installing types-requests, types-boto3, or similar stubs is often faster and more precise than writing local *.pyi stub files by hand, and it is one of the most effective ways to stop Any from propagating across import boundaries.
The following function is meant to accept a list of user IDs and return a formatted summary string. It has been partially annotated. One annotation contains a type error that mypy would flag. Read carefully, then pick the line where the bug lives.
from typing import Final
MAX_DISPLAY: Final = 5 # line 3
def summarize_users(
    user_ids: list[int],  # line 6
    label: str = "Users",  # line 7
) -> str:  # line 8
    count: int = len(user_ids)
    shown: list[int] = user_ids[:MAX_DISPLAY]
    parts: list[str] = [str(uid) for uid in shown]
    summary: str = f"{label}: {', '.join(parts)}"
    if count > MAX_DISPLAY:
        summary += f" (+{count - MAX_DISPLAY} more)"
    MAX_DISPLAY = 10  # line 15
    return summary
Consider what Final means for a name, and what happens when you try to change it.

The Tooling Ecosystem: mypy, Pyright, and Beyond
Type annotations on their own are inert—they only change the developer experience when a tool reads them. The two dominant tools are mypy and Pyright, and they serve overlapping but distinct roles.
Mypy is the original static type checker for Python and remains the command-line default for many projects. It reads source files, follows imports, and reports every type inconsistency it finds based on the annotations present. Running mypy --strict on a codebase is one of the fastest ways to identify missing annotations and subtle type mismatches before a test suite runs.
# example.py
def total(numbers: list[int]) -> int:
    return sum(numbers)

total(["1", "2", "3"])  # Type error

# Running: mypy example.py
# example.py:6: error: List item 0 has incompatible type "str"; expected "int" [list-item]
# Found 1 error in 1 file (checked 1 source file)
Pyright is Microsoft's type checker, written in TypeScript and optimized for real-time feedback in editors. It powers Pylance, the Python language server that ships with VS Code. Pyright is generally faster than mypy for large codebases and provides incremental checking—it only re-analyzes files that changed. The 2024 Typed Python Survey (conducted by Meta, Microsoft, and JetBrains across 1,083 developers) found that 67% used mypy and 38% used Pyright, with 24% using both.
Beyond static checkers, two other tools extend what annotations can do at runtime. Pydantic uses annotations to validate data as it enters a program—particularly important in API development where external data cannot be trusted to match the expected types. FastAPI, one of the most widely adopted Python web frameworks, builds its entire request parsing and automatic API documentation generation on top of Pydantic models, which are themselves built on type annotations. Beartype is a newer library that decorates functions and enforces their type annotations at call time, raising exceptions when passed values violate the declared types.
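The mechanism behind runtime enforcers is easy to sketch with the standard library alone. The decorator below is a hypothetical, heavily simplified illustration of the Beartype-style approach, not Beartype's actual API: it reads the function's annotations with typing.get_type_hints and checks each plain-class annotation at call time.

```python
import functools
import inspect
from typing import get_type_hints

def enforce(func):
    """Check annotated parameters against passed values at call time.

    A minimal sketch of runtime enforcement; real tools such as Beartype
    also handle generics, unions, and return types, which this does not.
    """
    hints = get_type_hints(func)
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            # Only enforce plain classes; skip unannotated and generic params.
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(
                    f"{func.__name__}() argument {name!r} expected "
                    f"{expected.__name__}, got {type(value).__name__}"
                )
        return func(*args, **kwargs)

    return wrapper

@enforce
def greet(name: str) -> str:
    return f"Hello, {name}!"
```

With this in place, greet("Ada") succeeds, while greet(42) raises a TypeError the moment it is called rather than failing somewhere downstream.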
The 2025 survey added new detail on the checker landscape. Mypy remained the single most widely used type checker, but its share dropped to 58%—down from 67% in the 2024 survey—as Rust-based type checkers including Pyrefly (from Meta) and ty (from Astral) collectively reached over 20% usage. Both Pyrefly and ty provide extremely fast static type checking and language server protocol support; the speed improvement addresses one of the most consistently cited pain points in the 2024 and 2025 surveys: the slow performance of mypy on large codebases.
Annotating a function parameter as list[str] does not stop a caller from passing a list[int] at runtime—Python will not raise an error. Only a static checker or a runtime enforcer like Pydantic or Beartype will catch it. Annotations are only as useful as the tools reading them.
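A minimal sketch of that gap (the function name here is illustrative):

```python
def count_ids(user_ids: list[int]) -> int:
    # The annotation promises list[int]; the interpreter never checks it.
    return len(user_ids)

# A static checker flags this call, but at runtime it succeeds silently.
print(count_ids(["a", "b", "c"]))  # 3
```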
TypedDict and Protocol: Annotations for Structural Contracts
Two constructs from the typing module deserve particular attention because they solve problems that basic annotations cannot. TypedDict lets you annotate dictionaries with a fixed set of keys and their corresponding types, giving IDEs full autocompletion on dictionary access patterns without converting the code to a class:
```python
from typing import TypedDict

class UserRecord(TypedDict):
    id: int
    name: str
    email: str

def send_welcome(user: UserRecord) -> None:
    # IDE offers ["id"], ["name"], ["email"] key autocompletion
    print(f"Welcome, {user['name']}")
```
Protocol enables structural subtyping—sometimes called duck typing with documentation. Instead of requiring a class to inherit from a base, a Protocol specifies the methods and attributes a type must have. Any class that implements those members satisfies the protocol, with no inheritance required. This is particularly useful for annotating callbacks, plug-in interfaces, and utility functions that only need a subset of an object's behavior.
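A minimal sketch of that idea, with illustrative names: the class below never inherits from the protocol, yet satisfies it purely by shape. Adding runtime_checkable additionally makes isinstance work against the protocol.

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Closeable(Protocol):
    def close(self) -> None: ...

class TempBuffer:
    """Satisfies Closeable structurally, with no inheritance."""

    def __init__(self) -> None:
        self.closed = False

    def close(self) -> None:
        self.closed = True

def shutdown(resource: Closeable) -> None:
    # Any object with a matching close() method is accepted by the checker.
    resource.close()

buf = TempBuffer()
shutdown(buf)
print(isinstance(buf, Closeable))  # True, thanks to runtime_checkable
```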
What Developers Say: 2024–2025 Typed Python Survey Data
The 2024 Typed Python Survey, conducted by Meta, Microsoft, and JetBrains across 1,083 Python developers, found that better IDE support was rated the single most useful feature of type hints by 59% of respondents, ahead of bug prevention at 49.8% and documentation value at 49.2%. VS Code was the most popular IDE, and the most common configuration was VS Code with mypy, followed by PyCharm with mypy. Notably, the 2024 survey found 88% of respondents reported using type hints always or often—a figure that held near-steady in the following year.
The 2025 Typed Python Survey, conducted by JetBrains, Meta, and the broader Python typing community (1,241 respondents—a 15% increase), found 86% of respondents reported using type hints always or often. Developers with five to ten years of Python experience showed the highest adoption at 93%, while senior developers with ten or more years came in at 80%—a finding the survey authors note may reflect experience with pre-annotation codebases or more frequent work on legacy projects. When asked what they valued about the type system, the four themes cited most were optionality and gradual adoption, improved readability and documentation, enhanced IDE support, and earlier bug detection during development or refactoring. The survey also received blunt negative responses—a signal that dissatisfaction with the type system, while a minority view, remains a real part of the ecosystem.
Both surveys document the challenges in consistent detail. Third-party library support was the most cited pain point across both years: many popular packages—particularly in data science (NumPy, Pandas, Django)—have incomplete or inaccurate stub files, meaning the IDE and type checker lose context at the boundary of those imports. Advanced generics and complex callable types were also consistently cited as difficult to express correctly. The 2025 survey added a notable new pain point: ecosystem fragmentation, with developers calling for a single official type checker given the inconsistencies between mypy and Pyright behavior. These are genuine friction points, not minor complaints, and they explain why annotation adoption, while high among experienced Python developers, still encounters resistance in certain domains.
Frequently Asked Questions
Do type annotations change how Python runs code?
No. Python type annotations have no runtime enforcement by default. The interpreter stores them in __annotations__ but does not check them during execution. Tools like mypy and Pyright perform static checking before the code runs, while libraries like Pydantic and Beartype can enforce them at runtime on demand.
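Both halves of that answer can be seen directly from the interpreter; a small sketch:

```python
def scale(value: float, factor: float = 2.0) -> float:
    return value * factor

# Annotations are stored, not enforced:
print(scale.__annotations__)
# {'value': <class 'float'>, 'factor': <class 'float'>, 'return': <class 'float'>}

# A mismatched argument passes through untouched at runtime;
# str * int is repetition, so no error is raised:
print(scale("ab", 2))  # abab
```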
What is the difference between mypy and Pyright?
Both are static type checkers for Python. Mypy is the original and most widely used, favored for CLI use and CI pipelines (58% of developers in the 2025 Typed Python Survey). Pyright is Microsoft's type checker written in TypeScript; it powers Pylance inside VS Code and is optimized for real-time incremental analysis in the editor. The 2024 survey found 67% of developers used mypy and 38% used Pyright, with 24% using both.
When were type annotations added to Python?
The annotation syntax was introduced by PEP 3107, drafted in 2006 for Python 3.0, which shipped in 2008. The formal type hinting system — with the typing module, Optional, Union, and related constructs — was standardized by PEP 484 in Python 3.5 (2015). Variable annotations arrived in Python 3.6 via PEP 526.
How many Python developers use type hints?
According to the 2025 Typed Python Survey (1,241 respondents, conducted by JetBrains, Meta, and the Python typing community), 86% of developers use type hints always or often. The 2024 survey of 1,083 developers, conducted by Meta, Microsoft, and JetBrains, found 88% using them always or often.
What is TypedDict?
TypedDict (from the typing module) lets you annotate dictionaries with a fixed set of named keys and their types. IDEs use this to offer autocompletion on dictionary key access, similar to class attribute access, without requiring you to convert the dictionary to a class.
What is the X | Y union syntax?
PEP 604, adopted in Python 3.10, allows union types to be written as X | Y instead of Union[X, Y]. For example, str | None replaces Optional[str]. This makes multi-type annotations significantly more readable and is available in annotation expressions at runtime in Python 3.10+.
What does from __future__ import annotations do?
It enables PEP 563 lazy evaluation: annotation expressions are stored as strings rather than evaluated at import time. This resolves forward reference errors and reduces import overhead in annotation-heavy modules. It is available from Python 3.7 onward. Note that some libraries relying on runtime introspection — such as Pydantic v1 — may behave differently with this import enabled. PEP 649, implemented in Python 3.14 (released October 2025), replaces PEP 563's approach with a descriptor-based deferral that preserves runtime introspection and makes deferred evaluation the default.
What is typing.Annotated used for?
typing.Annotated (PEP 593, Python 3.9) lets you attach arbitrary metadata to a type annotation. The first argument is the actual type; subsequent arguments are metadata that tools like Pydantic v2 or FastAPI can read at runtime to apply validation constraints or generate API schemas. Static type checkers treat Annotated[str, ...] as simply str, so the metadata is invisible to mypy and Pyright.
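A small sketch of the mechanics. The metadata string below is purely illustrative, not any library's convention; Pydantic and FastAPI define their own metadata objects.

```python
from typing import Annotated, get_type_hints

# The first argument is the real type; everything after it is metadata.
Port = Annotated[int, "range:1-65535"]

def bind(port: Port) -> int:
    return port

# Static checkers see Port as plain int; runtime tools can read the metadata:
hints = get_type_hints(bind, include_extras=True)
print(hints["port"].__metadata__)  # ('range:1-65535',)
```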
How do I add annotations to a large untyped codebase?
Start by enabling Pyright's reportUnknownVariableType and related rules as warnings — they identify exactly where Any is propagating from unannotated code into your own, which is a more useful prioritization signal than simply counting unannotated functions. Annotate from the call graph outward (top-to-bottom for a single feature chain) rather than by file structure. Use reveal_type() to audit inferred types before writing annotations, and TYPE_CHECKING blocks to resolve circular import issues cleanly. Run mypy with --ignore-missing-imports and --no-strict-optional initially to see scope without being blocked. Encode per-module strictness in mypy.ini rather than command-line flags so the progression is source-controlled. For large legacy codebases, run MonkeyType against individual module test selections rather than the full suite, and install types-* stub packages for untyped third-party dependencies to prevent Any from crossing import boundaries.
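One of those techniques, sketched in isolation: a TYPE_CHECKING block makes an import visible to the type checker without executing it at runtime, which is what breaks import cycles. Here decimal stands in for a module that would otherwise form a cycle with this one.

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen by mypy/Pyright only; never executed by the interpreter,
    # so a circular import here costs nothing at runtime.
    from decimal import Decimal

def render_price(price: "Decimal") -> str:
    # The quoted annotation is resolved by the checker, not the interpreter.
    return f"${price}"
```

At runtime this module never imports decimal at all; the checker still verifies every call site against Decimal.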
What is the difference between Any and object?
Both accept any Python value, but they behave differently for type checkers. Annotating a parameter as object tells the checker any value is valid but constrains you to only the methods defined on object. Annotating it as Any tells the checker to skip checking entirely — no errors will be raised for anything done with that value. Any is useful as a migration placeholder or when bridging untyped third-party code, but it propagates silence forward to every caller.
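The difference in one small sketch (function names are illustrative):

```python
from typing import Any

def via_object(value: object) -> str:
    # value.upper() here would be rejected by a checker: object has no upper()
    return str(value)

def via_any(value: Any) -> str:
    # The checker accepts any operation on an Any value, even unsafe ones.
    return value.upper()

print(via_object(42))    # 42
print(via_any("hello"))  # HELLO
```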
How should *args and **kwargs be annotated?
Annotate *args and **kwargs with the type of each individual item, not the container. *args: int means every positional argument is an int — inside the function, args has type tuple[int, ...]. **kwargs: str means every keyword argument value is a str — inside the function, kwargs has type dict[str, str]. When arguments are genuinely heterogeneous, *args: Any and **kwargs: Any are the pragmatic fallback.
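A short sketch of both forms (function names are illustrative):

```python
def add_all(*args: int) -> int:
    # Inside the body, args has type tuple[int, ...]
    return sum(args)

def tag_all(**kwargs: str) -> list[str]:
    # Inside the body, kwargs has type dict[str, str]
    return [f"{key}={value}" for key, value in kwargs.items()]

print(add_all(1, 2, 3))                # 6
print(tag_all(env="prod", app="api"))  # ['env=prod', 'app=api']
```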
What is typing.overload for?
typing.overload lets you declare multiple distinct type signatures for a single function when it returns different types depending on how it is called. The overloaded variants — decorated with @overload and containing only ... as their body — exist solely for the type checker. The actual implementation below them contains the real logic. This gives callers a precise return type rather than a union they have to narrow themselves.
Are type annotations ever required?
For plain Python functions, annotations remain optional documentation that tools read passively. For @dataclass, they are load-bearing: the decorator reads the class-level annotations at definition time to generate __init__, __repr__, and __eq__ methods. Without annotations, @dataclass has nothing to work from. The same applies to Pydantic BaseModel subclasses — missing or incorrect annotations cause runtime failures, not just type checker warnings.
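The load-bearing case in miniature, with an illustrative class:

```python
from dataclasses import dataclass

@dataclass
class User:
    # These annotations are read at class-creation time to generate
    # __init__, __repr__, and __eq__; without them there are no fields.
    id: int
    name: str
    active: bool = True

u = User(1, "Ada")
print(u)                    # User(id=1, name='Ada', active=True)
print(u == User(1, "Ada"))  # True
```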
Key Takeaways
- Annotations are documentation that tools can read. Unlike docstrings, which are free-form text, type annotations follow a defined grammar that IDEs, static checkers, and runtime validators can all parse and act on. Adding them once pays dividends across autocomplete, error detection, and refactoring.
- The syntax has become genuinely readable. The shift from Union[X, Y] and Optional[X] to X | Y and X | None in Python 3.10, combined with the native generic syntax (list[str] instead of typing.List[str]) in 3.9, removed the verbosity that was one of the earliest objections to type hints.
- IDE support is the most immediately felt benefit. According to the 2024 Typed Python Survey, 59% of developers cited better IDE support as the single most useful thing type hints provide—ahead of bug prevention and documentation. Annotations give language servers the information they need to deliver accurate autocompletion, parameter hints, and refactoring tools.
- In annotation-driven frameworks, annotations are load-bearing. For plain Python functions, annotations remain optional and can be added incrementally. For @dataclass, Pydantic models, and FastAPI routes, the annotation is the specification from which executable code is generated. Missing or incorrect annotations in those contexts cause runtime failures, not just type checker warnings.
- Any silences the type checker; it does not improve types. Any has legitimate uses during gradual migration and when bridging untyped third-party code, but it spreads—a function returning Any passes that silence forward to every caller. TypeVar is the correct tool when you need a return type that is linked to an input type rather than discarded into Any.
- Static checkers and runtime validators serve different gaps. Mypy and Pyright catch errors before the code runs; Pydantic and Beartype catch errors when untrusted data enters the system at runtime. A production-grade Python project benefits from both layers.
- Adoption is high and growing, with real pain points remaining. The 2025 Typed Python Survey found 86% of developers using type hints always or often. Mypy's share of the checker market slipped to 58% as Rust-based tools like Pyrefly (Meta) and ty (Astral) passed 20% combined usage. The main friction points remain untyped third-party libraries, complex generics, and inconsistencies across checkers—areas the ecosystem is actively addressing.
Type annotations occupy an unusual position in the Python ecosystem: they are optional, have no runtime cost when unused, and yet have become the standard practice in any codebase where multiple people work or where long-term maintenance matters. They represent one of the clearest examples of a language feature that pays compound interest—small upfront cost, accumulating returns as the codebase and team grow.