The names are almost identical, they live in the same GitHub repository, and one of them literally depends on the other to function. Yet mypy and mypyc are doing entirely different jobs. Understanding what separates them — and what binds them together — changes how you think about Python type annotations and performance optimization at the same time.
The confusion is understandable. Search for either one and you land on the same documentation site, the same GitHub organization, and release notes that discuss both tools in a single changelog entry. But conflating them creates real problems: developers skip mypyc because they assume it is just mypy under a different name, or they add mypyc to a project expecting type-checking behavior and get something very different. Getting this distinction right matters both for how you structure your workflow and how you reason about Python's performance ceiling.
What mypy Is (and Is Not)
Mypy is a static type checker for Python. That sentence is precise and worth unpacking. Static means it analyzes your code without running it. Type checker means its sole job is to find type mismatches, report them, and exit. It does not modify, compile, or speed up your code in any way.
The project's documentation is explicit on this point: mypy performs static type checking and does not improve performance. That line has been true from mypy's early days through its current release. As of April 2026, the latest stable version is mypy 1.20.0, released March 31, 2026.
You install mypy, point it at your Python files, and it reads your type annotations — written according to PEP 484 — to warn you when something does not add up. A string passed where an integer is expected. A function returning None when the caller expects a list. A method called on a variable that could be None at that point in the control flow. Those warnings appear in your terminal. Then mypy exits, and your code runs exactly as it would have without mypy ever being involved.
According to the mypy GitHub repository, the tool reads PEP 484 type annotations and reports violations — its job begins and ends with warning you about type mismatches before your code ever runs.
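To make the division of labor concrete, here is a minimal sketch (the function and values are invented for illustration). Mypy would flag the second call during static analysis; plain CPython executes it without complaint.

```python
# demo.py -- a mismatch mypy reports but the interpreter never enforces
def describe(count: int) -> str:
    return f"{count} item(s)"

print(describe(3))    # fine for mypy and at runtime
print(describe("3"))  # mypy: incompatible type "str"; CPython runs it anyway
```

Running `mypy demo.py` reports the error and exits; running `python demo.py` prints both lines, because the annotation is never enforced at runtime.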
Mypy supports gradual typing, which means you can annotate one function at a time and leave the rest untyped. It works with the full typing module — generics, Union, Optional, Callable, TypeVar, Protocol, and more. Recent releases have brought meaningful improvements across the board. Mypy 1.16 added support for asymmetric property getter and setter types. Mypy 1.17 introduced optional exhaustive match checking via --enable-error-code exhaustive-match. Mypy 1.18 delivered roughly 40% speedup in self-checking performance through type caching and a new binary cache format — in extreme cases the improvement reaches 10x. Mypy 1.19, released November 2025, was the last version to support Python 3.9 and introduced the librt PyPI package as a new dependency for cache serialization. Mypy 1.20 (March 31, 2026) dropped Python 3.9 entirely, introduced substantially reworked type narrowing logic, enabled the binary fixed-format cache by default, added support for Python 3.14 t-strings (PEP 750), and expanded mypyc-compiled wheel availability to Windows ARM64 and CPython 3.14 free-threading builds.
Think of mypy as a proofreader who reads your manuscript before it goes to print. The proofreader catches consistency errors — a character whose name changes mid-chapter, a fact that contradicts an earlier statement — but does not rewrite a single word. The manuscript that goes to press is identical to the one the proofreader reviewed. That is the relationship between mypy and your Python program: mypy reads, flags, and exits. Nothing about your runtime changes because of it.
Looking ahead, the mypy team has publicly outlined plans for mypy 2.0 — the next feature release after 1.20. The development team has stated they intend to enable --local-partial-types by default in 2.0, and to enable --strict-bytes by default as well. Both changes typically require at most minor code adjustments. The intent is to avoid a Python 2-to-3 style migration by limiting 2.0 breaking changes to only what is technically necessary. (1.20 and 2.0 Release Planning, python/mypy GitHub). For teams managing large codebases, enabling --local-partial-types now in 1.20 is the recommended path to make the eventual 2.0 upgrade less disruptive.
Mypy has zero runtime overhead in your production code. It runs during development or in CI pipelines, then gets out of the way entirely. The type annotations it reads are syntactically valid Python and are ignored by the interpreter at runtime unless you explicitly inspect them through introspection.
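The "unless you explicitly inspect them" clause refers to standard introspection. A quick sketch (the `scale` function is illustrative):

```python
# Annotations are ordinary runtime metadata: the interpreter stores them
# but never enforces them. typing.get_type_hints retrieves what was stored.
import typing

def scale(value: float, factor: float = 2.0) -> float:
    return value * factor

print(typing.get_type_hints(scale))
# {'value': <class 'float'>, 'factor': <class 'float'>, 'return': <class 'float'>}
print(scale(3.0))  # 6.0
```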
What mypyc Is (and Is Not)
Mypyc is an ahead-of-time compiler. It takes Python source files that have type annotations and compiles them into C extension modules — the same format used by CPython's standard library for performance-critical components. Once compiled, those modules are loaded by CPython as native extensions, bypassing the normal Python interpreter overhead for the compiled code paths.
The mypyc documentation describes its scope as compiling Python modules to C extensions using standard type hints to generate fast code. The key phrase is "generates fast code" — that is something mypy never does. Mypyc treats the annotated language as a strict, gradually typed Python variant, restricting some dynamic features in exchange for performance while remaining mostly compatible with standard Python. (mypyc documentation)
Mypyc produces .so files on Linux and macOS and .pyd files on Windows. These are binary artifacts that you distribute like any other compiled extension. On a Linux x86-64 system, compiling a module called engine.py might produce a file named engine.cpython-312-x86_64-linux-gnu.so. That file is not Python anymore — it is compiled C that CPython loads as an import.
If mypy is the proofreader, mypyc is the typesetter who takes the finalized manuscript and casts it in lead type. The words are the same, but the form changes entirely — from flexible manuscript pages to fixed metal type that prints fast and consistently. Once cast, you cannot easily revise individual letters mid-run. That trade-off maps precisely to mypyc: your annotated Python logic is preserved, but flexibility features like monkey patching, pdb stepping, and dynamic attribute injection no longer apply to the compiled output.
```python
# setup.py — compiling selected modules with mypyc
from setuptools import setup
from mypyc.build import mypycify

setup(
    name='myproject',
    packages=['myproject'],
    ext_modules=mypycify([
        '--disallow-untyped-defs',  # mypy flag passed through
        'myproject/engine.py',
        'myproject/parser.py',
    ]),
)
```
The mypycify() call is the entry point. It accepts a list of Python files and optional mypy flags, runs the type checker internally, and then generates the C code and compiles it. You do not invoke mypy and mypyc as separate steps — mypyc calls mypy as part of its own pipeline.
How the Two Tools Relate
This is where precision matters. Mypyc does not merely use mypy's annotation syntax — it calls mypy directly as a library to perform type checking and type inference before generating any C code. The mypyc documentation is unambiguous: you cannot compile code that generates mypy type check errors. (mypyc: Differences from Python)
This dependency flows in one direction only. Mypy has no knowledge of mypyc and does not require it. Mypyc cannot function without mypy. In practice, the relationship is even more entangled than that one-way dependency suggests.
The mypy project itself has been compiled with mypyc since 2019. When you install mypy today with pip install mypy, you receive a mypyc-compiled binary wheel — not interpreted Python. The mypy 1.20 release notes confirm that mypyc compilation makes mypy 3–5x faster than the interpreted version. In the mypy 1.15 release notes, the team extended this availability to Linux ARM64 users. With 1.20, mypyc-accelerated wheels now also cover Windows ARM64 and CPython 3.14 free-threading builds, so pip install mypy delivers the compiled version across all major platforms.
This means mypy is simultaneously the most prominent user of mypyc and the tool mypyc depends on internally — a self-referential relationship that makes it harder to explain either tool in isolation.
The Core Differences at a Glance
The table below covers the key dimensions side by side. Each row compares how mypy and mypyc handle the same concern.

| Dimension | mypy | mypyc |
| --- | --- | --- |
| Role | Static type checker | Ahead-of-time compiler |
| Output | Error reports in your terminal | C extension modules (`.so` on Linux/macOS, `.pyd` on Windows) |
| Runtime effect | None; annotations stay unenforced | Annotations enforced; mismatches raise `TypeError` at runtime |
| Maturity | Production-stable | Alpha |

Runtime Behavior: Where Things Get Interesting
The most counterintuitive difference is what happens to type annotations at runtime. In standard Python — with or without mypy — type annotations are metadata. The interpreter largely ignores them. You can annotate a variable as int and assign a string to it at runtime, and Python will not complain. This is dynamic typing at work — mypy catches that mismatch during static analysis, but once you run your code, the annotation is gone from enforcement.
Mypyc changes this. When code is compiled, the generated C enforces type annotations at runtime. Pass a string to a compiled function that expects an integer, and you get a TypeError immediately, not through some subtle downstream failure. The mypyc documentation frames this as intentional: strict enforcement of type annotations at runtime results in better runtime type safety and easier debugging. (mypyc documentation)
In standard Python with mypy, your type annotations are like road signs — they tell drivers what is expected, but nothing stops a car from ignoring them. Mypy is a traffic inspector who checks the signs for consistency before the road opens. Mypyc is different: it replaces the road entirely with one that physically cannot be used incorrectly. If you try to drive the wrong vehicle on a compiled-class highway, you get a TypeError barrier immediately — not a sign you could have chosen to ignore.
This is why mypyc's runtime enforcement is not a safety net layered on top of Python — it is a fundamentally different execution model for the compiled portion of your code.
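What that execution model feels like can be sketched in pure Python. The decorator below is purely illustrative -- mypyc performs the equivalent check in generated C at the function boundary, with no decorator involved:

```python
import functools
import inspect

def enforce_annotations(func):
    """Illustrative stand-in for the boundary checks mypyc compiles in."""
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = func.__annotations__.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(
                    f"{name} must be {expected.__name__}, "
                    f"got {type(value).__name__}"
                )
        return func(*args, **kwargs)
    return wrapper

@enforce_annotations
def double(n: int) -> int:
    return n * 2

print(double(21))  # 42
# double("21") raises TypeError immediately, as a compiled call would
```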
Because compiled classes in mypyc are native extension classes without a __dict__ by default, monkey patching — including swapping methods for mocks in tests — does not work on compiled code. This has significant implications for testing workflows and must be planned for before adopting mypyc in a distributed package.
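Since `mock.patch`-style method swapping fails on compiled classes, one mitigation (a generic dependency-injection pattern, not a mypyc API) is to pass collaborators in explicitly so tests never need to patch:

```python
from typing import Callable

def fetch_report(loader: Callable[[str], str], name: str) -> str:
    # The collaborator arrives as a parameter, so a test can hand in a stub
    # instead of monkey patching a module attribute on compiled code.
    raw = loader(name)
    return raw.strip().upper()

# In a test: no unittest.mock.patch required, just a fake loader.
print(fetch_report(lambda name: "  quarterly  ", "q3"))  # QUARTERLY
```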
Several other runtime behavioral changes exist. Compiled modules cannot be run with python3 module.py — you must use python3 -c "import module" or a wrapper script. The if __name__ == "__main__": pattern does not work in compiled code. inspect.iscoroutinefunction() returns False for compiled async functions. Profiling tools like cProfile do not trace compiled functions. Stack overflow detection is absent from compiled code — runaway recursion may produce a hard crash rather than a clean Python exception.
None of these limitations exist when using mypy, which touches nothing at runtime. Mypy is a pre-flight check; mypyc is a fundamental change to how the module executes.
Performance: What the Numbers Say
The performance gains from mypyc are real but variable. The mypyc documentation states that existing code with type annotations often runs 1.5x to 5x faster when compiled, and code tuned specifically for mypyc can achieve 5x to 10x. (mypyc documentation)
The mypyc benchmarks repository provides concrete data. The Richards benchmark — a classic systems programming workload — shows a particularly dramatic result: interpreted execution takes about 0.190 seconds on average, while the compiled version completes in about 0.019 seconds. That is close to a 10x improvement for that specific workload. (mypyc/mypyc-benchmarks on GitHub)
However, an empirical study presented at the 2025 International Conference on Evaluation and Assessment in Software Engineering found that mypyc gains are not universal. In a multi-benchmark evaluation, some workloads — including the n_body benchmark — showed no measurable improvement over CPython when compiled with mypyc, while tools like PyPy and Numba delivered consistent speedups on numerical tasks across the board. (EASE 2025: An Empirical Study on the Performance and Energy Usage of Compiled Python Code)
This aligns with mypyc's own stated scope: it currently aims to speed up non-numeric code, such as server applications. Numeric computation with large arrays belongs to NumPy, Numba, and Cython. Mypyc's strongest results appear in general-purpose application code — parsers, compilers, servers, and tooling — where Python's function call overhead and object allocation dominate, rather than raw numerical throughput.
```python
# Selective compilation — only compile the hot path
from setuptools import setup
from mypyc.build import mypycify

setup(
    name='myapp',
    packages=['myapp'],
    ext_modules=mypycify([
        'myapp/parser.py',    # compile this
        'myapp/engine.py',    # compile this
        # 'myapp/config.py'   # leave interpreted
        # 'myapp/cli.py'      # leave interpreted
    ]),
)
```
You do not have to compile your entire codebase. Mypyc supports compiling individual modules, so a practical approach is to profile your application first, identify which modules consume the CPU time, annotate those thoroughly, and compile only them. The rest of the codebase stays interpreted and fully debuggable.
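Profiling first is ordinary standard-library work; a minimal sketch, where `hot` is a placeholder for your real workload:

```python
import cProfile
import io
import pstats

def hot(n: int) -> int:
    # Placeholder for the CPU-bound code you suspect dominates runtime.
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
hot(100_000)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())  # the top entries point at modules worth compiling
```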
Limitations You Need to Know
Mypyc is currently alpha software. The official documentation is direct about this: it is only recommended for production use cases with careful testing, and when you are willing to contribute fixes or work around issues you will encounter. (mypyc documentation). Mypy, by contrast, is production-stable and used at scale across large Python codebases worldwide.
Several Python language features are not yet supported in mypyc compiled code as of early 2026. Nested class definitions are not supported. Function and class definitions guarded by if statements are not supported. Generator expressions are silently replaced with list comprehensions, which can change behavior. Native classes cannot contain arbitrary descriptors beyond properties, static methods, and class methods. Any class that needs to be subclassed by non-compiled code must receive the @mypyc_attr(allow_interpreted_subclasses=True) decorator.
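Applying the decorator looks like the sketch below. The `try`/`except` fallback keeps the file importable even where `mypy_extensions` is not installed, and the `Tokenizer` class is an invented example:

```python
try:
    from mypy_extensions import mypyc_attr
except ImportError:
    # No-op fallback so the module still runs without mypy_extensions.
    def mypyc_attr(*attrs, **kwattrs):
        def decorator(cls):
            return cls
        return decorator

@mypyc_attr(allow_interpreted_subclasses=True)
class Tokenizer:
    def tokens(self, text: str) -> list[str]:
        return text.split()

print(Tokenizer().tokens("a b c"))  # ['a', 'b', 'c']
```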
Free-threading support — tied to CPython's experimental no-GIL builds in Python 3.13 and 3.14 — is available in mypyc but still marked experimental. The 1.18 release introduced initial support; by 1.19, all mypyc tests pass on free-threading Python 3.14 release candidate builds. The mypy release notes indicate that multi-threaded micro-benchmark throughput scales favorably, but that native attribute access and list item access remain unsafe under race conditions. (mypy Release Notes). Free-threading should be treated as a watch item, not a production option.
One limitation that catches developers off guard is the interaction with class structure. By default, mypyc compiles classes as native extension classes without __dict__, similar to Python's __slots__. Code that dynamically adds attributes to instances at runtime will fail. The mypyc 1.16 release introduced the @mypyc_attr(native_class=False) decorator as an explicit opt-out, allowing specific classes to remain fully dynamic at the cost of the performance gain native compilation provides. Conversely, you can use @mypyc_attr(native_class=True) to assert that a class must compile as native — mypyc will then generate an explicit error rather than silently falling back.
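The interpreted-Python analogue makes the failure mode tangible: a class with `__slots__` has no `__dict__`, so adding attributes at runtime fails, much as it does on a mypyc native class (class and attribute names are illustrative):

```python
class Point:
    __slots__ = ("x", "y")  # no __dict__, like a mypyc native class

    def __init__(self, x: int, y: int) -> None:
        self.x = x
        self.y = y

p = Point(1, 2)
try:
    p.label = "origin"  # dynamic attribute injection fails
except AttributeError as exc:
    print(f"AttributeError: {exc}")
```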
Mypy 1.20 introduced two notable additions on the mypyc side worth planning around. First, acyclic native classes: classes decorated with @mypyc_attr(acyclic=True) skip garbage collection tracing entirely, which reduces allocation overhead and memory usage for simple value objects — but if those objects participate in reference cycles, memory leaks will occur. The acyclic property is not inherited by subclasses; each subclass must apply the decorator explicitly. Second, mypyc formally documented and extended its runtime library, librt, with a SIMD-accelerated base64 encoding and decoding API via librt.base64, along with more optimized data structures and string utilities planned for future releases. (The Mypy Blog: Mypy 1.20 Released). Install it with pip install librt — though if you already have mypy installed, librt will be present automatically, since mypy 1.19 added librt as a required dependency for its binary cache format. Compiled modules do not require librt unless they explicitly import it. These are additive features — they do not change existing mypyc behavior.
One nuance on the @mypyc_attr(allow_interpreted_subclasses=True) decorator worth clarifying: accessing methods and attributes of a non-native subclass — or a subclass defined outside the compilation unit — will be slower because it falls back to standard Python attribute access. The native class instances themselves are not penalized; only cross-boundary subclass attribute access incurs the overhead. Plan accordingly if you expect heavy inheritance from compiled base classes in uncompiled user code.
When to Use Each Tool
Mypy is the right choice whenever you want to improve code quality, catch bugs before runtime, or make a large codebase easier to maintain and easier for new contributors to navigate. It integrates with every major IDE, works with pre-commit hooks, and fits into CI pipelines trivially. There is no build step, no binary to distribute, and no runtime change. For teams adopting type annotations for the first time, mypy is the starting point and often the only tool needed. If you are still building your foundation with Python, the Python tutorials on this site cover type hints, typing module usage, and annotation patterns that prepare you to get the most out of mypy.
Mypyc is the right choice when you have identified a Python module as a genuine CPU bottleneck, that module is already well-annotated, and you want to avoid the complexity of writing C or Cython extensions. The mypy team's use of mypyc to compile mypy itself is the clearest proof of concept: a large, real-world Python tool that gained a consistent 4x speedup without being rewritten in another language.
Developer Glyph Lefkowitz documented a complementary pattern in a public experiment using the High Throughput Fizzbuzz Challenge. After optimizing Python code for throughput, adding mypyc compilation delivered a further 50% gain, bringing throughput to 233 MiB/s. The experiment showed that mypyc's relative advantage doubled once the underlying Python was already algorithmically optimized. (Deciphering Glyph: You Should Compile Your Python). The lesson: mypyc and algorithmic optimization are not substitutes — they stack.
Steve Meadows documented a real-world mypyc adoption on a production HTTP server in 2022, finding that compilation reduced response time noticeably on computation-heavy routes while leaving debug tooling intact for uncompiled modules. The key takeaway from that experiment: selective compilation — profiling first, compiling only what profiling identifies — consistently outperformed blanket compilation of an entire project. (Making Python Fast for Free: Adventures with mypyc)
Using both tools together is the natural production pattern. Mypy runs in CI on every commit to catch type errors early. Mypyc compiles the hot path for distribution. The same type annotations serve both purposes simultaneously — the annotations written for mypy correctness are the same ones mypyc uses to generate efficient C code. Adding mypyc to a well-annotated codebase requires almost no annotation changes; the work done for mypy is already the work mypyc needs.
```python
# Well-annotated Python: works with both mypy and mypyc
def process_records(records: list[dict[str, str]]) -> list[str]:
    output: list[str] = []
    for record in records:
        key = record.get("id", "")
        if key:
            output.append(key.upper())
    return output

# mypy checks this for correctness at dev/CI time (no runtime cost)
# mypyc compiles this to a C extension for production speed
# The annotations serve both roles — no duplication of effort
```
mypyc vs the Python Speed-Up Ecosystem
A question the tool comparison raises almost immediately is where mypyc sits relative to the broader landscape of Python compilers and accelerators. It is not the only option, and different tools have meaningfully different tradeoffs worth understanding before committing to one. The framing that matters is not "which tool is fastest" but "which tool matches the shape of my bottleneck and the constraints of my deployment."
Cython is the longest-established route to C extensions from Python code. It uses its own dialect — a Python superset with C-type declarations — that gives direct access to low-level optimizations including C pointer arithmetic and typed arrays. That power comes at a cost: Cython requires learning a distinct syntax, and porting existing pure Python is not automatic. Mypyc works with standard PEP 484 annotations and requires no syntax changes at all. According to the Cython project's own comparison documentation, mypyc offers solid coverage of the Python language and PEP 484 typing, but lacks access to low-level C-level optimizations that Cython exposes directly. That difference matters most on code that calls into C APIs or manipulates raw memory. The EASE 2025 empirical study found that mypyc produces negligible speedups on the n-body numerical benchmark, where Cython (properly tuned) reaches 99–124x over CPython. (EASE 2025: An Empirical Study on the Performance and Energy Usage of Compiled Python Code)
PyPy is a JIT-compiled alternative interpreter rather than a compiler for CPython. You do not compile modules — you replace the interpreter entirely. PyPy's JIT can produce dramatic speedups (6x to 66x in the EASE 2025 study) on code it warms up well, but it carries compatibility tradeoffs with CPython extension modules and does not suit projects that must deploy as standard CPython packages. Mypyc runs inside CPython and produces standard C extension modules; your users install and run them with the standard CPython they already have.
Numba applies JIT compilation at the function level via a decorator, and it works hand-in-hand with NumPy. For numerical workloads on arrays, Numba is frequently the right tool — the EASE 2025 study shows it reaching 56–135x speedups on numerical benchmarks, far beyond what mypyc achieves on the same code. The constraint is that Numba has limited support for general Python objects, strings, and arbitrary class structures. It is a scalpel aimed at numeric kernels, not a general-purpose Python accelerator.
Nuitka is an AOT Python-to-C compiler that prioritizes language compliance. Unlike mypyc, it does not require type annotations — it analyzes Python semantics statically without them. Nuitka is also capable of producing standalone executables that bundle all dependencies, which mypyc cannot do. In the EASE 2025 study, mypyc and Nuitka produced similar speedup ranges on general-purpose benchmarks. The Cython project's documentation characterizes Nuitka as offering solid language compliance and modest performance gains, but no low-level optimization access — a ceiling it shares with mypyc.
The practical upshot is that mypyc occupies a specific niche: the easiest path to a compiled C extension for a codebase that is already annotated and already passes mypy. It costs almost nothing to try if you already have strict mypy compliance, and it integrates into a standard Python packaging workflow. Think of the options as sitting on a cost-benefit curve: mypyc costs type annotations; PyPy costs a binary swap; Numba costs data restructuring; Cython costs days and C knowledge; Rust costs learning a new language. The 2x–10x range mypyc delivers is available for nearly zero migration effort on well-typed code.
Less Obvious Approaches Worth Evaluating
Beyond the standard tool comparisons, several strategies go underexplored in typical discussions of Python performance. These are worth evaluating seriously before committing to mypyc or any compiler.
Rewriting a single hot function in C via ctypes or cffi. If profiling reveals one function responsible for 80% of CPU time — a common result — writing that function directly in C and calling it through ctypes or cffi is often faster than compiling an entire module with mypyc. The scope is narrow, the dependency is zero, and the resulting binary works across Python versions without a build matrix. This approach is underused because it sounds harder than it is: a 30-line C function exposed through ctypes can outperform a fully compiled mypyc module on a numeric inner loop.
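A sketch of the ctypes route, using libc's `strlen` as a stand-in for a hand-written C function (the library lookup shown is Linux/macOS-oriented; Windows differs):

```python
import ctypes
import ctypes.util

# Load the C runtime; fall back to the common glibc soname if lookup fails.
libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")

# Declare the signature of the foreign function before calling it.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

# A 30-line C function of your own is exposed the same way once built
# into a shared library.
print(libc.strlen(b"mypyc"))  # 5
```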
Extracting the bottleneck into a subprocess with a faster runtime. If the hot path is logically separable — a parser, a compression routine, a search algorithm — spawning a subprocess written in Go, Rust, or even compiled C with subprocess.run() is worth measuring. The inter-process communication overhead (which is minimal for batch-style workloads) is often smaller than the cost of maintaining a compiled Python extension across platform wheel builds. For server applications, a sidecar process exposes the fast implementation over a socket or Unix pipe without any Python packaging complexity at all.
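The batch-style variant of this pattern is a few lines with `subprocess`; here the Unix `sort` utility stands in for a Go or Rust helper binary:

```python
import subprocess

data = "pear\napple\nbanana\n"
result = subprocess.run(
    ["sort"],                 # stand-in for your compiled helper binary
    input=data,
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)  # apple, banana, pear -- one per line
```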
Restructuring data layout before reaching for a compiler. Compilers accelerate the code you have. If that code is operating on Python dicts and lists in an inner loop, mypyc's gains are constrained regardless of how well annotated the code is. Switching that loop to operate on array.array, NumPy arrays, or dataclasses with __slots__ before applying any compiler can produce larger improvements than compilation alone. The EASE 2025 study's finding that mypyc shows negligible improvement on n-body benchmarks while Numba reaches 135x is partly a tool capability difference and partly a reflection of data structure choice: Numba is operating on contiguous typed arrays while mypyc is compiling code that still allocates Python objects per iteration. Profile what your hot code is actually doing to memory, not just where it spends time.
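A glimpse of the data-layout point: `array.array` stores C doubles in one contiguous buffer instead of boxing a Python float per element (the values are arbitrary):

```python
from array import array

# Contiguous C doubles: one buffer, no per-element Python object.
values = array("d", [1.5, 2.5, 3.0])

total = 0.0
for v in values:
    total += v
print(total)            # 7.0
print(values.itemsize)  # 8 bytes per element in the underlying C buffer
```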
Adopting __slots__ aggressively before compiling. Mypyc compiles classes to native extension classes without __dict__ — which is essentially what __slots__ does in interpreted Python. If you apply __slots__ to your hot-path classes before mypyc compilation, you get a measurable allocation speedup even without any compiled code, and the compiler has less redundant work to do. This also forces you to enumerate all instance attributes explicitly, which frequently reveals design problems that slow the code down regardless of what tool you apply afterward.
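Verifying the effect is straightforward (class names invented): a slotted class drops the per-instance `__dict__` entirely, mirroring what mypyc's native classes do.

```python
class Plain:
    def __init__(self) -> None:
        self.a, self.b = 1, 2

class Slotted:
    __slots__ = ("a", "b")  # attributes enumerated explicitly

    def __init__(self) -> None:
        self.a, self.b = 1, 2

print(hasattr(Plain(), "__dict__"))    # True
print(hasattr(Slotted(), "__dict__"))  # False
```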
Using the multiprocessing module to bypass the GIL entirely. For CPU-bound Python code that cannot be vectorized, spawning worker processes via multiprocessing.Pool or concurrent.futures.ProcessPoolExecutor sidesteps the GIL without any compilation at all. On a modern multi-core server, four worker processes running interpreted Python can outperform a single mypyc-compiled process on workloads where the computation is embarrassingly parallel. This is not always applicable — shared state, startup latency, and IPC overhead all constrain it — but on batch processing, file transformation, or request handling pipelines that already use a process-per-worker model, it deserves explicit benchmarking before adding a compilation step.
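A minimal process-pool sketch (`cpu_task` is a placeholder workload; the `__main__` guard matters because worker processes re-import the module):

```python
from concurrent.futures import ProcessPoolExecutor

def cpu_task(n: int) -> int:
    # Placeholder CPU-bound work; each call runs in its own process,
    # so the GIL never serializes the four computations.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(cpu_task, [50_000] * 4))
    print(len(results))
```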
Compiling only the type checker itself, not your application. If the use case driving the performance discussion is CI pipeline speed rather than application throughput, the most direct solution is already deployed: the prebuilt mypyc-compiled mypy wheel on PyPI. Installing mypy from PyPI already gives you the compiled binary — no build step, no annotation work, no platform wheel matrix. The 3–5x speedup in type checking time may be all the "compilation benefit" a project needs, without ever compiling its own code.
For compute-heavy numerical work with NumPy arrays, Numba or Cython will likely outperform mypyc. For general-purpose application logic — parsers, servers, type checkers, CLI tools — where your code is already annotated, mypyc is the lowest-friction path to a meaningful speedup.
Shipping Compiled Code: The Distribution Workflow
The article so far has covered what mypyc produces — binary C extension modules — but a question that trips up many developers is what happens next. How do you actually distribute compiled modules? What does your end user receive? And what happens on platforms mypyc has not yet compiled a wheel for?
1. Annotate your module with type hints. Add PEP 484 type annotations to all functions and class attributes in the target module. Unannotated code compiles but gains little performance benefit.
2. Verify the module passes mypy. Run `pip install mypy && mypy myproject/engine.py` and resolve all reported errors. Mypyc calls mypy internally — code that fails mypy cannot be compiled.
3. Create a setup.py using mypycify. Import `mypycify` from `mypyc.build` and pass your module paths in the `ext_modules` argument. Optional mypy flags such as `--disallow-untyped-defs` can be included in the list.
4. Build and test the extension locally. Run `pip install -e .` to build in editable mode. Python loads the compiled `.so` or `.pyd` file transparently on import. Validate behavior on representative inputs and confirm that test helpers relying on `unittest.mock.patch` are excluded from compilation.
5. Build platform wheels with cibuildwheel and publish to PyPI. Use `cibuildwheel` in a GitHub Actions workflow to build wheels for Linux, macOS, and Windows across supported Python versions. Upload all wheels plus a source distribution (sdist fallback) to PyPI.
Mypyc-compiled extensions are platform-specific binaries. A .so file built on Linux x86-64 will not load on macOS or Windows ARM64. This means you must build a separate wheel for each target platform and Python version combination you want to support. The standard tooling for this is cibuildwheel, a GitHub-maintained tool that automates building wheels across Linux, macOS, and Windows in CI — including manylinux, musllinux, and ARM variants. The mypy project itself uses cibuildwheel to build and publish mypyc-compiled wheels to PyPI on every release.
```yaml
# .github/workflows/build.yml — building mypyc wheels in CI
name: Build wheels
on: [push, pull_request]
jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
    steps:
      - uses: actions/checkout@v4
      - uses: pypa/cibuildwheel@v2
        with:
          package-dir: .
          output-dir: dist
      - uses: actions/upload-artifact@v4
        with:
          name: wheels-${{ matrix.os }}
          path: dist/*.whl
```
An important design decision is whether to include a source distribution (sdist) alongside your compiled wheels. The mypyc documentation explicitly recommends shipping a fallback interpreted version for platforms that mypyc does not yet support — for example, platforms where a C compiler is unavailable at install time. An sdist allows pip to fall back to the pure Python version on unsupported platforms. Mypy itself follows this pattern: if pip cannot find a mypyc-compiled wheel for your platform, it installs the interpreted fallback from the sdist automatically.
If you use Hatch as your build backend, the hatch-mypyc plugin integrates mypyc compilation directly into the Hatch build system without requiring a manual setup.py. For teams using Pants, the python_distribution target accepts a uses_mypyc=True field that configures the build environment with mypy and the necessary type stubs automatically.
Mypyc's setuptools integration runs the full parse-type-check-transcompile pipeline twice during a PEP 517 build because pip runs the build in isolation. Budget for slower wheel builds in CI compared to a pure Python package. Passing debug_level=0 to mypycify() strips debug symbols and reduces wheel size, which matters at scale.
One practical consequence of platform-specific binaries is that pre-commit hooks will not pick up your compiled wheels automatically, since pre-commit downloads hooks from source. Teams that rely on tools like Black (which ships mypyc-compiled wheels) should be aware that the speedup will not apply inside pre-commit without explicit configuration using a mirror shim. For your own compiled packages, plan for this gap if performance in pre-commit contexts matters to your workflow.
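In Black's case, the project maintains a dedicated mirror repository that ships the compiled wheels to pre-commit users. Pointing a config at it looks roughly like this, with the rev shown as a placeholder you should pin to a real release tag:

```yaml
# .pre-commit-config.yaml — using Black's compiled-wheel mirror (sketch)
repos:
  - repo: https://github.com/psf/black-pre-commit-mirror
    rev: 25.1.0  # placeholder; pin to an actual release
    hooks:
      - id: black
```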
Key Takeaways
- mypy is a static analyzer, not a compiler. It reads your code and reports type errors. It does not touch your runtime, change your code, or speed anything up. It is zero-overhead at runtime by design.
- mypyc is a compiler, not a type checker. It produces binary C extensions from type-annotated Python. It uses mypy internally for type information, but its output is performance, not error messages.
- mypyc depends on mypy; mypy does not depend on mypyc. The dependency flows in one direction. You can use mypy without ever touching mypyc. You cannot use mypyc without mypy running under the hood.
- Runtime type enforcement is a mypyc feature, not a mypy feature. In compiled code, type mismatches raise TypeError at runtime. In interpreted Python, annotations are metadata and the interpreter ignores them — mypy only checks them statically.
- mypyc is strongest on non-numeric, general-purpose application code. For numerical workloads, Numba or Cython typically outperform it. For server logic, parsers, and tooling, mypyc can achieve 4x to 10x speedups.
- mypyc is alpha software with real limitations. No nested classes, no monkey patching of compiled classes, no pdb debugging inside compiled functions, and partially supported async and generator features. Plan for these constraints before committing to a mypyc-based distribution strategy.
- Annotations serve both tools simultaneously. The type hints you add for mypy are exactly what mypyc needs to compile efficiently. There is no annotation rework when moving from one tool to the other.
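The runtime-enforcement point is easy to demonstrate. The function below behaves identically in both modes on valid input; on a mis-typed call, interpreted CPython fails inside the body, while the mypyc-compiled version would reject the argument at the function boundary:

```python
def shout(name: str) -> str:
    return name.upper() + "!"

print(shout("hi"))  # works in both interpreted and compiled modules

try:
    shout(123)  # type: ignore[arg-type]
except AttributeError:
    # Interpreted CPython ignored the annotation entirely; the failure
    # came from calling .upper() on an int inside the function body.
    print("interpreted: AttributeError from the body")
# Compiled with mypyc, the same call raises TypeError at the function
# boundary, before the body ever runs.
```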
The naming similarity between mypy and mypyc is not accidental — they share a codebase, a development team, and a shared philosophy that Python type annotations should do more than serve as comments. Mypy validates that your annotations are consistent with your code. Mypyc uses those same annotations to make your code faster. They are designed to be used together, with mypy doing the correctness work and mypyc doing the performance work, on the same annotated Python source.
Frequently Asked Questions
What is the difference between mypy and mypyc?
Mypy is a static type checker: it reads annotated Python code, reports type errors, and exits without touching your runtime. Mypyc is an ahead-of-time compiler: it takes the same annotated Python and produces C extension modules that run faster inside CPython. Mypy checks for correctness; mypyc generates performance. The dependency flows in one direction — mypyc calls mypy internally, but mypy has no knowledge of mypyc.
Does mypyc replace mypy?
No. Mypyc calls mypy as part of its internal compilation pipeline. Code that fails mypy type checking cannot be compiled by mypyc. The two tools operate in sequence: mypy for ongoing type correctness, mypyc as an optional performance step once the code is well-annotated.
How much faster is mypyc-compiled code?
The mypyc documentation states that existing annotated code typically runs 1.5x to 5x faster after compilation, and code tuned specifically for mypyc can reach 5x to 10x. Mypy itself is compiled with mypyc and runs 3–5x faster than the interpreted version, according to the mypy 1.20 release notes. For specific workloads, results vary: the Richards benchmark shows close to 10x improvement, while numerical benchmarks may show minimal gain.
Is mypyc production-ready?
Mypyc is alpha software. The official documentation states it is recommended for production only with careful testing and willingness to contribute fixes or work around issues. Mypy, by contrast, is production-stable. If you adopt mypyc in a distributed package, plan around its known limitations: no monkey patching compiled classes, no pdb debugging inside compiled functions, and restricted support for dynamic class features.
Can you use mypy without mypyc?
Yes — mypy is fully independent. It requires no build step, produces no binaries, and has zero runtime overhead. You can run it in CI on every commit for years without touching mypyc. Mypyc is an optional later step for modules where you have profiled a real CPU bottleneck.
What Python version does mypy 1.20 support?
Mypy 1.20 (released March 31, 2026) dropped support for running on Python 3.9, which reached end of life in October 2025. It runs on Python 3.10 and later, though you can still type-check code targeting 3.9 using the --python-version 3.9 flag. Support for type-checking 3.9 code will be dropped in the first half of 2026. Mypy 1.20 also added support for Python 3.14 t-strings (PEP 750), and extended mypyc wheel availability to Windows ARM64 and CPython 3.14 free-threading builds.
What is changing in mypy 2.0?
Mypy 2.0 is the next planned feature release after 1.20. The development team has publicly committed to enabling --local-partial-types by default and --strict-bytes by default in 2.0. Both changes typically require at most minor code adjustments. The team has explicitly stated they want to avoid a Python 2-to-3 style disruptive migration, so breaking changes in 2.0 will be limited to only what is technically necessary. Teams can prepare now by enabling --local-partial-types in their mypy 1.20 configuration — the flag compatibility was significantly improved in 1.20 to make the transition easier. (1.20 and 2.0 Release Planning, python/mypy GitHub)
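In a pyproject.toml-based setup, opting in ahead of time is a two-line change. The config keys correspond to the command-line flags; verify strict_bytes against your installed mypy version's documentation:

```toml
# pyproject.toml — enabling mypy 2.0 defaults early (sketch)
[tool.mypy]
local_partial_types = true
strict_bytes = true
```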
How does mypyc compare to Cython, PyPy, and Numba?
The tools serve different use cases. Mypyc requires existing PEP 484 annotations and costs almost nothing to adopt on a well-typed codebase, delivering 2x–10x speedups on general-purpose application logic. Cython requires its own syntax and C knowledge but can reach 100x or more on carefully tuned numerical code. PyPy replaces the interpreter rather than compiling individual modules, producing dramatic speedups via JIT on code it warms up well, but it requires users to run a different Python implementation. Numba applies JIT per-function via a decorator and excels on NumPy-based numerical kernels, reaching 56–135x in empirical benchmarks on those workloads — well beyond mypyc on the same code. Mypyc's advantage is the low migration cost and standard CPython deployment: your users install a normal pip package and run standard CPython.
How do you ship mypyc-compiled code to end users?
Mypyc produces platform-specific binary extensions, so you must build a separate wheel for each target platform and Python version. The standard approach is to use cibuildwheel in a CI pipeline (GitHub Actions, CircleCI, etc.) to build wheels across Linux, macOS, and Windows automatically on each release. You then upload all wheels alongside a source distribution to PyPI. When a user runs pip install, pip selects the prebuilt wheel matching their platform, or falls back to the source distribution for unsupported platforms where mypyc does not provide a prebuilt binary. Including an sdist fallback is the recommended pattern — it ensures your package installs everywhere, even when a compiled wheel is not available.
- mypy documentation — Frequently Asked Questions. https://mypy.readthedocs.io/en/latest/faq.html
- mypy documentation — Release Notes (1.13 through 1.20). https://mypy.readthedocs.io/en/stable/changelog.html
- python/mypy GitHub repository. https://github.com/python/mypy
- mypyc documentation — Introduction. https://mypyc.readthedocs.io/en/latest/introduction.html
- mypyc documentation — Differences from Python. https://mypyc.readthedocs.io/en/latest/differences_from_python.html
- mypyc/mypyc GitHub repository. https://github.com/mypyc/mypyc
- mypyc-benchmarks GitHub repository. https://github.com/mypyc/mypyc-benchmarks
- The Mypy Blog — Mypy 1.20 Released. https://mypy-lang.blogspot.com/2026/03/mypy-120-released.html
- EASE 2025 — An Empirical Study on the Performance and Energy Usage of Compiled Python Code. https://arxiv.org/html/2505.02346v1
- Glyph Lefkowitz — You Should Compile Your Python And Here's Why. https://blog.glyph.im/2022/04/you-should-compile-your-python-and-heres-why.html
- Steve Meadows — Making Python Fast for Free: Adventures with mypyc. https://blog.meadsteve.dev/programming/2022/09/27/making-python-fast-for-free/
- mypyc development focus areas 2025 — GitHub Issue #785. https://github.com/mypyc/mypyc/issues/785
- cibuildwheel documentation — pypa/cibuildwheel. https://cibuildwheel.pypa.io/en/stable/
- Cython GitHub repository — Comparison with other Python compilers. https://github.com/cython/cython
- Cemrehan Cavdar — The Optimization Ladder (2026). https://cemrehancavdar.com/2026/03/10/optimization-ladder/
- Richard Si — Compiling Black with mypyc, Pt. 3 — Deployment. https://ichard26.github.io/blog/2022/05/compiling-black-with-mypyc-part-3/
- mypyc documentation — Getting Started. https://mypyc.readthedocs.io/en/stable/getting_started.html