Microsoft Pulls the Plug on Faster CPython: What the Cancellation Means for Python's Performance Roadmap

In May 2025, Microsoft cancelled its funding for the Faster CPython project and laid off the majority of the six-person team that had been responsible for the most meaningful performance gains in CPython's history. The news broke while several team members were literally en route to PyCon US in Pittsburgh, and it landed like a cold bucket of water on an already complicated performance story.

The Faster CPython project was not a side experiment. It was a funded, structured, full-time engineering effort led by some of the most experienced CPython contributors in the world, including Python's creator Guido van Rossum. Its cancellation does not end CPython performance work entirely, but it does fundamentally change how that work gets done, who does it, and how fast it can realistically move. Understanding what was lost requires understanding what the project was actually trying to accomplish — and how close, or how far, it got.

The Project That Was Going to Change Everything

The Faster CPython project has its roots in a proposal that circulated in late 2020. CPython core developer Mark Shannon, who holds a PhD in virtual machine implementation and had spent years studying interpreter optimization, posted to the python-dev mailing list with a direct thesis: CPython was slow, very little had been done to fix it, and he had a concrete four-phase plan to change that. The goal was audacious: a 5x speedup over four years, achieved by compounding roughly 50 percent performance improvements per release cycle.

Shannon's plan — quickly dubbed the "Shannon plan" by the community — proposed a tiered execution model where code would be handled differently depending on how frequently it ran. Rarely executed code would be loaded quickly with minimal overhead. Frequently executed "hot" code would be progressively specialized and optimized. The technical foundation for this was what would become PEP 659: the Specializing Adaptive Interpreter, which would allow CPython to observe how individual bytecodes were being used at runtime and replace them with faster, type-specialized versions on the fly.

Shannon was not the first person to propose speeding up CPython. The history of Python optimization is littered with promising efforts that stalled, forked, or simply ran out of funding. PyPy, the JIT-compiled alternative interpreter, had long demonstrated that Python code could run dramatically faster under the right conditions. But PyPy required a separate runtime and came with compatibility trade-offs. What Shannon was proposing was something different: improvements to CPython itself, so that every Python user would benefit simply by upgrading, with no code changes required.

"It was an important effort and it was too much for one person." — Guido van Rossum, on why Microsoft needed to fund a team

Microsoft saw the opportunity. Guido van Rossum, who had come out of retirement and joined Microsoft as a Distinguished Engineer in 2020, championed the effort internally. As he later explained, his thinking from the start was to see if Microsoft could hire Shannon and assemble a small team around him. Microsoft agreed, and by 2021 the Faster CPython team existed as a formal engineering group. At its peak the team had six members: Mark Shannon in the UK, Mike Droettboom in the US, and Irit Katriel, Eric Snow, L. Pereira, and Brandt Bucher, distributed across time zones and collaborating daily on the single goal of making CPython faster.

The team was structured to work directly on the CPython main development branch, not in a fork. Every change went through the standard open-source review process. The effect of this approach was that the improvements were real, incremental, and immediately available to the entire Python community.

Note

The Faster CPython team was not a research lab producing theoretical results. Every optimization they developed was committed directly to the CPython main branch, reviewed by other core developers, and shipped in official Python releases. That model is one reason the cancellation has such direct consequences: there is no separate "Faster CPython build" to maintain. Their work is woven into CPython itself.

How the Cancellation Unfolded

The announcement came on May 15, 2025, when Mike Droettboom, who served as principal software engineering manager for the team at Microsoft, posted to LinkedIn. The message was short and unambiguous. "It's been a tough couple of days," Droettboom wrote. "Microsoft's support for the Faster CPython project was canceled yesterday, and my heart goes out to the majority of the team that was laid off. A hard day for me, but even harder for others."

He added a detail that made the timing feel particularly sharp: "We were all (minus one) set to attend the Python Language Summit at PyCon today, and in fact the notifications went out while we were en route to Pittsburgh."

CPython core developer Brett Cannon subsequently confirmed on LinkedIn that three core developers from the Faster CPython team — Eric Snow, Irit Katriel, and Mark Shannon — were among those laid off in Microsoft's broader round of global cuts. The Register, which covered the story, reported that Droettboom himself was spared but that the majority of the team was let go. Ron Buckton, an 18-year Microsoft veteran who had worked on TypeScript for nearly a decade, was also among those dismissed in the same wave.

The cuts were part of Microsoft's announced elimination of approximately 6,000 employees globally, representing under 3 percent of its workforce. According to Washington state government documents reviewed by Bloomberg and reported widely in the trade press, software engineering was by far the hardest-hit job category in Microsoft's home state, accounting for over 40 percent of the roughly 2,000 Washington-based layoffs. Product management and technical program management roles accounted for a further 30 percent.

"Microsoft's support for the Faster CPython project was canceled yesterday, and my heart goes out to the majority of the team that was laid off. A hard day for me, but even harder for others." — Mike Droettboom, LinkedIn, May 2025

In a fitting irony noted by multiple technology publications, Microsoft's Director of AI for Startups, Gabriela de Queiroz, was also among those laid off in the same round. De Queiroz, who had over 15 years of experience in AI strategy and product innovation, confirmed her dismissal publicly, noting that employees were asked to stop working immediately and set out-of-office notifications. The simultaneous elimination of both an open-source performance team and a senior AI leader, at a company that had been publicly proclaiming its AI-first future, struck many observers as difficult to reconcile.

Microsoft's official response to media inquiries was a single boilerplate sentence: "We are continuing to make the organizational changes necessary to best position the company for success in a dynamic market."

Context

CEO Satya Nadella had publicly stated the month prior that 30 percent of Microsoft's code was now written by AI. Social media speculation immediately connected this figure to the layoffs of software engineers, though Microsoft stated it would be misleading to assume a direct causal link. The real picture is almost certainly more complicated: the company's plan to invest approximately $80 billion in AI-enabled data center infrastructure may have been placing intense pressure on headcount budgets across teams that did not directly contribute to that build-out.

What the Numbers Actually Showed

To understand what was lost, it is worth examining honestly what was gained. The project's ambitions and its actual results do not tell the same story, and that gap matters for assessing where CPython performance work stands today.

The Shannon plan called for a 50 percent performance improvement per release cycle over four years, for a cumulative 5x speedup. What the project delivered was substantial but considerably more modest. Python 3.11 was the standout release, averaging approximately 25 percent faster than 3.10 on the pyperformance benchmark suite, with individual workloads gaining anywhere from 10 to 60 percent. Python 3.12 delivered around 4 to 5 percent over 3.11, and Python 3.13 roughly 7 percent over 3.12. Python 3.14, released on October 7, 2025, introduced the new tail-calling interpreter (contributed by Ken Jin), which official Python documentation reports as a 3 to 5 percent geometric-mean speedup on pyperformance when built with Clang 19 or newer on x86-64 and AArch64. The tail-calling interpreter is opt-in in 3.14, requiring the --with-tail-call-interp build flag, and is not enabled in standard binary releases.
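Building the opt-in tail-calling interpreter looks roughly like this (a sketch, assuming a CPython 3.14 source checkout and clang-19 on the PATH):

```shell
# Sketch: compile CPython 3.14 with the opt-in tail-calling interpreter.
# Requires Clang 19 or newer; the feature is not enabled in standard
# binary releases, only in builds configured with this flag.
CC=clang-19 ./configure --with-tail-call-interp
make -j"$(nproc)"
./python -VV   # confirm the resulting interpreter builds and runs
```

The configure-flag gate is why the 3.14 number only applies to people building their own interpreter; most users on stock binaries see none of this gain yet.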

Across the full span from 3.10 (when the project began) to 3.14, the cumulative gain was approximately 20 to 40 percent depending on the workload — a meaningful improvement, but well short of the 5x target. Ken Jin, a CPython core developer and one of the longest-running community volunteers on the project, summarized this plainly when the cancellation was announced: "Python 3.14 is roughly 20-40% faster than 3.10 (when this project first started)."

Python Version     | Performance vs. Previous                                           | Key Optimization
3.11 (Oct 2022)    | ~25% faster than 3.10                                              | Specializing Adaptive Interpreter (PEP 659)
3.12 (Oct 2023)    | ~4–5% faster than 3.11                                             | Interpreter frame optimization, immortal objects
3.13 (Oct 2024)    | ~7% faster than 3.12                                               | Experimental JIT (copy-and-patch), free-threading preview
3.14 (Oct 2025)    | ~3–5% geometric mean via tail-calling interpreter (opt-in, Clang 19+) | Tail-calling interpreter (Ken Jin, gh-128563), improved free-threading (PEP 703), JIT refinements
3.10 → 3.14 total  | ~20–40% cumulative (workload-dependent)                            | Multiple stacked improvements across all releases
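The per-release figures above come from the pyperformance suite. Reproducing such a comparison looks roughly like this (a sketch, assuming both interpreters are installed and on the PATH; full runs take a long time):

```shell
# Sketch: benchmark two interpreters with pyperformance and compare.
# Interpreter names are illustrative for whatever versions you have.
pip install pyperformance
pyperformance run --python=python3.10 -o py310.json
pyperformance run --python=python3.14 -o py314.json
pyperformance compare py310.json py314.json
```

Because individual benchmarks in the suite respond very differently to the same optimization, single headline numbers like "~25% faster" are geometric means across the suite, which is why the cumulative range is quoted as 20 to 40 percent rather than one figure.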

This is not a failure in the ordinary sense. A 20 to 40 percent cumulative speedup delivered to every Python user worldwide, without requiring any code changes, is a meaningful engineering achievement. The Python community had gone years without substantial performance improvements to CPython before this project began. But the gap between the stated goal and the actual result is real, and the project was cancelled before the technically harder and potentially more rewarding phases — most notably a mature JIT compiler — had a chance to deliver.

The JIT compiler, introduced experimentally in Python 3.13 using a technique called copy-and-patch, initially produced disappointing results. Early benchmarks in 3.13 and 3.14 showed the JIT was often slower than the standard interpreter, not faster. Brandt Bucher, who worked on the JIT and was one of the team members who survived the layoffs, gave a talk at PyCon US 2025 titled "What they don't tell you about building a JIT compiler for CPython" only days after the team was disbanded. He was candid about the JIT's early struggles but clear about his intention to continue working on it as a core developer independent of Microsoft's funding.

By early 2026, progress on the JIT had resumed under community direction. Ken Jin posted a technical update in March 2026 noting that Python 3.15 alpha JIT builds had achieved approximately 11 to 12 percent speedup on macOS AArch64 over the tail-calling interpreter, and 5 to 6 percent on x86-64 Linux. "The JIT is now back on track," he wrote, adding: "I cannot overstate how tough this was. There was a point where I was seriously wondering if the JIT project would ever produce meaningful speedups."

[Chart: per-release speedup over the previous release. 3.11: ~25%; 3.12: ~4%; 3.13: ~7%; 3.14: ~4% (opt-in); 3.15: ~11% (projected, community-led JIT)]
CPython performance gains per release since the Faster CPython project began. Python 3.11 saw the largest single-release gain. The 3.14 figure reflects the geometric-mean gain of the opt-in tail-calling interpreter per official Python documentation (3 to 5 percent, shown as ~4%); it requires a Clang 19+ build flag and is not enabled in standard binaries. The 3.15 figure is a projection for post-cancellation community work on the JIT, based on Ken Jin's benchmark report (Python Insider blog, March 17, 2026).

The Broader Pattern: Tech Giants and Open Source

The Faster CPython cancellation did not happen in isolation. It is part of a visible pattern that has emerged over the past two years, in which large technology companies have quietly reduced or eliminated their investments in open-source language infrastructure, even for languages central to their own AI and data science strategies.

In April 2024, Google laid off its US-based Python team — a group of fewer than 10 engineers who maintained Python infrastructure internally, contributed to CPython core development, and held positions on the Python Steering Council. Thomas Wouters, a Python Steering Council member and release manager for Python 3.12 and 3.13, described the situation on Mastodon: "It's a tough day when everyone you work with directly, including your manager, is laid off — excuse me, 'had their roles reduced' — and you're asked to onboard their replacements, people told to take those very same roles just in a different country who are not any happier about it." Google subsequently moved the Python team function to Munich, Germany, framing the change as a reorganization rather than an elimination. The effect on community contributions from that team, however, was real.

The juxtaposition in both cases is difficult to miss. Google and Microsoft are two of the largest consumers of Python in the world. Python powers a significant share of the machine learning research, data pipeline engineering, and AI tooling at both companies. Both have invested heavily in AI products and services built on Python-ecosystem foundations. Yet both moved to reduce or eliminate the teams that were most directly responsible for improving the language those foundations depend on.

"Open-source teams are often the first to be cut in multinational companies. Teams that don't directly generate revenue are usually the first to go." — Community reaction to the Microsoft layoffs, as reported widely in May 2025

The pattern reflects a structural tension that has always existed in corporate open-source sponsorship: the teams doing this work create diffuse, long-term value for the entire ecosystem, but they do not generate revenue on any timeline that fits a quarterly earnings discussion. When budget pressure arrives — whether driven by AI infrastructure spending, macroeconomic conditions, or shifts in strategic priority — these teams are exposed in ways that product engineering groups are not.

This structural tension is not new, but the scale of the recent disruptions is worth noting. The Python community now faces a situation in which the two companies that had been doing the most to fund CPython core development work have both, within a span of 13 months, reduced that funding significantly.

The Python Software Foundation's own funding position compounded the difficulty. In August 2025, the PSF paused its Grants Program after hitting its 2025 funding cap earlier than expected, citing exponential community growth that its revenue had not kept pace with. The PSF stated that PyCon US 2024 had generated a significant financial loss, and that PyCon US 2025 was projected to produce another.

In October 2025, the PSF additionally withdrew a $1.5 million proposal to the US National Science Foundation under the Safety, Security, and Privacy of Open Source Ecosystems program. The PSF rejected the grant after discovering the contract terms would require the foundation to affirm it does not operate any programs that "advance or promote DEI," a condition the board voted unanimously to refuse. The PSF described the combined effect of the NSF withdrawal, lower sponsorship, and economic pressure as creating a situation where "the PSF needs financial support now more than ever." For an organization that operates on an annual budget of approximately $5 million with a staff of 14, the loss of either of those funding sources would have been material.

What Happens to CPython Performance Now

The short answer is: the work continues, but more slowly, and under a different organizational model. The situation is not as bleak as the layoff announcement made it seem, but it is meaningfully different from the conditions of the previous four years.

The day the cancellation was announced, Ken Jin opened a thread on Python Discourse titled "Community Stewardship of Faster CPython." He proposed three paths forward: finding a new corporate sponsor, shutting down the ongoing experimental strands of work, or transitioning to a community-led model. He advocated for the third option and outlined a concrete proposal: forming an informal performance working group to replace the weekly Faster CPython syncs that had been held with Microsoft and Meta, setting up Python Software Foundation-owned benchmarking infrastructure using a machine donated by ARM, and exploring next design choices for the JIT with community maintainability as the primary constraint.

Brandt Bucher, responding from Pittsburgh where PyCon was underway, struck a measured tone. He made clear that he intended to continue working on the JIT in his personal capacity as a CPython core developer, and that he did not believe radical shifts in approach were necessary. "Community maintainability (and the active solicitation of community involvement) has always been a primary goal of our work," he wrote. "Progress will be slower, but in my mind there's really no need for the roadmap to change too much."

The planning became more concrete at the CPython core developer sprint in Cambridge, hosted by ARM, in autumn 2025. At that sprint, a JIT working group — consisting of Ken Jin, Savannah Ostrowski, Mark Shannon (now a volunteer contributor), Diego Russo, and Brandt Bucher — wrote a public plan: a 5 percent faster JIT by Python 3.15 and a 10 percent faster JIT by Python 3.16, with free-threading support as a target for 3.16. The plan also addressed what Jin described as the "bus factor" problem: the original JIT had only two active contributors to its middle-end optimizer. The group set a target of two active maintainers in each of the three stages (frontend, middle-end, and backend). By the time Jin published his March 2026 progress update, the JIT had four active middle-end contributors, including two non-core developers.

The key technical work in progress at the time of cancellation included the JIT compiler, the free-threading implementation (the effort to remove the Global Interpreter Lock), and continued development of the specializing adaptive interpreter. Of these, the free-threading work had perhaps the most complex dependency structure: PEP 703, which specifies the free-threading implementation, was accepted; the feature shipped experimentally in 3.13 and improved significantly in 3.14. However, free-threaded builds do not currently support the JIT compiler, and the interaction between free-threading and JIT compilation remains an open problem scheduled for 3.15 and 3.16.

python
# Check which CPython interpreter variant you are running.
import os
import sys

# Free-threading: sys._is_gil_enabled() exists on 3.13+ builds.
is_free_threaded = (
    not sys._is_gil_enabled() if hasattr(sys, "_is_gil_enabled") else False
)

# JIT: Python 3.14 adds sys._jit for runtime introspection; on older
# versions, fall back to checking the PYTHON_JIT environment variable.
# Enable with: PYTHON_JIT=1 python your_script.py
_jit = getattr(sys, "_jit", None)
if _jit is not None:
    jit_enabled = _jit.is_enabled()
else:
    jit_enabled = os.environ.get("PYTHON_JIT", "0") == "1"

print(f"Python version : {sys.version}")
print(f"Free-threaded  : {is_free_threaded}")
print(f"JIT enabled    : {jit_enabled}")

# Note: the free-threaded build and the JIT cannot be combined in
# Python 3.14; that combination is a target for 3.15/3.16 development.

The Python Software Foundation's role in all of this is worth examining carefully. The PSF has historically been a steward of the language's governance and community, not a direct funder of engineering work at the scale the Faster CPython project represented. Microsoft was providing full-time salaries to six engineers. The PSF has neither the budget nor the infrastructure to replicate that arrangement. What it can do — and what community members are working toward — is to provide hosting and coordination infrastructure, such as benchmarking machines and structured meeting cadences, that makes volunteer work more effective.

The community has one asset that should not be underestimated: the code is already in CPython. The specializing adaptive interpreter, the tail-calling interpreter, the copy-and-patch JIT framework, the free-threading groundwork — all of this is present in the codebase and being shipped to users. The question is not whether these technologies exist, but how quickly they can be advanced by a team working on volunteer time rather than full corporate salaries.

On the JIT in Python 3.14

The JIT compiler in Python 3.14 is present in Windows and macOS binary releases but disabled by default. It can be activated by setting the environment variable PYTHON_JIT=1. As of 3.14, it is not recommended for production use: benchmarks show it can actually slow down certain workloads, particularly recursive code with short call frames. It also cannot be combined with the free-threaded build. The recommendation from core developers is to experiment with it, report results, and wait for 3.15 improvements before deploying it in production contexts.

One dimension that often goes undiscussed in coverage of the cancellation is the loss of institutional knowledge. Mark Shannon spent years developing a precise mental model of CPython's interpreter loop, its memory model, and the specific bottlenecks that mattered most to real-world workloads. Eric Snow spent years on the sub-interpreter and free-threading architecture. Irit Katriel contributed deeply to exception handling and performance tooling. This knowledge does not vanish when people leave a company, but it does become harder to access and apply at scale when those people are no longer working full time on the problem.

Key Takeaways

  1. The cancellation was abrupt and publicly painful: Several team members learned about it while traveling to PyCon US, a detail that illustrated how little warning was given. The lack of a transition plan underscores how precarious corporate sponsorship of open-source infrastructure can be, even at companies as large as Microsoft.
  2. The gains were real but fell well short of the original targets: CPython 3.14 is roughly 20 to 40 percent faster than 3.10 — a genuine, user-visible improvement — but nowhere near the 5x speedup the Shannon plan envisioned. The most ambitious phases of that plan, including a mature JIT compiler, were still in progress when funding was cut.
  3. The JIT is alive but immature: The copy-and-patch JIT framework is present in CPython and improving. Community contributors working under much tighter constraints than the Microsoft team have already made progress toward meaningful JIT speedups in 3.15. This is the most consequential in-progress work, and its trajectory over the next two release cycles will define the next chapter of CPython performance.
  4. Free-threading and JIT integration remains unresolved: These two major performance features cannot currently be combined. Making them work together is one of the highest-priority items for 3.15 and 3.16, and it represents the kind of deep systems engineering problem that benefits from dedicated, full-time attention rather than volunteer cycles.
  5. Corporate dependency on Python creates a structural funding gap: The same companies that eliminated these teams depend on Python for their AI and data science infrastructure. That dependency may eventually create pressure for renewed sponsorship, either through individual companies, industry consortia, or the PSF. But the community should not assume that pressure will translate into action on any particular timeline.

The Faster CPython project accomplished something important. It proved that CPython's performance could be improved meaningfully through sustained, structured effort, and it built the technical infrastructure — the adaptive interpreter, the JIT framework, the free-threading architecture — that forms the basis for the next generation of Python performance work. What was cancelled was the corporate funding that made that pace of progress possible. The community now carries that work forward under conditions that are harder than anyone would have chosen. Whether the next chapter produces results comparable to what a fully funded team could have delivered is a question that will be answered over the next several Python release cycles.