Every time you type import pathlib or import dataclasses without installing a single package, you are benefiting from a deliberate, carefully governed process. PEP 2 is the document that defines that process -- the rules, responsibilities, and expectations that determine whether a module earns a place inside Python itself.
What Is PEP 2 and Why Does It Exist?
Python Enhancement Proposals, or PEPs, are the formal mechanism through which changes to Python are proposed, debated, and ratified. They range from sweeping language changes (PEP 572 introduced the walrus operator) to style conventions (PEP 8 covers code formatting) to process documents that govern how the Python community operates. PEP 2 falls into that last category. It is classified as a Process PEP, and its status is listed as Active -- meaning it is a living part of Python's governance, not a historical artifact.
The question PEP 2 answers is deceptively simple: how does a new module get added to Python's standard library? Behind that question is a great deal of complexity. The standard library ships with every Python installation on every platform. A module that enters the standard library immediately becomes the responsibility of the Python core development team to maintain, test, document, and keep compatible across every future release. That is a substantial, open-ended commitment. PEP 2 formalizes the gate that a module must pass through before that commitment is made.
PEP 2 was originally created on July 7, 2001, and has been updated over the years. Its current primary author is Brett Cannon, one of Python's most active core developers and a CPython contributor with decades of involvement in the language's direction. The PEP is intentionally short. Its brevity is not an accident -- it delegates the specifics of evaluation criteria to the Python Developer's Guide, keeping itself focused on procedure rather than judgment calls.
PEP 2 does not define what makes a module good enough for the standard library -- it defines the procedure for proposing one. The qualitative criteria (usefulness, stability, licensing, API design) live in the Python Developer's Guide, which PEP 2 explicitly references.
The "Batteries Included" Philosophy
To understand why PEP 2 matters, you first need to understand the philosophy it protects. Python has long been described as a language that comes with "batteries included." The phrase means that a fresh Python installation contains enough in its standard library to handle a wide range of real-world tasks without the programmer needing to hunt down and install third-party packages first. This principle is one of several design philosophies -- alongside the guiding aphorisms captured in PEP 20, The Zen of Python -- that have shaped Python's identity from its earliest days.
Think about what you can do the moment Python is installed. You can parse JSON with import json. You can work with dates and times using import datetime. You can traverse the filesystem with import pathlib, handle regular expressions with import re, write unit tests with import unittest, launch a simple HTTP server, parse HTML, work with CSV files, generate random numbers, compress data, hash strings, manage threads, and much more -- all without a single pip install.
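A short, hypothetical session makes the point concrete -- everything below uses only modules that ship with Python itself:

```python
import json
import re
import datetime
import hashlib

# Parse JSON without installing anything
data = json.loads('{"lang": "Python", "year": 1991}')
print(data["lang"])

# Regular expressions, dates, and hashing are equally built in
assert re.match(r"Py\w+", data["lang"])
print(datetime.date(data["year"], 2, 20).isoformat())
print(hashlib.sha256(b"batteries included").hexdigest()[:12])
```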
As PEP 206 describes, Python's design philosophy centers on shipping a comprehensive standard library so that developers can be productive immediately -- without needing to download anything beyond Python itself.
This philosophy has been a cornerstone of Python's success, particularly for beginners and in environments where network access or package management infrastructure may be limited. A Python developer working in a locked-down enterprise environment, on an embedded system, or learning the language for the first time can be productive immediately. That is a powerful promise -- and maintaining it requires discipline about what gets added to the library and how.
The downside of "batteries included" is that batteries can go dead. A module that enters the standard library cannot simply be removed when a better alternative appears in the wild. The Python community values backward compatibility very highly. Once something is in the standard library, users depend on it, and removing it breaks code. This is precisely why PEP 2 establishes a deliberate acceptance process rather than allowing modules to drift in informally.
PEP 2 at a Glance: Metadata and Authors
Before getting into the mechanics of the procedure, it is worth understanding the formal metadata of PEP 2 itself, because it illustrates how the PEP system works.
The fact that PEP 2 was created in 2001 and is still marked Active in 2026 tells you something important: this is not a one-time decision. It is a continuously valid piece of governance. The most recent modification was made on February 1, 2025, which confirms the document is maintained and evolves alongside the rest of Python's development practices.
Martijn Faassen, one of the original co-authors, was a prominent figure in the early Python web development community. Brett Cannon took over primary authorship and has shaped much of the document's current direction. The involvement of contributors of this caliber underscores how seriously the Python community takes the question of what belongs in the standard library.
The Acceptance Procedure: Top-Level vs. Submodules
PEP 2 draws a clear line between two types of additions to the standard library: top-level modules and packages, and submodules added to an existing package. The procedure differs significantly between these two cases.
Top-Level Modules and Packages
If you want to propose an entirely new top-level module -- something that would be imported as import yourmodule at the top level -- a full PEP is required. There is no shortcut. The PEP must go through the standard PEP process as described in PEP 1, which means drafting a formal proposal, getting it reviewed by the Python core team, going through rounds of discussion on the python-dev mailing list or Discourse forum, and ultimately receiving a decision from the Python Steering Council.
This requirement reflects the weight of the commitment. A top-level module is a highly visible addition to the language. It takes up a namespace that can never be fully reclaimed without breaking code. It must be documented, tested, and maintained indefinitely. Requiring a full PEP ensures that the proposal is carefully examined from multiple angles before any commitment is made.
If you have a Python package you think belongs in the standard library, the best first step is to establish it as a successful third-party package on PyPI with a clear use case, a stable API, and wide adoption. A proven track record dramatically strengthens any eventual PEP proposal.
Submodules of Existing Packages
The rules are more flexible when the addition is a submodule within a package that already lives in the standard library. In this case, the decision falls to the discretion of the Python development team. A formal PEP is not strictly required, although significant additions will often prompt discussion. This flexibility acknowledges that expanding an existing package is generally lower-risk than introducing an entirely new namespace. The package is already there, users are already familiar with it, and the addition can be evaluated in context.
For example, adding a new helper function to the os.path module or a new class to the email package would not automatically require a full PEP. It might still go through a significant review process, but the formal PEP requirement only kicks in for top-level additions.
The Developer's Guide
PEP 2 explicitly defers the question of what criteria a module must meet to the Python Developer's Guide. This is an important architectural decision. By keeping the procedural rules in PEP 2 and the evaluative criteria in a living guide document, the Python community can update its standards for module acceptance without having to formally amend the PEP each time. The criteria in the Developer's Guide are detailed and cover areas such as:
- Whether the module addresses a widely recognized need
- Whether the module has an established track record as a third-party package
- Whether the API is stable, well-designed, and appropriately Pythonic
- Whether the module has adequate documentation and tests
- Whether the licensing is compatible with Python's own license
- Whether the module's author or another developer is willing to maintain it within the standard library going forward
What Goes Into a Standard Library PEP?
Since PEP 2 requires a full PEP for top-level module additions, it is worth understanding what a well-constructed PEP looks like. The PEP process, described in PEP 1, requires a formal document that covers the motivation for the change, a concrete technical specification, information about backward compatibility, and a discussion of alternatives that were considered and rejected.
For a standard library addition specifically, a strong PEP would typically address several key questions. First, what problem does this module solve, and why is it best solved at the standard library level rather than as a third-party package on PyPI? Second, what does the proposed API look like? This includes concrete code examples showing how the module would be used. Third, how does the module relate to existing standard library modules? Does it replace, complement, or overlap with anything already there? Fourth, what is the maintenance plan? Who will take ownership of the module within the CPython repository?
```python
# A well-written PEP would include concrete API examples like this.
# For instance, PEP 428 proposed pathlib with examples such as:
from pathlib import Path

# Construct paths in a platform-independent way
p = Path('/usr/bin/python3')

# Navigate the filesystem
config_dir = Path.home() / '.config' / 'myapp'
config_dir.mkdir(parents=True, exist_ok=True)

# Read and write files
data = config_dir / 'settings.json'
data.write_text('{"theme": "dark"}')
print(data.read_text())
```
The PEP also needs to address what happens at both ends of the lifecycle -- how the module gets in, and under what conditions it might eventually be deprecated or removed. This holistic view is part of what distinguishes a serious standard library proposal from a casual suggestion.
Not every PEP for a standard library module gets accepted. Many well-crafted proposals are rejected or indefinitely deferred. Rejection is not a judgment on the quality of the code -- it may reflect a policy decision that the standard library should not expand in a particular direction, or that a third-party package is a better home for a given piece of functionality.
The Maintenance Procedure: A Lifetime Commitment
The second major section of PEP 2 addresses what happens after a module is accepted. This is where PEP 2 makes its most demanding requirement clear. Any module accepted into the standard library is expected to be primarily maintained there, within Python's development infrastructure.
This means the module's home becomes the CPython repository on GitHub. Bug reports come in through the Python issue tracker. Changes are reviewed by CPython core developers. The module must comply with CPython's testing requirements, coding standards (PEP 8 for Python code), and release schedule. The original author of a third-party package who proposed its inclusion may or may not remain the primary maintainer -- but someone within the Python development ecosystem must take ownership.
The Backport Question
PEP 2 acknowledges a common scenario: a developer proposes a module that they originally wrote as a standalone package. After acceptance, the original author or other interested developers may choose to continue maintaining a backport of the module on PyPI -- a version that works with older Python releases that don't yet include the new module. This is permitted, but PEP 2 is explicit that keeping such an external backport synchronized with the standard library version is the responsibility of whoever maintains it. The core development team's obligation is to the version within CPython, not to any external copy.
The pathlib module is a good example of this pattern in practice. Before it was added to Python 3.4, an independent pathlib package existed on PyPI. After acceptance, a backport called pathlib2 was maintained to support older Python versions. The standard library version moved forward according to CPython's own development process, while the backport was the responsibility of its separate maintainers.
If you propose a module for the standard library and it is accepted, you are implicitly accepting that the canonical version now lives in CPython. The Python core team can make changes to it, and your external version may diverge if not actively synchronized. This is a significant consideration before proposing inclusion.
The Cost of Perpetual Maintenance
PEP 2's introduction section articulates something that developers sometimes overlook: every addition to the standard library carries a perpetual cost. The Python development team is finite. Every module in the standard library requires attention during new CPython releases, during security audits, and whenever the module's dependencies or the underlying platform APIs change. There is also a cognitive cost for users who must be familiar with an ever-growing library to understand what tools are available to them.
This perpetual cost is why the standard library's acceptance bar is deliberately high. It is not that the Python team does not appreciate well-written code. It is that adding something to the standard library is a decades-long promise, and the team takes that promise seriously.
Real Examples: Modules That Went Through This Process
Looking at modules that successfully went through the process outlined in PEP 2 helps make the abstract procedure concrete.
pathlib (PEP 428, Python 3.4)
The pathlib module provides an object-oriented interface to filesystem paths. Before its inclusion, Python developers had to use a combination of os, os.path, and shutil functions -- a functional but ergonomically awkward approach. PEP 428 made the case for a path object that treated filesystem paths as first-class objects rather than strings, and demonstrated clear improvement in readability and composability. It had a prior life as a third-party package, which helped establish its API and gather real-world usage data before the formal proposal.
dataclasses (PEP 557, Python 3.7)
The dataclasses module addressed a very common pattern in Python: writing classes that are primarily containers for data, requiring boilerplate __init__, __repr__, and __eq__ methods. PEP 557 proposed a decorator-based approach inspired by libraries like attrs. The proposal went through significant debate about scope and API design before landing in Python 3.7. Its success illustrates how a module can enter the standard library even when a popular third-party alternative already exists -- if the standard library version can be designed well enough to serve as a universal baseline.
```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Student:
    name: str
    student_id: int
    grades: List[float] = field(default_factory=list)

    def average_grade(self) -> float:
        if not self.grades:
            return 0.0
        return sum(self.grades) / len(self.grades)

# No need to write __init__, __repr__, or __eq__ manually
student = Student(name="Alex", student_id=12345, grades=[88.5, 91.0, 79.5])
print(student)
# Student(name='Alex', student_id=12345, grades=[88.5, 91.0, 79.5])
print(student.average_grade())  # 86.33...
```
zoneinfo (PEP 615, Python 3.9)
The zoneinfo module brought proper IANA time zone database support into the standard library. For years, working correctly with time zones in Python required the third-party pytz library. The zoneinfo addition resolved a long-standing gap and demonstrated that even well-established third-party libraries can eventually be superseded by a standard library implementation when the need is broad enough and the API can be designed cleanly.
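A minimal illustration of what zoneinfo made possible without any third-party dependency (assuming the system's IANA database, or the tzdata package from PyPI, is available):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Attach a real IANA time zone -- no pytz required
meeting = datetime(2024, 7, 1, 12, 0, tzinfo=ZoneInfo("America/New_York"))
print(meeting.isoformat())                              # 2024-07-01T12:00:00-04:00
print(meeting.astimezone(ZoneInfo("UTC")).isoformat())  # 2024-07-01T16:00:00+00:00
```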
tomllib (PEP 680, Python 3.11)
The tomllib module for reading TOML configuration files was added in Python 3.11, reflecting how widely TOML had been adopted in the Python ecosystem as a configuration format (most visibly in pyproject.toml). This is a good example of a module entering the standard library based on demonstrated ecosystem-wide need rather than just theoretical usefulness.
The Decision Flowchart: How a Module's Fate Is Determined
Reading PEP 2 in isolation can make the process feel abstract. In practice, the decision path a module follows is a sequence of gates, each with its own set of stakeholders and criteria. Thinking through these gates as a connected chain makes the logic more concrete.
The first question is whether the proposal is for a top-level module or a submodule of an existing package. If it is a submodule, the decision is delegated to the Python development team and does not require a formal PEP, though significant additions will still prompt discussion. If it is a top-level module, a full PEP is required, which triggers the remainder of the chain.
The PEP itself must answer a sequence of increasingly difficult questions. Does the module address a genuine, broadly recognized need? Has it proven itself as a third-party package with real adoption? Is the API stable, Pythonic, and well-documented? Is the licensing compatible? And critically: is someone willing to maintain this module within CPython's infrastructure indefinitely?
If the PEP clears the community discussion phase, it reaches the Python Steering Council for a final decision. The Steering Council considers not just the merits of the individual module, but the systemic implications -- does this addition set a precedent? Does it conflict with the direction the standard library is moving? Would accepting this module implicitly obligate the team to accept similar proposals in the future?
This is not a rubber-stamp process. PEP 431 proposed a major overhaul of standard library time zone support. The effort stalled and was ultimately superseded -- the zoneinfo module (PEP 615) arrived years later in Python 3.9 with a fundamentally different design. The fact that a need was recognized early did not mean the first proposed solution was the right one.
What the Gatekeepers Are Really Asking
At every stage, the underlying question is not "is this module good?" but rather "is this the right commitment for the next twenty years?" This distinction is essential. A module can be well-written, useful, and popular, and still not belong in the standard library. The inverse is also true -- a module that seems niche today might address a need that will only grow over time. The Steering Council's role is not to evaluate code quality alone. It is to make a judgment about the long-term trajectory of both the module and the language.
Consider the comparison between modules that made it through the gate and those that did not. The pattern is instructive.
| Module | PEP | Python Version | Outcome | Why |
|---|---|---|---|---|
| pathlib | PEP 428 | 3.4 | Accepted | Addressed a universal need with a cleaner API than os.path |
| dataclasses | PEP 557 | 3.7 | Accepted | Eliminated pervasive boilerplate with a clean decorator-based design |
| zoneinfo | PEP 615 | 3.9 | Accepted | Filled a critical gap -- time zones were virtually unusable without pytz |
| tomllib | PEP 680 | 3.11 | Accepted | TOML became a core part of Python packaging via pyproject.toml |
| compression.zstd | PEP 784 | 3.14 | Accepted | Zstandard emerged as a dominant compression format across the industry |
| timezone | PEP 431 | -- | Superseded | Design stalled; zoneinfo later solved the problem differently |
| enum | PEP 435 | 3.4 | Accepted | Enum types are fundamental; third-party packages proved the design (the stdlib version was later backported as enum34) |
The pattern that emerges from this comparison is clear. Successful proposals share three traits: they solve a problem that affects a large percentage of Python developers, they arrive with a proven track record as a third-party package, and their API is designed to complement rather than conflict with existing standard library modules.
The Standard Library Is Still Moving: 2025 and Beyond
It is tempting to read PEP 2 as a historical document -- a governance artifact from 2001 that describes how Python used to work. That impression is wrong. PEP 2's process is actively shaping the Python standard library right now. Python 3.14, released in October 2025, delivered several notable standard library additions that went through exactly this process.
compression.zstd (PEP 784)
The compression.zstd module brought Zstandard compression directly into the standard library. Zstandard had become the dominant modern compression algorithm in systems programming, data engineering, and container runtimes, but Python developers still needed to install the third-party zstd or zstandard package to use it. PEP 784 made the case that the format had reached the level of ubiquity that warranted standard library inclusion -- the same threshold that json, gzip, and bz2 had cleared in their time.
Beyond the standalone module, Python 3.14 also introduced a new compression package that re-exports existing compression modules like lzma, bz2, gzip, and zlib. These new import names under the compression namespace are now the recommended canonical names, signaling a long-term organizational strategy for the standard library. This is PEP 2's process in action -- not just adding a module, but thinking about how it fits into the library's structure for decades to come.
```python
# Python 3.14: Zstandard compression is now a standard battery
from compression import zstd

# Compress data in memory
original = b"Hello, standard library!" * 1000
compressed = zstd.compress(original)
print(f"Original: {len(original)} bytes -> Compressed: {len(compressed)} bytes")

# The new compression namespace also provides a cleaner organizational model
from compression import gzip, bz2, lzma, zstd  # all under one roof
```
concurrent.interpreters (PEP 734)
Perhaps the most architecturally significant addition in Python 3.14 is the concurrent.interpreters module. Multiple interpreters have existed in CPython's C API for years, but no standard library module exposed them to Python code. PEP 734 changed that, giving Python developers a built-in way to create isolated interpreter contexts within a single process. Each interpreter gets its own GIL, enabling genuine parallel execution of Python code without the overhead of spawning separate OS processes.
This is a case where PEP 2's requirement for a full PEP was particularly important. The implications of exposing subinterpreters to the standard library are profound -- it changes the concurrency story for the entire language. The proposal went through extensive review precisely because the commitment is not just to maintain a module, but to shape how Python developers think about parallelism going forward.
annotationlib (PEP 749)
The annotationlib module, introduced alongside PEP 649's deferred evaluation of annotations in Python 3.14, provides tools for introspecting annotations as real objects rather than strings. This addition is more subtle than the others, but it illustrates an important pattern: sometimes a standard library module exists not to solve a user-facing problem, but to provide infrastructure that other tools and libraries need to build on.
What Python 3.15 Tells Us About the Future
Python's next major release, Python 3.15, is currently in alpha development and is scheduled for October 2026. There was a notable proposal -- PEP 2026 -- to adopt calendar versioning, which would have made the next release Python 3.26 instead of Python 3.15. The Steering Council ultimately rejected that proposal, and Python continues with its established sequential versioning. But the discussion itself is revealing. It shows a community that is thinking carefully not just about what goes into the standard library, but about how the language communicates its evolution to developers. Regardless of version numbering, PEP 2's process remains the gate through which every new standard library module must pass.
The Tension Nobody Talks About
There is a philosophical tension at the heart of PEP 2 that is rarely discussed openly, but that shapes every decision the Steering Council makes about the standard library. It is the tension between two legitimate and competing goals: making Python immediately useful out of the box, and keeping the standard library from becoming a graveyard of abandoned code.
The "batteries included" philosophy is powerful. It means a Python developer in a restricted environment -- an air-gapped network, a corporate laptop without package manager access, an educational setting where students cannot install dependencies -- can still accomplish meaningful work. That is a real and important guarantee. But every battery that gets added is a battery that must be maintained. And the Python core development team is a finite group of volunteers and sponsored contributors. Every hour spent maintaining a standard library module is an hour not spent on the interpreter, the type system, or the build pipeline.
This tension has grown more acute as the PyPI ecosystem has matured. In 2001, when PEP 2 was written, there was no reliable, universal package installer. Making something part of the standard library was often the only practical way to ensure it was available to Python developers everywhere. Today, pip ships with every Python installation, PyPI hosts over 500,000 packages, and the infrastructure for discovering, installing, and managing dependencies is robust. The economic case for including a module in the standard library has shifted.
The question is no longer "is this module useful?" but "would this module be better served inside the standard library than as a well-maintained package on PyPI?"
The answer is increasingly "no" for many categories of functionality. Libraries that benefit from rapid iteration, frequent breaking changes, or tight coupling to third-party services are often better off on PyPI where they can release independently of CPython's annual cycle. Libraries that provide stable, foundational infrastructure -- compression algorithms, data formats used by the packaging system itself, concurrency primitives -- still make strong cases for inclusion.
This is the real governing logic behind PEP 2 in practice. The text of the PEP itself is short and procedural. The judgment calls that happen within that procedure are shaped by this deeper tension, and understanding that tension is what separates a surface-level reading of PEP 2 from a genuine understanding of how the standard library is governed.
If you are considering proposing a module for the standard library, ask yourself: would this module's users be better served by the stability guarantees and universal availability of the standard library, or by the faster release cycle and lower contribution friction of a PyPI package? If the answer is not clearly the former, PyPI is probably the right home.
The Other Side: Deprecation and PEP 4
PEP 2 explicitly notes its relationship to PEP 4, which governs the deprecation of standard library modules. Understanding this pairing is essential to understanding how the standard library is managed as a whole.
Modules do not simply leave the standard library. When a module is deemed obsolete, redundant, or insufficiently maintained, it must go through a formal deprecation process. PEP 594, authored by Brett Cannon and Christian Heimes, is a prominent recent example. It proposed removing a set of modules that had become what the Python community called "dead batteries" -- modules that no longer served a meaningful purpose in the modern Python ecosystem, had better third-party alternatives, or had design flaws that made them more harmful than helpful.
The modules targeted by PEP 594 were deprecated in Python 3.11 with warnings, then removed entirely in Python 3.13, giving developers a multi-release window to migrate. This controlled, gradual approach to removal reflects the same philosophy of care that PEP 2 applies to additions. The standard library changes slowly and deliberately in both directions.
If you encounter a DeprecationWarning from a standard library module, treat it seriously. Deprecated standard library modules follow a predictable removal timeline that is documented in the PEP that deprecated them. You typically have at least two minor version cycles to find an alternative.
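One practical safeguard, supported directly by the warnings module, is promoting DeprecationWarning to an error in your test suite so that deprecated standard library usage surfaces long before removal:

```python
import warnings

# Treat any DeprecationWarning as a hard error (useful under CI)
warnings.simplefilter("error", DeprecationWarning)

try:
    # Stands in for a call into a deprecated stdlib module
    warnings.warn("this API is scheduled for removal", DeprecationWarning)
except DeprecationWarning as exc:
    print(f"caught early: {exc}")
```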
PEP 2 in the Modern Python Ecosystem
PEP 2 was written in 2001, before PyPI existed in its modern form, before pip was a concept, and before the Python packaging ecosystem had matured into the infrastructure developers rely on today. The landscape has shifted considerably, and PEP 2 reflects that evolution.
One of the significant changes in context since 2001 is the rise of a robust, well-functioning package ecosystem. In the early days of Python, getting a third-party package installed could be genuinely difficult. Making something part of the standard library was often the only practical way to ensure wide availability. Today, pip install is available by default in virtually every Python environment, and PyPI hosts hundreds of thousands of packages. This has changed the calculus for what belongs in the standard library.
The Python Steering Council and core development team have been increasingly thoughtful about the scope of the standard library. The question is no longer just "is this module useful?" but "is this module better served as part of the standard library than as a well-maintained package on PyPI?" In many cases, a high-quality PyPI package offers advantages: faster release cycles, easier contribution, freedom from CPython's backward compatibility constraints, and the ability to support a wider range of Python versions simultaneously.
The Provisional Module Mechanism
One practical evolution in how PEP 2's process plays out is the provisional module designation, described in PEP 411. A module can be marked as provisional in its documentation for one or more release cycles, signaling to users that the API may still change before being finalized. This mechanism allows promising modules to enter the standard library and receive real-world use before the API is locked in. It is a pragmatic compromise that acknowledges both the value of wide exposure and the cost of premature API lock-in.
Other PEPs That Reference PEP 2
PEP 2 is not an island. Several other PEPs in the Python ecosystem explicitly build on or reference it, which illustrates its foundational role. PEP 399 requires that new standard library modules providing a C-accelerated implementation also have a pure Python reference implementation, so that alternative Python runtimes (Jython, PyPy, etc.) can support the module without reimplementing it from scratch. Any module proposed under PEP 2's process must satisfy PEP 399's requirements if it uses C extensions. PEP 408 and PEP 411 explored different mechanisms for a provisional or preview state, both explicitly requiring compliance with PEP 2's acceptance conditions.
```python
# PEP 399's requirement means modules with C acceleration look like this:
# Python automatically uses the C accelerator (_csv) when available,
# but the pure Python csv module exists as a reference implementation.
import csv

# This works identically on CPython (C accelerator), PyPy, and other runtimes
with open('data.csv', newline='') as f:
    for row in csv.reader(f):
        print(row)
```
Why PEP 2 Stays Short
One of the instructive things about PEP 2 is its brevity. The actual text of the PEP is only a few paragraphs. This is not an oversight -- it is a deliberate design choice. By keeping the formal procedural requirements minimal and deferring evaluative criteria to the Developer's Guide, the Python community maintains flexibility. The specific standards for what makes a good standard library module can be updated without reopening a formal PEP. The procedure itself -- write a PEP, go through the review process, accept maintenance responsibility -- is stable enough to codify permanently.
This separation of concerns is a pattern seen throughout well-maintained governance systems. The constitution defines the process; the legislation defines the specifics. PEP 2 is the constitution for standard library additions.
Key Takeaways
- PEP 2 is a Process PEP, not a technical specification. It defines the procedure for adding modules to Python's standard library, not the criteria for what makes a module good. Those criteria live in the Python Developer's Guide, which PEP 2 explicitly references.
- Top-level modules require a formal PEP. There is no shortcut for adding a new top-level namespace to the standard library. The proposal must go through the full PEP process, including review by the Steering Council.
- Submodule additions are more flexible. Adding a submodule to an existing standard library package is at the discretion of the Python development team and does not automatically require a full PEP, though significant additions will still receive scrutiny.
- Acceptance means perpetual maintenance commitment. Once a module enters the standard library, it becomes the Python development team's responsibility. This ongoing cost is why the acceptance bar is high and the process is deliberate.
- PEP 2 works alongside PEP 4. The addition and removal procedures are complementary. The standard library grows carefully through PEP 2's process and shrinks carefully through PEP 4's deprecation procedure.
- The ecosystem context has changed since 2001. The modern PyPI and pip infrastructure means the standard library is no longer the only viable way to achieve wide distribution. This has raised the bar for what genuinely belongs in the standard library rather than on PyPI.
- PEP 2 is actively shaping Python right now. Python 3.14 (October 2025) delivered compression.zstd, concurrent.interpreters, and annotationlib -- each of which went through the process PEP 2 defines. This is not a historical document.
- The real governing logic is about tension management. Every standard library decision balances the value of universal availability against the cost of perpetual maintenance. Understanding that tension is what turns a surface-level reading of PEP 2 into a genuine understanding of Python's governance.
Understanding PEP 2 gives you a clearer picture of why Python's standard library looks the way it does. The modules that are there were not placed there casually. Each one cleared a deliberate governance process designed to protect both the quality of the library and the long-term stability that Python users depend on. And the modules that are not there -- the proposals that were deferred, rejected, or never made -- tell an equally important story about a community that takes the weight of its commitments seriously. The next time you type import and a module is simply there, you are benefiting from a process that has been running, adapting, and protecting that reliability since 2001. And as Python 3.14's additions demonstrate, it is still doing that work today.