Clean Up Debugging Remnants in Python Code

Every developer has done it. You scatter print() calls and breakpoint() statements across your code while hunting down a bug, fix the problem, and then commit your changes with all that debugging clutter still in place. Leftover debugging code is one of the easiest problems to prevent, yet it remains one of the most common causes of messy pull requests and unexpected production behavior.

This article walks through what debugging remnants look like, why they matter, and how to catch them before they ever leave your machine. Whether you prefer manual cleanup or automated tooling, you will walk away with a practical strategy to keep your Python projects clean.

The Common Culprits

Debugging remnants come in several forms. Knowing what to look for is the first step toward eliminating them. Here are the types you will encounter across almost any Python codebase.

Print Statements

The print() function is the quickest debugging tool available. It requires no imports, no configuration, and no setup. That convenience is exactly why stray print calls end up in production so often. They range from the vague to the marginally descriptive:

# These are all real examples of debugging prints
# that have made it into production code

print("here")
print(f"DEBUG: user = {user}")
print("-----")
print(x)
print(f"value before loop: {total}")

Breakpoint and Debugger Calls

Python 3.7 introduced the built-in breakpoint() function, which drops you into the interactive debugger. Before that, developers would import pdb directly. Both approaches leave traces that are far worse than stray print calls because they will pause your application entirely when triggered:

# Any of these will halt execution in production
breakpoint()
import pdb; pdb.set_trace()
import ipdb; ipdb.set_trace()
Warning

A forgotten breakpoint() in production will freeze your application at that line and wait for interactive input that never comes. This can cause timeouts, hung processes, and service outages.
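As a safety net, Python also respects the PYTHONBREAKPOINT environment variable. The default breakpoint() hook consults it on each call, and setting it to 0 turns every breakpoint() into a no-op, which is worth doing in production environments regardless of how carefully you clean up:

```python
import os

# The default breakpoint() hook reads PYTHONBREAKPOINT on each call.
# Setting it to "0" (normally done in the shell before launching the app,
# e.g. `PYTHONBREAKPOINT=0 python app.py`) disables breakpoint() entirely.
os.environ["PYTHONBREAKPOINT"] = "0"

breakpoint()  # now a no-op instead of freezing the process
print("still running")
```

This does not excuse leftover debugger calls, but it turns a potential outage into a harmless line of dead code.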

Commented-Out Code and Debug Flags

Sometimes the remnants are subtler. Developers comment out lines instead of deleting them, leave manual debug flags toggled on, or write temporary TODO comments that never get addressed:

# TODO: remove this before merging
DEBUG = True

def process_order(order):
    # print(f"order details: {order}")  # commented out but still here
    if DEBUG:
        print(f"Processing order #{order.id}")
    result = calculate_total(order)
    # result = old_calculate_total(order)  # old version, keeping just in case
    return result

Each of these makes the codebase harder to read. They add noise that future developers — including your future self — will have to mentally filter out every time they read the file.

Why Debugging Remnants Cause Real Problems

It might be tempting to dismiss leftover print statements as harmless. They do not crash the program, after all. But the problems they introduce are real and accumulate over time.

Noisy output. Print statements pollute stdout. In a web application, that output ends up in server logs, making it harder to find actual log messages. In a CLI tool, debug output confuses users who expect clean, formatted results.

Leaked sensitive data. A careless print(user) or print(request.headers) can dump passwords, tokens, or personal information straight into plain-text logs. This is a security concern, not just a style issue.

Performance overhead. In tight loops or high-throughput code paths, unnecessary print calls add I/O operations that slow execution down. String formatting inside those calls can add up when processing millions of records.

Frozen processes. As mentioned above, a stray breakpoint() or pdb.set_trace() will halt execution entirely and wait for terminal input. In a server or background worker, that means a process that hangs indefinitely.

Code review friction. Pull requests littered with debugging artifacts signal carelessness. Reviewers spend time flagging print statements instead of evaluating the logic of the change itself.

Replace Print Statements With Logging

The single best habit for reducing debugging remnants is to stop using print() for debugging in the first place. Python's built-in logging module gives you everything print does, plus the ability to control output levels, format messages consistently, and turn debug output on or off without changing any code.

import logging

logger = logging.getLogger(__name__)

def process_payment(amount, user_id):
    logger.debug("Processing payment of %s for user %s", amount, user_id)

    if not user_id:
        logger.error("No user_id provided for payment")
        raise ValueError("User ID required")

    result = charge_card(amount, user_id)
    logger.info("Payment of %s completed for user %s", amount, user_id)
    return result

The key difference is control. With logging, you configure the output level once in your application's entry point. Set the level to DEBUG during development and WARNING or ERROR in production. The debug messages stay in your code, ready to be activated again when you need them, but they produce zero output in production unless you choose otherwise.

# In your application's entry point or config file
import logging

# Development configuration
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s %(message)s"
)

# Production configuration
logging.basicConfig(
    level=logging.WARNING,
    format="%(asctime)s %(name)s %(levelname)s %(message)s"
)
Pro Tip

Use logger.debug() with the %s formatting style instead of f-strings. This defers string formatting until the message actually needs to be emitted, avoiding the performance cost of building strings that will never be displayed in production.
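The difference is observable. In this sketch, a hypothetical Expensive class records whether its string representation was ever built; with %s-style arguments the formatting is skipped when the debug level is disabled, while an f-string builds the string unconditionally:

```python
import logging

logging.basicConfig(level=logging.WARNING)  # debug messages are filtered out
logger = logging.getLogger(__name__)

class Expensive:
    """Records whether str() was ever called on it."""
    def __init__(self):
        self.formatted = False

    def __str__(self):
        self.formatted = True
        return "expensive representation"

lazy = Expensive()
eager = Expensive()

# %s style: the argument is only formatted if the record is emitted,
# and at WARNING level it never is.
logger.debug("value: %s", lazy)

# f-string style: the string is built before logging even sees it.
logger.debug(f"value: {eager}")

print(lazy.formatted)   # False -- formatting was deferred, then skipped
print(eager.formatted)  # True  -- formatting happened anyway
```

For a message logged millions of times in a hot path, that deferred formatting is the difference between free and measurable overhead.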

Logging does not eliminate every scenario where you need quick, throwaway output during development. Sometimes you really do just want to see a value on one line and delete it immediately. That is fine. The point is to make logging your default and treat print() as the exception, not the other way around.

Find Remnants Automatically With Linters

Manual code review catches debugging remnants eventually, but automated tools catch them immediately and consistently. Two approaches stand out for Python projects in 2026.

Ruff

Ruff is an extremely fast Python linter written in Rust that has become the standard tool for Python code quality. It replaces Flake8, isort, pyupgrade, and dozens of other tools with a single command. Ruff includes built-in rules that specifically target debugging remnants:

  • T201 — Flags any use of print()
  • T203 — Flags any use of pprint()
  • T100 — Flags debugger imports and calls such as breakpoint() and pdb.set_trace()

To enable these rules, add them to your pyproject.toml:

# pyproject.toml
[tool.ruff.lint]
select = [
    "E",    # pycodestyle errors
    "F",    # pyflakes
    "T20",  # flake8-print (catches print and pprint)
    "T10",  # flake8-debugger (catches breakpoint and pdb)
]

Now running ruff check . will flag every print statement, pprint call, and breakpoint in your codebase. If you have intentional print statements in CLI scripts or output-focused files, you can exempt specific files:

# pyproject.toml
[tool.ruff.lint.per-file-ignores]
"cli.py" = ["T201"]
"scripts/*" = ["T201"]
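For a one-off intentional call inside a file that is otherwise covered by the rules, an inline noqa comment suppresses the finding on just that line:

```python
# A single intentional print can be exempted inline rather than per-file.
def report_status(status: str) -> None:
    print(f"status: {status}")  # noqa: T201
```

This keeps the exemption visible right next to the code it applies to, so reviewers can see it was deliberate.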

Flake8 With Plugins

If your project still uses Flake8, you can achieve the same coverage by installing two plugins: flake8-print for catching print statements and flake8-debugger for catching breakpoint and pdb calls. Ruff's T20 and T10 rule groups are reimplementations of these plugins, so the rule codes are identical.
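Assuming a standard pip setup, installing and running the plugin-equipped Flake8 looks like this:

```shell
# Install Flake8 together with the print- and debugger-detection plugins
pip install flake8 flake8-print flake8-debugger

# Run it; T201/T100 findings appear alongside the usual Flake8 output
flake8 .
```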

Note

Ruff runs 10–100 times faster than Flake8, according to its own benchmarks. If you are starting a new project or looking to consolidate your toolchain, Ruff is the recommended choice. It can replace Flake8 and dozens of its plugins with a single, faster tool.

Stop Remnants at the Gate With Pre-Commit Hooks

Finding debugging remnants after you have already committed them is better than nothing, but the ideal approach is to block them from being committed in the first place. Pre-commit hooks run automated checks every time you execute git commit, and they reject the commit if any check fails.

The pre-commit framework makes this straightforward. First, install it:

# Install the pre-commit framework
pip install pre-commit

Then create a .pre-commit-config.yaml file in the root of your project:

# .pre-commit-config.yaml
repos:
  # Standard checks including debug statement detection
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0
    hooks:
      - id: debug-statements
      - id: trailing-whitespace
      - id: end-of-file-fixer

  # Ruff for linting and formatting
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.15.5
    hooks:
      - id: ruff-check
        args: [--fix]
      - id: ruff-format

Finally, install the hooks into your local Git repository:

# Install the hooks
pre-commit install

From this point forward, every commit will be checked automatically. The debug-statements hook from pre-commit-hooks catches breakpoint(), pdb.set_trace(), and similar debugger calls. The Ruff hook catches print statements, formatting issues, and anything else covered by your Ruff configuration.

When a hook fails, the commit is blocked. You see the offending lines in your terminal, fix them, re-stage the files, and commit again. The entire cycle takes seconds.

Pro Tip

Add the same Ruff checks to your CI/CD pipeline as well. Pre-commit hooks protect individual developers, but CI checks protect the entire team by catching anything that slips through when someone bypasses hooks with git commit --no-verify.
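In CI, the same hooks can be run against the entire repository rather than only staged files; a typical invocation (assuming pre-commit is installed in the CI image) is:

```shell
# Run every configured hook against all files in the repository.
# This is the usual command to put in a CI step.
pre-commit run --all-files
```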

A Quick Manual Sweep With grep

Not every project has pre-commit hooks or a linter configured yet. When you need a fast, zero-setup check before pushing your code, grep does the job. Here is a simple command that scans your Python files for the usual suspects:

# Search for common debugging remnants
grep -rn --include="*.py" \
    -e "print(" \
    -e "breakpoint()" \
    -e "pdb.set_trace()" \
    -e "ipdb.set_trace()" \
    -e "import pdb" \
    -e "import ipdb" \
    .

The -r flag searches recursively, -n shows line numbers, and --include="*.py" limits results to Python files. This is not a replacement for proper tooling, but it works anywhere, requires no installation, and gives you a quick confidence check.

You can save this as a shell script or alias it in your shell configuration to run it with a single command before every push:

# Add to your .bashrc or .zshrc
alias pycheck='grep -rn --include="*.py" -e "print(" -e "breakpoint()" -e "pdb.set_trace()" .'
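Because grep exits with status 0 when it finds a match, the same search can act as a gate in a small script; this sketch exits with a non-zero status if any debugger calls remain (adjust the patterns to suit your project):

```shell
#!/bin/sh
# Exit non-zero if any debugger remnants are found in Python files,
# so the script can be wired into a pre-push hook or CI step.
if grep -rn --include="*.py" \
    -e "breakpoint()" \
    -e "pdb.set_trace()" \
    -e "ipdb.set_trace()" \
    .; then
    echo "Debugging remnants found above -- aborting." >&2
    exit 1
fi
echo "No debugging remnants found."
```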

Key Takeaways

  1. Know what to look for: Print statements, breakpoint calls, debugger imports, commented-out code, and debug flags are all forms of debugging remnants that should be removed before committing.
  2. Use logging instead of print: Python's logging module gives you level-based control over debug output. You can leave debug messages in your code and silence them in production without changing a single line.
  3. Automate detection with Ruff: Enable the T20 and T10 rule groups to flag print statements and debugger calls automatically. Ruff is fast, comprehensive, and replaces an entire stack of older tools.
  4. Block remnants with pre-commit hooks: The pre-commit framework combined with Ruff and the debug-statements hook ensures that debugging code never makes it into your commit history.
  5. Fall back to grep when needed: A simple recursive grep gives you a quick manual check that works on any machine with no setup required.

Debugging is a normal and necessary part of writing code. The cleanup afterward should be just as normal. Whether you automate the process with hooks and linters or keep a manual checklist, building the habit of removing debugging remnants will make your code cleaner, your logs quieter, and your pull requests sharper.
