You wrote what looks like perfectly reasonable code. Maybe you tried to add two variables together, subtract a price from a budget, or use the | operator for a type hint. Then Python hit you with this: TypeError: unsupported operand type(s) for +: 'int' and 'str'.
If you have ever stared at that message wondering why Python cannot just figure out what you meant, you are not alone. This is one of the most frequently encountered errors in all of Python programming, and it shows up in contexts ranging from beginner arithmetic mistakes to advanced operator overloading in production-grade libraries.
But here is the thing that tutorials typically miss: this error is not just a nuisance. It is a window into Python's entire type philosophy — a design decision that stretches back to the earliest days of the language and connects to active, evolving PEPs (Python Enhancement Proposals) that are still shaping how Python works today.
This article will take you through what this error actually means at the interpreter level, why Python raises it instead of guessing, the operator resolution protocol that runs behind every +, -, and * you write, and how recent and upcoming PEPs have changed the landscape around type-related errors. Real code, real tracebacks, real understanding.
What the Error Actually Means
At its core, TypeError: unsupported operand type(s) means exactly one thing: you asked Python to perform an operation between two objects, and neither object knows how to handle that operation with the other.
The error is not about "wrong syntax" or "bad logic" in the abstract. It is a specific runtime signal that Python's operator dispatch mechanism exhausted all its options. The interpreter tried the left operand's method, tried the right operand's reflected method, and both declined.
The form you will see varies depending on the types and operators involved:
```python
result = "100" + 50
```

```text
Traceback (most recent call last):
  File "main.py", line 1, in <module>
    result = "100" + 50
             ~~~~~~^~~~
TypeError: can only concatenate str (not "int") to str
```
Or the classic arithmetic variant:
```python
budget = 400
price = input("Enter price: ")  # input() always returns a string
remaining = budget - price
# TypeError: unsupported operand type(s) for -: 'int' and 'str'
```
The error message always follows the same template: it tells you which operator failed (+, -, *, /, |, **, etc.) and the types of the two operands involved. That template is your first and most important debugging clue.
Notice the structure: operator, then left type, then right type. This ordering is not arbitrary. It mirrors the dispatch order the interpreter follows internally, which is the subject of the next section.
But the real question is: why does Python refuse here? Other languages might coerce that string to an integer automatically. Python does not, and that is by design.
Python's Type Philosophy: Strong and Dynamic
Python is a dynamically typed language, meaning you do not declare variable types in advance. But it is also strongly typed, meaning it does not silently convert between incompatible types just because an operation would otherwise fail.
This distinction is critical and frequently misunderstood. JavaScript, for example, is dynamically typed and weakly typed — writing "100" + 50 in JavaScript produces "10050" (string concatenation wins). PHP does the reverse, coercing the string to a number in certain contexts. Python refuses both approaches, requiring you to state your intention explicitly.
Guido van Rossum, Python's creator, addressed this design tension in a 2003 interview with Bill Venners for Artima. Van Rossum argued that strong typing helps find bugs, but overemphasizing type correctness distracts from overall program design (Artima, "Strong versus Weak Typing," 2003). That observation captures the design balance Python strikes: it will not guess what you meant when types clash, but it also will not force you to declare types everywhere.
Van Rossum also clarified an important terminological point in that same interview series. He suggested that "runtime typing" was a more accurate description than "weak typing," since every Python object carries its type with it at all times (Artima, "Strong versus Weak Typing," 2003). The community later adopted "dynamic typing" and "duck typing" as the standard terms.
This philosophy means that when you write "100" + 50, Python does not assume you want string concatenation or integer addition. It simply tells you the operation is not defined for those types and asks you to be explicit. You choose: int("100") + 50 or "100" + str(50). The programmer makes the decision, not the interpreter.
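The two explicit resolutions look like this:

```python
s, n = "100", 50

as_number = int(s) + n   # 150 -- you asked for arithmetic
as_text = s + str(n)     # "10050" -- you asked for concatenation
```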
Understanding this philosophy reframes how you think about the error. It is not Python being "dumb" about your intent. It is Python being rigorous about a core design principle: ambiguity should not pass silently.
The Operator Resolution Protocol: What Happens Behind the Scenes
Here is where things get really interesting. When you write x + y, Python does not simply "add two things." It initiates a multi-step resolution protocol defined in the Python Data Model, documented in the official Python documentation at docs.python.org/3/reference/datamodel.html.
The sequence works like this:
Step 1: Python checks whether y is an instance of a subclass of x's type. If it is, Python tries the reflected method y.__radd__(x) first. This subclass priority rule ensures that subclasses can override the behavior of their parent class without the parent's method running first and potentially returning a wrong result.
Step 2: If the subclass exception does not apply, Python calls x.__add__(y) — the left operand's __add__ dunder method.
Step 3: If that method does not exist, or if it returns the special sentinel value NotImplemented, Python tries the reflected version: y.__radd__(x).
Step 4: If __radd__ also does not exist or returns NotImplemented, Python raises TypeError: unsupported operand type(s).
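The four steps can be sketched in pure Python. This is a simplified illustration (real CPython performs the dispatch at the C slot level and applies some extra rules, so treat `emulate_add` as a teaching sketch, not the actual implementation):

```python
def emulate_add(x, y):
    """Simplified sketch of the dispatch behind x + y."""
    lt, rt = type(x), type(y)

    # Step 1: subclass priority -- try the subclass's reflected method first
    if rt is not lt and issubclass(rt, lt) and "__radd__" in rt.__dict__:
        result = y.__radd__(x)
        if result is not NotImplemented:
            return result

    # Step 2: left operand's __add__
    if hasattr(lt, "__add__"):
        result = lt.__add__(x, y)
        if result is not NotImplemented:
            return result

    # Step 3: reflected method on the right operand (skipped for same types)
    if rt is not lt and hasattr(rt, "__radd__"):
        result = rt.__radd__(y, x)
        if result is not NotImplemented:
            return result

    # Step 4: every path declined -- raise the familiar error
    raise TypeError(
        f"unsupported operand type(s) for +: {lt.__name__!r} and {rt.__name__!r}"
    )
```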
The subclass priority rule in Step 1 is often overlooked but has real consequences. If class B inherits from A, then A() + B() will try B.__radd__ before A.__add__. This allows B to return a B instance from the operation rather than having A.__add__ produce an A instance. If both classes are unrelated, the left operand's method gets priority.
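A minimal demonstration of the subclass priority rule, using two throwaway classes that return strings so the dispatch order is visible:

```python
class A:
    def __add__(self, other):
        return "A.__add__ ran"

class B(A):  # B is a subclass of A and defines its own reflected method
    def __radd__(self, other):
        return "B.__radd__ ran"

print(A() + B())  # B.__radd__ ran -- the subclass's reflected method wins
print(A() + A())  # A.__add__ ran  -- no subclass involved, left operand wins
```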
Here is a concrete example of this protocol in action:
```python
class Velocity:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        if isinstance(other, (int, float)):
            return Velocity(self.value + other)
        if isinstance(other, Velocity):
            return Velocity(self.value + other.value)
        return NotImplemented  # Critical: don't raise, return NotImplemented

    def __radd__(self, other):
        return self.__add__(other)

    def __repr__(self):
        return f"Velocity({self.value})"

v = Velocity(60)
print(v + 10)      # Velocity(70) -- calls v.__add__(10)
print(10 + v)      # Velocity(70) -- int.__add__ fails, falls back to v.__radd__(10)
print(v + "fast")  # TypeError -- v.__add__ returns NotImplemented and str has no __radd__
```
That return NotImplemented line is critical, and it trips up many developers. If your __add__ method raises TypeError directly when it encounters an unfamiliar type, you prevent Python from trying the reflected method on the other operand. You short-circuit the protocol. The correct pattern is always to return NotImplemented and let Python's machinery decide whether to raise the error.
The NotImplemented Sentinel: Not What You Think
A common point of confusion: NotImplemented is not NotImplementedError. They are entirely different things with entirely different purposes.
NotImplementedError is an exception you raise when a subclass has not yet implemented a required method. It is part of the exception hierarchy, inheriting from RuntimeError. You use it in abstract base classes and placeholder methods that need to be overridden.
NotImplemented is a sentinel value — a special built-in singleton that dunder methods return (not raise) to signal that a particular operation is not supported for a given pair of types. It is one of Python's built-in constants, alongside None, True, False, and Ellipsis.
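The distinction in code, using two hypothetical classes (`Shape` and `Meters` are illustrative names, not library types):

```python
class Shape:
    def area(self):
        # An exception you *raise*: "a subclass must override this method"
        raise NotImplementedError("subclasses must implement area()")

class Meters:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        if not isinstance(other, Meters):
            # A sentinel you *return*: "I don't handle this pairing"
            return NotImplemented
        return Meters(self.value + other.value)
```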
An important change arrived in Python 3.14: evaluating NotImplemented in a boolean context now raises a TypeError. Previously, it evaluated to True (with a deprecation warning since Python 3.9), as documented in the Python 3.14 Built-in Constants reference. This change was made because developers were accidentally writing code like if not_impl_result: without realizing they were testing a sentinel value rather than a meaningful boolean. The deprecation path spanned five years: the warning was added in 3.9, and the hard error landed in 3.14.
```python
# Python 3.14+
result = NotImplemented
bool(result)  # TypeError: NotImplemented should not be used in a boolean context

# The old behavior (Python 3.8 and earlier):
bool(NotImplemented)  # True -- silently dangerous
```
Why does this matter? Because NotImplemented evaluating as True could mask bugs. If a dunder method accidentally leaked NotImplemented into application code and it was tested in a conditional, the truthiness would suggest success when in fact the operation was never handled.
PEP 661 (Sentinel Values), authored by Tal Einat, proposes a standardized way to create sentinel values in Python's standard library. In September 2025, the PEP author met with the Python Steering Council and reported productive discussion with a clear path forward (discuss.python.org, September 2025). As of early 2026, the PEP still awaits a final decision, with community members tracking its progress toward the 3.15 feature freeze (discuss.python.org, February 2026). While NotImplemented predates this PEP, the proposal reflects the broader recognition that sentinel values play a critical role in Python's internal protocols, and that their behavior needs clear, documented semantics. The standard library already contains over 15 different sentinel implementations, and PEP 661 would unify the pattern.
The Seven Operator Families
The "unsupported operand type" error can arise from any of Python's operator families. Each has its own set of dunder methods, and understanding which family you are dealing with narrows your debugging considerably:
- Arithmetic: `__add__`, `__sub__`, `__mul__`, `__truediv__`, `__floordiv__`, `__mod__`, `__pow__`, `__divmod__` (plus their `__r*__` reflected variants and `__i*__` in-place variants)
- Bitwise: `__and__`, `__or__`, `__xor__`, `__lshift__`, `__rshift__`
- Unary: `__neg__`, `__pos__`, `__invert__`, `__abs__` (strictly, these report the one-operand variant of the message, `bad operand type for unary -`)
- Matrix multiplication: `__matmul__` (introduced in PEP 465, Python 3.5, primarily used by NumPy and data science libraries)
- Comparison: `__eq__`, `__ne__`, `__lt__`, `__le__`, `__gt__`, `__ge__` (these return `NotImplemented` using the same protocol, but Python raises `TypeError` differently for ordering operations on incompatible types)
- Type union: `__or__` on `type` objects (added in Python 3.10 via PEP 604, used for type hint expressions like `int | str`)
- Containment and iteration: `__contains__`, `__iter__` — while these do not follow the full reflected-method protocol, they can produce `TypeError` when types do not support the expected interface
Each operator method can return NotImplemented, and each binary operator has a reflected counterpart. For in-place operators like +=, Python tries x.__iadd__(y) first, then falls back to x.__add__(y), then to y.__radd__(x). Only when all three paths return NotImplemented (or are not defined) does the TypeError get raised. This three-tier fallback is why in-place operators sometimes succeed where regular operators would not — they get an extra method lookup.
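A concrete case of that extra lookup is list and tuple. The reason `+=` succeeds here is that `list.__iadd__` accepts any iterable (it behaves like `extend`), while `list.__add__` insists on another list:

```python
x = [1, 2]

# [1, 2] + (3, 4) raises TypeError:
#   list.__add__ returns NotImplemented and tuple defines no __radd__

x += (3, 4)  # works: list.__iadd__ accepts any iterable
print(x)     # [1, 2, 3, 4]
```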
Here is a quick example that catches many beginners — the NoneType variant:
```python
data = [10, 20, None, 40]
total = sum(data)
# TypeError: unsupported operand type(s) for +: 'int' and 'NoneType'
```
The sum() function internally uses + to accumulate values. When it reaches None, int.__add__(None) returns NotImplemented, and NoneType has no __radd__ method, so Python raises the error. The fix is to filter or handle None values before summing:
```python
total = sum(x for x in data if x is not None)
```
PEP 604 and the | Operator: A Modern TypeError Trap
One of the more surprising contexts for this error in modern Python involves type hints and the union operator |.
PEP 604, authored by Philippe Prados and Maggie Moss with Guido van Rossum serving as BDFL-Delegate, was accepted for Python 3.10. It introduced the ability to write union types as X | Y instead of the more verbose Union[X, Y] from the typing module. The official Python 3.10 release notes describe the change as a new type union operator enabling the syntax X | Y (docs.python.org, "What's New in Python 3.10").
But here is where developers get caught:
```python
# This works fine in Python 3.10+
def greet(name: str | None) -> str:
    return f"Hello, {name}"

# This fails in Python 3.9 and earlier:
def greet(name: str | None) -> str:  # TypeError!
    return f"Hello, {name}"
# TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'
```
In Python 3.9 and earlier, the | operator is not defined on type objects (the metaclass of built-in types). The expression str | None tries to call type.__or__(str, None), which does not exist, so Python raises our familiar TypeError.
The mechanism behind this is precisely the operator resolution protocol covered earlier. Python checks type.__or__, does not find it, checks the right operand's __ror__, does not find it on NoneType either, and raises the error. PEP 604's implementation added __or__ and __ror__ to the type metaclass starting in 3.10.
The workaround for older Python versions comes from PEP 563 (Postponed Evaluation of Annotations), authored by Lukasz Langa and introduced in Python 3.7. By adding from __future__ import annotations at the top of your module, all annotations are stored as strings rather than being evaluated at definition time. This means str | None becomes the string 'str | None' in __annotations__, avoiding the runtime evaluation entirely.
```python
from __future__ import annotations

# Now this works even in Python 3.7+
def greet(name: str | None) -> str:
    return f"Hello, {name}"
```
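You can verify the stringification by inspecting `__annotations__` directly:

```python
from __future__ import annotations

def greet(name: str | None) -> str:
    return f"Hello, {name}"

# With the future import, annotations stay as unevaluated source strings,
# so no type.__or__ call ever happens at definition time:
print(greet.__annotations__)  # {'name': 'str | None', 'return': 'str'}
```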
Be careful: if your code evaluates annotations at runtime (for instance, through libraries like Pydantic or SQLAlchemy that inspect __annotations__), using from __future__ import annotations combined with the | syntax on Python versions below 3.10 can cause failures when those libraries attempt to resolve the stringified annotations. Pydantic v2 handles this well, but older Pydantic v1 and some ORMs may not.
PEP 657: Better Error Messages for TypeError
For years, one of the biggest frustrations with TypeError was the lack of precision in tracebacks. Consider this code:
```python
result = a.value + b.value * c.value - d.value
```
If one of those attributes was None, the old traceback would only point to the line number, leaving you to guess which of the four .value accesses was the problem.
PEP 657, authored by Pablo Galindo Salgado, Batuhan Taskaya, and Ammar Askar, was implemented in Python 3.11. It introduced fine-grained error locations in tracebacks by adding column offset information to bytecode. Now, the interpreter can point to the exact subexpression that caused the error:
```python
# Python 3.11+
x = {"a": 1, "b": None}
result = x["a"] + x["b"] * 3
```

```text
Traceback (most recent call last):
  File "main.py", line 2, in <module>
    result = x["a"] + x["b"] * 3
                      ~~~~~~~^~~
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
```
Those tilde (~) and caret (^) characters underneath the expression pinpoint exactly which operation failed and which operands were involved. The PEP explicitly cited Java's JEP 358, which similarly improved NullPointerException messages, as prior art (peps.python.org, PEP 657). The Python 3.9 switch to a PEG-based parser (PEP 617) also facilitated the richer positional data that PEP 657 requires, since the new parser can naturally track more precise source locations.
Python 3.14 continues this trajectory with even more refined error messages. For example, unhashable type errors now include context about what you were trying to do: instead of just unhashable type: 'list', Python 3.14 reports cannot use 'list' as a dict key (unhashable type: 'list'), as documented in CPython issue #132825 and confirmed in the Python 3.14 release.
Python 3.12 and 3.13 also contributed improvements in this area: 3.12 added better error messages for common NameError and ImportError scenarios (including "Did you forget to import?" hints and import name suggestions) and improved the suggestion engine for instance attributes, while 3.13 expanded "Did you mean...?" suggestions to cover incorrect keyword arguments in function calls and added module-shadowing warnings. Together, these three releases represent a significant leap in Python's error reporting quality.
Real-World Patterns That Trigger This Error
Beyond the textbook str + int case, here are the patterns that experienced developers encounter in production code:
Pattern 1: Functions That Return None Unexpectedly
```python
def calculate_discount(price, rate):
    if rate > 0:
        return price * rate
    # Forgot the else clause -- returns None implicitly

discount = calculate_discount(100, 0)
final_price = 100 - discount
# TypeError: unsupported operand type(s) for -: 'int' and 'NoneType'
```
This is arguably the most insidious variant. The function sometimes works and sometimes returns None, making the bug intermittent and hard to reproduce. The fix is to ensure every code path returns a value:
```python
def calculate_discount(price, rate):
    if rate > 0:
        return price * rate
    return 0
```
A deeper fix is to use mypy --strict or pyright, which will catch the implicit None return. A function that sometimes returns float and sometimes returns None has an inferred return type of float | None, and a static checker will flag any arithmetic performed on that return value without a None guard.
Pattern 2: Unparsed Input Data
```python
import json

data = json.loads('{"price": "49.99", "quantity": "3"}')
total = data["price"] * data["quantity"]
# TypeError: can't multiply sequence by non-int of type 'str'
```
JSON does not distinguish between numeric strings and actual numbers if the source encoded them as strings. Always validate and convert external data before performing arithmetic. Consider using Pydantic models or custom deserialization to enforce type contracts at the boundary:
```python
import json
from decimal import Decimal

data = json.loads('{"price": "49.99", "quantity": "3"}')
total = Decimal(data["price"]) * int(data["quantity"])
```
Pattern 3: Pandas and NumPy Mixed-Type Operations
```python
import pandas as pd

df = pd.DataFrame({"revenue": [100, 200, "N/A", 400]})
total = df["revenue"].sum()  # May raise TypeError depending on pandas version
```
DataFrames with mixed types are a constant source of operand errors. Use pd.to_numeric(df["revenue"], errors="coerce") to convert safely, replacing unconvertible values with NaN. In NumPy, similar issues arise with np.add on object-dtype arrays. The fix is always the same: enforce homogeneous types before performing arithmetic.
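For readers without pandas at hand, the same coerce-then-compute idea looks like this in pure Python (the `to_number` helper is an illustrative sketch, not a pandas API):

```python
def to_number(value):
    """Coerce a value to float; map unparseable entries (like 'N/A') to None."""
    try:
        return float(value)
    except (TypeError, ValueError):
        return None

raw = [100, 200, "N/A", 400]
clean = [v for v in map(to_number, raw) if v is not None]
print(sum(clean))  # 700.0
```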
Pattern 4: Operator Overloading Gone Wrong
```python
class Matrix:
    def __init__(self, data):
        self.data = data

    def __add__(self, other):
        if not isinstance(other, Matrix):
            raise TypeError(f"Cannot add Matrix and {type(other)}")  # Wrong!
        # ... matrix addition logic
```
As discussed earlier, raising TypeError directly prevents Python from trying the reflected method. If other has a __radd__ that could handle Matrix objects, you have just blocked it. Always return NotImplemented instead:
```python
def __add__(self, other):
    if not isinstance(other, Matrix):
        return NotImplemented
    # ... matrix addition logic
```
Pattern 5: Datetime and Timedelta Confusion
```python
from datetime import datetime

start = datetime.now()
end = datetime.now()

elapsed = end - start  # This works: produces a timedelta

total = start + end    # This fails:
# TypeError: unsupported operand type(s) for +: 'datetime.datetime' and 'datetime.datetime'
```
You can subtract two datetime objects (producing a timedelta), and you can add a timedelta to a datetime, but you cannot add two datetime objects together. The error seems counterintuitive until you think about what "adding two dates" would mean semantically — it is undefined, so Python rightly refuses.
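The operations that are semantically defined all work as expected (fixed dates used here so the arithmetic is checkable):

```python
from datetime import datetime, timedelta

start = datetime(2024, 1, 1, 9, 0)
end = datetime(2024, 1, 1, 17, 30)

elapsed = end - start                 # datetime - datetime -> timedelta
deadline = start + timedelta(days=7)  # datetime + timedelta -> datetime
print(elapsed)                        # 8:30:00
```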
Pattern 6: Accidental Boolean Arithmetic
```python
flags = {"enabled": True, "count": 5}
result = flags["enabled"] + flags["count"]  # This actually works! result == 6
```
Wait — this one does not raise an error. That is because bool is a subclass of int in Python, so True + 5 evaluates to 6. The danger is the inverse: when you expect an error and do not get one, leading to subtle logic bugs.
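The same subclass relationship is also genuinely useful when you want it, for example counting how many flags are set:

```python
checks = [True, False, True, True]
passed = sum(checks)  # bool is an int subclass, so each True counts as 1
print(passed)         # 3
```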
A Systematic Debugging Framework
When you encounter this error, resist the urge to immediately cast types. Instead, apply this systematic framework to find and fix the root cause:
Step 1: Read the error message completely. The error tells you three things: the operator, the left operand's type, and the right operand's type. On Python 3.11+, it also shows you the exact subexpression. Start from these facts.
Step 2: Identify which operand is unexpected. Usually one of the two types is correct and the other is wrong. If you expected int + int but got int + NoneType, the question is: why is the right operand None?
Step 3: Trace the unexpected value backward. Where did that None (or str, or whatever the wrong type is) come from? Was it a function return value? A dictionary lookup? An API response? User input? This step is where the real debugging happens.
Step 4: Fix at the source, not at the symptom. Wrapping the operation in int() or str() at the point of error might silence the exception, but if the upstream function is returning None when it should return a number, the cast is a band-aid that will fail again when the value truly is None.
Step 5: Add a guard to prevent recurrence. This could be a type hint, a runtime assertion, a Pydantic validator, or an isinstance check. The goal is to ensure this specific type mismatch cannot reach the operator again.
Use type(variable) liberally when debugging this error interactively. Printing the types of both operands right before the failing line almost always reveals the culprit immediately. In production, structured logging that includes type information (logger.debug(f"a={a!r} type={type(a)}")) pays dividends when debugging intermittent type errors.
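Applying the framework to a NoneType case, with illustrative names (`get_discount` is a made-up function standing in for your real code):

```python
def get_discount(code):
    if code == "SAVE10":
        return 10
    # Silently returns None for unknown codes -- the real bug

price = 100
discount = get_discount("save10")  # wrong casing -> None

# Steps 1-2: the error names '-', 'int', and 'NoneType'; discount is the suspect
print(type(price).__name__, type(discount).__name__)  # int NoneType
# Step 3: trace backward -- get_discount returned None for an unrecognized code
# Step 4: fix at the source (raise or return 0 there), not with int(discount or 0)
```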
Defensive Coding: Catching It Before It Catches You
There are several strategies for preventing unsupported operand errors before they hit production, ranging from static analysis to runtime contracts.
Type hints with static analysis. PEP 484, authored by van Rossum, Lehtosalo, and Langa, established the standard for type hints. The PEP explicitly states that Python will remain dynamically typed and that type hints will never be mandatory (peps.python.org, PEP 484). But adopting them voluntarily pays dividends. Using type annotations with a static checker like mypy, pyright, or pytype catches type mismatches at analysis time rather than runtime. A function signature like def compute(a: int, b: int) -> int allows mypy to flag compute("10", 5) as an error before the code ever runs.
Runtime validation with Pydantic or attrs. For code that processes external data, validate types at the boundary using structured validation libraries rather than manual isinstance checks:
```python
from pydantic import BaseModel

class OrderItem(BaseModel):
    price: float
    quantity: int

# Pydantic validates and coerces types at construction time
item = OrderItem(price="49.99", quantity="3")  # coerced to float and int
total = item.price * item.quantity             # always safe
```
Manual runtime validation. When external libraries are not available, validate types explicitly:
```python
def safe_add(a, b):
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise ValueError(
            f"Expected numeric types, got {type(a).__name__} and {type(b).__name__}"
        )
    return a + b
```
The isinstance check in dunder methods. When implementing custom operators, always check the type and return NotImplemented for unrecognized types rather than raising exceptions. This preserves the operator resolution protocol and allows third-party types to interoperate with yours.
typing.Protocol for structural typing. Python 3.8 introduced typing.Protocol (PEP 544), which allows you to define structural interfaces that static checkers can verify. Instead of checking for specific types, you can define what operations an object must support:
```python
from typing import Protocol

class SupportsAdd(Protocol):
    def __add__(self, other: "SupportsAdd") -> "SupportsAdd": ...

def double(value: SupportsAdd) -> SupportsAdd:
    return value + value
```
This approach aligns with Python's duck typing philosophy while still catching type errors statically.
Related PEPs at a Glance
Here is a consolidated reference of the PEPs relevant to understanding and working with TypeError: unsupported operand type(s):
- PEP 3107 (Function Annotations, 2006) — introduced the syntax for attaching metadata to function parameters, laying the groundwork for type hints.
- PEP 465 (Matrix Multiplication Operator, 2014) — added the `@` operator and `__matmul__` dunder method, expanding the set of operators that can produce this error.
- PEP 484 (Type Hints, 2014) — established the standard semantics for annotations as type hints, enabling static type checking tools that catch `TypeError` before runtime.
- PEP 526 (Variable Annotations, 2016) — extended type annotation syntax to variables, not just function parameters.
- PEP 544 (Protocols: Structural Subtyping, 2017) — introduced `typing.Protocol` for structural typing, enabling static checks on operator support without requiring inheritance.
- PEP 563 (Postponed Evaluation of Annotations, 2017) — changed annotations to be stored as strings, avoiding runtime evaluation and resolving forward-reference issues. Key workaround for PEP 604-style type unions on pre-3.10 Python.
- PEP 604 (Allow Writing Union Types as X | Y, 2019) — introduced the `|` operator for type unions, a common source of `TypeError` on older Python versions.
- PEP 617 (New PEG Parser, 2019) — replaced CPython's LL(1) parser with a PEG-based parser, enabling the richer source location tracking that powers PEP 657.
- PEP 657 (Fine-Grained Error Locations in Tracebacks, 2021) — added column-level precision to tracebacks, making it dramatically easier to identify which subexpression triggered a `TypeError`.
- PEP 661 (Sentinel Values, pending decision) — proposes standardizing the sentinel pattern that `NotImplemented` exemplifies, a pattern central to the operator resolution protocol. The PEP author met with the Steering Council in September 2025, but the PEP still awaits a final decision as of early 2026.
The Bigger Picture
The TypeError: unsupported operand type(s) message is more than an error. It is the interpreter enforcing one of Python's core design principles: explicit is better than implicit. That principle, from the Zen of Python by Tim Peters (accessible via import this and enshrined in PEP 20), captures why Python refuses to guess what you meant when types do not match.
But there is a deeper idea here worth sitting with. Every TypeError you encounter is a point where your mental model of the data diverged from reality. The value you thought was an integer turned out to be None. The API you assumed returned numbers actually returned strings. The library you imported does not define __add__ for the combination you tried. Each instance is an invitation to tighten the contract between your code and the data it operates on.
Understanding why this error occurs — not just how to fix the immediate symptom — makes you a more effective Python programmer. It means you understand the operator resolution protocol, the role of NotImplemented as a sentinel value distinct from NotImplementedError, the subclass priority rule, and the design philosophy that makes Python simultaneously flexible and strict.
The evolution of this error's reporting, from bare line numbers in Python 3.10 and earlier to the surgical precision of PEP 657's column offsets in 3.11 through the contextual messages of 3.14, tells a story about Python's maturation. The language is getting better at explaining itself. The tracebacks are doing more of the detective work for you. But the fundamental discipline remains the same: know your types, make your intentions explicit, and let the protocol do its work.
Next time this error appears in your traceback, read it carefully. It is telling you exactly which types were involved and which operation failed. With Python 3.11 and later, it is even showing you exactly where on the line the problem occurred. The information is all there. Now you know what to do with it.