Why 0.1 + 0.2 Is Not Equal to 0.3 in Python

Open a Python interpreter and type 0.1 + 0.2 == 0.3. Python says False. The actual sum is 0.30000000000000004. That trailing digit is not a bug -- it's the mathematically correct output of an arithmetic system that has been the global standard for numerical computing since 1985. And the same family of representation errors has contributed to missile defense failures, rocket explosions, and stock exchange miscalculations.

Understanding why this happens -- not just hand-waving it away with "floating point is weird" -- requires tracing the problem from the electrical engineering decisions made at UC Berkeley in the late 1970s through the formal IEEE standard, into Python's memory model, and out through the display algorithm that tries to hide the mess from you. It's one of the most foundational lessons a programmer can learn, and Python's own documentation dedicates an entire chapter to it.

The Core Problem: Your Computer Doesn't Speak Decimal

Humans count in base 10. We have ten digits (0–9), and our positional number system uses powers of 10. The decimal number 0.625 means 6/10 + 2/100 + 5/1000.

Computers count in base 2. They have two digits (0 and 1), and their positional number system uses powers of 2. The binary fraction 0.101 means 1/2 + 0/4 + 1/8, which happens to equal 0.625 in decimal. That particular number converts cleanly.

Most decimal fractions don't.

The Python documentation explains this by pointing to a problem you already understand in base 10. Consider the fraction 1/3. You can write it as 0.333..., repeating forever. No matter how many threes you write down, you'll never have exactly 1/3. You're always approximating.

The same thing happens in base 2, but with different fractions. In base 2, the number 1/10 (our familiar 0.1) becomes an infinitely repeating binary fraction:

0.0001100110011001100110011001100110011001100110011...

That pattern -- 0011 -- repeats forever. Just as you can never write exactly 1/3 in decimal with a finite number of digits, you can never write exactly 1/10 in binary with a finite number of bits. Stop at any point, and you have an approximation.

Here's a question worth sitting with: why is 1/10 non-terminating in base 2 but terminating in base 10? The answer is factorization. A fraction p/q terminates in base b if and only if every prime factor of q is also a prime factor of b. In base 10, the prime factors are 2 and 5. So 1/10 (factors: 2 and 5) terminates. In base 2, the only prime factor is 2. So 1/10 (which has 5 in the denominator) repeats forever. This is why 1/2, 1/4, 1/8, and 1/16 are all perfectly exact in binary, but 1/10, 1/5, and 1/3 are not.
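The divisibility rule is easy to check mechanically. Here's a small illustrative helper (the name terminates is ours, not a standard function) that strips from the denominator every prime factor it shares with the base; the expansion terminates exactly when nothing is left over:

```python
from math import gcd

def terminates(p, q, base):
    """True if p/q has a finite digit expansion in the given base.

    Illustrative helper, not a standard library function.
    """
    q //= gcd(p, q)              # reduce the fraction to lowest terms
    g = gcd(q, base)
    while g > 1:                 # strip prime factors shared with the base
        while q % g == 0:
            q //= g
        g = gcd(q, base)
    return q == 1                # any leftover factor means it repeats

print(terminates(1, 10, 10))  # True:  10 = 2 * 5, both divide 10
print(terminates(1, 10, 2))   # False: the factor 5 survives in base 2
print(terminates(1, 16, 2))   # True:  16 = 2^4
print(terminates(1, 3, 10))   # False: 3 divides no power of 10
```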

Note

This is not a limitation of Python. The Python documentation makes it clear that this is not a bug in Python or in your code. Every language that uses hardware floating-point arithmetic produces the same behavior. JavaScript, Ruby, C, Java, and Go all return 0.30000000000000004 for the same reason -- they all rely on the same underlying arithmetic standard.

The Standard: IEEE 754

In the 1960s and 1970s, floating-point arithmetic was chaos. Every computer manufacturer implemented their own format with its own precision, its own rounding rules, and its own quirks. IBM used hexadecimal floating point. CDC and Cray machines used ones' complement. Porting numerical software between systems was a nightmare, because the same calculation could produce different results on different hardware.

In 1977, the IEEE formed a standards subcommittee to standardize floating-point arithmetic. William Kahan, a professor of mathematics, computer science, and electrical engineering at UC Berkeley, attended its second meeting in November 1977 and became the driving force behind the effort. During spring break 1978, Kahan, along with his graduate student Jerome Coonen and visiting professor Harold Stone, drafted what became the foundational proposal. Their work, known as the KCS (Kahan-Coonen-Stone) proposal, was submitted to the subcommittee at its April 1978 meeting and formed the basis for the standard. (Source: IEEE Engineering and Technology History Wiki, "IEEE Standard 754 for Binary Floating-Point Arithmetic, 1985" milestone documentation.)

The standard was officially ratified as IEEE 754-1985. Kahan received the ACM Turing Award in 1989 -- not for the standard itself, but for what the ACM called his fundamental contributions to numerical analysis, of which the standard was the crown jewel. He has been widely referred to as "The Father of Floating Point" for his role in shaping the specification. (Source: ACM A.M. Turing Award citation, 1989.) According to the IEEE milestone documentation, virtually all computers manufactured after 1985 adopted the standard. It was revised in 2008 and again in 2019, but the core binary arithmetic format that produces the 0.1 + 0.2 result has remained essentially unchanged for four decades.

IEEE 754 defines several binary floating-point formats. The one that matters for Python is called "binary64" (commonly known as "double precision"), which uses 64 bits of memory to represent each floating-point number:

  • 1 bit for the sign (positive or negative)
  • 11 bits for the exponent (the "scale" of the number)
  • 52 bits for the significand (the "digits" of the number, plus 1 implied bit for 53 total bits of precision)

Those 53 bits of precision are the key constraint. They give you approximately 15-17 significant decimal digits. When Python encounters the literal 0.1 in your source code, it must convert that decimal value into a binary fraction that fits within exactly 53 bits of precision. And 53 bits is not enough to capture 1/10 exactly in binary.
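You can inspect that 64-bit layout directly. A small sketch using the standard struct module, which exposes the raw bytes of any float:

```python
import struct

# Pack 0.1 into its raw 8 bytes (big-endian) and view them as a bit string
bits = ''.join(f'{byte:08b}' for byte in struct.pack('>d', 0.1))

sign, exponent, significand = bits[0], bits[1:12], bits[12:]
print(sign)                      # '0' -> positive
print(int(exponent, 2) - 1023)   # -4 -> the number is scaled by 2^-4
print(significand)               # the 52 stored bits; a leading 1 is implied
```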

What Python Actually Stores When You Write 0.1

The Python documentation provides the exact analysis. When you type 0.1, the interpreter converts it to the closest value it can express in the form J / 2^N, where J is a 53-bit integer. For 0.1, the best approximation is:

>>> format(0.1, '.55f')
'0.1000000000000000055511151231257827021181583404541015625'

That's the actual value stored in memory. It's not 0.1. It's slightly more than 0.1. You can verify this by looking at the exact fraction Python stores:

>>> (0.1).as_integer_ratio()
(3602879701896397, 36028797018963968)

That's the fraction 3602879701896397 / 2^55, which equals 0.1000000000000000055511151231257827021181583404541015625. Not 0.1. Close, but not equal.

Now do the same for 0.2 and 0.3:

>>> (0.2).as_integer_ratio()
(3602879701896397, 18014398509481984)

>>> (0.3).as_integer_ratio()
(5404319552844595, 18014398509481984)

Here's the critical insight: the best 53-bit binary approximation of the mathematical sum (true 0.1 + true 0.2) is not the same as the best 53-bit binary approximation of the true value 0.3. They land on different sides of the rounding boundary. When the hardware adds the approximation-of-0.1 to the approximation-of-0.2, it gets a result that, after rounding, lands on a different 53-bit value than what you get when you directly approximate 0.3.

from decimal import Decimal

# What Python actually stores for each value
a = Decimal(0.1)
b = Decimal(0.2)
c = Decimal(0.3)

print(a)  # 0.1000000000000000055511151231257827021181583404541015625
print(b)  # 0.200000000000000011102230246251565404236316680908203125
print(c)  # 0.299999999999999988897769753748434595763683319091796875

print(a + b)
# 0.3000000000000000166533453693773481063544750213623046875

print(a + b == c)  # False

The stored value of 0.3 is actually slightly less than the true mathematical 0.3, while the sum of the stored values of 0.1 and 0.2 rounds to a value slightly more. The two results land on adjacent representable floats, exactly one unit in the last place (2^-54) apart -- a tiny gap, but nonzero, so the comparison fails.

Pro Tip

The specific value 0.30000000000000004 that Python displays is the shortest decimal string that uniquely identifies this particular binary64 float. The exact stored value has a 52-digit decimal expansion, but Python's display algorithm picks the shortest string that would round back to the same binary value. That's why you see 17 significant digits and not 52.

Why Python Looks Like It Stores 0.1 Exactly

If 0.1 isn't really stored as 0.1, why does the interpreter show 0.1 when you type it?

>>> 0.1
0.1

This is a deliberate display decision that changed in Python 3.1, released in June 2009. The implementation, by Eric Smith and Mark Dickinson, adapted David Gay's dtoa.c library to find the shortest floating-point representation that doesn't change the underlying binary value. (Source: CPython Issue #1580, originally reported by Noam Raphael; merged April 2009.) The algorithm recognizes that 0.1 and 0.10000000000000001 and the full 55-digit value all convert to the same binary64 float, so it displays the shortest: 0.1.

Before Python 3.1, the interpreter used a simpler approach that displayed 17 significant digits, meaning repr(0.1) showed '0.10000000000000001'. That was ugly but honest. The new algorithm made the display prettier, but the underlying arithmetic didn't change at all.
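You can watch the shortest-repr algorithm at work: decimal strings of very different lengths all parse to the identical binary64 value, and Python displays the shortest one that round-trips:

```python
# Three decimal strings of very different lengths...
candidates = [
    '0.1',
    '0.10000000000000001',
    '0.1000000000000000055511151231257827021181583404541015625',
]

# ...all parse to the identical binary64 value
floats = {float(s) for s in candidates}
print(len(floats))                  # 1
print(repr(float(candidates[2])))   # '0.1' -- the shortest round-tripping string
```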

Warning

The Python 3.1 documentation warns explicitly that the cleaner display can be misleading. Even though repr(1.1) now shows simply 1.1, the value stored in memory is not exactly 11/10 -- the display is just the shortest string that rounds back to the true machine value. (Source: Python docs, "What's New in Python 3.1," repr() and float changes.)

The Math, Step By Step

Let's trace the entire computation manually.

Step 1: Convert 0.1 to binary.

0.1 in binary is the repeating fraction 0.0001100110011.... Rounded to 53 significant bits:

0.00011001100110011001100110011001100110011001100110011010

That trailing 10 (instead of continuing the 0011 pattern) is where the rounding happened. The 53rd bit was rounded up, following the IEEE 754 "round to nearest, ties to even" rule -- also known as banker's rounding. This rounding mode was chosen specifically because it minimizes statistical bias when accumulating many rounded values.
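The same ties-to-even rule surfaces in Python's built-in round(). Halves such as 0.5 and 2.5 are exactly representable in binary, so these really are ties, and each one breaks toward the even neighbor:

```python
# Halves are exactly representable in binary, so these are true ties,
# and "ties to even" breaks each one toward the even neighbor
print(round(0.5))   # 0
print(round(1.5))   # 2
print(round(2.5))   # 2
print(round(3.5))   # 4
```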

Step 2: Convert 0.2 to binary.

0.2 is simply 0.1 shifted left by one bit position (multiplied by 2):

0.0011001100110011001100110011001100110011001100110011010

Same rounding error, just scaled up.

Step 3: Add them.

When the hardware adds these two binary values, it produces a result that, expressed in decimal, is:

0.3000000000000000166533453693773481063544750213623046875

Step 4: Round the result.

The hardware rounds this sum to the nearest representable binary64 value, which is:

0.3000000000000000444089209850062616169452667236328125

Step 5: Compare with stored 0.3.

Meanwhile, 0.3 typed directly is stored as:

0.299999999999999988897769753748434595763683319091796875

These two values differ by approximately 5.55 x 10^-17. That's tiny -- roughly one part in 18 quadrillion -- but it's nonzero, so == returns False.

Interestingly, not all similar additions fail. 0.1 + 0.4 == 0.5 returns True, because in that case the rounding errors happen to cancel out rather than accumulate. Whether a particular sum "works" depends on the exact bit patterns involved, which is why floating-point equality feels so unpredictable.
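Python 3.9+ can quantify that unpredictability: math.ulp() returns the gap between a float and its next representable neighbor, which for 0.3 is exactly the 5.55 x 10^-17 difference computed above:

```python
import math

# Some sums land on the "right" float, some don't
print(0.1 + 0.4 == 0.5)   # True  -- the rounding errors cancel
print(0.1 + 0.2 == 0.3)   # False -- the rounding errors accumulate

# The gap between adjacent floats near 0.3 is one unit in the last place
print(math.ulp(0.3))      # 5.551115123125783e-17, i.e. 2**-54
```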

The Landmark Paper Everyone Should Know About

In March 1991, David Goldberg published "What Every Computer Scientist Should Know About Floating-Point Arithmetic" in ACM Computing Surveys (Vol. 23, No. 1, pp. 5-48). This paper remains the canonical reference for understanding floating-point behavior. Its opening observation -- that floating-point is considered arcane by many programmers despite being present in nearly every computer system -- is just as true today as it was over three decades ago. (Source: ACM Digital Library, doi:10.1145/103162.103163.)

Goldberg systematically demonstrates how representation error, rounding modes, and the IEEE 754 standard interact to produce results that surprise anyone expecting exact decimal arithmetic. The paper has accumulated thousands of citations across academic databases, and remains essential reading decades later. Python's own documentation links to it, and it provides the formal mathematical framework for understanding exactly why 0.1 + 0.2 produces the result it does.

Python's Built-In Solutions

Python doesn't just expose the problem -- it gives you tools to work around it. Each tool serves a different use case, and understanding the tradeoffs is essential.

round() for display and casual comparison

>>> round(0.1 + 0.2, 1) == 0.3
True
>>> round(0.1 + 0.2, 10)
0.3

The round() function works well for display purposes and for comparisons where you know the expected precision. But beware: it uses banker's rounding (round half to even), which can produce unexpected results at the boundary. round(0.5) returns 0, not 1.

math.isclose() for approximate comparison

import math

>>> math.isclose(0.1 + 0.2, 0.3)
True

# You can tighten or loosen the tolerance
>>> math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9)
True
>>> math.isclose(0.1 + 0.2, 0.3, abs_tol=1e-15)
True

This function, added in Python 3.5 via PEP 485 (authored by Christopher Barker), uses both relative and absolute tolerance to determine whether two floats are "close enough" to be considered equal. It defaults to a relative tolerance of 1e-9. The PEP noted that the lack of a standard approximate-equality function had led to programmers independently writing buggy versions of the same logic.

math.fsum() for compensated summation

When summing many floating-point values, errors accumulate with each addition. Python's math.fsum() uses a compensated summation algorithm -- conceptually related to the Kahan summation algorithm invented by the same William Kahan who designed IEEE 754 -- that tracks intermediate partial sums to minimize error accumulation:

import math

# Naive summation fails
>>> sum([0.1] * 10)
0.9999999999999999

# Compensated summation succeeds
>>> math.fsum([0.1] * 10)
1.0
>>> math.fsum([0.1] * 10) == 1.0
True

The Kahan summation algorithm works by maintaining a separate "compensation" variable that captures the low-order bits lost during each addition. It's a small but elegant idea: when you add a small number to a large one, the small number's least significant bits get lost. The compensation variable remembers them and adds them back in the next iteration. math.fsum() extends this idea further, maintaining a list of partial sums for even greater accuracy.
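The compensation idea fits in a few lines of plain Python. This is a didactic sketch of Kahan's algorithm, not how math.fsum() is actually implemented:

```python
def kahan_sum(values):
    """A didactic sketch of Kahan's compensated summation."""
    total = 0.0
    compensation = 0.0               # low-order bits lost so far
    for x in values:
        y = x - compensation         # re-inject the bits lost last time
        t = total + y                # big + small: low bits of y are lost...
        compensation = (t - total) - y   # ...and algebraically recovered here
        total = t
    return total

print(sum([0.1] * 10))        # 0.9999999999999999
print(kahan_sum([0.1] * 10))  # 1.0
```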

decimal.Decimal for exact decimal arithmetic

The decimal module was added to Python's standard library in Python 2.4, formalized through PEP 327 (authored by Facundo Batista). It implements the General Decimal Arithmetic Specification (authored by Mike Cowlishaw of IBM), giving you decimal floating-point numbers with user-settable precision.

from decimal import Decimal

>>> Decimal('0.1') + Decimal('0.2') == Decimal('0.3')
True

>>> Decimal('0.1') + Decimal('0.2')
Decimal('0.3')

Pro Tip

You must pass strings to the Decimal constructor. If you pass a float, you get the exact binary value that float represents -- which is the whole problem you're trying to avoid. Decimal(0.1) gives you the 55-digit mess; Decimal('0.1') gives you exactly 0.1.

The decimal module is the standard tool for financial calculations where rounding must follow specific accounting rules and where the difference between 0.1 and 0.1000000000000000055511151231257827021181583404541015625 could mean real money. The default precision is 28 decimal digits, but you can change it:

from decimal import Decimal, getcontext

# Set precision to 50 decimal digits
getcontext().prec = 50
>>> Decimal(1) / Decimal(7)
Decimal('0.14285714285714285714285714285714285714285714285714')

fractions.Fraction for exact rational arithmetic

If you need truly exact arithmetic with no rounding at all, Python's fractions module represents numbers as integer ratios:

from fractions import Fraction

>>> Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)
True

>>> Fraction(1, 10) + Fraction(2, 10)
Fraction(3, 10)

This is mathematically exact because it never converts to binary floating point. The tradeoff is performance: Fraction arithmetic is significantly slower than hardware float operations, and the numerator and denominator can grow without bound, consuming increasing memory as calculations chain together.
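The growth is easy to see with an exact harmonic sum (a small illustrative example): each new term drags the denominator toward the least common multiple of everything seen so far:

```python
from fractions import Fraction

# The exact harmonic sum 1/1 + 1/2 + ... + 1/10
h = sum(Fraction(1, n) for n in range(1, 11))

print(h)              # 7381/2520 -- exact; the denominator is lcm(1..10)
print(h.denominator)  # 2520
print(float(h))       # exactness is lost only at this final conversion
```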

Beyond the Standard Library

Python's standard tools cover the common cases, but some problems demand more power.

mpmath for arbitrary-precision floating point

The mpmath library, developed by Fredrik Johansson since 2007, provides floating-point arithmetic with configurable precision up to millions of digits. Unlike decimal.Decimal, it also provides hundreds of special mathematical functions (gamma, zeta, Bessel, hypergeometric, and more) that all respect the configured precision. (Source: mpmath.org.)

from mpmath import mp, mpf

# Set precision to 50 decimal places
mp.dps = 50

>>> mpf('0.1') + mpf('0.2')
mpf('0.3')

# Compute pi to 100 digits
mp.dps = 100
>>> +mp.pi
mpf('3.14159265358979323846264338327950288...')

mpmath is particularly valuable in scientific computing where you need to verify whether a result is limited by algorithmic error or by floating-point precision. By rerunning a computation at 50 digits instead of 15, you can separate the two sources of error.

numpy and floating-point type control

NumPy defaults to the same float64 (binary64) as Python's built-in float, so it inherits the same 0.1 + 0.2 behavior. But NumPy gives you explicit control over floating-point types and provides array-aware comparison tools:

import numpy as np

# Same problem
>>> np.float64(0.1) + np.float64(0.2) == np.float64(0.3)
False

# numpy provides np.isclose() and np.allclose() for arrays
>>> np.isclose(0.1 + 0.2, 0.3)
True

# Compare entire arrays with tolerance
>>> a = np.array([0.1 + 0.2, 0.1 + 0.4])
>>> b = np.array([0.3, 0.5])
>>> np.allclose(a, b)
True

For scientific applications, np.isclose() and np.allclose() are the array-aware equivalents of math.isclose(), designed to handle element-wise comparison across large datasets efficiently.

The Traps That Catch Real Code

The 0.1 + 0.2 example might seem academic, but the same class of error creates real bugs in production code. Here are the patterns that catch people:

Accumulation in loops:

total = 0.0
for _ in range(10):
    total += 0.1
print(total)          # 0.9999999999999999
print(total == 1.0)   # False

Each addition introduces a fresh rounding error. Over many iterations, these errors accumulate. For financial ledgers, scientific simulations, or any context where you're summing many small values, this matters. Use math.fsum() for accurate summation, or work with Decimal from the start.

Equality checks as conditionals:

# Dangerous
balance = 10.0 - 9.7 - 0.3
if balance == 0.0:
    print("Account empty")  # This might never execute

Never use == to compare float results against expected values. Use math.isclose() or compare against a tolerance threshold.
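A safer version of the check above. One subtlety: math.isclose() with its default relative tolerance can never match anything against 0.0, because a relative tolerance of zero is zero -- you must pass abs_tol explicitly:

```python
import math

balance = 10.0 - 9.7 - 0.3
print(balance)      # a tiny nonzero residue, not 0.0

# Default isclose() compares relatively -- useless against zero
print(math.isclose(balance, 0.0))                # False
# An explicit absolute tolerance does the right thing
print(math.isclose(balance, 0.0, abs_tol=1e-9))  # True
```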

Rounding surprises:

>>> round(2.675, 2)  # expecting 2.68...
2.67

This happens because 2.675 is stored internally as a value very slightly less than 2.675, so when rounding to 2 decimal places, it rounds down instead of up. The Python documentation calls this out specifically as one of the "representation error" surprises that follow from binary representation.
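Decimal makes the culprit visible: passing the float 2.675 (not a string) to the constructor reveals the exact stored value, which sits just below 2.675:

```python
from decimal import Decimal

# The literal 2.675 is stored as the nearest binary64 value,
# which is slightly less than 2.675 -- so round() goes down
print(Decimal(2.675))
# 2.67499999999999982236431605997495353221893310546875
```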

String formatting masks the issue:

>>> f"{0.1 + 0.2:.1f}"
'0.3'
>>> 0.1 + 0.2 == 0.3
False

The formatted string looks correct, but the underlying value is still 0.30000000000000004. If you're using formatted output to verify numerical correctness, you're fooling yourself.

Dictionary key collisions:

# This can cause subtle bugs in lookup tables
prices = {}
prices[0.1 + 0.2] = "item_a"
prices[0.3] = "item_b"

# You now have TWO keys instead of one
>>> len(prices)
2

Using floats as dictionary keys or set members is almost always a mistake. If your domain requires exact lookup, use Decimal, Fraction, or integer representations (e.g., cents instead of dollars).
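For money, the standard fix is to keep amounts as integer cents; integer arithmetic is exact, so computed and literal keys collide as intended:

```python
# Keeping amounts as integer cents makes key arithmetic exact
prices_cents = {}
prices_cents[10 + 20] = "item_a"   # 30 cents, computed
prices_cents[30] = "item_b"        # 30 cents, literal -- same key

print(len(prices_cents))   # 1 -- one key, as intended
print(prices_cents[30])    # 'item_b' -- the later assignment overwrote the first
```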

When Rounding Errors Kill: Real-World Disasters

The 0.1 + 0.2 problem isn't just a programming curiosity. The same family of representation errors has caused catastrophic failures in the real world.

The Patriot Missile Failure (1991). On February 25, 1991, during the Gulf War, a Patriot missile defense battery at Dhahran, Saudi Arabia, failed to intercept an incoming Iraqi Scud missile. The Scud struck an Army barracks, killing 28 American soldiers. A U.S. General Accounting Office investigation (Report GAO/IMTEC-92-26) found the root cause: the system's internal clock measured time in tenths of a second as an integer, which was then multiplied by 1/10 to get seconds. But 1/10 is a repeating binary fraction -- the exact same problem as our 0.1 -- and the system's 24-bit fixed-point register truncated it. After 100 hours of continuous operation, the accumulated error reached 0.34 seconds. At the Scud's velocity of approximately 1,676 meters per second (a widely cited estimate from secondary analyses of the incident), this translated to a tracking error of over 500 meters -- far outside the Patriot's range gate. (Source: GAO Report IMTEC-92-26, February 1992; velocity estimate from Prof. Douglas Arnold, University of Minnesota.)

The connection to 0.1 + 0.2 is direct: the Patriot failed because 1/10 cannot be represented exactly in binary. The same mathematical fact that makes Python print 0.30000000000000004 caused a missile defense system to look in the wrong place.

The Ariane 5 Explosion (1996). On June 4, 1996, the maiden flight of the European Space Agency's Ariane 5 rocket ended about 40 seconds after launch when the rocket veered off course and self-destructed. The cause was a conversion of a 64-bit floating-point value to a 16-bit signed integer. The value -- representing horizontal velocity -- exceeded the range of a 16-bit integer (maximum 32,767), triggering an overflow that was misinterpreted as flight data. The rocket's guidance system made a violent, unnecessary course correction that tore the vehicle apart. Estimated losses exceeded $370 million. (Source: Ariane 5 Flight 501 Inquiry Board Report, July 1996.)

The Vancouver Stock Exchange (1982). In January 1982, the Vancouver Stock Exchange launched a new index initialized at 1,000. The index was recalculated after each trade -- thousands of times daily -- but the result was truncated to three decimal places instead of rounded. Over 22 months, these tiny losses accumulated until the index stood at roughly 520, suggesting the market had lost nearly half its value. When the error was corrected over the weekend of November 25–28, 1983, the index jumped from its Friday close of about 524 to roughly 1,099. (Sources: Wall Street Journal, November 8, 1983; documented in McCullough and Vinod, Journal of Economic Literature, June 1999.)

Note

All three of these disasters share the same root cause as Python's 0.1 + 0.2 problem: finite binary representations cannot exactly capture every decimal value, and the resulting errors, however small, can accumulate or interact in ways that produce catastrophic results.

How Other Languages Handle This

Every language that uses IEEE 754 hardware floats faces the same fundamental problem. But languages differ in what tools they provide and how they surface the issue to programmers.

JavaScript has no separate integer type for ordinary numbers (everything is Number, which is binary64). 0.1 + 0.2 === 0.3 returns false. ES2020 added BigInt for exact integer arithmetic, but there's no built-in decimal type. A TC39 Decimal proposal based on IEEE 754 Decimal128 has been in development since at least 2017 but remains at Stage 1 as of early 2026. Libraries like decimal.js fill the gap in the meantime.

Java provides java.math.BigDecimal for arbitrary-precision decimal arithmetic, making it a common choice for financial applications. Java also has strictfp to enforce consistent floating-point behavior across platforms.

C# offers a decimal type (128-bit, base-10 floating point) as a built-in language feature, not just a library. This makes it unusually well-suited for financial calculations without importing anything.

Rust and Go produce the same 0.30000000000000004 result. Both rely on third-party libraries for exact decimal arithmetic.

No mainstream general-purpose language has "solved" this problem at the hardware level, because the problem is inherent to base-2 representation. The IEEE 754-2008 revision did add decimal floating-point formats (decimal32, decimal64, decimal128), but hardware support for these remains limited, and no mainstream language uses them by default.

The Deeper Lesson: All Representation Is Approximation

There's a profound insight hiding behind 0.1 + 0.2 != 0.3, and it extends far beyond Python.

Every number system has values it cannot represent exactly. Base 10 cannot exactly represent 1/3 or 1/7. Base 2 cannot exactly represent 1/10 or 1/5. A hypothetical base 3 system couldn't exactly represent 1/2. The fact that humans happen to find 0.1 intuitive and 0.333... unintuitive is an accident of having ten fingers, not a statement about mathematical truth.

IEEE 754 binary floating point was designed to be fast, because it maps directly to hardware operations. Your CPU has dedicated circuits for binary64 addition, subtraction, multiplication, and division; modern pipelined floating-point units can sustain one or more such operations per clock cycle. The tradeoff for that speed is that the format uses base 2, which means it cannot exactly represent the decimal fractions that human-designed systems (currencies, measurements, tax rates) use constantly.

Python's decimal module solves this by using base 10, which means it can exactly represent 0.1, 0.2, and 0.3. But it still can't exactly represent 1/3, it runs in software rather than dedicated hardware circuits (typically one to two orders of magnitude slower than hardware floats), and it still has finite precision. The fractions module solves that by using exact rationals, but it's even slower and uses unbounded memory. The mpmath library offers precision up to millions of digits, but is slower still and consumes proportionally more memory.

There is no free lunch. Every number representation makes a tradeoff between speed, precision, range, memory, and representational completeness. Understanding those tradeoffs is what separates a programmer who can debug numerical issues from one who just keeps adding round() calls and hoping for the best.

Conclusion

0.1 + 0.2 equals 0.30000000000000004 in Python because 0.1 is not really 0.1. It's 3602879701896397 / 2^55, the closest value that fits in the 53 bits of precision available in an IEEE 754 binary64 float. When you add that to the similarly-approximate stored value of 0.2, the result rounds to a different 53-bit value than the direct approximation of 0.3. The difference is about 5.5 x 10^-17 -- roughly one part in 18 quadrillion -- but it's nonzero, so the equality check fails.

This behavior is universal across programming languages because it's defined by a hardware standard, IEEE 754, that has been implemented in virtually every general-purpose processor since 1985. Python's official tutorial devotes an entire chapter to floating-point issues and limitations, acknowledging directly that this behavior produces surprises that are not bugs. And the same mathematical reality has caused missile failures, rocket explosions, and stock exchange miscalculations worth hundreds of millions of dollars.

Python also gives you the tools to handle it: round() for display, math.isclose() for comparison, math.fsum() for summation, decimal.Decimal for exact decimal arithmetic, fractions.Fraction for exact rational arithmetic, and third-party libraries like mpmath for arbitrary precision. Knowing when to reach for each tool is a skill that pays dividends in every domain from scientific computing to financial software.

The real lesson isn't that floating point is broken. It's that all representation involves choices, and those choices have consequences. The sooner you internalize that 0.1 in your source code and 0.1 in memory are two different things, the sooner you stop being surprised and start writing code that accounts for the difference.

# References

  • IEEE 754-1985 / 2008 / 2019 -- Standard for Floating-Point Arithmetic (IEEE, ratified 1985, revised 2008 and 2019)
  • David Goldberg, "What Every Computer Scientist Should Know About Floating-Point Arithmetic," ACM Computing Surveys, Vol. 23, No. 1, pp. 5-48, March 1991 (doi:10.1145/103162.103163)
  • William Kahan, ACM A.M. Turing Award, 1989 -- "for his fundamental contributions to numerical analysis" (amturing.acm.org)
  • IEEE Engineering and Technology History Wiki, "IEEE Standard 754 for Binary Floating-Point Arithmetic, 1985" -- milestone documentation covering the KCS proposal history (ethw.org)
  • PEP 327 -- Decimal Data Type (Facundo Batista, 2003) -- introduced the decimal module in Python 2.4
  • PEP 485 -- A Function for testing approximate equality (Christopher Barker, 2015) -- introduced math.isclose() in Python 3.5
  • CPython Issue #1580 -- "Use shorter float repr when possible" (Noam Raphael, 2007; implemented by Eric Smith and Mark Dickinson using David Gay's dtoa.c, merged 2009)
  • Python Documentation, Chapter 15: "Floating Point Arithmetic: Issues and Limitations" (docs.python.org/3/tutorial/floatingpoint.html)
  • U.S. General Accounting Office, Report GAO/IMTEC-92-26: "Patriot Missile Defense: Software Problem Led to System Failure at Dhahran, Saudi Arabia," February 1992
  • Douglas N. Arnold, "The Patriot Missile Failure," Institute for Mathematics and Its Applications, University of Minnesota (www-users.cse.umn.edu/~arnold/disasters/patriot.html) -- secondary analysis providing the widely cited Scud velocity estimate of 1,676 m/s
  • Ariane 5 Flight 501 Inquiry Board Report, European Space Agency, July 19, 1996
  • B.D. McCullough and H.D. Vinod, "The Numerical Reliability of Econometric Software," Journal of Economic Literature, Vol. XXXVII, pp. 633-665, June 1999 (documents the Vancouver Stock Exchange incident)