Checking whether a number is prime is one of the foundational exercises in programming—and one of the problems that rewards you most for thinking carefully. A prime number is a natural number greater than 1 that has no positive divisors other than 1 and itself. In this article, you will learn five distinct ways to write a Python program that checks for prime numbers, from a simple loop all the way to production-grade library functions, and you will understand the mathematical reasoning behind each one.
Prime numbers matter far beyond math class. They are the structural foundation of asymmetric encryption—the technology behind secure websites, banking transactions, messaging apps, and digital signatures. The RSA algorithm, developed in 1977 by Ron Rivest, Adi Shamir, and Leonard Adleman, rests entirely on a single mathematical asymmetry: multiplying two large prime numbers together is computationally trivial, but factoring the result back into those original primes is computationally brutal. Every time you see that padlock in your browser, prime numbers are doing the heavy lifting. Learning to check for primes in Python gives you hands-on practice with loops, conditionals, and mathematical reasoning—and connects you directly to the machinery of modern cryptography.
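To make that asymmetry tangible, here is a toy sketch with tiny primes, nothing like real key sizes: the multiplication is effectively instant, while recovering even the smaller factor by brute force already takes over a hundred thousand trial divisions.

```python
# Toy illustration of the RSA asymmetry -- tiny primes, for intuition only.
p, q = 104729, 1299709   # the 10,000th and 100,000th primes

def smallest_factor(n):
    """Brute-force the smallest prime factor of n -- the 'hard' direction."""
    i = 2
    while i * i <= n:
        if n % i == 0:
            return i
        i += 1
    return n  # n itself is prime

product = p * q                  # the 'easy' direction: one multiplication
print(smallest_factor(product))  # 104729, after ~100,000 trial divisions
```

Real RSA moduli have factors hundreds of digits long, where this brute-force search would take longer than the age of the universe; the gap between the two directions is the entire point.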
What Is a Prime Number?
A prime number is a natural number greater than 1 that cannot be formed by multiplying two smaller natural numbers together. The number 2 is the smallest prime and the only even prime. Numbers like 3, 5, 7, 11, and 13 are also prime because they have no divisors other than 1 and themselves.
Numbers that are not prime are called composite numbers. For example, 6 is composite because it equals 2 × 3. The number 1 is neither prime nor composite by mathematical convention—this is not arbitrary. The reason 1 is excluded is that including it would break the Fundamental Theorem of Arithmetic, which states that every integer greater than 1 has a unique prime factorization. If 1 were prime, that uniqueness would collapse (you could factor 12 as 2 × 2 × 3, or 1 × 2 × 2 × 3, or 1 × 1 × 2 × 2 × 3, and so on forever).
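The theorem is easy to see in code. Here is a minimal trial-division factorizer (an illustrative sketch, not one of the methods covered below): every integer greater than 1 comes out as exactly one multiset of primes, and there is no sensible place to put a factor of 1.

```python
def prime_factors(n):
    """Return the prime factorization of n as a list, smallest factor first."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each prime completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(12))  # [2, 2, 3] -- the one and only prime factorization
print(prime_factors(97))  # [97] -- a prime is its own factorization
```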
Negative numbers, zero, and 1 are not considered prime. Any prime-checking function should handle these edge cases by returning False immediately. In Python, integers have arbitrary precision—unlike most languages, Python's int type can hold numbers of any size without overflow, which means your primality functions will technically work on very large inputs, though performance becomes the real constraint.
Method 1: Basic Loop
The simplest approach checks whether the number is divisible by any integer from 2 up to n - 1. If a divisor is found, the number is not prime. If the loop completes without finding one, the number is prime.
```python
def is_prime_basic(n):
    if n <= 1:
        return False
    for i in range(2, n):
        if n % i == 0:
            return False
    return True

# Test the function
number = int(input("Enter a number: "))
if is_prime_basic(number):
    print(f"{number} is a prime number.")
else:
    print(f"{number} is not a prime number.")
```
This function starts by ruling out numbers less than or equal to 1. Then it iterates from 2 through n - 1, checking divisibility with the modulo operator %. If n % i == 0 evaluates to True, a factor has been found and the function returns False. Otherwise, it returns True after the loop finishes.
This method is correct and easy to understand, but it is slow for large numbers because it checks every integer up to n - 1. The time complexity is O(n), which means performance degrades linearly as the input grows. For a number like 999,983 (which is prime), this loop runs nearly a million iterations before returning True.
Using for...else for a Cleaner Version
Python has a unique for...else construct that pairs naturally with prime checking. The else block runs only when the loop completes without hitting a break statement. Note that this pattern only makes sense when the loop uses break rather than return; otherwise the else clause is simply unreachable dead code.
```python
def is_prime_for_else(n):
    if n <= 1:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    for i in range(3, int(n**0.5) + 1, 2):
        if n % i == 0:
            result = False
            break
    else:
        result = True
    return result

print(is_prime_for_else(29))  # True
print(is_prime_for_else(15))  # False
```
Here, the else block executes only if the loop finishes without a break—meaning no divisor was found. This pattern surprises many developers coming from other languages; it is one of Python's more distinctive features. In practice, the early-return style shown in Method 1 is more common because it is easier to follow at a glance, but understanding for...else is worth the effort since you will encounter it in other Python code.
Method 2: Square Root Optimization
A significant improvement comes from a simple mathematical observation: if a number n has a factor greater than its square root, it must also have a corresponding factor smaller than its square root. Therefore, you only need to check divisors up to the square root of n.
To see why this is true, suppose n = a × b where both a and b are greater than the square root of n. Then a × b would be greater than n—a contradiction. So at least one of the factors must be at or below the square root.
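You can verify the pairing directly. The sketch below (using 36 as a hypothetical example) lists each divisor found at or below the square root together with its partner above it; the small half of every pair is enough to reveal them all.

```python
import math

n = 36  # example composite; its square root is 6
pairs = [(a, n // a) for a in range(1, math.isqrt(n) + 1) if n % a == 0]
print(pairs)  # [(1, 36), (2, 18), (3, 12), (4, 9), (6, 6)]
```

Every pair straddles (or, for a perfect square, meets at) the square root, which is exactly why the loop can stop there.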
The square root optimization reduces time complexity from O(n) to O(sqrt(n)). For a number like 1,000,000, this means checking roughly 1,000 values instead of 999,999—a 1,000x improvement in the number of iterations. For a 10-digit number, the difference is even more dramatic.
```python
import math

def is_prime_sqrt(n):
    if n <= 1:
        return False
    for i in range(2, int(math.sqrt(n)) + 1):
        if n % i == 0:
            return False
    return True

print(is_prime_sqrt(97))   # True
print(is_prime_sqrt(100))  # False
```
The math.sqrt(n) function returns the square root of n as a float. Wrapping it with int() converts it to an integer, and adding 1 ensures the range includes the square root itself. You can also write int(n**0.5) + 1 instead of importing the math module—both produce the same result for typical inputs, though math.sqrt is marginally more explicit about intent.
```python
def is_prime_sqrt_no_import(n):
    if n <= 1:
        return False
    for i in range(2, int(n**0.5) + 1):
        if n % i == 0:
            return False
    return True

print(is_prime_sqrt_no_import(37))  # True
print(is_prime_sqrt_no_import(49))  # False
```
This version avoids the `import math` statement entirely by using Python's exponentiation operator. It is functionally equivalent and often preferred for quick scripts. One subtle point: for very large integers, floating-point precision in `n**0.5` can produce an inaccurate result near perfect squares, because Python floats have 53 bits of mantissa and cannot represent large integers exactly. This can matter for inputs above roughly 10^15. For numbers of that magnitude, prefer `math.isqrt(n)` (available since Python 3.8), which computes the integer square root exactly using integer arithmetic:
```python
import math

def is_prime_isqrt(n):
    if n <= 1:
        return False
    for i in range(2, math.isqrt(n) + 1):
        if n % i == 0:
            return False
    return True
```
For typical inputs under a few trillion, int(n**0.5) and math.isqrt(n) return the same result. But if you are writing a function intended to be robust at any scale, math.isqrt is the safer choice.
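The difference is easy to demonstrate. The snippet below uses a googol (10^100), chosen only because its exact square root, 10^50, is obvious by eye: the float route silently lands on a nearby but wrong integer, while `math.isqrt` is exact.

```python
import math

n = 10**100  # a googol: far beyond what a 53-bit float mantissa can represent

exact = math.isqrt(n)   # exact integer square root
approx = int(n**0.5)    # detours through a float and picks up rounding error

print(exact == 10**50)   # True
print(approx == 10**50)  # False -- float rounding has crept in
```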
Method 3: Skipping Even Numbers (6k Optimization)
Every integer greater than 3 can be expressed in one of six forms: 6k, 6k+1, 6k+2, 6k+3, 6k+4, or 6k+5. Of these, only 6k+1 and 6k+5 (which is the same as 6k−1) can possibly be prime. The others are all divisible by 2 or 3:
- 6k is divisible by 6 (and by 2 and 3)
- 6k+2 is divisible by 2
- 6k+3 is divisible by 3
- 6k+4 is divisible by 2
By first checking divisibility by 2 and 3, you can then step through only values of the form 6k ± 1, reducing the number of iterations by roughly two-thirds compared to the square root method.
```python
def is_prime_6k(n):
    if n <= 1:
        return False
    if n <= 3:
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    i = 5
    while i * i <= n:
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6
    return True

print(is_prime_6k(113))  # True
print(is_prime_6k(114))  # False
```
This function handles small cases first: numbers 2 and 3 are prime, and anything divisible by 2 or 3 is not. The while loop starts at 5 and increments by 6 on each pass. Inside the loop, it checks both i (representing 6k−1) and i + 2 (representing 6k+1). The condition i * i <= n serves the same purpose as the square root boundary but avoids a function call by using integer multiplication.
This is the approach used in many production codebases when a fast, dependency-free prime check is needed for reasonably sized integers. It is also the function interviewers tend to be most impressed by when candidates arrive at it independently, because it demonstrates genuine mathematical reasoning rather than just code recall.
Method 4: Sieve of Eratosthenes (Batch Primality)
All three methods above answer a single question: is this one number prime? If you need to find all prime numbers up to a limit—for example, to pre-build a lookup table—a fundamentally different approach is more efficient: the Sieve of Eratosthenes.
The algorithm, attributed to the ancient Greek mathematician Eratosthenes of Cyrene, works by starting with a list of all integers from 2 to the limit, then systematically eliminating multiples of each prime it finds. What remains after the process is complete is a list of all primes in that range.
```python
def sieve_of_eratosthenes(limit):
    if limit < 2:
        return []
    # Start with all numbers assumed to be prime
    is_prime = [True] * (limit + 1)
    is_prime[0] = False
    is_prime[1] = False
    p = 2
    while p * p <= limit:
        if is_prime[p]:
            # Mark all multiples of p as not prime
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
        p += 1
    return [num for num, prime in enumerate(is_prime) if prime]

primes = sieve_of_eratosthenes(50)
print(primes)
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```
The key insight is that we start marking multiples of p from p * p rather than from p * 2. Any smaller multiple of p would have already been marked by an earlier prime. This makes the algorithm efficient. The time complexity is O(n log log n), which is extremely fast for generating all primes up to a given limit.
Use the Sieve when you need many primality checks against a bounded range, not when you need to check a single large number. The sieve trades memory for speed: generating all primes up to 10 million requires storing roughly 10 million booleans, but then individual lookups become O(1). If you only need to check one number, the 6k method is more memory-efficient.
You can also use the sieve output to create a fast lookup set:
```python
prime_set = set(sieve_of_eratosthenes(10_000_000))

def is_prime_lookup(n):
    return n in prime_set

print(is_prime_lookup(9_999_991))  # True
print(is_prime_lookup(9_999_992))  # False
```
Method 5: Using Python's sympy Library
If you prefer a ready-made solution, the sympy library includes an isprime() function that handles all edge cases and uses sophisticated algorithms internally.
```python
# First install sympy if needed: pip install sympy
from sympy import isprime

print(isprime(17))      # True
print(isprime(1))       # False
print(isprime(2))       # True
print(isprime(104729))  # True
print(isprime(-7))      # False (negative numbers are not prime)
```
Understanding what sympy.isprime() actually does is worth the effort, because the details matter for security-conscious work. According to the sympy source code and documentation, the function operates in several stages:
- Trial division: It first looks for small factors. If any are found, it returns `False` immediately.
- Sieve lookup: If the internal prime sieve is large enough to cover the input, it uses bisection search on the sieve for a fast answer.
- Deterministic Miller-Rabin tests (for n < 2^64): For numbers below 2^64 (approximately 18.4 quintillion), sympy uses a fixed set of Miller-Rabin test bases that are mathematically proven to yield no false positives in that range. The result is fully deterministic, not probabilistic.
- BPSW test (for n ≥ 2^64): For numbers at or beyond 2^64, sympy applies the Baillie-PSW (BPSW) primality test, which combines a Miller-Rabin test with a strong Lucas probable prime test. As of March 2026, no counterexample to the BPSW test has ever been found, though mathematicians have not proven one cannot exist. (Source: SymPy Number Theory documentation)
Using sympy adds an external dependency to your project. For learning purposes or coding interview questions, writing your own function is expected. For production applications where reliability on arbitrarily large numbers matters, using a well-maintained library is the right call. Never use any primality function from scratch for actual cryptographic key generation—that requires specialist libraries like Python's cryptography package, which handles secure random number generation, padding, and other factors that determine whether the result is actually safe.
Benchmarking Your Own Code with timeit
When comparing algorithms, intuition about speed can be misleading. Python's built-in timeit module lets you measure execution time precisely. Here is how to benchmark the basic loop versus the square root method versus the 6k method on a large prime:
```python
import timeit

# 9,973 is a prime number used for benchmarking
setup = """
def is_prime_basic(n):
    if n <= 1:
        return False
    for i in range(2, n):
        if n % i == 0:
            return False
    return True

def is_prime_sqrt(n):
    if n <= 1:
        return False
    for i in range(2, int(n**0.5) + 1):
        if n % i == 0:
            return False
    return True

def is_prime_6k(n):
    if n <= 1:
        return False
    if n <= 3:
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    i = 5
    while i * i <= n:
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6
    return True
"""

# Time each function (fewer repeats for basic since it's very slow)
t_basic = timeit.timeit("is_prime_basic(9973)", setup=setup, number=100)
t_sqrt = timeit.timeit("is_prime_sqrt(9973)", setup=setup, number=10000)
t_6k = timeit.timeit("is_prime_6k(9973)", setup=setup, number=10000)

print(f"Basic loop (x100): {t_basic:.4f}s")
print(f"Sqrt method (x10k): {t_sqrt:.4f}s")
print(f"6k method (x10k): {t_6k:.4f}s")
```
Running this on a typical machine shows the 6k method completing in a fraction of the time taken by the basic loop, with the gap widening for larger inputs. The timeit module runs the statement many times and returns total elapsed time, smoothing out system noise for a reliable comparison. Note that the number of repetitions is deliberately different across methods: the basic loop is slow enough that 100 repetitions is practical, while the optimized methods can run thousands of iterations in the same wall-clock time.
Common Traps and Edge Cases
Prime-checking functions fail in predictable ways. Being aware of these before writing code saves debugging time.
The number 2. Two is prime, even though it is even. Any implementation that begins by eliminating even numbers must add an explicit check for 2 first, or it will incorrectly return False for the smallest and only even prime.
```python
def broken_is_prime(n):
    if n <= 1 or n % 2 == 0:  # BUG: eliminates 2
        return False
    for i in range(3, int(n**0.5) + 1, 2):
        if n % i == 0:
            return False
    return True

print(broken_is_prime(2))  # False -- WRONG

def correct_is_prime(n):
    if n < 2:
        return False
    if n == 2:
        return True  # Handle 2 explicitly
    if n % 2 == 0:
        return False
    for i in range(3, int(n**0.5) + 1, 2):
        if n % i == 0:
            return False
    return True

print(correct_is_prime(2))  # True -- correct
```
Float inputs. In Python 3, passing a float like 7.0 to range() raises a TypeError immediately, so the basic loop and square root methods will actually crash rather than silently return a wrong answer. The 6k method, which uses a while loop with integer multiplication, may handle float inputs without error but produce unreliable results. Either way, non-integer inputs are a bug waiting to happen. Adding a type check at the top of any function that will be called by other code is good defensive practice:
```python
def is_prime_safe(n):
    if not isinstance(n, int):
        raise TypeError(f"Expected int, got {type(n).__name__}")
    if n <= 1:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    for i in range(3, int(n**0.5) + 1, 2):
        if n % i == 0:
            return False
    return True

# is_prime_safe(7.0)  # raises TypeError
```
Negative numbers. Every function in this article returns False for negative numbers by virtue of the n <= 1 check. This is correct: negative integers are not prime. The convention holds by the definition of primes as natural numbers greater than 1.
The off-by-one in range. Using range(2, int(n**0.5)) instead of range(2, int(n**0.5) + 1) will miss the square root itself. For perfect squares like 25, this causes a wrong answer: the loop never checks 5, so 25 would be reported as prime.
```python
# Demonstrates the off-by-one error
def wrong_sqrt_check(n):
    if n <= 1:
        return False
    for i in range(2, int(n**0.5)):  # Missing +1
        if n % i == 0:
            return False
    return True

print(wrong_sqrt_check(25))  # True -- WRONG (25 = 5 x 5)
print(wrong_sqrt_check(49))  # True -- WRONG (49 = 7 x 7)
```
Why This Goes Beyond Practice Problems
Understanding prime-checking algorithms in Python directly connects to how real-world security systems work. The RSA cryptosystem, which secures a significant portion of internet traffic, begins by selecting two very large prime numbers—typically 1024 or 2048 bits in length. According to NIST guidance, a minimum RSA key size of 2048 bits is now recommended; the older 1024-bit keys are considered breakable with modern hardware. (Source: NIST SP 800-131A Rev. 2)
The security of RSA rests on the fact that while multiplying two large primes together is fast, factoring their product back into the original primes is astronomically expensive for classical computers. A 2048-bit RSA modulus is the product of two roughly 1024-bit primes—a number with over 600 decimal digits. No classical algorithm can factor it in any practical timeframe.
This is also why the choice of primality test matters at scale. The functions you write in this article (basic loop, square root, 6k) are efficient enough for numbers in the millions. For cryptographic prime generation, which requires finding 1024-bit primes reliably and quickly, probabilistic tests like Miller-Rabin and the BPSW test are the standard approach. The Miller-Rabin test was independently developed by Gary L. Miller in 1976 and extended into an unconditional probabilistic form by Michael O. Rabin in 1980; it remains one of the most widely used primality tests in cryptographic software today. (Source: Wikipedia — Miller-Rabin Primality Test)
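For readers who want to see the idea, here is a compact random-base Miller-Rabin sketch, illustrative only and not audited cryptographic code. It reuses the `for...else` pattern from earlier in this article: the `else` fires when no squaring step reaches n − 1, which proves the base is a witness of compositeness.

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic Miller-Rabin primality test (illustrative sketch).

    A False result is always correct; a True result on a composite requires
    every random base to be a "strong liar", which happens with probability
    at most 4**-rounds.
    """
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):  # dispatch small cases quickly
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # fast modular exponentiation: O(log n) multiplies
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is definitely composite
    return True

print(miller_rabin(2**61 - 1))  # True -- a Mersenne prime, checked in microseconds
print(miller_rabin(2**61 + 1))  # False -- divisible by 3
```

The whole test costs only a few dozen modular exponentiations regardless of how large n is, which is why tests of this family, not trial division, are what cryptographic libraries run when generating 1024-bit primes.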
One thing worth knowing if you pursue this topic further: quantum computers running Shor's algorithm could theoretically factor RSA moduli efficiently, which would break RSA-based encryption entirely. This is a real and actively researched threat; NIST finalized the first set of post-quantum cryptographic standards in 2024 as a response. The algorithms in those standards (such as ML-KEM, formerly CRYSTALS-Kyber) do not rely on prime factorization at all—they use lattice-based mathematics instead. The math of prime numbers is foundational right now, but the landscape is actively shifting. (Source: NIST Post-Quantum Cryptography project)
On a more immediate level, prime-checking logic appears in competitive programming, coding interviews at major tech companies, and anywhere a developer needs to reason about number-theoretic properties. Being able to explain not just the code but the mathematical reasoning behind each optimization—why the square root works, why 6k+1 and 6k-1 are the only candidates worth testing—is what distinguishes a confident candidate from one who memorized a solution.
Comparing All Five Methods
| Method | Time Complexity | Best For | Dependency | Speed |
|---|---|---|---|---|
| Basic Loop | O(n) | Learning; small inputs only | None | Slow |
| Square Root | O(sqrt(n)) | Single checks up to ~10^12 | None (or math) | Medium |
| 6k Optimization | O(sqrt(n)) with ~1/3 fewer steps | Performance-sensitive single checks without dependencies | None | Fast |
| Sieve of Eratosthenes | O(n log log n) to build; O(1) lookups | Batch generation of all primes up to a limit | None | Fast for bulk |
| sympy.isprime() | Effectively O(log^k n) via BPSW for large n | Reliable checks on arbitrarily large integers | sympy | Fastest for large n |
A note on sympy's time complexity: for numbers below 2^64, results are fully deterministic. For numbers above that threshold, sympy uses the BPSW test, which has no known counterexamples as of March 2026. The theoretical complexity depends on the test applied, but in practical terms it handles numbers of hundreds of digits far faster than any trial-division approach.
Key Takeaways
- Start with edge cases: Always handle numbers less than or equal to 1, the number 2 (prime and even), and the off-by-one at the square root boundary. Missing any of these produces silent incorrect results.
- Use the square root boundary: Checking divisors up to the square root of `n` instead of `n - 1` is the single most impactful optimization for single-number primality checks. For inputs above roughly 10^15, use `math.isqrt(n)` instead of `int(n**0.5)` to avoid floating-point inaccuracy near perfect squares.
- Skip unnecessary checks with 6k: The 6k ± 1 method eliminates all multiples of 2 and 3 from consideration, cutting the workload by roughly two-thirds compared to the square root method for no added complexity.
- Choose the right tool for the task: One number to check? Use 6k. Many numbers in a range? Use the Sieve. Large arbitrary-precision integers? Use `sympy.isprime()`. Never use any of these for actual cryptographic key generation in production.
- Measure before you optimize: Use Python's `timeit` module to verify that your intuitions about speed are correct. The difference between O(n) and O(sqrt(n)) is striking in practice, not just in theory.
- Type safety matters: Float inputs and non-integer types can crash or silently produce wrong answers in basic implementations. Add a type check when building functions that will be called by other code.
- Know the bigger picture: The ability to explain why prime numbers matter for RSA encryption, and why post-quantum cryptography is now a serious engineering concern, demonstrates the kind of contextual understanding that sets apart strong engineers from those who only know syntax.
Checking for prime numbers is a problem that appears in coding interviews, competitive programming, and systems that depend on cryptographic security. By understanding these approaches and the mathematics behind each one, you build a foundation in both Python and algorithmic thinking that carries across every area of the discipline.