If you have spent any time reading Python code written by experienced developers, you have probably noticed that function definitions often include arguments with values already assigned right in the signature. That is not an accident or a stylistic quirk. It is one of Python's most practical features: default argument values.
Once you understand how defaults work and where to use them, you will write functions that are more flexible, easier to call, and far less frustrating to maintain. This article covers everything you need to know, from the basics to the edge cases that trip people up, with real-world applications throughout.
What Are Default Argument Values?
A default argument value is a fallback value assigned to a function parameter in its definition. When the function is called without that argument, Python uses the default. When the caller does supply that argument, Python uses the caller's value instead.
The syntax looks like this:
def greet(name, greeting="Hello"):
    print(f"{greeting}, {name}!")
Here, greeting has a default value of "Hello". You can call this function two ways:
greet("Kandi") # prints: Hello, Kandi!
greet("Kandi", "Hey") # prints: Hey, Kandi!
Both calls are valid. The first uses the default. The second overrides it.
Python enforces one important rule about argument ordering. Arguments without defaults must come before arguments with defaults. This is non-negotiable. If a default-bearing argument came first, Python would have no way to know whether the caller meant to override it or pass a value for the next parameter.
# This is valid
def connect(host, port=8080, timeout=30):
    pass

# This raises a SyntaxError
def connect(port=8080, host, timeout=30):
    pass
The Classic Example
The Python documentation uses a function called ask_ok to demonstrate this concept, and it is worth walking through carefully because it shows how defaults integrate into a real, working program:
def ask_ok(prompt, retries=4, reminder='Please try again!'):
    while True:
        reply = input(prompt)
        if reply in {'y', 'ye', 'yes'}:
            return True
        if reply in {'n', 'no', 'nop', 'nope'}:
            return False
        retries = retries - 1
        if retries < 0:
            raise ValueError('invalid user response')
        print(reminder)
This function accepts three parameters. The first, prompt, has no default and must always be provided. The second, retries, defaults to 4. The third, reminder, defaults to 'Please try again!'.
This design means the function can be called in three distinct ways:
# Only the required argument
ask_ok('Do you want to continue? ')
# Required argument plus a custom retry count
ask_ok('Are you sure you want to delete this file? ', 2)
# All three arguments fully customized
ask_ok('Overwrite the file? ', 3, 'Come on, yes or no!')
That flexibility is the entire point. The caller gets to decide how much they want to customize the behavior. If they do not care about the retry count or the reminder message, they do not have to think about them at all.
Consider what that same function would look like without defaults:
def ask_ok(prompt, retries, reminder):
    ...
Now every single call requires all three arguments. That means more typing, more opportunities for mistakes, and a harder-to-read codebase where intent is obscured by noise. Defaults let you encode sensible assumptions directly into the function. The function says, in effect: "Here is what I will do unless you tell me otherwise." That is good API design.
When Are Default Values Evaluated?
This is where many developers get burned, and it is worth spending a moment on.
Default values are evaluated once, when the function is defined, not each time the function is called.
For immutable types like integers, strings, and tuples, this distinction rarely matters. But for mutable types like lists and dictionaries, it matters enormously. Here is a classic mistake:
def add_item(item, collection=[]):
    collection.append(item)
    return collection

print(add_item("apple"))   # ['apple']
print(add_item("banana"))  # ['apple', 'banana'] <-- probably not what you wanted
The list [] is created once. Every call that does not provide a collection argument shares that same list object. By the second call, it already has "apple" in it.
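You can observe the shared object directly, because Python stores a function's defaults on the function object itself, in the __defaults__ attribute. A quick sketch using the broken version above:

```python
def add_item(item, collection=[]):
    collection.append(item)
    return collection

# The default list lives on the function object itself.
print(add_item.__defaults__)  # ([],)

add_item("apple")
add_item("banana")

# Both calls mutated the one shared default list.
print(add_item.__defaults__)  # (['apple', 'banana'],)
```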
Never use a list, dictionary, or other mutable object directly as a default value. The correct pattern is to use None as the default and create the mutable object inside the function body:
def add_item(item, collection=None):
    if collection is None:
        collection = []
    collection.append(item)
    return collection

print(add_item("apple"))   # ['apple']
print(add_item("banana"))  # ['banana'] <-- correct
This is such a common pattern in Python that it is essentially a convention. Whenever you need a default mutable argument, use None and initialize inside the function body.
Real-World Applications
Understanding the syntax is only half the story. Seeing where experienced developers actually use default arguments brings the concept to life.
Configuration with Sensible Defaults
One of the most common uses is in configuration-style functions where you want to provide a full working setup out of the box while still allowing customization:
def create_report(
    data,
    title="Untitled Report",
    include_summary=True,
    max_rows=100,
    output_format="pdf",
):
    # build and return the report
    pass
A data analyst calling this function can get a working report with just create_report(my_data). A developer building a custom pipeline can specify every parameter. Both use cases are served by one function.
Logging and Debugging
Logging functions often use defaults to handle verbosity levels:
import datetime
import traceback

def log_event(message, level="INFO", timestamp=None, include_trace=False):
    if timestamp is None:
        timestamp = datetime.datetime.now().isoformat()
    print(f"[{timestamp}] [{level}] {message}")
    if include_trace:
        traceback.print_stack()
Most calls just do log_event("User logged in"). But a debugging session might call log_event("Unexpected state reached", level="ERROR", include_trace=True). The function handles both without separate implementations.
Database Query Wrappers
When wrapping database queries, defaults let you set safe, reasonable limits:
def fetch_users(
    db_connection,
    active_only=True,
    limit=50,
    offset=0,
    order_by="created_at",
):
    query = "SELECT * FROM users"
    if active_only:
        query += " WHERE active = 1"
    query += f" ORDER BY {order_by} LIMIT {limit} OFFSET {offset}"
    return db_connection.execute(query)
The default of active_only=True encodes a business rule: unless told otherwise, only return active users. The limit=50 prevents accidentally returning millions of rows. A basic call like fetch_users(conn) is safe and sensible.
Retry Logic in Network Code
The ask_ok example from the Python docs is essentially a user-facing retry loop. The same pattern shows up constantly in network programming:
import time
import urllib.request

def fetch_url(url, retries=3, delay=1.0, timeout=10):
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.read()
        except Exception:
            if attempt < retries - 1:
                time.sleep(delay)
            else:
                raise
Most callers do not need to think about retry counts or delays. They just call fetch_url("https://example.com/data") and get their data. But a caller dealing with a flaky endpoint can do fetch_url(url, retries=10, delay=2.0) without any code changes on the library side.
File Processing Utilities
File handling functions benefit enormously from defaults:
def read_csv(
    filepath,
    delimiter=",",
    has_header=True,
    encoding="utf-8",
    skip_blank_lines=True,
):
    rows = []
    with open(filepath, encoding=encoding) as f:
        lines = f.readlines()
    if skip_blank_lines:
        lines = [line for line in lines if line.strip()]
    if has_header:
        lines = lines[1:]
    for line in lines:
        rows.append(line.strip().split(delimiter))
    return rows
Reading a standard CSV file is one call: read_csv("data.csv"). Reading a tab-delimited file with no header is still simple: read_csv("data.tsv", delimiter="\t", has_header=False).
Keyword Arguments and Defaults
Default arguments become even more powerful when combined with Python's support for keyword arguments. When calling a function, you can name the argument you are passing:
def ask_ok(prompt, retries=4, reminder='Please try again!'):
    ...

# Skip retries, but customize the reminder
ask_ok('Continue?', reminder='Try once more.')
This lets callers skip over intermediate defaults without having to remember positional order. It makes call sites far more readable, especially in functions with many optional parameters.
When a function has three or more parameters with defaults, always use keyword arguments at the call site. It makes the code dramatically easier to read and prevents silent errors when the parameter order changes.
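To see why, compare a positional call against a keyword call of the create_report function from earlier. This is a sketch with a stubbed-out body, just to make the call sites concrete:

```python
def create_report(data, title="Untitled Report", include_summary=True,
                  max_rows=100, output_format="pdf"):
    return {"title": title, "summary": include_summary,
            "rows": max_rows, "format": output_format}

# Positional: the reader must count parameters to decode True, 100, "csv".
positional = create_report([], "Q3 Sales", True, 100, "csv")

# Keyword: intent is explicit, and untouched defaults stay out of the way.
keyword = create_report([], title="Q3 Sales", output_format="csv")

print(positional == keyword)  # True
```

Both calls produce the same result, but only the second one survives a future reordering of the optional parameters.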
Using Expressions as Defaults
Default values do not have to be simple literals. They can be any valid Python expression, and that expression is evaluated at definition time:
MAX_RETRIES = 5

def process(data, retries=MAX_RETRIES):
    ...
At the moment Python processes the def statement, it evaluates MAX_RETRIES and stores its value as the default. If you later change MAX_RETRIES, the default does not update. This is the same "evaluated once at definition time" rule, just applied to a variable reference.
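You can verify this directly. In this sketch, rebinding the constant after the def statement has no effect on the captured default:

```python
MAX_RETRIES = 5

def process(data, retries=MAX_RETRIES):
    return retries

MAX_RETRIES = 99           # rebinding the name after definition...
print(process("payload"))  # 5 -- the default was captured when def ran
```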
You can also use function calls as defaults, though this should be done carefully:
def get_default_name():
    return "anonymous"

def register_user(username=get_default_name()):
    print(f"Registering: {username}")
get_default_name() is called once, when register_user is defined. Every call to register_user() that omits username gets the same value that get_default_name() returned at import time.
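One way to see that the call happens exactly once is to give the default factory a visible side effect. This is a contrived sketch to demonstrate the evaluation rule, not a recommended pattern:

```python
calls = {"count": 0}

def make_name():
    calls["count"] += 1
    return f"anonymous-{calls['count']}"

def register_user(username=make_name()):
    return username

# Three calls to register_user(), but make_name() ran only once, at def time.
print(register_user(), register_user(), register_user())
print(calls["count"])  # 1
```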
Introspecting Default Values
Python exposes function defaults programmatically. You can inspect them on any function:
import inspect

def ask_ok(prompt, retries=4, reminder='Please try again!'):
    pass

sig = inspect.signature(ask_ok)
for name, param in sig.parameters.items():
    print(f"{name}: default = {param.default}")
This outputs:
prompt: default = <class 'inspect._empty'>
retries: default = 4
reminder: default = Please try again!
This is useful when building frameworks, documentation generators, or any tool that needs to understand function interfaces at runtime.
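For quick interactive checks, the raw __defaults__ attribute offers a lighter-weight view than the inspect module. It holds a tuple aligned with the rightmost parameters, the ones that have defaults:

```python
def ask_ok(prompt, retries=4, reminder='Please try again!'):
    pass

# One entry per default-bearing parameter, in order.
print(ask_ok.__defaults__)  # (4, 'Please try again!')
```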
Common Mistakes to Avoid
Beyond the mutable default antipattern covered earlier, a few other mistakes come up regularly.
Shadowing the default inside the function accidentally:
def process(items, max_count=10):
    max_count = len(items)  # now the default is meaningless
    ...
If you are immediately overwriting the parameter, the default serves no purpose.
Using defaults as a substitute for proper validation:
def divide(a, b=1):
    return a / b
The default of 1 prevents a ZeroDivisionError in the common case, but it might silently produce wrong results if the caller intended to pass a zero and forgot. Defaults should encode sensible behavior, not paper over errors.
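If a zero divisor is a realistic input, a clearer design is to require both arguments and validate explicitly. A minimal sketch of that alternative:

```python
def divide(a, b):
    # No default: the caller must state the divisor, and bad input fails loudly
    # instead of being silently replaced by 1.
    if b == 0:
        raise ValueError("divisor must be nonzero")
    return a / b

print(divide(10, 2))  # 5.0
```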
Over-defaulting everything:
def send_email(to="", subject="", body="", cc="", bcc="", reply_to=""):
    ...
If every argument has a default of an empty string, calling send_email() with no arguments is technically valid but produces nothing useful. Some arguments should be required. Not everything needs a default.
Key Takeaways
- Required arguments come first: Arguments without defaults must always appear before arguments with defaults, or Python will raise a SyntaxError.
- Defaults are evaluated once: Default values are computed when the function is defined, not on each call. This is a critical distinction for mutable types.
- Never use mutable objects as defaults: Use None as the default and initialize the list, dict, or other mutable object inside the function body.
- Keyword arguments multiply the power of defaults: Callers can skip intermediate defaults by naming the argument they want to set, making call sites more readable and resilient.
- Defaults encode intent: A good default is not just a convenience -- it communicates the expected, sensible behavior of the function to anyone reading the code.
With these principles in hand, you can design functions that are a pleasure to use: sensible out of the box, fully configurable when needed, and clear in their intent from the very first line of their definition.