Practical Python from fundamentals to advanced techniques. Real code, real examples, real understanding.
Three letters. That is all it takes to unlock one of the most powerful constructs in Python. The def keyword — short for "define" — is how you create functions, and understanding it deeply is what separates someone who writes Python from someone who actually thinks in Python.
Functions are the building blocks of any well-structured Python program. They let you encapsulate logic, name it, reuse it, and compose it into larger systems. But def is far more than a syntactic shortcut for grouping lines of code. Under the hood, it is an executable statement that creates a first-class object, binds it to a name, and opens the door to closures, decorators, generators, and the full expressive power of the language.
This article walks through everything def does, from its simplest form to the advanced parameter syntax that has evolved across more than a decade of Python Enhancement Proposals.
The Basics: Anatomy of a Function Definition
At its core, a function definition in Python follows this structure:
def function_name(parameters):
    """Docstring describing what the function does."""
    # function body
    return value
When the Python interpreter encounters a def statement, it does not immediately execute the function body. Instead, it creates a new function object and binds it to the name you provided. This is an important distinction. The def statement is itself an executable statement — it runs at the time it is encountered, and its job is to produce an object.
Python's official documentation makes a striking point about this: functions are first-class objects, meaning a def statement inside another function creates a local function that can be returned or passed to other callables. That compact idea contains an enormous amount of information. It means you can assign functions to variables, pass them as arguments to other functions, return them from functions, and store them in data structures — just like any integer or string. (Python Language Reference, "Function definitions")
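A short sketch makes that concrete (shout and apply_twice are illustrative helpers, not library functions):

```python
# Functions are objects like any other: assign them, pass them, store them.
def shout(text):
    """Return text upper-cased with an exclamation mark."""
    return text.upper() + "!"

# Assign the function object to a second name:
yell = shout
print(yell("hello"))             # HELLO!
print(yell is shout)             # True -- same object, two names

# Pass it as an argument and store it in a data structure:
def apply_twice(func, value):
    """Call func on value, then on the result."""
    return func(func(value))

print(apply_twice(shout, "hi"))  # HI!!
handlers = {"loud": shout}
print(handlers["loud"]("ok"))    # OK!
```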
Here is a simple, real example:
def greet(name):
    """Return a personalized greeting string."""
    return f"Hello, {name}. Welcome to Python CodeCrack."

message = greet("Kandi")
print(message)
# Output: Hello, Kandi. Welcome to Python CodeCrack.
Nothing exotic here. But even this tiny function demonstrates several things at once: parameter binding, string interpolation, a return value, and the creation of a callable object bound to the name greet.
Why def and Not Something Else?
Python's creator, Guido van Rossum, has spoken extensively about the design philosophy behind Python's syntax. In an interview published on the Dropbox blog by Anthony Wing Kosner, van Rossum explained his guiding principle: "In Python, every symbol you type is essential." The def keyword embodies this. It is concise, unambiguous, and reads almost like English: define greet, taking name.
The choice of def traces back to Python's earliest days in the late 1980s and early 1990s. Van Rossum, working at Centrum Wiskunde & Informatica in Amsterdam, drew inspiration from ABC (a teaching language) and C while deliberately prioritizing readability. In the same Dropbox interview, van Rossum described the practical philosophy behind Python's approachability for beginners: showing them very small snippets of code that need almost no programming terminology to make sense — something that distinguishes Python sharply from Java, where even the smallest program contains, as he put it, a cluster of characters that look like noise to the uninitiated.
Tim Peters, one of Python's most influential core contributors, codified this design ethos in PEP 20 — The Zen of Python (2004). Two of its aphorisms are particularly relevant to how def works: "Explicit is better than implicit" and "Readability counts." When you see def at the start of a line, there is zero ambiguity about what is happening. You are defining a function. Period.
Parameters and Arguments: The Full Spectrum
The parameter system in Python functions has evolved significantly over the language's lifetime. What started as a straightforward positional system has grown into one of the most flexible argument-handling mechanisms in any mainstream programming language, shaped by several important PEPs.
Positional and Keyword Arguments
Every parameter you define in a function signature can, by default, be passed either by position or by keyword:
def calculate_bmi(weight_kg, height_m):
    """Calculate Body Mass Index from weight and height."""
    return round(weight_kg / (height_m ** 2), 1)

# Both of these are valid:
calculate_bmi(70, 1.75)                     # positional
calculate_bmi(weight_kg=70, height_m=1.75)  # keyword
calculate_bmi(70, height_m=1.75)            # mixed
This flexibility is a deliberate design choice. It lets callers use whichever style is clearest for a given call site.
Default Parameter Values
You can give parameters default values, making them optional at the call site:
def connect_to_database(host, port=5432, timeout=30):
    """Establish a database connection with sensible defaults."""
    print(f"Connecting to {host}:{port} with {timeout}s timeout")
    # connection logic here
Default parameter values are evaluated once, at the time the function is defined, not each time the function is called. This matters enormously when the default is a mutable object like a list or dictionary.
# DANGEROUS -- the list is shared across all calls
def append_to_list(item, target=[]):
    target.append(item)
    return target

print(append_to_list(1))  # [1]
print(append_to_list(2))  # [1, 2] -- probably not what you wanted
The official Python documentation explicitly warns about this behavior and recommends a well-known workaround:
# SAFE -- use None as a sentinel value
def append_to_list(item, target=None):
    if target is None:
        target = []
    target.append(item)
    return target

print(append_to_list(1))  # [1]
print(append_to_list(2))  # [2] -- each call gets its own list
Using None as a default and creating the mutable object inside the function body is one of the most important idioms to internalize when working with def. Make it a habit before the mutable-default bug bites you in production.
Variable-Length Arguments: *args and **kwargs
Python allows functions to accept an arbitrary number of positional and keyword arguments through the *args and **kwargs syntax:
def log_event(event_type, *details, **metadata):
    """Log a security event with flexible detail and metadata."""
    print(f"[{event_type}]")
    for detail in details:
        print(f" - {detail}")
    for key, value in metadata.items():
        print(f" {key}: {value}")

log_event(
    "LOGIN_FAILURE",
    "Invalid password",
    "Account locked after 3 attempts",
    user="admin",
    ip="192.168.1.100",
    timestamp="2026-02-15T10:30:00Z"
)
The *args parameter collects extra positional arguments into a tuple, and **kwargs collects extra keyword arguments into a dictionary. These names are conventions, not requirements — you could name them *items and **options if that is more descriptive for your use case.
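The star syntax also works in the opposite direction at the call site: prefixing an iterable with * or a mapping with ** unpacks it into individual arguments. A short sketch using a simplified stand-in for log_event:

```python
def log_event(event_type, *details, **metadata):
    """Minimal stand-in for log_event that returns what it collected."""
    return event_type, details, metadata

extra_details = ["Invalid password", "Account locked after 3 attempts"]
extra_meta = {"user": "admin", "ip": "192.168.1.100"}

# * unpacks the list into positional arguments,
# ** unpacks the dict into keyword arguments.
event, details, metadata = log_event("LOGIN_FAILURE", *extra_details, **extra_meta)
print(details)   # ('Invalid password', 'Account locked after 3 attempts')
print(metadata)  # {'user': 'admin', 'ip': '192.168.1.100'}
```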
PEP 3102: Keyword-Only Arguments (Python 3.0)
One of the most important changes to Python's function signature syntax came with PEP 3102, authored by Talin and finalized for Python 3.0 in 2006. This PEP introduced keyword-only arguments — parameters that can only be supplied by keyword and will never be automatically filled in by a positional argument.
The motivation was practical. PEP 3102 describes the problem directly: it is easy to envision a function accepting a variable number of arguments that also needs one or more keyword-style options. Before this PEP, the only way to achieve that was to use **kwargs and manually extract the desired keys from the dictionary — a clunky and error-prone approach. (PEP 3102, Talin — peps.python.org)
The syntax uses a bare * as a separator. Everything after the * must be passed as a keyword:
def sort_records(*records, reverse=False, key=None):
    """Sort an arbitrary number of records with explicit options."""
    return sorted(records, reverse=reverse, key=key)

# 'reverse' and 'key' MUST be passed by keyword
sort_records("Charlie", "Alice", "Bob", reverse=True)
You can also use the bare * without accepting variable positional arguments, which forces all subsequent parameters to be keyword-only:
def create_user(username, email, *, is_admin=False, send_welcome=True):
    """Create a new user account."""
    print(f"Creating {username} ({email})")
    print(f"Admin: {is_admin}, Welcome email: {send_welcome}")

# This works:
create_user("kcrowder", "[email protected]", is_admin=True)

# This raises TypeError:
# create_user("kcrowder", "[email protected]", True, True)
Keyword-only arguments are not required to have a default value. When they lack one, they become required keyword arguments — the caller must supply them, and must do so by name. PEP 3102 describes these as "required keyword arguments," which is a powerful tool for designing APIs where you want to force explicit, self-documenting call sites.
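Here is a small illustrative sketch of a required keyword-only argument (transfer_funds is a hypothetical example, not from any library):

```python
def transfer_funds(amount, *, confirm):
    """'confirm' has no default, so it is a required keyword argument."""
    return f"transferred {amount}" if confirm else "aborted"

print(transfer_funds(100, confirm=True))  # transferred 100

# Omitting confirm, or trying to pass it positionally, raises TypeError:
try:
    transfer_funds(100)
except TypeError as exc:
    print(exc)  # ...missing 1 required keyword-only argument: 'confirm'
```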
PEP 570: Positional-Only Parameters (Python 3.8)
A decade after keyword-only arguments arrived, the complementary feature landed. PEP 570, authored by Larry Hastings, Pablo Galindo Salgado, Mario Corchero, and Eric N. Vander Weele, introduced positional-only parameters in Python 3.8. The syntax uses a / separator — everything before the / must be passed positionally.
Guido van Rossum himself proposed the / character in a 2012 discussion, noting that "it's kind of the opposite of * which means 'keyword argument', and / is not a new character." He also served as the BDFL-Delegate for PEP 570, underscoring how important he considered this feature.
def calculate_power(base, exponent, /, *, mod=None):
    """Calculate base raised to exponent, optionally modulo mod.

    'base' and 'exponent' are positional-only.
    'mod' is keyword-only.
    """
    result = base ** exponent
    if mod is not None:
        result = result % mod
    return result

# Valid:
calculate_power(2, 10)
calculate_power(2, 10, mod=1000)

# Invalid -- raises TypeError:
# calculate_power(base=2, exponent=10)
This feature is particularly valuable for library authors. When parameter names are positional-only, users cannot rely on those names, which means the library author is free to rename internal parameters without breaking anyone's code. It also prevents collisions between parameter names and **kwargs keys:
def make_request(method, url, /, **kwargs):
    """Send an HTTP request. 'method' and 'url' are positional-only,
    so callers can safely pass method= or url= in kwargs for other purposes.
    """
    print(f"{method} {url}")
    for k, v in kwargs.items():
        print(f" {k}: {v}")
With both PEP 3102 and PEP 570 in play, you can now write function signatures that precisely control how every parameter is supplied. The complete syntax looks like this:
def full_signature(pos_only, /, pos_or_kwd, *, kwd_only):
    pass
This is arguably one of the clearest expressions of PEP 8's central insight: code is written once but read repeatedly, often by people other than the original author. Explicit parameter categories make the contract between function author and caller unmistakable. (PEP 8, van Rossum, Warsaw, Coghlan — peps.python.org)
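To see the contract in action, here is how calls against that shape behave (a body is added purely so the results can be inspected):

```python
def full_signature(pos_only, /, pos_or_kwd, *, kwd_only):
    """Return the three arguments as a tuple for inspection."""
    return pos_only, pos_or_kwd, kwd_only

# Valid -- each parameter supplied in an allowed style:
print(full_signature(1, 2, kwd_only=3))             # (1, 2, 3)
print(full_signature(1, pos_or_kwd=2, kwd_only=3))  # (1, 2, 3)

# Invalid -- pos_only by keyword, or kwd_only by position:
try:
    full_signature(pos_only=1, pos_or_kwd=2, kwd_only=3)
except TypeError:
    print("pos_only cannot be passed by keyword")
try:
    full_signature(1, 2, 3)
except TypeError:
    print("kwd_only cannot be passed by position")
```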
PEP 3107 and PEP 484: Function Annotations and Type Hints
PEP 3107, authored by Collin Winter and Tony Lownds and finalized for Python 3.0, introduced the syntax for function annotations — the ability to attach arbitrary metadata to parameters and return values. The annotations themselves carried no enforced semantics; they were simply a way to associate Python expressions with parts of a function definition.
def calculate_area(length: float, width: float) -> float:
    """Calculate the area of a rectangle."""
    return length * width
This laid the groundwork for PEP 484, authored by Guido van Rossum, Jukka Lehtosalo, and Lukasz Langa, which standardized type hints in Python 3.5. Type hints use the annotation syntax from PEP 3107 to express expected types, enabling static analysis tools like mypy to catch errors before runtime.
In an interview published on ODBMS Industry Watch in October 2025, van Rossum offered a practical guideline for when type hints become essential: he suggested that once a codebase reaches roughly ten thousand lines, "it's hard to maintain code quality without type hints." He also added that he would not push them on beginners, since dynamic tests work well enough for smaller projects.
Here is a more realistic example that combines type hints with several of the parameter features discussed above:
def scan_ports(
    target: str,
    /,
    start_port: int = 1,
    end_port: int = 1024,
    *,
    timeout: float = 2.0,
    verbose: bool = False
) -> list[int]:
    """Scan a range of ports on a target host.

    Args:
        target: IP address or hostname (positional-only).
        start_port: Beginning of port range.
        end_port: End of port range.
        timeout: Connection timeout in seconds (keyword-only).
        verbose: Whether to print progress (keyword-only).

    Returns:
        List of open port numbers.
    """
    open_ports = []
    for port in range(start_port, end_port + 1):
        if verbose:
            print(f"Scanning {target}:{port}...")
        # actual scanning logic would go here
    return open_ports
This single function definition uses positional-only parameters, default values, keyword-only parameters, type annotations, and a proper docstring following PEP 257 conventions. That is the full power of def on display.
PEP 318: Decorators and the def Statement
PEP 318, which landed in Python 2.4, introduced decorator syntax — the @ symbol placed before a def statement. The PEP explains the key problem with the old style: applying a transformation after a long function body puts critical behavioral context far away from the function's signature, making it harder to understand the function at a glance. (PEP 318, Smith et al. — peps.python.org)
Before decorators, you had to write the function name three times for a single logical declaration:
# Pre-PEP 318 style
def my_method(cls):
    # ... long method body ...
    pass

my_method = classmethod(my_method)
With PEP 318, that became:
@classmethod
def my_method(cls):
    # ... long method body ...
    pass
The decorator syntax acknowledges something fundamental about def: because it creates a function object, that object can be passed to any callable. A decorator is simply a function that receives the decorated function as its argument and returns a replacement (or the same function, modified). This is closures and first-class functions working together:
import functools
import time

def measure_time(func):
    """Decorator that prints execution time of the wrapped function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.4f} seconds")
        return result
    return wrapper

@measure_time
def process_data(records):
    """Simulate processing a batch of records."""
    time.sleep(0.5)  # simulate work
    return len(records)

process_data([1, 2, 3, 4, 5])
# Output: process_data took 0.5003 seconds
Notice the use of functools.wraps — this preserves the original function's __name__ and __doc__ attributes on the wrapper, which is essential for debugging and introspection. Always use it when writing decorators.
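To see concretely what functools.wraps preserves, compare a wrapper written without it (the decorator names here are purely for illustration):

```python
import functools

def naive_decorator(func):
    # No functools.wraps: metadata of the original function is lost.
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

def wrapped_decorator(func):
    # functools.wraps copies __name__, __doc__, and more onto the wrapper.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@naive_decorator
def first():
    """Docstring of first."""

@wrapped_decorator
def second():
    """Docstring of second."""

print(first.__name__, first.__doc__)    # wrapper None
print(second.__name__, second.__doc__)  # second Docstring of second.
```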
PEP 257: Documenting Your Functions
PEP 257, authored by David Goodger and Guido van Rossum, establishes the conventions for writing docstrings. A docstring is a string literal that occurs as the first statement in a function (or module, class, or method) definition. It becomes the __doc__ attribute of that object and is accessible at runtime through help() and introspection.
PEP 257 specifies that one-line docstrings should prescribe the function's effect as a command — for example, "Return the pathname" rather than "Returns the pathname." For multi-line docstrings, the summary line should be followed by a blank line and then a more detailed description. PEP 8 further reinforces that docstrings should appear for all public functions, classes, and methods, with the docstring appearing after the def line.
def validate_email(address):
    """Validate an email address with basic structural checks.

    Checks for the presence of an @ symbol, a non-empty
    local part, and a domain with at least one dot. Does not verify
    that the domain actually exists or that the address conforms
    to the full RFC 5322 specification.

    Args:
        address: The email address string to validate.

    Returns:
        True if the address passes basic validation, False otherwise.
    """
    if "@" not in address:
        return False
    local, _, domain = address.rpartition("@")
    return bool(local) and "." in domain
Well-written docstrings transform a function from a black box into a contract. They tell the caller what the function expects, what it returns, and what assumptions it makes — without requiring anyone to read the implementation.
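Because the docstring becomes the __doc__ attribute, that contract is available at runtime. A quick check, using a trimmed version of the validate_email function from above:

```python
def validate_email(address):
    """Validate an email address with basic structural checks."""
    if "@" not in address:
        return False
    local, _, domain = address.rpartition("@")
    return bool(local) and "." in domain

# The first statement of the body became the __doc__ attribute:
print(validate_email.__doc__)
# Validate an email address with basic structural checks.

# help(validate_email) renders the same text interactively.
```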
def vs. lambda: When to Use Which
Python also offers lambda for creating small, anonymous functions. PEP 8 is direct on this point: always use a def statement rather than assigning a lambda expression to a name. The reason is that a named function produces a proper __name__ attribute, which makes tracebacks and string representations far more useful. And once you assign a lambda to a name, you eliminate the only real advantage a lambda has over a def — the ability to live inline inside a larger expression. (PEP 8, "Programming Recommendations" — peps.python.org)
# PEP 8 says don't do this:
square = lambda x: x ** 2

# Do this instead:
def square(x):
    """Return the square of x."""
    return x ** 2
Lambdas are appropriate when you need a small throwaway function inline, such as a sorting key:
users = [("Alice", 30), ("Bob", 25), ("Charlie", 35)]
users.sort(key=lambda user: user[1])
But for anything you would name, test, or document, def is the right tool.
Nested Functions, Closures, and Scope
Because def is an executable statement, you can use it inside other functions to create nested (or local) functions. These inner functions have access to the enclosing function's local variables through closure:
def make_multiplier(factor):
    """Return a function that multiplies its argument by factor."""
    def multiplier(x):
        return x * factor
    return multiplier

double = make_multiplier(2)
triple = make_multiplier(3)

print(double(5))  # 10
print(triple(5))  # 15
The inner multiplier function "closes over" the factor variable from the enclosing scope. Each call to make_multiplier creates a new closure with its own captured value of factor. This is the mechanism that makes decorators, callback registries, and many other patterns possible in Python.
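One subtlety worth internalizing: closures capture variables, not the values those variables held at definition time. The classic loop-variable demonstration:

```python
# Each lambda closes over the same comprehension variable i,
# so all three see its final value:
late_bound = [lambda: i for i in range(3)]
print([f() for f in late_bound])   # [2, 2, 2]

# Binding i as a default parameter value captures it at definition time:
early_bound = [lambda i=i: i for i in range(3)]
print([f() for f in early_bound])  # [0, 1, 2]
```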
When def Becomes a Generator
There is one more transformation def can perform that changes its nature entirely: the moment you place a yield statement anywhere in a function body, the function stops being a regular callable and becomes a generator function. Calling it no longer runs the body — it returns a generator object that will run the body lazily, pausing at each yield and resuming from that point on the next call to next().
def read_log_lines(path, *, keyword=None):
    """Yield lines from a log file, optionally filtered by keyword.

    This is a generator function: it yields one line at a time
    and never loads the entire file into memory.

    Args:
        path: Path to the log file.
        keyword: If provided, only yield lines containing this string.
    """
    with open(path, "r") as f:
        for line in f:
            if keyword is None or keyword in line:
                yield line.rstrip()

# The function body does not run until we iterate:
for line in read_log_lines("/var/log/auth.log", keyword="FAILED"):
    print(line)
This matters because the signature of a generator function looks exactly like any other function. You use def, you define parameters with the full range of positional-only, keyword-only, and default-value syntax, and you add a docstring and type hints. The only behavioral difference comes from the yield inside the body. The return type annotation for a generator function is Generator[YieldType, SendType, ReturnType] from the typing module, or the simpler Iterator[YieldType] when you only care about the yielded values.
Generators are not a separate topic from def — they are a natural extension of it. If you find yourself building a list just to return it from a function that will be iterated once, consider whether a generator function would be more memory-efficient. For large files, streams, or infinite sequences, the difference is significant.
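That memory point can be made concrete. The two sketch functions below (illustrative names) compute the same values, but the generator version never materializes the full list:

```python
def squares_list(n):
    """Build and return the full list of squares -- O(n) memory."""
    result = []
    for i in range(n):
        result.append(i * i)
    return result

def squares_gen(n):
    """Yield squares one at a time -- O(1) memory."""
    for i in range(n):
        yield i * i

# Same values either way, but the generator holds only one at a time:
print(sum(squares_list(1000)) == sum(squares_gen(1000)))  # True

# Calling a generator function runs no body code; it returns a generator:
gen = squares_gen(3)
print(next(gen), next(gen), next(gen))  # 0 1 4
```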
The Cognitive Question: When Should Code Become a Function?
Technical knowledge of def is only half the skill. The other half is the judgment to know when a block of code deserves to become a named function. This question rarely gets asked in tutorials, but it is where experience separates maintainable code from the kind that slowly becomes impossible to reason about.
There are patterns that signal it is time to extract a function:
The comment problem. If you find yourself writing a comment that says what a block does — rather than why it makes a specific choice — that comment is a function name waiting to be written. A function called is_rate_limited() communicates intent far more clearly than a comment that says # check if user exceeded limit above an inline expression.
The duplication threshold. The three-strikes rule is a practical heuristic: once you write the same logic a third time, it belongs in a function. But duplication is not just about identical lines — it is also about identical reasoning. If you are applying the same conditional logic in two different places, even if the variable names differ, the underlying abstraction may deserve a name.
The testability signal. If a piece of logic is difficult to test in isolation because it is embedded in a larger function, that is a strong signal to extract it. Functions that do one thing are inherently easier to test. A function that opens a file, parses its contents, validates each line, and writes results to a database is doing four things and should probably be four functions.
The length heuristic with nuance. Function length guidelines (such as "no more than 20 lines") are useful starting points, but they are proxies for the real goal: a function should fit within a single screen of context and do one conceptually coherent thing. Some domain-specific functions are legitimately longer; some short functions are still doing too much.
# Before: one function doing too many things
def process_user_upload(raw_csv, db_connection):
    lines = raw_csv.strip().split("\n")
    records = []
    for line in lines:
        parts = line.split(",")
        if len(parts) != 3:
            continue
        name, email, role = parts
        if "@" not in email:
            continue
        records.append({"name": name.strip(), "email": email.strip(), "role": role.strip()})
    for record in records:
        db_connection.execute(
            "INSERT INTO users (name, email, role) VALUES (?, ?, ?)",
            (record["name"], record["email"], record["role"])
        )
# After: each concern has a name and a boundary
def parse_csv_lines(raw_csv):
    """Parse raw CSV text into a list of comma-split field lists."""
    return [line.split(",") for line in raw_csv.strip().split("\n")]

def validate_user_row(parts):
    """Return True if a row has exactly 3 fields and a valid email."""
    if len(parts) != 3:
        return False
    _, email, _ = parts
    return "@" in email

def row_to_record(parts):
    """Convert a validated row to a user record dict."""
    name, email, role = parts
    return {"name": name.strip(), "email": email.strip(), "role": role.strip()}

def insert_user(db_connection, record):
    """Insert a single user record into the database."""
    db_connection.execute(
        "INSERT INTO users (name, email, role) VALUES (?, ?, ?)",
        (record["name"], record["email"], record["role"])
    )

def process_user_upload(raw_csv, db_connection):
    """Orchestrate parsing, validation, and insertion of user CSV data."""
    for parts in parse_csv_lines(raw_csv):
        if validate_user_row(parts):
            insert_user(db_connection, row_to_record(parts))
The refactored version is longer in raw line count. But each function can be understood, tested, and modified independently. The orchestrating function reads like a summary of what the process does. That is the goal: code that communicates its own structure.
Testing Your Functions: What def Makes Possible
One of the most underappreciated consequences of defining functions well is how dramatically it improves testability. A well-designed function — with a clear signature, no hidden global state, and a documented return value — is trivially testable with Python's standard unittest module or the widely-used pytest library.
import unittest

from your_module import validate_user_row, row_to_record

class TestValidateUserRow(unittest.TestCase):
    def test_valid_row_returns_true(self):
        self.assertTrue(validate_user_row(["Alice", "[email protected]", "admin"]))

    def test_missing_field_returns_false(self):
        self.assertFalse(validate_user_row(["Alice", "[email protected]"]))

    def test_invalid_email_returns_false(self):
        self.assertFalse(validate_user_row(["Alice", "not-an-email", "admin"]))

class TestRowToRecord(unittest.TestCase):
    def test_strips_whitespace(self):
        record = row_to_record([" Alice ", " [email protected] ", " admin "])
        self.assertEqual(record["name"], "Alice")
        self.assertEqual(record["email"], "[email protected]")

if __name__ == "__main__":
    unittest.main()
Notice that each test function is itself defined with def and follows exactly the same conventions: a clear name that describes what it is asserting, a docstring if needed, and no side effects beyond the assertion. The cognitive model that makes your functions testable is the same model that makes them readable. They reinforce each other.
Functions that take global state as an implicit input — reading from a module-level variable instead of a parameter — are difficult to test in isolation because you have to mutate global state to change their behavior. Passing all inputs as explicit parameters is not just a style choice; it is what makes a function a true unit that can be tested independently.
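A minimal before-and-after sketch of that principle, with illustrative names:

```python
# Hard to test: behavior depends on module-level state that a test
# would have to mutate.
DISCOUNT_RATE = 0.1

def price_with_discount_implicit(price):
    return price * (1 - DISCOUNT_RATE)

# Easy to test: every input is an explicit parameter.
def price_with_discount(price, discount_rate):
    return price * (1 - discount_rate)

# The explicit version can be exercised with any rate, no globals touched:
print(price_with_discount(100, 0.1))   # 90.0
print(price_with_discount(100, 0.25))  # 75.0
```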
The __annotations__ Dictionary and Runtime Introspection
Type hints in Python are not erased at runtime. Every annotation you write is stored in the function's __annotations__ dictionary, which is accessible at runtime just like __name__ and __doc__. Static checkers such as mypy and pyright read annotations from your source code, while runtime frameworks like FastAPI inspect __annotations__ directly to do their work.
def scan_ports(
    target: str,
    /,
    start_port: int = 1,
    end_port: int = 1024,
    *,
    timeout: float = 2.0,
    verbose: bool = False
) -> list[int]:
    """Scan a range of ports on a target host."""
    ...

print(scan_ports.__annotations__)
# {'target': <class 'str'>, 'start_port': <class 'int'>,
#  'end_port': <class 'int'>, 'timeout': <class 'float'>,
#  'verbose': <class 'bool'>, 'return': list[int]}
Note that positional-only parameters are included in __annotations__ just like other parameters — the distinction only affects calling convention, not the annotation mechanism. This runtime availability of type information is what enables frameworks like FastAPI to automatically validate request payloads, generate API documentation, and produce OpenAPI schemas directly from function signatures. When you understand that def creates a fully introspectable object, you start to see why well-typed function signatures are a form of executable specification.
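Beyond __annotations__, the standard library's inspect module exposes each parameter's kind at runtime, which makes the positional-only and keyword-only categories visible to tools. A small sketch using a trimmed-down scan_ports signature:

```python
import inspect

def scan_ports(target: str, /, start_port: int = 1, *, timeout: float = 2.0) -> list:
    """Trimmed-down signature for demonstration only."""
    ...

# Each parameter carries its calling-convention category:
for name, param in inspect.signature(scan_ports).parameters.items():
    print(name, param.kind.name)
# target POSITIONAL_ONLY
# start_port POSITIONAL_OR_KEYWORD
# timeout KEYWORD_ONLY
```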
Putting It All Together
Let us build a function that uses many of the concepts covered in this article — positional-only and keyword-only parameters, type hints, a proper docstring, a default mutable argument handled safely, and a decorator:
import functools
import logging

logger = logging.getLogger(__name__)

def log_calls(func):
    """Decorator that logs every call to the wrapped function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logger.info(f"Calling {func.__name__}")
        result = func(*args, **kwargs)
        logger.info(f"{func.__name__} returned {result!r}")
        return result
    return wrapper

@log_calls
def analyze_traffic(
    pcap_file: str,
    /,
    protocol: str = "TCP",
    *,
    max_packets: int = 1000,
    suspicious_ips: list[str] | None = None
) -> dict[str, int]:
    """Analyze network traffic from a packet capture file.

    Args:
        pcap_file: Path to the .pcap file (positional-only).
        protocol: Protocol filter (default "TCP").
        max_packets: Maximum packets to inspect (keyword-only).
        suspicious_ips: Optional list of IPs to flag (keyword-only).

    Returns:
        Dictionary mapping event types to their counts.
    """
    if suspicious_ips is None:
        suspicious_ips = []
    results = {"total": 0, "flagged": 0, "dropped": 0}
    # In a real implementation, you would parse the pcap file here.
    # This is a simplified demonstration.
    results["total"] = max_packets
    results["flagged"] = len(suspicious_ips) * 10
    return results
Every feature of this function is intentional. The pcap_file parameter is positional-only because its name is an implementation detail — callers should not depend on it. The protocol parameter has a sensible default and can be passed either way. The max_packets and suspicious_ips parameters are keyword-only because they are configuration options that should always be passed explicitly for readability. The mutable default (suspicious_ips) is handled with the None sentinel pattern. And the decorator adds logging without modifying the function body.
Summary of Related PEPs
The following PEPs have shaped how def works in modern Python:
PEP 8 (van Rossum, Warsaw, Coghlan) — Style Guide for Python Code. Establishes naming conventions for functions (lowercase with underscores), blank line rules around definitions, and the recommendation to prefer def over lambda assignment.
PEP 20 (Peters, 2004) — The Zen of Python. The philosophical foundation that drives function design: readability, explicitness, and simplicity.
PEP 255 (Schemenauer, Peters, Hetland, 2001; Python 2.2) — Simple Generators. Introduced the yield statement, transforming ordinary def-defined functions into lazy generator objects without requiring any new syntax beyond the single keyword.
PEP 257 (Goodger, van Rossum) — Docstring Conventions. Defines how to write documentation strings for functions, including one-line and multi-line formats.
PEP 318 (Smith, Jewett, Montanaro, Baxter, 2003; Python 2.4) — Decorators for Functions and Methods. Introduced the @decorator syntax that transforms how functions are defined and composed.
PEP 3102 (Talin, 2006; Python 3.0) — Keyword-Only Arguments. Added the ability to declare parameters that must be passed by keyword using the * separator.
PEP 3107 (Winter, Lownds, 2006; Python 3.0) — Function Annotations. Introduced the syntax for annotating parameter and return types.
PEP 484 (van Rossum, Lehtosalo, Langa, 2014; Python 3.5) — Type Hints. Standardized how annotations are used for static type checking.
PEP 570 (Hastings, Galindo, Corchero, Vander Weele, 2018; Python 3.8) — Positional-Only Parameters. Introduced the / separator to declare parameters that must be passed by position.
Final Thoughts
The def keyword is where every Python program begins to take shape. It is the point where you move from writing instructions to designing abstractions. From the simple one-liner that returns a greeting to the fully annotated, decorated, scope-controlled function that anchors a production system, def scales with you.
Understanding the PEPs behind it — how the parameter system evolved, why decorators live above the function body, what docstrings are actually for — is not academic trivia. It is the knowledge that lets you write functions that are correct, maintainable, and a pleasure for other developers to use. And that is the standard Python CodeCrack holds every article to: not copy-paste tutorials, but actual comprehension.