Real World Examples of Python Decorators for Logging and Auth

Decorators become valuable when they solve concrete problems. The two areas where decorators appear in nearly every production Python codebase are logging and authentication. Logging decorators centralize how function calls are recorded so that individual functions stay focused on their own logic. Authentication decorators centralize credential verification so that route handlers never contain auth boilerplate. This article provides six production-grade decorator implementations, three for logging and three for auth, each with complete code you can copy directly into a project. Every code example has been verified against the Python 3.8+ standard library documentation.

Every decorator in this article follows the standard well-behaved template: it uses @functools.wraps(func) to preserve metadata, accepts any function signature with *args, **kwargs, and explicitly returns the original function's result. As Brett Slatkin writes in Effective Python (Item 38), failing to use @functools.wraps breaks introspection tools, serialization with pickle, and the help() function. The Python documentation for functools.wraps confirms it copies __name__, __qualname__, __module__, __annotations__, __type_params__ (added in Python 3.12), and __doc__ from the wrapped function. The focus here is on the domain-specific logic inside the wrapper, not on the decorator pattern itself.

Logging Decorators

1. Structured Call Logger

This decorator logs the function name, arguments, return value, and execution time using Python's logging module. It produces structured output that log aggregation systems like the ELK stack or Datadog can parse and index. Timing uses time.perf_counter(), which the Python documentation describes as "a clock with the highest available resolution to measure a short duration." As noted in PEP 418, perf_counter() was introduced specifically to replace the platform-inconsistent time.clock() and provides monotonic, non-adjustable timing across all operating systems.

import functools
import logging
import time

logger = logging.getLogger(__name__)

def log_calls(func):
    """Log function entry, exit, arguments, return value, and duration."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        call_args = [repr(a) for a in args]
        call_args += [f"{k}={v!r}" for k, v in kwargs.items()]
        signature = ", ".join(call_args)

        logger.info(
            "CALL %s(%s)",
            func.__name__, signature
        )

        start = time.perf_counter()
        try:
            result = func(*args, **kwargs)
        except Exception as exc:
            elapsed = time.perf_counter() - start
            logger.error(
                "EXCEPTION %s after %.4fs: %s",
                func.__name__, elapsed, exc
            )
            raise
        elapsed = time.perf_counter() - start

        logger.info(
            "RETURN %s -> %r (%.4fs)",
            func.__name__, result, elapsed
        )
        return result
    return wrapper

@log_calls
def calculate_shipping(weight, zone, expedited=False):
    """Calculate shipping cost based on weight and zone."""
    base = weight * 0.5 * zone
    return base * 1.5 if expedited else base

calculate_shipping(12.5, 3, expedited=True)
# INFO: CALL calculate_shipping(12.5, 3, expedited=True)
# INFO: RETURN calculate_shipping -> 28.125 (0.0001s)

The try/except block ensures that exceptions are logged with the elapsed time before being re-raised. This means failed calls are recorded in the log with full context even when they propagate up the stack. The decorator never swallows exceptions. It logs them and re-raises them unchanged.
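
One practical note: the commented output above assumes the logging system has a handler configured. With no configuration, Python's last-resort handler only emits WARNING and above, so the INFO messages from @log_calls are silently dropped. A quick development setup looks like this:

```python
import logging

# Without any handler configured, logger.info() output is discarded
# (the default last-resort handler only shows WARNING and above).
# basicConfig attaches a stream handler to the root logger for local runs.
logging.basicConfig(
    level=logging.INFO,
    format="%(levelname)s: %(message)s",
)
```

In production you would replace basicConfig with a proper handler and formatter configuration, but for trying the examples in this article it is all you need.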

2. Configurable Log Level

A parameterized version that lets the caller choose which log level to use. Critical operations can log at WARNING while routine operations log at DEBUG.

import functools
import logging
import time

logger = logging.getLogger(__name__)

def log_calls(func=None, *, level=logging.INFO, log_args=True, log_result=True):
    """Configurable logging decorator.

    Args:
        level: Logging level (DEBUG, INFO, WARNING, etc.).
        log_args: Whether to include arguments in the log message.
        log_result: Whether to include the return value.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if log_args:
                call_args = [repr(a) for a in args]
                call_args += [f"{k}={v!r}" for k, v in kwargs.items()]
                logger.log(level, "CALL %s(%s)", func.__name__, ", ".join(call_args))
            else:
                logger.log(level, "CALL %s", func.__name__)

            start = time.perf_counter()
            result = func(*args, **kwargs)
            elapsed = time.perf_counter() - start

            if log_result:
                logger.log(level, "RETURN %s -> %r (%.4fs)", func.__name__, result, elapsed)
            else:
                logger.log(level, "RETURN %s (%.4fs)", func.__name__, elapsed)
            return result
        return wrapper

    if func is not None:
        return decorator(func)
    return decorator

# Usage: all three forms work
@log_calls
def routine_task():
    """Runs frequently, uses default INFO level."""
    return "done"

@log_calls(level=logging.DEBUG, log_result=False)
def process_batch(items):
    """Debug-level logging, omit return value from logs."""
    return [item * 2 for item in items]

@log_calls(level=logging.WARNING, log_args=False)
def delete_records(table):
    """Sensitive operation: warn level, hide arguments."""
    return f"Deleted from {table}"

The log_args=False option is important for functions that handle sensitive data. Logging arguments for a function that receives passwords, tokens, or personal information creates a security risk in the log files themselves. The decorator lets you disable argument logging on a per-function basis.

3. Audit Trail Decorator

An audit trail records not just what happened, but who did it and when. This decorator captures a user identifier and writes a timestamped record suitable for compliance logging.

import functools
import json
import logging
from datetime import datetime, timezone

audit_logger = logging.getLogger("audit")

def audit_trail(action):
    """Record an audit event with user, action, timestamp, and outcome."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(user, *args, **kwargs):
            record = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "user": user.get("username", "unknown"),
                "action": action,
                "function": func.__name__,
                "status": "success",
            }
            try:
                result = func(user, *args, **kwargs)
                return result
            except Exception as exc:
                record["status"] = "failure"
                record["error"] = str(exc)
                raise
            finally:
                audit_logger.info(json.dumps(record))
        return wrapper
    return decorator

@audit_trail("delete_account")
def delete_user_account(user, account_id):
    """Permanently delete a user account."""
    if account_id == "protected":
        raise ValueError("Cannot delete protected account")
    return f"Account {account_id} deleted"

admin = {"username": "kandi", "role": "admin"}

delete_user_account(admin, "acc_12345")
# audit log: {"timestamp":"2026-03-29T22:00:00+00:00","user":"kandi",
#   "action":"delete_account","function":"delete_user_account","status":"success"}

try:
    delete_user_account(admin, "protected")
except ValueError:
    pass
# audit log: {"timestamp":"...","user":"kandi","action":"delete_account",
#   "function":"delete_user_account","status":"failure",
#   "error":"Cannot delete protected account"}

The finally block ensures the audit record is written whether the function succeeds or fails. Using a dedicated "audit" logger with its own handler lets you route audit records to a separate file or service from general application logs, which is a common compliance requirement.

Pro Tip

In production, configure the audit logger to write JSON-formatted records to a separate log stream. Tools like Elasticsearch, Splunk, or CloudWatch Logs can then index, search, and alert on audit events without parsing unstructured text.
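
A minimal sketch of that configuration follows; the audit.log filename and the bare-message format are illustrative choices, and the "audit" logger name matches the one used by @audit_trail above:

```python
import json
import logging

# Route audit records to their own file, separate from application logs.
audit_logger = logging.getLogger("audit")
audit_logger.setLevel(logging.INFO)
audit_logger.propagate = False  # keep audit records out of the root logger

handler = logging.FileHandler("audit.log")
handler.setFormatter(logging.Formatter("%(message)s"))  # records are already JSON
audit_logger.addHandler(handler)

audit_logger.info(json.dumps({"user": "kandi", "action": "demo", "status": "success"}))
```

Setting propagate to False is what keeps audit records from also appearing in the general application log stream.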

Authentication and Authorization Decorators

4. Token Authentication (Framework-Agnostic)

This decorator validates a bearer token from a request context before allowing the function to execute. It is written to work with any framework by accepting the token as a keyword argument. Framework-specific adapters can extract the token from headers before passing it in.

import functools
import hmac
import hashlib
import time

# In production, load from environment variable
SECRET_KEY = b"your-secret-key-here"

def _validate_token(token):
    """Validate a simple HMAC-based token. Returns user dict or None."""
    try:
        payload, signature = token.rsplit(".", 1)
        expected = hmac.new(SECRET_KEY, payload.encode(), digestmod=hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):
            return None

        parts = payload.split(":")
        username = parts[0]
        expires = float(parts[1])
        if time.time() > expires:
            return None

        return {"username": username, "token_payload": payload}
    except (ValueError, IndexError):
        return None

def require_token(func):
    """Reject calls without a valid bearer token."""
    @functools.wraps(func)
    def wrapper(*args, token=None, **kwargs):
        if token is None:
            raise PermissionError("Missing authentication token")

        user = _validate_token(token)
        if user is None:
            raise PermissionError("Invalid or expired token")

        return func(*args, user=user, **kwargs)
    return wrapper

@require_token
def get_account_balance(account_id, user=None):
    """Return the balance for the given account."""
    return {"account": account_id, "balance": 4250.00, "viewer": user["username"]}

The decorator intercepts the token keyword argument, validates it, and replaces it with a user dictionary that the function can use. If validation fails, the function never executes.

Using hmac.compare_digest instead of == for token comparison prevents timing attacks. The Python hmac documentation explains that this function "prevents timing analysis by avoiding content-based short circuiting." A standard == comparison stops at the first mismatched character, which means an attacker can measure response times to determine how many leading characters of a token are correct and reconstruct the full value incrementally. The digestmod parameter is passed as a keyword argument because, as the hmac.new() documentation notes, the parameter has been required since Python 3.8 and keyword usage avoids ambiguity when no initial message is provided.
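
The article does not show how such tokens are minted. A hypothetical helper compatible with the username:expiry.signature format that _validate_token expects might look like this (the one-hour default lifetime is an arbitrary choice):

```python
import hashlib
import hmac
import time

SECRET_KEY = b"your-secret-key-here"  # same placeholder as above; load from env in production

def make_token(username, ttl_seconds=3600):
    """Mint a token in the username:expiry.signature format used by _validate_token."""
    expires = time.time() + ttl_seconds
    payload = f"{username}:{expires}"
    signature = hmac.new(SECRET_KEY, payload.encode(), digestmod=hashlib.sha256).hexdigest()
    return f"{payload}.{signature}"

token = make_token("kandi")
```

A token minted this way round-trips through _validate_token as long as both sides share the same SECRET_KEY.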

5. Role-Based Access Control

Authorization verifies that an authenticated user has permission to perform a specific action. This decorator accepts a required role and checks it against the user's role list.

import functools

def require_role(*allowed_roles):
    """Restrict function access to users with specified roles."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(user, *args, **kwargs):
            user_roles = set(user.get("roles", []))
            required = set(allowed_roles)

            if not user_roles & required:
                raise PermissionError(
                    f"{user.get('username', 'unknown')} lacks required role. "
                    f"Needs one of: {', '.join(allowed_roles)}"
                )
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin", "moderator")
def ban_user(user, target_username, reason):
    """Ban a user from the platform."""
    return f"{target_username} banned by {user['username']}: {reason}"

admin = {"username": "kandi", "roles": ["admin", "editor"]}
viewer = {"username": "guest", "roles": ["viewer"]}

print(ban_user(admin, "spammer42", "Terms violation"))
# spammer42 banned by kandi: Terms violation

try:
    ban_user(viewer, "spammer42", "Terms violation")
except PermissionError as e:
    print(e)
# guest lacks required role. Needs one of: admin, moderator

The decorator accepts multiple roles using *allowed_roles and uses set intersection (&) to check whether the user has at least one of the required roles. This is a common pattern for functions where either admins or moderators should have access, but regular users should not.
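
If a function should instead require every listed role rather than any one of them, a hypothetical variant only needs to swap the intersection for a set-difference check:

```python
import functools

def require_all_roles(*required_roles):
    """Restrict access to users who hold every listed role (hypothetical variant)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(user, *args, **kwargs):
            # Any role in required_roles that the user lacks blocks the call.
            missing = set(required_roles) - set(user.get("roles", []))
            if missing:
                raise PermissionError(
                    f"{user.get('username', 'unknown')} is missing roles: "
                    f"{', '.join(sorted(missing))}"
                )
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@require_all_roles("admin", "auditor")
def export_audit_log(user):
    return f"exported by {user['username']}"
```

The error message names exactly which roles are missing, which makes permission failures much easier to debug than a generic "access denied".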

6. API Key Validation

For services that authenticate via API keys rather than user tokens, this decorator validates a key against a registry and attaches the client identity to the request.

import functools

# In production, load from database or secrets manager
API_KEY_REGISTRY = {
    "sk_live_abc123": {"client": "AcmeCorp", "tier": "enterprise"},
    "sk_live_def456": {"client": "StartupInc", "tier": "free"},
}

def require_api_key(func):
    """Validate API key and inject client info."""
    @functools.wraps(func)
    def wrapper(*args, api_key=None, **kwargs):
        if api_key is None:
            raise PermissionError("Missing API key")

        client = API_KEY_REGISTRY.get(api_key)
        if client is None:
            raise PermissionError("Invalid API key")

        return func(*args, client=client, **kwargs)
    return wrapper

@require_api_key
def create_invoice(amount, currency, client=None):
    """Create an invoice for the authenticated client."""
    return {
        "client": client["client"],
        "tier": client["tier"],
        "amount": amount,
        "currency": currency,
    }

result = create_invoice(500, "USD", api_key="sk_live_abc123")
print(result)
# {'client': 'AcmeCorp', 'tier': 'enterprise', 'amount': 500, 'currency': 'USD'}

The decorator swaps the api_key argument for a client dictionary. The function never handles raw API keys. It only works with validated client information. This separation prevents key leakage into business logic and makes the function easier to test with mock client objects.
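
Because every decorator here applies @functools.wraps, the undecorated function stays reachable through the __wrapped__ attribute that wraps sets, so unit tests can bypass auth entirely. A self-contained sketch (the decorator mirrors require_api_key above; the registry contents are illustrative):

```python
import functools

REGISTRY = {"sk_test_key": {"client": "TestCo", "tier": "free"}}

def require_api_key(func):
    @functools.wraps(func)
    def wrapper(*args, api_key=None, **kwargs):
        client = REGISTRY.get(api_key)
        if client is None:
            raise PermissionError("Invalid API key")
        return func(*args, client=client, **kwargs)
    return wrapper

@require_api_key
def create_invoice(amount, client=None):
    return {"client": client["client"], "amount": amount}

# In tests, call the original function directly with a mock client:
mock_client = {"client": "MockCorp", "tier": "test"}
result = create_invoice.__wrapped__(100, client=mock_client)
```

The decorated path still enforces auth; only the explicit __wrapped__ access skips it, which keeps the bypass obvious in test code.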

Stacking Logging and Auth Together

The real power of these decorators emerges when they are combined. The correct stacking order is authentication outermost, then authorization, then the audit trail, with general logging innermost. This ensures unauthenticated requests are rejected before anything is logged, and the logging decorator measures only the function's execution time without including auth overhead. As the Real Python decorator guide explains, stacked decorators are applied bottom-up at definition time but execute top-down at call time, meaning @require_role on top is the first check a caller encounters. Getting this order wrong is a common source of security gaps in production applications.

@require_role("admin")
@audit_trail("purge_logs")
@log_calls(level=logging.WARNING, log_args=False)
def purge_system_logs(user, days_to_keep=30):
    """Delete system logs older than the specified number of days."""
    return f"Purged logs older than {days_to_keep} days"

admin = {"username": "kandi", "roles": ["admin"]}
purge_system_logs(admin, days_to_keep=7)

# Execution order:
# 1. require_role checks that user has "admin" role
# 2. audit_trail records the action with user and timestamp
# 3. log_calls logs the call at WARNING level (args hidden)
# 4. purge_system_logs runs and returns its result
# 5. log_calls logs the return value and timing
# 6. audit_trail writes the audit record (success/failure)

Warning

If you reverse the order and place @log_calls above @require_role, the logging decorator runs before the role check. Unauthorized users will have their attempts logged, potentially recording sensitive request data before the system rejects them. Always stack auth decorators above logging decorators.

Security Pitfalls When Building Auth Decorators

The decorator examples above follow security best practices, but minor deviations in how you write your own versions can introduce vulnerabilities. These are the errors that appear in real codebases and rarely show up in tutorials.

Swallowing exceptions in logging decorators. If a logging decorator catches an exception and forgets to re-raise it, the caller never sees the error. The @log_calls decorator above uses a bare raise inside the except block for exactly this reason. Replacing raise with return None silently converts every exception into a None return value, hiding bugs for weeks.

Using == instead of hmac.compare_digest(). Ordinary string comparison with == returns as soon as it finds the first mismatched character. An attacker can exploit the microsecond timing differences between comparisons that fail on the first byte versus the tenth byte to reconstruct a valid token character by character. The Python hmac documentation explicitly recommends compare_digest() for any verification routine involving digests.

Logging raw credentials. A logging decorator that records all arguments will write passwords, API keys, and tokens into log files in plain text. The configurable log_args=False parameter in the second decorator above exists for this reason. Any function that handles sensitive input should disable argument logging. Log files often end up in shared storage, monitoring dashboards, or third-party services where credential exposure creates a wider blast radius than the original authentication boundary intended.
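
A middle ground between logging everything and logging nothing is to redact only known-sensitive keys before formatting the log message. A hypothetical helper (the key list is illustrative and should be extended for your domain):

```python
SENSITIVE_KEYS = {"password", "token", "api_key", "secret"}  # illustrative list

def redact_kwargs(kwargs, sensitive=SENSITIVE_KEYS):
    """Return a copy of kwargs with sensitive values masked for logging."""
    return {
        k: "***REDACTED***" if k.lower() in sensitive else v
        for k, v in kwargs.items()
    }

safe = redact_kwargs({"username": "kandi", "password": "hunter2"})
```

A logging decorator can run its kwargs through a helper like this before building the log message, keeping argument logging useful without exposing credentials.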

Omitting @functools.wraps. Without @functools.wraps, the decorated function's __name__ becomes "wrapper", its __doc__ becomes the wrapper's docstring (or None), and its __module__ points to the decorator's module. This breaks Flask's URL routing (which uses __name__ for endpoint resolution), Sphinx documentation generation, and serialization with pickle. As the functools.wraps documentation states, the decorator copies __name__, __qualname__, __module__, __annotations__, __type_params__ (on Python 3.12+), and __doc__ to the wrapper, and updates the wrapper's __dict__.

Hardcoding secrets in source code. The SECRET_KEY in the token validation example uses a placeholder value. In production, this must be loaded from an environment variable or a secrets manager. Hardcoded secrets end up in version control, CI/CD logs, and container images, making them accessible to anyone with repository access.
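
A minimal loader that fails fast when the secret is absent might look like this (the DEMO_AUTH_SECRET variable name and the demo value are illustrative):

```python
import os

def load_secret_key(var_name="AUTH_SECRET_KEY"):
    """Load the HMAC secret from the environment; fail fast if it is missing."""
    value = os.environ.get(var_name)
    if not value:
        raise RuntimeError(f"{var_name} is not set; refusing to start without a secret")
    return value.encode()

# Demo only: in a real deployment the variable is set by the environment,
# never assigned in source code.
os.environ["DEMO_AUTH_SECRET"] = "demo-only-secret"
SECRET_KEY = load_secret_key("DEMO_AUTH_SECRET")
```

Raising at startup is deliberate: a missing secret should stop the service immediately rather than let it run with a default or empty key.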

Sources and Further Reading

The code in this article relies on standard library modules documented at docs.python.org. The following references provide the specific documentation and design rationale behind each implementation choice.

  • functools.wraps -- Copies __name__, __qualname__, __module__, __annotations__, __type_params__ (Python 3.12+), and __doc__ from the wrapped function to the wrapper.
  • time.perf_counter() -- High-resolution monotonic timer. Introduced in Python 3.3 via PEP 418 to replace the platform-inconsistent time.clock(). Used by the timeit module internally.
  • hmac module -- Implements HMAC as described in RFC 2104. The digestmod parameter has been required since Python 3.8.
  • hmac.compare_digest() -- Constant-time comparison function designed to prevent timing attacks. Also available as the alias secrets.compare_digest().
  • logging module -- Python's built-in logging framework. Supports hierarchical loggers, multiple handlers, and configurable formatting.
  • PEP 318 -- The original proposal that introduced the @decorator syntax in Python 2.4.
  • Effective Python, 3rd Edition by Brett Slatkin -- Item 38 covers why functools.wraps is essential for decorator correctness.

Key Takeaways

  1. Logging decorators centralize observation logic. The @log_calls decorator handles function entry, exit, argument recording, exception logging, and timing measurement in one place. Individual functions stay focused on their business logic without any logging code inside their bodies. Configurable parameters like level, log_args, and log_result let you tune the verbosity per function.
  2. Audit trail decorators record who did what and when. By capturing the user, action, timestamp, and outcome in a structured JSON format, the @audit_trail decorator produces records that compliance tools can index and query. Using a finally block ensures the record is written even when the function raises an exception.
  3. Authentication decorators verify identity before the function runs. The @require_token and @require_api_key decorators intercept credential arguments, validate them, and either inject a verified user/client object or reject the request. The protected function never handles raw credentials.
  4. Authorization decorators check permissions on verified identities. The @require_role decorator accepts one or more allowed roles and uses set intersection to determine whether the user qualifies. This keeps role-checking logic out of every function that needs access control.
  5. Stack auth above logging. The correct order from outermost to innermost is: route registration (if using a framework), authentication, authorization, audit trail, and general logging. This ensures that unauthenticated requests are rejected before they reach any logging or business logic layer.

These six decorators cover the two cross-cutting concerns that appear in virtually every production Python application. Each one follows the standard decorator template, each one uses @functools.wraps, and each one solves a specific, recurring problem. Copy them into your project, adapt them to your framework's request model, and the functions they protect can focus entirely on the work they were written to do.

How to Build Production Decorators: Step by Step

  1. Start with the standard decorator template. Import functools, define the outer function that receives func, define the inner wrapper with *args, **kwargs, apply @functools.wraps(func), and return the result. This template is covered in detail in the standard template for well-behaved Python decorators article.
  2. Add structured logging inside the wrapper. Use time.perf_counter() for timing, logging.getLogger(__name__) for the logger, and try/except with bare raise to log exceptions without swallowing them. Make log level and argument logging configurable with keyword parameters.
  3. Create a dedicated audit trail decorator. Capture user, action, timestamp, and outcome in a JSON record. Use a finally block so the record is written even on failure. Route output to a separate "audit" logger.
  4. Implement token validation as a decorator. Intercept the token kwarg, validate with hmac.new() and hmac.compare_digest(), check expiration, and inject the verified user object. For a framework-based token implementation, see the FastAPI OAuth2 authentication tutorial.
  5. Add role-based access control. Accept roles via *allowed_roles, use set intersection to check the user's roles, and raise PermissionError with a descriptive message on failure.
  6. Stack decorators in the correct order. Auth outermost, authorization next, audit trail, then logging innermost. For the full execution model of chained decorator execution order, see the dedicated article on how Python evaluates stacked decorators at definition time versus call time.

Frequently Asked Questions

How do I build a logging decorator that includes execution time and arguments?

Use Python's logging module inside the decorator wrapper. Capture time.perf_counter() before and after calling the function, then log the function name, arguments, return value, and elapsed time as a structured message. Using the logging module rather than print() allows you to control log levels, format output as JSON for log aggregation systems, and route messages to different handlers. For more on timing function execution with decorators, see the dedicated timing article.

How do I write a token authentication decorator without a framework?

The decorator intercepts a token keyword argument, validates it using hmac.new() with a shared secret key and hashlib.sha256, and checks expiration. If the token is valid, the decorator replaces the token argument with a verified user dictionary before calling the original function. Using hmac.compare_digest() instead of the == operator for signature comparison prevents timing attacks.

What is the difference between authentication and authorization decorators?

An authentication decorator verifies the caller's identity, confirming that they are who they claim to be. An authorization decorator checks whether an already-authenticated user has permission to perform a specific action, such as requiring an admin role. In a decorator stack, the authentication decorator should run before the authorization decorator.

Should logging decorators go above or below authentication decorators?

Logging decorators should go below authentication decorators in the stack (closer to the function). This means the authentication check runs first. If authentication fails, the request is rejected before the logging decorator runs, preventing unauthenticated requests from being recorded with potentially sensitive header data. For the full explanation of why order matters, see Python decorator stacking order.

Can I use the same decorator pattern for both sync Flask routes and async FastAPI routes?

The core pattern is the same: extract credentials, validate them, call the original function or reject the request. However, async frameworks like FastAPI use dependency injection (Depends()) rather than traditional function-wrapping decorators for auth. The logging decorator pattern works identically in both, but for async functions the wrapper must be defined with async def and use await when calling the original function.
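
As a sketch of that async variant, here is a trimmed log_calls counterpart; the fetch_report coroutine and its sleep are stand-ins for real async I/O:

```python
import asyncio
import functools
import logging
import time

logger = logging.getLogger(__name__)

def log_calls_async(func):
    """Async counterpart of log_calls: the wrapper must be async and await the call."""
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        logger.info("CALL %s", func.__name__)
        start = time.perf_counter()
        result = await func(*args, **kwargs)  # await instead of a plain call
        elapsed = time.perf_counter() - start
        logger.info("RETURN %s -> %r (%.4fs)", func.__name__, result, elapsed)
        return result
    return wrapper

@log_calls_async
async def fetch_report(report_id):
    await asyncio.sleep(0)  # stand-in for real async I/O
    return {"report": report_id}

result = asyncio.run(fetch_report("r1"))
```

Everything else from the sync version carries over unchanged: @functools.wraps still preserves metadata, and exceptions raised by the awaited call still propagate through the wrapper.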