Async Endpoints in FastAPI: When to Use async def vs def in Your Python API

FastAPI supports both async def and plain def for route handlers, and it handles them very differently under the hood. Choosing the wrong one does not just affect style—it can freeze your event loop, stall health checks, and bring down pods in a containerized deployment. The rule is straightforward once you understand it: async def runs on the event loop and must never block. def runs in a thread pool and can safely call blocking code. This guide explains exactly how FastAPI makes that decision, walks through common scenarios, and gives you a clear framework for choosing the right option every time.

The Core Rule

There is one sentence that captures the entire concept:

If you use async def, you are promising not to block the event loop. FastAPI trusts you.

When you define a route with async def, FastAPI places it directly on the event loop. It assumes every operation inside is non-blocking. If you break that promise—by calling time.sleep(), a synchronous database driver, or a CPU-heavy computation—the event loop freezes. No other requests can be processed until your blocking call finishes.

When you define a route with def, FastAPI recognizes that the function might contain blocking code and automatically runs it in a thread pool. The event loop stays free to handle other requests while your function executes in its own thread.

How FastAPI Handles Each Route Type

async def route

@app.get("/async-endpoint")
async def get_data():
    # Runs directly on the event loop
    # Must only call non-blocking, awaitable code
    data = await some_async_database_query()
    return {"data": data}

FastAPI calls this function as a coroutine on the main event loop. There is no thread pool involved. The function can use await to yield control back to the event loop while waiting for I/O, allowing other requests to be processed in the meantime.

def route

@app.get("/sync-endpoint")
def get_data():
    # Runs in a separate thread from the thread pool
    # Safe to call blocking, synchronous code
    data = some_sync_database_query()
    return {"data": data}

FastAPI detects that this is a regular function and dispatches it to a thread pool (managed by AnyIO). The event loop remains free. Other incoming requests, health checks, and async tasks continue to be processed while this function runs in its thread.
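The decision FastAPI makes per route can be illustrated with the standard library's inspect module. This is a conceptual sketch, not FastAPI's actual source; the handler and function names are made up for illustration:

```python
import inspect

async def async_handler():
    return {"data": "from the event loop"}

def sync_handler():
    return {"data": "from a worker thread"}

def choose_strategy(handler):
    # Mirrors the check the framework performs for each route:
    # coroutine functions run on the event loop,
    # plain functions are dispatched to the thread pool.
    if inspect.iscoroutinefunction(handler):
        return "event loop"
    return "thread pool"

print(choose_strategy(async_handler))  # event loop
print(choose_strategy(sync_handler))   # thread pool
```

The same check is why decorating a route changes nothing about its execution strategy: only the def vs async def keyword matters.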

Note

You can freely mix async def and def routes within the same FastAPI application. FastAPI applies the correct execution strategy for each route independently based on how the function is defined.

What Happens When You Get It Wrong

The dangerous mistake is putting blocking code inside an async def route. Here is a simple example that looks harmless but can crash a production service:

import time

@app.get("/dangerous")
async def dangerous_endpoint():
    # This blocks the event loop for 10 seconds
    time.sleep(10)
    return {"message": "done"}

While time.sleep(10) runs, the entire event loop is frozen. Every other request—including health check endpoints—is stuck waiting. In a Kubernetes deployment, liveness probes fail, the orchestrator decides the pod is unhealthy, and it restarts the container. The service looks flaky for reasons that are not obvious at first glance.

The same problem occurs with any synchronous blocking call: psycopg2 database queries, requests.get() HTTP calls, open().read() file operations, or CPU-intensive computations like scikit-learn predictions.
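You can reproduce the freeze with plain asyncio, no web framework required. In this sketch, two tasks that block with time.sleep() are forced to run one after the other because the loop is frozen, while two tasks that await asyncio.sleep() overlap:

```python
import asyncio
import time

async def blocking_task():
    time.sleep(0.1)           # blocks the whole event loop

async def cooperative_task():
    await asyncio.sleep(0.1)  # yields control back to the loop

async def run_two(task):
    # Time how long it takes to run two copies of the task "concurrently"
    start = time.perf_counter()
    await asyncio.gather(task(), task())
    return time.perf_counter() - start

blocking = asyncio.run(run_two(blocking_task))
cooperative = asyncio.run(run_two(cooperative_task))
print(f"blocking pair: {blocking:.2f}s")      # ~0.2s: the sleeps serialize
print(f"cooperative pair: {cooperative:.2f}s")  # ~0.1s: the sleeps overlap
```

Scale the blocking sleep up to ten seconds and add real traffic, and you have the pod-restart scenario described above.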

The fix is straightforward: change async def to def and let FastAPI run it in the thread pool.

@app.get("/safe")
def safe_endpoint():
    # FastAPI runs this in a thread pool automatically
    time.sleep(10)
    return {"message": "done"}

Now the event loop stays responsive while this function sleeps in its own thread.

Common Mistake

Using async def does not make your code faster by default. It only gives you access to the await keyword for cooperative multitasking. If your route never uses await and does anything that could block, you should be using def instead.

I/O-Bound vs CPU-Bound: Choosing the Right One

The decision comes down to what your route handler does while it waits.

I/O-bound operations spend time waiting for something external: a database response, an HTTP API call, a file read from disk, a message from a queue. The CPU is idle during the wait. These are the operations that benefit from async because the event loop can serve other requests during the wait.

CPU-bound operations keep the processor busy: mathematical computations, data transformations, image processing, ML model inference. There is no waiting—the CPU is working the entire time. Async does not help here because there is no idle time to fill with other work.

| Operation Type | Examples | Use |
| --- | --- | --- |
| I/O-bound with async library | httpx, asyncpg, aiofiles, aioredis | async def + await |
| I/O-bound with sync library | psycopg2, requests, SQLAlchemy ORM, open() | def (thread pool) |
| CPU-bound | scikit-learn, pandas transforms, image processing, hashing | def (thread pool) |
| Trivial / no I/O | Health checks, returning static data, simple calculations | Either works; async def avoids thread pool overhead |

async def With Async Libraries

When you use async def, every I/O call inside the function should use an async-compatible library and the await keyword. Here is an example using httpx to call an external API:

import httpx
from fastapi import FastAPI

app = FastAPI()

@app.get("/weather")
async def get_weather():
    async with httpx.AsyncClient() as client:
        response = await client.get("https://api.weather.example.com/today")
    return response.json()

While await client.get() waits for the HTTP response, the event loop is free to handle other incoming requests. This is the power of async: a single worker process can serve hundreds of concurrent connections because it is not sitting idle during network waits.
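That claim is easy to verify with simulated network waits. In this sketch, asyncio.sleep stands in for an awaited HTTP round-trip: one hundred tasks that each wait 0.1 seconds finish together in roughly 0.1 seconds, not 10, because the waits overlap on a single event loop:

```python
import asyncio
import time

async def simulated_request():
    await asyncio.sleep(0.1)  # stands in for an awaited network round-trip
    return "ok"

async def main():
    start = time.perf_counter()
    # Launch 100 "requests" concurrently on one event loop
    results = await asyncio.gather(*(simulated_request() for _ in range(100)))
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(f"{len(results)} requests in {elapsed:.2f}s")  # ~0.1s, not 10s
```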

For database access, use an async driver like asyncpg for PostgreSQL or SQLAlchemy's async engine:

from fastapi import Depends, HTTPException
from sqlalchemy import select
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession
from sqlalchemy.orm import sessionmaker

engine = create_async_engine("postgresql+asyncpg://user:pass@localhost/db")
AsyncSessionLocal = sessionmaker(engine, class_=AsyncSession, expire_on_commit=False)

async def get_db():
    async with AsyncSessionLocal() as session:
        yield session

# User is a declarative ORM model assumed to be defined elsewhere
@app.get("/users/{user_id}")
async def get_user(user_id: int, db: AsyncSession = Depends(get_db)):
    result = await db.execute(select(User).where(User.id == user_id))
    user = result.scalar_one_or_none()
    if not user:
        raise HTTPException(status_code=404, detail="User not found")
    return user

def With Synchronous Libraries

If your project uses a synchronous ORM or database driver, define your routes with def and let FastAPI handle the threading.

from sqlalchemy.orm import Session
from fastapi import Depends

@app.get("/items/")
def list_items(db: Session = Depends(get_db)):
    # Synchronous SQLAlchemy query
    # FastAPI runs this in a thread pool automatically
    return db.query(Item).offset(0).limit(20).all()

This is the correct approach when using standard SQLAlchemy with psycopg2, or any other synchronous library. FastAPI dispatches the function to a thread, the query runs in that thread, and the event loop stays free for other work.
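The get_db dependency in that example is the usual synchronous session factory. A minimal sketch, assuming standard SQLAlchemy; the SQLite URL is a stand-in for your real connection string:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import Session, sessionmaker

engine = create_engine("sqlite:///./app.db")  # stand-in connection string
SessionLocal = sessionmaker(bind=engine, autoflush=False)

def get_db():
    # Yield one session per request; FastAPI runs the cleanup
    # after the response has been sent.
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()
```

Because the route is plain def, both the dependency and the query run in the thread pool, so the synchronous driver never touches the event loop.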

Pro Tip

If you are unsure whether a library is async-compatible, check its documentation for await syntax in the examples. Libraries like httpx, asyncpg, aiofiles, and aioredis are async. Libraries like requests, psycopg2, and standard file I/O with open() are synchronous. When in doubt, use def—it is the safer default.

The Hybrid Case: CPU-Bound Work in an Async Route

Sometimes you need an async def route because it calls async libraries, but it also includes a CPU-intensive step. The solution is run_in_executor, which offloads the blocking work to a thread pool while keeping the event loop responsive.

import asyncio
from fastapi import FastAPI

app = FastAPI()

def heavy_computation(data):
    # CPU-intensive work (runs in a thread); `data` is ignored here for brevity
    return sum(x ** 2 for x in range(10_000_000))

@app.post("/analyze")
async def analyze(data: dict):
    # Non-blocking I/O (fetch_from_api is an async helper assumed elsewhere)
    external_result = await fetch_from_api(data)

    # Offload CPU-bound work to thread pool
    loop = asyncio.get_running_loop()
    computation_result = await loop.run_in_executor(None, heavy_computation, data)

    return {"api_data": external_result, "computation": computation_result}

The None argument tells run_in_executor to use the default thread pool. The await yields control back to the event loop while the computation runs in a separate thread. Health checks and other requests continue to be served normally.
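The responsiveness claim can be checked with a small asyncio sketch: a heartbeat task keeps ticking while the blocking function runs in the default executor. Here time.sleep stands in for the CPU-bound computation:

```python
import asyncio
import time

def blocking_work():
    time.sleep(0.3)  # stands in for a CPU-heavy computation
    return "done"

async def heartbeat(ticks):
    # Keeps running while blocking_work occupies a worker thread
    for _ in range(3):
        await asyncio.sleep(0.05)
        ticks.append(time.perf_counter())

async def main():
    loop = asyncio.get_running_loop()
    ticks = []
    result, _ = await asyncio.gather(
        loop.run_in_executor(None, blocking_work),
        heartbeat(ticks),
    )
    return result, ticks

result, ticks = asyncio.run(main())
print(result, f"heartbeats: {len(ticks)}")
```

One caveat worth knowing: for pure-Python CPU work, a thread still contends with the event loop for the GIL, so very heavy computation can degrade latency even in an executor. Passing a concurrent.futures.ProcessPoolExecutor instead of None sidesteps the GIL, at the cost of pickling the arguments and result.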

Quick Reference Decision Table

| Scenario | Route Definition | Why |
| --- | --- | --- |
| Calling httpx.AsyncClient | async def | Async HTTP client, uses await |
| Calling requests.get() | def | Synchronous HTTP library, would block the event loop |
| Querying with asyncpg | async def | Async PostgreSQL driver |
| Querying with psycopg2 | def | Synchronous PostgreSQL driver |
| Running scikit-learn predict() | def | CPU-bound, blocking |
| Returning {"status": "ok"} | async def | Trivial, no I/O, avoids thread pool overhead |
| Mixed async I/O + CPU work | async def + run_in_executor | Async for I/O, thread pool for CPU portion |

Frequently Asked Questions

What is the difference between async def and def in FastAPI?

When you define a route with async def, FastAPI runs it directly on the main event loop. It must only contain non-blocking, awaitable code. When you define a route with def, FastAPI dispatches it to a separate thread pool, which allows it to safely call blocking or synchronous code without freezing the event loop.

When should I use def instead of async def in FastAPI?

Use def when your route calls synchronous or blocking libraries: psycopg2, the standard SQLAlchemy ORM, requests, scikit-learn, or any CPU-intensive function. FastAPI handles the threading for you, keeping the event loop responsive.

What happens if I put blocking code inside an async def route?

The blocking code freezes the event loop. No other requests can be processed until it finishes. In containerized deployments, this causes health check failures and pod restarts. The fix is to either switch to def or offload the blocking work with run_in_executor.

Can I mix async def and def routes in the same FastAPI application?

Yes. FastAPI inspects each route handler individually and applies the correct strategy: event loop for async def, thread pool for def. You can have both in the same application without any configuration.

Key Takeaways

  1. async def = event loop, def = thread pool: FastAPI runs async def routes on the event loop and def routes in a thread pool. This distinction determines whether blocking code will freeze your application or be handled safely.
  2. Never put blocking code in async def: Synchronous database calls, time.sleep(), CPU-bound computation, and synchronous HTTP requests inside an async def route will block the event loop and stall all other requests.
  3. When in doubt, use def: If you are not sure whether all your code is async-compatible, def is the safer choice. FastAPI will automatically run it in a thread pool, preventing event loop issues.
  4. Use async def when the full chain is async: If you are using httpx, asyncpg, aiofiles, or other async-native libraries end to end, async def gives you the concurrency benefits of cooperative multitasking.
  5. Offload CPU work with run_in_executor: When an async def route needs to do CPU-bound work, use await loop.run_in_executor(None, function) to keep the event loop responsive while the computation runs in a thread.

The choice between async def and def is not about preference or style. It is a contract with the framework. async def says "I promise everything inside is non-blocking." def says "I might block, so run me in a thread." FastAPI respects both declarations and handles them correctly—but it trusts you to choose the right one. Understanding this distinction is what separates a FastAPI application that handles thousands of concurrent connections from one that freezes under load.
