How FastAPI path operations work
If you’re building a new Python web app these days, there’s a good chance you’re using FastAPI. There are a lot of features that make FastAPI easy to get started with. There are also a lot of nuances that take a while to understand. One feature I’ve been untangling is the way FastAPI manages calls to API routes via decorated path operations. The new year is a perfect time to take a deeper dive.
What happens in a web server
When we build a web app, one of the critical components is a web server, a program that listens for incoming requests from the network. It then translates those requests into methods that are called in the backend.
To better understand what’s going on under the covers, we can first implement a simple web server using the http.server module included in Python’s standard library.
We need to write a program that listens on a port and accepts HTTP requests. It then parses the path of each request and any data attached to the HTTP call. Or, “All I want is to cURL and parse a JSON object”.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs


class DataStore:
    """Minimal in-memory store so the example runs end to end."""

    def __init__(self):
        self.items = []

    def put_data(self, item):
        self.items.append(item)


class RequestHandler(BaseHTTPRequestHandler):
    # Shared across requests: BaseHTTPRequestHandler creates a new handler
    # instance per request, so the store lives on the class.
    data_store = DataStore()

    def parse_path(self, request_path: str) -> dict:
        """
        Parse the request path into a dict of query parameters
        """
        parsed = urlparse(request_path)
        print(parsed)
        params_dict = parse_qs(parsed.query)
        return params_dict

    def store_urls(self, request_path: str) -> None:
        """
        Parse URL parameters and store them
        """
        params = self.parse_path(request_path)
        print(params)
        for key, val in params.items():
            self.data_store.put_data(val[0])

    def return_k_json(self, k: dict) -> None:
        """
        Return a JSON response
        """
        self.send_response(200)
        self.send_header("Content-type", "application/json")
        self.end_headers()
        # self.wfile contains the output stream for writing a response back to
        # the client: a BufferedIOBase that writes to a stream.
        # See https://docs.python.org/3/library/io.html#io.BufferedIOBase.write
        self.wfile.write(json.dumps(k).encode("utf-8"))

    def bad_request(self) -> None:
        """
        Handle a bad request
        """
        self.send_response(400)
        self.send_header("Content-type", "application/json")
        self.end_headers()

    def do_GET(self):
        request_path = self.path
        if request_path == "/":
            self.return_k_json({"ciao": "mondo"})
        elif request_path.startswith("/get"):
            key = self.parse_path(request_path)
            self.return_k_json({"jars": key["key"]})
        else:
            self.bad_request()

    def do_POST(self):
        request_path = self.path
        if request_path.startswith("/set"):
            self.store_urls(request_path)
            self.send_response(200)
            self.end_headers()
        else:
            self.bad_request()


if __name__ == "__main__":
    host = "localhost"
    port = 8000
    server = HTTPServer((host, port), RequestHandler)
    print("Server started http://%s:%s" % (host, port))
    server.serve_forever()
What’s going on here?
Let’s say that we produce Nulltella, an artisanal hazelnut spread for statisticians, and are looking to build a web app that keeps track of all of our Nulltella jars so we can later stand up a prediction service.
We would start by designing a super simple API. As users:
- we want to test the server and get back a simple response
- we’d like to add jars to our inventory, and
- to see the jars we added.
We translate these actions to GET and POST requests so we can write HTTP calls for them. For simplicity’s sake, we won’t do much with the jars server-side, but we will write the calls so we can very simply see how to send data to our app:
We want to test the server:
> python serve.py
> curl -X GET http://localhost:8000/
> {"ciao": "mondo"}
We want to store items:
> curl -X POST http://localhost:8000/set\?key\=8
200 OK
And get back the stored items:
> curl -X GET http://localhost:8000/get\?key\=8
> {"jars": ["8"]}
Our server needs a way to parse the key pieces of information it receives:
- The type of request: do_GET and do_POST handle this implicitly in the HTTP implementation.
- The parameters we pass in the request path, so that we can do something with them (see the short example after this list).
- A route to a method inside our application itself that processes the data.
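To make that second point concrete, here is what the standard-library parsing inside our parse_path method returns for the /set call we tested above (a quick sketch you can run in a REPL):

from urllib.parse import urlparse, parse_qs

# What parse_path sees for the /set call from the curl example above.
parsed = urlparse("/set?key=8")
print(parsed.path)             # '/set'  -> used to decide which branch to take
print(parse_qs(parsed.query))  # {'key': ['8']} -> the parameters we act on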
In our simple server, the heart of the routing happens at the method level. If we request the base path, we return {"ciao": "mondo"}. If we request /get, we parse the parameters out of the path and return the jars we’ve passed in. Anything else is a bad request.
def do_GET(self) -> None:
    request_path = self.path
    if request_path == "/":
        self.return_k_json({"ciao": "mondo"})
    elif request_path.startswith("/get"):
        key = self.parse_path(request_path)
        # action performed within the web app here
        self.return_k_json({"jars": key["key"]})
    else:
        self.bad_request()
We can see how this can become complicated quickly. For example, what if we perform multiple operations during a GET: what if we get data from a database, or a cache, or we retrieve assets? We’ll end up dispatching to different methods depending on how the path is parsed. What if we also need PUT and DELETE verbs? Authentication? Writes to a database? Static pages? Our code complexity grows quickly relative to our starting point, and we now need a framework.
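As a purely hypothetical sketch (the helpers below, like is_authenticated, load_users_from_db, and serve_static_file, don’t exist in our server), the hand-rolled do_GET quickly turns into a dispatch ladder:

def do_GET(self):
    if self.path == "/":
        self.return_k_json({"ciao": "mondo"})
    elif self.path.startswith("/get"):
        self.return_k_json({"jars": self.parse_path(self.path)["key"]})
    elif self.path.startswith("/users"):
        if not self.is_authenticated():        # hypothetical auth helper
            return self.bad_request()
        self.return_k_json(self.load_users_from_db())  # hypothetical DB call
    elif self.path.startswith("/static"):
        self.serve_static_file(self.path)      # hypothetical asset handler
    else:
        self.bad_request()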
Starlette
Early Python web dev frameworks include juggernauts Django and Flask. More recently, since Python’s async story has grown stronger, frameworks like Starlette have come onto the scene to include async functionality out of the box.
Starlette was built by the creator of Django REST Framework and provides lightweight primitives for the core functionality of HTTP calls, plus extras like WebSockets, with the added bonus of being async by default.
To manage an HTTP call the same way we would with our simple server, we can do the following with Starlette:
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route


async def homepage(request):
    return JSONResponse({'ciao': 'mondo'})


app = Starlette(debug=True, routes=[Route('/', homepage)])
We start an instance of a Starlette application, which processes routes. Each route is linked, at the path level, to the method it calls. When Starlette sees a request for that specific route, it calls the linked method, handling the logic for parsing and reading HTTP request headers and bodies along the way.
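As a small sketch of that last point (the /set route and set_jars handler here are my own names, mirroring the simple server above), Starlette hands the handler a Request object with the headers and query string already parsed:

from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route

async def set_jars(request):
    # Starlette exposes the already-parsed pieces of the request:
    key = request.query_params.get("key")           # query string, e.g. /set?key=8
    user_agent = request.headers.get("user-agent")  # request headers
    return JSONResponse({"received": key, "client": user_agent})

app = Starlette(routes=[Route("/set", set_jars, methods=["POST"])])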
What if we want to add a second method call based on a different route, getting our jar count again?
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route


async def homepage(request):
    return JSONResponse({'ciao': 'mondo'})


async def get_jars(request):
    return JSONResponse({'jars': ['8']})


app = Starlette(debug=True, routes=[
    Route('/', homepage),
    Route('/get_jars', get_jars),
])
We can also pass and process parameters: Starlette has logic that parses the path and query parameters out of the incoming request and hands them to the matched handler.
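For example (a sketch; jar_id is my own parameter name), a route can declare a path parameter in its template and read it back from request.path_params, which Starlette has already parsed into a dictionary:

from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route

async def get_jar(request):
    # Starlette has already parsed the {jar_id} segment into a dict for us.
    jar_id = request.path_params["jar_id"]
    return JSONResponse({"jar": jar_id})

app = Starlette(routes=[Route("/jars/{jar_id}", get_jar)])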
FastAPI’s implementation
FastAPI wraps Starlette - “as it is basically Starlette on steroids” per the docs - and includes Pydantic type validation at the logical boundaries of the application.
Under the covers, when we instantiate a FastAPI application, it’s really “just” an instance of a Starlette application with properties that we override at the application level.
from fastapi import FastAPI

app = FastAPI()


@app.get("/")
async def root():
    return {"ciao": "mondo"}


@app.get("/jars/{id}")
async def get_jars(id):
    return {"message": f"jars: {id}"}
In development, FastAPI typically uses uvicorn, an ASGI server, to listen for incoming requests and handle them according to the routes defined in your application. Uvicorn initializes the ASGI server, binds it to a socket on port 8000, and starts listening for incoming connections. So, when we send a GET request to the main route hosted by default on port 8000, we expect to get back ciao mondo as a response.
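For reference, this is roughly how the development server gets started (assuming the FastAPI app above lives in a module called main.py; the module name is my assumption):

# Roughly what `uvicorn main:app --reload` does from the command line.
import uvicorn

if __name__ == "__main__":
    uvicorn.run("main:app", host="127.0.0.1", port=8000, reload=True)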
Like our previous applications, FastAPI still delegates path operations and methods to a router that processes them and parses parameters, but it wraps all of this in a Python decorator. That is easier to write, but it makes it harder to see how the path processing actually happens. When we write a path operation in FastAPI, we’re performing the equivalent of the routing in our simple server, just with a lot more rigor and nested definitions.
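To take a bit of the magic out of the decorator, registering the route by hand is roughly equivalent (a sketch using FastAPI’s add_api_route, which the decorator ultimately calls into):

from fastapi import FastAPI

app = FastAPI()

async def root():
    return {"ciao": "mondo"}

# Roughly what @app.get("/") arranges for us: register the endpoint on the router.
app.add_api_route("/", root, methods=["GET"])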
Within our simple server, we:
- Start the server
- Listen on port 8000 for incoming requests
- When we receive a request, route it to the do_GET method
- Depending on the path of the request, route it to "/"
- Return the results to the client with a 200 status
In FastAPI, we:
- Start the uvicorn web server (in development; in production we’d typically run gunicorn with the compatible uvicorn worker class)
- Listen on port 8000 for incoming requests
- Instantiate the FastAPI application
- This in turn instantiates an instance of Starlette
- When we receive a GET request, it’s routed to the application’s self.get method
- This in turn calls self.router.get with the path operation
- The router is an instance of routing.APIRouter
- The .get method on APIRouter takes the path and returns self.api_route. This is the point where the decorator is actually called: the decorator in that method takes a DecoratedCallable function as input and returns a decorated add_api_route, which actually appends the route to the list of routes.
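Paraphrased as a sketch (this is the shape of the chain described above, not FastAPI’s actual source code), the call path looks something like this:

# A heavily simplified paraphrase of the routing chain -- not FastAPI's real code.
class APIRouter:
    def __init__(self):
        self.routes = []

    def add_api_route(self, path, endpoint, methods):
        # The route finally lands in the router's list of routes.
        self.routes.append((path, endpoint, methods))

    def api_route(self, path, methods):
        # Returns the decorator that @app.get ultimately hands your function to.
        def decorator(func):
            self.add_api_route(path, func, methods)
            return func
        return decorator

    def get(self, path):
        return self.api_route(path, methods=["GET"])


class FastAPI:
    def __init__(self):
        self.router = APIRouter()

    def get(self, path):
        # app.get delegates straight to the router.
        return self.router.get(path)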
This is purely the set of steps that happens for correct routing - and we haven’t yet addressed how the parameters in the path are processed.
Path Parameter Routing
Path parameter routing happens in Starlette, where path parameters are parsed out of the request into a dictionary (just like we do in our simple web application). Starlette compiles each route’s path template - the {param} syntax - into a regular expression and matches incoming request paths against it.
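We can poke at this directly (compile_path is an internal Starlette helper rather than documented public API, so treat this as exploration):

from starlette.routing import compile_path

# compile_path turns the "{id}" template into a regex plus parameter convertors.
path_regex, path_format, convertors = compile_path("/jars/{id}")

match = path_regex.match("/jars/8")
print(match.groupdict())  # {'id': '8'} -- the dict the handler eventually sees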
TL;DR
When we write a route in FastAPI that accepts path parameters, we are creating a lengthy call stack: the decorator feeds our function through several levels of FastAPI logic that register it on a router’s list of route methods; incoming requests are then passed on to Starlette, which does the work of matching the compiled path template and parsing the path variables into dictionaries that the application can work with before returning data to you!