FastAPI for Data Applications: Dockerizing and Scaling Your API. Part II

This content originally appeared on DEV Community on June 23, 2024 and was authored by Felipe de Godoy

Hey there, data enthusiasts and API aficionados! 🎉 If you joined us for the first part of this series, you already have your FastAPI application running smoothly on your local machine. Today, we're going to elevate that setup by deploying it to Amazon's Elastic Kubernetes Service (EKS). Get ready to scale those RESTful routes to the cloud! 🌩️

Prerequisites

  1. Basic knowledge of FastAPI (check out our previous post if you need a refresher).
  2. Docker installed.
  3. An AWS account.
  4. kubectl and eksctl configured.

Step 1: Dockerize Your FastAPI Application

The FastAPI Application

Let's assume you have a simple FastAPI application named main.py (as shown in the initial post):

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List, Optional

import uvicorn

app = FastAPI()

# Data model
class Item(BaseModel):
    id: int
    name: str
    description: Optional[str] = None

# Fake in-memory database
fake_db: List[Item] = []

@app.get("/")
async def root():
    return {"message": "Hello World"}   

@app.get("/items/", response_model=List[Item])
async def get_items():
    return fake_db

@app.get("/items/{item_id}", response_model=Item)
async def get_item(item_id: int):
    for item in fake_db:
        if item.id == item_id:
            return item
    raise HTTPException(status_code=404, detail="Item not found")

@app.post("/items/", response_model=Item)
async def create_item(item: Item):
    # Check whether the item already exists
    for existing_item in fake_db:
        if existing_item.id == item.id:
            raise HTTPException(status_code=400, detail="Item already exists")
    fake_db.append(item)
    return item

@app.put("/items/{item_id}", response_model=Item)
async def update_item(item_id: int, updated_item: Item):
    for idx, existing_item in enumerate(fake_db):
        if existing_item.id == item_id:
            fake_db[idx] = updated_item
            return updated_item
    raise HTTPException(status_code=404, detail="Item not found")


if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8000)

In the end, we'll have a FastAPI application with a Dockerfile and Kubernetes deployment/service YAML files. Here's the file structure:

project_folder
    ├── Dockerfile
    ├── README.md
    ├── deployment.yaml
    ├── run_local.sh
    ├── main.py
    ├── requirements.txt
    ├── service.yaml

Creating the Dockerfile

Now let's build a Dockerfile to containerize our application:

# Dockerfile
# Use tiangolo's FastAPI image from Docker Hub (Uvicorn + Gunicorn on Python 3.8)
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.8

# Set the working directory inside the Docker container
WORKDIR /app

# Copy requirements.txt first so the dependency layer is cached across code changes
COPY requirements.txt .

# Install the dependencies specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Copy the FastAPI code into the Docker image
COPY main.py .

# Uvicorn listens on port 8000 by default; EXPOSE is optional documentation
# EXPOSE 8000

# Command to run the FastAPI application using Uvicorn
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

Create the requirements.txt file

Include any packages your application needs; in our case that's just:

fastapi
uvicorn
pydantic
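For reproducible builds you'll eventually want to pin versions so the image doesn't change under you on rebuild. The version numbers below are placeholders, not a recommendation: use whatever pip freeze reports in the environment where you tested the app:

```
fastapi==0.111.0
uvicorn==0.30.1
pydantic==2.7.4
```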

Building and Running the Docker Image Locally

Use the following script to build and run your Docker container locally (run_local.sh):

#!/bin/bash 
set -e 

DOCKER_IMAGE_NAME="my-fastapi-app" 

echo "Building Docker image..." 
docker build -t $DOCKER_IMAGE_NAME .

echo "Running Docker container locally..." 
docker run -d -p 8000:8000 --name my-fastapi-container $DOCKER_IMAGE_NAME 

echo "FastAPI application is now running at http://localhost:8000" 

echo "To stop the container, run: docker stop my-fastapi-container" 
echo "To remove the container, run: docker rm my-fastapi-container"

Then make the script executable and run it:

chmod +x run_local.sh
./run_local.sh


Step 2: Set Up an EKS Cluster

First, confirm that your AWS credentials are active, then create a small managed cluster:

aws sts get-caller-identity
eksctl create cluster --name=minimal-cluster --region=us-east-1 --nodegroup-name=minimal-nodes --node-type=t3.micro --nodes=1 --nodes-min=1 --nodes-max=2 --node-volume-size=10 --managed

This command sets up a basic EKS cluster with minimal nodes. Super neat, right?

You can follow the progress in the CloudFormation console: eksctl creates two stacks, each provisioning many resources. This may take 10 minutes or longer. It's time to take a breath :)


Step 3: Push Docker Image to AWS ECR

First, create an ECR repository:

aws ecr create-repository --repository-name my-fastapi-app --region us-east-1

Next, build, tag, and push your image to ECR (replace 836090608262 with your own AWS account ID):

docker build -t my-fastapi-app .
docker tag my-fastapi-app:latest 836090608262.dkr.ecr.us-east-1.amazonaws.com/my-fastapi-app:latest

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 836090608262.dkr.ecr.us-east-1.amazonaws.com
docker push 836090608262.dkr.ecr.us-east-1.amazonaws.com/my-fastapi-app:latest
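If you'd rather not hard-code the registry host, you can assemble it from variables. A small sketch, with a placeholder account ID; in a real shell you would fetch yours with aws sts get-caller-identity:

```shell
# Hypothetical helper: assemble the ECR image URI instead of hard-coding it.
# In practice, fetch the real account ID with:
#   aws sts get-caller-identity --query Account --output text
ACCOUNT_ID="123456789012"   # placeholder
REGION="us-east-1"
REPO="my-fastapi-app"
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
IMAGE="${REGISTRY}/${REPO}:latest"
echo "${IMAGE}"
```

The same variables then feed the docker tag, docker login, and docker push commands above.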

Step 4: Deploy to EKS

Create a deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi-app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: fastapi-app
  template:
    metadata:
      labels:
        app: fastapi-app
    spec:
      containers:
      - name: fastapi-app
        image: 836090608262.dkr.ecr.us-east-1.amazonaws.com/my-fastapi-app:latest
        ports:
        - containerPort: 8000

And a service.yaml that exposes the pods through an AWS load balancer, mapping external port 80 to the container's port 8000:

apiVersion: v1 
kind: Service 
metadata: 
  name: fastapi-app-service 
  namespace: default 
spec: 
  type: LoadBalancer 
  selector: 
    app: fastapi-app 
  ports: 
    - protocol: TCP 
      port: 80 
      targetPort: 8000

Apply them to your cluster:

kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

Verify Deployment

Run the following commands to check your deployment and services:

kubectl get deployments
kubectl get services
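If you prefer scripting this step, kubectl can emit JSON and you can pull the load balancer address out with the standard library. The JSON below is a trimmed, hypothetical example of what kubectl get service fastapi-app-service -o json returns once the load balancer is provisioned:

```python
import json

# Hypothetical, trimmed output of:
#   kubectl get service fastapi-app-service -o json
raw = """
{
  "status": {
    "loadBalancer": {
      "ingress": [
        {"hostname": "a1b2c3d4e5.us-east-1.elb.amazonaws.com"}
      ]
    }
  }
}
"""

svc = json.loads(raw)
ingress = svc["status"]["loadBalancer"]["ingress"][0]
# AWS load balancers report a hostname; other clouds may report an IP instead.
url = "http://" + (ingress.get("hostname") or ingress.get("ip"))
print(url)
```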

Grab the EXTERNAL-IP from the service output (on AWS it's an Elastic Load Balancer hostname) and open it in your browser.


Does the response look familiar? That's right, the API we built in the previous post is now accessible via the internet through our Kubernetes cluster! 🎉 (Of course, since we're using a dummy database, it only shows the most recent data entry. 🤷‍♂️)
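That quirk comes from the in-memory fake_db: with two replicas, each pod runs its own Python process with its own copy of the list, so what you see depends on which pod the load balancer routes you to. A stdlib-only illustration (not the app code) of why:

```python
# Illustration only: each pod is a separate process, so each replica
# gets an independent in-memory "database".

class Replica:
    def __init__(self, name):
        self.name = name
        self.fake_db = []  # lives and dies with this replica

    def create_item(self, item):
        self.fake_db.append(item)

pod_a = Replica("pod-a")
pod_b = Replica("pod-b")

# The load balancer is free to route each POST to a different pod:
pod_a.create_item({"id": 1, "name": "foo"})
pod_b.create_item({"id": 2, "name": "bar"})

print(pod_a.fake_db)  # [{'id': 1, 'name': 'foo'}] -- pod-b's item is invisible here
print(pod_b.fake_db)  # [{'id': 2, 'name': 'bar'}]
```

A real deployment would replace fake_db with shared state, such as a managed database, so all replicas see the same data.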

Feel free to run your tests and deploy some exciting features!

Cleaning Up Resources

Don't forget to clean up all the resources created during this process, including EC2 instances, EBS volumes, load balancers, Elastic IPs, resource groups, ECR repositories, EKS clusters, node groups, CloudFormation stacks, VPCs, subnets, NAT gateways, and network interfaces. Here's an order I recommend:

  1. Node groups
  2. EKS cluster
  3. ECR repositories
  4. Security groups
  5. Load balancers
  6. DHCP option sets
  7. NAT gateways

Although you can use aws-nuke to delete resources easily, beware: if used incorrectly, it can delete critical resources. Therefore, I recommend manually cleaning up the resources.

For a better understanding of the resources created, check out AWS CloudTrail. It will show you every command executed by the stack, providing valuable insights into the automated processes.

What's Next? 🚀

Now that your FastAPI application is successfully deployed on AWS EKS, it's time to think about enhancing and scaling your setup further. Here are some next steps to take your deployment to the next level:

  1. Authentication and Authorization: Implement robust authentication mechanisms like OAuth2 or JWT to secure your APIs.
  2. Improving Autoscaling: Leverage Kubernetes' Horizontal Pod Autoscaler (HPA) to dynamically adjust the number of pod replicas based on real-time metrics like CPU usage or custom application metrics.
  3. Observability: Integrate monitoring and logging solutions like Prometheus and Grafana for metrics, and use tools like Fluentd and AWS CloudWatch for logging to ensure you have full visibility into your application's performance.
  4. High Availability: Design your system for high availability by deploying your application across multiple availability zones and using Kubernetes Ingress controllers for better load balancing and failover capabilities.
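As a taste of item 2, an HPA manifest targeting the Deployment above might look like the sketch below. The name fastapi-app-hpa is my own choice, and this assumes the Kubernetes metrics server is installed and that CPU resource requests are set on the container (the Deployment above doesn't set them yet):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: fastapi-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: fastapi-app-deployment
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```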

In the dynamic world of modern tech, the principles of observability, scalability, and security are paramount, especially when deploying data-centric applications. By implementing these best practices, you're not only future-proofing your infrastructure but also ensuring a seamless experience for end-users.

Moreover, the ability to efficiently deploy machine learning models and data products is becoming increasingly crucial for data scientists and engineers. Leveraging tools like EKS for MLOps (Machine Learning Operations) and LLMOps (Large Language Model Operations) can enable you to deliver robust and scalable AI solutions. Every model deployment can benefit from the high availability, enhanced observability, and seamless scaling these platforms provide. In an era where data is the cornerstone of innovation, mastering these advanced deployment techniques will set you apart.

Stay tuned for future tutorials where we'll dive deeper into these topics, covering advanced Kubernetes features, CI/CD pipelines, and integrating more sophisticated data workflows. Until then, happy deploying! 🎉

Github repo:
https://github.com/felipe-de-godoy/FastAPI-for-Data-Delivery

