Day 73 of 80

Docker & Containerization

Phase 8: Deployment & UI

What You'll Build Today

We have spent weeks building incredible AI agents, RAG pipelines, and API endpoints. But right now, those applications are fragile. They live on your laptop, dependent on your specific version of Python, your specific installed libraries, and your specific file paths.

If your computer died today, could you get your AI running on a new machine in under 5 minutes? Probably not.

Today, we are going to solve the "It works on my machine" problem forever. You are going to package your FastAPI application into a "Container." This container is a self-sufficient unit that includes your code, the operating system settings, and all dependencies.

Here is what you will master:

* Dockerfile Creation: Writing the recipe that defines your application's environment. Why? So you never have to manually install dependencies again.

* Images vs. Containers: Understanding the difference between the blueprint (Image) and the house (Container). Why? So you can run multiple copies of your app simultaneously without conflicts.

* Port Mapping: Connecting the container's internal network to the outside world. Why? So you can actually access your API.

* Docker Compose: Orchestrating multiple services (like your API and a Vector Database) to launch with a single command. Why? Because real AI apps rarely run in isolation.

The Problem

Let's look at a scenario that drives developers crazy.

Imagine you have written a perfect FastAPI application. It uses openai, langchain, chromadb, and torch. You zip up the folder and email it to a friend (or try to upload it to a server).

Your friend unzips it and tries to run it.

The Terminal Output:
```bash
$ python main.py
Traceback (most recent call last):
  File "main.py", line 1, in <module>
    import fastapi
ModuleNotFoundError: No module named 'fastapi'
```

"Okay," your friend sighs. "I need to install requirements." They run pip install -r requirements.txt.

The Next Error:
```bash
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
```

Why? Maybe you are on a Mac using Python 3.11, and they are on Windows using Python 3.8. The pinned library versions simply aren't available for their platform and Python version.

The Final Straw:

Even if they get Python running, maybe your code relies on an Environment Variable for your API Key that you forgot to tell them about, or a specific folder structure for your vector database.

The Pain:

To share your code, you essentially have to write a 10-page manual explaining how to configure their computer exactly like yours. This is fragile, frustrating, and unscalable.

There has to be a way to ship the computer setup along with the code.

Let's Build It

We are going to use Docker. Docker allows us to create a lightweight, virtual environment (a container) that runs exactly the same on Windows, Mac, Linux, or a cloud server.

Step 1: Create a Simple FastAPI App

First, let's create a standard directory structure. Create a folder named docker-day and open it in your editor.

Create a file named main.py. This is a simple API that echoes a message.

```python
# main.py
from fastapi import FastAPI
import os

app = FastAPI()

@app.get("/")
def read_root():
    # We will eventually inject this variable using Docker
    env_name = os.getenv("ENVIRONMENT_NAME", "Local Machine")
    return {"message": f"Hello from {env_name}!"}

@app.get("/health")
def health_check():
    return {"status": "running"}
```

Now, create a requirements.txt file to list our dependencies.

```text
fastapi==0.109.0
uvicorn==0.27.0
```

Note: We pin specific versions (==) to ensure consistency.
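Before writing the Dockerfile, it's worth confirming the app runs locally (assuming you have a Python environment with pip available):

```bash
pip install -r requirements.txt
uvicorn main:app --reload
# Visit http://localhost:8000; you should see the "Local Machine" message
```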

Step 2: Write the Dockerfile

The Dockerfile is a text file (no extension) that tells Docker how to build your application image. Think of it as a step-by-step recipe.

Create a file named Dockerfile (capital D, no extension) in the same folder.

```dockerfile
# 1. Start with a base image.
# We use a lightweight version of Python 3.10.
FROM python:3.10-slim

# 2. Set the working directory inside the container.
# All future commands will run from this folder.
WORKDIR /app

# 3. Copy just the requirements file first.
# We do this to take advantage of Docker caching (explained below).
COPY requirements.txt .

# 4. Install dependencies.
# --no-cache-dir keeps the image small.
RUN pip install --no-cache-dir -r requirements.txt

# 5. Copy the rest of the application code.
COPY . .

# 6. Expose the port the app runs on.
# This is documentation; we still need to map the port when we run the container.
EXPOSE 8000

# 7. Define the command to run the app.
# Syntax: ["program", "arg1", "arg2", ...]
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Why this matters:

* FROM: We start with a pre-built Linux environment that already has Python installed.

* COPY: We copy files from your computer into the image. Copying requirements.txt by itself first lets Docker cache the installed dependencies as a layer; if only your code changes, a rebuild skips the slow pip install step entirely.

* --host 0.0.0.0: By default, Uvicorn binds to 127.0.0.1 (localhost). Inside a container, that localhost is isolated from your machine. Binding to 0.0.0.0 allows the container to accept connections from outside.

Step 3: Build the Image

Now we "bake" the recipe into an Image. An image is a read-only template.

Open your terminal in the docker-day folder and run:

```bash
docker build -t my-fastapi-app .
```

* -t my-fastapi-app: Tags (names) the image "my-fastapi-app".

* .: Sets the build context to the current directory, which is also where Docker looks for the Dockerfile.

You will see Docker downloading Python, installing requirements, and copying files.
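Once the build finishes, you can verify the image exists:

```bash
# List all local images; my-fastapi-app should appear in the output
docker images
```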

Step 4: Run the Container

Now we run an instance of that image. This is called a Container.

```bash
docker run -p 8000:8000 my-fastapi-app
```

Understanding -p 8000:8000:

This is the most confusing part for beginners.

* The first 8000 is the port on your computer (Host).

* The second 8000 is the port inside the container.

* It creates a tunnel. If you go to localhost:8000 on your browser, Docker forwards that request to port 8000 inside the container.
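Because the host port and container port are independent, you can run several containers from the same image side by side; a quick sketch (the host ports here are arbitrary):

```bash
# -d runs each container in the background (detached)
docker run -d -p 8000:8000 my-fastapi-app
docker run -d -p 8001:8000 my-fastapi-app

# List the running containers to confirm both are up
docker ps
```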

Test it:

Open your browser to http://localhost:8000. You should see:

{"message": "Hello from Local Machine!"}

Step 5: Injecting Environment Variables

Remember our code used os.getenv("ENVIRONMENT_NAME")? Let's change that without touching the code. Stop the previous container (Ctrl+C) and run:

```bash
docker run -p 8000:8000 -e ENVIRONMENT_NAME="Docker Container" my-fastapi-app
```

Refresh your browser. It should now say:

{"message": "Hello from Docker Container!"} Why this matters: You can now change configuration (API keys, database URLs) without changing a single line of code.

Step 6: Docker Compose (Multi-Service)

Real AI apps usually need a database. Running docker run manually for 3 different services is annoying. We use docker-compose to manage them all at once.

Stop any running containers (Ctrl+C). Create a new file named docker-compose.yml.

```yaml
version: '3.8'

services:
  # Service 1: Our FastAPI app
  api:
    build: .  # Build from the Dockerfile in the current directory
    ports:
      - "8000:8000"
    environment:
      - ENVIRONMENT_NAME=Compose Environment
    # We will connect to the vector db using its service name
    depends_on:
      - vector-db

  # Service 2: A simple vector database (using ChromaDB)
  vector-db:
    image: chromadb/chroma:latest
    ports:
      - "8001:8000"  # Map host 8001 to container 8000
    volumes:
      - ./chroma_data:/chroma/chroma  # Persist data to a local folder
```

Key Concepts:

* Networking: Docker Compose automatically creates a network. The api service can talk to the vector-db service simply by using the hostname vector-db. No IP addresses needed! (See the sketch below.)

* Volumes: The volumes section maps a folder on your computer (./chroma_data) to a folder inside the container. If you delete the container, the data stays safe on your computer.

Run it:

```bash
docker-compose up
```

Docker will pull the ChromaDB image, build your API image, and start both. You now have a full stack running with one command.
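To see that service-name networking in action, here is a minimal sketch of how the api container could reach ChromaDB. It assumes you add the chromadb package to requirements.txt; note the hostname is the service name vector-db, and the port is the container's internal 8000, not the host-mapped 8001:

```python
import chromadb

# "vector-db" resolves to the ChromaDB container on the Compose network.
# We use the container's internal port (8000), not the host's 8001.
client = chromadb.HttpClient(host="vector-db", port=8000)

# Simple connectivity check: returns a timestamp if the server is reachable
print(client.heartbeat())
```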

Now You Try

It is time to experiment with the container you just built.

* Version Swap: Modify your Dockerfile. Change FROM python:3.10-slim to FROM python:3.11-slim. Rebuild the image (docker build ...). Verify it still runs. This proves how easy it is to upgrade your system dependencies with Docker.

* The Ignore File: When you copy files (COPY . .), you are also copying your local __pycache__ and maybe your .env file (which might have secrets). Create a .dockerignore file (it works exactly like .gitignore) and add __pycache__ and .env to it, then rebuild. A sample appears after this list.

* Argument Override: Run your image again using docker run, but this time map port 9000 on your machine to port 8000 in the container (-p 9000:8000). Access the app in your browser. Note that localhost:8000 no longer works, but localhost:9000 does.
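For reference, a minimal .dockerignore might look like this:

```text
__pycache__/
*.pyc
.env
chroma_data/
```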

Challenge Project: The Containerized RAG

Your goal is to run a mini RAG system entirely within Docker.

Requirements:

* Update your main.py to accept a text string via POST request, embed it using OpenAI, and return the embedding (you don't need to store it in Chroma for this specific challenge; just prove the API key works).

* You must use docker-compose.

* You cannot hardcode your OpenAI API key in the code.

* You cannot hardcode your OpenAI API key in docker-compose.yml (that's a security risk if you commit the file).

* You must use a .env file to store the key and pass it into the container via Compose.

Hints:

* Create a .env file containing OPENAI_API_KEY=sk-....

* In docker-compose.yml, under the api service, look up how to use the env_file property OR variable substitution like ${OPENAI_API_KEY}.

* Don't forget to add openai to your requirements.txt. (A sketch of one possible endpoint follows these hints.)
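If you get stuck on the endpoint itself, here is one possible shape for it. This is a sketch, not the only solution; it assumes the openai package is installed and OPENAI_API_KEY is injected by Compose, and the embedding model name is just an example:

```python
# main.py (additions for the challenge)
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # Reads OPENAI_API_KEY from the environment

class EmbedRequest(BaseModel):
    text: str

@app.post("/embed")
def embed(req: EmbedRequest):
    # Example model name; any OpenAI embedding model works here
    response = client.embeddings.create(
        model="text-embedding-3-small",
        input=req.text,
    )
    return {"embedding": response.data[0].embedding}
```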

Expected Output:

When you run docker-compose up, the app starts. When you send a curl request:

```bash
curl -X POST "http://localhost:8000/embed" -H "Content-Type: application/json" -d '{"text": "Docker is cool"}'
```

You should receive a JSON response with the embedding vector, proving the container successfully talked to OpenAI.

What You Learned

Today you solved the deployment headache. You moved from "It works on my machine" to "It works everywhere."

* Dockerfile: You learned to script the installation of your OS and Python environment.

* Images & Containers: You learned that an Image is the frozen blueprint, and a Container is the running instance.

* Networking: You learned how to map ports so the outside world can talk to your container.

* Docker Compose: You learned to spin up complex, multi-service applications with a single command.

Why This Matters:

In the real world, AI models are heavy. They require specific CUDA drivers, vector databases, and caching layers. You cannot ask a user to install these manually. Docker is the industry standard for packaging these complex brains into portable boxes.

Tomorrow: Now that your app is packaged in a container, we can send it anywhere. Tomorrow, we go to the Cloud. We will deploy your Docker container to a live server so the whole world can use your AI.