🧠 AI with Python - 🐳 Dockerizing a Machine Learning Model
Posted on: December 23, 2025
Description:
As machine learning projects grow beyond notebooks, one common challenge appears quickly: “It works on my system, but not on yours.”
This usually happens due to differences in Python versions, dependencies, or system libraries.
Docker solves this problem by packaging your ML application and its environment into a single, portable container.
In this project, we take a trained ML model and run it inside a Docker container so it behaves the same way everywhere.
Understanding the Problem:
Machine learning models don’t run in isolation. They depend on:
- Python version
- installed libraries (scikit-learn, numpy, joblib, etc.)
- system-level dependencies
When these differ across machines, deployments break.
Docker addresses this by:
- creating an isolated environment
- bundling dependencies with your code
- running the same way on any machine with Docker installed
This makes Docker a natural next step after building ML APIs and apps.
1. Install Required Packages:
Before containerizing, we ensure our ML API works locally.
pip install fastapi uvicorn scikit-learn joblib numpy
This keeps the workflow consistent with earlier AI with Python scripts.
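If you prefer reproducible installs, you can optionally pin versions in a requirements.txt file. The version numbers below are purely illustrative, not a requirement of this project:
fastapi==0.110.0
uvicorn==0.29.0
scikit-learn==1.4.2
joblib==1.4.0
numpy==1.26.4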
2. ML Inference Code (FastAPI):
We reuse the same FastAPI-based prediction code that loads a trained model and serves predictions.
from fastapi import FastAPI
from pydantic import BaseModel
import joblib
import numpy as np
model = joblib.load("iris_model.joblib")

app = FastAPI(title="Iris ML API")

class InputData(BaseModel):
    features: list

@app.post("/predict")
def predict(data: InputData):
    arr = np.array(data.features).reshape(1, -1)
    pred = model.predict(arr).tolist()
    return {"prediction": pred}
At this point, the ML API is already functional locally.
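The code above assumes a trained model saved as iris_model.joblib. If you don't already have one from the earlier scripts in this series, a minimal training sketch (file name train_model.py is just an example) could look like this:
# train_model.py - minimal sketch that produces iris_model.joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
import joblib

X, y = load_iris(return_X_y=True)          # 4 features per sample
model = LogisticRegression(max_iter=200)   # simple baseline classifier
model.fit(X, y)
joblib.dump(model, "iris_model.joblib")    # file name expected by main.py
With the model file in place, the API can be started locally with uvicorn main:app --reload before any Docker work.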
3. Create a Dockerfile:
The Dockerfile defines how Docker should build and run the ML application.
FROM python:3.10-slim
WORKDIR /app
RUN pip install fastapi uvicorn scikit-learn joblib numpy
COPY iris_model.joblib .
COPY main.py .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
This tells Docker to:
- use a lightweight Python image
- install required ML libraries
- copy the model and code
- start the FastAPI server
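For the COPY instructions to work, main.py and iris_model.joblib must sit next to the Dockerfile in the build context. A typical project layout (directory name is illustrative) is:
iris-ml-api/
├── Dockerfile
├── main.py            # FastAPI inference code from step 2
└── iris_model.joblib  # trained model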
4. Build the Docker Image:
We now convert the application into a Docker image.
docker build -t iris-ml-api .
This step packages the ML model, code, and dependencies together.
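You can confirm the image was created, and check its size, with:
docker images iris-ml-api
Because the Dockerfile starts from python:3.10-slim, the resulting image stays relatively small for an ML service.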
5. Run the Docker Container:
Once built, the container can be run anywhere.
docker run -p 8000:8000 iris-ml-api
The ML API is now live inside Docker and accessible on port 8000.
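In practice you will often want the container running in the background with a name so you can inspect logs and stop it later. The name iris-api below is just an example:
docker run -d --name iris-api -p 8000:8000 iris-ml-api
docker logs iris-api     # view the uvicorn server output
docker stop iris-api     # stop the container when finished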
6. Test the Dockerized ML API:
We send a prediction request just like before.
curl -X POST "http://127.0.0.1:8000/predict" \
-H "Content-Type: application/json" \
-d '{"features":[5.8,2.7,5.1,1.9]}'
The response confirms that the model works exactly the same inside Docker.
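The same check can be scripted from Python, which is handy for automated tests. This sketch assumes the requests package is installed on the host:
# test_api.py - quick sanity check against the running container
import requests

resp = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"features": [5.8, 2.7, 5.1, 1.9]},
)
print(resp.status_code)   # expect 200
print(resp.json())        # prediction as a list, e.g. {"prediction": [2]}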
Key Takeaways:
- Docker ensures ML applications run consistently across environments.
- ML models, code, and dependencies are packaged into a single container.
- The same FastAPI inference code works both locally and inside Docker.
- Dockerized ML apps are easier to deploy to cloud platforms.
- This step bridges the gap between ML development and real-world deployment.
Conclusion:
Dockerization is a crucial milestone in any machine learning deployment journey.
It removes environment-related issues and makes ML applications portable, reproducible, and scalable.
With Docker, your ML model is no longer tied to your local machine — it becomes a self-contained service ready for cloud deployment, orchestration, or integration into larger systems.
This sets the stage perfectly for the next steps in deployment: cloud hosting, scaling, and monitoring.