AW Dev Rethought

⚖️ There are two ways of constructing a software design: one way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. - C.A.R. Hoare

🧠 AI with Python – 📊 Logging & Monitoring ML Inputs/Outputs


Description:

Deploying a machine learning model doesn’t end when the API goes live.

Once real users start sending requests, visibility becomes critical — not just to ensure the system is working, but to understand how the model is actually being used.

In this project, we add basic logging and monitoring to a deployed ML API so we can track inputs, predictions, and inference behavior in real time.


Understanding the Problem

A model that performs well during development can behave very differently in production. Common issues include:

  • unexpected input values
  • unusual usage patterns
  • silent prediction errors
  • gradual changes in data distribution

Without logging, these problems remain invisible.

Logging model inputs and outputs provides the first layer of ML observability, helping you debug issues early and gather insights for future improvements.


1. Configure Logging for the ML API

We start by setting up Python’s built-in logging module.

import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s"
)

logger = logging.getLogger(__name__)

This configuration ensures every log record is timestamped and labeled with its severity level, making the output easy to scan.
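To see what that format string actually produces, here is a small self-contained check. It wires a logger to an in-memory buffer instead of the console; the `format_demo` logger name and the "model loaded" message are purely illustrative.

```python
import io
import logging

# Wire a logger to an in-memory buffer so we can inspect the formatted output
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(asctime)s - %(levelname)s - %(message)s"))

demo_logger = logging.getLogger("format_demo")
demo_logger.setLevel(logging.INFO)
demo_logger.addHandler(handler)

demo_logger.info("model loaded")
handler.flush()

# One line per record: "<timestamp> - INFO - model loaded"
record_line = buf.getvalue().strip()
```

Each record ends up as a single line beginning with the timestamp, which is exactly what we will see in the API's console output later on.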


2. Load the Model and Initialize the API

The model is loaded once at startup to avoid repeated disk reads.

import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

model = joblib.load("iris_model.joblib")
app = FastAPI(title="ML API with Logging")

class InputData(BaseModel):
    features: list[float]  # typed so Pydantic rejects non-numeric inputs

This setup mirrors a standard inference API, now enhanced with logging.
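Beyond element types, it can also help to check the feature vector's length before calling the model. A minimal sketch of such a check, assuming the iris model expects exactly 4 features (the `validate_features` helper is illustrative, not part of the API above):

```python
def validate_features(features, expected_length=4):
    """Reject feature vectors of the wrong length before they reach the model.

    expected_length=4 is an assumption: the iris model takes 4 features.
    """
    if len(features) != expected_length:
        raise ValueError(
            f"expected {expected_length} features, got {len(features)}"
        )
    return [float(x) for x in features]
```

Failing fast like this keeps malformed requests out of the logs of "silent prediction errors" described earlier.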


3. Log Inputs and Predictions During Inference

We log both the incoming request data and the model’s prediction.

@app.post("/predict")
def predict(data: InputData):
    # Lazy %-style formatting defers string building until the record is emitted
    logger.info("Received input features: %s", data.features)

    features = np.array(data.features).reshape(1, -1)
    prediction = model.predict(features).tolist()

    logger.info("Model prediction: %s", prediction)

    return {"prediction": prediction}

Each request now leaves a trace that can be inspected later.
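A natural next step is to also log how long each prediction takes. One way to sketch this, using only the standard library, is a small decorator; `predict_stub` below stands in for the real endpoint function:

```python
import functools
import logging
import time

logger = logging.getLogger(__name__)

def log_latency(fn):
    """Decorator that logs each call's duration in milliseconds."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000
        logger.info("%s took %.2f ms", fn.__name__, elapsed_ms)
        return result
    return wrapper

@log_latency
def predict_stub(features):
    # Placeholder for the real model call
    return [2]
```

With `@log_latency` applied above `@app.post(...)`'s function, every request would leave both a prediction trace and a latency trace.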


4. Viewing Logs in Real Time

When running the API:

uvicorn main:app

You’ll see logs like:

2025-01-10 10:30:12 - INFO - Received input features: [5.8, 2.7, 5.1, 1.9]
2025-01-10 10:30:12 - INFO - Model prediction: [2]

This makes debugging and monitoring immediate and intuitive.


5. Persisting Logs to a File (Optional)

For long-running services, logs can be written to a file instead of the console. Note that basicConfig is a no-op if logging was already configured, so we pass force=True (Python 3.8+) to replace the earlier setup.

logging.basicConfig(
    filename="inference.log",
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s",
    force=True,  # replace any earlier logging configuration
)

This creates a historical record of inference activity for analysis and audits.
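A single ever-growing file can become a problem for a service that runs for months. One common refinement, sketched here with the standard library's RotatingFileHandler (the temp-directory path is just for illustration):

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

# Illustrative path; a real service would use a dedicated log directory
log_path = os.path.join(tempfile.gettempdir(), "inference.log")

# Roll over at ~1 MB, keeping 3 old files (inference.log.1, .2, .3)
handler = RotatingFileHandler(log_path, maxBytes=1_000_000, backupCount=3)
handler.setFormatter(logging.Formatter("%(asctime)s - %(levelname)s - %(message)s"))

file_logger = logging.getLogger("inference")
file_logger.setLevel(logging.INFO)
file_logger.addHandler(handler)

file_logger.info("prediction logged")
handler.flush()
```

Rotation caps disk usage while still preserving a recent window of inference history for analysis.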


Why Logging Is Critical for ML Systems

Logging helps you:

  • understand real-world input patterns
  • detect outliers and abnormal requests
  • audit predictions for compliance or debugging
  • collect data for future retraining
  • build trust in deployed ML systems

This is the foundation for more advanced monitoring like drift detection and performance tracking.
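As a hint of where this leads, even logged inputs alone support simple anomaly checks. A minimal sketch, assuming the feature values have been parsed back out of the log file, flags values far from the mean (the 3-sigma threshold is a common but arbitrary choice):

```python
import statistics

def flag_outliers(values, threshold=3.0):
    """Return values more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > threshold * stdev]
```

Running this periodically over logged feature values is a first, crude step toward the drift detection mentioned above.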


Key Takeaways

  1. Logging provides visibility into real-world ML model behavior.
  2. Monitoring inputs and outputs helps detect anomalies early.
  3. Python’s built-in logging is sufficient for basic observability.
  4. Logs enable debugging, auditing, and continuous improvement.
  5. This step is essential before advanced monitoring and drift detection.

Conclusion

Logging and monitoring turn a deployed ML model from a black box into an observable system.

By tracking inputs and outputs, you gain insight into how your model performs in the real world — beyond test datasets and metrics.

This step lays the groundwork for advanced practices like data drift detection, model performance monitoring, and production-grade ML observability, making it a crucial milestone in any real-world ML deployment journey.

