
Iris MLOps Pipeline

Production-style CI/CD for an ML API with CML, Docker, Kubernetes, and MLflow.

Automated testing, container builds, deployment patterns, and model lifecycle management.

TL;DR

  • End-to-end pipeline: train, evaluate, register, deploy.
  • MLflow tracking for repeatable experiments.
  • Containerized serving for consistent rollout.

Artifacts

Pipeline diagram

Train -> evaluate -> register -> deploy.

Model registry

Tracked runs and promotion to production.

Context

Manual ML deployments made it hard to reproduce results and ship updates quickly.

Problem

Inconsistent environments and ad-hoc releases slowed iteration and raised risk.

Approach

  • Define pipeline stages with clear contracts.
  • Track experiments in MLflow and register models (see the sketch after this list).
  • Package inference in Docker with pinned dependencies.
  • Deploy via Kubernetes manifests.
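
A minimal sketch of the tracking and registration stage, assuming a scikit-learn classifier; the experiment and registry names ("iris-mlops", "iris-clf") are placeholders, not the project's actual configuration.

    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Train/test split on the Iris dataset.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    mlflow.set_experiment("iris-mlops")  # placeholder experiment name
    with mlflow.start_run():
        model = LogisticRegression(max_iter=200)
        model.fit(X_train, y_train)

        # Log parameters and metrics so runs stay comparable.
        accuracy = accuracy_score(y_test, model.predict(X_test))
        mlflow.log_param("max_iter", 200)
        mlflow.log_metric("accuracy", accuracy)

        # Log and register the model so it can later be promoted to production.
        mlflow.sklearn.log_model(model, "model", registered_model_name="iris-clf")

Registering inside the run ties each model version to the exact parameters and metrics that produced it, which is what makes later promotion auditable.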

Tradeoffs

  • Invested in infra upfront to reduce future manual work.
  • Chose Kubernetes for reproducibility over simpler VM deploys.

Testing and Reliability

  • Unit tests for data and feature checks.
  • Smoke tests for serving endpoints (see the sketch below).
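
A sketch of the endpoint smoke tests, assuming an HTTP service exposing /health and /predict routes; the URL, routes, and payload shape are illustrative, not the project's actual contract.

    import os

    import requests

    BASE_URL = os.environ.get("SERVICE_URL", "http://localhost:8000")

    def test_health_endpoint_responds():
        # A healthy container should answer quickly with HTTP 200.
        resp = requests.get(f"{BASE_URL}/health", timeout=5)
        assert resp.status_code == 200

    def test_predict_returns_a_class():
        # One Iris sample (sepal/petal measurements) should yield a label.
        payload = {"features": [5.1, 3.5, 1.4, 0.2]}
        resp = requests.post(f"{BASE_URL}/predict", json=payload, timeout=5)
        assert resp.status_code == 200
        assert "prediction" in resp.json()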

Deployment and Ops

  • Automated rollouts via GitHub Actions (a post-deploy gate is sketched after this list).
  • Rollback-ready deployment manifests.
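
A sketch of the kind of post-deploy gate the Actions job could run after applying the manifests; the deployment name "iris-api" is a placeholder, and the rollback itself would be a separate kubectl rollout undo step triggered on failure.

    import subprocess
    import sys

    DEPLOYMENT = "deployment/iris-api"  # placeholder deployment name

    def main() -> int:
        # Block until the rollout settles; a non-zero exit fails the CI job,
        # which can then fall back to `kubectl rollout undo`.
        result = subprocess.run(
            ["kubectl", "rollout", "status", DEPLOYMENT, "--timeout=120s"],
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            print(result.stderr, file=sys.stderr)
            return 1
        print(result.stdout)
        return 0

    if __name__ == "__main__":
        sys.exit(main())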

Outcome

  • Repeatable releases and cleaner handoffs.
  • Faster iteration for model updates.
  • Clear audit trail for experiments.

If I had two more weeks

  • Add monitoring dashboards for drift and latency.
  • Introduce canary deployments for new models.