6.3. Containers

A container is a lightweight, portable package that includes your application and everything it needs to run — code, libraries, dependencies, and configuration.
Docker is the most widely used container technology.

Instead of worrying about whether code will behave differently on your laptop, a server, or in the cloud, containers ensure that your application runs the same way everywhere.


Why Use Containers in Pricing & Analytics?

  • Consistency – no more “it worked on my machine” problems.
  • Portability – run the same model or pipeline locally, in testing, and in production without change.
  • Isolation – dependencies are separated, so one project’s setup won’t break another.
  • Scalability – containers can be replicated to handle large volumes (e.g. running impact analysis across millions of quotes).
  • Integration – containers slot neatly into CI/CD pipelines for smooth deployment.

Example in Pricing

Imagine you’ve built a GLM or LightGBM model for motor pricing:

  • You package the model, preprocessing logic, and dependencies into a Docker container.
  • That container can then be deployed into a rating engine API or batch pricing system.
  • Whether an analyst tests it on their laptop, IT deploys it to a cloud server, or a vendor runs it in production, the container ensures the model behaves identically.
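To make the packaging step concrete, here is a minimal sketch of the kind of scoring code that would go into the container: a preprocessing step plus a log-link GLM prediction. The factor names, capping rules, and coefficient values are purely illustrative, not a real rating structure.

```python
import math

# Toy GLM with a log link: premium = exp(linear predictor).
# Coefficients are illustrative only.
COEFFICIENTS = {"intercept": 5.5, "driver_age": -0.01, "vehicle_group": 0.05}

def preprocess(quote: dict) -> dict:
    """Minimal stand-in for the preprocessing logic (e.g. capping, encoding)."""
    return {
        "driver_age": min(max(quote["driver_age"], 17), 99),  # cap age to [17, 99]
        "vehicle_group": quote["vehicle_group"],
    }

def predict_premium(quote: dict) -> float:
    """Apply preprocessing, then exponentiate the linear predictor."""
    features = preprocess(quote)
    eta = COEFFICIENTS["intercept"] + sum(
        COEFFICIENTS[name] * value for name, value in features.items()
    )
    return round(math.exp(eta), 2)

print(predict_premium({"driver_age": 40, "vehicle_group": 10}))
```

Everything this script needs (Python version, packages, the model file) is exactly what the Dockerfile pins down, which is why the container behaves identically wherever it runs.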

Workflow with Docker

  1. Write your code – e.g. Python scripts for data prep and model training.
  2. Create a Dockerfile – specify Python version, required packages, and how to run the app.
  3. Build the container – package everything into a single Docker image.
  4. Run it anywhere – locally, on a server, or in the cloud.
  5. Deploy at scale – containers can be managed with orchestration tools like Kubernetes or Azure Container Apps.

Example Dockerfile (Python model)

```dockerfile
# Use a lightweight Python base image
FROM python:3.11-slim

# Set the working directory
WORKDIR /app

# Copy requirements and install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy model code into the container
COPY . .

# Run the app (e.g. a FastAPI service exposing the model)
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8080"]
```