5.3. Deployment

How you deploy machine learning models depends on the capabilities of your current infrastructure and the requirements of your pricing workflow. There are multiple approaches, each with trade-offs.


Offline / Batch Scoring

Offline or batch scoring means running models against datasets outside of a live system. It requires no live deployment and can be the simplest way to put ML into a pricing workflow, but it comes with trade-offs:

  • Good for generating classification files (e.g., postcode risk categories, vehicle classes).
  • Can feed into offline optimisation frameworks to produce relativities or scorecards.
  • Less responsive to changes in data; updates are only reflected when the batch is rerun.
  • Useful when productionising is not yet feasible or for experimental analyses.
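The classification-file idea above can be sketched as follows. This is a minimal illustration, not a real rating model: the `score_postcode` rule is a hypothetical stand-in for a fitted model, and the postcodes and frequencies are made up.

```python
# Minimal sketch of offline batch scoring: score a portfolio and write a
# postcode -> risk-category lookup file that rating software can consume.
# The scoring rule is a hypothetical stand-in for a real trained model.
import csv
import io

def score_postcode(avg_claim_frequency: float) -> str:
    """Hypothetical model output mapped to a risk category."""
    if avg_claim_frequency < 0.05:
        return "low"
    if avg_claim_frequency < 0.15:
        return "medium"
    return "high"

def batch_score(rows, out_file) -> None:
    """Score every row and write a classification file."""
    writer = csv.writer(out_file)
    writer.writerow(["postcode", "risk_category"])
    for postcode, freq in rows:
        writer.writerow([postcode, score_postcode(freq)])

# Illustrative portfolio data.
portfolio = [("AB1 2CD", 0.03), ("EF3 4GH", 0.11), ("IJ5 6KL", 0.22)]
buffer = io.StringIO()
batch_score(portfolio, buffer)
print(buffer.getvalue())
```

Because the output is just a file, the same run can feed an offline optimisation framework or a manual review step; the cost is that the categories go stale until the batch is rerun.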

PMML

Predictive Model Markup Language (PMML) is an XML-based standard for representing predictive models, allowing them to be imported into rating engines:

  • Requires converting your ML model into a PMML file.
  • Often needs additional manipulation for compatibility with specific rating software.
  • Provides a way to deploy models where native code execution is not possible.
  • Limitations:
      ◦ Specific library requirements for generating PMML.
      ◦ Deployment is often manual and brittle.
      ◦ Updates require repeating the conversion and deployment steps.
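To make the idea concrete, here is a deliberately simplified PMML-style fragment for a linear rating model, evaluated with only the standard library. Real PMML files (as produced by libraries such as sklearn2pmml or nyoka) carry additional required elements such as a DataDictionary and MiningSchema and a namespace declaration; this sketch keeps only the core point, which is that the model is declared as data a rating engine can evaluate without the original training code.

```python
# Simplified, illustrative PMML fragment for a linear model, parsed with the
# standard library. The coefficients and field names are made up.
import xml.etree.ElementTree as ET

PMML_DOC = """\
<PMML version="4.4">
  <RegressionModel functionName="regression">
    <RegressionTable intercept="100.0">
      <NumericPredictor name="vehicle_age" coefficient="-2.5"/>
      <NumericPredictor name="driver_age" coefficient="-0.8"/>
    </RegressionTable>
  </RegressionModel>
</PMML>
"""

def score(pmml_text: str, inputs: dict) -> float:
    """Evaluate the regression table: intercept + sum(coefficient * input)."""
    root = ET.fromstring(pmml_text)
    table = root.find(".//RegressionTable")
    total = float(table.get("intercept"))
    for predictor in table.findall("NumericPredictor"):
        total += float(predictor.get("coefficient")) * inputs[predictor.get("name")]
    return total

print(score(PMML_DOC, {"vehicle_age": 4, "driver_age": 30}))  # 100 - 10 - 24 = 66.0
```

The brittleness mentioned above follows directly from this design: every model update means regenerating the file, re-validating it against the rating engine's PMML support, and redeploying it.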

APIs

APIs (Application Programming Interfaces) provide a modern approach for live scoring and integration:

  • Typical approach outside of insurance for real-time model use.
  • Enables automated scoring and integration into pricing systems.
  • Allows models to be part of a modular, end-to-end pricing system.
  • Supports controlled deployment, with optional manual review before production use.
  • Facilitates monitoring, version control, and rollback capabilities.
  • Recommended for building flexible, scalable pricing infrastructure.
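A live-scoring API along these lines can be sketched with only the standard library. This is a toy, assuming a hypothetical `predict_premium` model and a single `/score` endpoint; a production deployment would typically use a web framework behind a proper server, with authentication, request logging, and model versioning.

```python
# Minimal sketch of a live-scoring API using only the standard library.
# The premium formula is a hypothetical stand-in for a deployed model.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def predict_premium(features: dict) -> float:
    """Hypothetical model: base rate plus a simple young-driver loading."""
    return 250.0 + max(0, 30 - features["driver_age"]) * 12.0

class ScoreHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/score":
            self.send_error(404)
            return
        length = int(self.headers["Content-Length"])
        features = json.loads(self.rfile.read(length))
        body = json.dumps({"premium": predict_premium(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence request logging in this sketch
        pass

# Port 0 asks the OS for a free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), ScoreHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = Request(
    f"http://127.0.0.1:{server.server_port}/score",
    data=json.dumps({"driver_age": 24}).encode(),
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    print(json.loads(resp.read()))  # {'premium': 322.0}
server.shutdown()
```

Because the model sits behind a stable HTTP interface, the pricing system never needs to know which model version, library, or language produced the score, which is what makes monitoring, versioning, and rollback tractable.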

Recommendations

  • Prefer automated deployment pipelines wherever possible (CI/CD).
  • Use APIs for live scoring to integrate models directly into pricing systems.
  • Keep offline scoring for experimentation, testing, or batch-driven use cases.
  • Maintain version control and logging for all deployments to ensure reproducibility and auditability.
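The last point can be sketched as a minimal deployment registry: every deployment is appended to an immutable log with a version label and artifact hash, and rollback just re-points the live version to an earlier entry. The class and its fields are hypothetical; a real setup would use a model-registry tool or a database rather than an in-memory list.

```python
# Hypothetical sketch of a versioned deployment log with rollback support.
import hashlib
from datetime import datetime, timezone

class ModelRegistry:
    def __init__(self):
        self.log = []           # append-only deployment history
        self.live_index = None  # index of the currently served version

    def deploy(self, version: str, artifact: bytes) -> None:
        """Record the deployment (version, content hash, timestamp) and go live."""
        self.log.append({
            "version": version,
            "sha256": hashlib.sha256(artifact).hexdigest(),
            "deployed_at": datetime.now(timezone.utc).isoformat(),
        })
        self.live_index = len(self.log) - 1

    def live_version(self) -> str:
        return self.log[self.live_index]["version"]

    def rollback(self) -> str:
        """Re-point to the previous version; the log itself is never mutated."""
        if self.live_index == 0:
            raise RuntimeError("no earlier version to roll back to")
        self.live_index -= 1
        return self.live_version()

registry = ModelRegistry()
registry.deploy("frequency-glm-1.0", b"model-bytes-v1")
registry.deploy("frequency-gbm-2.0", b"model-bytes-v2")
print(registry.live_version())  # frequency-gbm-2.0
print(registry.rollback())      # frequency-glm-1.0
print(len(registry.log))        # 2: history is preserved through rollback
```

Keeping the log append-only is what provides the auditability the recommendation asks for: you can always reconstruct which model version priced which policy, even after a rollback.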