ModelOps is a set of capabilities focused on the governance and life cycle management of artificial intelligence (AI) and decision models. The related terms MLOps and AIOps are sometimes used interchangeably with ModelOps, but both are narrower in scope: MLOps focuses on operationalizing machine learning models specifically, while AIOps refers to AI for IT operations. ModelOps, by contrast, covers the operationalization of all AI and decision models. This includes models based on:
- Machine learning (ML)
- Knowledge graphs
- Rules
- Optimization
- Linguistic and agent-based models
The core capabilities of ModelOps include continuous integration/continuous delivery (CI/CD) integration, model development environments, champion-challenger testing, model versioning, a model store, and rollback.
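Champion-challenger testing, for example, routes a small slice of live traffic to a candidate ("challenger") model and promotes it only if it outperforms the current ("champion") model on enough observed traffic. A minimal Python sketch of the idea (all class and method names here are illustrative, not taken from any specific ModelOps product):

```python
import random


class ChampionChallenger:
    """Toy champion-challenger router: sends a fraction of requests to
    the challenger model and tracks prediction accuracy for both roles."""

    def __init__(self, champion, challenger, challenger_share=0.1):
        self.models = {"champion": champion, "challenger": challenger}
        self.challenger_share = challenger_share
        # role -> [correct predictions, total predictions]
        self.stats = {"champion": [0, 0], "challenger": [0, 0]}

    def predict(self, x):
        """Pick a role for this request and return (role, prediction)."""
        role = ("challenger" if random.random() < self.challenger_share
                else "champion")
        return role, self.models[role](x)

    def record(self, role, correct):
        """Record ground-truth feedback for a prediction made by `role`."""
        self.stats[role][0] += int(correct)
        self.stats[role][1] += 1

    def accuracy(self, role):
        correct, total = self.stats[role]
        return correct / total if total else 0.0

    def maybe_promote(self, min_samples=100):
        """Promote the challenger if it beats the champion on enough traffic."""
        if (self.stats["challenger"][1] >= min_samples
                and self.accuracy("challenger") > self.accuracy("champion")):
            self.models["champion"] = self.models["challenger"]
            self.stats = {"champion": [0, 0], "challenger": [0, 0]}
            return True
        return False
```

In a real deployment the promotion decision would typically use a statistical significance test and business KPIs rather than raw accuracy, but the routing/compare/promote loop is the same.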
ModelOps is a variation of DevOps aimed at developing predictive analytics at scale and enabling the continuous delivery and efficient development and deployment of models. Through ModelOps, an organization can provide regular updates and deployments as its data and AI models are managed, scaled, monitored, and retrained for production, and redeployed as the organization's needs change. ModelOps also addresses the challenges an organization faces when deploying a model into production, which can include:
- Keeping an analytics model compatible as it moves from the creation environment to the production environment
- Developing a portable model
- Lock-in to monolithic software platforms, which can restrict what an organization can do or offer
- Handling larger volumes of data and data transport modes as a model progresses to production
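A versioned model store with rollback is one common mechanism for the portability and compatibility concerns above: every model version is recorded together with metadata about how it was built, and production can revert to the previously deployed version if a new one misbehaves. A toy sketch, assuming pickle-serializable models (all names are hypothetical; a real store would persist to durable storage rather than memory):

```python
import pickle
import time


class ModelStore:
    """Toy versioned model store: each save records a serialized model
    plus metadata, and rollback restores the previously deployed version."""

    def __init__(self):
        self._versions = []        # list of (metadata dict, pickled bytes)
        self._deploy_history = []  # stack of deployed version ids

    def save(self, model, metadata):
        """Store a new model version; returns its version id."""
        record = (dict(metadata, saved_at=time.time()), pickle.dumps(model))
        self._versions.append(record)
        return len(self._versions) - 1

    def deploy(self, version):
        """Mark a saved version as the current production model."""
        self._deploy_history.append(version)

    def rollback(self):
        """Revert production to the previously deployed version, if any."""
        if len(self._deploy_history) > 1:
            self._deploy_history.pop()

    def production_model(self):
        """Deserialize and return the current production model."""
        _meta, blob = self._versions[self._deploy_history[-1]]
        return pickle.loads(blob)
```

Keeping deployment as a history stack, rather than a single pointer, is what makes rollback a one-step operation.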
ModelOps, and the team responsible for it, is also seen as a way to improve communication between data scientists, data engineers, application owners, and infrastructure owners. ModelOps systems with dashboards or reporting can also help leaders and program managers understand how teams are deploying and using AI across an enterprise.
Common problems addressed by ModelOps
A use case for ModelOps is the financial sector, where time-series models are subject to strict rules on bias and auditability and fairness and robustness are essential. ModelOps can automate the life cycle of such models in production: designing the model life cycle, governing and monitoring models for bias and for technical or business anomalies, and updating models without disrupting the applications that depend on them. In this setting, ModelOps ties these activities together while maintaining business performance and ensuring risk control and compliance.
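The monitoring step is often implemented with a drift metric such as the population stability index (PSI), which compares the distribution of a model's scores at training time with the distribution seen in production; by convention, a PSI above roughly 0.2 is treated as significant drift and can trigger an alert or retraining. A self-contained sketch (function name and bin settings are illustrative):

```python
import math


def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline ('expected') and a production ('actual')
    sample of model scores. Larger values mean larger distribution shift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against all-equal inputs

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(values), 1e-4) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Identical distributions yield a PSI of zero; a production sample concentrated in a different score range pushes the index well above the 0.2 alert threshold.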
Benefits of ModelOps
A 2018 Gartner study found that 37% of respondents had deployed AI in some form, but that enterprises were still far from fully implementing it, with deployment challenges often the reason. Independent analyst firm Forrester reported similar findings: data scientists regularly complained that their models were only sometimes, or never, deployed.
Following these and similar findings, ModelOps was developed to close the gap between model deployment and model governance: to ensure that all models run in production, to align models with technical and business KPIs, and to manage risk throughout. ModelOps was described as a programmatic solution for AI-aware staged development that would enable model versions to match business applications and would incorporate concepts such as model monitoring, drift detection, and active learning. This research was presented in December 2018 by Waldemar Hummer and Vinod Muthusamy of IBM Research AI.
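Of the concepts mentioned, active learning is commonly realized through uncertainty sampling: the production examples the model is least confident about are queued for human labeling and fed back into retraining. A minimal illustration for a binary classifier (hypothetical function name; probabilities are the model's predicted probability of the positive class):

```python
def select_for_labeling(probabilities, budget=5):
    """Uncertainty sampling: return the indices of the `budget` examples
    whose predicted probabilities are closest to 0.5, i.e. where the
    model is least confident and a human label is most informative."""
    ranked = sorted(range(len(probabilities)),
                    key=lambda i: abs(probabilities[i] - 0.5))
    return ranked[:budget]
```

The selected examples are then labeled and added to the training set, so each retraining cycle spends annotation effort where the model is weakest.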
In a 2019 paper presented at the IEEE International Conference on Cloud Engineering (IC2E), Waldemar Hummer, Vinod Muthusamy, Thomas Rausch, Parijat Dube, and Kaoutar El Maghraoui proposed ModelOps as a cloud-based framework and platform for the end-to-end development and life cycle management of artificial intelligence applications. They suggested the framework would extend the principles of software life cycle management to enable automation, trust, reliability, traceability, quality control, and reproducibility of AI model pipelines.
ModelOp, Inc. published the first guide to ModelOps methodology in March 2020. The publication was intended to provide an overview of the capabilities of ModelOps, with technical and organizational requirements for implementing ModelOps practices. In October 2020, ModelOp launched ModelOp.io, a hub for ModelOps and MLOps resources.