Introduction to machine learning operations (MLOps)

MLOps stands for machine learning operations: a set of practices for increasing the quality, simplifying the management, and automating the deployment of machine-learning models in large-scale production environments.

As more companies invest in artificial intelligence and machine learning, a gap has opened between the data science teams developing machine-learning models and the DevOps teams operating the applications that power those models. As of today, only 15% of companies have deployed AI across their entire business, and 75% of machine-learning models intended for production are never used because of issues with deployment, monitoring, management, and governance. The result is wasted time for the engineers and data scientists working on the models, a net loss on the company's investment, and a general lack of trust that ML models can help the company grow, even when they can.

Our model performance monitoring gives data scientists and MLOps practitioners unprecedented visibility into their machine-learning applications by monitoring the behavior and effectiveness of models in production. It also improves collaboration with DevOps teams, feeding into a continuous process of development, testing, and operational monitoring.
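In practice, monitoring model effectiveness often means comparing a production distribution (of a score or a feature) against a training-time baseline. As an illustrative sketch of one widely used drift signal, here is the population stability index (PSI) in plain Python. This is a generic technique, not New Relic's internal implementation, and the drift thresholds mentioned are common conventions:

```python
import math

def psi(baseline, production, bins=10):
    """Population stability index between a baseline (training) sample and a
    production sample of a model score or feature.

    Values near 0 mean the distributions match; > 0.2 is commonly treated as
    significant drift. These thresholds are conventions, not New Relic settings.
    """
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0
    eps = 1e-6  # floor for empty buckets, avoids log(0)

    def bucket_fractions(sample):
        counts = [0] * bins
        for x in sample:
            # Clamp production values that fall outside the baseline range.
            i = max(0, min(int((x - lo) / width), bins - 1))
            counts[i] += 1
        return [max(c / len(sample), eps) for c in counts]

    b, p = bucket_fractions(baseline), bucket_fractions(production)
    return sum((pi - bi) * math.log(pi / bi) for bi, pi in zip(b, p))
```

A monitoring job might compute `psi(training_scores, last_hour_scores)` on a schedule and alert when the value crosses a threshold.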

How to monitor your machine learning models

To use MLOps within applied intelligence, you have a few different options:

  1. Partnerships: New Relic has partnered with seven MLOps vendors that offer specific use cases and ML monitoring capabilities. Partners give you access to curated, out-of-the-box performance dashboards and other observability tools for instant visibility into your models.

  2. Integrations: New Relic has also partnered with Amazon SageMaker, giving you a view of performance metrics and expanding access to observability for ML engineers and data science teams. The integration makes it easier to develop, test, and monitor ML models in production by breaking down the silos between AI/ML, DevOps, and site reliability engineers (SREs). Read more on our Amazon SageMaker integration.

  3. Bring your own data (BYO): If you don't want to sign up for another license, or if you don't use Amazon SageMaker, you can easily bring your own ML model telemetry into New Relic and start getting value from your ML model data. In just a few minutes, you can get feature distributions, summary statistics, and prediction distributions. Read more on BYO in our docs.
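At its simplest, bringing your own telemetry means sending custom events that describe each inference. Here is a minimal, hedged sketch that posts inference events to the New Relic Event API; the `MLInference` event type and the attribute names are illustrative choices, not a required schema, and depending on your key type the auth header may differ (check the Event API docs for your account):

```python
import json
import urllib.request

def build_inference_event(model_name, model_version, prediction, features):
    """Shape one inference as a custom event (illustrative schema)."""
    event = {
        "eventType": "MLInference",  # any custom event type name you choose
        "modelName": model_name,
        "modelVersion": model_version,
        "prediction": prediction,
    }
    # Flatten features so each one becomes a queryable attribute.
    event.update({f"feature.{k}": v for k, v in features.items()})
    return event

def send_events(events, account_id, license_key):
    """POST a batch of custom events to the New Relic Event API."""
    url = f"https://insights-collector.newrelic.com/v1/accounts/{account_id}/events"
    req = urllib.request.Request(
        url,
        data=json.dumps(events).encode(),
        headers={"Api-Key": license_key, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Once events like these are flowing, you can chart prediction and feature distributions with NRQL queries over the custom event type.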

To start measuring machine learning model performance in minutes using any of these options, check out the MLOps integrations quickstarts, and click on Machine learning ops.

Copyright © 2022 New Relic Inc.
