A good summary of Machine Learning Ops tooling by Todd Morrill.
TL;DR: I recently attended the MLOps NYC conference, where I explored some neat new tools for building and managing machine learning models.
Anyone in the industry knows how confusing it can be to distinguish between a data scientist, data engineer, model ops engineer, research scientist, and the list goes on. Titles aside, getting models into production (i.e. serving an ML model that provides predictions) is frankly where the money is. Models don’t drive metrics by sitting in a Jupyter notebook on a laptop. However, going from a pickle file or .h5 file to a CI/CD pipeline that you can rely on for rapid model updates is a huge lift. Everyone always points to this Google paper to explain the complexity of deploying machine learning models. The topic of the conference should now be clear: develop pipelines that let you train and deploy models in a robust, repeatable, and automated fashion.
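To make that gap concrete, here is a minimal sketch of the "easy" end of the spectrum: loading a pickled model and serving predictions over HTTP. The file name (model.pkl), the Flask setup, and the payload shape are all placeholders for illustration; everything the Google paper warns about (validation, monitoring, rollback, retraining) sits on top of something like this.

```python
# Minimal sketch: serve a pickled scikit-learn-style model over HTTP.
# "model.pkl" and the payload format are assumptions for illustration only.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the trained model once at startup (hypothetical path).
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON payload like {"features": [[5.1, 3.5, 1.4, 0.2]]}
    payload = request.get_json(force=True)
    predictions = model.predict(payload["features"])
    return jsonify({"predictions": predictions.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```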
I went to the training track, which covered Kubeflow, MLflow, SageMaker, and a number of other bespoke tools. I also discovered Dask and RAPIDS while I was there. I’ll attempt to give a quick overview of each of these tools.
https://toddmorrill.github.io/blog/2019/10/05/MLOps-tooling