Global MLOps and ML tools landscape — v1.0 (January 2021)
Executive Summary
The term MLOps is — for anyone in the Artificial Intelligence field — the one magic word to solve them all. It encompasses all Machine Learning related tasks, from managing, processing, and visualizing data and running and tracking experiments, to putting the resulting models into production, ideally at scale, compliantly and securely. It defines the process of operationalizing ML activities to create AI-based applications and services.
At MLReef we operate in this market, but we had only a rough idea of what other tools, platforms and services exist. To find out, we conducted a global search for relevant providers, finding more than 300 in total! We then narrowed the results to active projects and listed only those specifically targeting Machine Learning tasks and objectives.
The result is the first version of our global MLOps platforms and ML tools landscape.
Please reach out to us for feedback, corrections or additions at: hello@mlreef.com
Key statistics
- More than 300 platforms and tools were analysed, of which around 220 were active projects.
- In Europe, only 4 MLOps platforms could be identified: MLReef, Valohai, Hopsworks & Polyaxon.
- Most platforms and tools cover 2–3 ML tasks
Providers vs ML life cycle
The first view was to see which tools and platforms offer services for individual tasks and processes within the ML life cycle. To keep a better overview, we split the life cycle into 4 major ML areas:
Data Management: All ML-focused tasks to explore, manage and create data(sets).
Modelling: All pipeline-related tasks from data processing to trained model validation.
Continuous Deployment: All tasks related to the “Ops” part of MLOps — launching, monitoring and securing models in production.
Computing Management: All activities and functions related to computing and resource management.
Note: One provider can be found in several ML tasks. We identified their offerings as best we could by screening their websites, demo videos and hands-on testing.
MLOps platform comparison
We wanted to go deeper and analyze tools that position themselves as holistic platforms covering a broader spectrum of ML tasks. This differentiation needed to be based on identifiable and specific metrics, to avoid arbitrary selection. In our view, an MLOps platform needed to:
- at least cover 5 tasks from the ML life cycle,
- be present in at least 2 main areas (e.g. data management + modelling),
- position itself as an MLOps platform.
Based on these criteria, only 19 of the 220+ identified platforms and tools qualified as MLOps platforms.
Review characteristics
The next intriguing question was: what separates these platforms from one another?
We decided that a good approach would be not to list specific features and functions, but rather to take more generic perspectives on their offerings. We propose the following “soft” criteria which, in our view, define a great MLOps platform:
Holistic: An MLOps platform needs to cover a broad selection of ML tasks. The rule for bars awarded: one bar for covering one main area, two for covering two, and three for covering three main areas.
Collaborative: Collaboration is a key task in MLOps and is increasingly important as ML solutions become more embedded in organizations. We define collaboration as the possibility to share and work concurrently on data processing pipelines, modelling and runtime environment management. The rule: one bar for each of the three aspects covered.
Reproducible: Reproducing the entire ML value chain is very important, as it helps to understand predictions and increases confidence in the deployed model. We defined reproducibility as tracking and versioning of data, source code & hyperparameters, and environment configuration. The rule: one bar for each of these topics (data, code & hyperparameters, runtime environment).
Community: We see access to community content as increasingly relevant, as more and more datasets, code-based functions and libraries become available. GitHub has been a good source for code, as have Kaggle and other code / project hosting platforms. Nevertheless, we would like to see a direct use of community synergies in an MLOps platform. The rule: one bar for each sharable ML element within the MLOps platform (outside of the team!).
Hosting: Where can one use the platform? Cloud only, self-hosted or only locally. The rule: one bar for each of the main three possibilities.
Data connectivity: This section describes the data acquisition possibilities within the platforms. We identified four: on-platform, via data sources (data connectors), direct APIs to third-party applications and direct access to databases. The rule: one bar for each of the aforementioned data connectivity types (limited to 3 bars).
Level of expertise: We see ease of use and general operability of a platform as increasingly important, as more newcomers enter the ML market. This last characteristic is more difficult to assess as it relates to many different aspects (UI/UX, workflow mechanics, help documents, general concepts, etc.). This part is more open for discussion, but we tried to be as objective as possible. The rule: one bar for experts only, two bars for experts and advanced users and three bars for experts, advanced users and beginners.
In-depth review (MLOps platforms)
The following section takes a closer look at the MLOps platforms listed above. In a future landscape analysis, we will also include sections for every tool and platform we found (but this was a bit too much for now!).
MLReef
Description: MLReef is an open source MLOps platform that provides hosting for Machine Learning projects. Build on re-usable ML modules made by your team and by the community, and orchestrate concurrent workflows to create ML applications faster, more efficiently and with higher quality.
Open source repo: https://gitlab.com/mlreef/mlreef or https://github.com/MLReef/mlreef
Databricks
Description: One open, simple platform to store and manage all of your data and support all of your analytics and AI use cases.
Open source repo: https://github.com/databricks
H2O.ai
Description: H2O.ai is the creator of H2O the leading open source machine learning and artificial intelligence platform trusted by data scientists across 14K enterprises globally. Our vision is to democratize intelligence for everyone with our award winning “AI to do AI” data science platform, Driverless AI.
Open source repo: https://github.com/h2oai
Iguazio
Description: The Iguazio Data Science Platform transforms AI projects into real-world business outcomes. Accelerate and scale development, deployment and management of your AI applications with MLOps and end-to-end automation of machine learning pipelines.
Open source repo: https://github.com/iguazio/
Algorithmia
Description: Algorithmia is machine learning operations (MLOps) software that manages all stages of the ML lifecycle within existing operational processes. Put models into production quickly, securely, and cost-effectively.
Open source repo: https://github.com/algorithmiaio
Hopsworks
Description: Hopsworks 2.0 is an open-source platform for the development and operation of ML models, available as an on-premises platform (open-source or Enterprise version) and as a managed platform on AWS and Azure.
Open source repo: https://github.com/logicalclocks/hopsworks
Allegro AI
Description: End-to-end enterprise-grade platform for data scientists, data engineers, DevOps and managers to manage the entire machine learning & deep learning product life-cycle.
Open source repo: https://github.com/allegroai
Valohai
Description: Train, Evaluate, Deploy, Repeat. Valohai is the only MLOps platform that automates everything from data extraction to model deployment.
Open source repo: https://github.com/valohai
Amazon SageMaker
Description: Amazon SageMaker helps data scientists and developers to prepare, build, train, and deploy high-quality machine learning (ML) models quickly by bringing together a broad set of capabilities purpose-built for ML.
Open source repo: https://github.com/aws/amazon-sagemaker-examples
Pachyderm
Description: Hosted and managed Pachyderm for those who want everything Pachyderm has to offer, without the hassle of managing infrastructure yourself. With Hub, you can version data, deploy end-to-end pipelines, and more. All with little to no setup, and it’s free!
Open source repo: https://github.com/pachyderm/pachyderm
Dataiku
Description: Dataiku is the platform democratizing access to data and enabling enterprises to build their own path to AI in a human-centric way. Note: They are limited to tabular data only.
Open source repo: https://github.com/dataiku
Alteryx
Description: From Data to Discoveries to Decisions — In Minutes. Analytics that automate and optimize business outcomes.
Open source repo: https://github.com/alteryx
Domino Data Lab
Description: Let your data science team use the tools they love. And bring them together in an enterprise-strength platform that enables them to spend more time solving critical business problems.
Open source repo: https://github.com/dominodatalab
Google Cloud Platform
Description: Avoid vendor lock-in and speed up development with Google Cloud's commitment to open source, multicloud, and hybrid cloud. Enable smarter decision making across your organization.
Open source repo: https://github.com/GoogleCloudPlatform/
OpenML
Description: As machine learning is enhancing our ability to understand nature and build a better future, it is crucial that we make it transparent and easily accessible to everyone in research, education and industry. The Open Machine Learning project is an inclusive movement to build an open, organized, online ecosystem for machine learning.
Open source repo: https://github.com/openml
MLflow
Description: MLflow is a platform to streamline machine learning development, including tracking experiments, packaging code into reproducible runs, and sharing and deploying models. MLflow offers a set of lightweight APIs that can be used with any existing machine learning application or library (TensorFlow, PyTorch, XGBoost, etc), wherever you currently run ML code (e.g. in notebooks, standalone applications or the cloud).
Open source repo: https://github.com/mlflow
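To make the above concrete, here is a minimal sketch of the kind of lightweight tracking call the MLflow API offers (the parameter, metric and artifact names are purely illustrative):

```python
import mlflow  # pip install mlflow

# Record one training run: its hyperparameters, results and an artifact.
with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.01)   # illustrative hyperparameter
    mlflow.log_metric("accuracy", 0.93)       # illustrative evaluation result
    mlflow.log_artifact("model.pkl")          # any local file, e.g. a serialized model
```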
SAS
Description: Solve the most complex analytical problems with a single, integrated, collaborative solution — now with its own automated modeling API.
Open source repo: https://github.com/sassoftware
Polyaxon
Description: Reproduce, automate, and scale your data science workflows with production-grade MLOps tools.
Open source repo: https://github.com/polyaxon
Microsoft Azure
Description: Limitless data and analytics capabilities. Yes, limitless. Get unmatched time to insight, scale, and price-performance on the cloud built for data and analytics.
Open source repo: https://github.com/Azure
All tools and platforms listed
Below we list all the tools and platforms we found during our research.
Data Management
This main area within the ML life cycle focuses on managing the data. We decided to have it as a separate area as there are many aspects to it that lie outside of the “Modelling” area.
Data exploration and management
Tools and platforms that help you manage, explore, store and organize your data.
[Algorithmia](https://algorithmia.com/)
[Alluxio](https://www.alluxio.io/)
[Amazon Redshift](https://aws.amazon.com/redshift)
[Amundsen](https://www.amundsen.io/)
[Cohesity](https://www.cohesity.com/)
[Aparavi](https://www.aparavi.com/)
[AtScale](https://www.atscale.com/)
[Cazena](https://cazena.com/)
[Cloudera](https://www.cloudera.com/)
[Clearsky](https://stratsolutions.com/clearsky-data/)
[Databricks](https://databricks.com/)
[Datagrok](https://datagrok.ai/)
[Dataiku](https://www.dataiku.com/)
[Delta Lake](https://delta.io/)
[Datera](https://datera.io/)
[Dremio](https://www.dremio.com/)
[Druid](https://druid.apache.org/)
[Elastifile](https://www.elastifile.com/)
[Erwin](https://erwin.com/)
[Excelero](https://www.excelero.com/)
[Fluree](https://flur.ee/)
[Gemini](https://www.geminidata.com/)
[Hammerspace](https://hammerspace.com/)
[Hudi](https://hudi.apache.org/)
[HYCU](https://www.hycu.com/)
[Imply](https://imply.io/)
[Komprise](https://www.komprise.com/)
[Kyvos](https://www.kyvosinsights.com/)
[MLReef](https://about.mlreef.com/)
[Microsoft Azure](https://azure.microsoft.com/)
[Milvus](https://www.milvus.io/)
[Octopai](https://www.octopai.com/)
[Openml](https://www.openml.org/)
[Parquet](https://parquet.apache.org/)
[Pilosa](https://www.pilosa.com/)
[Presto](https://prestodb.io/)
[Qri](https://qri.io/)
[Rubrik](https://www.rubrik.com)
[Spark](http://spark.apache.org/)
[Tamr](https://www.tamr.com/)
[Waterline Data](https://www.waterlinedata.com/)
[Vearch](https://vearch.github.io/)
[Vexata](https://www.vexata.com/)
[Yellowbrick](https://www.yellowbrick.com/)
Data labeling
Tools that will support your effort in labeling your data to create training datasets.
[Amazon SageMaker — Data Labeling](https://aws.amazon.com/sagemaker/groundtruth/what-is-data-labeling/)
[Appen](https://appen.com/)
[Dataturks](http://dataturks.com/)
[Defined Workflows](https://www.definedcrowd.com/)
[Doccano](http://doccano.herokuapp.com/)
[Figure Eight](https://f8-federal.com/)
[iMerit](https://imerit.net/)
[Labelbox](https://labelbox.com/)
[Prodigy](https://prodi.gy/)
[Playment](https://playment.io/)
[Scale](https://scale.com/)
[Segments](https://segments.ai/)
[Snorkel](https://www.snorkel.org/)
[Supervisely](https://docs.supervise.ly/)
Data streaming
Data streaming services and tools for loading large amounts of data directly into data pipelines. A short sketch of the basic idea follows the list below.
[Amazon Kinesis](https://aws.amazon.com/kinesis)
[Alluxio](https://www.alluxio.io/)
[Ares DB](https://github.com/uber/aresdb)
[Confluent](https://www.confluent.io/)
[Flink](https://flink.apache.org/)
[Google Cloud Dataflow](https://cloud.google.com/dataflow/)
[Hudi](https://hudi.apache.org/)
[Kafka](http://kafka.apache.org/)
[Microsoft Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/)
[Storm](http://storm.apache.org/)
[Striim](https://www.striim.com/)
[Valohai](https://valohai.com/)
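As a reference for what this category covers, here is a minimal sketch using the open source kafka-python client (the broker address, topic name and payload are assumptions):

```python
from kafka import KafkaProducer, KafkaConsumer  # pip install kafka-python

# Push records into a topic that a downstream data pipeline consumes.
producer = KafkaProducer(bootstrap_servers="localhost:9092")   # assumed local broker
producer.send("raw-events", b'{"sensor": 1, "value": 0.42}')   # hypothetical topic/payload
producer.flush()

# Read the stream back, e.g. as the entry point of a training-data pipeline.
consumer = KafkaConsumer("raw-events", bootstrap_servers="localhost:9092",
                         auto_offset_reset="earliest", consumer_timeout_ms=1000)
for message in consumer:
    print(message.value)
```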
Data version control
Tools and platforms offering version control for data. This is especially relevant, as data is an integral part of a model's performance. Reviewing data changes and data governance are essential for full reproducibility. A short sketch follows the list below.
[Dagshub](https://dagshub.com)
[Databricks](https://databricks.com/)
[Dataiku](https://www.dataiku.com/)
[Dolt](https://www.dolthub.com/)
[DVC](https://dvc.org/)
[Floydhub](https://www.floydhub.com/)
[MLReef](https://about.mlreef.com/)
[Pachyderm](https://pachyderm.com/)
[Qri](https://qri.io/)
[Waterline Data](https://www.waterlinedata.com/)
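As one example of how data versioning looks in practice, here is a minimal sketch using DVC's Python API (the repository URL, file path and revision tag are placeholders):

```python
import dvc.api  # pip install dvc

# Read one specific, versioned revision of a dataset tracked with DVC.
with dvc.api.open(
    "data/train.csv",                           # path inside the repository (placeholder)
    repo="https://github.com/example/project",  # Git repo that tracks the data (placeholder)
    rev="v1.2",                                 # tag or commit pinning the data version (placeholder)
) as f:
    print(f.readline())  # e.g. inspect the header of that exact data version
```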
Data privacy
Data privacy covers anonymization, encryption, highly secure data storage and other mechanisms to keep data private.
[Amnesia](https://amnesia.openaire.eu/)
[AirCloak](https://aircloak.com/)
[Data Anon](https://dataanon.github.io/data-anon/)
[Celantur](https://www.celantur.com/)
[Mostly AI](https://www.mostly.ai/)
[PySyft](https://github.com/OpenMined/PySyft)
[Tumult](https://www.tmlt.io/)
Data quality checks
Mechanisms to ensure healthy data. A brief sketch follows the list below.
[Arize](https://arize.com/)
[Great Expectations](https://greatexpectations.io/)
[Naveego](https://www.naveego.com/)
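To illustrate the category, here is a minimal sketch using Great Expectations' pandas-based API (column names are hypothetical, and the library's API has changed across versions):

```python
import pandas as pd
import great_expectations as ge  # pip install great_expectations

# Wrap a plain DataFrame so expectations can be evaluated against it.
df = ge.from_pandas(pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 12.5, None]}))

# Declare what "healthy" data looks like and check it.
print(df.expect_column_values_to_not_be_null("id"))
print(df.expect_column_values_to_be_between("amount", min_value=0, max_value=1000))
```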
Modelling
This main area within the ML life cycle focuses on creating ML models. It includes all steps directly connected with creating models, from preparing data and feature engineering to experiment tracking and model management. One could say this phase is where all the magic happens.
Notebook / ML code management
Tools and platforms that help you manage, explore, store and organize your notebooks or your ML operations. We explicitly did not include SCM platforms, such as GitHub or GitLab, as they are not specifically focused on ML (although they are perfectly capable of hosting these functions).
[Amazon Sagemaker](https://aws.amazon.com/sagemaker/)
[Dagshub](https://dagshub.com)
[Databricks](https://databricks.com/)
[Dataiku](https://www.dataiku.com/)
[Deepnote](https://deepnote.com/)
[Domino Data Labs](https://www.dominodatalab.com/)
[Floydhub](https://www.floydhub.com/)
[Google Colab](https://colab.research.google.com/)
[H2O](https://www.h2o.ai/)
[Kaggle](https://kaggle.com/)
[MLReef](https://about.mlreef.com/)
[Openml](https://www.openml.org/)
[Polyaxon](https://polyaxon.com)
[Pachyderm](https://pachyderm.com/)
[Valohai](https://valohai.com/)
[Weights and Biases](https://wandb.ai/site)
Data processing and visualization
Dedicated data processing (e.g. data cleaning, formatting, pre-processing, etc.) and visualization pipelines aimed at analyzing large amounts of data. We explicitly excluded simple data representations, for example to show data distribution in tabular data (there are plenty of tools that do this). A short processing sketch follows the list below.
[Alteryx](https://alteryx.com/)
[Ascend IO](https://www.ascend.io/)
[Google Colab](https://colab.research.google.com/)
[Dask](https://dask.org/)
[Dataiku](https://www.dataiku.com/)
[Databricks](https://databricks.com/)
[Dotdata](https://dotdata.com/)
[Flyte](https://flyte.org/)
[Gluent](https://gluent.com/)
[Koalas](https://koalas.readthedocs.io/)
[Iguazio](https://iguazio.com/)
[Imply](https://imply.io/)
[Incorta](https://www.incorta.com/)
[Mlflow](https://www.mlflow.org/)
[Kyvos](https://www.kyvosinsights.com/)
[MLReef](https://about.mlreef.com/)
[Modin](https://ml2analytics.wordpress.com/modin/)
[Naveego](https://www.naveego.com/)
[Openml](https://www.openml.org/)
[Pachyderm](https://pachyderm.com/)
[Pilosa](https://www.pilosa.com/)
[Presto](https://prestodb.io/)
[SAS](https://www.sas.com/en_us/home.html)
[Snorkel](https://www.snorkel.org/)
[SQLflow](https://sqlflow.gudusoft.com/#/)
[Starburst](https://www.starburstdata.com/)
[Turi Create](https://github.com/apple/turicreate)
[Vaex](https://vaex.io/)
[Valohai](https://valohai.com/)
[Weights and Biases](https://wandb.ai/site)
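As a reference for the kind of processing meant here, a minimal sketch with Dask, which parallelizes pandas-style operations over data that does not fit into memory (the file pattern and column names are placeholders):

```python
import dask.dataframe as dd  # pip install "dask[dataframe]"

# Lazily read many CSV partitions that would not fit into memory at once.
df = dd.read_csv("data/events-*.csv")            # placeholder file pattern

# Build the computation graph: clean, aggregate, then trigger execution.
df = df.dropna(subset=["user_id"])               # hypothetical column
daily_counts = df.groupby("event_date").user_id.count()
print(daily_counts.compute())                    # executes across all partitions
```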
Feature engineering
Dedicated feature engineering and feature storing platforms and tools.
[Amazon Sagemaker](https://aws.amazon.com/sagemaker/)
[Dotdata](https://dotdata.com/)
[Feast](https://feast.dev/)
[Featuretools](https://www.featuretools.com/)
[Pachyderm](https://pachyderm.com/)
[ScribbleData](https://www.scribbledata.io/)
[Tecton](https://www.tecton.ai/)
[TSfresh](https://tsfresh.com/)
[MLReef](https://about.mlreef.com/)
[Iguazio](https://iguazio.com/)
Model Training
These tools and platforms have dedicated pipelines and functions to train Machine Learning models.
[Alteryx](https://alteryx.com/)
[Amazon Sagemaker](https://aws.amazon.com/sagemaker/)
[Iguazio](https://iguazio.com/)
[Microsoft Azure](https://azure.microsoft.com/)
[Google Cloud Platform](https://cloud.google.com/)
[Google Colab](https://colab.research.google.com/)
[Databricks](https://databricks.com/)
[Dataiku](https://www.dataiku.com/)
[Domino Data Labs](https://www.dominodatalab.com/)
[Dotscience](https://dotscience.com/)
[Floydhub](https://www.floydhub.com/)
[Flyte](https://flyte.org/)
[Horovod](https://horovod.ai/)
[IBM Watson](https://www.ibm.com/watson)
[Ludwig](https://eng.uber.com/introducing-ludwig/)
[Kaggle](https://kaggle.com/)
[MLReef](https://about.mlreef.com/)
[H2O](https://www.h2o.ai/)
[Metaflow](https://metaflow.org/)
[Mlflow](https://www.mlflow.org/)
[Paperspace](https://www.paperspace.com/)
[PerceptiLabs](https://www.perceptilabs.com/)
[Snorkel](https://www.snorkel.org/)
[Turi Create](https://github.com/apple/turicreate)
[Valohai](https://valohai.com/)
[SAS](https://www.sas.com/en_us/home.html)
[Anyscale](https://www.anyscale.com/)
[Pachyderm](https://pachyderm.com/)
Experiment tracking
Tools and platforms that offer ways to track, compare and record metrics from model training. A brief tracking sketch follows the list below.
[Allegro AI](https://www.allegro.ai/)
[Amazon Sagemaker](https://aws.amazon.com/sagemaker/)
[Comet ML](https://www.comet.ml/)
[Dagshub](https://dagshub.com)
[Dataiku](https://www.dataiku.com/)
[Datarobot](https://www.datarobot.com/)
[Datmo](https://www.datmo.com/)
[Domino Data Labs](https://www.dominodatalab.com/)
[Floydhub](https://www.floydhub.com/)
[Google Cloud Platform](https://cloud.google.com/)
[H2O](https://www.h2o.ai/)
[Ludwig](https://eng.uber.com/introducing-ludwig/)
[Iguazio](https://iguazio.com/)
[Mlflow](https://www.mlflow.org/)
[MLReef](https://about.mlreef.com/)
[Neptune AI](https://neptune.ai/)
[Openml](https://www.openml.org/)
[Polyaxon](https://polyaxon.com)
[Spell](https://spell.ml/)
[Valohai](https://valohai.com/)
[Weights and Biases](https://wandb.ai/site)
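To show what experiment tracking boils down to, a minimal sketch with the Weights and Biases client (it assumes a configured wandb account; the project name and metrics are hypothetical):

```python
import wandb  # pip install wandb

# Start a tracked run; the config captures hyperparameters for later comparison.
run = wandb.init(project="landscape-demo",             # hypothetical project name
                 config={"lr": 0.01, "epochs": 5})

# Log metrics per step so different runs can be compared in the dashboard.
for epoch in range(run.config["epochs"]):
    wandb.log({"epoch": epoch, "loss": 1.0 / (epoch + 1)})  # dummy metric

run.finish()
```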
Model / Hyperparameter optimization
Tools and platforms that allow you to search for the ideal hyperparameter configuration for your model (e.g. Bayesian or grid search, performance optimizations, etc.). A short search sketch follows the list below.
[Alteryx](https://alteryx.com/)
[Amazon Sagemaker](https://aws.amazon.com/sagemaker/)
[Angel](https://github.com/Angel-ML/angel)
[Comet ML](https://www.comet.ml/)
[Datarobot](https://www.datarobot.com/)
[Hyperopt](https://github.com/hyperopt/hyperopt)
[Polyaxon](https://polyaxon.com)
[Sigopt](https://sigopt.com/)
[Spell](https://spell.ml/)
[Tune](https://docs.ray.io/en/latest/tune/index.html)
[Optuna](https://optuna.org/)
[Talos](https://github.com/autonomio/talos)
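For reference, a minimal sketch of automated hyperparameter search with Optuna (the objective is a toy function standing in for a real training loop):

```python
import optuna  # pip install optuna

# The objective receives a trial, samples hyperparameters and returns a score.
def objective(trial):
    lr = trial.suggest_float("learning_rate", 1e-5, 1e-1, log=True)
    depth = trial.suggest_int("max_depth", 2, 10)
    # Stand-in for "train a model and return its validation error":
    return (lr - 0.01) ** 2 + (depth - 5) ** 2

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=30)
print(study.best_params)
```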
Auto ML
Automated machine learning, also referred to as automated ML or AutoML, is the process of automatically finding the ideal model configuration based on architecture, data and hyperparameters. AutoML is a more advanced method of model optimization, but is not always applicable.
[Datarobot](https://www.datarobot.com/)
[DeterminedAI](https://determined.ai/)
[Dotdata](https://dotdata.com/)
[Google Cloud Platform](https://cloud.google.com/)
[H2O](https://www.h2o.ai/)
[Iguazio](https://iguazio.com/)
[Tazi](https://www.tazi.ai/)
[Transmogrify](https://transmogrif.ai/)
Model management
Model management encompasses model storage, artifact management and model versioning.
[Algorithmia](https://algorithmia.com/)
[Allegro AI](https://www.allegro.ai/)
[Amazon Sagemaker](https://aws.amazon.com/sagemaker/)
[Databricks](https://databricks.com/)
[Dataiku](https://www.dataiku.com/)
[DeterminedAI](https://determined.ai/)
[Dockship](https://dockship.io/)
[Domino Data Labs](https://www.dominodatalab.com/)
[Dotdata](https://dotdata.com/)
[Floydhub](https://www.floydhub.com/)
[Gluon](https://github.com/gluon-api/gluon-api/)
[Google Cloud Platform](https://cloud.google.com/)
[H2O](https://www.h2o.ai/)
[Huggingface](https://huggingface.co/)
[IBM Watson](https://www.ibm.com/watson)
[Iguazio](https://iguazio.com/)
[Mlflow](https://www.mlflow.org/)
[Modzy](https://www.modzy.com/)
[Perceptilabs](https://www.perceptilabs.com/home)
[SAS](https://www.sas.com/en_us/home.html)
[Turi Create](https://github.com/apple/turicreate)
[Valohai](https://valohai.com/)
[Verta](https://verta.ai/)
Model evaluation
This task involves measuring the predictive performance of a model. It also includes measuring the computing resources needed, latency checks and vulnerability.
[Arize](https://arize.com/)
[Dawnbench](https://dawn.cs.stanford.edu/benchmark/)
[MLperf](https://mlperf.org/)
[Streamlit](https://www.streamlit.io/)
[Tensorboard](https://www.tensorflow.org/tensorboard/)
Model explainability
Removing the black-box syndrome of (especially) deep learning models by analyzing their architecture, weight distributions paired with test data, heatmaps, etc. These tools offer dedicated features for model explainability. A brief sketch follows the list below.
[Amazon Sagemaker Clarify](https://aws.amazon.com/sagemaker/clarify/)
[Fiddler](https://www.fiddler.ai/)
[InterpretML](https://interpret.ml/)
[Lucid](https://lucid.wisc.edu/)
[Perceptilabs](https://www.perceptilabs.com/home)
[Shap](https://github.com/slundberg/shap)
[Tensorboard](https://www.tensorflow.org/tensorboard/)
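To illustrate, a minimal sketch using SHAP on a small scikit-learn model, producing per-feature contributions for each prediction plus a global importance overview:

```python
import shap  # pip install shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a small tree ensemble on a bundled example dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Explain individual predictions via per-feature Shapley value contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which features drive the model's output overall.
shap.summary_plot(shap_values, X)
```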
Continuous Deployment
This main area within the ML life cycle focuses on putting a trained model into production.
Data flow management
These tools let you manage and automate data flow processes during inference (what happens with new data that comes in?), and monitor performance and security issues.
[Alluxio](https://www.alluxio.io/)
[Spark](http://spark.apache.org/)
[Ascend IO](https://www.ascend.io/)
[Kafka](http://kafka.apache.org/)
[Dataiku](https://www.dataiku.com/)
[Dotdata](https://dotdata.com/)
[HYCU](https://www.hycu.com/)
[Prefect](https://www.prefect.io/)
Feature transformation
Similar to the process during model training, but now during inference. As new data flows in, it needs to be transformed to fit the input format the model has been trained on. These tools allow you to create feature transformations applied to models in production.
[Feast](https://feast.dev/)
[Featuretools](https://www.featuretools.com/)
[ScribbleData](https://www.scribbledata.io/)
[Tecton](https://www.tecton.ai/)
[Iguazio](https://iguazio.com/)
Monitoring
Model performance monitoring is extremely important, as deviations in data distribution or computing performance might have direct implications on business logic and processes. A brief drift-detection sketch follows the list below.
[Algorithmia](https://algorithmia.com/)
[Amazon Sagemaker](https://aws.amazon.com/sagemaker/)
[Arize](https://arize.com/)
[Dataiku](https://www.dataiku.com/)
[Datadog](https://www.datadoghq.com/)
[Datatron](https://www.datatron.com/)
[Datarobot](https://www.datarobot.com/)
[Domino Data Labs](https://www.dominodatalab.com/)
[Dotscience](https://dotscience.com/)
[H2O](https://www.h2o.ai/)
[Iguazio](https://iguazio.com/)
[Losswise](https://losswise.com/)
[Snorkel](https://www.snorkel.org/)
[Unravel](https://unraveldata.com/platform/)
[Valohai](https://valohai.com/)
[Verta](https://verta.ai/)
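The core idea behind much of this monitoring is comparing production data against the training distribution. A minimal, platform-independent sketch using a two-sample Kolmogorov–Smirnov test on a single feature (the data and threshold are purely illustrative):

```python
import numpy as np
from scipy.stats import ks_2samp

# Feature values seen at training time vs. values observed in production.
train_feature = np.random.normal(loc=0.0, scale=1.0, size=5000)  # illustrative data
live_feature = np.random.normal(loc=0.3, scale=1.0, size=5000)   # slightly shifted

statistic, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:  # illustrative significance threshold
    print(f"Possible data drift (KS statistic={statistic:.3f}, p={p_value:.3g})")
else:
    print("No significant drift detected")
```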
Model compliance and audit
This task involves providing transparency on model provenance.
[Algorithmia](https://algorithmia.com/)
[SAS](https://www.sas.com/en_us/home.html)
[H2O](https://www.h2o.ai/)
Model deployment and serving
Tools and platforms that integrate model deployment capabilities. A generic serving sketch follows the list below.
[Amazon Sagemaker](https://aws.amazon.com/sagemaker/)
[Aible](https://www.aible.com/)
[Algorithmia](https://algorithmia.com/)
[Allegro AI](https://www.allegro.ai/)
[Clipper](http://clipper.ai/)
[Core ML](https://developer.apple.com/documentation/coreml)
[Cortex](https://www.cortex.dev/)
[Dataiku](https://www.dataiku.com/)
[Datatron](https://www.datatron.com/)
[Datmo](https://www.datmo.com/)
[Domino Data Labs](https://www.dominodatalab.com/)
[Dotdata](https://dotdata.com/)
[Dotscience](https://dotscience.com/)
[Floydhub](https://www.floydhub.com/)
[Fritz AI](https://www.fritz.ai/)
[Google Cloud Platform](https://cloud.google.com/)
[IBM Watson](https://www.ibm.com/watson)
[Iguazio](https://iguazio.com/)
[Kubeflow](https://www.kubeflow.org/)
[Mlflow](https://www.mlflow.org/)
[Modzy](https://www.modzy.com/)
[OctoML](https://octoml.ai/)
[Paperspace](https://www.paperspace.com/)
[Prediction IO](http://prediction.io/)
[SAS](https://www.sas.com/en_us/home.html)
[Seldon](https://www.seldon.io/)
[Spell](https://spell.ml/)
[Streamlit](https://www.streamlit.io/)
[H2O](https://www.h2o.ai/)
[Valohai](https://valohai.com/)
[Verta](https://verta.ai/)
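Independent of any specific platform above, here is a generic sketch of what serving means in practice: wrapping a trained model behind an HTTP prediction endpoint, in this case with FastAPI (the model file and feature schema are hypothetical):

```python
# Run with: uvicorn serve:app --port 8000  (assuming this file is saved as serve.py)
import pickle
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
with open("model.pkl", "rb") as f:           # hypothetical pre-trained model artifact
    model = pickle.load(f)

class Features(BaseModel):
    values: List[float]                       # hypothetical flat feature vector

@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])  # assumes a scikit-learn-style model
    return {"prediction": prediction.tolist()}
```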
Model validation
[Arize](https://arize.com/)
[Datatron](https://www.datatron.com/)
[Fiddler](https://www.fiddler.ai/)
[Lucid](https://lucid.wisc.edu/)
[MLperf](https://mlperf.org/)
[SAS](https://www.sas.com/en_us/home.html)
[Streamlit](https://www.streamlit.io/)
Model conversion
Adapting a model to be compatible with other frameworks, libraries or languages. A short export sketch follows the list below.
[MMdnn](https://github.com/Microsoft/MMdnn)
[ONNX](https://onnx.ai/)
[Plaidml](https://plaidml.github.io/plaidml/)
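As an example of the conversion workflow, a minimal sketch exporting a PyTorch model to ONNX so it can be loaded by other runtimes (the model itself is just a toy stand-in):

```python
import torch
import torch.nn as nn

# A toy network standing in for a real trained model.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
model.eval()

# ONNX export traces the model with a dummy input of the expected shape.
dummy_input = torch.randn(1, 4)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["features"], output_names=["score"])
print("Exported model.onnx")
```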
Computing management
This main area within the ML life cycle encompasses managing the computing infrastructure. This is especially relevant, as ML sometimes requires large amounts of storage and computing resources.
Computing and data infrastructure (servers)
These organizations provide the required horsepower for your ML projects (in terms of hardware).
[Google Cloud Platform](https://cloud.google.com/)
[IBM Watson](https://www.ibm.com/watson)
[Amazon AWS](https://aws.amazon.com/)
[Microsoft Azure](https://azure.microsoft.com/)
[Cloudera](https://www.cloudera.com/)
[Paperspace](https://www.paperspace.com/)
[Kamatera](https://www.kamatera.com/)
[Linode](https://www.linode.com/)
[Cloudways](https://www.cloudways.com/)
[Liquidweb](https://www.liquidweb.com/)
[Digitalocean](https://www.digitalocean.com/)
[Vultr](https://www.vultr.com/)
Environment management
Custom scripts require packages, libraries and a runtime environment. The following tools and platforms will help you manage your base environments.
[Conda](https://conda.io/)
[Databricks](https://databricks.com/)
[Datmo](https://www.datmo.com/)
[Mahout](https://mahout.apache.org/)
[MLReef](https://about.mlreef.com/)
Resource allocation
The following tools and platforms support managing different resources (such as computing instances, storage volumes, etc.). Some also include budgeting and team prioritization for controlling spending.
[Amazon Sagemaker](https://aws.amazon.com/sagemaker/)
[Algorithmia](https://algorithmia.com/)
[Google Cloud Platform](https://cloud.google.com/)
[MLReef](https://about.mlreef.com/)
[Databricks](https://databricks.com/)
[Microsoft Azure](https://azure.microsoft.com/)
[Dataiku](https://www.dataiku.com/)
[DeterminedAI](https://determined.ai/)
[Floydhub](https://www.floydhub.com/)
[IBM Watson](https://www.ibm.com/watson)
[Polyaxon](https://polyaxon.com)
[Spell](https://spell.ml/)
[Valohai](https://valohai.com)
[Allegro AI](https://www.allegro.ai/)
Scaling
These tools offer elastic scaling of deployed models and computing tasks.
[Amazon Sagemaker](https://aws.amazon.com/sagemaker/)
[Argo](https://argoproj.github.io/)
[Microsoft Azure](https://azure.microsoft.com/)
[Datadog](https://www.datadoghq.com/)
[Datatron](https://www.datatron.com/)
[Datmo](https://www.datmo.com/)
[Google Cloud Platform](https://cloud.google.com/)
[TensorRT](https://developer.nvidia.com/tensorrt)
[Seldon](https://www.seldon.io/)
[TVM](http://tvm.apache.org/)
Security & privacy
These tools will let you manage privacy topics (such as GDPR compliance) and increase your security standards when deploying your models into production.
[Algorithmia](https://algorithmia.com/)
[Clever Hans](https://github.com/cleverhans-lab/cleverhans)
[Datadog](https://www.datadoghq.com/)
[Modzy](https://www.modzy.com/)
[PySyft](https://github.com/OpenMined/PySyft)
[Tumult](https://www.tmlt.io/)