ML System with DataHub

Why Integrate Your ML System with DataHub?

As a data practitioner, keeping track of your ML experiments, models, and their relationships can be challenging. DataHub makes this easier by providing a central place to organize and track your ML assets.

This guide will show you how to integrate your ML workflows with DataHub. With this integration, you can easily find and share ML models across your organization, track how models evolve over time, and understand how training data connects to each model. Most importantly, it enables seamless collaboration on ML projects by making everything discoverable and connected.

Goals Of This Guide

In this guide, you'll learn how to:

  • Create your basic ML components (models, experiments, runs)
  • Connect these components to build a complete ML system
  • Track relationships between models, data, and experiments

Core ML Concepts

Here's what you need to know about the key components, based on MLflow's terminology:

  • Experiments are collections of training runs for the same project, like all attempts to build a churn predictor
  • Training Runs are attempts to train a model within an experiment, capturing parameters and results
  • Models organize related model versions together, like all versions of your churn predictor
  • Model Versions are successful training runs registered for production use

The hierarchy works like this:

  1. Every run belongs to an experiment
  2. Successful runs can become model versions
  3. Model versions belong to a model group
  4. Not every run becomes a model version

Terminology

Here's how DataHub and MLflow terms map to each other. For more details, see the MLflow integration doc:

| DataHub | MLflow | Description |
|---|---|---|
| ML Model Group | Model | Collection of related model versions |
| ML Model | Model Version | Specific version of a trained model |
| ML Training Run | Run | Single training attempt |
| ML Experiment | Experiment | Project workspace |

Basic Setup

To follow this tutorial, you'll need DataHub deployed locally via quickstart. For detailed steps, see the DataHub Quickstart Guide.

Next, set up the Python client for DataHub. Create a personal access token in the DataHub UI and replace <your_token> with it:

from mlflow_dh_client import MLflowDatahubClient

# Authenticate with the personal access token you created above
client = MLflowDatahubClient(token="<your_token>")

Verifying via GraphQL

Throughout this guide, we'll show how to verify changes using GraphQL queries. You can run these queries in the DataHub UI at https://localhost:9002/api/graphiql.
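
For example, the built-in me query is a quick way to confirm the endpoint is reachable; it returns the currently authenticated user:

query {
  # Returns the user associated with your current session or token
  me {
    corpUser {
      username
    }
  }
}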

Create Simple ML Entities

Let's create the basic building blocks of your ML system. These components will help you organize your ML work and make it discoverable by your team.

Create Model Group

A model group contains different versions of a similar model. For example, all versions of your "Customer Churn Predictor" would go in one group.

Create a basic model group with just an identifier:
client.create_model_group(
    group_id="airline_forecast_models_group",
)

Let's verify that our model group was created:
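
Here's a minimal sketch of such a check, assuming the client registers entities on the mlflow platform in the PROD environment (both assumptions determine the URN below):

query {
  # URN format assumes the mlflow platform and PROD environment
  mlModelGroup(
    urn: "urn:li:mlModelGroup:(urn:li:dataPlatform:mlflow,airline_forecast_models_group,PROD)"
  ) {
    name
  }
}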

You can also see your new model group in the DataHub UI.

Create Model

Next, let's create a specific model version that represents a trained model ready for deployment.

Create a model with just an identifier and the required version:
client.create_model(
    model_id="arima_model",
    version="1.0",
)

Let's verify our model:
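
A similar sketch for the model, under the same platform and environment assumptions:

query {
  # URN format assumes the mlflow platform and PROD environment
  mlModel(
    urn: "urn:li:mlModel:(urn:li:dataPlatform:mlflow,arima_model,PROD)"
  ) {
    name
  }
}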

Check your model's details in the DataHub UI.

Create Experiment

An experiment helps organize multiple training runs for a specific project.

Create a basic experiment:
client.create_experiment(
    experiment_id="airline_forecast_experiment",
)

Verify your experiment:
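
DataHub represents experiments as containers. A sketch of the check, assuming the container URN is derived directly from the experiment ID:

query {
  # URN format is an assumption based on the experiment ID above
  container(urn: "urn:li:container:airline_forecast_experiment") {
    properties {
      name
    }
  }
}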

See your experiment's details in the UI.

Create Training Run

A training run captures all details about a specific model training attempt.

Create a basic training run:
client.create_training_run(
    run_id="simple_training_run_4",
)

Verify your training run:
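
Training runs are stored as data process instances. A sketch of the check, again assuming the URN is derived directly from the run ID:

query {
  # URN format is an assumption based on the run ID above
  dataProcessInstance(urn: "urn:li:dataProcessInstance:simple_training_run_4") {
    name
  }
}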

View the run details in the UI.

Define Entity Relationships

Now let's connect these components to create a comprehensive ML system. These connections enable you to track model lineage, monitor model evolution, understand dependencies, and search effectively across your ML assets.

Add Model To Model Group

Connect your model to its group, using the URNs of the model and group you created earlier:

client.add_model_to_model_group(model_urn=model_urn, group_urn=model_group_urn)

View model versions on the Model Group page under the Models section.

Find group information on the Model page under the Group tab.

Add Run To Experiment

Connect a training run to its experiment:

client.add_run_to_experiment(run_urn=run_urn, experiment_urn=experiment_urn)

Find your runs on the Experiment page under the Entities tab.

See the experiment details on the Run page.

Add Run To Model

Connect a training run to its resulting model:

client.add_run_to_model(model_urn=model_urn, run_urn=run_urn)

This relationship enables you to:

  • Track which runs produced each model
  • Understand model provenance
  • Debug model issues
  • Monitor model evolution

Find the source run on the Model page under the Summary tab.

See related models on the Run page under the Lineage tab.

Add Run To Model Group

Create a direct connection between a run and a model group:

client.add_run_to_model_group(model_group_urn=model_group_urn, run_urn=run_urn)

This connection lets you:

  • View model groups in the run's lineage
  • Query training jobs at the group level
  • Track training history for model families

See model groups on the Run page under the Lineage tab.

Add Dataset To Run

Track the input and output datasets for your training runs, referencing datasets already in DataHub by their URNs:

client.add_input_datasets_to_run(
    run_urn=run_urn,
    dataset_urns=[str(input_dataset_urn)],
)

client.add_output_datasets_to_run(
    run_urn=run_urn,
    dataset_urns=[str(output_dataset_urn)],
)

These connections help you:

  • Track data lineage
  • Understand data dependencies
  • Ensure reproducibility
  • Monitor data quality impacts

Find dataset relationships in the Lineage tab of either the Dataset or Run page.
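
You can also walk the same lineage programmatically. As a sketch, a searchAcrossLineage query starting from your input dataset's URN (substitute your own) should list the run among the downstream entities:

query {
  searchAcrossLineage(
    input: {
      # Substitute the URN of your input dataset
      urn: "<input_dataset_urn>"
      direction: DOWNSTREAM
      query: "*"
      start: 0
      count: 10
    }
  ) {
    searchResults {
      entity {
        urn
        type
      }
    }
  }
}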

Full Overview

With every component connected, you now have a complete lineage view of your ML assets in DataHub, from training data through runs to production models!

What's Next?

To see this integration in action and learn about real-world use cases, start with the MLflow integration doc referenced earlier in this guide.