Responsible AI Dashboard Samples

Explore various samples demonstrating how to integrate and utilize the Responsible AI dashboard for Azure Machine Learning.

Overview

The Responsible AI dashboard provides a centralized view of your AI model's fairness, interpretability, error analysis, and causal inference insights. These samples illustrate how to collect the necessary data, generate visualizations, and gain actionable insights into your model's behavior.

Sample Categories

Basic Fairness Analysis

Learn how to assess and visualize fairness metrics for your machine learning models. This sample covers common fairness concerns and techniques to address them.
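As a concrete illustration of one common fairness check, here is a minimal pure-Python sketch (not the dashboard's own implementation) that computes per-group selection rates and their largest gap, a quantity often called the demographic parity difference. The predictions and group labels are hypothetical toy data.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive predictions within each sensitive group."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(predictions, groups):
    """Largest gap in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical predictions and sensitive-group labels
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']

rates = selection_rates(preds, groups)              # {'A': 0.75, 'B': 0.25}
gap = demographic_parity_difference(preds, groups)  # 0.5
```

A gap near zero suggests the model selects members of each group at similar rates; libraries such as Fairlearn provide production-grade versions of this and many related metrics.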

Model Interpretability Techniques

Discover how to use interpretability methods like SHAP and LIME to understand individual predictions and global model behavior.
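SHAP and LIME require their respective libraries, but the underlying idea of global feature importance can be sketched with a simpler, related technique: permutation importance, shown here from scratch on a toy model. All names and data below are illustrative, not part of any library's API.

```python
import random

def permutation_importance(predict, X, y, feature_idx, metric,
                           n_repeats=10, seed=0):
    """Average drop in a metric when one feature's column is shuffled."""
    rng = random.Random(seed)
    baseline = metric(y, [predict(row) for row in X])
    drops = []
    for _ in range(n_repeats):
        column = [row[feature_idx] for row in X]
        rng.shuffle(column)
        X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, column)]
        score = metric(y, [predict(row) for row in X_perm])
        drops.append(baseline - score)
    return sum(drops) / n_repeats

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy model: predicts 1 when feature 0 exceeds 0.5; feature 1 is ignored.
predict = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]

imp0 = permutation_importance(predict, X, y, 0, accuracy)  # positive: feature 0 drives predictions
imp1 = permutation_importance(predict, X, y, 1, accuracy)  # 0.0: feature 1 is unused
```

SHAP refines this idea into per-prediction attributions with game-theoretic guarantees; the dashboard's interpretability component surfaces both the global and the local view.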

Error Analysis and Visualization

This sample demonstrates how to identify and analyze model errors, pinpointing problematic data slices and improving model performance.
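The core of error analysis is comparing error rates across cohorts. Here is a minimal pure-Python sketch, with hypothetical labels, predictions, and a slicing feature, of how a problematic cohort surfaces:

```python
from collections import defaultdict

def error_rate_by_slice(y_true, y_pred, slice_labels):
    """Error rate within each data slice (e.g. each value of a categorical feature)."""
    errors = defaultdict(int)
    counts = defaultdict(int)
    for t, p, s in zip(y_true, y_pred, slice_labels):
        counts[s] += 1
        errors[s] += int(t != p)
    return {s: errors[s] / counts[s] for s in counts}

# Hypothetical labels, predictions, and a slicing feature
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 0]
region = ['east', 'east', 'west', 'west', 'west', 'west']

rates = error_rate_by_slice(y_true, y_pred, region)
# east: 0 of 2 wrong; west: 3 of 4 wrong -> 'west' is the problem cohort
```

The dashboard's error analysis component automates this search, building a decision tree over features to find the slices with the highest error concentration.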

Causal Inference and What-If Analysis

Explore causal inference techniques to understand the impact of interventions and perform "what-if" scenarios on your model's predictions.
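The "what-if" half of this idea can be sketched very simply: re-score a single instance with one feature changed, holding everything else fixed. The scoring rule and feature names below are hypothetical; note that genuine causal inference requires more than perturbing inputs (e.g. assumptions about confounders), which is what the dashboard's causal component addresses.

```python
def what_if(predict, row, feature_idx, new_value):
    """Re-score one instance with a single feature changed, all else fixed."""
    modified = list(row)
    modified[feature_idx] = new_value
    return predict(row), predict(modified)

# Toy scoring rule (hypothetical): approve when income - 2*debt is positive
predict = lambda row: int(row[0] - 2 * row[1] > 0)

applicant = [50.0, 30.0]  # [income, debt] in thousands
before, after = what_if(predict, applicant, 1, 20.0)
# before=0 (denied at debt=30), after=1 (approved if debt were 20)
```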


Key Concepts

Core Components
  • Data Preparation: Gathering and structuring your training data, ground truth, and model predictions.
  • Metric Computation: Calculating fairness metrics, model performance scores, and error rates.
  • Visualization: Generating interactive charts and graphs for intuitive understanding of insights.
  • Customization: Tailoring the dashboard to specific use cases and model types.
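The first two components above, data preparation and metric computation, can be sketched together in pure Python: assemble features, ground truth, and predictions into one table, then summarize it. The helper names are illustrative, not part of the SDK.

```python
def build_analysis_table(features, y_true, y_pred):
    """Combine features, ground-truth labels, and predictions into one record list."""
    return [
        {'features': f, 'label': t, 'prediction': p, 'correct': t == p}
        for f, t, p in zip(features, y_true, y_pred)
    ]

def summary_metrics(table):
    """Overall performance scores computed from the combined table."""
    n = len(table)
    correct = sum(r['correct'] for r in table)
    return {'n': n, 'accuracy': correct / n, 'error_rate': 1 - correct / n}

table = build_analysis_table([[0.2], [0.8], [0.5]], [0, 1, 1], [0, 1, 0])
metrics = summary_metrics(table)  # accuracy 2/3, error rate 1/3
```

In practice this table is what every dashboard component consumes: fairness metrics group it by a sensitive feature, error analysis slices it by any feature, and visualizations render it interactively.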

Code Snippets

Here's a conceptual glimpse of how you might start generating insights with the Azure ML SDK. The class and method names below are illustrative; consult the current SDK reference for the exact API.

Python Example (Conceptual)

import azureml.core
from azureml.responsibleai import RAIInsights

# Initialize Azure ML workspace
ws = azureml.core.Workspace.from_config()

# Load your model and data
model = ... # Load your trained model
test_data = ... # Load your test dataset
predictions = model.predict(test_data)
ground_truth = ... # Load your ground truth labels

# Create a RAIInsights object
rai_insights = RAIInsights(
    target_column='your_target_column',
    task_type='classification', # or 'regression'
    predictions=predictions,
    ground_truth=ground_truth,
    model=model,
    test_data=test_data
)

# Add components
rai_insights.add_fairness_analysis()
rai_insights.add_error_analysis()
rai_insights.add_interpretability_analysis()

# Upload to Azure ML experiment
rai_insights.upload_to_azureml(name='my_rai_dashboard_run')

print("Responsible AI dashboard insights uploaded successfully.")
                        

Related Resources