Code: ZL1_6X340
DURATION: 8 Hours
PRICE: Free
This offering teaches you how IBM Watson OpenScale for IBM Cloud Pak for Data lets business analysts, data scientists, and developers build monitors for machine learning (ML) models to manage risk. You will learn how to use Watson OpenScale to build monitors for quality, fairness, and drift. You will also learn how explanations of predictions help give business stakeholders confidence in the AI being launched into production.
• Introduction to IBM Watson OpenScale
• IBM Watson OpenScale technical architecture
• Get started with IBM Watson OpenScale on IBM Cloud Pak for Data
• Overview of Watson OpenScale monitors
• Explore a use case for Watson OpenScale
• Configure a quality monitor
• Configure a fairness monitor
• Detect drift and configure the drift monitor
• Monitor a deployed model in production
Business Analysts, Data Scientists, Developers, Researchers, and others who need to monitor machine learning models
• Basic knowledge of cloud platforms, such as IBM Cloud
• Basic understanding of machine learning models and how they are used
• IBM Cloud Pak for Data (V3.0.x): Foundations - 6X336G (recommended)
Introduction to IBM Watson OpenScale
• Describe the problem that Watson OpenScale solves
• Describe how Watson OpenScale can help
• Explain important terms in the context of trust in AI
IBM Watson OpenScale technical architecture
• Describe Watson OpenScale architecture for IBM Cloud Pak for Data
• Describe Watson OpenScale components and services
• Describe the various types of data
Get started with IBM Watson OpenScale on IBM Cloud Pak for Data
• Describe the Watson OpenScale system setup
• Describe the Watson OpenScale data mart
• Set up a machine learning provider (see the sketch after this list)
• Specify the model details
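To make these setup steps concrete, the following is a minimal sketch using the ibm-watson-openscale Python SDK. The cluster URL, credentials, and deployment space ID are placeholders, and the class and parameter names follow IBM's published OpenScale sample notebooks rather than this course's lab materials.
    # Sketch: connect to Watson OpenScale on Cloud Pak for Data, locate the
    # data mart created during system setup, and register Watson Machine
    # Learning as a machine learning provider. Host, credentials, and the
    # deployment space ID are placeholders.
    from ibm_cloud_sdk_core.authenticators import CloudPakForDataAuthenticator
    from ibm_watson_openscale import APIClient
    from ibm_watson_openscale.supporting_classes.enums import ServiceTypes
    from ibm_watson_openscale.base_classes.watson_open_scale_v2 import WMLCredentialsCP4D

    CPD_URL = "https://<cpd-hostname>"

    authenticator = CloudPakForDataAuthenticator(
        url=CPD_URL,
        username="<username>",
        password="<password>",
        disable_ssl_verification=True,
    )
    wos_client = APIClient(service_url=CPD_URL, authenticator=authenticator)

    # The data mart set up for this Watson OpenScale instance
    data_mart_id = wos_client.data_marts.list().result.data_marts[0].metadata.id

    # Register Watson Machine Learning on the same cluster as the provider;
    # WMLCredentialsCP4D() reuses the Cloud Pak for Data credentials above.
    provider = wos_client.service_providers.add(
        name="WML provider",
        service_type=ServiceTypes.WATSON_MACHINE_LEARNING,
        deployment_space_id="<deployment-space-id>",
        operational_space_id="production",
        credentials=WMLCredentialsCP4D(),
        background_mode=False,
    ).result
    print("Service provider ID:", provider.metadata.id)
Specifying the model details then happens when the deployed model is subscribed to Watson OpenScale; the monitor sketches later in this outline assume such a subscription already exists.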
Overview of Watson OpenScale monitors
• Identify the different Watson OpenScale monitors (see the sketch after this list)
• Describe explanations
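As a quick illustration of the first objective, the Python SDK can list the monitor definitions available in an instance. A short sketch, assuming the wos_client from the earlier setup sketch and that the list call returns the same result structure as the other SDK collections:
    # Print the ID of every monitor definition registered in this instance
    # (quality, fairness, drift, explainability, and any custom monitors).
    for definition in wos_client.monitor_definitions.list().result.monitor_definitions:
        print(definition.metadata.id)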
Explore a use case for Watson OpenScale
• Describe how Watson OpenScale can provide trust in AI in a credit risk example
Configure a quality monitor
• Describe the type of data required to monitor quality
• Explain quality metrics for categorical labels
• Explain quality metrics for quantitative labels
• Configure a quality monitor (see the sketch after this list)
• Explain predictions
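As an illustration of the configuration objective, the following minimal sketch creates a quality monitor instance through the Python SDK. It reuses the wos_client and data_mart_id from the setup sketch and assumes the deployed model already has a Watson OpenScale subscription; the subscription ID, metric, threshold, and minimum feedback size are illustrative.
    # Sketch: attach a quality monitor to an existing subscription. The monitor
    # evaluates feedback data and raises an alert if area under ROC falls below
    # 0.8 once at least 100 feedback records are available (illustrative values).
    from ibm_watson_openscale.base_classes.watson_open_scale_v2 import Target
    from ibm_watson_openscale.supporting_classes.enums import TargetTypes

    target = Target(target_type=TargetTypes.SUBSCRIPTION,
                    target_id="<subscription-id>")

    quality_monitor = wos_client.monitor_instances.create(
        data_mart_id=data_mart_id,
        monitor_definition_id=wos_client.monitor_definitions.MONITORS.QUALITY.ID,
        target=target,
        parameters={"min_feedback_data_size": 100},
        thresholds=[{"metric_id": "area_under_roc",
                     "type": "lower_limit",
                     "value": 0.8}],
        background_mode=False,
    ).result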
Configure a fairness monitor
• Explain the idea behind assessing fairness
• Describe the type of data required to monitor fairness
• Configure a fairness monitor
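A corresponding sketch for the fairness monitor, reusing the client, data mart ID, and subscription target from the earlier sketches. The monitored feature, reference (majority) and monitored (minority) groups, favourable outcome, and threshold are illustrative values for a credit-risk model like the one in the course use case.
    # Sketch: attach a fairness monitor to the same subscription. It checks
    # whether the favourable outcome ("No Risk") is granted to the monitored
    # group ("female") at least 95% as often as to the reference group ("male"),
    # once 100 payload records are available (all values illustrative).
    fairness_monitor = wos_client.monitor_instances.create(
        data_mart_id=data_mart_id,
        monitor_definition_id=wos_client.monitor_definitions.MONITORS.FAIRNESS.ID,
        target=target,
        parameters={
            "features": [
                {"feature": "Sex",
                 "majority": ["male"],
                 "minority": ["female"],
                 "threshold": 0.95},
            ],
            "favourable_class": ["No Risk"],
            "unfavourable_class": ["Risk"],
            "min_records": 100,
        },
        background_mode=False,
    ).result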
Detect drift and configure the drift monitor
• Describe the two types of drift
• Describe the type of data required to monitor drift
• Configure a drift monitor
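A matching sketch for the drift monitor, again reusing the client, data mart ID, and subscription target from the earlier sketches; the threshold and sample size are illustrative.
    # Sketch: attach a drift monitor. enable_model_drift covers a drop in
    # estimated accuracy, enable_data_drift covers drift in the input data
    # distribution; the 10% threshold and 100-record sample size are illustrative.
    drift_monitor = wos_client.monitor_instances.create(
        data_mart_id=data_mart_id,
        monitor_definition_id=wos_client.monitor_definitions.MONITORS.DRIFT.ID,
        target=target,
        parameters={
            "min_samples": 100,
            "drift_threshold": 0.1,
            "train_drift_model": True,
            "enable_model_drift": True,
            "enable_data_drift": True,
        },
        background_mode=False,
    ).result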
Monitor a deployed model in production
• Put a model monitor into production
• Provide feedback data for the quality monitor
• Provide payload data for the fairness and drift monitors
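A final sketch of how feedback and payload records could be supplied through the Python SDK, reusing the wos_client from the earlier sketches. The column names and values are schematic placeholders; in practice the feedback rows use the model's training-data columns plus the actual outcome, and the payload records are the real requests and responses from the deployment's scoring endpoint.
    # Sketch: store feedback records for the quality monitor and payload records
    # for the fairness and drift monitors. Record contents are placeholders.
    from ibm_watson_openscale.supporting_classes.enums import DataSetTypes, TargetTypes
    from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord

    subscription_id = "<subscription-id>"

    feedback_set_id = wos_client.data_sets.list(
        type=DataSetTypes.FEEDBACK,
        target_target_id=subscription_id,
        target_target_type=TargetTypes.SUBSCRIPTION,
    ).result.data_sets[0].metadata.id

    payload_set_id = wos_client.data_sets.list(
        type=DataSetTypes.PAYLOAD_LOGGING,
        target_target_id=subscription_id,
        target_target_type=TargetTypes.SUBSCRIPTION,
    ).result.data_sets[0].metadata.id

    # Labelled rows (same columns as the training data, plus the actual outcome)
    feedback_rows = [{"fields": ["Age", "LoanAmount", "Risk"],
                      "values": [[35, 5000, "No Risk"]]}]
    wos_client.data_sets.store_records(data_set_id=feedback_set_id,
                                       request_body=feedback_rows,
                                       background_mode=False)

    # A scoring request/response pair captured from the deployment endpoint
    scoring_request = {"fields": ["Age", "LoanAmount"], "values": [[35, 5000]]}
    scoring_response = {"fields": ["prediction", "probability"],
                        "values": [["No Risk", [0.8, 0.2]]]}
    wos_client.data_sets.store_records(
        data_set_id=payload_set_id,
        request_body=[PayloadRecord(request=scoring_request,
                                    response=scoring_response,
                                    response_time=120)],
    )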