Machine Learning Interpretability Toolkit | AI Show

Channel 9 show

Summary: Understanding what your AI models are doing is important from both a functional and an ethical standpoint. In this episode we discuss what it means to develop AI in a transparent way. Mehrnoosh introduces an interpretability toolkit that lets you apply different state-of-the-art interpretability techniques to explain your model's decisions. By using the toolkit during the training phase of the AI development cycle, you can use a model's interpretability output to verify hypotheses and build trust with stakeholders. You can also use the insights for debugging, validating model behavior, and checking for bias. The toolkit can even be used at inference time to explain the predictions of a deployed model to end users.

Learn more:

- Link to the doc
- Link to the sample notebooks

Segments of the video:

- [02:12] Responsible AI
- [02:34] Machine Learning Interpretability
- [03:12] Interpretability Use Cases
- [05:20] Different Interpretability Techniques
- [06:45] Demo
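The summary above does not show any code, and the exact package demoed in the episode is not named here; the interpretability toolkit for Azure Machine Learning is backed by the open-source interpret-community package, so the sketch below uses that as an assumption. It illustrates the two use cases described in the summary: a global explanation during training (verifying hypotheses, checking for bias) and a local explanation of a single prediction (the kind of output you could surface to an end user at inference time).

```python
# Minimal sketch, assuming interpret-community and scikit-learn are installed
# (pip install interpret-community scikit-learn). The episode's actual demo
# code may differ; see the linked doc and sample notebooks.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from interpret_community import TabularExplainer

# Train a simple model to explain.
data = load_breast_cancer()
x_train, x_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(x_train, y_train)

# Wrap the model in an explainer. TabularExplainer selects an appropriate
# state-of-the-art technique (a SHAP-based explainer) for the model type.
explainer = TabularExplainer(
    model,
    x_train,
    features=list(data.feature_names),
    classes=["malignant", "benign"],
)

# Training-phase use case: a global explanation showing which features
# drive the model overall.
global_explanation = explainer.explain_global(x_test)
print(global_explanation.get_feature_importance_dict())

# Inference-time use case: a local explanation of why the model made
# one particular prediction.
local_explanation = explainer.explain_local(x_test[0:1])
print(local_explanation.local_importance_values)
```

The same explanation objects can be rendered in the toolkit's visualization dashboard; the doc and sample notebooks linked above walk through that workflow, including uploading explanations alongside a model trained in Azure Machine Learning.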