Online workshop on Explainability
Register here -
Details:
Do you trust your AI model? Are you deploying responsibly? Do you struggle to convince peers and colleagues of how your AI model works? Are you deploying trustworthy AI in heavily regulated environments or use cases?
AI in production requires explainability and accountability. With regulatory compliance obligations, legal frameworks, and requirements for ethics and trustworthiness, explainability has become central to trust in AI. Hence the buzz around explainable AI, a.k.a. XAI, today.
In this workshop, you’ll learn best practices in XAI, the general challenges with current XAI approaches, and how the Arya-xAI framework works. A hands-on session covers implementing Integrated Gradients (IG) and the Arya-xAI API on an image classification use case, followed by a comparison of these two explanation methods for deep learning.
What will you learn?
1. The explainability imperative (why is this important)
2. Different dimensions of explainability / which stakeholder cares for what aspect
3. Current practices in XAI and challenges:
- Details on Arya-xAI: how to ‘bake it in’ and how to ‘bolt it on’
- Hands-on tutorial on the Arya-xAI API with ImageNet and tabular data
4. Hands-on tutorial on implementation:
- Implementing Integrated Gradients (IG) and the Arya-xAI API on an image classification use case
- A comparison of the two methods
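To give a flavor of the IG portion of the hands-on session, here is a minimal sketch of Integrated Gradients in plain NumPy. This is an illustrative toy, not the workshop’s actual code: the model `f` is a stand-in scalar function (in the workshop it would be a class logit of an image classifier), and gradients are estimated by finite differences rather than a framework’s autograd.

```python
import numpy as np

def integrated_gradients(f, x, baseline, steps=50):
    """Approximate IG attributions for a scalar function f.

    IG_i = (x_i - baseline_i) * average over alpha in (0, 1) of
    dF/dx_i evaluated at baseline + alpha * (x - baseline).
    Gradients are estimated with central finite differences.
    """
    x = np.asarray(x, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    alphas = (np.arange(steps) + 0.5) / steps  # midpoint rule
    grads = np.zeros_like(x)
    eps = 1e-5
    for a in alphas:
        point = baseline + a * (x - baseline)
        for i in range(x.size):
            hi = point.copy(); hi[i] += eps
            lo = point.copy(); lo[i] -= eps
            grads[i] += (f(hi) - f(lo)) / (2 * eps)
    grads /= steps
    return (x - baseline) * grads

# Toy "model": a smooth scalar function standing in for a class logit.
f = lambda v: v[0] ** 2 + 3 * v[1]

x = np.array([2.0, 1.0])
baseline = np.zeros(2)
attr = integrated_gradients(f, x, baseline)

# IG's completeness axiom: attributions sum to f(x) - f(baseline).
print(attr, attr.sum(), f(x) - f(baseline))
```

A useful sanity check, shown in the last line, is IG’s completeness property: the attributions should sum (up to approximation error) to the difference between the model’s output at the input and at the baseline.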
If your organization is in the final stretch of realizing value from AI deployments, the last hurdle is often satisfying explainability requirements. This workshop is a great opportunity to gain a comprehensive perspective on explainability, including how to use available tools to actually deploy ML in production.
Note - This will be an online event. Use the link below to register:
We will share download links and the resources needed prior to the workshop via email. Seats are limited, so please register at the earliest.