Ajay Malik
For the field of AI to reach any measurable sense of maturity, we will need methods to debug, error-check, and understand the decision-making processes of machines. A lack of trust was at the heart of the failures of some of the best-known AI efforts. Artificial intelligence (AI) is a transformational $15 trillion opportunity, but without explainability it will not reach any meaningful level of deployment. We are now entering the "third wave" of AI, in which AI systems will be capable of explaining the reasoning behind every decision they make. The AI systems themselves will construct models that explain how they work. Explainable AI (XAI) is about improving trust in AI-based systems: at one end it brings fairness, accountability, and transparency to the front and center of AI; at the other it enables us to control and continuously improve our AI systems. In this session, I will cover why AI needs to be explainable, what that means, the state of the art of explainable AI, and various approaches to building it.
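One common approach in this space, offered here only as an illustrative sketch (not material from the talk itself), is a global surrogate: train a simple, interpretable model to mimic a black-box model's predictions and read its explanation from the surrogate. The example below assumes scikit-learn and a public dataset purely for demonstration.

```python
# Illustrative sketch: explain a black-box model with an interpretable surrogate.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

# A "black-box" model whose decisions we want to explain.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Global surrogate: a shallow decision tree trained to mimic the black box,
# yielding a human-readable approximation of its decision logic.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on held-out data.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"Surrogate fidelity to black box: {fidelity:.2%}")
print(export_text(surrogate, feature_names=list(X.columns)))
```

The fidelity score indicates how faithfully the simple explanation tracks the black box; a low score means the explanation should not be trusted as a description of the original model.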