How XAI will quietly revolutionize AI
We assume that data holds all the answers to how decisions should be automated. Acting on that assumption, we build data pipelines and train and deploy machine learning models that turn inputs into outputs. But it isn’t that simple. Data holds plenty of answers, yet the process needs more guidance to yield models we can trust to replace or augment human decision-making.
This is where XAI, or Interpretable ML, offers the right toolset. Trust is mission-critical for any technology: if AI solutions are to supplant traditional software and human judgment, they must meet the same reliability standards we currently expect from both. For that to happen, XAI will need to be adopted far more widely, and the roles of data scientist and ML engineer will have to evolve. We will examine examples of XAI methods and discuss how they can revolutionize the way we train, evaluate, and deploy machine learning models.
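To make the "toolset" claim concrete, here is a minimal sketch of one of the simplest model-agnostic XAI methods, permutation importance, using scikit-learn. The dataset and model choice are illustrative assumptions, not something prescribed by this article; the point is only how little code it takes to start asking a trained model which features actually drive its predictions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data and model (assumptions for this sketch).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the held-out score drops:
# large drops flag features the model genuinely relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five most influential features.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25} {result.importances_mean[idx]:.3f}")
```

Even a quick report like this changes the conversation: instead of only asking "how accurate is the model?", we can ask "is it relying on the signals we expect it to?"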