Explainable AI: But Explainable To Whom?
datanami, February 10th, 2020
Volume 263, Issue 2

As the power of AI and machine learning has become widely recognized, and as people see the value that these approaches can bring to an increasingly data-heavy world, a new need has arisen: the need for explainable AI.

"How will people know the nature of the automated decisions that are made by machine learning models? How will they make use of the insights provided by AI-driven systems if they do not understand and trust the automated decisions that underlie them?

The biggest challenge to the next level of adoption of AI and machine learning is not the development of new algorithms, although of course that continues to be done. The biggest challenge is building confidence and trust in intelligent machine learning systems. Some call this need for confidence and trust a barrier for AI - and in a way it is - but I prefer to think of it as a very reasonable requirement of AI and machine learning. Explainable or interpretable AI involves the ability to present explanations for model-based decisions to humans. So explainable AI is of critical importance for the success of AI and ML systems. But explainable to whom?..."

Read More ...
