Humble AI

Brian Knowles, Jason D'Cruz, John T. Richards, Kush R. Varshney
Review published as
Communications of the ACM 66(9): 73-79
Edited by
Association for Computing Machinery (ACM)

Readers of Computing Reviews are better informed than the general population about artificial intelligence (AI): rather than seeing it as a magical panacea, or fearing an imminent general intelligence that will think and make decisions on its own, we recognize AI's greatest strength: finding patterns, often hidden to the naked eye, and drawing inferences from them. That ability has made AI-based systems, trained on enormous datasets, a tool of choice for making statistical predictions and estimates about human behavior.

AI tools are often applied to risk assessment for financial operations. This article argues, however, that a negative AI-based evaluation, particularly a false negative (that is, predicting that a loan will default when it would in fact be repaid, or that an individual will not honor their word or their payment intentions), can have devastating consequences for an individual, or even a family, and is not to be taken lightly. The authors thus make an argument for "humble AI." They show many situations where AI-based decisions fall in gray areas and a more humane touch is needed, and they suggest several ways of setting thresholds so that AI-based rankings are ultimately reviewed by humans instead of being used directly for decision making.
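The article does not give code, but the threshold idea can be sketched: scores in a gray zone between two confidence cutoffs are routed to a human reviewer instead of being decided automatically. The function name and the threshold values below are illustrative assumptions, not taken from the article.

```python
def route_decision(p_repay: float,
                   approve_at: float = 0.85,
                   deny_at: float = 0.40) -> str:
    """Return 'approve', 'deny', or 'human_review' for a model's
    predicted probability of repayment. Thresholds are illustrative."""
    if p_repay >= approve_at:
        return "approve"       # model is confident enough to auto-approve
    if p_repay < deny_at:
        return "deny"          # confidently negative, but still auditable
    return "human_review"      # gray zone: defer to a human decision maker


if __name__ == "__main__":
    for p in (0.95, 0.60, 0.20):
        print(p, route_decision(p))
```

The design choice is the point: the model ranks, but in the gray zone a person decides, which is one concrete reading of the authors' call for humility.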

The article also digs into how the public perceives AI-based decision tools. The authors note that society has been exposed to a large body of science fiction in which AI systems become self-aware and dominate humanity. Although such scenarios are not the main argument, considerable space is devoted to understanding how false negatives feed public distrust of AI.

Quoting the article’s conclusions: “Humble trust does not imply trusting indiscriminately. Rather, it calls for developers and deployers of AI systems to be responsive to the effects of misplaced distrust and to manifest epistemic humility about the causes of human behavior.” The thesis sounds completely correct, although it somewhat counters the human resources (HR) savings touted by AI proponents as the main driver behind the adoption of AI systems, so in a sense the argument “bites its own tail.” Hopefully, it will get some people to think, and act, more humanely.