AI Claims Hidden in AI Ambiguity

There’s a lot of buzz around AI. But as with any major debate, some claims must be taken with a grain of salt. The fact that AI means different things to different people almost guarantees that some claims are a stretch. Even setting aside the issue of definition, the underpinnings of AI are complex and constantly evolving, which leaves the discipline open to wide interpretation. Some practitioners take advantage of this ambiguity by initiating AI projects whose promised benefits may never be realized. It is thus left to practitioners to call out claims that are blatantly false or misleading due to self-interest. The focus of this paper is to make the reader aware of how this ambiguity is exploited in one branch of AI: machine learning. Without attempting to define the subdiscipline too strictly, machine learning is a way of solving problems with data using computers.

 

Levers

Machine learning does not physically solve problems; it proposes a means to an end. The outcome of a machine learning project is a description or prediction that yields a recommendation, a prescription for action that a person, piece of software, or robot must then carry out. In other words, machine learning itself does not save lives, reduce costs, or generate more revenue. Rather, it is the decision to prescribe a particular cocktail of medication that saves lives, or the decision to hire more or fewer people that affects revenue, decisions made based on recommendations from AI. The outcome of machine learning is information that must have a corresponding lever, an action that a person, software, or robot takes to realize its benefits.
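As a rough sketch of this separation, assuming entirely hypothetical names and a stubbed-out model rather than any real system, the snippet below keeps the recommendation (the information a model produces) distinct from the lever (the action that actually delivers the benefit):

```python
# Hypothetical sketch: a model emits a recommendation, but no value is
# realized until a person or system pulls the corresponding lever.

def model_recommendation(patient_record: dict) -> str:
    """Stand-in for a trained model's output: a prescription for action."""
    # In practice this would be something like model.predict(...);
    # here it is stubbed with a simple threshold for illustration.
    return "schedule_follow_up" if patient_record["risk_score"] > 0.8 else "no_action"

# The levers: explicit mappings from each recommendation to a concrete
# action that a person, software system, or robot must actually perform.
LEVERS = {
    "schedule_follow_up": lambda record: print(f"Booking follow-up for {record['id']}"),
    "no_action": lambda record: None,
}

if __name__ == "__main__":
    record = {"id": "patient-42", "risk_score": 0.91}
    recommendation = model_recommendation(record)  # information only
    LEVERS[recommendation](record)                 # where the benefit is realized
```

The recommendation by itself changes nothing; whatever value exists is created at the moment the lever is pulled.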

 

Claims 

A red flag should go up whenever success is attributed directly to machine learning. A recent presenter at an analytics conference in Nashville claimed to have saved millions of dollars for a hospital system using AI. Every imaginable metric used to measure hospital productivity was evaluated using machine learning. According to the presenter, the machine learning itself saved lives and saved the hospital money, not the actions taken based on its recommendations. Because of the nebulous nature of AI and the complexities involved in machine learning, some participants may not have noticed the missing link between the results of a machine learning analysis and the actions that must accompany those results for the claimed benefits to materialize.

In the case of the presenter’s example, a nurse’s patient response time ostensibly decreased by over 35% due to machine learning. When asked how this happened, the presenter suggested that machine learning produced an optimal efficiency metric which nurses could somehow incorporate into their daily routines. What the presenter left out, however, was any insight into how humans actually fulfilled the many machine learning recommendations. The absence of this information should, at minimum, have raised red flags for the attendees. Furthermore, the presenter claimed that AI transformed a declining chain of hospitals into one of the world’s most efficient and profitable health care institutions by using machine learning algorithms to analyze collective workflows across multiple failing hospitals. Missing once again, however, was the how. Unfortunately, because of the disconnect between AI insights and human actions, it is not clear whether the presenter’s claims about AI’s effectiveness were entirely true. If the failing hospitals were in fact converted into efficient and profitable ones, that may have been the result of other actions unrelated to the AI analyses. As the number of AI providers proliferates, similar claims obscured by the ill-defined and evolving AI field become increasingly common.

 

The Hard Part 

While the mathematics and computer science skills needed to create machine learning algorithms are highly complex, applying the algorithms industrially is far less involved (especially with pre-trained machine learning procedures). The difficulty lies in solving for the right outcomes and connecting those outcomes with the right levers. An analytic plan involves not only using machine learning to crunch data, but also evaluating how an entity can incorporate the recommendations of machine learning models into its employees’ daily workflows, interconnected processes, and culture.
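To give a sense of how small the data-crunching part can be, here is a minimal sketch assuming scikit-learn is installed; the dataset and algorithm are purely illustrative and have nothing to do with the hospital example:

```python
# Minimal sketch: fitting an off-the-shelf algorithm takes only a few lines.
# The dataset and model choice here are purely illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")

# None of this is the hard part. The hard part is choosing the outcome
# worth solving for and connecting the predictions to levers that people,
# software, or robots will actually pull.
```

Everything of consequence in a real project happens outside a snippet like this: defining the outcome, and embedding the resulting recommendations into workflows.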

 

To illustrate, consider the nursing example from above. Assume that a machine learning model optimized the time nurses take to perform major tasks so that they use their respective schedules more efficiently. Moreover, the machine learning suggests that the length of time nurses should take to respond to a patient is X, a 35% decline from the pre-optimized state. That alone is fantastic, but it does not account for the nurses’ choices and the consequences of those choices. To respond to patients more quickly, nurses might increase the time they take to perform other tasks or even eliminate some tasks altogether. If time allocations were what was being solved for (the presenter did not define the target or outcome variable) and the new time allocation recommendations could realistically be adopted, they would have to work in conjunction with other machine learning recommendations that also needed to be adopted by humans. With over three decades of advanced analytics experience in Fortune and media companies, one thing I have learned is that it is excruciatingly difficult for people to implement more than two or three major changes to their job specifications at one time.

The troubling part of the hospital AI presentation is that the actionable complexity associated with any machine learning project was not discussed. Questions that begged for more explanation of how the machine learning results were actually used remained unanswered, hidden in a safe cloud of AI ambiguity. It is self-serving if the final resting place of a machine learning project is a presentation rather than application in a hospital, manufacturing assembly line, retail store, or software delivery system. Fortunately, examples abound where machine learning recommendations result in actionable outcomes. However, unless there is a clear explanation of what actions are required (or were taken) to make machine learning actionable, you should maintain a healthy sense of caution when confronted with claims about AI’s seemingly mystical powers.

 
