Wednesday, June 13, 2018

Empathy, AI and the Knowledge Factor

Might artificial intelligence ultimately replace physicians? Bertalan Mesko argues that the medical community shouldn't be too concerned about a spate of recent dire predictions, adding:
They're just plain wrong. All of them. Although many signs are pointing to the fact that A.I. will completely move the world of medicine, and many other technologies will have a transformative effect on the industry, stating that the majority of medical professionals will disappear, is fearmongering and irresponsible. 
Mesko offers five reasons why artificial intelligence won't replace physicians. However, I'd like to take a closer look at the first one, where he asserts:
1) We cannot replace empathy.
Even if the array of technologies will offer brilliant solutions, it would be difficult for them to mimic empathy. Why? Because at the core of empathy, there is the process of building trust: listening to the other person, paying attention to their needs, expressing the feeling of compassion and responding in a manner that the other person knows they were understood.
Having come of age in an era when physicians weren't expected to be so considerate of their patients, I can appreciate the current emphasis on compassion. When it comes to doctor-patient relationships, one might say that the expression of empathy has become the "right thing to do".

Until recently, I was convinced by arguments such as the above. Surely no robot could take the place of human empathy! But what really takes place? Most successful people face societal demands that greatly reduce how much they can be expected to assist (mere) individuals. The higher one's level of compensated skill, the less time one generally has left for one-on-one economic engagement. Yet empathy includes taking the time to stop and listen to pesky or even "troublesome" complaints, a chore which may occasionally include walking a mile in another's shoes. How else does one become less disagreeable toward the "unreasonable" assessments others tend to hold about their circumstances? We may find snap judgments a bit distasteful, but they sure "save" a lot of time and bother, don't they? Yet physicians are hardly alone in their inability to assume a level of basic observation that can only be likened to beginner's mind.

What's more, the present institutions responsible for our time-based services are separated from one another in ways that break up knowledge continuity and practical application at the outset. In other words, people lose the ability to preserve the usefulness of hard-won understandings concerning patients and clients, once individuals are handed off to the next institutional setting. Knowledge workers are now expected to juggle such an extensive array of information alongside administrative circumstance that they can only tap a fraction of potential solution sets. Sometimes that's not enough, especially when patients or clients need informational continuity for any reason.

Could AI contribute to the knowledge preservation and continuity that today's professionals are hard pressed to provide? What if AI could even take into consideration more of those pesky patient/client complaints and observations, should such information become part of the computer's knowledge landscape? In other words, might AI have the "time" to (respectfully?) "listen", if patients could actually report to it directly, particularly patients who get little respect from the professionals or workers entrusted with their care? What if AI could even be programmed so that it wouldn't react with human anger when patients lash out in desperation or frustration at a society that has seemingly become too burdened to help them anymore? Yes, I hope that time arbitrage can eventually make a difference in terms of personal civility and mutual assistance, but we aren't there yet...

Granted: the examples in my barely controlled rant aren't how most people normally envision what constitutes empathy. Nevertheless, the impartial observation of beginner's mind, and the continuity of knowledge application, are both important. Let's not assume too quickly that deep learning AI couldn't be programmed to "remember" things that might otherwise be disregarded or even discarded. Or, for that matter, that AI can't be expected to exhibit "empathy" towards patients - especially since professional providers often lack the scarce time and means to do so.
