Americans are aging, and doctors are in short supply. By 2025, there could be a shortage of 90,000 physicians, according to the Association of American Medical Colleges.
To deliver results more efficiently, many clinics have turned to online portals where patients can look up and review their own test results. But rather than clearing things up, these raw numbers often leave patients, particularly older ones, even more confused.
“This is one reason why patient portals are often underutilized, especially by less educated, older, sicker patients,” Daniel Morrow, an educational psychologist at the University of Illinois, told Digital Trends. “This is a big problem because these are the patients who most need to understand this information and stand the most to gain from ready access to well-designed health information that can support self-care.”
Morrow and his team developed a computer-generated doctor that reads test results in layman’s terms, accompanied by graphics that compare the patient’s test scores with ideal results. The aim is to make these online portals more accessible, while making the results understandable and engaging.
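The underlying idea, turning a numeric result into a plain-language statement and a comparison against an ideal range before the agent presents it, can be illustrated with a short sketch. The code below is purely hypothetical and not drawn from Morrow's system; the test name, reference range, and function names are assumptions for illustration.

```python
# Illustrative sketch only: translate a numeric lab result into a
# layman's-terms sentence comparing it to a reference range, the kind of
# step a conversational agent would perform before "speaking" the result.
# All names and ranges here are hypothetical, not from the study.

from dataclasses import dataclass


@dataclass
class ReferenceRange:
    low: float
    high: float
    unit: str


def explain_result(test_name: str, value: float, ref: ReferenceRange) -> str:
    """Return a plain-language sentence comparing a result to its ideal range."""
    if value < ref.low:
        status = "below the typical range"
    elif value > ref.high:
        status = "above the typical range"
    else:
        status = "within the typical range"
    return (
        f"Your {test_name} is {value} {ref.unit}, which is {status} "
        f"({ref.low}-{ref.high} {ref.unit})."
    )


if __name__ == "__main__":
    # Hypothetical fasting glucose reading and reference range.
    print(explain_result("fasting blood sugar", 112.0, ReferenceRange(70, 99, "mg/dL")))
```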
“Traditionally patients turn to their providers, such as physicians and nurses, to help make sense of their numbers,” Morrow explained. Nurses and doctors not only help patients grasp their test results but also give nonverbal cues that engage and support them emotionally.
“The use of a conversational agent to deliver test results in portal environments can emulate some aspects of face-to-face communication that may help patients understand and respond appropriately to their health information,” Morrow continued. “This should also increase patients’ use of their portals.”
In the study, participants between the ages of 65 and 89 played the role of patient, listening to hypothetical test results delivered by the computer doctor in either a natural or a synthesized voice. The participants then answered questions to gauge their comprehension. In both cases, participants accurately understood and remembered the content of the report, though some preferred the natural-sounding voice to the synthesized one.
The question of natural versus synthesized voices matters for human-machine interaction. A phenomenon called the “uncanny valley,” in which a human replica becomes unsettling when it looks almost, but not quite, human, can make people reject robots altogether.
Roboticists and AI developers try to steer clear of this valley by making their bots obviously nonhuman, whether by giving them cartoonish features or by exposing their inner wires.
Morrow and his team don’t expect their computer doctor to be delivering results in the immediate future. Their next step is to refine the system and test how patients respond to different versions of the agent to find the most relatable one.
“The agents will vary in age and gender, but also in realism, such as stylized and cartoon versus photo-realistic,” he said. “We will examine whether some types of participants prefer the more stylized agent over the more realistic version, perhaps to avoid the ‘uncanny valley.’”
A paper detailing the study was published this month in the Journal of Biomedical Informatics.