One area we haven’t previously heard about machines working in, however, is the role of “resource nurse” in a hospital. A nurse in this position in the labor and delivery ward is responsible for decisions like which room a patient should be assigned to, or which physician should perform a C-section. That’s work that researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) recently attempted to replicate — with a Nao robot trained to learn how these scheduling choices are made and to make similar decisions on its own.
“What we were able to show is that a system with only a few dozen training examples from people performing a task very well was able to make decisions which appear to be reasonable,” Professor Julie Shah, one of the authors of the study, told Digital Trends. In the experiment, the suggestions the robot made — courtesy of its machine learning algorithm — were accepted by nurses at Boston’s Beth Israel Deaconess Medical Center 90 percent of the time.
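To get a feel for what learning from a few dozen demonstrations can look like, here is a minimal sketch of a demonstration-based decision policy. This is not CSAIL’s actual system; the feature names, values, and nearest-neighbour approach are all invented for illustration.

```python
# Illustrative sketch only: suggest a scheduling decision by finding the
# most similar past situation in a small set of expert demonstrations.
# All features and decision labels below are hypothetical.

from math import dist

# Each demonstration: (features, decision). Hypothetical features:
# (free rooms, patient acuity 1-5, on-duty staff count).
demonstrations = [
    ((4, 2, 6), "assign_room"),
    ((0, 5, 3), "call_physician"),
    ((1, 4, 2), "call_physician"),
    ((5, 1, 7), "assign_room"),
    ((2, 3, 4), "assign_room"),
    ((0, 4, 5), "call_physician"),
]

def suggest(features):
    """Return the decision taken in the closest-matching past situation."""
    _, decision = min(demonstrations, key=lambda d: dist(d[0], features))
    return decision

print(suggest((3, 2, 5)))  # resembles the "assign_room" demonstrations
print(suggest((0, 5, 2)))  # resembles the "call_physician" demonstrations
```

Even this toy policy makes the broader point: with only a handful of well-chosen examples from someone doing the job well, a system can start producing suggestions that look reasonable in similar situations.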
But this isn’t about replacing human experts with machines, Professor Shah says. Instead, she says the mission of her lab is to design “human-aware AI,” referring to machines that can “better understand humans and human decision-making, so that it can collaborate with people and help make them better in their jobs.”
Essentially, it’s a philosophical continuation of the “expert system,” a type of AI that enjoyed a boom period in the 1980s — based on the dream of coding the knowledge of a select few experts and making it available to the masses as a training tool.
“We wanted to see if we could learn from these superstar nurses, and help make other people into superstars as well — with the machine’s help,” Professor Shah says.
While there is still work to be done before CSAIL’s tool is rolled out (including making sure that novice nurses won’t unquestioningly follow the algorithm’s suggestions), it’s yet another example of a complex real-world optimization problem that AI can help solve.