In the world of tech, artificial intelligence is one of the buzziest topics around, particularly the study of “deep learning,” a technique in which an A.I. is trained for a task by consuming huge quantities of data. Some people in the A.I. industry express doubts about the power of deep learning, however, and one company, Kyndi, has a very specific problem with it. Kyndi CEO Ryan Welsh appeared on Digital Trends Live to talk to host Greg Nibler about his company’s quest to build explainable A.I., and why he thinks predictions about the impending singularity are overstated.
Welsh began to think about the need for explainable A.I. back during the financial crisis.
“So I have a graduate degree in quantitative finance, [and] I was working for a law firm during the financial crisis, and effectively we had to read a bunch of information to help our clients unwind a bunch of esoteric credit derivatives, and in three days I had to read an amount of information that [when] I left for business school three years later I was still reading.” He wondered how to “build machines that help us consume that information and ultimately make decisions faster. Instead of taking three years, maybe take three days.”
The problem with deep learning, Welsh explains, is that “when you work with these systems, they don’t really work well on language, and they can’t explain to you why they’re making recommendations, why they’re bringing back certain search results.”
“What makes us human is this ability to ask ‘Why?’ and this desire to ask ‘Why?’ And every time someone makes a statement, your first reply is ‘Why?’” Welsh says. “And it’s because we want to interrogate and understand the person’s belief system and their logic, so that we can determine whether we believe them, whether we adhere to their principles, and ultimately we can gain trust with that individual. We don’t have machines that can provide that, and it just really won’t be able to fit within our workflow as human beings.”
Kyndi is working toward A.I. that is capable of “inductive, deductive, abductive, analogical reasoning, these kinds of things that we can do as human beings.” Unlike a lot of the louder voices in the world of tech, Welsh doesn’t believe the A.I. apocalypse is on the horizon.
“We’re very far away from artificial general intelligence … If you’re in the industry and you work with A.I. systems, you understand how limited they are, specifically around sensory motor and natural language understanding … Systems are very good at parsing sentences, but not really good at understanding the semantics or the pragmatics of language.”
“I think that the people that are going to prevail are going to be the companies that realize that A.I. is a feature of a product, not the product itself,” he says. “So you’ve got to go in, you’ve got to solve real business problems or people problems and ultimately have A.I. be a feature of the product, not the product itself.”
Digital Trends Live airs Monday through Friday at 9 a.m. PT, with highlights available on demand after the stream ends. For more information, check out the DT Live homepage, and be sure to watch live for the chance to win occasional prizes.