A.I. could help spot telltale signs of coronavirus in lung X-rays

There are many pain points when it comes to the coronavirus and the disease it causes, officially known as COVID-19. One of them is how exactly to test people for it when the necessary testing kits are in short supply. One possible solution could be to allow artificial intelligence to scrutinize chest X-rays of patients’ lungs for signs of coronavirus-related lung damage.


That’s the basis for several promising attempts to develop a neural network that could give a strong indication of whether a patient is likely to have COVID-19. Researchers at Chinese medical company Infervision recently teamed up with Wuhan Tongji Hospital in China to develop such a diagnostic tool, which is reportedly now being used for screening at the Campus Bio-Medico University Hospital in Rome, Italy.

Meanwhile, other researchers from the University of Waterloo in Ontario, Canada, and Canadian A.I. firm DarwinAI this week announced a new open-access neural network available to the public. Called COVID-Net, it was announced at MIT Technology Review’s EmTech Digital event by DarwinAI CEO Sheldon Fernandez. It’s intended as a tool for similar screening, and it is open to further testing by researchers around the world, who may soon be able to deploy it as a much-needed public health solution.

“We carried [out the A.I.’s] training on a dataset made up of 5,941 posteroanterior chest radiography images, across 2,839 patient cases, from two open-access data repositories,” Alexander Wong, one of the researchers on the project, told Digital Trends. “So far, the sensitivity to COVID-19 cases is quite good. However, the data on COVID-19 cases is still limited and we are continuing to improve the COVID-Net model as more data comes in over time.”
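To give a rough sense of what this kind of training setup involves, here is a minimal PyTorch sketch that fine-tunes an off-the-shelf ResNet-18 on a folder of labeled chest X-ray images. This is purely illustrative and is not the actual COVID-Net architecture or training code; the directory layout (xrays/train), the three example classes (normal, non-COVID pneumonia, COVID-19), and the hyperparameters are all assumptions made for the sake of the example.

```python
# Illustrative sketch only: fine-tune a pretrained CNN to classify chest X-rays.
# Assumes a hypothetical layout: xrays/train/<class_name>/*.png with three classes,
# e.g. normal / pneumonia / covid19. Not the COVID-Net model itself.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard preprocessing for an ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("xrays/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Swap the ImageNet classification head for a three-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 3)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(5):
    model.train()
    running_loss = 0.0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f"epoch {epoch + 1}: loss {running_loss / len(train_loader):.4f}")
```

In practice, and as Wong notes, the hard part is less the training loop than the scarcity of confirmed COVID-19 images, which is why sensitivity on that class is the number researchers watch most closely.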

This is a problem that any A.I. researcher working in this area is likely to run into. Simply put, there’s still much to learn about COVID-19, which can make developing tools to recognize it (and, in this case, distinguish it from other maladies of the lung) difficult. That is why the idea of a publicly available and publicly scrutable system is so promising.

“[COVID-Net] is currently not used by patients,” Wong said. “But we are continuing to work hard on improving the results, and invite clinicians and clinical institutes and organizations to use it, give feedback, [and] contribute data so we can accelerate its readiness for clinical deployment. Right now, everything is available to the global community, so hopefully this accelerates progress and advances in this area.”

A.I. researchers are always talking about wanting to solve big problems. Right now, this is one of the biggest that there is.
