
A.I. could monitor farms from above to make sure they’re not illegally polluting

The idea of an artificial intelligence that watches from the skies, seeking out wrongdoing, sounds like something out of a sci-fi dystopia. In fact, it describes a new deep learning A.I. being developed to help detect farms that are illegally polluting waterways.

“According to the [Environmental Protection Agency], agriculture is the leading contributor of pollutants to the waterways of the United States,” Daniel E. Ho, co-lead author on the project, told Digital Trends. “Intensive livestock agriculture facilities — known in the United States as Concentrated Animal Feeding Operations (CAFOs) — are responsible for roughly 40% of U.S. livestock production. But environmental monitoring and enforcement has been hampered by the lack of systematic knowledge about these facilities. Some environmental interest groups and one state authority hence resorted to manually scanning satellite images to identify CAFO locations, a process that can take over three years for a single state. Our research addresses this problem by training a machine learning model to recognize CAFO facilities from high-resolution satellite imagery.”

The neural network was trained on a combination of census data from environmental interest groups in North Carolina and publicly available satellite images. The A.I. learned to identify features like outdoor manure pits, which can indicate potential sources of pollution. In the future, it may be further developed to detect actual pollution of waterways.
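For readers curious what that training step might look like in practice, here is a minimal sketch, not the authors' actual code, of fine-tuning an ImageNet-pretrained convolutional network to flag CAFO facilities in labeled satellite tiles. The directory layout, class names, and hyperparameters below are assumptions for illustration only.

```python
# Illustrative sketch (not the published model): fine-tune a pretrained CNN
# to classify satellite image tiles as CAFO vs. non-CAFO.
# Paths, class labels, and hyperparameters are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: data/train/{cafo,not_cafo}/*.png
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("data/train", transform=preprocess)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the classifier head
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # CAFO vs. non-CAFO

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_dl:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

In a deployment like the one described in the article, a classifier of this kind would score large numbers of satellite tiles and surface likely facilities for human review rather than acting on its own.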


Ho isn’t necessarily who you would expect to be behind an initiative like this. A legal scholar, he is a law and political science professor at Stanford University. So how did he come to be involved?

“I came across the topic after teaching a module on livestock production for a class at Stanford,” he explained. “We covered largely the legal questions about how CAFOs are regulated under the Clean Water Act, but many [were] surprised by the basic lack of knowledge about CAFOs. Our research team then began to examine whether there were ways to leverage the major advances in image recognition to solve this problem.”

The system was developed in collaboration with research assistants who validated the training data and with Stanford computer science students who brainstormed relevant computer vision techniques for locating facilities.

“We view the current version very much as a proof of concept, but we believe such a system could be deployed in partnership with environmental interest groups or regulatory bodies with a bit more engineering effort,” Cassandra Handan-Nader, a graduate student who worked on the project, told Digital Trends. “The system is not intended to be fully autonomous. To the contrary, we envision models like ours playing a supporting role to humans engaging in environmental monitoring tasks.”

A paper describing the work was recently published in the journal Nature Sustainability.
