
To feed a growing population, scientists want to unleash AI on agriculture

Agriculture has come a long way in the past century. We produce more food than ever before — but our current model is unsustainable, and as the world’s population rapidly approaches the 8 billion mark, modern food production methods will need a radical transformation if they’re going to keep up. But luckily, there’s a range of new technologies that might make it possible. In this series, we’ll explore some of the innovative new solutions that farmers, scientists, and entrepreneurs are working on to make sure that nobody goes hungry in our increasingly crowded world.

Ever since Americans’ Industrial Age migration from the country to the city, urban areas have tended to be associated with cutting-edge technology.

Well, scratch that correlation — because in the age of artificial intelligence, a new research project by Carnegie Mellon University’s Robotics Institute is setting out to prove that the country can be every bit as technologically advanced as the smart city.

Called FarmView (not to be confused with FarmVille, the time-wasting game that has overrun Facebook feeds for much of the last decade), the project employs machine learning, drones, autonomous robots, and virtually every other area of big-budget tech research to help farmers grow more food, better and smarter.

“We’ve been doing research into robotics for agriculture for about 15 years now,” George Kantor, Carnegie Mellon senior system scientist, told Digital Trends. “It’s taken a number of different forms, and this was an attempt to pull it all together into one cohesive project.”

“The world population will hit 9.6 billion by 2050.”

But FarmView is way more than just a top-down organizational reshuffle, like making the finance administration team responsible for accounts receivable instead of accounts payable. In fact, it demonstrates a new sense of urgency around this topic, thanks to a statistic that hammered home its importance to the researchers involved.

That stat? According to current predictions, the world population will hit 9.6 billion by 2050. What that means is that if better ways aren’t found to use our limited agricultural resources – including land, water, and energy – a global food crisis may well occur.

“That’s a statistic which really forces us to look for solutions,” Kantor continued. “Technology alone isn’t going to solve this potential crisis; it also involves social and political issues. However, it’s something we think we can help with. It’s not just about how much food there is, either. The way we produce food right now is very resource intensive, and the resources that are available are being used up. We have to increase the amount of food we produce, as well as the quality, but do so in a way that doesn’t assume we have unlimited resources.”

As part of the project, the team has developed an autonomous ground robot capable of taking visual surveys of crop fields at different times in the season — courtesy of a camera, a laser scanner that measures plant geometry, and a multispectral camera that looks at nonvisible radiation bands. Using computer vision and machine learning, it can then predict the expected fruit yield later in the season.
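The article doesn’t detail FarmView’s actual image-processing pipeline, but one standard way nonvisible bands like these are used in agriculture is to compute a vegetation index such as NDVI, which compares near-infrared reflectance (which healthy plants reflect strongly) against visible red light. A minimal per-pixel sketch, purely for illustration:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index for a single pixel.

    Healthy vegetation reflects strongly in near-infrared (NIR) and
    absorbs red light, so values approaching +1 suggest a vigorous
    canopy, while values near 0 suggest bare soil or stressed plants.
    """
    total = nir + red
    if total == 0:
        return 0.0  # avoid dividing by zero on empty pixels
    return (nir - red) / total

# Reflectance values (0 to 1) for two hypothetical pixels:
print(ndvi(0.8, 0.1))  # dense canopy -> high index
print(ndvi(0.2, 0.2))  # bare soil -> index of 0.0
```

A real system would run a computation like this over every pixel of a georeferenced image and feed the resulting map, along with the laser-scanned plant geometry, into a yield-prediction model.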

Rather than just passively passing on this information to a farmer, however, it can then actively trigger the robotic pruning of leaves or thinning of fruit in a way that maintains an optimal ecological balance between leaf area and fruit load.
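As a rough sketch of what that kind of automated decision might look like, here is a hypothetical rule that keeps the ratio of leaf area to fruit load inside a tolerance band. The target ratio, the 20 percent tolerance, and the function name are all illustrative assumptions, not parameters from the FarmView project:

```python
def thinning_action(leaf_area_cm2: float, fruit_count: int,
                    target_ratio: float = 300.0) -> str:
    """Decide whether to prune leaves or thin fruit so the
    leaf-area-to-fruit ratio stays near a target value."""
    if fruit_count == 0:
        return "none"  # nothing to balance against yet
    ratio = leaf_area_cm2 / fruit_count
    if ratio > 1.2 * target_ratio:
        return "prune_leaves"  # more leaf area than the fruit load needs
    if ratio < 0.8 * target_ratio:
        return "thin_fruit"    # too many fruit for the available leaf area
    return "none"              # balance is within tolerance

print(thinning_action(5000.0, 10))  # ratio 500 -> "prune_leaves"
print(thinning_action(2000.0, 10))  # ratio 200 -> "thin_fruit"
print(thinning_action(3000.0, 10))  # ratio 300 -> "none"
```

The interesting engineering problem, of course, is upstream of a rule like this: estimating leaf area and fruit counts reliably from camera and laser data in a cluttered, outdoor field.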

CMU researchers also use a combination of drones and stationary sensor networks to take macroscale measurements of plant growth.

“Our push now is to start using these tools to solve problems on a large scale.”

While these are undeniably clever pieces of technology, the really long-lasting impact is going to come from how tools like leaf-cutting robots and drones can be used to improve the crops themselves.

In this capacity, Kantor pointed toward the crop sorghum, a coarse, dry grass grain that originated thousands of years ago in Egypt. Grain sorghum is widely eaten, and is considered the fifth-most important cereal crop grown in the world. Because it features so many different varieties (a whopping 42,000!), it also has enormous genetic potential for creating new high-protein varieties that could make it even more important.

After all, who’s satisfied with simply being the fifth-most important cereal crop?

That’s where AI comes in. If it’s possible to use machine-learning technology to measure sorghum parameters in such a way that breeders and geneticists can choose the traits most necessary for improved yield, as well as those most resistant to disease and drought, it could have a massive positive impact. Improving the yield alone by, say, 50 percent would represent a real-world impact that very few computer scientists can ever be credited with.

So does all of this mean that the farm of the future, like the factory of the future, will be largely free of humans — with row after row of gleaming Terminator-style robots carrying out all the work? Not quite.


“We’re not doing this to replace people. What we’re doing is to introduce new technologies that can make farmers more efficient at what they do, and allow them to use fewer resources to do it,” Kantor said. “The scenario we envision doesn’t involve using fewer people; it involves using robotics and other technologies to carry out tasks that humans aren’t currently doing.”

At present, many of the technologies are still at the “proof of concept” phase, but Kantor noted that they’ve had some interesting discussions with agricultural early adopters. Now the project — which also includes folks from Texas A&M, Penn State, Colorado State, Washington State, the University of Maryland, University of Georgia, and South Carolina’s Clemson University — is preparing to hit the big time.

“A lot of people don’t think of this as being the first place to do this kind of research and development, but it’s an area that — and I’m sorry to use this pun, but it’s really unavoidable — is really ripe for progress,” Kantor concluded. “Our push now is to start using these tools to solve problems on a large scale.”

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…