
Study says AI hype is hindering genuine research on artificial intelligence

[Image: Monitor showing the 2025 AAAI study on AI. Credit: AAAI / Digital Trends]

A new study from the AAAI (Association for the Advancement of Artificial Intelligence), with contributions from hundreds of AI researchers, was published this month, and its main takeaway is this: our current approach to AI is unlikely to lead us to artificial general intelligence.

AI has been a buzzword for a good couple of years now, but artificial intelligence as a field of research has existed for many decades. Alan Turing’s famous “Computing Machinery and Intelligence” paper, which introduced the Turing test we still talk about today, was published in 1950.


The AI everyone talks about today was born from those decades of research, but it’s also diverging from them. Alongside the scientific pursuit, we now have a separate branch of artificial intelligence that you could call “commercial AI.”

Efforts in commercial AI are led by big tech monopolies like Microsoft, Google, Meta, Apple, and Amazon, and their primary goal is to create AI products. That shouldn’t be a problem in itself, but at the moment, it seems it might be.

Firstly, because most people didn’t follow AI research until a couple of years ago, almost everything the average person knows about AI comes from these companies rather than from the scientific community. The study covers this topic in its “AI Perception vs. Reality” chapter: 79% of the scientists involved believe that the current public perception of AI capabilities doesn’t match the reality of AI research and development.

In other words, what the general public thinks AI can do doesn’t match what scientists think AI can do. The reason for this is as simple as it is unfortunate: when a big tech representative makes a statement about AI, it’s not a scientific opinion — it’s product marketing. They want to hype up the tech behind their new products and make sure everyone feels the need to jump on this bandwagon.

When Sam Altman or Mark Zuckerberg says software engineering jobs will be replaced by AI, for example, it’s because they want to push engineers into learning AI skills and nudge tech companies into investing in pricey enterprise plans. Until they start replacing their own engineers (and benefiting from it), however, I personally wouldn’t listen to a word they say on the topic.

It’s not just public perception that commercial AI is influencing, however. Study participants believe that the “AI hype” being manufactured by big tech is hurting research efforts. For example, 74% agree that the direction of AI research is being driven by the hype, likely because research that aligns with commercial AI goals is easier to fund. 12% also believe that theoretical AI research is suffering as a result.

So, how much of a problem is this? Even if big tech companies are influencing the kind of research we do, you’d think the extremely large sums of money they’re pumping into the field should have a positive impact overall. However, diversity is key when it comes to research — we need to pursue all kinds of different paths to have a chance at finding the best one.

But big tech is only really focusing on one thing at the moment: large language models (LLMs). This extremely specific type of AI model powers just about all of the latest AI products, and figures like Sam Altman believe that scaling these models further and further (i.e. giving them more data, more training time, and more compute power) will eventually give us artificial general intelligence (AGI).

This belief, dubbed the scaling hypothesis, says that the more power we feed an AI, the more its cognitive abilities improve and the lower its error rates fall. Some interpretations also say that new cognitive abilities will unexpectedly emerge. So, even though LLMs aren’t great at planning and thinking through problems right now, those abilities should emerge at some point.
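To see the shape of that claim, here’s a minimal Python sketch of the kind of power-law curve that scaling-law studies fit. The constants (a, alpha, floor) are made up for illustration, not numbers from any real paper: the point is only that each doubling of compute lowers the predicted loss by less than the doubling before it.

```python
# Illustrative sketch only: a hypothetical power-law scaling curve.
# The constants are invented for demonstration, not fitted to real models.

def predicted_loss(compute: float, a: float = 10.0,
                   alpha: float = 0.1, floor: float = 1.8) -> float:
    """Toy scaling law: loss falls as a power of compute, toward a floor."""
    return floor + a * compute ** -alpha

# Doubling compute repeatedly yields smaller and smaller absolute gains.
previous = None
for exponent in range(20, 28):
    loss = predicted_loss(2.0 ** exponent)
    gain = "" if previous is None else f"  (gain: {previous - loss:.4f})"
    print(f"compute = 2^{exponent}: loss = {loss:.4f}{gain}")
    previous = loss
```

Under a curve like this, loss never stops falling, but each doubling buys less than the last. Whether those shrinking gains still count as meaningful progress is exactly what the debate below is about.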

there is no wall

— Sam Altman (@sama) November 14, 2024

In the past few months, however, the scaling hypothesis has come under significant fire. Some scientists believe scaling LLMs will never lead to AGI, and that all of the extra power we’re feeding new models is no longer producing proportionate results. Instead, we’ve hit a “scaling wall” or “scaling limit,” where large amounts of extra compute power and data produce only small improvements in new models. Most of the scientists who participated in the AAAI study are on this side of the argument:

The majority of respondents (76%) assert that “scaling up current AI approaches” to yield AGI is “unlikely” or “very unlikely” to succeed, suggesting doubts about whether current machine learning paradigms are sufficient for achieving general intelligence.

Current large language models can produce very relevant and useful responses when things go well, but they rely on statistical pattern-matching to do so rather than genuine understanding. Many scientists believe that if we want to progress closer to the goal of AGI, we will need new algorithms that use reasoning, logic, and real-world knowledge to reach a solution. Here’s one spicy quote on LLMs and AGI from a 2022 paper by Jacob Browning and Yann LeCun (a toy sketch of what pure pattern-matching means follows the quote).

A system trained on language alone will never approximate human intelligence, even if trained from now until the heat death of the universe.
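To make the pattern-matching point concrete, here’s a toy Python sketch of next-token prediction, the objective LLMs are trained on. The tiny corpus and the bigram count table are invented for illustration; a real LLM replaces the table with a deep neural network trained on trillions of tokens, but the shape of the task is the same: pick a statistically likely continuation, with no model of the world behind it.

```python
# Toy sketch only: next-token prediction with bigram counts.
# The corpus is made up; real LLMs use neural networks, not count tables.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat and the cat ate the food".split()

# Count which token follows which token in the "training" text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_token(word: str) -> str:
    """Sample the next token in proportion to how often it followed `word`."""
    counts = following[word]
    if not counts:  # dead end: the word only appeared at the end of the corpus
        return "the"
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate a short continuation: statistically plausible, but no reasoning.
word = "the"
sequence = [word]
for _ in range(6):
    word = next_token(word)
    sequence.append(word)
print(" ".join(sequence))
```

The model never “knows” what a cat or a mat is; it only knows which words tend to follow which. Scaled up enormously, that same objective produces today’s LLMs, which is why critics like Browning and LeCun doubt it can reach human-like intelligence on its own.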

However, there’s no real way to know who is right here, at least not yet. For one thing, the definition of AGI isn’t set in stone, and not everyone is aiming for the same thing. Some people believe that an AGI should produce human-like responses through human-like methods: it should observe the world around it and figure out problems in a similar way to us. Others believe an AGI should focus more on correct responses than human-like ones, and that the methods it uses shouldn’t matter.

In a lot of ways, however, it doesn’t really matter which version of AGI you’re interested in, or whether you’re for or against the scaling hypothesis: we still need to diversify our research efforts. If we only focus on scaling LLMs, we’ll have to start over from scratch if it doesn’t work out, and we could fail to discover new methods that are more effective or efficient. Many of the scientists in this study fear that commercial AI and the hype surrounding it will slow down real progress. All we can do is hope that their concerns are addressed and that both branches of AI research learn to coexist and progress together. Well, you can also hope that the AI bubble bursts and all of the AI-powered tech products disappear into irrelevance, if you prefer.

