
Our human ancestors knew how to cook 1.9 million years ago

A new Harvard University study suggests that Homo erectus, an ancestor of ours, was cooking food as far back as 1.9 million years ago. While our ancestors definitely weren’t baking cherry pies, scientists have found that they were heating food much earlier than we previously believed. Past research pinned the invention of cooking and the harnessing of fire somewhere between 400,000 and 1.5 million years ago, but few thought its origins stretched so far back.

The Harvard scientists concluded that cooking was commonplace for Homo erectus because of the change in the size of its teeth. When our ancestors began cooking food, their molar teeth gradually shrank because softer, cooked food no longer demanded long stretches of chewing. The Guardian reports that chimpanzees and apes spend about a third of their waking hours eating, while early humans spent only about 5 percent. Researchers say the invention of cooking best explains these changes in tooth size and eating habits. Before cooking, we would have had to chew food for hours to obtain enough calories to survive.


Homo erectus beat out its rivals, too. The earlier Homo habilis spent 7.2 percent of its day eating, and Homo rudolfensis spent 9.5 percent. Later species like Neanderthals and Homo sapiens (us) were more in line with Homo erectus, suggesting they knew how to cook as well.

This research means that cooking was even more important to our rise as a species than we previously thought. It also might explain part of our nature. Harnessing fire may have given us a greater desire to begin harnessing and controlling just about everything else. 

So…what will you eat for dinner tonight?

(image from Reuters)

Editors' Recommendations

Meta wants to supercharge Wikipedia with an AI upgrade

Wikipedia has a problem. And Meta, the not-too-long-ago rebranded Facebook, may just have the answer.

Let’s back up. Wikipedia is one of the largest-scale collaborative projects in human history, with more than 100,000 volunteer human editors contributing to the construction and maintenance of a mind-bogglingly large, multi-language encyclopedia consisting of millions of articles. Upward of 17,000 new articles are added to Wikipedia each month, while tweaks and modifications are continuously made to its existing corpus of articles. The most popular Wiki articles have been edited thousands of times, reflecting the very latest research, insights, and up-to-the-minute information.

The next big thing in science is already in your pocket

Supercomputers are an essential part of modern science. By crunching numbers and performing calculations that would take eons for us humans to complete by ourselves, they help us do things that would otherwise be impossible, like predicting hurricane flight paths, simulating nuclear disasters, or modeling how experimental drugs might affect human cells. But that computing power comes at a price -- literally. Supercomputer-dependent research is notoriously expensive. It's not uncommon for research institutions to pay upward of $1,000 for a single hour of supercomputer use, and sometimes more, depending on the hardware that's required.

But lately, rather than relying on big, expensive supercomputers, more and more scientists are turning to a different method for their number-crunching needs: distributed supercomputing. You've probably heard of this before. Instead of relying on a single, centralized computer to perform a given task, this crowdsourced style of computing draws computational power from a distributed network of volunteers, typically by running special software on home PCs or smartphones. Individually, these volunteer computers aren't particularly powerful, but if you string enough of them together, their collective power can easily eclipse that of any centralized supercomputer -- and often for a fraction of the cost.

Why AI will never rule the world

Call it the Skynet hypothesis, Artificial General Intelligence, or the advent of the Singularity -- for years, AI experts and non-experts alike have fretted (and, for a small group, celebrated) the idea that artificial intelligence may one day become smarter than humans.

According to the theory, advances in AI -- specifically of the machine learning type that's able to take on new information and rewrite its code accordingly -- will eventually catch up with the wetware of the biological brain. In this interpretation of events, every AI advance from Jeopardy-winning IBM machines to the massive AI language model GPT-3 is taking humanity one step closer to an existential threat. We're literally building our soon-to-be-sentient successors.
