Deep-learning A.I. is helping archaeologists translate ancient tablets

Deep-learning artificial intelligence is being used to grapple with plenty of problems in the modern world. But it has a part to play in solving some decidedly ancient problems, too, such as assisting in the translation of 2,500-year-old clay tablets from Persia's Achaemenid Empire.

These tablets, which were discovered in modern-day Iran in 1933, have been studied by scholars for decades. However, scholars have found the translation process for the tablets, which number in the tens of thousands, to be laborious and prone to error. A.I. technology can help.

“We have initial experiments applying machine learning to identify which cuneiform symbols are present in images of a tablet,” Sanjay Krishnan, assistant professor at the University of Chicago’s Department of Computer Science, told Digital Trends. “Machine learning works by extrapolating patterns from human-labeled examples, and this allows us to automate the annotations in the future. We envision that it is a step toward significant automation in the analysis and study of these tablets.”

In this case, the human-labeled examples are annotated tablets from the Persepolis Fortification Archive's (PFA) Online Cultural and Historical Research Environment (OCHRE) dataset. For DeepScribe, a collaboration between researchers from the University of Chicago's Oriental Institute and its Department of Computer Science, the team used a training set of more than 6,000 annotated images to build a neural network able to read unanalyzed tablets in the collection.

When the algorithm was tested on other tablets, it was able to identify the cuneiform signs with an accuracy of around 80%. The hope is to raise this benchmark in the future. Even if that doesn't happen, though, the system could be used to translate large portions of the tablets, leaving human scholars to focus their efforts on the really difficult passages.
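The article doesn't describe DeepScribe's actual model, but the general workflow it outlines, training on human-labeled sign images and then scoring accuracy on held-out examples, can be sketched with a much simpler stand-in classifier. Everything below (the synthetic "sign" templates, the nearest-centroid model, the data sizes) is invented for illustration and is not the project's real pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for annotated cuneiform sign images: each "sign class"
# is a fixed 8x8 template, and each labeled example is a noisy copy.
n_classes, img_size, n_per_class = 3, 64, 200
templates = rng.random((n_classes, img_size))

def make_examples(n):
    X = np.vstack([templates[c] + 0.1 * rng.standard_normal((n, img_size))
                   for c in range(n_classes)])
    y = np.repeat(np.arange(n_classes), n)
    return X, y

X_train, y_train = make_examples(n_per_class)
X_test, y_test = make_examples(50)

# "Training" here just averages the labeled examples of each class,
# mimicking how supervised learning extrapolates patterns from
# human annotations (DeepScribe uses a neural network instead).
centroids = np.stack([X_train[y_train == c].mean(axis=0)
                      for c in range(n_classes)])

def predict(X):
    # Assign each image to the class whose centroid is closest.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Accuracy on held-out examples is the kind of benchmark the
# researchers report (around 80% for the real system).
accuracy = (predict(X_test) == y_test).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```

On this artificially clean toy data the accuracy comes out near perfect; real tablet images, with damage and context-dependent signs, are what make the genuine task hard.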

“Cuneiform is a script used since the third millennium BCE to write multiple languages including Sumerian, Akkadian, and Elamite,” Susanne Paulus, associate professor of Assyriology, told Digital Trends.

Cuneiform poses a series of particular challenges for machine translation. Firstly, it was written by impressing a reed stylus into wet clay. This makes cuneiform one of very few three-dimensional script systems. Secondly, cuneiform is a complex script system using hundreds of signs. Each sign has different meanings depending on its context. Thirdly, cuneiform tablets are ancient artifacts. They are often broken and hard to decipher, which means reading one tablet can take days.

“So far, we have an initial prototype that suggests that such techniques are very effective in a controlled setting,” Krishnan said. “Given a clean image of a single symbol, [we can] determine what the symbol is. Our next step is to develop more robust models that account for context and data quality.”
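Krishnan doesn't say what form those context-aware models will take. As one purely hypothetical illustration, per-symbol classifier scores could be combined with statistics on which signs tend to follow which via Viterbi decoding; all of the probabilities below are made up for the sketch:

```python
import numpy as np

# emission[t, c]: a per-symbol classifier's log-probability that
# position t in a sign sequence is class c (invented numbers).
emission = np.log(np.array([
    [0.6, 0.3, 0.1],   # position 0: classifier fairly sure of class 0
    [0.4, 0.4, 0.2],   # position 1: ambiguous
    [0.1, 0.7, 0.2],   # position 2: classifier favors class 1
]))
# transition[i, j]: log-probability that class j follows class i,
# standing in for knowledge of which signs co-occur (also invented).
transition = np.log(np.array([
    [0.2, 0.7, 0.1],
    [0.5, 0.2, 0.3],
    [0.3, 0.3, 0.4],
]))

def viterbi(emission, transition):
    """Most likely class sequence given per-position scores and context."""
    T, C = emission.shape
    score = emission[0].copy()
    back = np.zeros((T, C), dtype=int)
    for t in range(1, T):
        # cand[i, j]: best score ending in class i then moving to j.
        cand = score[:, None] + transition + emission[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

print(viterbi(emission, transition))  # → [1, 0, 1]
```

Note that context flips position 0 away from the classifier's top per-symbol guess (class 0), which is exactly the kind of correction a context-aware model buys over classifying each sign in isolation.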

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…