
This algorithm can hide secret messages in regular-looking text

FontCode: Embedding Information in Text Documents using Glyph Perturbation

Whether it’s hiding messages under the stamps on letters or writing in invisible ink, people have always found ingenious ways of using whatever technology they have available to send secret messages. A new project carried out by researchers at Columbia University continues this tradition, using deep learning to embed encrypted messages in otherwise ordinary-looking text.


“FontCode” works by making incredibly subtle modifications to everyday fonts like Times New Roman and Helvetica, embedding coded messages inside them. These changes, such as a slightly sharper curve or a minutely thicker stem on a particular letter, are so subtle that the average person viewing the text would be highly unlikely to notice them. Each letter has 52 different variations, which makes it possible to encode any lowercase or capital letter of the alphabet within a single perturbed character, along with punctuation marks and numbers, too.
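To make that mapping concrete, here is a minimal, purely illustrative Python sketch of how such an embedding could work. The 52-symbol alphabet, the function names, and the idea of tagging each carrier letter with a variant index are assumptions for the sake of the example; the researchers’ actual codebook and encoding scheme are described in their paper.

```python
# Conceptual sketch of FontCode-style embedding (not the authors' implementation).
# Assumption: each carrier letter has 52 near-identical glyph variants, and the
# variant index chosen for a letter carries one hidden symbol (a-z, then A-Z).
import string

HIDDEN_SYMBOLS = string.ascii_letters  # 52 symbols: a-z followed by A-Z


def embed(carrier: str, secret: str) -> list[tuple[str, int | None]]:
    """Assign a glyph-variant index to successive carrier letters.

    Returns (character, variant) pairs; variant is None when the glyph is
    left unperturbed. A renderer would then draw each character using the
    corresponding perturbed glyph from its font's codebook.
    """
    if sum(c.isalpha() for c in carrier) < len(secret):
        raise ValueError("carrier text is too short for this message")
    result, i = [], 0
    for ch in carrier:
        if ch.isalpha() and i < len(secret):
            result.append((ch, HIDDEN_SYMBOLS.index(secret[i])))
            i += 1
        else:
            result.append((ch, None))
    return result


def extract(glyphs: list[tuple[str, int | None]]) -> str:
    """Invert embed(): read the recognized variant indices back into symbols."""
    return "".join(HIDDEN_SYMBOLS[v] for _, v in glyphs if v is not None)


pairs = embed("Meet at the usual place tomorrow morning", "RunNow")
print(extract(pairs))  # -> "RunNow"
```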

The researchers then trained a deep learning neural network to recognize these perturbed glyphs and map each one back to the corresponding symbol in the secret message. With the right smartphone app and a brief bit of processing, it’s possible to decode a secret message from the document it’s embedded in. Simply aim your device at the text and, as if by magic, the hidden message is extracted.
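For a sense of what that recognition step might look like, here is a small, hypothetical PyTorch sketch of a CNN that classifies a cropped glyph image into one of the 52 perturbation classes. The architecture, input size, and class names are illustrative assumptions, not the network the Columbia team actually built.

```python
# Illustrative sketch only: a tiny CNN classifying glyph crops into one of the
# 52 variant classes described above. The real FontCode network will differ.
import torch
import torch.nn as nn


class GlyphClassifier(nn.Module):
    def __init__(self, num_variants: int = 52):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Assumes 32x32 grayscale crops -> 8x8 feature maps after two pools.
        self.head = nn.Linear(64 * 8 * 8, num_variants)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, 32, 32) glyph crops from the phone camera
        h = self.features(x).flatten(1)
        return self.head(h)  # logits over the 52 variant classes


model = GlyphClassifier()
logits = model(torch.randn(4, 1, 32, 32))
print(logits.shape)  # torch.Size([4, 52])
```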

Would such a technique ever be applied in the real world? Almost certainly not in everyday conversations, where the idea of having to send one another decoy text documents just to embed a short hidden message sounds like far too much work. However, that doesn’t mean it’s relegated to being an impressive yet impractical demo. It could certainly have applications in the security field, as well as potentially serving as an invisible watermark. Heck, you could even use it as a sort of secret QR code that links to a web address.

A paper describing the project, titled “FontCode: Embedding Information in Text Documents using Glyph Perturbation,” will be presented later this year at the Special Interest Group on Computer Graphics and Interactive Techniques (SIGGRAPH) 2018 conference.

Someone should probably forward this research to the James Bond producers before then, though. We can totally imagine Daniel Craig using the “FontCode” algorithm in the next 007 movie!

Luke Dormehl
Former Digital Trends Contributor