
Microsoft teaches AI to see the funny side of things

Airbnb’s prescient artificial intelligence predicts the best time to rent out your pad. Google’s self-teaching computers speed up spam detection and translation. And Microsoft’s, apparently, knows when a joke’s funny. As part of a recent study, researchers at the Redmond-based company collaborated with New Yorker cartoon editor Bob Mankoff to imbue artificial intelligence with a sense of humor. The results are predictably fascinating.

Microsoft researcher Dafna Shahaf, who headed the study, began by attempting to teach the AI the linguistic hallmarks of comedy, like sarcasm and wordplay. Shahaf fed the program hundreds of old New Yorker captions and cartoons and, with the aid of crowd workers from Amazon’s Mechanical Turk, painstakingly labeled each with two categories: context and anomalies. The ‘context’ label described what was pictured — in an office setting, objects like “secretary” and “phone” — while ‘anomalies’ highlighted any potential source of humor — an unexpected “stairway” in said office, for example.
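That labeling scheme lends itself to a toy illustration. The sketch below is purely hypothetical — the label sets and the scoring rule are made up for demonstration and are not Microsoft's actual method — but it shows the intuition: a caption that references the labeled anomaly (the presumed source of the humor) can be ranked above one that only mentions ordinary context.

```python
import re

# Hypothetical labels in the style the study describes: what is pictured
# ("context") versus what is out of place ("anomalies").
CARTOON_LABELS = {
    "context": {"office", "secretary", "phone", "desk"},
    "anomalies": {"stairway"},
}

def score_caption(caption, labels):
    """Count overlaps between a caption's words and the labeled sets.
    Anomaly hits are weighted more heavily, on the (assumed) intuition
    that the anomaly is where the joke lives."""
    words = set(re.findall(r"[a-z]+", caption.lower()))
    context_hits = len(words & labels["context"])
    anomaly_hits = len(words & labels["anomalies"])
    return context_hits + 2 * anomaly_hits

captions = [
    "Nice phone, boss.",
    "The stairway to middle management.",
]
# Rank reader submissions from most to least promising.
ranked = sorted(captions, key=lambda c: score_caption(c, CARTOON_LABELS),
                reverse=True)
```

A real system would, of course, learn such weights from the crowdsourced labels rather than hard-code them; this only illustrates how the context/anomaly split turns caption triage into a scoring problem.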


Shahaf then set the software loose at the offices of the New Yorker, tasking it with identifying (or trying to identify) the funniest cartoon captions among a week’s worth of reader submissions. The result? The AI and the editors agreed about 55.8 percent of the time, or on about 2,200 selections. That’s nothing to sneeze at, Shahaf told Bloomberg — on average, the AI saved Mankoff “about 50 percent of [the] workload.”

It isn’t difficult to imagine applications beyond editorial decision-making. The techniques might one day be used to improve Microsoft’s real-time translation efforts (such as Skype Translator) or, Microsoft Research’s Eric Horvitz told Bloomberg, to help flesh out the personalities of digital assistants like Siri and Cortana.

And Microsoft’s software isn’t the only AI humorist around, surprisingly. An Israeli student built a system that, after being fed more than 5,000 Facebook posts, could recognize “patronizing sounding semantics” and “slang words in phrases in text.” Hebrew University’s SASI, or Semi-supervised Algorithm for Sarcasm Identification, can recognize sarcastic sentences in Amazon.com product reviews. And a scientist at Purdue University has designed algorithms capable not only of identifying jokes, but of explaining why a particular joke is funny.

But consistently hilarious robots are a ways off. While Microsoft’s AI found that brevity played best in the New Yorker’s caption section, a University of Michigan system favored downbeat punchlines. Reconciling the two — deriving an objective humor metric — will take a lot more algorithmic fine-tuning. “Computers can be a great aid,” Mankoff told Bloomberg, “[but] there are more things in humor and human beings than are dreamt of in even Microsoft’s algorithms.”

Kyle Wiggers
Former Digital Trends Contributor
Kyle Wiggers is a writer, Web designer, and podcaster with an acute interest in all things tech. When not reviewing gadgets…