
Here’s how A.I. is helping Gfycat get rid of those crummy pixelated GIFs


GIFs may be growing in popularity, but many are grainy, low-resolution files that make the moving memes less than stellar. GIF platform Gfycat is working to change that using A.I. The company recently shared a new program that uses tools like facial recognition to churn out higher-quality GIFs on its platform.

The tech is being used to bolster Gfycat’s library in three ways. First, the software analyzes a GIF’s individual frames and searches online for a higher-resolution copy of the source video. Since GIFs are often created from popular movies, TV shows, and the like, the software can often find a version with higher resolution than the GIF itself. After determining which video the GIF came from, the A.I. then has to locate the exact frames included in the GIF and swap them out to produce the highest-quality version.
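Gfycat hasn’t published its matching code, but the frame-location step described above can be illustrated with a common technique: reduce each frame to a perceptual “difference hash” and find the source-video frame with the smallest Hamming distance. Everything here is a simplified stand-in — frames are tiny grayscale 2D lists, and a real system would decode actual video.

```python
def dhash(frame):
    """Difference hash: one bit per horizontal pixel pair (is left < right?)."""
    bits = []
    for row in frame:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return tuple(bits)

def hamming(a, b):
    """Count of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

def locate_frame(gif_frame, video_frames):
    """Return the index of the video frame whose hash is closest to the GIF frame's."""
    target = dhash(gif_frame)
    distances = [hamming(target, dhash(f)) for f in video_frames]
    return distances.index(min(distances))

# Toy example: three 2x3 "frames"; frame 1 is the GIF frame's source.
video = [
    [[10, 20, 30], [30, 20, 10]],
    [[90, 10, 50], [5, 80, 40]],
    [[0, 0, 0], [0, 0, 0]],
]
gif_frame = [[88, 12, 52], [6, 79, 41]]  # compression-noisy copy of frame 1
print(locate_frame(gif_frame, video))  # → 1
```

Because the hash encodes only relative brightness gradients, it tolerates the noise and color loss that GIF compression introduces, which is why perceptual hashing is a standard choice for this kind of near-duplicate matching.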

The second step is to recognize who’s in the GIF in order to make those higher-quality GIFs easier to find. The platform uses facial recognition and a database of celebrity images to automatically tag GIFs, making those files pop up in search results and categories even when the user who uploaded the original file didn’t add the tags. Gfycat says its facial recognition system is a bit different because it uses a larger training set, which aids accuracy for look-alike celebrities. The company hopes the system will also help recognize new celebrities before they reach rock-star status.
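The tagging step described above typically works by comparing a face’s embedding vector against a labeled database. The sketch below is purely illustrative — the three-dimensional vectors, the `0.9` threshold, and the celebrity names are made up; a real system would use embeddings from a trained face-recognition network.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def tag_face(embedding, database, threshold=0.9):
    """Return the best-matching celebrity tag, or None if nothing is close enough."""
    best_name, best_score = None, -1.0
    for name, reference in database.items():
        score = cosine(embedding, reference)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Hypothetical reference embeddings for two celebrities.
celebs = {
    "Celebrity A": [0.9, 0.1, 0.3],
    "Celebrity B": [0.2, 0.8, 0.5],
}
print(tag_face([0.88, 0.12, 0.31], celebs))  # → Celebrity A
print(tag_face([0.0, 0.0, 1.0], celebs))     # no close match → None
```

The threshold is the knob that matters for look-alike celebrities: a bigger, more varied training set lets the embedding network push similar-looking people further apart, so a strict threshold still separates them.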

The third component of the A.I. system corrects text that has been degraded by the file’s low resolution. Rather than facial recognition, this branch of the tech uses optical character recognition to make out the text, then replaces the grainy version with sharp new text matching the resolution of the upgraded video segment.
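One piece of that text-repair step can be sketched with stdlib tools: noisy OCR output from the low-resolution GIF is snapped to the nearest known words before the caption is re-rendered. Here `difflib` and the tiny word list are stand-ins — a real pipeline would use an actual OCR engine and a much larger lexicon or language model.

```python
import difflib

# Hypothetical vocabulary of words the system expects to see in captions.
VOCAB = ["when", "you", "finally", "understand", "the", "reference"]

def clean_ocr(tokens, vocab=VOCAB, cutoff=0.6):
    """Replace each OCR token with its closest dictionary word, if one is close enough."""
    cleaned = []
    for token in tokens:
        matches = difflib.get_close_matches(token.lower(), vocab, n=1, cutoff=cutoff)
        cleaned.append(matches[0] if matches else token)
    return " ".join(cleaned)

# OCR on a grainy GIF often confuses similar glyphs (k/h, 0/o, I/l).
print(clean_ocr(["wken", "y0u", "finaIly", "understand"]))
# → when you finally understand
```

With the text recovered, re-rendering it at the new resolution is straightforward, which is why the hard part of this branch is the recognition rather than the replacement.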

Gfycat isn’t as popular as platforms like Giphy in terms of user count, but it has a number of attractive options, including a mobile app for making looping GIFs. The platform launched specifically to create better GIFs; it says a Gfycat can be produced around 10 times faster than a traditional GIF, with support for interaction, more colors, and more file formats. The new A.I. program pushes the firm’s goal further by working to create higher-resolution GIFs.

Hillary K. Grigonis