
As tech trickles in, medicine is about to hit warp speed

In the summer of 2008, I noticed a mole on my arm that seemed to be getting bigger.

It was hard to tell, though. I wasn’t sure if it had actually grown — or if I was just freaking out and being a hypochondriac for no good reason — so I decided to have it checked out. Doing so required me to call a clinic, set up an appointment, wait a few days, and then drive to the doctor’s office. Once I was there, a woman with more than eight years of specialized medical education took a long, hard look at the mole and asked me a series of questions about it — but when all was said and done, she didn’t have a definitive answer for me. Instead, she referred me to a different doctor who had more experience with melanoma, and the whole process started over again.

It ended up being nothing, but the second doctor told me to keep an eye on it just to be safe. Fast-forward eight years, and I’m still keeping an eye on it — but my methods have become a bit more sophisticated. Now, every few months, I pull a smartphone out of my pocket, fire up an application called SkinVision, and snap a picture of the mole. Within seconds, the app uses advanced image recognition algorithms to analyze the shape, size, and color of the affected area, then compares it to all the pictures I’ve taken in the past to assess my risk of melanoma.
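
For a sense of what an app like that is doing under the hood, here is a minimal sketch of mole tracking in Python. To be clear, this is not SkinVision's actual algorithm; the file names and thresholds are placeholders, and the simplifying assumption that the mole is just the darkest patch of a well-lit, consistently framed photo is invented for illustration.

```python
# A minimal sketch of photo-to-photo mole tracking. NOT SkinVision's real
# algorithm: file names are placeholders, thresholds are arbitrary, and we
# assume the mole is simply the darkest region of a well-lit photo.
import numpy as np
from PIL import Image

def mole_stats(path):
    """Return the mole's pixel area and mean RGB color for one photo."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    brightness = rgb.mean(axis=2)
    mole = brightness < np.percentile(brightness, 10)  # darkest 10% of pixels
    return mole.sum(), rgb[mole].mean(axis=0)

area_then, color_then = mole_stats("mole_january.jpg")  # hypothetical files
area_now, color_now = mole_stats("mole_june.jpg")

growth = (area_now - area_then) / area_then
color_shift = np.linalg.norm(color_now - color_then)

print(f"Area change: {growth:+.1%}, color shift: {color_shift:.1f}")
if growth > 0.10 or color_shift > 25:  # illustrative cutoffs, not medical ones
    print("Noticeable change -- worth showing a dermatologist.")
```

Even a toy like this captures the core idea: quantify the lesion's size and color once per photo, and let the trend across months, rather than a single glance, drive the risk estimate.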

Something that once took me two weeks and multiple doctor visits can now be performed in less time than it takes me to tie my shoes.

With the help of technology, something that once took me two weeks and multiple doctor visits can now be performed in less time than it takes me to tie my shoes. It still blows my mind that such a radical transformation took less than a decade to come about, so now, every time I fire up the app, I can’t help but wonder what kind of advances we’ll see in the next decade.

Ten years from now, what will medicine look like? Will we be operated on by robotic surgeons, grow new organs on demand, and take miracle pills that alleviate all our ailments? Will the world’s most deadly diseases be cured, or will we figure out how to prevent them before they happen in the first place? It’s easy to speculate about the distant future, but what about the near future? What wondrous things will be possible — realistically — in 2026?

To understand, you first need to look back at the tectonic shifts that have taken place over the last 10 years, and will continue to ripple into the future. Here’s how technology has radically reshaped medicine over the course of the past decade, and a peek at some of the amazing advances to come in the next decade.

The internet of health

In 2006, almost nobody had a smartphone in their pocket. The mobile web was in its infancy, the iPhone hadn’t been released, and “wearable tech” wasn’t yet part of the popular vernacular. Just 10 years later, all of these things are practically ubiquitous in the developed world.

Unlike any other time in human history, people are now walking around with sensor-studded, Internet-connected computers more or less attached to their bodies. These computers allow us to not only access a world of health information whenever we need it, but also track our personal health in unprecedented new ways.

Even a cheap smartphone can check your heart rate, count the number of steps you take, or monitor the quality of your sleep at night. If you need something more advanced, there are also countless attachments available that can transform your mobile device into just about any medical tool you could ever need. A smartphone-powered otoscope can diagnose ear infections, a smart stethoscope can identify unusual heart rhythms, and a smartphone-connected molecular spectrometer can tell you the chemical makeup of any foods or pills you encounter. And that’s just to name a few.

The SkinVision app can keep track of a skin mole over time to calculate the risk that it’s melanoma. (Credit: SkinVision)

This incredible abundance of apps, sensors, and information has already kicked off a major shift away from traditional medical practices.

“Basically, what we’re seeing is the digitization of human beings,” says Dr. Eric Topol, a cardiologist and the director of the Scripps Translational Science Institute. “All these new tools give you the ability to basically quantify and digitize the medical essence of each human being. And since patients are generating most of this data themselves, because their smartphones are medicalized, then they take center stage instead of the doctor. And with smart algorithms to help them interpret their data, they can, if they want, become emancipated from the closed-off world of traditional health care.”

Looking to the future, Topol believes that smartphones will radically transform the role that human physicians play in the health care system. “These tools can reduce our use of doctors, cut costs, speed up the pace of care, and give more power to patients,” he explains. “As more medical data is generated by patients and processed by computers, much of medicine’s diagnostic and monitoring aspects will shift away from physicians. The patient will begin to take charge, turning to doctors chiefly for treatment, guidance, wisdom, and experience. These doctors won’t write orders; they’ll offer advice.”

Medicine, meet computer science

Computers have a long history in the field of medicine. Hospitals have been using them to track medical records and monitor patients since the 1950s, but computational medicine — that is, using computer models and sophisticated software to figure out how disease develops — has only been around for a relatively short amount of time. It wasn’t until the past decade or so, when computers became drastically more powerful and accessible, that the field of computational medicine really started to take off.

Dr. Raimond Winslow, director of the Johns Hopkins University Institute for Computational Medicine, which was founded in 2005, says that in recent years, “the field has exploded. There’s a whole new community of people being trained in math, computer science, and engineering — and they’re also being cross-trained in biology. This allows them to bring a whole new perspective to medical diagnosis and treatment.”

In a relatively short amount of time, computational medicine has been used to accomplish some pretty incredible things.

Now, instead of just puzzling over complex medical questions with our limited human brainpower, we’ve begun enlisting the help of machines to analyze vast amounts of data, recognize patterns, and make predictions that no human doctor could even fathom.

“Looking at disease through the lens of traditional biology is like trying to assemble a very complex jigsaw puzzle with a huge number of pieces,” Winslow explains. “The result can be a very incomplete picture. Computational medicine can help you see how the pieces of the puzzle fit together to give a more holistic picture. We may never have all of the missing pieces, but we’ll wind up with a much clearer view of what causes disease and how to treat it.”

In a relatively short amount of time, computational medicine has been used to accomplish some pretty incredible things — such as pinpointing the gene and protein markers of colorectal cancer, ovarian cancer, and a number of cardiovascular diseases.

Lately, the field has even begun to branch out beyond disease modeling. As our computational powers have expanded over the years, the ways in which scientists are using these powers have expanded as well. Scientists are now using technologies like deep-learning algorithms and artificial intelligence to mine information from sources that are otherwise useless or inaccessible.

Take Dr. Gunnar Rätsch of Memorial Sloan Kettering Cancer Center, for example. He and his team recently used computation to unravel the mysteries of cancer in a totally unorthodox way. Rather than building a model of the disease to understand it on a biological level, Rätsch and his team built an artificially intelligent software program capable of reading and understanding hundreds of millions of doctors’ notes. By comparing these notes and analyzing relationships between patient symptoms, medical histories, doctors’ observations, and different courses of treatment, the program was able to find connections and associations that human doctors might not have noticed.

“The human mind is limited,” Rätsch explains, “hence you need to use statistics and computer science.”

Computational science will open new ways to fight old problems, like the metastasis of cancer. (Credit: Memorial Sloan Kettering)

And Rätsch isn’t the only one thinking outside the box. With powerful new computers, tons of new data, and a myriad of clever new approaches, researchers are cooking up completely different ways to attack complex medical problems.

For instance, researchers recently developed a machine-learning algorithm that tracks the spread of disease by sifting through Twitter for geotagged tweets about being sick. By analyzing this data, epidemiologists can more accurately predict where viruses like influenza are likely to spread, which helps health officials deploy vaccines more effectively.
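
A toy version of that pipeline fits in a few lines. Everything below is invented for illustration; a real system would stream geotagged posts from the Twitter API and use a trained language classifier rather than a keyword list.

```python
# A toy version of tweet-based disease tracking. The tweets and keyword list
# are invented; a real system would stream geotagged posts from the Twitter
# API and use a trained classifier instead of keyword matching.
from collections import Counter

SYMPTOM_WORDS = {"flu", "fever", "chills", "cough", "sick"}

# (region, text) pairs standing in for a stream of geotagged tweets
tweets = [
    ("Portland", "home sick with a fever and chills"),
    ("Portland", "this cough will not quit"),
    ("Austin", "beautiful day for a run"),
    ("Austin", "caught the flu from my roommate"),
]

def looks_sick(text):
    """Crude stand-in for a real 'is this person ill?' classifier."""
    return any(word in text.lower().split() for word in SYMPTOM_WORDS)

# Count symptom reports per region -- rising counts hint at an outbreak
counts = Counter(region for region, text in tweets if looks_sick(text))
for region, n in counts.most_common():
    print(f"{region}: {n} symptom reports")
```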

In a different study, researchers trained an artificial neural network to recognize patterns in MRI scans, which ultimately resulted in a system that could not only detect the presence of Alzheimer’s, but also predict when the disease was likely to appear in an otherwise healthy patient.
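
Stripped to its essentials, that kind of study follows a familiar recipe: summarize each scan as a vector of numbers, then train a network to map those numbers to outcomes. The sketch below uses synthetic stand-in data and scikit-learn's basic neural network, not the researchers' actual model.

```python
# A bare-bones illustration of training a neural network to flag disease
# risk from brain-scan data. The "scans" here are synthetic: each one is
# reduced to 10 made-up numeric features (think regional brain volumes).
# The actual study used real MRI volumes and a far more elaborate model.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

X = rng.normal(size=(200, 10))            # 200 synthetic "scans"
# Label 1 = patient later developed the disease (a contrived rule + noise)
y = (X[:, 0] + X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X, y)

new_scan = rng.normal(size=(1, 10))       # features from an unseen patient
print(f"Predicted risk: {model.predict_proba(new_scan)[0, 1]:.0%}")
```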

We also have algorithms that can diagnose depression and anxiety by analyzing patterns in your speech, and even predict the spread of Ebola by analyzing the migratory activity of infected bats. And the list goes on. These are just a few examples of a larger trend: computing has invaded dozens of medical specialties at this point, and it will continue to spread until it reaches every corner of medical research and practice.

Gene editing

Any discussion of the most significant advances that have occurred over the past 10 years would be woefully incomplete without a mention of CRISPR-Cas9. This single technique is unquestionably one of the biggest achievements of our time, and will have a profound effect on the future of medicine.

For the uninitiated, CRISPR-Cas9 is a genome-editing technique that allows scientists to edit genes with unprecedented precision, efficiency, and flexibility. It was developed in 2012, and has since swept through the field of biology like wildfire.

Put simply, CRISPR has cut down some of the biggest hurdles standing in front of DNA researchers all over the world.

The acronym CRISPR stands for Clustered Regularly Interspaced Short Palindromic Repeats. That probably doesn’t mean much to you unless you’re a biologist, but in a nutshell, it refers to an adaptive immune system that microbes use to defend themselves against invading viruses by recording and targeting their DNA sequences. Some years ago, scientists realized this technique could be repurposed into a simple and reliable technique for editing — in living cells, no less — the genome of just about any organism.

Now, to be fair, CRISPR isn’t the first genome-editing tool ever created. Scientists could already edit genes with techniques like TALENs and zinc finger nucleases. These earlier techniques, however, don’t hold a candle to the simplicity of CRISPR. Both require scientists to build custom proteins for each DNA target — a process that takes far more time and effort than the relatively simple RNA programming that CRISPR uses.

“We could do all this genetic engineering stuff before,” explains Josiah Zayner, biohacker and biologist, “but previous things that people used, like zinc finger nucleases and TALENs, had to be engineered on a protein level. So if you wanted to engineer something for a certain gene, it would take you like six months to engineer the proteins to bind the DNA. With CRISPR, if I want to do a new CRISPR experiment, I could go online, go to one of these DNA synthesis companies, order 100 different things, and tomorrow I could be doing my experiments. So it went from six months down to, well — some of these companies ship overnight now — so not only can you do 100 times as much research, you can do it 100 times faster than before.”
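
Zayner's point is easy to see in code. Designing a CRISPR guide is essentially string matching: find a 20-nucleotide stretch sitting just upstream of the "NGG" motif (the PAM) that Cas9 requires. The DNA sequence below is made up for illustration, and real guide design also checks the reverse strand and screens candidates for off-target matches.

```python
# Guide design as string matching: scan a DNA strand for every 20-nucleotide
# target that sits immediately upstream of an "NGG" motif (the PAM sequence
# Cas9 requires). The sequence is made up; real tools also check the reverse
# strand and screen each candidate for off-target matches in the genome.
import re

dna = "ATGCGTACCTGAGGTTACGCTAGGCTTAAGGCTTGGCCTAGGATCCGGAATTCGG"

def find_guides(seq):
    """Return (position, guide, PAM) for each NGG site with 20 nt upstream."""
    guides = []
    # Lookahead regex so overlapping PAM sites are all found
    for m in re.finditer(r"(?=([ACGT]GG))", seq):
        pam = m.start(1)
        if pam >= 20:
            guides.append((pam - 20, seq[pam - 20:pam], m.group(1)))
    return guides

for pos, guide, pam in find_guides(dna):
    print(f"position {pos:>2}: guide {guide} | PAM {pam}")
```

Each printed guide is a sequence you could paste straight into a synthesis company's order form, which is exactly the six-months-to-overnight collapse Zayner describes.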

Like Photoshop for genes, CRISPR allows scientists to precisely edit DNA, changing the instructions your body follows. (Credit: McGovern Institute for Brain Research at MIT)

Put simply, CRISPR has cut down some of the biggest hurdles standing in front of DNA researchers all over the world. The floodgates are now open, and anybody can do gene editing.

In the decade leading up to the development of the CRISPR-Cas9 technique, CRISPR was mentioned in scientific publications just 200 times. That number tripled in 2014 alone, and the pace shows no signs of slowing down anytime soon.

In the past two years alone, researchers have successfully used CRISPR to engineer crops that are immune to certain fungal diseases, eradicate HIV-1 from infected mouse cells, and even perform full-scale genome engineering.

And this is just the beginning. As I write these words, the first-ever gene-editing trials in humans are getting underway. In August, a group of Chinese researchers will attempt to treat a patient with cancer by injecting the person with cells modified using the CRISPR-Cas9 method. More specifically, the team plans to take white blood cells from patients with a certain type of lung cancer, edit those cells so that they attack cancer, and then reintroduce them into each patient’s body. If all goes as planned, the engineered cells will hunt down and kill the cancer cells, and the patient will make a full recovery.

A litany of successful animal trials suggests that CRISPR has huge potential in the treatment of human disease.

A litany of successful animal trials suggests that CRISPR has huge potential in the treatment of human disease. But arguably the biggest strength of CRISPR isn’t that it’s so simple and effective — it’s that the technique has become so accessible that anyone can use it.

Right now, thanks to a biotech supply startup in California, anybody with $140 can get their hands on a do-it-yourself CRISPR kit and start performing basic gene-editing experiments right on the kitchen counter. Zayner, the company’s founder, hopes that putting these tools in the hands of citizen scientists will boost our collective knowledge of DNA in a huge way.

“There are so many people out there with all this knowledge and skill and creativity and abilities that aren’t being utilized,” Zayner said. “I read somewhere that there’s over 7 million hobbyist computer programmers in the world right now — which is crazy when you consider that in 1970 there were barely enough to fill a garage. But when it comes to genetic engineering and DNA, we’ve been working on this stuff for longer, or at least as long as computers have been around, yet there’s probably only a few thousand hobbyist scientists doing experiments. That’s what I want to change. Where would our medical world be if there were 7 million hobbyist biologists?”

Regenerative medicine grows up

In 1981, two U.K. scientists made a massive breakthrough. For the first time ever, they managed to grow embryonic stem cells in a lab. Stem cells — the cellular putty from which all tissues of the body are made — have a nearly endless list of potential medical applications, and ever since their discovery, scientists have been singing their praises. For years, we’ve been told that stem cell research will usher in a future where we’ll be able to regrow tissues, organs, and even full limbs. But while we’ve long known their potential, it wasn’t until recently that we figured out how to truly use stem cells to our collective advantage.

The thing is, we hit a few roadblocks along the way. After mouse stem cells were first cultivated in 1981, it took another 18 years for scientists to successfully isolate human embryonic stem cells and grow them in a lab. When this finally happened, it was universally accepted as a monumental achievement — but this new technology wasn’t met with open arms by regulators.

In 2001, the Bush administration placed crippling limits on funding for human stem cell research in the U.S., on the grounds that creating stem cells required the destruction of a human embryo (debates about abortion and where life does or doesn’t begin were very high-profile at the time). This didn’t stop progress from happening in other parts of the world. In 2006, a Japanese scientist by the name of Shinya Yamanaka developed a way to make embryonic-like cells from adult cells — thereby avoiding the need to destroy an embryo in order to make usable, versatile stem cells.

Stem cells give scientists a way to regenerate tissue previously thought to be lost forever. (Credit: Juan Gärtner/123RF)

From that point onward, stem cell research has been growing like, well, stem cells. Three years after Yamanaka’s 2006 workaround, the Obama administration lifted the funding restrictions that had been imposed in 2001. Suddenly, the floodgates opened, and practically every year since then has seen some kind of major breakthrough in regenerative medicine.

In 2010, for the first time ever, scientists used human embryonic stem cells to treat a person with a spinal cord injury. In 2012, they were successfully used in a different trial to treat a woman with age-related macular degeneration. And the breakthroughs just keep on coming. To date, stem cell-related therapies have been used (or are being investigated) for: diabetes, Parkinson’s disease, Alzheimer’s, traumatic brain injury repair, tooth regrowth, hearing repair, wound healing, and even the treatment of certain learning disabilities.

In the past couple of years, researchers have even begun exploring ways to use stem cells in conjunction with additive manufacturing — which has given rise to the cutting-edge technique known as 3D bioprinting. By using 3D printers to create scaffolds on which stem cells can be seeded, scientists have made great strides in growing new limbs, tissues, and organs outside the human body. The hope is that one day we’ll be able to print replacement parts in these machines and transplant them, thereby reducing or outright eliminating our reliance on organ, limb, and tissue donors. The technique is still in its infancy, but it’s also a wonderful example of how natural sciences like biology can merge with and benefit from technological developments happening outside the confines of traditional medicine.

The Golden Age of Neuroscience

In 2014, when renowned physicist and futurist Michio Kaku famously stated that “we have learned more about the thinking brain in the last 10 to 15 years than in all of human history,” he wasn’t stretching the truth. The fleshy bundle of electrically pulsating neurons inside our skulls has puzzled scientists for centuries — but thanks in large part to advances in computing, sensing, and imaging technologies, our understanding of the human brain has expanded dramatically in the past few years.

“Optogenetics has allowed researchers to learn how various networks of neurons contribute to behavior, perception, and cognition.”

A flurry of new imaging and scanning technologies developed over the course of the past few decades have allowed scientists to observe the brain like never before. We can now see thoughts, emotions, hot spots and dead zones inside the living brain, and then begin the process of deciphering these thoughts using powerful computers.

This has huge implications for the future of medicine. Mental illnesses and neurological impairments are the leading cause of disability in the U.S. and many other developed countries. According to the National Alliance on Mental Illness, roughly 1 in 5 people suffer from some kind of mental health problem. But thanks to a number of new technologies that have come to fruition in the past decade, we’re quickly learning how to treat everything from neurodegenerative diseases like Alzheimer’s and ALS, to more puzzling conditions such as autism and schizophrenia.

One particularly promising recent development is optogenetics — a technique that allows scientists to switch individual neurons on or off with light. Before this method was perfected, standard procedures for activating or silencing neurons were relatively crude. To determine which group of neurons helps mice navigate mazes, for example, scientists would insert electrodes directly into a mouse’s brain tissue and deliver a small jolt, stimulating thousands of neurons at a time. That imprecision made gathering useful data difficult. With optogenetics, scientists can instead place light-sensitive molecules into specific brain cells and manipulate them individually — which makes it far easier to determine the role a neuron (or network of neurons) plays in behavior, emotion, or disease.

Optogenetics allows scientists to individually flick brain cells on and off with light. (Credit: Robinson Lab)

Neuroscientists all over the world have now embraced the technique. “Over the past decade hundreds of research groups have used optogenetics to learn how various networks of neurons contribute to behavior, perception, and cognition,” says Ed Boyden, professor of biological engineering at the Massachusetts Institute of Technology and co-inventor of optogenetics. “In the future optogenetics will allow us to decipher both how various brain cells elicit feelings, thoughts, and movements — as well as how they can go awry to produce various psychiatric disorders.”

Connecting the dots

By all accounts, the past 10 years have been a whirlwind of medical progress — but in order to understand how medicine might advance in the next 10 years, it’s important to understand not only how quickly these pockets of medicine have progressed individually, but also how they’re beginning to converge, coalesce, and cross-pollinate each other. All of the incredible medical advances and major shifts discussed earlier do not exist in a vacuum. They aren’t closed off from one another, or from other advances happening outside the world of medicine. Instead, many of them are merging in a highly synergistic fashion, which ultimately boosts the overall pace of medical progress even more.

The ongoing convergence of computational medicine and mobile technology is one obvious example, happening on two different scales. At a personal level, increasingly powerful processors (as well as cloud computing) allow mobile phones to complete more complex tasks — like recognizing the growth of a mole — that can be used for medical purposes. On the collective level, all of the medical data that we’re creating with our smartphones and wearable sensors can be used to unravel medical mysteries on a massive scale.

“The real revolution comes from the cloud, where we can combine all our individual data.”

“The real revolution doesn’t come from having your own secure, in-depth medical data warehouse on your smartphone,” says Topol, director of the Scripps Translational Science Institute. “It comes from the cloud, where we can combine all our individual data. When that flood of data is properly assembled, integrated, and analyzed, it will offer huge, new potential at two levels — the individual and the population as a whole. Once all our relevant data are tracked and machine-processed to spot the complex trends and interactions that no one could detect alone, we’ll be able to preempt many illnesses.”

And it’s not just smartphones and computational medicine that are converging. A myriad of fields and technologies are coming together — neuroscience, gene editing, robotics, stem cells, and 3D printing among them.

Even fields that seem only distantly related — such as DNA sequencing and neuroscience — are coming together. Just look at how we diagnose many brain disorders now. Years ago, diagnosing neurological and psychiatric disorders required expensive, invasive procedures like biopsies and spinal taps — but thanks to modern DNA sequencing techniques developed in the wake of the Human Genome Project, we can now diagnose some of those same diseases with a simple blood test. In this case, our knowledge of genetics helped advance our knowledge of neuroscience — and it’s exactly this kind of cross-pollination that’s happening more and more as various branches of medicine and technology mature.

Paying for health, not treatment

The thing is, just as all these medical and technological advances are interconnected, they’re also inextricably linked to things like politics, legislation, economics, and even tradition. Not everything moves at the breakneck pace of science and technology, so while the progression of medicine is likely to keep accelerating, it’s also important to remember that the implementation of new medical techniques won’t always occur as quickly.

One particularly big hurdle standing in the way of implementation is the current fee-for-service model used by most health care systems. Under such a system, physicians receive payment for each service they provide — be it an office visit, a test, a surgical procedure, or any other kind of health service. This model creates something of a conflict of interest, as it incentivizes providing treatments rather than keeping people healthy.

As Dr. Daniel Kraft, founding executive director and chair of Exponential Medicine at Singularity University, explains, this structural problem is effectively discouraging the shift to more technologically advanced medical practices.

“I’m a pediatrician,” he explains, “so if I make some of my money from seeing kids with ear infections, and now I can send them home with an app and a digital otoscope — but I can’t bill for that — I’m not going to be incentivized to use this newer, more effective technology.”

The Oto by CellScope uses your smartphone’s camera to peer inside the ear and send the resulting images to a doctor. (Credit: CellScope)

That’s a big problem, but certainly not one that can’t be overcome. One thing that will likely accelerate the adoption of these new tools and methods is a switch to what’s known as “value-based care.” As Kraft puts it, “Physicians in this kind of health care system would get paid to keep you healthier. Their incentive would be to keep you out of the hospital when they’ve discharged you, not to get paid to do more procedures or biopsies or prescriptions.” In a value-based health care system, he explains, “physicians and health care teams might get bonuses when patients have better blood sugar numbers, or less ER visits that were unnecessary, or their blood pressures are monitored using connected blood pressure cuffs.”

The transition from our current fee-for-service model to a value-based care system isn’t likely to happen overnight — but it is happening. A handful of large medical organizations, such as Kaiser Permanente and the Mayo Clinic, have begun to embrace this model, and the growing availability of modern health-tracking technologies is pushing the shift along.

“Data models are shifting,” says Kraft. “Ten years from now, the vast majority of health care is going to be paid for according to outcome — even some medical devices and apps and other tools will only get paid for when they’ve worked, not just because a doctor has prescribed them. If that’s part of my care, and I’m awarded for better outcomes or lower health care costs, I’m far more likely to embrace these newer, more high-tech tools.”

What’s around the corner?

So keeping in mind the exponential pace of progress in fields like gene editing, the cross-pollination of different fields, and the roadblocks keeping us from adopting new technologies as quickly as they’re progressing — what changes should we expect to see in medicine over the next 10 years?

Arguably, the most easily digestible answer to this question comes from Dr. Leroy Hood and his idea of P4 medicine, in which the four P’s stand for predictive, preventive, personalized, and participatory.

Over the course of the next decade, medicine will become increasingly predictive in nature. As more people embrace their ability to record and track health data, and as the scope of that data widens and our ability to analyze it grows ever stronger, we’ll be able to preempt a broad range of different illnesses. Today, we have an app that can tell you when a mole is at risk of becoming malignant melanoma. Tomorrow, we’ll have apps that analyze gait patterns for the early signs of multiple sclerosis, or look back at your eating habits over the past three years and let you know (with a friendly notification, of course) that you’re on track for diabetes.

“In 10 years I’m hoping that you’ll already have uploaded your recent vital signs into your electronic medical record that your medical team has access to.”

These predictive abilities, of course, are also predicated on the idea that medicine will become increasingly participatory in the next few years. As technology progresses, patients will play a more active role in their own health care, collaborating with doctors instead of just taking orders.

“In 10 years,” says Kraft, “I’m hoping that you’ll already have uploaded your recent vital signs — from your watch, or your mattress, or your blood pressure reader, or your glucose meter — into your electronic medical record that your medical team has access to. And hopefully that means your medical team doesn’t need to watch vital signs, but when something seems awry and the machine and predictive analytics sense that there’s trouble, your health care team — or digital avatar — can contact you early. I’m hoping that a lot more patients are empowered to be, if not the CEO of their own health, then at least the COO — so they’re tracking their health in smarter ways and acting more as co-pilots in their care, instead of just waiting to hear what to do.”

Ultimately, this shift to a more participatory, personalized, and predictive system of medicine will boost our ability to prevent illness from happening in the first place. If your diet-tracking wristband can sync with your smart fridge and determine that you’ve been eating foods with a high amount of sodium, your AI-powered digital health assistant might recommend dietary changes that would, in the long term, help you avoid developing heart disease years later.

It sounds funny to say, but if we continue on our current trajectory, the near future of medicine might actually be a future where we don’t need to take medicine.
