IBM claims its new processor can detect fraud in real time

At Hot Chips, an annual conference for the semiconductor industry, IBM showed off its new Telum processor, which is powering the next generation of IBM Z systems. In addition to eight cores and a massive amount of L2 cache, the processor features a dedicated A.I. accelerator that can detect fraud in real time.

IBM Telum Processor brings deep learning inference to enterprise workloads

Fraud is on the rise. The Federal Trade Commission (FTC) received 4.7 million reports of fraud in 2020, with $3.3 billion in total losses. Telum, according to IBM, addresses this problem by providing real-time detection. IBM used credit card transactions as an example, saying that Telum can detect a fraudulent transaction before it even completes.

IBM says this could lead to “a potentially new era of prevention of fraud at scale.” Although credit card fraud is the most direct application, Telum’s onboard A.I. accelerator can handle other workloads as well. Using machine learning, it can conduct risk analysis, detect money laundering, and handle loan processing, among other things.
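To make the idea concrete, here is a minimal sketch of the kind of in-transaction scoring a fraud model performs. The feature names, weights, and threshold are hypothetical illustrations, not IBM's model; real deployments use far richer features and learned parameters.

```python
import math

# Hypothetical feature weights for a toy logistic-regression fraud model.
# These values are invented for illustration only.
WEIGHTS = {"amount_zscore": 1.8, "foreign_merchant": 0.9, "night_time": 0.4}
BIAS = -3.0

def fraud_score(features):
    """Return a probability-like fraud score via logistic regression."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def approve(features, threshold=0.5):
    """Decide in-line, before the transaction completes."""
    return fraud_score(features) < threshold

# A large, late-night purchase at a foreign merchant scores as risky;
# a small routine purchase does not.
suspicious = {"amount_zscore": 3.0, "foreign_merchant": 1.0, "night_time": 1.0}
routine = {"amount_zscore": 0.1, "foreign_merchant": 0.0, "night_time": 0.0}
print(approve(suspicious), approve(routine))  # False True
```

The point of running this on-chip rather than on a separate device is that the decision happens within the transaction's latency budget, so a declined authorization can block the fraud instead of flagging it after the fact.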

The processor itself looks like a modern chip from AMD or Intel. It features eight cores with simultaneous multithreading running at 5GHz, and each core has 32MB of private L2 cache. A ring — which IBM calls a “data highway” — connects the private cache pools, giving the processor a total of 256MB of cache. The A.I. accelerator is an offshoot of the data ring, passing information to and from the cores with minimal latency.
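The cache arithmetic works out as a trivial check, using only the figures IBM quotes:

```python
# Telum cache figures as IBM describes them.
cores = 8
l2_per_core_mb = 32  # private L2 per core

# The ring ("data highway") lets all cores see the combined pool.
total_l2_mb = cores * l2_per_core_mb
print(f"{cores} cores x {l2_per_core_mb}MB private L2 = {total_l2_mb}MB total")  # 256MB
```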

IBM says applications can scale up to 32 Telum chips, allowing for everything from credit card fraud detection to large-scale risk analysis for banks. Samsung is helping IBM build the processor, using its 7nm extreme ultraviolet process technology.

IBM Telum processor diagram.

Accelerated computing is quickly catching on, and the new Telum chip is a testament to that. Moore’s Law, the idea that chip density doubles about every two years, has waned in the last few years. Architectural improvements aren’t coming fast enough, leading companies to turn to dedicated “accelerators” that are specialized for certain types of work.

Heterogeneous architectures, as they’re called, combine multiple different types of processing units to increase performance and efficiency without moving to a new manufacturing technology. A.I. has seen the accelerator treatment before, but not quite like this. In most cases, the A.I. accelerator is a separate chip entirely.

Telum puts the A.I. accelerator on the processor itself. This allows IBM to share the cache with the accelerator and provide a low-latency interface where it can communicate directly with the CPU cores. For enterprises relying on conventional algorithms, A.I. can speed up the process; for enterprises already using A.I., Telum can reduce the computing overhead and time required.

The first Telum machines will show up in the IBM Z mainframe starting in 2022. For most, it will be an invisible transition to a higher level of computing power. If Telum marks a “new era,” as IBM suggests, it could mean much more.

Jacob Roach
Senior Staff Writer, Computing
Jacob Roach is a writer covering computing and gaming at Digital Trends. After realizing Crysis wouldn't run on a laptop, he…