Today’s desktop and notebook PCs are faster than they’ve ever been, offering teraflops of computing power and churning through the most arduous personal computing tasks. However, there are times when performance requirements are measured in petaflops (quadrillions of flops, or floating-point operations per second) for the most advanced scientific and military tasks.
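To put those units in context, a petaflop is a thousand teraflops, or a quadrillion operations per second. A quick back-of-the-envelope sketch in Python (the 10-teraflop desktop figure is purely illustrative, not from the rankings):

```python
# Units of computing throughput, in floating-point operations per second.
TERAFLOP = 10**12
PETAFLOP = 10**15

# A hypothetical high-end desktop delivering 10 teraflops (illustrative figure).
desktop_flops = 10 * TERAFLOP

# A 93-petaflop supercomputer, roughly the top system on this year's list.
supercomputer_flops = 93 * PETAFLOP

# How many such desktops would it take to match the supercomputer's raw rate?
print(supercomputer_flops // desktop_flops)  # → 9300
```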
In those instances, such as running climate prediction models and virtually testing nuclear weapon designs, the computing world turns to supercomputers. These massively parallel systems with hundreds of thousands or millions of processor cores keep getting larger and more powerful, and in this year’s supercomputer race, it’s China that has taken a decisive lead with the two fastest systems, according to ExtremeTech reports.
Each year, the most powerful supercomputers on the planet are ranked by their performance on the Linpack benchmark, which measures a computing system’s ability to solve dense systems of floating-point equations like those common in engineering tasks. Today’s most powerful supercomputers are Linpack-rated at multiple petaflops of performance, with the tenth-place system coming in at 8.1 petaflops.
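At its core, Linpack times the solution of a dense system of linear equations Ax = b and converts the elapsed time into flops. A toy sketch of the idea in Python with NumPy (nothing like the heavily tuned HPL code run on real systems, but the arithmetic is the same):

```python
import time
import numpy as np

def linpack_style_gflops(n: int, seed: int = 0) -> float:
    """Time a dense solve of Ax = b and estimate gigaflops, using the
    standard Linpack operation count of (2/3)*n^3 + 2*n^2 flops."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    start = time.perf_counter()
    np.linalg.solve(A, b)
    elapsed = time.perf_counter() - start
    flops = (2 / 3) * n**3 + 2 * n**2
    return flops / elapsed / 1e9

print(f"{linpack_style_gflops(1000):.1f} GFLOPS")
```

A real TOP500 run uses the distributed HPL implementation across every node of the machine; this single-process version only illustrates what the benchmark is counting.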
This year’s number one and two systems are both of Chinese origin. The TaihuLight is located at the National Supercomputing Center in Wuxi, China, and is rated at 93.01 petaflops. Right behind it is the Tianhe-2, located at the National Super Computer Center in Guangzhou, China, and rated at 33.86 petaflops. The United States isn’t far behind, taking the next three spots and five out of the top ten.
The complete list of the 10 fastest supercomputing systems in the world is as follows:
- TaihuLight, Sunway MPP, SW26010, National Supercomputing Center, Wuxi, China — 10.6 million cores, 93.01 petaflops
- Tianhe-2, TH-IVB-FEP Cluster, National Super Computer Center in Guangzhou, China — 3.12 million cores, 33.86 petaflops
- Titan, Cray XK7 system, U.S. Department of Energy, Oak Ridge National Laboratory — 17.59 petaflops
- Sequoia, IBM BlueGene/Q system, U.S. Department of Energy, Lawrence Livermore National Lab, California — 1.57 million cores, 16.32 petaflops
- Cori, Cray XC40, Berkeley Lab, U.S. National Energy Research Scientific Computing Center (NERSC) — 14 petaflops
- Oakforest-PACS, Fujitsu Primergy CX1640 M1 cluster, Japan, Joint Center for Advanced High Performance Computing — 13.6 petaflops
- K Computer, SPARC64 system, RIKEN Advanced Institute for Computational Science, Japan — 705,000 cores, 10.5 petaflops
- Piz Daint, Cray XC30 with Xeon CPUs and Nvidia GPUs, Swiss National Supercomputing Centre, Switzerland — 116,000 cores, 9.8 petaflops
- Mira, IBM BlueGene/Q, U.S. DOE/SC/Argonne National Laboratory — 786,000 cores, 8.6 petaflops
- Trinity, Cray XC40, U.S. DOE/NNSA/LANL/SNL — 301,056 cores, 8.1 petaflops
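The figures above drop easily into a small data structure for sanity checks, such as totaling the list or computing per-core throughput. A sketch using only the numbers quoted above (entries whose core count isn’t given in the list are skipped in the per-core pass):

```python
# Top-10 figures as listed above: (name, petaflops, cores or None if not listed).
TOP10 = [
    ("TaihuLight", 93.01, 10_600_000),
    ("Tianhe-2", 33.86, 3_120_000),
    ("Titan", 17.59, None),
    ("Sequoia", 16.32, 1_570_000),
    ("Cori", 14.0, None),
    ("Oakforest-PACS", 13.6, None),
    ("K Computer", 10.5, 705_000),
    ("Piz Daint", 9.8, 116_000),
    ("Mira", 8.6, 786_000),
    ("Trinity", 8.1, 301_056),
]

total_pflops = sum(pflops for _, pflops, _ in TOP10)
print(f"Top-10 combined: {total_pflops:.2f} petaflops")  # → 225.38 petaflops

# Per-core throughput in gigaflops (petaflops * 1e15 / cores / 1e9).
for name, pflops, cores in TOP10:
    if cores:
        print(f"{name}: {pflops * 1e6 / cores:.1f} gigaflops per core")
```

One thing the numbers make obvious: raw core count isn’t everything, since per-core throughput varies widely across architectures on the list.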
The benchmarks are conducted by the TOP500 project, which maintains a list of the world’s fastest supercomputers, giving businesses and government organizations around the world pertinent statistics on where the fastest systems are located and what kind of performance they can achieve. TOP500 issued the following statement on this year’s results: “In addition to matching each other in system count in the latest rankings, China and the U.S. are running neck and neck in aggregate Linpack performance.”
Globally, the U.S. accounts for 33.9 percent of total supercomputing power, with China a close second at 33.3 percent. Added up, the world’s top 500 supercomputers provide 672 petaflops, a 60 percent increase over 2015’s list and up from 566 petaflops in June of this year.
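Those growth figures are easy to sanity-check from the numbers in the paragraph above:

```python
total_2016 = 672  # petaflops, this year's combined top-500 throughput
total_june = 566  # petaflops, June of this year

# Growth since the June list:
june_growth = (total_2016 - total_june) / total_june
print(f"{june_growth:.1%}")  # → 18.7%

# A 60 percent year-over-year rise implies last year's combined total:
implied_2015 = total_2016 / 1.60
print(f"{implied_2015:.0f} petaflops")  # → 420 petaflops
```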
Intel leads all manufacturers in providing the brains behind the machines, with IBM’s Power CPUs in second place and AMD in third at seven systems. Of the 96 systems on the list that use many-core accelerators, Nvidia GPUs are the most prominent, appearing in 60 of them, with Intel’s Xeon Phi next at 21. A total of 206 systems use Gigabit Ethernet and 187 use InfiniBand to communicate, while Intel’s new Omni-Path technology increased from 20 systems to 28 in 2016.
That’s a lot of computing power being applied to some of the world’s most complex and pressing engineering and scientific applications. With such massive year-over-year increases in total computing power, it’s possible that a self-aware artificial intelligence could emerge at some point in the near future. Skynet, here we come.