Deep learning uses algorithms inspired by the way human brains operate to put computers to work on tasks too big for organic gray matter. On Monday, IBM announced a new record for the performance of a large neural network working with a large data set.
The company’s new deep-learning software brings together more than 256 graphics processing units across 64 IBM Power systems. The speed improvements come from better communication among that array of GPUs.
Faster GPUs provide the necessary muscle to take on the kind of large scale problems today’s deep-learning systems are capable of tackling. However, the faster the components are, the more difficult it is to ensure that they are all working together as one cohesive unit.
As individual GPUs work on a particular problem, they share their learning with the other processors that make up the system. Conventional software is not capable of keeping up with the speed of current GPU technology, which means time is wasted as the GPUs wait around for one another’s results.
Hillery Hunter, IBM’s director of systems acceleration and memory, compared the situation to the well-known parable of the blind men and the elephant, in which each observer grasps only part of the whole. The company’s distributed deep-learning project has resulted in an API that developers can use in conjunction with deep-learning frameworks to scale to multiple servers, making sure that their GPUs remain synchronized.
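The article does not detail IBM’s communication scheme, but a standard technique for keeping data-parallel GPUs synchronized is all-reduce gradient averaging: every worker combines its locally computed gradients with every other worker’s before updating its copy of the model. Below is a minimal single-process simulation of the bandwidth-efficient ring variant; the worker count, gradient values, and function names are illustrative assumptions, not IBM’s implementation.

```python
import numpy as np

def ring_allreduce(grads):
    """Average per-worker gradient vectors with a simulated ring all-reduce.

    Each of the n workers splits its (1-D) gradient into n chunks. In n-1
    reduce-scatter steps, chunks travel around the ring and accumulate, so
    worker i ends up holding the full sum of chunk (i + 1) % n. In n-1
    all-gather steps, the finished sums circulate until every worker has
    every chunk. Real systems overlap these transfers across network links;
    here plain arrays stand in for GPUs.
    """
    n = len(grads)
    chunks = [list(np.array_split(g.astype(float).copy(), n)) for g in grads]

    # Reduce-scatter: each worker passes one chunk to its right neighbor,
    # which adds it to its own copy. Snapshot payloads first so all "sends"
    # in a step use the values from the start of that step.
    for step in range(n - 1):
        sends = [(i, (i - step) % n) for i in range(n)]
        payloads = [chunks[i][c].copy() for i, c in sends]
        for (i, c), payload in zip(sends, payloads):
            chunks[(i + 1) % n][c] += payload

    # All-gather: circulate the fully reduced chunks around the ring.
    for step in range(n - 1):
        sends = [(i, (i + 1 - step) % n) for i in range(n)]
        payloads = [chunks[i][c].copy() for i, c in sends]
        for (i, c), payload in zip(sends, payloads):
            chunks[(i + 1) % n][c] = payload

    # Average and reassemble each worker's full gradient.
    return [np.concatenate(w) / n for w in chunks]
```

The appeal of the ring layout is that each worker only ever talks to its neighbors, so per-worker bandwidth stays roughly constant as more GPUs are added; libraries such as NVIDIA’s NCCL use this family of algorithms in production.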
IBM recorded image recognition accuracy of 33.8 percent on a test run using 7.5 million images from the ImageNet-22K database. The previous best-published result was 29.8 percent, posted by Microsoft in October 2014 — in the past, accuracy has typically edged forward at a rate of about one percent in new implementations, so an improvement of four percent is considered a very good result.
Crucially, IBM’s system managed to achieve this in seven hours; the process that allowed Microsoft to set the previous record took 10 days to complete.
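The scale of that difference is easy to quantify: 10 days is 240 hours, so finishing in seven hours is roughly a 34x reduction in training time. As a quick check:

```python
previous_hours = 10 * 24   # Microsoft's reported run: 10 days
ibm_hours = 7              # IBM's reported run
speedup = previous_hours / ibm_hours
print(f"{speedup:.1f}x faster")  # prints "34.3x faster"
```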
“Speed and scalability, which means higher accuracy, means that we can quickly retrain an AI model after there is a new cyber-security hack or a new fraud situation,” Hunter told Digital Trends. “Waiting for days or weeks to retrain the model is not practical — so being able to train accurately and within hours makes a big difference.”
These massive improvements in speed, combined with gains in accuracy, make IBM’s distributed deep-learning software a major boon for anyone working with this technology. A technical preview of the API is available now as part of the company’s PowerAI enterprise deep-learning software.