If you pay even passing attention to computer news, you may have noticed in recent years that semiconductor giant AMD has been heavily pushing the term “APU.” If you are not into the nitty-gritty of microarchitecture design, you may wonder what exactly an APU is, and how it relates to the similar term CPU.
The core concept of an APU is blessedly simple: an APU (Accelerated Processing Unit) combines the functionality of a CPU and a GPU, two pieces of hardware found in basically any computer.
The CPU (Central Processing Unit), commonly called a processor, carries out all the instructions for computer programs. Every action you take on your computer, whether running a game or simply typing a letter, must go through the CPU.
The GPU (Graphics Processing Unit) is a piece of hardware that allows a computer to render images quickly. Creating 3D graphics involves demanding work such as rendering polygons, mapping textures, and computing the equations behind animation. Offloading this work to dedicated hardware speeds up rendering and frees the CPU for other tasks.
What’s the point of combining the two? Because the CPU and GPU sit on the same die, an APU can move data between them faster than a discrete setup can, allowing them to share the burden of processing tasks. Integration also lets the APU complete the same work while drawing less power than a separate CPU and GPU would. Finally, it guarantees a certain baseline of graphical capability, which improves the overall user experience.
So is AMD the only company making these? Not exactly. Its biggest rival, Intel, also incorporates GPUs on the processor die, and has done so for several years. Intel chooses not to call its products APUs, however, likely because that term is so closely associated with AMD. Most of Intel’s recent architectures, such as Ivy Bridge and Haswell, combine CPU and GPU functionality on a single die. Essentially, nearly any modern consumer processor you purchase these days will be an APU, even if it doesn’t bear the name.
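If you are curious whether your own machine has integrated graphics, Linux’s `lspci` command lists every display controller. Below is a minimal Python sketch of one way to spot the integrated GPU in that output; the sample lines are illustrative (not from any specific machine), and the bus-number heuristic is an assumption based on the common convention that on-die devices appear on PCI bus 00.

```python
# Hypothetical sketch: classify display adapters from lspci-style output.
# The sample lines below are illustrative, not captured from a real system.
SAMPLE_LSPCI = [
    "00:02.0 VGA compatible controller: Intel Corporation HD Graphics 4600",
    "01:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 770]",
]

def integrated_gpus(lspci_lines):
    """Return VGA controller lines that sit on PCI bus 00.

    Heuristic assumption: devices built into the processor or chipset
    usually enumerate on bus 00, while discrete cards get a higher bus
    number (e.g. 01:00.0). This is a common convention, not a guarantee.
    """
    return [line for line in lspci_lines
            if "VGA" in line and line.startswith("00:")]

for gpu in integrated_gpus(SAMPLE_LSPCI):
    print(gpu)
```

On a real machine you would feed this function the output of `lspci` itself; here the Intel HD Graphics line is flagged as integrated while the discrete GeForce card on bus 01 is not.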