Global Internet backbone operators are responding to traffic growth, which has reached triple-digit annual rates in many developing countries, by deploying staggering amounts of new capacity, research firm TeleGeography says.
Since 2007, international Internet bandwidth has increased 64 percent, according to TeleGeography, as backbone operators upgraded their networks to handle rapidly growing traffic volumes. In 2009 alone, network operators added 9.4 Tbps of new capacity, more than the 8.7 Tbps that existed in total just two years earlier.
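The scale of those TeleGeography figures is easiest to see in a quick back-of-the-envelope comparison (the numbers are taken from the article; the comparison itself is just arithmetic):

```python
# Figures reported by TeleGeography (as cited in the article).
base_2007 = 8.7    # Tbps of international capacity in existence in 2007
added_2009 = 9.4   # Tbps of new capacity added in 2009 alone

# The capacity added in a single year exceeds the entire 2007 installed base.
surplus = added_2009 - base_2007
print(f"2009 additions exceed the 2007 base by {surplus:.1f} Tbps")
```

In other words, operators built more capacity in one year than existed on the entire planet two years before.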
Dennis Quan, IBM director of autonomic computing, discussed the primary issues that arise in working with clients, such as how to properly secure and manage cloud environments, and said IBM often directs clients toward considering a private cloud solution. “In a lot of the private cloud solutions, which take advantage of virtualization and transformation of existing data center assets, they are often able to get a lot more capacity out of the hardware they already have to be able to fulfill the demands that they have currently as well as the demand increases they are predicting.”
According to Quan, on the private cloud side IBM has heard little about capacity issues per se. “But … in the public cloud space there’s a need to have more and more data centers that are relatively local to the clients that make use of public cloud services,” Quan says. “And this is due to basic laws of physics and the speed of light that you can only have latencies reduced to a certain point when a data center is a certain distance from the client.”
He thinks cloud providers will face pressure to offer computing facilities that are relatively local to clients, at least in the short term. “I don’t think in terms of the aggregate number of servers across the globe that we’re predicting any unusual spike in growth for servers or for storage,” Quan says. “I would say individual clients may observe the need to grow their capacity up or down and certainly they’re going to need to look at a combination of public and private cloud options. But in aggregates I don’t think we’re expecting say an order of magnitude change to happen overnight.”
One area where capacity growth is probably needed is in networks, Quan says, as there’s been a lot of discussion around the capacity of 3G networks, “but I think providers such as IBM are equipped to satisfy those needs.”
Meanwhile, Omar Sultan, Cisco Systems senior manager for data center solutions, maintains that utilization problems stem from the fact that application usage is naturally peaky; very few systems are used at a consistent level all day long. “For example, email sees a usage peak in the morning as folks come in and check their email and get their day kicked off,” he says. “Similarly, back-office systems see peaks at the end of the month as businesses close out the books and sales people try and get orders booked. So, typically, organizations design their infrastructure for those peak-usage scenarios and the rest of the time, the capacity is wasted because there has not been an easy way to share capacity across applications.”
Sultan points to a couple of trends that help address this. The first is virtualization. With virtualization, applications are no longer tied to dedicated hardware; instead, they run in a virtualized environment where data center resources (network, compute, storage) can be allocated based on actual need. “For example, instead of running five applications on five different servers, based on the actual workload, we may be able to run all those applications on a single server,” he says. “This has great potential to reduce the demand for infrastructure in the data center. One of the advantages of virtualization is that if one of those applications suddenly sees a spike in load, we can allocate extra resources to it on the fly or even move it to its own server for the duration of the spike.”
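Sultan's five-applications-on-one-server example can be sketched as a simple first-fit packing of workloads onto servers. The utilization figures below are hypothetical, and a real placement would be made by a hypervisor or orchestrator, but the sketch shows why lightly loaded applications consolidate so well:

```python
# First-fit consolidation sketch: pack application workloads (expressed as
# fractions of one server's capacity) onto as few servers as possible.
# Utilization figures are hypothetical illustrations, not measured data.
def consolidate(workloads, capacity=1.0):
    servers = []                      # each entry is one server's current load
    for load in workloads:
        for i, used in enumerate(servers):
            if used + load <= capacity:
                servers[i] += load    # fits on an existing server
                break
        else:
            servers.append(load)      # no room anywhere: start a new server

    return servers

# Five lightly loaded applications that once occupied five dedicated servers:
apps = [0.15, 0.20, 0.10, 0.25, 0.18]
placement = consolidate(apps)
print(f"{len(apps)} apps fit on {len(placement)} server(s)")  # 5 apps, 1 server
```

Because the combined load is well under one server's capacity, all five applications fit on a single machine, which is exactly the consolidation effect Sultan describes.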
The other trend is around cloud computing.
“With cloud computing, one of the things we can have is access to capacity on demand,” he says. “If we see a usage spike in our data center, we can either ‘borrow’ resources from a cloud provider or even ship an application out to a cloud provider to run (all on the fly). This allows us to start sizing our data centers for typical usage scenarios instead of worst-case peak-usage scenarios, because we know we can grab capacity on the fly from a contracted cloud provider.”
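Sultan's sizing argument can be illustrated with a toy calculation: provision the in-house data center for typical load and assume anything above that level is "burst" to a contracted cloud provider. All numbers below are hypothetical:

```python
# Toy cloud-bursting sizing exercise. Hourly loads (hypothetical server
# counts over one day) peak far above the typical level.
hourly_load = [20, 18, 15, 14, 14, 16, 25, 40, 55, 60, 58, 50,
               45, 44, 46, 48, 50, 42, 35, 30, 28, 25, 22, 21]

peak = max(hourly_load)                               # worst-case sizing
typical = sorted(hourly_load)[len(hourly_load) // 2]  # median load

# Capacity "borrowed" from the cloud provider, in server-hours per day.
burst = sum(max(h - typical, 0) for h in hourly_load)

print(f"peak sizing: {peak} servers; typical sizing: {typical} servers")
print(f"server-hours burst to the cloud per day: {burst}")
```

Sizing for the median instead of the peak cuts the in-house footprint roughly in half in this example, with the spill-over handled by on-demand cloud capacity, which is the trade-off Sultan describes.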
A new report from Nemertes Research, “The Internet Singularity, Delayed: Why Limits in Internet Capacity Will Stifle Innovation on the Web,” has attracted much attention. The research projects that Internet traffic will roughly double annually over the next few years.
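An annual doubling is compound growth: after n years, traffic is 2**n times today's level. A minimal sketch (the starting volume is normalized and arbitrary):

```python
# Compound growth under the Nemertes projection: traffic doubles each year,
# so after n years it is 2**n times the starting level (normalized to 1 here).
def projected_traffic(start, years):
    return start * 2 ** years

for n in range(6):
    print(f"year {n}: {projected_traffic(1, n)}x today's traffic")
# 1x, 2x, 4x, 8x, 16x, 32x
```

Five years of doubling means a 32-fold increase, which is why the report argues capacity build-out cannot keep pace.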
Johna Till Johnson, president and senior founding partner at Nemertes Research (www.nemertes.com), says in a statement that users could increasingly encounter Internet ‘brownouts’ or interruptions to applications they’ve become accustomed to using on the Internet.
She gave the example of YouTube no longer distributing videos to certain parts of the world because doing so isn’t sustainable where the bandwidth isn’t available. “Increasing capacity isn’t a matter of flipping some switches and routers, it involves human labor,” she says. “So you have to have people in trucks, digging ditches, climbing towers and trees.”
The growth in demand for bandwidth appears to be outpacing the growth in supply. It may come down to which new providers emerge to match demand with investment. Watch for an update on DMB in the near future.
Barbara Gengler spent more than a decade covering the Silicon Valley high-tech market before moving to the East Coast. She previously worked for trade publications and for print and online magazines.