An industry watershed was reached in the last few weeks when global video traffic finally exceeded all other types of traffic combined on mobile data networks. This trend is only set to continue as smartphones become ever more mainstream and sales of iPhones and increasingly cheaper Android devices accelerate. We can’t hide from the problem any more; we need a solution: video traffic needs to be managed.
There are a lot of solutions out there, but the difficulty lies in assessing which are the best and most cost-effective in the long term. In the Financial Times this week, O2’s and Vodafone’s plans to reallocate radio spectrum previously used for basic phone and text services to 3G smartphone data were discussed as a way to satisfy consumer demand. However, this spectral re-farming isn’t without its issues. A lot of people still have “old handsets” with only a 2G service; why should their service be affected just because operators want to ensure those with more “current handsets” get a better one? Added to this, there are regulatory issues around competition, as pointed out by 3UK and Everything Everywhere.
There are many options available for managing the congestion brought on by the surge in video traffic. Offloading data onto WiFi networks and femtocells is one, although there are not enough WiFi hotspots or femtocells in enough locations to make this viable today. Carrier adds and cell-site splitting to increase backhaul capacity, along with leased capacity, are also technically sound solutions, but they are by far the most expensive, given the cost of establishing new cells, and therefore usually impractical. Other solutions include throttling traffic in the radio access network (RAN) using near-term analytics: because the data network was congested at 5 p.m. today, it will probably be congested at the same time tomorrow, so we decide in advance to “throw away packets” at 5 p.m. tomorrow. But this is an inflexible, sledgehammer approach with no control over the quality of the resulting video streams.
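To make the sledgehammer problem concrete, here is a minimal sketch, entirely hypothetical and not any vendor’s implementation, of what schedule-based throttling amounts to: drop windows precomputed from yesterday’s congestion log, applied blindly today regardless of actual load.

```python
import random
from datetime import time

# Hypothetical drop schedule derived from yesterday's traffic analytics:
# "5-7 p.m. was congested yesterday, so assume it will be again."
CONGESTED_WINDOWS = [(time(17, 0), time(19, 0))]

def should_drop(now, drop_probability=0.3):
    """Return True if a packet should be discarded under the fixed schedule.

    Note that the decision ignores whether the cell is actually congested
    right now -- the inflexibility described above.
    """
    for start, end in CONGESTED_WINDOWS:
        if start <= now <= end:
            return random.random() < drop_probability
    return False
```

Even if today’s 5 p.m. load turns out to be light, packets are still discarded; and if congestion arrives at 4 p.m. instead, nothing is done about it.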
In contrast, there are two much more dependable, flexible, and long-term solutions available to operators today. The first is the optimisation of traffic in the core network, using intelligent caching and compression that can be varied depending on the bandwidth available. Optimisation can be both content aware and congestion aware, so it can be flexibly applied and highly tuned. Basically, optimisation only kicks in where, when, and for whom it is required, and only to the extent that it is required. This “holding back” of optimisation also makes the solution cost-effective: optimising blindly always necessitates a greater investment in optimisation hardware (CPUs).
The second is in the application of policy and pricing. This is effective in singling out the most extreme users (for example, the 1% of users responsible for 20% of data traffic), but also in reining in more moderate users, and even in serving those users who would be only too happy to pay more for a better quality of service, if only their operator gave them the option. Today, more than 20 operators around the world are testing and implementing new “tiered pricing” strategies in an effort to support the market’s need for ever more segmented data-traffic plans against declining voice-based ARPUs.
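To make the “1% of users, 20% of traffic” point concrete, here is a hypothetical sketch of how a policy engine might flag the heaviest users for tiered treatment. Real operator policy systems work on live charging rules; this toy function only illustrates the ranking idea.

```python
def flag_heavy_users(usage_mb, top_fraction=0.01):
    """Return the IDs of users in the top `top_fraction` by data usage.

    `usage_mb` maps user ID -> megabytes used this billing period.
    Illustrative only: thresholds and treatment are policy decisions.
    """
    ranked = sorted(usage_mb, key=usage_mb.get, reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    return set(ranked[:cutoff])
```

Once flagged, these users could be offered a higher tier, or have policy applied to their traffic, rather than having every subscriber’s service degraded equally.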
As more operators experience network congestion challenges (some resulting in well-publicised outages), they are coming to terms with the fact that demand is outpacing capacity. Changes need to be made, and operators need to identify now the most cost-effective long-term solutions available to them.