
Adaptive Bitrate Arrives

Despite this convergence on the H.264 codec, and the widespread success of Flash Player underpinned by both Adobe and Wowza, there were still some significant innovations to come. While the fundamental concept was initiated in 2002 as part of the DVD Forum,[1] the range of technologies collectively known as “adaptive bitrate streaming” took several years to reach mainstream adoption as common Internet streaming formats. The runaway commercial successes that have emerged are Microsoft's Smooth Streaming (“Smooth”) and Apple's HTTP Live Streaming (HLS). Adobe have tried to keep up by introducing their own HTTP Dynamic Streaming, although arguably it has not seen nearly as much adoption as either Smooth (which was first to market) or HLS (which enjoyed a strong piggyback on the success of the iOS-based iPhone and iPad, and so was effectively forced into the market as the only option for live streaming to those devices).

The principal aim of adaptive bitrate streaming technology is to allow the publisher to produce several quality levels of the same stream at the same time - perhaps one for mobile streaming at 300 kbps, one for domestic SD streaming over WiFi at 750 kbps, and one for HD streaming at 1.4 Mbps - and to synchronize the publishing of each of these streams in such a way that if a recipient decoder wishes to switch from one bitrate to another (perhaps because network conditions are varying), the client player simply requests the “next” packets from the lower or higher bitrate stream, and these are sequenced seamlessly in the player's buffer. This means that while the quality of the image may degrade or improve mid-playback, there is no transport-layer interruption of the flow of the stream, and so the changing quality is not accompanied by a break in the continuity of playback. This provides a much better quality of experience (QoE) for the viewer. This smooth transition from one bitrate to another is precisely why Microsoft named their technology Smooth Streaming.
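As an illustration only, the client-side switching logic can be sketched along the following lines. This is a minimal Python sketch, not any vendor's actual implementation: the rendition bitrates are the examples from the text, and the names (choose_rendition, play, observed_kbps) and the simple throughput heuristic are hypothetical; real players use far more sophisticated estimation and buffer management.

```python
# Minimal sketch of adaptive bitrate selection on the client side.
# Rendition bitrates follow the example in the text; all names are illustrative.

RENDITIONS_KBPS = [300, 750, 1400]  # mobile, SD over WiFi, HD

def choose_rendition(measured_throughput_kbps, safety_factor=0.8):
    """Pick the highest rendition that fits comfortably within the measured
    throughput; fall back to the lowest rendition otherwise."""
    budget = measured_throughput_kbps * safety_factor
    candidates = [r for r in RENDITIONS_KBPS if r <= budget]
    return max(candidates) if candidates else min(RENDITIONS_KBPS)

def play(segments_by_rendition, initial_throughput_kbps=500):
    """Append the 'next' segment from whichever rendition currently fits,
    so playback continues with no transport-level interruption."""
    buffer = []
    throughput = initial_throughput_kbps
    total_segments = len(next(iter(segments_by_rendition.values())))
    for index in range(total_segments):
        bitrate = choose_rendition(throughput)
        segment = segments_by_rendition[bitrate][index]
        buffer.append(segment)  # seamless splice at a segment boundary
        # update the throughput estimate from whatever the last fetch observed
        throughput = segment.get("observed_kbps", throughput)
    return buffer
```

The key point the sketch captures is that switching happens only at segment boundaries, by choosing which rendition's “next” segment to request, so the buffer never empties simply because the quality changes.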

Such technologies almost invariably require a discrete connection between each player and the server, and so invariably use TCP as the transport protocol. This comes with significant network overhead, as discussed above, and in fact the creation of a buffer to manage the bitrate switching, together with the decision-making processes involved in that switching, adds significant latency. While there were one or two very early attempts to adapt traditional media servers to support distribution of adaptive bitrate streams, the role of these servers was almost universally to act as a termination point for the source contribution feed, to “chunk” or “fragment” the different bitrates into synchronized blocks of video - usually split at the video key frames, along the lines of the MPEG “groups of pictures” (GoPs) - and then to package these blocks for delivery over HTTP.
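The server-side “chunking” step can also be sketched, with the caveat that this is a toy illustration under assumed data structures: a real packager operates on encoded MPEG transport streams or fragmented MP4, not on Python objects, and the segment naming shown in the closing comment is hypothetical.

```python
# Illustrative sketch of fragmenting a rendition into GoP-aligned segments.
# Real packagers work on encoded bitstreams; this only shows the boundary logic.

from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    timestamp: float   # presentation time in seconds
    is_keyframe: bool  # True at the start of a group of pictures (GoP)
    payload: bytes

def fragment_into_segments(frames: List[Frame]) -> List[List[Frame]]:
    """Split a rendition into segments, cutting only at key frames so that
    every rendition produces segments with identical boundaries."""
    segments: List[List[Frame]] = []
    current: List[Frame] = []
    for frame in frames:
        if frame.is_keyframe and current:
            segments.append(current)  # close the previous GoP-aligned segment
            current = []
        current.append(frame)
    if current:
        segments.append(current)
    return segments

# Each segment would then be wrapped for HTTP delivery, e.g. written out as
# /video/1400k/segment_0001.ts, /video/750k/segment_0001.ts, and so on, so a
# client can switch renditions cleanly at any segment boundary.
```

Because every rendition is cut at the same key-frame positions, the segments line up across bitrates, which is exactly what allows the player to splice segment N of one rendition after segment N-1 of another.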

Accordingly, the distribution technology of choice quickly became relatively common HTTP servers, with modifications appearing for Microsoft's IIS and Apache, among others. Again, from a purist's point of view, there were many network-optimization reasons not to use HTTP (which has, for example, no application-level flow control and simply uses all available network bandwidth to transfer any given object), but there were some critical advantages that this method of streaming introduced.
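One way to see why ordinary web servers suffice is that, to the client, each segment is just a static resource retrieved with a standard HTTP GET. The sketch below assumes a hypothetical URL pattern purely for illustration; it is not any particular packager's layout.

```python
# Sketch: fetching one GoP-aligned segment as a plain HTTP resource.
# The base URL and path pattern are hypothetical.

import urllib.request

def fetch_segment(base_url: str, bitrate_kbps: int, index: int) -> bytes:
    """Request one segment of a given rendition over plain HTTP."""
    url = f"{base_url}/{bitrate_kbps}k/segment_{index:04d}.ts"
    with urllib.request.urlopen(url) as response:
        return response.read()

# e.g. data = fetch_segment("http://example.com/video", 750, 1)
```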

  • [1] http://en.wikipedia.org/wiki/Adaptive_bitrate_streaming
 