
Content Delivery Networks: Fundamentals, Design, and Evolution

VOD Workflows

Having explored the two specific examples of DVR and catch-up services, I have little more to add to explain the simplest use case of all online video: VOD. However, some comments and thoughts not yet covered may be of value.

The vast work effort in a VOD model is typically focused on the searchability and discoverability of content. Good metadata is essential for text-based searches and recommendations. Applying it is most easily done at source, but given that the elementary streams where such metadata can be stored vary greatly from one file format to another, maintaining continuity as archives are transcoded can be incredibly complex. Notably, schemes such as SCTE-35 are emerging as a strategy to try to unify some of this metadata where it is used for workflow signaling.

There are search systems that can fingerprint/hash multimedia content. For example, if you upload content that has already been fingerprinted into a platform such as YouTube, you will find that it is quickly spotted by automated processes, which can take down the content, or at least alert you that you are in breach of someone else's copyright claim to it.
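The mechanics can be sketched with a naive exact-match lookup. Real platforms use perceptual fingerprints that survive re-encoding, cropping, and format changes; the SHA-256 registry and function names below are hypothetical simplifications for illustration only:

```python
import hashlib

# Hypothetical registry mapping content fingerprints to rights holders.
copyright_registry = {}

def fingerprint(data: bytes) -> str:
    # Naive exact-match fingerprint. Production systems use perceptual
    # hashes that are robust to transcoding; SHA-256 is not.
    return hashlib.sha256(data).hexdigest()

def register(data: bytes, rights_holder: str) -> None:
    # Rights holder pre-registers their content with the platform.
    copyright_registry[fingerprint(data)] = rights_holder

def check_upload(data: bytes):
    # On upload, returns the claimant if the content is already known.
    return copyright_registry.get(fingerprint(data))

register(b"original film bytes", "Studio A")
print(check_upload(b"original film bytes"))  # matched claim: Studio A
print(check_upload(b"unrelated content"))    # no claim: None
```

An exact hash breaks as soon as the file is re-encoded, which is precisely why the real systems fingerprint the perceptual content of the audio and video rather than the bytes of any one encoding.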

Storage model

Figure 3.5 Storage model.

Digging deeper into the technology models for delivery, I like to think of a pyramid (Figure 3.5) whose base is remote storage, and whose pinnacle is the CPU and the local machine's memory.

As we work up the storage model, access time decreases, and yet typically cost increases. In a thin-client streaming device we can flatten the model to exclude OFFLINE and LOCAL HDD, and the content is copied directly from the remote HDD to the RAM, and then rendered by the CPU (or possibly by the GPU that would sit alongside the CPU in the model).
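The pyramid can be expressed as a simple ordered model. The latency figures below are illustrative orders of magnitude of my own, not values from the text, and the flattened thin-client path follows the description above:

```python
# Storage tiers from the base of the pyramid (cheap, slow access) to
# the pinnacle (expensive, fast access). Latencies are illustrative
# orders of magnitude only.
STORAGE_MODEL = [
    ("OFFLINE",    "minutes"),     # e.g. a DVD in a robotic library
    ("REMOTE HDD", "~10-100 ms"),  # origin storage across the network
    ("LOCAL HDD",  "~10 ms"),
    ("RAM",        "~100 ns"),
    ("CPU",        "~1 ns"),       # CPU (or GPU) and its local memory
]

def thin_client_path(model):
    # A thin-client streaming device flattens the model: OFFLINE and
    # LOCAL HDD are excluded, and content is copied straight from the
    # remote HDD into RAM, then rendered by the CPU/GPU.
    excluded = {"OFFLINE", "LOCAL HDD"}
    return [tier for tier, _ in model if tier not in excluded]

print(thin_client_path(STORAGE_MODEL))  # ['REMOTE HDD', 'RAM', 'CPU']
```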

Obviously the cheapest storage model is typically OFFLINE. Storage on a DVD, for example, once created, costs very little to maintain: there is no electricity involved, nor any need to maintain a system to host the DVD, unless it is mounted in a robotic retrieval system. While I was helping BT Rich Media in the early 2000s, they had a large DVD storage system with a robotic arm that could, within a few seconds, locate, pull, and bring online any one of thousands of DVDs. This was fairly practical until the volumes of on-demand content grew vast, simply because a single film may be encoded into many formats.

What typically happens today is that a "mezzanine" high-quality archive is stored on REMOTE HDD, and as it is pulled through the network, "in workflow treatments" (see Section 3.3.1) can create ephemeral transcodings of the file suited to the particular client. While this bucks the end-to-end model of classic network architecture, it proves to be an efficient balance between cost and speed. If the network caches the content for a period, and if several users all want a popular file and all request the same transcoding, it can be retrieved from the network cache rather than repeatedly pulled from the REMOTE HDD "origin." The cache can then be purged once the file is no longer popular.

Good cache management can reduce the workload, so CDNs are masters of tuning and optimizing this balance.
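As a minimal sketch of that origin-pull-and-purge pattern (the class, method names, and eviction policy here are hypothetical; real CDN caches add TTLs, tiered fills, and far more sophisticated popularity tracking):

```python
import time

class EdgeCache:
    """Hypothetical origin-pull cache that purges unpopular items."""

    def __init__(self, origin_fetch, min_hits=2, max_age=3600.0):
        self.origin_fetch = origin_fetch  # pulls from the REMOTE HDD "origin"
        self.min_hits = min_hits          # below this, an item is "unpopular"
        self.max_age = max_age            # seconds since last access
        self.store = {}                   # key -> (content, hits, last_access)

    def get(self, key):
        now = time.monotonic()
        if key in self.store:
            content, hits, _ = self.store[key]
            self.store[key] = (content, hits + 1, now)  # cache hit
            return content
        content = self.origin_fetch(key)  # cache miss: pull from origin
        self.store[key] = (content, 1, now)
        return content

    def purge_unpopular(self):
        # Evict items that are both stale and rarely requested.
        now = time.monotonic()
        for key in list(self.store):
            _, hits, last = self.store[key]
            if hits < self.min_hits and now - last > self.max_age:
                del self.store[key]

# Usage: two requests for the same transcoding hit the origin only once.
origin_calls = []
def fetch_from_origin(key):
    origin_calls.append(key)
    return f"transcoded:{key}"

cache = EdgeCache(fetch_from_origin)
cache.get("film-720p")
cache.get("film-720p")    # served from cache, no second origin pull
print(len(origin_calls))  # 1
```

The balance the CDN tunes is exactly the `min_hits`/`max_age` style trade-off: keep popular transcodings close to users, but reclaim the space once demand fades.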

While much of the recent decade has seen a move to place naturally faster SSDs into the HDD layer, what we increasingly see is a top-level trend to massively enlarge the RAM layer. Initiatives such as the Open Compute Project are working hard to enlarge the role of RAM in the ephemeral storage of increasingly large data assets.

There is a great book called Multimedia Servers[1] by Sitaram and Dan that digs into this space in considerable depth.

Up to this point, I hope I have given you a high-level range of thoughts and anecdotes that will help you think widely about your workflow architectures and plan them properly. Small differences to design can make massive differences to your operations, so take your time when developing even simple systems, particularly if you are anticipating scale.

  • [1] Sitaram, D. and Dan, A., Multimedia Servers, ISBN 1558604308.
