"Service Velocity" and the Operator
Service velocity is explored in depth in a recent StreamingMedia.com article I wrote, which is included in Chapter 5. However, to conclude this context and orientation section, I want to stress that all these technical solutions will succeed only where they address the commercial strategies of the operators who deploy them. For this reason, service velocity is key to understanding why one should adopt the techniques I have been advocating above.
Essentially service velocity refers to the speed with which a new service can be provisioned across an operator network in response to either a customer or a business requirement to innovate and bring something new to market.
In the traditional Gen1, appliance-led technology model, service velocity could be measured as the time taken to order and supply the appliances, to train installers to install them, to test the appliances, and to activate the service. In an extreme example, a satellite operator may measure its service velocity in years, or possibly even decades. Planning for such rollouts has to be meticulous, since once a rocket is launched, there is little chance to change the satellite's design!
As Gen2 arrived, it was assumed that a Gen1 network of routers and servers based on IP and COTS hardware would still be in place, but from that stage it became possible to commission infrastructure within minutes and deploy services in the time it took to distribute a virtual machine to the commissioned servers. If "hot spares" were set up in a redundant mode, then failover for disaster recovery became possible, which meant that SaaS operators could deploy new services to customers, or add new services to their offering, relatively quickly. Typically the business continued to plan and execute much as before, but without needing to wait for physical installation every time a new service was introduced. As a result, SaaS operators could measure their service velocity in days or hours. (An interesting legacy of this is that Amazon EC2 still typically measures its IaaS service utilization by the hour.)
Gen3 shrank the size of the virtual processes that deliver the services once again, and this means that complete networks can be delivered "just in time."
Indeed, it is now possible to instantiate a service in response to a request. For example, a user could request a chunk of HTTP-delivered video from a server that does not exist at the time the request is made, yet that HTTP service can be deployed and respond to the user without the user being aware. This is a heady concept and leads to all sorts of conjecture about the future of computing as a whole; more importantly, though, it means that service velocity in a Gen3 world can be measured in milliseconds. That makes it possible to always say yes to clients, to provide disaster recovery on the fly, and to scale or, more interestingly, to move entire SaaS platforms "on the fly" while millions of clients may be using the service.
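The idea of a service that does not exist until it is requested can be sketched in a few lines. This is a minimal illustration only, with invented names (`ChunkService`, `Orchestrator`); a real Gen3 system would be spinning up containers or unikernels rather than Python objects, but the shape of the interaction is the same: the orchestrator deploys the service inside the request/response cycle, so the client never notices.

```python
import time


class ChunkService:
    """Hypothetical stand-in for a containerised HTTP video-chunk service."""

    def __init__(self, name: str):
        self.name = name
        self.started_at = time.monotonic()  # when this instance came into being

    def serve(self, chunk_id: int) -> str:
        return f"{self.name}: bytes of chunk {chunk_id}"


class Orchestrator:
    """Instantiates services lazily, on first request ("just in time")."""

    def __init__(self):
        self.instances = {}  # service name -> running instance

    def handle_request(self, service_name: str, chunk_id: int) -> str:
        # The service may not exist at the moment the request arrives...
        if service_name not in self.instances:
            # ...so deploy it now; in a Gen3 system this spin-up is fast
            # enough to hide inside a single request/response cycle.
            self.instances[service_name] = ChunkService(service_name)
        return self.instances[service_name].serve(chunk_id)


orch = Orchestrator()
print(orch.handle_request("edge-video", 1))  # instance created on first request
print(orch.handle_request("edge-video", 2))  # reused thereafter
```

The same lazy-instantiation pattern extends naturally to tearing services down when idle, which is what lets a Gen3 operator bill and scale in milliseconds rather than hours.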
It is through this architecture that my company ensured continuity for Nasdaq while delivering hundreds of live financial news broadcasts online through a well-known public cloud infrastructure, even when that cloud failed and all the other Gen2 operators using it suffered a significant outage. We automatically and instantly relocated the service orchestration to an entirely different part of the cloud, and did so between chunks of video. Indeed, we only discovered the outage when we saw it reported in the news: we did not receive a single support call.
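The reason a relocation "between chunks of video" can be invisible is that chunked HTTP delivery gives the player a natural retry point every few seconds. The sketch below assumes invented origin names and a simulated outage; it is not the actual failover logic described above, just the underlying principle: if one origin stops answering, the next chunk is simply fetched from another, and the viewer only ever sees chunks arriving.

```python
def fetch_chunk(origins, chunk_id, fetch):
    """Try each origin in turn; fail over transparently between chunks."""
    last_error = None
    for origin in origins:
        try:
            return fetch(origin, chunk_id)
        except ConnectionError as err:
            last_error = err  # this origin is down: move on before the next chunk
    raise last_error  # only reached if every origin is unreachable


def fake_fetch(origin, chunk_id):
    """Simulated origin: the primary cloud region goes dark mid-stream."""
    if origin == "cloud-east" and chunk_id >= 3:
        raise ConnectionError("region outage")
    return f"{origin}/chunk-{chunk_id}"


origins = ["cloud-east", "cloud-west"]
playlist = [fetch_chunk(origins, i, fake_fetch) for i in range(1, 6)]
# Chunks 1-2 come from cloud-east; from chunk 3 on, cloud-west takes over
# without the playback loop ever seeing an error.
```

In a real deployment the "fetch" step would be an HTTP GET against a manifest of candidate origins, but the key property is the same: the failure domain is one chunk, not the whole stream.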
Service velocity obviously changes the competitive landscape: using the right technology for the task at hand means that small, agile companies can deliver service levels and times to market that have traditionally been the preserve of very large, capital-rich companies. This increases the pace of innovation significantly and will continue to transform not only the content delivery market but many other sectors as well.