To Singularity and Beyond

While one-to-many streaming may be very interesting for ensuring scale for live and linear online TV-like services, for me, looking a little beyond that, the really exciting thing is that with multicast comes the option to develop many-many multicast models.

So far, because the capability isn't really set up on many networks at all, there are very few real-world examples of many-many multicast-based applications anywhere.

But, if developers were given the opportunity to innovate in this area, I believe there are numerous possibilities that scaling multicast to many-many models could bring.

There is a widely discussed notion of the “technological singularity” - that we are, as a civilization, nearing the point where our technology overtakes us. The Wikipedia definition is robust[1]:

The technological singularity (also, simply, the singularity) is the hypothesis that the invention of artificial superintelligence will abruptly trigger runaway technological growth, resulting in unfathomable changes to human civilization. According to this hypothesis, an upgradable intelligent agent (such as a computer running software-based artificial general intelligence) would enter a “runaway reaction” of self-improvement cycles, with each new and more intelligent generation appearing more and more rapidly, causing an intelligence explosion and resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence.

Artificial intelligence (AI), upon which the notion of singularity is predicated, is really a layer-4 challenge rather than something obviously related to IP multicast in layers 2 and 3. Yet an entire evolution of application design in layer 4 (such as “singularity” would require) may itself be predicated on a change in the capabilities provided by the network on which the application runs.

Within the constraints of today's general networking, we typically have applications that unicast data point to point, and when there are many endpoints to serve, we do this in a (broadly/simplistically speaking) “round robin” unicast way. TCP can service many clients in this way, and indeed today's HTTP and streaming infrastructures - the ones that have been at the center of this book - are able to scale significantly.
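To picture what that means in the data plane, the minimal sketch below (in Python, with real connection handling and HTTP framing omitted, and with illustrative names) unicasts the same segment once per connected client, so the work and upstream bandwidth grow linearly with the audience:

```python
import socket

def serve_segment(segment: bytes, clients: list[socket.socket]) -> None:
    """Simplified "round robin" unicast: every viewer gets its own full copy
    of the same bytes, so cost scales linearly with the number of viewers."""
    for client in clients:
        client.sendall(segment)
```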

Indeed, breaking headline news video is, today, one of the most significant “scaling” challenges we face in the CDN space. And much of this book has been underpinned by that thinking, as it is broadly where we are as a streaming and CDN industry today. The obvious candidate for scaling live TV is clearly IP multicast, which is well known to be technically possible but is caught up behind a number of challenges concerning disruption of the cultures of production, network operator commercials, and rights. The technology is there, but the will to adopt it is in transition, and not entirely “there” yet.

Sad though it may sound, I often lie awake at night trying to re-scope my own sense of scale for content delivery and push myself to think of applications that may require orders of magnitude more scale than live TV. I ponder which other industries might benefit from IP multicast, not only in a one-many model but also in a many-many model.

In trying to scale out further, I lack real-world applications to model against. This is in part because I haven't yet conceived/discovered/realized, or even understood, the problems I could solve with the technology, so I am unable to articulate them, let alone define models and solutions to those problems.

In my gut I feel the most immediate candidates for next-generation application design that may benefit from IP multicast becoming more widely available are video conferencing and virtual reality: potentially significant bidirectional data flows helping large groups “meet” in VR space. These begin to suit a many-many IP multicast paradigm.

Once data can flow from all points of the network to all other points concurrently (rather than round robin), and can also follow the end-to-end principle[2] (so reducing complexity) with technologies like BIER, new - and, importantly, low-latency - models like these can be considered.
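To make the many-many pattern concrete, here is a minimal sketch using plain IP multicast sockets from Python's standard library: every peer joins the same group and can also send to it, so a single transmission reaches all current members with no round-robin unicasting. The group address, port, and TTL are purely illustrative, and BIER itself changes how routers forward such traffic rather than how an application addresses it.

```python
import socket
import struct

GROUP = "239.1.2.3"   # illustrative administratively scoped group address
PORT = 5007

def make_receiver() -> socket.socket:
    """Join the group so this peer receives whatever any member sends to it."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    rx.bind(("", PORT))
    membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    return rx

def make_sender(ttl: int = 8) -> socket.socket:
    """Any peer can also transmit to the same group: the many-many part."""
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return tx

if __name__ == "__main__":
    rx, tx = make_receiver(), make_sender()
    tx.sendto(b"hello from this peer", (GROUP, PORT))
    data, sender = rx.recvfrom(1500)   # one send, delivered to every member
    print(sender, data)
```

Run on several hosts in a multicast-enabled network, each peer sees every other peer's messages after sending its own just once; making that addressing model work across the wider Internet is precisely the capability being argued for here.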

Massively multiplayer online (MMO) gaming platforms today largely rely on predistributing the graphical content, and they share metadata about each player's deltas from a previously known state. They share only tiny amounts of “change-data” and reflect the change on the other players' machines against a “local copy” of the region of the MMO.
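As a rough illustration of how small that change-data is, here is a hypothetical player-state delta; the field names and the JSON encoding are invented for readability, and a real engine would use a far more compact binary format.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PlayerDelta:
    """Hypothetical change-data record: only what differs from the last
    shared state is sent, never the predistributed game world itself."""
    player_id: int
    tick: int          # game tick the delta applies to
    dx: float = 0.0    # movement since the last shared state
    dy: float = 0.0
    action: str = ""   # e.g. "jump" or "fire"; empty if nothing happened

def encode(delta: PlayerDelta) -> bytes:
    return json.dumps(asdict(delta)).encode()

payload = encode(PlayerDelta(player_id=42, tick=1001, dx=0.5, action="jump"))
print(len(payload))    # a few tens of bytes per update, not the world itself
```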

As IP multicast models arrive, the MMO could become a more truly “shared” space, streamed from a core where the game world is created and the end user viewpoints are streaming “windows” on that world. This could change the distribution models at one level. At another level it may change the entire game design.

Another model that I feel could emerge is in discovery and learning.

When a kid in a school hears others question the teacher, he gains from hearing the answer even without having asked the question himself.

Today, when I search Google, I “ask it a question” and “it replies” directly to me. There is no opportunity for any third party to benefit from that interaction (apart perhaps from Google itself).

With a properly scaled-up IP multicast model, there is no reason why I couldn't set up a model where, if questions are asked online about a particular topic and a response is returned, that response could be sent to multiple subscribed endpoints. If I wanted to subscribe in real time to the responses for all searches that included “BIER”, I could just watch my screen and learn as the entire community develops the construct and searches for contributions to that process.
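One back-of-the-envelope way to picture that is a topic-to-group mapping: the topic string is hashed onto a multicast group address, interested endpoints join that group, and any responder publishes answers into it. Everything below (the hash scheme, address range, and port) is invented purely to make the idea concrete; a real service would need proper group allocation, access control, and reliability on top.

```python
import hashlib
import socket
import struct

PORT = 6000

def topic_group(topic: str) -> str:
    """Illustrative mapping of a topic string onto a 239.x.x.x group address."""
    digest = hashlib.sha256(topic.lower().encode()).digest()
    return f"239.{digest[0]}.{digest[1]}.{digest[2]}"

def subscribe(topic: str) -> socket.socket:
    """Join the topic's group; every answer any publisher sends arrives here."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    rx.bind(("", PORT))
    membership = struct.pack("4sl", socket.inet_aton(topic_group(topic)),
                             socket.INADDR_ANY)
    rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    return rx

def publish(topic: str, response: bytes) -> None:
    """A search service could multicast each answer to the topic's group
    as well as unicasting it back to whoever asked."""
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
    tx.sendto(response, (topic_group(topic), PORT))

# Watching the community's progress on a topic in real time:
# rx = subscribe("BIER"); print(rx.recvfrom(1500))
```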

In a human context this seems a bit limited, and indeed bulletin boards, Twitter, RSS feeds, and the like may at first appear to offer something of this type of capability. But they are typically limited to small chunks of data, so applications using this type of messaging share data in a very limited way.

But, if the model is extrapolated somewhat, I could imagine machine-to-machine (M2M) communications benefiting extensively from this type of data-sharing model.

In a “superintelligent” system, for example, “context” and “awareness” of all points on the network could be constantly communicated to all other points by IP multicast (in a way that neither unicast nor broadcast could offer in terms of scale), allowing the system to include a far wider and, to a human, seemingly chaotic range of influences on every decision.

In some ways at id3as we already use this thinking to manage our virtual workflows and optimize deployment and traffic. Yet the possibilities of this capability are, almost by definition, a generation beyond our own ability to design applications. Still, I can see humans delivering this framework, and developers in layer 4 beginning to leverage such capability in AI systems.

So, as a parting thought, this is why I think that any realization of the technological singularity is predicated on the ubiquitous deployment of IP multicast as a “native” capability of the Internet.

  • [1] https://en.wikipedia.org/wiki/Technological_singularity
  • [2] https://en.wikipedia.org/wiki/End-to-end_principle
 