I recall the delight of setting up my first live video stream on a LAN many years ago. The first frame of video showed a clear image of the top of my head, and the first thing I noticed was that when I looked up at the camera I could still see the top of my head. I then waved at the camera. It was some seconds later - with a few moments in between convinced it was not working - that I finally saw my face look up and my hand wave back.
I have covered the technical background of latency in several places earlier in the text, particularly in Section 2.1.1. I was aware of latency even as I performed my first live video webcasting tests, having for some years become used to its effect on audio webcasting. Still, there was a sense of surprise. Not least because until then I had mainly seen my own image either rendered directly to the screen from the local capture card (a process that neither requires compression nor involves much propagation delay) or, otherwise, in a mirror. So, if nothing else, I had to make a mental adjustment to this new, highly compressed and latent video of my hand waving back some moments after the event had occurred.
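The cumulative delay behind that slow-motion wave can be illustrated with a back-of-envelope latency budget. Every figure below is a hypothetical assumption for a simple webcam stream, not a measurement from that early test; the point is simply that the receiver-side buffer usually dwarfs everything else:

```shell
# Hypothetical glass-to-glass latency budget (all figures assumed, in ms)
capture=50      # camera capture and frame delivery
encode=300      # software compression of the frame
network=150     # propagation and queuing across the network
buffer=2000     # receiver-side jitter/playout buffer
decode=100      # decompression
render=50       # display pipeline
echo "glass-to-glass: $((capture + encode + network + buffer + decode + render)) ms"
```

With these assumed numbers the total comes to well over two seconds - long enough to wonder whether the stream is working at all.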
Normally, seeing yourself on camera provides instant feedback on video latency. However, until the advent of webcasting it was unusual for the general public to see themselves live on a transmission network of any kind, other than perhaps on the CCTV cameras in a shopping center. Traditional broadcast networks - which, of course, also suffer latency - were expensive, and live broadcasts were involved and complex things to set up, far out of reach of the individual or interested layperson. Webcasting has changed that, but until it happened it was rare to see yourself waving back from a TV screen, and for this reason most people were largely unaware of network latency as an effect.
Webcasting, with its increased need to mitigate errors in the network, accentuates latency compared with the relatively controlled network latencies found in private broadcast networks. As the general public has become able to watch live sporting and other events on laptops and traditional broadcast networks at the same time, latency has garnered more awareness.
Because people watching a TV signal often have no other frame of reference, they naturally assume that the moment they see something on live TV is the moment it is actually happening. Yes, the sting (highlighting the possibility of wire fraud in gambling) had been in the public consciousness, but only vaguely. In fact, betting systems always shut the gate early enough that even those watching on broadcast links cannot place bets anywhere near the time the race results are known.
This segues nicely into the fact that, as in betting applications, in some finance applications broadcast latency is a key performance indicator for live streaming publishers, because it can have a real financial impact on the business for which the streaming is delivered.
In sports streaming, the issue of latency is considerably less critical. Yes, it is possible that your neighbors, watching on a traditional broadcast, may cheer a goal a few seconds before you see it on your tablet or smartphone, but you are probably aware of that before you click start on the stream. In pragmatic reality it is not going to make you rush back to a broadcast network subscription. You are more likely to close your windows, wander a little farther from your neighbors' windows, or do the social thing and go round to their house with a few beers to watch the game ...!
Obviously that does not mean that the online publisher should not demand low latency from their service providers where it is available. To this end there is currently much interest in WebRTC, which, as its full name (Web Real-Time Communication) suggests, is designed for low-latency video communications. As a mass-market proposition WebRTC is not today designed to scale to large audiences: it was designed for individuals and small groups to video chat. At id3as (and, I am sure, at many other companies), we are working on technology solutions that can leverage WebRTC in the workflow to minimize latency where possible.
So my current suggestion is to explore WebRTC where possible, particularly as the lowest-latency streaming technology - RTMP - is rapidly being deprecated as HTML5 video tags and browsers' own video players replace RTMP-capable Flash players. RTMP may last some years more in the contribution feed market, since Flash per se is not required to generate RTMP, and it is worth noting that Facebook and YouTube both use RTMP for contributions to their live services. Presumably this is because plug-ins and encoding apps for RTMP are reliable and simple to implement, and RTMP typically provides latency of around 2 seconds.
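As a concrete illustration of how simple RTMP contribution is - which goes some way to explaining its persistence - a typical push of the kind those live services accept can be sketched with ffmpeg. The ingest URL and stream key below are placeholders, and the encoder settings are just one reasonable set of assumptions for a low-latency contribution feed:

```shell
# Hypothetical RTMP contribution push (ingest URL and stream key are placeholders)
ffmpeg -re -i input.mp4 \
  -c:v libx264 -preset veryfast -tune zerolatency -g 60 \
  -c:a aac -b:a 128k \
  -f flv rtmp://ingest.example.com/live/STREAM_KEY
```

The `-re` flag reads the source at its native frame rate to simulate a live feed, and `-f flv` wraps the output in the container RTMP expects.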
However, these publishers then transmux and transcode the content into a variety of ABR formats to reach the myriad devices their audiences use. Over time that ingest will likely migrate to WebRTC, since the encoding capability can be delivered without a plug-in or external app, increasing the ease with which this user-generated content can be contributed.
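The transmuxing step in particular is cheap, because it rewraps the compressed stream without re-encoding it. A minimal sketch, again with placeholder names rather than any publisher's real workflow, pulls an RTMP ingest and repackages it as HLS:

```shell
# Hypothetical transmux: rewrap an RTMP feed as HLS without re-encoding (-c copy)
ffmpeg -i rtmp://ingest.example.com/live/STREAM_KEY \
  -c copy \
  -f hls -hls_time 6 -hls_list_size 5 \
  /var/www/hls/stream.m3u8
```

Note that the segmenting itself adds latency: with 6-second segments and a player that buffers several segments before starting, the viewer can easily end up tens of seconds behind the contribution feed.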