I’ve never known cable providers to fail at broadcasting live TV in all their history. M*A*S*H (not live), among many others, drew 70-100+ million viewers, and plenty of shows had 80%+ of the entire nation watching the same network without issue. I’ve never seen buffering during a Super Bowl broadcast.
Why do streaming services struggle compared to cable television when too many people watch at the same time? What’s the technical difficulty for a network that has improved over time but can’t keep up with audience numbers that live television handled decades ago?
I hate ad-based cable television, but I never had issues with it growing up. Why can’t current ‘tech’ meet the same needs we seemingly solved long ago?
Just curious about what changed in data transmission that made it more difficult for the majority of people to watch the same thing at the same time.
You can broadcast to everyone connected to a WiFi network. That’s just an Ethernet network, and there’s a broadcast address on Ethernet.
Typically, WiFi routers aren’t set up to route broadcasts elsewhere, but with the right software, like OpenWRT, a very small Linux distribution, you can bridge them to other Ethernet networks if you want.
Internet Protocol also has its own broadcast address, and in theory you can try to send a packet to everyone on the Internet (255.255.255.255), but nobody has their IP routers set up to forward that packet very far, because there’s no good reason for it and someone would try to abuse it to flood the network. But if everyone wanted to, they could.
I don’t know if it’s what you’re thinking of, but there are some projects to link together multiple WiFi access points over wireless, called a wireless mesh network. It’s generally less preferable than linking the access points with cable, but as long as all the nodes can see each other, any device on the network can talk to the others with no physical wires. I would assume that on those networks, Ethernet broadcasts and IP broadcast packets are probably set up to be forwarded to all devices. So in theory, sure.
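To be concrete about how simple the mechanics are, here’s a minimal Python sketch of sending one UDP datagram to the broadcast address. The port number and payload are made up for illustration; the rest is standard socket calls.

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The OS refuses broadcast destinations unless SO_BROADCAST is enabled first.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

# 255.255.255.255 is the "limited broadcast" address: every host on the local
# segment gets the frame, but routers won't forward it any further.
sock.sendto(b"hello, everyone on this network", ("255.255.255.255", 5005))
sock.close()
```

Any host on the same segment listening on that port will see the datagram; nothing past the first router ever will.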
The real issue with broadcast on the Internet isn’t that it’s impossible to do. It’s just that unlike with TV, there’s no reason to send a packet to everyone over a wide area. Nobody cares about that traffic, and it floods devices that don’t care about it. So normally, the most you’ll see is some kind of multicast, where devices subscribe to packets from a given sender, and the network hardware handles the one-to-many delivery, fanning the stream out along a sort of distribution tree.
You can also do multicast at the IP level today, just as long as the devices are set up for it.
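If anyone’s curious, the “subscribe” step is just a socket option. Here’s a minimal Python sketch of a multicast receiver; the group address and port (239.0.0.1:5007) are arbitrary picks from the locally scoped multicast range, not anything standard.

```python
import socket
import struct

GROUP = "239.0.0.1"   # arbitrary group in the administratively scoped range
PORT = 5007           # arbitrary port for this example

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group is the subscription: the kernel sends an IGMP report so
# switches and routers know to forward this group's traffic toward us.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, sender = sock.recvfrom(65535)
    print(sender, data)
```

The sender doesn’t need anything special: it just addresses ordinary UDP datagrams to the group, and the network fans them out to whoever has joined.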
If there were very great demand for that today, say, something like Twitch TV or another live-streaming system being 70% of Internet traffic the way BitTorrent was at one point, I expect that network operators would look into multicast options again. But as it is, I think that the real problem is that the gains just aren’t worth bothering with versus unicast.
kagis
Today, it looks like video is around that share of Internet traffic, but it’s stuff like Netflix and YouTube, which is pretty much all video on demand, not live streams. People aren’t all watching the same thing at the same time, so there’s no call for broadcast or multicast there.
If you could find something that a very high proportion of devices wanted at about the same time, like an operating system update if enough devices ran the same OS, you could maybe multicast that, with some redundant information added via forward error correction so that devices that miss a packet or two can still reconstruct the update, and unicast as a fallback for any that still need more data (see the sketch below). But as it stands, there just isn’t enough data being pushed in that form to make it worth bothering with.
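To give a flavor of the forward-error-correction idea, here’s a toy Python sketch: send N data packets plus one XOR parity packet, and any single missing packet can be rebuilt from the rest. Real schemes (Reed-Solomon, fountain codes) tolerate much more loss, but the principle is the same. The chunk contents are obviously made up.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # XOR two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

# Pretend these are four equal-sized chunks of an OS update being multicast.
chunks = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = reduce(xor_bytes, chunks)  # one extra "repair" packet

# A receiver that missed chunk 2 can XOR everything it did get, including the
# parity packet, to reproduce the missing chunk without asking for a resend.
received = chunks[:2] + chunks[3:] + [parity]
recovered = reduce(xor_bytes, received)
assert recovered == chunks[2]
```

One parity packet per group only covers a single loss, which is why real deployments use stronger codes, but it shows why a receiver with a small hole in the stream doesn’t have to fall back to unicast at all.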