I found the multicast registry here:

https://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml

I already knew that addresses between 224.0.0.0 and 239.255.255.255 (the 224.0.0.0/4 block) are reserved for multicast.
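For a sense of scale, the size of that reserved block can be checked with Python's stdlib `ipaddress` module:

```python
import ipaddress

# The full multicast block is 224.0.0.0/4: everything from 224.x.x.x through 239.x.x.x.
mcast = ipaddress.ip_network("224.0.0.0/4")

assert ipaddress.ip_address("224.0.0.1") in mcast
assert ipaddress.ip_address("239.255.255.255") in mcast
assert mcast.num_addresses == 2**28   # 268,435,456 addresses, a sixteenth of IPv4
```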

Obviously multicast could be immensely useful if it were usable by the general public: it would obsolete much of Facebook and YouTube and nearly all CDNs (content delivery networks), would kill Cloudflare and company’s business model, and would rearrange the internet, with far-reaching social implications.

So, why haven’t all these multicast addresses been converted into usable private IPv4 unicast address space?

  • interdimensionalmeme@lemmy.mlOP
    6 months ago

    I don’t think there is a technical issue or any kind of complexity at issue here; the problem seems trivial, even though I haven’t worked out the details. It is moot since it’s broken on purpose to preserve “They’s” business model.

    And “They” means the operators of the internet backbone, the CDNs, and the ISPs.

    There are protocols for dealing with what you’re asking about. Since multicast is a dead (murdered) technology, I can’t tell you exactly what does what, but here they are:

    IGMP (Internet Group Management Protocol)
    MLD (Multicast Listener Discovery)
    PIM (Protocol Independent Multicast)
    DVMRP (Distance Vector Multicast Routing Protocol)
    MOSPF (Multicast OSPF)
    MSDP (Multicast Source Discovery Protocol)
    BSR (Bootstrap Router)
    Auto-RP (Automatic Rendezvous Point)
    MBGP (Multiprotocol BGP)
    MADCAP (Multicast Address Dynamic Client Allocation Protocol)
    GLOP Addressing
    ALM (Application-Layer Multicast)
    AMT (Automatic Multicast Tunneling)
    SSMPing
    MRD (Multicast Router Discovery)
    CBT (Core-Based Trees)
    mVPN (Multicast VPN)
    

    There would be many more, of course: things that specifically resolve the unexpected issues that might arise from “actually existing multicast” in the hands of “the public”, something which has never happened.
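    Worth noting that the sending side needs none of those protocols; any ordinary UDP socket can already transmit to a group address. A minimal sketch, assuming a made-up group address and port in the administratively-scoped 239/8 range:

```python
import socket
import struct

MCAST_GROUP = "239.1.2.3"   # hypothetical group address, for illustration only
MCAST_PORT = 5007

def make_sender(ttl: int = 1) -> socket.socket:
    """UDP socket configured for multicast sending. The multicast TTL bounds
    how many router hops a datagram may cross (1 = stay on the local subnet)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                    struct.pack("b", ttl))
    return sock

# usage (needs a multicast-capable route to actually leave the host):
#   with make_sender(ttl=8) as s:
#       s.sendto(b"hello, group", (MCAST_GROUP, MCAST_PORT))
```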

    While I agree that P2P is the next best thing and torrents are pretty awesome, they are unicast, and ultimately they waste far more resources, especially intercontinental bandwidth, than multicast would.

    Also, the open multicast protocols that might have developed, if ISPs hadn’t ruined multicast for everyone, would have steered the whole internet toward a “standard solution”, in the same way that we all use the “same email system”. There would be one way that was “the way” of doing this one-to-many communication.

    To specifically answer your question: as far as routers are concerned, whenever a packet arrives, the router has to decide WHICH of its WAN ports the packet needs to go out of, or whether the packet needs to be dropped.

    From the point of view of the router, the whole internet is divided up among the WAN ports it has, and it sends the packet down the port with the shortest path to the destination host.

    Multicast is a lot like that; the main difference is that the router MIGHT send the packet out more than one port.

    I think the solution is that receivers wishing to receive the multicast packets sent to a particular address (and port), from a particular source host, would subscribe to it. The literature mentions “multicast group subscription”; I’m pretty sure that’s exactly what it’s for.

    I think what this does is add branches to the routing table for the subscribed addresses in the multicast range. That tells the router about hosts that have become part of the multicast group. I’m not sure whether this is supposed to inform every router on the whole internet, or just the routers between the source and the destination hosts, but it gives the routers that need to know where to send those packets, pretty much in the same way as unicast, except with multiple destinations as specified in the multicast group’s subscriber list.
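    For what it’s worth, that subscription step is exactly what the ordinary socket API already exposes to applications. A minimal sketch of a receiver joining a group (the group address and port are made up; the `ip_mreq` layout is the standard two-address, 8-byte one):

```python
import socket

def membership_request(group: str, iface: str = "0.0.0.0") -> bytes:
    """Pack a struct ip_mreq: the group to join plus the local interface
    address (0.0.0.0 lets the OS choose). Two IPv4 addresses, 8 bytes."""
    return socket.inet_aton(group) + socket.inet_aton(iface)

def make_receiver(group: str, port: int) -> socket.socket:
    """Bind a UDP socket and join `group`. The IP_ADD_MEMBERSHIP option is
    what makes the kernel send an IGMP membership report, telling the
    nearest multicast router to start forwarding that group down this link."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    membership_request(group))
    return sock

# usage (needs a multicast-capable interface):
#   with make_receiver("239.1.2.3", 5007) as r:
#       data, sender = r.recvfrom(65535)
```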

    It’s really just unicast with extra steps, and not that many more steps, and those have all been baked into L3 switch silicon for decades. These protocols were designed to run on computers from the 1980s; I don’t believe for a minute that we can’t handle that today.

    • chaospatterns@lemmy.world
      6 months ago

      I don’t think there is a technical issue or any kind of complexity at issue here; the problem seems trivial, even though I haven’t worked out the details. It is moot since it’s broken on purpose to preserve “They’s” business model.

      I’m explaining what the technical problems are with your idea. It seems like you don’t fully understand the technical details of these networking protocols, and that’s okay, but I’ve summarized a few non-trivial technical problems that aren’t just people keeping multicast from being used. I assure you, if multicast worked, big tech would want to use it. For example, Netflix would want to use it to distribute content to their CDN boxes and save tons of bandwidth.

    • TauZero@mander.xyz
      6 months ago

      While I agree that P2P is the next best thing and torrents are pretty awesome, they are unicast, and ultimately they waste far more resources, especially intercontinental bandwidth, than multicast would.

      Tell me if I understand the use case correctly here. I want to livestream to my 1000 viewers but don’t want to go through CDNs and gatekeepers like Twitch. I want to do it from my phone, as I am entitled to by the spirit of free internet and democratization of information, but I obviously do not have enough bandwidth for 1000 unicast video streams. If only I had ability to use multicast, I could send a single video stream with multicast up my cellular connection, and at each internet backbone router it would get duplicated and split as many times as necessary to reach all my 1000 subscribers. My 100 viewers in Japan are served by a single stream in the trans-Pacific backbone that gets split once it touches land, is that all correct?

      In that case, torrent/peertube-like technology gets you almost all of the way there! As long as my upload ratio is greater than 1 (say I push the bandwidth equivalent of TWO video streams up my cellular), and each of my two initial viewers (using their own phones or tablets or whatever devices that can communicate with each other equally well across the global internet without any SERVERS, CDNS, or MIDDLEMEN in between, using IPv6 as God intended) pushes it to two more, and so on, then within 10 hops and 1 second of latency, all 1000 of my viewers can see my stream. Within 2 seconds, a million could see me in theory, with zero additional bandwidth required on my part, right? In terms of global bandwidth resource usage, we are already within a factor of two of the ideal case of working multicast!
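      The arithmetic above can be sketched as a geometric series, assuming every viewer re-shares to the same number of peers as the uploader:

```python
def viewers_reached(fanout: int, rounds: int) -> int:
    """Total viewers reached when the uploader sends `fanout` copies and
    every viewer re-shares `fanout` copies, after `rounds` relay rounds."""
    return sum(fanout ** k for k in range(1, rounds + 1))

# With a fanout of 2, ten relay rounds cover the 1000 viewers:
assert viewers_reached(2, 10) == 2046        # 2 + 4 + ... + 1024
# and twenty rounds cover well over a million:
assert viewers_reached(2, 20) == 2_097_150
```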

      It is true that my 100 peertube subscribers in Japan could be triggering my video stream to be sent through the intercontinental pipe multiple times (and even back again!), but this is only so because the peertube protocol is not yet geographic-aware! (Or maybe it already is?) Have you considered adding geographic awareness to peertube instead? Then only one viewer in Japan would receive my stream, and then pyramid-share it with all the other Japanese viewers.

      P2P, IPv6, and geographic awareness are things you can pursue right now, and it gets you within better than a factor of 2 of the ideal multicast dream! Is a factor of 2 an acceptable rate of resource waste? And you can implement it all on your own, without requiring every single internet backbone provider and ISP to cooperate with you and upgrade their router hardware to support multicast. AND you get all the other features of peertube, like say being able to watch a video that is NOT a livestream. Or being able to read a comment that was posted while your device was powered off.

      Also, I am intrigued by the great concern you give for intercontinental bandwidth usage, considering those pipes are owned by the same types of big for-profit companies as the walled-garden social networks and CDNs that are so distasteful. From the other end, the reason why geographic awareness has not already been implemented in bittorrent and most other P2P protocols is precisely because bandwidth has been so plentiful. I can easily go to any website in Japan, play video games with the Chinese, or upload Linux images to the Europeans, without worrying about all the peering arrangements in between. If you are Netflix you have to deal with it, pay for peering, and build out local CDN boxes, but as a P2P user I’ve never had to think about it. Maybe if 1-to-millions torrent-based server-less livestreaming from your phone were to become popular, the intercontinental pipe owners might start complaining, but for now the internet just works.

      • interdimensionalmeme@lemmy.mlOP
        6 months ago

        single stream in the trans-Pacific backbone that gets split once it touches land, is that all correct?

        Yep, that is exactly it

        In that case, torrent/peertube-like technology gets you almost all of the way there!

        I am also excited about peer-to-peer technology. However, P2P unicast remains a store-and-forward technology: under the best of conditions we’re looking at at least 10 milliseconds of latency per hop, and of course a doubling of the total network bandwidth used per node, as each one both receives and sends at least once. Still very exciting stuff that I wish were further along than it is, but this isn’t the “multicast dream” as such, which does not use “Zuck’s computer”, by which I mean it does not use the cloud, which is “someone else’s computer”. We can imagine a glorious benevolent P2P swarm that understands that its own participation is both a personal and a public good, that warm and fuzzy feeling of a torrent with a 10-to-1 seeding ratio. But we’re still using “someone else’s computer” … at least “we’re” using “our computer”, and that’s the royal “we”. Multicast is all switch, no server; all juice, no seed.

        that can communicate with each other equally well across the global internet without any SERVERS, CDNS, or MIDDLEMEN in between, using IPv6 as God intended

        Yes, well, each node is a server and a middleman, but they’re “our” guys, I guess. Of course, in the real world we’ve now got NAT, firewalls, STUN/TURN/ICE, blocked ports, port forwarding, all that jazz that used to put a serious strain on my router and might end up killing “our” phones’ batteries. Plus, with P2P on cellular your bandwidth is ratioed, and some scummy ISPs do not treat traffic the same way up and down. We’re starting to accumulate quite a lot of asterisks here.

        we are already within a factor of two of the ideal case of working multicast!

        Ah, no. In this case the total bandwidth use has massively increased: those users aren’t communicating with multicast efficiency, they are point-to-point, and those streams run through the same backbones hundreds of times, coming AND going. The sender does not have to carry that load, but the internet is now a MUCH more congested place because of the lack of multicast.

        only so because the peertube protocol is not yet geographic-aware

        I don’t know enough about peertube to answer that. I suspect it’s best-effort, but I’m sure the focus here is on “unstoppable delivery” BEFORE “efficient delivery”.

        it gets you within better than a factor of 2 of the ideal multicast dream

        I’m not sure this math is mathing. We’re using double the total network bandwidth per host, and it’s not geography-aware, it’s network-topology-aware, a topology that is often obscured by the ISPs for a variety of benign and malevolent reasons. The worst part is that the peers will cross the backbone many times; I think we’re looking at “network effect scale” wasted bandwidth compared with multicast. n²? I’m not sure; n² is probably the worst case Ontario.
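        For what it’s worth, the “factor of 2” versus “n²” disagreement can be made concrete with a toy model. Everything here is made up for illustration: viewers spread evenly over a few regions, each region behind one backbone link, and topology-blind peers choosing partners uniformly at random:

```python
def backbone_crossings(n_viewers: int, n_regions: int, geo_aware: bool) -> float:
    """Toy model: how many copies of one stream cross a backbone link.

    Ideal multicast (and geography-aware P2P) sends one copy into each
    region, which then spreads locally: n_regions crossings. Topology-blind
    P2P makes n_viewers transfers, each of which picks a peer in a different
    region with probability (n_regions - 1) / n_regions.
    """
    if geo_aware:
        return float(n_regions)
    return n_viewers * (n_regions - 1) / n_regions

# 1000 viewers spread over 10 regions:
assert backbone_crossings(1000, 10, geo_aware=True) == 10.0    # multicast-like
assert backbone_crossings(1000, 10, geo_aware=False) == 900.0  # blind P2P
```

        Under these (made-up) assumptions, blind P2P is closer to linear in viewer count than n², but it is far more than a factor of 2 above the multicast ideal on the backbone links specifically; geographic awareness is what closes that gap.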

        without requiring every single internet backbone provider and ISP to cooperate with you

        Yes, this is essential: for multicast to “work”, it would have to be like that. Same as unicast and IPv4: the internet would be useless if you had to negotiate each packet between you and your peers.

        considering those pipes are owned by the same types of big for-profit companies as the walled-garden social networks and CDNs that are so distasteful

        Yes, I believe they do stand in the way. I believe most long-range communication runs over dark fiber, which they bought on the cheap and have made it their business model to exploit. They therefore NEED to keep the utility of the public internet as low as possible, and that includes never allowing “actually existing multicast” to flourish.

        I can easily go to any website in Japan, play video games with the Chinese, or upload Linux images to the Europeans

        You can because you’re a drop in the consumer bucket; you exist in the cracks of the system. If everyone suddenly used the internet to its full potential, we would get the screws turned on us. The internet is largely built like a cable-distribution network, and we’re supposed to just be passive consumers: we purchase product, we receive, we are not meant to send.

        the intercontinental pipe owners might start complaining,

        Yes, I think so too, and they wouldn’t wait for their complaints to be heard. We have been here before: throttling, QoS deprioritization (down to drops), dropped packets, broken connections, port blocking, transient IP bans. We are sitting ducks on the big pipes if we start really using them properly. Multicast would essentially fly under the radar.