• 2 Posts
  • 15 Comments
Joined 1 year ago
Cake day: June 11th, 2023


  • Yep, indeed, I’m already discovering differences too. :) A good document for techies to read seems to be here.

    https://reticulum.network/manual/understanding.html

    I also think I see a problem on the horizon: announce traffic volume. According to this description, it seems that Reticulum tries to forward all announces to every transport node (router). In a small network, that’s OK. In a big network, this can become a challenge (disclaimer: I participated in building I2P, though ages ago, and I still remember some stuff well enough to predict where a problem might pop up). Maintenance of the routing table / network database / <other term for a similar thing> is among the biggest challenges when things get intercontinental.
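
    To put rough numbers on it, here is a quick back-of-envelope sketch. The announce size and re-announce interval below are my own assumptions for illustration, not figures from the Reticulum documentation.

    ```python
    # Rough estimate of the announce traffic a transport node must carry,
    # assuming every announce is propagated to every transport node.
    # All numbers are illustrative assumptions, not Reticulum specifications.

    ANNOUNCE_BYTES = 167              # assumed size of one announce on the wire
    ANNOUNCES_PER_DEST_PER_DAY = 24   # assumed: each destination re-announces hourly

    def announce_load_bps(destinations: int) -> float:
        """Average announce traffic, in bits per second, seen by one transport node."""
        bytes_per_day = destinations * ANNOUNCES_PER_DEST_PER_DAY * ANNOUNCE_BYTES
        return bytes_per_day * 8 / 86_400

    for n in (1_000, 100_000, 10_000_000):
        print(f"{n:>10} destinations -> {announce_load_bps(n):,.0f} bps of announces")
    ```

    With these assumptions, a network of ten million destinations already generates a few megabits per second of announces alone, which would drown the very slow links (LoRa, packet radio) that Reticulum is designed to run over.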


  • Interesting project, thank you for introducing it. :)

    I haven’t tested anything, only checked their specs (sadly, I didn’t find out how they manage without a distributed hash table).

    Reticulum does not use source addresses. No packets transmitted include information about the address, place, machine or person they originated from.

    Sounds like mix networks such as I2P and (to a lesser degree, since its role is proxying out to the Internet) TOR. Mix networks send traffic over the Internet, so the bottom protocol layers (TCP and UDP) use IP addresses. End-to-end messages use cryptographic identifiers.
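
    To make "cryptographic identifiers" concrete, here is a minimal sketch of an address derived from a public key. The hashing and truncation scheme is illustrative only; Reticulum and I2P each use their own exact derivation.

    ```python
    # Minimal sketch: a network "address" that is just a hash of a public key.
    # Illustrative scheme only; not the exact derivation used by Reticulum or I2P.
    import hashlib
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    identity = Ed25519PrivateKey.generate()          # stays on the owner's machine
    public_bytes = identity.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )

    # The network-visible address is derived from the public key:
    address = hashlib.sha256(public_bytes).hexdigest()[:32]
    print("destination address:", address)

    # No packet needs to carry an IP-style source address to prove who sent it;
    # possession of the matching private key is the proof, and anyone can
    # generate as many such addresses as they like.
    ```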

    There is no central control over the address space in Reticulum. Anyone can allocate as many addresses as they need, when they need them.

    Sounds like TOR and I2P, but people’s convenience (easily resolving a name to an address) has created centralized resources on these nets, and will likely create similar resources on any network. An important matter is whether the central name resolver can retroactively revoke a name (in I2P for example, a name that has been already distributed is irrevocable, but you can refuse to distribute it to new nodes).

    Reticulum ensures end-to-end connectivity. Newly generated addresses become globally reachable in a matter of seconds to a few minutes.

    The same as the aforementioned mix networks, but neither of them claims operability at 5 bits per second. Generally, a megabit connection is advised to meaningfully run a mix network, because you’re not expected to freeload, but to help mix traffic for others (this is how the anonymity arises).

    Addresses are self-sovereign and portable. Once an address has been created, it can be moved physically to another place in the network, and continue to be reachable.

    True for TOR and I2P. The address is a public key. You can move the machine with the private key anywhere; it will build a tunnel to accept incoming traffic at some other node.

    All communication is secured with strong, modern encryption by default.

    As it should.

    All encryption keys are ephemeral, and communication offers forward secrecy by default.

    In mix networks, the keys used as endpoint addresses are not ephemeral, but permanent. I’m not sure if I should take this statement at face value. If Alice wants to speak to Bob tomorrow, some identifier of Bob must not be ephemeral.
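
    A minimal sketch of how ephemeral keys and a permanent identifier for Bob can coexist: the long-lived identity key only signs, while a fresh X25519 key pair is generated per link and discarded afterwards. This is the generic signed-ephemeral-Diffie-Hellman pattern, not necessarily Reticulum’s exact handshake.

    ```python
    # Forward secrecy on top of a permanent identity (generic pattern,
    # not necessarily Reticulum's exact handshake).
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    # Bob's permanent identity: this is what Alice can still address tomorrow.
    bob_identity = Ed25519PrivateKey.generate()

    # Per-link ephemeral keys, deleted when the link closes.
    bob_eph = X25519PrivateKey.generate()
    alice_eph = X25519PrivateKey.generate()

    bob_eph_pub = bob_eph.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    # Bob signs his ephemeral key with his permanent key; Alice verifies it,
    # so she knows the exchange is with the Bob she meant to reach.
    signature = bob_identity.sign(bob_eph_pub)
    bob_identity.public_key().verify(signature, bob_eph_pub)  # raises if forged

    # Both sides derive the same shared secret. Once the ephemeral private keys
    # are deleted, recorded traffic stays unreadable even if the identity key
    # leaks later -- that is the forward secrecy.
    assert alice_eph.exchange(bob_eph.public_key()) == bob_eph.exchange(alice_eph.public_key())
    ```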

    It is not possible to establish unencrypted links in Reticulum networks.

    Same for mix networks.

    It is not possible to send unencrypted packets to any destinations in the network.

    Same.

    Destinations receiving unencrypted packets will drop them as invalid.

    Same.



  • As an anarchist who would welcome other anarchists - sadly, I doubt that’s a reliable recipe for stopping climate change.

    Limiting (hopefully stopping) climate change can be done under almost any political system… except perhaps dictatorial petro-states. However, it takes years of work to transform the economy. Transport, heating, food production - many things must change. Perhaps the simplest individual choices are:

    • going vegetarian (vegan if one knows enough to do the trick)
    • avoidance of using fossil fueled personal vehicles
    • improving home energy efficiency (especially in terms of heating)
    • avoidance of air travel
    • avoidance of heavy goods delivered from distant lands

    The rest - creating infrastructure to produce energy cleanly and store it in sufficient quantities - is typically a matter of societal choices.

    As for corals - I would start by preserving their biodiversity, sampling the genes of all coral and coral-related species and growing many of them in human-made habitats. If we’re about to cause their extinction, it’s our obligation to provide them life support until the environment has been fixed.

    Also, I would consider genetically engineering corals to tolerate higher temperatures. Since I understand that this is their critical weakness, providing a solution could save ecosystems. If a solution is feasible, that is.

    Corals reproduce sexually so a useful gene obtained from who knows where would spread among them (but slowly - because typical colonies grow bigger asexually). Also, I would keep in mind that this could have side effects.

    As for temperature - it will keep rising for some time before things can be stopped. Short of geoengineering, there is nothing to be done but reduce emissions, adapt, and help others adapt. The predictable outcome - it will get worse for a long while before it starts getting any better.



  • The article is mostly correct. :)

    Notes: out of the three, Latvia has serious energy storage - a 4 billion cubic meter (at normal pressure) underground gas store, sufficient to carry all three countries over the winter. So far, it’s filled with fossil natural gas - but some day it could be filled with synthesized methane.
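
    To put 4 billion cubic meters into perspective, a rough conversion to energy content (the ~10 kWh per cubic meter heating value is a ballpark assumption):

    ```python
    # Back-of-envelope: energy content of the Latvian underground gas store.
    # The heating value per cubic meter is a ballpark assumption.
    STORE_VOLUME_M3 = 4e9     # ~4 billion cubic meters at normal pressure
    KWH_PER_M3 = 10.0         # approximate heating value of natural gas

    energy_twh = STORE_VOLUME_M3 * KWH_PER_M3 / 1e9   # kWh -> TWh
    print(f"~{energy_twh:.0f} TWh of stored heating value")
    # Roughly 40 TWh, which is why it can carry three small countries through a winter.
    ```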

    As a backup option, Estonia has oil shale - probably the worst fuel on Earth, so the price of emitting CO2 keeps those plants out of the energy market during summer. During winter, they come online though.

    As for solar, we aren’t planning to rely much on that. Solar capacity has of course skyrocketed, but only because it’s very easy to install. For me, it provides a nice way to charge my car from April to October. But at latitudes 55 to 60, days are really very short in midwinter, so wind and waste wood are the likely candidates in the future - after oil shale leaves the scene, but before synthetic gas becomes feasible.

    Regarding pumped hydro - it can stabilize a day, but can’t stabilize a week or month. Lithuania has a biggish (~10 GWh) pumped storage facility. The rest of the Baltics don’t have suitable terrain. Estonia has limestone banks, but they’re under various forms of protection, and even if one built a lot of pumped hydro, the low elevation difference (up to 50 meters) means one couldn’t support the electric grid for more than a few days.
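
    To illustrate why a 50-meter head is so limiting, a quick potential-energy calculation (only the 50 m figure comes from above; the reservoir volume is an arbitrary example and losses are ignored):

    ```python
    # E = rho * g * h * V : energy stored in an upper reservoir.
    # The 50 m head is from the comment above; the reservoir volume is an
    # arbitrary example, and round-trip losses are ignored.
    RHO = 1000          # kg/m3, water
    G = 9.81            # m/s2
    HEAD_M = 50         # elevation difference available on the limestone banks
    VOLUME_M3 = 10e6    # assumed: a 10-million-cubic-meter upper reservoir

    energy_gwh = RHO * G * HEAD_M * VOLUME_M3 / 3.6e12
    print(f"~{energy_gwh:.1f} GWh per cycle")
    # ~1.4 GWh per cycle -- compare with Lithuania's ~10 GWh plant, and with the
    # weeks of storage a dark, calm winter spell would require.
    ```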

    Regarding hydrogen - maybe. But hydrogen is difficult to store, so I’m betting on wind, and on sourcing technology from Germany to produce synthetic methane from excess power during summer, and pumping it to Latvia for storage.

    Finally - connecting to the continental EU power grid allows importing energy when local wind isn’t strong enough, and exporting any surplus. So far, all three countries are still in the ex-Soviet synchronization area (shared with Russia and Belarus, but with no trade, just synchronization), and thus unable to connect to the EU synchronization area. Local power companies have been building synchronous compensators (devices that help stabilize grid frequency) for the past 2 years to drop this dependency.

    If things go as planned, the Baltic countries will sever those connections and join the EU grid via Poland in winter 2025. Undersea cables already run from Estonia to Finland and from Lithuania to Sweden, but under the current political conditions, I don’t think anyone counts on them for sure (a Chinese-owned but Russian-crewed ship broke the Estonia-Finland gas pipeline last autumn when dragging its anchor during a storm - it’s still unclear whether the damage was accidental).





  • Summary:

    But then, in the geologically abrupt space of only a few decades, this great river of ice all but halted. In the two centuries since, it has moved less than 35 feet a year. According to the leading theory, the layer of water underneath it thinned, perhaps by draining into the underside of another glacier. Having lost its lubrication, the glacier slowed down and sank toward the bedrock below.

    /…/

    “The beauty of this idea is that you can start small,” Tulaczyk told me. “You can pick a puny glacier somewhere that doesn’t matter to global sea level.” This summer, Martin Truffer, a glaciologist at the University of Alaska at Fairbanks, will travel to the Juneau Icefield in Alaska to look for a small slab of ice that could be used in a pilot test. If it stops moving, Tulaczyk told me he wants to try to secure permission from Greenland’s Inuit political leaders to drain a larger glacier; he has his eye on one at the country’s northeastern edge, which discharges five gigatons of ice into the Arctic Ocean every year. Only if that worked would he move on to pilots in Antarctica.

    It’s not wild at all. :) The plan makes sense from a physical perspective, but should not be implemented lightly because:

    • it’s extremely hard work and extremely expensive to drain water from beneath an extremely large glacier
    • it doesn’t stop warming, it just puts a brake on ice loss / sea level rise
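
    For scale, the five gigatons per year quoted above is a tiny contribution in sea-level terms (the ocean area figure is the standard ~361 million km²):

    ```python
    # Convert the quoted 5 Gt/year of ice discharge into sea-level equivalent.
    OCEAN_AREA_M2 = 3.61e14   # global ocean surface area, ~361 million km2
    ICE_GT_PER_YEAR = 5       # figure quoted from the article
    WATER_DENSITY = 1000      # kg/m3 (1 Gt of ice melts into 1e12 kg of water)

    volume_m3 = ICE_GT_PER_YEAR * 1e12 / WATER_DENSITY
    rise_mm_per_year = volume_m3 / OCEAN_AREA_M2 * 1000
    print(f"{rise_mm_per_year:.3f} mm of sea-level rise per year")
    # ~0.014 mm/yr from that single glacier, against a global total of ~3-4 mm/yr --
    # a sense of why this is a brake on ice loss, not a fix for warming.
    ```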


  • perestroika@slrpnk.net to DIY@slrpnk.net · What’s Up? · 8 months ago

    Trying to figure out how my heat pump supposedly supports WiFi… in unfathomable and non-standard ways. It’s available as an access point, I can associate and ping it, but no TCP port listens and no UDP port responds. Nothing cool here - undocumented features all the way down. When you buy a heat pump and plan to automate its use, check which protocols it actually supports before making a decision. :)
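
    For anyone poking at a similar gadget, this is roughly the kind of sweep I mean (the address is a placeholder for whatever the device’s access point hands out):

    ```python
    # Quick-and-dirty TCP sweep of a device that only exposes a WiFi access point.
    # The IP below is a placeholder; use the address the device's DHCP assigns.
    import socket

    HOST = "192.168.1.1"   # placeholder for the heat pump's AP-side address
    PORTS = [23, 80, 443, 502, 1883, 8080, 8883]  # telnet, http(s), Modbus TCP, MQTT...

    for port in PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            status = "open" if s.connect_ex((HOST, port)) == 0 else "closed or filtered"
            print(f"TCP {port}: {status}")
    ```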



  • More information can be found here: https://veilid.com/framework

    I read it, haven’t tested it - commentary below.

    Before I go into commentary, I will summarize: my background is from I2P - I helped build bits and pieces of that network a decade ago. As far as I can tell, Veilid deals in concepts that are considerably similar to I2P. If the makers have implemented things well, it could be a capable tool for many occasions. :) My own interest in recent years has shifted towards things like Briar. With that project, there is less common ground. Veilid is when you use public infrastructure to communicate securely, with anonymity. Briar is when you bring your own infrastructure.

    • Networking

    Looks like I2P, but I2P is coded in Java only. Veilid seems to have newer and more diverse languages (more capability, but likely more maintenance needs in future). I2P has a lot of legacy attached by now, and is not known for achieving great performance. A superficial reading of the network protocol doesn’t enable me to tell if Veilid will do better - I can only tell that they have thought of the same problems and found their own solutions. I would hope that when measured in a realistic situation, Veilid would exceed the performance of I2P. How to find out? By trying, in masses and droves…

    • Cryptography

    Impressive list of ciphers. Times have changed, and I’m not qualified to say anything about any of them. It gives the impression that these people know what they are doing and are familiar with recent developments in cryptography. They also seem to know that times will change (“Veilid has ensured that upgrading to newer cryptosystems is streamlined and minimally invasive to app developers, and handled transparently at the node level.”), which is good. Keeping local storage encrypted is an improvement over I2P - last time I worked with I2P, an I2P router required external protection (e.g. Linux disk encryption) against hardware seizure. With mobile devices everywhere, storage encryption is a reasonable addition. I notice that the BlockStore functionality is not implemented yet. If they intend to get it working, storage encryption is a must, of course.
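
    For readers wondering what application-level storage encryption looks like in practice, a generic sketch with a symmetric key (this is not Veilid’s actual storage code, just the idea):

    ```python
    # Generic sketch of application-level encrypted storage, as opposed to relying
    # only on external full-disk encryption. Not Veilid's actual implementation.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in practice derived from a passphrase or device keystore
    box = Fernet(key)

    record = b'{"peer": "example", "last_seen": 1700000000}'
    ciphertext = box.encrypt(record)            # only ciphertext ever touches the disk

    with open("node_store.bin", "wb") as f:
        f.write(ciphertext)

    with open("node_store.bin", "rb") as f:
        assert box.decrypt(f.read()) == record  # readable only with the key
    ```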

    • RPC (remote procedure calls)

    Their choice of a procedure call system is unfamiliar to me, but I read about it. I didn’t find anything to complain about.

    • DHT (distributed hash table)

    Looks somewhat like I2P.

    • Private routing

    Looks very much like I2P.



  • The source is a scientific article from 2022…

    “P. Bombelli et al. Powering a microprocessor by photosynthesis. Energy & Environmental Science, 2022.”

    …so there is zero chance of random folks using it practically, if the information was only added to the state of the art during the last year. The article hasn’t even made it to Sci-Hub. It can be found here, however. The journal currently wants to extort 42 pounds from the reader, and since I’m not from a research institution, I haven’t got an account to read journals, so I shall remain in the dark. :( One could always request an author’s copy from one of the authors (or maybe someone here is from a place which already has an account?)…

    …until then, I will use a clue they have given: the chip was an ARM Cortex M0. That is the tiniest of the tiny, the most energy-efficient. Not much computing can be done with them, mostly just data acquisition (sensors). They require milliwatts or microwatts of power. The chip wasn’t run continuously; it slept for most of the time.
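
    To give a feel for the numbers, a duty-cycle estimate with typical Cortex-M0-class figures (the currents and duty cycle are generic datasheet-ballpark assumptions, not values from the paper):

    ```python
    # Average power draw of a duty-cycled Cortex-M0-class microcontroller.
    # All figures are generic ballpark assumptions, not taken from the paper.
    V = 3.0           # supply voltage, volts
    I_ACTIVE = 5e-3   # ~5 mA while awake and measuring
    I_SLEEP = 2e-6    # ~2 uA in deep sleep
    DUTY = 0.001      # awake 0.1 % of the time (a brief reading every few minutes)

    avg_current = DUTY * I_ACTIVE + (1 - DUTY) * I_SLEEP
    print(f"average draw: {avg_current * V * 1e6:.0f} microwatts")
    # ~21 uW on average -- the kind of load a small photosynthetic cell could plausibly feed.
    ```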

    The article’s public abstract doesn’t describe the growing protocol of the algae. Most likely, the same algae in the same container cannot be grown for a year. An ecosystem needs a biodegrader (bacteria that decompose dead algae) and efficiency likely won’t be great when the primary producer and biodegrader form a mixed culture (instead of nice green algae there will be bacterial films and brown goo, limiting the sunlight available to algae). So the “cell” will probably need to be emptied, cleaned and refilled - but that’s just a guess.