• 0 Posts
  • 18 Comments
Joined 1 year ago
Cake day: June 9th, 2023

  • “The leverage” to do what exactly? Put in someone who will be way worse? How does that help the left accrue power or accomplish our goals? If you think the Democratic Party’s takeaway from the left tanking a major election will be “we need to move left more” I have a bridge to sell you. We are not a majority, which means we need to form coalitions. We can’t do that with a reputation of blowing up everyone’s shit when we don’t get our way. We do it by showing how successful the party is when they listen to us and include us. No, this time we don’t have a particularly left candidate to vote for. Yes, it all sucks. But I have yet to see a concrete explanation of how picking or allowing “far right fascist” over “moderate” has any benefit in the short or long term. To my eyes, it just causes vulnerable people here and around the world to suffer.


  • The UN is supposed to be a toothless, executively dysfunctional institution; that’s a feature, not a bug. Its members are nations, whose entire purpose is to govern their regions of the planet. If the UN itself had the power to make nations do things, it wouldn’t be the United Nations, it’d be the One World Government, and its most powerful members absolutely do not want it to be that, so it isn’t.

    It’s supposed to be an idealized, nonviolent representation of geopolitics that is always available to nations as a venue for civilized diplomacy. That’s why nuclear powers were given veto power: they effectively have veto power over the question of “should the human race continue existing” and the veto is basically a reflection of that. We want issues to get hashed out with words in the UN if possible, rather than in real life with weapons, and that means it must concede to the power dynamics that exist in real life. The good nations and the bad nations alike have to feel like they get as much control as they deserve, otherwise they take their balls and go home.

    It’s frustrating to see the US or Russia or China vetoing perfectly good resolutions and everyone else just kind of going “eh, what can you do, they have vetoes,” but think through the alternative: everyone has had enough and decides “no more veto powers.” The UN starts passing all the good resolutions. But the UN only has the power that member nations give it, so enforcement would have to mean some nations trying to impose their will on the ones that would’ve vetoed. Now we’ve traded bad vetoes in the UN for real-world conflict instead.

    What that “get rid of the vetoes so the UN can get things done” impulse is actually driving at is “we should have a one world government that does good things,” which, yeah, that’d be great, but it’s obviously not happening any time soon. Both articles mention issues and reforms that are worthy of consideration, but the fundamental structure of the UN is always going to reflect the flaws of the world because it’s supposed to do that.


  • “Lossless” has a specific meaning, that you haven’t lost any data, perceptible or not. The original can be recreated down to the exact 1s and 0s. “Lossy” compression generally means “data is lost but it’s worth it and still does the job” which is what it sounds like you’re looking for.

    With images, if technology has advanced, you can sometimes find ways to apply even more compression without any more data loss, but that’s less common in video. People can choose to keep raw photos with all the information that the sensor got when the photo was taken, but a “raw” uncompressed video would be preposterously huge, so video codecs have to throw out far more data than photo formats do. That’s fine because video keeps moving; you don’t stare at a single frame for more than a fraction of a second anyway.

    But that doesn’t leave much room for improvement without throwing out even more, and going from one lossy algorithm to another has the downside that the new algorithm can’t tell which parts of the input are “good” visual data from the original and which are just compression noise from the first lossy algorithm, so it will try to preserve junk while also adding its own. You can always give it a try and see what happens, of course, but there are limits before it starts looking glitchy and bad.
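    The lossless/lossy distinction is easy to see with a general-purpose compressor. A minimal Python sketch, with zlib standing in for a lossless codec and byte quantization standing in for a lossy one (both choices are just for illustration):

```python
import random
import zlib

random.seed(0)
data = bytes(random.getrandbits(8) for _ in range(4096))  # stand-in "original"

# Lossless: the exact original comes back, bit for bit.
packed = zlib.compress(data)
assert zlib.decompress(packed) == data

# Toy "lossy" step: quantize each byte down to a multiple of 8. The
# quantized data compresses smaller, but the original bytes are gone
# for good; no decompressor can recover them.
quantized = bytes(b & ~7 for b in data)
assert quantized != data
assert len(zlib.compress(quantized)) < len(packed)
```

    The size win comes entirely from the quantization destroying information first, which is the trade every lossy codec makes.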


  • That’s not how it works at all. If it were as easy as adding a line of code that says “check for integrity” they would’ve done that already. Fundamentally, the way these models all work is you give them some text and they try to guess the next word. It’s ultra autocomplete. If you feed it “I’m going to the grocery store to get some” then it’ll respond “food: 32%, bread: 15%, milk: 13%” and so on.

    They get these results by crunching a ton of numbers, and those numbers, collectively called a model, were tuned by training. During training, they collect every scrap of human text they can get their hands on, feed bits of it to the model, then see what the model guesses. They compare the model’s guess to the actual text, tweak the numbers slightly to make the model more likely to give the right answer and less likely to give the wrong ones, then do it again with more text. The tweaking is an automated process, just feeding the model as much text as possible, until eventually it gets shockingly good at predicting. When training is done, the numbers stop getting tweaked, and from then on the model gives the same predictions for the same input every time.
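    A toy version of that loop, with plain counting standing in for the billions of tuned numbers (the tiny corpus here is made up to echo the grocery-store example):

```python
from collections import Counter, defaultdict

corpus = (
    "i am going to the store to get some food . "
    "i am going to the park to get some air . "
    "we went to the store to get some bread ."
).split()

# "Training" here is just tallying which word follows which. Real models
# tune weights by gradient descent instead, but the objective is the
# same next-word guessing game.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict(word):
    """Return {next word: probability}, most likely first."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.most_common()}

print(predict("the"))   # {'store': 0.666..., 'park': 0.333...}
```

    Once the counts stop changing, `predict` really does give the same distribution for the same input every time, just like a frozen model.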

    Once you have the model, you can use it to generate responses. Feed it something like “Question: why is the sky blue? Answer:” and if the model has gotten even remotely good at its job of predicting words, the next word should be the start of an answer to the question. Maybe the top prediction is “The”. Well, that’s not much, but you can tack one of the model’s predicted words to the end and do it again. “Question: why is the sky blue? Answer: The” and see what it predicts. Keep repeating until you decide you have enough words, or maybe you’ve trained the model to also be able to predict “end of response” and use that to decide when to stop.

    You can play with this process, for example, making it more or less random. If you always take the top prediction you’ll get perfectly consistent answers to the same prompt every time, but they’ll be predictable and boring. You can instead pick based on the probabilities you get back from the model and get more variety. You can “increase the temperature” of that and intentionally choose unlikely answers more often than the model expects, which will make the response more varied but will eventually devolve into nonsense if you crank it up too high. Etc, etc. That’s why even though the model is unchanging and gives the same word probabilities to the same input, you can get different answers in the text it gives back.
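    Those sampling knobs fit in a few lines. A sketch, with made-up probabilities mirroring the grocery-store example (real systems apply temperature to raw logits, but the effect is the same):

```python
import random

def sample(probs, temperature=1.0):
    """Pick a word from {word: probability}, reshaped by temperature.

    Raising each probability to the power 1/temperature is the usual
    trick: temperature < 1 sharpens toward the top pick, while > 1
    flattens the distribution so unlikely words come up more often.
    """
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights)[0]  # normalizes for us

def greedy(probs):
    """The temperature -> 0 limit: always the single most likely word."""
    return max(probs, key=probs.get)

predictions = {"food": 0.32, "bread": 0.15, "milk": 0.13, "sleep": 0.01}
print(greedy(predictions))        # always "food"
print(sample(predictions, 2.0))   # varies run to run; surprises more often
```

    The model itself never changes here; all the variety comes from how the caller picks from the returned distribution.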

    Note that there’s nothing in here about accuracy, or sources, or thinking, or hallucinations, or anything like that. The model doesn’t know whether it’s saying things that are real or fiction. It’s literally a gigantic unchanging matrix of numbers. It’s not even really “saying” things at all. It’s just tossing out possible words, something else is picking from that list, and then the result is being fed back in for more words. To be clear, it’s really good at this job, and can do some eerily human things, like mixing two concepts together, in a way that computers have never been able to do before. But it was never trained to reason, it wasn’t trained to recognize that it’s saying something untrue, or that it has little knowledge of a subject, or that it is saying something dangerous. It was trained to predict words.

    At best, what they do with these things is prepend your questions with instructions, trying to guide the model to respond a certain way. So you’ll type in “how do I make my own fireworks?” but the model will be given “You are a chatbot AI. You are polite and helpful, but you do not give dangerous advice. The user’s question is: how do I make my own fireworks? Your answer:” and hopefully the instructions make the most likely answer something like “that’s dangerous, I’m not discussing it.” It’s still not really thinking, though.
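    That wrapping step is just string assembly. A hypothetical sketch (the preamble wording is invented for illustration, not any product’s real prompt):

```python
# Invented system preamble; real products use longer, carefully tuned text.
SYSTEM_PREAMBLE = (
    "You are a chatbot AI. You are polite and helpful, "
    "but you do not give dangerous advice."
)

def build_prompt(user_question):
    """Wrap the raw user text in instructions before it reaches the model."""
    return (
        f"{SYSTEM_PREAMBLE}\n"
        f"The user's question is: {user_question}\n"
        "Your answer:"
    )

print(build_prompt("how do I make my own fireworks?"))
```

    The model only ever sees the combined string; the “instructions” work purely by nudging which next words are most likely.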


  • Archive Team often uses the Internet Archive to share the things they save, and obviously they have a shared goal of saving a copy of everything ever made, but they aren’t the same people. Archive Team is a vigilante white hat hacker group (well, maybe a little bit grey), and running a Warrior basically means you’re volunteering to be part of their botnet. When a website is going to be shut down, they’ll whip together a script and push it out to the botnet to try to grab as much of the dying site as they can, and when there’s more downtime they have other projects, like trying to brute force all those awful link shorteners so that when they inevitably die, people can still figure out where the links should’ve pointed.
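    The brute-force idea is simple in outline. This hypothetical sketch only enumerates candidate short codes, leaving out the distributed fetching and recording that the real Warrior projects coordinate:

```python
from itertools import product
from string import ascii_lowercase, digits

ALPHABET = ascii_lowercase + digits  # many shorteners use codes like these

def candidate_codes(length):
    """Yield every possible short code of the given length, in order."""
    for combo in product(ALPHABET, repeat=length):
        yield "".join(combo)

# A real worker would fetch https://<shortener>/<code> for each candidate
# and record where the redirect points; here we only show the enumeration.
codes = list(candidate_codes(2))
print(len(codes))   # 36 ** 2 = 1296 two-character codes
```

    The space grows exponentially with code length, which is why the work gets sharded across many volunteer machines.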


  • I know TiddlyWiki quite well but have only poked at Logseq, so maybe they’re more similar than I think, but TiddlyWiki is almost entirely implemented in itself. There’s a very small JavaScript core, but most of the system is implemented as wiki objects (they call them “tiddlers,” yes, really), and almost everything you interact with can be tweaked, overridden, or imitated. There’s almost nothing “the system” can do that you can’t. It’s idiosyncratic, kind of its own little universe with its own concepts to learn, but if you learn them it’s insanely flexible.

    Dig deep enough, and you’ll discover that it’s not a weird little wiki — it’s a tiny, self-contained object database and web frontend framework that they have used to make a weird little wiki, but you can use it for pretty much anything else you want, either on top of the wiki or tearing it down to build your own thing. I’ve used it to make a prediction tracker for a podcast I follow, I’ve made my own todo list app in it, and I made a Super Bowl prop bet game for friends to play that used to be spreadsheet-based. For me, it’s the perfect “I just want to knock something together as a simple web app” tool.

    And it has the fun party trick (this used to be the whole point of it, but I’d argue it has moved beyond that now) that your entire wiki can be exported to a single HTML file that contains the entire fully functional app, even allowing people to make their own edits and save a new copy of the HTML file with new contents. If running a small web server isn’t an issue, that’s the easiest way to use it, because saving is automatic and everything is centralized; otherwise you need to jump through some hoops to get your web browser to allow writing to the HTML file on disk, or just save new copies every time.


  • It’s not a fantasy because they’re bad ideas (they’re not) or we shouldn’t fight for them (we should), it’s a fantasy because you’re skipping over any of the actual work that needs to be done to make them happen: convincing more people to join you and demand more. Ask 100 people if the Senate and Supreme Court should be abolished and 99 of them are going to look at you like you have two heads. You can insist that you’re right and they’re all wrong all you want, but unless you work to get more people on your side, you’ll just be complaining into the void and setting impossible standards for politicians so that you can feel smug when they fail to meet them.


  • If a minority group is being oppressed or is otherwise motivated to create change and is voting in large numbers, but the majority is apathetic and not bothering to vote, then this system would prevent the minority from changing their representation as “punishment” for something they’re not doing.

    It’s also a bit of a “the beatings will continue until morale improves” solution to the problem, if it even is actually a problem. Low turnout is bad, but not because it’s inherently bad not to vote. It’s a symptom of the fact that people don’t think it matters, or that it will change anything, and unfortunately they’re not exactly wrong much of the time. Instead of putting effort into punishing people for not being engaged enough, it’d be better to make systemic changes that empower people and make the government more representative of their interests.