Em Adespoton

  • 2 Posts
  • 941 Comments
Joined 2 years ago
Cake day: June 4th, 2023



  • …for situations where people are providing service above and beyond what you’re already paying for through other means.

    For me, this requires physical interaction between me and the person. If I can hand them a tip, it will be 10-15%, just like in the 90s.

    If I’m in Starbucks and a prompt comes up on the machine asking how much I’d like to tip when I order, it will always be 0 unless I’m asking staff to come deliver my order to me outside instead of picking it up myself. They should already be paid a fair wage for the work they do behind the counter, and if I find out they aren’t, I’ll take my business elsewhere.


  • Remember that fingerprinting can be your friend… because it’s much easier to fake an online fingerprint than a real one.

    You can generate a unique fingerprint for each online interaction; this means no two sessions share an identity, so nothing can be linked back to you.

    Or, you can ensure you always have the same fingerprint as a large number of other people.

    Think of it as the difference between using a different valid loyalty card each time you shop vs using one of the famous numbers that millions of other people are also using.

    Of course, in both circumstances, you do give up the benefits of being uniquely identifiable.
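    The two strategies above can be sketched roughly as follows. This is a toy illustration, not real anti-fingerprinting code; all values and names are hypothetical, and real fingerprints involve far more signals (canvas, fonts, audio, etc.):

    ```python
    # Sketch of the two anti-tracking strategies described above:
    # (1) a fresh, unique fingerprint per session, or (2) a shared, common one.
    import random

    # Hypothetical values that a large crowd of users might all present
    # (the "famous loyalty card number" strategy).
    COMMON_PROFILE = {
        "user_agent": "Mozilla/5.0 (Windows NT 10.0; rv:115.0) Gecko/20100101 Firefox/115.0",
        "screen": "1920x1080",
        "timezone": "UTC",
    }

    def random_profile():
        """Strategy 1: a new identity every session -- linkable to nothing."""
        return {
            "user_agent": f"Mozilla/5.0 (X11; Linux x86_64; rv:{random.randint(100, 130)}.0) Gecko/20100101",
            "screen": f"{random.choice([1366, 1440, 1920])}x{random.choice([768, 900, 1080])}",
            "timezone": random.choice(["UTC", "America/New_York", "Europe/Berlin"]),
        }

    def common_profile():
        """Strategy 2: the same identity as millions of others -- lost in the crowd."""
        return dict(COMMON_PROFILE)
    ```

    Either way the tracker loses: a random profile never repeats, and a common profile never distinguishes you.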



  • Exactly. People see “AI” and think LLMs and diffusion models. Those are both probabilistic translation engines. They’re no more intelligent than an AC/DC converter, just a lot more complex.

    However, there are neural networks and sense arrays in the field of AI, and those are designed to replicate the process of thought.

    The real route to a thinking AI is likely a combination of the two, where a neural network can call on expert systems including translation engines to do the heavy lifting and then run a more nuanced decision tree over the results.

    Thing is, modern LLMs and diffusion models are already more complex than a single human mind can fully comprehend, so we default to internally labelling them as either “like us” or “magic”, even when we theoretically know them to be nothing but really deep predictive models.
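    The "network calls on expert systems, then decides over the results" idea above can be sketched as a toy router. Everything here is hypothetical stand-in code, not any real AI framework's API; the point is only the shape of the architecture:

    ```python
    # Toy sketch: a lightweight router delegates heavy lifting to expert
    # subsystems, then applies a simple decision rule over their results.

    def translation_expert(text: str) -> dict:
        """Stand-in for an LLM-style probabilistic translation engine."""
        return {"source": "translation", "confidence": 0.9, "answer": text.upper()}

    def math_expert(text: str) -> dict:
        """Stand-in for a symbolic expert system that only handles arithmetic."""
        try:
            # Evaluate with no builtins so only plain arithmetic succeeds.
            return {"source": "math", "confidence": 1.0,
                    "answer": str(eval(text, {"__builtins__": {}}))}
        except Exception:
            return {"source": "math", "confidence": 0.0, "answer": None}

    def route(query: str) -> dict:
        """The 'nuanced decision tree' step: gather expert results, pick the best."""
        results = [math_expert(query), translation_expert(query)]
        return max(results, key=lambda r: r["confidence"])

    print(route("2 + 3")["answer"])   # math expert wins: "5"
    print(route("hello")["answer"])   # falls back to the translation engine: "HELLO"
    ```

    The decision layer here is trivially simple; the comment's point is that the routing/judging component and the heavy predictive engines are different kinds of machinery.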