Just imagine how many not so obvious, or nuanced ‘facts’ are being misrepresented. Right there, under billions of searches.
There will be ‘fixes’ for this, but it’s never been easier to shape ‘the truth’ and public opinion.
It’s worse. So much worse. Now ChatGPT will have a human voice with simulated emotions that sounds eminently trustworthy and legitimately intelligent. The rest will follow quickly.
People will be far more convinced of lies being told by something that sounds like a human being sincere. People will also start believing it really is alive.
It’s like if 4chan and Quora had a baby.
The AI is going to play World of Warcraft for the next few years whilst it comes of age.
Rocks are also a good base for soup. Who hasn’t heard of rock soup growing up?
Never.
But I’ve definitely heard of stone soup, and if you bring over some vegetables and some stock, I’ll make it for you with my stone.
Where I’m from we traditionally use stones for soup. Rocks are reserved for making candy.
This and glue sauce are so worrisome. Sure, most people probably know better than to actually do that, but what about the ones who don’t? How many people know how bad it is to mix bleach and ammonia? How long until Google AI is poisoned enough to recommend that for a tough stain?
Yes, the issue is not the glaring error we catch and laugh about; it’s the ones that fly under the radar. The consequences could be dramatic.
Can’t wait for all these companies to lose all this money on rushed, far-from-ready ‘tech’.
I work in AI.
We’ve known this about LLMs for many years. One of the reasons they weren’t widely used was hallucination, where they can be led into confidently stating something incorrect. OpenAI created a great set of tools that showed true utility for LLMs, and people were able to largely accept that even if an LLM is sometimes wrong, it’s good for basic tasks like writing a doc outline or filling in boilerplate in scripts.
Sadly, grifters have decided that LLMs are the future, and they’ve put them into applications where they offer no more benefit than other, compositional models. While they’re great at orchestration, they’re just not suited to search, answering broad questions with limited knowledge, or voice-based search — all areas they’re being launched in. That doesn’t even scratch the surface of an LLM being used for critical subjects requiring knowledge of health or the law; the companies that decided AI will build software for them, or run their HR departments, are going to be totally fucked when a big mistake happens.
It’s an arms race that no one wants, and one that arguably hasn’t created anything worthwhile yet, outside of a wildly expensive tool that will save you some time. What’s even sadder is that I bet you could go to any of these big tech companies and ask ICs whether this is a good use of their time, and they’d say no. Tens of thousands of jobs were lost, and many worthwhile projects were scrapped, so some billionaire cunts could enter an AI pissing contest.