We all use AI in our everyday lives. How often do you use it? How do you feel about it?
About once a week. I feel fine with my own use, not so much with everyone else's. I occasionally read texts or summaries that just feel off and are riddled with everything from inaccuracies to plainly wrong information. They sometimes make it to the top search results and drown out useful content. I get angry at people who are dishonest and don't disclose that something was generated by ChatGPT and might be entirely wrong. I'm fine with people who are honest about it, though. So it's really a split for me right now.
I find myself using it nearly daily, or pretty close to it anyway. I don't feel bad about it, but I get frustrated when it gives false information. I end up correcting it, and since it's not for anything critical… I still go back to it. It'll be interesting to see how much better it gets and how fast that happens.
I use it all day.
On a daily basis. I think it's a fun toy with limited but very real utility. I frequently ask it for information instead of Google or Bing because search, and the web in general, has gotten so bad: full of fake information, ads, and garbage SEO sites that are just as wrong about everything as AI hallucinations.
I do that knowing the answers I get are unreliable, but sometimes I just need to confirm something I think I remember. It's also decent for rubber-duck troubleshooting, so you aren't wasting someone else's time. I recently relied heavily (but not exclusively) on it to set something up in AWS, which I'd never done before. It was a big help.
Why am I seeing duplicate posts in Jerboa? Not from all instances, but still.
Big assumption that we all use AI. Some of us don't, per company policy.
I use the ChatGPT Google Sheets extension pretty much every day to create tables, charts, and all kinds of lists for various clients and projects. The results are excellent, and it saves me a significant amount of time.
Constantly, unfortunately.
I work in cybersecurity, and you can't swing a Cat-5 o' Nine Tails without hitting some vendor talking up the "AI tools" in their products. Some of them are kinda OK; mostly it's language models surfacing relevant documentation or code snippets, stuff that was previously found with a bit of googling.

The problem is that AI has also been stuffed into network and system analysis to look for anomalous activity, and every single one of those models is complete shit. They do find anomalies, but mostly because they alert on so much stuff, generating so many false positives, that they get one right by blind chance.

If you want to make money on a model, sell it to a security vendor. Those of us who have to deal with the tools will hate you, but CEOs and CISOs are eating that shit up right now. If you want to make something actually useful, make a model that identifies and tunes out false positives from other models.