I got into Linux right around when it was first happening, and I don't think I would've made it through my own noob phase if I didn't have a friendly robot to explain all the stupid mistakes I was making while re-training my brain to think in Linux.
Probably a very friendly expert or mentor, or even just a regular established Linux user, could've done a better job; the AI had me do weird things semi-often. But I didn't have anyone in my life who liked Linux, let alone had time to be my personal mentor in it, so the AI was a decent solution for me.
Other than endless posts from the general public telling us how amazing it is, peppered with decision makers using it to replace staff and the subsequent news reports about how it told us we should eat rocks, or some variation thereof, there's been no impact whatsoever on my personal life.
In my professional life as an ICT person with over 40 years of experience, it's helped me identify which people understand what it is (and, more specifically, what it isn't: intelligent) and respond accordingly.
The sooner the AI bubble bursts, the better.
I fully support AI taking over stupid, meaningless jobs if it also means the people who used to do those jobs have financial security and can go do a job they love.
Software developer Afas has decided to give certain employees one day a week off with pay, and let AI do their job for that day. If that is the future AI can bring, I’d be fine with that.
The caveat is that that money has to come from somewhere, so their customers will probably foot the bill, meaning that other employees elsewhere will get paid less.
But maybe AI can be used to optimise business models and make better predictions. Less waste means less money spent on processes, which can mean more money for people. I also hope AI can then give companies a better distribution of money.
This of course is all what stakeholders and decision makers do not want for obvious reasons.
The thing that’s stopping anything like that is that the AI we have today is not intelligence in any sense of the word, despite the marketing and “journalism” hype to the contrary.
ChatGPT is predictive text on steroids.
Type a word on your mobile phone, then keep tapping the next predicted word and you’ll have some sense of what is happening behind the scenes.
The difference between your phone keyboard and ChatGPT? Many billions of dollars and unimaginable amounts of computing power.
It looks real, but there is nothing intelligent about the selection of the next word. It just has much more context to guess the next word and has many more texts to sample from than you or I.
There is no understanding of the text at all, no true or false, right or wrong, none of that.
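The "predictive text on steroids" idea can be sketched with a toy bigram model: count which word follows which, then greedily emit the most frequent successor over and over. This is a deliberately tiny illustration (the corpus and code here are made up for the example), nothing like a real LLM's learned, long-context prediction, but the generation loop — pick a likely next word, append it, repeat — has the same shape:

```python
from collections import Counter, defaultdict

# Toy bigram "predictive text": count which word follows which,
# then greedily emit the most frequent successor. Real LLMs use
# learned representations over long contexts, but the loop
# (predict next token, append, repeat) has the same shape.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict(word):
    """Most frequent word seen after `word`, or None if unseen."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

def generate(start, length=5):
    """Repeatedly append the predicted next word."""
    out = [start]
    for _ in range(length):
        nxt = predict(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))
```

Note that nothing in this loop "understands" the sentence it produces; it only tracks which words tend to follow which, which is the commenter's point about scale being the main difference.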
AI today is Assumed Intelligence
Arthur C. Clarke said it best:
“Any sufficiently advanced technology is indistinguishable from magic.”
I don't expect this to be solved in my lifetime, and I believe that the current methods of "intelligence" are too energy-intensive to be scalable.
That's not to say that machine learning algorithms are useless; there are significant positive and productive tools around, ChatGPT and its Large Language Model siblings notwithstanding.
Source: I have 40+ years experience in ICT and have an understanding of how this works behind the scenes.
I think you’re right. AGI and certainly ASI are behind one large hurdle: we need to figure out what consciousness is and how we can synthesize it.
As Qui-Gon Jinn said to Jar Jar Binks: the ability to speak does not make you intelligent.
we need to figure out what consciousness is
Nah, "consciousness" is just a buzzword with no concrete meaning. The path to AGI has no relevance to it at all. Even if we develop a machine just as intelligent as human beings, maybe even more so, that can solve any arbitrary problem just as efficiently, mystics will still be arguing over whether or not it has "consciousness."
we need to figure out what consciousness is and how to synthesize it
We don’t know what it is. We don’t know how it works. That is why
“consciousness” is just a buzzword with no concrete meaning
You’re completely correct. But you’ve gone on a very long rant to largely agree with the person you’re arguing against. Consciousness is poorly defined and a “buzzword” largely because we don’t have a fucking clue where it comes from, how it operates, and how it grows. When or if we ever define that properly, then we have a launching off point to compare from and have some hope of being able to engineer a proper consciousness in an artificial being. But until we know how it works, we’ll only ever do that by accident, and even that is astronomically unlikely.
We don’t know what it is. We don’t know how it works. That is why
If you cannot tell me what you are even talking about then you cannot say “we don’t know how it works,” because you have not defined what “it” even is. It would be like saying we don’t know how florgleblorp works. All humans possess florgleblorp and we won’t be able to create AGI until we figure out florgleblorp, then I ask wtf is florgleblorp and you tell me “I can’t tell you because we’re still trying to figure out what it is.”
You’re completely correct. But you’ve gone on a very long rant to largely agree with the person you’re arguing against.
If you agree with me why do you disagree with me?
Consciousness is poorly defined and a “buzzword” largely because we don’t have a fucking clue where it comes from, how it operates, and how it grows.
You cannot say we do not know where it comes from if “it” does not refer to anything because you have not defined it! There is no “it” here, “it” is a placeholder for something you have not actually defined and has no meaning. You cannot say we don’t know how “it” operates or how “it” grows when “it” doesn’t refer to anything.
When or if we ever define that properly
No, that is your first step, you have to define it properly to make any claims about it, or else all your claims are meaningless. You are arguing about the nature of florgleblorp but then cannot tell me what florgleblorp is, so it is meaningless.
This is why "consciousness" is interchangeable with vague words like "soul." They cannot be concretely defined in a way where we can actually look at what they are, so they're largely irrelevant. When we talk about more concrete things like intelligence, problem-solving capabilities, self-reflection, etc., we can at least come to some loose agreement on what that looks like and can begin to have a conversation about what tests might actually look like and how we might quantify it. It is these concrete things that have been the basis of study and research, and we've been gradually increasing our understanding of intelligent systems, as shown by the explosion of AI, though it still has miles to go.
However, when we talk about “consciousness,” it is just meaningless and plays no role in any of the progress actually being made, because nobody can actually give even the loosest iota of a hint of what it might possibly look like. It’s not defined, so it’s not meaningful. You have to at least specify what you are even talking about for us to even begin to study it. We don’t have to know the entire inner workings of a frog to be able to begin a study on frogs, but we damn well need to be able to identify something as a frog prior to studying it, or else we would have no idea that the thing we are studying is actually a frog.
You cannot study anything without being able to identify it, which requires defining it at least concretely enough that we can agree whether it is there or not, and that the thing we are studying is actually the thing we aim to study. Why should I believe your florgleblorp, sorry, I mean this "consciousness" you speak of, even exists if you cannot even tell me how to identify it? It would be like someone insisting there is a florgleblorp hiding in my room. Well, I cannot distinguish between a room with or without a florgleblorp, so by Occam's razor I opt to disbelieve in its existence. Similarly, if you cannot tell me how to distinguish between something that possesses this "consciousness" and something that does not, how to actually identify it in reality, then by Occam's razor I opt to disbelieve in its existence.
It is entirely backwards, spiritualist thinking, popularized by all the mystics, to insist that we need to study something before they can even specify what it is, in order to figure out what it is later. That is the complete reversal of how anything works and is routinely used by charlatans to justify pseudoscientific "research." You have to specify what is being talked about first.
and let AI do their job for that day.
What? How does that work?
It writes all the bugs so the engineer can fix it over the following 4 days
Usually these tasks are repetitive and scriptable. I don't know exactly what happens, but I suppose the AI will just cough up a lot of work and employees will come in on Monday and just have to check it. In some cases that would be more work than just doing it yourself, but it's a first step at least.
It gave me a starting point for a terms of reference document for a Green Champions group that I set up at work. That is the only beneficial thing that I can recall.
I have tried to find other uses, but so far nothing else has actually proven up to scratch. I expect that I could have spent more time composing and tweaking prompts and proofreading the output, but it takes as long as writing the damned documents myself.
it works okay as a fuzzy search over documentation.
…as long as you’re willing to wait.
…and the documentation is freely available.
…and doesn’t contain any sensitive information.
…and you very specifically ask it for page references and ignore everything else it says.
So basically, it's worse than just searching for one word and pressing "next" over and over, unless you don't know what the word is.
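For what it's worth, the "fuzzy search over documentation" use can be approximated locally without an LLM at all. A minimal sketch using Python's standard difflib, with made-up documentation lines for the example (in practice you'd load the real docs):

```python
import difflib

# Hypothetical documentation headings; placeholders for real docs.
doc_lines = [
    "Configuring the network interface",
    "Mounting filesystems at boot",
    "Managing user permissions",
]

def fuzzy_search(query, lines, n=1):
    """Return the n doc lines most similar to the query.

    difflib scores overall string similarity, so the query does
    not need to exactly match any word in the line.
    """
    return difflib.get_close_matches(query, lines, n=n, cutoff=0.0)

print(fuzzy_search("mount filesystem", doc_lines))
```

This stays offline and keeps sensitive documentation private, which addresses two of the caveats above, though it only matches surface text rather than meaning.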
It’s really fun and helpful for character development, writing, and worldbuilding.
A game changer in helping me find out more about topics whose wisdom is buried in threads of forum posts. Great for figuring out things about which I have only fuzzy ideas or vague keywords that might be inaccurate. Great at explaining things I can then ask follow-up questions about. Great at finding equations I need, though I do not trust it one bit to do the calculations for me. The latest gen also gives me sources on request, so I can double-check and learn more directly from the horse's mouth.
More things that come to mind: great for finding specs that have been wiped from a manufacturer's site. Great for making summaries and comparisons, filtering data, and making tables to my requests. Great at rubberducking when I try to fix something obscure in Linux, though the documentation it refers to is often outdated; it still works well for giving me flow and ideas for how to move on. Great at compiling user experiences for comparisons, say for varieties of yeasts or ingredients for home-brewing. This ties into my first comment about it being a game changer for information in old forum threads.
ChatGPT has had absolutely zero impact on my work or personal life. I do not have any useful case for it whatsoever. I have used it for goofs before. That’s about it. I cannot see it as a positive or negative influence…as it has had zero influence. I do get annoyed that every company and their mother is peddling worthless AI shit that most people have no use case for.
That’s pretty much been my experience too. I’ve messed around with it a bit, but put no effort into finding actual uses for it. And I don’t really feel like doing so because it all seems like a big cash grab. I’m glad some people find it useful though.
For my life, it’s nothing more than parlor tricks. I like looking at the AI images or whipping one up for a joke in the chat, but of all the uses I’ve seen, not one of them has been “everyday useful” to me.
Impact?
My company sells services to companies trying to implement it. I have a job due to this.
Actual use of it? Just wasted time. The verifiable answers are wrong, the unverifiable answers don’t get me anywhere on my projects.
Thank you for your honest answer from this perspective.
I cannot come up with a use-case for ChatGPT in my personal life, so no impact there.
For work it was a game-changer. No longer do I need to come up with haikus to announce that it is release-freeze day; I just let ChatGPT crap one out so we can all have a laugh at its lack of poetic talent.
I’ve tried it now and then for some programming related questions, but I found its solutions dubious at best.
It had a good impact for me; it saved me from an immense university headache. I explicitly told the professors that I have issues with grammar (despite it being my native language).
They kept freaking out about it and I eventually resorted to ChatGPT. Solved the issue immediately.
Are you my student? Having issues with grammar is just code for needing to learn grammar; you're in college lol. Multiple students try to fix their papers with ChatGPT, and it's so obvious and frequently gets them bad grades.
I see it differently… Certainly students have to learn it. However, when a student explicitly tells you they have problems with it and the professor refuses to listen, you can bet the students will resort to ChatGPT. It solves the current problem.
If the students just copy-paste it all, then obviously they get caught.
I, personally, have had issues with grammar in my native language since I was a kid. I have books to learn but that won’t solve the immediate issue with the thesis at that time. ChatGPT solved that issue directly.
So what I did was make sure there were at least 1 or 2 mistakes.
Also, I graduated and am currently just waiting to get the degree and searching for a job lol.
The biggest issue with this is that every essay you write is an opportunity to improve your writing. You chose to take the easy route. There is another commenter complaining about how they don't want to teach college writing because of LLMs. This is exactly why…
Well, learning the grammar won't happen within the 5 months of the thesis. I refuse to take on a lot of delay just to satisfy the professor and pay the university money just for that.
Whether it is an easy route or not, I honestly don’t care. All I care about is getting the degree.
And yeah, if that person wants to stop teaching writing. That’s their decision.
Your learning isn't for the prof or the university. That line of thinking is why teaching sucks. Why go to college if not to learn? What a waste of money.
I had a lot of motivation to learn, but that all crashed down when university started. The pandemic happened, professors did not want to give online classes, and we were not allowed to ask questions in online class.
I went to university because I want the degree and the job that comes with it.
We have a different opinion on this matter and that’s okay.
If professors don't want to teach… then don't? Having professors who don't want to listen or actually teach something, just reading off a PowerPoint and such. That ain't fun either.
I love how all these elitist fucktards are dismissing the countless number of people who claim that LLMs help them with their daily tasks.
I wonder if they also tell people wearing eyeglasses to stop cheating and learn how to appreciate the tools that were given to them by God… After all, these people probably also tried wearing eyeglasses and found them useless and limiting.
It's made my professional life way worse, because it was seen as an indication that every hack-a-thon attempt to put a stupid chatbot in everything is great, actually.
I'm a software person, and LLM tools for programming have been frankly remarkable. In my cleanest codebases, Copilot (using GPT-4) autocompletes my intention correctly about 70% of the time today, reducing the amount of code I physically type by a huge margin. The accuracy shifts over time, and it's dramatically less helpful for repositories that aren't pristine and full of well-named functions and variables.
Beyond that, ChatGPT has been a godsend for sifting through the internet for the information I need; the new web feature is just outstanding since it actually gives sources.
ChatGPT has also helped a ton with writer's block, getting me past plot points in my novel that I was having a hard time with.
It's been great with recipes: no more wading through fake life stories and ads.
It's been helpful for complex questions about new topics I'm an amateur in. I've learned so much about neurology and how neurons interact almost exclusively through the platform; fact-checking takes a little time, but so far it's been almost perfectly accurate on higher-level objective questions.
It's been helpful as a starting place for legal questions; the law is complex, and having a starting place before consulting the lawyers has been really nice, so I know what to ask.
I could go on
Been using Copilot instead of ChatGPT, but I'm sure it's mostly the same.
It adds comments and suggestions in PRs that are mostly useful and correct; I don't think it's found any actual bugs in PRs, though.
I used it to create one or two functions in golang, since I didn't want to learn its syntax.
The most use I've gotten out of it is as a replacement for searching with Google or Bing. It's especially good at finding more obscure things in documentation that are hard to Google for.
I've also started to use it personally for the same thing. Recently I'd been wanting to start up The Witcher 3 and remembered that there was something missable right at the beginning. Google results were returning videos that I didn't want to watch and lists of missable quests that I didn't want to parse through. Copilot gave me the answer without issue.
Perhaps that's why Google and MS are so excited about AI: it fixes their shitty search results.
Perhaps that's why Google and MS are so excited about AI: it fixes their shitty search results.
Google used to be fantastic at the same kinds of searches that AI is mediocre at now, and it went to crap because of search engine optimization; their AI search isn't any better. Even if AI eventually improves at searching, search-AI optimization will end up trashing that as well.
Friends and I have had a good laugh writing rap battles or poems about strangely specific topics, but that’s about it.