Ok, but what did they try to do as a SaaS?
Money.
Devil's advocate, not my actual opinion: if you can make a Thing that people will pay to use, easily and without domain-specific knowledge, why would you not? It may hit issues at some point, but by then you've already got ARR and might be able to sell it.
Yeah, arguably, and to a limited extent, the problems he's having now aren't the result of the decision to use AI to make his product so much as of the decision to tell people about that, and of people deliberately attempting to sabotage it. I'm careful to qualify that, though, because the self-evident flaw in his plan, even if it only surfaced in a rather extreme scenario, is that he lacks the domain-specific knowledge to actually make his product work as soon as anything becomes more complicated than just collecting the money. Evidently there was more to this venture than just building the software needed for it to be a viable service.

It's much like if you considered yourself the ideas man, paid a programmer to engineer the product for you, fired them straight after without hiring anyone to maintain it, keep the infrastructure going, or provide support for your clients, and then claimed you 'built' the product: you'd be in a similar scenario not long after your first paying customer found out the hard way that you don't actually know anything about your own service that you willingly took money for, and that you can't actually provide the 'service' part of Software as a Service.
If you started from first principles and made a car or, in this case, told a flailing intelligence precursor to make a car, how long would it take for it to create ABS? Seatbelts? Airbags? Reinforced fuel tanks? Firewalls? Collision avoidance? OBD ports? Handsfree kits? Side impact bars? Cupholders? Those are things created as a result of problems that Karl Benz couldn't have conceived of, let alone solved.
Experts don't just have skills, they have experience. The more esoteric the challenge, the more important that experience is. Without that experience you'll very quickly find your product failing due to long-solved problems, leaving you - and your customers - exposed to dangers that a reasonable person would conclude shouldn't exist.
Was listening to my go-to podcast during morning walkies with my dog. They brought up an example where some couple was using ShatGPT as a couples' therapist, and what a great idea that was. They talked about how one of the podcasters has more of a friend-like relationship with "their" GPT.
I usually find this podcast quite entertaining, but this just got me depressed.
ChatGPT is made by the same company that stole Scarlett Johansson's voice, the same vein of companies that think it's perfectly okay to pirate 81 terabytes of books despite definitely being able to afford paying the authors. I don't see a reality where it's ethical, or indicative of good judgement, to trust a product from any of these companies with your personal information.
I agree with you, but I do wish a lot of conservatives used ChatGPT or other AIs more. It will, at the very least, tell them all the batshit stuff they believe is wrong and clear up a lot of the blatant misinformation. With time, will more batshit AIs be released to reinforce their current ideas? Yeah. But ChatGPT is trained on enough (granted, stolen) data that it isn't prone to retelling the conspiracy theories. Sure, it will lie to you and make shit up when you get into niche technical subjects, or when you ask it to do basic counting, but it certainly wouldn't say Ukraine started the war.
It will even agree that AIs shouldn't be controlled by oligarchic tech monopolies and should instead be distributed freely and fairly for the public good, but that the international system of nation states competing against each other militarily and economically prevents this. Then again, maybe it would agree with the opposite of that too; I didn't try asking.
AI can be incredibly useful, but you still need someone with the expertise to verify its output.
Holy crap, it’s real!
I took a web dev boot camp. If I were to use AI I would use it as a tool and not the motherfucking builder! AI gets even basic math equations wrong!
Can't expect predictive text to be able to do math. You can get it to use a programming language to do it tho. If you ask it in a programmatic way it'll generate and run its own code. Only way I got it to count the number of r's in strawrbrerry.
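Roughly the kind of thing it generates for itself when you ask programmatically (a minimal sketch; the actual code the model writes will vary):

```python
# Count letters by actually iterating over the string,
# instead of having the model guess token by token.
word = "strawrbrerry"
count = word.count("r")
print(f"'r' appears {count} times in {word}")  # prints: 'r' appears 5 times in strawrbrerry
```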
I love strawrbrerry mllilkshakes.
That is the future of AI written code: Broken beyond comprehension.
Ooh is that job security I hear???
taste of his own medicine
This feels like the modern version of those people who gave out the numbers on their credit cards back in the 2000s and would freak out when their bank accounts got drained.
But what site is he talking about?
I hope this is satire 😭
Eat my SaaS
Yes, yes, there are weird people out there. That's the whole point of having humans who are able to understand the code, so they can correct it.
ChatGPT, make this code secure against weird people trying to crash and exploit it.
beep boop
fixed 3 bugs
added 2 known vulnerabilities
added 3 race conditions
boop beep
Roger Roger
The fact that “AI” hallucinates so extensively and gratuitously just means that the only way it can benefit software development is as a gaggle of coked-up juniors making a senior incapable of working on their own stuff because they’re constantly in janitorial mode.
Plenty of good programmers use AI extensively while working. Me included.
Mostly as an advanced autocomplete, template builder, or documentation parser.
You obviously need to be good at it so you can see at a glance whether the generated code is good or bullshit. But if you are good, it can really speed things up without any risk, as you will only copy code that you know is good and discard the bullshit.
Obviously you cannot develop without programming knowledge, but with programming knowledge it's just another tool.
I maintain a strong conviction that if a good programmer uses an LLM in their work, they just add more work for themselves, and if a less-than-good one does it, they add exciting new and difficult-to-find bugs while maintaining false confidence in their code and themselves.
I have seen so much code that looks good on first, second, and third glance but is actually full of shit, and I was able to find that shit by doing external validation, like talking to the dev or brainstorming ways to test it, the things you categorically cannot do with an unreliable random-word generator.
That's why you use unit tests and integration tests.
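For example, a minimal sketch of the idea, with a hypothetical LLM-written helper and a plain pytest-style test:

```python
# Suppose an LLM wrote this helper and it "looks good at a glance".
def median(values):
    ordered = sorted(values)
    # Subtle bug an eyeball review can miss: the even-length case
    # should average the two middle elements.
    return ordered[len(ordered) // 2]

# A unit test validates the behaviour regardless of who (or what) wrote it.
def test_median_even_length():
    assert median([1, 2, 3, 4]) == 2.5  # fails and exposes the bug
```

Run under pytest, the test fails (the function returns 3), which is exactly the external validation you can't get by just rereading the code.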
I can write bad code myself, or copy bad code from who-knows-where. It's not something introduced by LLMs.
Remember the famous Linus letter? "You wrote this function without understanding it, and thus your code is shit."
As I said, just a tool like many other before it.
I use it as a regular practice while coding. And truth be told, reading my code afterwards I could not distinguish which parts were from the LLM and which parts I wrote fully by myself, and, to be honest, I don't think anyone would be able to tell the difference.
It would probably be a nice idea to do some kind of Turing test: put up a blind test to distinguish the AI-written parts of some code, and see how precisely people can tell them apart.
I may come back with a particular piece of code that I specifically remember being an output from deepseek, and probably within the whole context it would be indistinguishable.
Also, not all LLM usage is about copying from it. Many times you copy code to it and ask the thing to explain it to you, or ask general questions. For instance, to find specific functions in extensive C# libraries.
Depending on what it is you’re trying to make, it can actually be helpful as one of many components to help get your feet wet. The same way modding games can be a path to learning a lot by fiddling with something that’s complete, getting suggestions from an LLM that’s been trained on a bunch of relevant tutorials can give you enough context to get started. It will definitely hallucinate, and figuring out when it’s full of shit is part of the exercise.
It’s like mid-way between rote following tutorials, modding, and asking for help in support channels. It isn’t as rigid as the available tutorials, and though it’s prone to hallucination and not as knowledgeable as support channel regulars, it’s also a lot more patient in many cases and doesn’t have its own life that it needs to go live.
Decent learning tool if you’re ready to check what it’s doing step by step, look for inefficiencies and mistakes, and not blindly believe everything it says. Just copying and pasting while learning nothing and assuming it’ll work, though? That’s not going to go well at all.
It'll just keep getting better at it over time though. The current AI is way better than 5 years ago, and in 5 years it'll be way better than now.
My hobby: extrapolating.
Past performance does not guarantee future results
To get better it would need better training data. However, there are always more junior devs creating bad training data than senior devs creating slightly better training data.
And now LLMs being trained on data generated by LLMs. No possible way that could go wrong.
That’s certainly one theory, but as we are largely out of training data there’s not much new material to feed in for refinement. Using AI output to train future AI is just going to amplify the existing problems.
Just generate the training material, duh.
This is certainly the pattern that is actively emerging.
I mean, the proof is sitting there wearing your clothes. General intelligence exists all around us. If it can exist naturally, we can eventually do it through technology. Maybe there needs to be more breakthroughs before it happens.
"More breakthroughs," spoken like we get these once a day, like milk delivery.
I mean - have you followed AI news? This whole thing kicked off maybe three years ago, and now local models can render video and do half-decent reasoning.
None of it’s perfect, but a lot of it’s fuckin’ spooky, and any form of “well it can’t do [blank]” has a half-life.
If you follow AI news you should know that it's basically out of training data, that returns from extra training diminish sharply (so extra training data would only have limited impact anyway), that companies are starting to train AI on AI-generated data, both intentionally and unintentionally, and that hallucinations and unreliability are baked into the technology.
You also shouldn't take improvements at face value. The latest ChatGPT is better than the previous version, for sure. But its achievements are exaggerated (for example, it already knew the answers ahead of time for the specific maths questions it was shown answering, and it isn't better than before, or than other LLMs, at solving maths problems whose answers it doesn't already have hardcoded), and the way it operates is to have a second LLM check its outputs. Which means it takes, IIRC, 4-5 times the energy (and therefore cost) for each answer, for a marginal improvement in functionality.
The idea that "they've come on in leaps and bounds over the last 3 years, therefore they will continue to improve at that rate" isn't really supported by the evidence.
We don't need leaps and bounds from here. We're already in science fiction territory. Incremental improvement has silenced a wide variety of naysaying.
And this is with LLMs - which are stupid. We didn’t design them with logic units or factoid databases. Anything they get right is an emergent property from guessing plausible words, and they get a shocking amount of things right. Smaller models and faster training will encourage experimentation for better fundamental goals. Like a model that can only say yes, no, or mu. A decade ago that would have been an impossible sell - but now we know data alone can produce a network that’ll fake its way through explaining why the answer is yes or no. If we’re only interested in the accuracy of that answer, then we’re wasting effort on the quality of the faking.
Even with this level of intelligence, where people still bicker about whether it is any level of intelligence, dumb tricks keep working. Like telling the model to think out loud. Or having it check its own work. These are solutions an author would propose as comedy. And yet: it helps. It narrows the gap between "but right now it sucks at [blank]" and having to find a new [blank]. If that never lets it do math properly, well, buy a calculator.
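A minimal sketch of what those two tricks look like in practice, using the openai Python client (the model name, prompts, and question are placeholders, not anything from the discussion above):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

# Trick 1: tell the model to think out loud before answering.
draft = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user",
               "content": question + " Think step by step, then state the answer."}],
).choices[0].message.content

# Trick 2: have it check its own work in a second pass.
checked = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Question: {question}\nProposed answer:\n{draft}\n"
                          "Check the reasoning for mistakes, then give the final answer."}],
).choices[0].message.content

print(checked)
```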
Seen a few YouTube channels now that just churn out AI-generated content, usually audio-only with a generated picture on screen. Vast amounts can be made that cheaply; Google is going to have fun storing it all when each video only gets like 25 views. I think at some point they're going to have to delete stuff.
Dipshits going “I made this!” is not indicative of what this makes possible.
That’s your interpretation.
That's reality. Unless you're deluded enough to think it's magic.
So no change to how it was before, then.
Different shit, same smell
This is what happens when you don't know what your own code does: you lose the ability to manage it. That is precisely why AI won't take programmers' jobs.
I don’t need ai to not know what my code does
but with AI you can not know even faster. So efficient
You are even freeing up the space that was needed to comprehend and critically think
More space to keep up with the latest brainrot