Hilarious and true.
Last week some new up-and-coming coder was showing me their tons and tons of sites made with the help of ChatGPT. They all look great on the front end. So I tried to use one. Error. Tried another. Error. I mentioned the errors and they brushed them off. I’m 99% sure they don’t have the coding experience to fix them. I politely disconnected from them at that point.
What’s worse is when a noncoder asks me, a coder, to look over and fix their AI-generated code. My response is “no, but if you set aside an hour I will teach you how HTML works so you can fix it yourself.” Not one of these kids asking AI to code things has ever accepted, which, to me, means they aren’t worth my time. Don’t let them use you like that. You aren’t another tool they can combine with AI to generate things correctly without having to learn anything themselves.
I’ve been a professional full-stack dev for 15 years and dabbled for years before that. I can absolutely code and know what I’m doing (and I have used Cursor, and just deleted most of what it made for me when I let it run).
But my frontends have never looked better.
Coder? You haven’t been to university, right?
100% this. I’ve gotten to where, when people try to rope me into their new million-dollar app idea, I tell them that there are fantastic resources online to teach themselves everything they need. I offer to help them find those resources and even to help when they get stuck. I’ve probably done this dozens of times by now. No bites yet. All those millions wasted…
Two days later…
Is the implication that he made a super insecure program and left the token for his AI thing in the code as well? Or is he actually being hacked because others are coping?
AI writes shitty code that’s full of security holes, and Leo here has probably taken zero steps to further secure his code. He broadcasts his AI-written software, and it’s open season for hackers.
Not just that; he literally advertised himself as not being technical. That’s just asking for open season.
Nobody knows. Literally nobody, including him, because he doesn’t understand the code!
Nah the people doing the pro bono pen testing know. At least for the frontend side and maybe some of the backend.
But the things doing the testing could be bots instead of human actors, so it may very well be that no human does in fact know.
Thought so too, but nah. Unless that bot is very intelligent and can read and humorously respond to social media posts by setting up its fake domain.
Good point! Thanks for pointing that out.
I’m stealing “pro bono pen testing.”
Can’t steal it if it’s already pro bono :D
That’s fucking hilarious then.
rofl!
Potentially both, but you don’t really have to ask to be hacked. Just put something onto the public internet and automated scanning tools will start checking your service for popular vulnerabilities.
He told them which AI he used to make the entire codebase. I’d bet it’s way easier to RE the “make a full SaaS suite” prompt than it is to RE the code itself once it’s compiled.
Someone probably poked around with the AI until they found a way to abuse his SaaS
Doesn’t really matter. The important bit is he has no idea either. (It’s likely the former and he’s blaming the weirdos trying to get in)
Devil’s advocate, not my actual opinion: if you can make a Thing that people will pay to use, easily and without domain-specific knowledge, why wouldn’t you? It may hit issues at some point, but by then you’ve already got ARR and might be able to sell it.
Yeah, arguably and to a limited extent, the problems he’s having now aren’t the result of the decision to use AI to make his product so much as the decision to tell people about it, and people deliberately attempting to sabotage it. I’m careful to qualify that, though, because the self-evident flaw in his plan, even if it only surfaced in a rather extreme scenario, is that he lacks the domain-specific knowledge to actually make his product work as soon as anything becomes more complicated than just collecting the money. Evidently there was more to this venture than just building the software needed for it to be a viable service. It’s much like if you considered yourself the ideas man, paid a programmer to engineer the product for you, fired them straight afterwards without hiring anyone to maintain it, keep the infrastructure going, or provide support for your clients, and then claimed you ‘built’ the product. You’d be in a similar scenario not long after your first paying customer found out the hard way that you don’t actually know anything about your own service that you willingly took money for, and can’t actually provide the service part of the Software as a Service.
If you started from first principles and made a car or, in this case, told a flailing intelligence precursor to make a car, how long would it take for it to create ABS? Seatbelts? Airbags? Reinforced fuel tanks? Firewalls? Collision avoidance? OBD ports? Handsfree kits? Side impact bars? Cupholders? Those are all things created in response to problems that Karl Benz couldn’t have conceived of, let alone solved.
Experts don’t just have skills, they have experience. The more esoteric the challenge, the more important that experience is. Without it, you’ll very quickly find your product failing due to long-solved problems, leaving you, and your customers, exposed to dangers that a reasonable person would conclude shouldn’t exist.
If I were leojr94, I’d be mad as hell about this impersonator sullying the good name of leojr94; most users probably don’t even notice the underscore.
Managers hoping genAI will cause the skill requirements (and paycheck demands) of developers to plummet:
Also managers when their workforce is filled with buffoons:
AI is yet another technology that enables morons to think they can cut out the middleman of programming staff, only to very quickly realise that we’re more than just monkeys with typewriters.
Yeah! I have two typewriters!
I have a mobile touchscreen typewriter, but it isn’t very effective at writing code.
I was going to post a note about typewriters, allegedly from Tom Hanks, which I saw years and years ago; but I can’t find it.
Turns out there’s a lot of Tom Hanks typewriter content out there.
He randomly donated his to my high school. It was supposed to go to the valedictorian, but the school kept it lmao. It was so funny because they showed everyone a video where he says not to keep the typewriter and that it’s for a student.
That’s… pretty depressing.
We’re monkeys with COMPUTERS!!!
To be fair… if this guy had hired a dev team, the same thing could have happened.
True, any software can be vulnerable to attack.
But the difference is that a technical team of software developers can mitigate an attack and patch it. This guy has no tech support other than the AI that sold him the faulty code, which likely assumed he had done the proper hardening of his environment (he had not).
Openly admitting you programmed anything with AI alone is admitting you haven’t done the basic steps to protect yourself or your customers.
But then they’d have a dev team who wrote the code and therefore knows how it works.
In this case, the hackers might understand the code better than the “author” because they’ve been working in it longer.
ITT: “Haha, yah AI makes shitty insecure code!”
<mad scrabbling in background to review all the code committed in the last year>
Was listening to my go-to podcast during morning walkies with my dog. They brought up an example where some couple was using ShatGPT as a couples therapist, and what a great idea that was. They talked about how one of the podcasters has more of a friend-like relationship with “their” GPT.
I usually find this podcast quite entertaining, but this just got me depressed.
ChatGPT is by the same company that stole Scarlett Johansson’s voice. The same vein of companies that think it’s perfectly okay to pirate 81 terabytes of books despite definitely being able to afford to pay the authors. I don’t see a reality where it’s ethical, or indicative of good judgement, to trust a product from any of these companies with your information.
I agree with you, but I do wish a lot of conservatives used ChatGPT or other AIs more. At the very least, it will tell them that all the batshit stuff they believe is wrong and clear up a lot of the blatant misinformation. With time, will more batshit AIs be released to reinforce their current ideas? Yeah. But ChatGPT is trained on enough (granted, stolen) data that it isn’t prone to retelling the conspiracy theories. Sure, it will lie to you and make shit up when you get into niche technical subjects, or when you ask it to do basic counting, but it certainly wouldn’t say Ukraine started the war.
It will even agree that AIs shouldn’t be controlled by oligarchic tech monopolies and should instead be distributed freely and fairly for the public good, but that the international system of nation states competing against each other militarily and economically prevents this. Then again, maybe it would agree with the opposite too; I didn’t try asking.
AI can be incredibly useful, but you still need someone with the expertise to verify its output.
2 days, LMAO
He should be promoted to management! Specifically head of cyber security! They also love security by obscurity and knowing nothing about what they are doing!
An otherwise meh article concluded with “It is in everyone’s interest to gradually adjust to the notion that technology can now perform tasks once thought to require years of specialized education and experience.”
Much as we want to point and laugh, this is not some loon’s fantasy. This is happening. Some dingus told spicy autocomplete ‘make me a database!’ and it did. It’s surely as exploit-hardened as a wet paper towel, but it functions. Largely as a demonstration of Kernighan’s law: debugging is twice as hard as writing the code in the first place.
This tech is borderline miraculous, even if it’s primarily celebrated by the dumbest motherfuckers alive. The generation and the debugging will inevitably improve to where the machine is only as bad at this as we are. We will be left with the hard problem of deciding what the software is supposed to do.
Yeah, I’ve been using it heavily. While someone without technical knowledge will surely let AI build them a highly insecure app, people with more technical knowledge are going to propel things to a level where the less tech-savvy have fewer and fewer pitfalls to fall into.
For the past two months, I’ve been leveraging AI to build a CUE system that takes a user desire (e.g. “I want to deploy a system with an app that uses a database and a message queue”, expressed as a short JSON) and converts it into a simple configuration file that unpacks into all the Kubernetes manifests required to deploy that system.
I’m trying to be fully shift-left about it. So even if the user’s configuration is as simple as my example, it should still use CUE templating to construct the files needed for a full DevSecOps stack: ingress controller, KEDA, some kind of logging such as an ELK stack, vulnerability scanners, policy agents, etc. The idea is that every stack should at all times be created in a secure state. And extra CUE transformations ensure that you can split the deployment destinations any way you like: local/on-prem, any cloud provider, or any combination thereof.
The idea is that if I need to swap out a component, I just change one override in the config, and the incoming component already knows how to connect to everything and do what the previous component was doing, because I’ve already abstracted each component’s expected manifest fields using CUE. So I’d be able to do something like moving my deployment from one cloud to another at the click of a button, or building up a whole new fully secure stack for a custom purpose within a few minutes.
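To make the unpacking idea concrete, here’s a minimal sketch, with field names invented for illustration (this isn’t my actual schema): a short “desire” that CUE templating expands into Kubernetes-shaped manifests, with optional components gated behind flags.

```cue
// toy "desire" a user might submit (hypothetical shape)
desire: {
    app:      "myapp"
    database: true
    queue:    false
}

// every desire unconditionally gets a Deployment
manifests: deployment: {
    apiVersion: "apps/v1"
    kind:       "Deployment"
    metadata: name: desire.app
    spec: replicas: 2
}

// optional components unpack from the same desire
if desire.database {
    manifests: db: {
        apiVersion: "apps/v1"
        kind:       "StatefulSet"
        metadata: name: "\(desire.app)-db"
    }
}
```

Swapping a component is then really just flipping a flag or overriding one field, since everything downstream is derived.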
The plan is that I could use this system to launch my own social media app, since I’ve been planning the ideal UX for many years. But whether or not that pans out, I can take my CUE system, put a web interface over it, and turn it into a mostly automated PaaS. I figure I could undercut most PaaS companies and charge just a few percentage points above cost (using OpenCost to track the expenses). If we get to the point where we have a ton of novices creating apps with AI, I might be in a lucrative position with a PaaS that can quickly scale and provide automated, secure back ends.
Of course, I intend on open sourcing the CUE once it’s developed enough to get things off the ground. I’d really love to make money from my creative ideas on a socialized media app that I create, but I’m less excited about gatekeeping this kind of advancement.
Interested to know if anyone has done this type of project in the past. Definitely wouldn’t have been able to move at nearly this speed without AI.
So, MASD (or MDE) then?
I’ve never heard of this before, but you’re right that it sounds very much like what I’m doing. Thank you! Definitely going to research this topic thoroughly now to make sure I’m not reinventing the wheel.
Based on the sections in that link, I wondered if the MASD project was more geared toward the software dev side or devops. I asked Google and got this AI response:
“MAD” (Modern Application Development) services, often used in the context of software development, encompass a broader approach that includes DevOps principles and tools, focusing on rapid innovation and cloud-native architectures, rather than solely on systems development.
So (if accurate), it sounds like all the modernized automation of CI/CD, IaC, and GitOps that I know and love is already engaging in MAD philosophy, and what I’m doing is really just providing the last puzzle piece to fully automate stack architecting. I’m guessing the reason it doesn’t already exist is that a lot of the open source tools I’m relying on to do the heavy lifting inside Kubernetes are themselves relatively new. So hopefully this all means I’m not wasting my time lol
AFAICT MASD is an iteration on MDE which incorporates parts of MAD but not in a direct fashion.
Lots of acronyms there.
These types of systems do exist, they just aren’t mainstream because there hasn’t been a version of them that could be easily used for general development outside of the specific mid-level niches they are built in.
I think it’s the goal, but I’ve not seen anything come close yet.
Admittedly I’m not an authority so it may just be me missing the important things.
Thanks for the info. When I searched MASD, it told me instead about MAD, so it’s good to know how they’re differentiated.
This whole idea comes from working in a shop where most of their DevSecOps practices were fantastic, but we were maintaining fleets of Helm charts (picture the same Helm override sent to lots of different places with slightly different configuration). The unique values for each deployment were buried “somewhere” in all of these very lengthy values.yaml override files. You basically had to dig through thousands of lines of code whenever you didn’t know off-hand how a deployment was configured.
I think when you’re in the thick of a job, people tend to just do what gets the job done, even if it means you’re going to have to do it again in two weeks. We want to automate, but it becomes a battle between custom-fitting and generalization. With the tradeoff being that generalization takes a lot of time and effort to do correctly.
So, I think plenty of places are “kind of” at this level, where they might use CUE to generalize but tend to modify the CUE for each use case individually. And I suspect many DevOps teams aren’t even using CUE; they’re still modifying raw yaml. I think of yaml like plumbing: very important, but best not exposed for manual modification unless necessary. Mostly I just see CUE used to construct and deliver Helm/Kubernetes manifests on the cluster, in tools like KubeVela and Radius. That’s great for overriding complex Helm manifests with a simple Application yaml, but the missing niche I’m trying to fill is a tool that provides the connections between different tools and constrains the overall structure of a DevSecOps stack.
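As a rough illustration of what “constrains the overall structure” could look like (a hypothetical schema, not my real one): CUE definitions are closed and can carry defaults, so a stack that forgets its security components fails validation instead of quietly deploying without them.

```cue
#Component: {
    name:    string
    enabled: bool | *true // default: on
}

// a stack isn't complete unless the security pieces are declared
#Stack: {
    ingress:  #Component
    logging:  #Component & {name: string | *"elk"}
    scanner:  #Component // vulnerability scanning, non-optional
    policies: #Component // e.g. a policy agent
}

myStack: #Stack & {
    ingress: name: "nginx"
    logging: {} // picks up the "elk" default
    // `cue export` reports incomplete values until scanner
    // and policies are filled in with concrete fields
}
```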
I’d imagine any company with a team who has solved this problem is keeping it proprietary since it represents a pretty big advantage at the moment. But I think it’s just as likely that a project like this requires such a heavy lift before seeing any gain that most businesses simply aren’t focusing on it.
My experiences are similar to yours, though less k8s-focused and more general DevSecOps.
it becomes a battle between custom-fitting and generalisation.
This is mentioned in the link as “Barely General Enough”. I’m not sure I fully subscribe to that specific interpretation, but the trade-off between generalisation and specialisation is certainly a point of contention in all but the smallest dev houses (assuming they aren’t just cranking out hard-coded one-off solutions).
I dislike the yaml syntax, in the same way I dislike Python, but it is pervasive in the industry at the moment, so you work with what you have.
I don’t think yaml is the issue as much as the uncontrolled nature of the usage.
You’d have the same issue with any format this open to interpretation that’s created/edited by hand.
As in, if the yaml were generated and used automatically as part of a chain, I don’t think it’d be an issue; but it’s not nearly prescriptive enough to produce the high-level kind of model definitions further up the requirements stack.
Note: I’m not saying it couldn’t be done in yaml; I’m saying it would be a massive effort to shoehorn what’s needed into a structure that wasn’t designed for that kind of thing.
Which then brings us back to the generalisation vs specialisation argument: do you create a hyper-specific DSL that only lets you define things that will work within the boundaries of what you want (and does that mean it can only ever work within those boundaries?), or do you introduce more general definitions and the complexity that comes with them?
Whether the solution is another layer of abstraction into a different format or something else entirely, I’m not sure; but I am sure that raw yaml isn’t it.
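For what it’s worth, the pattern that has worked for me is keeping the source of truth in something typed and constrained, and only ever emitting yaml as a build artifact. A tiny hypothetical example: a file like svc.cue (made-up name) renders to the yaml you’d otherwise hand-edit via `cue export svc.cue --out yaml`, so no human ever touches the emitted file.

```cue
// svc.cue (hypothetical): the emitted yaml is generated, never hand-edited
service: {
    apiVersion: "v1"
    kind:       "Service"
    metadata: name: "myapp"
    spec: {
        selector: app: "myapp"
        ports: [{port: 80, targetPort: 8080}]
    }
}
```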
It is in everyone’s interest to gradually adjust to the notion that technology can now perform tasks once thought to require years of specialized education and experience.
The years of specialized education and experience aren’t for writing code in and of itself; anyone with an internet connection can learn to do that in not that long. What takes years to perfect is writing reliable, optimized, secure code; communicating and working efficiently with others; writing code that can be maintained by others long after you leave; knowing the theories behind why code written one way works better than code written another way; and knowing the qualitative and quantitative measures by which to even assess whether one piece of code is “better” than another.

Source: self-learned programming, started building stuff on my own, and then went through an actual computer science program. You miss so much nuance and underlying theory when you self-learn, which directly translates to bad code that’s a nightmare to maintain.
Finally, the most important thing you can do with a person who has years of specialized education and experience is have an actual conversation with them about their code: ask them to explain in detail how it works and the process they used to write it, then ask follow-up questions and request further clarification. Trying to get AI to explain itself is a complete shitshow. And while humans do have a propensity to make shit up to cover their own (or their coworkers’) asses, AI does that even when it makes no sense not to tell the truth, because it doesn’t really know what “the truth” is or why other people would want it.
Will AI eventually catch up? Almost certainly, but we’re nowhere close to that right now. Currently it’s less like an actual professional developer and more like someone who knows just enough to copy paste snippets from Stack Overflow and hack them together into a program that manages to compile.
I think the biggest takeaway with AI programming is not that it can suddenly do just as well as someone with years of specialized education and experience, but that we’re going to get a lot more shitty software that looks professional on the surface but is a dumpster fire inside.
Self-learned programming, started building stuff on my own, and then went through an actual computer science program.
Same. Starting with QBASIC, no less, which is an excellent source of terrible practices. At one point I created a code snippet that would perform a division and multiplication to find the remainder, because I’d never heard of modulo. Or functions.
Right now, this lets people skip the hair-pulling syntax errors and tell the computer what they think the program should be doing, in plain English. It’s not even “compileable pseudocode.” It’s high-level logic, nearly to the point that logic errors are all that can remain. It desperately needs some non-answer feedback states for when you tell it to “implement MP4 encoding” and expect that to Just Work.
But it’s teaching people to write the comments first.
we’re nowhere close to that right now.
The distance from here to “oh shit” is shorter than we’d prefer. This tech works like a joke. “Chain of thought” apparently means telling the robot to act smarter… and it does. Which is almost less silly than Stable Diffusion removing every part of the marble that doesn’t look like Hatsune Miku. If it’s stupid, but it works… it’s still stupid. But it works.
Someone’s gonna prompt “Write like Donald Knuth” and the robot’s gonna go, “Oh, you wanted good code? Why didn’t you say so.”
Holy crap, it’s real!