I'd sooner picture planes taking off from those railroads when it comes to AI. It tends to hallucinate API calls that don't exist. If you don't go check the docs yourself, you'll have a hard time debugging what went wrong.
It depends. AI can help write good code. Or it can write bad code. It depends on the developer's goals.
My goal is to write bad code
LLMs can be great for translating pseudocode into real code, creating boilerplate, or automating tedious stuff, but ChatGPT is terrible at actual software engineering.
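For a concrete picture of that pseudocode-to-code translation, here's a minimal hypothetical sketch; the pseudocode, the function name, and the data shape are all invented for illustration:

```python
# Pseudocode you might hand to an LLM (invented example):
#
#   for each order in orders:
#       if order is paid and not shipped:
#           add order to backlog
#   sort backlog by oldest first
#
# ...and the kind of straightforward Python it can translate that into:
def build_shipping_backlog(orders):
    """Return paid-but-unshipped orders, oldest first."""
    backlog = [o for o in orders if o["paid"] and not o["shipped"]]
    return sorted(backlog, key=lambda o: o["created_at"])

# Quick sanity check with made-up data:
orders = [
    {"id": 1, "paid": True, "shipped": False, "created_at": "2024-02-01"},
    {"id": 2, "paid": True, "shipped": True, "created_at": "2024-01-15"},
]
assert [o["id"] for o in build_shipping_backlog(orders)] == [1]
```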
Honestly I just use it for the boilerplate crap.
Fill in that YAML config, write those Lua bindings that are just a sequence of lua_pushinteger(L, 1) calls, write the params for my docstrings, that kind of stuff.
Saves me a ton of time to think about the actual structure.
I gave it a harder software dev task a few weeks ago... something that isn't answered on the internet... It was as clueless as me, but unlike me, it made up shit that could never work.
OTOH humans did design the tracks in both images.
If you know what you're doing, AI is actually a massive help. You can make it do all the repetitive shit for you. You can also have it write the code and either clean it up or take the pieces that work for you. It saves soooooo much time and I freaking love it.
I've been trying to use aider for this, it seems really cool but my machine and wallet cannot handle the sheer volume of tokens it consumes.
I don't even know what Aider is. Lol. There are so many assistants out there. My company created a wrapper for ChatGPT, gave us an unlimited number of tokens, and told us to go ham.
Aider is an LLM agent-type app that has a programming assistant and an architect assistant.
You tell the architect what you want, and it scans the structure of your codebase to generate the boilerplate. Then the coder fills it in. It has command prompt access so it can compile, run, etc.
I haven’t really figured it out yet.
It's taken me a while to learn how to use it and where it works best, but I'm coming around to where it fits.
Just today I was doing a new project. I wrote a couple lines about what I needed and asked for a database schema. It looked about 80% right. Then I asked for all the models for the ORM I wanted and it did that. Probably saved an hour of tedious typing.
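To make "all the models for the ORM" concrete: a minimal sketch of the kind of boilerplate being described, assuming SQLAlchemy and an invented users table (the library choice, table, and column names are mine, not from the comment above):

```python
# Hypothetical ORM boilerplate an LLM can churn out from a short description.
from sqlalchemy import Column, DateTime, Integer, String, func
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True)
    email = Column(String(255), unique=True, nullable=False)
    display_name = Column(String(100), nullable=False)
    created_at = Column(DateTime, server_default=func.now())
```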
I'm telling you. It's fantastic for the boring and repetitive garbage. Databases? Oh hell yeah, it does really well on that, too. You have no idea how much I hate working with SQL. The ONLY thing it still struggles with so far is negative tests. For some reason, every single AI I've ever tried did well on positive tests, but was just plain bad at the negative ones.
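For what it's worth, here's a hypothetical illustration of that positive vs. negative test split, using pytest and an invented parse_age function; the names and behavior are mine, just to show the shape of the two kinds of tests:

```python
import pytest

def parse_age(value: str) -> int:
    age = int(value)  # raises ValueError on non-numeric input
    if age < 0:
        raise ValueError("age must be non-negative")
    return age

def test_parse_age_valid():
    # Positive test: well-formed input produces the expected value.
    assert parse_age("42") == 42

def test_parse_age_rejects_negative():
    # Negative test: bad input must fail loudly, not slip through.
    with pytest.raises(ValueError):
        parse_age("-1")
```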
That's the thing, it's a useful assistant for an expert who will be able to verify any answers.
It's a disaster for anyone who's ignorant of the domain.
Tell me about it. I teach a Python class. Super basic, super easy. Students are sometimes idiots, but if they follow the steps, most of them should be fine. Sometimes I get one who thinks they can just do everything with ChatGPT. They'll be working on their final assignment and they'll ask me what a for loop is for. Then I look at their code and it looks like Sanskrit. They probably haven't written a single line of code in all those weeks.
Shhhh! You're not supposed to rock the AI hate boat.
I hate the ethics of it, especially the image models.
But frankly it's here, and lawyers were supposed to have figured out the ethics of it.
I use hosted Deepseek as an FU to OpenAI and GitHub for stealing my code.
Lmao. I don't give a shit. I've been saving a ton of time ever since I started using it. It gobbles up CSS, HTML and JS like hotcakes, and I'm very much ok with that.
Give it time; eventually every project looks like the one on the right.
I mean, not quite every project. Some of my projects have been turned off for not being useful enough before they had time to get that bad. Lol.
I suppose you covered that with "give it time," though.
You can get decent results from AI coding models, though...
...as long as somebody who actually knows how to program is directing it. Like if you tell it what inputs/outputs you want, it can write a decent function - even going so far as to comment it along the way. I've gotten O1 to write some basic web apps with Node and HTML/CSS without having to hold its hand much. But we simply don't have the training, resources, or data to get it to work on units larger than that. Ultimately it'd have to learn from large-scale projects, and have the context size to hold if not the entire project then significant chunks of it in context, and that would require some very beefy hardware.
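As a made-up example of that "give it the inputs and outputs" kind of task (the function, names, and numbers are mine, not something any particular model produced):

```python
# "Write a function that takes a list of prices and a tax rate and
# returns the total, rounded to cents." Small, well specified, easy to verify.
def total_with_tax(prices: list[float], tax_rate: float) -> float:
    """Sum the prices, apply the tax rate, and round to two decimals."""
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)

assert total_with_tax([19.99, 5.00], 0.08) == 26.99
```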
And only if you're doing something that has been previously done and publicly released.
Generally only for small problems. Like things under 300 lines of code. And the problem generally can’t be a novel problem.
But that’s still pretty damn impressive for a machine.
Yeah. I'm so dang cranky about all the overselling that how cool I think this stuff is often gets lost.
300 lines of boring code from thin air is genuinely cool, and gives me more time to tear my hair out over deployment problems.
I'm looking forward to the next 2 years, when AI apps are in the wild and I get to fix them lol.
As a sr. dev, the wheel just keeps turning.
I'm being pretty resistant to AI code gen. I assume we're not too far away from "Our software product is a handcrafted bespoke solution to your B2B needs that will enable synergies without exposing your entire database to the open web".
Our gluten-free code is handcrafted with all-natural intelligence.
It has its uses. For templating and/or getting a small project off the ground it's useful. It can get you 90% of the way there.
But the meme is SOOO correct. AI does not understand what it is doing, even with context. The things jr devs are giving me really make me laugh. I legit asked why they were throwing a very old version of React on the front end of a new project, and they stated they "just did what chatgpt told them" and that it "works". That was just last month or so.
The AI that is out there is all based on old posts and isn't keeping up with new stuff. So you get a lot of same-ish looking projects that have some very strange/old decisions to get around limitations that no longer exist.
The AI also enables some very bad practices.
It does not refactor, and it makes writing repetitive code so easy that you miss opportunities to abstract. In a week, when you go to refactor, you're going to spend twice as long on that task.
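A hypothetical sketch of that trap, using sqlite3 and invented users/orders tables (all names are mine, for illustration only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL);
""")

# An assistant will happily emit near-identical lookups like these...
def get_user(user_id):
    row = conn.execute("SELECT * FROM users WHERE id = ?", (user_id,)).fetchone()
    if row is None:
        raise KeyError(user_id)
    return row

def get_order(order_id):
    row = conn.execute("SELECT * FROM orders WHERE id = ?", (order_id,)).fetchone()
    if row is None:
        raise KeyError(order_id)
    return row

# ...where one small helper (table names come from code, never user input)
# would have saved the refactor a week later:
def get_by_id(table, item_id):
    row = conn.execute(f"SELECT * FROM {table} WHERE id = ?", (item_id,)).fetchone()
    if row is None:
        raise KeyError(item_id)
    return row
```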
As long as you know what you're doing and guide it accordingly, it's a good tool.
Yeah, personally I think LLMs are fine for like writing a single function, or to rubber-duck with for debugging or thinking through some details of your implementation, but I'd never use one to write a whole file or project. They have their uses, and I do occasionally use something like Ollama to talk through a problem and get some code snippets as a starting point. Trying to do much more than that is asking for problems though. It makes things way harder to debug because it becomes reading code you haven't written, it can make the code style inconsistent, and a not-insignificant amount of the time, even in short code segments, it will hallucinate a non-existent function or implement something incorrectly, so using it to write massive amounts of code makes that way more likely.
The Cursor AI debugging is the best experience ever.
It's so much easier than googling the stack trace and then browsing GitHub issues and Stack Overflow.
without exposing your entire database to the open web until well after your payment to us has cleared, so it's fine.
Lol.
You can instantly get whatever you want, only it’s made from 100% technical debt
That estimate seems a little low to me. It's at least 115%.
Even more. The first 100% of the tech debt is just understanding "your own" code.
And then 12 hours spent debugging and pulling it apart.
And if you need anything else, you have to use a new prompt which will generate a brand new application, it's fun!
That's not really how agentic AI programming works anymore. Tools like Cursor automatically pick files as "context", and you can manually add files, or the whole codebase, as well. That obviously uses way more tokens though.
And of course the AI put rail signals in the middle.
Chain in, rail out. Always
>!Factorio/Create mod reference if anyone is interested!<