this post was submitted on 22 Oct 2024
226 points (86.9% liked)

[–] [email protected] 2 points 1 day ago

I recently removed in-editor AI because I noticed I was acquiring muscle memory for my brain: I'd stop thinking past the start of a snippet that would get an LLM to autocomplete the rest. I'm still using LLMs, particularly for languages and libraries I'm not familiar with, but through the artifact editors in ChatGPT and Claude.

[–] [email protected] 4 points 1 day ago* (last edited 9 hours ago)

I asked ChatGPT to give me a script to parse my history file and output the dates in something human-readable. Later I learned you can just use something like history -i, which was basically instantaneous, while the script was slow.

Though I'm questioning my memory of this, because I don't see that flag in the man pages online.

Edit: in ChatGPT's defense, this was actually something Oh My Zsh added, not zsh.
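
For reference, a minimal sketch of that kind of script, assuming zsh's EXTENDED_HISTORY format (lines like ": 1729612345:0;ls -la") and the default ~/.zsh_history location:

    #!/usr/bin/env python3
    # Minimal sketch: print zsh history entries with human-readable dates.
    # Assumes EXTENDED_HISTORY lines of the form ": <epoch>:<elapsed>;<command>".
    import re
    from datetime import datetime
    from pathlib import Path

    HIST_FILE = Path.home() / ".zsh_history"  # assumed default location
    LINE_RE = re.compile(r"^: (\d+):\d+;(.*)$")

    for raw in HIST_FILE.read_text(errors="replace").splitlines():
        m = LINE_RE.match(raw)
        if not m:
            continue  # skip continuation lines and untimestamped entries
        ts, cmd = int(m.group(1)), m.group(2)
        print(datetime.fromtimestamp(ts).strftime("%Y-%m-%d %H:%M:%S"), cmd)

The builtin route in plain zsh is fc -li 1; Oh My Zsh's history wrapper is what makes history -i work, which would explain why the flag doesn't show up in the stock man pages.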

[–] [email protected] 20 points 1 day ago (2 children)

I've been writing code professionally for nearly two decades, and I love having copilot available in my IDE. When there is some boilerplate or a SQL query I just don't want to write, it'll oftentimes get me started with something reasonable that is wrong in a couple of subtle ways. I then fix it, laugh at how wrong it was, or use part of the proposed answer in my project.

If you're a non-coder, sure, it is pure danger, but if you know what you're doing it can give you a little boost. Only time will tell if it makes me rusty on some basics, but it's another tool in the toolbox now.

[–] [email protected] 2 points 6 hours ago* (last edited 6 hours ago)

Same here (15 years). I work in all sorts of frameworks and languages. I normally would have just googled a given question to see the code I needed, pasted it in with everything that's wrong with it, and fixed it to my liking. I know what I'm doing; I was just missing the specific words I haven't used in a couple of years, and I still understand them. Copilot just saves me from opening Google, clicking through some bad SEO, passing over the bad answers, and doing all that a couple more times to bring in everything I need. It's a Google formatter.

It's also exactly like searching Google. If you ask "is this cancer" you'll find cases where it's cancer; if you ask "is this not cancer" you'll find cases where it's not cancer. You can't trust it in that way, but you can still quickly parse the internet. I make juniors explain their code, so even if they paste it in, they're kind of forced to research it more to make sure they get it; it's on the reviewers now to train the LLM kiddos.

[–] [email protected] 3 points 20 hours ago

For me personally, there are only two applications of LLMs in programming:

  • doing tasks I kinda know how to do but don't want to properly learn (recent example: generating pgf plots from CSV data in matplotlib; see the sketch below. It's 90% boilerplate, I last had to do it 3 years ago, and I vaguely remember some pitfalls, so I can steer the LLM around them. I will probably never have to do this again, so it's not worth the extra couple of hours to learn properly)
  • things I would ordinarily write a script for, but that aren't worth automating because they won't come up again in the future (example: converting this Lua table to a Nix set)

Essentially, one-off things that you know how to check for correctness.
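
The matplotlib case is exactly that kind of 90%-boilerplate task. A minimal sketch, assuming a hypothetical two-column data.csv with an x,y header (the pgf backend also needs a working LaTeX install):

    # Minimal sketch: plot two CSV columns and export a PGF file for LaTeX.
    # "data.csv" with header "x,y" is a hypothetical stand-in for real data.
    import csv

    import matplotlib
    matplotlib.use("pgf")  # pick the backend before importing pyplot

    import matplotlib.pyplot as plt

    xs, ys = [], []
    with open("data.csv", newline="") as f:
        for row in csv.DictReader(f):
            xs.append(float(row["x"]))
            ys.append(float(row["y"]))

    fig, ax = plt.subplots(figsize=(4, 3))
    ax.plot(xs, ys)
    ax.set_xlabel("x")
    ax.set_ylabel("y")
    fig.tight_layout()
    fig.savefig("plot.pgf")  # then \input{plot.pgf} in the LaTeX document

One of the pitfalls worth steering around: selecting the backend after pyplot is already imported, which is exactly the kind of half-remembered detail that makes it easy to sanity-check the LLM's output.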

[–] [email protected] 9 points 1 day ago

I don’t have an encyclopedic knowledge of every random library or built-in function of every language on earth, so what’s the difference between googling for an example on Stack Overflow and asking an LLM?

If you ask ChatGPT for every single piece of code, it will be terrible, because it just hallucinates libraries or misunderstands the prompt. But saying any kind of use makes you a bad programmer seems more like FUD than actual concern.

[–] [email protected] 41 points 2 days ago

I can be a bad programmer without using AI generated code. 😤

[–] [email protected] 13 points 2 days ago (1 children)

I was saying AI for coding is bad until I saw two pictures:

[–] [email protected] 6 points 1 day ago (1 children)
[–] [email protected] 1 points 1 day ago* (last edited 1 day ago)

Yes! Best pony.

[–] [email protected] 7 points 2 days ago

Not using AI-generated code won't make you a programmer at all. It's just another way to start a journey to alcoholism and loneliness in front of a computer screen. The only difference is that this time you travel with a junior developer for poor people.

[–] [email protected] 20 points 2 days ago (1 children)

I don’t love AI, but programming is engineering. The goal is to solve a problem, not to be the best at solving a problem.

Also, I can write shitty code without help anyway.

[–] [email protected] 12 points 2 days ago (2 children)

The issue with engineering is that if you don't solve it efficiently and correctly enough, it'll blow up later.

[–] [email protected] 2 points 1 day ago

That's a problem for HR if they have shitty retention.

[–] [email protected] 7 points 2 days ago* (last edited 2 days ago) (4 children)

Sounds like a problem for later

Flippancy aside: the fundamental rule in all engineering is solving the problem you have, not the problem you might have later

[–] [email protected] 0 points 1 day ago
[–] [email protected] 6 points 2 days ago

It's rarely the case. You rarely work in a vacuum where your work only affects what you do at the moment. There is always a downstream or upstream dependency or requirement that needs to be met, and you have to take it into account in your development.

You have to avoid the future problems you are aware of. If that's not possible, you have to mitigate their impact.

It's not possible to know all the problems that might or will happen, but with a little work before a project, a lot of issues can be avoided or mitigated.

I wouldn't want civil engineers thinking like that, because our infrastructure would be a lot worse than it is today.

[–] [email protected] 4 points 2 days ago (1 children)

That doesn't apply to all engineering. In chemical engineering, it'll literally blow up later...

[–] [email protected] 5 points 2 days ago (1 children)

“Not blowing up later” would be part of the problem being solved

Engineering for future requirements almost always turns out to be a net loss. You don’t build a distillation column to process 8000T of benzene if you only need to process 40T.

[–] [email protected] 3 points 1 day ago

But you could design it to be easily scalable instead of having to build another, even more expensive one when you suddenly need to process 41T.

[–] [email protected] 3 points 2 days ago

What's really ugly is that it makes really good-looking code with fucking terrible bugs. My last job, which lasted all of six weeks, was trying to fix an integrations wrapper around an integrations wrapper on a third-party library of integrations.

It looked like really good code, but the architecture was fucked beyond repair. I was supposed to support it for a Fortune 50. I quit before they could put me in the on-call rotation.

[–] [email protected] 0 points 2 days ago

Preface: If all you want is to get a simple script/program going that will more or less work for your purposes, then I understand using AI to make it. But doing much more than this with it will not help you.

If you want to actually learn to code, then using AI to write code for you is a crutch. It's like trying to learn how to write an essay by having ChatGPT write the essays for you. If you want to use an API in your code, then you're setting yourself up for greater failure the more you depend on AI.

Case in point: if you want to make a module or script for Foundry VTT, then they explicitly tell you not to use AI, partly because the models available online have outdated information. In fact, training AI on their documentation is explicitly against the terms of service.

Even if you do this and avoid losing your license, you run a significant risk of getting unusable code because the AI hallucinated a function or method that doesn't actually exist. You will likely wind up spending more time scouring the docs for what you actually want to do than if you'd just done it yourself to begin with.

And if the code works perfectly now, there's no guarantee that it will work forever, or even in the medium term. The software and API receive updates regularly. If you don't know how to read the docs and write the code you need, you're screwed when something inevitably gets deprecated and removed. The more you depend on AI to write it for you, the less capable you'll be of debugging it down the line.

This raises the question: why would you do any of this if you wanted to make something using an API?
