this post was submitted on 21 Sep 2024
87 points (72.5% liked)

Please remove this if it's not allowed

I see a lot of people here who get mad at AI-generated code, and I am wondering why. I wrote a couple of bash scripts with the help of ChatGPT and, if anything, I think it's great.

Now, I obviously didn't tell it to write the entire code by itself. That would be a horrible idea. Instead, I would ask it questions along the way and test its output before putting it in my scripts.

I am fairly competent in writing programs. I know how and when to use arrays, loops, functions, conditionals, etc. I just don't know Bash's syntax. I could have used any other language I know, but I chose Bash because it made the most sense: it ships with most Linux distros out of the box, so you don't have to install another interpreter or compiler. I don't love Bash because of its, dare I say, weird syntax, but it made the most sense for my purpose. Also, I had not written anything of this complexity in Bash before, just a bunch of commands on separate lines so I wouldn't have to type them one after another. This one, though, required some fairly advanced features. I was not motivated to learn Bash; I just wanted to put my idea into action.

I did start with an internet search, but the guides I found were lacking. I could not easily find how to pass values into a function and return them, how to remove a trailing slash from a directory path, how to loop over an array, how to catch errors from a previous command, or how to separate the letters and numbers in a string, etc.
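To give an idea of the kind of thing I mean, here is a minimal sketch of those idioms the way I ended up using them (the function and variable names are just placeholders, not my actual script):

```bash
#!/usr/bin/env bash

# Pass a value into a function and "return" a string by echoing it.
strip_trailing_slash() {
    local path="$1"
    echo "${path%/}"          # parameter expansion drops one trailing /
}

# Loop over an array.
dirs=("/tmp/a/" "/tmp/b" "/tmp/c/")
for d in "${dirs[@]}"; do
    echo "cleaned: $(strip_trailing_slash "$d")"
done

# Catch an error from the previous command via its exit status.
if ! mkdir -p /tmp/a/sub; then
    echo "mkdir failed" >&2
    exit 1
fi

# Separate the letters and the number in a string like "file42".
name="file42"
letters="${name//[0-9]/}"     # -> file
number="${name//[^0-9]/}"     # -> 42
echo "$letters / $number"
```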

That is where ChatGPT helped greatly. I would ask it to write these pieces of code whenever I needed them, then test its output with various inputs to see if it worked as expected. If not, I would tell it which case failed, and it would revise the code before I put it in my scripts.
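The "testing" was nothing fancy, just throwing a handful of inputs at each generated snippet before trusting it; roughly like this (reusing the placeholder function from the sketch above):

```bash
# Poke the generated function with a few inputs, including edge cases.
for input in "/tmp/a/" "/tmp/b" "/" ""; do
    printf 'input=%q -> output=%q\n' "$input" "$(strip_trailing_slash "$input")"
done
# If a case came back wrong, I'd paste it back into ChatGPT and ask for a fix.
```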

Thanks to ChatGPT, someone with zero knowledge of Bash can quickly and easily write fairly advanced Bash. I don't think I could have written what I wrote nearly as fast the old-fashioned way; I would have gotten there eventually, but it would have taken far too long. With ChatGPT I can just write it quickly and move on. If I ever become motivated to learn Bash properly, I will certainly take the time to do so.

What do you think? What negative experiences have you had with AI chatbots that made you dislike them?

(page 2) 50 comments
[–] [email protected] -2 points 1 month ago* (last edited 1 month ago) (2 children)

It's not just AI code but AI stuff in general.

It boils down to Lemmy having a disproportionate number of leftist liberal arts college student types. That's just the reality of this platform.

Those types tend to see AI as a threat to their independent creative work, and they feel slighted that their data may have been used to train a model.

It's understandable why lots of people denounce AI out of fear, spite, or ignorance. It's hard to remain fair and open to new technology when it's threatening your livelihood and its early foundations may have scraped your data non-consensually for training.

So you'll see an AI hate circlejerk post every couple of days from angry people who want to poison models and cheer for the idea that it's all just trendy nonsense. Don't debate them. Don't argue. Just let them vent and move on with your day.

[–] [email protected] 7 points 1 month ago (1 children)

Lmao, what weird projection is this? As a leftist liberal quality manager, I can tell you're full of shit

[–] [email protected] 24 points 1 month ago* (last edited 1 month ago) (2 children)
  • issues with model training sources
  • businesses sending their whole codebase to a third party (Copilot etc.) instead of using local models
  • the time gained is not that substantial in most cases, as the actual "writing code" part is not what takes the most time; thinking about it and checking it is
  • "chatting" in natural language to describe something that has a precise spec is less efficient than just writing the code for most tasks, as long as you're half-competent. We've known that since customer/developer meetings have existed.
  • the dev has to actually be competent enough to review the changes/output. In a way, "peer reviewing" becomes mandatory; it's long, can be tedious, and generated code really needs to be double-checked at every corner (talking from experience here; even a generated one-liner can have issues)
  • some businesses think that LLM output is "good enough" and fire or move away the people who can actually do said review, leading to more issues down the line
  • actual debugging of non-trivial problems ends up sending me in a lot of directions; getting a useful output is unreliable at best
  • making new things will sometimes confuse LLMs, making them a waste of time at best and sometimes producing even worse code
  • using a code chatbot to help with common, menial tasks is mostly pointless, as these tasks have already been done and sort of "optimized out" into libraries and reusable code. At best you end up pulling some of that into your own codebase, making it worse to maintain in the long term

Those are the downsides I can think of off the top of my head, from having used AI coding assistants (mostly local solutions, for privacy reasons). There are upsides too:

  • sometimes it does produce useful output, in which I only have to edit a few parts to make it work
  • local autocomplete is sometimes almost as useful as the regular contextual autocomplete
  • the chatbot turning short code into longer "natural language" explanations can sometimes act as a rubber duck and help with debugging

Note the "sometimes". I don't have actual numbers, because tracking that would be hell, but the times it does something actually impressive are rare enough that I still bother my coworker with it when it happens. For most of the downsides, it's not even a matter of the tool getting better; it's the usefulness to begin with that's uncertain. It does, however, come at a large cost (money, privacy in some cases, time, and apparently an ecological one too) that is not at all outweighed by the rare "gains".

[–] [email protected] 27 points 1 month ago (2 children)

When it comes to writing code, there is a huge difference between code that works and code that works *well*. Let's say you're tasked with writing a function that takes an array of RGB values and converts them to grayscale. ChatGPT is probably going to give you two nested loops that iterate over the X and Y values, applying a grayscale transformation to each pixel. This will get the job done, but it's slow, inefficient, and generally not well-suited for production code. An experienced programmer is going to take into account possible edge cases (what if a color value is out of the 0-255 range?), apply SIMD functions and parallel algorithms, factor in memory management (do we need a new array or can we write back to the input array?), etc.

ChatGPT is great for experienced programmers to get new ideas; I use it as a modern version of "rubber ducky" debugging. The problem is that corporations think that LLMs can replace experienced programmers, and that's just not true. Sure, ChatGPT can produce code that "works," but it will fail at edge cases and will generally be inefficient and slow.

[–] [email protected] 50 points 1 month ago (4 children)

The other day we were going over some SQL query with a younger colleague and I went “wait, what was the function for the length of a string in SQL Server?”, so he typed the whole question into ChatGPT, which replied (extremely slowly) with some unrelated garbage.

I asked him to let me take the keyboard, typed “sql server string length” into Google, saw LEN in the excerpt from the first result, and went on to do what I'd wanted to do, while in another tab ChatGPT was still spewing nonsense.

LLMs are slower, several orders of magnitude less accurate, and harder to use than existing alternatives, but they're extremely good at convincing their users that they know what they're doing and what they're talking about.

That causes the people using them to blindly copy their useless, buggy code (which, even if it worked and weren't incomplete and full of bugs, would be intended to solve a completely different problem, since users are incapable of properly asking for what they want, and LLMs would produce the wrong code most of the time even if asked properly), wasting everyone's time and learning nothing.

Not that blindly copying from Stack Overflow is any better, of course, but Stack Overflow or Reddit answers come with comments and alternative answers that, if you read them, will go a long way toward telling you whether the code you're copying will work for your particular situation or not.

LLMs give you none of that context, and are fundamentally incapable of doing the reasoning (and learning) that you'd do given different commented answers.

They'll just very convincingly tell you that their code is correct and adequate to your requirements, and leave it to you (or whoever has to deal with your pull requests) to find out, without any hints, why it's not.

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago) (1 children)

I've been finding it a lot harder recently to find what I'm looking for when it comes to coding knowledge on search engines. I feel that with an LLM I can give it the wider context and it figures out exactly the sort of thing I'm trying to find. It's even more useful when trying to understand a complex error message you haven't seen before.

That being said, LLMs are not where my searching ends. I check to see where the LLM got the information from so I can read the actual source and not what it has conjured up.

[–] [email protected] 4 points 1 month ago* (last edited 1 month ago) (3 children)

I've been finding it a lot harder recently to find what I'm looking for when it comes to coding knowledge on search engines

Yeah, the enshittification has been getting worse and worse, probably because the same companies making the search engines are the ones trying to sell you the LLMs, and the only way to sell them is to make the alternatives worse.

That said, I still manage to find anything I need much faster and with less effort than dealing with an LLM would take. And where an LLM would simply give me a single answer (which I would then have to test and fix), a search engine will give me multiple commented answers which I can compare and learn from.

I remembered another example: I was checking a pull request and it wouldn't compile; the programmer had apparently used an obscure internal function to check whether a string was empty instead of string.IsNullOrWhiteSpace() (in C#, internal means “I designed my classes wrong and I don't have time to redesign them from scratch; this member should be private or protected, but I need to access it from outside the class hierarchy, so I'll allow other classes in the same assembly to access it, but not ones outside the assembly”; it's a similar use case to friend in C++, and it's used a lot in the standard .NET libraries).

Now, that particular internal function isn't documented practically anywhere, and being internal it can't be used outside its particular library, so it wouldn't pop up in any example the coder might have seen... but .NET is open source, and the library's source code is on GitHub, so ChatGPT/Copilot has been trained on it, and that's where the coder must have gotten it from.

The thing, though, is that LLMs, being essentially statistical engines that just pop out the most statistically likely token after a given sequence of tokens, have no way whatsoever to “know” that a function is internal. Or private, or protected, for that matter.

That function is used in the code they've been trained on to check whether a string is empty, so they're just as likely to output it as string.IsNullOrWhiteSpace() or string.IsNullOrEmpty().

Hell, if(condition) and if(!condition) are probably also equally likely in most places... and I for one don't want to have to debug code generated by something that can't tell those apart.

[–] [email protected] 6 points 1 month ago

I can feel that frustrated look when someone uses chatGPT for such a tiny reason

[–] [email protected] 45 points 1 month ago* (last edited 1 month ago) (7 children)

People who used LLMs to write code (incorrectly) perceived their code to be more secure than code written by expert humans.

https://arxiv.org/abs/2211.03622

[–] [email protected] 36 points 1 month ago (3 children)
  • AI code suggestions will guide you toward writing less secure code, not to mention often being lower quality in other ways.
  • AI code is designed to look like it fits, not to be correct. Sometimes it is correct. Sometimes it’s close but has small errors. Sometimes it looks right but is significantly wrong. Personally I’ve never gotten ChatGPT to write code without significant errors for more than trivially small test cases.
  • You aren’t learning as much when you have ChatGPT do it for you, and what you do learn is “this is what ChatGPT did and it worked last time” and not “this is what the problem is, this is the solution I came up with last time, and this is why it worked”. In the second case you are far better equipped to tackle future problems, which won’t be exactly the same.

All that being said, I do think there is a place for ChatGPT in simple queries, like asking about syntax for a language you don’t know. But take every answer it gives you with a grain of salt. And if you can find documentation, I’d trust that a lot more.

[–] [email protected] 2 points 1 month ago

All that being said, I do think there is a place for ChatGPT in simple queries, like asking about syntax for a language you don’t know.

I am also wary regarding AI and coding, but this was actually the first time I used ChatGPT to program something for a small home project in Python, since I had never used Python. I was positively surprised by how much it could help me get started. I also learned quite a bit, since I always asked for comparisons with Java, which I know, and for the reasoning behind why things are done that way. I simply also wanted to understand what it puts out. I also only asked for single lines of code rather than having it generate a whole method, e.g. “I want to move a file from X to Y.”

The thought of people blindly copying the produced code scares me.

[–] [email protected] 5 points 1 month ago

Yes, I completely forget how to solve that problem 5 minutes after ChatGPT writes its solution. So I wholeheartedly believe AI is bad for learning.

[–] [email protected] 25 points 1 month ago (2 children)

For me it's because if the AI does all the work, the person "coding" won't learn anything. Thus when a problem does arise (e.g. the AI not being able to fix a simple mistake it made), no one involved has the means to fix it.

[–] [email protected] -2 points 1 month ago

But I don't want to learn. I want the machine to free me from tedious tasks I already know how to do. There's no learning experience in creating a WordPress plugin or a shell script.

[–] [email protected] 3 points 1 month ago

I have seen my friend in this situation

[–] [email protected] -3 points 1 month ago

Because most people on Lemmy have never actually had to write code professionally.

[–] [email protected] 27 points 1 month ago (1 children)

If the AI was trained on code that people permitted to be freely shared, then go ahead. Taking code while ignoring the software license is largely considered a dick move, even by people who use AI.

Some people choose a copyleft software license to ensure users have software freedom, and this AI (a math process) circumvents that. [A copyleft license makes it so that you can use the code if you agree to use the same license for the rest of the program; that way users get the same rights you did.]

[–] [email protected] 0 points 1 month ago (1 children)

I hate big tech too, but I'm not really sure how the GPL or MIT licenses (for example) would apply. LLMs don't really memorize stuff like a database would and there are certain (academic/research) domains that would almost certainly fall under fair use. LLMs aren't really capable of storing the entire training set, though I admit there are almost certainly edge cases where stuff is taken verbatim.

I'm not advocating for OpenAI by any means, but I'm genuinely skeptical that most copyleft licenses have any stake in this. There's no static linking or source code distribution happening. Many basic algorithms don't fall under copyright, and, in practice, Stack Overflow code is copy/pasted all the time without being released under any special license.

If your code is on GitHub, it really doesn't matter what license you provide in the repository -- you've already agreed to allow any user to "fork" it for any reason whatsoever.

[–] [email protected] 10 points 1 month ago (1 children)

Whether it's a complicated neural network or a database matters not. It outputs portions of the code used as input, by design.

If you can take GPL code and "not" distribute it via complicated maths, then that circumvents it. That won't do, friendo.

[–] [email protected] 2 points 1 month ago (1 children)

For example, if I ask it to produce python code for addition, which GPL'd library is it drawing from?

I think it's clear that the fair use doctrine no longer applies when OpenAI turns it into a commercial code assistant, but then it gets a bit trickier when used for research or education purposes, right?

I'm not trying to be obtuse-- I'm an AI researcher who is highly skeptical of AI. I just think the imperfect compression that neural networks use to "store" data is a bit less clear than copy/pasting code wholesale.

would you agree that somebody reading source code and then reimplementing it (assuming no reverse engineering or proprietary source code) would not violate the GPL?

If so, then the argument that these models infringe on rights holders seems to hinge on the verbatim argument: that their exact work was used without attribution/license requirements. This surely happens sometimes, but it is not, in general, something these models are capable of, since they're using lossy compression to "learn" the model parameters. As an additional point, it would be straightforward to then comply with DMCA requests using any number of published "forced forgetting" methods.

Then, that raises a further question.

If I as an academic researcher wanted to make a model that writes code using GPL'd training data, would I be in compliance if I listed the training data and licensed my resulting model under the GPL?

I work for a university and hate big tech as much as anyone on Lemmy. I am just not entirely sure GPL makes sense here. GPL 3 was written because GPL 2 had loopholes that Microsoft exploited and I suspect their lawyers are pretty informed on the topic.

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago) (1 children)

The corresponding training data is the best bet for seeing what code an input might be copied from. This can apply to humans too. To avoid lawsuits, reverse engineering projects use a clean room strategy: requiring contributors to have never seen the original code. This is to argue they can't possibly be copying, even from memory (an imperfect form of compression too).

If it doesn't include GPL code then it can't violate the GPL. However, OpenAI argues they have to use copyrighted works to make specific AIs (if I recall correctly). Even if legal, that's still a problem to me.

My understanding is AI generated media can't be copyrighted as it wasn't a person being creative - like the monkey selfie copyright dispute.

[–] [email protected] 1 points 1 month ago (1 children)

Yeah. I'm thinking more along the lines of research and open models than anything to do with OpenAI. Fair use, above all else, generally requires that the derivative work not threaten the economic viability of the original, and that's categorically untrue of ChatGPT/Copilot, which are marketed and sold as products meant to replace human workers.

The clean room development analogy is definitely one I can get behind, but it raises further questions, since LLMs are multi-stage. Technically, only the tokenization stage will "see" the source code, which is a bit like a "clean room" from the perspective of subsequent stages. When does something stop being just a list of technical requirements and veer into infringement? I'm not sure that line is so clear.

I don't think the generative copyright thing is so straightforward, since the model requires a human agent to generate the input even if the output is deterministic. I know, for example, that Microsoft's Image Generator says the images fall under Creative Commons, which is distinct from the public domain given that some rights are withheld. Maybe that won't hold up in court forever, but Microsoft's lawyers seem to think it's a bit more nuanced than "this output can't be copyrighted". If it's not subject to copyright, then what product are they selling? Maybe the court agrees that LLMs and monkeys are the same, but I'm skeptical that that will happen considering how much money these tech companies have poured into it and how much the United States seems to bend over backwards to accommodate tech monopolies and their human rights violations.

Again, I think commercial entities using their market position to eliminate the need for artists and writers is clearly against the spirit of copyright and intellectual property, but I also think there are genuinely interesting questions when it comes to models that are themselves open source or non-commercial.

[–] [email protected] 22 points 1 month ago

One point that stands out to me is that when you ask it for code it will give you an isolated block of code to do what you want.

In most real-world use cases, though, you are plugging code into larger codebases with design patterns and paradigms throughout that need to be followed.

An experienced dev can take an isolated code block that does X and refactor it into something that fits in with the current codebase, etc.; we already do this daily with Stack Overflow.

An inexperienced dev will just take the code block and try to ram it into the existing code in the easiest way possible, without thinking about whether the code could use existing dependencies, whether it's testable, etc.

So anyway, I don't see a problem with the tool itself; it's just like using Stack Overflow. But as we have seen, businesses and inexperienced devs seem to think it's more than this and can do their job for them.

[–] [email protected] 4 points 1 month ago

Now, I obviously didn't tell it to write the entire code by itself. [...]

I am fairly competent in writing programs.

Go ahead and use it. You're safe.

[–] [email protected] 10 points 1 month ago (1 children)

Basically this: Flying Too High: AI and Air France Flight 447

Description

Panic has erupted in the cockpit of Air France Flight 447. The pilots are convinced they’ve lost control of the plane. It’s lurching violently. Then, it begins plummeting from the sky at breakneck speed, careening towards catastrophe. The pilots are sure they’re done-for.

Only, they haven’t lost control of the aircraft at all: one simple manoeuvre could avoid disaster…

In the age of artificial intelligence, we often compare humans and computers, asking ourselves which is “better”. But is this even the right question? The case of Air France Flight 447 suggests it isn't - and that the consequences of asking the wrong question are disastrous.

[–] [email protected] -1 points 1 month ago (1 children)

I know about this crash and don't see the connection. What's the argument?

[–] [email protected] 12 points 1 month ago (1 children)

I recommend listening to the episode. The crash is the overarching story, but there are smaller stories woven in which are specifically about AI, and it covers multiple areas of concern.

The theme that I would highlight here though:

More automation means fewer opportunities to practice the basics. When automation fails, humans may be unprepared to take over even the basic tasks.

But it compounds. Because the better the automation gets, the rarer manual intervention becomes. At some point, a human only needs to handle the absolute most unusual and difficult scenarios.

How will you be ready for that if you don’t get practice along the way?
