I don't know why, but I am reminded of this clip about an eggless omelette: https://youtu.be/9Ah4tW-k8Ao
They've done studies, you know. 30% of the time, it works every time.
For me, as a software developer, the accuracy is more in the 95%+ range.
On one hand, the built-in Copilot chat widget in IntelliJ basically replaces a lot of my Google queries.
On the other hand, it is rather fucking good at executing rewrites that are a fucking chore to do manually but can easily be done by Copilot.
Imagine you have a script that initializes your DB with some test data. You have an INSERT INTO statement with lots of columns and rows, so it looks something like:
INSERT INTO table (column1, ..., columnN) VALUES (row1), (row2), ..., (rowN)
Adding a new column with test data for each row is a PITA, but Copilot handles it without issue.
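To make that concrete, here's a minimal sketch of the kind of seed script being described (the table, columns, and rows are invented for illustration, not anyone's actual schema):

```python
# A minimal sketch of the kind of DB seed script being described; the table,
# columns, and rows are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway test database
conn.execute(
    "CREATE TABLE users (id INTEGER, name TEXT, email TEXT, country TEXT)"
)

# Adding a new column (say, signup_date) means editing the column list *and*
# every row tuple below: tedious by hand, trivial for the assistant.
conn.executemany(
    "INSERT INTO users (id, name, email, country) VALUES (?, ?, ?, ?)",
    [
        (1, "Alice", "alice@example.com", "DE"),
        (2, "Bob", "bob@example.com", "US"),
        (3, "Carol", "carol@example.com", "HU"),
    ],
)
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # 3
```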
Similarly, when writing unit tests you do a lot of edge case testing, which means a bunch of almost-identical tests with maybe one variable changing. At most you write one of those tests, then Copilot will auto-generate the rest after you name the next unit test. It's pretty good at guessing what you want to do in that test, at least with my naming scheme.
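A rough sketch of that pattern (the parse_price function and its tests are invented purely to show the near-identical edge-case shape):

```python
# Near-identical edge-case tests where only one input changes; parse_price is
# a toy implementation invented so the tests below actually run.
import pytest

def parse_price(text: str) -> float:
    cleaned = text.strip().lstrip("$").replace(",", "")
    if not cleaned:
        raise ValueError("empty price")
    return float(cleaned)

def test_parse_price_plain_number():
    assert parse_price("19.99") == 19.99

def test_parse_price_with_currency_symbol():
    assert parse_price("$19.99") == 19.99

def test_parse_price_with_thousands_separator():
    assert parse_price("1,299.00") == 1299.00

def test_parse_price_empty_string_raises():
    with pytest.raises(ValueError):
        parse_price("")

# In practice you write the first test or two; after you type the name of the
# next one, the assistant usually fills in the body along the same lines.
def test_parse_price_with_surrounding_whitespace():
    assert parse_price("  19.99 ") == 19.99
```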
So yeah, it's way overrated for many, many things, but for programming it's a pretty awesome productivity tool.
Yeah, it (in my case, ChatGPT) has been great for helping me along with functions I'm only passingly familiar with / trying to use in new ways.
One thing that really surprised me was that it gave me a robust, sensible, and (seemingly) well-tuned-to-my-case checklist of things to inspect on a used car I intend to buy. I'm already mostly familiar with what I'm doing there, but it pointed to some things I might've overlooked / didn't know were points of concern for the specific vehicle I'm looking at.
Pepperidge Farm remembers when you could just do a web search and get it answered in the first couple of results. Then the SEO wars happened...
Keep doing what you do. Your company will pay me handsomely to throw out all your bullshit and write working code you can trust when you're done. If your company wants to have a product in the future that is.
Lmao, okay buddy. Based on how many interviews I have sat in on, the chances that you are a worse programmer than me are much higher than you being better than me.
Being a pompous ass who's dismissive of new tooling makes your chances even worse 😕
The person who uses fancy autocomplete to write their code will be exactly the person who thinks they're better than everyone. Those traits are correlated.
I’ve been in the industry a while and your assessment is dead on.
As long as you’re not blindly committing the code, it’s a huge time saver for a number of mundane tasks.
It’s especially fantastic for writing throwaway tooling. Need data massaged a specific way? Ez pz. Need a script to execute an api call on each entry in a spreadsheet? No problem.
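As a hedged sketch, a "call the API for every spreadsheet row" throwaway script might look something like this (the CSV path, column names, and endpoint are all invented):

```python
# Throwaway "API call per spreadsheet row" script; the CSV path, column names,
# and endpoint are all invented for illustration.
import csv
import requests

API_URL = "https://api.example.com/v1/customers"  # hypothetical endpoint

with open("customers.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        resp = requests.post(
            API_URL,
            json={"name": row["name"], "email": row["email"]},
            timeout=10,
        )
        print(row["email"], resp.status_code)
```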
The guy above you is a nutter. Not sure if people haven’t tried leveraging LLMs or what. It has a ton of faults, but it really does speed up the mundane work. Also, clearly the person is either brand new to the field or doesn’t even work in it. Otherwise they would have seen the barely functional shite that actual humans churn out.
Part of me wonders if code organization is going to start optimizing for interpretation by these models rather than humans.
"...for multi-step tasks"
Color me surprised
I'd just like to point out that, from the perspective of somebody watching AI develop for the past 10 years, completing 30% of automated tasks successfully is pretty good! Ten years ago they could not do this at all. Overlooking all the other issues with AI, I think we are all irritated with the AI hype people for saying things like they can be right 100% of the time -- Amazon's new CEO actually said they would be able to achieve 100% accuracy this year, lmao. But being able to do 30% of tasks successfully is already useful.
It doesn't matter if you still need a human to review. AI has no way of distinguishing between success and failure, so either way a human will have to review 100% of those tasks.
A human can review something close to correct a lot better than starting the task from zero.
It is a lot harder to notice incorrect information in review than it is to make sure it's correct when you're writing it yourself.
Right, so this is really only useful in cases where either it's vastly easier to verify an answer than posit one, or if a conventional program can verify the result of the AI's output.
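A toy sketch of that "cheap to verify, expensive to produce" case (propose_root here stands in for a model call; it is not a real API):

```python
# Toy example of "cheap to verify, expensive to produce": checking a proposed
# root of x^3 - x - 2 is trivial even if finding it is not. propose_root is a
# stand-in for an unreliable model call, not a real API.
def propose_root() -> float:
    return 1.521  # pretend this value came back from the model

def is_root(x: float, tol: float = 1e-2) -> bool:
    # Conventional verification: plug the candidate back into the polynomial.
    return abs(x**3 - x - 2) < tol

candidate = propose_root()
print(candidate, "accepted" if is_root(candidate) else "rejected")
```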
Please stop.
I'm not claiming that the use of AI is ethical. If you want to fight back you have to take it seriously though.
It can't do 30% of tasks correctly. It can do tasks correctly as much as 30% of the time, and since it's LLM shit, you know those numbers have been massaged more than any human in history has ever been.
I meant the latter, not "it can do 30% of tasks correctly 100% of the time."
So no different than answers from middle management I guess?
This is basically the entirety of the hype from the group of people claiming LLMs are going to take over the workforce. Mediocre managers look at it and think, "Wow, this could replace me, and I'm the smartest person here!"
Sure, Jan.
I won't tolerate Jan slander here. I know he's just a builder, but his life path has the highest probability of producing a great person!
I'd say Jan Botanist is also up there as being a pretty great person.
Jan Refiner is up there for me.
At least AI won't fire you.
It kinda does when you ask it something it doesn't like.
Idk, the new iterations might just. Shit, Amazon already uses automated systems to fire people.