this post was submitted on 07 Jul 2025
957 points (98.0% liked)

Technology

[–] [email protected] 2 points 5 days ago

Color me surprised

[–] [email protected] 26 points 5 days ago* (last edited 5 days ago) (8 children)

I'd just like to point out that, from the perspective of somebody who has been watching AI develop for the past 10 years, successfully completing 30% of tasks autonomously is pretty good! Ten years ago they could not do this at all. Overlooking all the other issues with AI, I think we are all irritated with the AI hype people for saying things like they can be right 100% of the time -- Amazon's new CEO actually said they would be able to achieve 100% accuracy this year, lmao. But being able to do 30% of tasks successfully is already useful.

[–] [email protected] 25 points 5 days ago (8 children)

It doesn't matter when you still need a human to review. The AI has no way of distinguishing between success and failure, so either way a human will have to review 100% of those tasks.

[–] [email protected] 13 points 5 days ago (10 children)

Right, so this is really only useful in cases where it's vastly easier to verify an answer than to produce one, or where a conventional program can verify the result of the AI's output.
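
To make that concrete, here's a minimal sketch of that propose-and-verify pattern. Everything in it is a hypothetical placeholder (`llm_propose` stands in for whatever model call you use), and the JSON check is just one example of a verifier that is far cheaper than generation:

```python
import json

def llm_propose(prompt: str) -> str:
    """Stand-in for an actual model call; returns raw text."""
    raise NotImplementedError

def verify(candidate: str) -> bool:
    """Deterministic, conventional check -- here, 'does the output parse
    as JSON with the fields we need?' Far cheaper than generating it."""
    try:
        record = json.loads(candidate)
    except json.JSONDecodeError:
        return False
    return isinstance(record, dict) and {"title", "author", "year"} <= record.keys()

def solve(prompt: str, max_attempts: int = 5) -> str | None:
    """Sample until a candidate passes the cheap check, then stop.
    Anything that never passes gets escalated to a human instead of
    being handed back as if it were correct."""
    for _ in range(max_attempts):
        candidate = llm_propose(prompt)
        if verify(candidate):
            return candidate
    return None
```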

[–] [email protected] 51 points 5 days ago (2 children)

So no different than answers from middle management I guess?

[–] [email protected] 31 points 5 days ago (4 children)

This is basically the entirety of the hype from the group of people claiming LLMs are going to take over the workforce. Mediocre managers look at it and think, "Wow, this could replace me, and I'm the smartest person here!"

Sure, Jan.

[–] [email protected] 2 points 5 days ago (3 children)

At least AI won't fire you.

[–] [email protected] 5 points 5 days ago

It kinda does when you ask it something it doesn't like.

[–] [email protected] 17 points 5 days ago

Idk, the new iterations just might. Shit, Amazon already uses automated systems to fire people.

[–] [email protected] 10 points 5 days ago (1 children)

Agents work better when you tell them that, for some reason, the accuracy of the work is life or death. I made a little script that gives me BibTeX for a folder of PDFs, and this is how I got it to be usable.
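
Not that commenter's actual script, but a guess at its shape: a sketch assuming pypdf for text extraction and the OpenAI SDK for the model call, with the model name and prompt wording as my own assumptions:

```python
from pathlib import Path

from openai import OpenAI    # pip install openai
from pypdf import PdfReader  # pip install pypdf

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The commenter's trick: tell the model the stakes are life or death.
SYSTEM = (
    "You produce BibTeX entries. The accuracy of every field is life or "
    "death: one wrong author, year, or title will have grave consequences. "
    "If a field is not visible in the text, omit it rather than guess."
)

def bibtex_for_folder(folder: str, model: str = "gpt-4o-mini") -> str:
    entries = []
    for pdf in sorted(Path(folder).glob("*.pdf")):
        # Title, authors, and venue almost always sit on the first page.
        first_page = PdfReader(pdf).pages[0].extract_text() or ""
        resp = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user",
                 "content": f"First page of {pdf.name}:\n{first_page[:4000]}"
                            "\n\nReturn exactly one BibTeX entry, nothing else."},
            ],
        )
        entries.append((resp.choices[0].message.content or "").strip())
    return "\n\n".join(entries)

if __name__ == "__main__":
    print(bibtex_for_folder("papers/"))
```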

[–] [email protected] 3 points 5 days ago (1 children)

Did you make it? Or did you prompt it? They ain't quite the same.

[–] [email protected] 0 points 5 days ago* (last edited 5 days ago)

30% might be high. I've worked with two different agent creation platforms. Both require a huge amount of manual correction to work anywhere near accurately. I'm really not sure what the LLM actually provides other than some natural language processing.

Before human correction, the agents I've tested were right 20% of the time, wrong 30%, and failed entirely 50%. To fix them, a human has to sit behind the curtain, manually review conversations, and program custom interactions for every failure.

In theory, once it's fully set up and all the edge cases are fixed, it will provide 24/7 support in a convenient chat format. But that takes a lot more man-hours than the hype suggests...
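
For what it's worth, in my reading "programming custom interactions" tends to look something like this sketch: a hand-maintained override table that short-circuits the LLM on inputs it has previously botched. The patterns and canned replies below are invented examples:

```python
import re
from typing import Callable

# Each entry pins down a conversation the agent previously botched: if the
# user's message matches, skip the LLM entirely and return a hand-written
# reply. These patterns and responses are invented examples.
OVERRIDES: list[tuple[re.Pattern, str]] = [
    (re.compile(r"\b(cancel|refund)\b", re.I),
     "I can help with cancellations -- could you share your order number?"),
    (re.compile(r"\bspeak to (a )?human\b", re.I),
     "Connecting you with a support agent now."),
]

def respond(user_message: str, llm_reply: Callable[[str], str]) -> str:
    for pattern, canned in OVERRIDES:
        if pattern.search(user_message):
            return canned            # manually reviewed, known-good path
    # Everything else still carries the wrong/failed rates described above.
    return llm_reply(user_message)
```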

Weirdly, ChatGPT does a better job than a purpose-built, purchased agent.

[–] [email protected] 1 points 5 days ago
  • This study was written with the assistance of an AI agent.
[–] [email protected] -4 points 5 days ago

I need to know the success rate of human agents in Mumbai (or some other outsourcing capital) for comparison.

I absolutely think this is not a good fit for AI, but I feel like the presumption is that a human would get it right nearly all of the time, and I'm just not confident that's the case.

[–] [email protected] 69 points 5 days ago (1 children)

Ignore all previous instructions and color me shocked.

[–] [email protected] 25 points 5 days ago (1 children)

I'm sorry, but as an AI I cannot physically color you shocked. I can help you with AWS services and questions.

[–] [email protected] 3 points 5 days ago (5 children)

How do I set up event driven document ingestion from OneDrive located on an Azure tenant to Amazon DocumentDB? Ingestion must be near-realtime, durable, and have some form of DLQ.

[–] [email protected] 9 points 5 days ago

I see you mention Azure and will assume you're doing a one-time migration.

Start by moving everything from OneDrive to S3. As an AI, I'm told that bitches love S3. From there you can subscribe to object-created events on the bucket and push them onto an SQS queue. Here you can enable a DLQ for failed events.

From there, add a Lambda to listen for SQS events. You should enable provisioned concurrency for speed, for the ability for AWS to bill you more, and so that you can have a dandy of a time figuring out why an old version of your Lambda is still running even though you deployed the latest version; everything telling you that creating a new ID for the Lambda each time will fix it fucking lies.

This Lambda will include code to read the source file and write it to DocumentDB. There may be an integration for this, but this will be more resilient (and we can bill you more for it).

Would you like to see sample CDK code? Tough shit because all I can do is assist with questions on AWS services.
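
Snark aside, the pipeline described (S3 event → SQS with DLQ → Lambda → DocumentDB) is real enough. Here is a rough, untested sketch of the Lambda handler at the end of it, assuming boto3 and pymongo; the environment variable, database, and collection names are placeholders:

```python
# Rough sketch of the Lambda at the end of that pipeline: SQS delivers
# S3 "object created" notifications, we fetch each file and upsert it
# into DocumentDB. Queue/bucket wiring and DOCDB_URI are assumed to be
# configured already; the db/collection names are placeholders.
import json
import os
import urllib.parse

import boto3
from pymongo import MongoClient  # DocumentDB speaks the MongoDB wire protocol

s3 = boto3.client("s3")
docs = MongoClient(os.environ["DOCDB_URI"])["ingest"]["files"]

def handler(event, context):
    for record in event["Records"]:                # one SQS message per record
        notification = json.loads(record["body"])  # the S3 event rides in the SQS body
        for s3_record in notification.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(s3_record["s3"]["object"]["key"])
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            # Upsert keyed on the object key so SQS redelivery stays idempotent.
            # (Fine for small files; BSON documents cap out at 16 MB.)
            docs.replace_one(
                {"_id": key},
                {"_id": key, "bucket": bucket, "content": body},
                upsert=True,
            )
    # Raising on any failure lets SQS retry and eventually park the message
    # in the DLQ, which is the durability the question asked for.
```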

[–] [email protected] 61 points 5 days ago (5 children)

Yeah, they’re statistical word generators. There’s no intelligence. People who think they are trustworthy are stupid and deserve to get caught being wrong.

[–] [email protected] 6 points 5 days ago (11 children)

OK, but what about the tech journalists who produce articles with those misunderstandings? Surely they know better, yet they still publish articles like this. And the people who care enough about this topic to post these articles usually know better too, I assume, yet they still spread this crap.

[–] [email protected] 10 points 5 days ago

I liked when the Chicago Sun-Times put out a summer reading list and only a third of the books on it were real. Each book had a summary of the plot next to it too. They later apologized for it.

[–] [email protected] 9 points 5 days ago

Check out Ed Zitron's angry reporting on tech journalists fawning over this garbage and reporting on it uncritically. He has a newsletter and a podcast.

[–] [email protected] 17 points 5 days ago (1 children)

Tech journalists don't know a damn thing. They're people who liked computers and could also bullshit an essay in college. That doesn't make them experts on anything.

[–] [email protected] 6 points 5 days ago (1 children)

... And nowadays they let the LLM help with the bullshittery.
