this post was submitted on 17 Dec 2024
279 points (98.6% liked)

(page 2) 20 comments
[–] [email protected] 3 points 5 days ago* (last edited 5 days ago) (2 children)

I have a few friends at the Beeb, albeit not in the newsroom, and they have a blanket ban on ALL GenAI tools that aren't self-hosted. I would be very surprised if IT at the BBC wasn't blocking Apple Intelligence outright.

Although, reading the article, I can't really tell whether this means the content was rewritten on the BBC's side, or hallucinated on-device from BBC content.

[–] [email protected] 13 points 5 days ago (1 children)

Well, read the BBC article on this then: https://www.bbc.co.uk/news/articles/cd0elzk24dno

It's an iPhone 'feature' that summarises a bunch of notifications into one. It took a set of BBC headlines and turned them into "Luigi Mangione shoots himself..." They don't list the article that was being summarised so I don't know what the original headline was.
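
A grouped-notification summary of this kind essentially boils down to concatenating the pending headlines into one prompt and asking a model for a single line. A minimal sketch of the idea (the function, prompt wording, and headlines are all hypothetical; Apple's actual on-device pipeline is not public):

```python
def build_summary_prompt(headlines):
    """Combine pending notification headlines into a single
    summarisation prompt. Purely illustrative: Apple's actual
    prompt and pipeline are not public."""
    joined = "\n".join(f"- {h}" for h in headlines)
    return (
        "Summarise the following news notifications "
        "in one short sentence:\n" + joined
    )

# Generic placeholder headlines, not real BBC ones:
prompt = build_summary_prompt([
    "First breaking story headline",
    "Second breaking story headline",
])
```

The failure mode is visible in the prompt's shape: several unrelated stories are compressed into one sentence, so the model can merge the subject of one headline with the action of another.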

load more comments (1 replies)
load more comments (1 replies)
[–] [email protected] -5 points 5 days ago (1 children)

What are the odds that all these stories about LLMs being terrible, and all the crappy publicly available models, are just meant to convince us that they suck, so nobody notices when actually good AI gets used?

[–] [email protected] 4 points 5 days ago (7 children)

I remember the very first thing I asked ChatGPT.

It was about a kind of shop, and where the nearest one to me was. It immediately gave me a name and a nice description. When I asked for details such as the street address, it went rather vague; in the end it told me to ask Google for specifics.

When I checked Google to confirm, it turned out that the shop did not exist. No shop with that name, nothing similar... It was all just made up.

[–] [email protected] 2 points 5 days ago (1 children)

Is it even possible for it to know that? It doesn't have your location, does it?

load more comments (1 replies)
load more comments (6 replies)
[–] [email protected] 12 points 5 days ago

There's a famous quote attributed to Charles Babbage with regard to his difference engine (or some other calculation machine of his invention) which goes: "On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."

Apprehension is right, Mr. Babbage. You were lucky to find yourself talking to those who, in some unconscious way, suspected that something might be wrong in their thinking, leading them to at least enquire. There are those whose ideas are so confused, or even so completely lacking, that they will assume that no matter what is put into the machine, the right answers will come out.

[–] [email protected] 4 points 5 days ago* (last edited 5 days ago) (1 children)

LLMs are useful for a great deal of things, particularly offline translation without having to send data to Google's servers. Sometimes I want to send a long message to friends and family but don't want to write it in English, Polish, and Hindi.

But who thought using it for news headlines was a good idea?! Given the tens of thousands of news headlines published daily, some of them are statistically guaranteed to be falsely presented by AI.
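
The "statistically guaranteed" point is easy to make concrete: even a tiny per-summary error rate multiplies out over volume. A back-of-the-envelope sketch (the volume and error rate are illustrative assumptions, not measured figures):

```python
# Illustrative assumptions: 20,000 headlines summarised per day,
# and a 1% chance that any single AI summary misrepresents its story.
n_headlines = 20_000
p_error = 0.01

expected_errors = n_headlines * p_error            # mean bad summaries per day
p_at_least_one = 1 - (1 - p_error) ** n_headlines  # chance of at least one

print(expected_errors)   # 200 bad summaries per day on average
print(p_at_least_one)    # effectively 1.0
```

Even if the real error rate were a hundred times lower, at that volume some mangled headlines per day would still be near-certain.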

E: not sure whether people are downvoting because they want Google to have their data, because they don't want people from different cultures talking to one another, or because they want AI-altered news stories.

load more comments (1 replies)
[–] [email protected] 7 points 5 days ago

“Intelligence”

[–] [email protected] 34 points 5 days ago (6 children)

AI-generated products can be a bad fit for news

No shit. The fact that they only discovered this once they got burned proves they never even questioned what generative AI does.

Though I'm sure half the blame lies in asking it to tack the most clickbaity headline it can onto every article. Even human editors all but outright lie in headlines; of course an AI is going to hallucinate the best title it can.

[–] [email protected] 10 points 5 days ago (2 children)

That's the BBC criticising Apple for indiscriminately mangling all notifications with AI, like news headlines. The BBC could boycott the Apple platform, but that's basically their only lever to stop Apple doing this besides asking nicely.

load more comments (2 replies)
load more comments (5 replies)
[–] [email protected] 80 points 5 days ago (11 children)

In addition to pushing something before it’s ready and where it’s not welcome, Apple’s own stinginess completely screwed them over.

What do LLMs need to be smart? RAM, both for their weights and for holding real data to reference. What has Apple relentlessly price-gouged and skimped on for years? Yeah, I'll give you one guess…
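
The RAM point is easy to quantify: the weights alone dominate. A rough sizing sketch (parameter counts and quantisation widths are illustrative assumptions, not Apple's published figures):

```python
def model_ram_gb(n_params_billion, bits_per_weight):
    """Approximate RAM needed just to hold a model's weights
    (ignores KV cache, activations, and any retrieved context)."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

# A ~3B-parameter on-device model at 4-bit quantisation:
print(model_ram_gb(3, 4))    # 1.5 GB just for weights
# The same model at 16-bit precision:
print(model_ram_gb(3, 16))   # 6.0 GB
```

On a phone with 8 GB of RAM, even the quantised footprint competes with the OS, running apps, and whatever reference data the model also needs in memory.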

[–] [email protected] 54 points 5 days ago (8 children)

For 15 years, Apple has lagged behind Android in implementing new features, preferring to wait until it felt its implementations were ready for mainstream consumption, and that has always worked out for them. They should have stuck to that instead of jumping on the AI bandwagon with a half-baked technology that most people don't want or need.

load more comments (8 replies)
[–] [email protected] 0 points 5 days ago (1 children)
[–] [email protected] 5 points 5 days ago

For RAG data? It works.

But it's too slow for the weights. What generative models fundamentally do is run a full pass through the multi-gigabyte weights for every 'word' or diffusion step, so even the 128-bit DDR5 you find on desktop CPUs is too slow.
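
Since each generated token requires streaming essentially all of the weights from memory, bandwidth puts a hard ceiling on tokens per second. A rough upper-bound estimate (the bandwidth and model-size figures are illustrative assumptions):

```python
def max_tokens_per_sec(weight_size_gb, bandwidth_gb_s):
    """Upper bound on decode speed when every token reads all
    weights from memory once (ignores caching, batching, and
    compute limits, so real throughput is lower)."""
    return bandwidth_gb_s / weight_size_gb

# ~4 GB of quantised weights on 128-bit dual-channel DDR5 (~80 GB/s):
print(max_tokens_per_sec(4, 80))    # 20 tokens/s upper bound
# The same weights on a GPU with ~1000 GB/s of HBM:
print(max_tokens_per_sec(4, 1000))  # 250 tokens/s upper bound
```

This is why weight size and memory bandwidth, not raw compute, usually bound single-stream generation speed.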

load more comments (9 replies)
[–] [email protected] 16 points 5 days ago (1 children)

Ahh yes, the sales and marketing hype of AI continues while the public is still being fed this bullshit.

load more comments (1 replies)
[–] [email protected] 27 points 5 days ago

claiming that Luigi Mangione [...] had shot himself.

AI-generated content is prone to inaccuracies

Somebody would call that an "inaccuracy"?

If I serve you a pile of lukewarm shit on a plate and say "here's your dinner", would anybody call that an "inaccuracy"??

And I'd say a matter of life and death is even more than that.

[–] [email protected] 139 points 5 days ago (2 children)

the hallucinations will continue until morale improves

[–] [email protected] 54 points 5 days ago (2 children)

All year long, from January to Decuary.

[–] [email protected] 23 points 5 days ago

The full ten months of the year?

[–] [email protected] 4 points 5 days ago
load more comments (1 replies)
load more comments