this post was submitted on 03 Sep 2024
47 points (98.0% liked)

Technology

34830 readers

This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in DM before posting product reviews or ads; otherwise, such posts are subject to removal.


Rules:

1: All Lemmy rules apply

2: Do not make low-effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.

5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)

6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: Crypto-related posts, unless essential, are disallowed

founded 5 years ago
top 12 comments
[–] [email protected] 1 points 2 months ago (1 children)

Maybe we just need a different type of NLP for summarization. I have noticed before that LLMs are unlikely to escape their 'base' knowledge.

[–] [email protected] 2 points 2 months ago

That's very likely the case. LLMs are sucking up all the air in ML research right now, and we shouldn't be using them as a hammer for every problem.

[–] [email protected] 13 points 2 months ago (1 children)

This was pretty clear when observing the output of tldrbot. It would just randomly select paragraphs, ignoring surrounding context, and call it a summary.
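For illustration only (I haven't seen tldrbot's actual code), a naive extractive summarizer that scores sentences purely by word frequency, with no notion of surrounding context, can be sketched in a few lines:

```python
import re
from collections import Counter

def naive_extractive_summary(text: str, n_sentences: int = 2) -> str:
    """Score each sentence by the average frequency of its words in the
    whole text, then return the top-scoring sentences in original order.
    No discourse structure, no context -- just word counts."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    scores = []
    for i, s in enumerate(sentences):
        s_words = re.findall(r"[a-z']+", s.lower())
        if not s_words:
            continue
        scores.append((sum(freq[w] for w in s_words) / len(s_words), i))
    top = sorted(scores, reverse=True)[:n_sentences]
    keep = sorted(i for _, i in top)
    return " ".join(sentences[i] for i in keep)
```

A scorer like this will happily lift sentences that repeat common words while dropping the one sentence that actually states the point, which matches the behavior described above.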

[–] [email protected] 3 points 2 months ago

The bot demonstrated very well what this article is about. I don't know the internals, but I also can't imagine the bot was using the best and most expensive ways of doing analysis.

It was pretty bad at "getting the point" even when it was obvious; a better system should be able to do so. Sometimes the point is more difficult to discern and there has to be some judgement. You can see this in comments sometimes, where people discuss what "the point" was and not just the data. I imagine an AI would have particular difficulty determining what is worth summarizing in these situations.

[–] [email protected] -1 points 2 months ago (1 children)

Not in every way. They're cheaper and faster.

[–] [email protected] 6 points 2 months ago

People doing more work is actually more expensive, because human time is by far the biggest cost for most businesses.

A test of AI for Australia's corporate regulator found that the technology might actually make more work for people, not less.

[–] [email protected] 20 points 2 months ago* (last edited 2 months ago) (1 children)

This comes up again and again:

AI only does a half-ass job, so you need a real human stepping in to "fix it in post."

We're seeing corporations throwing money hand over fist at AI because corporations want it to replace workers.

We're burning an extra planet's worth of energy for something that still needs human intervention to be usable.

Maybe, just maybe, we could just pay humans a living fucking wage to do the same work to begin with instead of constantly trying to find more and more ways to just not pay people at all.

Like, shocker, if you have a fully staffed customer service department, you'll actually solve problems for people faster than an AI-staffed customer service department would.

The corpos don't care; they're not actually interested in solving our problems. They'll burn the planet to the ground in an effort to avoid paying us a living wage.

[–] [email protected] 0 points 2 months ago (1 children)

That's a bit too absolute a way to look at it.

From their point of view the goal isn't to abolish human involvement, but to minimise the cost. So if they can do the job at the same quality with a quarter of the personnel through AI assistance for less cost, obviously they're gonna do that.

At the same time, just because humans having crappy jobs is the current way we solve the problem of people getting money, doesn't mean we should keep on doing that. Basic income would be a much nicer solution for that, for example. Try to think a bit less conservatively.

[–] [email protected] 6 points 2 months ago (1 children)

but to minimise the cost

What about the cost to the environment? That cost is just a negative externality to them and you, apparently. Yet I'm the one accused of thinking "conservatively."

Burning ten times as many fossil fuels to "minimise the costs" is literally fucking stupid and short-sighted.

[–] [email protected] 4 points 2 months ago

In the end it's about money. As long as one doesn't have to pay for environmental damages (e.g. via a new tax per $energyUnit, $resourceUnit, ...) and only pays for the resources plus some markup to the producing company, while externalizing the "worth" of the damages (read: onto the taxpayer), it's cheaper to use these services instead of humans.

[–] [email protected] 12 points 2 months ago (1 children)

Anecdotally, this was my experience as a student when I tried to use AI to summarize and outline textbook content. The result was almost always incomplete, such that I'd have to have already read the chapter to catch what the model missed.

[–] [email protected] 1 points 2 months ago

I'm not sure how long ago that was, but LLM context sizes have grown exponentially in the past year, from 4k tokens to over a hundred thousand. That doesn't necessarily improve the quality of the output, although you can't expect a model to summarize what it can't hold in memory.
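To be concrete about the usual workaround when a document exceeds the context window: a common pattern is hierarchical ("map-reduce") summarization — chunk the text to fit the token budget, summarize each chunk, then summarize the summaries. A minimal sketch, where the token estimate is a crude heuristic and `summarize` is a placeholder stub rather than any real model API:

```python
def rough_token_count(text: str) -> int:
    # Crude heuristic: roughly 1 token per 4 characters of English text.
    return max(1, len(text) // 4)

def chunk_by_budget(paragraphs, budget: int):
    """Greedily pack whole paragraphs into chunks that stay under a
    token budget, so no single chunk exceeds the context window."""
    chunks, current, used = [], [], 0
    for p in paragraphs:
        cost = rough_token_count(p)
        if current and used + cost > budget:
            chunks.append("\n\n".join(current))
            current, used = [], 0
        current.append(p)
        used += cost
    if current:
        chunks.append("\n\n".join(current))
    return chunks

def summarize(text: str) -> str:
    # Placeholder for an LLM call; a real implementation would send
    # `text` to a model. Here it just truncates to stay self-contained.
    return text[:200]

def hierarchical_summary(paragraphs, budget: int = 4000) -> str:
    # Map: summarize each chunk. Reduce: summarize the joined summaries.
    partials = [summarize(c) for c in chunk_by_budget(paragraphs, budget)]
    return summarize("\n\n".join(partials))
```

The catch is exactly the failure mode this thread discusses: each chunk is summarized without the context of the others, so points that span chunk boundaries get lost.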