this post was submitted on 18 Sep 2024
444 points (94.4% liked)

Technology


This is a most excellent place for technology news and articles.


[–] [email protected] 1 points 1 month ago

I really don't get how people so easily accept this. This is an engineering problem, not a law of the universe... How would someone possibly prove something is impossible, particularly while the entire branch of technology is rapidly changing?

[–] [email protected] 3 points 1 month ago

I for one support the AI centipede and hope it shits into its own input until it dies

[–] [email protected] 10 points 1 month ago

Anyone who has made copies of videotapes knows what happens to the quality of each successive copy. You're not making a "treasure trove." You're making trash.

[–] [email protected] 1 points 1 month ago

I couldn't care less.

[–] [email protected] 3 points 1 month ago

If we can work out which data conduits are patrolled more often by AI than by humans, we could intentionally flood those channels with AI content, and push Model Collapse along further. Get AI authors to not only vet for "true human content", but also pay licensing fees for the use of that content. And then, hopefully, give the fuck up on their whole endeavor.

[–] [email protected] 9 points 1 month ago

Having flooded the internet with bad AI content, it's now, unsurprisingly, eating itself. Numerous projects that aren't AI are suffering too as the overall quality of text declines.

[–] [email protected] 2 points 1 month ago (1 children)

Well duh. I think a lot of us here learned that lesson from watching the movie Multiplicity.

[–] [email protected] 1 points 1 month ago

Oh, shit. Ummm...it was a funny movie back when it came out, but I haven't seen it in like 25 years so who knows how bad it seems now. Could still be good?

[–] [email protected] 28 points 1 month ago* (last edited 1 month ago)

Uh, good.

As an engineer who cares a LOT about engineering ethics, it is absolutely fucking infuriating watching the absolute firehose of shit that comes out of LLMs and public-consumption audio, image, and video ML systems, juxtaposed with the outright refusal of companies and engineers who work there to accept ANY accountability or culpability for the systems THEY FUCKING MADE.

I understand the nuances of NNs. I understand that they’re much more stochastic than deterministic. So, you know, maybe it wasn’t a great idea to just tell the general public (which runs a WIDE gamut of intelligence and comprehension ability - not to mention, morality) “have at it”. The fact that ML usage and deployment of information-generating/kinda-sorta-but-not-really-aggregating “AI oracles” aren't regulated on the same level as what you’d see in biotech or aerospace is insane to me. It’s a refusal to admit that these systems fundamentally change the entire premise of how “free speech” is generated, and that bad actors (either unrepentantly profit driven, or outright malicious) can and are taking disproportionate advantage of these systems.

I get it - I am a staunch opponent of censorship, and I'm a software engineer myself. But the flippant deployment of literally society-altering technology alongside the outright refusal to accept any responsibility, accountability, or culpability for what that technology does to our society is unconscionable and infuriating to me. I am aware of the potential that ML has - it’s absolutely enormous, and could absolutely change a HUGE number of fields for the better in incredible ways. But that’s not what it’s being used for, and it’s because the field is essentially unregulated right now.

[–] [email protected] 33 points 1 month ago

So AI:

  1. Scraped the entire internet without consent
  2. Trained on it
  3. Polluted it with AI generated rubbish
  4. Trained on that rubbish without consent
  5. Is now in need of a lobotomy

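The feedback loop in that list can be sketched statistically. This is a minimal toy simulation (not from the thread; all names are made up): "train" a model by fitting a normal distribution to its data, generate the next generation's "content" by sampling from the fit, then retrain on that output. Over many generations the spread of the data tends to collapse, which is the core mechanism behind model collapse.

```python
import random
import statistics

def fit_and_resample(data, n):
    # "Train" a toy model: fit a normal distribution to the data,
    # then generate the next generation's "content" from that fit.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(42)

# Generation 0: "human" data drawn from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(20)]
start_spread = statistics.stdev(data)

# Each generation trains only on the previous generation's output.
for generation in range(500):
    data = fit_and_resample(data, 20)

# The diversity (spread) of the data shrinks across generations.
print(statistics.stdev(data) < start_spread)
```

The small sample size per generation is deliberate: estimation error compounds each time the model is refit on its own samples, so rare values disappear first and the distribution narrows - a rough analogue of an LLM losing low-frequency knowledge when retrained on AI-generated text.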
[–] [email protected] 7 points 1 month ago

It's like a human centipede where only the first person is a human and everyone else is an AI. It's all shit, but it gets a bit worse every step.

[–] [email protected] 1 points 1 month ago

Two outcasts among their peers, Gary Wallace and Wyatt Donnelly spent a good deal of their youth as pioneers and early adopters of AI.

[–] [email protected] 18 points 1 month ago (1 children)

have we tried feeding them actual human beings yet?

[–] [email protected] 10 points 1 month ago

Billionaires are the smartest, give them the most knowledge first!

[–] [email protected] 10 points 1 month ago

Oh no. Anyways...
