this post was submitted on 29 Oct 2024
150 points (96.3% liked)

Ask Lemmy


When it comes to dealing with advertisements while they're surfing in their browsers: I just recently learned that Google has killed, or is in the process of killing, uBlock Origin on the Chrome browser, as well as on all Chromium-based browsers.

We've heard for years about people complaining, bitching, whining, and so on about how they keep seeing ads. And those trying to help them keep wasting time telling these people that they're surfing without extensions, whether it be on Chrome, Firefox, or another browser.

By this point, I've long stopped being that helper, because if you cared at all about the advertisements you see, you would have long since gotten on the wagon of installing an ad blocker. You bring this on yourself.

[–] [email protected] 0 points 1 month ago* (last edited 1 month ago) (3 children)

That AI safety is much more important than AI hurting copyright or artists.

I say this because the "AI sucks haha" and "AI just steals" rhetoric is very harmful to the AI safety movement, as people just don't believe AGI, or even close-to-AGI, will be capable enough to harm our society.

Currently, many estimate that there's a 1-20% chance that AGI could end our civilization. So fuck the copyright and fuck the artists; when we're looking at odds like this, we need to start preparing now, even if it's 10 years away.

But alas, nobody can think further than the length of their own nose, and honestly I'm just hoping we're lucky enough to be in that 80%, because clearly we're not going to do anything about it.

[–] [email protected] 1 points 1 month ago (1 children)

Mate

The fake ass AIs we have are straining the power grids of the entire world

AGI literally cannot hurt humanity because a minor brownout would kill it in its cradle.

[–] [email protected] 1 points 1 month ago

That's simply not true. All the datacenters in the world combined (including crypto) use only about 6% of our power.

[–] [email protected] 3 points 1 month ago (1 children)

AI safety is definitely an important thing but when you follow it up with "AGI could end our civilization" you lose me.

[–] [email protected] 2 points 1 month ago

It sounds hyperbolic, but if you assume it will reach human-level intelligence and will have the ability to update its own code, you very quickly have something much smarter than us. Whether it will want to help or hurt us is an unknown. But I doubt we can control something that's smarter than us (and getting smarter every second).
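The "getting smarter every second" intuition in the comment above is essentially a compound-growth argument. It can be sketched as a toy model; the 10% improvement rate per step is an arbitrary illustrative assumption, not an empirical claim about any real system:

```python
# Toy model of recursive self-improvement (illustrative only; the
# growth rate r is an invented assumption, not a measured quantity).
def capability_over_time(start: float, r: float, steps: int) -> list[float]:
    """Each step, the system improves itself in proportion to its
    current capability, giving compound (exponential) growth."""
    levels = [start]
    for _ in range(steps):
        levels.append(levels[-1] * (1 + r))
    return levels

levels = capability_over_time(start=1.0, r=0.10, steps=50)
print(f"after 50 steps: {levels[-1]:.1f}x the starting level")  # ~117x
```

The point of the sketch is only that small, repeated self-applied gains compound quickly; it says nothing about whether such a feedback loop is actually achievable.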

[–] [email protected] 10 points 1 month ago (3 children)

That would require an actual AGI to emerge, which it has not and is not going to. LLMs are fancy text prediction tools and little more.

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago)

which it has not and is not going to

So you're confident that AGI is not fundamentally possible? That would contradict basically every single scientist in the world, and this is exactly why this issue is so difficult. Ironically, you're proving my point for the OP's question, lol.

[–] [email protected] 2 points 1 month ago (3 children)

Are you assuming LLMs are the only way humans could ever try making an AGI? If so, why do you assume that?

[–] [email protected] 1 points 1 month ago

If people start developing a new, more promising kind of "AI", we can talk about it then. For now, the thing we call "AI" sucks and just steals.

[–] [email protected] 3 points 1 month ago (1 children)

There's more important shit to worry about than whether an unproven sci-fi concept will come into being any time soon.

[–] [email protected] 1 points 1 month ago

Yeah, agreed. That’s not what I asked though.

This response is a bit of a misdirection since we all discuss shit that isn’t the most important all the time.

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago) (1 children)

I agree that AGI is dangerous, but I don't see LLMs as evidence that we're close to AGI. I think they should be treated as separate issues.

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago) (1 children)

Given what I think I know about LLMs, I agree. I don’t think they’re the path to AGI.

The person I replied to said AGI was never going to emerge.

[–] [email protected] 1 points 1 month ago

I had meant to say AGI would never emerge from our current attempts at creating them.

[–] [email protected] 2 points 1 month ago (2 children)

What we see in AI as average consumers is like an RC Hot Wheels car compared to the state-of-the-art tank being used by big corporations.

Just imagine: if an early LLM can fool an engineer into thinking it's sentient, what can a state-of-the-art system do? One designed to predict the market, run propaganda bots on social media, or straight up manufacture news stories, with the footage to back them up.

The AI being used by big corporations is so advanced, it's one of the reasons countries have been trying to digitally isolate themselves. It's really not an if, it's a when.

[–] [email protected] 2 points 1 month ago (1 children)

The "AI" being used by big corporations is still fundamentally an LLM and has all the flaws of an LLM. It's not a Hot Wheels car vs. a tank; it's a Hot Wheels car vs. a $2 billion RC car.

[–] [email protected] 1 points 1 month ago (1 children)

I'd like to get into how both the OP and I are talking about how fast AI, not just LLMs, is scaling, and the potential it has across a variety of industries; most concerning to me is its use by investment firms. But I need to go to the barber, because I already have enough split hairs.

[–] [email protected] 1 points 1 month ago

It is my understanding that the fundamental architecture (the general-purpose transformer) is identical between the "AI" used by BlackRock and the one used by OpenAI.

If you have some evidence to the contrary, I'd always appreciate the chance to learn.

But the transformer-based architecture is fundamentally flawed: it will always hallucinate.
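One common way to gesture at why hallucination is hard to eliminate: a language model ends in a softmax over tokens, which assigns every candidate some probability based on plausibility, not truth, so a fluent-but-wrong continuation can always be sampled. A minimal sketch of that last step, with logit values invented purely for illustration:

```python
import math

# Numerically stable softmax: converts raw scores into a probability
# distribution in which every candidate gets nonzero mass.
def softmax(logits: list[float]) -> list[float]:
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores after "The capital of Australia is".
# These numbers are made up; they only illustrate the mechanism.
logits = {"Canberra": 2.0, "Sydney": 1.7, "Melbourne": 0.9}
probs = dict(zip(logits, softmax(list(logits.values()))))
# The wrong-but-plausible answer ("Sydney") still gets substantial
# probability mass, so sampling will sometimes produce it.
```

Whether this mechanism means transformers "will always hallucinate" is the contested claim in the comment above; the sketch only shows that sampling from a plausibility distribution cannot by itself rule out false outputs.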

[–] [email protected] 4 points 1 month ago (1 children)

I'm not sure you understand what AGI is, and why we're not going to invent it any time soon.

[–] [email protected] 0 points 1 month ago

I do. I did get a little lost in the weeds with my point though, as I was talking in a more general sense about how AI is already powerful and dangerous - because AI safety is a subject in this thread.