[–] [email protected] 2 points 2 days ago (1 children)

> The AIs can definitely get more advanced, sure, but with that should come some sort of efficiency.

This is what AI researchers/pundits believed until roughly 2020, when it was discovered that you could brute-force your way to more capable AIs (the so-called "scaling laws") just by massively scaling up existing algorithms. That's essentially what tech companies have been doing ever since. Nobody knows where the limit is, but as far as I know nobody has any good evidence to suggest we're anywhere near the limit of what scaling can do.
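
If it helps to see what a "scaling law" actually looks like, here's a rough sketch of the Chinchilla-style power law, where loss falls predictably as parameter count N and training tokens D grow. The coefficients are roughly the fits reported by Hoffmann et al. (2022), but treat the exact numbers as illustrative:

```python
# Sketch of a Chinchilla-style scaling law: loss as a power law in
# parameter count N and training tokens D. Coefficients roughly follow
# the fits reported by Hoffmann et al. (2022); treat them as illustrative.
def predicted_loss(n_params: float, n_tokens: float) -> float:
    E, A, B = 1.69, 406.4, 410.7   # irreducible loss + fit constants
    alpha, beta = 0.34, 0.28       # power-law exponents
    return E + A / n_params**alpha + B / n_tokens**beta

# Same algorithm, more scale: predicted loss keeps falling as N and D grow.
# (Using the rough Chinchilla rule of thumb of ~20 tokens per parameter.)
for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params: predicted loss {predicted_loss(n, 20 * n):.3f}")
```

The point of the sketch is that the curve is smooth: there's no kink in the formula telling you where scaling stops paying off, which is why nobody knows where the limit is.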

> We’re also seemingly on the cusp of quantum computing, which I imagine would reduce power requirements.

Quantum computers are not simply faster than classical computers. Quantum computing offers efficiency advantages for certain specific algorithms, such as breaking some types of encryption. As far as I'm aware, nobody is really looking to replace classical computers with quantum ones in general. Even if they did, I don't think anyone has thought of a way to accelerate AI with quantum computing. And even if there were a way, it would presumably require quantum computers something like 15 orders of magnitude more powerful than the ones we have today.
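
To put numbers on "advantages for some particular algorithms": Grover's algorithm, the textbook quantum search speedup, cuts an unstructured search over 2^n items down to roughly 2^(n/2) queries. That's a quadratic speedup, not a magic exponential one. A back-of-the-envelope sketch (idealized query counts, not real hardware runtimes):

```python
import math

# Back-of-the-envelope: Grover's algorithm gives a quadratic speedup on
# unstructured search -- about sqrt(2^n) queries instead of 2^n.
# These are idealized query counts, not real runtimes.
for key_bits in (64, 128, 256):
    classical = 2 ** key_bits
    grover = math.isqrt(classical)  # ~2**(key_bits / 2)
    print(f"{key_bits}-bit search: classical ~2^{key_bits}, "
          f"Grover ~2^{key_bits // 2} ({classical / grover:.1e}x fewer queries)")
```

A quadratic speedup is great for codebreaking, but it says nothing about the dense matrix multiplication that dominates AI workloads, which GPUs already do extremely well.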

> We have very, very real and very, very large environmental concerns that need addressing.

Yeah. I don't think AI is at the top of the list of environmental concerns, though, especially since it looks plausible that it will drive investment in nuclear power, which would be a net positive IMO. (Cooling could still be an issue, though.)

[–] [email protected] 2 points 2 days ago (1 children)

How do they brute-force their way to a better algorithm? Just trial and error? How do they check outcomes to determine that their new model is good?

I don't expect you to answer those musings - you've been more than patient with me.

Honestly, I'm a tree hugger, and the fact that we aren't going for nuclear simply because of smear campaigns and changes in public opinion is insanity. We already treat some mining wastes in perpetuity, or plan to have them entombed for the rest of time - how is nuclear waste any different?

[–] [email protected] 2 points 1 day ago

It's not brute-forcing toward a better algorithm per se. It's the same algorithm, exactly as "stupid," just run with more force (more numerous and more powerful GPUs).
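
To make "same algorithm, more force" concrete: the gap between a small and a frontier-scale transformer is mostly a handful of size knobs, not new code. A purely illustrative sketch (the numbers roughly match the published GPT-2 small and GPT-3 sizes, but the `TransformerConfig` class itself is hypothetical):

```python
from dataclasses import dataclass

# Illustrative only: the training code consuming this config is identical
# for both models; only the size knobs (and the GPU count) change.
@dataclass
class TransformerConfig:
    n_layers: int
    d_model: int   # hidden width
    n_heads: int

# Roughly the published sizes -- same architecture, ~1000x the parameters.
gpt2_small = TransformerConfig(n_layers=12, d_model=768, n_heads=12)    # ~124M params
gpt3       = TransformerConfig(n_layers=96, d_model=12288, n_heads=96)  # ~175B params
```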

There are benchmarks to check whether the model is "good" -- for instance, how well it does on standardized tests similar to the SATs. (Researchers are very careful to ensure that the questions don't appear anywhere on the internet, so that the model can't just memorize the answers.)
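
If it helps, here's a minimal sketch of how that kind of benchmark scoring works. `model_answer` is a hypothetical stand-in for querying the model, and the toy questions are obviously made up; real benchmarks (MMLU and the like) work the same way, just with thousands of held-out questions:

```python
# Minimal benchmark-scoring sketch: held-out questions in,
# fraction answered correctly out.
def score(model_answer, questions: list[dict]) -> float:
    correct = sum(
        model_answer(q["prompt"], q["choices"]) == q["answer"]
        for q in questions
    )
    return correct / len(questions)

# Toy example with a "model" that always picks choice A:
toy_questions = [
    {"prompt": "2 + 2 = ?", "choices": ["A) 4", "B) 5"], "answer": "A"},
    {"prompt": "Capital of France?", "choices": ["A) Lyon", "B) Paris"], "answer": "B"},
]
print(score(lambda prompt, choices: "A", toy_questions))  # 0.5
```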