this post was submitted on 04 May 2025
99 points (80.4% liked)

Technology

[–] [email protected] 7 points 5 days ago* (last edited 5 days ago) (19 children)

I find it funny that in the year 2000, while studying philosophy at the University of Copenhagen, I predicted strong AI around 2035. This was based on calculations of computational power and estimates of software development trailing a bit.
At the time I had already been interested in AI development and matters of consciousness for many years, and I was a decent programmer; I had already written self-modifying code back in 1982. So I made this prediction at a time when AI wasn't a very popular topic, in the middle of a decades-long, futile desert walk without much progress.

And for about 15 years, very little continued to happen. It was pretty obvious that the approach behind, for instance, Deep Blue wasn't the way forward, but that seemed to be the norm for a long time.
But it looks to me like the understanding of how to build a strong AI is much, much closer now, as I expected. We might actually be halfway there!
I think we are pretty close to having the computational power needed now in AI-specific datacenter clusters, but the software isn't quite there yet.
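For what it's worth, that kind of prediction is basically a compute extrapolation. A toy sketch of the shape of the arithmetic, where every number (brain-scale FLOPS, year-2000 compute, doubling time) is an illustrative assumption of mine rather than the original calculation:

```python
import math

brain_flops = 1e16     # assumed "brain parity" compute, a commonly cited ballpark
flops_in_2000 = 1e9    # assumed affordable compute around the year 2000
doubling_years = 2.0   # assumed Moore's-law-style doubling time

doublings = math.log2(brain_flops / flops_in_2000)
parity_year = 2000 + doublings * doubling_years
print(f"~{doublings:.1f} doublings -> hardware parity around {parity_year:.0f}")
# ~23.3 doublings -> ~2047 with these inputs; a faster ~1.5-year doubling
# time lands in the mid-2030s, which is the shape of the argument above.
```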

I'm honestly not that interested in the current level of AI. Although LLMs can yield very impressive results at times, they're also flawed, and I see them as somewhat transitional.
For instance, partially self-driving cars are kind of irrelevant IMO. But truly self-driving cars will make all the difference in usefulness, and will be a cool achievement for the current level of AI evolution when achieved.

So current-level AI can be useful, but when we achieve strong AI, it will make all the difference!

Edit PS:
Obviously my prediction relied on the assumption that brains and consciousness are natural phenomena that don't require a god, an assumption I personally consider a fact.

[–] [email protected] 3 points 5 days ago (1 children)

Regarding energy/water use:

ChatGPT uses 3 Wh. This is enough energy to: [...] Play a gaming console for 1 minute.

If you want to prompt ChatGPT 40 times, you can just stop your shower 1 second early. If you normally take a 5 minute shower, set a timer for 299 seconds instead, and you’ll have saved enough water to justify 40 ChatGPT prompts.

(Source: https://andymasley.substack.com/p/a-cheat-sheet-for-conversations-about)
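As a sanity check, here is the arithmetic behind those equivalences in a minimal Python sketch. The 3 Wh per prompt comes from the quote above; the showerhead flow rate and console wattage are round-number assumptions of mine, not measurements:

```python
ENERGY_PER_PROMPT_WH = 3.0    # per-prompt estimate quoted above
SHOWER_FLOW_L_PER_MIN = 7.9   # assumed typical showerhead flow rate
CONSOLE_DRAW_W = 180          # assumed gaming-console power draw

water_saved_l = SHOWER_FLOW_L_PER_MIN / 60  # one second less shower
print(f"1 s of shower ~= {water_saved_l:.2f} L "
      f"~= {water_saved_l / 40 * 1000:.1f} mL per prompt over 40 prompts")

gaming_minutes = ENERGY_PER_PROMPT_WH / CONSOLE_DRAW_W * 60
print(f"{ENERGY_PER_PROMPT_WH:.0f} Wh ~= {gaming_minutes:.0f} min of console gaming")
```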

[–] [email protected] 38 points 5 days ago (2 children)

At least anecdotally, Andreas over at 82MHz.net tried running an AI model locally on his laptop and it took over 10 minutes for just one prompt.

OK, just the fourth sentence clearly shows this person has no clue what they're talking about.

[–] [email protected] 38 points 6 days ago* (last edited 6 days ago) (8 children)

These endless "AI bad" articles are annoying. It's just clickbait at this point.

Energy use: false. His example was someone using a 13-year-old laptop to get a result and then extrapolating energy use from that. Running AI locally uses the same energy as playing a 3D AAA game for the same amount of time. No one screams about the energy footprint of playing games.

AAA game development energy use (thousands of developers, all with watt-burning GPUs, spending years creating assets) dwarfs AI model-building energy use.
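On the running side, the comparison is direct because energy is just power draw times time. A minimal sketch, assuming a 300 W high-end desktop GPU (my number, not a measurement):

```python
GPU_DRAW_W = 300  # assumed high-end desktop GPU draw under load

def energy_wh(seconds: float, draw_w: float = GPU_DRAW_W) -> float:
    """Energy in watt-hours for a GPU-bound workload of the given length."""
    return draw_w * seconds / 3600

print(f"10 min of AAA gaming:     {energy_wh(600):.0f} Wh")
print(f"10 min of local LLM use:  {energy_wh(600):.0f} Wh")  # identical: same GPU, same time
print(f"one ~30 s local response: {energy_wh(30):.1f} Wh")   # ~2.5 Wh, near the 3 Wh/prompt
                                                             # figure quoted in this thread
```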

Copyright, yes it's a problem and should be fixed. But stealing is part of capitalism. Google search itself is based on stealing content and then selling ads to find that content. The entire "oh, we might send some clicks your way that you might be able to be compensated for" attitude is backwards.

His last reason was new and completely absurd: he doesn't like AI because he doesn't like Musk. Given the public hatred between OpenAI and Musk, it's bizarre. Yes, Musk has his own AI, but Musk also has electric cars and space travel. Does the author hate all EVs too? Of course not; that argument was added by the author as a troll to get engagement.

[–] [email protected] 9 points 5 days ago (1 children)

Hi, I'm the writer of the article.

To be clear, I'm not trying to attack anyone who uses AI, just to explain why I don't use it myself.

Energy use: false

I don't dispute that AI energy use is/might be comparable to other things like making a AAA game (or traveling). I also don't want to say that 'AI is bad'. However, if I used AI more, I would still play the same amount of video games, thus increasing my total energy use. If I were to use AI, it would probably replace lower-energy activities like writing or searching the internet.

Copyright, yes it’s a problem and should be fixed. But stealing is part of capitalism. Google search itself is based on stealing content and then selling ads to find that content.

I agree with you that the copyright angle is a bad way to attack AI. However, AI does seem to 'give back' to creatives even less than things like search, while actively competing with them in a way that search doesn't. This isn't my main objection, so I don't really want to focus on it.

His last reason was new and completely absurd

I considered leaving out the "I just don't like it" reason, but I wanted to be completely transparent that my decision isn't objective. This is only one reason out of many; if it were just this problem, I would be quicker to ignore it. I get your point about EVs: I don't hate them despite the fact that Musk is/was an advocate for them. If I were to use an AI, it would be something like Jan.ai, which @[email protected] mentioned.

Do you agree with me on my other main point, about reliability?

[–] [email protected] 12 points 5 days ago (3 children)

Running AI locally uses the same energy as playing a 3D AAA game for the same amount of time

I wonder if they're factoring in the energy used to train the model. That's what consumes most of the power.
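For a rough sense of scale, a sketch amortizing training energy over lifetime queries. The 1,287 MWh training figure is the published estimate for GPT-3 (Patterson et al., 2021); the 3 Wh per query is the figure quoted elsewhere in this thread. Treat both as loose assumptions for current models:

```python
TRAINING_WH = 1_287e6  # 1,287 MWh: published GPT-3 training estimate (Patterson et al., 2021)
PER_QUERY_WH = 3.0     # per-prompt inference figure quoted in this thread

breakeven = TRAINING_WH / PER_QUERY_WH
print(f"Inference energy passes training energy after ~{breakeven:,.0f} queries")
# ~429,000,000 queries. A service handling tens of millions of prompts per day
# crosses that within weeks, so which side dominates depends on how heavily
# the model ends up being used.
```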

[–] [email protected] 1 points 5 days ago

I agree with the part that Musk sucks; OpenAI also sucks.

And yup, open-source (if you can really call them that; I'd say they're more like "openly available") locally hosted LLMs are cool and have gotten pretty efficient nowadays.

My 5-year-old M1 MacBook Pro runs models like Qwen3:14b at decent speeds, and it's quite capable (although I only ever use it for bullshitting lol).
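For anyone curious what that looks like in practice, a minimal sketch of querying a locally hosted model through Ollama's HTTP API. It assumes Ollama is running on the default port and the model has already been pulled with `ollama pull qwen3:14b`:

```python
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",  # Ollama's default local endpoint
    json={
        "model": "qwen3:14b",
        "messages": [{"role": "user", "content": "Why do local LLMs help privacy?"}],
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=300,  # a 14B model on a laptop can take a while per reply
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```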

[–] [email protected] 2 points 5 days ago

I agree, though there are still good reasons not to use commercial AI products:

https://www.anthropic.com/news/securing-america-s-compute-advantage-anthropic-s-position-on-the-diffusion-rule

https://www.mintpressnews.com/trump-killed-minerva-stargate-make-secret-more-dangerous/289313

A new AI/informational war arms race? Whatever, because...

I just don't like it

[–] [email protected] 2 points 6 days ago (11 children)

Copyright, yes it's a problem and should be fixed.

No, this is just playing into another of the common anti-AI fallacies.

Training an AI does not do anything that copyright is even involved with, let alone prohibited by. Copyright is solely concerned with the copying of specific expressions of ideas, not about the ideas themselves. When an AI trains on data it isn't copying the data; the model doesn't "contain" the training data in any meaningful sense. And the output of the AI is even further removed.

People who insist that AI training violates copyright are advocating for ideas and styles to be covered by copyright, or rather by some entirely new type of IP protection, since, as I said, this is nothing at all like what copyright already deals with. That would be an utterly terrible thing for culture and free expression in general if it were to come to pass.

I get where this impulse comes from. Modern society has instilled a general sense that everything has to be "owned" by someone, even completely abstract things. Everyone thinks that they're owed payment for everything that they can possibly demand payment for, even if it's something that just yesterday they were doing purely for fun and releasing to the world without a care. There's this base impulse of "mine! Therefore I must control it!" Ironically, it's what leads to the capitalist hellscape so many people are decrying at the same time they demand more.

[–] [email protected] 2 points 5 days ago (6 children)

If a larger YouTuber steals the script and content of a video from a smaller YouTuber, as far as I know it wouldn't be illegal. It would hurt the smaller YouTuber and benefit the larger one, and it would make people mad if they found out about it, but there wouldn't be people proposing to change copyright law to cover ideas.

I'm using YouTubers as the example because this actually happened and a lot of people got angry, and it's similar to the AI situation. People can complain that something unethical is legal without having to propose a flawless new copyright law.

[–] [email protected] 13 points 6 days ago* (last edited 6 days ago) (2 children)

Copyright, yes it's a problem and should be fixed.

The quick fix: stick to open source, like Jan.ai.

Long-term solution: make profiting AI companies pay for UBI. How to actually calculate that, though, is anyone's guess...

[–] [email protected] 15 points 6 days ago

Don't make "profiting AI companies" pay for UBI. Make all companies pay for UBI. Just tax their income and turn it around into UBI payments.

One of the major benefits of UBI is how simple it is; the simpler the system, the harder it is to game. If you put a bunch of caveats on which companies pay more or less based on various factors, there'll be tons of faffing about to dodge those taxes.
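A toy illustration of that simplicity argument, with entirely hypothetical numbers:

```python
corporate_income_b = {  # annual taxable income in $B (hypothetical)
    "ai_lab": 40.0,
    "retailer": 120.0,
    "automaker": 85.0,
}
UBI_TAX_RATE = 0.05     # one flat rate for every company: no category to lobby out of
POPULATION = 330e6      # assumed number of recipients

pool = sum(corporate_income_b.values()) * 1e9 * UBI_TAX_RATE
print(f"UBI pool: ${pool / 1e9:.2f}B/yr -> ${pool / POPULATION:,.0f} per person per year")
# The amount doesn't matter here; the point is that a single rate leaves
# no special-case boundaries for companies to restructure around.
```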

[–] [email protected] 1 points 6 days ago

make profiting AI companies pay for UBI

As I said, many companies steal content and repackage it for sale. Google did it long before AI; AI is only the most recent offender. Courts have been splitting hairs for decades over musical similarities, and that's ignoring that entire genres are based on copying the work of influential artists.

[–] [email protected] 23 points 6 days ago (1 children)

OP said "people like Musk", not just Musk. He's just the easiest example to use.
