this post was submitted on 13 Nov 2024
531 points (95.5% liked)

Technology

[–] [email protected] 1 points 9 minutes ago

🤷‍♂️ I only use local generators at this point, so I don't care.

[–] [email protected] 6 points 27 minutes ago

Oh no!

Anyway...

[–] [email protected] 3 points 49 minutes ago

Fingers crossed.

[–] [email protected] 3 points 1 hour ago
[–] [email protected] 4 points 1 hour ago

I think I've heard about enough of experts predicting the future lately.

[–] [email protected] 0 points 2 hours ago (1 children)

Well, classic computers will always be limited and power hungry. Quantum computers are the key to AI achieving the next level.

[–] [email protected] 1 points 22 minutes ago

Quantum computers are only good at a very narrow subset of tasks. None of those tasks are related to Neural Networks, AGI, or the emulation of neurons.

[–] [email protected] 21 points 4 hours ago (1 children)

Thank fuck. Can we have cheaper graphics cards again please?

I'm sure an RTX 4090 is very impressive, but it's not £1800 impressive.

[–] [email protected] 4 points 3 hours ago (1 children)

I swapped to AMD this generation and it's still expensive.

[–] [email protected] 1 points 1 hour ago (2 children)

A well-researched pre-owned card is the way to go. I bought a 6900 XT a couple of years ago for a good deal.

[–] [email protected] 1 points 1 minute ago

I used to buy broken video cards on eBay for ~$25-50. The ones that run but shut off have clogged heat sinks: no tools or parts required, just blow out the dust. Obviously more risky, but sometimes you can hit gold.

[–] [email protected] 1 points 1 hour ago

I used to get EVGA B-stock, which was reasonable, but they got out of the business 😞

[–] [email protected] 7 points 4 hours ago* (last edited 4 hours ago)

Marcus is right, incremental improvements in AIs like ChatGPT will not lead to AGI and were never on that course to begin with. What LLMs do is fundamentally not "intelligence", they just imitate human response based on existing human-generated content. This can produce usable results, but not because the LLM has any understanding of the question. Since the current AI surge is based almost entirely on LLMs, the delusion that the industry will soon achieve AGI is doomed to fall apart - but not until a lot of smart speculators have gotten in and out and made a pile of money.

[–] [email protected] 33 points 7 hours ago (1 children)

It's so funny how all this is only a problem within a capitalist frame of reference.

[–] [email protected] 2 points 27 minutes ago

What they call "AI" is only "intelligent" within a capitalist frame of reference, too.

[–] [email protected] 21 points 8 hours ago (4 children)

The hype should go the other way. Instead of bigger and bigger models that do more and more - have smaller models that are just as effective. Get them onto personal computers; get them onto phones; get them onto Arduino minis that cost $20 - and then have those models be as good as the big LLMs and Image gen programs.

[–] [email protected] 12 points 4 hours ago (1 children)

Other than with language models, this has already happened: take a look at apps such as Merlin Bird ID (identifies birds fairly well by sound and somewhat okay visually), WhoBird (identifies birds by sound), and Seek (visually identifies plants, fungi, insects, and animals). All of them work offline. IMO these are much better uses of ML than spammer-friendly text generation.

[–] [email protected] 1 points 2 hours ago

PlantNet and iNaturalist are pretty good for plant identification as well; I use them all the time to find out what's volunteering in my garden. Just looked them up, and it turns out Seek is by iNaturalist.

[–] [email protected] 3 points 4 hours ago* (last edited 4 hours ago)

Well, you see, that's the really hard part of LLMs. Getting good results is a direct function of the size of the model: the bigger the model, the more effective it can be at its task. However, there's something called the compute efficient frontier (technical but neatly explained video about it). Basically, you can't make a model more effective in its computations beyond that boundary for any given size. The only way to make a model better is to make it larger (what most megacorps have been doing) or to radically change the algorithms and methods underlying the model. But the latter has been proving extraordinarily hard, mostly because to understand what is going on inside the model you need to think in rather abstract and esoteric mathematical principles that bend your mind backwards.

You can compress an already trained model to run on smaller hardware, but to train one you still need humongously large datasets and power-hungry processing. This is compounded by the fact that larger and larger models are ever more expensive while providing rapidly diminishing returns.

Oh, and we are quickly running out of quality usable data, so shoveling in more data after a certain point starts to actually produce worse results, unless you dedicate thousands of hours of human labor to producing, collecting, and cleaning new data. That's all before you even have to address data poisoning, where previously LLM-generated data is fed back to train a model, and it is very hard to prevent the model from devolving into incoherence after a couple of generations.
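The diminishing returns described above can be sketched with a toy power law in Python. This is only an illustration: the constants below are assumptions loosely modeled on published scaling-law fits, not real measurements of any particular model.

```python
# Toy sketch of scaling-law diminishing returns (constants are assumptions).
# Loss falls as a power law in parameter count N, so each 10x increase in
# model size buys a smaller absolute improvement than the previous one.

def loss(n_params, a=406.4, alpha=0.34, irreducible=1.69):
    # Power-law form: loss = a / N^alpha + irreducible floor.
    return a * n_params ** -alpha + irreducible

sizes = [1e9, 1e10, 1e11, 1e12]           # 1B .. 1T parameters
losses = [loss(n) for n in sizes]
gains = [losses[i] - losses[i + 1] for i in range(len(losses) - 1)]
for size, gain in zip(sizes, gains):
    print(f"10x from {size:.0e} params: loss improves by {gain:.3f}")
```

Each tenfold jump in size (and cost) yields a strictly smaller improvement than the last, which is the "diminishing returns" the comment describes.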

[–] [email protected] 7 points 5 hours ago

This has already started to happen. The new llama3.2 model is only 3.7GB and it's WAAAAY faster than anything else. It can throw a wall of text at you in just a couple of seconds. You're still not running it on $20 hardware, but you no longer need a 3090 to have something useful.

[–] [email protected] 1 points 6 hours ago (1 children)

That would be innovation, which I'm convinced no company can do anymore.

It feels like every time I learn about one of our modern innovations, it turns out it was already thought up and written down in a book in the 1950s, and just wasn't possible at the time due to some limitation in memory, precision, or some other metric. All we did was five decades of marginal improvement to get there, while not innovating much at all.

[–] [email protected] 2 points 5 hours ago

Are you talking about something specific?

[–] [email protected] 14 points 9 hours ago

This is why you're seeing news articles from Sam Altman saying that AGI will blow past us without any societal impact. He's trying to lessen the blow of the bubble bursting for AI/ML.
