this post was submitted on 06 Nov 2024
PC Gaming
People running local LLMs aren't the target. The target is people who use things like ChatGPT and Copilot on low-power PCs and who might benefit from edge inference acceleration. Every major LLM provider dreams of offloading compute onto end users; it would save them tons of money.
But you can't offload "usable" LLMs to client hardware without lots of memory bandwidth and plenty of RAM. It's just not physically possible.
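To see why bandwidth is the wall, here's a rough back-of-envelope sketch (the numbers are illustrative assumptions, not benchmarks): autoregressive decoding touches every weight once per generated token, so the tokens/s ceiling is roughly memory bandwidth divided by model size.

```python
# Back-of-envelope estimate: memory-bound decode speed.
# Each generated token reads ~all model weights once, so:
#   tokens/s ceiling ~= bandwidth (GB/s) / model size (GB)

def est_tokens_per_sec(model_gb: float, bandwidth_gbs: float) -> float:
    """Upper bound on tokens/s for memory-bound autoregressive decode."""
    return bandwidth_gbs / model_gb

# Hypothetical figures: a 7B model at ~4-bit quantization is ~4 GB of weights.
print(est_tokens_per_sec(4.0, 90.0))   # dual-channel DDR5 iGPU-class bandwidth
print(est_tokens_per_sec(4.0, 400.0))  # Apple M-class "Max" unified memory
```

Roughly 22 tok/s on a typical iGPU's shared DDR5 versus 100 tok/s on high-bandwidth unified memory, which is why the bandwidth gap, not raw compute, decides whether offloading feels usable.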
You can run small models like Phi pretty quickly, but I don't think people will be satisfied with that for Copilot, even as basic autocomplete.
IMO, the threshold where offloading becomes viable is about 2x the performance of Intel's current iGPUs. And that's exactly what AMD and Apple are shipping.