this post was submitted on 21 May 2024

Do you think it will be possible to run GNU/Linux operating systems on Microsoft's brand-new "Copilot+ PCs"? They were unveiled just yesterday, and honestly, the sales pitch is quite impressive! A Verge article on them: Link

[–] [email protected] 3 points 6 months ago (2 children)

Aren’t there things like QEMU and Box64…

Yeah, but they're experimental and probably very buggy. I've used Box64 on my phone, and it doesn't play well with everything.
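
For context, this is roughly how those translation layers get invoked on an ARM64 host. A minimal Python sketch, not an authoritative recipe; the binary name and the qemu sysroot path are just illustrative assumptions:

```python
#!/usr/bin/env python3
"""Rough sketch: run an x86-64 Linux binary on an ARM64 host via Box64 or qemu-user."""
import platform
import shutil
import subprocess
import sys

def run_x86_64(binary: str, *args: str) -> int:
    """Pick an available translation layer and execute the given x86-64 binary."""
    if platform.machine() == "x86_64":
        # Native x86-64 host: no translation needed.
        cmd = [binary, *args]
    elif shutil.which("box64"):
        # Box64 translates x86-64 userspace binaries on ARM64.
        cmd = ["box64", binary, *args]
    elif shutil.which("qemu-x86_64"):
        # qemu-user emulation; -L points at an x86-64 sysroot so the guest
        # dynamic linker and libraries can be found (path is an assumption).
        cmd = ["qemu-x86_64", "-L", "/usr/x86_64-linux-gnu", binary, *args]
    else:
        raise RuntimeError("no x86-64 translation layer found")
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    sys.exit(run_x86_64("./my_x86_64_app"))  # hypothetical binary
```
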

Can it do LLM inference as fast as an M2/M3 MacBook?

Allegedly, it should be better at AI workloads than M-series laptops. Many manufacturers have actually started listing prices for the new laptops; the new Microsoft ones start at 16 GB of RAM for $1,000. I know the Lenovo one can be configured with up to 64 GB of RAM, but I'm not sure about its pricing.

[–] [email protected] 2 points 5 months ago* (last edited 5 months ago)

By the time Snapdragon X Elite devices are broadly available, you'll probably have to compare them against the M4. Apple specifies the M4's NPU at 38 TOPS, while Qualcomm specifies the Snapdragon X Elite at 45 TOPS, but I wouldn't bet on those numbers being directly comparable (just like TFLOPS figures from different GPU manufacturers).

The M4 also made quite a big jump in single-core performance, and its multi-core performance seems comparable to what the X Elite can achieve, unless we're talking about the X Elite's 80-watt mode, but then we'd have to take Apple's "Pro" and "Max" chips into account. Keep in mind that the current M4 performance figures come from a roughly 5 mm thick, passively cooled device. It will be interesting to see whether Qualcomm releases bigger chips on this architecture.

Price is obviously where the X Elite could shine. There'll be plenty of devices to choose from (once they're actually broadly available), and if you need anything above the base models (which at Apple mostly start at 8 GB of RAM and a 256 GB SSD), you'll likely pay a lot less for upgrades than Apple's absolutely ridiculous upgrade pricing. Price to performance might be very good here.

If and when Linux distributions start seamlessly supporting x86 apps on ARM, I'll be interested in a thin-and-light ARM device, provided it really turns out to be that much more energy efficient than x86 chips. Most comparisons use Intel as the reference for x86 efficiency, but AMD has a decent lead there, and I feel it's not as far off of ARM chips as the marketing makes it seem. So for the time being, I think going with something like an AMD Ryzen 7840U/8840U is the way to go for the broadest Windows/Linux compatibility while still achieving decent efficiency.

[–] [email protected] 2 points 6 months ago* (last edited 6 months ago)

Hmmh. I can't really make an informed statement. I can't fathom QEMU being experimental; it's a roughly 20-year-old project used by lots of people. I'm not sure. And I've yet to try Box64.

I looked it up. The Snapdragon X Elite "Supports up to 64GB LPDDR5, with 136 GB/s memory bandwidth", while the Apple M2/M3 chips have anywhere from 100 GB/s of memory bandwidth on the base models up to 150, 300, or 400 GB/s on the bigger variants (800 GB/s in the Ultra). And a discrete graphics card has something like ~300 to ~1000 GB/s.

(Of course that's only relevant for running large language models.)
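
To put those bandwidth numbers in perspective for LLMs: token-by-token generation on dense models is usually limited by how fast the weights can be streamed from memory, so a rough ceiling on decode speed is bandwidth divided by the size of the (quantized) weights. A back-of-the-envelope sketch, where the model size is an illustrative assumption and the bandwidth figures are the ones quoted above:

```python
# Back-of-the-envelope ceiling for dense-LLM token generation, which is
# usually memory-bandwidth-bound: each generated token streams roughly the
# whole set of (quantized) weights from RAM.

def max_tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Theoretical upper bound on decode speed; ignores compute, KV cache, etc."""
    return bandwidth_gb_s / model_size_gb

MODEL_SIZE_GB = 3.9  # e.g. a 7B model at ~4.5 bits per weight (assumption)

for name, bandwidth in [
    ("Snapdragon X Elite", 136),
    ("M2/M3 base", 100),
    ("M3 Pro", 150),
    ("M3 Max", 400),
    ("M2 Ultra", 800),
]:
    print(f"{name:>18}: <= {max_tokens_per_second(bandwidth, MODEL_SIZE_GB):4.0f} tok/s")
```

By that rough ceiling, the X Elite lands a bit above the base M2/M3 but far below the Max/Ultra parts for local token generation; real-world numbers will be lower across the board, but the relative ordering should hold.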