this post was submitted on 25 Feb 2025

Framework Laptop Community

[–] [email protected] 0 points 2 months ago* (last edited 2 months ago) (1 children)

For inference (running previously trained models, which needs lots of RAM), the desktop could be useful, but I would be surprised if training anything bigger than toy examples made sense on this hardware, because I expect compute performance to be the limit.

Does anyone here have recent practical experience with ROCm and how it compares with the far-more-dominant CUDA? I would imagine compatibility is much better now that most models use PyTorch, which is supported, but how does performance compare with a dedicated Nvidia GPU?
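For what it's worth, a quick way to see which backend a given PyTorch install would actually use: on ROCm wheels, the regular `torch.cuda` API is routed through HIP, and `torch.version.hip` is set instead of `torch.version.cuda`. A minimal sketch (the `describe_backend` helper is my own naming, not a PyTorch API):

```python
def describe_backend(hip_version, cuda_version, gpu_available):
    """Classify a PyTorch build from its reported version strings."""
    if not gpu_available:
        return "CPU only"
    if hip_version:
        # ROCm wheels expose torch.version.hip instead of a CUDA version.
        return f"ROCm/HIP {hip_version}"
    return f"CUDA {cuda_version}"

try:
    import torch
    print(describe_backend(getattr(torch.version, "hip", None),
                           torch.version.cuda,
                           torch.cuda.is_available()))
except ImportError:
    print("PyTorch not installed")
```

If this prints a ROCm/HIP version, existing `model.to("cuda")` code should run unchanged; how fast it runs is exactly the open question.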

[–] [email protected] 0 points 2 months ago

ROCm is complete garbage. AMD holds an event every year announcing "PyTorch works now!" and it never does.

ZLUDA is supposedly a good alternative to ROCm, but I have not tried it.