this post was submitted on 30 Jul 2024
1208 points (98.1% liked)

linuxmemes

    submitted 3 months ago* (last edited 3 months ago) by [email protected] to c/[email protected]
     

    List of icons/services suggested:

    • Calibre
    • Jitsi
    • Kiwix
    • Monero (node)
    • Nextcloud
    • Pi-hole
    • Ollama (should at least be able to run TinyLlama 1.1B)
    • OpenMediaVault
    • Syncthing
    • VLC Media Player media server
    [–] [email protected] 1 points 3 months ago

    8GB or 4GB?

    Yeah, you should get kobold.cpp's ROCm fork working if you can manage it; otherwise use their Vulkan build.

    Llama 8B at shorter context is probably good for your machine: it can fit entirely on an 8GB GPU at a shorter context, or at least be partially offloaded if it's a 4GB one.
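    A minimal sketch of what that looks like on the command line, assuming a recent koboldcpp release — the flag names should match koboldcpp's launcher, but the layer counts are guesses you'd tune against your actual VRAM, not verified numbers:

    ```shell
    # Sketch: koboldcpp's Vulkan backend with full vs. partial GPU offload.
    # Layer counts are illustrative guesses to tune for your VRAM.

    # 8GB card: try offloading every layer at a shorter context
    ./koboldcpp --model Meta-Llama-3-8B-Instruct-Q4_K_M.gguf \
        --usevulkan --gpulayers 33 --contextsize 4096

    # 4GB card: offload only part of the model, keep the rest on the CPU
    ./koboldcpp --model Meta-Llama-3-8B-Instruct-Q4_K_M.gguf \
        --usevulkan --gpulayers 15 --contextsize 2048
    ```

    If it still runs out of memory, lowering `--gpulayers` a few at a time is the usual knob: each layer you drop stays in system RAM instead of VRAM, trading speed for headroom.
    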

    I wouldn't recommend DeepSeek for your machine. It's a better fit for older CPUs: it's not as smart as Llama 8B, and it's bigger than Llama 8B on disk and in RAM, but it runs super fast because it's an MoE — only a few experts are active per token.
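    A toy sketch of why "bigger but faster" works out for MoE models: you have to store every expert, but each token only pays the compute cost of the few experts the router picks. The numbers below are made up for illustration, not DeepSeek's real architecture:

    ```python
    # Toy illustration of Mixture-of-Experts economics: total parameters
    # (what you must store) vs. active parameters (compute per token).
    # All sizes here are invented for the example.

    def moe_params(n_experts, params_per_expert, top_k, shared_params):
        """Return (total stored params, params active per token)."""
        total = shared_params + n_experts * params_per_expert
        active = shared_params + top_k * params_per_expert
        return total, active

    total, active = moe_params(
        n_experts=64,
        params_per_expert=250_000_000,
        top_k=6,
        shared_params=1_000_000_000,
    )
    print(f"total params:  {total / 1e9:.1f}B")   # 17.0B stored
    print(f"active params: {active / 1e9:.1f}B")  # 2.5B computed per token
    ```

    So a hypothetical 17B-parameter MoE does roughly the per-token work of a 2.5B dense model — fast on a CPU, but it still needs the RAM of the full 17B, which is why it suits boxes with plenty of memory and a weak GPU.
    
    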