this post was submitted on 11 Jun 2024
15 points (89.5% liked)

Selfhosted


Not upgrading, just taking notes.

I've got a Raspberry Pi 5 running most of my services now, and it's doing fine. Usually for my movies and stuff I go to streaming sites, legal ofc *cough* *cough*, but down the line I intend to build a media server too.

The stuff I've got lying around won't do much with upgrades. So if I did want to upgrade my setup and run a media server + some AI stuff, I think I'd be better off just buying an Nvidia Jetson SBC than building a tower from scratch.

What do you guys think?

top 7 comments
[–] [email protected] 1 points 4 months ago

The requirements for a media server mesh well with a NAS and *arr suite and other light loads. Low CPU demand, some RAM demand, integrated GPU if you need transcoding and that’s it.

They are wildly different from generative AI. For good performance, you'll want a decent GPU with loads of VRAM, or you'll have to brute-force it with raw CPU power and RAM. If you care about power draw at all, you don't want this on 24/7/365. Why not build a cool gaming rig and use it for AI? As a bonus, now you have a cool gaming rig with your AI machine!
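The 24/7 power-draw point is easy to put numbers on. A rough sketch (the wattages and the €0.30/kWh price are illustrative assumptions, not measurements — plug in your own):

```python
# Back-of-envelope yearly electricity cost of an always-on machine.
# Wattages and price per kWh below are assumed/illustrative values.

def annual_cost_eur(avg_watts: float, price_per_kwh: float = 0.30) -> float:
    """Yearly cost of a box drawing avg_watts continuously (24/7/365)."""
    kwh_per_year = avg_watts * 24 * 365 / 1000
    return kwh_per_year * price_per_kwh

# ~7 W for a Pi-class SBC vs ~150 W for a GPU rig under frequent AI load
print(round(annual_cost_eur(7), 2))
print(round(annual_cost_eur(150), 2))
```

That gap is why people keep the always-on services on a low-power box and only spin up the GPU machine when they actually need it.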

[–] [email protected] 6 points 4 months ago (1 children)

Just keep in mind that even with a Jetson board you'll need one of the higher-memory configurations to have a non-frustrating Stable Diffusion experience: 32-64 GB, like the Orin, and those aren't cheap. The Nanos just don't cut it without severe optimizations and very long generation times.
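A quick sanity check on those memory sizes: weight memory is roughly parameter count times bytes per parameter, before activations, KV cache, and overhead. The parameter counts below are rough illustrative figures, not exact model specs:

```python
def weight_gb(n_params_billion: float, bytes_per_param: float) -> float:
    """Approximate GB needed just for model weights.

    Ignores activations, KV cache, and framework overhead, which add more
    on top — so treat the result as a lower bound.
    """
    return n_params_billion * 1e9 * bytes_per_param / 1e9

# An SDXL-class model (~3.5B params, assumed) at fp16, and a 7B LLM
# at fp16 vs 4-bit quantization:
print(weight_gb(3.5, 2))    # fp16 = 2 bytes/param
print(weight_gb(7, 2))
print(weight_gb(7, 0.5))    # 4-bit = 0.5 bytes/param
```

So a smaller board can sometimes squeeze a heavily quantized model in, but once you add activations and batching headroom, the bigger memory configs stop looking optional.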

[–] [email protected] 6 points 4 months ago* (last edited 4 months ago) (2 children)

Yup, bought the Nano thinking it would be good for Home Assistant plus some AI voice-processing stuff, and was severely disappointed. Not only is it slow, but you're basically locked into Nvidia's OS unless you know how to mess with bootloaders.

[–] [email protected] 1 points 4 months ago

Damn, that sucks. I really thought it would be able to run some stuff like LLMs and TTS, given the TFLOPS and all.

But that thing about the OS is just a deal breaker; not being able to load any distro on it like on any other SBC is some NVIDIA BS. Gotta check that out.

[–] [email protected] 4 points 4 months ago (1 children)

I have Home Assistant running with TTS and STT on a mini PC with an Intel N100 CPU and 16 GB of RAM. Works great. LLMs and Stable Diffusion need way more processing power and RAM (or rather VRAM, because both are very slow without a GPU), so that mini PC wouldn't be enough for those, though.

[–] [email protected] 3 points 4 months ago (1 children)

Yeah. This was obviously wishful thinking. It was also my knee-jerk reaction to jumping onto AI with minimal research. I figured it would be better suited for the Whisper TTS/STT stuff, but it just didn't run well. Then I attempted to throw HAOS and various other versions of nix at it, and that's when I threw in the towel, because the bootloader seems to do a signature check on boot: if it's not the Nvidia image, it just hangs. Oh well. I now have an old computer running well enough to experiment with the likes of Llama and Mixtral on just a 2070, with a 2-3 second delay on average. More if I ask too big of a question.
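That 2-3 second delay actually tracks with a common back-of-envelope: single-stream LLM decoding is mostly memory-bandwidth bound, so the ceiling on tokens/sec is roughly GPU memory bandwidth divided by model size in bytes (the 2070's ~448 GB/s is its spec-sheet bandwidth; the ~4 GB figure for a 4-bit 7B model is an assumption):

```python
def rough_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    """Theoretical decode ceiling: each token reads all weights once.

    Real throughput is lower due to compute, cache misses, and overhead.
    """
    return bandwidth_gb_s / model_gb

# RTX 2070 (~448 GB/s) with a ~4 GB 4-bit-quantized 7B model:
print(rough_tokens_per_sec(448, 4.0))
```

Even at a fraction of that ceiling, a short answer lands in a couple of seconds, which matches what you're seeing; bigger models or longer answers push the delay up proportionally.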

[–] [email protected] 1 points 4 months ago

Yeah, I'm realizing now that unless someone stands up to NVIDIA with a somewhat competing product, I'm better off just building my own stuff.

Honestly, that speed is more than enough. I just use AI for coding, and I don't mind reading docs while waiting a couple of seconds.