this post was submitted on 27 Jan 2025
234 points (97.6% liked)

Tl;dr

I have no idea what I’m doing, and the desire for a NAS and local LLM has spun me down a rabbit hole. Pls send help.

Failed Attempt at a Tl;dr

Sorry for the long post! Brand new to home servers, but I'm thinking about building out the setup below (Machine 1 to be on 24/7, Machine 2 to be spun up only when needed for energy efficiency), with a target budget cap of ~USD 4,000. I'd appreciate any tips, suggestions, pitfalls to watch for, or flags for where I'm being a total idiot and have missed something basic:

Machine 1: TrueNAS Scale with Jellyfin, Syncthing/Nextcloud + Immich, Collabora Office, SearXNG if possible, and potentially the *arr apps

On the drive front, I'm considering 6x Seagate Ironwolf 8TB in RAIDz2 for 32TB usable space (waaay more than I think I'll need, but I know it's a PITA to upgrade a vdev, so I'm trying to future-proof). I'm also thinking of adding an L2ARC cache (which I think should be something like a 500GB-1TB M.2 NVMe SSD). I'd read somewhere that the back-of-the-envelope RAM requirement was 1GB of RAM per 1TB of storage (the TrueNAS Scale hardware guide definitely does not say this, but with the L2ARC cache and all of the other things I'm trying to run I probably get to the same number), so I'd be looking at around 48GB, though I'm under the impression that an odd number of DIMMs isn't great for performance, so that might bump up to 64GB across 4x16GB? I'm ambivalent on DDR4 vs. DDR5 (unless there's a good reason not to, I'd be inclined to just use DDR4 for cost), but I'm leaning toward ECC, even though it may not be strictly necessary.
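
For anyone skimming, here's the rough arithmetic behind those numbers (a back-of-the-envelope sketch only; real usable space will come out a bit lower once ZFS overhead and TB-vs-TiB conversion are factored in):

```python
# Back-of-the-envelope sizing for the layout above (ignores ZFS overhead,
# slop space, and the TB-vs-TiB difference).
drives = 6
drive_tb = 8
parity_drives = 2                        # RAIDz2 survives two drive failures

usable_tb = (drives - parity_drives) * drive_tb
print(f"Usable space: ~{usable_tb} TB")  # ~32 TB

# The old "1 GB of RAM per 1 TB of raw storage" rule of thumb (a guideline
# people repeat, not a TrueNAS requirement).
rule_of_thumb_ram_gb = drives * drive_tb
print(f"Rule-of-thumb RAM: ~{rule_of_thumb_ram_gb} GB")  # ~48 GB
```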

Machine 2: Proxmox with LXC for Llama 3.3, Stable Diffusion, Whisper, OpenWebUI; I’d also like to be able to host a heavily modded Minecraft server (something like All The Mods 9 for 4 to 5 players) likely using Pterodactyl

I am struggling with what to do about GPUs here. I'd love to be able to run the 70B Llama 3.3, and it seems like that will require something like 40-50GB of VRAM at a minimum to run comfortably, but I'm not sure of the best way to get there. I've seen some folks suggest 2x 3090s as the right balance of value and performance, but plenty of other folks seem to advocate for sticking with the newer 4000 series (especially with the 5000 series around the corner and the expectation that prices might finally come down). On the other end of the spectrum, I've also seen people advocate for going back to P40s.
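
For what it's worth, here's the back-of-the-envelope math I've seen for where that 40-50GB figure comes from (weights only; the KV cache for long contexts needs extra room on top, so treat this as a rough sketch):

```python
# Rough VRAM needed just for the weights of a 70B-parameter model at a few
# common precisions (KV cache and activations are extra).
params_billion = 70

for precision, bytes_per_param in [("FP16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    weights_gb = params_billion * bytes_per_param
    print(f"{precision}: ~{weights_gb:.0f} GB")
# FP16: ~140 GB, 8-bit: ~70 GB, 4-bit: ~35 GB -- which is why ~48 GB
# (e.g. 2x 24GB cards) is the usual target for a 4-bit 70B plus some context.
```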

Am I overcomplicating this? Making any dumb rookie mistakes? Do 2 machines seem right for my use cases vs. 1 (or more than 2)? Any glaring issues with the hardware I mentioned, or suggestions for a better setup? Ways to better prioritize energy efficiency (even at the risk of more cost up front)? I was targeting something like USD 4,000 as a soft price cap across both machines, but does that seem reasonable? How much of a headache is all of this going to be to manage? Is there a light at the end of the tunnel?

Very grateful for any advice or tips you all have!


Hi all,

So sorry again for the long post. Just including a little bit of extra context here in case it's useful about what I am trying to do (I feel like this is the annoying part of an online recipe where you get a life story instead of the actual ingredient list; I at least tried to put that first in this post). Essentially I am a total noob, but I have spent the past several months lurking on forums, old Reddit and Lemmy threads, and watching many hours of YouTube videos just to wrap my head around some of the basics of home networking, and I still feel like I know basically nothing. But I finally got to the point where I felt I could articulate what I am trying to do with enough specificity to not be completely wasting all of your time (I'm very cognizant of Help Vampires and definitely do not want to be one!).

Basically my motivation is to move away from non-privacy respecting services and bring as much in-house as possible, but (as is frequently the case), my ambition has far outpaced my skill. So I am hopeful that I can tap into all of your collective knowledge to make sure I can avoid any catastrophic mistakes I am likely to blithely walk myself into.

Here are the basic things I am trying to accomplish with this setup:

• A NAS with a built in media server and associated apps
• Phone backups (including photos) 
• Collaborative document editing
• A local ChatGPT 4 replacement 
• Locally hosted metasearch
• A place to run a modded Minecraft server for myself and a few friends

The list in the tl;dr represents my best guesses for the right software and (partial) hardware to get all of these done. Based on some of my reading, it seemed that a number of folks recommend running TrueNAS bare metal as opposed to inside Proxmox for when there is an inevitable stability issue, and that got me thinking about how it might be valuable to split these functions across two machines: one to handle heavier workloads when needed but to be turned off when not (e.g. game server, all local AI), and a second machine to function as a NAS with all the associated apps that would hopefully be more power efficient and run 24/7.

There are a few things that I think would be very helpful to me at this point:

  1. High level feedback on whether this strategy sounds right given what I am trying to accomplish. I feel like I am breaking the fundamental Keep It Simple Stupid rule and will likely come to regret it.
  2. Any specific feedback on the right hardware for this setup.
  3. Any thoughts about how to best select hardware to maximize energy efficiency/minimize ongoing costs while still accomplishing these goals.

Also, above I mentioned that I am targeting around USD 4,000, but I am willing to be flexible on that if spending more up front will help keep ongoing costs down, or if spending a bit more will lead to markedly better performance.

Ultimately, I feel like I just need to get my hands on something and start screwing things up to learn, but I’d love to avoid any major costly screw ups before I just start ordering parts, thus writing up this post as a reality check before I do just that.

Thanks so much if you read this far down the post, and to all of you who share any thoughts you might have. I don't really have folks IRL I can talk to about these sorts of things, so I am extremely grateful to be able to reach out to this community.

Edit: Just wanted to say a huge thank you to everyone who shared their thoughts! I posted this fully expecting to get no responses and figured it was still worth doing just to write out my plan as it stood. I am so grateful for all of your thoughtful and generous responses sharing your experience and advice. I have to hop offline now, but look forward to responding to any comments I haven’t had a chance to turn to tomorrow. Thanks again! :)

[–] [email protected] 1 points 1 month ago

Pretty sure TrueNAS Scale can host everything you want, so you might only need one server. Use EPYC for the PCIe lanes and a Fractal Design Define 7 XL and you could even escape needing a rack mount if you wanted. Use a PCIe-to-M.2 adapter and you could easily host apps on the NVMe drives in a mirrored pool, and use a special vdev to speed up the HDD storage pool.

The role of the Proxmox server would essentially be filled by apps and/or VMs you could turn on or off as needed.

[–] [email protected] 6 points 1 month ago (1 children)

ZFS RAIDZ expansion was released a few days ago in OpenZFS 2.3.0: https://www.cyberciti.biz/linux-news/zfs-raidz-expansion-finally-here-in-version-2-3-0/

It might help you decide how much storage you want.

[–] [email protected] 1 points 1 month ago

Woah, this is big news!! I'd been following some of the older articles talking about this being pending, but had no idea it just released, thanks for sharing! Will just need to figure out how much of a datahoarder I'm likely to become, but it might be nice to start with fewer than 6 of the 8TB drives and expand up (though I think 4 drives is the minimum that makes sense; my understanding is also that energy consumption is roughly linear with the number of drives, though that could be very wrong, so maybe I'd even start with 4x 10-12TB drives if I can find them for a reasonable price). But thanks for flagging this!
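
Rough comparison of the starting layouts I'm now weighing, in case it's useful to anyone else (same RAIDz2 arithmetic as before, so the usual caveats about ZFS overhead apply):

```python
# Usable space for a few RAIDz2 starting points (expandable later now that
# RAIDZ expansion has landed in OpenZFS 2.3.0).
def raidz2_usable_tb(drives: int, drive_tb: int) -> int:
    return (drives - 2) * drive_tb      # two drives' worth of parity

for drives, drive_tb in [(4, 10), (4, 12), (6, 8)]:
    print(f"{drives} x {drive_tb} TB -> ~{raidz2_usable_tb(drives, drive_tb)} TB usable")
# 4 x 10 TB -> ~20 TB, 4 x 12 TB -> ~24 TB, 6 x 8 TB -> ~32 TB
```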

[–] [email protected] 3 points 1 month ago (3 children)

For Llama 70B I'm using an RTX A6000; slightly older, but it does the job magnificently with its 48GB of VRAM.

[–] [email protected] 2 points 1 month ago (1 children)

Wow, that sounds amazing! I think that GPU alone would probably exceed my budget for the whole build lol. Thanks for sharing!

[–] [email protected] 1 points 1 month ago

You can still run smaller models on cheaper GPUs, no need for the greatest GPU ever. Btw, I use it for other things too, not only LLMs.

[–] [email protected] 4 points 1 month ago (2 children)

I'm also on a P2P 2x3090 setup with 48GB of VRAM. Honestly it's a nice experience, but still somewhat limiting...

I'm currently running deepseek-r1-distill-llama-70b-awq with the aphrodite engine (though the same applies for llama-3.3-70b). It works great and is way faster than ollama, for example. But my max context is around 22k tokens. More VRAM would allow me more context; even more VRAM would allow for speculative decoding, CUDA graphs, ...
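
If you're curious why the context tops out around there, the back-of-the-envelope KV-cache math looks roughly like this (a sketch assuming Llama-70B-ish dimensions and an FP16 cache; the engine's paged attention changes the details but not the order of magnitude):

```python
# Rough KV-cache size for a Llama-70B-class model: ~80 layers, 8 KV heads
# (GQA), head_dim 128, cached in FP16.
layers, kv_heads, head_dim = 80, 8, 128
bytes_per_elem = 2                      # FP16
context_tokens = 22_000

# 2x for the K and V tensors per layer.
kv_bytes = 2 * layers * kv_heads * head_dim * bytes_per_elem * context_tokens
print(f"~{kv_bytes / 1e9:.1f} GB of KV cache at {context_tokens} tokens")  # ~7.2 GB
# On top of ~35-40 GB of 4-bit weights, that's most of 48 GB gone.
```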

Maybe I'll drop down to a 35b model to get more context and a bit of speed, but I'm not sure I can justify the possible decrease in answer quality.

[–] [email protected] 2 points 1 month ago

Uhh, a lot of big words here. I mostly just play around with it. Never used LLMs for anything more serious than a couple of tests, so I don't even know how many tokens my setup can generate.

[–] [email protected] 3 points 1 month ago

This is exactly the sort of tradeoff I was wondering about, thank you so much for mentioning this. I think ultimately I would probably align with you in prioritizing answer quality over context length (but it sure would be nice to have both!!). My plan for now, based on some of the other comments, is to go ahead with the NAS build and keep my eyes peeled for any GPU deals in the meantime (though honestly I am not holding my breath). Once I've proved to myself I can run something stable without burning the house down, I'll move on to something more powerful for the local LLM. Thanks again for sharing!

[–] [email protected] 3 points 1 month ago (1 children)

I'm running 70B on two used 3090s and an A6000 NVLink bridge. I think I got these for $900 each, and maybe $200 for the NVLink. Also works great.

[–] [email protected] 2 points 1 month ago

Thanks for sharing! Will probably try to go this route once I get the NAS squared away and turn back to local LLMs. Out of curiosity, are you using the q4_k_m quantization type?

[–] [email protected] 12 points 1 month ago (1 children)

$4,000 seems like a lot to me. Then again, my budget was like $200.

I would start by setting yourself a smaller budget. Learn with cheaper investments before you screw up big. Obviously $200 is probably a bit low, but you could build something simple for around $500. Focus on upgradeability. Once you have a stable system, upskill and reflect on what you learned. Once you have a bit more knowledge, build a second and third system and then complete a Proxmox cluster. It might be overkill, but having three nodes gives a lot of flexibility.

One thing I will add: make sure you get quality enterprise storage. Don't cheap out, since the lower-tier drives will have performance issues with heavier workloads. Ideally you should get enterprise SSDs.

[–] [email protected] 2 points 1 month ago (1 children)

I did a double take at that $4000 budget as well! Glad I wasn't the only one.

[–] [email protected] 1 points 1 month ago

You are both totally right. I think I anchored high here just because of the LLM stuff I am trying to get running at around a GPT-4 level (which is what I think it will take for folks in my family to actually use it vs. continuing to pass all their data to OpenAI), and it felt like it was tough to get there without spending an arm and a leg on GPUs alone. But my plan is now to start with the NAS build, which I should be able to accomplish without spending a crazy amount, and then build out iteratively from there. As you say, I'd prefer to screw up and make a $500 mistake vs. a multi-thousand-dollar one. Thanks for the sanity check!

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago) (1 children)

For high-VRAM AI stuff it might be worth waiting to see how the 24GB B580 variant turns out.

Intel has a bunch of translation-layer sort of stuff that I think generally makes it easy to run most CUDA AI things on it, though I'm not sure if common AI software supports multi-GPU with it.

IDK how cash-limited you are, but if it's just the VRAM you need and not necessarily the tokens/sec, it should be a much better deal when it releases.

Not entirely related, but I have a full, half-hourly snapshotted computer backup going to a large HDD in my home server using Kopia. It's very convenient, and you don't need to install anything on the server except a large drive and the ability to use ssh/sftp (or another method, it supports several). It supports many compression formats and also avoids storing duplicate data. I haven't needed to use it yet, but I imagine it could become very useful in the future. I also have the same thing set up via the CLI on the server, largely so I can roll back in case some random person happens upon my Minecraft server (which is public and doesn't have a whitelist...) and decides to destroy everything. It's pretty easy to set up, and since it can back up over the internet, it's something you could easily use for a whole family.
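
The scheduling side can be as dumb as a tiny script on a timer, something like the sketch below (the paths are made up, and it assumes the Kopia repository has already been created and connected, e.g. over sftp to the server):

```python
# Minimal sketch: snapshot a few directories with Kopia (hypothetical paths;
# assumes `kopia repository connect ...` has already been run on this machine).
import subprocess

BACKUP_PATHS = [
    "/home/me/Documents",   # hypothetical
    "/home/me/Pictures",    # hypothetical
]

def snapshot_all() -> None:
    for path in BACKUP_PATHS:
        # `kopia snapshot create <dir>` uploads a deduplicated snapshot of the
        # directory to the currently connected repository.
        subprocess.run(["kopia", "snapshot", "create", path], check=True)

if __name__ == "__main__":
    snapshot_all()  # run this from cron or a systemd timer every half hour
```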

My home server (a bunch of used parts plus a computer from the local university surplus store) was probably about ~$170 in total (i7-6700, 16GB DDR4, 256GB SSD, 8TB HDD) and is enough to host all of the stuff I have (very lightly modded MC with Geyser, a GitLab instance, and the backup) very easily. But it is very much not expandable (the case is quite literally tiny and I don't have space to leave it open; I could get a PCIe storage controller, but the PSU is weak and there aren't many SATA ports), probably not all that future-proof either, and definitely isn't something I would trust to perform well with AI models.

This (sold out now) is the HDD I got; I did a lot of research and they're supposed to be super reliable. I was worried about noise, but after getting one I can say that as long as it isn't within 4 feet of you, you'll probably never hear it.

Anyways, it's always nice to really do something the proper way and have something fully future-proof, but if you just need to host a few light things you can probably cheap out on the hardware and still get a great experience. It's worth noting that a normal Minecraft server, backups, and a document editor, for example, are all things you could run on a Raspberry Pi if you really wanted to. I have absolutely no experience with a NAS, metasearch, or heavy mods, however; those might be a lot harder to get fast for all I know.

[–] [email protected] 1 points 1 month ago (1 children)

Thank you so much for all of this! I think you're definitely right that starting smaller and trying a few things out is more sensible. At least for now I think I am going to focus on the lower-hanging fruit by starting with the NAS build and then building up to local AI once I have something stable (but I'll definitely be keeping an eye out for GPU deals in the meantime, so thanks for mentioning the B580 variant, it wasn't on my radar at all as an option). I think the thread has definitely given me confidence that splitting things out that way makes sense as a strategy (I had been concerned when I first wrote it out that not planning everything out all at once was going to cause me to miss some major efficiency, but it turns out that self-hosting is more like gardening than I thought, in that it seems to grow organically with one's interest and resources over time; sounds obvious in retrospect, but I was definitely approaching this more rigidly initially). And thank you for the HDD rec! I think the Exos are the level above the Ironwolfs I mentioned, so I will definitely consider them (especially if they come back in stock for a reasonable price at serverpartdeals or elsewhere).

Just out of curiosity, what are you using for admin on your MC server? I had heard of Pterodactyl previously, but another commenter mentioned Crafty Controller as a bit easier to work with. Thank you again for writing all of this up, it's super helpful!

[–] [email protected] 1 points 1 month ago

I'm just using basic Fabric stuff running through a systemd service for my MC server. It also basically just has every single performance mod I could find and nothing else (as well as Geyser + Floodgate), so there isn't all that much admin stuff to do. I set up RCON (I think it's called) to send commands from my computer, but I just set everything up through SSH. I haven't heard of either Pterodactyl or Crafty Controller, I'll check those out!

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago) (1 children)

So, I'm a rabid selfhoster because I've spent too many years watching rugpull tactics from every company out there. I'm just going to list what I've ended up with; it's not perfect, but it is pretty damn robust. I'm running pretty much everything you talk about, except not much in the way of AI stuff at this point. I wouldn't call it particularly energy efficient since the equipment isn't very new. But take a read and see if it provokes any thoughts on your wishlist.


My Machine 1 is a Proxmox node with ZFS storage backing, and Machine 2 is a mirror image serving as a second Proxmox node for HA. Everything, even my OPNsense router, runs on Proxmox. My Docker/k8s hosts are LXCs or VMs running on the nodes, and the nodes replicate nearly everything between them as a first-level, fast-recovery backup/high-availability failover. I can then live-migrate guests around very quickly if I want to upgrade and reboot or otherwise maintain a node. I can also snapshot guests before updates or maintenance that I'm scared will break stuff, or when I'm experimenting and want to roll back after I fuck up.

Both nodes are backed up via Proxmox Backup Server for any guests I consider prod; I take backups every hour and keep probably 200 backups at various intervals and amounts. These dedup in PBS, so the space utilization for all these extra backups is quite low. I also back up via PBS to removable USB drives on a longer schedule, and swap those out offsite weekly. Because I bind-mount everything in my docker compose stacks, recovering a particular folder at a point in time via folder restore lets me recover a stack quite granularly. Also, since it's done as a ZFS snapshot backup, it's internally consistent, and I've never had a db-file mismatch issue that didn't just journal out cleanly.

I also zfs-send critical datasets via syncoid to zfs.rent daily from each proxmox node.

Overall, this has been highly flexible and very, very bulletproof over the last 5 or 6 years. I bought some decade-old 1U Dell servers with enough drive bays and dual Xeons, so I have plenty of threads and RAM, and upgraded to IT-mode 12G SAS RAID cards, but they aren't powerhouse servers or anything; I might be $1000 into each of them. I have considered adding and passing through an external GPU to one node for building an Ollama stack on one of the Docker guests.

The PBS server is a little piece-of-trash i3 with an 8TB SATA drive and a gigabit NIC in it.

[–] [email protected] 1 points 1 month ago (2 children)

This is super interesting, thanks so much for sharing! In my initial poking around, I'd seen a lot of people suggest that virtualizing TrueNAS within Proxmox was a bit of a headache (especially when something inevitably goes wrong and everything goes down), but I hadn't considered cutting out TrueNAS entirely, running directly on Proxmox, and pairing that virtualization with k8s and robust backups (I am pleasantly shocked that PBS can manage that many backups without eating up crazy amounts of space). After the other comments I was sort of aligning around starting off with a TrueNAS build and then growing into some of the LLM stuff I mentioned, but I have to admit this is really intriguing as an alternative (even if as something to work towards once I've got some initial prototypes; figuring out k8s would be a really fun project, I think).

Just out of curiosity, how noisy do you find the old Dell servers? I have been hesitant both because of power draw and noise, but would love to get feedback from someone who has them. Thanks so much again for taking the time to write all of this out, I really appreciate it!

[–] [email protected] 2 points 1 month ago

Oh, they're noisy as hell when they wind up because they're doing a big backup or something. I have them in my laundry room. If you had to listen to them, you'd quickly find something else. In the end, I don't really use much processor power on these, it's more about the memory these boards will hold. RAM was dirt cheap so having 256GB available for experimenting with kube clusters and multiple docker hosts is pretty sweet. But considering that you can overprovision both proc and ram on PM guests as long as you use your head, you can get away with a lot less. I could probably have gotten by as well or better with a Ryzen with a few cores and plenty of ram, but these were cheaper.

At times, I've moved all the active guests to one node (I have the PBS server set up as a qdevice for Proxmox to keep a quorum active; it gets pissy if it thinks it's flying solo), and I'll WoL the other one periodically to let the first node replicate to the second, then down it again when it's done. If I'm going to be away for a while, I'll leave both of them running so HA can take over, which has actually happened: the first server packed in a drive, and the failover was so seamless it took me a week to notice. Shutting one down can save a bit of power, but overall it's about a kWh a day per server, which in my area is about 12 cents.
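
If it helps to put a number on it, that works out to roughly this (assuming both nodes stay on around the clock at my ~12 cents/kWh):

```python
# Rough yearly electricity cost at ~1 kWh/day per server and $0.12/kWh.
kwh_per_day = 1.0
rate = 0.12          # USD per kWh
servers = 2

per_server = kwh_per_day * 365 * rate
print(f"~${per_server:.0f}/yr per server, ~${per_server * servers:.0f}/yr for both")
# ~$44/yr per server, ~$88/yr for both nodes
```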

I've never seen the point of TrueNAS for me. I run Nextcloud as a Docker stack using the AIO mastercontainer for myself and 8 users. Together, we use about 1TB of space on it, and that's a few people with years of photos etc. I mount a separate virtual disk on the Docker host that both Nextcloud and Immich can access, so they can share photos saved in users' NC folders that get backed up from their phones. The AIO also has Collabora Office set up by default, so that might satisfy your document-editing ask there.

As I said, I've thought I might get an eGPU and pass it to a Docker guest for AI. I'd prefer to get my Home Assistant setup off of relying on the Nabu Casa server. I don't mind sending them money, and the STT service that buys me works very well for voice commands around the house, but it rubs me the wrong way to rely on anything on someone else's computers. But STT is brutally slow when I try to run it even on my desktop Ryzen 7800 without a GPU, so until I decide to invest in a good GPU for that stuff, I'll keep sending it out. At least I trust them way more than I ever would Google or Amazon; I'd do without if that was the choice.

None of this needs to be a both-feet-first jump; you can just take some old laptop, start to build a Proxmox cluster, and play with this. Your only limit will be the RAM.

I've also seen people build Proxmox clusters using 2013 Mac Pro trashcans; you can get a 12-core Xeon with 64GB of RAM for like $200, plus maybe a Thunderbolt enclosure for additional drives. Those would be super quiet and probably low power usage.

[–] [email protected] 1 points 1 month ago (1 children)

(Also very curious about all of the HA stuff; it's definitely on my list of things to experiment with, but probably down the line once I've gotten some basic infrastructure in place. Very excited at the prospect though)

[–] [email protected] 1 points 1 month ago

The HA stuff is only as hard as prepping the cluster and making sure it's replicating fine, then enabling HA for whichever guests you want. It's seriously not difficult at all.
