this post was submitted on 12 Jul 2024
40 points (95.5% liked)

Selfhosted

40006 readers
625 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.


Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 1 year ago

I have been using Nextcloud for over a year now. I started on bare metal, then switched to the basic Docker container with Collabora in its own container, which was tricky to get running nicely. For the past couple of months I have been using Nextcloud AIO and am pretty happy, but it feels a little weird with all those containers and all that overhead.

How do you guys host NC + Collabora? Is there an easy, solid solution?
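For context, this is roughly the AIO setup I'm running, following the pattern from the AIO README (container name, ports, and volume are the upstream defaults; treat it as a sketch and check the current docs before running it):

```shell
# Sketch of the documented Nextcloud AIO start command; names, ports,
# and the volume are upstream defaults - verify against the AIO README.
docker run \
  --init \
  --sig-proxy=false \
  --name nextcloud-aio-mastercontainer \
  --restart always \
  --publish 80:80 \
  --publish 8080:8080 \
  --publish 8443:8443 \
  --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  nextcloud/all-in-one:latest
```

The AIO interface then comes up on port 8080 and manages the Nextcloud, database, and Collabora containers itself.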

top 17 comments
[–] [email protected] 1 points 3 months ago

The AIO is the way to go. It's not really any more overhead, and the maintenance is so much simpler. I second running it on a Proxmox docker server; you can snapshot before updates if you're concerned about the upgrade.
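A sketch of that snapshot-before-update flow on Proxmox, assuming the docker host is VM 101 (the VM ID and snapshot name are hypothetical; adjust to your setup):

```shell
# Snapshot the docker VM before touching Nextcloud (VM ID 101 is hypothetical)
qm snapshot 101 pre-nextcloud-update
# ...perform the AIO update from its own interface...
# If the update goes sideways, roll the whole VM back:
qm rollback 101 pre-nextcloud-update
# For an LXC container instead of a VM, use pct snapshot / pct rollback.
```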

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters  More Letters
HTTP           Hypertext Transfer Protocol, the Web
LXC            Linux Containers
NVMe           Non-Volatile Memory Express interface for mass storage
SSD            Solid State Drive mass storage
nginx          Popular HTTP server

4 acronyms in this thread; the most compressed thread commented on today has 4 acronyms.

[Thread #868 for this sub, first seen 13th Jul 2024, 16:35] [FAQ] [Full list] [Contact] [Source code]

[–] [email protected] 1 points 3 months ago

Nextcloud AIO on a Proxmox LXC container. One instance for home, one instance at work. All works great. Both are on fast ceph storage (SSDs at home and NVMe at work).

[–] [email protected] 7 points 3 months ago* (last edited 3 months ago)

There's essentially no overhead with containers. Performance is almost identical to bare metal in most cases.

[–] [email protected] 2 points 3 months ago

I also use the AIO docker and I'm happy with it.

Fully automatic Borg backups from within the container are neat

[–] [email protected] 2 points 3 months ago

I use AIO as well though I’ve heard the snap version is pretty painless, most of the time.

[–] [email protected] 16 points 3 months ago (1 children)

I think containers get seen as overhead unfairly sometimes. Yes, it's not running on bare metal, so there's a layer of abstraction, but in practice the performance is nearly identical. Plus, since AIO does things out of the box for you (like a Redis cache, for instance), it ends up being more performant than a standalone Nextcloud instance that isn't configured properly.

That is to say, I use AIO without issues.

[–] [email protected] 3 points 3 months ago (2 children)

I don't think containers are bad, nor that the performance lost to abstraction is really significant. I just think that running multiple services on a physical machine is a delicate balancing act that requires knowledge of what's truly going on, and careful sharing of resources, sometimes across containers. By the time you've reached that point (and know what every container does and how its services are set up), you've defeated the main reason many people use containers in the first place (to fire and forget black boxes that just work, mostly), and only added layers of tooling and complexity between yourself and what's going on.

[–] [email protected] 1 points 3 months ago (1 children)

I'd argue the opposite: it's made it so I care very little about the dependencies of anything I'm running, and it's LESS of a delicate balancing act.

I don't care what version of postgres or php or nginx or mysql or rust or node or python or whatever a given app needs, because it's in the container or stack and doesn't impact anything else running on the system.

All that matters at that point is 'does the stack work' and you then don't need to spend any time thinking about dependencies or interactions.

I also treat EACH stack as its own thing: if it needs a database, I stand one up. If it needs some NoSQL, it gets its own.

Makes maintenance of and upgrades to everything super simple, since each of the ~30 stacks with ~120 containers I'm running can't in any way impact, screw with, or have dependency issues that affect anything else I'm running.

Though, in fairness, if you're only running two or three things, then I could see how the management of the docker layer MIGHT be more time than management of the applications.

[–] [email protected] 1 points 3 months ago (1 children)

I don’t care […] because it’s in the container or stack and doesn’t impact anything else running on the system.

This is obviously not how any of this works: down the line those stacks will very much add up and compete against each other for CPU/memory/IO/…. That's inherent to the physical nature of the hardware, its architecture, and the finiteness of its resources. And here comes the balancing act; it's just unavoidable.

You may not notice it because you've thrown enough hardware at the problem, but I wouldn't exactly call that a winning strategy long term, and especially not in the context of self-hosting, where you directly foot the bill.

Moreover, those server components which you are needlessly multiplying (web servers, databases, application runtimes, …) have spent decades optimizing for resource pooling (with shared buffers, caching, event scheduling, …). All of that is thrown away when each instance serves a single client/container, further lowering (and quite drastically at that) the headroom for optimization and scaling.

[–] [email protected] 1 points 3 months ago (1 children)

Two things, I think, are making your view and mine different.

First, the value of time. I like self-hosting things, but it's not a 40 hour a week job. Docker lets me invest minimal time in maintenance and upkeep and restricts the blowback of a bad update to the stack it's in. Yes, I'm using a little bit more hardware to accomplish this, but hardware is vastly cheaper than my time.

Second, uh, this is a hobby, yeah? I don't think anyone posting here needs to optimize their Nextcloud or whatever install to scale to 100,000 concurrent users or meet 99.999999% uptime SLAs. I mean, yes, you'd certainly do things differently in those environments, but that's really not what this is.

Using containers simplifies maintaining and deploying, and a few percent of cpu usage or a little bit of ram is unlikely to matter, unless you're big into running everything on a Raspberry Pi Zero or something.

[–] [email protected] 1 points 3 months ago

I don't think our views are so incompatible; I just think there are two conflicting paradigms supporting a false dichotomy: one prevalent in the business world, where "cost of labour shrinks cost of hardware" and it's acceptable to trade some (= a lot of) efficiency for convenience/saved man-hours. But this is the "selfhosted" community, where people are running things on their own hardware, often in their own house, paying the high price of inefficiency very directly (electricity costs, less living space, more heat/noise, etc.).

And docker is absolutely fine and relevant in this space, but only when "done right", i.e. when containers are not just spun up as isolated black boxes but carefully organized so as to avoid overlapping services and resource wastage, in which case managing containers ends up requiring more effort, not less.

But this is absolutely not what you suggest. What you suggest would have a much greater wastage impact than "a few percent of CPU usage or a little bit of RAM", because essentially you propose that every container ship its own web server, application server, database, etc. We are no longer talking about a "few percent" of overhead from the container stack; we are talking about the software and compute requirements of whole new machines.

So, in short, I don't think there's a very large overlap between the business world throwing money at their problems and the self-hosting community, and so the behaviours are different (there's more than one way to use containers, and my observation is that it goes very differently in either). I'm also not hostile to containers in general, but they cannot be recommended in good faith to self-hosters as a solution that is both efficient and convenient (you must pick one).

[–] [email protected] 1 points 3 months ago (1 children)

I think you're missing an important aspect to containers and that is being able to easily define your infrastructure as code.

That makes server migrations a breeze
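A sketch of what such a migration can look like, assuming each stack lives in its own directory with a compose.yaml and its bind-mounted data alongside it (the paths and hostnames are hypothetical):

```shell
# Stop the stack on the old machine so the data is quiescent
ssh old-host 'cd ~/stacks/nextcloud && docker compose down'
# Copy the compose file plus bind-mounted data to the new machine
rsync -a old-host:stacks/nextcloud/ ~/stacks/nextcloud/
# Bring the same definition up on the new host
cd ~/stacks/nextcloud && docker compose up -d
```

Named volumes would need an extra export/import step, but the stack definition itself moves unchanged.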

[–] [email protected] 1 points 3 months ago

That's… a tool in the bucket for that. But I'm not really sure that's the point here?

[–] [email protected] 2 points 3 months ago

Probably not that helpful, but: TrueNAS Scale with the Nextcloud app, using the built-in Collabora "plugin" after I gave up on a separate Collabora app because I couldn't make the two work together. I'll probably have to fix everything again in August when the next TrueNAS Scale release (Electric Eel) drops and enables vdev expansion.

[–] [email protected] 4 points 3 months ago

AIO is the way