this post was submitted on 11 Mar 2024
153 points (94.2% liked)

Selfhosted

39964 readers
395 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.


I've never understood how to use Docker or what makes it so special. I would really like to use it on my Raspberry Pi 3 Model B+ to ease the setup process of self-hosting different things.

I'm currently running these things without Docker:

  • Mumble server with a Discord bridge and a music bot
  • Maubot, a plugin-based Matrix bot
  • FTP server
  • Two Discord Music bots

All of these things are running as systemd services in the background. Should I change this? A lot of the things I'm hosting offer Docker images.

It would also be great if someone could give me a quick-start guide for Docker. Thanks in advance!

(page 2) 50 comments
[–] [email protected] 5 points 8 months ago (1 children)

A lot of people here really do be describing docker like flatpak

[–] [email protected] 38 points 8 months ago* (last edited 8 months ago) (3 children)

It's virtual machines, but faster, more configurable, with a considerably larger set of automation, and it consumes fewer resources than a traditional VM. Additionally, in software development it helps solve a problem summarized as "works on my machine." A lot of traditional server creation and management relied on systems being set up perfectly identically for every deployment, to prevent dumb defects that depended on whose machine the code was written on. With Docker, it's stupid easy to copy the automated configuration from "my machine" to "your machine." Now everyone, including the production systems, is running from "my machine." That's kind of a big deal, even if it could be done in other ways natively on Linux; those ways just don't have the same ease of use or shareability.
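
As a concrete (hypothetical) illustration, the whole "my machine" environment can be captured in a short Dockerfile; the app and the installed package here are just placeholders:

```dockerfile
# Everything the app needs is described here, so every machine builds the same image.
FROM debian:bookworm-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends python3 \
 && rm -rf /var/lib/apt/lists/*
COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]
```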

What you're doing is perfectly expected. That's a great way of getting around using Docker. You aren't forced into using it. It's just easier for most people

[–] [email protected] 5 points 8 months ago (2 children)

I've used Docker a fair bit over the years because it's a single command I can copy and paste to get a simple web server running.

I ran Home Assistant Supervised in Docker for many years. It was a few lines of code and then I basically had Home Assistant OS running on my Pi without it taking over the whole Pi, meaning I could run other things on it too.

That ended when HA just died one day and I had no clue how to get it running again. I spent a day trying, then just installed HA OS on the Pi instead.

Anyway, I now have a Dell OptiPlex running Proxmox and I've gone back to Docker. Why? Well, I discovered that I could make a Linux VM, install Docker on it, add the Docker command to install the Portainer client, and then turn that into a template.

That means I can clone the template, type its IP address into Portainer, and have full access to that Docker instance from my original Portainer container. I can drop a Docker Compose file into a "Stack", press go, and then tinker with the thing I want to tinker with. If I get it working, it can stay; if I don't, I just delete the VM and I've lost nothing.

Portainer has made Docker way more accessible for me. I love a webui

[–] [email protected] 1 points 8 months ago (1 children)

What is Portainer? You've said that it's a web UI, but what exactly does it provide you with?

[–] [email protected] 1 points 8 months ago* (last edited 8 months ago) (3 children)

Well, the web UI gives me a list of containers, whether they're running or not, and the ports they expose. There are Stacks, which are basically Docker Compose files in a neat UI, plus the ability to move those stacks to other instances. There are also the network options (including the ability to create more networks) and the files associated with the containers.

And not just for the instance I'm in, but for all the instances I've connected.

In my previous experience with Docker, these are all things I'd need to remember commands for, which meant I usually had to Google the command to find what I was after. Here it's all neatly packaged in a web page.

Oh, and the logs, which are really useful when tinkering to try to get something up and running.

[–] [email protected] 2 points 8 months ago (1 children)

Sounds awesome! I've taken a look at Portainer and got confused on the whole Business Edition and Community Edition. What are you running?

[–] [email protected] 3 points 8 months ago (1 children)

Community edition. It's free!

[–] [email protected] 1 points 8 months ago

Docker can be many things, and Portainer is a nice front end for those using Docker to run services. It's got a great web interface. For automation and most development, Docker and Compose are my pick. It's also a good fit for those who only use X to spawn terminals.

[–] [email protected] 2 points 8 months ago

Docker is amazing but not required. You can compare it to a lighter-weight VM. You can take a container and run it on any machine. You get an environment that is separate from your host, and you and the container can only interact with it through defined points (volumes and ports).

Imagine you need to run a second Mumble server. I've never set one up, but a second instance of something is often not that easy. With Docker it is: the only difference is that you need different host ports (or a reverse proxy) when you only have one network address. You can spin up a second instance to test things without interrupting your production system. It's also a security benefit, because each instance is isolated to some degree and is easy to remove.
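
A rough Compose sketch of that idea (the image name and port are what the Mumble project documents; double-check them before using):

```yaml
services:
  mumble-prod:
    image: mumblevoip/mumble-server
    ports:
      - "64738:64738/tcp"
      - "64738:64738/udp"
  mumble-test:
    image: mumblevoip/mumble-server
    ports:
      # same port inside the container, different port on the host
      - "64739:64738/tcp"
      - "64739:64738/udp"
```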

I started using it with MS SQL Server because I hated how invasive it is on a Windows machine, especially since I only needed it temporarily. I'm not a Microsoft admin, and I know Microsoft's server products are a different level. Docker allowed me to start, stop, and remove it very easily. After that I started using it for a lot more and took my NAS to the next level.

Also worth mentioning are Linux Containers (LXC). Proxmox supports them, though I have less experience there. LXC feels more like a full VM than Docker but uses fewer resources. That's why containers in general are so popular: they are less resource-hungry than a full VM while still having advantages over running everything directly on one machine. LXC feels more like a full system; with Docker you rarely get inside the container. You may execute the odd command, like creating a user or running a one-off job, but you don't usually access it via a shell from the inside (although it's possible). With LXC, on the other hand, you do use the shell.

[–] [email protected] 33 points 8 months ago* (last edited 8 months ago) (1 children)

I feel that a lot of people here are missing the point. Docker is popular for selfhosted services for a few main reasons:

  1. It is one package that can be used on any distribution (or even OS with a Linux VM).
  2. The package contains all dependencies required to run the software so it is pretty reliable.
  3. It provides some basic sandboxing against non-malicious services. Basically, the service can't scribble all over your filesystem; short of exploiting a security vulnerability, it can only write to the specific directories you have given it access to (via volumes).
  4. The volume system also makes it very obvious which data is important and needs to be backed up: you have a short list (see the example just below this list).
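
For instance (a hedged sketch; the host path and image are only examples), the bind mount is the single host directory the container gets to touch, and therefore the obvious thing to back up:

```bash
# Host port 8080 maps to container port 80; /srv/web/html is the only host path
# the container sees (read-only here). Paths and image are just examples.
docker run -d --name web \
  -p 8080:80 \
  -v /srv/web/html:/usr/share/nginx/html:ro \
  nginx:1.25
```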

Docker also has lots of downsides. I would generally say that if your distribution packages software I would prefer the distribution's package over the docker image. A good distribution package will also solve all of these problems. The main issue you will see with distribution packages is a longer delay before new versions are made available.

What Docker completely dominated were the previous cross-distribution packaging options, which typically took one of the following forms.

  1. Self-contained compiled tarball. Run the program inside as your user. It probably puts its data in the extracted directory, maybe. How do you upgrade? Extract the new version and copy a data directory over? Self-update? Code is mutable and mixed with data: gross.
  2. Install script. Probably runs as root. Makes who-knows-what changes to your system. Where is the data? Is the service running? Will it auto-start on boot? Hope that install script supports your distro.
  3. Source tarball. Figure out the dependencies. Hope they don't conflict with the versions your distro has. Set up users and startup scripts yourself. Hope the build doesn't take too long.

[–] [email protected] 2 points 8 months ago (3 children)

Sorry if I'm about 10 years behind Linux development, but how does Docker compare with the recent Flatpak trend in application distribution? The way you have described it sounds somewhat similar, apart from also getting segmented access to data and networks.

[–] [email protected] 10 points 8 months ago* (last edited 8 months ago)

For desktop apps Flatpak is almost certainly a better option than Docker. Flatpak uses the same core concepts as Docker but Flatpak is more suited for distributing graphical apps.

  1. Built in support for sharing graphics drivers, display server connections, fonts and themes.
  2. Most Flatpaks use common base images. Not only does this save disk space if you have lots of (for example) GNOME applications, since they share the same base, it also means that security updates for common libraries can ship separately from application updates. (Pinned, insecure libraries are still a problem in general; it is just less bad than the Docker case.)
  3. Better desktop integration via the use of "portals" that allow requesting specific things (screenshot, open file, save file, ...) without full access to the user's system.
  4. Configuration UIs that are optimized for the desktop use case: graphical tools to install, uninstall, manage permissions, and so on.

Generally I would still default to my distro's packages where possible, but if they are unsuitable for whatever reason (not available, too old, ...) then a Flatpak is a great option.

[–] [email protected] 13 points 8 months ago (1 children)

The thing that confused me when first learning about Docker was that everybody compares it to a virtual machine. It's not one. Containers don't virtualize anything; they take a (single) process from the host OS and separate it into its own environment. All system calls, memory access, file writes etc. are still handled by the same OS (same kernel). However, the process is separated at both the filesystem and the process level. It can't see other processes outside of the container, and it doesn't see the real filesystem either; it sees a filesystem provided by the container, which also means it sees different file and user permissions. When you run an Alpine Linux Docker container on an Ubuntu system, the container only contains the (few) files for Alpine, with no Linux kernel and no desktop environment. A process inside that container only sees the Alpine files, not the Ubuntu files. It also means all containers see filesystems independent of each other and can use libraries and dependencies of different versions (they are only files, after all).
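
You can see this for yourself with two one-liners (assuming Docker and the official alpine image):

```bash
# The container sees Alpine's files, not the host's...
docker run --rm alpine cat /etc/os-release
# ...but it still runs on the host's kernel: no second kernel, no VM
docker run --rm alpine uname -r
```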

For administration it makes running complex services easy. You define how to set up the service (what base Linux distro to use, what packages to install, what commands to run, and how to start the process). You can then safely assume that the setup of that service did not interfere with the setup of any other service. "Service 1 needs a certain system-wide config changed? Service 2 needs that config in its default state? And both need a different version of the same library?" With containers you can have all of that at the same time, because each one sees its own version of the same config and library.

And all this is provided by the kernel itself. All Docker does is provide an "easy" way to create and manage containers; you could do all of it yourself using chroot, runc and a few other tools.

As a note, containers usually don't come with systemd, as they don't need an init system. You run the service directly inside the container and then use systemd outside the container to make sure the container is started/restarted, or just let Docker handle that, since it can already do so.

I found a great article demystifying containers recently

[–] [email protected] 6 points 8 months ago

While you are technically right, there is very little logical difference between containers and VMs. Really, the only fundamental difference is that containers share the host kernel while VMs run their own. (Let's not even worry about paravirtualization right now.)

In practice I would say the biggest difference is that there is better memory sharing so total memory usage will often be less. But honestly this mostly comes down to the fact that the average container bundles less software than the average VM image. Easier management of volumes is also nice because typically you will just bind-mount a host directory, but it also isn't hard to mount a block device on a Linux host.

[–] [email protected] 17 points 8 months ago

I have a reason I don't think has been covered. A few programs I've come across that I wanted to try recommend Docker, and some only provide instructions for Docker. The developers can spend less time helping you with dependencies and installation, knowing they've included everything you need in the image. I don't have a background in Linux or programming, so unless a project tells me exactly how to install something, I can struggle. Their installation page is then just the Docker Compose file with a note on the environment variables you can change.

[–] [email protected] 4 points 8 months ago (2 children)

If you're already using systemd, don't switch to Docker; use Podman instead. Docker runs all your services under the Docker daemon, while Podman can run the same containers as individual systemd services.

[–] [email protected] 1 points 8 months ago

I used to run systemd units that just start docker-compose files; that's also a thing, I suppose. Generally, though, it's easy enough to manage the container directly (killing/restarting) without the lifecycle a systemd unit gives you, I would say.

[–] [email protected] 2 points 8 months ago

Quadlets with Podman have completely replaced compose files for me. I use the Kubernetes configs. Then I run a Tailscale container in the pod and BAM, all of my computers can access that service without having to expose any ports.
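
For anyone curious, a Quadlet is just a small unit file that Podman turns into a systemd service; a minimal rootless sketch (image and port are placeholders) might look like:

```ini
# ~/.config/containers/systemd/whoami.container
[Unit]
Description=Example container managed by systemd via a Podman Quadlet

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a systemctl --user daemon-reload, it shows up as whoami.service like any other unit.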

Then I have an Ansible playbook to log in to the host and start a detached tmux session so my user systemd services keep running. It's all rootless, and just so dang easy.

[–] [email protected] 1 points 8 months ago* (last edited 7 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

CA: (SSL) Certificate Authority
DNS: Domain Name Service/System
Git: Popular version control system, primarily for code
HA: Home Assistant automation software; also High Availability
IP: Internet Protocol
LXC: Linux Containers
NAS: Network-Attached Storage
SBC: Single-Board Computer
SSD: Solid State Drive mass storage
SSL: Secure Sockets Layer, for transparent encryption

9 acronyms in this thread; the most compressed thread commented on today has 15 acronyms.


[–] [email protected] -5 points 8 months ago* (last edited 8 months ago) (1 children)

The thing with Docker is that people don't want to learn how to use Linux and are buying into an overhyped solution that makes their lives easier without understanding the long-term consequences. Most of the pro-Docker arguments revolve around security, and that's mostly BS, because 1) systemd can provide as much isolation as Docker containers and 2) there are other container solutions that are at least as safe as Docker, and nobody cares about them.

Companies such as Microsoft and GitHub are all about re-creating and reconfiguring the way people develop software so everyone becomes hostage to their platforms. We see this in everything now; Docker/Docker Hub/Kubernetes and GitHub Actions were the first signs of this cancer. We now have a generation that doesn't understand the basics of their tech stack, about networking, about DNS, about how to deploy a simple thing onto a server that doesn't use some Docker BS or a 3rd-party cloud deploy-from-GitHub service.

Before anyone comments that Docker isn't totally proprietary and that there's Podman, consider the following: it doesn't really matter that truly open-source, open ecosystems of containerization technologies exist. In the end, people and companies will pick the proprietary/closed option just because "it's easier to use" or some other specific thing that is good in the short term and very bad in the long term.

Docker may make development and deployment very easy and lower the bar for newcomers, but it has the dark side of being designed to reconfigure and envelop the way development gets done so that someone can profit from it. That is sad and, above all, sets dangerous precedents and creates generations of engineers and developers who don't have truly open tools like we did. There's a LOT of money in transitioning everyone to the "deploy-from-github-to-cloud-x-with-hooks" model, so those companies will keep pushing for it.

Note that technologies such as Docker keep commoditizing development; it's a feedback loop that never ends. Yes, I say commoditizing development because, if you look at it, those technologies only make things easier for entry-level developers, and companies, instead of hiring developers for their knowledge and ability to develop, are just hiring "cheap monkeys" who can configure those technologies and cloud platforms to deliver something. At the end of the day, the business of those cloud companies is transforming developer knowledge into products and services that companies can buy with a click.

[–] [email protected] 8 points 8 months ago (3 children)

Most of the pro-Docker arguments go around security

Actually, Docker and the success of containers are mostly due to the ease of shipping code that carries its own dependencies and can be run anywhere. Security is a side effect and definitely not the reason containers picked up.

systemd can provide as much isolation as Docker containers and 2) there are other container solutions that are at least as safe as Docker, and nobody cares about them

Yes, and it's much harder to achieve the same thing. With systemd you need to combine something like 30 different options to get what you achieve with containers almost instantly and with much less hassle. I wrote an example on my blog where I decided to run blocky under systemd instead of Docker. It's just less convenient and accessible, harder to debug, and it relies on each individual user getting it right, while with containers a lot gets packed into the image, which makes it harder to mess up.
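
To give a flavour of what that means in practice, here is an illustrative (and deliberately incomplete) subset of the hardening directives you would stack by hand in a unit file; the service name and path are just examples:

```ini
# /etc/systemd/system/blocky.service (excerpt)
[Service]
DynamicUser=yes
StateDirectory=blocky
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true
PrivateDevices=true
NoNewPrivileges=true
RestrictAddressFamilies=AF_INET AF_INET6
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
AmbientCapabilities=CAP_NET_BIND_SERVICE
```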

Docker isn’t totally proprietary

There are many container runtimes (CRI-O, Podman, Mirantis, containerd, etc.). Docker is just a convenient API; containers are implemented entirely with native Linux features (namespaces, seccomp, capabilities, cgroups), and images follow an open standard (OCI).

I will avoid commenting on what looks like a rant, but I want to simply remind you that containers are the successor of VMs ("virtualize everything!"), platforms that were completely proprietary and in the hands of a handful of vendors, while containers use only native OS features and are therefore a step towards openness.

[–] [email protected] 3 points 8 months ago* (last edited 8 months ago) (3 children)

Docker and the success of containers is mostly due to the ease of shipping code that carries its own dependencies and can be run anywhere

I don't disagree with you, but that also shows that most modern software is poorly written. It's usually a bunch of solutions that hardly work, where nobody is able to reproduce their setup in a quick, sane and secure way.

There are a many container runtimes (CRI-O, podman, mirantis, containerd, etc.). Docker is just a convenient API, containers are fully implemented just with Linux native features (namespaces, seccomp, capabilities, cgroups) and images follow an open standard (OCI).

Yes, that's exactly my point. There are many options, yet people stick with Docker and Docker Hub (which is anything but open).

In systemd you need to use 30 different options to get what using containers you achieve almost instantly and with much less hussle.

Yes... maybe we just need some automation/orchestration tool for that. This is like saying it's way too hard to download the rootfs of some distro, unpack it and then use unshare to launch a shell in an isolated namespace... Docker, as you said, provides a convenient API, but that doesn't mean we can't do the same for systemd.
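
For the record, the "by hand" version is roughly this (a sketch only; the tarball name and paths are placeholders, and real containers add cgroups, networking, and more):

```bash
# 1. Unpack a minimal root filesystem, e.g. an Alpine "minirootfs" tarball
mkdir rootfs && tar -xzf alpine-minirootfs.tar.gz -C rootfs

# 2. Start a shell in fresh PID and mount namespaces, chrooted into that tree
sudo unshare --fork --pid --mount chroot ./rootfs /bin/sh
```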

but I want to simply remind you that containers are the successor of VMs (virtualize everything!), platforms that were completely proprietary and in the hands of a handful of vendor

Completely proprietary... like QEMU/libvirt? :P

[–] [email protected] 3 points 8 months ago

but that also shows that most modern software is poorly written

Does it? I mean, this is especially annoying with old software, maybe dynamically linked or PHP, or stuff like that. Modern tools (go, rust) don't actually even have this problem. Dependencies are annoying in general, I don't think it's a property of modern software.

Yes, that's exactly my point. There are many options, yet people stick with Docker and Docker Hub (which is anything but open).

Who are these people? There are tons of registries that people use; GitHub has its own, there's quay.io, etc. You can also simply publish Dockerfiles and let people build the images themselves. Of course Docker has the edge because it was the first mainstream tool, and it's still a great choice for single-machine deployments, but it's far from the only one used. Kubernetes dropped Docker as its default runtime years ago, for example... so who are you referring to?

Yes… maybe we just need some automation/orchestration tool for that. This is like saying that it’s way too hard to download the rootfs of some distro, unpack it and then use unshare to launch a shell on a isolated namespace… Docker as you said provides a convenient API but it doesn’t mean we can’t do the same for systemd.

But systemd also uses unshare, chroot, etc. They are at the same level of abstraction. Docker (and other container runtimes) are simply specialized tools, while systemd is not. Why wouldn't I use a tool that is meant for this when it's available? I suppose bubblewrap (used by Flatpak) does something similar too, and I am sure there are more.

Completely proprietary… like QEMU/libvirt? :P

Right, because organizations generally run QEMU, not VMware, Nutanix and another handful of proprietary platforms... :)

[–] [email protected] 46 points 8 months ago (5 children)

There have been some great answers on this so far, but I want to highlight my favourite part of Docker: the disposability.

When you have a running Docker container, you can hop in, fuck about with files, break stuff as you try to figure something out, and then kill the container and all of the mess you've created is gone. Now tweak your config and spin up a fresh one exactly the way you need it.

You've been running a service for 6 months and there's a new upgrade. Delete your instance and just start up the new one. Worried that there might be some cruft left over from before? Don't be! Every new instance is a clean slate. Regular, reproducible deployments are the norm now.

As a developer it's even better: the thing you develop locally is identical to the thing that's built, tested, and deployed in CI.

I <3 Docker!

[–] [email protected] 7 points 8 months ago* (last edited 8 months ago) (1 children)

One benefit that might be overlooked here: as long as you don't use any named Docker volumes (and instead bind-mount a local directory) and you're using Docker Compose, you can migrate a whole service, tech stack and all, to a new machine super easily. I just did this with a Minecraft server that outgrew the machine it was on. Just tar the whole directory, copy it to the new host, untar, and docker compose up -d.
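
The whole move is roughly this (host and directory names are placeholders):

```bash
# On the old host: archive the directory containing the compose file and the bind-mounted data
tar czf minecraft.tar.gz minecraft/

# Copy it over and bring the stack up on the new host
scp minecraft.tar.gz newhost:/srv/
ssh newhost 'cd /srv && tar xzf minecraft.tar.gz && cd minecraft && docker compose up -d'
```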

[–] [email protected] 1 points 8 months ago (2 children)

This docker compose up -d thing is something I don't understand at all. What exactly does it do? A lot of README.md files from git repos include this command for Docker deployment. And another question: How can you automatically start the Docker container? Do you need a systemd service to run docker compose up -d?

[–] [email protected] 2 points 8 months ago* (last edited 8 months ago)

Docker Compose is basically designed to bring up a tech stack on one machine. So rather than having an Apache machine, a MySQL machine, and a Redis machine, you set up a Docker Compose file with all of those services. It’s easier than using individual Docker commands too. It sets up a network so they can all talk to each other, then opens the ports you tell it to. It’s isolated from other Docker Compose networks, so things won’t interfere with each other. So you can basically isolate a bunch of services with their own tech stacks all on the same machine. I’ve got my Jellyfin server running on the same machine as my Mastodon instance, thanks to Docker Compose.

As long as Docker is configured to run automatically at boot (which it usually is when you install it), it will bring containers back up that are set to be restarted. You can use the “always” or the “unless-stopped” values for the restart option, depending on your needs, then Docker will bring that container back up after a reboot.

Docker Compose is also useful in this context, because you can define dependencies for services. So I can say that the Mastodon container depends on the Postgres container, and Docker Compose will always start the Postgres container first.
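
A minimal compose file tying those ideas together might look like this (service names, images and paths are only examples). docker compose up -d reads this file and starts everything in the background, and the restart policy brings the containers back after a reboot, so you don't need a separate systemd service:

```yaml
services:
  db:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: example             # placeholder secret
    volumes:
      - ./db-data:/var/lib/postgresql/data   # bind mount: easy to back up or migrate
  web:
    image: ghcr.io/example/web:1.4.2         # hypothetical application image
    restart: unless-stopped
    ports:
      - "8080:80"
    depends_on:
      - db                                   # compose starts the database first
```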

[–] [email protected] 3 points 8 months ago (4 children)

How’s the Pi 3? I was considering the idea of getting one to avoid the crazy prices for newer models

[–] [email protected] 2 points 8 months ago

It's great for my needs. If you're thinking about picking one up today, though, I wouldn't really recommend it; it just offers too few resources to be viable for everyday use. I use mine because it had been gathering dust for a couple of years. Still, it's enough for my Mumble server and the bots I use for Discord and Matrix.

[–] [email protected] 6 points 8 months ago* (last edited 8 months ago) (2 children)

If you're OK with a little more power usage (like 10W instead of 3-5W), you can buy a mini PC from Dell/Lenovo/HP with a 7th gen Intel CPU for about $50-70 on ebay, with storage and RAM included. As a bonus you also get a case, power supply, cooling, etc.. which you have to buy extra for the Pi.

It'll be significantly faster in every way, with a lot more options for expansion if needed. The Pi 3 is very slow for even the most basic tasks; just running apt upgrade can take several minutes or more for a few package updates.

[–] [email protected] 1 points 8 months ago

I've got a used Dell OptiPlex lying around for any bigger projects, so a Pi would be for tinkering and small-scale stuff. I didn't have a good grasp of how slow it would run, though. Thank you.

[–] [email protected] 3 points 8 months ago

A little slow by today's standards, but if your needs are light, it'll do the job. Keep in mind it only has a single gigabyte of RAM, so its capacity for running things may be limited, especially where Docker applications are concerned (since each brings a copy of its dependencies). You won't be able to run something as large as GitLab or Nextcloud, but a smattering of small apps should be within its capabilities.

[–] [email protected] 14 points 8 months ago (1 children)

One of the main reasons Docker and Kubernetes took off is that they standardized the deployment process. Say you have 20 services running on your servers. It's much easier to maintain those 20 services as a set of YAML files that follow a common standard than as 20 config files, each in a different format. If you only have a couple of services, the advantage probably isn't apparent, but as you add more and more services, you'll start to appreciate it.

[–] [email protected] 5 points 8 months ago

Yep, I couldn't run half of the services in my homelab if they weren't containerized. Running random, complex installation scripts and maintaining multiple services installed side-by-side would be a nightmare.

[–] [email protected] 3 points 8 months ago

Docker's documentation is actually pretty good. I'd recommend taking a look at it, because it's written really well and can even serve as a decent primer on learning to read documentation.

I would recommend learning docker / containerization. For your use case you likely won’t see a big benefit HOWEVER it is a good technology to know.

As far as the "why" you'd use it, there are too many reasons to list, but for your use case I'd argue the why is "just so you know how to do it", and you'll come up with your own why along the way.

Simplest why beyond “it’s a good technology to know” is that updating an app is as simple as pulling a new container and relaunching it.
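
With Compose, that update usually boils down to the following (run from the directory containing the compose file; the prune step is optional):

```bash
docker compose pull      # fetch newer images
docker compose up -d     # recreate only the containers whose image or config changed
docker image prune -f    # optionally clean up the old image layers
```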

[–] [email protected] 18 points 8 months ago (2 children)

Try to run something that requires php7 and something else that requires php8 on the same web server; or python 2 and python 3.

You actually can, but it's not pretty.
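
With containers it's just two commands, since each app ships its own interpreter (container names and host ports here are arbitrary):

```bash
docker run -d --name legacy-app -p 8081:80 php:7.4-apache
docker run -d --name modern-app -p 8082:80 php:8.3-apache
```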

(The declarative-setup aspect isn't much of a differentiator; you can do that with any popular Linux distro.)

[–] [email protected] 59 points 8 months ago (6 children)

IMHO with docker and containerization in general you are trading drive space for consistency and relative simplicity.

A hypothetical:
You set up your Mumble server and it requires the leftpad 3.7 package to run. You install it and everything is fine.
Now you install your FTP server, but it needs leftpad 5.5. What do you do? Hope the function that Mumble uses in 3.7 still exists in 5.5? Run each app in its own venv?

Docker and containerization resolve this by running each app in its own mini virtual machine. A container running Mumble and leftpad 3.7 can coexist on a host that also has a container running an FTP server with leftpad 5.5.

Here is a good video on the hole Docker and containerization look to fill:
https://www.youtube.com/watch?v=Nm1tfmZDqo8

[–] [email protected] 1 points 8 months ago* (last edited 8 months ago)

Docker and containerization resolve this by running each app in its own mini virtual machine

While what you've written is technically wrong, I get why you made the comparison that way. There are also plenty of other containerization solutions that can do exactly what you're describing without the dark side of Docker.

[–] [email protected] 4 points 8 months ago (1 children)

I would also add security, or at least accessible security. Containers provide a number of isolation features, either out of the box or extremely easy to configure, that other systems require far more effort to achieve, or can't achieve at all.

Ironically, after some conversation on the topic here on Lemmy I compiled a blog post about it.

[–] [email protected] 6 points 8 months ago (1 children)

Tbf, systemd also makes it relatively easy to sandbox processes. But it's opt-in, while for containers it's opt-out.

[–] [email protected] 3 points 8 months ago

Yeah, and it also requires quite a few options, some with hard-to-predict outcomes. For example, RootDirectory can be used to effectively chroot the process, but that carries implications such as the application no longer having access to CA certificates, which in containers is generally a solved problem.

[–] [email protected] 80 points 8 months ago (2 children)

Docker containers aren't running in a virtual machine. They're running what amounts to a fancy chroot jail... It's just an isolated environment that takes advantage of several kernel security features to make software running inside the environment think everything is normal despite being locked down.

This is a very important distinction because it means that docker containers are very light weight compared to a VM. They use but a fraction of the resources a VM would and can be brought up and down in milliseconds since there's no hardware to emulate.

[–] [email protected] 2 points 8 months ago

Here is an alternative Piped link(s):

https://www.piped.video/watch?v=Nm1tfmZDqo8

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.

[–] [email protected] 15 points 8 months ago* (last edited 8 months ago) (1 children)

When I asked this question

So there are many reasons, and this is something I nowadays almost always do. But keep in mind that some of us have used Docker for our applications at work for over half a decade now. Some of these points might be relevant to you, others might seem or be unimportant.

  • The first and most important thing you gain is a declarative way to describe the environment (OS, dependencies, environment variables, configuration).
  • Then there is the packaging format. Containers are a way to package an application with its dependencies and distribute it easily through Docker Hub (or other registries). Redeploying is a matter of running a script and specifying the image and tag (never use latest). You will never ask yourself again, "What did I need to do to install this? Run some random install.sh script off a GitHub URL?"
  • Networking with Docker is a bit hit and miss, but the big thing is that you can have software running on any port inside the container and expose it on a different port on the host. E.g. two apps both listen on :8080 natively, so one of them would fail to start because the port is taken; with containers you can keep them on their preferred internal ports but expose one as 18080 and the other as 19080 (see the sketch just after this list).
  • You keep your host simple and free of installed software and packages. This is less of a problem with apps that ship as native executables, but some languages require you to install a runtime to be able to start the app. Think .NET or Java, but there is also Python, which requires you to install it on the host and keep versions compatible (there are virtual environments for that, but I'm going into too much detail already).
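
The port-remapping point above, as a quick sketch (the image names are placeholders):

```bash
# Both apps listen on 8080 inside their containers; the host exposes them
# on 18080 and 19080 so they never collide.
docker run -d --name app1 -p 18080:8080 example/app1:1.0
docker run -d --name app2 -p 19080:8080 example/app2:1.0
```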

I am also new to self hosting, check my bio and post history for a giggle at how new I am, but I have taken advantage of all these points. I do use "latest" though, looking forward to seeing how that burns me later on.

But to add one more: my system is robust, in that I can really break my containers (and I do), and recovery is a couple of clicks in Portainer. Then I can try again, no harm done.

[–] [email protected] 4 points 8 months ago (2 children)

The thing with using the "latest" tag is you might get lucky and nothing bad happens (the apps are pretty stable, fault tolerant, and/or backward compatible), but you also might get unlucky and a container update does break something (think a 1.x going to 2.x one day). Without pinning the container to a specific version, you might have an outage suddenly due to that container becoming incompatible with one of your other applications. I've seen this happen a number of times. One example is a frontend (UI) container that updates to no longer be compatible with older versions of the backend and crashes as a result.

If all your apps are pretty much standalone and you trust them to update properly every time a new version of the container is downloaded, then you may never run into the problems that make people say "never use latest". But just keep an eye out for something like that to happen at some point. You'll save yourself some time if you have records of what versions are running when everything's working, and take regular backups of all their data.
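
In compose terms, the difference is just the tag you write down (the image and version here are illustrative):

```yaml
services:
  app:
    image: ghcr.io/example/app:2.3.1      # pinned: upgrades only when you change this line
    # image: ghcr.io/example/app:latest   # "latest": upgrades whenever you pull
```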

[–] [email protected] 2 points 8 months ago* (last edited 8 months ago)

I guessed it was a "once bitten, twice shy" kind of thing. This is all a hobby to me, so the cost-benefit calculation is vastly different; nothing on my setup is critical. Keeping records of what version everything is on, when updates are available, what those updates change, and so on... sounds like a whole lot of effort when my time is currently better spent in other areas.

In my arrogance I just installed Watchtower, and accepted it can all come crashing down. When that happens I'll probably realise it's not so much effort after all.

That said I'm currently learning, so if something is going to be breaking my stuff, it's probably going to be me and not an update. Not to discredit your comment, it was informative and useful.
