gomp

joined 2 years ago
 

I'm rebuilding my home server in nixos.

Rather than configuring the various services natively in nixos, I decided to run containers via virtualisation.oci-containers whenever possible, mostly so I can update the system and the various services independently.
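For context, a container declaration looks more or less like this (the port mapping here is just illustrative, not my actual config):

virtualisation.oci-containers = {
  backend = "podman";
  containers.apprise = {
    image = "docker.io/caronc/apprise:1.1.8";
    # illustrative port mapping; irrelevant to the issue below
    ports = [ "8000:8000" ];
  };
};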

Everything is going smoothly, but whenever I (for whatever reason) do nixos-rebuild boot and reboot after adding a container, instead of nixos-rebuild switch, I run into this issue where podman isn't able to resolve the registry host (below it's the Docker Hub host, but it also happened with ghcr.io):

podman-apprise-start[1352]: Trying to pull docker.io/caronc/apprise:1.1.8...
podman-apprise-start[1352]: Pulling image //caronc/apprise:1.1.8 inside systemd: setting pull timeout to 5m0s
podman-apprise-start[1352]: Error: initializing source docker://caronc/apprise:1.1.8: pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io: no such host

I thought that my podman-* services were missing a dependency on network-online.target and were being started before the network was available, but that isn't the case:

# systemctl list-dependencies podman-apprise.service 
podman-apprise.service
● ├─system.slice
● ├─network-online.target
● │ └─systemd-networkd-wait-online.service
● └─sysinit.target
●   ├─dev-hugepages.mount
[...snip...]

Do you happen to know what the issue is?

PS: Manually running systemctl start podman-whatever once fixes the issue, of course, but I wonder if there's a more robust solution?
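(For reference, the ordering I thought I would have to add by hand would look something like this; as the listing above shows, the generated unit already has it.)

systemd.services."podman-apprise" = {
  # pull/start the container only after the network is (supposedly) up
  after = [ "network-online.target" ];
  wants = [ "network-online.target" ];
};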


update:

After investigating based on balsoft's input below, the issue seems to be that systemd-networkd-wait-online doesn't behave as I expected.

Basically, systemd-networkd-wait-online waits for network interfaces to have a carrier (a working ethernet cable) and an IP address. This is what the systemd-networkd docs call the "degraded" state (no, it doesn't mean that something got worse than before... don't read too much into what "degraded" implies in English).

In my case, I have an interface that is set up via DHCP and that also has static IPs assigned:

$ cat /etc/systemd/network/00-lan1.network 
[Match]
Name=lan1

[Network]
DHCP=ipv4
IPv6AcceptRA=no
LinkLocalAddressing=no

[Address]
Address=192.168.10.10/24

[Address]
Address=192.168.10.99/24

If you are wondering, the reason I do this is that I want static IPs for my dns server and reverse proxy, but I also want my home server to use DHCP to fetch some network-wide configuration which, critically, includes the default route.

Back to the issue: IIUC, since the interface has a non-link-local address (which systemd-networkd confusingly calls a "routable" address), it is immediately considered "routable" (a state that ranks above "degraded"), so not only is it basically ignored by the default systemd-networkd-wait-online configuration, but even adding

[Link]
RequiredForOnline=routable

to /etc/systemd/network/00-lan1.network doesn't make a difference whatsoever.

For now, my stopgap solution is to explicitly set the default route for the "lan1" network:

[Network]
Gateway=192.168.10.1

This seems to solve the issue with podman and, while the system still considers itself "online" before it is fully configured, it will do until I find a more elegant/robust way (ping me in a while if you are interested).
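For reference, assuming the .network file is generated through the systemd.network NixOS module (rather than written by hand), the whole thing including the stopgap looks more or less like this:

systemd.network.networks."00-lan1" = {
  matchConfig.Name = "lan1";
  networkConfig = {
    DHCP = "ipv4";
    IPv6AcceptRA = "no";
    LinkLocalAddressing = "no";
    Gateway = "192.168.10.1"; # the stopgap: a static default route
  };
  address = [ "192.168.10.10/24" "192.168.10.99/24" ];
};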

refs:
systemd-networkd-wait-online man page
systemd-networkd docs on "RequiredForOnline"
networkctl man page

 

I experimented with several ways to run my services:

  1. "regular" systemd services (services.glance = { ... };)
  2. nix containers (containers.glance = { ... };)
  3. podman containers (virtualisation.oci-containers.containers.glance = { ... };)

and I must say I'm starting to appreciate the last option (the least nixos-y) more and more.

Specifically, I appreciate that:

  • I just have to learn the app/container configuration, instead of also backwards-translating from their config into the various nixos options (of course the .yaml or whatever configuration files are still generated from my nixos config, I just do that in a derivation instead of relying on a module doing it for me - see the sketch after this list)
  • Services are sometimes outdated in nixpkgs (even in unstable - and juggling packages between stable and unstable is yet another complication)
  • I feel like it's more secure (very arguable and also of very little consequence since everything is on my homelab... it's mainly for the warm fuzzies)
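As an example of the "generate the config in a derivation" bit, this is roughly the shape of it (the image, path and settings here are illustrative placeholders, not my actual config):

{ pkgs, ... }:
let
  settingsFormat = pkgs.formats.yaml { };
  glanceConfig = settingsFormat.generate "glance.yml" {
    # whatever settings the app expects go here
    server.port = 8080;
  };
in
{
  virtualisation.oci-containers.containers.glance = {
    image = "docker.io/glanceapp/glance:latest"; # illustrative image/tag
    volumes = [ "${glanceConfig}:/app/config/glance.yml:ro" ];
  };
}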

Do you guys use one of the options above? Something different?

 

edit: for the solution, see my comment below

I'm trying to package a go application (beszel) that bundles a bunch of html stuff built with bun (think npm).

The html is generated by running bun install and bun run and then embedded in the go binary with //go:embed.

Being completely ignorant of the javascript ecosystem, my first idea was to just replicate what they do in the Makefile:

postConfigure = ''
  bun install --cwd ./site
  bun run     --cwd ./site build
'';

but, since bun install downloads dependencies from the net, that fails.

I guess the "clean" solution would be to look for buildNpmPackage or similar (assuming that exists) and let nix manage all the dependencies, but... it's some 800+ dependencies (at least, bun install ... --dry-run lists 800+ things) so that's a hard pass.

I then tried to look at how buildGoModule handles the vendoring of dependencies, with the idea of replicating that (it downloads what's needed and then compares a hash of what was downloaded with a hash provided in the nix package definition), but... I can't for the life of me decipher how nixpkgs' pkgs/build-support/go/module.nix works.

Do you know how to implement this kind of vendoring in a nix derivation?
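If I understand correctly, the mechanism is a "fixed-output derivation": the build is allowed network access, and nix instead verifies the result against a hash you supply, which is more or less what buildGoModule does for its vendor directory. Something along these lines (untested sketch; the ./site path and the hash are placeholders):

siteDeps = pkgs.stdenv.mkDerivation {
  name = "beszel-site-deps";
  src = ./site; # placeholder: the bun project inside the beszel source
  nativeBuildInputs = [ pkgs.bun ];
  buildPhase = ''
    export HOME=$TMPDIR
    bun install --frozen-lockfile
  '';
  installPhase = ''
    mkdir -p $out
    cp -r node_modules $out/
  '';
  # fixed-output derivation: network access is allowed, and nix checks
  # the result against this hash instead
  outputHashMode = "recursive";
  outputHashAlgo = "sha256";
  outputHash = lib.fakeHash; # replace with the real hash after the first build
};

Whether bun's node_modules output is actually reproducible (so that the hash stays stable) is another question entirely.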

 

If I navigate to https://lemmy.ml/c/simplex the page behaves as if I'm not logged in (eg. I see the "login" and "signup" links at the top right instead of my profile).

I visited other communities (including this one I'm posting on) and they seem fine; hard-refreshing the browser doesn't seem to have any effect, nor does logging out of lemmy and authenticating again.

Can someone try and report if it's a bug or some astral conjunction affecting my PC?

edit: It works now. No idea why.

[–] [email protected] 1 points 11 months ago

Isn't the purpose getting views on youtube?

[–] [email protected] 1 points 2 years ago

User: "I have to waste my whole life fixing this" Dev: "you are complaining that you have to spend a few minutes"

Savage.

 

I want to call the escapeSystemdPath function (defined in nixpkgs at nixos/lib/utils.nix) to derive the name of a systemd mount unit from the target path (eg. srv-my-dir.mount from /srv/my/dir), but I can't figure out how I can reference it... any ideas?
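To illustrate what I'm after, something like the following (where getting hold of utils is exactly the part I can't figure out):

{ config, lib, utils, ... }:
{
  # hypothetical service that must wait for /srv/my/dir to be mounted;
  # escapeSystemdPath "/srv/my/dir" should give "srv-my-dir"
  systemd.services.my-service = {
    after = [ "${utils.escapeSystemdPath "/srv/my/dir"}.mount" ];
    requires = [ "${utils.escapeSystemdPath "/srv/my/dir"}.mount" ];
  };
}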

 

I'm playing around with nixos in a few VMs and at some point I realized I must have lost the swap configuration in one of my refactorings.

To my surprise, however, the VMs do use the swap partitions I had set up.

There is no mention of "swap" in my nix configuration (or in fstab) and no .swap units in /etc/systemd/system; I do however have a swap partition labelled "swap".
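(For reference, the explicit configuration I apparently dropped somewhere would have been something along these lines, given the partition label:)

swapDevices = [
  { device = "/dev/disk/by-label/swap"; }
];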

Turns out there is a systemd unit (albeit no corresponding unit file) that sets up swap:

[root@vm1:~]# free -hw
               total        used        free      shared     buffers       cache   available
Mem:           2.8Gi       664Mi       955Mi       4.0Mi       3.0Mi       1.3Gi       2.0Gi
Swap:          3.7Gi          0B       3.7Gi

[root@vm1:~]# systemctl list-dependencies swap.target 
swap.target
● └─dev-disk-by\x2ddiskseq-1\x2dpart3.swap

I'm wondering where the unit comes from. Can I rely on this and never configure swap ever again?