this post was submitted on 25 May 2024
55 points (96.6% liked)

Selfhosted




Oh boy. Tonight I:

  • installed a cool docker monitoring app called dockge
  • started moving docker compose files from random other folders into one centralized place (/opt/dockers if that matters)
  • got to immich, brought the container down
  • moved the docker-compose.yml into my new folder
  • docker compose up -d
  • saw errors saying it didn't have a DB name to work with, so it created a new database

panik

  • docker compose down
  • copy old .env file from the old directory into the new folder!
  • hold breath
  • docker compose up -d

Welcome to Immich! Let's get started...

Awwwwww, crud.

Anything I can do at this point?

No Immich DB backup, but I do have the images themselves.

EDIT: Thanks to u/atzanteol I figured out that changing the folder name is what caused this. I changed the docker folder's name back to the original and got my DB back! yay

top 14 comments
[–] [email protected] 1 points 5 months ago

Working now? Awesome!!

[–] [email protected] 4 points 5 months ago

If you want to change the name of the directory without breaking your volumes (or running services, etc.), you can specify the name of the project inside the compose file.
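For example, a minimal sketch of that (the top-level name: key is part of the Compose spec; the project name immich-app below is just an illustration matching the OP's original folder):

name: immich-app           # pins the project name regardless of the folder name

services:
  db:
    image: alpine:latest   # placeholder service just to keep the sketch valid
    command: sleep infinity
    volumes:
      - pgdata:/data       # gets created as immich-app_pgdata, not <folder>_pgdata

volumes:
  pgdata: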

[–] [email protected] 2 points 5 months ago

Glad to see you solved the issue. I just want to point out that this might happen again if you forget your DB is in a volume controlled by Docker; it's better to put it in a folder you know.
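As a rough sketch of that idea (the host path below is just an example based on the OP's /opt/dockers layout, not Immich's documented default), the database service can use a bind mount instead of the pgdata named volume:

database:
    container_name: immich_postgres
    image: tensorchord/pgvecto-rs:pg14-v0.2.0
    env_file:
      - .env
    volumes:
      # bind mount: the DB files live in a directory you chose and can see,
      # instead of an auto-named volume under /var/lib/docker/volumes
      - /opt/dockers/immich/postgres:/var/lib/postgresql/data

With a bind mount the top-level pgdata volume entry isn't needed, and renaming or moving the compose directory no longer affects where the data lives.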

Last month Immich released an update to the compose file for this; you need to manually change part of it.
Here's the post in this community: https://lemmy.ml/post/14671585

I'll also include this link from the same post; I moved the data from the Docker volume to my own folder without issue.
https://lemmy.pe1uca.dev/comment/2546192

Another option is to make backups of the DB. I saw this project some time ago; I haven't implemented it on my services, but it looks interesting.
https://github.com/prodrigestivill/docker-postgres-backup-local

[–] [email protected] 30 points 5 months ago* (last edited 5 months ago) (1 children)

Docker compose has a default "feature" of prefixing the names of things it creates with the name of the directory the yml is in. It could be that the name of your volume changed as a result of you moving the yml to a new folder. The old one should still be there.

docker volume ls

[–] [email protected] 9 points 5 months ago (1 children)

Hmmm...

docker volume ls 
DRIVER    VOLUME NAME
local     1da54fed5d479f5a551aaf853999fcc3db659193df2643a2bf20470f4da06bee
local     (a bunch more like the above)
...
local     immich-app_model-cache
local     immich-app_pgdata
local     immich-app_tsdata
local     immich_model-cache
local     immich_pgdata

I'm not sure how to tell where the many volumes with GUID-like names came from. (I have about 12 Docker apps running here.)

My docker compose yml file also has:

database:
    container_name: immich_postgres
    image: tensorchord/pgvecto-rs:pg14-v0.2.0
    env_file:
      - .env
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
    volumes:
      - pgdata:/var/lib/postgresql/data

I think my problem is that I didn't have the proper .env file the first time I started it up after moving the yml file, and that's why Immich thought it needed to create a new database from scratch. Does that make sense? I think it really has overwritten those.
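For reference, a minimal sketch of the .env that compose snippet reads (the values below are placeholders for illustration, not necessarily Immich's defaults):

# .env — placeholder values, use your real ones
DB_PASSWORD=changeme
DB_USERNAME=postgres
DB_DATABASE_NAME=immich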

[–] [email protected] 17 points 5 months ago* (last edited 5 months ago) (1 children)

Is it not in the immich_pgdata or immich-app_pgdata folder?

The volumes themselves should be stored at /var/lib/docker/volumes
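If you want to confirm before touching anything, a couple of read-only checks (volume name taken from your docker volume ls output) will show whether the old data is still on disk:

docker volume inspect immich-app_pgdata
# the "Mountpoint" field in the output points at the host directory, typically:
sudo ls /var/lib/docker/volumes/immich-app_pgdata/_data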

For future reference, doing operations like this without backing up first is insane.

Get borgmatic installed to take automatic backups and send them to a backup destination like another server or BorgBase.

[–] [email protected] 16 points 5 months ago (3 children)

OMG! Yes!!!

I thought it would be good to make the folder name shorter when I moved it, so it went from immich-app to just immich.

I just now brought it down, renamed the folder, brought it back up and my DB is back again!

Thank you so much. <3

I will check out borgmatic too. Cheers.

[–] [email protected] 5 points 5 months ago

Woohoo! Always great to read a success story!

[–] [email protected] 4 points 5 months ago* (last edited 5 months ago) (1 children)

Glad you sorted it!

It's very unexpected behavior for docker compose, IMHO. When you say the volume is named "foo", it creates a volume named "directory_foo". Same with all the container names.

You do have some control over that by setting a project name. So you could re-use your old volumes with the new directory name.
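For example (a small sketch; both forms are standard Compose v2 behavior), the project name can also be pinned from outside the file, so the new directory keeps producing the old immich-app_ prefixes:

# one-off, via the --project-name / -p flag:
docker compose -p immich-app up -d

# or persistently, by adding this line to the .env file next to the compose file:
COMPOSE_PROJECT_NAME=immich-app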

Or if you want to migrate from an old volume to a new one you can create a container with both volumes mounted and copy your data over by doing something like this:

docker run -it --rm -v old_volume:/old:ro -v new_volume:/new ubuntu:latest 
$ apt update && apt install -y rsync
$ rsync -rav --progress --delete /old/ /new/ # be *very* sure to have the order of these two correct!
$ exit

For the most part applications won't "delete and re-create" a data source if it finds one. The logic is "did I find a DB, if so then use it, else create a fresh one."

[–] [email protected] 1 points 5 months ago (1 children)

This is one of the reasons I never use docker volumes. I bind mount a local folder from the host or mount an NFS share from somewhere else. It has been much more reliable because the exact location of the storage is clearly defined in the compose file.

Borg is set to back up the parent folder of all the Docker storage folders, so when I add a new one the backup solution just picks it up automatically at the next hourly run.
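A minimal borg sketch of that kind of setup (repository path, source folder, and schedule are all illustrative, not the commenter's exact config):

# one-time: initialize the repository on the backup destination
borg init --encryption=repokey /backups/docker-repo

# run hourly (e.g. from cron): archive the parent folder of all the compose projects
borg create --stats /backups/docker-repo::'docker-{now:%Y-%m-%d_%H%M}' /opt/dockers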

[–] [email protected] 1 points 5 months ago

I have a similar distrust of volumes. I've been warming up to them lately but I still like the simple transparency of bind mounts. It's also very easy to backup a bind mount since it's just sitting there on the FS.

[–] [email protected] 11 points 5 months ago* (last edited 5 months ago) (1 children)

Awesome, take this close call as a kind reminder from the universe to back up!

Borg allows incremental backups from any number of local folders to any number of remote locations, and Borgmatic is a wrapper around it that automates those incremental backups.

I have a second server that runs this container: nold360/borgserver, which works as a borg repository.

I also buy storage on BorgBase, so every hour an incremental backup goes to both.

The other day I blew away a config folder by accident and restored it with no sweat in 2 mins.

[–] [email protected] 5 points 5 months ago

Was your old setup using docker volumes? Your old database could be in one.