Pete90

joined 1 year ago
[–] [email protected] 0 points 4 months ago (1 children)

I thought I had read it here. I'm moving instances anyway now, but thanks for the tip.

[–] [email protected] 0 points 4 months ago* (last edited 4 months ago) (1 children)

EDIT: I found the following and it works. Thanks everyone, and thanks to the author!


For those who don't have an open browser session anymore, here is a small but functional bash script that creates a myFedditUserData.json in the directory it is run from, which can then be imported on other instances.

Requirements:

  • A Linux/Mac OS X installation
  • jq installed (on Ubuntu/Debian/Mint e.g. via sudo apt install -y jq)

Instructions:

  • Save the script below under any name with a .sh extension, e.g. getMyFedditUserData.sh
  • Open the script in any text editor and fill in your username/email and password (optionally change the instance)
  • Open a terminal in the script's folder and run chmod +x getMyFedditUserData.sh (adjust the name if needed)
  • ./getMyFedditUserData.sh
  • A fresh myFedditUserData.json now sits in the folder next to the script

Note: The script is quite simple: it requests a JWT bearer token and passes it as a header in the GET call to https://feddit.de/api/v3/user/export_settings. Anyone without Linux/Mac OS X at hand can replicate this flow with other tools.

The script:

#!/bin/bash

# Basic login script for Lemmy API

# CHANGE THESE VALUES
my_instance="https://feddit.de"			# e.g. https://feddit.nl
my_username=""			# e.g. freamon
my_password=""			# e.g. hunter2

########################################################

# Lemmy API version
API="api/v3"

########################################################

# Turn off history substitution (avoid errors with ! usage)
set +H

########################################################

# Login
login() {
	end_point="user/login"
	json_data="{\"username_or_email\":\"$my_username\",\"password\":\"$my_password\"}"

	url="$my_instance/$API/$end_point"

	curl -H "Content-Type: application/json" -d "$json_data" "$url"
}

# Get userdata as JSON
getUserData() {
	end_point="user/export_settings"

	url="$my_instance/$API/$end_point"

	curl -H "Authorization: Bearer ${JWT}" "$url"
}

JWT=$(login | jq -r '.jwt')

printf 'JWT Token: %s\n' "$JWT"

getUserData | jq > myFedditUserData.json
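
To sanity-check the result afterwards, you can list the top-level keys of the export; the exact fields depend on the Lemmy version, so treat this as a rough check:

jq 'keys' myFedditUserData.json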
 

Hi everyone. A while ago I read something here about a way to export your data from feddit.de for the purpose of moving instances. I can't find the post anymore, because auto-hide won't hand it back to me. Can someone save me from having to subscribe to 100 communities again? Thanks!

[–] [email protected] 0 points 5 months ago (1 children)

And the special fun part is that once they're expired, they can cause skin cancer.

[–] [email protected] 0 points 5 months ago (4 children)

I thought so, but I was sure it HAD to be something else. That's how you end up being wrong.

[–] [email protected] 0 points 5 months ago (6 children)

What the fuck is Tzabatta?

[–] [email protected] 2 points 5 months ago (1 children)

How would that work? I couldn't find it in that post.

[–] [email protected] 38 points 5 months ago (2 children)

I agree, but most games also have a higher ratio of value to cost. If I buy a game for 50 bucks, I'll play it for many hours, let's say 50. That comes to 1 per hour, pretty good. If I buy a new movie that isn't available on subscription streaming, that cost per hour is easily double. If I have a subscription and need another one on top, that lowers its value too. It also comes with lower comfort and ease of consumption, as you mentioned.

Another great example is YouTube Premium. I'll gladly pay 5 or 7 bucks for ad-free content, but not 14. I don't need YouTube Music. So I block ads where I can and donate to creators if I can afford it. They could have had my money, but they are simply greedy.

I also hate it when deals are altered without my consent. It makes me feel like a sucker, and therefore makes it less likely for me to keep investing.

[–] [email protected] 9 points 5 months ago* (last edited 5 months ago) (1 children)

You most likely won't utilize these speeds in a home lab, but I understand why you want them. I do too. I settled for 2.5 GBit because that was a sweet spot in terms of speed, cost and power draw. In total, I idle at about 60W for the following systems:

  • Lenovo M90q (i7 10700, 32GB, 3 x 1 TB SSD) running Proxmox, 15W idle
  • Custom NAS (Ryzen 2400G, 16GB, 4x12TB HDD) running TrueNAS (30W idle)
  • Firewall (N5105, 8GB) running OPNsense (8W idle)
  • FritzBox 6660 Cable, which functions as a glorified access point, 10W idle

[–] [email protected] 5 points 5 months ago (1 children)

Weird, isn't it? A lot of those successful services have cute little mascots. It influences me more than it should.

[–] [email protected] 1 points 5 months ago

I know exactly what you mean. I'd also prefer Debian, Mint or Fedora. Each has its weaknesses, but you've got to start somewhere. Go for it, then decide for yourself. It's not that hard to switch again.

[–] [email protected] 0 points 5 months ago (3 children)

I assume physical hardware is being rented out here. It's often cheaper to use virtual hardware (e.g. CPU cores) instead: you then share the CPU with others, whereas with the former only you have access to it. Simplified, but more or less accurate...

[–] [email protected] 2 points 5 months ago (1 children)

I'd be very careful about publicly hosting Jellyfin. Although not necessarily true, it basically advertises that you're pirating content while also giving out your IP. Even if you rip your own media, this can still be illegal. Please be careful.

Maybe you can put it behind some authentication or, even better, a VPN; a rough example of the former is sketched below.
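
If you already run a reverse proxy like traefik, it can bolt HTTP basic auth in front of it. A hypothetical label sketch (the user:hash pair would come from htpasswd -nb user pass, and $ must be doubled to $$ inside a compose file; router/middleware names are made up here):

labels:
  # basic-auth middleware with a placeholder htpasswd entry
  - traefik.http.middlewares.jellyfin-auth.basicauth.users=user:$$apr1$$replaceMe$$replaceMeToo
  # attach the middleware to the Jellyfin router
  - traefik.http.routers.jellyfin.middlewares=jellyfin-auth

Note that native Jellyfin apps may not cope well with basic auth, so a VPN is probably the safer route.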

 

I'm in the market for a used 4TB drive for my offsite backup. As I've recently acquired four 12TB drives (about 10,000 hours and one to two years old) for 130€ each, I was optimistic. 30 to 40€, I thought. Easy.

WRONG! Used drive, failing SMART stats, 40€. Here is a new drive, no hours on it. Oh wait, it was cold storage and it's almost 8 years old. Price? 90€ (mind you, a new drive costs about 110€). Another drive has already failed, but someone wants 25€ for e-waste. No Sir, it worked fine when I used Check-Disk, please buy. Most of the decent ones are 70 to 80€, way too close to the new price. I PAID 130 FOR 12TB. Those drives were almost new and under warranty. WHY DOES THIS NUMBNUT WANT 80 EURO FOR A USED 4TB DRIVE? And what sane person doesn't put SMART data in their listings??? I have to ask for it at least 50 percent of the time. Don't even get me started on those external hard drives; they were trash to begin with. I'm SO CLOSE to buying a high-capacity drive, because in that segment people actually know what they are doing and understand what they have.

Rant over.

What gives? Did these people buy them when they were much more expensive? Does anyone know a good site that ships refurbished drives to Germany? Most of those I found are also rip-offs...

 

Hej everyone. My traefik setup has been up and running for a few months now. I love it; it was a bit scary to switch at first, but I encourage you to look at it if you haven't. Middlewares are amazing: I mostly use them for CrowdSec and authentication. There are two things I could use some feedback on, though.


  1. I mostly use docker labels to set up routers in traefik. Some people only define one router (HTTP), some define both (+ HTTPS); I did the latter.
labels:
      - traefik.enable=true
      - traefik.http.routers.jellyfin.entrypoints=web
      - traefik.http.routers.jellyfin.rule=Host(`jellyfin.local.domain.de`)
      - traefik.http.middlewares.jellyfin-https-redirect.redirectscheme.scheme=https
      - traefik.http.routers.jellyfin.middlewares=jellyfin-https-redirect
      - traefik.http.routers.jellyfin-secure.entrypoints=websecure
      - traefik.http.routers.jellyfin-secure.rule=Host(`jellyfin.local.domain.de`)
      - traefik.http.routers.jellyfin-secure.middlewares=local-whitelist@file,default-headers@file
      - traefik.http.routers.jellyfin-secure.tls=true
      - traefik.http.routers.jellyfin-secure.service=jellyfin
      - traefik.http.services.jellyfin.loadbalancer.server.port=8096
      - traefik.docker.network=media

So, I don't want to serve HTTP at all; everything will be redirected to HTTPS anyway. What I don't know is whether I can skip the HTTP part. Must I define the web entrypoint on each router for the redirect to work? Or can I define it in the traefik.yml as I did below? The slimmed-down labels I'd like to end up with are sketched after the config.

entryPoints:
  ping:
    address: ':88'
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https
  websecure:
    address: ":443"

  2. I use homepage (from benphelps) as my dashboard and noticed that when I refresh the page, all those widgets take a long time to load. They didn't do that when I connected homepage to those services directly using IP:PORT. Now I use the URLs provided by traefik, and it's slow. It's not really a problem, but I wonder if I made a mistake somewhere. I'm still a beginner when it comes to this, so any pointers in the right direction are appreciated. Thank you =)
 

EDIT: I found something looking through the source code on GitHub. I couldn't find anything at first, but then I searched for "periodic" and found something in middlewared/main.py.

These tasks (see below) are executed at system start and are re-run after method._periodic.interval seconds. Looking at the log in /var/log/middlewared.log, I saw that the interval was 86400 seconds, exactly one day. So I'm assuming that the daily execution time is set at the last system start.
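
If you want to check the interval on your own system, grepping the same log is probably the quickest way (path as above; the exact log wording may differ between versions):

grep -i "interval" /var/log/middlewared.log | tail -n 20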

I've rebooted and will report back in a day. Maybe somebody can find the file where this can be set manually, outside the source code. That is waaaay too advanced for me.

EDIT 2: I was correct, the tasks were executed 24 hours later. This at least gives a crude way to change their execution time: restart the machine.


Hej everyone, in the past few weeks I've been digging my hands into TrueNAS and have since set up a nice little NAS for all my backup needs. The drives spin down when not in use, as the instance only receives/sends backup data once a day. However, there are a few periodic tasks which wake my drives. Namely:

catalog.sync	                Success	26796 	12/03/2024 18:06:54 	12/03/2024 18:06:54 		
catalog.sync_all	        Success	26795 	12/03/2024 18:06:54 	12/03/2024 18:06:54 		
zfs.dataset.bulk_process	Success	26792 	12/03/2024 18:06:43 	12/03/2024 18:06:43 		
pool.dataset.sync_db_keys	Success	26791 	12/03/2024 18:06:42 	12/03/2024 18:06:43 		
certificate.renew_certs	        Success	26790 	12/03/2024 18:06:42 	12/03/2024 18:06:43 	
 
dscache.refresh	                Success	24991 	12/03/2024 03:30:01 	12/03/2024 03:30:01 
update.download	                Success	25027 	12/03/2024 03:46:01 	12/03/2024 03:46:02 

I spent the last hour searching online, digging through files, and checking cron. I found dscache.refresh and update.download, but I can't find the first five. At least one of them wakes my drives. Does anyone have an idea? There used to be a periodic.conf, but I can't find it on my system. Thanks!

 

I posted a few days ago asking how to set up my storage for Proxmox on my Lenovo M90q, which I had since settled. Or so I thought. The Lenovo has space for two NVMe and one SATA SSD.

There seems to be a general consensus that you shouldn't use consumer SSDs (even NAS SSDs like WD Red) for ZFS, since there will be lots of writes, which in turn will wear out the SSDs fast.

There is some conflicting information out there, with some saying it's fine and a few GB of writes per day is okay, and others warning of several TB of writes per day.

I plan on using Proxmox as a hypervisor for homelab use, with one or two VMs running Docker, Nextcloud, Jellyfin, an Arr stack, TubeArchivist, PiHole and such. All static data (files, videos, music) will not be stored on ZFS, just the VM images themselves.

I did some research and found a few SSDs with good write endurance (see table below) and settled on two WD Red SN700 2TB in a ZFS mirror. Those drives have 2500 TBW. For file storage, I'll just use a Samsung 870 EVO with 4TB and 2400 TBW. A rough endurance estimate follows the table.

SSD       Capacity   TBW    Price (€)
980 PRO   1 TB        600     68
          2 TB       1200    128
SN 700    500 GB     1000     48
          1 TB       2000     70
          2 TB       2500    141
870 EVO   2 TB       1200    117
          4 TB       2400    216
SA 500    2 TB       1300    137
          4 TB       2500    325
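
To put those TBW numbers into perspective, here is a rough life expectancy estimate with assumed (not measured) write rates:

# years of life ≈ TBW / (TB written per day * 365)
echo "scale=1; 2500 / (0.05 * 365)" | bc    # SN700 2TB at 50 GB/day: ~137 years
echo "scale=1; 2500 / (2 * 365)" | bc       # SN700 2TB at 2 TB/day:  ~3.4 years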

Is that good enough? Would you rather recommend enterprise-grade SSDs? And if so, which M.2 NVMe ones would you recommend? Or should I just stick with ext4 as a file system, losing the data integrity features and the ability to take snapshots?

I'd love to hear your thoughts about this, thanks!

 

Greetings y'all. I've been using ways to circumvent YouTube ads for years now. I'd much rather donate to creators directly instead of using Google as a middleman via YouTube Premium. I'd even pay for Premium just for an ad-free version if the price weren't so outrageous. So far I've used adblockers, Vanced and then ReVanced.

Since the recent developments in this matter, I've set up TubeArchivist, a self-hosted solution to download YouTube videos for later consumption. It mostly works great, with a few minor things that bother me, but I highly recommend it. ReVanced also still works, but nobody knows for how long.

The question now is whether I should use a VPN to obscure my identity from Google. I don't know if I'm being paranoid here, but I wouldn't put it past Google to block my account if they see YouTube traffic from my IP address and no served ads. ReVanced even uses my main Google account, so it's not that far-fetched.

So far, at least to my knowledge, Google has never done this, but I think they just might in the future. So I'm planning on putting TubeArchivist behind a VPN via gluetun.

What do you think? I'm eager to hear your opinions on this.

I can also add my full docker compose if there's interest, once I'm back on my PC; a rough sketch of the relevant part is below.
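
This is an untested sketch of the idea, using the usual gluetun pattern (network_mode pointing at the VPN container); the image name is from memory, and TubeArchivist's companion services (redis, elasticsearch) are omitted here:

  tubearchivist:
    image: bbilly1/tubearchivist
    container_name: tubearchivist
    network_mode: "service:gluetun"    # route all traffic through the VPN container
    depends_on:
      - gluetun
    restart: unless-stopped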

 

Hej everyone.

Until now I've used a Linux install and VPN software (AirVPN and Eddie) when sailing the high seas. While this works well enough, there is always room for improvement.

I am in the process of setting up a docker stack which so far contains gluetun/airvpn and qbittorrent. Here is my compose file:

version: "3"
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
     - NET_ADMIN
    volumes:
      - /appdata/gluetun:/gluetun
    environment:
      - VPN_SERVICE_PROVIDER=airvpn
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=
      - WIREGUARD_PRESHARED_KEY=
      - WIREGUARD_ADDRESSES=10.188.90.221/32,fd7d:76ee:e68f:a993:63b2:6cc0:fe82:614b/128
      - SERVER_COUNTRIES=
      - FIREWALL_VPN_INPUT_PORTS=
    ports:
      - 8070:8070/tcp
      - 60858:60858/tcp
      - 60858:60858/udp
    restart: unless-stopped

  qbittorrent: 
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent 
    network_mode: "service:gluetun" 
    environment: 
     - PUID=1000
     - PGID=100
     - TZ=Europe/Berlin
     - WEBUI_PORT=8070 
    volumes: 
     - /appdata/qbittorrent/config/:/config 
     - /data/videos/downloads:/downloads
    depends_on:
      - gluetun
    restart: always

My first problem was related to the IP address. For some reason, when I use an IPv6 address, I get this error in gluetun:

2023-10-06T17:30:42Z ERROR VPN settings: Wireguard settings: interface address is IPv6 but IPv6 is not supported: address fd7d:76ee:e68f:a993:63b2:6cc0:fe82:614b/128

Well, I removed the IPv6 address and now everything works. Does anyone have a fix? :)

Now for the important part. I tested the setup with a linux iso and, to my surprise, everything works. When I used ipleak.net or other websites, they only detected the IP from my VPN. Great.

Do I need to take any other precautions? I also bound the network interface tun0 in the qBittorrent web UI, just to be sure. When I stop the gluetun container, the web UI stops working (as it should, but that makes it hard to check whether the download also stops). I'm just a bit paranoid, because I don't want to pay a fine when downloading all the isos my heart desires. A crude leak test is sketched below.
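
One way to check, assuming curl is available inside the qbittorrent image: ask an IP-reporting site from within the container, stop gluetun, then ask again; the second call should time out or fail instead of revealing your real IP (restart both containers afterwards):

docker exec qbittorrent curl -s https://ipleak.net/json/
docker stop gluetun
docker exec qbittorrent curl -s --max-time 10 https://ipleak.net/json/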

Thank you so much for any input!
