Selfhosted

49185 readers
1121 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or GitHub repo here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues with the community? Report them using the report flag.

Questions? DM the mods!

founded 2 years ago
MODERATORS
1

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

2

I tried connecting akaris.space, my other domain, to the Cloudflare tunnel to see if it would work better, and all I get is a Cloudflare tunnel error. What do I do to fix this?

3

A comprehensive fitness coaching platform that lets you create workout plans, track progress, and access a vast exercise database with detailed instructions and video demonstrations.

4

I thought this video was rather interesting, because at 12:27 the presenter crunches the numbers to find out how many years it would take for a new computer purchase to become more environmentally friendly (in terms of total CO2 expended) than continuing to use a less efficient used model.

Depending on the specific use case, it could take as little as 3 years to break even on CO2 if both systems ran at max power draw forever, and as long as 30 years if the systems are mostly at idle.
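The arithmetic behind that break-even point is easy to sketch yourself. Here's a back-of-envelope version; every number below is a made-up placeholder, not a figure from the video:

# Hedged back-of-envelope sketch; all numbers are invented placeholders.
embodied_co2_new_kg = 300.0             # manufacturing footprint of the new machine
old_draw_w, new_draw_w = 120.0, 40.0    # average wall-power draw of each machine
grid_kg_per_kwh = 0.4                   # CO2 intensity of the local grid

kwh_saved_per_year = (old_draw_w - new_draw_w) / 1000 * 24 * 365  # ~700 kWh
co2_saved_per_year = kwh_saved_per_year * grid_kg_per_kwh         # ~280 kg
print(round(embodied_co2_new_kg / co2_saved_per_year, 1), "years to break even")

At idle the draw gap shrinks, the yearly savings collapse, and the break-even horizon stretches toward the 30-year end of the presenter's range.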

5

cross-posted from: https://lemmings.world/post/29678617

Thought I would share my simple docker/podman setup for torrenting over I2P. It's just 2 files, a compose file and a config file, along with an in-depth explanation, available at my repo https://codeberg.org/xabadak/podman-i2p-qbittorrent. And it comes with a built-in "kill-switch" to prevent traffic leaking out to the clearnet. But for the uninitiated, some may be wondering:

What is I2P and why should I care?

For a p2p system like BitTorrent, two peers can only connect if at least one side has its ports open. If one side uses a VPN, their provider needs to support "port forwarding" for their ports to be open (assuming everything else is configured properly). If you have ever tried to download a torrent with seeders available, yet failed to connect to any of them, your ports are probably not open. And with regulators cracking down on VPNs and forcing providers like Mullvad to shut down port forwarding, torrenting over the clearnet is becoming more and more difficult.

The I2P network doesn't have these issues. I2P is an alternative internet network where all users are anonymous by default, so you don't need a VPN to hide your activity from your ISP. You don't need port forwarding either; all peers can reach each other. And if you do happen to run a VPN on your PC, that's fine too - I2P will work just the same. So if you're turning your VPN on and off all the time, you can keep I2P running throughout and continue downloading/uploading.

I2P eliminates all the complications and worries about seeding, making it easy for beginners to contribute to the network. I2P also makes downloading easier, since all peers are always reachable. And it's more decentralized too, since users don't need to rely on VPN providers. And of course, it's free and open source!

A fair warning though: I2P is restricted in some countries. And in terms of torrenting specifically, torrents have to explicitly support I2P; you can't just take any clearnet torrent and expect it to work on I2P. The speeds are generally lower too, since there are fewer seeders, and the built-in anonymity has a cost as well. However, I've been surprised at the amount of content on the I2P network, and I've been able to reach 1 MB/s download speeds. That's more than good enough for me, and it will only get better as more people join, so I hope this repo is enough for people to get started.
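To give a rough idea of the shape of the setup (this is a simplified sketch, not the actual files from the repo; image names and paths here are stand-ins, so use the repo's versions):

services:
  i2pd:
    image: purplei2p/i2pd:latest          # I2P router; SAM/proxy ports stay internal
    volumes:
      - ./i2pd.conf:/home/i2pd/data/i2pd.conf:ro
    restart: unless-stopped

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    # The "kill-switch" idea: share the router's network namespace so
    # qBittorrent has no network path of its own to the clearnet.
    network_mode: "service:i2pd"
    volumes:
      - ./downloads:/downloads
    restart: unless-stopped

Because qBittorrent can only reach the network through the router's namespace, a dead or misconfigured i2pd fails closed instead of leaking traffic to the clearnet.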

6

Who benefits from this? Even though Let’s Encrypt stresses that most site operators will do fine sticking with ordinary domain certificates, there are still scenarios where a numeric identifier is the only practical choice:

  • Infrastructure services such as DNS-over-HTTPS (DoH), where clients may pin a literal IP address for performance or censorship-evasion reasons.
  • IoT and home-lab devices: think network-attached storage boxes, for example, living behind static WAN addresses.
  • Ephemeral cloud workloads: short-lived back-end servers that spin up with public IPs faster than DNS records can propagate.
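To see the difference in practice, inspect the Subject Alternative Name of a certificate served on a bare IP (203.0.113.10 is a documentation placeholder):

echo | openssl s_client -connect 203.0.113.10:443 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
# An IP certificate lists "IP Address:203.0.113.10" here, where an
# ordinary certificate would list "DNS:example.com".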
7
35
submitted 18 hours ago* (last edited 14 hours ago) by [email protected] to c/[email protected]

I have quite a few self-hosted services, both on machines at home and on a VPS. And there are even more odds and ends I've written that do things on my home network. A one-person maintenance team runs into serious memory limitations, particularly for the services that just run fine for years at a time.

After running into the frustration of forgetting how to run Nextcloud upgrades on the command line for the nth time, I realized it was time to write a tool.

The system wayfinder is what came out of that frustration. It lets you leave notes and commands in place around your infrastructure. After dogfooding it a bit, I was delighted when it saved me a ton of trouble dealing with one of my docker containers.

I took some time to work on it properly, wrote it up, and put it on GitHub, even though it is still a pre-release. Would you use a tool like this? What else would you want in it?

Edit: adding link to GitHub https://github.com/robbieh/way

8

Tailscale recently announced our Series C fundraise, and while we were grateful for the support, the Internet, as it does, also raised a few eyebrows — some wondering whether this meant the dreaded “enshittification” was on the horizon for Tailscale.

Full Article -->

Tailscale recently announced our Series C fundraise. We were grateful for all the community support, but the Internet also raised a few of its collective eyebrows, wondering whether this meant the dreaded “enshittification” was coming next.

That word describes a very real pattern we’ve all seen before: products start great, grow fast, and then slowly become worse as the people running them trade user love for short-term revenue.

It’s a topic I find genuinely fascinating, and I've seen the downward spiral firsthand at companies I once admired. So I want to talk about why this happens, and more importantly, why it won't happen to us. That's big talk, I know. But it's a promise I'm happy for people to hold us to.

What is enshittification?

The term "enshittification" was first popularized in a blog post by Cory Doctorow, who put a catchy name to an effect we've all experienced. Software starts off good, then goes bad. How? Why?

Enshittification proposes not just a name, but a mechanism. First, a product is well loved and gains in popularity, market share, and revenue. In fact, it gets so popular that it starts to defeat competitors. Eventually, it's the primary product in the space: a monopoly, or as close as you can get. And then, suddenly, the owners, who are Capitalists, have their evil nature finally revealed and they exploit that monopoly to raise prices and make the product worse, so the captive customers all have to pay more. Quality doesn't matter anymore, only exploitation.

I agree with most of that thesis. I think Doctorow has that mechanism mostly right. But, there's one thing that doesn't add up for me: Enshittification is not a success mechanism.

I can't think of any examples of companies that, in real life, enshittified because they were successful. What I've seen is companies that made their product worse because they were… scared.

A company that's growing fast can afford to be optimistic. They create a positive feedback loop: more user love, more word of mouth, more users, more money, more product improvements, more user love, and so on. Everyone in the company can align around that positive feedback loop. It's a beautiful thing. It's also fragile: miss a step and it flattens out and soon it's a downward spiral instead of an upward one.

So, if I were, hypothetically, running a company, I think I would be pretty hesitant to deliberately sacrifice a step from that positive feedback loop, the loop I and the whole company spent so much time and energy building, to see if I can grow faster. User love? Nah, I'm sure we'll be fine, look how much money and how many users we have! Time to switch strategies!

Why would I do that? Whenever you switch strategies, there has to be a threshold moment, when something fundamental changes.

Threshold moments and control

In Saint John, New Brunswick, there's a river that flows one direction at high tide, and the other way at low tide. Four times a day, gravity equalizes, then crosses a threshold to gently start pulling the other way, then accelerates. What doesn't happen is a rapidly flowing river in one direction "suddenly" shifts to rapidly flowing the other way. You can see the threshold coming. It's predictable.

In my experience, for a company or a product, there are two kinds of thresholds like this that, when crossed, create a flow change.

The first one is control: if the visionaries in charge lose control, chances are their replacements won't "get it."

The new people didn't build the underlying feedback loop, and so they don't realize how fragile it is. There are lots of reasons for a change in control: financial mismanagement, boards of directors, hostile takeovers.

The worst one is temptation. Being a founder is, well, it actually sucks. It's oddly like being repeatedly punched in the face. When I look back at my career, I guess I'm surprised by how few times per day it feels like I was punched in the face. But, the constant face punching gets to you after a while. Once you've established a great product, and amazing customer love, and lots of money, and an upward spiral, isn't your creation strong enough yet? Can't you step back and let the professionals just run it, confident that they won't kill the golden goose?

Empirically, mostly no, you can't. Actually the success rate of control changes, for well-loved products, is abysmal.

The saturation trap

The second trigger of a flow change comes from outside: saturation. Every successful product, at some point, reaches approximately all the users it's ever going to reach. Before that, you can watch its exponential growth rate slow down: the infamous S-curve of product adoption.

Saturation can lead us back to control change: the founders get frustrated and back out, or the board ousts them and puts in "real business people" who know how to get growth going again. Generally that doesn't work. Modern VCs consider founder replacement a truly desperate move, most of the time. Maybe a last-ditch effort to boost short term numbers in preparation for an acquisition, if we're lucky.

But sometimes the leaders stay on despite saturation, and they try on their own to make things better. Sometimes that does work. Actually, it's kind of amazing how often it seems to work. Among successful companies, it's rare to find one that sustained hypergrowth, nonstop, without suffering through one of these dangerous periods.

(That's called survivorship bias. All companies have dangerous periods. The successful ones survived them. But of those survivors, suspiciously few are ones that replaced their founders.)

If you saturate and can't recover - either by growing more in a big-enough current market, or by finding new markets to expand into - then the best you can hope for is for your upward spiral to mature gently into decelerating growth. If so, and you're a Buddhist, then you hire less, you optimize margins a bit, you resign yourself to being About This Rich And I Guess That's All But It's Not So Bad.

The devil's bargain

Alas, very few people reach that state of zen. Especially the kind of ambitious people who were able to get that far in the first place. If you can't accept saturation and you can't beat saturation, then you're down to two choices: step away and let the new owners enshittify it, hopefully slowly. Or take the devil's bargain: enshittify it yourself.

I would not recommend the latter. If you're a founder and you find yourself in that position, honestly, you won't enjoy doing it and you probably aren't even good at it and it's getting enshittified either way. Let someone else do the job.

Defenses against enshittification

Okay, maybe that section was not as uplifting as we might have hoped. I've gotta be honest with you here. Doctorow is, after all, mostly right. This does happen all the time.

Most founders aren't perfect for every stage of growth. Most product owners stumble. Most markets saturate. Most VCs get board control pretty early on and want hypergrowth or bust. In tech, a lot of the time, if you're choosing a product or company to join, that kind of company is all you can get.

As a founder, maybe you're okay with growing slowly. Then some copycat shows up, steals your idea, grows super fast, squeezes you out along with your moral high ground, and then runs headlong into all the same saturation problems as everyone else. Tech incentives are awful.

But, it's not a lost cause. There are companies (and open source projects) that keep a good thing going, for decades or more. What do they have in common?

An expansive vision that's not about money, and which opens you up to lots and lots of users. A big addressable market means you don't have to worry about saturation for a long time, even at hypergrowth speeds. Google certainly never had an incentive to make Google Search worse.

(Update 2025-06-14: A few people disputed that last bit. Okay. Perhaps Google has occasionally responded to what they thought were incentives to make search worse -- I wasn't there, I don't know -- but it seems clear in retrospect that when search gets worse, Google does worse. So I'll stick to my claim that their true incentives are to keep improving.)
Keep control. It's easy to lose control of a project or company at any point. If you stumble, and you don't have a backup plan, and there's someone waiting to jump on your mistake, then it's over. Too many companies "bet it all" on nonstop hypergrowth and have no way back, and no room in the budget, if results slow down even temporarily.

Stories abound of companies that scraped close to bankruptcy before finally pulling through. But far more companies scraped close to bankruptcy and then went bankrupt. Those companies are forgotten. Avoid it.
Track your data. Part of control is predictability. If you know how big your market is, and you monitor your growth carefully, you can detect incoming saturation years before it happens. Knowing the telltale shape of each part of that S-curve is a superpower. If you can see the future, you can prevent your own future mistakes.
Believe in competition. Google used to have this saying they lived by: "the competition is only a click away." That was excellent framing, because it was true, and it will remain true even if Google captures 99% of the search market. The key is to cultivate a healthy fear of competing products, not of your investors or the end of hypergrowth. Enshittification helps your competitors. That would be dumb.

(And don't cheat by using lock-in to make competitors not, anymore, "only a click away." That's missing the whole point!)
Inoculate yourself. If you have to, create your own competition. Linus Torvalds, the creator of the Linux kernel, famously also created Git, the greatest tool for forking (and maybe merging) open source projects that has ever existed. And then he said, this is my fork, the Linus fork; use it if you want; use someone else's if you want; and now if I want to win, I have to make mine the best. Git was created back in 2005, twenty years ago. To this day, Linus's fork is still the central one.

If you combine these defenses, you can be safe from the decline that others tell you is inevitable. If you look around for examples, you'll find that this does actually work. You won't be the first. You'll just be rare.

Side note: Things that aren't enshittification

I often see people worry about enshittification in things that aren't. These things might be good or bad, wise or unwise, but that's a different topic. Tools aren't inherently good or evil. They're just tools.

"Helpfulness." There's a fine line between "telling users about this cool new feature we built" in the spirit of helping them, and "pestering users about this cool new feature we built" (typically a misguided AI implementation) to improve some quarterly KPI. Sometimes it's hard to see where that line is. But when you've crossed it, you know.

Are you trying to help a user do what they want to do, or are you trying to get them to do what you want them to do?

Look into your heart. Avoid the second one. I know you know how. Or you knew how, once. Remember what that feels like.
Charging money for your product. Charging money is okay. Get serious. Companies have to stay in business.

That said, I personally really revile the "we'll make it free for now and we'll start charging for the exact same thing later" strategy. Keep your promises.

I'm pretty sure nobody but drug dealers breaks those promises on purpose. But, again, desperation is a powerful motivator. Growth slowing down? Costs way higher than expected? Time to capture some of that value we were giving away for free!

In retrospect, that's a bait-and-switch, but most founders never planned it that way. They just didn't do the math up front, or they were too naive to know they would have to. And then they had to.

Famously, Dropbox had a "free forever" plan that provided a certain amount of free storage. What they didn't count on was abandoned accounts, accumulating every year, with stored stuff they could never delete. Even if a healthy fixed fraction of users upgraded to a paid plan each year, all the ones that didn't kept piling up... year after year... after year... until Dropbox had to start deleting old free accounts and the data in them. A similar story happened with Docker, which used to host unlimited container downloads for free. In hindsight that was mathematically unsustainable. Success guaranteed failure.

Do the math up front. If you're not sure, find someone who can.
Value pricing (i.e., charging different prices to different people). It's okay to charge money. It's even okay to charge money to some kinds of people (say, corporate users) and not others. It's also okay to charge money for an almost-the-same-but-slightly-better product. It's okay to charge money for support for your open source tool (though I stay away from that; it incentivizes you to make the product worse).

It's even okay to charge immense amounts of money for a commercial product that's barely better than your open source one! Or for a part of your product that costs you almost nothing.

But, you have to do the rest of the work. Make sure the reason your users don't switch away is that you're the best, not that you have the best lock-in. Yeah, I'm talking to you, cloud egress fees.
Copying competitors. It's okay to copy features from competitors. It's okay to position yourself against competitors. It's okay to win customers away from competitors. But it's not okay to lie.
Bugs. It's okay to fix bugs. It's okay to decide not to fix bugs; you'll have to sometimes, anyway. It's okay to take out technical debt. It's okay to pay off technical debt. It's okay to let technical debt languish forever.
Backward incompatible changes. It's dumb to release a new version that breaks backward compatibility with your old version. It's tempting. It annoys your users. But it's not enshittification for the simple reason that it's phenomenally ineffective at maintaining or exploiting a monopoly, which is what enshittification is supposed to be about. You know who's good at monopolies? Intel and Microsoft. They don't break old versions.

Enshittification is a real, and tragic, phenomenon. But let's protect a useful term and its definition! Those things aren't it.

Epilogue: a special note to founders

If you're a founder or a product owner, I hope all this helps. I'm sad to say, you have a lot of potential pitfalls in your future. But, remember that they're only potential pitfalls. Not everyone falls into them.

Plan ahead. Remember where you came from. Keep your integrity. Do your best.

9

Every time I check my nginx logs, there are more scrapers than I can count, and I couldn't find any good open source solutions.
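The closest I've gotten is a crude user-agent block in nginx itself (a rough sketch; the matched names are illustrative, and determined scrapers can simply lie about their user agent):

# In the http {} block: flag known crawler user agents (illustrative list).
map $http_user_agent $is_scraper {
    default                               0;
    ~*(GPTBot|CCBot|Bytespider|Amazonbot) 1;
}

# In the server {} block: turn the flag into a refusal.
if ($is_scraper) {
    return 403;
}

Since the user agent is trivially spoofed, this only filters the polite scrapers, which is exactly why I'm hoping for a smarter open source tool.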

10

Hello Friendos

I'm a security / cloud engineer and I've had this lab for about 6 months now. In the last few weeks I've decided to start using it to self-host some "production" services for me and my loved ones (an extended family of 15), mainly a Nextcloud instance that serves as our "picture vault".

The hardware is a PowerEdge R430 with twin E5-2620s and 128 GB of RAM. It has 8x 1TB 2.5" HDDs.

This thing ended up being really overpowered for what I use it for, and I feel like by now I have explored everything I wanted to on this hardware. I was thinking about scaling laterally to R230s so I could play with load balancing and HA.

However these servers only have 2-4 drive bays, and I have no experience with DAS.

Can you guys help with some links? I'm researching DAS enclosures. I understand that any server with a PCI slot can take a SAS card, and any SAS enclosure is compatible.

Can you guys foresee any issue with a server as small as an R230 connecting to a SAS DAS?

I see that DAS enclosures have multiple connections per module. Would I be able to connect multiple servers to the same module, or is it one server per connection with no sharing?

If I have to share the connection, I would have to host a NAS (I probably should anyway) and will have to upgrade my switch from gigabit to 10G.

Would also appreciate some other recommendations for small form factor servers that can be bought for cheap (18 inches or shorter).

Pic of current setup for attention... don't judge my PC case :) The 3U chassis for it is in the mail.

11

So, I tried linking my Lemmy instance akaris.space, but it says the SSL handshake failed, and I can't seem to figure out what went wrong.
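So far the only debugging I've done is to watch the handshake directly, which at least shows which side aborts and when:

openssl s_client -connect akaris.space:443 -servername akaris.space
# Typical tells: "unable to get local issuer certificate" (incomplete chain),
# a self-signed certificate where a real one should be, or an immediate
# disconnect (nothing listening / TLS terminated in the wrong place).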

12

Great news! I started my selfhost journey over a year ago, and I'm finding myself needing better hardware. There are so many services I want that my NAS can't handle, and unfortunately I need to add GPU transcoding to my Jellyfin setup.
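For context on the transcoding bit: whichever OS I pick, the Jellyfin side should mostly reduce to handing the render device to the container, something like this (a rough sketch for an Intel/VAAPI box; paths and tags are the common defaults, not tested on my hardware):

services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    devices:
      - /dev/dri:/dev/dri        # iGPU render node for hardware transcoding
    volumes:
      - ./config:/config
      - ./media:/media
    ports:
      - "8096:8096"
    restart: unless-stopped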

What's the best OS for a machine focused on containers and (getting started with) VMs? I've heard Proxmox recommended.

What CPU specs should I be concerned about?

I'm willing to buy a pre-built as long as its hardware has sufficient longevity.

13
14
771
goodbye plex (piefed.cdn.blahaj.zone)
submitted 4 days ago by [email protected] to c/[email protected]

After almost 15 years, my Plex server is no more. Jellyfin behind nginx with Authentik is running very nicely.

15

I am looking to create a tool similar to AlternativeTo. It would list different brands and why you should or should not buy them. Is there some software that would be a great starting point for creating this kind of service?

I would guess wiki apps like Wiki.js would work for this, but I'm interested to hear if there's something else that could be used.

16

So, I have a self-hosted Owncast instance. I want to run a 24/7 live stream. However, if the streaming source changes or cuts for a few seconds, Owncast immediately terminates the stream. So I'm trying to find a way to have a "fallback/offline/backup" stream where, for now, it's just a testcard graphic with the time on it. Then, when it detects an incoming RTMP stream, it switches to that stream; when the stream ends, back to the testcard. My aim is a seamless stream that is always live and never cuts.

So basically, just a testcard graphic (and maybe some sound) that I can easily take over/hijack

I thought such a thing would be simple - it isn't. FFmpeg needs to reconnect to switch sources. I tried using a FIFO pipe, but the thing that reads the pipe doesn't seem to like it when the RTMP stream connects, choosing to break. It works again eventually, but by then the stream is dropped. I've tried forwarding an RTMP stream from Nginx and using the switchers, but the forwarder likes to break as well (it seems to dislike mismatched timestamps or something).
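For reference, the fallback leg on its own is the easy part; it's the seamless switching that hurts. This is roughly the testcard feed I've been running (paths, stream key, and encoder settings are placeholders):

ffmpeg -re -stream_loop -1 -i testcard.mp4 \
  -vf "drawtext=text='%{localtime}':fontsize=48:x=40:y=40" \
  -c:v libx264 -preset veryfast -g 60 -c:a aac \
  -f flv rtmp://localhost:1935/live/STREAM_KEY
# Owncast's ingest takes this indefinitely; the unsolved part is swapping
# the input for the live source without restarting FFmpeg and dropping it.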

I apologise for not leaving any specific logs. I have been working on this for days and have errors galore. I am posting here to see if there's a difference/best approach. (If one of these here is a best option and I was on the right track, I can try and dig up my old code and errors).

Also, this needs to be done headless and automatically begin at startup, which might rule out OBS, but I'm not 100% certain. If it's possible for me to set up a scene on a GUI computer and load that into a headless OBS, please let me know.

17
18

Hi, I'm having trouble getting my Caddy reverse proxy to work with the arr apps; I can access everything without the reverse proxy. I have set up the basic login prompt (like this one) through the arr apps. After I log in to one of the arr apps, I either get a blank page or I see this error page:

Firefox:

The page isn’t redirecting properly

Firefox has detected that the server is redirecting the request for this address in a way that will never complete.

This problem can sometimes be caused by disabling or refusing to accept cookies.

Chromium:

This page isn't working. domain.com redirected you too many times.

Try deleting your cookies. ERR_TOO_MANY_REDIRECTS

Caddyfile config (1.1.1.1 is a placeholder IP for my VPS's external IP):

{
    email [email protected]
}

domain.com {

    # qBittorrent
    redir /qbit /qbit/
    handle_path /qbit/* {
        reverse_proxy 1.1.1.1:8080 {
            header_up Host {host}
            header_up X-Real-IP {remote_host}
            header_up X-Forwarded-For {remote_host}
            header_up X-Forwarded-Proto {scheme}
        }
    }

    # Sonarr
    redir /sonarr /sonarr/
    handle_path /sonarr/* {
        reverse_proxy 1.1.1.1:8989 {
            header_up Host {host}
            header_up X-Real-IP {remote_host}
            header_up X-Forwarded-For {remote_host}
            header_up X-Forwarded-Proto {scheme}
        }
    }

    # Radarr
    redir /radarr /radarr/
    handle_path /radarr/* {
        reverse_proxy 1.1.1.1:7878 {
            header_up Host {host}
            header_up X-Real-IP {remote_host}
            header_up X-Forwarded-For {remote_host}
            header_up X-Forwarded-Proto {scheme}
        }
    }

    # Prowlarr
    redir /prowlarr /prowlarr/
    handle_path /prowlarr/* {
        reverse_proxy 1.1.1.1:9696 {
            header_up Host {host}
            header_up X-Real-IP {remote_host}
            header_up X-Forwarded-For {remote_host}
            header_up X-Forwarded-Proto {scheme}
        }
    }
}

I've tried setting the URL base to /the_name_of_the_arr_app, but it didn't work. I've attempted it with and without the redir /the_name_of_the_arr_app /the_name_of_the_arr_app/. I'm stuck and unsure of how to resolve the issue. It works fine with qBittorrent.
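One thing I haven't fully tested yet (so treat this as a guess): handle_path strips the /sonarr prefix before proxying, while the URL base I set makes the app redirect back to /sonarr/..., which would loop forever. Using handle instead should preserve the prefix, roughly:

domain.com {
    redir /sonarr /sonarr/
    handle /sonarr/* {
        # handle (unlike handle_path) keeps /sonarr on the proxied request,
        # matching the URL base configured in the app itself
        reverse_proxy 1.1.1.1:8989
    }
}

qBittorrent presumably works with handle_path because its WebUI uses relative paths rather than redirecting to an absolute prefix.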

radarr debug log

2025-07-04 21:27:45.9|Info|Radarr.Http.Authentication.BasicAuthenticationHandler|Basic was not authenticated. Failure message: Authorization header missing.
2025-07-04 21:27:45.9|Info|Radarr.Http.Authentication.BasicAuthenticationHandler|AuthenticationScheme: Basic was challenged.
2025-07-04 21:27:54.1|Debug|Radarr.Http.Authentication.BasicAuthenticationHandler|AuthenticationScheme: Basic was successfully authenticated.
2025-07-04 21:27:54.8|Debug|Radarr.Http.Authentication.BasicAuthenticationHandler|AuthenticationScheme: Basic was successfully authenticated.
2025-07-04 21:27:55.0|Debug|Radarr.Http.Authentication.BasicAuthenticationHandler|AuthenticationScheme: Basic was successfully authenticated.

19
248
Rate my one year old homelab. (media.piefed.social)
submitted 4 days ago* (last edited 4 days ago) by [email protected] to c/[email protected]

How it started: the MP-80

I bought a MiniPC (Blackview MP-80) to run Home Assistant and some lights etc. to go with it.

It's now exactly one year later, and this is what my setup looks like now:

BMAX B2 Pro --> Home Assistant OS
Blackview MP-80 --> Proxmox --> Nextcloud-AIO & Immich
ODROID H4+ --> Proxmox --> TrueNAS

How it's going: the ODROID

With the heatwave in Europe, I've now installed cooling to keep my HDDs from heating up.

I know it's janky as hell, but I love it. The plan going forward is to buy a 3D printer so that I can print a custom 10" rack, then build my own cooling and temperature monitoring system with an ESP32, create a dashboard for it in Home Assistant, and sort out networking.

It's a work in progress, having a lot of fun learning and adding new things.

20

Hey everyone!

I'm excited to introduce Reitti, a location tracking and analysis application designed to help you gain insights about your movement patterns and significant places—all while keeping your data private on your own server.

Core Capabilities:

  • Visit Tracking: Automatically recognizes and categorizes the places where you spend time, using customizable detection algorithms
  • Trip Analysis: Analyzes your movements between locations to understand how you travel, whether by walking, cycling, or driving
  • Interactive Timeline: Visualizes all your past activities on an interactive timeline with map and list views that show visit duration, transport method, and distance traveled

Photo Integration:

  • Connect your self-hosted Immich photo server to seamlessly display photos taken at specific locations right within Reitti's timeline. The interactive photo viewer lets you browse galleries for each place.

Data Import Options:

  • Multiple Formats Supported: Reitti can import existing location data from GPX, GeoJSON, and Google Takeout (JSON) backups
  • (Near) Real-time Updates: Automatically receive location info via mobile apps like OwnTracks, GPSLogger or our REST API

Customization:

  • Multi-geocoding Services: Configurable options to convert coordinates to human-readable addresses using providers like Nominatim
  • User Profiles: Customize individual display names, password management, and API token security under your own control

Self-hosting:

  • Reitti is designed to be deployed on your own infrastructure using Docker containers. We provide configuration templates to set up linked services like PostgreSQL, RabbitMQ and Redis that keep all your location data private.
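As a rough picture of what that stack looks like (a simplified sketch only, not the canonical template; image names, tags, and variables here are stand-ins, so use the provided templates):

services:
  reitti:
    image: dedicatedcode/reitti:latest    # placeholder image/tag
    ports:
      - "8080:8080"
    depends_on: [postgres, rabbitmq, redis]
  postgres:
    image: postgis/postgis:16-3.4         # location data pairs naturally with PostGIS
    environment:
      POSTGRES_DB: reitti
      POSTGRES_USER: reitti
      POSTGRES_PASSWORD: change-me
  rabbitmq:
    image: rabbitmq:3-management
  redis:
    image: redis:7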

Reitti is still early in development but already has extensive capabilities. I'd love to hear your feedback and answer any questions, so I can tailor Reitti to meet the community's needs.

Hope this sparks some interest!

Daniel

21

AppFlowy is a collaborative project/wiki/documentation platform (similar to Notion if you are familiar with that). Otherwise check out their screenshots for an idea of how it works.

New Features

Desktop

  • Private page sharing: Add members to private pages with Can View or Can Edit permissions
  • Guest editor collaboration: Invite non-members (guest editors) to collaborate in real-time on your pages
  • Shared with me: Browse all pages shared with you under the new Shared with me section
  • New syncing protocol: Optimized for faster, more reliable multi-user and multi-device data sync

Mobile

  • Shared page collaboration: View and edit pages that have been shared with you on iOS and Android
  • New syncing protocol: Optimized for faster, more reliable multi-user and multi-device data sync
22

Hi all, I’ve recently got a Proxmox server up and running, and I'm cutting my teeth on it by setting up some services (thanks to everyone who responded to my earlier post!). One thing I’m struggling with currently, and it’s admittedly not straightforward, is getting a graphical session up and running.

What I have working so far is an Arch-based LXC container with GPU passthrough. nvidia-smi in the LXC reports as usual, and so that seems to be working fine.

Upon installing a graphical session, say Cinnamon with LightDM, however, I can’t seem to open any display. I have a virtual terminal available via the Proxmox UI, and though I haven’t tried, I’m sure I could SSH in just fine as well. For what it’s worth, I have a display connected to the host system; the host does not run any graphical sessions. I’d like, for the time being, to use this host display, and have passed through /dev/fb0.
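For context, the device passthrough itself is just a few lines in /etc/pve/lxc/<id>.conf (roughly like the below; major numbers and paths vary by host, and the NVIDIA nodes need their own entries):

lxc.cgroup2.devices.allow: c 226:* rwm    # DRM devices (/dev/dri)
lxc.cgroup2.devices.allow: c 29:* rwm     # framebuffer (/dev/fb0)
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/fb0 dev/fb0 none bind,optional,create=file

My working theory is that the display manager is the hard part, not the devices: the container has no seat/logind session by default, which would explain the session-ID and dbus errors.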

What I haven’t tried is running a pure X11-based session. I’d really prefer to use a Wayland session with Cinnamon, but if necessary I can try to get an X11 session running. I additionally have not installed any VNC servers.

The errors I tend to get when trying to start cinnamon center on not being able to get a session ID, not being able to connect or open a display, and not being able to connect or find a dbus session.

systemctl status lightdm says LightDM is running as a service.

Anyone have any ideas for how to get a session going graphically? I’m not sure how to even pass a tty to the connected monitor from the lxc.

Thanks for any help or guidance — if I do figure this all out, I plan to make a guide for future folks.

23

I tried installing YunoHost once before, and now I'm installing it again, on a virtual machine. After installing, it asked for a user and password. I typed in what was provided, "root" and "yunohost", and it didn't work; it said the credentials were incorrect.

24

cross-posted from: https://lemmy.world/post/32265822

xkcd #3109: Dehumidifier

Title text:

It's important for devices to have internet connectivity so the manufacturer can patch remote exploits.

Transcript:

[A store salesman, Hairy, is showing Cueball a dehumidifier, with a "SALE" label on it. Several other unidentified devices, possibly other dehumidifier models, are shown in the store as well.]

Salesman: This dehumidifier model features built-in WiFi for remote updates.
Cueball: Great! That will be really useful if they discover a new kind of water.

Source: https://xkcd.com/3109/

explainxkcd for #3109

25

The sensor is located on the case (not near the exhaust) of the server. With the layout of my apartment, this is the only place I can realistically put my server, but it's sadly also the hottest place in the apartment.

The outside temperature is supposed to reach 36°C today, so I expect the ambient temp for the server to rise another 2-3 degrees.
