dr_robot

joined 1 year ago
[–] [email protected] 22 points 7 months ago (3 children)

It doesn't seem like you actually heard the arguments presented in the article. It isn't about anyone being offended by left- or right-wing politics; it's that women engineers and scientists were uncomfortable with it for a variety of reasons. In a field that struggles to attract and keep female talent, that's a pretty big deal. The model herself spoke out and asked to be "retired from tech".

[–] [email protected] 0 points 8 months ago

I'm working on a music collection manager with a TUI for myself. I prefer to buy and own music instead of just streaming, and I have a self-hosted server with ZFS and backups where I keep the music and from which I can stream or download to my devices. There are websites which help you keep track of what you own and maintain wishlists, but they don't really satisfy my needs, so I decided to create my own. Its main feature is giving me an easier overview of which albums I own and don't own for the artists I'm interested in, and maintaining a wishlist based on that for my next purchases.

I'm doing it in Rust, because it's a hobby project and I want to get better at Rust. However, it has paid off in other ways. The type system has allowed me to create a UI that is very safe to add features to without worrying about crashes. Sometimes I actually have to stop and think about why it didn't crash, only to find that Rust forced me to correctly handle an optional outcome before I ever got near an undefined situation.
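To give a flavour of what I mean (a made-up sketch, not actual project code), modelling ownership as an enum instead of a pile of nullable fields means the UI code simply can't forget a case:

```rust
// Illustrative only: hypothetical types in the spirit of the project.
// Ownership is an enum, so every state has to be handled explicitly.

enum Ownership {
    Owned { format: String },    // e.g. "CD" or "FLAC"
    Wishlisted { priority: u8 }, // position on the wishlist
    NotOwned,
}

struct Album {
    artist: String,
    title: String,
    ownership: Ownership,
}

// Rendering a row in the TUI: the match must cover every variant,
// so the "not owned" case can't silently slip through and blow up later.
fn render_row(album: &Album) -> String {
    let status = match &album.ownership {
        Ownership::Owned { format } => format!("owned ({format})"),
        Ownership::Wishlisted { priority } => format!("wishlist #{priority}"),
        Ownership::NotOwned => "missing".to_string(),
    };
    format!("{} - {} [{}]", album.artist, album.title, status)
}

fn main() {
    let album = Album {
        artist: "Some Artist".into(),
        title: "Some Album".into(),
        ownership: Ownership::Wishlisted { priority: 1 },
    };
    println!("{}", render_row(&album));
}
```

If I later add a fourth state, the compiler points at every match that needs updating instead of letting the UI crash at runtime.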

[–] [email protected] 11 points 8 months ago (1 children)

Many open source projects are not developed by unpaid volunteers. The Linux kernel, for example, is primarily developed by professionals on paid time. I'm not convinced kernel development would continue at its current pace without business contributions, and I'm not convinced every open source project could simply carry on without any payment.

[–] [email protected] 4 points 9 months ago

I do the same. Fedora on my laptop because I want a balance of stability and having the newest features. Servers run Debian, because I don't have time to fix and update things.

[–] [email protected] 43 points 9 months ago (1 children)

Logcheck. It took ages to make sure innocent logs are ignored, but now I get an email as soon as anything non-routine happens on my servers. I get emails with logs from every update, every time I log in, etc. This has given me the most confidence that nothing unexpected is happening on my servers. Of course, you still need to make sure the firewall is configured well, use SSH keys, etc., but logcheck is how I know I'm doing enough.
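If anyone is curious what the tuning phase actually involves: logcheck's ignore rules are just extended regexes, one per line, in files under /etc/logcheck/ignore.d.server/. As a rough example (the exact pattern depends on your distro's log format, so treat this as a sketch), a rule like the following silences routine cron session messages while everything else still gets mailed out:

```
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ CRON\[[0-9]+\]: pam_unix\(cron:session\): session (opened|closed) for user root
```

Most of the work is iterating on rules like that until only the genuinely interesting lines are left.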

 

To build a fully climate-neutral transport system in the Netherlands, many citizens will have to give up their cars, Jan Willem Erisman, the government's new chief climate adviser and chairman of the Scientific Climate Council, told the AD.

 

Note: It seems my original post from last week didn't get posted on lemmy.world from kbin (I can't seem to find it) so I'm reposting it. Apologies to those who may have already seen this.

I'm looking to deploy some form of monitoring across my self-hosted servers and I'm a bit confused about the different options.

I have a small network of three machines that I would like to monitor. I am not looking for a solution that lets me monitor tens, hundreds, or thousands of nodes. Furthermore, I am more interested in being able to observe metrics for each node individually rather than in aggregate. Each of these machines performs a different task so aggregate metrics from these machines are not particularly meaningful. However, collecting all the metrics centrally so that I can have a single dashboard to view them all in one convenient place is definitely something I would like.

With that said, I have been trying to understand the different (popular) options that are available and I would like to hear what the community's experience is with these options and if anybody has any advice on any of these in light of my requirements above.

Prometheus seems like the default go-to for monitoring. This would require deploying a node_exporter on each node, a Prometheus service, and a Grafana dashboard. That's all fine, I can do that. However, from everything I'm reading, it doesn't seem like Prometheus is optimised for my use case of monitoring each node individually. I'm sure it's possible, but I'm concerned that, because this is not what it's meant for, it would take me ages to set it up in a way I'm happy with.
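For reference (and in case I'm missing something), my understanding is that the Prometheus side itself would be tiny for three machines: a static scrape config listing each node's node_exporter, roughly like this sketch with placeholder hostnames:

```yaml
# prometheus.yml sketch: scrape node_exporter (default port 9100) on each machine
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: "node"
    static_configs:
      - targets:
          - "server1.lan:9100"
          - "server2.lan:9100"
          - "server3.lan:9100"
```

My impression is that the per-node view then comes down to filtering Grafana dashboards on the instance label, so most of the setup time would go into the Grafana side rather than Prometheus itself.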

Netdata seems like a comprehensive single-device monitoring solution. It also appears that it is possible to run your own registry to help with distributed monitoring. Not gonna lie, the Netdata dashboard looks slick. An important additional advantage is that it's packaged in Debian (all my machines run Debian). However, it looks like it does not store metrics for very long. To solve that, I could also set up InfluxDB and Grafana for long-term metrics. I could use Prometheus instead of InfluxDB in this arrangement, but I'm more likely to deploy a bunch of IoT devices than servers that need monitoring, which makes InfluxDB a bit more future-proof for me, since it could be reused for IoT data.

Cockpit is another single-device solution, which additionally provides direct control of the system. The direct control isn't much of a plus for me: I would never let Cockpit be accessible from outside my home network, whereas I wouldn't mind that so much for dashboards with read-only data (still behind some authentication, of course). It's also probably not built for monitoring specifically, but I'm including it in case somebody has something interesting to say about it.

What's everybody's experience with the above solutions, and does anybody have advice specific to my situation? I'm currently leaning towards Netdata with my own registry at first, and later adding InfluxDB and Grafana for long-term metrics.

[–] [email protected] 1 points 1 year ago

Plasma is amazing. It has been my DE of choice for years now. So happy I'm donating to the project.