packetloss

joined 1 year ago
[–] [email protected] 5 points 6 months ago

That's for everything listed above. This is measured straight from my UPS, which everything is connected to.

[–] [email protected] 14 points 6 months ago (2 children)

370W average.

3 x Lenovo x3650 M5 (Proxmox Nodes)

  • 1 x Xeon E5-2697A v4
  • 128GB DDR4 ECC
  • 2 x 960GB SATA SSD
  • 3 x 900GB SAS3 10K RPM HDD
  • 1 x NVIDIA Quadro M2000

TP-Link TL-SG3428X switch

Raspberry Pi 3B+ (physical Pi-hole server)

Generic Intel N3150 mini PC (OpenVPN client)

Dell OptiPlex (OPNsense firewall)

  • Intel i5-4590
  • 8GB RAM
[–] [email protected] 4 points 7 months ago

I use Nala for package management on my Debian systems. I've created aliases for 'apt' & 'apt-get' to use Nala instead.

I also alias 'll' to 'ls -lah'.
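
Roughly what that looks like in a ~/.bashrc (a minimal sketch; assumes the 'nala' package from Debian's repos is already installed):

    # Use nala as a drop-in front end for apt/apt-get
    alias apt='nala'
    alias apt-get='nala'
    # Long listing, all files, human-readable sizes
    alias ll='ls -lah'

Run 'source ~/.bashrc' afterwards to pick the aliases up in the current shell.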

That's about it though.

[–] [email protected] 12 points 7 months ago

It's a loot bug from Lethal Company.

[–] [email protected] 2 points 7 months ago (1 children)

Simple, clean, easy to look at. Love it.

[–] [email protected] 1 points 8 months ago (1 children)

Windows 2000 says hi to Windows 98

 

It's the last Friday before the New Year. Like me, many of you will be starting your on-call rotation.

To all my brothers and sisters in arms, I wish you a quiet and relaxing New Year's weekend. May your DNS be accurate, your switches be resilient, and your uptimes be high.

Cheers!

 

We're about to roll out Microsoft 365 to all our users: Exchange Online mailboxes, Teams, OneDrive, and SharePoint.

What solutions for backing up and restoring that data have you used, and which would you recommend?

We currently use Veeam for VM backup, but their Microsoft 365 offering is a totally different product, not integrated with VBR. Since a separate product would have to be licensed and installed anyway, we aren't necessarily locked into using Veeam for that too.

Thanks in advance.

[–] [email protected] 1 points 1 year ago

Not saying you have to or anything, and I can understand and respect using something like MX Linux to save time on customization. Just know that because it's based on Debian, core OS updates will be delayed while the MX team rebases them into their fork.

[–] [email protected] 0 points 1 year ago (2 children)

Honest question, but why not just install Debian with the Xfce DE? Why rely on a fork for updates?

From what I can tell, both from testing MX Linux and from reading about it, it's nothing more than Debian with a few pre-installed packages and some customization, all of which could be done on Debian directly without much trouble.
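
For instance, on a stock Debian install the whole Xfce desktop is one command away (a quick sketch; both package names are from Debian's own repos):

    # Full Xfce desktop selection, same as the installer's desktop task
    sudo apt install task-xfce-desktop
    # Or just the core Xfce packages, without the extras
    sudo apt install xfce4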

 

Is it just me, or are the system requirements for vendor applications getting out of hand? In the past 5 years I've watched minimum specs go from 2 or 4 vCPUs with 8 or 16GB of RAM up to a minimum of 24 vCPUs and 84GB of RAM!

What the actual hell?

We run a VERY efficient shop where I work. Our VM infrastructure is constantly monitored for services or VMs that are using more resources than they need. We have 100+ VMs running across 4 nodes, each with 2TB of RAM and 32 cores. If we find an application that is abusing CPU or RAM, we tune it to be as efficient as it can be. However, for vendor solutions where they provide a VM image to deploy, or install a custom software suite on the VM, the requirements and the performance have been getting absolutely out of hand.

I just received a request to deploy a new VM that will be used for managing and provisioning switch ports on some new networking gear. The vendor has provided a document with their minimum requirements for it:

  • 24 vCPUs
  • 84GB of RAM
  • 600GB HDD with a minimum I/O speed of 200MB/s
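
For what it's worth, that I/O floor is at least easy to verify before the vendor blames your storage. A rough sequential-throughput check with dd (the target path is a placeholder for wherever the VM's disk is mounted):

    # Write 1GB with direct I/O so the page cache doesn't inflate the number
    dd if=/dev/zero of=/mnt/vmdisk/ddtest bs=1M count=1024 oflag=direct
    # Read it back the same way; dd prints MB/s in its summary line
    dd if=/mnt/vmdisk/ddtest of=/dev/null bs=1M iflag=direct
    rm /mnt/vmdisk/ddtest

A tool like fio will give more realistic mixed-workload numbers, but for a simple sequential spec like this, dd gets you in the ballpark.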

I've worked as a System Administrator for a long time. One thing I've learned is that the measure of a company's product is not only how well it functions and how well it does what it advertises, but also how well it's built, and that includes system resource usage and requirements.

When I see system requirements like the ones I was just given, it really makes me question the quality of the development team and of the product itself. For what it's supposed to do, the minimum specs just don't make sense. It's like they ran into a performance bottleneck somewhere along the line, and instead of diagnosing the issue and making the code more efficient, they pulled a Jeremy Clarkson and added "More power!" Because throwing more CPUs and RAM at a performance issue always fixes it, right? Let's just pass the issue along to our customers and make them use more of their infrastructure resources to fix our problem. Jeez!

Just to be clear, I'm not making a blanket statement about all developers. There are plenty of developers and development teams that put a lot of effort into refining their product and making it efficient. It just seems more commonplace now for these "basic" applications from very large vendors to come with absurd system requirements.

Is anyone else experiencing this? Any similar stories to share?