Filesystem-level encryption enabled on RHEL. For some damn reason, they turn it off in their kernel.
Better access to tools like eBPF and XDP.
A kernel that fits my hardware and supports things the original kernel doesn't. Then again, I use Gentoo.
What did the original kernel not support?
Having / on ZFS, but I think that went into an initrd... I don't remember, but it wasn't hardware related.
Not for myself, but for a client who was running a game server. He wanted to tweak the number of ticks per second at which the kernel interacted with the CPU. I didn't even know that was a parameter, and after a few attempts it made a huge difference, according to him (I never went on that server myself), and he claimed to have grabbed a good part of the market because of it.
After that I familiarized myself more with the stuff in there. But that was a good while ago, before most of you guys were born.
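(The comment doesn't say which parameter it was, but it sounds like the kernel's timer frequency, CONFIG_HZ; that's a guess. The relevant .config options look roughly like this, with the usual latency-vs-throughput trade-off:)

```
# "Timer frequency" under "Processor type and features" in menuconfig.
# 1000 Hz favours low latency (interactive use, game servers);
# 100 Hz favours throughput for batch-style server work.
CONFIG_HZ_1000=y
CONFIG_HZ=1000
# Tickless idle is the related modern option:
CONFIG_NO_HZ_IDLE=y
```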
Just download the devel kernel from your distro and go into make menuconfig. I am on an Intel laptop with recent hardware, so there's no reason to build the AMD, Nvidia, etc. drivers, and there's a shitload of likely unmaintained drivers for ancient hardware besides.
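For reference, the usual sequence after grabbing the source looks something like this. It's only a sketch, assuming your distro exposes the running config via /proc/config.gz; paths and bootloader hooks vary by distro:

```shell
cd linux-*/                       # unpacked kernel source tree
zcat /proc/config.gz > .config    # start from the running kernel's config
make olddefconfig                 # accept defaults for any new options
make menuconfig                   # prune drivers and features you don't need
make -j"$(nproc)"
sudo make modules_install
sudo make install                 # installs the image and runs the bootloader hook
```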
I used to manually compile with the Linux-VServer patches, before Debian started shipping a pre-patched kernel.
Linux-VServer was kinda like LXC or OpenVZ. I was using it around 2008 or so as LXC wasn't quite ready for use in production yet (was still far from finished) and OpenVZ didn't support Debian hosts.
The secureblue image I use disables numerous kernel modules and enables many kernel mitigation arguments.
The performance impact is minimal; hopefully that means a more secure system? I honestly don't know, nor do I change the defaults recommended by the developer.
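For flavour, these are the kinds of hardening arguments such images pass on the kernel command line. An illustrative list, not secureblue's actual one:

```
mitigations=auto,nosmt           # CPU vulnerability mitigations, SMT off where unsafe
init_on_alloc=1 init_on_free=1   # zero memory on allocation and free
slab_nomerge                     # don't merge slab caches (hampers heap exploits)
module.sig_enforce=1             # only signed kernel modules may load
lockdown=confidentiality         # restrict even root's access to kernel internals
```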
Bragging rights.
Better lzma performance with xz. 🤪
Knowledge, and time I'm forced to spend away from the computer.
Wow, I never thought of that as a good thing until now. I bet Gentoo users are more well-rounded than Arch users
Well, lots of time to practice sword fighting in office chairs
Or to scroll through XKCD, apparently :P
Camera drivers. The patch had been submitted but hadn't been merged yet, and I didn't want to wait.
I'm running a custom kernel on my Arch laptop. It's a little faster, a little smaller, and a little more secure. I'm also running a custom kernel that enables Adiantum encryption on an old phone running postmarketOS.
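If anyone wants to try the same: enabling Adiantum is mostly a matter of a few crypto options. Roughly this, assuming fscrypt-style file encryption on top:

```
CONFIG_CRYPTO_ADIANTUM=y
# The primitives Adiantum is built from:
CONFIG_CRYPTO_CHACHA20=y
CONFIG_CRYPTO_NHPOLY1305=y
CONFIG_CRYPTO_AES=y
# Plus a consumer, e.g. fscrypt:
CONFIG_FS_ENCRYPTION=y
```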
How did you conduct this speed test? Where are the results? 😂
Sorry, I think this every time someone says their computer is faster or that mod X on Android is "snappier".
I used Geekbench 5. My CPU is an AMD Ryzen 5 5500U. I tested a few prebuilt kernels and then custom-compiled the fastest one.
prebuilt linux kernel:
- single-thread score: 1170
- multi-thread score: 4604
prebuilt linux-zen kernel:
- single-thread score: 1156
- multi-thread score: 4593
prebuilt linux-xanmod kernel:
- single-thread score: 1164
- multi-thread score: 4594
prebuilt linux-hardened kernel:
- single-thread score: 1156
- multi-thread score: 4841
custom linux-hardened kernel:
- single-thread score: 1160
- multi-thread score: 4977
I'm playing around with coreboot, which gives me the ability to embed a Linux kernel. The problem is that we're limited by the size of the ROM chip, which is between 4 MiB and 16 MiB depending on the specific device. The one I'm working on has 12 MiB; about 3 MiB is needed in order to boot normally, leaving me 9 MiB to play around with.
Enter buildroot, (arguably) a Linux distro that lets you build a kernel, BusyBox, a minimal libc, and whatever other software you choose.
While it's easy to include only what's needed for a working system (BusyBox provides a working shell as well as the coreutils), you also need to get rid of the stuff you don't need, such as drivers for hardware you'll never have.
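A minimal buildroot defconfig for that kind of image looks roughly like this. A sketch only: the custom-config path is made up, option names are from memory, and BusyBox is included by default:

```
BR2_x86_64=y
BR2_LINUX_KERNEL=y
BR2_LINUX_KERNEL_USE_CUSTOM_CONFIG=y
BR2_LINUX_KERNEL_CUSTOM_CONFIG_FILE="board/mydevice/linux.config"  # hypothetical path
# Pack the rootfs as an xz-compressed initramfs cpio to save ROM space:
BR2_TARGET_ROOTFS_CPIO=y
BR2_TARGET_ROOTFS_CPIO_XZ=y
```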
Aside from that, you end up with a better-running kernel in general if you know what you're doing. I run Gentoo and have kept a working config that I tweak from time to time (especially on version upgrades).
I have configured custom Android kernel builds to enable more USB drivers, enable module support, and tweak various other things. For one tangible example of the result: I could plug in a USB Wi-Fi adapter and use it to simultaneously connect to another Wi-Fi network with the internal NIC while also sharing my own AP over USB. On an Android device of all things. I have also adjusted kernel builds for SBCs (like Pi clones) to get things working at all.
I have never seen any reason to configure a custom kernel for my own desktop/laptop systems. Default builds for the distros I've used have been fine for me; if I'm ever dissatisfied with anything it's the version number rather than the defconfig. The RHEL/Rocky kernel omits a few features I want (like btrfs) but I'd rather stick to other distros on personal systems than tweak a distro that isn't even meant for tweaking.
Years ago (2006-ish), I ran Gentoo on a 300 MHz ultra-low-power system I used as an IRC and web server. I gained LOTS of speed and lowered the power draw even further, while also enabling the hardware acceleration the board had for SSL encryption and video encoding. The whole thing would pull <5 watts and be super stable. It was well worth it.
But nowadays a Pi Zero would trounce it in both low power draw and speed with a stock kernel, and I don't really care enough to try to squeeze more out.
Customising the kernel often just means something works properly in a rare hardware configuration like the one you described. That's something people on common hardware (like an x86 desktop) can't easily see or appreciate, because the 'stock' kernel already works properly for them.
I do it because I can... I read the release notes on every update, and once you've configured a kernel for a particular machine you really don't need to touch the config, barring major changes like when the PATA and SATA drivers merged. Or, of course, when I'm adding a new piece of hardware.
I remove everything I don't need and compiling the kernel only takes a couple minutes. I use Gentoo and approach everything on my system the same way - remove the things I don't need to make it as minimal as possible.
Compiling your own kernel also makes it easier when you need to do a git bisect to determine which commit introduced a bug, so you can report it or try to fix it. I also included kernel patches in my builds years ago, but haven't needed to do that in a long time.
I used to compile a custom kernel for my phone to enable modules/drivers that weren't included by default by the maintainer.
It's not about performance for me, it's about control.
I haven't custom compiled a kernel in ages - does anyone still do this?
It used to be sorta-kinda-necessary back when memory sizes were measured in MB instead of GB. Kernels had to be under a certain size, the module system was a bit slow, memory was at a premium, hardware support was very spotty, etc. I remember applying some guy's patches to a 2.2 kernel to get full-duplex sound on the crappy sound card my Pentium 120 had (Linux has always had garbage audio support).
I think the last time I purposefully created a custom kernel was to enable some experimental scheduler code I hoped would give a performance boost. Was many years ago though.
These days you only do it if you want to learn the process or performance test a system. Or if you're running something like Gentoo - but even then you likely just use the 'default' configuration provided by Gentoo.
I just installed LFS once, which inevitably meant compiling the kernel. Many times, over and over, each time with a different config because some packages required it. For a dual-core Dell laptop from around 2010 it was surprisingly fast, actually. Still not enjoyable, or feasible for my normal systems.
I suppose the most tangible benefit I get out of it is embedding a custom initramfs into the kernel and using it as an EFI stub. And I usually disable module loading and compile in everything I need, which feels cleaner. Also I make sure to tune the settings for my CPU and GPU, enable various virtualization options, and force SELinux to always remain active, among other things.
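One plausible set of .config options behind a setup like that (a sketch: the initramfs path is hypothetical, and the exact SELinux options vary by kernel version):

```
CONFIG_EFI_STUB=y                         # image bootable directly by UEFI
CONFIG_INITRAMFS_SOURCE="initramfs.cpio"  # embed the initramfs (hypothetical path)
# CONFIG_MODULES is not set               # everything built in, no runtime loading
CONFIG_SECURITY_SELINUX=y
# CONFIG_SECURITY_SELINUX_DEVELOP is not set   # no permissive mode
```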
Well, I can still boot my system without an initramfs (although that isn't just due to the kernel config). Does that count?
Other than that, custom kernels free up a small amount of disk space that would otherwise be taken up by modules for driving things like CANbus, and taught me a whole lot about the existence of hardware and protocols that I will never use.
pain
I have multiple PCs. One runs a kernel that is mostly monolithic, meaning it has only one module (a third-party driver).
This makes a lot of things easier and faster: for example, it doesn't need to load an initial RAM disk (initrd) at boot, because everything it needs is built in and it can just mount the root FS and start init. All the crypto modules are also already present when I need them.
The drawback is that I can't unload a module and reload it with different parameters. If I had to change a module parameter, I'd have to change it in the bootloader config and reboot (or kexec).
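That's because parameters for built-in code move from modprobe options to the kernel command line, in module.param=value form. An illustrative bootloader fragment (the i915 parameter is just an example):

```
# /etc/default/grub; regenerate the config with grub-mkconfig/update-grub afterwards
GRUB_CMDLINE_LINUX_DEFAULT="quiet i915.enable_guc=2"
```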
LPIC-2 certification
A wee bit of knowledge and the wisdom to stop doing it.
As a Linux user of almost 30 years, compiling hundreds of kernels over that time has given me a great appreciation of prebuilt kernels, and a profound gratitude for those who package them up into convenient distros that work out of the box and let me get on with the rest of my life.
I think the learning experience is valid, though.
Absolutely! If you're doing it to learn something, by all means compile your own kernel. Every Linux user should do that at least once in my opinion. But once the learning is done, the novelty wears off fast and it just becomes tedious.
I used Linux for about a decade from the mid nineties then took a break for a few years. When I came back, every distro kernel was precompiled, it was glorious. There was never a day I said to myself "damn, I miss compiling a kernel".
Well said. I originally compiled my own kernels because I thought it was something you just did to use Linux. I also compiled hundreds of them, probably. Now it's stock kernel all the way. Not worth the effort and time and headache.