Experimental features are explicitly defined as requiring CVEs. You are supposed to run them in production; that's why they're available as experimental features and not on a development branch somewhere. You're just supposed to run them carefully and examine what they're doing, so they can move out of experimental into mainline.
And that requires knowledge of any vulnerabilities, which is why CVEs must be assigned to experimental features.
And I'm not sure why you think a DoS isn't a vulnerability; it's one of the most classic classes of CVE there is. A DoS is much, much more severe than a DDoS: a single crafted request can take the service down, whereas a DDoS needs the attacker to sustain enough traffic to overwhelm it.
If you do examine what it's doing, you will catch this as soon as an attacker exploits it and can disable it. Also, you should maybe not run your entire production with experimental features enabled. In a stable feature this would absolutely be a CVE, but this is marked experimental because it might not work right or might even crash, like here.
Correct, I agree you run it with an eye on it (which you should probably do anyway) instead of firing and forgetting (which, to nginx's credit, is usually fine since it's typically stable enough for that).
That said, nginx treats experimental as something you explicitly run in production: when they announced HTTP/3 as an experimental feature, they specifically said to run it in prod in an A/B setup.
https://www.nginx.com/blog/our-roadmap-quic-http-3-support-nginx/
That means you need to be large enough that A can pick up the slack if B shits the bed. The only impact would be that clients have to fall back to HTTP/2.
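For illustration, here's a minimal sketch of what the HTTP/3 "B" side of that split could look like. It uses the directive names from current mainline nginx (1.25+); the original experimental nginx-quic branch used slightly different syntax, and the domain, certificate paths, and root here are placeholders, not anything from the blog post.

```nginx
# Hypothetical "B" server block serving HTTP/3 (QUIC) alongside HTTP/2,
# so clients can fall back to HTTP/2 if the QUIC side misbehaves.
server {
    listen 443 quic reuseport;   # HTTP/3 over UDP
    listen 443 ssl;              # TCP listener for HTTP/1.1 and HTTP/2
    http2 on;

    server_name example.com;     # placeholder

    ssl_certificate     /etc/nginx/certs/example.com.crt;  # placeholder
    ssl_certificate_key /etc/nginx/certs/example.com.key;  # placeholder
    ssl_protocols       TLSv1.3; # QUIC requires TLS 1.3

    # Advertise HTTP/3 so browsers know they can switch to it
    add_header Alt-Svc 'h3=":443"; ma=86400';

    root /var/www/html;          # placeholder
}
```

The "A" side would be the same block minus the quic listener and the Alt-Svc header, so any traffic shifted there just keeps speaking HTTP/2.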