Why would you drop a 10GB file in /tmp and leave it there?
Every decent app I've used that processes large files also moves them to a final location when finished, in which case it makes sense not to use /tmp for those, because doing so would turn that final move operation into a copy (unless you happen to have /tmp on the same filesystem as the target location). That's why such applications usually let you configure the directory they use for their large temp files, or else create temp files in the target dir to begin with.
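To make that concrete, here's roughly what the "temp file in the target dir" pattern looks like. This is just an illustrative Python sketch with made-up names, not lifted from any particular app:

```python
import os
import tempfile

def write_large_file(chunks, dest_path):
    """Stage a big file next to its destination, then rename it into place.

    Because the temp file lives in the same directory (and therefore on the
    same filesystem) as dest_path, the final os.replace() is a cheap atomic
    rename instead of the cross-filesystem copy you'd need if the file had
    been staged in /tmp on a different filesystem.
    """
    dest_dir = os.path.dirname(os.path.abspath(dest_path))
    tmp = tempfile.NamedTemporaryFile(dir=dest_dir, delete=False)
    try:
        for chunk in chunks:              # chunks: an iterable of bytes
            tmp.write(chunk)
        tmp.flush()
        os.fsync(tmp.fileno())            # make sure the data hit the disk
        tmp.close()
        os.replace(tmp.name, dest_path)   # same-filesystem rename, not a copy
    except BaseException:
        tmp.close()
        os.unlink(tmp.name)               # don't leave a partial file behind
        raise
```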
For what it's worth, I changed my /tmp to a tmpfs years ago, even on a 16GB system, for performance and to minimize SSD wear. I think it was only ever restrictive once or twice, and nothing terrible happened; I just had to clear some space or choose a different dir for whatever I was doing.
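In case anyone wants to try it: the usual way is a single line in /etc/fstab (the size and options below are just examples, tune them to taste; many distros also ship a stock systemd tmp.mount unit you can enable instead):

```
# /etc/fstab -- example values only
tmpfs   /tmp   tmpfs   defaults,nosuid,nodev,size=2G,mode=1777   0 0
```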
It's worth reviewing the tmpfs docs to make sure you understand how that memory is actually managed. It's not like a simple RAM disk.
Nice. I'm going to go set up tmpfs rn
Why would the reason for dropping a file of X size matter? The point is that not all applications are "decent", and some will undoubtedly use /tmp because it "might be the most logical place" to any developer who isn't really up to date.
I don't see how reviewing the tmpfs docs helps in this scenario, if at all... we are talking about end users, your common Joe/Jane running their day-to-day applications, whatever those may be. I don't and never will expect developers to adhere to anything; plenty will just put out whatever.
It matters because it's the difference between a real-world situation and a fabricated scenario that you expect to be problematic but that doesn't generally happen.
All filesystems have limits, and /tmp in particular has traditionally been sized much smaller than the root or home filesystems, regardless of what backing store is used. This has been true for as long as Unix has existed, because making it large by default was (and is) usually a waste of disk space. Making it a tmpfs doesn't change much.
In my experience, the developers of such applications discover their mistake pretty quickly after their apps start seeing wide use, when their users complain about /tmp filling up and causing failures. The devs then fix their code. That's why we don't see it often in practice.
I mentioned it in case it helps you to understand that the memory is used more efficiently than you might think. Perhaps that could relieve some of your concern about using it on a 16GB system. Perhaps not. Just trying to help.
We are? I don't see them echoing your concerns. Perhaps that's because this is seldom a problem.
I humbly disagree. We don't live in that utopia.
I guess for a scenario to be real, everyone has to know exactly what's happening? As if they would all know what caused it and how to properly report it, when I don't expect most people to know their system, especially your average Joe/Jane, nor do I expect them to even troubleshoot the issue if something were to happen. It doesn't really invalidate the scenario at all.
A fabricated scenario is itself pretty redundant. :)