this post was submitted on 11 May 2025

Programmer Humor


Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code there's also Programming Horror.

[–] [email protected] 1 points 6 days ago

Name all your files *.

[–] [email protected] 6 points 6 days ago
[–] [email protected] 31 points 1 week ago (2 children)

Good luck with your 256 characters.

[–] [email protected] 32 points 1 week ago (1 children)

When you run out of characters, you simply create another 0 byte file to encode the rest.

Check mate, storage manufacturers.
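The trick above can be sketched as a toy round trip, assuming the payload is hex-encoded into the names of zero-byte files (the chunk size and naming scheme here are made up for illustration; 100 bytes of data becomes a ~209-character name, safely under the 255-byte limit):

```python
import os
import tempfile

# Sketch of the joke: "store" data in the names of empty files.
# NAME_MAX (~255 bytes on most Linux filesystems) caps each name,
# so the payload is split into chunks.
def store(data: bytes, directory: str, chunk: int = 100) -> None:
    for i in range(0, len(data), chunk):
        # fixed-width offset prefix keeps lexicographic order correct
        name = f"{i:08d}_" + data[i:i + chunk].hex()
        open(os.path.join(directory, name), "w").close()  # 0-byte file

def load(directory: str) -> bytes:
    parts = sorted(os.listdir(directory))
    return b"".join(bytes.fromhex(p.split("_", 1)[1]) for p in parts)

with tempfile.TemporaryDirectory() as d:
    payload = b"Check mate, storage manufacturers. " * 10
    store(payload, d)
    recovered = load(d)
print(recovered == payload)
```

Every file really is 0 bytes, so du stays happy; the MFT (or inode table), less so.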

[–] [email protected] 13 points 1 week ago* (last edited 1 week ago)

File name file system! Looks like we broke the universe! Wait, why is my MFT so large?!

[–] [email protected] 14 points 1 week ago* (last edited 1 week ago) (1 children)

255, generally, because of null termination. ZFS does 1023, the argument being not "people should have long filenames" but "Unicode exists"; ReiserFS did 4032, Reiser4 3976. Not that anyone uses Reiser any more. Also, Linux's PATH_MAX of 4096 still applies, though in the end that's just a POSIX define; I'm not sure whether that limit is actually enforced by open(2). The man page mentions ENAMETOOLONG but doesn't give a maximum.

It's not like filesystems couldn't support it; it's that FS people consider it pointless. ZFS does, in principle, support gigantic file metadata, but using it would break use cases like having a separate vdev for your volume's metadata. What's the point of having (effectively) separate index drives when your data drives are empty?
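The 255-byte component limit is easy to observe from userspace; a minimal sketch, assuming an ext4/tmpfs-style NAME_MAX of 255:

```python
import errno
import os
import tempfile

# A 255-byte name fits; one more byte gets ENAMETOOLONG from the kernel.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "a" * 255), "w").close()  # at the limit: fine
    try:
        open(os.path.join(d, "a" * 256), "w").close()
        result = "created"
    except OSError as e:
        result = errno.errorcode[e.errno]  # ENAMETOOLONG on typical Linux filesystems
print(result)
```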

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago) (1 children)

...Just asking, just asking: Why is the default FILENAME_MAX on Linux/glibc 4096?

[–] [email protected] 2 points 6 days ago (1 children)

Because PATH_MAX is? Also because it's a 4k page.

FILENAME_MAX is not safe to use for buffer allocations, btw; it could be INT_MAX.
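Rather than trusting compile-time constants, the actual per-filesystem limits can be queried at runtime; a sketch using pathconf:

```python
import os

# pathconf asks the filesystem itself, so the answer can differ per
# mount point; FILENAME_MAX and PATH_MAX are just compile-time guesses.
name_max = os.pathconf("/", "PC_NAME_MAX")  # max bytes in one path component
path_max = os.pathconf("/", "PC_PATH_MAX")  # max bytes in a relative path
print(name_max, path_max)
```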

[–] [email protected] 1 points 5 days ago

Thanks! Got an answer and not 200 downvotes. This is why I love Lemm-Lemm.

[–] [email protected] 16 points 1 week ago (1 children)

I remember the first time I ran out of inodes: it was very confusing. You just start getting ENOSPC, but du still says you have half the disk space available.
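The confusing part is that df looks at blocks while the failure is inodes; statvfs exposes both counters, as in this quick sketch:

```python
import os

# ENOSPC can mean "no blocks left" OR "no inodes left"; check both.
st = os.statvfs("/")
blocks_free = st.f_bavail / st.f_blocks if st.f_blocks else 0.0
inodes_free = st.f_favail / st.f_files if st.f_files else 1.0  # some FSes (e.g. btrfs) report 0 inodes
print(f"blocks free: {blocks_free:.0%}, inodes free: {inodes_free:.0%}")
```

`df -i` shows the same inode numbers if you'd rather not write code at 3 a.m.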

[–] [email protected] 4 points 1 week ago

Ah memories. That was an interesting lesson.

[–] [email protected] 1 points 1 week ago

Let me guess, over 30 years old.

[–] [email protected] 47 points 1 week ago (5 children)

You want real infinite storage space? Here you go: https://github.com/philipl/pifs
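The πfs idea in miniature: don't store the bytes, store where they occur in a sequence everyone already "has". A toy sketch, with a hardcoded prefix of pi standing in for the infinite expansion:

```python
# Toy sketch of the pifs idea. PI_DIGITS is just the first 50 digits,
# hardcoded -- an assumption for illustration, not how pifs works internally.
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

def pifs_store(data: str) -> tuple[int, int]:
    offset = PI_DIGITS.find(data)
    if offset < 0:
        raise ValueError("need more digits of pi")
    return offset, len(data)  # the "metadata" that replaces the data

def pifs_load(offset: int, length: int) -> str:
    return PI_DIGITS[offset:offset + length]

meta = pifs_store("5358")
print(meta, pifs_load(*meta))
```

The catch, of course: the offset usually takes more bits to write down than the data it points at.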

[–] [email protected] 1 points 6 days ago

Easy, just replace each byte of data with multiple bytes of metadata. I see no problem here

[–] [email protected] 4 points 6 days ago

Finally someone uses the fact that compute time is so much cheaper than storage!

[–] [email protected] 4 points 1 week ago

Breakthrough vibes

[–] [email protected] 8 points 1 week ago* (last edited 1 week ago)

That's awesome! I'm just migrating all my data to πfs. Finally mathematics is put to proper use!

[–] [email protected] 59 points 1 week ago (11 children)

I had a manager once tell me during a casual conversation with complete sincerity that one day with advancements in compression algorithms we could get any file down to a single bit. I really didn't know what to say to that level of absurdity. I just nodded.

[–] [email protected] 2 points 3 days ago

Maybe they also believe themselves to be the father of computing.

[–] [email protected] 6 points 6 days ago (1 children)

Well, he's not wrong. The decompression would be a problem, though.

[–] [email protected] 4 points 6 days ago

Yeah with lossy compression the future is today!

[–] [email protected] 1 points 1 week ago

How to tell someone you don't know how compression algorithms work, without telling them directly.

[–] [email protected] 11 points 1 week ago* (last edited 1 week ago) (1 children)

You can give me any file, and I can create a compression algorithm that reduces it to 1 bit. (*)

spoiler(*) No guarantees about the size of the decompression algorithm or its efficacy on other files
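That footnote can be made literal; a sketch where the "algorithm" simply hardcodes one file (TARGET is an arbitrary stand-in):

```python
# A "compressor" specialized to exactly one input: that file becomes a
# single marker byte, everything else grows by one byte (the pigeonhole tax).
TARGET = b"the one file this algorithm was built for"

def compress(data: bytes) -> bytes:
    return b"\x01" if data == TARGET else b"\x00" + data

def decompress(blob: bytes) -> bytes:
    return TARGET if blob == b"\x01" else blob[1:]

print(len(compress(TARGET)), len(compress(b"anything else")))
```

Round-trips losslessly for every input, yet "compresses" exactly one of them, which is the pigeonhole principle in a nutshell: no lossless scheme can shrink all files.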

[–] [email protected] 1 points 5 days ago

Here's a simple command to turn any file into a single b!

echo a > $file_name
[–] [email protected] 3 points 1 week ago* (last edited 1 week ago) (2 children)

It's an interesting question, though. How far CAN you compress? At some point you've extracted every information contained and increased the density to a maximum amount - but what is that density?

[–] [email protected] 2 points 6 days ago* (last edited 6 days ago)

This is a really good question!

I believe the general answer is, until the compressed file is indistinguishable from randomness. At that point there is no more redundant information left to compress. Like you said, the 'information content' of a message can be measured.

(Note that there are ways to get a file to look like randomness that don't compress it)
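The "indistinguishable from randomness" intuition can be checked empirically; a sketch estimating Shannon entropy in bits per byte before and after compression:

```python
import math
import zlib
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Shannon entropy of the byte distribution (max 8.0 bits/byte)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

redundant = b"abab" * 4096          # two symbols, equally likely: exactly 1 bit/byte
packed = zlib.compress(redundant)   # output bytes look much closer to random
print(f"{entropy_bits_per_byte(redundant):.2f} {entropy_bits_per_byte(packed):.2f}")
```

(This simple estimator only sees single-byte frequencies, so it misses the "abab" repetition itself; that's why real compressors model longer contexts.)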

[–] [email protected] 3 points 1 week ago

I think by the time we reach some future extreme of data density, it will be in a method of storage beyond our current understanding. It will be measured in coordinates or atoms or fractions of a dimension that we nullify.

[–] [email protected] 28 points 1 week ago* (last edited 1 week ago)

That's the kind of manager who also tells you that you just lack creativity and vision if you tell them it's not possible. They also post regularly on LinkedIn.

[–] [email protected] 9 points 1 week ago

You can have everything in a single bit, if the decompressor includes the whole universe.

[–] [email protected] 8 points 1 week ago

Send him your work: 1 (or 0 ofc)

[–] [email protected] 5 points 1 week ago

Just make a file system that maps each file name to 2 files. The 0 file and the 1 file.

Now with just a filename and 1 bit, you can have any file! The file is just 1 bit. It's the filesystem that needs more than that.

[–] [email protected] 7 points 1 week ago

That’s precisely when you bet on it.
