Ferk

joined 1 year ago
[–] [email protected] 0 points 3 months ago* (last edited 3 months ago)

That's horrible for muscle memory; every time I switch desks/keyboards I have to re-learn the position of the Home/End/Delete/PgUp/PgDn keys.

I got used to Ctrl-a / Ctrl-e and it became second nature; my hands don't have to fish for extra keys, to the point that it's annoying when a program doesn't support them. Some programs map Ctrl-a to "Select all", so in input fields where the selection is a single line I'd rather press Ctrl-a and then left/right to jump to the beginning/end than fish for Home/End, wherever they happen to be.
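For what it's worth, those shortcuts come from readline's emacs editing mode, which bash and other readline-based tools (python's REPL, psql, etc.) use by default. A minimal ~/.inputrc sketch, with the explicit bindings just restating the defaults:

# ~/.inputrc — read by bash and other readline-based programs
set editing-mode emacs          # already the default, stated explicitly
"\C-a": beginning-of-line       # default in emacs mode, shown for clarity
"\C-e": end-of-line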

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago)
  • Alt-delete deletes the whole word before the cursor
  • Alt-d deletes the whole word after the cursor
  • Ctrl-k deletes (kills) everything after the cursor

Whatever is deleted is stored in the "kill ring" and can be pasted (yanked) back with Ctrl-y (like someone else already mentioned); consecutive uses of Alt-delete/Alt-d add to the kill ring.

  • Alt-b / Alt-f moves one word backwards / forwards
  • Alt-t swaps (transposes) the current word with the previous one
  • Ctrl-_ undoes the last edit operation

All those bindings are the same as in emacs.
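Since these are readline bindings, you can check what they're actually mapped to on a given machine with bash's bind builtin (a quick sketch; the grep pattern is just illustrative):

# List the readline functions behind these shortcuts, in inputrc format
bind -p | grep -E 'kill-word|kill-line|yank|transpose-words|undo'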

Also, on an empty line Ctrl-d normally sends end-of-file, which is typically how you close an active shell session or any other interpreter you have open in the terminal for interactive input (on a non-empty line, readline binds Ctrl-d to deleting the character under the cursor).
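And since the EOF character is a terminal setting rather than a readline binding, stty shows which key is configured for it (sketch, assuming a grep that supports -o):

stty -a | grep -o 'eof = [^;]*'    # typically prints: eof = ^D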

[–] [email protected] 2 points 3 months ago* (last edited 3 months ago) (2 children)

That quote was in the context of simply separating values with newlines (and the list also included "your language’s split or lines function").

Technically you don't even need awk/sed/fzf; just a loop in bash calling read lets you process the input one line at a time:

while IFS= read -r line; do
   echo "$line" # or whatever other operation
done < whateverfile
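The same pattern works on the output of another command, not just a file (sketch; some_command is just a placeholder):

some_command | while IFS= read -r line; do
   printf '%s\n' "$line"   # process each line however you like
done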

Also, those man pages are a lot less complex than the documentation for C# or Nushell (or bash itself), although maybe working with C#/Nushell/bash is "easy when you’re already intuitively familiar with them". I think the point was precisely that this is easy in many different contexts because newlines are a relatively simple way to separate values.

[–] [email protected] 3 points 3 months ago* (last edited 3 months ago) (1 children)

For the record, you mention "the limitations of the number of inodes in Unix-like systems", but that is not a limit of Unix itself; it's a limit of the filesystem format (and it applies just as much to Windows and other systems).

So it depends more on the filesystem than on the OS. A FAT32 volume tops out at roughly 268 million files (about 2^28, limited by the cluster count), and a single FAT32 directory at about 65,535 entries (2^16), while both ext4 and NTFS can have up to 4,294,967,295 (2^32). With Btrfs it jumps to 18,446,744,073,709,551,615 (2^64).
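If you want to check the actual inode budget on a mounted filesystem, df can report inode counts; a quick sketch (the mount point and device name are just placeholders):

df -i /                                             # total, used and free inodes for the filesystem holding /
sudo tune2fs -l /dev/sda1 | grep -i 'inode count'   # ext4: the inode count is fixed at mkfs time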

[–] [email protected] 3 points 3 months ago* (last edited 3 months ago)

Yes... "metadata" is becoming an overused term. Not all data is metadata.

My first thought when I read the title was of those .nfo files used by Kodi/Jellyfin and other media centers to keep information about the media files.

An alternative name would be something like FADS (files as data structures).

[–] [email protected] 52 points 3 months ago* (last edited 3 months ago) (13 children)

Ironically, I think it's the younger ones who are pushing for Discord the most. Some projects opened a Discord precisely because it made them more attractive to young people.

The question is how to make an open source alternative more attractive.

[–] [email protected] 4 points 5 months ago* (last edited 5 months ago)

If the original footage is so bad that "nonsense that people assume is part of the actual show" "could plausibly be there", then the problem is not with the AI... it wouldn't be the first time I've been confused by the artifacts in a low-quality video.

[–] [email protected] 0 points 5 months ago* (last edited 5 months ago)

What C does depends on the platform, the CPU, etc.

If the result actually differs because compilers deviate across architectures, then what we can say is that the language/code is less portable. But I don't think that implies there are no denotational semantics.

And if the end result doesn't really differ (despite different instructions actually being executed on different architectures), then... well, aren't all compilers for all languages (including Rust) meant to use different instructions on different architectures, as appropriate, to give the same result?

> who’s to say what are the denotational semantics? Right? What is a ‘function’ in C? Well most C compilers translate it to an Assembly subroutine, but what if our target does not support labels, or subroutines?

Maybe I'm misunderstanding here, but my impression was that attempting to interpret the meaning of "what a function is in C" by looking at what instructions the compiler translates that to is more in line with an operational interpretation (you'd end up looking at sequential steps the machine executes one after the other), not a denotational one.

For a denotational interpretation of the meaning of that expression, shouldn't you look at the inputs/outputs of the "factorial" operation to understand its mathematical meaning? The denotational semantics should be the same in all cases if they are all denotationally equivalent (i.e. referentially transparent), even if they might not be operationally equivalent.
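To make that concrete with a sketch (in shell rather than C, and with made-up function names): these two definitions execute very different steps, but they compute the same input/output mapping, so denotationally they're the same function.

fact_rec() {   # recursive definition
    if [ "$1" -le 1 ]; then echo 1
    else echo $(( $1 * $(fact_rec $(( $1 - 1 ))) ))
    fi
}

fact_iter() {  # iterative definition
    local n=$1 acc=1 i
    for (( i = 2; i <= n; i++ )); do acc=$(( acc * i )); done
    echo "$acc"
}

fact_rec 5    # 120
fact_iter 5   # 120 — operationally different, denotationally equivalent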