this post was submitted on 23 Sep 2024
768 points (95.6% liked)

    submitted 3 weeks ago* (last edited 3 weeks ago) by [email protected] to c/[email protected]
     
    you are viewing a single comment's thread
    [–] [email protected] 5 points 3 weeks ago (2 children)

    The thing with journalctl is that it is a database. This means that searching and filtering can be fast and easy even for complex queries, but it can also stall under very high resource usage.

    [–] [email protected] 3 points 3 weeks ago

    But why?

    I just can't grasp why such elementary things need to be so fancied up.

    It's not like we don't have databases and use them for relevant data. But this isn't it.

    And databases with hundreds of millions of rows are faster than journalctl (in my experience on the same hardware).

    [–] [email protected] 9 points 3 weeks ago (1 children)

    Thing is that they could have preserved the textual nature and kept some sort of external metadata to provide the 'fanciness'. I have worked with other logging systems that did exactly that: the plaintext logs could still be consumed in an 'old fashioned' way, while a utility provided all the nice filtering, searching, and special event marking that journalctl offers, without compromising the existence of the plain text.

    [–] [email protected] -1 points 3 weeks ago (2 children)

    Plain text is slow and cumbersome for large amounts of logs. It would have had a decent performance penalty for little added value.

    If you like text, you can pipe journalctl.

    [–] [email protected] 1 points 3 weeks ago

    As I said, I've dealt with logging where the variable-length text was kept as plain text, with the external metadata/index as binary. You get the best of both worlds. Plus it's easier to have very predictable entry alignment: the messy variable-length data stays out of the binary file, so the binary file can use fixed record sizes. You may have some duplicate data (e.g. the text file has a textual timestamp duplicated by the binary metadata timestamp), but overall it's not too bad.

    [–] [email protected] 3 points 3 weeks ago (1 children)

    But if journalctl is slow, piping doesn't help.

    We have only one week of very sparse logs in it, yet queries take several seconds... grepping tens of gigabytes of logs can sometimes be faster. That is insane.

    [–] [email protected] 2 points 3 weeks ago

    Strange

    Probably worth asking on a technical