I will take OpenRC to my grave
I’m more of a runit guy, but I started using Alpine recently, and I have to say, openrc is also pretty nice!
cool
Unit files, sockets and systemctl
Stop it Patrick, you’re scaring him!
What scared me about it is this kind of shit.
What “scares” me the most is the journal… for some reason it takes too long to get specific unit logs, and should anything break down in it, there is no way for me to fix it. Like logging has been solved forever, and I prefer specific unit logs to the abomination of journalctl.
But like unit files are everywhere, and systemctl at its core is a nice cmd utility.
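For reference, this is roughly what pulling a single unit's logs looks like; nginx.service below is just a stand-in unit name:

```
# logs for a single unit, limited to the last week
journalctl -u nginx.service --since "1 week ago" --no-pager

# or only the entries from the current boot
journalctl -u nginx.service -b
```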
Just like the Windows event log.
The thing with journalctl is that it is a database. This means that searching and finding things can be fast and easy even in complex cases, but it can also stall when resource usage is very high.
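Roughly speaking, entries are stored with indexed fields, so you can query on those fields directly instead of grepping text; a couple of examples (the unit name is illustrative):

```
# list every value a field has taken, e.g. every unit that has logged
journalctl -F _SYSTEMD_UNIT

# match directly on stored fields; multiple matches are combined
journalctl _SYSTEMD_UNIT=nginx.service --since "1 week ago"
```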
But why?
I just can’t grasp why such elementary things need to be so fancied up.
It’s not like we don’t have databases and use them for relevant data. But this isn’t it.
And databases with hundreds of millions of rows are faster than journalctl (in my experience, on the same hardware).
Thing is, they could have preserved the textual nature and had some sort of external metadata to facilitate the ‘fanciness’. I have worked with other logging systems that did that: you could consume the plaintext logs in an ‘old-fashioned’ way, while a utility handled all the nice filtering, searching, and special event marking that journalctl provides, without compromising the existence of the plain text.
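A minimal sketch of that idea, assuming a single shared plain-text log whose third field is the unit name, plus a sidecar index of byte offsets (all paths, names, and the line layout here are made up for illustration):

```
LOG=/var/log/all-units.log

# build a sidecar index: "<unit> <byte offset of line start>" for every line
awk '{ print $3, off; off += length($0) + 1 }' "$LOG" > "$LOG.idx"

# fancy path: use the index to seek straight to one unit's entries
grep '^nginx\.service ' "$LOG.idx" | while read -r _ off; do
    tail -c +"$((off + 1))" "$LOG" | head -n 1
done

# old-fashioned path still works, because the log is still plain text
grep 'nginx\.service' "$LOG"
```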
Plain text is slow and cumbersome for large amounts of logs. It would have had a decent performance penalty for little added value.
If you like text you can pipe journalctl
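e.g. something along these lines (the unit name and output path are illustrative):

```
# dump a unit's journal to an ordinary text file
journalctl -u nginx.service -o short-iso --no-pager > /tmp/nginx.log

# or keep it flowing through plain text tools
journalctl -u nginx.service -f | grep -i error
```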
But if journalctl is slow, piping is not helping.
We have only one week of very sparse logs in it, yet it takes several seconds… grepping tens of gigabytes of logs can sometimes be faster. That is insane.
Strange
Probably worth asking on a technical forum.
science progresses one funeral at a time
Planck’s principle
There is nothing scientific about systemd