Haha what a sad hateful bitter fuck. May he find out more
The issue is not only complexity, though it does play a role. You can also run into trouble with pure text parsing, especially when whitespace is involved. The IP thing is a classic example in my opinion, and while whitespace might not be a problem there (it's more common with filenames), the queries you find online are no less complex.
Normal CLI output is often meant to be consumed by humans, so the data presentation requirements are different. Then you find out that an assumption you made isn’t true (e.g. due to LANG indicating a non-English language) and suddenly your matching rules don’t fit.
There are just a lot of pitfalls that can make things go subtly wrong, which is why parsing general CLI output that isn't intended to be parsed is often advised against. That doesn't mean it will always go wrong.
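To make it concrete, here's a rough sketch of the two approaches (eth0 is just a stand-in, and the text-parsing line is the kind of snippet you find online):

```sh
# fragile: depends on the exact layout, column positions and wording
# of ip's human-oriented output
ip addr show eth0 | grep 'inet ' | awk '{print $2}' | cut -d/ -f1

# sturdier: ask for JSON and extract the field by name
ip -j addr show eth0 \
  | jq -r '.[0].addr_info[] | select(.family == "inet") | .local'
```

The JSON variant can still fail, but it fails loudly on a wrong field name instead of silently changing meaning when the output formatting does.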
Regarding Python, I think it has a place when you do what I'd call data set processing, while what I'm talking about is shell plumbing. They can both use JSON, but the tools are probably not the same.
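As a sketch of the distinction (the route query uses standard iproute2 JSON; records.json and the category field are made up for illustration):

```sh
# shell plumbing: pull one value out and pass it down the pipe
ip -j route | jq -r '.[] | select(.dst == "default") | .dev'

# data set processing: once there's real logic, I'd reach for Python
python3 - <<'EOF'
import json
from collections import Counter

# records.json and the "category" field are hypothetical
with open("records.json") as f:
    records = json.load(f)

# e.g. the three most common categories in the data set
print(Counter(r["category"] for r in records).most_common(3))
EOF
```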
It's a cool shell, I like it a lot more since I found out you can use ? to mark a field as optional.
It's true that compared to the other utilities, it's rather new; the first release was almost 13 years ago. awk, which I think is the closest comparison, on the other hand turns 50 in 2027… though new awk is only 40.
One might wonder whether, at those file sizes, working with text still makes sense. I think there's a good reason journald uses a binary format for storing all that data. And "real" databases tend to scale better than text files in a filesystem as well, even though a filesystem is a database.
Most people won’t have to deal with that amount of text based data ever.
You're welcome! And actually, even this approach can yield surprising results… As in, have you heard of deprecated IPv6 addresses before? Well, I hadn't, until I realized my interface now had one (it actually didn't anymore when I wrote the post; I ran the jq command on old output, not in a pipe). That made my DynDNS script stop working, because there was suddenly a line break in the request URL, which curl rightfully failed on.
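For the curious, a rough sketch of the failure and a guard against it (eth0 and the update URL are made up; the deprecated field is what I'd expect in iproute2's -j output):

```sh
# one line per global IPv6 address; a lingering deprecated address
# means two lines, i.e. a line break inside the URL below
ip -j -6 addr show dev eth0 \
  | jq -r '.[0].addr_info[] | select(.scope == "global") | .local'

# guard: skip deprecated addresses and take at most one result
addr=$(ip -j -6 addr show dev eth0 \
  | jq -r '.[0].addr_info[]
           | select(.scope == "global" and (.deprecated != true))
           | .local' \
  | head -n 1)
curl "https://dyndns.example.com/update?ip=${addr}"
```

The head -n 1 is the belt-and-suspenders part: even if some other surprising flag shows up, curl only ever sees one line.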
Edit: also, despite what the title of the post says, I'm not an authoritative expert on the matter, and you can use whatever works for you. I see these posts more as a basis for discussions like this one than as definitive guides on how to do something the correct way.
Maybe I should have written it differently: I think people are more willing to install another tool than a whole new shell. The latter would also require you to fundamentally change your scripts, because that's most often where you need that carefully extracted data. Otherwise, manually copying and pasting would suffice.
I was thinking about writing a post about nu as well. But I wasn't sure whether that appeals to many, or whether it should be part of something like a collection of stuff I like, together with fish as an example of a more traditional shell.
Thanks, I never used it and had forgotten about it until now.
Linux also has some shells working on structured data (e.g. nu and elvish). But those don't really help you on machines that don't have them installed.
In general, I think PowerShell's approach is better than the traditional one, but I really dislike the standard commands. It always feels weird using it.
From a quick glance, this is pacman with a YAML file instead of a shell script and PKGINFO (the latter was introduced for the same reason you're doing it your way in the first place). The carcinization of package managers.
For me, the factors were:
And from what I hear, the main selling point of NixOS is how easy it is to reinstall.
Well, that isn’t the first thing I’d mention, but whatever. Use whatever you’re comfortable with.
Also Mozilla uses it as far as I know
Did you try Nix (on Arch) or NixOS? For the latter, https://nixos.org/manual/nixos/stable/#sec-declarative-package-mgmt explains the basic installation.
That's another win for the oat flakes: they don't drive your blood sugar too high, but will keep it up for a very long time (no carb crash), plus they contain a load of micronutrients. Even their protein percentage is quite high - higher than chickpeas', for example.
Long story short, I don’t understand why people here are mad that the US government will no longer subsidize unhealthy and overpriced garbage. I know this probably isn’t where it’s going to stop, but at least this particular instance makes sense I guess.
I'm not against food stamps being able to buy sweets. The issue I have with a lot of breakfast cereals is that they too are in fact sweets, but people see them as a proper meal. They're not. Occasional sweets are fine. Regularly eating a full meal consisting only of sweets is not.
I haven’t had sugar cereal in a decade. I don’t know how you could ever prefer them over oat flakes
I really like fish. It's just so pragmatic, I don't know how to describe it differently. No groundbreaking concepts (unlike nu or elvish), but the tools you need are right there, easily accessible, with syntax that doesn't make me scratch my head (unlike bash).
Never used it, the drugs just showed up at my door without me doing anything
Nah, they just had spare wafer space and wanted to fill it up with something, so they made up these instructions. No use beyond that has ever been found for them
Honestly at this point that’d be a detriment