> I'd like to see a revival of awk. It's less easy to scale up, so there's very little risk that starting a project with a little bit of awk results in the next person inheriting a multi-thousand line awk codebase. Instead, you get an early-ish rewrite into a more scalable and maintainable language.
Taco Bell programming is the way to go.
This is the thinking I use when putting together prototypes. You can do a lot with awk, sed, join, xargs, parallel (GNU), etc. Abstraction takes real effort in a bash script, so the code stays compact. I've built many data engineering/ML systems with this technique. Those command-line tools are SO well debugged and have such reasonable error behavior that you don't have to worry about the complexities of exception handling, etc.
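As a concrete (and hypothetical) illustration of the style: counting the top users in a tab-separated event log, first with a single awk pass, then fanned out across compressed files with GNU parallel. The file names and column layout are made up; the tools and flags are standard.

    # events.tsv: user_id assumed to be in column 2 (tab-separated)
    awk -F'\t' '{count[$2]++} END {for (u in count) print u, count[u]}' events.tsv \
      | sort -k2,2nr | head -20

    # same idea across many compressed logs: decompress in parallel,
    # pull out the column, then let sort/uniq do the counting
    parallel -j8 'zcat {} | cut -f2' ::: logs/*.tsv.gz \
      | sort | uniq -c | sort -rn | head -20

Each stage is a separately debugged tool; the pipeline is the whole program.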
The problem Perl and the like have to contend with is that they have to compete with Python. If a dependency needs to be installed to do something, you have to convince me that whatever language and script is worthwhile to maintain over Python, which is the next de facto thing people reach for after bash. The nice thing about awk is that it's baked in, so it has an advantage. You can convince me awk is better because I don't have to deal with dependency issues, but it's a harder sell for anything I have to install before I can use it.
And it's not even that Python is a great language, or that it has a great package manager or install situation. It doesn't have any of those things. It does, however, have the likelihood of the next monkey after me understanding it, which is unfortunately more than can be said for Perl.
> The problem Perl and the like have to contend with is that they have to compete with Python. If a dependency needs to be installed to do something, you have to convince me that whatever language and script is worthwhile to maintain over Python, which is the next de facto thing people reach for after bash
A historical note: Perl was that language before Python was, and it lost that status to Python through direct competition. For a while, if you had to do anything larger than a shell script but not big enough to need a "serious" C++ or Java codebase, Perl was the natural choice, and nobody would argue with it (unless they were arguing for shell or C). That's why Perl 5 is installed on so many systems by default.
When I first started using Python, I felt a little scared for liking it too much. I thought I should be smart enough to prefer Perl. Then Eric Raymond's article about Python[1] came out in Linux Journal in 2000, and I felt massive relief that a smart person (or someone accomplished enough that their opinions got published in Linux Journal) felt the same way I did. But I still made a couple more serious attempts to force Perl into my brain because I thought Perl was going to be the big dog forever and every professional would need to know it.
But Perl was doomed: if Python didn't exist, it would have lost to Ruby, and if Ruby didn't exist, it would have eventually lost to virtually any language that popped up in the same niche.
Perl is installed by default on most Unix systems, FreeBSD being the exception; Python isn't. And although Python is the more popular language, if we're comparing the probability of someone already having the interpreter installed, it's greater for Perl, even if people aren't aware they have it.
Though one could probably never rely on an assumed install of Python anyway, because one couldn't assume a specific version. I'm guessing this is less of a problem for Perl, since it's been frozen at some version of 5 for the past 25-30 years, correct?
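The version point is easy to make concrete. A hedged sketch (app.log and the 3.8 cutoff are arbitrary stand-ins; the binary names are the common ones, not guaranteed):

    # Perl 5: stable interface, and /usr/bin/perl is present by default
    # on most Unix systems, so this line runs unchanged almost anywhere
    perl -ne 'print if /ERROR/' app.log

    # Python: first locate an interpreter (the binary name varies),
    # then verify the version is new enough; neither step is guaranteed
    PY=$(command -v python3 || command -v python) \
      || { echo "no python found" >&2; exit 1; }
    "$PY" -c 'import sys; sys.exit(0 if sys.version_info >= (3, 8) else 1)' \
      || { echo "python too old" >&2; exit 1; }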