this post was submitted on 15 Jan 2025
84 points (97.7% liked)

This may make some people pull their hair out, but I’d love to hear some arguments. I’ve had the impression that people really don’t like bash, not from here, but just from people I’ve worked with.

There was a task at work where we wanted something that runs on a regular basis and doesn't do anything complex beyond reading from the database and sending the output to some web API. Pretty common these days.

I can’t think of a simpler scripting language to use than bash. Here are my reasons:

  • Reading from the environment is easy, and so is falling back to some value; just do ${VAR:-fallback}; no need to write another if-statement to check for nullity. Wanna check if a variable’s set to something expected? if [[ <test goes here> ]]; then <handle>; fi
  • Reading from arguments is also straightforward; instead of import sys; sys.argv[1] in Python, you just do $1.
  • Sending a file via HTTP as part of an application/x-www-form-urlencoded request is super easy with curl. In most programming languages, you'd have to manually open the file, read it into bytes, and put it into the request for whatever HTTP library you imported. curl already does all that.
  • Need to read from a curl response and it’s JSON? Reach for jq.
  • Instead of having to set up a connection object/instance to your database, give sqlite, psql, duckdb or whichever CLI db client a connection string along with your query and be on your way (there's a small combined sketch right after this list).
  • Shipping is… fairly easy? Especially if Docker is common in your infrastructure: pull Ubuntu, Debian, or Alpine, install your dependencies through the package manager, and you're good to go. As long as you stay within Linux so you don't hit differences in bash and the core utilities between OSes (looking at you, macOS), avoid doing anything too crazy, and bring in the external tools you call as explicit dependencies, it should be fairly portable.
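Put together, the kind of script I'm talking about looks roughly like this (just a sketch; the table, endpoint and variable names are made up):

    #!/usr/bin/env bash
    set -euo pipefail

    # config from the environment, with a fallback
    api_url="${API_URL:-https://example.com/api}"

    # first positional argument, with a usage message if it's missing
    report="${1:?usage: $0 <report-name>}"

    # query the db through its CLI client; -json keeps the rows structured
    payload="$(sqlite3 -json ./app.db "SELECT * FROM reports WHERE name = '$report';")"

    # ship it off with curl and pull one field out of the JSON response with jq
    curl -sS --data-urlencode "data=$payload" "$api_url" | jq -r '.status'

(Interpolating $report straight into the SQL is only fine because it's a value I control; anything user-supplied would need proper quoting.)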

Sure, there can be security concerns, but you'd still have to deal with the same problems with your Pythons, your Rubies, etc.

For most bash gotchas, shellcheck does a great job at warning you about them and telling you how to address them.
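For example, the classic unquoted-variable mistake gets flagged right away as SC2086, along with the suggested fix:

    file="monthly report.txt"
    rm $file      # SC2086: "Double quote to prevent globbing and word splitting"
                  # (this would try to delete "monthly" and "report.txt")
    rm "$file"    # what shellcheck tells you to write instead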

There are probably a bunch of other considerations I can't think of off the top of my head, though I've dealt with plenty of them before.

So what’s the dealeo? What am I missing that may not actually be addressable?

33 comments
[–] [email protected] 7 points 2 weeks ago (4 children)

In your own description you've already added a bunch of considerations: requirements to follow specific practices, to have specific knowledge, and a ton of environmental requirements.

For simple scripts or duct-tape scheduling, all of that is fine. For anything else, I would be at least mindful, if not skeptical, of bash being a good tool for the job.

Bash is installed on all Linux systems. I would not be very concerned about some dependencies, like sqlite, if that is what you're using, but I would be about others, like jq, which is an additional tool and requirement; you or others will eventually struggle with diffuse dependencies or end up managing a managed environment.

Even if you query sqlite or whatever database through its command line client, you have to be aware that getting a value into bash that way means you lose a lot of type and structure information. That's fine if you get only one or very few values, but I would have strong aversions once it goes beyond that.
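For example (sqlite3 here, table name made up): one scalar is fine, but anything wider is just flat text you have to split yourself, and a NULL already looks the same as an empty string:

    # a single value: easy enough
    count="$(sqlite3 app.db 'SELECT count(*) FROM events;')"

    # two columns: one line of text with a separator you picked, nothing more
    row="$(sqlite3 -separator '|' app.db 'SELECT count(*), max(created_at) FROM events;')"
    count="${row%%|*}"      # text before the first |
    last_seen="${row#*|}"   # text after it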

You seem to be familiar with Bash syntax. But others may not be. It's not a simple syntax to get into and intuitively understand without mistakes. There are too many alternative ways of if-ing and comparing values. It ends up as magic. In your example, if you read the code, you may guess that :- means fallback, but it's not necessarily obvious. And certainly not other magic flags and operators.
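To illustrate, all of these test or compare values, and which one is "right" is far from obvious to someone new to the language:

    if [ "$x" = "foo" ]; then echo classic; fi      # POSIX test command
    if [[ $x == foo* ]]; then echo pattern; fi      # bash keyword, does glob matching
    if (( x > 3 )); then echo arithmetic; fi        # arithmetic context, numbers only
    echo "${x:-fallback}"                           # the :- "use a default" expansion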


As an anecdote, I guess the most complex thing I have done with Bash was scripting a deployment and starting test-runs onto a distributed system (and I think collecting results? I don't remember). Bash was available and copying and starting processes via ssh was simple and robust enough. Notably, the scope and env requirements were very limited.

[–] [email protected] 8 points 2 weeks ago (1 children)

You seem to be familiar with Bash syntax. But others may not be.

If by this you mean that the Bash syntax for doing certain things is horrible and that it could be expressed more clearly in something else, then yes, I agree, otherwise I'm not sure this is a problem on the same level as others.

OP could pick any language and have the same problem. Except maybe Python, but even that strays into symbolic line noise once a project gets big enough.

Either way, comments can be helpful when strange constructs are used. There are comments in my own Bash scripts that say what a line is doing rather than just why precisely because of this.

But I think the main issue with Bash (and maybe other shells) is that it's parsed and run line by line. There's nothing like a full-script syntax check before the script runs, which most other languages provide as a bare minimum.
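To illustrate with a made-up script: everything before the broken construct still executes, and bash only complains once it actually tries to parse it:

    #!/usr/bin/env bash
    echo "step 1"        # runs
    echo "step 2"        # runs
    if [ -f /tmp/flag    # broken: no then/fi, so bash errors out at end of file
    echo "step 3"        # and nothing from here on ever runs

(bash -n script.sh would catch that particular parse error without running anything, but it won't flag things like mistyped variable names.)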

[–] [email protected] 13 points 2 weeks ago (5 children)

At the level you're describing it's fine. Preferably use shellcheck and set -euo pipefail to make it more normal.
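i.e. start every script with something like this (the ERR trap is optional, just one way to get a hint of where it died):

    #!/usr/bin/env bash
    set -euo pipefail   # exit on errors, on unset variables, and when any part of a pipeline fails
    trap 'echo "failed around line $LINENO" >&2' ERR   # optional: report roughly where it blew up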

But once I have any of:

  • nested control structures, or
  • multiple functions, or
  • a need to handle anything other than simple strings that other programs manipulate (including bash arrays or IFS), or
  • bash scoping, or
  • a need to produce my own formatted logs at different log levels,

I'm on to Python or something else. It's better to get off bash before you have to juggle complexity in it.

[–] [email protected] 1 points 2 weeks ago (1 children)

Can I slap a decorator on a Bash function? I love my @retry(...) (via tenacity, even if it's a bit wordy).
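(The closest bash gets, as far as I know, is a wrapper function; something like this hypothetical retry helper, sketch only:)

    # retry <attempts> <delay-seconds> <command...>
    retry() {
        local attempts=$1 delay=$2 i
        shift 2
        for ((i = 1; i <= attempts; i++)); do
            "$@" && return 0
            (( i < attempts )) && sleep "$delay"
        done
        echo "all $attempts attempts failed: $*" >&2
        return 1
    }

    # roughly what @retry(...) buys you, minus all the niceties:
    retry 3 5 curl -fsS https://example.com/health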

[–] [email protected] 3 points 2 weeks ago (6 children)

May I introduce you to rust script? Basically a wrapper to run rust scripts right from the command line. They can access the rust stdlib, crates, and so on, plus do error handling and much more.

Anti Commercial-AI license

[–] [email protected] 1 points 2 weeks ago (5 children)

How easily can you start parsing arguments and read env vars? Do people import clap and such to provide support for those sorts of needs?

[–] [email protected] 3 points 2 weeks ago

Basically a wrapper to run rust scripts right from the command line.

Isn't that just Python? :v

[–] [email protected] 7 points 2 weeks ago

It's ok for very small scripts that are easy to reason through. I've used it extensively in CI/CD, just because we were using Jenkins for that and it was the path of least resistance. I do not like the language though.

[–] [email protected] 42 points 2 weeks ago (1 children)

I just don't think bash is good for maintaining the code, debugging, growing the code over time, adding automated tests, or exception handling

[–] [email protected] 9 points 2 weeks ago (2 children)

If you need anything that complex, and it's critical for, say, customers or people doing things directly for customers, you probably shouldn't use bash. Anything that needs to grow? Definitely not bash. I'm not saying bash is what you should use if you want it to grow into, say, a web server, but it's good enough for small tasks that you don't expect to grow in complexity.

[–] [email protected] 9 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

small tasks that you don’t expect to grow in complexity

At a conference I once heard: "There is no such thing as a temporary solution and there is no such thing as a proof of concept." It's an exaggeration of course, but there's some truth to it: there's a high chance that your "small change" or PoC will be used for the next 20 years, so write it as robustly and resiliently as possible and document it. In other words, everything will be extended, everything will be maintained, everything will change hands.

So to your point - is bash production ready? Well, depends. Do you have it in git? Is it part of some automation pipeline? Is it properly documented? Do you by chance have some tests for it? Then yes, it's production ready.

If you just "write this quick script and run it in cron" then no. Because in 10 years people will pull their hair screaming "what the hell is hapenning?!"

Edit: or worse, they'll scream it during the next incident that'll happen at 2 AM on Sunday

[–] [email protected] -1 points 2 weeks ago

I find it disingenuous to blame the choice of bash when the goalposts get moved. Solutions can be temporary as long as the goalposts aren't being moved; once they are, you have to re-evaluate whether your solution is still sufficient for the new needs. If literally everything under the sun and beyond needs to be written robustly enough to accommodate moving goalposts, then by that definition nothing will ever be sufficient, unless we reach the point where a human request in plain words can be compiled straight into machine instructions that do exactly what was asked, with no loss of intention.

That said, as engineers, I believe it's our responsibility to identify and highlight the severe failure cases of a solution and its management, and it is up to the stakeholders to accept those risks. If you need something running at 2 AM, and a failure of that process would require human intervention, then maybe you should consider not running it at 2 AM, or pick a language with more guardrails.

[–] [email protected] 24 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

it’s (bash) good enough for small tasks that you don’t expect to grow in complexity.

I don't think you'll get a lot of disagreement on that here. As mentioned elsewhere, my team prefers bash for simple use cases (and as their bash-hating boss, I support and agree with how and when they use bash).

But a bunch of us draw the line at database access.

Any database is going to throw a lot of weird shit at the bash script.

So, to me, a bash script has grown to unacceptable complexity on the first day that it accesses a database.

[–] [email protected] 4 points 2 weeks ago (1 children)

We have dozens of bash scripts running table cleanups and maintenance tasks on the db. In the last 20 years these scripts have been more stable than the database itself (Oracle -> MySQL -> Postgres).

But in all fairness, they just call the CLI client with the appropriate SQL and check the response code, generating a trap on failure.
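Roughly this shape, in case anyone's curious (table name and the alert command are made up; psql just as the example client):

    #!/usr/bin/env bash
    set -euo pipefail

    sql='DELETE FROM sessions WHERE expires_at < now();'

    if ! psql "$DATABASE_URL" --quiet --no-psqlrc -c "$sql"; then
        # the client's exit code is the only thing we act on
        send_alert "session cleanup failed on $(hostname)"   # hypothetical alert/trap helper
        exit 1
    fi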

[–] [email protected] 3 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

That's a great point.

I post long enough responses already, so I didn't want to get into resilience planning, but your example is a great highlight that there are rarely hard and fast rules about what will work.

There certainly are use cases for bash calling database code that make sense.

I don't actually worry much when it's something where the first response to any issue is to run it again in 15 minutes.

It's cases where we might need to do forensic analysis that bash plus SQL has caused me headaches.

[–] [email protected] 40 points 2 weeks ago

"Use the best tool for the job, that the person doing the job is best at." That's my approach.

I will use bash or Python or Dart or whatever the project uses.

[–] [email protected] 12 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

A few responses for you:

  • I deeply despise bash (edit: this was hyperbole. I also deeply appreciate bash, as is appropriate for something that has made my life better for free!). That Linux shells settled on it as the default is an embarrassment to the entire open source community. (Edit: but Lexers and Parsers are hard! You don't see me fixing it, so yes, I'll give it a break. I still have to be discerning for production use, of course.)
  • Yes, Bash is good enough for production. It is the world's current default shell. As long as we avoid its fancier features (which all suck for production use), a quick bash script is often the most reasonable choice.
  • For the love of all that is holy, put your own personal phone number and no one else's in the script, if you choose to use bash to access a database. There are thousands of routine ways that database access can hiccup, and bash is suitable to help you diagnose approximately 0% of them.
  • If I found out a colleague had used bash for database access in a context that I would be expected to co-maintain, I would start by plotting their demise, and then talk myself down to having a severe conversation with them - after I changed it immediately to something else, in production, ignoring all change protocols. (Invoking emergency change protocols.)

Edit: I can't even respond to the security concerns aspect of this. Choice of security tool affects the quality of protection. In this unfortunate analogy, Bash is "the pull out method". Don't do that anywhere that it matters, or anywhere that one can be fired for security violations.

(Edit 2: Others have mentioned invoking SQL DB cleanup scripts from bash. I have no problem with that. Letting bash or cron tell the DB and a static bit of SQL to do their usual thing has been fine for me, as well. The nightmare scenario I was imagining was bash gathering various inputs for the SQL and then invoking it. I've had that pattern blow up in my face, and had a devil of a time piecing together what went wrong. It also comes with security concerns, as bash is normally a completely trusted running environment, while database inputs often come from untrusted sources.)
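To make that last point concrete, the pattern I mean is roughly the first command below; with psql specifically, psql variables plus the :'...' quoting are one way to keep untrusted values out of the SQL string itself (illustrative only, made-up table):

    # risky: whatever is in $username gets pasted straight into the SQL
    psql "$DB" -c "SELECT * FROM users WHERE name = '$username';"

    # safer with psql: pass the value as a psql variable and let :'username'
    # do the literal quoting; the SQL is fed on stdin so psql expands it
    echo "SELECT * FROM users WHERE name = :'username';" |
        psql "$DB" -v username="$username" -f -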

[–] [email protected] 2 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Could you explain those db connection hiccups you’ve seen?

[–] [email protected] 1 points 2 weeks ago (6 children)

Sure.

I'll pick on postgres because it's popular. But I have found that most databases have a similar number of error codes.

https://www.postgresql.org/docs/current/errcodes-appendix.html

It's not any specific error that's the issue; it's the sheer variety of ways things can go wrong, combined with bash not having been architected with the database access use case in mind.

[–] [email protected] 8 points 2 weeks ago (1 children)

Why internet man hate Bash? Bash do many thing. Make computer work.

[–] [email protected] 7 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I actually (also) love bash, and use it like crazy.

What I really hate is that bash is so locked into legacy that its bad features (on a scripting-language scale, which isn't fair) (and of which there are too many to enumerate) are now locked in permanently.

I also hate how convention has kept other shells from replacing bash's worst features with better modern alternatives.

To some extent, I'm railing against how hard it is to write a good Lexer and a Parser, honestly. Now that bash is stable, there's little interest in improving it. Particularly since one can just invoke a better scripting language for complex work.

I mourn the sweet spot that Perl occupies, that Bash and Python sit on either side of, looking longingly across the gap that separates their practical use cases.

I have lost hope that Python will achieve shell script levels of pragmatism. Although the invoke library is a frigging cool attempt.

But I hold on to my sorrow and anger that Bash hasn't bridged the gap, and never will, because whatever it can invoke, its methods of responding to that invocation are trapped in messes like "if...fi".

[–] [email protected] 2 points 2 weeks ago (1 children)

What do you suppose bash could do here? When a program reaches some critical mass of adoption, all your bugs and quirks become features of your program, and, love it or hate it, somebody's day is going to be ruined if you fix them, unless, of course, it's a fix for something that clearly doesn't work in the very sense of the word.

I’m sure there’s space for a clear alternative to arise though, as far as scripting languages go. Whether we’ll see that anytime soon is hard to tell, cause yeah, a good lexer and parser in the scripting landscape is hard work.

[–] [email protected] 4 points 2 weeks ago

What do you suppose bash could do here?

  • For the love of all that is holy, it's not 1970, we don't need to continue to tolerate "if ... fi"
  • Really, everything about how bash handles logic that spans multiple lines of a file (loops, error handling, etc.)

I’m sure there’s space for a clear alternative to arise though, as far as scripting languages go.

The first great alternative/attempt does exist, in PowerShell. (Honorable mention to Zsh, but I find it has most of the same issues as bash without gaining the killer features of pwsh.)

But I'm a cranky old person so I despise (and deeply appreciate!) PowerShell for a completely different set of reasons.

At the moment I use whichever gets the job done, but I would love to stop switching quite so often.

I hold more hope that PowerShell will grow to bridge the gap than that a fork of bash will. The big thing PowerShell lacks is bash's extra decades of debugging and refinement.

[–] [email protected] 8 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I agree with your points, except that if the script ever needs maintaining by someone else, they will curse you, and if it gets much more complicated it can quickly become spaghetti. But I do have a fair number of bash scripts running on cron jobs; sometimes its simplicity is unbeatable!

Personally though, the language I reach for when I need a script is Python with the click library; it handles arguments and is really easy to work with. If you want to keep Python deps down, you can also use the sh module to run system commands like they're regular Python, which is pretty handy.

[–] [email protected] 1 points 2 weeks ago (1 children)

Those two libraries actually look pretty good, and it seems like you can remove a lot of the boilerplate-y code you'd have to write without them. I will keep those in mind.

That said, I don’t necessarily agree that bash is bad from a maintainability standpoint. In a team where it’s not commonly used, yeah, nobody will like it, but that’s just the same as nobody would like it if I wrote in some language the team doesn’t already use? For really simple, well-defined tasks that you make really clear to stakeholders that complexity is just a burden for everyone, the code should be fairly simple and straightforward. If it ever needs to get complicated, then you should, for sure, ditch bash and go for a larger language.

[–] [email protected] 5 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

That said, I don’t necessarily agree that bash is bad from a maintainability standpoint.

My team uses bash all the time, but we agree (internally as a team) that bash is bad from a maintainability perspective.

As with any tool we use, some of us are experts, and some are not. But the non-experts need tools that behave themselves on days when experts are out of office.

We find that bash does very well when each entire script has no need for branching logic, security controls, or error recovery.

So we use substantial amounts of bash in things like CI/CD pipelines.

[–] [email protected] 5 points 2 weeks ago

Hell, I hate editing bash scripts I’ve written. The syntax just isn’t as easy

[–] [email protected] 8 points 2 weeks ago (1 children)

I don't disagree with this, and honestly I would probably support just using bash like you said if I was in a team where this was suggested.

I think no matter how simple a task is, there are always a few things people will eventually want to do with it:

  • Reproduce it locally
  • Run unit tests, integration tests, smoke tests, whatever tests
  • Expand it to do more complex things or make it more dynamic
  • Monitor it in tools like Datadog

If you have a whole project already written in Python, Go, Rust, Java, etc., then just writing more code in that project might be simpler, because all the tooling and methodology is already integrated. A standalone script isn't as visible to developers who focus mostly on the main code base, so out of sight, out of mind sets in and no one even knows the script exists.

There is also the consideration that many people simply dislike bash since it's an odd language and many feel it's difficult to do simple things with it.

For these reasons, although I would agree with writing the script, I would also be inclined to keep it only as a temporary measure while another solution is being implemented.

[–] [email protected] 1 points 2 weeks ago

I don’t necessarily agree that all simple tasks will lead to the need for a test suite to accommodate more complex requirements. If it does reach that point,

  1. Your simple bash script has and is already providing basic value.
  2. You can (and should) move onto a more robust language to do more complicated things and bring in a test suite, all while you have something functional and delivering value.

I also don't agree that you can just solder whatever small task you have onto whatever systems you already have up and running. That's how you make a Frankenstein. Someone at some point will have to come and do something about your little section because it started breaking, or started causing other things to break. It could be throwing errors because somebody changed the underlying db schema. It could be retrying a network call with a poorly configured backoff strategy and tripping up monitoring alerts.

That said, I do agree it's suitable for temporary tasks.

[–] [email protected] 6 points 2 weeks ago (1 children)

Bash is perfectly good for what you’re describing.

[–] [email protected] 3 points 2 weeks ago (1 children)

Serious question (as a bash complainer):

Have I missed an amazing bash library for secure database access that justifies a "perfectly good" here?

[–] [email protected] 3 points 2 weeks ago (1 children)

Every database I know comes with an SQL shell that takes commands from stdin and writes query results to stdout. Remember that "bash" never means bash alone, but all the command line tools from cut via jq to awk and beyond … so that SQL shell would be what you'd call the "bash library".
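e.g. something like this (psql as the example, made-up query):

    # SQL in on stdin, tab-separated rows out on stdout
    echo 'SELECT id, email FROM users WHERE active;' |
        psql "$DB" --tuples-only --no-align --field-separator=$'\t' |
        while IFS=$'\t' read -r id email; do
            echo "would notify $email (user $id)"
        done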

[–] [email protected] 2 points 2 weeks ago (1 children)

Thank you. I wasn't thinking about that. That's a great point.

As long as any complex recovery logic fits inside the SQL itself, I don't have any issue invoking it from bash.

It's when there's complicated follow-up that needs to happen in bash that I get anxious about it, due to past painful experiences.
