I'd rather that most devs don't touch that signal. Using that binding and having a GUI or CLI program continue hanging because the dev screwed up the cleanup is a real pain. And someone writing a Bash script is highly likely to do something "very clever" with that signal to make my life harder.
Or if you're going to do something with it, at least make it clear you're trolling me. Show me a text ad that forces me to choose my favorite Korean boy band before I can exit, or something in that vein.
Agree that developers should be very careful about messing up Ctrl-C. However, as others have pointed out, it can make sense for long-running processes (especially in cases where there's an intermediate result that can be output instead of the final result). I think a good compromise is to only ever trap Ctrl-C once, so that a double Ctrl-C always successfully interrupts.
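The trap-once compromise can be sketched like this (function and variable names are illustrative, and the loop stands in for the real long-running work):

```shell
#!/usr/bin/env bash
# First Ctrl-C prints intermediate results; a second Ctrl-C kills the script.

print_partial_results() {
  echo "partial result so far: $count"
}

on_interrupt() {
  trap - INT          # restore the default handler: the next Ctrl-C terminates
  print_partial_results
}

trap on_interrupt INT

count=0
for i in 1 2 3; do    # stand-in for the long-running work
  count=$((count + i))
done
echo "final result: $count"
```

Resetting the trap inside the handler is the whole trick: the script only ever intercepts the first interrupt.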
That’s fair! But sometimes I want to have a hook that basically says “Are you sure?” to catch mistakes in the wrong terminal or something. One thing I wrote recently took several hours to run and it’d suck to accidentally close it because I type without looking.
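A hypothetical version of that "Are you sure?" hook might look like this (the function name and prompt wording are made up):

```shell
#!/usr/bin/env bash
# Ask for confirmation on Ctrl-C instead of dying immediately.

confirm_exit() {
  read -r -p "Really abort? [y/N] " answer
  case "$answer" in
    [Yy]*) exit 130 ;;            # conventional exit status for SIGINT
    *)     echo "Continuing..." ;;
  esac
}

trap confirm_exit INT
```

An accidental Ctrl-C in the wrong terminal now costs a keystroke instead of several hours of work.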
I'm working on a book about Bash scripting which is currently in the review phase, and it includes most of these. For graceful exit I recommend `trap cleanup EXIT` rather than specifically trapping SIGINT, mostly because the special EXIT signal is triggered no matter why the script is interrupted. I wouldn't normally recommend pulling variables out into a separate file until those variables are used by more than one script. I'd be interested in the rationale for why that helps refactoring.
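A minimal sketch of that EXIT-trap pattern (the `cleanup` function and temp file are illustrative):

```shell
#!/usr/bin/env bash
# EXIT fires on normal completion, `exit`, SIGINT, `set -e` failures, etc.

scratch=$(mktemp)

cleanup() {
  rm -f "$scratch"    # runs no matter how the script ends
}
trap cleanup EXIT

echo "working" > "$scratch"
```

Because EXIT covers every exit path, you don't need to enumerate INT, TERM, HUP, and so on just to delete a temp file.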
Yes... but do not take that as a "cargo cult script shebang".
If you're a sysadmin writing a script for a company with 2k Linux servers that has a policy of "we only use Linux version Foo X"... and no bash other than /bin/bash is used on those systems (no bash compiled by hand, no multiple versions of bash, etc.)... then portability via "env" does not make sense.
If you have two laptops and a Raspberry Pi at home, with Debian or Arch, and you write a script for yourself... then portability via "env" does not make sense.
And last but not least... using env is slower.
See:

    strace -fc /bin/bash -c ':'

vs.

    strace -fc /usr/bin/env bash -c ':'

On my system, that's 92 syscalls and 3 errors vs. 152 syscalls and 8 errors.
Just to start processing.
Different levels of system bloat (environment, library paths, etc.) can give different results than my example.
And as others said... if you're not using GNU bash syntax and the script is really simple, the best option for portability is to go with /bin/sh.

    strace -fc /bin/sh -c ':'

On my system, that's 41 syscalls and 1 error... (and less RAM, less CPU, and fewer page faults).
If you're not using associative arrays, array indices, non-POSIX builtin options, or other bash extensions... if the script just joins a few commands and variables... it pays off to write it in plain sh, both for portability and performance.
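For that kind of script, the POSIX subset really is enough; a small sketch (the strings and the prefix test are illustrative):

```shell
#!/bin/sh
# POSIX-only: no [[ ]], no arrays, no bash extensions.

greeting="hello"
name="world"

# Plain parameter expansion and [ ] tests cover most glue-script needs.
msg="${greeting} ${name}"

# Prefix check via POSIX pattern stripping instead of bash's [[ x == y* ]].
if [ "${msg#hello}" != "$msg" ]; then
  echo "$msg starts with hello"
fi
```

Everything here runs identically under dash, busybox sh, and bash, which is the point.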
- Do I trust my code to run on a machine where /bin/bash doesn't work?
- Do I trust my users to have their PATH configured correctly?
IME a user misconfiguring ~/.bashrc is about seven million times more likely than any of those theoretical "portability" scenarios. And I don't buy that running my code on some unspecified version of bash that a mac user accidentally installed while screwing up a Homebrew copy/paste command is preferable to using the factory default that everyone has.
- Do I respect people who have set up their PATH correctly (to prefer, for example, a newer /usr/local/bin/bash or $HOME/bin/bash over the standard /bin/bash)?
This is a great list. Also while reading about 'readonly' bash variables I ran across this amazing project which lets you call native functions from bash [0]. My mind is spinning from the possibilities...
Huge +1 to using long-form options in scripts, even if you’re the sole maintainer of the script. Also, if you have a command that takes many flags, breaking them out onto new lines can help keep it readable.
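Something like this, for instance (the file contents and pattern are made up for the demo):

```shell
#!/usr/bin/env bash
# Long-form options, one per line, joined with trailing backslashes.

demo=$(mktemp)
printf 'alpha\nBETA\n' > "$demo"

match=$(grep \
  --ignore-case \
  --line-number \
  --max-count=1 \
  'beta' "$demo")

echo "$match"
rm -f "$demo"
```

Six months later, `--ignore-case` needs no trip to the man page; `-i` might.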
> But if you want a portable Python then you still target Python
I am still supporting systems that came with Python 2. You get portable Python the same way you get portable bash: build and deploy the interpreter with your code.
You’re missing my point. If you’re targeting Python, then you need Python installed, but you don’t always know where that executable might live. Whereas if you’re targeting shell scripts, you can always fall back to regular Bourne shell for portability, and that should always have an executable or symlink at /bin/sh.
You're not the only one, I spent a minute thinking about a discussion explaining why the `env` way was better, I was going to have a rant about people giving contradictory advice for "portability"!
EXIT is trapped the same way as 0: it fires whenever your shell exits. Ctrl+C sends the SIGINT signal, which you can catch by trapping INT. You want the latter when you're gracefully exiting the script; if you want some cleanup after that, you can trap EXIT too (for deleting tmp files or something).
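Combining the two looks roughly like this (handler names and the temp file are illustrative):

```shell
#!/usr/bin/env bash
# INT for the graceful-exit message, EXIT for the actual cleanup.

tmpfile=$(mktemp)

on_int() {
  echo "caught SIGINT, exiting gracefully" >&2
  exit 130                 # exiting here fires the EXIT trap below
}
on_exit() {
  rm -f "$tmpfile"         # runs on normal exit *and* after on_int
}

trap on_int  INT
trap on_exit EXIT
```

The INT handler only decides *how* to stop; the EXIT handler owns the cleanup, so it can't be skipped.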
- Use shellcheck (static analysis/linter) https://www.shellcheck.net/
- Use shunit2 (unit tests) https://github.com/kward/shunit2
- Use 'local' or 'readonly' to annotate your variables
- Trap ctrl+c to gracefully exit (details here https://www.tothenew.com/blog/foolproof-your-bash-script-som...)
- Stick to long-form options for readability (--delete over -d for example)
- #!/usr/bin/env bash > #!/bin/bash for portability
- Consider setting variable values in a file and importing it at the top of your script to improve refactoring
- You will eventually forget how your scripts work - seriously consider if Bash is your best option for anything that needs to last a while!
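The "variables in a file" tip from the list might look like this (the config contents and names are made up; here the config is written to a temp file just so the sketch is self-contained):

```shell
#!/usr/bin/env bash
# Shared values live in one sourced file instead of being scattered per script.

config=$(mktemp)
cat > "$config" <<'EOF'
# config.sh -- shared values, sourced at the top of every script
readonly LOG_DIR="/tmp/myapp-logs"
readonly MAX_RETRIES=3
EOF

# Import the shared values; renaming or retuning a value touches one file only.
. "$config"

echo "logging to $LOG_DIR with $MAX_RETRIES retries"
```

`readonly` in the sourced file also means a script that accidentally reassigns a shared value fails loudly instead of silently drifting.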