
Really minimal template that works well in all scenarios and doesn't cause unexpected behaviour (the way changing IFS does):

    #!/usr/bin/env bash
    
    set -eEu -o pipefail
    DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
Reference all relative things through $DIR first:

    "$DIR/relativestuff.sh"


I much prefer this approach (finding the script directory and explicitly using it) over changing into the directory of the script. It makes it much easier to use paths relative to the working directory when you want to.
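A quick way to see the two directories diverge (a sketch — the temp layout and the where.sh name are made up for illustration):

```shell
#!/usr/bin/env bash
set -eEu -o pipefail

# Build a throwaway script in a temp dir to show the two notions of "where".
tmp="$(mktemp -d)"
mkdir "$tmp/scripts"
cat > "$tmp/scripts/where.sh" <<'EOF'
#!/usr/bin/env bash
set -eEu -o pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
echo "$DIR"
EOF
chmod +x "$tmp/scripts/where.sh"

cd "$tmp"                        # invoke from the "project root"...
script_dir="$("$tmp/scripts/where.sh")"
echo "script dir:  $script_dir"  # ...yet $DIR still points at .../scripts
echo "working dir: $PWD"         # while relative user paths resolve here
```

The script can load its own files via $DIR while arguments like `./data.csv` still resolve against wherever you ran it from.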


IMO this solution solves the wrong problem. The “right” problem to solve is the coupling between the script and the directory layout. If the script needs to operate on data in some directory, the path to that directory should be part of the script’s interface and handled accordingly.


In many cases one might not want a full stand-alone program with arguments, usage and such. Rather a "script" i.e. something that can be simply executed to run a series of tasks. For example it's quite silly for a build script to require the path of the project it resides in.


It's really not silly. A build script could easily move around inside a project. It could go from the project root, to /root/bin, to /root/bin/build, etc etc. If the inconvenience of arguments troubles you, write a wrapper or alias it or something like that.


Yes, it is silly. You've just added extra complexity to a project for little to no benefit. If you decide to move the script, you take the same steps as if you moved anything else, i.e. update the references.


How would you define complexity such that a script which computes a value is less complex than a script which accepts that pre-computed value?


It's unnecessary complexity for the user. As opposed to simply having the build instructions as `Run bin/build` you're suggesting "Run bin/build <project root>". But there's zero reason for requiring the user to supply the project root when we already know it.


This is exactly the kind of implicit coupling that makes software unmaintainable in the long run. You _are not passing the project root_, you are passing the value of a variable which currently should be set to the project root. The constraints on this value are not that it must be the project root, they are other constraints that are satisfied by the project root right now.


You said earlier that "If the inconvenience of arguments troubles you, write a wrapper". That's precisely what a script like this is. A wrapper. I can run one simple, static command, similar to just about any build tool e.g. `cd project && make`.
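There's also a middle ground between the two positions — sketched below with a made-up `bin/build` layout: keep the project root as part of the script's interface, but default it to the directory above the script, so a plain `bin/build` still works.

```shell
#!/usr/bin/env bash
set -eEu -o pipefail

# Hypothetical project layout: $tmp/proj/bin/build
tmp="$(mktemp -d)"
mkdir -p "$tmp/proj/bin"
cat > "$tmp/proj/bin/build" <<'EOF'
#!/usr/bin/env bash
set -eEu -o pipefail
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
# Root is an argument if given, else the directory above the script.
PROJECT_ROOT="${1:-$(dirname "$DIR")}"
echo "$PROJECT_ROOT"
EOF
chmod +x "$tmp/proj/bin/build"

default_root="$("$tmp/proj/bin/build")"                   # no argument needed
explicit_root="$("$tmp/proj/bin/build" /somewhere/else)"  # but overridable
```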


What about if the script wants to source sibling files in the same directory? I.e. it is a bash library of sorts, with modules spread over files. This isn't "operating on data", and it would be weird for a library to have to receive its own location as an argument in order to load its own modules.


I don't think there's much weird about providing a script a path to its dependencies. If it's the case that the path to the dependencies location is the same as the path to the script's location, that's fine. We're asking distinct questions (where is my data, where are my deps, where is my script) here, the unnecessary coupling is precisely the assumption they have the same answer. If they happen to have the same answer, that's not weird, that sort of thing happens all the time.
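A sketch of that separation (the LIB_DIR variable and greet.sh module are invented for illustration): "where are my deps" becomes an explicit, overridable parameter that merely happens to default to one particular answer.

```shell
#!/usr/bin/env bash
set -eEu -o pipefail

# Stand in for a deps directory shipped alongside the script.
tmp="$(mktemp -d)"
mkdir "$tmp/lib"
echo 'greet() { echo "hello from lib"; }' > "$tmp/lib/greet.sh"

# LIB_DIR is part of the interface; the default is just one valid answer.
LIB_DIR="${LIB_DIR:-$tmp/lib}"
source "$LIB_DIR/greet.sh"
msg="$(greet)"
echo "$msg"
```

In a real script the default would typically be $DIR itself; the point is only that the question is asked explicitly.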


Every time I see this I can't help but smile, knowing that in Zsh the equivalent code is:

    #!/usr/bin/env zsh

    setopt err_exit no_unset
    DIR=${0:h}


Honestly I think I prefer the bash version to `DIR=${0:h}`. Although PowerShell is superior with simply "$PSScriptRoot". (On the other hand, PowerShell's strict mode is a mess, and "catch errors" in PowerShell is still an extra setting that doesn't actually catch errors half the time.)



Out of curiosity, what do you like better about the Bash version?


Not the OP. For the occasional user, like me, it is more obvious what it is doing.


I like the readlink command


Same, I was scratching my head, wondering why the author of the article doesn't just use readlink... and then a few hours later, a coworker of mine asked why my bash script isn't working on her Mac! Turns out readlink on OS X doesn't have the same flags. I ended up substituting it out for the hackier but more portable method that the article uses.
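For reference, a sketch of that substitution (the portable_abspath name and temp layout are made up): GNU readlink -f canonicalises a path in one call, but BSD/older macOS readlink lacks -f, so the cd/pwd dance is the portable fallback.

```shell
#!/usr/bin/env bash
set -eEu -o pipefail

# Portable stand-in for `readlink -f` on a path whose directory exists:
# resolve the directory part physically, then re-attach the basename.
portable_abspath() {
  local dir base
  dir="$( cd -P "$( dirname "$1" )" && pwd )"
  base="$( basename "$1" )"
  printf '%s/%s\n' "$dir" "$base"
}

# Demo on a throwaway tree:
tmp="$(mktemp -d)"
mkdir -p "$tmp/a/b"
touch "$tmp/a/b/file.txt"
cd "$tmp"
result="$(portable_abspath a/b/file.txt)"
echo "$result"
```

Unlike readlink -f this doesn't resolve a symlink in the final component, but for "where is this script" it's usually enough.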


This is the real minimal template. But it should be pwd -P at the end there or you introduce path resolving bugs. Or you could use cd -P, or cd -e -P to be extra fail-safe (bashism).

I also tack on [ "${DEBUG:-0}" = "1" ] && set -x
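The difference can be seen with a throwaway symlink (paths below are invented): plain pwd reports the logical, symlinked path, while pwd -P resolves to the physical one. Note too that under set -e the DEBUG one-liner needs a guard, since a failing `[ ... ] && ...` list would otherwise kill the script.

```shell
#!/usr/bin/env bash
set -eEu -o pipefail

tmp="$(mktemp -d)"
mkdir "$tmp/real"
ln -s "$tmp/real" "$tmp/link"

logical="$( cd "$tmp/link" && pwd )"      # ends in /link (logical)
physical="$( cd "$tmp/link" && pwd -P )"  # ends in /real (physical)
echo "$logical"
echo "$physical"

# The DEBUG toggle, guarded so the failing test doesn't trip set -e:
[ "${DEBUG:-0}" = "1" ] && set -x || true
```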


Out of interest, why is the way you are handling relative paths better?


is there a reason "cd && pwd" is preferred over `realpath`?


macOS doesn't have a realpath command.


or a modern bash built in (grrrr)


In fairness, macOS has switched over to zsh as of 1 year ago in macOS Catalina: https://support.apple.com/en-us/HT208050.

Catalina shipped with zsh 5.7 whereas 5.8 is now the latest.


My gripe is about being able to work on a bash script that needs to work "everywhere". Bash 4 was released over a decade ago, so it's a reasonable expectation that it will be present on most servers with a shell.

Homebrew works around that, but refusing to ship the binary over GPL taint still doesn't seem right.


What about using zsh to run bash scripts on a Mac? It's possibly better than the old bash.


That may cause errors or undesirable behaviour. As way of example, dealing with arrays: `read -a` in bash is `read -A` in zsh, and the former’s arrays are zero-based while the latter’s are one-based.
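For the bash side of that (zsh may not be installed everywhere, so its spellings are only noted in comments):

```shell
#!/usr/bin/env bash
set -eEu -o pipefail

# bash: `read -a` splits into an array, indexed from 0.
# zsh would need `read -A words` and would put "alpha" at ${words[1]}.
read -r -a words <<< "alpha beta gamma"

first="${words[0]}"
count="${#words[@]}"
echo "$first"
echo "$count"
```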


Big Sur ships with zsh 5.8 as /bin/zsh.


...also use shellcheck


Or add a popd $DIR



