At least nowadays LLMs can rewrite Bash to JS/Python/Ruby pretty quickly.
Just integrate fzf into your shell and use ctrl-r to instantly summon a fuzzy shell history search and re-execute any command from your history!
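For anyone who hasn't set this up, a minimal sketch of the wiring, assuming a reasonably recent fzf (0.48+) and bash; older installs instead source a key-bindings file shipped with the fzf package:

```shell
# ~/.bashrc: enable fzf's shell integration, including Ctrl-R fuzzy history search
eval "$(fzf --bash)"

# zsh equivalent, in ~/.zshrc:
# eval "$(fzf --zsh)"
```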
I cannot imagine going back to using a terminal without this.
I still write plenty of scripts when I need to repeat multi-command processes, but for one-liners I just use fzf to re-execute them.
Also in a shared project you can ignore script files with .git/info/exclude instead of .gitignore so you don’t have to check in your personal exclusion patterns to the main branch.
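For the concrete mechanics, a throwaway demo (file names are made up for illustration): patterns in `.git/info/exclude` behave exactly like `.gitignore` entries, but live only in your local clone and are never committed:

```shell
# set up a scratch repo to demonstrate a personal, uncommitted ignore pattern
tmp=$(mktemp -d)
git init -q "$tmp/demo"
cd "$tmp/demo"
echo 'echo hi' > scratch.sh

# this pattern stays local to your clone; it is never committed or pushed
echo 'scratch.sh' >> .git/info/exclude

git check-ignore scratch.sh   # prints the path: the file is now ignored
```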
Seriously, people: if you use a terminal, you need the following tools to dominate the shell:
ripgrep, zoxide, fzf, fd
"I want to be clear here, I am not advocating writing “proper” scripts, just capturing your interactive, ad-hoc command to a persistent file."
What's the difference? Why not version-control it and share it with colleagues? Imagine writing a unit test to exercise a new feature, then deleting it when done: what a waste. OK, it's not exactly the same, because you aren't using these scripts to catch regressions, but all of that useful learning and context can be reused.
I don't think the language you use for scripting is too important, as long as the runtime is pinned and easily available on all engineers' machines, perhaps using a toolchain manager like... mise[3].
[1] https://mise.jdx.dev/tasks/ [2] https://mise.jdx.dev/shell-aliases.html [3] https://mise.jdx.dev/dev-tools/
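To make the tasks idea concrete, a sketch of what a mise task might look like (the task name and command here are hypothetical; see the tasks docs linked above for the full format):

```toml
# mise.toml in the repo root
[tasks.smoke]
description = "Re-run the ad-hoc smoke check we kept reaching for"
run = "curl -fsS http://localhost:8080/health"   # hypothetical command
```

Running `mise run smoke` then replays the captured one-liner by name.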
Another quite standard way of saving your command history in a file, which I have seen used in all ecosystems, is called "make". It even saves you a few characters when you have to type it, people don't have to discover your custom system, and autocompletion works out of the box.
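A sketch of that pattern (target names and commands are made up): each captured one-liner becomes a phony target, and tab completion on `make` comes along for free in most shells:

```make
# Makefile: captured one-liners promoted to named, tab-completable targets
.PHONY: logs health

logs:     ## tail the app logs (hypothetical command)
	tail -f /var/log/app.log

health:   ## poke the health endpoint (hypothetical command)
	curl -fsS http://localhost:8080/health
```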
Historically we had to use pip, which was super janky. uv solves most of pip's issues, but you still have to deal with venvs, and one issue it doesn't solve is that you can't import by relative file path, which is something you always end up wanting for ad-hoc scripting. You can use relative package paths, but that's totally different.
Though, I generally run these scripts using bun (and the corresponding `$` in bun) - basically the same thing, but I just prefer bun over deno
Instead, I now swear by atuin.sh, which just remembers every command I've typed. It's sort of bad, since I never actually get nice scripts, just really long commands, but it gets you 50% of the way there with 0 effort. When leaving my last job, I even donated my (very long) atuin history to my successor, which I suspect was more useful than any document I wrote.
My only hot tip: atuin overrides the up-arrow by default, which is really annoying, so do `atuin init zsh --disable-up-arrow` to make it only run on Ctrl-R.
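In rc-file terms (zsh shown; bash is analogous), that looks like:

```shell
# ~/.zshrc: let atuin record everything, but only take over Ctrl-R
eval "$(atuin init zsh --disable-up-arrow)"
```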
Some of the scripts are bash, but many are TypeScript via Deno... it's great that you can reference your dependency modules directly, with no separate install step as in Node. Most of my shell scripting is now in Deno.
In fact, VS Code just added shebang detection for TS files without the .ts extension... so I don't even need that little extra to edit properly anymore. It works great as a shell scripting language.
With your shell's vi mode, it's even better: Esc, then k k k to walk back through history.
Or search them with /
And if you are proficient with vim, you can edit your previous one-liner really fast.
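For anyone who hasn't turned this on, vi editing mode is one line of shell config (bash shown; zsh uses `bindkey -v` instead):

```shell
# ~/.bashrc: vi-style line editing; Esc then k/j walks history, / searches it
set -o vi
```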
(Remap/swap CapsLock with Escape system-wide. It's just a GUI setting on Linux and macOS, and a registry tweak on Windows.)
If you work in PowerShell you can start out in the terminal, then once you've got whatever you need working, grab the history (Get-History) and write it to a file, which I've always referred to as a `sample`. Then, when it becomes important enough that other people ask me about it regularly, I refactor the `sample` into a true production-grade `script`. It often doesn't start out with a clear direction, and creating a separate file is just unnecessary ceremony when you can tinker and export later, once the `up-enter` pattern actually appears.
What about this instead: select any number of lines, in any file, and pass it through to the shell. You get convenience of text editing, file management, and shell’s straightforwardness.
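Outside Acme you can emulate this with plain tools: keep commands in an ordinary text file and pipe any slice of it into the shell. A minimal sketch (the file name and commands are made up):

```shell
# notes.txt holds ad-hoc commands, one per line
cat > notes.txt <<'EOF'
echo step one
echo step two
echo step three
EOF

# "select" lines 2-3 of the file and hand them to the shell
sed -n '2,3p' notes.txt | sh
```

In vim, the same idea is a visual selection followed by `:'<,'>w !sh`, which pipes the selected lines to `sh`.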
(This approach was tried and cemented in Acme, a text editor from Bell Labs.)
Instead of juggling dashboards and collections of requests, or relying on your shell history as Matklad mentions, you have it in a file that you can commit and plug into CI. Win-win.
At some point, that testing shell script can be integrated into your codebase using your working language and build tooling.
Edit: zero-dependency Python.
This article used dax instead, which also looks fine! https://github.com/dsherret/dax
I don't understand why you wouldn't want your scripts in your Git repo, but I guess OP's context is different from mine.
Anyway, what kills this for me is the need to add await before every command.
...this is the same sort of 'works for me' philosophy as in Matklad's post though; it's so heavily opinionated and personalized that I don't expect other people to pick it up, but it makes my day-to-day work a lot easier (especially since I switch multiple times between macOS, Linux and Windows on a typical day).
I'm not sure if Bun can do it too, but the one great thing about Deno is that it can directly import without requiring a 'manifest file' (e.g. package.json or deno.json), e.g. you can do something like this right in the code:
import { Bla } from 'jsr:@floooh/bla@^1';
This is just perfect for this type of command-line tool... and that was also the one concrete example where it makes sense to have an extra dependency and abstraction layer on top of a shell script :)
Say you know TS: even if you walk back to where $ is defined, can you tell immediately why $`ls ${dir}` gets executed and not just logged?
No repetitive short sentences, no "Not X, just Y." patterns, and lots of opinionated statements, written confidently in the first person.
Please more of this.