I want to add one other note: in any large organization, some developers will use tools in ways nobody can predict. This includes Git. Don't try to force any particular workflow, including mandatory or automatically-enabled hooks.
Instead, put what you want in an optional pre-push hook and also put it into an early CI/CD step for your pull request checker. You'll get the same end result but your fussiest developers will be happier.
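A minimal sketch of the hook half (the check.sh script and the .githooks path are assumptions, not a convention):

    #!/bin/sh
    # .githooks/pre-push -- opt in per-clone with: git config core.hooksPath .githooks
    # Runs the same script CI runs, so a local failure is never a surprise later.
    exec ./check.sh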
If you're using `pre-commit` the tool, not merely the hook, you can also use something like https://github.com/andrewaylett/pre-commit-action to run the tool in CI. It's a really good way to share check definitions between local development and CI, meaning you've shifted your checks to earlier in the pipeline.
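On the CI side, the early step can be as small as this (a sketch; assumes pre-commit is installed on the runner):

    # First PR check: run every configured hook against the whole tree.
    pre-commit run --all-files --show-diff-on-failure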
I use Jujutsu day-to-day, which doesn't even support pre-commit hooks. But the tooling is still really useful, and making sure we run it in CI means we're not relying on every developer having the hooks set up. And I have JJ aliases that make pre-commit work well in a JJ workflow: https://github.com/andrewaylett/dotfiles/blob/7a79cf166d1e7b...
Don't do that, just don't.
Make all test cases green locally before pushing, but not in a way that interferes with pushing code, and the checks shouldn't be tied to a particular (D)VCS. Allow uploading all of the separate proposed PRs you want in a "for review" state. After a PR is signed off and sent for merging, it goes into a linearizing single source of truth, backed by an automated testing/smoke-testing process, before landing "auto-fast-forwarded" in a mostly uncontrolled manner that doesn't allow editing the history directly. Standardization and simplicity are good, and so is requiring peer review of code before it's accepted into existing, production, big systems.
Disallow editing trunk/main/master, and whenever there's a merge conflict between PRs, require manually rebasing one or the other. Not a huge deal.
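A sketch of what that landing step can look like (the script and branch names are illustrative, not any specific merge-queue product):

    #!/bin/sh
    # Hypothetical merge-queue worker: linearize, gate on tests, never rewrite trunk.
    set -e
    git fetch origin main "$PR_BRANCH"       # PR_BRANCH: the approved branch
    git checkout -B queue "origin/$PR_BRANCH"
    git rebase origin/main                   # the manual-rebase step if it conflicts
    ./run-smoke-tests.sh                     # assumed automated gate before landing
    git checkout -B main origin/main
    git merge --ff-only queue                # fast-forward only; history stays intact
    git push origin main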
Also, have structured OWNERS files that list the people and/or distribution lists who own/support each area. Furthermore, have a USERS file that lists the people who would be affected by restarting/interrupting/changing a particular codebase/service, for notification purposes. In general, a monorepo, plus allowing anyone to submit code for any area, are roughly good approaches.
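Concretely, these can be as simple as the following (format invented for illustration; Google-style OWNERS files are one precedent):

    # services/billing/OWNERS
    alice@example.com
    billing-team@example.com      # distribution list of owners

    # services/billing/USERS
    checkout-oncall@example.com   # notify before restarting/interrupting the service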
Lefthook with glob+stage_fixed for formatters makes one of the issues raised (a formatter modifying files that then don't get re-staged) a complete non-issue.
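In lefthook.yml that's roughly this (a sketch; the formatter command is an assumption):

    pre-commit:
      commands:
        format:
          glob: "*.{js,ts}"
          run: prettier --write {staged_files}
          stage_fixed: true   # re-stage whatever the formatter touched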
I'll write an in-depth post about it, maybe within the next week or so; I've been diving into these in my hobby projects for a year or so.
The pre-commit script (https://github.com/ThomasHabets/rustradio/blob/main/extra/pr...) triggers my executor, which sets up the pre-commit environment like so: https://github.com/ThomasHabets/rustradio/blob/main/tickbox/...
I run this on every commit. Sure, I have probably gone overboard, and I may be too picky about not having a broken HEAD, but it has prevented problems. And if you want to contribute, you don't have to run any pre-commit checks; they'll run on every PR too.
I don't send myself PRs, so this works for me.
Of course I always welcome suggestions and critique on how to improve my workflow.
At least nothing is stateful (well, it caches build artefacts), and aside from "cargo deny" there are no external deps.
With pre-commit checks keeping every commit green, you can disable squashing on merge. You'll have a refactor commit and a feature commit. Changes to the refactor can be --fixup commits followed by a rebase. I find this much easier than juggling two pull requests.
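Concretely (with <refactor-sha> standing in for whatever git log shows for the refactor commit):

    git commit --fixup <refactor-sha>        # record the review fix against it
    git rebase -i --autosquash origin/main   # fold the fixup into the refactor commit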
I accept that this is only viable with buy-in from everyone on the team, and I wouldn't advise it for teams over 10 people. But for small teams, I love pre-commit and trunk-based development.
(One very nice thing about AI-assisted programming: Claude is not offended by duplicating the same code over and over and using utterly awful APIs, and Claude feels no particular compulsion to get distracted thinking about how the APIs I’m targeting could be made to be less awful. Try asking the Home Assistant docs how an integration is supposed to handle a hot-added entity after an integration finishes setup: you will not get an answer. Ask Claude and it will cheerfully copy the ludicrous and obviously inappropriate solution used by other integrations. Sometimes semi-blindly doing something that works is the right solution when writing piles of glue code.)
Also, if most developers are using one editor, configure that editor to format on save and auto-fix lint errors. That probably cleans up the majority of unexpected CI failures.
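In VS Code, for example, that's roughly this in settings (a sketch; which fix-all providers fire depends on the installed extensions):

    // .vscode/settings.json
    {
      "editor.formatOnSave": true,
      "editor.codeActionsOnSave": { "source.fixAll": "explicit" }
    }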
... but even that one took several rounds of fiddling and complexifying to get it to behave correctly (e.g. merging commits that contain already-committed bypassed binaries should be allowed; how do you detect that? Did you know to check for that scenario when building the hook? Someone's gonna get bitten by it, and there are dozens of these cases).
So yea. Agreed. Do not use pre-commit hooks: they're far more trouble than they seem, and the failure modes and surprises are awful and can be quite hard to figure out for the laymen experiencing them.
(much of this is true for all git hooks imo)
e.g. (using the typical aliases):

    prek uninstall; g rbc; prek install

> They tell me I need to have "proper formatting" and "use consistent style". How rude.
> Maybe I can write a pre-commit hook that checks that for me?
git filter is made for that. It works. There are still caveats (it will format the whole file, so you might end up committing formatting fixes to code that isn't your own).
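Wiring it up looks roughly like this (prettier is just an example; any formatter that reads stdin and writes stdout works as a clean filter):

    # The clean filter runs on content as it's staged; git substitutes %f
    # with the file's path so the formatter can pick the right parser.
    git config filter.format.clean 'prettier --stdin-filepath %f'
    echo '*.js filter=format' >> .gitattributes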
Pre-commit is not for formatting your code. It's for checking whether the commit is correct: whether the content has a ticket ID, or whether the files pass even basic syntax validation.
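The ticket-ID check, for instance, is a few lines (strictly it belongs in the commit-msg hook, which receives the message file as $1; the PROJ-123 pattern is an assumption):

    #!/bin/sh
    # .git/hooks/commit-msg -- reject messages without a ticket reference
    grep -qE '[A-Z]+-[0-9]+' "$1" || {
      echo 'commit message must reference a ticket, e.g. PROJ-123' >&2
      exit 1
    }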
> Only add checks that are fast and reliable. Checks that touch the network should never go in a hook. Checks that are slow and require an up-to-date build cache should never go in a hook. Checks that require credentials or a running local service should never go in a hook.
If you can do that, great! If you can't (say it's something like a CI/CD repo with a bunch of different languages involved, and not every dev has the setup to check everything locally), having to override a check the couple of times a year it gets in the way is still preferable to committing broken code. We run local checks where they make sense (checking YAML correctness, or decoding encrypted YAMLs with the user's key so they also get checked), but the ones that don't go remote. It's faster: a few ms of RTT doesn't matter when you can leverage a big server CPU to run the checks faster.
Bonus points: it makes the big pain point, interactive rebases, faster, because you can cache the output for a given file hash globally, so checking existing commits during a rebase takes milliseconds at most.
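A sketch of that cache (paths and the check command are assumptions):

    #!/bin/sh
    # Hypothetical wrapper: skip the check when this exact content passed before.
    file="$1"
    hash=$(git hash-object "$file")           # content hash, stable across rebases
    cache="$HOME/.cache/repo-checks/$hash"
    [ -e "$cache" ] && exit 0                 # this content already passed
    run_check "$file" || exit 1               # run_check is an assumed helper
    mkdir -p "${cache%/*}" && touch "$cache"  # remember the pass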
> Don't set the hook up automatically. Whatever tool you use that promises to make this reliable is wrong. There is not a way to do this reliably, and the number of times it's broken on me is more than I can count. Please just add docs for how to set it up manually, prominently featured in your CONTRIBUTING docs. (You do have contributing docs, right?)
DO set it up automatically (or as much as possible; we have a script that adds the hooks and sets the repo defaults we use). You don't want a new developer to spend half a day setting up some git nonsense only to get it wrong. And once you change it, just rerun the script.
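The skeleton of such a script is tiny (the defaults shown are illustrative):

    #!/bin/sh
    # Hypothetical setup script: run once per clone, rerun whenever it changes.
    git config core.hooksPath .githooks   # use the hooks versioned in the repo
    git config pull.rebase true           # an example shared repo default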
Pre-push might address some of the pain points, but it doesn't address the biggest one: it puts the developer in a "git hole" if something in a commit is wrong. While pre-commit will just cancel the commit until the dev fixes it, with pre-push they now need to dig out the knowledge of how to edit or undo existing commits.
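For reference, digging out of that hole is usually one of these (assuming the branch has an upstream configured):

    git commit --amend          # fix the most recent commit
    git rebase -i @{upstream}   # rewrite any of the not-yet-pushed commits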