How does this achieve “remote code execution” as the article states? How serious is it from a security perspective?
> I'm not sharing a PoC yet, but it is an almost trivial modification of an exploit for CVE-2024-32002. There is also a test in the commit fixing it that should give large hints.
EDIT: from the CVE-2024-32002 description:
> Repositories with submodules can be crafted in a way that exploits a bug in Git whereby it can be fooled into writing files not into the submodule's worktree but into a .git/ directory. This allows writing a hook that will be executed while the clone operation is still running, giving the user no opportunity to inspect the code that is being executed.
So a repository can contain a malicious git hook. Normally git hooks aren’t installed by ‘git clone’, but this exploit allows one to be installed, and the hook can run while the clone operation is still in progress.
* https://jdebp.uk/FGA/qmail-myths-dispelled.html#MythAboutBar...
"that may not be the most sensible advice now", says M. Leadbeater today. We were saying that a lot more unequivocally, back in 2003. (-:
As Mark Crispin said then, the interpretations that people put on it are not what M. Postel would have agreed with.
Back in the late 1990s, Daniel J. Bernstein made the famous observation that parsing and quoting, when converting between human-readable and machine-readable forms, are a source of problems. And here we are, over a quarter of a century later, with a quoter that doesn't quote CRs (and that, even after the fix, does not look for all whitespace characters).
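To make the class of bug concrete, here is a hedged sketch (not git's actual code; the function names and the escaping rules are invented for illustration) of a quoter that escapes LF but lets CR through, paired with a reader that strips a trailing CR as line-ending noise. The value silently changes on the round trip:

```c
#include <stdio.h>
#include <string.h>

/* writer: escapes '\n' and '"' but lets '\r' through unquoted */
static void quote_value(const char *val, char *out, size_t outsz)
{
	size_t n = 0;
	for (const char *p = val; *p && n + 2 < outsz; p++) {
		if (*p == '\n')      { out[n++] = '\\'; out[n++] = 'n'; }
		else if (*p == '"')  { out[n++] = '\\'; out[n++] = '"'; }
		else                 out[n++] = *p;  /* '\r' passes through */
	}
	out[n] = '\0';
}

/* reader: deliberately minimal -- only the CR handling relevant here:
 * a trailing CR is treated as part of the line ending and dropped */
static void parse_value(const char *in, char *out, size_t outsz)
{
	size_t n = strlen(in);
	if (n && in[n - 1] == '\r')
		n--;
	if (n >= outsz)
		n = outsz - 1;
	memcpy(out, in, n);
	out[n] = '\0';
}

int main(void)
{
	char quoted[64], back[64];
	const char *original = "path-to-submodule\r"; /* attacker-chosen trailing CR */

	quote_value(original, quoted, sizeof(quoted));
	parse_value(quoted, back, sizeof(back));

	printf("round trip %s\n",
	       strcmp(original, back) ? "CHANGED the value" : "preserved the value");
	return 0;
}
```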
Amusingly, git blame says that the offending code was written 19 years ago, around the time that Daniel J. Bernstein was doing the 10-year retrospective on the dicta about parsing and quoting.
* https://github.com/git/git/commit/cdd4fb15cf06ec1de588bee457...
* https://cr.yp.to/qmail/qmailsec-20071101.pdf
I suppose that we just have to keep repeating the lessons that were already hard learned in the 20th century, and still apply in the 21st.
But it seems like almost no distributions have patched it yet
https://security-tracker.debian.org/tracker/CVE-2025-48386 (Debian as an example)
And the security advisory is from yesterday: https://github.com/git/git/security/advisories/GHSA-4v56-3xv...
Did git backdate the release?
At least a cursory glance at the repo suggests it might: https://github.com/Homebrew/brew/blob/700d67a85e0129ab8a893f...
Why does git not use Landlock? I know it is Linux-only, but why not? "git clone" should only have r/o access to the config directory and r/w access to the clone directory. And no subprocesses. In every exploit demo: "Yep, <s>it goes to a square hole</s> it launches a calculator".
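For what it's worth, the syscall-level plumbing is not complicated. A minimal sketch, assuming Linux 5.13+ (Landlock ABI 1) and placeholder paths, of roughly the policy described above: read-only config, read-write clone destination, and execute denied everywhere. This is an illustration of the Landlock API, not a proposed git patch:

```c
#include <fcntl.h>
#include <linux/landlock.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static int add_path_rule(int ruleset_fd, const char *path, __u64 access)
{
	struct landlock_path_beneath_attr beneath = { .allowed_access = access };
	int ret;

	beneath.parent_fd = open(path, O_PATH | O_CLOEXEC);
	if (beneath.parent_fd < 0)
		return -1;
	ret = syscall(SYS_landlock_add_rule, ruleset_fd,
		      LANDLOCK_RULE_PATH_BENEATH, &beneath, 0);
	close(beneath.parent_fd);
	return ret;
}

int main(void)
{
	/* every access type listed here is denied unless a rule grants it;
	 * LANDLOCK_ACCESS_FS_EXECUTE is handled but never granted by any
	 * rule, so executing files is denied outright */
	struct landlock_ruleset_attr attr = {
		.handled_access_fs =
			LANDLOCK_ACCESS_FS_EXECUTE |
			LANDLOCK_ACCESS_FS_READ_FILE |
			LANDLOCK_ACCESS_FS_READ_DIR |
			LANDLOCK_ACCESS_FS_WRITE_FILE |
			LANDLOCK_ACCESS_FS_MAKE_REG |
			LANDLOCK_ACCESS_FS_MAKE_DIR |
			LANDLOCK_ACCESS_FS_REMOVE_FILE |
			LANDLOCK_ACCESS_FS_REMOVE_DIR,
	};
	int ruleset_fd = syscall(SYS_landlock_create_ruleset,
				 &attr, sizeof(attr), 0);
	if (ruleset_fd < 0) {
		perror("landlock_create_ruleset");
		return 1;
	}

	/* placeholder paths: r/o config, r/w clone destination */
	add_path_rule(ruleset_fd, "/home/user/.config/git",
		      LANDLOCK_ACCESS_FS_READ_FILE | LANDLOCK_ACCESS_FS_READ_DIR);
	add_path_rule(ruleset_fd, "/home/user/clone-dest",
		      LANDLOCK_ACCESS_FS_READ_FILE | LANDLOCK_ACCESS_FS_READ_DIR |
		      LANDLOCK_ACCESS_FS_WRITE_FILE | LANDLOCK_ACCESS_FS_MAKE_REG |
		      LANDLOCK_ACCESS_FS_MAKE_DIR | LANDLOCK_ACCESS_FS_REMOVE_FILE |
		      LANDLOCK_ACCESS_FS_REMOVE_DIR);

	if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) ||
	    syscall(SYS_landlock_restrict_self, ruleset_fd, 0)) {
		perror("landlock_restrict_self");
		return 1;
	}
	close(ruleset_fd);

	/* from this point on the process is confined to the two trees above */
	puts("sandbox applied");
	return 0;
}
```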
See my discussion here: https://dwheeler.com/essays/fixing-unix-linux-filenames.html
One piece of good news: POSIX recently added xargs -0 and find -print0, making it a little easier to portably handle such filenames. Still, it's a pain.
I plan to complete my "safename" Linux module, which I started years ago. When enabled, it prevents creating filenames in certain cases, such as those containing control characters. It won't prevent all problems, but it's a decent hardening mechanism that prevents problems in many cases.
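As a user-space illustration of the sort of rule such a module might enforce (the function name and exact byte set here are my own choices for the sketch, not the module's), rejecting any filename component that contains a control character looks roughly like this:

```c
#include <stdbool.h>
#include <stdio.h>

/* reject filename components containing control characters
 * (bytes 0x01-0x1f and 0x7f); the real module would enforce a policy
 * like this in the kernel at file-creation time */
static bool safe_filename(const char *name)
{
	for (const unsigned char *p = (const unsigned char *)name; *p; p++)
		if (*p < 0x20 || *p == 0x7f)   /* CR, LF, ESC, DEL, ... */
			return false;
	return true;
}

int main(void)
{
	printf("%d %d\n", safe_filename("README.md"),    /* 1: allowed  */
	                  safe_filename("evil\rname"));  /* 0: rejected */
	return 0;
}
```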
> I find this particularly interesting because this isn't fundamentally a problem of the software being written in C. These are logic errors that are possible in nearly all languages
For Christ's sake, Turing taught us that any error in one language is possible in any other language. You can even get a double free in Rust if you take the time to build an entire machine emulator and then run something that uses malloc in the ensuing VM. Rust and similar memory-safe languages can emulate literally any problem C can make a minefield out of... but logic errors being "possible" to perform is significantly different from logic errors being the first tool available to pull out of one's toolbox.
Other comments have noted that in non-C languages a person would be more likely to reach for a security-hardened library first, which I agree might be helpful... but replies to those comments also correctly point out that this trades one problem for another in the form of dependency hell, and I would add that a widely relied-upon library can also increase the attack surface when a novel exploit gets found in it. Libraries can be a very powerful tool, but they are not a panacea.
I would argue that the real value of a more data-safe language (be that Rust or Haskell or Lisp et al.) is in offering the built-in abstractions that lend themselves to modeling data more carefully than as a firehose of octets which a person then assumes they need to state-switch over like some kind of raw Turing machine.
"Parse, don't validate" is a lot easier to stick to when you're coding in a language designed with a precept like that in mind vs a language designed to be only slightly more abstract than machine code where one can merely be grateful that they aren't forced to use jump instructions for every control flow action.