Thank you. If correct, that is helpful. I only checked my own copy, and as far as I can tell the feature is disabled. It may well be that I disabled it; I don't remember. It seems like the kind of thing I would disable if I noticed it, but iTerm2 has so many features and so many settings that I have no idea whether I ever noticed it before this.
I note that the documentation says this:
> Shell Integration
> iTerm2 may be integrated with the unix shell so that [blah blah blah]
> How To Enable Shell Integration
> [blah blah blah]
And that does not make it sound as if it's enabled by default. I really don't know. I only started using iTerm2 about three or four weeks ago.
Disclosure: I didn't discover the vulnerability. I wrote the blog post.
Thanks for releasing a fix!
It was surprising that there wasn't an official release, especially since the bug affects otherwise routine, harmless workflows. The patch itself [1] framed the issue as "hypothetical," so the goal of the blog post was to demonstrate that it is not. I'm glad that you've agreed to release a fix.
> The author of iTerm2 initially didn’t consider it severe enough to warrant an immediate release, but they now seem to have reconsidered.
It's funny that we still have the same conversation about disclosure timelines. 18 days is plenty of time, the commit log is out there, etc.
The whole "responsible disclosure" thing is in response to people just publishing 0days, which itself was a response to vendors threatening researchers when vulns were directly reported.
Disclosure: I didn’t discover the bugs, but helped write the blog post.
These issues are technically classified as local code execution (AV:L), but they go against a pretty strong user expectation: that opening a file should be safe. In reality, they can be triggered through very common workflows like downloading and opening files, which makes them feel a lot closer to some remote scenarios, even if they’re not strictly RCE.
At the end of the day, regardless of how you classify them, it’s worth being aware of the risks when opening untrusted files in editors like Vim or Emacs.
I'm pretty sure the lesson is that, at the end of the day, it's worth being aware of the risks of using git, since security issues intrinsic to git extend to any tool that uses git as a component.
I think we can agree that Git is at least partly responsible for this issue, if not more.
That said, even being aware of that doesn’t necessarily help much in practice. When you’re using Emacs or Vim, you’re not really thinking about Git at all. You’re just opening and editing files. So it’s not obvious to most users why Git would be relevant in that context.
This is why I think editor maintainers should do more to protect their users. Even if the root cause sits elsewhere, users experience the risk at the point where they open files. From their perspective, the editor is the last line of defense, so it makes sense to add safeguards there.
Please read the LLM output critically instead of doubling down on it.
Your defense-in-depth framing makes no sense. If .git/config or similar mechanisms are the attack vector, then adding more editor safeguards would be treating a symptom, as the real problem is git's trust model. The "users don't think about git when using editors" argument also proves too much. Many users also do not think about PATH, shell configs, dynamic linker, or their font renderer either, but you cannot make editors bulletproof against all transitive dependencies...
Seriously, it is actually backwards. Git is where the defense belongs, not every downstream tool that happens to invoke git. Asking editors to sandbox git's behavior is exactly as absurd as it sounds.
And BTW, "technically AV:L but feels like RCE" is your usual blog-post hype. It either is, or is not.
Sure, but you said that was the end-of-the-day analysis, and I didn't think you went far enough in your analysis.
FWIW, I'm not thinking about git at all, since I use Mercurial and never enabled vc hooks in my Emacs (25.3.50.1), so I wasn't affected by this exploit; I tested. I use git and hg only from the command line.
My end-of-day analysis is to avoid git entirely if you can't trust its security model. ;)
Should the emacs developers also do more to secure emacs against ImageMagick exploits?
But you would expect running "git status" or "git ls-files" in the unzipped directory to completely pwn your system? Probably not either.
If you don't trust git, you can remove it from your system or configure Emacs not to use it. If you are worried about unsuspecting people with both git and Emacs getting into trouble when downloading and interacting with untrusted files from the internet, the correct solution is to add better safeguards to git before it executes hooks. But you did not report this to the git project (where even minimal research beyond Claude Code would have revealed that this has already been discussed in the git community).
I suspect that what happened here was that (1) you asked Claude to find RCEs in Emacs (2) Claude, always eager to please, told you that it indeed has found an RCE in Emacs and conjured up a convincing report with included PoC (3) since Claude told you it had found an RCE "in Emacs", you thought "success!", didn't think critically about it and simply submitted Claude's report to the Emacs project.
Had you instead asked Claude to find RCEs in git itself and it told you about git hooks, you probably would not have turned around and submitted vulnerability reports to all tools and editors that ever call a git command.
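The "better safeguards in git before executing hooks" idea mentioned above could take the shape of a trust check: refuse to run a hook unless the repository has been explicitly trusted. This is a purely hypothetical sketch, not actual git behavior; the function name, the `trusted_repos` set, and the overall design are all invented for illustration.

```python
# Hypothetical sketch of a "trust before running hooks" safeguard,
# the kind of protection discussed for git. Not real git behavior;
# all names here are invented for this example.
import os


def should_run_hook(repo_dir, hook_name, trusted_repos):
    """Return True only if the hook exists AND the repository
    has been explicitly marked as trusted by the user."""
    hook_path = os.path.join(repo_dir, ".git", "hooks", hook_name)
    if not os.path.exists(hook_path):
        return False
    # A repo downloaded and unpacked from the internet would not
    # appear in trusted_repos, so its hooks would never execute.
    return os.path.abspath(repo_dir) in trusted_repos
```

With a scheme like this, a freshly unpacked tarball containing a `.git` directory would be inert until the user opted in, which is roughly what has been proposed in the git community discussions.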
>But you would expect running "git status" or "git ls-files" in the unzipped directory to completely pwn your system? Probably not either.
That’s fair, but it would be pretty unusual for me to run Git commands in a directory I’m not actively working on. On the other hand, I open files from random folders all the time without really thinking about it, so that scenario feels much more realistic.
It’s extremely common for shell prompts to integrate Git status for the working directory.
Who’s responsible for the vulnerability? Your text editor? The version control system with a useful feature that also happens to be a vulnerability if run on a malicious repository? The thing you extracted the repository with? The thing you downloaded the malicious repository with?
Windows + NTFS has a solution, sometimes called the “mark of the web”: add a Zone.Identifier alternate data stream to files. And that’s the way you could mostly fix the vulnerability: a world where curl sets that on the downloaded file, tar propagates it to all of the extracted files, and Git ignores (and warns about) config and hooks in marked files. But figuring out where the boundaries of propagation lie would be tricky and sometimes controversial, and would break some people’s workflows.
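The propagation chain described above (downloader marks, extractor propagates, consumer checks) can be sketched in a few lines. This is purely illustrative: the `QuarantineStore` class and its methods are invented for this example, and real Zone.Identifier marks live in NTFS alternate data streams on the files themselves, not in a separate store.

```python
# Illustrative sketch of "mark of the web" propagation, using an
# in-memory store instead of NTFS alternate data streams.
# QuarantineStore, mark, propagate, and is_marked are hypothetical
# names invented for this example.

class QuarantineStore:
    def __init__(self):
        self._marks = {}  # path -> origin URL

    def mark(self, path, origin):
        """Called by the downloader (curl, in the scenario above)."""
        self._marks[path] = origin

    def propagate(self, archive_path, extracted_paths):
        """Called by the extractor (tar): if the archive was marked,
        every extracted file inherits the mark."""
        origin = self._marks.get(archive_path)
        if origin is not None:
            for p in extracted_paths:
                self._marks[p] = origin

    def is_marked(self, path):
        """Called by the consumer (git) before trusting config or
        hooks found at this path."""
        return path in self._marks


store = QuarantineStore()
store.mark("repo.tar.gz", "https://example.com/repo.tar.gz")
store.propagate("repo.tar.gz",
                ["repo/.git/config", "repo/.git/hooks/post-checkout"])

# In the proposed world, git would warn and ignore this config
# instead of honoring it:
assert store.is_marked("repo/.git/config")
```

The controversial part is exactly where this sketch is too simple: deciding which tools count as "extractors" that must propagate marks, and when a user action should strip them.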
If you untar a file and get a git repository, you should absolutely expect malicious behavior. No one does that; you clone repos, you don't tarball them, and cloning doesn't copy hooks for precisely this reason.
Thanks for sharing. I'm one of the co-authors of the blog post. Let me know if you have any questions!
tl;dr: We analyzed a LockBit v3 variant, and rediscovered a bug that allows us to decrypt some data without paying the ransom. We also found a design flaw that may cause permanent data loss. Nothing's earth-shattering, but it should be a fun read if you're into crypto and security!
You can use Google Search and tell Google not to log your search history or use the data for advertising purposes. See my comment [1] for how to turn on these privacy controls.
> not to log your search history or use the data for advertising purposes
People already sense that when Google states it will not do something with their information, but doesn't say what it does do... well, that doesn't feel very comfortable. In fact, it's so uncomfortable that people avoid asking what Google does with that not-logged or not-for-advertising data. On mobile, where the only alternative is a device that costs about five months of my salary, I may not want to know the answer to those questions either, and just hope for the best.
This assumes Google is acting in good faith, which I find hard to believe when Google's consent prompts are intentionally annoying and not GDPR-compliant (for reasons outlined in another comment of mine: https://news.ycombinator.com/item?id=25373600), and when they have used dark patterns like intentionally disabling functionality, such as saving specific locations in Google Maps when location history is turned off.