Given Anthropic's existing track record of producing terrible, hallucinated, inaccurate documentation in Claude Code, I'm very curious how Bun will handle this as development continues. Anthropic probably doesn't care about Bun's external compatibility as long as it runs Claude Code. Will Bun eventually become "the JavaScript flavor that Claude Code uses"? Will they even bother updating external documentation as it changes? Docs currently live at https://bun.com/reference, but I don't know how much of that is separately maintained documentation versus JSDoc-style generated documentation.
I'm also switching back to Obsidian after a few-year stint on Anytype, and the Notebook Navigator plugin is the only one I have installed. This is (I assume) a UI-only plugin, which shouldn't need access to the network or external processes, so it's quite a good candidate for plugin sandboxing.
Would certainly be interesting to learn more about. A simple check: allowlist of known "processes that run as root". Any new process shows up, something happened.
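A minimal sketch of that check on Linux, scanning `/proc` for processes whose real UID is 0 (the allowlist names here are placeholders; a real one would come from a known-good baseline):

```python
import os
import re

# Placeholder allowlist -- in practice, built from a known-good baseline
ALLOWED_ROOT = {"systemd", "init", "sshd", "cron"}

def root_processes():
    """Yield (pid, name) for every process whose real UID is 0."""
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/status") as f:
                status = f.read()
        except OSError:
            continue  # process exited between listdir() and open()
        uid = re.search(r"^Uid:\s+(\d+)", status, re.M)
        name = re.search(r"^Name:\s+(\S+)", status, re.M)
        if uid and name and uid.group(1) == "0":
            yield int(entry), name.group(1)

for pid, name in root_processes():
    if name not in ALLOWED_ROOT:
        print(f"unexpected root process: {pid} {name}")
```

As the replies below this comment point out, the `Name:` field is just the process title, so this only catches unsophisticated cases.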
Proc title is very easily forged (even without root). Obviously a real privileged process could modify the kernel and do whatever it wants, but if I were trying to detect this I would start with /proc/$id/exe.
/proc/pid/exe is also easily forged, without root. For example you can do LD_PRELOAD=evil.so /bin/foo on any dynamic executable, or spawn /bin/foo unmodified and inject code via ptrace or /proc/pid/mem.
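To illustrate how little the exe link proves: it only names the on-disk binary the process was exec'd from, not the code actually running. For this script, for instance, it names the Python interpreter (Linux-only sketch):

```python
import os

# /proc/self/exe points at the binary this process was exec'd from.
# For a script that's the interpreter -- and it says nothing about an
# LD_PRELOADed library, or code written into the process after exec
# via ptrace or /proc/<pid>/mem.
exe = os.readlink("/proc/self/exe")
print(exe)  # the interpreter's path, never this script's path
```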
I have a fileless, execless copyfail exploit that works by injecting shellcode directly into systemd's pid 1. (I should probably publish it at some point...)
Yeah the whole system is based on the ability of one task to apparently become another task, that's how Unix works. So the indicators in /proc are just that: indicative at best.
There's no reason the task should even be assumed to be executing code in a file. A process can map code into anonymous memory and continue executing there without even branching. Again this is considered a feature of the system rather than a flaw.
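A sketch of exactly that, using `ctypes` (assumption: x86-64 Linux; the six bytes are `mov eax, 42; ret`):

```python
import ctypes
import mmap

# mov eax, 42 ; ret -- x86-64 machine code (assumption: x86-64 Linux)
CODE = bytes([0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3])

# Anonymous mapping: no backing file, so the region appears in
# /proc/<pid>/maps with an empty pathname
buf = mmap.mmap(-1, mmap.PAGESIZE,
                prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
buf.write(CODE)

addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
fn = ctypes.CFUNCTYPE(ctypes.c_int)(addr)
print(fn())  # 42, executed from memory with no file behind it
```

File-based integrity checking never sees this region, which is the limitation the IPE discussion below is about.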
Look at the FedRAMP requirements around integrity protection, then look at how massive the list of compliant products is. I promise, pretty much everyone in regulated environments is doing this. It's so prevalent that Azure is even pushing a turnkey solution for k8s: https://learn.microsoft.com/en-us/azure/aks/use-azure-linux-...
Nothing about FedRAMP requires that you enable any of the features you're talking about. Linking to a public preview of an Azure product that doesn't even run with enforcement on is not great supporting evidence.
If you have much experience with FedRAMP, and it sounds like you do, perhaps you might agree that it is a huge list of things that superficially indicate doing something without actually doing anything. As the documentation for IPE freely admits, it has no protective benefits because it is unaware of anonymous executable regions.
It sure has limitations, but "no protective benefits" is pretty wrong. In a real world example, if your containerized application has an RCE, you're preventing the attacker from executing binaries they tampered with or down/up-loaded. Combined with minimal distroless containers, it's a very effective attack surface reduction strategy, and works much better than the legacy scan-occasionally integrity-checking methods (rkhunter et al).
Exactly this. We need to be more precise than blanket statements like "agentic coding is a trap" and start figuring out what a "tasteful" application of agentic coding looks like. ChatGPT is destroying liberal arts curriculums because students can choose not to do any of the thinking themselves and still produce mediocre work that passes the bar. I think the same problem is showing itself with agentic coding, just with more directly measurable consequences (because the pile of software ends up failing in a more spectacular way than the pile of bad writing).
On liberal arts: it's simply a matter of what the students want to get out of the class vs. what the teacher wants the students to do. There's a huge disconnect in goals and expectations, so there's no way for the teacher to actually win. The fact that there's such a disconnect should give the departments pause.
This doesn't happen at all with agentic coding: what the programmer wants and what the boss wants are pretty well aligned. There are corner cases where someone isn't allowed to use LLMs but does anyway, but in most cases the organization agrees.
> what the students want to get out of the class, vs what the teacher wants the students to do: There's a huge disconnect in goals and expectations, so there's no way for the teacher to actually win. The fact that there's such disconnect should give the departments pause.
Unless the teacher's role is to scaffold and support the students in acquiring what the students want, gain trust and lower the disconnect.
Honestly I'm not really thinking about the boss-programmer relationship, but rather the programmer-agent relationship. At best, you get what fnordpiglet is talking about, where it's a symbiotic relationship. On the other side of the coin, you get a parasitic relationship like the OP is talking about: the agent delivers results, you take credit, you fail to develop (or maintain) long-term skills, you become a non-value-adding middleman, you get replaced.
To be fair, many people should be replaced. What is happening right now with layoffs in tech is that the overstaffing these organizations have accrued over the last decade is staggering.
I think it's most easily summarized by: "It's still important to know things, and what was important to know before hasn't really changed." If anything, agentic coding highlights and accentuates the need for good systems and software design knowhow.
I've built this twice before. The main problem I hit is that the AI agents suck at process lifecycle management: leaving processes alive, starting the same daemon multiple times, etc.
From a brief glance over the code I like the approaches I see. Using the `/etc/resolver/` mechanism is a new trick to me!
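For anyone else unfamiliar with it: macOS consults per-domain configuration files under `/etc/resolver/` (see resolver(5)), so a dev tool can route a made-up TLD to a local DNS server without touching the system-wide resolver. A hypothetical example:

```
# /etc/resolver/test -- macOS only; values are illustrative
# Routes DNS queries for *.test to a resolver on localhost
nameserver 127.0.0.1
port 5353
```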
The interesting part to me isn't the port numbers, it's the automatic service start/stop, including idle route shutdown.
Only for your user, and it means that if the system gets rooted, a keylogger can't pull your password to try on other machines. Personally I always either log in as root or use passwordless sudo.
Yubikeys are also surprisingly annoying when set up for this as well. A working developer just needs sudo a lot.
Realistically a "sudo button" would be handy, on the keyboard, with a display to show a confirmation pin for the request (probably also needs a deny button so you can try and identify weird ones).
The problem is not the passwordless sudo but running untrusted programs on your computer under your user. They don’t need sudo to steal your SSH keys or inject malicious code in your .bashrc.
It's worth pointing out that you cannot, definitionally, get "real UID 0" in a "rootless" container, because then it wouldn't be a rootless container. This is relevant because this exploit doesn't claim to be able to bypass user namespaces, and that getting "real UID 0" would be a different exploit.
The underlying exploit allows writing arbitrary values to the page cache, independent of any namespacing, so it should be assumed to allow container escapes even if the given PoC code doesn't do that.
That's fair (although it doesn't have anything to do with getting "real root" in a userns in that case). I guess one approach would be something like modifying the host's logrotate binary and waiting for it to trigger, or something like that. Would escape the container to root on the host directly. I imagine it wouldn't be a sure thing to pull off, either, but definitely straightforward enough that any APT should be asking Claude to develop it.
Kubernetes 1.33 switches to user namespaces enabled by default, which I imagine is the same underlying mechanism that rootless Podman uses. `hostUsers: false` is the way to ensure that root in the pod is not root on the host. It's trivial for a real (unmapped) root to escape a Kubernetes pod.
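For reference, the opt-in looks like this in a pod spec (names and image are illustrative): with `hostUsers: false` the pod gets its own user namespace, so UID 0 in the container maps to an unprivileged UID on the host.

```
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo        # hypothetical name
spec:
  hostUsers: false         # pod does NOT share the host user namespace
  containers:
  - name: app
    image: busybox         # illustrative image
    command: ["cat", "/proc/self/uid_map"]
```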