I started getting into webdev using PHP almost 30 years ago. So I'm probably biased. But when you're developing on just one machine in one language and you can do most of the stuff you need to do within that one system, you can make progress very fast, and the system can support you coding fast (I'm not proud of it but I was live patching production code via SSH and refreshing a web page as fast as humanly possible to make sure it didn't break).
I believe there are several ways to achieve that analogy today, even though the technology we have access to (and our own demands on it) has grown exponentially in complexity. I am happy to see more people thinking about it.
[Side track: I am personally not a fan of "break it up into many tiny systems" (microservices, etc.) since it removes that agility of logic/state moving around the system. I just see it as an attempt to codify the analog of a very large human organization.]
Now that AI lets a single person (and in some cases, no person at all!) write several orders of magnitude more code than they would otherwise have been able to, the requirements of our systems will change too, and our old ways of working are cracking at the seams. In a way we're perhaps building up a whole new foundation, sending our AIs to run 50-year-old terminal commands. Maybe that's all we needed all along, but I do find it strange that AI is forced to work within a highly fragmented system, where 95%, if not 99%, of all startups that write code with AI while hiding it from the user are essentially following the recipe of: (1) launch a VM, (2) tell the AI to install Next.js, and good luck.
I too have a horse in this race and have come to similar conclusions as the article: there is a way to create primitives on top of bare metal that work really well for small and large applications alike, and let you express what you really wanted across compute/memory/network. And I believe that with AI we can go back to first principles and rethink how we do things, because this time the technology is not just for groups of humans. I find this really exciting!
Releases keep shifting from API-first to product-first, with the API now lagging behind the proprietary product surface and special partnerships.
I'd not be surprised if this is the year when some models simply stop being available as a plain API, while foundation model companies succeed at capturing more use cases in their own software.
Yeah this can go many ways but there's a world where OpenAI doesn't sell direct model access for the same reasons Cloudflare doesn't sell direct hardware access.
I made this offline pocket vibe coder using Gemma 4 (works offline once model is downloaded) on an iPhone. It can technically run the 4B model but it will default to 2B because of memory constraints.
It writes a single TypeScript file (I tried multiple files but embedded Gemma 4 is just not smart enough) and compiles the code with oxc.
You need to build it yourself in Xcode because this probably wouldn't survive the App Store review process. Once you run it, there are two starting points included (React Native and Three.js). The UX is a bit obscure, but you can edge-swipe left/right to switch between views.
Since AI became capable of long-running sessions with tool calls, one VM per AI as a service became very lucrative. But I do think a large amount of these can indeed run in the browser, especially all the ones that essentially just want to live-update and execute code, or run shells on top of a mounted file system. You can actually do all of this in the user's browser very efficiently. There are two things you lose though: collaboration (you can do it, but it becomes a distributed problem if you don't have a central server) and working in the background (you need to pause all work while the user's tab is suspended or closed).
So if you can work within the constraints, there are a lot of benefits you get as a platform: latency goes down a lot, performance may go up depending on user hardware (usually more powerful than the type of VM you'd use for this), bandwidth can go down significantly if you design this right, and your uptime and costs as a platform will improve if you don't need to make sure you can run thousands of VMs at once (or pay a premium for a platform that does it for you).[1]
All that said I'm not sure trying to put an entire OS or something like WebContainers in the user's browser is the way, I think you need to build a slightly custom runtime for this type of local agentic environment. But I'm convinced it's the best way to get the smoothest user experience and smoothest platform growth. We did this at Framer to be able to recompile any part of a website into React code at 60+ frames per second, which meant less tricks necessary to make the platform both feel snappy and be able to publish in a second.
[1] For big model providers like OpenAI and Anthropic there's an interesting edge they have in that they run a tremendous amount of GPU-heavy loads and have a lot of CPUs available for this purpose.
I've been building an opinionated, provider-agnostic library in Go[1] for a year now, and it's nice to see standardization around the format given how much variety there is between the providers. Hopefully it won't just be the OpenAI logo on this, though.
I've gotten to a point where my workflow YAML files are mostly `mise` tool calls (because it handles versioning of all tooling and has cache support) and webhooks, and it is still a pain. Their concurrency and matrix strategies also just don't work well, and sometimes you end up having to use a REST API endpoint to force-cancel a job because the normal cancel functionality simply doesn't take effect.
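To illustrate the shape this converges to (the action pin and task names below are assumptions for the sketch, not my actual config — `jdx/mise-action` is a real action, but check its current version before pinning):

```yaml
# Sketch: delegate tool versioning and caching to mise, keep the workflow dumb.
name: ci
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: jdx/mise-action@v2   # installs tools pinned in mise.toml, with cache support
      - run: mise run lint         # task names defined in mise.toml (illustrative)
      - run: mise run test
```

Everything version-sensitive lives in `mise.toml`, so the YAML rarely changes.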
There was a time I wanted our GH actions to be more capable, but now I just want them to do as little as possible. I've got a Cloudflare worker receiving the GitHub webhooks firehose, storing metadata about each push and each run so I don't have to pass variables between workflows (which somehow is a horrible experience), and any long-running task that should run in parallel (like evaluations) happens on a Hetzner machine instead.
I'm very open to hearing about nice alternatives that integrate well with GitHub but are more fun to configure.
If the immediate next-token probabilities are flat, that means the LLM is not able to predict the next token with any certainty. This might happen if an LLM is thrown off by out-of-distribution data, though I haven't personally seen it happen with modern models, so it was mostly a sanity check. Examples from the past that would cause this have been simple things like not normalizing token boundaries in your input, trailing whitespace, etc., and sometimes using very rare tokens, AKA "glitch tokens" (https://en.wikipedia.org/wiki/Glitch_token).
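To make "flat" concrete, here's a minimal sketch of the check (the logit values are made up): softmax the logits and look at the top probability and the normalized entropy of the distribution.

```python
import math

def flatness(logits):
    """Return (top probability, normalized entropy) of a next-token distribution.

    A flat (uncertain) distribution has top probability near 1/len(logits)
    and normalized entropy near 1.0; a confident one is near (1.0, 0.0).
    """
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return max(probs), entropy / math.log(len(probs))

# Confident prediction: one logit dominates.
print(flatness([10.0, 0.0, 0.0, 0.0]))  # top_p ≈ 1.0, normalized entropy ≈ 0
# Flat ("thrown off") prediction: all logits equal.
print(flatness([1.0, 1.0, 1.0, 1.0]))   # top_p = 0.25, normalized entropy = 1.0
```

In practice you'd run this over the real vocabulary-sized logit vector and alert when normalized entropy stays high.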
One thing I tend to do myself is use https://generator.jspm.io/ to produce an import map once for all base dependencies I need (there's also a CLI), then I can easily copy/paste this template and get a self-contained single-file app that still supports JSX, React, and everything else. Some people may think it's overkill, but for me it's much more productive than document.getElementById("...") everywhere.
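A minimal template in that style might look like the following (the jspm.io URL and version here are illustrative — the generator emits the exact pinned URLs for you):

```html
<!doctype html>
<!-- The import map teaches the browser how to resolve bare specifiers. -->
<script type="importmap">
{
  "imports": {
    "react": "https://ga.jspm.io/npm:react@18.2.0/index.js"
  }
}
</script>
<script type="module">
  // Bare specifiers now resolve through the import map, no bundler needed.
  import React from "react";
  console.log(React.version);
</script>
```

From there, any module in the page can import the mapped dependencies by name.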
I don't have a lot of public examples of this, but here's one where I used this strategy for a relatively large app that has TypeScript annotations for easy VSCode use, Tailwind for design, and even loads in huge libraries like the Monaco code editor, and it all works quite well 100% statically:
Yeah, I've found that letting AI build any larger amount of useful code and data for a user who does not review all of it requires a lot of "gutter rails". Not just more prompting, because that is an after-the-fact solution. Not just verifying and erroring a turn, because that adds latency and lets the model start spinning out of control. Isolating tasks and autofixing output are also needed to keep the model on track.
Models definitely need less and less of this with each version that comes out, but it's still what you need to do today if you want to be able to trust the output. And even in a future where models approach perfection, I think this approach will be the way to reduce latency and keep tabs on whether your prompts are producing the output you expected at larger scale. You will also be building good evaluation data for testing alternative approaches, or even fine-tuning.
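As a toy sketch of that combination (the names `generate`, `autofix`, and the JSON output contract are all hypothetical, not any particular product's API): validate every turn, autofix mechanical issues deterministically first, and re-prompt the model only as a last resort.

```python
import json

def autofix(text: str) -> str:
    """Deterministically strip a common mechanical issue: markdown code fences."""
    if text.startswith("```"):
        text = text.strip("`\n")
        # Drop a leading language tag like "json" if one is present.
        text = text.split("\n", 1)[1] if "\n" in text else text
    return text

def run_turn(generate, prompt: str, max_retries: int = 2) -> dict:
    """Ask the model for JSON; autofix first, re-prompt only if that fails."""
    for _ in range(max_retries + 1):
        raw = autofix(generate(prompt))
        try:
            return json.loads(raw)  # the validation step: must parse as JSON
        except json.JSONDecodeError as e:
            # After-the-fact correction, used only when autofixing wasn't enough.
            prompt += f"\nYour last output was invalid JSON ({e}). Output only JSON."
    raise ValueError("model kept producing invalid JSON")
```

The point of the structure is that the cheap deterministic path handles the common failure, so the latency-adding retry loop rarely runs.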