I can see some embryonic potential in the concept, almost like a little spark of genius. I'm convinced some variant of an agentic personal assistant will become commonplace within a few years.
That said, OpenClaw and most of its clones are extremely brittle right now. FWIW, I also tried building my own, thinking the problem was surely the vibe-coded complexity, but it's not that; it's the limitations of the models and their training.
I do still have an OpenClaw instance running on an M1 MacBook Pro in my closet with a local ollama instance (qwen3.5:35b-a3b-coding-nvfp4). It mostly cleans up my notes in my Trilium instance and helps monitor prices of homelab components (on eBay and Reddit) daily.
A "Senior Software Engineer" at Microsoft is someone with a pulse and 3 years of experience (due to title inflation); so despite the "senior" in the title, that's definitely not "senior engineering staff".
I have no doubt Azure sucks, but almost all huge projects like that have systemic issues.
Axel sounds like a pretty smart guy, but I wanted to point out that I've seen this kind of behavior before, often from mid-level "job-hopping" engineers (sometimes with overly inflated egos) who overconfidently declare that everything the organization is doing is BS and that they have the magic solution to it.
And yes, sometimes by sending long-winded emails to very large internal groups about how their solution will address all the problems, if only someone would recognize their genius (and eventually give them a VP title and budget). Some of the time they are well-intentioned but missing crucial historical knowledge about why things are in the state they are, and why what they're proposing was tried 5 times before and failed.
Support for Xbox Game Pass games (typically deployed as UWP / containerized) would be absolutely amazing and likely the final nail in the coffin for Windows for gaming for many people.
According to the enshittification playbook the next step is to discontinue the lower tiers (or price them so high they stop making sense), then celebrate "Copilot adoption" :)
I hope this keeps momentum. If nothing else, it may force assholes like Altman to think a little bit about the impact of a decision to sell services to a government / military.
And it may lead some folks into discovering privacy-preserving local inference as an alternative for a lot of use cases, which is always a plus.
I switched a very long time ago when Gemini was released, and it was a very easy switch at the time. I have never missed ChatGPT, and given current circumstances I'm kind of happy I made the switch. It would be a lot harder for me now to switch away from Gemini (except for code, of course).
Reposting a comment I made on an earlier thread on this.
We need to be super careful with how legislation around this is passed and implemented. As it currently stands, I can totally see this as a backdoor to surveillance and government overreach.
If social media platforms are required by law to categorize content as AI generated, that means they need to check with the public "AI generation" providers. And since there is no agreed-upon (public) standard for imperceptible watermark hashing, the content (image, video, audio) has to be uploaded in its entirety to the various providers to check whether it's AI generated.
Yes, it sounds crazy, but that's the plan: imagine every image you post on Facebook/X/Reddit/WhatsApp/whatever being uploaded to Google / Microsoft / OpenAI / UnnamedGovernmentEntity / etc. to "check if it's AI". That's what the current law in Korea and the upcoming laws in California and the EU (for August 2026) require :(
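To make the concern concrete, here's a minimal sketch of what a platform's checking loop would effectively look like. Everything here is hypothetical: the provider list and the per-provider `check_with_provider` interface are made up for illustration, since no real cross-provider watermark-detection API or standard exists (which is exactly the problem).

```python
# Hypothetical sketch of the "check every upload with every provider" flow.
# No such shared API exists; the stub below just illustrates the shape.

PROVIDERS = ["google", "openai", "microsoft"]

def check_with_provider(provider: str, content: bytes) -> bool:
    """Stub standing in for 'POST the entire file to the provider and ask'.

    A real implementation would have to upload the full content, because
    there is no public standard for a locally computable watermark hash.
    Here we just pretend each provider recognizes its own marker bytes.
    """
    return f"wm:{provider}".encode() in content

def is_ai_generated(content: bytes) -> bool:
    # The platform must ask every provider, sending the whole file each time.
    return any(check_with_provider(p, content) for p in PROVIDERS)
```

Note that with an agreed-upon watermark-hash standard, the platform could compute a short fingerprint locally and check that instead, never shipping the user's full image anywhere; the privacy problem comes from the absence of that standard.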
Is LTSC still impossible to get as someone who doesn't want to run cracked software or "license unlockers" on the same machine they do their banking on? I never found a way of buying it that didn't involve having to survive an interrogation by a sales team.
Haha, I always guess whether or not there will be an LTSC comment before checking the comments. These days it's always there, even early after posting.