The navigation on the site needs work. Because the URLs don't change, there's no easy way to get back to the list except through a hard-to-read link at the very bottom of the page. I'd expect the back button to work here instead of having to hunt around.
Clicking one of the options plays a no-op Hollywood-style loading animation that could frankly be done without. If the results are cached, why not serve them right away?
The end results just seem to be jokey quips and a skills.md that attempts to describe what the company does. I'm not sure what the point of this is.
The toxic behaviour by HN commenters in that thread is absolutely shameful. However strongly you feel about the topic, there is a civil way to discuss things, and that isn't it.
No, it might sound smart, but it's incorrect to equate the two. You don't need to "lock in" to Apple's services either, and Apple has a meaningfully better track record than Google on privacy in many regards.
(Apple's Terms of Service are also much better: there's no arbitration clause anywhere except for the Apple credit card, and that one has a very easy opt-out flow.)
> Humans are not a good target for calm technology.
Exactly the opposite is true. I couldn't even follow the point or the connection being made here, as the article goes on to emit further disconnected revelations and factual errors. I would suggest a human calmly read through the post and sense-check it.
I did tinker a little with mine! RTX 3080 with 10GB VRAM, a 5600X with 64GB DDR4 - not very good, but it was very fun and exciting to tinker with :)
My partner, on the other hand, has an M3 Max with 64GB, which I've had way more success with. Setting up opencode, doing a tiny spec-driven Rust project, and watching it kinda work was extraordinarily exciting!
Of course this is well known. Everything Microsoft does is for selfish capitalist reasons and everything Apple does is for altruistic philanthropic reasons.
Feasibility on commodity hardware would be the true benchmark. Right now, running high-end machines is the only way to get decent results, but if we can run inference on the CPUs, NPUs, and GPUs in everyday hardware, the moat should disappear.
You can already run inference on ordinary hardware, but if you want workable throughput you're limited to small models, and those have very poor world knowledge.
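To make "ordinary hardware" concrete, here's a minimal sketch of CPU-only inference using the Hugging Face transformers library; the model name is just one example of a sub-1B instruct model that fits in a few GB of RAM, not a specific recommendation:

    # Minimal CPU inference sketch; assumes `pip install transformers torch`.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Example small model (~0.5B params); swap in any similar-sized model.
    model_name = "Qwen/Qwen2.5-0.5B-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Tokenize a prompt and generate on CPU (no GPU required).
    inputs = tokenizer("Explain what an NPU is in one sentence.", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=60)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Something like this runs at usable speed on a laptop CPU, which illustrates the trade-off: the setup is easy, but a model this small simply doesn't know much.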
Showing a rich interactive TUI like this to 15-year-old me would have convinced me I was in a fever dream. It's amazing how far they have come. Watching it debug is kind of blowing my mind.