That Vagabond Builds video... They edited it, but left in a claim that it had a frunk. The commentary felt like engagement-optimized stream-of-consciousness blather. They cut out shots of the car being moved and never showed it being driven.
I am surprised US prototyping companies don’t have more regional facilities. It costs ~10x more to have an Oklahoma machine shop cut a sheet metal pattern than to have it shipped from California. The difference seems to come down to letting the customer make their own mistakes rather than requiring manual design verification. I probably would not have purchased a CNC machine if they offered 2-day shipping for <10% of the part cost.
I’m ready for a modern form of representation that isn’t constrained by how many people an old building can hold. I wish small groups could have a representative with a proportionally small fraction of voting power.
Mainly participation. Voter participation is already very low. It would be interesting if voting were more like jury duty, with a random sample of the population selected to vote on each issue. That way there are no termed representatives to corrupt, and participation is always significant and uniform.
I think about troubleshooting like an OBST (optimal binary search tree) with test costs. Systems are a linear chain of points of failure. The more you know about how hard components are to test and which components break most often, the easier it is to choose the tests that optimize your time.
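To make the analogy concrete, here is a toy sketch (names and the chain model are my own, not from the comment): if each test at a boundary tells you which side of the chain the fault is on, picking the expected-cost-minimizing test order is exactly the optimal-BST dynamic program, with per-test costs folded into the weights.

```python
from functools import lru_cache

def optimal_test_plan(p, c):
    """Expected-cost-optimal diagnosis of a linear chain of failure points.

    p[i]: prior probability that component i is the broken one.
    c[r]: cost of the test at boundary r, which reveals whether the
          fault lies in components 0..r or r+1..n-1.

    Returns (expected_cost, best_first_boundary). Classic OBST-style
    DP: a test at r is paid whenever the fault is anywhere in [i, j],
    so its cost is weighted by that range's probability mass.
    """
    n = len(p)
    prefix = [0.0]
    for x in p:
        prefix.append(prefix[-1] + x)

    def w(i, j):
        # Probability mass that the fault lies in components i..j.
        return prefix[j + 1] - prefix[i]

    @lru_cache(maxsize=None)
    def e(i, j):
        # Minimum unnormalized expected cost to isolate a fault in [i, j].
        if i == j:
            return 0.0, None
        best = None
        for r in range(i, j):
            cost = c[r] * w(i, j) + e(i, r)[0] + e(r + 1, j)[0]
            if best is None or cost < best[0]:
                best = (cost, r)
        return best

    return e(0, n - 1)
```

With a failure-prone first component, e.g. `optimal_test_plan([0.7, 0.2, 0.1], [1.0, 1.0])`, the plan tests the first boundary first, matching the intuition that you should probe near the components that break most often.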
They let you write Python programs as long as it’s from memory, though. I wonder what the code golf looks like for a rudimentary Python CAS. If you could evaluate the equation without needing to parse it, I bet you could get a lot of mileage out of a black-box gradient descent routine. The analog circuit solver I wrote for my nSpire (without CAS) was ~11 kB. https://github.com/deckar01/pylacc
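A minimal sketch of that black-box idea (my own illustration, not from pylacc): treat the equation as an opaque callable, square the residual, and descend a finite-difference gradient, so no parsing or symbolic math is needed.

```python
def solve(f, x0, lr=0.01, steps=2000, h=1e-6):
    """Find a root of f by gradient descent on the squared residual
    g(x) = f(x)**2. The gradient is estimated by central finite
    differences, so f stays a black box."""
    x = x0
    for _ in range(steps):
        g_plus = f(x + h) ** 2
        g_minus = f(x - h) ** 2
        grad = (g_plus - g_minus) / (2 * h)  # d/dx of f(x)**2, numerically
        x -= lr * grad
    return x
```

For example, `solve(lambda x: x * x - 2, 1.0)` converges to about 1.41421 (sqrt(2)). The learning rate and step count would need tuning per equation; a golfed version could shed most of the parameters.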
I am skeptical that they decided human input is their bottleneck just as per-token costs from some AI providers spiked. I see this as a way to reduce their compute spend (offloaded to the community), but I doubt they are going to give up any creative control, so their employee review bottleneck probably won’t change.
There is another solution: Get a machine with flow control and a pressure gauge on the group head. You can saturate the puck at low pressure to avoid dry pockets, then ramp the flow rate up until the group head pressure peaks. If the pressure starts to drop you can increase the flow to maintain the group head pressure.
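The hold-the-peak step above is essentially proportional feedback. As a toy sketch (the linear puck model, names, and numbers here are invented for illustration, not real machine behavior):

```python
def hold_pressure(target_bar, resistance, flow=2.0, gain=0.1, steps=100):
    """Toy proportional controller for the flow-profiling step:
    nudge the flow rate until the simulated group head pressure
    settles at the target. The puck is modeled as a simple linear
    restriction, pressure = resistance * flow, which is a gross
    simplification of a real coffee puck (whose resistance changes
    as it erodes during the shot)."""
    for _ in range(steps):
        pressure = resistance * flow             # fake gauge reading
        flow += gain * (target_bar - pressure)   # raise flow if pressure sags
    return flow, resistance * flow
```

For a puck at 3 bar per ml/s, `hold_pressure(9.0, 3.0)` settles near 3 ml/s at 9 bar; on a real machine the barista is the controller, bumping the flow paddle whenever the gauge sags.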
As for the 6 bar coarse grind theory: you may maximize the extraction of soluble coffee mass, but the concentration will be lower. It does not take much extra water to ruin the taste and texture of a latte.
They changed it to do all of the changes in a virtual cloud environment, then dump the final result at the end of the response. Before, it would stream changes, so if it made a minimal fix and then decided to go off on a tangent, you could stop it quickly. Now you have to wait 5+ minutes to get a single line of code out of it, just to find out it also refactored everything and burned a stack of tokens. No amount of prompting seems to force it to make incremental changes locally.
> They changed it to do all of the changes in a virtual cloud environment, then dump the final result at the end of the response.
That’s a hallucination. All they did was hide thinking by default. A quick Google search should easily teach you how to turn it back on (I literally have it enabled in my harness).
I am using Copilot in VS Code and it does stream the thinking output to me. At some point it will say something like "Implementing changes...", similar to "Thinking...", but there is no content to expand. ChatGPT and local models always push code changes in small chunks. Claude used to, but at some point it changed.
Can you blame them for believing thinking tokens are completely hidden now? Anthropic has changed the way to see them repeatedly over a few months, with no warnings or visible upgrade path. First it was shown by default, then you had to press ctrl+o, then ctrl+t, then it got locked behind a settings.json entry, then you had to manually enable it with --verbose, and now it's some random environment variable.
Whoever is their product manager should be embarrassed at the UX they provide.
Product managers reduce velocity. The behavior changes every time another instance of Claude Code thinks something else would be a marginal improvement, with no further oversight or thought put into it.
I’ve started co-opting it specifically for situations where someone confidently claims something untrue that is easy to verify, but ostensibly isn’t intentionally spreading misinformation.