Have you been on Reddit recently? Anti-AI sentiment isn't limited to technical communities; it's platform-wide. The hate itself is the content: apart from the rampant anti-intellectualism, there is very little engagement with any material discussing the technologies themselves.
I don't think you know what "mobilized" means - at least, it implies an actor with ulterior motives is driving this response in people. Usually political actors or the media are accused of this.
What you're describing on Reddit sounds like a broad-based antipathy to AI, which is just... how a lot of people are feeling?
You can criticise their motivation being based in emotions or vibes instead of facts and thoughts, but unless you have evidence to the contrary, it sounds like this is just where people are at on this topic.
Of course I know what "mobilised" means, but with these social media platforms, the platforms themselves play a role in shaping the conversations. To clarify, I think this type of anti-AI sentiment should be studied as part of the baseline social culture of the platforms. I don't have a good answer for why it flourishes on them, beyond that generic framing. Perhaps it's a new type of technological conservatism?
Studies seem to indicate that coffee is at least as healthy as tea, if not healthier, and I have not heard this about caffeine specifically (i.e., the same effects coming from pills or energy drinks).
One fun fact: we still haven’t figured out why coffee makes us poop. We’ve studied every chemical in there and can’t seem to find a link, but the association is uh… well-known.
Sure, but according to semver it's also totally fine for a function that returns a Result to start returning Err in cases that used to be Ok. Semver might be able to protect you from your Rust code not compiling after you update, but it doesn't guarantee the code will do the same thing the next time you run it. While changes like that could still happen in a patch release, I'd argue that you lose nothing by forgoing new API features if all you're doing is recompiling the existing code you have without making any changes, so taking only patches and manually updating for anything else is a better default. (That said, one of the sibling comments pointed out that I was actually wrong about the implicit behavior of Cargo dependencies, so what I recommended doesn't protect against anything, but not for the reasons it sounds like you were thinking.)
Some people might argue that changing a function to return an error where it didn't previously is a breaking change; I'd argue that those people are wrong about what semver means. From what I can tell, it's pretty common for people to have their own mental model of semver that conflicts with the actual specification. Most of the time when coworkers have claimed that semver says something that actively conflicts with the spec, they keep advocating their original position even after I point out the part of the spec that says otherwise. That's fine, because there's nothing inherently wrong with a versioning scheme other than semver, but I try to push back when the term itself gets used incorrectly, because it makes discussions much more difficult than they need to be.
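A minimal sketch of the behavior-vs-compilation point above (the function name and error type are illustrative, not from any real crate): the signature is identical across two hypothetical minor versions, so downstream code keeps compiling, but an input that used to hit the Ok path now hits the Err path.

```rust
// v1.2.0 of a hypothetical crate: any parseable u16 is accepted.
fn parse_port_v1(s: &str) -> Result<u16, String> {
    s.trim().parse::<u16>().map_err(|e| e.to_string())
}

// v1.3.0: identical signature, but port 0 is now rejected.
// Callers recompile without errors; only runtime behavior changes.
fn parse_port_v2(s: &str) -> Result<u16, String> {
    let port = s.trim().parse::<u16>().map_err(|e| e.to_string())?;
    if port == 0 {
        return Err("port 0 is reserved".to_string());
    }
    Ok(port)
}

fn main() {
    // Same caller code, same input, different outcome across versions.
    assert_eq!(parse_port_v1("0"), Ok(0));
    assert!(parse_port_v2("0").is_err());
}
```

Nothing here violates the semver spec as written, which is exactly why pinning to patch releases only limits, rather than eliminates, this class of surprise.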
> ... we’ll ensure that a larger share of internal staff use the exact public build of Claude Code (as opposed to the version we use to test new features) ...
Apparently they are using another version internally.
Why is an imperative SSH interface a better way of setting up cloud resources than something like OpenTofu? In my experience, both humans and agents work better in declarative environments. If an OpenTofu integration is offered in the future, will exe.dev offer any value over existing cost-effective VPS providers like Hetzner? Technically, Hetzner, for example, also lets you set up shared disk volumes.
I don't think SSH vs OpenTofu is the core issue here.
For agents, declarative plans are still valuable because they are reviewable. The interesting question is whether exe.dev changes the primitive: resource pools for many isolated VM-like processes, or just nicer VPS provisioning.
I'm not sure if I'm holding it wrong, but at these usage rates I can hardly see this being useful for designers in their daily work. Two prompts on the Max 20x plan consumed 11% of my weekly limit for Claude Design (which is separate from your normal limits). At that rate, a single day of work would exhaust more than four weeks of quota. Is this meant for intermittent use only? Lately I've been getting the feeling that Anthropic is forgetting how absurdly much we are already paying for these tools compared to conventional development tools, or even competing inference providers.
Yeah. I'm only on the Pro plan and immediately reached my weekly Claude Design quota by having it create a slide template (with much too small text) and three versions of a system dashboard design (rather nice). No iterations.
Another thing: I realized how much I hate waiting for Claude to finish its thing. With UI designs, a quick interaction loop between tool and user feels much more important than with code.
I have wondered this as well. Maybe it's trying to train on which accounts get flagged / time-to-flag or something? Otherwise... who would bother with this? It's so dumb.