Thanks for sharing your story, it was an engaging read.
The part about filters in interviews resonated with me because of a recent experience. The place I work at has been interviewing new developers, and the team lead asked me for my opinion on one of them. Overall, he seemed like a good candidate. But when I took a closer look at the assignment and his solution, I noticed that while the solution was technically good, the candidate had ignored a bunch of requirements outlined in the assignment.
At first I was willing to give him a chance, but when I gave it more thought, I realized that one of the biggest issues I've had with colleagues is them not reading the issue they're given, not understanding it, not fulfilling its requirements, and/or outright ignoring what's written because they've independently decided they know a better solution (without consulting anybody). That solution usually turns out to be worse, for reasons that might not have been spelled out in the issue but that led to the given requirements in the first place.
I pointed this out and felt it was a big red flag that, in a best-case scenario, this candidate was still unwilling to follow or incapable of following clear instructions. The candidate wasn't invited to the next round.
It also really bugs me when I've put more time into reporting an issue or setting someone up for success than they've spent working on a solution.
You would know best, but it struck me that one reason to skip parts of a take-home interview assignment is that it was taking far longer than it "should". A sufficiently senior candidate should have noted this, but (I'm feeling charitable towards junior candidates this lazy Sunday afternoon) maybe that's a reasonable thing for them to learn on a real job.
Ugh... I've had two bad hires in two years that were exactly like this - if you can't follow simple instructions, how have you survived in this career for this long?
Ugh... we have a new colleague who does this repeatedly. Most recently, I said that in order to build, you need to do this:
- git clone <repo1> <dest1>
- git clone <repo2> <dest2>
- git clone <repo3> <dest3>
What do they do? git clone repo1, 2, 3 without giving the <dest> parameter, which clones into default folders named after the repos. The build fails, of course, because repo1 depends on repo2 and 3 having those specific names. He sends me the error log (remote colleague, yay) and I say: you gotta rename those folders. Instead of renaming them, he tries other things for hours, then comes back and shows me other build errors. I look over the errors and realize that, again, the folders are still named incorrectly. Rinse and repeat two more times before the build finally works. Lost a few hours to this. This kind of stuff keeps happening with this colleague. It's really a huge time-sink. If I had more time, I would do a remote call and watch over them, but I'm so deep in my own stuff that I don't have time to babysit (not to mention calls take 1-2 hours with this person, just trying to explain really basic things, over and over and over again).
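The whole dance could probably have been cut short with a tiny pre-flight check that fails fast with an actionable message instead of letting the compiler report a missing path later. A sketch (the folder names repo1..repo3 are placeholders for our real repos):

```shell
#!/bin/sh
# Hypothetical pre-build check; folder names are placeholders,
# not the real layout from our project.
check_layout() {
  for d in repo1 repo2 repo3; do
    if [ ! -d "$d" ]; then
      # Say exactly how to fix it: re-clone with an explicit
      # destination, or rename the wrongly named checkout.
      echo "error: expected folder '$d' here; re-clone with 'git clone <url> $d' or rename the existing checkout" >&2
      return 1
    fi
  done
  echo "layout ok"
}
```

Run as the very first build step, a wrong folder name then surfaces as one readable line instead of a cryptic "can't find repo2" from the compiler.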
If he is young enough, he probably doesn't know how file systems work (and I mean: what a directory is, what files are). That's supposedly quite common now for people who have only ever used mobile devices. So he lacked the fundamentals to understand what you wanted.
That's definitely a valid argument, but I highly doubt it since they know their way around Windows and Linux just fine. I honestly think it might be related to attention span, miscommunication, language barriers and/or maybe a heavy reliance on AI tools (though to be honest, even local LLMs would have spotted the error immediately).
Nice try :) Actually the hard part isn't creating a script to automate 3 git commands - it's that there is no root repo to put such a script, hence the need to manually do it.
But you know what's also frustrating? Code bases which involve multi-step manual steps to build.
You should be able to get a working local environment with a single command.
You should be able to get a working local build with a single build command.
If you have dependent projects, they should either be in a monorepo, or delivered through a packaging system so they don't depend on the specific local naming of other repos.
Having a repo depend on a different repo being in a specific place on the file system is bad, having multiple of them is terrible.
Stick what's needed in an onboarding script, and make sure it works before onboarding someone.
Ideally that script should be kept to a minimum, if it grows too large that's a sign you've split things artificially instead of finding natural splits.
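Concretely, such a script can stay tiny and be safe to re-run. A minimal sketch (the URLs and folder names are placeholders, not anyone's real repos):

```shell
#!/bin/sh
# Minimal onboarding sketch; URLs and destination folders are
# placeholders standing in for the real repos.
set -eu  # stop on the first failure so errors surface immediately

clone_into() {
  # Clone into the exact folder name the build expects; skip if the
  # destination already exists so re-running the script is harmless.
  if [ -d "$2" ]; then
    echo "skip: $2 already present"
  else
    git clone "$1" "$2"
  fi
}

# clone_into https://example.com/repo1.git repo1
# clone_into https://example.com/repo2.git repo2
# clone_into https://example.com/repo3.git repo3
```

The idempotence matters: a new hire can run it again after a half-failed first attempt without having to reason about which clones already succeeded.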
I agree, and there are other fun gotchas that are even more frustrating and convoluted. But everything is thoroughly documented, and I even explicitly pointed out those potential issues before I gave them the assignment. When the errors first occurred, I pointed out the fix, which was ignored, multiple times. Should it be that difficult to rename a couple of folders? The compiler errors were fairly easy to understand: can't find repo2. Is that too much to expect from someone?
In an ideal world, and in retrospect, you are right. But the build process is very old, created by someone long gone, and multiple projects depend on it, each with minor tweaks, all reliant on the same hard-coded paths. IMO that isn't so bad and is easy to work around: it's really not worth the time or energy to allow dynamically named folders, not to mention dangerous, since it's a critical production system that has worked forever. Nobody wants to break a running system, and nobody has the time to clean things up properly, especially since there are tons of build scripts that all rely on these paths. Fixing all of them would be a huge amount of work, spread across multiple projects, all requiring sign-off from higher-ups who will never be able to justify the cost of fixing something that already works.
I hear you, but I still feel for the coworker, and if I newly joined a company and learned on my first day that they are failing the Joel Test[1] here in 2026, I would get that sinking feeling in my stomach that I made a huge mistake. There is no longer a valid excuse for having a build like that. "It's documented..." and "Nobody wants to touch..." "Not worth it to..." "Can't justify..." are all huge red flags.
Nice list, somehow I missed that one. I love Joel's wisdom.
The full build literally works in one step, but the main prerequisite is to perform a few git clone commands, which can be copy/pasted from the readme. I can't help it if they insist on running git clone by hand, let it name the folders incorrectly, and then ignore both the compiler errors and my advice to rename. Changing our antiquated workflow would have required changing a lot of other sensitive dependencies. Whatever happened to the old adage "never change a running system"? ;)
And people here are concentrating on our build system, which I will be the first to admit isn't perfect, but this isn't the only instance of me telling this person how to do things, them not listening to me and doing something else, letting me debug only to realize they didn't listen, telling them how to fix, only to be ignored _again_, rinse and repeat. It's unfortunately a recurring theme. I initially thought it was a communication error on my part, but I've heard similar complaints from several other colleagues as well.
See, you are probably right about that new hire. But the way you say things here, you do come across (at least to me) as "that guy". You know? "That guy" that says "we've always done it this way and that is why it's good and why we still do it that way".
What happened to "never change a running system" is that if the system is barely running at all, you'd better change that system.
If I'm the new guy and you tell me how to do things and those things seem bad and there's no explanation for why they need to be that way, I'll also ignore you, coz I know how I want to do things and how things can be done better. Don't tell me to do things X way. Many roads lead to Rome and some are better than others. See my other reply. At my very first job I was also told how things work and how to do things. But things sucked and so I made them better anyhow.
Now, in the other part of the thread you also mentioned how they just sent you error logs without reading or thinking about them themselves. That definitely is a red flag and the kind of thing that would make me fail someone's probation period. Definitely. But when someone doesn't accept "the way we've always done things" as a good reason to keep doing something bonkers, that's where I'm no longer with you. And again, it's probably just your wording/what you disclose in various parts of these threads, but it explains why "we concentrate on certain things only" ;)
> "That guy" that says "we've always done it this way
I most definitely am not. I constantly push back to change processes, and I do make changes when I'm allowed to, even when it's not budgeted for or even officially sanctioned, because we have legacy systems that need to be maintained and some things are truly outdated and need some love. In fact, as a new hire in another life, I used to get scolded for changing _too much_, because I like to constantly improve things. The problem is, in order to ship code at our company within our budget constraints, you don't have time to constantly refactor, unfortunately.
> is that if the system is barely running at all
Our system is and has been running just fine for many years, thank you very much :)
> there's no explanation for why they need to be that way
There is an explanation: There is no shell script because there is no root for all the repos, and some repos are shared across multiple projects.
I hate reiterating this yet again but is it really that hard to copy/paste a couple git commands? The readme is quite short, and once you've done it one time, it's fairly easy and straightforward to understand.
With you on this one - I worked with a hopeless tester years ago - gave him a detailed test plan with a nice sequence of steps to follow to set up the scenario. I have no idea why but he decided to do the steps out of order, then wondered why my patch didn't do what it was meant to.
at the very least there's no excuse for not having a shell script to check everything out in the right places! I've been the new person in teams with this sort of fragile setup and it's no fun whatsoever.
See, I would probably have been the guy that ignores the dest part. On purpose. Just to see whether this pile of poo shits itself and how much.
I would also recognize what happened when I see the error messages though and then silent quit until I've been with you guys for long enough my resume doesn't take a hit just because your interview process duped me into starting at your place.
Yes this sounds harsh. I know. Nothing against you personally. But I've been at too many such places. One can do things better.
haha, I totally empathize with you. But see, you would realize immediately that it doesn't work and why; you're smart enough to just rename the folders (mentioned in the readme as the very first thing to do), hit build, and voila, everything works as expected. Instead, he sent me log files, let me debug his errors, ignored my repeated instructions, and wasted my time going back and forth until he finally just renamed them and moved on. I had to say it 4x before it finally stuck. This guy seemingly wanted to get an initial build done so he could move on to actually fixing bugs. Maybe it's intentional, to waste everyone's time so he can book hours in the system and get paid for futzing around? :shrug:
And btw, the system doesn't shit itself that much: the compiler errors are fairly straightforward: this folder doesn't exist.
Of course everyone can always do better, but it's a legacy inherited system, and works fine, as long as things are named correctly. The readme is actually very short - literally the git commands are there to copy/paste and will create the correct directory names. There are plenty of other things to get hung up on, but naming folders correctly should IMO not be one of them.
Hehehe, definitely been there and dealt with that guy that I definitely didn't let pass his probation period. I see we do understand each other.
And I might not silent quit right away and try to actually improve things and see how it goes/how you "let me".
This actually reminds me of my very first job. It's been many, many, many years now. I found a stinking pile of disparate shell and Perl scripts that made up the "backend" of the application I was hired to work on, grown hysterically for almost 15 years before I was hired fresh out of university. I started extracting common library code out of every single one of those scripts every time I was tasked with adjusting one. I introduced a proper deployment from source control to production the second time I "broke" something, because someone had previously fixed a bug directly in production and forgotten to check the fix into source control (and no, I didn't believe for one second that it wasn't the guy working on the project with me, after seeing how he coded, how he defended everything that was bad about that pile of poo, and how "it's too complicated to do X").
Well, guess what: this new grad did all of that anyway. Without AI, without IDE refactoring support (I mentioned Perl and shell scripts, didn't I?), and without a single QA person in sight. And yes, every single reader here has very probably bought a product that was "touched" by that software without knowing it, since it was an in-house administration tool.
I agree, but it's probably more common than you think. I had a job where the setup involved running one setup command twice, because the first time always failed. But that was called out in the documentation, so it was fine. The reason it wasn't fixed is that the project was well on its way to being end-of-lifed, so it wasn't worth fixing the crappy setup process at all.
> If you have dependent projects, they should either be in a monorepo, or delivered through a packaging system so they don't depend on the specific local naming of other repos.
Our git master actually considered this, but it would have caused other issues that I can't recall right now, so we got stuck with lots of repos. The readme literally has all the git commands, all they had to do was copy/paste them into the terminal.
No. I've onboarded a few people over the years, and everyone until now has been able to follow instructions. In one case the guy had stuff going on in his personal life, but it wasn't a case of me being unhappy with his output (which I would have understood); he just didn't follow instructions. In the other case, I'm 99% sure my manager and the recruiter misrepresented the job and he didn't want to be here, but... grind it out until you find something new (which he did after 6 weeks). Instead he wasted my time and ended up leaving with a bad reputation.
I've given up on Claude after seeing the response quality degrade so much over the past two weeks, and now this? I've unsubscribed. I don't know why people are still giving this company money.
This past week was a nightmare in trying to get Claude to do any useful work. I've cancelled my subscription and everybody else here having problems should too. I don't think Anthropic cares about anything else.
Why is it our job to micromanage all this when it used to work fine without? Something's clearly changed for the worse. Why are people insisting on pushing the responsibility on paying users?
Had a single prompt the other day where it just tried to examine dependencies that weren't relevant until it hit the rate limit. That was my first prompt of the day. On a task that it was able to do quickly and successfully many times before.
How is any of what you wrote relevant? People aren't using Claude for the first time and hitting rate limits. They've been using Claude for months, at the very least, and they're hitting rate limits without significant changes to how they prompt.
> People need to understand a few things: vague questions make the models roam endlessly “exploring” dead ends.
> If people were considerably more willing to aggressively prune their context and scope tasks well, they could get a lot more done with it
If this were the problem, people would've encountered this when they started using Claude. The problem is not that they can't get anything done. It's being able to get things done for months, but suddenly hitting rate limits way too easily and response quality being clearly degraded, so they can't get things done that used to be possible.
I think in this case we probably have different experiences that shape how we see things: I see many (very smart) people doing things that are not optimal (e.g. copy/pasting entire files instead of referencing them, or telling Claude at every message to "read CLAUDE.md and follow its instructions precisely"), which can waste a lot of tokens. If certain system prompts were tweaked internally, or some models now read more files than before, keeping these inefficient prompts will exhaust limits faster. Sub-agents and this new agent-teams feature didn't exist until a few months ago; that alone eats A LOT of tokens not intended for this pre-paid usage, etc.
The ecosystem is evolving super quickly, so our own experiences and workflows must keep adapting with it: experiment, find the limitations, and arrive at the tightest possible scope that still lets you get things done, because it is possible.
Another example: a pre-paid monthly subscription aggregates usage across the web app and Claude Code. So if you're checking holiday itineraries over your lunch break and then sit down and ask a team of agents to refactor a giant codebase with hundreds or thousands of files, your quota will be exhausted quickly, etc.
I see this "context economy" as a new way of managing your mental models: every token counts, and every token must pull its weight for the task at hand; otherwise I'm wasting budget. I'm also still learning how to operate this way, and while there have been genuine issues with Claude Code, not every issue people encounter is an upstream problem.
This is literally victim blaming. When people haven't been having issues until now, why is it their fault? Anthropic is providing a paid service to paying users. It's not acceptable that they degrade our experience to save some money and it's not acceptable to blame everybody else who didn't cause the issue.
In the end, Anthropic is a company and needs to make money. My best bet is that even those of us who pay 100/mo to use Claude Code are costing Anthropic money, on top of everything else they're burning on inference.
Again, I agree with you, and the service should at least be reliable. But to be completely fair, if I had to bet, the amount of usage people get for 100/mo is probably only balanced out by the corporate/enterprise customers paying Anthropic via API usage.
If we look at it through this lens, these limits are not surprising at all, except maybe in how generous they are/were. It's pretty obvious that they want to push people towards pay-as-you-go.
I wish we could just have comments removed where it's clear the author didn't even put in the minimum effort of reading the article. It's disrespectful to the rest of us.
It would be too fucking funny if this were the case. They're vibe coding their infrastructure and they vibe coded their response to the increased load.
You'd think they would have dashboards for all of this stuff, to easily notice any change in metrics and be able to track down which release was responsible for it.
Honestly, it feels like RTS players might qualify considering how much multitasking is required in a game like Starcraft. Maybe they should add a StarCraft 2 competitive rank qualification.