If you have people on your team who are valuable enough to demand a 40% compensation increase, then you should have been paying them 40% more without them having to demand it.
It never ceases to amaze me that the owner class is so continually shocked that the people who build the value that the owners leverage into growing their fortune might suddenly realize their value.
The entitlement is baked deep into the mindset that there are people who work and people who profit.
Often enough they don't care, or don't realize it. I've seen all the data analysts who were either (1) senior or (2) also had data engineering skills leave.
They all got 25% more after leaving. And this is in the Netherlands, where pay is already kind of low. What was even weirder is that this was an F500 company, and the people who left went to all kinds of organizations (small and large) and still got paid more.
That includes me, by the way. I was doing 4 roles (data analyst, pentester, software engineer covering web development plus AI and data engineering, and project manager). It was a lot of fun, but the pay was decidedly mediocre.
Currently going to a fun job that pays way better.
It gets worse. I've seen some managers hold back strong developers because they want everyone to be a replaceable cog. They push for average work across the team so no one becomes irreplaceable--even if it means the product ends up weaker than it could be.
"Value" is not this objective measure that can be deduced from someone's output. It depends on many variables like cost of living, the job market, tech trends, and industry competition. Could very easily fluctuate 50% or more in a short period of time.
I don't sit down and decide "here's how much I'm going to pay the plumber." I call a plumber and get a quote. I can keep going until it feels worth it.
My enlightenment comes from talking to a bunch of plumbers and hearing what they're gonna charge. That's the market rate.
If I hire an employee and they get good at their job and someone else offers them a 40% raise, they'll probably leave. That's a lot of money.
Some companies care about retention; most don't, really.
You're making the wrong comparison. If your plumber increases his rate by 40%, will you "fire" him and get another plumber who will also be 40% more expensive, or a plumber who charges the old price but isn't good, or accept his price increase?
The employer and the employee have different goals and metrics for "success". This mismatch comes up all the time whenever "interviewing is broken" threads come up. Many interviewees think the goal is to find the best person for the job. It isn't. It is to find someone sufficiently good to do the job for the least money possible. This means that if you filter out "better" candidates, it's not a failure if the position gets filled anyway.
So the mistakes being made here are:
1. You think you're irreplaceable. You're not. "But--". "--Nope";
2. You think it's more expensive to replace you. Possibly, but irrelevant. You see, if they give you a 40% raise, there now might be 10 or 100 other people who will demand a 40% raise. It's cheaper overall not to give you that raise.
This comes up all the time when landlords lose good tenants by raising rents $100-200/month. Tenants will rightly point out that they'll lose more with the vacancy period than they'll get from $100-200/month. Also irrelevant. The landlord will often have 10 or 100 or 1000 or 10,000 units. They give a $200 increase to all of them and not all of them are moving. The increased income from those who don't will exceed the losses from those who move.
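(Made-up numbers to illustrate: 100 units × $200 = $20,000/month in extra rent. If five tenants leave over it and each vacancy costs, say, three months of $2,000 rent, that's a one-time $30,000 hit, recouped in under two months.)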
Plus, the $200/month extra increases the property value. Someone may lend against that increased value to buy even more units;
3. Ultimately, any enterprise can only increase profits by raising prices or lowering costs, particularly wages. Suppressing wages becomes the entire business of the company. That's what the permanent-layoffs culture is for (to extract more unpaid work from those who remain and to stop them asking for raises). That's what AI is for. Suppressing wages is THE product for AI.
Remember there's a fundamental imbalance here. If a company loses a particular employee, most likely they either won't notice (at least for a while) or they'll simply be temporarily inconvenienced.
What happens if you don't have a job? You might lose your house, your car, your health insurance, your children's school and so on.
The stakes for you are so much higher, so in any difficult hiring market you will be squeezed.
I don't know if you simply haven't worked with people who are irreplaceable or what, but I assure you with genuine confidence that there are people who cannot be replaced without massive disruption and undesirable risk.
Could you expand on that? I'm genuinely interested.
I personally identify as a leftist and it's my perception that the left is completely missing the moment on AI, to my great frustration. From my perspective, left-leaning people increasingly project an "AI evil" vibe even though most of them simply don't have any direct exposure beyond seeing Sora slop and hearing about data centres.
I actually think the anti-AI sentiment on the left is subsiding. More of my friends are using it and asking about it, and I have become active in a local Indivisible group, where more than half are using it. Those people were very excited to have someone with deep knowledge around. The remaining holdouts put up softer resistance; they're skeptical mostly because they have heard bad things about the environmental impact. I'm personally more concerned about the social side and second-order effects.
I'm trying to help them understand two things:
1. Like all of computing history, we will become more efficient and have less environmental impact. The most likely slowdown will come from energy availability. We need to step up our renewables; it's not so bad if it's good energy.
2. We have moved up the stack. These are not simple text-in-out machines. The training and models are more sophisticated. We now give them tools, skills, constraints and have them operate in teams. Human in the loop is still important.
Getting off-topic, but as a successful high-school dropout I am compelled to remind anyone reading this that [the American] college [system] is a scam.
That's not to say that there aren't benefits to tertiary education, for many people in different contexts. It's just not the golden path that it's made out to be.
Many people currently in college are just wasting their money and should enroll in trades programs instead.
Meanwhile, nothing about being in or out of school is mutually exclusive with using LLMs as a force multiplier for learning - or for solving math problems, apparently.
Care to actually refute? Interesting that even an LLM would give an attempt at it, but apparently those who only bother to hit the downvote button aren't even meeting that level of "intelligence".
> It is in the same way that educated guessing is.
I guess (heh) it depends on your definition of 'educated guessing'? Looking at the problem, considering a solution, discarding it, trying another and testing, iteratively, is how most people would approach any tricky problem.
Brute force is substantially different. It would be saying that, other than maybe setting some basic bounds and heuristics, I'm going to try literally everything and test each. That's not at all what the LLM did here.
I would love to better understand how a device launched the year before I was born could be so flexible in its configuration and operation. I can't update the code running on a microcontroller on my desk in front of me without it triggering a reboot.
When they talk about rerouting power and performing a "big bang" reconfiguration with a 23-hour lag on equipment that was underpowered when the 8088 came out... it kind of melts my brain.
Apparently it still has ten years' worth of fuel left!
NASA pioneered a lot of what underpins modern design of critical computer systems. Voyager's systems are impressively robust. As far as I know, they can patch it by directly sending up new assembly instructions that are written into its memory, and doing a warm reboot to get it to start executing new instructions without powering down anything. They had the foresight to make their software highly editable, while also having multiple redundancy and emergency systems. Despite this, I wonder how much pressure the people writing this software feel. Even with all the simulators and months of rigorous testing, sending up something that can (in the worst case) break the probe has to be terrifying.
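To make that concrete, here's a toy C sketch of the general patch-then-warm-restart pattern. To be clear, this is invented purely for illustration; the real FDS is custom hardware running custom assembly, not C, and all the names here are made up:

```c
/* Toy illustration of "patch in place, then warm restart".
 * NOT actual Voyager FDS code; names and layout are invented. */
#include <stdint.h>
#include <string.h>

static uint16_t program_memory[4096]; /* stand-in for FDS program memory */

struct uplinked_patch {
    uint16_t addr;          /* word offset to overwrite */
    uint16_t nwords;        /* number of words in the patch */
    const uint16_t *words;  /* the new instructions, received from Earth */
};

/* Write the new instructions into the live program image, then re-enter
 * at the entry point. Nothing is powered down: a "warm reboot" restarts
 * execution while deliberately preserving state that must survive. */
static void apply_and_warm_restart(const struct uplinked_patch *p,
                                   void (*entry_point)(void))
{
    memcpy(&program_memory[p->addr], p->words,
           (size_t)p->nwords * sizeof(uint16_t));
    entry_point();          /* jump back into the (now patched) program */
}
```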
I second your recommendation. I watched it last night, and loved it. It was beautiful to see the level of competence and devotion of the tiny group running the spacecraft.
Here's a talk about how the Voyager team fixed the flight data computer on Voyager 1 when a memory chip went bad on it a few years ago. It goes over how the flight computer works and he walks through a few assembly routines. https://www.youtube.com/watch?v=YcUycQoz0zg
Some of the challenges they had to deal with while developing the fix:
- The only source code they had for the flight data software was an OCR'd Microsoft Word document (with typos) that was likely scanned from a hard copy assembler listing printout.
- The processor runs a custom instruction set developed by JPL for the Voyager mission. The documentation they had on the processor was incomplete.
- Everybody who had designed the flight software was dead.
- They had no assembler, no debugger, and no processor simulator. They had no testbed; the only two FDS processors were in space.
The 2025 YouTube video is "How We Diagnosed and Fixed the 2023 Voyager 1 Anomaly from 15 Billion Miles Away" by David Cummings of JPL.
There is a Vimeo video of the Voyager team reacting when data first began trickling in from Voyager 1 after the fix in April 2024. "Voyager 1 Team Reacts to Receiving Engineering Data From Spacecraft" (JPLraw channel): https://vimeo.com/939376171
Cummings is the one against the back wall who shoots both arms up in the air in celebration. He and Armen Arslanian (in the blue shirt to his left, on the right in the image) developed the software fix.
> microcontroller on my desk in front of me without it triggering a reboot
Most microcontrollers can update their own flash while running, either with a built-in bootloader or a user-programmed bootloader that takes up a little bit of the flash.
What makes you think that Voyager isn't "rebooted" though?
This kind of update is often kind of ass to do, though, because you may not be able to execute from said flash while you’re updating it.
So you copy a small write routine into RAM, copy a chunk of new data there too, jump to the routine, then it returns to your main bootloader in flash which receives the next chunk from a UART or whatever (because of course it doesn’t fit into RAM all at once), rinse and repeat. You aren’t exactly going to be serving realtime interrupts during this.
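A minimal C sketch of that dance, assuming GCC-style attributes, with hypothetical register names (FLASH_CR, FLASH_SR) and a hypothetical uart_read() standing in for whatever your MCU's flash controller and transport actually look like:

```c
/* Sketch of a RAM-trampoline flash update. FLASH_CR, FLASH_SR, and
 * uart_read() are hypothetical stand-ins; every MCU family differs.
 * (Erase-before-write and error handling are elided for brevity.) */
#include <stddef.h>
#include <stdint.h>

#define CHUNK_WORDS 64
#define CR_PROGRAM (1u << 0)
#define SR_BUSY    (1u << 1)

extern volatile uint32_t FLASH_CR, FLASH_SR;  /* hypothetical registers */
void uart_read(void *buf, size_t len);        /* hypothetical helper */

/* The linker script places this in RAM: the flash array can't serve
 * instruction fetches while a program operation is in progress. */
__attribute__((section(".ramfunc"), noinline))
static void flash_write_chunk(volatile uint32_t *dst,
                              const uint32_t *src, size_t nwords)
{
    FLASH_CR |= CR_PROGRAM;
    for (size_t i = 0; i < nwords; i++) {
        dst[i] = src[i];
        while (FLASH_SR & SR_BUSY) { }  /* busy-wait; no ISRs run now */
    }
    FLASH_CR &= ~CR_PROGRAM;
}

/* Runs from flash; only the inner write loop executes from RAM. The new
 * image is streamed in chunks because it won't fit in RAM all at once. */
void bootloader_update(volatile uint32_t *app_base, size_t total_words)
{
    static uint32_t chunk[CHUNK_WORDS];

    for (size_t off = 0; off < total_words; off += CHUNK_WORDS) {
        size_t n = total_words - off;
        if (n > CHUNK_WORDS) n = CHUNK_WORDS;
        uart_read(chunk, n * sizeof(uint32_t));
        flash_write_chunk(app_base + off, chunk, n);
    }
}
```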
(So if you do need minimal downtime, you probably have dual external flash chips, or even just two microcontrollers given execute-from-external-flash would bump you up to fancy micros.)
With sufficient motivation and effort, you could have a self-updating microcontroller. You could, if you really wanted to, write firmware just as robust, reliable, and flexible as the Voyager system.
It's just that in most cases, the amount of effort required is orders of magnitude higher than is really justifiable.
It's a real shame that people bought into this false dichotomy, because the base reality is that people in web dev who stubbornly pick either code or layout are more of a liability than an asset.
I don't believe that people who can design and code are as rare as folks seem to believe, either. What seems more likely is that there are a LOT of coders who are extremely fluent in CSS but aren't particularly gifted when it comes to making things look good.
It wasn't that long ago that designers understood that they couldn't just hand off a 2D comp of what they want to see. The job isn't done until the output can be integrated into the app. Nobody gets to launch cows over the wall and go for lunch.
Honestly, I never understood the move to create an artificial dichotomy between design and code with a heavy layer of tooling.
I suppose a layout engine made sense in the context of Flash, if you saw the future of the web as a set of keyframe animations. But the notion that there's a lot of value in creating a very heavy, high-friction abstraction between the UI/UX and the platform it ultimately runs on was always going to be a loser.
In the end, it turns out we're all just web developers, regardless of our weapon of choice.