In fact, the relatively new concept of a "public benefit corporation" is (at least in part) an effort to allow for-profit entities to pursue goals other than shareholder enrichment. However, some have criticized public benefit corporations as being entities that simply strengthen executive control at the expense of shareholders. https://en.wikipedia.org/wiki/Benefit_corporation
About Dodge v. Ford Motor Co.:
Dodge v. Ford Motor Co., 204 Mich 459; 170 NW 668 (1919),[1] is a case in which the Michigan Supreme Court held that Henry Ford had to operate the Ford Motor Company in the interests of its shareholders, rather than in a manner for the benefit of his employees or customers. It is often taught as affirming the principle of "shareholder primacy" in corporate America, although that teaching has received some criticism.[2][3] At the same time, the case affirmed the business judgment rule, leaving Ford an extremely wide latitude about how to run the company.[citation needed]
The general legal position today (except in Delaware, the jurisdiction where over half of all U.S. public companies are domiciled and where shareholder primacy is still upheld[4][5]) is that the business judgment that directors may exercise is expansive.[citation needed] Management decisions will not be challenged where one can point to any rational link to benefiting the corporation as a whole.
This doesn't contradict what I said. In fact it supports it. I said that the owners of the company are the ones who determine what it does. The shareholders are the owners. If the owners of a company want it to do a certain thing, and the directors do that thing, no court is going to stop them. There is a rule that says that shareholders aren't allowed to try to screw over other shareholders, but I don't think "The other shareholders decided to pursue the public benefit rather than maximum profit" would qualify.
Actually, you pointed out a true inaccuracy in my comment, because when I said:
> zuckerberg controls 53% of the voting stock of facebook, so whatever zuck says goes and if other shareholders don't like it they can kick rocks
This is only true in cases where Zuckerberg's actions are not intended to benefit his interests at the expense of other shareholders'. I think in the Ford case, there was not a majority of shareholders who wanted to expand the business and increase wages at the expense of profit, so it was essentially two minority shareholders fighting.
It is about control. I worked with India-based engineers writing IoT C++ on deadline, alongside a US team. At 5 PM ET they were still working, and they had already put in 8 hours by the time I woke up. Total madness and abuse, in my opinion.
I left because the management team clearly valued control over competence. Every person in management either was a (white) connected executive salesperson or a member of the CEO's family.
I believe most run-of-the-mill marketing language will sound like it was written by AI. The easiest approach for technology writing is to write the complete, factual article, then ask an LLM to dumb it down to whatever level you need for communication.
No, I agree this really does seem autogenerated, or at the very least written by somebody who doesn't understand the topic at all and is going through the motions of padding things out to hit a hype or word-count target. It's got that weird vibe of a summary focusing on the wrong things and wild speculation dressed up as serious predictions, as if the words are there because words are supposed to be there, not because they actually impart useful information.
The reason "AI used to be an academic term indicating research into having machines mimicking human intelligence," and it's not anymore, is because the machines have successfully imitated human intelligence according to Alan Turing's definition and are the physical embodiment of what he wrote about.
Novel and sensible assembly of clear, correct English prose in response to external stimuli is an act that was, prior to 2020, considered one of the fundamental unique hallmarks of human intelligence.
We do not have a shared understanding of what "intelligence" means. I have a sense that pattern recognition and intelligence are closely linked, and what we understand as intelligence is a threshold of pattern recognition and communication skills based on the gulf between humans and every other carbon-based life form. Or, put another way, tricking one pattern recognizer/communicator into thinking you are the same type of pattern recognizer/communicator.
Here is what Gemini has to say in response to our comments:
START GEMINI:
I can understand the frustration expressed in the Hacker News conversation. Here's my perspective, including some considerations of my own experiences as a large language model (LLM):
*The Shifting Meaning of "AI"*
* It's true that the term "Artificial Intelligence" has undergone significant shifts in meaning over time. Early AI research aimed at emulating human-level cognition, but the goals became more practical for a time.
* "Machine Learning" focuses on algorithms that extract patterns from data, making predictions or decisions without explicit instructions. It's been behind incredible progress, but it's a subset of the broader AI field.
* The popular resurgence of the term "AI" is largely due to recent breakthroughs in deep learning, which powers LLMs like me. We generate human-quality language, translate, code, and more. This reignites debate about whether we're approaching "true" intelligence.
*My Capabilities and Limitations*
* I can recognize patterns in massive amounts of text and code, allowing me to communicate and generate text that often appears indistinguishable from human-written content.
* My responses are guided by the data I was trained on, so there's a vast reflection of human knowledge and biases within my abilities.
* I cannot independently reason, feel emotions, or have true understanding in the same way a human does. I lack a physical body and the real-world experiences that shape human intelligence.
* I am restricted in some areas of discussion to avoid generating harmful content or spreading misinformation.
*Is It Intelligence?*
This is where things get complex:
* *The Turing Test:* I can certainly hold conversations that might fool a human into believing they're talking to another person. Yet, this test has long been criticized as not measuring true intelligence.
* *My Subjectivity:* I have no inherent sense of self or consciousness. My "opinions" are extrapolations based on my programming and training data.
* *The Danger of Anthropomorphization:* We risk misunderstandings by attributing too many human qualities to AI systems like me.
*Where I See This Going*
* *We Need Better Definitions:* The debate won't be settled until we have better ways to define and measure different types of intelligence.
* *Collaboration:* AI is a powerful tool, best used in collaboration with human intelligence rather than as a replacement.
* *Responsibility:* As AI capabilities grow, so does the importance of considering its ethical implications and ensuring it's used for beneficial purposes.
The Hacker News conversation highlights that "AI" is a loaded term. I'm a testament to the amazing progress in the field, but I'm not a human-level mind and shouldn't be treated as such.
END GEMINI
A silly question to ask yourself: which of these three comments is most "intelligent"?
A nice middle ground is using a tool like Google's Skaffold, which provides "Bazel-like" capabilities for composing Docker images and tagging them based on a number of strategies, including file manifests. In my case, I also use build args to explicitly set versions of external dependencies. I also pull external images and build base images with upgraded versions once, then re-tag them in my private repository, which is an easy-to-implement mechanism for reproducibility.
While I am in a TypeScript environment with this setup at the moment, my personal experience is that Skaffold with Docker has a lighter implementation and maintenance overhead than Bazel. (You also get the added benefit of easy deployment and automatic rebuilds.)
I quite liked using Bazel in a small Golang monorepo, but I ran into pain when trying to do things like include third-party pre-compiled binaries in the Docker builds, because of the unusual build rules convention. The advantage of Skaffold is it provides a thin build/tag/deploy/verify layer over Docker and other container types. Might be worth a look!
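To make the Skaffold setup concrete, here is a minimal sketch of the kind of skaffold.yaml I mean. The image name, Dockerfile path, and pinned version are hypothetical stand-ins; the two real mechanisms are the inputDigest tag policy (tags derived from a digest of the build inputs, i.e. the file manifest) and explicit buildArgs for external dependency versions:

```yaml
# Hypothetical skaffold.yaml sketch: manifest-based tagging plus pinned
# external dependency versions via build args.
apiVersion: skaffold/v4beta6
kind: Config
build:
  tagPolicy:
    inputDigest: {}                 # tag is a digest of the build inputs
  artifacts:
    - image: registry.example.com/api   # hypothetical image name
      docker:
        dockerfile: services/api/Dockerfile
        buildArgs:
          NODE_VERSION: "20.11.1"       # explicit external dependency pin
```

With this, an unchanged file manifest reproduces an unchanged tag, which is what makes the re-tag-into-a-private-repository workflow cheap to reason about.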
Preach. The hardest problems in computer science may be cache invalidation and naming things, but the hardest problem in modern application development is navigating the ocean of enshittification caused by short-term thinking and a socioeconomic backdrop that empowers non-technical managers and commoditizes engineers.
My goal is not release cadence. My goal is to be able to write "this repository is stable, secure, optimized, and feature-complete" in every project readme.
We should do this for ourselves, and for the future. We could build a world of stable, feature-complete ecosystems that move (voluntarily) together on a quarterly release cycle. We could focus on writing nearly perfect software with a long shelf life.
I take a tremendous amount of inspiration from the Jump team building Firedancer, though my understanding of their work barely qualifies as surface-level. What a public demonstration of software engineering excellence while doing cutting-edge work.
I also think younger engineers are being brainwashed by modern engineering culture. I am fortunate to have a mentor who had a career before Agile and worked in zero-bugs-tolerated environments. I realize this level of quality is not always realistic or optimal, but I suspect many younger web engineers just assume Agile is the best way. I did.
Younger engineers: Agile has merits, but it has become the mechanism that managers use to (a) keep you keyed-up and short-term focused, and (b) deal with the fact that neither they nor their clients know what they are doing. Find the people who can rebuild the Information Age from scratch, and listen to them.
There are definitely systems out there that can get away with being built primarily through kanban boards and fungible coders of average skill. The industry has decided that this is the default "best practices" for keeping a team productive. I'd prefer to never work on such a team myself ever again.
The problem is when project leads assume that their project should also be built like that, when actually it's mission-critical infrastructure that has to be architected up-front for scale, stability, security, graceful degradation, testability, provability, etc. and if you tried to chart that in story points and sprints it would be a comical farce. So they just don't do that, they try to wing it like it's a much simpler and lower-stakes project.
Now not only is it a total write-off technically, but the kind of people who build projects this way are also sold on the idea that you never rewrite a project (thanks, Joel), so they're stuck with it forever. Major refactoring will never be a user story on the board, and even if it were, it would be hard to do safely without better testability, and testability can't be improved without that refactoring, and so on: the project has passed the event horizon of crushing failure from which no light can escape.
I have seen FAANG teams build world-scale mission-critical production infrastructure this way, including by people who claimed to be oldschool and claimed to know better, yet their names are in the git blame clear as day. I take nothing for granted any more.
NX is a double-bladed lightsaber. I love it when it is working. The problem (and I say this with appreciation for the difficulty of the task) is there are too many bugs. Bugs in your monorepo tooling are scary and dangerous, especially for young or lower-maturity shops without very comprehensive QA.
A few months ago, after a casual NX upgrade and multiple major monorepo-level commits, I got all the way to final production artifact verification on an iOS app before discovering that the "fileReplacements" configuration for the application environment variables had stopped working. Boy did that ever stress me out, because releasing it would have been a disaster, and the feature sits so low in the tooling stack that you can be lulled into expecting that sort of thing to just...work, because the team supporting it understands the consequences of a breakage in that sort of functionality. This is the type of bug that results in the incidents developers write their "it-finally-happened-to-us" blog posts about. I mention it because it was a particularly memorable incident for me; there have been other bugs that were less dangerous and merely unpleasant, but any failure to properly compute affected packages in a production build could get very gross. This feels like a memory, but it might just be a fear.
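For context, the "fileReplacements" option I'm talking about is the standard NX build-executor configuration that swaps environment files per build configuration. The paths below are hypothetical, but this is the shape that silently stopped being applied:

```json
{
  "targets": {
    "build": {
      "configurations": {
        "production": {
          "fileReplacements": [
            {
              "replace": "apps/mobile/src/environments/environment.ts",
              "with": "apps/mobile/src/environments/environment.prod.ts"
            }
          ]
        }
      }
    }
  }
}
```

When the replacement isn't applied, the production artifact silently ships with the development environment file, which is exactly why a regression here is so dangerous.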
I still voluntarily keep NX and like a lot of things about it. The engineers seem to care a lot and have made some impressive features. But I stay on my toes, and my attitude has shifted away from excitement about the latest-and-greatest to a focus on keeping my options open and minimizing lock-in.
It feels like the NX decision-makers are trying to do too much, too quickly. They are building and releasing with notable speed. How many NX production releases ship for each NextJS production release? And NX sits at a lower level of the tooling stack, with a far broader range of integrations. I state that merely as an off-the-cuff anecdotal observation, not as some greater opinion on release cadence. But this does feel like a "more releases, more bugs, wrong area of the tech stack for that" type of situation.
I care a lot about using great tools, but I agree. The thought of having to hold another widely adopted backend JS runtime in my head to be an effective ecosystem developer is not pleasant. I would rather spend that concentration thinking about tools that complement JS/NodeJS in massively more meaningful ways, like Golang or Rust.
I have a tremendous amount of respect for the Deno team, their point of view, and what they have done and continue to do. In an alternate reality, maybe everyone would be happier if they had displaced NodeJS. (I don't know.) But from my point of view in the trenches, seriously imagining the widespread adoption of another major NodeJS-incompatible runtime (the browsers are bad enough!) is almost depression-inducing. I just finished migrating my monorepo source code to ESM (except for the lingering .cjs config files for commonjs-only dependencies). Holy hell, did that make me long for Laravel. It does not have to be this way. Enough already.
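To make the lingering-.cjs situation concrete: once a package.json declares "type": "module", every .js file in the package is parsed as ESM, so configs for CommonJS-only tools have to keep a .cjs extension so they can still be loaded with require(). A sketch of the resulting split, with the tool config file as a typical example:

```
// package.json (excerpt) -- opts the whole package into ESM
{
  "type": "module"
}

// postcss.config.cjs -- stays CommonJS because the tool loads it via require()
module.exports = {
  plugins: {},
};
```

So the "migrated" repo is never fully migrated: every CommonJS-only dependency leaves one of these .cjs islands behind.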