Most cloud features are open source tools with special sauce sprinkled in. But at the same time these companies heavily fund said OSS projects, so I suppose it's not just pure community-based work.
It's an encrustified open-source offering where the original vendors aren't compensated, where there's lock-in, proprietary-offering creep, and highway-robbery billing.
The ad for 'deleteme' in the text of the article is too much. But also, Stephan was promoting a lot of crypto stuff that turned out to be scams and now basically pretends it never happened. These sorts of influencers just sell whatever and don't care what it is. Not sure I care what he thinks.
We already have advanced autopilots that can fly commercial airliners. We just don't trust them enough to not have human pilots. I would trust the autopilot more than freaking Claude. We already do, every day.
Yeah, I think GP misunderstood the nature of a thing like this. It's what hackers do; we play with things. Nobody is suggesting we replace the pilots in real planes with Claude, certainly not OP.
In aviation there's a saying, "Aviate, Navigate, Communicate" which describes the hierarchy of things to pay attention to while piloting an aircraft.
Autopilot is better thought of as "auto-aviate". That is to say, if there is already a navigation plan, the aircraft can follow that plan. Simple autopilots just keep the wings level; others can hold an altitude and change heading. More sophisticated ones can change altitude or even fully land the plane.
All of those things, however, require people to manage the "Navigate" part. "Aviate" is a deterministically solved problem, at least in normal flight operations. As you point out, we trust autopilots today, including on (nearly) every single commercial flight.
LLMs are a poor alternative to "aviate", but they could be part of a better flight management automation package. The parent article tries to use the LLM to aviate, with predictable results.
If paired with a capable autopilot (not the relatively basic one on that C-172), the LLM could figure out how to operate the FMS, take you from post-takeoff to final approach, and aid in situational awareness.
Currently, I don't think there is a commercial solution for GA aircraft that could say, "Ok, I'm 20NM from KVNY, but there are three people ahead of me in the pattern, so I have to do a right 360 before descending and joining downwind on 34L".
Having an LLM propose that course of action and tell the autopilot to execute it would definitely be an improvement to GA safety.
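To make that concrete, here's a toy sketch of the division of labor I have in mind. Everything in it is invented for illustration (there's no real avionics or LLM API behind these names); the point is that the LLM only proposes a structured plan, and a deterministic layer gates what the certified autopilot actually executes:

    # Hypothetical sketch: the LLM suggests a pattern maneuver as JSON,
    # a deterministic gatekeeper validates it, and only then does the
    # certified autopilot fly it. All names here are made up.
    import json

    ALLOWED_ACTIONS = {"hold_altitude", "right_360", "join_downwind"}

    def propose_maneuver(llm, traffic_state: dict) -> dict:
        """Ask the LLM for a plan as structured JSON, never raw control."""
        prompt = (
            "Given this traffic picture, propose ONE maneuver as JSON "
            'like {"action": "right_360", "reason": "..."}:\n'
            + json.dumps(traffic_state)
        )
        return json.loads(llm.complete(prompt))  # llm.complete is hypothetical

    def execute_if_safe(autopilot, plan: dict) -> bool:
        """The LLM navigates at the level of intent; it never aviates."""
        if plan.get("action") not in ALLOWED_ACTIONS:
            return False  # anything outside the approved envelope is ignored
        autopilot.execute(plan["action"])  # the certified system does the flying
        return True

The whitelist is the whole design: the LLM's output is treated as a suggestion to be checked, never as a control input.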
I think we could trust them enough to not have human pilots. It's just that having a human in the loop is very useful in scenarios that aren't all that rare. Say the airfield has too much wind or fog, or another plane has crashed and blocked every runway... someone needs to decide what to do next. Or there's some system failure nobody thought of in advance.
And if they're there anyway, they might as well fly for practice.
And no, I would not allow an LLM into the loop for any decision involving the actual flying.
There's also the issue that when something goes wrong, many people will never trust an autopilot again. Just look at how people have reacted to a Waymo running over a cat in a scenario where most humans would have made the same error. There are now many people calling for self-driving cars to never be allowed on roads, citing that one incident.
Yeah, I think it's been technically possible to automate jetliners for a while now, but when a metal tube with hundreds of people in it develops a technical fault while moving 500+ mph, there's no substitute for a pilot.
> We just don't trust them enough to not have human pilots.
Much of the value of a human crew is as an implicit dogfooding warranty for the passengers. If it wasn't safe to fly, the pilots wouldn't risk it day after day.
Come to think of it, it'd be nice if they posted anonymized third-party psych evaluations of the cockpit crew on the wall by the restrooms. The cabin crew would probably appreciate that too.
There are soooo many pilot decisions that AI is nowhere near making. Managing a flight is more than flying. It is about making safety decisions during a crisis, from deciding when to abort an approach to deciding when to eject a passenger. Sure, someone on the ground could make many of those decisions, but I prefer such things be decided by someone with literal skin in the game, not a beancounter or lawyer in an office.
> It is about making safety decisions during crisis, from deciding when to abort an approach to deciding when to eject a passenger.
Everyone likes to hand-wring about this sort of stuff, but I think it's the exception. Nailing the "macro level" decisions like "we'll go around this storm but we'll go over that one" or "we must divert to A or B, and we will choose B because it's better for our passengers/company/crew even if it's 10min more flying to get there" is what keeps the industry humming along mostly in the black rather than in the red. And it's these sorts of things that AI just tends to yolo and get mostly right when they're obvious, but also get immensely wrong when any sort of gotcha materializes.
I sincerely doubt that pilots decide "when to eject a passenger". Mostly it would be the cabin crew: the flight attendants are 100% in charge of flight safety, and they would be managing relationships with passengers, and they would be the ones to make the call. It would ultimately be them calling some kind of law enforcement. If an Air Marshal is onboard already, obviously they would be on the front line as well.
Furthermore, "ejecting a passenger" from a flight is mostly not something you do while in the air, unless you're nuts. Either the passenger is ejected before takeoff, or the crew decides to divert the flight, or the flight continues to the destination with law enforcement waiting on the tarmac.
Naturally, pilots get involved when it's a question of where to fly the plane and when to divert, but ultimately the cabin crew is also involved in those decisions about problem passengers.
The Pilot in Command has ultimate legal responsibility over the operation of the flight; ICAO conventions explicitly state this. Whilst in practice the cabin crew will be the ones dealing with the passenger(s) and supplying information to the PIC, it won't be them making the final decision.
Pretty sure "ejection" here is meant as shorthand for "transfer the passenger to an entity on the ground to proceed from there"; whether that entity is emergency medical services or law enforcement is secondary.
It absolutely can; it's called autoland[1]. In really bad visibility, pilots simply can't see the runway until too late, and most aerodromes which expect these conditions have some sort of autoland system installed. The most advanced ones will control every aspect of the plane from top-of-descent (TOD), flaps and throttle configuration, long and short final, gear down, flare, reverse thrust, and roll-out, all the way to a full stop on the runway. Zero pilot input needed.
And most of this was already available in the late 1970s. We have absolutely no need for LLM-based AI in aviation; traditional automation techniques have proven extremely powerful given how restricted the human domain of aviation already is.
Autopilots can, on both airliners and small planes, although on the latter it's only landing as far as I know, and it's only meant for emergencies. Airbus ATTOL is probably the most interesting of these in that it's visual rather than ILS-based (note that no commercial airliners are using this).
>never mind that most crashes are caused by humans, very rarely by technical issues going amok
Because humans are the fallback for all the scenarios that the tech cannot reliably cover. And my intuition says that the tech around planes is so heavily audited that only things that work with 99.999...% accuracy will be left to tech.
Still, those technological issues do happen, and in those situations it's good to have a human pilot in control. See for example Qantas Flight 72: the autopilot thought the aircraft was stalling and sent the plane into a dive. It could have ended up very badly without human supervision.
And then you have the Air France Rio-Paris flight, where iced-over pitot sensors fed bad airspeed data, the autopilot disconnected, and the pilots did everything they could to crash the plane by themselves, while it was otherwise fully operational.
That's so incredibly reductive that I'd go ahead and call it plain wrong.
"Caused by a human" is the lowest-tier, first-base, human-instinct analysis of any accident, and as such, unless proven otherwise, it can be discarded out of hand.
It comes down to: if a human mistake is capable of causing an accident, your system is badly designed because it assumes a part of the system known to be unreliable (a human) is always reliable.
The whole trick is designing systems that are safe despite humans being in the loop. Then you get to benefit from the advantages humans bring over machines without suffering the downsides.
The only thing I use openclaw for is managing my Obsidian vault. The flow is a series of crons that prompt me to fill out daily files and update projects as they progress. I also use it for calorie tracking and basic daily journaling. This is simple, secure, and very cheap 'life coaching.'
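For a sense of what one of those crons does, here's a minimal sketch. The vault path and note template are my assumptions for illustration, not anything openclaw provides:

    # Hypothetical daily-note job, run from cron (e.g. every morning):
    # create today's note in the vault if it doesn't exist, so the agent
    # (or I) can fill it in later. Paths and template are made up.
    from datetime import date
    from pathlib import Path

    VAULT = Path.home() / "obsidian-vault" / "daily"
    TEMPLATE = "# {day}\n\n## Calories\n\n## Journal\n\n## Projects\n"

    def ensure_daily_note() -> Path:
        VAULT.mkdir(parents=True, exist_ok=True)
        note = VAULT / f"{date.today().isoformat()}.md"
        if not note.exists():
            note.write_text(TEMPLATE.format(day=date.today().isoformat()))
        return note

    if __name__ == "__main__":
        print(ensure_daily_note())

The cron entry just runs a script like this on a schedule; the prompts to actually fill in the empty sections come on top of that.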
I find the story of a startup founder who entirely missed the developments of the last two years and did absolutely nothing with AI difficult to believe. If that actually happened, it's the exception, not the rule. Most startup founders are way more in tune with AI developments. This makes it sound like Chris (the mentioned founder) is behind marketing people who use LLM bots to post slop on LinkedIn.
In that case, yes, their startup is most certainly DOA.
Look around, you're most definitely in a bubble. LLMs are bleeding edge by themselves. Using agentic anything is mega bleeding edge. Having something actually working reliably is a tiny sliver of the bleeding edge audience. We have barely entered early adoption phase. Most AI users out there are Q&A'ing it and they have no idea what agents, tool calling or context compaction are.
I don't follow. Tech startups are bleeding edge. You may be over-generalizing here. I talk a lot with my dev friends; they are all using AI for work. So if Joe Blow at some consulting company is using it, then an SV startup CEO should be too.
>Most AI users out there are Q&A'ing it and they have no idea what agents, tool calling or context compaction are.
Again, talking about a tech CEO not a random "AI user."
I know a 60-ish year old serial startup CEO who never coded. Now he's running teams of Claude Code agents, which he used to build a healthcare platform by himself that he is now selling. He's already wealthy, just doing it for fun. This skill set is the bare minimum for a startup founder today.
Going to VCs with a 2-year-outdated deck with no AI functionality, plans, or tool use is unimaginable.
Yeah I get what you're saying. Sounds like this guy is driven, smart and savvy then? Definitely a bleeding edge minority who goes where the puck is.
As to my original point, I'll go ahead and say that 95/100 business owners out there in the world don't have any idea about Claude Code, Codex, or anything of that caliber. It's too early. By the time that group gets to it, there'll be tools tailored to their needs, not a terminal-based coding agent messing up their filesystem.
You keep returning to this, which is a straw man of my argument. From the start I have been talking about technical startup founders, which is also the subject of the blog post.
You, on the other hand, keep assuming that startup founders, even in SV, are technical enough to envision AI, or that they can change the thesis of their business on the whim of a technology change introduced less than a year ago. It's not realistic. Businesses have systems, people, obligations, legacy, etc.
Did you read the blog post? It seems like you may be missing a lot of context here.
I'm not assuming anything, that is my argument. I think you could do a better job countering it than pointing at the status quo of every business in the world. lol.
Me: 'cutting edge startups should use cutting edge tools'
You: 'every business in the world isn't using cutting edge tools!'
It's not about the status quo. It's about the speed of change and focus. While I'm all in on building custom agentic loops and building extensions for my pi I have enough bandwidth and xp to see that it's quite easy for business people to be focused on the matters that are completely non-technical. That plus plain sunk cost fallacy and blind denial ofc.
>Ya I get the need but you miss the point - no, you can't pay me anymore to wade into that and own risk, beyond a consulting context with low skin in the game.
In a situation of triage, "owning risk" is off the table.
Just in general, the outcome of where technology is going may spur many to reduce their usage in favor of "the real world"; I agree it might be a good thing.
I don't think I have unique insight on this, but the common belief is they are desperately trying to reach AGI, or at least have some halo model that will allow them to rise over the other companies. The problem is they have a hilariously large monthly burn paying for compute. If they don't produce something and investors stop offering capital, they are in trouble.