That's like the argument about how we'll never (or should never) have self-driving cars.
Human-run ATC clearly produces situations like this, so the argument that automated ATC could cause a runway collision and therefore should never be implemented is a bad one.
It's not an argument for total automation but an argument for machine augmentation. It would be fascinating just as an experiment to feed the audio of the ATC + flight tracks [1] into a bot and see if it could spot that a collision situation had been created.
You obviously wouldn't authorize the bot to do everything, but you could allow it to autonomously call for stops or go-arounds in a situation like this where a matter of a few seconds almost certainly would have made the difference.
Imagine the human controller gives the truck clearance to cross and the bot immediately sees the problem and interrupts with "No, Truck 1 stop, no clearance. JZA 646 pull up and go around." If either message gets through then the collision is avoided, and in case of a false positive, it's a 30 second delay for the truck and a few minutes to circle the plane around and give it a new slot.
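Out of curiosity, here's a minimal sketch of the kind of check such a bot might run once the audio has been parsed into structured clearances and paired with surveillance tracks. Everything here is hypothetical: the Clearance dataclass, the runway designator, and the occupancy windows are made up for illustration, not any real ATC system.

```python
from dataclasses import dataclass

@dataclass
class Clearance:
    callsign: str      # e.g. "JZA 646" or "Truck 1"
    runway: str        # runway the clearance touches
    enter_time: float  # seconds from now when the runway becomes occupied
    exit_time: float   # seconds from now when it should be vacated

def conflicts(active: list[Clearance], new: Clearance) -> list[Clearance]:
    """Return every active clearance whose runway-occupancy window
    overlaps the newly issued one on the same runway."""
    return [c for c in active
            if c.runway == new.runway
            and c.enter_time < new.exit_time
            and new.enter_time < c.exit_time]

# The moment the controller clears the truck to cross, the bot checks:
active = [Clearance("JZA 646", "24R", enter_time=25.0, exit_time=60.0)]
truck = Clearance("Truck 1", "24R", enter_time=0.0, exit_time=40.0)
for c in conflicts(active, truck):
    print(f"CONFLICT on {c.runway}: {truck.callsign} stop, "
          f"{c.callsign} go around")
```

The interval check itself is trivial; the genuinely hard part would be the speech-to-structured-clearance step and estimating those occupancy windows reliably from the tracks.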
I'm not well-enough versed in HMI design or similar concepts, but I think this idea for augmentation could collide with alarm fatigue and the disengaged overseer problem in self-driving cars.
If we aren't confident enough in the automation to let it make the call outright for something simple like a runway incursion/conflict, augmentation might be worse than the current approach, which demands 100% awareness from the controller. Self-driving research shows that at Level 2 and Level 3 autonomy, people tune out and need time to get back "in the zone" when the automation fails.
> could collide with alarm fatigue and the disengaged overseer problem
Depends both on the form the "alarm" takes and on the false-positive rate. If the alarm is simply being told to go around, and if that carries the same authority as a human instruction, then it's an inconvenience, but there shouldn't be any fatigue, just frustration at being required to do something unnecessary.
Assuming the false-positive rate were something like one incident per day at a major airport, I don't think it would even result in much frustration. We stop at red lights that aren't really necessary all the time.
Depending on how late the go-around/aborted landing is triggered, that can be a danger in itself. Any unexpected event in the landing flow carries risk, to the point that there's a "sterile cockpit" rule in that window.
Even if it's just a warning to the ATC, distracting them and forcing them to reexamine a false-positive call interrupts their flow and airspace awareness. I get what you're saying, that out of precaution we could err on the side of alerting first; but all the proposed solutions really come down to how good the false-positive and false-negative rates are.
BTW, stopping at a red light unnecessarily (or by extension, gunning it to get through a yellow/red light) could get you rear-ended or cause a collision. Hard braking and hard acceleration events are both penalized by insurance driver trackers for exactly that reason.
I'm assuming there that any such system would be appropriately tuned not to alert outside of a reasonably safe window. My assumption is that it would notice the conflict promptly after any communication, which under ordinary circumstances should leave plenty of time to correct. To be fair, I don't expect such a system would have addressed what happened in this case, because, as you note, false alarms on too short a notice pose their own danger, which may well prove worse on the whole.
This specific situation, I think, could instead have been cheaply and easily avoided if the ground vehicle had been carrying a GPS-enabled appliance that ingested ADS-B data and displayed for the driver the predicted trajectories of anything nearby that was close to the ground: basically a panel in the vehicle showing where any nearby ADS-B-equipped planes were expected to be within the next 30 seconds or so.
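For what it's worth, the core of that panel's logic is simple enough to sketch. This assumes decoded ADS-B fields (position, ground speed, track, altitude) arriving as plain dicts; the field names, the thresholds, and the flat-earth dead reckoning are my assumptions for illustration, not any real appliance's API.

```python
import math

def predict(lat, lon, ground_speed_kt, track_deg, horizon_s=30.0):
    """Dead-reckon a position forward along its current track.
    Flat-earth approximation; fine over airport-scale distances."""
    dist_m = ground_speed_kt * 0.5144 * horizon_s  # knots -> m/s
    dlat = dist_m * math.cos(math.radians(track_deg)) / 111_320.0
    dlon = (dist_m * math.sin(math.radians(track_deg))
            / (111_320.0 * math.cos(math.radians(lat))))
    return lat + dlat, lon + dlon

def should_warn(vehicle_pos, aircraft, alert_radius_m=500.0, max_alt_ft=300.0):
    """Warn if an aircraft is near the ground and its predicted
    position comes within the alert radius of the vehicle."""
    if aircraft["alt_ft"] > max_alt_ft:  # ignore traffic that isn't low
        return False
    plat, plon = predict(aircraft["lat"], aircraft["lon"],
                         aircraft["gs_kt"], aircraft["track_deg"])
    dlat_m = (plat - vehicle_pos[0]) * 111_320.0
    dlon_m = ((plon - vehicle_pos[1]) * 111_320.0
              * math.cos(math.radians(vehicle_pos[0])))
    return math.hypot(dlat_m, dlon_m) < alert_radius_m

# Example: vehicle at a runway threshold, aircraft about 2 km out on
# final at 140 kt and 250 ft. Coordinates are made up for illustration.
vehicle = (45.4706, -73.7408)
inbound = {"lat": 45.4562, "lon": -73.7593, "alt_ft": 250.0,
           "gs_kt": 140.0, "track_deg": 42.0}
print(should_warn(vehicle, inbound))  # True -> light up the panel
```

A real appliance would obviously need filtering for track jitter, multiple horizon steps rather than a single 30-second jump, and a much more conservative alert envelope; the point is just that the prediction itself is cheap.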
> stopping at a red light unnecessarily
Is it not always legally necessary where you live? It certainly is here. When I described them as unnecessary I was recalling situations that would clearly be better served by a flashing yellow.
Yeah, I think there are certainly optimizations possible. Listening to ATC traffic, I'm surprised just how much of the ground-ops stuff could be computerized: basically traffic signals for runways.
What you're describing almost sounds like TCAS, a collision avoidance system for planes in the air, and would be a good idea.
As for the red lights, yes, legally you'd be required to stop if you're before the stop line. My language wasn't clear; I was trying to describe those scenarios where a light turns just as you're getting to/into the intersection. Some people will gun it to get through, others will jump on their brakes to avoid running what's technically a red.
Valid concern. Ultimately, the ideal would be to get commentary from professionals in the space on what would actually be most helpful in terms of augmentation.
In doctors' offices it was easy: just listen to the verbal consult and write up a summary so the doc doesn't spend every evening charting. What's the equivalent for ATC? An interface that surfaces relevant information, maintains context while multitasking, and provides warnings: basically a companion and assistant, but not one that removes agency from the human decision-maker or leaves them zoning out and losing context so that they're not equipped to handle an escalation.
There is such a bot, and it is installed at LaGuardia Airport. The system is called Runway Status Lights, and it was supposed to show red lights to the truck, and the truck was supposed to stop and contact the controller: “If an Air Traffic Control clearance is in conflict with the Runway Entrance Lights, do not cross over the red lights. Contact Air Traffic Control and advise that you are stopped due to red lights.” https://www.faa.gov/air_traffic/technology/rwsl
That is how it is supposed to work. How it worked in reality is another question, of course, and no doubt it will be investigated.
> That's like the argument about how we'll never (or should never) have self-driving cars.
The reason we won't ever have self-driving cars is that, no matter how clever you make them, they're only any good when nothing is going wrong. They cannot anticipate; they can only react, too slowly and often badly.
They absolutely could anticipate, and arguably with more precision than people. The frequency of collisions during left turns at intersections shows that people's ability to anticipate is fallible too: people can't even anticipate that a car driving towards them will continue to do so.
Self-driving cars' reaction times aren't slowed by drugs, alcohol, or a Snapchat notification pulling their attention.
Current systems haven't been proven in all weather conditions and all inclement situations (e.g., the Tesla collision with a white semi-trailer), but it's crazy to say that self-driving cars won't match or exceed human drivers in terms of safe miles driven. Waymo has already shown an 80 to 90% reduction in crashes compared to people.
Can you clarify what you mean by unsafe? From what I can tell from the study, they're comparing to a human benchmark: basically the "average" driver, not a cherry-picked "bad" driver cohort.
Just as with wealth, the average is drastically skewed by outliers. I don't recall precise numbers off the top of my head, but there are plenty of people who have commuted daily for multiple decades and have never been in a collision. I myself have only ever hit inanimate objects at low speeds (the irony) and have never come anywhere near totaling a vehicle; my seatbelts and airbags have yet to actually do anything. Freight drivers regularly rack up absurd mileage figures without any notable incidents.
As I stated earlier, I agree with the broader point you were trying to make. I like what they're doing. It's just important to be clear about what human skill actually looks like here: a multimodal distribution that varies widely by driver category.
Yeah, I agree with you too. Per IIHS, the fatality rate per 100,000 people ranged from 4.9 in Massachusetts to 24.9 in Mississippi, so clearly there's huge variance even within the "US population".
The other commenter's claim was "we won't ever have self-driving cars" because they aren't good enough, but something like Waymo already is, at least for the population as a whole. If we waved a wand and replaced everyone's car with a Waymo, accident rates would fall, both at the population level and per mile driven.
It's tough to argue that a Waymo would be more dangerous even for a good driver: the fleet, too, has never been the cause of a serious accident, and it has certainly driven more miles than any single human driver. All four serious-injury accidents and both fatalities were essentially "other driver at fault, hit the Waymo".
This isn't meant to glaze Waymo, but to point out that self-driving cars in certain environments are "solved". They're expensive, proprietary, and not yet suitable for trucking or deployment to cold climates; but self-driving that is safer than people-driving is already here. To your point, human driving skill is variable: Waymo won't replace Verstappen right now, but much like the AGI argument with LLMs, it's already "better" than the average person in certain domains.