Did a trial for a month. It's indeed very impressive, but at the same time it's also very stressful because you don't know how the car is going to react. So I was on constant alert for any tricky situations. After some time it became exhausting and more draining than manual driving.
That was exactly my experience with the free trial. I figure if I'm going to have to pay thousands of dollars to still have to pay full attention when I drive, I might as well keep the money and drive the car.
FWIW: My 2026 Hyundai's driver assistance is better than my old 2018 Tesla Model 3's enhanced autopilot.
Same here. Car took a left turn through an orange filter light, detected oncoming traffic and decided 'nope' to the turn and just ... stopped ... in the oncoming lanes.
I find it less cognitive load to drive it myself. It's easier to predict what other vehicles will do than what my own will do. Boo.
I have sympathy for the challenges, as I worked in the field.
This was my experience. I kept getting told to just try the new version. When I did, the issues I had weren't fixed and I bounced off it again. For very long trips it's nice, but so is lane assist.
It always had the feeling of being outside with your toddler by the pool. I can look away, but I have 50/50 odds of a dead toddler if I do it for too long.
This is what the Tesla fans have been saying for years. "Oh, you're on the previous software version, bro. You gotta try the latest version, bro, trust me bro it's so much better. FSD on the current version is totally working for me, bro." "Oh, you're on today's software version? Don't worry, bro, the next one is going to be so much better, just wait for it bro, trust me bro we're going to have working FSD in the next version, bro."
There are too many things that can go wrong; you should never look away from the road for more than a second or two.
Adaptive cruise control, lane keeping, blind spot detection and emergency braking are all the modern automation I want in a personal vehicle at this point. Other drivers are unpredictable, I want to choose how I respond to their various forms of idiocy and not delegate to a black box.
That was my impression as well. You have to babysit the AI the whole time and if you fail to do that it's basically your life (and others' of course) on the line.
Trains run on premade, static tracks, with their "roads" clearly divided from the rest of the road participants.
The drivers' role is to stop the train in an emergency and adjust speed etc. to track/driving conditions.
Automating their job probably wouldn't even need the complex ML used for self-driving, because the context is significantly simpler and relatively well defined. Maybe a tram in a city might need such a model, but it would still be a significantly simpler task than driving a car.
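To make that concrete: the core of the speed-adjustment part is basically a braking-curve check. A toy sketch in Python, with made-up deceleration and margin values, ignoring grade, signalling systems, and everything else that makes real automatic train operation hard:

```python
def braking_distance_m(speed_ms: float, decel_ms2: float = 0.7) -> float:
    """Distance to stop from speed_ms at constant deceleration: v^2 / (2a)."""
    return speed_ms ** 2 / (2 * decel_ms2)

def should_brake(speed_ms: float, dist_to_restriction_m: float,
                 margin_m: float = 200.0) -> bool:
    """Brake once the restriction is within stopping distance plus a margin."""
    return dist_to_restriction_m <= braking_distance_m(speed_ms) + margin_m

# A train at 25 m/s (90 km/h) needs ~446 m to stop at 0.7 m/s^2,
# so a stop signal 600 m ahead already means: brake now.
print(should_brake(25.0, 600.0))  # True
```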
Trains are really unpredictable. Even in the middle of a forest two rails can appear out of nowhere, and a 1.5-mile fully loaded coal drag, heading east out of the low-sulfur mines of the PRB, will be right on your ass the next moment.
I was doing laundry in my basement, and I tripped over a metal bar that wasn't there the moment before. I looked down: "Rail? WTF?" and then I saw concrete sleepers underneath and heard the rumbling. Deafening railroad horn. I dumped my wife's pants, unfolded, and dove behind the water heater. It was a double-stacked Z train, headed east towards the fast single track of the BNSF Emporia Sub (Flint Hills). Majestic as hell: 75 mph, 6 units, distributed power: 4 ES44DC's pulling, and 2 Dash-9's pushing, all in run 8. Whole house smelled like diesel for a couple of hours!
Fact is, there is no way to discern which path a train will take, so you really have to be watchful. If only there were some way of knowing the routes trains travel; maybe some sort of marks on the ground, like twin iron bars running along the paths trains take. You could look for trains when you encounter the iron bars on the ground, and avoid these sorts of collisions. But such a measure would be extremely expensive. And how would one enforce a rule keeping the trains on those paths?
A big hole in homeland security is railway engineer screening and hijacking prevention. There is nothing to stop a rogue engineer, or an ISIS terrorist, from driving a train into the Pentagon, the White House or the Statue of Liberty, and our government has done fuck-all to prevent it.
Oh, don't worry, the LLMs absolutely can kill us, just slightly more indirectly.
Triggering psychosis is not difficult, and the LLM is easily capable of doing that. A person would soon get freaked out and would likely summon help: "Johnny started acting crazy and I'm not sure what to do, please come." But the LLM isn't a person. Johnny needs to know more about the CIA's programme to cross-breed Venusians with Hollywood stars? Here's an itinerary with the address of a real hotel in LA and an entirely hallucinated CIA officer's schedule.
Next thing you know, Johnny is shot dead by officers responding to a maniac with a fire axe who broke into an LA hotel and was screaming about space aliens.
Same here: phantom braking on the highway, randomly turning off in the middle of an intersection turn, and when it didn't get over in time for an exit, deciding to brake in the left lane to try and force its way over. While it was fun to try, it's not reliable enough for me to trust. That, and if I lean my head the wrong direction while resting it, I start getting yelled at by it.
This is where teaching my teenage son how to drive was a great preparation. Also my wife is not a great driver and I've been married to her for 20 years.
Compared to either one of them, FSD is way less stressful and is a vastly better driver.
I hear this a lot, and I'm genuinely curious why you think it might take more energy to be on alert for tricky situations. Wouldn't you already be doing that for your own manual driving?
Think about a junior coworker you offloaded some of your tasks to. It turns out the coworker frequently makes mistakes. At some point you're going to say, "it's easier to just do this myself." Especially if a single mistake can cost you your life!
I'm guessing that predicting the failure modes of a computer is more taxing than letting your brain use pattern recognition to spot what it needs to react to.
If you're driving, your brain can automatically prioritize the importance of things that you see. But since a computer fails in different ways than a human, you lose all that automatic prioritization.
I know my normal, non-self-driving car won't randomly slam on the brakes or swerve into a median. Even if I take my hands off the wheel, I know it will keep going straight-ish for a second or two.
A "self-driving" tesla is an adversary you need to supervise to make sure it doesn't take actions you wouldn't expect of a normal car.
As other posters have pointed out, it's like running an LLM with `--dangerously-skip-permissions`: I wouldn't `rm -rf /` my computer (or in the case of tesla, my life), but an AI might.
It's not just "tricky situations", sometimes FSD will do things that no normal driver would ever do, and it will do them inconsistently. Sometimes it's brilliant and sometimes it's drunk.
Because constantly switching between full attention and degraded attention (which FSD promises to allow) is more tiring than staying on full attention continuously.
This is a subject that has been studied quite a bit, as there are a bunch of jobs where people have to monitor for rare emergencies, and react fast if an emergency should arise. Things like pilots on flights with autopilot; lifeguards watching for swimmers in distress; CCTV monitoring; operating airport X-ray machines, and so on.
Previous studies had found that a human and a computer performed markedly better than either a human alone or a computer alone - but in those studies failures were quite common, so they didn't give the humans time to get bored or distracted.
When researchers got test subjects to perform a simulated flying task, monitoring a system with 99%+ reliability, they found the humans were proportionally much worse at stepping in than they were on less reliable systems.
Swimming pool lifeguards will often change posts every 15-20 minutes and get a 10-15 minute break every hour, to keep things interesting enough that they can pay attention. Good luck getting drivers to do that.
Funny, I was going to mention exactly that. I'm a private pilot with a modern autopilot, and flying is exhausting. Partly because the piston engine is rattling your brain the whole time, but also because you're on high alert the entire time: you're always making sure the autopilot is keeping the plane on the blue (or green) line and behaving predictably. And my smartwatch shows my heart rate is usually more elevated on autopilot than not.
There's no way to model what a "tricky situation" may be to an opaque and ever-changing piece of self-driving software. It may fail in random ways at random times — it's completely, 100% unpredictable.
Therefore, you have to be 100% ready at all times to react in case anything that's possible happens.
Sounds way more tiring than just driving yourself and only having to account for the known, relatively easy-to-model human failure modes.
The fundamental problem is that "staying alert for tricky situations" is essentially an exercise in prediction. FSD effectively hides a bunch of variables from you, making the prediction harder.
Have you ever been a passenger of an unpredictable driver? Was that stressful? Now, add not just the capacity but the responsibility to fix their mistakes.
This is the real catch with 95% or 99% accuracy: if you never know when that 1% incident will occur, you ALWAYS have to watch for it. And eventually we'll have to live with the fact that it'll never hit 100% accuracy, just as we don't have 100% accuracy today with human driving.
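To put rough numbers on that (the per-drive incident probability here is purely illustrative): even at 99% reliability, an incident-free record becomes vanishingly unlikely over enough drives, which is exactly why the watching never stops.

```python
# P(at least one incident in n drives) = 1 - (1 - p)^n
p = 0.01  # illustrative: a "99% accurate" system failing once per 100 drives
for n in (10, 100, 1000):
    print(f"{n} drives: {1 - (1 - p) ** n:.1%}")
# 10 drives:   9.6%
# 100 drives:  63.4%
# 1000 drives: 100.0% (more precisely, ~99.996%)
```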
Fair question. Supermemory is a hosted SaaS built around embedding + ranking. YantrikDB is self-hosted and adds three things Supermemory doesn't do as first-class operations:
think() — consolidates similar memories into canonical ones (not just deduplication, actual collapse of redundant facts)
Contradiction detection — when "CEO is Alice" and "CEO is Bob" both exist in memory, it flags the pair as a conflict the agent can resolve
Temporal decay with configurable half-life — memories fade, so old unimportant stuff stops polluting recall
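To make the half-life bullet concrete, here's a minimal sketch of that kind of decay scoring (illustrative Python, not YantrikDB's actual API):

```python
import time

def decayed_score(base_score: float, created_at: float,
                  half_life_days: float = 30.0) -> float:
    """Relevance halves every half_life_days, so stale memories fade from recall."""
    age_days = (time.time() - created_at) / 86400
    return base_score * 0.5 ** (age_days / half_life_days)

# A memory stored 60 days ago, with a 30-day half-life, keeps 1/4 of its weight.
sixty_days_ago = time.time() - 60 * 86400
print(decayed_score(1.0, sixty_days_ago))  # ~0.25
```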
Supermemory does more on the cloud side (team sharing, permissions, integrations). YantrikDB does more on the "actively manage my agent's memory" side. Different optimization points — no dig at Supermemory.
Same here, and I'm using a beefy MacBook (Apple M4 Max, 64 GB RAM). Something is wrong with the front-end code. There are a lot of animations, so my hunch would be that something goes wrong there.
Is Kevin Rose known for knowing how to address bot problems? I think it's a little absurd to address a bot problem by bringing back the original founder. I believe he was great at community building and functionality, but bot prevention is a different beast. The post mentioned that they also worked with third parties, which I believe should have more bot-prevention experience than Kevin.
To be fair, I don't know Kevin Rose personally, so maybe he knows more than the industry, but I highly doubt it.
Reddit has the same problem. They are fighting it more or less successfully. I would look more in that direction.
Is Reddit fighting the bot problem? They introduced a feature to hide post history, which makes it hard to know whether you're interacting with a spammy bot account. If anything, they're embracing it.
Actions speak louder than words. They’ve added features that help spammers hide their behaviour, they are rejecting API keys when people apply for access to deal with the bot problem, they ignore subreddits with spam-friendly moderators, and they ignore reports on vote manipulation. There’s a tonne of low-hanging fruit for tackling the bot problem on Reddit that they aren’t doing anything about, and often it seems like people outside of Reddit do a better job without access to the raw data than people inside Reddit do with the raw data.
I know they claim to care about the bot problem, but they appear at absolute best incredibly complacent about it, if not complicit. All those OnlyFans spammers, AI spam bots, etc. are engagement. They are ruining the platform for people, but engagement figures don’t distinguish between fake engagement and real people. The outcome of their current behaviour is for engagement to steadily rise while the value to real people steadily falls. It’s like they want to be the poster child for Dead Internet Theory.
I don't think this is helpful to bots, tbh. For over a decade, every time I come across a clear bot account, its comment history seems very human. I assume they're either buying real accounts for one-off astroturfing hit-and-runs, combined with deleting older astroturfing comments after the submission stops getting traffic to hide their footprints. Or, more likely, there's a giant ring of bots that submit innocent things, comment preplanned innocent things in a giant bot circle, and then make pointed comments on r/politics or whatever after establishing an innocent baseline. This is the obvious approach I'd take if it were me.
I'd also be really surprised if there wasn't coordination with Reddit employees/execs themselves for big advertisers.
The Reddit CEO mentioned that the community thrives when humans talk to humans, not to AI slop. He also said they are working on efforts to identify automated accounts.
Reddit can't even manage to regularly identify and ban bots that copy previously popular posts/comments verbatim, and that's a much easier problem than modern LLM-based bots.
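For the verbatim-copy case, a plain hash lookup over normalized text would already catch it. A minimal sketch (the sample corpus here is made up):

```python
import hashlib

def fingerprint(text: str) -> str:
    """Normalize case/whitespace and hash, so verbatim copies collide."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

# Hypothetical index of previously popular comments.
popular_comments = ["This is the best explanation I have ever read."]
seen = {fingerprint(c) for c in popular_comments}

def is_verbatim_repost(comment: str) -> bool:
    return fingerprint(comment) in seen

print(is_verbatim_repost("this is the BEST explanation I have ever read."))  # True
```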
For smaller startups, it's easier to go through one provider (OpenRouter) instead of having the hassle of managing different endpoints and accounts. You might get access to many more users that way.
Mid-to-large companies might want to go directly to the source (you) if they want to really optimize the last mile, but even that is debatable for many.
Hey @nnx & @hazelnut, good question, but no, we're not IonStream on OpenRouter.
The purpose of IonRouter is to let people publicly see the speed of our engine firsthand. It makes the sales pipeline a lot easier when a prospect can just go try it themselves before committing. Signup is low friction ($10 minimum to load, and we preload $0.10) so you can test right away.
That said, we do plan to offer this as a usage-based service within our own cloud. We own every layer of the stack: inference engine, GPU orchestration, scheduling, routing, billing, all of it. No third-party inference runtime, no off-the-shelf serving framework. So there's no reason for us to go through a middleman.
It looks completely accurate. I could see the medevac helicopters taking off, and it matched 1:1.
It missed a biplane flying over the city. And some other low-flying planes circling mysteriously.
If I had a telephoto lens and a way to alert myself to large planes flying low (happens frequently: C-130s, F-22s, etc.), I think I'd waste way too much time.
I've seen ads on the West Coast for Kars4Kids. To be precise, in the Bay Area. I was wondering who donates a car for a kid. They shouldn't be driving ... well, that's what I thought till I read more about it. Quite the surprise to stumble over it on HN.