Hacker News | fooblaster's comments

Honestly, this is the AI software I actually look forward to seeing. No hype about it being too dangerous to release. No IPO pumping hype. No subscription fees. I am so pumped to try this!

Same here. I really hope that in the near future local models will be good enough, and hardware fast enough to run them, for local inference to become viable for most use cases.

No need to hope; it is inevitable.

Is it inevitable though? Open-weight models large enough to come close to an API model are insanely expensive to run for con/prosumers. I'd put the “expensive” bar at ≥24GB of VRAM, since that's already well into four digits, which buys you many months of a subscription, not including the power bill for >400W continuous.
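As a rough sanity check on the running-cost claim: the 400 W figure is from the comment above, but the electricity price and duty cycle below are illustrative assumptions, not numbers from this thread.

```python
# Back-of-the-envelope power cost for a local-inference GPU.
# 400 W is from the comment above; the price per kWh and the
# hours of use per day are assumed for illustration only.
WATTS = 400
PRICE_PER_KWH = 0.30          # assumed; varies widely by region
HOURS_PER_MONTH = 8 * 30      # assume ~8 hours/day of inference

kwh_per_month = WATTS / 1000 * HOURS_PER_MONTH
monthly_power_cost = kwh_per_month * PRICE_PER_KWH
print(f"~{kwh_per_month:.0f} kWh/month, about ${monthly_power_cost:.2f}")
```

Even under these modest assumptions the power bill alone is in the same ballpark as a monthly API subscription, before amortizing the hardware.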

Color me pessimistic, but this feels like a pipe dream.


A decent number of software developers and gamers do spend 3000 USD on a PC. That kind of hardware is going to get more and more capable over time with respect to genAI models.

Of course there will always be a gap to frontier closed, hosted models. It is not an either/or proposition.


Tesla has not pulled the driver. It's just not comparable.

Their website now prominently states “supervised” since they got into so much hot water overselling the capabilities.

Tesla FSD is really in a pointless middle ground where the steep $99/month they ask for it is just not worth it.

It does basically nothing for you on the highway to alleviate fatigue above and beyond a standard adaptive cruise control system you can find in a Volkswagen Jetta.

The FSD on city streets is not autonomous enough to go unsupervised, so for the 10-20 minutes people typically spend driving in city traffic before reaching their destination, it's not saving a whole lot of effort over just…driving yourself.

I would think that if I owned a car that wasn't an old-ass beater like mine, I would mainly benefit from adaptive cruise control on long trips, and perhaps some convenience features like automatic parking.


I had a Model 3 with FSD for the last few years, and when I switched to a Model Y I specifically looked for and paid more for one with FSD.

It makes both road trips and city driving less taxing. I have driven cars with ACC, and they are nowhere near as useful as FSD.

You will argue with some details somewhere, but ultimately I, a customer, chose to seek out a feature. That feature is therefore not "pointless".


There’s diminishing returns to luxuries like this. You’ve found it to be worth it personally, but my point isn’t that a single individual won’t like it, my point is that most drivers don’t really need it and shouldn’t go out of their way to compromise on other aspects of the vehicle to get it.

I would compare this to a niche luxury feature like cooled or massaged seats. The people who seek out those features swear by them but it’s not good advice to tell an average person to spend the money on them, and they aren’t universally praised by people who try them.

I like watching my wealth grow in investments rather than investing in depreciating assets like vehicles. My attention at the wheel in my paid off 12 year old Mazda is free, and I’m still safer than any automated system for the time being (Tesla has the worst fatal accident rate of any brand [1] so I assume that FSD can’t be all that safe)

I also like reducing how much I drive wherever I can rather than band-aiding the problem of driving fatigue with driving automation. Driving less is a solution to driving fatigue. Taking public transit is a solution to driving fatigue. The $30k it costs to buy a gently used Tesla would be better invested in a down payment on an appreciating house or condo in a less car-dependent neighborhood. Hell, moving to the Netherlands and buying a bicycle doesn’t even cost $30k.

[1] https://www.roadandtrack.com/news/a62919131/tesla-has-highes...


What is the point of such trolling?

I’m not trolling, I’m discussing features of automobiles. I follow the automotive industry and have interest in the subject.

300k people subscribe to FSD. Did they forget to unsubscribe or what?

Is that a lot?

Toyota sells that many Camrys in one year.

That means only 18% of Tesla vehicles sold are subscribed. That also means, to your point, that a non-zero number of customers don't use the feature and forgot to unsubscribe.

For a feature billed as “transformative” that’s not a very good number.

Apple has a higher subscriber rate for TV+ (27%)
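Working backwards from the figures in this thread (300k subscribers at an 18% attach rate), the implied size of the eligible fleet is my inference, not a reported number:

```python
# Implied size of the FSD-capable fleet, inferred from the
# thread's own figures: 300k subscribers at an 18% attach rate.
subscribers = 300_000
attach_rate = 0.18

implied_fleet = subscribers / attach_rate
print(f"implied fleet of roughly {implied_fleet:,.0f} vehicles")
```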



It's fine to not be interested, but this time one of the astronauts is black


Where are you? That is a massive amount of solar for any place at a reasonably low latitude. Is your house enormous, or are you heating it with resistive heating?


This thing is far from over. Iran will be able to blockade the strait indefinitely. The US will be stuck in this defensive position for months, until it pulls out and effectively loses the war.

It's clear we are going to lose, because we cannot topple the regime without putting troops on the ground, which we will never do. Setting that as a war aim doomed this whole effort from the start.


Ground invasion would literally be Vietnam again. And don't say they'll never do it. Reports from classified briefings indicate a draft is seriously being considered.


I'm not surprised they would consider it, but it seems hilariously stupid. Even as cynical as I am about politics in this country this will never fly with most of Trump's supporters.


We’ve heard this before…


This isn't targeting immigrants or Democrats though, unless they try to only draft Democrats it's going to be them, their sons, their brothers and husbands getting told they have to go die.


It would be figuratively Vietnam again.


Vietnam was all for nothing too. In the end nothing was accomplished, and really it didn't matter that the communists took over a tiny, unproductive piece of land across the world. It was a pointless war, and many people on both sides (including many civilians) died needlessly, just for a dick-waving contest :( I really hope this won't come to that.


What inference runtime are you using? You mentioned MLX, but I didn't think anyone was using that for local LLMs.


LM Studio (which prioritizes MLX models if you're on a Mac and they are available). I have it set up with Tailscale, running as a server on my personal laptop, so when I'm working I can connect to it from my work laptop, from wherever I might be, and it's integrated into the Zed editor through its built-in agent; it's pretty seamless. Then whenever I want to use my personal laptop, I just unload the model and do other things. It's a really nice setup. I'm definitely happy I got the 128GB MBP: I do a lot of video editing and 3D rendering work as a hobby, so it's dual-purpose in that way, and I can take advantage of the compute power when I'm not actually on the machine by setting it up as an LLM server.
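For anyone curious what talking to such a server looks like: LM Studio exposes an OpenAI-compatible HTTP API (port 1234 by default). This is a minimal sketch; the tailnet hostname and model name are placeholders, not details from the comment.

```python
import json
import urllib.request

# LM Studio serves an OpenAI-compatible API (default port 1234).
# "my-macbook" is a placeholder tailnet hostname, and "local-model"
# stands in for whatever model is currently loaded in LM Studio.
URL = "http://my-macbook:1234/v1/chat/completions"

payload = {
    "model": "local-model",
    "messages": [{"role": "user", "content": "Hello from my work laptop"}],
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # run this from a machine on the tailnet
```

Editors like Zed can point their OpenAI-compatible provider at the same base URL, which is how the setup above works without any extra glue code.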


LM Studio has had an MLX engine and models since 2024.


It was definitely luck, Greg. And Nvidia didn't invent deep learning; deep learning found Nvidia's investment in CUDA.


I remember it differently. CUDA was built with the intention of finding/enabling something like deep learning. I thought it was unrealistic too and took it on faith in people more experienced than me, until I saw deep learning work.

Some of the near misses I remember included bitcoin. Many of the other attempts didn't ever see the light of day.

"Luck" in English often means success by chance rather than by one's own efforts or abilities. I don't think that characterizes CUDA. I think it was eventual success in the face of extreme difficulty, many failures, and sacrifices. In hindsight, I'm still surprised that Jensen kept funding it as long as he did. I've never met a leader since who I think would have done that.


Nobody cared about deep learning back in 2007, when CUDA was released. It wasn't until the 2012 AlexNet milestone that deep neural nets started to become en vogue again.


I clearly remember CUDA being made for HPC and scientific applications. They added actual operations for neural nets years after the boom was already underway. Both instances were reactions: people were already using graphics shaders for scientific purposes, and CUDA for neural nets; in both cases Nvidia was like, oh cool, money to be made.


Parallel computing goes back to the 1960s (at least). I've been involved in it since the 1980s. Generally, you don't create an architecture and associated tooling for some specific application. The people creating the architecture have only a sketchy understanding of application areas and their needs. What you do is have a bright idea/pet peeve. Then you get someone to fund building the thing you imagined. Then marketing people scratch their heads over who they might sell it to.

It's at that point you observed "this thing was made for HPC, etc.", because the marketing folks put out stories and material that said so. But really it wasn't. And as you note, it wasn't made for ML or AI either. That said, in the 1980s we had "neural networks" as a potential target market for parallel processing chips, so it's always there as a possibility.


CUDA was profitable very early because of oil and gas code, like reverse time migration and the like. There was no act of incredible foresight from Jensen. In fact, I recall him threatening to kill the program if large projects like the Titan supercomputer at Oak Ridge fell through.


I remember it being less profitable than graphics for a long time.

It did make money that would be interesting to a startup, but not to a public company.


Again, it wasn't exactly a huge sink of resources. There was no genius gamble from Jensen like you are suggesting. I suspect your view here is intrinsically tied to your need to feel that you, and others in your position, are responsible for your own success, when in fact it's mostly about luck.


So it could just as easily have been Intel or AMD, despite them not having CUDA or any interest in that market? Pure luck that the one large company that invested to support a market reaped most of the benefits?


I was really happy to see that Bluespec was fully open-sourced in recent years. Does anyone have experience with a non-trivial project in it? Does it have any traction anymore in real silicon development?


Same here; I'm looking for an excuse to use it. It takes some time to get oriented, though the BSV frontend makes it easier. I have been dabbling in it as a hobby on the side.

While there have been tape-outs by universities, I think the learning curve would discourage traditional hardware companies focused on TTM. While Bluespec has higher-level abstractions, it also provides access to low-level HW optimization features like multiple/gated clocks, integrating Verilog, etc., so I don't see any hindrances there.

One needs to be familiar with both SW abstractions and HW design, which is a small subset of engineers; that limits its usage.


Calling the Neural Engine the best is pretty silly. The best, perhaps, of what is uniformly a failed class of IP blocks: mobile inference NPU hardware. Edge inference on Apple platforms is dominated by CPUs and Metal, which don't use the NPU.


B200 has 148 SMs, so no.


Each SM cluster contains 4 independent 32-wide compute units, and GB202 has 192 SMs, although only 188 of them are enabled on the largest shipping SKU. IMO that makes for 752 "cores", but depending on where you draw the line it could be 188, 752, or 24064.
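The different "core" counts in the comment above come straight from where you draw the line; as arithmetic, using only the figures given there:

```python
# GB202 "core" counts at different granularities, using the
# figures from the comment: 188 enabled SMs, 4 compute units
# per SM, each 32 lanes wide.
sms = 188
units_per_sm = 4
lanes_per_unit = 32

compute_units = sms * units_per_sm             # the "752 cores" count
scalar_lanes = compute_units * lanes_per_unit  # Nvidia's "CUDA cores" count
print(sms, compute_units, scalar_lanes)
```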


SM is Nvidia's definition of a processor, and CUDA device properties return that count, not anything else. If you want a marketing number, use CUDA cores; it doesn't consistently correspond to anything in the hardware design.


No, you really can't.

Nvidia's use of "cores" is simply wrong, unless you think a core is a simple scalar ALU. But cores haven't been like that for decades.

Or would you like to count cores in a current AMD or Intel CPU that way? Each "core" has half a dozen ALUs/FP pipes, and don't forget to multiply by SIMD width.

