Hacker News | ehnto's comments

I think the long-term idea is that AI becomes continuous in the sense that it behaves like a regular employee, not something you have to prompt. So at a mid-sized company of, say, 100 "entities", the CEO still has directors, they have managers, but the managers are managing AI agents, not humans.

But I don't think that's how it plays out. I think you still need to imbue these tools with talent, skills and direction, and I don't see management, who didn't have those skills initially, being able to do that across multiple business aspects and agents simultaneously.

I think for now, and perhaps until/unless AGI arrives, the sweet spot is having skilled individuals with experience using the tools to get known-good results. You still can't really delegate to the tools; you have to work with them. The benefit a human offers management is that you can delegate to a human even when you completely lack the skillset you are delegating.


Why would it reduce ten-fold? I think chip performance gets a little better, but not that much better. The cost of the infra is probably going to go up if energy costs keep rising, and that's presuming the US can keep getting cheap chips.

Inference prices have been decreasing exponentially so far [1], and the Vera Rubin generation of chips going live is reportedly ~35x more efficient in terms of inference per megawatt [2]. Even with rising prices, 10x is possibly conservative.

Maybe if demand is truly crazy the labs will take more margin.

1. epoch.ai/data-insights/llm-inference-price-trends

2. http://hashrateindex.com/blog/nvidia-vera-rubin-nvl72-specs-...
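As a back-of-envelope check (with made-up numbers, purely to illustrate the arithmetic; none of these figures come from the linked sources), even a large energy price rise is swamped by a 35x efficiency gain:

```python
# Toy model: inference cost per token, assuming cost is dominated by
# energy, so cost scales as (energy price) / (chip efficiency).
# All inputs are illustrative assumptions, not real market data.

def cost_ratio(efficiency_gain, energy_price_multiplier):
    """New cost relative to old cost under the energy-dominated model."""
    return energy_price_multiplier / efficiency_gain

# 35x more efficient chips, energy prices up 50%:
r = cost_ratio(35, 1.5)
print(f"new cost is {r:.3f}x the old ({1 / r:.1f}x cheaper)")
```

Under those assumptions cost per inference falls roughly 23x even with energy 50% more expensive, which is why a 10x drop can still read as conservative.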


I finished my immobiliser for old cars from the last thread. It fills a niche in the market for a no-wireless, no-fob solution that will work on older vehicles without much CAN bus integration.

I have been brushing up some drawing skills for concept art, and exploring more embedded automotive product ideas for this niche of cars.


I agree. I think the rapid learning generalist has a real advantage right now, but that kind of advantage cannot be leveraged by big companies structured to utilise specialists. I think that's why individual contributors in big teams aren't seeing massive benefits from AI where a small team or solo developer may be seeing greater leverage.

If you are a strong generalist with an entrepreneurial spirit, I would be aiming at getting hired by a small company where you can provide a buttload of value, or at starting something where you have domain experience outside of software.


Generalists are more competent by definition, but large companies don’t need broad competence, they just need a cog.

> I think that's why individual contributors in big teams aren't seeing massive benefits from AI where a small team or solo developer may be seeing greater leverage.

This rings true. It is the best time ever for small teams. A big team is potentially several smaller teams, so this can be a force multiplier for them too.

Another force multiplier: be willing to reorganize larger teams into smaller ones, starting with single contributors.

What this is the worst time for: slow adaptation.


So it follows that the most efficient time to discover bugs is when you first write them.

... or maybe when you see them triggered or exploited reproducibly, then the underlying bug will also be pretty easy to discover. But at that point, it's already too late. :)

I really like your original point, I never thought about it this way.


It still lives on as a bit of a hard skill in automotive/robotics. As someone who crosses the divide between enterprise web software, and hacking about with embedded automotive bits, I don't really lament that we're not using WCET and Real Time OSes in web applications!

I suppose the rough edges of the RTOSes are mostly due to that mainstream neglect: they are specific tools for seasoned professionals, whose own edges have been bent into shapes well-compatible with the existing RTOSes.

if you ever worked in automotive you know it's bs.

since CAN, all reliability and predictability went out the window. we now have redundancy everywhere, with everything just rebooting all the time.

install an aftermarket radio and your ECU will probably reboot every time you press play or something. and that's just "normal".


I’ve been working in automotive since it was only wires and never saw that (or noticed it) happening, especially since body and powertrain usually run on separate buses tied through a gateway. The crazy stuff happens when people start treating the bus (especially the higher-speed ones) like a 12V line, or worse.

I didn't experience that either, but the commercial stuff I worked on was heavy industry on J1939, and our bus was isolated from the vehicle to some degree.

Then the stuff I mess with at home is 90s-era CAN, and it's basically all diagnostics; actually, I think these particular cars don't do any control over the bus.
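For anyone curious what those frames look like at the byte level, here's a minimal sketch packing a classic (non-FD) CAN frame into the Linux SocketCAN `struct can_frame` layout. The ID and payload below are placeholders for illustration, not from any real car I've worked on:

```python
import struct

# Linux SocketCAN struct can_frame: 32-bit CAN ID, 8-bit DLC,
# 3 bytes of padding, then up to 8 data bytes (16 bytes total).
CAN_FRAME_FMT = "=IB3x8s"

def pack_can_frame(can_id, data):
    """Pack a classic CAN frame; data must be at most 8 bytes."""
    assert len(data) <= 8
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

# Hypothetical diagnostic request (made-up ID and payload):
frame = pack_can_frame(0x7E0, b"\x02\x01\x0c")
print(len(frame), frame.hex())
```

Writing `frame` to a raw SocketCAN socket (on Linux, with real or virtual CAN hardware) would transmit it on the bus; the sketch only shows the wire-adjacent layout.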


ever use WordStar on a Z80 system with a 5 MB hard drive?

responsive. everything dealing with user interaction is fast. sure, reading a 1 MB document took time, but 'up 4 lines' was bam!

linux ought to be this good, but the I/O subsystem slows down responsiveness. it should be possible to copy a file to a USB drive without impacting typing responsiveness, but it is not. real-time patches used to improve it.

windows has always been terrible.

what is my point? well, i think a web stack run under an RTOS (and sized appropriately) might be a much more pleasurable experience. get rid of all those lags, intermittent hangs, and calls for more GB of memory.

QNX is also a good example of an RTOS that can be used as a desktop, although one with a lot of political and business problems.


Every single hardware subsystem adds lag. Double buffering adds a frame of lag; some systems do triple buffering. USB adds ~8ms worst-case. LCD TVs add their own multi-frame lag-inducing processing, but even the ones that don't still have to load the entire frame before any of it shows, which can be a substantial fraction of the time between frames.

Those old systems were "racing the beam", generating every pixel as it was being displayed. Minimum lag was microseconds. With LCDs you can't get under milliseconds. Luckily human visual perception isn't /that/ great, so single-digit milliseconds could pass as instantaneous, if you run at 100 Hz without double-buffering (is that even possible anymore!?), use a low-latency keyboard (IIRC you can schedule more frequent USB frames at higher speeds), and only debounce on key release.
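A toy tally of those contributions, with illustrative numbers (assumed for the sake of the sketch, not measured from any particular hardware), shows how the budget adds up:

```python
# Toy end-to-end input-lag budget in milliseconds. All inputs are
# illustrative assumptions about a hypothetical setup.

def total_lag_ms(refresh_hz, buffered_frames, usb_ms, panel_ms):
    """USB polling delay + frames held in the buffering pipeline
    + panel processing/scanout delay."""
    frame_ms = 1000.0 / refresh_hz
    return usb_ms + buffered_frames * frame_ms + panel_ms

# 60 Hz TV, double-buffered, 8 ms worst-case USB, 10 ms TV processing:
tv = total_lag_ms(60, 2, 8.0, 10.0)
# 100 Hz panel, single-buffered, 1 ms USB polling, 1 ms panel delay:
fast = total_lag_ms(100, 1, 1.0, 1.0)
print(f"TV setup: {tv:.1f} ms, low-latency setup: {fast:.1f} ms")
```

Under these made-up numbers the difference is roughly 51 ms versus 12 ms, which is why the buffering and panel choices dominate long before the OS does.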


An 8 kHz polling-rate mouse and keyboard with a 240 Hz 4K monitor (preferably OLED to reduce smearing, or it becomes very noticeable), 360 Hz 1440p, or 480 Hz 1080p is the current state of the art. You need a decent processor and GPU to run all this (especially for the high-refresh-rate monitors, as you're pushing a huge amount of data to your display and only the newest GPUs support the newest DisplayPort standard), but my Windows desktop is a joy to use because of it. Everything is super snappy. Alternatively, buying an iPad Pro is another excellent way to get very low latencies out of the box.

I really love this blog post from Dan Luu about latency. https://danluu.com/input-lag/
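The per-device update intervals implied by those rates are easy to compute, which makes it clear why 8 kHz polling stops being the bottleneck:

```python
# Intervals between updates for the refresh/polling rates mentioned above.
rates_hz = {
    "8 kHz USB polling": 8000,
    "240 Hz refresh": 240,
    "360 Hz refresh": 360,
    "480 Hz refresh": 480,
}
intervals_ms = {name: 1000.0 / hz for name, hz in rates_hz.items()}
for name, ms in intervals_ms.items():
    print(f"{name}: {ms:.3f} ms between updates")
```

At 8 kHz the input device contributes only ~0.125 ms, so the display's 2-4 ms frame interval dominates.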


That's a good one. I probably should have brought up variance though. These cache-less systems had none. Windows might just decide to index a bunch of stuff and trash your cache, and it runs slow for a bit while loading gigabytes of crap back into memory. When I flip my lightswitch, it's always (perceptibly) the same amount of time until the light comes on. Click a button on the screen? Uh...

Hah, that’s a good point! Unfortunately I have Hue smart bulbs and while they’re extremely convenient and better than most, there is sometimes a slight pause when using my WiFi controlled color schemes to switch between my configured red and daylight modes. What you gain in convenience and accessibility (being able to say “turn off the master bedroom” when I’m tired is amazing) I’ve lost in pure speed and consistency.

I believe this is a kind of survivorship bias. It's very rare that RTOSes have to handle allocating GBs of data or creating thousands of processes. I think if current RTOSes ran the same applications, there would be no noticeable difference compared to a mainstream OS (it could even be worse, because the OS is not designed for that kind of use case).

>what is my point? well, i think a web stack ran under an RTOS (and sized appropriately) might be a much more pleasurable experience. Get rid of all those lags, and intermittent hangs and calls for more GB of memory.

... it's not the OS that's the source of the majority of the lag.

Click around in this demo: https://tracy.nereid.pl/ Note how basically the only lag added is some fancy animations in places, and almost everything changes near-instantly on user interaction (with the biggest "lag" being that some things like buttons act on mouse-button release, as is tradition, rather than on press).

This is still just a browser, but running code and displaying it directly instead of going through all the JS and DOM mess.


Echoing other thoughts here, but also: it's like getting your first 10,000+ lines of code for zero token cost, with no prompting effort, no back and forth, no testing, etc.

Just jump straight to business logic; the scaffolding is already done for you.

I think implicit in your question is the idea that apps from now on will be bespoke, small, unique entities, but the truth is we are still mostly going to be solving already-solved problems, and enterprise software will still require the same massive codebases as before.

The real win of frameworks is that they keep your workers, AI or human, constrained to a known set of tools and patterns. That still matters in long-term AI-powered projects too. They also provide a battle-hardened collection of solutions covering lots of edge cases you would never think to put in your prompts.


I feel that's a bit uncharitable; it wasn't just vibes, it was imaginative world-building, with some truly interesting and novel concepts tied into a story decent enough to enjoy the world within.

As with much from this thread of cyberpunk writing, the cities and world are the most important characters, and the storyline is just an excuse to wander through their streets.


'Vibes' was probably the wrong word. I agree with you.

Though about the world-building: he threw a lot of neologisms onto the page, and later other writers gave them meaning.


That's true. I read Neuromancer pretty late, already well primed on the terms of art, which smoothed that over a bit. But a lot was left to the imagination.

Totally agree, these kinds of problems are really common in smaller models, and you build an intuition for when they're likely to happen.

The same issues still happen in frontier models, especially in long contexts or at the edges of the model's training data.


I see us collectively forgetting the training process as time goes on, and I think that explains why people get so surprised by some pretty obvious outcomes of said training. Perhaps also why people keep anthropomorphising these outcomes.
