overgard's comments

C# with WinForms and Visual Studio was a decent successor for a while, but for whatever reason Microsoft decided to go all in on XAML and then a bunch of other half-baked frameworks. I have no idea what to even use anymore if I wanted to make a native Windows app; it's a mess.

I dunno, on a subscription one would assume that minimizing token spend would actually be in their interest. Even for API calls I'm not entirely convinced they're profitable.

I know it's going to be extremely painful, but the sooner this ridiculous unsustainable AI bubble pops the better off we'll be. The more it inflates the more collateral damage it will cause, and we're probably already looking at 2008 levels of financial chaos.

There are probably multiple goals of AI investment. It's entirely possible that they are deliberately killing the affordability of how personal electronics like home computers are made and will instead replace them with terminals that stream everything to the cloud. You can make a lot more money off consumers if you can turn their entire computing experience into a utility.

  > replace them with terminals that stream everything to the cloud

They've been trying for a long time on that one. I still remember those junky "net appliances" from the early 2000s [0], and Oracle and Sun making big statements about them...

[0] https://www.ecommercetimes.com/story/sonys-evilla-joins-audr...


What’s the feedback loop that leads to a total financial collapse? This looks much more like the dotcom bubble. Everyone knows where the exposure is.

I think it comes down to scale (something like $2 trillion invested so far by very large institutions) and also to AI hollowing out foolish companies that decided to go "AI native", downsize, and lose institutional knowledge. When the rug pull inevitably comes and the AI subsidies are gone, the whole idea of "efficiency gains" in a lot of places is going to look pretty bad as soon as they look at their bill.

If that happens, it's going to be one hell of a mess of dominoes to clean up...

I can't really say I agree with this, although I also hate the phrase "agentic engineering".

I'm working on a licensing system for a product I'm building. I've used Claude a little bit to help out with it, but it's also made a lot of very dumb decisions that would have had large (security!) consequences if I hadn't caught them. And a lot of them are braindead things. For example, I asked it to create a configurable limit on a certain resource for the trial version of the application. When I said configurable, I mostly meant: put the number in a constant so I can update it later. What Claude thought I asked was "make it so the user can modify the limits of the trial version in the settings panel" (which defeats the entire purpose of a free trial!). Another thing it messed up recently: I was setting up email magic-link authentication, and it defaulted to creating an account for anyone who typed in an email, which would let a bad actor both spam people with login requests (probably getting me kicked off Resend) and create a lot of bogus accounts.
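The safer default for the magic-link case is roughly: only send a link to addresses that already have an account, and throttle repeat requests. A minimal sketch of that check (function names, the in-memory rate limiter, and the 60-second window are all hypothetical, not from any specific framework):

```python
# Sketch: gate magic-link emails so unknown addresses never create accounts
# and repeat requests for the same address are throttled. In a real app the
# user set and timestamps would live in a database, not module globals.
import time

RATE_LIMIT_SECONDS = 60          # hypothetical throttle window
_last_request: dict[str, float] = {}

def request_magic_link(email: str, known_users: set[str]) -> bool:
    """Return True if a login email should be sent. Never creates an account."""
    if email not in known_users:
        # Don't create an account; the caller should still show the same
        # "check your inbox" message so the response doesn't reveal whether
        # the address is registered.
        return False
    now = time.monotonic()
    if now - _last_request.get(email, float("-inf")) < RATE_LIMIT_SECONDS:
        return False  # throttle repeated requests for the same address
    _last_request[email] = now
    return True
```

Returning the same response to the caller either way also avoids turning the login form into an account-enumeration oracle.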

These things do not think. You cannot outsource your thinking to them.


I don't really understand the argument for these things being conscious. There's no loop or feedback cycle to it. If it's not handling a request it's inert.

Well there is a feedback loop and self-awareness in my harness: https://lethe.gg

The reason people anthropomorphize LLMs is essentially the fault of the tech companies behind them. ChatGPT doesn't need to have the personality it has; it could easily be scaled back to simply answering questions without emojis and linguistic flair, but frankly I think the tech companies want people to anthropomorphize them.

The core problem is that we need to stop calling LLMs "intelligence". They may be a form of intelligence, but they're nothing like a human's intelligence, and getting people not to anthropomorphize these systems is really the first step.


I've been thinking of things I'd want an agent for recently. The problem is, everything I think of is something that requires using quite a few different websites, saving a lot of data securely, and working with a lot of sensitive accounts (my email, etc.)

The problem is, all the tasks are essentially: a) things agents probably just can't do, and b) things that absolutely cannot afford to be hallucinated or otherwise fucked up. So far the tasks I've thought of:

- Taxes. It needs a lot of sensitive information to get W-2s, and since I have to look up a lot of this stuff in the physical world anyway, it's not like I can just let it run wild.

- Background check for a new job. It took me three hours to fill out one of them (mostly because the website was THAT bad). Doing it myself, I was already making mistakes, forgetting things like move-in dates from 10 years ago, and having to do a lot of searching in my email for random documents. No way I'm trusting an agent with this.

- Setting up an LLC. Nope nope nope. There's a lot of annoying work involved with this, but I'm not trusting an LLM to do this.

Anyway, I guess my point is that even if an LLM was good at using my computer (so far, it seems like it wouldn't be), the kind of things I'd want an agent for are things that an LLM can't be trusted with.


It’s great at

1. things you wouldn’t otherwise bother doing

2. things where it otherwise would get stuck iterating on hacky workarounds doomed to fail

“Reverse engineer this app/site so we can do $common_task in one click”, “by the way, I’m logged in to $developer_portal, so try @Browser Use if you’re stuck”, etc.

I just had Codex pull user flows out of a site I’m working on and organize them on a single page. It found 116. I went in and annotated where I wanted changes, and now it’s crunching away fixing them all. Then it’ll give me an updated contact sheet and I can do a second pass.

I’d never do this sort of quality pass manually and instead would’ve just fixed issues as they came up, but this just runs in the background and requires 15 minutes of my time for a lot of polish.


I guess the problem I see here is that if the use case is "things I otherwise wouldn't bother doing", that's fine, but it's pretty niche. I dunno, if you're talking about a human "Agent" (like say in sports or entertainment), they'd be a trusted person to handle business matters outside of your competency (contract negotiations, etc.). I don't see AI "agents" being at all like that, they're more like an intern you need to supervise constantly.

> I disagree with the overall premise: Before the acquisition, Bun had to figure out how to monetize at some point.

Incidentally, Anthropic needs to figure out how to monetize at some point too.


It’s organizations figuring out how to monetize all the way up.

The view from the bottom is turtle asses all the way to the top.

Can't answer for an RTX 5090, but on a desktop RTX 5080 with 16GB of VRAM, I get about 6 tokens/sec after some tweaking (f16->q4_0). Kind of on the borderline of usable... you'd probably realistically need either a 5090 with more VRAM or something like a Mac with a unified memory architecture.
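The reason the f16->q4_0 tweak matters is pure arithmetic: the weights alone have to fit in VRAM. A back-of-the-envelope sketch (the 20B parameter count and the bits-per-weight figures are illustrative; q4_0 is roughly 4.5 bits per weight once block scales are included, and this ignores KV cache and activations):

```python
# Rough VRAM estimate for an LLM's weights at different quantization levels.
# Bits-per-weight values are approximate; the model size is hypothetical.
BITS_PER_WEIGHT = {"f16": 16.0, "q8_0": 8.5, "q4_0": 4.5}

def weights_gb(n_params_billion: float, quant: str) -> float:
    """Approximate size of the weights alone, in GB."""
    return n_params_billion * 1e9 * BITS_PER_WEIGHT[quant] / 8 / 1e9

# A hypothetical 20B-parameter model against a 16GB card:
print(f"f16:  {weights_gb(20, 'f16'):.1f} GB")   # 40.0 GB: nowhere near fitting
print(f"q4_0: {weights_gb(20, 'q4_0'):.1f} GB")  # 11.2 GB: fits, with headroom for KV cache
```

Once the weights spill past VRAM and layers fall back to system RAM, tokens/sec drops off a cliff, which is why the unified-memory Macs look attractive despite slower raw GPU throughput.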


My M5 Pro is getting ~11 tokens per second via OMLX for an 8 bit quant.


A Mac is not going to be all that much faster than a 5080 with any models, other than the ones you can’t currently run at all because you don’t have enough GPU+CPU memory combined.

You’re much better off adding a second GPU if you’ve already got a PC you’re using.


At this rate they're not going to need to do layoffs... nobody sane is going to want to work there.

