Hacker News | packetlost's comments

Note: this readme appears to be from a very old version (5.x)

I'm really quite confident I don't want these companies collecting face and ID scans to prove age, so no I think this being an OS problem is actually a very reasonable solution.

This was the case before Obsidian existed, see Org-mode, vimwiki, etc.

I was using vimwiki with a ton of plugins for many years before Obsidian came along. It was very nice to be able to open all of my notes in a UI made for editing them.

Why is that? IME pretty much all of their software is a mess, and IIRC the hardware has some bugs/issues but is otherwise OK.

> all of their software is a mess and the hardware has some bugs/issues

Is that not enough of a reason?


Fair!

> Are AI “gains” really transformative

They're transformative in the sense that they will shrink the optimal team size, but I don't expect the jobs to actually go away unless these things both get substantially better at engineering (they're good at generating code, but that is like 20% of engineering at best) and we have a means of giving them full business/human levels of context.

Really basic stuff gets a lot easier but the needle doesn't move much on the harder stuff. Without some sort of "memory" or continuous feedback system, these models don't learn from mistakes or successes which means humans have to be the cost function.

Maybe it's just because I'm burnt out or have a minor RSI at the moment, but it definitely saves me a bit of time as long as I don't generate a huge pile and actually read (almost) everything the models generate. The newer models are good at following instructions and pattern matching on needs if you can stub things out and/or write down specs to define what needs to happen. I'd say my hit rate is maybe 70%.


> we have a means of giving them full business/human levels of context

Trust me, this is a work in progress. Right now most corporations do not have their data organized and structured well enough for this to be possible, but there is a lot of heat and money in this space.

IMO, what most people who are not directly working in this space get wrong is assuming SWEs are going to be hit the hardest: there are some efficiency gains to be won here, but a full replacement is not viable outside of AGI scenarios. I would actually bet on a demand increase (even if the job might change fundamentally). Custom domain-specific software is cheaper than it has ever been, and there is a gigantic untapped market here.

Low- to medium-complexity white collar jobs are done for in the next decade, though. This is what is happening right now in finance: even if models stopped improving now, the technology is already good enough to lower operational costs to the point where some part of the workforce is redundant.


> Right now most corporations do not have their data organized and structured well enough for this to be possible, but there is a lot of heat and money in this space.

I think you misunderstand what I'm saying. I'm not really referring to data systems at all, I'm referring to context on what problems are actually being solved by a business. LLMs very clearly do not model outcomes that don't have well-defined textual representations.

I'm not sure that I agree with white collar jobs being done for; not every process has as little consequence for getting it wrong as (most) software does.


> I think you misunderstand what I'm saying. I'm not really referring to data systems at all, I'm referring to context on what problems are actually being solved by a business. LLMs very clearly do not model outcomes that don't have well-defined textual representations.

Yeah, I misunderstood your point; I completely agree with what you are saying.

I honestly do not believe that strategy, decision making, and other context-dependent real-life work are going to be replaceable soon (and if they are, it will be by something other than LLMs).

> I'm not sure that I agree with white collar jobs being done for; not every process has as little consequence for getting it wrong as (most) software does.

Maybe I'm too biased from working in a particularly inefficient domain, but you would be surprised how much work can be automated in your average back office.

Much of the operational work is following a set process, and anything outside of it goes up the governance chain for approval from some decision maker.

LLM-based solutions actually make fewer errors than humans and adhere to the process better in many scenarios, requiring just an ok/deny from some human supervisor.

By delegating just the decision to a human operator, you need far fewer people actually doing the job. Since operations workload is usually a function of other areas, efficiency gains result in layoffs.
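The ok/deny pattern above can be sketched in a few lines. Everything here is illustrative (`draftWithModel` stands in for a real LLM call; the names are made up), just to show why one supervisor can now cover what used to be a whole team's queue:

```typescript
// Sketch of the ok/deny pattern: the model drafts the routine work,
// a human supervisor only approves or escalates.
type Draft = { caseId: string; proposedAction: string; confidence: number };

function draftWithModel(caseId: string): Draft {
  // A real system would call a model here; this stub just fabricates a draft.
  return { caseId, proposedAction: `approve refund for ${caseId}`, confidence: 0.93 };
}

function processCase(caseId: string, supervisorOk: (d: Draft) => boolean): string {
  const draft = draftWithModel(caseId);
  // The human only makes the ok/deny decision; denied work escalates
  // up the governance chain instead of executing automatically.
  return supervisorOk(draft)
    ? `executed: ${draft.proposedAction}`
    : `escalated: ${draft.caseId}`;
}

// One supervisor clears many drafted cases with a single yes/no each.
console.log(processCase("case-42", (d) => d.confidence > 0.9));
```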


> Maybe I'm too biased from working in a particularly inefficient domain, but you would be surprised how much work can be automated in your average back office.

> Much of the operational work is following a set process, and anything outside of it goes up the governance chain for approval from some decision maker.

Oh that's very interesting! Thank you for the insights!


> Trust me, this is a work in progress. Right now most corporations do not have their data organized and structured well enough for this to be possible, but there is a lot of heat and money in this space.

This is exactly what people were saying a decade ago when everyone wanted data scientists, and I bet it's been said many times before in many different contexts.

Most corporations still haven't organised and structured their data well enough, despite oceans of money being poured into it.


> will shrink the optimal team size, but I don't expect the jobs to actually go away

If they've shrunk the team size, that means some jobs (in terms of people working on a problem) will have gone away. The question is, will it then make it cheap enough to work on more problems that are ignored today, or are we already at peak problem set for that kind of work?

Spreadsheets and accounting software made it possible to have fewer people do the same amount of work, but it ended up increasing the demand for accountants overall. Will the same kind of thing happen with LLM-assisted workloads, assuming they pan out as much as people think?


The biggest cost is the power, which is often on multi-year contracts. The hardware is comparatively cheap.

That's wildly inaccurate. The cost is enormous on both the inference side and the mining side, and the hardware has short lifetimes if you want SOTA.

Yeah. I do wish there were something like Clojure with TypeScript- or Go-like nominal typing, but I find myself missing types a lot less with Clojure compared to other languages.
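For what it's worth, TypeScript's type system is structural, but the nominal flavor being wished for here is commonly approximated with the "branding" idiom. A minimal sketch (the `Brand` helper and all the names are illustrative, not from any library):

```typescript
// Nominal-style typing in structurally-typed TypeScript via "branding":
// two types with the same runtime shape are kept distinct by a phantom tag.
type Brand<T, Tag extends string> = T & { readonly __brand: Tag };

type UserId = Brand<string, "UserId">;
type OrderId = Brand<string, "OrderId">;

const userId = (s: string): UserId => s as UserId;
const orderId = (s: string): OrderId => s as OrderId;

function fetchUser(id: UserId): string {
  return `user:${id}`;
}

const u = userId("42");
const o = orderId("42");

console.log(fetchUser(u)); // ok
// fetchUser(o); // compile-time error: OrderId is not assignable to UserId
```

The tag exists only at the type level, so there is no runtime cost; it just stops structurally identical IDs from being passed to the wrong function.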

Type annotations mix poorly with s-expressions imo. Try an ML, which answers the same question of "How do we represent the lambda calculus as a programming language?"

There are already type annotations in Clojure and they look fine (though they are a bit noisy). There are type algorithms that don't need annotations to provide strong static guarantees anyway, which is the important part (though I'm not sure you can do that with nominal types?). I think TypeScript's and Go's syntaxes are a bad fit for s-exprs, but the idea probably isn't.

Isn't this like saying types mix poorly with ASTs?

I doubt that all of the providers hosting open models on open platforms are losing money on serving inference. They have the benefit of not having to pay for training, but the models are open and aren't going away anytime soon.

Sadly, I have not been able to find any open model that comes close to Opus 4.6. So while they are much cheaper to deploy, they also aren't good enough for unsupervised agent execution. But you need a model that can run unsupervised for the claim "Code is Cheap" to become possible.

I don't really think so. Maybe it's because the systems I build need to be reliable and understandable to humans, but I don't think Opus 4.6 is good enough to be unsupervised. I've spent a lot of time using it, and I have to tell it no semi-frequently and rewrite by hand/iterate with the model frequently. It saves me a lot of time when used this way, I think, but I still have to give it overall structure and keep it scoped to small changes to prevent it from going down wrong paths and generating tons of unnecessary code (which is how you end up with unmaintainable slop). Less code is pretty much always better, and these models make it really easy to ignore that until it's too late. This is on a healthy mix of greenfield and brownfield projects.

To that end, I've actually found Kimi K2.5 to be "good enough" for a lot of that: not quite as good as Opus 4.6, but good enough that it gets me like 80% of the value for a fraction of the cost and with more speed.


Yeah, my work is in a very similar boat. We might need to drop our Tailscale usage because of this. It sucks.

It looks like it depends on how much you use ACLs. The new ACL cap seems strictly worse than before. Otherwise it seems the new free plan is mostly the same as the old Personal Plus plan for most users.
