Hacker News | nextos's comments

Seconded. All the Haskell people from Chalmers have produced very interesting work. Another example is Agda.

Different methods find different things. Personally, I'd rather use a language that is memory safe plus a great static analyzer with abstract interpretation that can guarantee the absence of certain classes of bugs, at the expense of some false positives.
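To make the trade-off concrete, here is a toy interval abstract interpretation (an illustrative sketch, not how Astrée actually works): each variable is tracked as a [lo, hi] range, which lets the analyzer prove e.g. "no division by zero" for every input in a range, while the over-approximation is exactly where false positives come from.

```python
# Toy interval domain: a value is abstracted as a pair (lo, hi).

def interval_add(a, b):
    # Sound addition: the result range covers all concrete sums.
    return (a[0] + b[0], a[1] + b[1])

def interval_join(a, b):
    # Join (union over-approximation), e.g. at a control-flow merge.
    # This is where precision is lost and false positives arise.
    return (min(a[0], b[0]), max(a[1], b[1]))

def div_provably_safe(den):
    # Division is provably safe only if 0 lies outside the
    # denominator's interval.
    return den[0] > 0 or den[1] < 0

x = (1, 10)                    # x ∈ [1, 10]
y = interval_add(x, (5, 5))    # y = x + 5 ∈ [6, 15]
print(div_provably_safe(y))    # True: y can never be 0

# False positive: the real values are {-1, 1}, never 0, but the
# interval join yields [-1, 1], which contains 0, so the analyzer
# must flag a division by this value.
sign = interval_join((-1, -1), (1, 1))
print(div_provably_safe(sign))  # False: flagged despite being safe
```

The key property is soundness: if the analyzer says a division is safe, it is safe for all inputs; the price is occasionally flagging code that never actually fails.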

The problem is that these tools, such as Astrée, are incredibly expensive and therefore their market share is limited to some niches. Perhaps, with the advent of LLM-guided synthesis, a simple form of deductive proving, such as Hoare logic, may become mainstream in systems software.
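For readers unfamiliar with Hoare logic, the idea is the triple {P} C {Q}: if precondition P holds before running command C, postcondition Q holds after. A minimal sketch, with the triple encoded as runtime assertions (the ghost variables and checks are illustrative, not output from any specific tool):

```python
def swap_verified(x, y):
    a, b = x, y       # "ghost" variables capturing the initial state
    # {x == a and y == b}    precondition P
    t = x
    x = y
    y = t
    # {x == b and y == a}    postcondition Q
    assert x == b and y == a
    return x, y
```

A deductive prover discharges such conditions statically for all inputs; LLM-guided synthesis could plausibly help by proposing the intermediate assertions that make the proof go through.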


Also Claude owes its popularity mostly to the excellent model running behind the scenes.

The tooling can be hacky and of questionable quality yet, with such a model, things can still work out pretty well.

The moat is their training and fine-tuning for common programming languages.


>> Also Claude owes its popularity mostly to the excellent model running behind the scenes.

It's a bit of both. Claude Code was the tool that made Anthropic's developer mindshare explode. Yes, the models are good, but before CC they were mostly just available via multiplexers like Cursor and Copilot, via the relatively expensive API.


Even small brand new apartments tend to have their own sauna, which is quite impressive.

It should be better for the reasons explained in the article. Pure functions require no context to understand. If they are typed, it's even simpler. LLMs perform badly on code that has lots of state and complex semantics. Those are hard to track.

In fact, synthesis of pure Haskell powered by SAT/SMT (e.g. Hoogle, Djinn, and MagicHaskeller) was already of some utility prior to the advent of LLMs. Furthermore, pure functions are also easy to test given that type signatures can be used for property-based test generation.
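The property-based testing point can be shown with a hand-rolled version of the idea (a sketch of what QuickCheck-style tools do; real tools like QuickCheck or Hypothesis generate and shrink inputs far more cleverly):

```python
import random

def rev(xs):
    # Pure function: the output depends only on the input,
    # so no surrounding context is needed to test it.
    return xs[::-1]

def prop_involution(xs):
    # Property: reversing twice is the identity.
    return rev(rev(xs)) == xs

random.seed(0)
for _ in range(100):
    # "Generate from the type": a signature like [Int] -> [Int]
    # tells the tool exactly what inputs to synthesize.
    xs = [random.randint(-100, 100)
          for _ in range(random.randint(0, 20))]
    assert prop_involution(xs)
print("100 random cases passed")
```

With stateful code, by contrast, each test would first have to construct the right hidden state, which is precisely what makes both testing and LLM-based synthesis harder.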

I think once all these components (LLMs, SAT/SMT, and lightweight formal methods) get combined, some interesting ways to build new software with a human-in-the-loop might emerge, yielding higher quality artifacts and/or enhancing productivity.


Wouldn't a fair counterargument be that LLMs have been trained on way less functional code, though?

Like, they are trained on a LOT of JS code -> good at JS. Way less functional code -> worse performance?


You can write functional-style code in many languages, as I have in JS and occasionally Python to great benefit.

For sure. I write functional-style code in C#, but it is not the same thing as writing OCaml or F#.

That's a very fair point. There are some publications showing lower performance for languages with less training data. I imagine it also applies to different paradigms. Most training code will be imperative and of lower quality.

I think LLMs are great at compression and information retrieval, but poor at reasoning. They seem to work well with popular languages like Python because they have been trained with a massive amount of real code. As demonstrated by several publications, on niche languages their performance is quite variable.

I used to find it better to shortcut the AI by asking it to write Python to do a task. Claude 4.6 seems to do this without prompting.

Edit: working on a lot of legacy code that needs boring refactoring, which Claude is great at.


You have a point but current LLM architectures in particular are very fragile to data poisoning [1,2].

[1] https://www.anthropic.com/research/small-samples-poison

[2] https://arxiv.org/abs/2510.07192


Yes, there are quite a few anti-AI projects. https://old.reddit.com/r/badphilosophy/wiki/index


No idea why you're being downvoted. We can't yet even demonstrate that LLMs will withstand training on their own output as they pollute the Internet.

This is very impressive; top labs doing research often don't have experimental designs this elaborate. Was the TCR- and BCR-seq you conducted helpful for designing cell therapies and neoantigen vaccines, and for monitoring progress?

Given that you carry the HLA-B*27:05 allele, you may have been blessed with a predisposition to a better response. But you'll probably want to keep an eye out for future autoimmunity issues. Talking from experience...


Thanks for the warning, I hope that it wasn't a personal experience for you.

Thanks for the compliment about the elaborate design. I think that when you make something for one or a few patients it is easier to be more elaborate, even with the same knowledge and equipment.

Maybe the TCR and BCR-seq was most helpful for mRNA design and effectiveness monitoring, but hopefully someone else on my team will answer that better.


The TCR sequencing has been helpful for downselecting TCRs for a TCR-based cell therapy, and for monitoring response to various immune therapies (including the vaccines).


Interesting, thanks for your replies.

You should consider publishing a patient case report somewhere, as I believe there are lots of valuable conclusions to be extracted from your work.



