> As a result, producers need a way to rapidly explore and validate new formulations without spending months in the lab.
How do you bypass the normal process of pouring test articles and testing them months and years after cure? This is fundamentally a research activity that needs to conduct verifiable science. Not something you can guess at with an LLM.
Hi, I developed the model. We are not bypassing the regular testing process, and are not using LLMs, but Gaussian processes with vetted test data. The predictions are used as recommendations for onsite testing, to accelerate finding mixtures with optimal strength-speed-sustainability trade-offs.
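For readers unfamiliar with the approach: a Gaussian process fits a surrogate model to vetted test data and reports both a predicted value and an uncertainty at untested mixtures, which lets you rank candidates for the next round of physical tests. Here is a minimal numpy-only sketch of that idea (not the actual model from the article; the kernel, hyperparameters, and toy "strength" function are illustrative assumptions):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    # Squared-exponential kernel between the row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2, length_scale=1.0):
    """Posterior mean and std of a zero-mean GP at the test points."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test, length_scale)
    Kss = rbf_kernel(X_test, X_test, length_scale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - (v ** 2).sum(0)
    return mean, np.sqrt(np.maximum(var, 0.0))

# Toy example: one mix parameter (say, a normalized water/cement ratio)
# against a stand-in for measured 28-day strength.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (8, 1))          # 8 mixtures already tested in the lab
y = np.sin(3 * X[:, 0])                # hypothetical measured responses
Xq = np.linspace(0, 1, 50)[:, None]    # candidate mixtures to rank

mu, sigma = gp_posterior(X, y, Xq)

# Candidates with a high predicted value AND high uncertainty get
# prioritized for onsite testing, e.g. via an upper-confidence-bound score.
ucb = mu + 2.0 * sigma
next_mix = Xq[np.argmax(ucb)]
```

The point is the loop, not the kernel: test the top-ranked mixtures physically, fold the results back into the training data, and refit. Nothing about curing or verification is bypassed; the model only chooses which mixes are worth the months-long test.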
Somebody needs to coin a new term for the scattershot zero-thought AI griping that is pervasive in online comments these days. Meatslop?
Obviously it's going to be more productive for a manufacturer to run a years-long curing test on 100 likely candidates instead of 100 random mixes. They already screen candidates through traditional methods, but if this AI technique improves accuracy, all the better.
The current strategy of the AI hype machine is to exhaust people's reserves of attention by presenting a never-ending stream of hard-to-verify "positive" claims. It's Gish Gallop done on the Internet scale with a never-ending parade of tech influencers, proxy "journalists" and low-value accounts. The whole strategy aims for saturation and demoralized acceptance.
It's no surprise that people recalibrate their immediate reactions, expressing hostility and skepticism toward anything AI-related without spending much time on analysis. In fact, it's an entirely rational response.
Complaining about it without acknowledging the larger picture is disingenuous.
In this particular case, using the term "machine learning" would likely avoid the immediate negative reaction.
The Gaussian Processes underpinning this work are hardly a product of the 'AI Hype Machine' - they've been around for decades, have strong statistical underpinnings, and are being widely explored for experimental design across many disciplines. Reflexive and poorly-informed backlash to any variety of machine learning is no more productive than blindly hyping up LLMs.
Meta Platforms, Inc. featuring this technology with a title announcing "AI for American-produced cement and concrete" is, on the other hand, 1000% a product of the AI Hype Machine.
Sure, it's clearly marketing. I think a private company pursuing marketing via open research with open source code (including datasets) is a good trade. A hypey blogpost + research is better than no blogpost and no research.
Was that the one immediately after the great paradigm shift of November 2025, and before the great paradigm shift of January 2026? I think I remember it.
There was no such paradigm shift. LLMs still suck just as much as they did before, in the exact same ways they did before. In 6 months you'll be trying to BS us about the "great paradigm shift of summer 2026".
Reddit's top contributors are decent, but there is an elite niche of people (granted, mostly of the technical variety) who somewhat regularly show up on HN but do not contribute much on Reddit.
It does help, of course, that HN is moderated in good faith and has a more pervasive commitment to self-moderation than Reddit has ever had (outside a few very niche subreddits).
They both share the same problem: nobody who gripes incorrectly like this suffers any consequences, so you may as well gripe at anything and everything. Griping feels good, and you rarely get downvoted on HN for it because griping is such a part of the site culture, whether you're right or not. There's a recent HN guideline about being curmudgeonly, but we all know that guidelines on this site are rarely followed.