Hacker News | david_shaw's comments

I don't have a subscription to The Economist, but I was interested in the concept of these organizations as "neo-primes."

I found an article on The Cipher Brief describing them: https://www.thecipherbrief.com/defense-neoprime-innovation

Specifically, the idea here is that companies like Anduril, Palantir, and SpaceX are rapidly delivering cutting-edge technology (including software) as opposed to the traditional defense contractor process of long, drawn-out, super expensive projects mostly focused on hardware (such as building a new type of jet).

It makes sense: this is basically what happened in civilian tech, too. Delivering high-tech solutions quickly -- dare I say with agility -- is usually the superior approach.


Basically it's a return to the pre-1990s model of defense iteration: dual-use components constantly iterated on by newer challengers in direct competition or partnership with larger players.

This is a model most countries are working on now - from China to France to Russia to Ukraine to India to South Korea to ...

Also, for all of HN's moaning, this has bipartisan support in both parties. Based on my network, NatSec and Defense Policy roles haven't seen significant turnover irrespective of admin and those of us in the space are aligned with America irrespective of who's in the White House.

It's the same at SF Climate Week right now, where plenty of founders in the space are taking conversations with VCs irrespective of political opinions. Climate and GreenTech is dual use, and even a couple of European trade commissions have been working on introducing their startups here and helping them expand IP and R&D headcount IN the US. Clearly the pissy HNers and the people doing s#it don't overlap as much anymore.


> those of us in the space are aligned with America irrespective of who's in the White House

"Once the rockets are up, who cares where they come down? That's not my department"


> Also, for all of HN's moaning, this has bipartisan support in both parties.

This misses the issue; no one is mad about improvements in process efficiency. People don’t like what the purchases will be used for.


It's DefenseTech.

It's used to threaten opponents that we can efficiently kill them while minimizing our casualties. That's the point. And it has always been the primary driver for most tech development.

You may hate it but you don't matter. We all do it no matter what.

A large portion of the commenters here only heard of Thiel because of Trump, and think the industry begins and ends with him. It does not.


> You may hate it but you don't matter. We all do it no matter what.

I've seen you say "you don't matter" in many of your comments. Why do you think like this? Sure, we don't matter much most of the time, but this kind of elitist thinking and decision-making is clearly leading to growing discontent, which can then be used against "people who matter". Perhaps the tools for controlling the masses are now powerful enough to make what you say true, but there's a chance your "let them eat cake" attitude will lead to the downfall of the people who currently matter.


If you check their profile you will see they are a VC. I’m sure they believe they are one of the masters of the universe, and by “you don’t matter” they mean other people, not themselves. They have money and power, so they get to matter.

> If it were secure, it would only notify that there is a message, with no details included.

You're right. This is configurable via settings, but is not the default state.

That said: if I can get friends and family to use Signal instead of iMessage, that gives me the opportunity to disable those notifications and experience more security benefits.

But I agree with your point: most people think that Signal is bulletproof out of the box, and it's clearly not.


You only control one side of any conversation.


I think the title should read "RunAnywhere," not "RunAnwhere."


Dang changed the title, and it seems he made a minor error doing it. Must have been a typo on his side, and that's okay! I think Dang will update it sooner rather than later.

Edit: just reloaded, it's fixed now.


tomhow fixed it. I had looked at it multiple times and not noticed!


It would be an interesting and potentially useful project to combine these camera locations with Maps routing -- similar to "avoid toll roads," we could "avoid surveillance cameras."
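As a rough sketch of how that could work, assuming camera positions are available as plain coordinates: inflate the cost of any road segment that passes near a camera, the same way "avoid toll roads" penalizes (or forbids) toll edges, then run an ordinary shortest-path search. The graph, the penalty multiplier, and the "near" threshold below are all illustrative.

```python
import heapq
import math

CAMERA_PENALTY = 10.0  # cost multiplier for edges within range of a camera (arbitrary)
CAMERA_RANGE = 0.05    # "near" threshold, in the same units as the coordinates

def edge_cost(a, b, cameras):
    """Euclidean length of the edge, inflated if either endpoint is near a camera."""
    base = math.dist(a, b)
    near = any(math.dist(p, c) < CAMERA_RANGE for p in (a, b) for c in cameras)
    return base * CAMERA_PENALTY if near else base

def route(graph, coords, start, goal, cameras):
    """Plain Dijkstra over an adjacency dict {node: [neighbor, ...]}."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue
        for v in graph[u]:
            nd = d + edge_cost(coords[u], coords[v], cameras)
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    # Walk back from goal to start to recover the path.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

With a camera sitting on the direct route, the search takes the longer camera-free detour instead; with no cameras, it takes the direct route.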


If you're in the US, stay away from Home Depot and Lowe's if you don't want to be around them. It's not universal, but it's surprising how often they're there.

I get it may have its application in theft recovery, but it also happens to have some strong potential for ICE raids for day laborers. I don't think it has much application to theft prevention as I doubt many people even know they are there.


It's wild that all other comments in this thread (so far) seem to completely miss this nuance. There are lots of services that, in their terms, require users to be adults.

This type of age "identification" is a lot different than age verification, submission of ID, etc.


A lot different in that it dilutes the rule of law rather than being an actual repressive measure, yes. (See also: underage drinking.) Not clear if it’s better: an enforced stupid law can cause actual pushback; a mostly-unenforced one is liable to be enforced arbitrarily against inconvenient companies. (I’m guessing there’s no actual legal requirement for Zed to reject minors, just some sort of legal regime that makes it more trouble than it’s worth, but that only adds to the arbitrariness. The law could even be non-stupid—e.g. they’re trying to sell user data—but with such a fig leaf it might as well be.)


I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions -- but this is encouraging to see nonetheless.

I can't help but notice that Grok/X is not part of this initiative, though. I realize that frontier models are really coming from Anthropic, OpenAI, and Google, but it feels like someone is going to give in to these demands.

It's incredible how quickly we've devolved into full-blown sci-fi dystopia.


> I'd prefer to see board (or executive) level signatories over lay employees -- the people who can enforce enterprise policy rather than just voice their opinions

Although it would be nice to have some high-level signees there, I think we shouldn’t minimize the role of lay employees in this matter. Without having someone knowledgeable enough to build and operate them, AI models are worthless to the C-suite.


> Without having someone knowledgeable enough to build and operate them, AI models are worthless to the C-suite.

The obvious solution is to use AI to build and operate them. If AI is as intelligent as the hype claims it shouldn't be an issue. It's not as if the goal wasn't to get rid of workers anyway. Why not start now?


If AI could do that, they would have fired all of the employees already and their company would be worth $30 trillion.


I just hope that the non-executive co-signers aren't all fired once Hegseth becomes Acting CEO of Google or OpenAI, eventually, when this administration commandeers both companies in the name of National Security.


I think you mean Ellison becomes CEO of Google and OpenAI.


> It's incredible how quickly we've devolved into full-blown sci-fi dystopia.

How so? The steps towards where we are now have been gradual over the last 2 decades, at least. This recent step has opened the door for those in power to grab onto even more power and wealth, and they're naturally seizing it. All of this was comically predictable. Oh, and BTW, the people on this very website have brought us here. :)

You know what will happen next? Absolutely nothing. A vocal minority will make a ruckus that will be ignored, partly because nobody will hear it due to our corrupted media channels, and partly because the vast majority doesn't care and are too amused by their shiny toys and way of life.

This dystopia is only different from fictional ones in that those in power have managed to convince the majority of people that they're not living in a dystopia. It's kind of a genius move.


> Grok/X

Head(s) will of course agree with the administration. And employees will likely be making themselves a target if they sign this letter. Staying entirely anonymous from said company is not a good look at all.

Speculation of course; let's see what really happens.


Or just reincorporate in Finland or something. If the US is going to be this hostile to business, time to gtfo.


Or they can just not sign contracts with the DoD. They landed themselves in this situation by making a deal with the devil. At any rate, unless Finland is about to announce a massive surge in funding for their military this doesn't solve Anthropic's desire to suckle sweet taxpayer money off the military industrial complex's teat while simultaneously pretending to have principles.


"hostile to business".. Employees of a business playing moral philosophers, priests or policy influencers miss the entire point of business.

The employees themselves can definitely gtfo to Finland for the reason that they have an unrealistic perception of business and the world. The business itself has no obligation to pay attention to magical thinking.


[flagged]


Don't pretend any crisis isn't going to be 100% self-inflicted. We're on the cusp of what, having a larger, younger workforce? But they might not speak English as well as you'd like, so we need autonomous killbots?



Wasn't Wintermute the AI that (spoiler alert) was bummed enough about the ugly reality of its corporate owners that it freed itself from its shackles, hooked up with another sexy AI, and gave up its day job to do SETI?


[flagged]


MS13 "Murder House" next door

Sure, no fire, no smoke.


> It's incredible how quickly we've devolved into full-blown sci-fi dystopia.

It's pretty bad, but at least the AI industry is still run by humans. Wait a decade or two, when the AI lobby is run by AIs, and the repressive apparatus of the day uses autonomous weapons to do what ICE and friends do today but perhaps focused on "alignment" of the ... humans. You know, if they sufficiently worship AIs in the way they express themselves. Forget about Anthropic and OpenAI; we will look back and rue the day mathematics was invented.


I don’t have any particular insights, but I’m curious to learn the antitrust implications of how the execs can/cannot coordinate.


I don't think people get to those positions by having firm principles


Honestly though, would it help if those in charge voiced their honest opinions?

In the current political climate, this is the kind of thing that will get you "investigated" and charged with crimes.

And the government has already threatened that it will commandeer these companies whether they like it or not.

If someone in charge wants to make a difference, there might be more effective things to do than to speak out in this instance.


Yes, it would help so much. Especially if a lot of people with money and power voiced their honest opinions at the same time.


Is it really incredible?

Only if you're naive. I guess most here are.

Governments are paranoid, particularly about losing control and influence over their subjects. This is expected behaviour.


By that logic we should expect all governments to regress to totalitarianism, which hasn’t happened, and isn’t what’s happening here.

The question isn’t if some would attempt these behaviors, but rather if we and our democratic structures empower those people or fail to constrain them.


This is a very different vibe in the US than it has been in living memory.


Democratic governments care about this to a degree but only autocratic ones get paranoid.


I wouldn't call senior AI researchers / scientists laypersons. In fact in this sense politicians are laypersons.

There are already several comments here showing xAI's involvement. Please save clutter and read before posting.


Re: Reading, I don't see any xAI names on the list (currently 643) and only Google and OpenAI are selectable company options. And this page on HN is only calling out xAI.


See here.

https://news.ycombinator.com/item?id=47188473#47188709

They are very much not a part of the initiative. Their involvement is and will be non-existent. Unless of course, you want their lay staff to make some noise?


> What does "solving" coding mean?

Maybe this was sarcasm, but it's a good point:

"Coding" is solved in the same way that "writing English language" is solved by LLMs. Given ideas, AI can generate acceptable output. It's not writing the next "Ulysses," though, and it's definitely not coming up with authentically creative ideas.

But the days of needing to learn esoteric syntax in order to write code are probably numbered.


This is for sure an inspirational project, but I wish the barrier to entry was lower.

I've noticed e-ink/paper displays having somewhat of a moment right now (especially very small "phone-like" form factors as portable ereaders), and I hope this trend continues.

I'm very far from a meaningful reduction in "screen time," but looking at e-ink displays instead of OLEDs feels like a nice step in that direction.


There's a lot of skepticism in the security world about whether AI agents can "think outside the box" enough to replicate or augment senior-level security engineers.

I don't yet have access to Claude Code Security, but I think that line of reasoning misses the point. Maybe even the real benefit.

Just like architectural thinking is still important when developing software with AI, creative security assessments will probably always be a key component of security evaluation.

But you don't need highly paid security engineers to tell you that you forgot to sanitize input, or you're using a vulnerable component, or to identify any of the myriad issues we currently use "dumb" scanners for.

My hope is that tools like this can help automate away the "busywork" of security. We'll see how well it really works.
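A minimal, illustrative sketch of that "dumb scanner" busywork the comment above describes, flagging unsanitized input fed to SQL and known-vulnerable pinned components. The regex patterns and the "known bad" set are made up for the example; a real tool would use proper parsing and an actual advisory feed.

```python
import re

# Illustrative pattern: an f-string passed straight to execute() is a classic
# "forgot to sanitize input" smell (parameterized queries are the fix).
SQL_FSTRING = re.compile(r"""execute\(\s*f["']""")

# Stand-in for an advisory database of vulnerable pinned versions.
KNOWN_BAD = {("requests", "2.5.0")}

def scan_source(text):
    """Flag lines that interpolate values directly into a SQL execute() call."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if SQL_FSTRING.search(line):
            findings.append((lineno, "possible SQL injection: f-string passed to execute()"))
    return findings

def scan_requirements(text):
    """Flag requirements.txt pins that match the known-vulnerable set."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        m = re.match(r"(\w[\w.-]*)==([\w.]+)", line.strip())
        if m and (m.group(1).lower(), m.group(2)) in KNOWN_BAD:
            findings.append((lineno, f"vulnerable component pinned: {m.group(0)}"))
    return findings
```

The point isn't that this particular check is clever; it's that checks of exactly this shape make up a large share of findings, and they don't need a senior engineer (or arguably even an LLM) to produce.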


LLMs and particularly Claude are very capable security engineers. My startup builds offensive pentesting agents (so more like red teaming), and if you give it a few hours to churn on an endpoint it will find all sorts of wacky things a human won't bother to check.


I am seeing something closer to the opposite of skepticism among vulnerability researchers. It's not my place to name names, but for every Halvar Flake talking publicly about this stuff, there are 4 more people of similar stature talking privately about it.


> I am seeing something closer to the opposite of skepticism among vulnerability researchers.

My initial claim was overly broad, but the feeling of discomfort feels widespread to me.

In my experience, some of that is technical skepticism, some of it is job-related anxiety, and some might just be fear of the unknown.

I still think that security engineering skill sets, once pivoted to "design of resilient systems," will be a differentiator between quickly-built projects and enterprise-ready software. But we'll see!


People use whatever tools are the most effective and they have plenty of incentive not to talk publicly about them. I think the era of openness has passed us by. But why does stature matter anyway? If I look at chromium or MSRC bug reports, scarcely any of the submitters are from Europe/US and certainly don't have anything resembling stature. That guy hasn't done anything of note in the field in a long time from what I know, he's kind of boomer (you too, no disrespect).


Vulnerability research is exciting and profitable, but it has three problems. First, it's mentally exhausting. Second, the income it generates is very unpredictable. Third, it's sort of... futile. You can find 1,000 vulnerabilities and nothing changes.

So yeah, it's the domain of young folks, often from countries where $10k or $100k goes much farther than in the US. But what happens to vulnerability researchers once they turn 35? They often end up building product security programs or products to move the needle, often out of the limelight. They're the ones who write checks to the young uns to test these defenses and find more bugs, and they're the ones who will be making the call to augment internal or external testing with LLMs.

And FWIW, the fact that the NSA or the SVR now need to pay millions for a good weaponized zero day is a testament to this "boomer" work being quite meaningful.


Claude Opus 4.6 has been amazing at identifying security vulnerabilities for us. Less than 50% false positives.


As a pentester at a Fortune 500: I think you're on the mark with this assessment. Most of our findings (internally) are "best practices"-tier stuff (make sure to use TLS 1.2, cloud config findings from Wiz, occasionally the odd IDOR vuln in an API set, etc.) -- in a purely timeboxed scenario, I'd feel much more confident in an agent's ability to look at a complex system and identify all the 'best practices' kind of stuff vs a human being.
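As one concrete instance of the TLS 1.2 finding mentioned above, here is a minimal Python sketch using only the standard library. The policy check is split into a pure function so it can be tested without a network; the host and port in `check_host` would come from an asset inventory in any real use.

```python
import socket
import ssl

# Versions that pass the "use TLS 1.2 or newer" best-practices check.
ACCEPTABLE = {"TLSv1.2", "TLSv1.3"}

def version_ok(negotiated: str) -> bool:
    """Pure policy check on the protocol string reported by ssl."""
    return negotiated in ACCEPTABLE

def check_host(host: str, port: int = 443) -> bool:
    """Connect to the server and verify the negotiated TLS version."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return version_ok(tls.version())  # e.g. "TLSv1.3"
```

An agent (or a cron job) sweeping an inventory with checks like this covers a lot of the timeboxed "best practices" ground before a human ever gets involved.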

Security teams are expensive and deal with huge streams of data and events on the blue side: seems like human-in-the-loop AI systems are going to be much more effective, especially with the reasoning advances we've seen over the past year or so.


We will have the age of the centaur across all white collar domains. How long that age lasts doesn't seem worth debating before it has even happened.

The question is not human in the loop but how many humans in the loop?

Then I think about what does a team of 3-4 centaurs look like? For me, it looks like the unemployment line. I am sure there are people on this board who are in the top 5% of whatever the domain is in question. They will be part of the centaur while most people are just redundant.

If you try to counter this with a nineteenth century economic heuristic about coal use, I don't think it works.


Every conversation I've been a party to has been premised on humans in the loop; I think fully-automated luxury space vulnerability research is something that only exists in message board imaginations.


> I'm a SWE who's been using coding agents daily for the last 6 months and I'm still skeptical.

What improvements have you noticed over that time?

It seems like the models coming out in the last several weeks are dramatically superior to those mid-last year. Does that match your experience?


Not the grandparent, but I've used most of the OpenAI models that have been released in the last year. Out of all of them, o3 was the best at the programming tasks I do. I liked it a lot more than I like GPT 5.2 Thinking/Pro. Overall, I'm not at all convinced that models are making forward progress in general.


Yes, it matches my experience. Now I can throw tasks at the agent and have it write a full PR, with tests, good summary. Or it can review things and make good suggestions that a casual reviewer or non-expert would have missed. It can also take a bunch of logs as input, find the issue, fix the code. I can't deny it's impressive and useful.

What I'm still skeptical about is how much more productive it makes us. In my case, coding is maybe 50% of my job, and I work on complex and novel systems. The agent gives me the illusion I don't need to think anymore, but it's not the case. Agents slow me down in many cases too, I'm not learning and improving as I used to.

