
Prison IQ is a very different distribution. As I recall, people in the top 2% of IQ make up something like 20% of the prison population. You also have quite a few at the other end.

The gifted are more overrepresented in prison than black males; however, most of those gifted individuals are themselves minorities.


I’ll have to see some evidence on that; in my searching, it’s basically a normal bell curve shifted 8 points down. The idea that 130+ IQ individuals make up a fifth of the prison population does not pass the sniff test; that would be a crazy statistical aberration. The reports I found say 130+ IQ individuals represent less than 0.4% of the prison population.

The latter is an interesting mindset to advocate for. In almost every other engineering discipline it would be frowned upon. Honestly, I suspect wisdom could be gained by not discounting forethought.

However, I really wonder how Formula 1 teams manage their engineering concepts and driver UI/UX. They do some crazy experimental things, and they have high budgets, but they're often pulling off high-risk ideas on the very edge of feasibility. Every subtle iteration requires driver testing and feedback, and I really wonder what processes they use to tie it all together. I suspect they think about this quite diligently, dare I say even somewhat rigidly. I think it quite likely that the culture behind the intense, detailed way they approach process for pit stops carries over to the rest of their design processes, versioning, and iteration/testing.


Racing like in Formula 1 is extremely different from normal product design: each Formula 1 car has a user base of exactly one, the driver who is going to use it. Not even cars from the same team are identical, for that reason. The driver can basically dictate the UX design because there is never any friction with other users.

Also, turnaround times from idea to final product can be insane at that level. These teams often have to accomplish in days what normally takes months. But they can pull it off by having every step of the design and manufacturing process in house.


Not really. I have a pretty solid 5+% edge over long time periods, even in the competitive markets I bet in. In many markets I think it's closer to 10-20%. These are markets where inside information can't help as much as you'd think.

And even in markets where someone would benefit from inside information... insiders leak a lot more than you'd think before anything hits the market. Even reading the news can tell you more than you'd expect if you look at it right. The single biggest hint is "Why am I reading this, and why now?". News stories on geopolitics almost never arise naturally, and that question will get you to a LOT of information that was never explicitly stated.


That's not been my observation at all. Rationalists are some of the only people to really embrace fuzzy and probabilistic thinking. Am I missing something?


Maybe rationalists aren’t homogeneous? Unfortunately, there is a rather concerning number of news articles detailing cases where some subset of the rationalist community has gone off the deep end.


Can we add microplastics to the list?


I think the people who dismiss this concern are just as bad and unscientific. It's a pretty decent Bayesian prior to assume that regular exposure to synthetic organic compounds, in quantities or concentrations hominids never encountered during our evolution, is likely to be problematic. That's especially true when the compounds don't occur naturally in any organism's biochemical processes and yet are active enough to interact with them. It's OBVIOUSLY true for things used as pesticides and herbicides. We have evolved a natural aversion to areas where all the plants and animals are dead and rotting, and there are good reasons why that's a sound heuristic.
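To make that prior talk concrete, here's a toy Bayesian update; every number below is invented purely for illustration:

    # Toy Bayes update -- all numbers invented for illustration.
    # H: "chronic low-dose exposure to novel compound X is harmful."
    prior = 0.5               # generous prior for a biologically active novel compound
    p_null_if_harmful = 0.6   # P(study finds nothing | weak/delayed harm)
    p_null_if_safe = 0.9      # P(study finds nothing | actually safe)

    posterior = (p_null_if_harmful * prior) / (
        p_null_if_harmful * prior + p_null_if_safe * (1 - prior))
    print(round(posterior, 2))  # 0.4 -- one null study barely moves the needle

Which is the point: for weak or delayed effects, a handful of null studies shouldn't reassure you much.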

I'll go so far as to say that almost any pesticide or herbicide is likely to be bad for vertebrates and invertebrates alike. The same is likely true of preservatives, for what should be obvious reasons.

It's really not that crazy to assume they're probably not good as a default assumption.

Go into a hardware store and almost every chemical, solvent, paint, etc. that you encounter is not good for you. Eat a salmon and enjoy billions of plastic particles. Open almost any prepackaged food and you'll be ingesting all manner of dyes, preservatives, anti-caking agents, and so on that simply weren't around in your food environment during our evolution. It's a surprisingly good baseline assumption that these things aren't likely to be good for you.

If you think about study design in epidemiology, it should be clear that it's going to be very difficult to prove harm for things that are only a little harmful, or only harmful in combination, or harmful only 20 years after exposure, etc. ...except that the science is VERY clear on one point: something (or lots of things) associated with "processed food" is really bad for you.


Sometimes I don't even want to reply to the crazy 'science or it didn't happen' people, but I do it just in case someone less terrible is reading and on the fence. It's amazing that we even have to write what you just did.


> it simply does not require much intelligence (relatively speaking) to build something that points a gun at something and pulls a trigger?

I could not disagree more. A big part of that is also knowing when NOT to pull the trigger. And it’s much harder than you’d think. If you think full self driving is a difficult task for computers, battlefield operations are an order of magnitude more complex, at least.


We have fully autonomous weapons, and we've had them for over a century. We call them "landmines".

I expect autonomous weapons of the near future to look somewhat similar to that. They get deployed to an area, attack anything that looks remotely like a target there for a given time, then stand down and return to base. That's it.

The job of the autonomous weapon platform isn't telling friend from foe - it's disposing of every target within a geofence when ordered to do so.
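A rough sketch of the control loop that description implies; the platform, fence, and classifier interfaces and every threshold here are hypothetical, not any real system's API:

    # Hypothetical loiter-and-engage loop, per the description above.
    import time

    ENGAGEMENT_WINDOW_S = 3600  # "for a given time"
    TARGET_THRESHOLD = 0.7      # "looks remotely like a target"

    def patrol(platform, fence, classifier):
        deadline = time.monotonic() + ENGAGEMENT_WINDOW_S
        while time.monotonic() < deadline:
            contact = platform.next_contact()
            # Note what's absent: no friend-or-foe logic at all.
            # The only gates are the fence, the clock, and a crude score.
            if contact and fence.contains(contact.position) \
                    and classifier.score(contact) >= TARGET_THRESHOLD:
                platform.engage(contact)
        platform.return_to_base()  # "stand down and return to base"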


And the arms industry has been pushing smart mines for decades so that it can keep selling them despite the really bad long-term consequences (well beyond the end of hostilities) and the Ottawa Treaty ban. In the end, land mines keep killing people, even though the mines are supposedly advanced enough not to target persons.

From a security perspective, the “return to base” part seems rather problematic. I doubt you'd want these things concentrated in a single place. And I expect the long-term problems will be rather similar to mines, even if the electronics are non-operational after a while.


"Smart mines" specifically can be designed so that they're literally incapable of exploding once a deployment timer expires, or a fixed design time limit is reached.

It just makes the mines themselves more expensive - and landmines are very much a "cheap and cheerful" product.

For most autonomous weapons, the situation is even more favorable. Very few things can pack the power to sit for decades waiting for a chance to strike. Dumb landmines only get there by virtue of being powered by the enemy.


Well, I assume that they are at least not supposed to attack their autonomous "comrades". Masquerading as one will be an obvious tactic, no? You could argue these things would use e2e-encrypted messages for friend-or-foe designation, but I'd imagine a contested area would be blanketed with jammers, leaving only other channels (light? But smokescreens. Audio? Also easily jammed). So this isn't as easy as most people think.

Edit: No, I don't think a purely defensive stance like landmines is sufficient, or that it's what the people in command have in mind.

We have landmines today. Why spend much more making marginally better, highly intelligent ones with LLMs?


Also, a longer quote from Douglas Adams might be appropriate here (also appropriate to agentic vibe coding...):

Click, hum.

The huge grey Grebulon reconnaissance ship moved silently through the black void. It was travelling at fabulous, breathtaking speed, yet appeared, against the glimmering background of a billion distant stars to be moving not at all. It was just one dark speck frozen against an infinite granularity of brilliant night. On board the ship, everything was as it had been for millennia, deeply dark and silent.

Click, hum.

At least, almost everything.

Click, click, hum.

Click, hum, click, hum, click, hum.

Click, click, click, click, click, hum.

Hmmm.

A low level supervising program woke up a slightly higher level supervising program deep in the ship's semi-somnolent cyberbrain and reported to it that whenever it went click all it got was a hum.

The higher level supervising program asked it what it was supposed to get, and the low level supervising program said that it couldn't remember exactly, but thought it was probably more of a sort of distant satisfied sigh, wasn't it? It didn't know what this hum was. Click, hum, click, hum. That was all it was getting. The higher level supervising program considered this and didn't like it. It asked the low level supervising program what exactly it was supervising and the low level supervising program said it couldn't remember that either, just that it was something that was meant to go click, sigh every ten years or so, which usually happened without fail. It had tried to consult its error look-up table but couldn't find it, which was why it had alerted the higher level supervising program to the problem.

The higher level supervising program went to consult one of its own look-up tables to find out what the low level supervising program was meant to be supervising.

It couldn't find the look-up table.

Odd.

It looked again. All it got was an error message. It tried to look up the error message in its error message look-up table and couldn't find that either. It allowed a couple of nanoseconds to go by while it went through all this again. Then it woke up its sector function supervisor.

The sector function supervisor hit immediate problems. It called its supervising agent which hit problems too. Within a few millionths of a second virtual circuits that had lain dormant, some for years, some for centuries, were flaring into life throughout the ship. Something, somewhere, had gone terribly wrong, but none of the supervising programs could tell what it was. At every level, vital instructions were missing, and the instructions about what to do in the event of discovering that vital instructions were missing, were also missing. Small modules of software - agents - surged through the logical pathways, grouping, consulting, re-grouping. They quickly established that the ship's memory, all the way back to its central mission module, was in tatters. No amount of interrogation could determine what it was that had happened. Even the central mission module itself seemed to be damaged.

This made the whole problem very simple to deal with. Replace the central mission module. There was another one, a backup, an exact duplicate of the original. It had to be physically replaced because, for safety reasons, there was no link whatsoever between the original and its backup. Once the central mission module was replaced it could itself supervise the reconstruction of the rest of the system in every detail, and all would be well.

Robots were instructed to bring the backup central mission module from the shielded strong room, where they guarded it, to the ship's logic chamber for installation.

This involved the lengthy exchange of emergency codes and protocols as the robots interrogated the agents as to the authenticity of the instructions. At last the robots were satisfied that all procedures were correct. They unpacked the backup central mission module from its storage housing, carried it out of the storage chamber, fell out of the ship and went spinning off into the void.

This provided the first major clue as to what it was that was wrong.


You don’t need Anthropic for this use case, so obviously this use case is not what the current fight is about.


You don't need Anthropic for any use case. They don't ship VLAs (vision-language-action models) either - nothing from Anthropic's entire model lineup can run on a killer drone.

Which raises the question: why did the Pentagon try to pressure Anthropic at all?

On the principle of it? Political reasons? Or was the real concern "domestic warrantless surveillance"?


I guess by that definition, a bullet is also autonomous. It will strike anything in its path of flight, autonomously without further direction from the operator.


Bullets don't kill people, etc. etc.

If anything represents the logical conclusion of that tired fallacy, it'll be actually autonomous, "thinking" drones which make the targeting decisions and execution decisions on their own, not based on any direct, human-led orders, but derived from second-order effects of their neural net. At a certain point, it's not going to matter who launched the drones, or even who wrote the software that runs on the drones. If we're letting the drones decide things, it'll just be up to the drones, and I don't love our chances making our case to them.


"Since the end of the Vietnam War in 1975, unexploded ordnance (UXO)—including landmines, cluster bombs, and artillery shells—has killed over 40,000 people and injured or maimed more than 60,000 others." - Google AI Overview "How many children were maimed by landmines after the vietnam war"


Yes, but it doesn’t have to be error-free. Friendly fire rates in symmetrical hot wars are pretty high; they’re considered a cost of going to war.

If autonomous weapons lead to a net battlefield advantage, I agree with the GP: they will be used. It is the endgame.


The big asterisk in what you're saying is that, as with self-driving cars, it's hardest when you want to be the most precise and limit the downsides. In that paradigm, false positives and false negatives both carry a very big cost.

If you simply wanted to cause havoc and destruction with no regard for collateral damage, the problem space is much simpler, since you only need enough true positives to be effective at your mission.
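That asymmetry is just the standard cost-sensitive decision rule: act only when p > C_fa / (C_fa + C_miss). A tiny sketch with invented costs:

    # Act only if the expected cost of holding fire exceeds that of acting:
    # p * C_miss > (1 - p) * C_fa  =>  p > C_fa / (C_fa + C_miss)
    def should_act(p_target, cost_false_alarm, cost_miss):
        threshold = cost_false_alarm / (cost_false_alarm + cost_miss)
        return p_target > threshold

    print(should_act(0.3, cost_false_alarm=1, cost_miss=10))    # True: threshold ~0.09
    print(should_act(0.3, cost_false_alarm=100, cost_miss=10))  # False: threshold ~0.91

Havoc mode clears a ~9% bar; precise targeting demands ~91% confidence from the same sensors.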

The ability to code with AI has shown that it takes an even higher level of responsibility and discipline than before to get good results without out-of-control downside. I think the ability to kill with AI would be the same, but even more severe.


> A big part of that is also knowing when NOT to pull the trigger

"In a press conference, Musk promised that the Optimus Warbots would actually, definitely, for real, be fully autonomous in two years, in 2031. He also extended his condolences to the 56 service members killed during the training exercise"


I've not watched all of RoboCop (too much gore for me), but I have seen the boardroom introduction of the ED-209.

That's how I imagine a Musk demo of this kind of thing would play out, if his team can't successfully manage upwards.


And the US learned the hard way in Iraq that even human intelligence struggles with this. There were major problems throughout the war with individual soldiers not adhering to the published rules of engagement.


Yes, but the important bit is that autonomous drones can't be held accountable for not adhering to the published rules of engagement.


I keep getting timeouts so I'm unable to test this. However, I have a suggestion:

What's really needed IMO is a drop-in tool to increase the ranking of thoughtful comments and decrease the ranking of comments that drive engagement by making people angry. You need your tool to score comments on a scale for THAT. Combine that with policy mandating its use on algorithmically ranked sites with audiences above a threshold size, and you have a tool to bring civility back to society. I don't think angry comments should be censored; I think they just should not be artificially amplified into everyone's feeds. While not perfect, there's a wonderful difference between Hacker News comments and Reddit comments, and a great deal of it stems from the culture of self-moderation here.

Amplifying people with nuanced takes would go a long way, honestly. As it stands, adversary countries are using this artificial anger amplification as a weapon, and it's thus far been devastatingly effective.
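A minimal sketch of the re-ranking I mean, with a stub where the hard part (the scoring model) would go:

    # Hypothetical feed re-ranking: stop amplifying rage-bait, boost nuance.
    def classify(text):
        # Stub: a real tool would use a trained model returning
        # (anger_score, nuance_score), each in [0, 1].
        return 0.0, 0.5

    def rerank(scored_comments):
        # scored_comments: list of (text, engagement_score) pairs
        def adjusted(item):
            text, engagement = item
            anger, nuance = classify(text)
            # No censorship: angry comments stay; they just aren't amplified.
            return engagement * (1.0 + nuance - anger)
        return sorted(scored_comments, key=adjusted, reverse=True)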


Ethical humans are pretty hard to come by if you put them under a microscope.


"Not beating women" doesn't require a microscope.


I agree, but when you’re dealing with celebrities, people sometimes lie and exaggerate, and third parties sometimes extrapolate beyond any semblance of grounded fact. So most people subject to that level of scrutiny and fame are likely to have some allegations against them, whether true or not.

Hendrix’s girlfriend Kathy Etchingham claims he never abused her. Some third parties dispute her claims about their relationship.

His arrest record suggests at least some type of altercation with a previous girlfriend, but it’s far from clear-cut to me.

People are complex and reality is complex. I myself was subject to false accusations of abuse from a disgruntled ex-girlfriend (who actually WAS physically and mentally abusive to me, and I have the scars to prove it).

But regardless, I have zero issue reflecting on a person’s accomplishments and talents even in the context of them being a horrible person. In fact, I find that part of the intrigue of really talented people. Reality and people are quite multi-dimensional. The only general rule I know is that nobody is perfect, and holding up ANYONE as some example of moral perfection is almost certainly wrong.


> it's sort of like running a JS crypto miner in the background on your website.

To be honest, I wish the web had standardized on that instead of ads.

