Hacker News | catigula's comments

Who's "the enemy"? I surrender.

The philosophy and structure we rest on is much more precarious than our technologies.


Avoid becoming important enough to be targeted by any nation state

Getting a little suspicious that we might not actually get AGI.

Dude, we don't even have GI.

Well I do have GI issues but that’s a whole other problem

Heh, touché. I mean that there's nothing to suggest that the types of intelligence we have are all the possible types. The human blend might be just one part of the story: specific, not general.

1. No.

2. You cannot "control" superintelligent AI.


The implication is that they're pretending to be legitimate employees while actually exfiltrating IP on behalf of a hostile nation state. Seems valid.


You mean like the DOGE team?


I gathered from this article that Palantir apparently has complete transparency into, i.e. a "profile" of, every UK citizen.

This is glossed over and not really mentioned as an issue...


These psychedelic treatments always have substantial limitations, and this one is no different:

1. Low volume cohort i.e. 40 participants per dose group

2. Industry-sponsored study, i.e. MindMed.

3. Think about it: how do you blind psychedelics? It's pretty obvious you're on one when you take it.


I recall an experiment where the control group was given Ritalin, and the participants had presumably tried neither Ritalin nor the psychedelic.

I thought it was pretty cool: the control group will still "feel" something and potentially think "oh, this is it," but since the effects of stimulants like Ritalin have been studied more thoroughly, the researchers can account for them.


Let me guess: those limitations are "unscientific" in this context, but when the article is about the dangers of cannabis, they are suddenly okay?


This isn't unscientific per se, it's just low-quality science. No conclusions should be drawn. There are known treatments backed by extremely robust science.


An AI can only be tuned to be either sycophantic or adversarial.

It isn't possible to tune an AI to have some sort of "correct answer" orientation, because that would require full AGI.


At this point, given that we basically literally have AGI, pursuing other avenues seems like an interesting approach.


This argument is going to be skewered in court.


I absolutely don't want random strangers talking to me, and yet I cannot be alone.

