glenstein's comments | Hacker News

I was going to say the same thing. OP in this case would not count toward either percentage; what you have to wonder is how many people who get charges dropped were first put through the wringer.

It also makes the act of accusing incredibly powerful, and you have to wonder what the threshold is and whose accusations matter, because punishment this severe for charges that are ultimately dropped gives accusers enormous leverage.


>Rational, evidence-based skeptics like Mick are doomed to Sisyphean toil because even after they've resoundingly explained a hundred vague claims, UFO (and Chem-Trail, Flat Earth, etc) true believers will always find a new one to hitch their belief to.

Right. And I do think that meticulous effort is invaluable because it heightens the cost of cognitive dissonance which can be important to reaching people on the sidelines.

But it makes you wonder if the debunking community should be a bit more intentional about intercepting whatever psychological processes make people immune to evidence-based correction, and target those mechanisms with the same meticulousness they apply to the particulars of a debunk.

Although obviously the trouble with that is that such a task would amount to helping steer such people into a fabric of social and cultural connectedness that's more valuable to them than the conspiracies are. Which seems a tall order. But maybe engineering an alternative psychological virus that crowds out the conspiracies in favor of something else is a more efficient option.


> But it makes you wonder if the debunking community should be a bit more intentional about intercepting whatever psychological processes make people immune to evidence-based correction, and target those mechanisms with the same meticulousness they apply to the particulars of a debunk.

You haven't spent much time arguing with people who refuse to listen to any evidence at all, have you? The "psychological processes" you describe are, in many cases, that people will simply stick their (metaphorical) fingers in their ears and say "La la la, I'm not listening!" In other words, a willful, determined refusal to listen.

It's not a matter of psychological processes, at least not for the people I've interacted with in the past. It's plain and simple refusal. They've decided that they're right, they know it, and nobody is going to tell them otherwise, darn it!

As the old quote goes (which is apparently very difficult to pin down to its origin): "My mind is made up. Don't confuse me with the facts!" (https://quoteinvestigator.com/2013/02/13/confuse-me/)

P.S. Edited to add this, because I meant to write it earlier and forgot: It's just stubbornness. You can't cure stubbornness with psychoanalysis. Some people just don't want to believe in what you're trying to tell them. As the even older quote goes, "You can lead a horse to water, but you can't make him drink." You can lead a stubborn person to all the evidence in the world, but you can't make him think.


> nobody is going to tell them otherwise

Indeed. As an example, there was a one-line response far down-thread from my GP which basically said only "Mick West is not credible" (which the poster has since deleted). I found this remarkable because Mick West, more than any skeptic I've seen, meticulously cites all his sources and doggedly sticks to only well-evidenced, fully supported facts. No broad claims, blanket dismissals, appeals to authority or consensus. He just does the work of collating relevant evidence and practical experiments which anyone can confirm and replicate for themselves. Because he's not asking us to trust anything we can't verify ourselves, his credibility is irrelevant.

Which made me want to reply, "If Mick West isn't credible, name one source of evidence which counters UFO true belief who IS credible in your opinion?" The obvious point being, there are none, because their belief is unfalsifiable. But then I remembered why engaging with those holding unfalsifiable beliefs is futile... the main point of your post. :-)


Unique observation conditions definitely can and do make those difficult to identify in some cases. Omniscience in all cases does not follow from success in routine cases.

The Pentagon, White House, &c are not unusual or unique observation conditions. These weren't just UFOs at the time; they remain unidentified now, after going through extensive review regimes.

I think it's bad manners to bluntly tell someone they should "read up" on something, because it naturally reads as a kind of veiled accusation of not being sufficiently well informed. There are ways of broaching the topic of what background knowledge is informing their perspective that don't involve the accusation.

Just to add a small bit of anecdotal value so this comment isn't just a scold: one time, many years ago, I suggested that an elegant way for Twitter to handle long-form text without changing its then-iconic 140-character limit was to treat it like an attachment, like a video or image. Today, you can see a version of that in how Claude takes large pastes and treats them like attached text blobs, or to a lesser extent in how Substack Notes can reference full-size "posts", another example of short-form content "attaching" longer form.

I was bluntly told to "look up twitlonger", which I suppose could have been helpful if I had indeed not known about twitlonger, but I had, and it wasn't what I had in mind. I did learn something from it though: it's a mode of communication that implies, with plausible deniability, that you don't know what you're talking about, which I suspect is too irresistible to lovers of passive aggression to go unused.


It wasn't intended as such, but I take your point.

To provide a bit more context: Weizenbaum (a computer scientist in the 60s) developed ELIZA, an early chatbot (originally written in MAD-SLIP, though often associated with Lisp) that was loosely modeled on Rogerian psychotherapy. It was designed to respond in a reflective way in order to elicit details from the user.

What he found was that, despite the program being relatively primitive in nature (relying on simple natural language parsing heuristics), people he regarded as otherwise intelligent and rational would disclose remarkable amounts of personal information and quickly form emotional attachments to what was, in reality, little more than a glorified pattern-matching system.
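To make the "glorified pattern-matching" concrete, here's a toy sketch of the ELIZA technique: a handful of regex rules (illustrative, not Weizenbaum's original script) whose matched fragments are reflected back with first/second-person pronouns swapped, the core Rogerian trick.

```python
import re
import random

# Pronoun swaps so the user's words come back reflected at them,
# e.g. "my job" -> "your job".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine", "are": "am",
}

# A few made-up rules for illustration; the real script had many more.
# Each pattern pairs with canned response templates; {0} is filled with
# the reflected capture group.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?",
                      "Would it really help you to get {0}?"]),
    (r"i feel (.*)", ["Why do you feel {0}?",
                      "How long have you felt {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your mother."]),
]

def reflect(fragment: str) -> str:
    """Swap pronouns word by word."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence: str) -> str:
    """Return a reflective reply using the first matching rule."""
    cleaned = sentence.lower().strip(". !?")
    for pattern, responses in RULES:
        match = re.match(pattern, cleaned)
        if match:
            template = random.choice(responses)
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # fallback when nothing matches
```

For example, `respond("I need my coffee")` yields something like "Why do you need your coffee?". The entire trick is substitution and reflection; there is no model of meaning anywhere, which is what made the emotional attachments Weizenbaum observed so striking.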


If it helps, I didn't find anything wrong with your comment.

I appreciate the link and the info :)


It's a great question, because I do think there are many cases that are neutral, ones we're able to responsibly distinguish, or even cases where it would be an appropriate and necessary form of empathy (I'm imagining some future sci-fi reality where we actually get conscious machines, so not something that exists right now).

But I think it's also at the root of disastrous failures to comprehend, like the quasi-psychosis of the Google engineer who "knows what they saw", the now infamous Kevin Roose article or, more recently, the pitifully sad Richard Dawkins claim that Claudia (sic) must be conscious, not because of any investigation of structure or function whatsoever, but because the text generation came with a pang of human familiarity he empathized with.


>Humans must not anthropomorphise AI systems.

Yes, but. Starting with my agreement: I've seen anthropomorphizing in the typical ways (e.g. treating automated text production as real reports of personal internal feeling), but also in strange ways, e.g. "transistors are kind of like neurons". The latter is especially interesting because it's anthropomorphizing in the sense of treating vector databases, weights, and so on as human-like infrastructure. Both lead to mistakes that could be avoided if one tried not to anthropomorphize.

But. While "do not anthropomorphize" certainly feels like good advice, it comes with a new and unique possibility of mistake, namely wrongly treating certain generalized phenomena as if they belong only to humans. This mistaken version of "don't anthropomorphize" wisdom often leads to misunderstandings of animal behavior, treating things like fear, pain, kinship, or other emotional experiences as exclusively human, so that attributing them to animals counts as "anthropomorphizing." In such cases the cautionary principle actually reduces our empathy for the internal lives of animals.

So all that said, I think it's at least possible that some future version of AI could have an internal world like ours or infrastructure that's importantly similar to our biological infrastructure for supporting consciousness, and for genuine report of preference and intent. But(!!!) what will make those observations true will be all kinds of devilish details specific to those respective infrastructures.


They push millions of lines of code every quarter including thousands of patches, constant security updates and performance improvements and deepened support for web platform standards. As open source projects go, it's probably one of the most active and thriving ones there is. As eager as some people are to dance on Mozilla's grave, that day isn't coming anytime soon.

If you wanted to point to the year where they've been the best financed they've ever been and where they've had the most resources invested into browser development they ever have, that year would be 2026. Only to be exceeded by 2027 and then 2028, 2029 and beyond.

At a bare minimum, their endowment gives them probably a two to three year firewall in the event that their funding is cut off, which it hasn't been. I also thought the accusation was supposed to be the other way around, namely that we all knew they were going to get funded into perpetuity as controlled opposition.


People keep saying this like it's conventional wisdom we all supposedly agree with. I think it's a string of tech articles and spiraling comment sections searching for drama, a self-perpetuating phenomenon over the past 3 or 4 years, the majority of which has been extremely unfair and mostly based on vibes. If you actually scroll through HN and read the criticisms, they tend to trail off into vague phrases like "all the stuff they've been doing".

If people read the release notes instead of the comment sections, not only would they have a lot more specific knowledge of the work going into the browser but they wouldn't be locked in this cycle of outrage and escalation that normally you only see in YouTube comment sections.


>To give an example, before the MLB rolled out the Automated Ball Strike system this year, last year maybe 65+% of the sentiment in discussions about it was negative or in some cases just neutral.

MLB's ABS does not use AI for its ball tracking. And it has specific payoffs particular to its context, backed by four years of testing and well-defined limits on use cases, that don't necessarily generalize to issues surrounding AI and its tradeoffs.


Why doesn't pain matter? It's almost the canonical example of a valenced state that practically any moral theory is tasked with making sense of on moral terms.


It's just neural impulses. So what. Why should we care? Plants have different but equivalent mechanisms.

Moral theory is bullshit. It's just made up.


Plants most definitely do not have different but equivalent mechanisms. They don't have subjectivity. Moral theory is about, among other things, explaining the meaning of moral behavior and language as it manifests in real people, and well-being is just as real as health. At least it is on moral realism which is a perfectly mainstream view in moral philosophy.

You are not making any sense, and you literally won't understand if you never got beaten up physically in life, perhaps as a kid, and preferably more than once. One has to experience pain first-hand to empathize with it. You must have lived an extremely sheltered life for this not to happen to you, and so you don't understand. Your nihilism doesn't resonate with people.

Yes, plants react functionally to damage, but in no way has any shred of consciousness been demonstrated in plants. You are just out to seek any excuse necessary to kill animals and perhaps humans too, with no difference.


You are not making any sense. Perhaps consciousness hasn't been demonstrated in plants because we don't have the correct definition of consciousness or are looking for the wrong things. Why do you have this arbitrary bias towards animals? Perhaps it's time to re-examine your fundamental assumptions.
