andyh2's comments | Hacker News

Of that long list, what do you think Tesla owners value the most? ACC+Lane Keeping (what openpilot does) is the answer: https://pbs.twimg.com/media/D4y0Fb7XsAEKhFh.jpg


Well yes, of course those are valued highest. What's the point of auto lane changing if you can't ACC and lane keep?


Very cool. How will this compare to VigLink and SkimLinks?


Congrats Workflow team!


I'm a UC Davis student. Agreed, our previous chancellor was not a model public servant. We need a chancellor who can solicit tons of money and has a decent character.

If you're a California resident, UC is a deal compared to private and out-of-state public options. UC guarantees free tuition (grant, not loan) for in-state students with family income under $80,000.

On the other hand, out of state and international students pay full ticket and subsidize the rest of the students.

The real rip-off for UC students is in the large class sizes and inexperienced lecturers and grad students at the podium. Large research institutions are having a lot of trouble ensuring quality of education as enrollments increase and student bodies diversify.


I believe the broadest consensus defines consciousness as having a subjective experience.

What's unclear, and some deem impossible, is how we can externally detect consciousness. A machine could act and respond in a manner indistinguishable from humans but still be unconscious.


>A machine could act and respond in a manner indistinguishable from humans but still be unconscious.

There are some types of behaviour that could make it clear, from a practical point of view, that a machine was conscious. Say you had a robot teacher, threw something while its back was turned, and tried to frame a fellow classmate, but the robot saw through it and told you not to do that. It would be fairly clear it was conscious of what was going on, practically speaking. You could of course say it's not real consciousness because it's a robot, but how do you know a human teacher is conscious beyond such behaviour?

A statement like "They've figured out what we're up to and are thinking how to stop us, though luckily they are not conscious" doesn't really make sense, regardless of who "they" are.


> I believe the broadest consensus defines consciousness as having a subjective experience.

A useless definition from an empirical perspective.

> What's unclear, and some deem impossible, is how we can externally detect consciousness.

Defined that way, it's impossible to objectively verify, by definition.


So then define "subjective experience" without "it's that thing we have" (which thing?) or using the word subjective. This feels to me like replacing one undefined term with another.

Also Dennett has an excellent rebuttal to the concept of p-zombies (unconscious entities indistinguishable from conscious ones) imo.


And David Chalmers has an excellent rebuttal to Dennett's rebuttal. Different people take different sides, and I'm with Chalmers on this one.

The evidence for the conceivability of p-zombies is in your living room, if you have a television. You can see people on it and they behave intelligently. Are they conscious? Of course not, they're just red, green, and blue dots. Maybe we could make this a bit more realistic?

You're now in a very realistic virtual reality. Your character is conscious, because you are. So is the character being played by your friend over there (assuming he's not a p-zombie). But what about the characters next to you? Maybe Doug Lenat and Geoffrey Hinton collaborated and developed an AI as intelligent as a person, and that's what controls them. They can't be conscious, because they're just pixels like the dots on your flat-screen TV, so not conscious in the virtual reality, and they're not conscious in the real world either, because they're just software, which is an epiphenomenon, like wetness or tidiness. The computer which runs the VR, and the AIs, might be conscious but that would be a very different consciousness from yours. Maybe it senses voltage levels at its memory addresses, but it certainly wouldn't see you or the submachine gun you're carrying in the VR.

Maybe we already are in a virtual reality. Nick Bostrom thinks we might be. If it's sufficiently realistic, there might be no way to tell, and no way to tell whether everyone is conscious, or nobody is conscious except you.


> The evidence for the conceivability of p-zombies is in your living room, if you have a television. You can see people on it and they behave intelligently. Are they conscious? Of course not, they're just red, green, and blue dots.

I don't see how that constitutes evidence for the conceivability of p-zombies. They're obviously distinguishable from conscious entities with a simple test: ask a TV person a question.


The TV person is obviously distinguishable, but the non-player character in the VR I described would not be.


And your VR example simply assumes the conclusion:

> Maybe Doug Lenat and Geoffrey Hinton collaborated and developed an AI as intelligent as a person, and that's what controls them

How do you know the AI isn't conscious? You're simply asserting that it isn't. For all you know, "AI as intelligent as a person" literally isn't possible without consciousness, which makes your assertion inconceivable.

Which is why the whole p-zombie thought experiment doesn't convince anyone: if you already think consciousness is physical, then p-zombies aren't conceivable and the thought experiment isn't convincing, and if you already think consciousness is not physical, then p-zombies merely assert the same conclusion you already hold and you're "convinced".


The computer that runs the VR (including the AI controlling the behaviour of the NPCs) might well be conscious, but the NPCs themselves are just pixels. Before you argue that you are too: you know otherwise, and you're the one seeing the VR's pixels. Meanwhile, the computer which is generating the VR, and is outside it, doesn't have to see anything. It might sense some electrical signals originating from your and your friend's bodysuits, but that isn't going to feel anything like the experience you and your friend are having.


> The computer that runs the VR (including the AI controlling the behaviour of the NPCs) might well be conscious, but the NPCs themselves are just pixels.

Where and how the information processing actually occurs to produce this intelligence doesn't seem relevant. The NPCs are individually either as intelligent as a person, and equally distinct, or they are not. If they are, they may be conscious. You said they are so intelligent, and therefore, you have no idea whether they are or are not conscious.

> It might sense some electrical signals originating from you and your friend's bodysuits, but that isn't going to feel anything like the experience you and your friend are having.

I don't see how that's relevant to whether the game system or NPCs are conscious.


I think we're arguing past each other here. I accept that the computer controlling the VR might be conscious of something while it runs the AI software, but the brains of the NPCs clearly aren't conscious. In fact, they don't even need to have brains. The computer isn't immersed in the VR the way you and your friend are. It doesn't use senses the same way you do.


> I accept that the computer controlling the VR might be conscious of something while it runs the AI software

But the AI software/algorithm is exactly what would produce the consciousness, if it were possible to do. So if it runs this algorithm individually for each NPC, then each NPC is conscious. And it clearly must run the algorithm individually because each NPC finds itself in different circumstances, and so must respond differently to its environment, even if they all share the same "personality".

> It doesn't use senses the same way you do.

I still don't see why that's relevant. Sensory inputs are still just inputs. The AI algorithm also has such inputs since each NPC must respond to its environment.


I think we'll end up agreeing to disagree here, but the AI would only be producing the appearance of consciousness in the NPCs. There's absolutely no reason to believe it experiences any of the qualia experienced by the people immersed in the VR. It doesn't see anything or hear anything. Only the people do, via their headsets. It just behaves as if it does, like the people on your television screen.


> There's absolutely no reason to believe it experiences any of the qualia experienced by the people immersed in the VR.

Firstly, NPCs tautologically don't experience the same qualia, but that doesn't mean they don't experience qualia of their own. Like I said, they respond to their environment, which means they have senses of a sort.

> It just behaves as if it does [produce consciousness], like the people on your television screen.

How do you know that? You're merely asserting p-zombies exist, you're not proving it.


If you were talking about humanoid robots in the physical world, you'd have a point, but in a VR, NPCs exist in much the same way images of people exist on your TV screen. They're not really there. They also exist in the form of data on the computer running the VR, but data can't be conscious either, as it's entirely passive, like words in a book the computer can read from and write to.


> They're not really there.

In what way can you prove you are "really there" in a way that a simulated entity is not? Fundamentally, your matter could very well be a mathematical construct -- see the Mathematical Universe hypothesis, for instance.

> They also exist in the form of data on the computer running the VR, but data can't be conscious either, as it's entirely passive, like words in a book the computer can read from and write to.

Except they're not, because the NPCs respond intelligently to you and their environment, both of which are always changing, so they aren't anything like passive, unchanging data. No one is claiming that data that isn't changing is conscious, but NPCs don't consist of unchanging data. So they could very well be conscious, and you haven't presented an argument why such intelligent responses would not require consciousness, you've merely been asserting it.

And it's not as though you claim consciousness can't be a computation, because you've said that the game system as a whole might be conscious. So if we agree to assume, for the sake of argument, that consciousness is a computation, then it will be a particular kind of computation with a certain property, like how sorting algorithms sort their input.

Which means any algorithm that has this property produces consciousness. Furthermore, personality differences and sensory input from their environment would just be parameters to such an algorithm. And if game AI needs this consciousness property to make believably intelligent characters, which is what we've been discussing, then each of the NPCs will run this algorithm with their own personality and environment configurations, and each of them will be conscious.

The best I can infer about your argument is that these computations don't have access to "real" senses, but this point is entirely irrelevant. All AI senses will reduce to a set of numbers anyway, which means even if the "sensory" inputs we hook them up to are entirely simulated, this makes no difference to the consciousness algorithm. Consciousness would be a logical property, so you can't simultaneously accept that the game system could have consciousness, but the individual NPCs would not.


> Fundamentally, your matter could very well be a mathematical construct.

My consciousness, however, is "really there". It's one of the few things anyone can be sure of.

> And it's not as though you claim consciousness can't be a computation, because you've said that the game system as a whole might be conscious.

No. I said that the computer controlling the VR could be conscious of something. I should have been more specific about which parts. I meant the CPUs and the GPUs, certainly not the software. Their role in the whole system is the same as the man in the Chinese room. We have his word that he doesn't understand Chinese, but he does experience something. The role of the software is that of the book in the room, with a part of the book where the man can write down intermediate stages in his calculation. Or are you saying that the book is somehow conscious of understanding Chinese simply because there's space in it where the man takes notes?


> My consciousness, however, is "really there".

You didn't answer the question. You can be sure your own consciousness exists, and you no doubt grant it to other conscious people, but you seem certain that consciousness is absent from AI NPCs that are just as intelligent as humans, while providing no justification for that.

> Or are you saying that the book is somehow conscious of understanding Chinese simply because there's space in it where the man takes notes?

I'm saying that your attempt to define a "locus" where consciousness resides is futile. A CPU is not a person, even though a person can act like a CPU.

You're effectively asking which line in a sorting algorithm actually does the sorting. All of it does the sorting, and all of the program+scratch memory that produces intelligent behaviour, would produce consciousness as well. The CPU is completely incidental. If you compile your program for a completely different architecture, or take a snapshot and restore it on another computer, the consciousness moves with the program, it does not stay with the CPU on which it was originally running.


I'll try again. I think that consciousness only exists in "ground reality", so if the universe is a simulation, consciousness can only enter it from ground reality. I think ground reality exists - it can't be "simulations all the way down".

I'm assuming that we're in ground reality, but capable of developing realistic virtual realities.

I'm not sure what can give rise to consciousness. I think it most likely that it's related to the ability to collapse quantum states, or in Everett's interpretation, decide which universe to be in.

In that case, since the NPCs are unable to collapse quantum states, they cannot have consciousness although they behave exactly like someone who can.

Alternatively, it might require information processing by real, physical hardware. In that case, CPUs and GPUs can be conscious as well as people, insects, and even thermostats. However, as CPUs and GPUs are processing completely different kinds of information from people, I'd expect them to have completely different qualia.

What you appear to be saying is that NPCs aren't merely conscious but have qualia at least as similar to mine as other people's are, by virtue of their appearing to behave similarly to people.

And I am saying that, as they don't really process any information - they're just instructions and data being processed by computer hardware - they can't be conscious, any more than the characters in a book become conscious whenever anyone reads the book.

I probably haven't convinced you of anything, but I hope at least I've made my position clear.


Ugh... replacing one vague and undefined word with three vague and undefined words (I'm including "having" because that's a hard one to nail down in context).

    function i_am_conscious(x) { return x + Math.random(); }
This function has a subjective experience.

To avoid any discussion of consciousness becoming a discussion of the meaning of the word "consciousness" we have to accept that consciousness (like everything else in the real world) is not a boolean property, but rather a subjective "I know it when I see it" sort of thing. Or in Bayesian terms, it's a weighted evaluation of evidence against a vast array of mostly unknown priors.

That's why I think it will sneak up on us; there will be systems that some very small group of people (mostly crazy) think are conscious, then systems that a slightly larger group of people (maybe more crazy, maybe less) think are conscious, until one day You wake up and realize that You are one of the crazy people.

But directed attempts to satisfy some definition of "consciousness" in a system will just result in a counterexample to that definition.


I believe the broadest consensus defines consciousness as having a subjective experience.

I could go one further and say "I believe the absolute broadest consensus defines consciousness as having a conscious experience," and it would be only slightly more vague than your statement.

Conscious, subjective, and aware, in some of their meanings, are just synonyms for some internal experience or other. There's a sense that this internal experience reflects things in a unique way, that it's not controlled by outside constraints, that it reflects a person's unique choices not governed by any fixed pattern or program. But it's primarily just an intuition.


Not just a machine. I agree with that definition of consciousness, but its implication is that we can only ever be sure of our own present consciousness. The consciousness of other people, and even of our past selves, can't be verified.


VigLink's value to Reddit is likely in the ability to tag a very long tail of non-Amazon merchants.


Rewriting links for dozens, or even hundreds, of networks is trivial; it's just a bunch of regexes. Creating dozens of affiliate accounts and managing them is the actually annoying part.
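To illustrate what "just a bunch of regexes" means here, a minimal sketch of affiliate link rewriting; the merchant pattern and the `tag` parameter format mirror Amazon's public URL scheme, but the affiliate ID and function names are hypothetical, and a real system would keep a table of per-network patterns and rewrite rules:

```javascript
// Hypothetical affiliate ID; real ones come from the merchant's program.
const AFFILIATE_TAG = "example-20";

// Rewrite recognized merchant URLs to carry the affiliate tag;
// pass anything else through unchanged.
function rewriteLink(url) {
  // Match an Amazon product URL up to (but not including) any query string.
  const amazon = /^https?:\/\/(?:www\.)?amazon\.com\/[^\s?#]*/;
  const m = url.match(amazon);
  if (!m) return url; // unrecognized merchant: leave the link alone
  return m[0] + "?tag=" + AFFILIATE_TAG;
}

console.log(rewriteLink("https://www.amazon.com/dp/B000123?ref=foo"));
// https://www.amazon.com/dp/B000123?tag=example-20
```

The per-network regexes are indeed the easy part; the rewrite above drops any existing query string, and production systems also have to preserve or merge other parameters, which is fiddlier but still mechanical.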


A new market for jibjab.com!


Can confirm, I grew up near the area described in the article. USC has some 2010 data [1] on religious beliefs in Marin. None of the popular belief systems are known for discouraging immunization. Anecdotally, Marin has quite a large alternative medicine following which may explain the high opt-out rate.

[1] http://crcc.usc.edu/resources/demographics/marin.html


It's astounding to me that the anti-vaccine movement is so strong in Marin of all places: a well-educated population, good tertiary education, and pretty high income.

But, as they say, people will believe despite evidence, even overwhelming evidence, to the contrary. One can excuse religious people on the basis that their belief rests on faith rather than on interpreting the data their way. What happens in Marin, among other places, is that people believe despite data showing otherwise.

In other words, people in Marin aren't saying "I have faith that vaccines are bad"; they're saying "we believe the data says vaccines are bad." It's incomprehensible.


http://botmasterlabs.net/ This is the official site.


I don't know if you are in on the joke, but this is how XRumer actually works. It has bots or other users provide links to their auto-posted questions, in an effort to avoid detection.


The best case (for them) is when they fool an actual user to look up the product and add the link.


Article says they "will publish a paper Sunday."


[deleted]


Serious question, then, why the need to provide a negative comment when you didn't even examine the material presented?

EDIT: Won't out the author's identity, but parent post explained that he had only skimmed the article. The original post was asking who the computer scientists were and why this wasn't just link bait.


I did, but not in detail. I see so many unsubstantiated negative articles about bitcoin that it gets tiresome sometimes. I'm glad to know I was wrong this time.


Re edit: No, I never asked who the authors were, and frankly it's not relevant. Anyway, it's Sunday, and I'm waiting for the publication. It was also pretty obvious from my other replies that I started this thread.

