That's an example of ambiguous or misunderstood phenomena explained by a professor who decided that there's more money in UFO BS than in his previous career (or sincerely lost his grip on reality, who knows).
> Compilers have type systems, formal contracts about what code means before it runs.
This is a complete misunderstanding of what makes compilers trustworthy. Those are all properties of the language, not the compiler. The compiler is trustworthy to the extent that it is well built, internally. It is trustworthy to the extent that the mapping from source code to machine code is well defined, and implemented correctly.
You can have the best type system you want, but if the compiler is badly implemented, it won't be trustworthy. A perfect example is C - a language that barely has a type system, yet has some of the most trustworthy and optimized compilers. And it also has, or at least had, plenty of buggy compilers, typically for small embedded platforms with complicated mappings between C constructs and the limited CPU instruction set.
I actually happen to be in The Philippines right now, so funny you mention that.
No, of course we don’t, and neither do we offer one with a more Spanish, French, Russian, Polish, Thai or German accent. This is because we decided on American English as the language, which is also reflected in the grammar choices on our website (despite being a French company).
The courses are entirely optional. Some colleagues don’t take them, and they have problems communicating with customers, which is very frustrating. I’ve had an Indian manager of a customer complain that one of our Thai support engineers was incomprehensible, and my boss complain that this Indian manager was incomprehensible. It’s just a mess all around.
I’m Dutch myself and these language courses have benefited me a lot in removing some of my Dutch accent, which helps during business conversations. I’ve traveled the world pretty much constantly over the past 12 years, so I’m quite tolerant of many types of accents, but even just arriving in the Philippines for the first time last week required some recalibration, because they have their own way of pronouncing things.
If you are in the Philippines, you might notice that English is an official language of the Philippines - unlike Spain, France, Russia, Poland, Thailand, or Germany (or the Netherlands). This means that the Filipino English accent is just as much a native accent as the Scottish, Canadian, American, Indian, Australian, etc. accents. And yet, no one is requiring people from London to change the way they speak their language, even if it's sometimes hard to understand for people from NYC.
I've had significant accent friction when I started watching British television after a long time of learning English from simply watching only American television. But I don't expect people in Britain to "fix" their accent.
You know that people studying a second language often study native pronunciation, right? That's just a standard curriculum for language acquisition. You're fishing for racism where there's none.
English is one of the two official languages of the Philippines, so their English accent is native, just as much as the English accent, Scottish accent, American accent, NYC accent, etc.
Sure, there's no such thing as a native accent. In the end these are all concepts and if you dig down the semantic value of the label is blurry at the edges. Language is a malleable construct of agreement which corresponds to an ever flowing ever changing loosely defined idea, and you cannot point to a proper category that transcends cultural and social norms and stratification. We can play the post-structuralist game, but you're not engaging in good faith.
Language is useful insofar as it lets you communicate, and if you lack the phonemes the meaning of your words will be misinterpreted and misunderstood. Learning a more common accent is a reality that has incredible utility and is not in itself racist. At any rate, there's enough variation in the English commonly spoken by Filipinos that it's considered a distinct dialect: https://en.wikipedia.org/wiki/Philippine_English.
I understand the words you are saying, but I'm struggling to make sense of what you are trying to say. We're talking in this thread about learning a native accent in a second language. I do the same when I am learning Hungarian, as the phonemes are different from what I am used to in my native tongues.
Except that there is nothing nonsensical about a particle that has mass and doesn't participate in any SM interaction. It's inconvenient if such a particle exists, as it's very, very hard to detect things precisely by their gravitational effects, but there is nothing nonsensical, or even particularly weird, about the idea. Plenty of particles only interact with a few of the SM forces - e.g. photons are not affected by the strong force, nor are electrons; neutrinos are affected by neither the strong force nor EM, only the weak force; gluons only interact with the strong force, not EM nor the weak force; and so on.
The same is true for PoW, though. You can never guarantee that a single entity or a group of colluding entities will not gain control of the 50+% of compute power required. If the compute hardware is useful for purposes other than mining your particular coin, the risk is in fact greater - someone could build or buy up this compute power, destroy the currency, and then use the assets for other purposes, recouping some of their investment. With PoS, at least this much is not possible - anyone who wanted to destroy the currency would lose their whole investment.
> The objective would be to stop the death and eradication of languages, e.g., Welsh, German, or any of the numerous other smaller languages and dialects
How is German, a language natively spoken in two nation states and quite a few neighboring regions, being eradicated?
> while AI is running on a discretization of this (we're essentially discretizing the physical dynamics and to create state changes of 0 -> 1, 1 -> 0).
But this is just a discretization we impose when we try to represent the system for ourselves. The reality is that the AI is a particular time-ordered relation between the continuous electric fields inside the CPU, GPU, and various other peripherals. We design the system such that we can call +5V "1" and 0V "0", but the actual physical circuits do their work regardless of this, and they will often be at 2V or 0.7V and everywhere in between. The physical circuit works (or doesn't) based exclusively on the laws of electricity, and so the answer of the LLM is a physical consequence of the prompt, just as a standing building is a physical consequence of the relationships between the atoms inside its blocks. The abstract description we chose to use to build this circuit or this building is irrelevant, it's just the map, not the territory.
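The thresholding described above can be sketched as follows. This is purely illustrative: real logic families define voltage ranges and noise margins rather than a single cutoff, and the 1.4 V threshold here is invented for the sketch.

```python
# Illustrative only: map continuous voltages to the bits we *choose* to read.
# Real logic families (TTL, CMOS, ...) specify ranges and noise margins,
# not a single cutoff; 1.4 V is a made-up threshold for this sketch.
def read_bit(voltage: float, threshold: float = 1.4) -> int:
    """Interpret a continuous voltage as the discrete 0 or 1 we assign to it."""
    return 1 if voltage >= threshold else 0

# The physical signal wanders continuously; the "bits" are our mapping.
trace = [0.1, 0.7, 2.0, 4.9, 3.1, 0.3]
bits = [read_bit(v) for v in trace]
print(bits)  # [0, 0, 1, 1, 1, 0] -- the discretization we impose
```

The circuit itself only ever sees the continuous trace; the list of bits exists in our description of it.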
The computer and the program wouldn't exist without us, though. They only exist to be interpreted by us. The physical properties of the circuits outside of what we cajole them into doing are irrelevant, meaningless. The circuits only do their work regardless of particular interpretations; they wouldn't exist at all without people building them to be interpreted.
The physical computer could exist regardless of us. The program, if by that we mean "a human model of the computation happening in a physical computer" is just a description, yes.
It would be extraordinarily unlikely, but physically conceivable, that a physical system that is organized exactly like a microcontroller running an automatic door program, together with a solar panel, a basic engine, and a light sensor, could form randomly out of, say, a meteorite falling in a desert. If that did happen, the system would produce the same "door motor runs when person is near sensor" effect as the systems we build for this.
The physical circuits are doing what they are doing because of physics. They don't care why they happen to be organized the way they are - whether they arose by human design or through random chance.
Edit: I can add another metaphor. Consider buildings: clearly, buildings are artificial objects, described by architectural diagrams, which are purely human constructs, and couldn't be built without them. And yet, there exist naturally occurring formations that have the same properties as simple buildings - and you can draw architectural diagrams of those naturally occurring formations; and, assuming your diagrams are accurate, you can predict using them if the formations will resist an earthquake or collapse. Physical computers are no different from artificial buildings here, and the logic diagrams and computer programs are no different from the architectural diagrams: they are methods that help us build what we want, but they are still discovered properties of the physical world, not idealized objects of our own making; the fact that naturally occurring computers are very unlikely to form doesn't change this fact.
I disagree that it’s conceivable that a computer could somehow exist without a conscious maker. It’s so unlikely that it may as well be impossible. If something non-human that was capable of consciousness did form in the universe, through known biology or not, it would “just” be another form of life, and not what the paper is talking about.
What you say about buildings is sort of true as far as it goes, but irrelevant for the argument because buildings aren’t symbolic manipulation machines that only mean something via conscious interpretation, that some people are claiming could gain consciousness themselves.
The probability of such a structure forming is completely irrelevant. The argument would make sense if there were a mathematical/physical impossibility, but as long as the laws of physics allow such an object to exist and form by random chance, and predict it would operate exactly the same as the consciously designed one, I don't see any reason to discount it.
I also think the arguments against this are contradictory. On the one hand, we have an argument that says that computers only work because a consciousness built them to implement a particular computation. On the other hand, we're saying that the same physical computer doing the same physical thing can be interpreted to be implementing an infinite number of different computations. These two seem to point in different directions to me.
I think a better counter is the question "Is there a meaningful difference between binary discretization and Planck units? Aren't those discrete/indivisible as well?"
That's not really a good counter - Planck units are not a discretization. Space-time is continuous in all quantum models; two objects can very well be 6.75 Planck lengths away from each other. The math of QM or QFT actually doesn't work on a discretized spacetime; people have tried.
I should add one thing here: no theory that is consistent with special relativity can work on a discretized spacetime, because of the structure of the Lorentz transform. If a distance appears to be 5 Planck units to you, it will appear to be about 4.33 Planck units to someone moving at half the speed of light relative to you along that direction - and it can be made an arbitrary non-integer multiple at other speeds.
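The contracted figure above follows from ordinary length contraction, L = L0/γ with γ = 1/√(1 − β²); a minimal sketch (the function name is mine):

```python
import math

def contracted_length(proper_length: float, beta: float) -> float:
    """Length measured by an observer moving at beta = v/c along the length."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)  # Lorentz factor
    return proper_length / gamma

# A proper length of 5 Planck units, seen from a frame moving at 0.5c,
# comes out to a non-integer number of Planck units:
print(contracted_length(5.0, 0.5))  # ~4.33, not a whole number of units
```

Since β can vary continuously, no fixed lattice of "indivisible" lengths can survive a change of frame.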
I've never understood why certain philosophers view computation as some kind of abstract symbolic manipulation, while they easily accept that consciousness is a physical process.
Computation is something that a computer provably does. We build physical hardware, at great effort, to do computation. The hardware works and does the computation regardless of whether there is anyone to understand or interpret it. If it didn't, we couldn't have built anything like, say, an automatic door: that is a form of computation that provably happens as a physical process that is completely observer-independent.
Sure, a different entity than a human might view it completely differently than a door opening when someone is near - but the measurable physical effect would be the exact same, with the exact same change in momentum and position of the atoms in what we call the door based on the relative position of some other atoms and the sensor.
>Computation is something that a computer provably does.
This is a circular definition. In order to properly define the concept, we must be able to word it without using "computing devices" in the definition.
Finding a satisfactory definition for what constitutes a "computation" is actually an interesting debate that goes back to the 1600s. Currently, the mainstream definition (from Wikipedia) states: "A computation is any type of arithmetic or non-arithmetic calculation that is well-defined".
One way to understand the author is to learn more about the "The mapping account" theory behind computation: "a physical system can be said to perform a specific computation when there is a mapping between the state of that system and the computation such that the 'microphysical states [of the system] mirror the state transitions between the computational states.'"
I wasn't trying to use that as a definition of what computation means. My point is that regardless of how we define it, the fact is that the devices we build to do some form of computation are objectively successful at doing what they were built to do.
For example, you can say that Firefox is a human-centric abstraction, and that my computer isn't running Firefox right now in an objective sense, that this is just a human-centric interpretation of what the physical device is doing, and that there exist other computations that we could assign to it.
But what you can't say is that the device is not affecting the physical world in ways that are consistent with performing the Firefox computation, such as causing certain specific wavelengths of light to be emitted by the screen based on state that is stored in a server in the YCombinator data center. This is a measurable fact of the physical world that is independent of the model of computation you chose to ascribe to the physical device - any consistent mapping will have to preserve this same physical property.
The paper addresses this point in section 3.2. They aren't debating the fact that a physical process is taking place in a computer running a program. They are arguing that the semantic interpretation of the output of that program is indeterminate and dependent on the mapping function:
> A single physical vehicle (bottom) possesses a fixed causal trajectory. However, it does not instantiate a unique computation. Depending on the alphabetization key applied (fA or fB ), the same physical states can be mapped to entirely different abstract computations (Top Left vs. Top Right). Therefore, computation cannot be intrinsic to the physics (p).
So yes there is a physical process generating your Firefox browser, but there is also a mapping function taking that program and interpreting that it should display your Firefox browser. There are any number of mapping functions that could be applied to the physical state in order to display other things on your screen besides the browser. Therefore, the Firefox browser being displayed is not inherent or intrinsic to the physical state of your computer. If we did not have the right mapping function, we would have no way of knowing or inferring or discovering which mapping function is correct.
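The fA/fB point from the quoted passage can be sketched concretely: one fixed sequence of physical states supports different, equally self-consistent alphabetization keys. The state labels and both maps below are invented for illustration.

```python
# One fixed "physical" trajectory: an abstract stand-in for, say,
# a sequence of voltage states in a device. This sequence never changes.
physical_states = ["s0", "s1", "s2", "s1"]

# Two different alphabetization keys (mapping functions) over the same states.
f_A = {"s0": "0", "s1": "1", "s2": "0"}   # reads the trajectory as bits
f_B = {"s0": "a", "s1": "b", "s2": "c"}   # reads the same trajectory as letters

computation_A = "".join(f_A[s] for s in physical_states)
computation_B = "".join(f_B[s] for s in physical_states)

print(computation_A)  # "0101" under f_A
print(computation_B)  # "abcb" under f_B -- same physics, different computation
```

Nothing in `physical_states` privileges one key over the other; the choice of mapping lives outside the trajectory itself, which is exactly the paper's claim.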
But the results that the computer screen shows, and the inputs fed into the machine, are entirely physical, observer-independent processes. Just like an inscribed papyrus contains letters in a physically observable, provable sense - even if the semantics of those letters are entirely man made.
This is in fact very similar to the notion of text - text is a physical medium, that provably contains a message that one human intended to convey to other humans. The same physical text can be interpreted in an infinite number of ways, and they are all equally valid in that they are self-consistent, but only one is the intention of the original author.
To the best of my understanding I believe the response from the position taken by the paper is that this is still committing the abstraction fallacy (Assuming you aren't just agreeing with them here, which I don't think you are). The book itself doesn't physically instantiate the information it contains in its text. In a vacuum it is informationally inert. That doesn't mean it doesn't physically exist. Likewise the computer running Firefox obviously physically exists. The fallacy is the next jump from there - assuming that the semantic content of the book or software is physically present in those items. Another example given in the article is an analog clock:
> Consider an analog clock. Physically, the device is a collection of gears and springs governed by continuous dynamics (P). It only “computes” time because a mapmaker intervenes, mapping a specific set of continuous angles to a semantic concept (e.g., “3:00 PM”). Without this semantic imposition, the clock is just metal moving in accordance with Hamilton’s equations; it contains no intrinsic “time.” Thus, the physical substrate does not “process information” absent a prerequisite alphabet of intrinsic symbols; rather, it generates continuous dynamics that an external mapmaker interprets as information.
In this example the time "3:00 PM" is instantiated in the mind of the person reading the clock, it is not a real physical property of the clock itself.
I think the book example muddied the waters, unfortunately. I agree that the concept "3:00 PM" only exists in the mind of the observer. But I don't agree that this means the clock's mechanism isn't intrinsically, objectively, a time keeping mechanism. The clock is a physical instantiation of a time keeping algorithm, in an objective sense. The meaning of a particular time, or even the interpretation of time, is a subjective human experience, but that doesn't mean that computation is as well. If the clock wasn't a correct instantiation of the time keeping computation, it wouldn't be possible to interpret it as such, it wouldn't work - that's what makes me believe it's more than semantics.
First I want to say I don't think this is an unreasonable position nor does it appear to be one without some degree of support in the expert debates. The position taken by this paper is not, as far as I can tell, a universally accepted argument (It isn't some tiny minority either FWIW). The bottom line is that in this field there are simply not many universally accepted positions whatsoever. A helpful framing I came across recently is that the field of consciousness research is still in a "pre-scientific" period in the same sense that astronomy and physics were "pre-scientific" prior to general relativity. Science could be done and theories existed, but something so fundamental was missing that you couldn't even reach a broad consensus on where to direct research.
With that out of the way, in my opinion the response from the position of the paper would be something along these lines. The problem with your claim that a clock intrinsically contains a time keeping algorithm/computation is that by this definition, almost anything you can imagine is a clock. For example, given the right mapping function, a rock can perform all computations necessary to be a clock. It may sound extreme but this is an internally consistent position, if you want you can look up the Putnam triviality argument for more info. Under this argument, not only can a rock perform all the computations necessary to be a clock, it can in fact implement every possible computation imaginable (given the right mapping function). This next bit isn't essential to the argument but just because I find it fascinating, we can take this point a step further. If you imagine an organism/mind capable of using a rock as a clock, it isn't impossible but it would require such a radically different way of perceiving reality that they may in fact not recognize our versions of clocks as clocks at all.
Backing out, the examples above clarify just how essential the mapping function is to imputing meaning to a physical process, and makes it harder to see how the physical processes taking place in our computers have any intrinsic meaning whatsoever. To put my cards on the table, the whole reason I ended up on this thread in the first place is because I have become somewhat obsessed with this paper in the past week or so. I did not expect to agree with it or even to find it particularly convincing. Not that I had given it tons of thought but if you asked me, my operating assumption has always been that the human brain is some form of a computer and, more to the point, that consciousness experienced by human brains is a result of some form of computation. Taken on its own terms, I believe this paper really does challenge that view fundamentally, and in a manner that cannot be easily dismissed.
Suppose this mechanical clock mechanism were part of some sort of Von Neumann probe sitting dormant in a star system, waiting for the clock to strike a specific time. When that occurs, it triggers another mechanism in the probe that combines stored nucleotides to produce a particular sequence of RNA and DNA, which is seeded onto a suitable world. That soup of RNA and DNA eventually evolves into intelligent biological life, which finds the Von Neumann probe, learns to interpret the symbols and purpose of the probe, and comes to understand what 3:00 PM means. Does that whole system have the real physical property of 3:00 PM in it?
No because it doesn't physically instantiate the concept of 3:00pm. By the author's contention if you go and find an analog clock right now and read the time, the clock still doesn't have an intrinsic property of "X AM/PM", so it wouldn't in this scenario either.
In a vacuum (Without an observer/mapmaker) there would be no way to derive the semantic content of an analog clock purely from its real physical properties. Additionally, the physical state of an analog clock that we read (map) to say 3PM could be representative of any time whatsoever, because it is purely dependent on the mapping function. The same physical properties could mean "3pm" or "4pm" or "5:48:00 AM" etc etc.
To put it differently, think about (instantiate the concept of) the time 3:00PM right now. Okay, so that thought just existed. When it existed it was comprised of physical processes. Those physical processes bear no relation to the analog clock set to "3PM". Neither are less real, they are just completely distinct physical phenomena. There is no reason in particular to think that a machine capable of computing the time "3PM" bears resemblance to a machine capable of having the thought "It is 3pm" or "[thinking about] the time of 3pm". By the same token there is no particular reason to think that a mind capable of having the thought "It is 3pm" necessarily contains a computer in it, or that a computer is a necessary, constitutive component of that physical thought.
The implication of the argument that you're making is that humans could never understand any alien artifact that we come across. Or that we could never understand the meaning of Egyptian hieroglyphs or Linear B, or that we could never understand the purpose of the Antikythera mechanism.
It may be the case that a particular alien artifact would be impossible for us to understand because it is too complex for us, but that doesn't mean we would be unable to understand any alien artifact that has ever existed (if any exist).
One could say that hieroglyphs are different because they were made by people, so they have a mapmaker. But all that indicates is that meaning can persist beyond the lifespan of the creators - and if that's the case, it's just a question of for how long, and through what causal chains, that meaning can persist.
You might also suggest that the function of the Antikythera mechanism is just something that we arbitrarily project onto it, but that's not likely. The gear ratios correspond to actual astronomical periods that we didn't arbitrarily decide; we discovered them. That means the meaning of the device as an astronomical clock was fixed into the mechanism by its creators and transmitted to us.
It's the same thing with DNA. It has no mapmaker and yet it contains meaning, meaning that we've made tremendous strides to understand. How is that possible for a thing that doesn't have a mapmaker to have meaning?
No that's not an implication of the argument. You are misunderstanding. It is not necessary to continuously inject increasingly complicated historical scenarios, I am not going to respond to those, we can just talk about the clock. If you want to restate your argument in terms of that example, I will be happy to respond. As it stands I am not even sure what you think the argument was if those are the implications, or what the point of all of these scenarios is supposed to be. Also, I am just relaying the argument of the paper FYI, I am not making the argument.
> I've never understood why certain philosophers view computation as some kind of abstract symbolic manipulation
Possibly very early AI misled people here. In the 80's, a huge amount of AI was logic manipulation; "If A then B is valid"; "A is true"; therefore, "B is true". It's not hard to see how people would conclude that that sort of symbolic manipulation could never result in consciousness.
But modern neural nets aren't like that at all. Calling modern neural nets "symbolic manipulation" seems insane; like calling libraries forests, and insisting we can apply scientific principles about forests to them, because books are made of trees.
Even weirder to me is that in the case of a person doing the computation on a board or paper or whatever medium, it's still computation. This time the physical medium doing the work is the human and their brain.
If consciousness can be proven to emerge from computation alone, then in a way we humans with our brains can simulate a new consciousness.
>I've never understood why certain philosophers view computation as some kind of abstract symbolic manipulation
The abstraction is over the multitude of different physical ways that computation can be performed. That is the role of abstraction, to separate something from a particular means of implementation so that we can think about computation without having to fix a particular physical process.
Sure, but I don't think that's what this paper and other similar ones are saying. I agree, of course, that things like programming languages or algorithms or even logical circuit diagrams are abstractions, obviously. But they are abstract descriptions of a real physical process that happens, for example, inside a CPU - in exactly the same way that an electrical diagram is an abstract description of a real physical process that happens in an electrical circuit, or a thermodynamic calculation is a description of what happens inside an engine.
But the engine, the electrical circuit, and the computation inside the CPU are objective realities. There could be many other ways to describe and characterize the same physical realities, of course, but that doesn't make them observer-dependent phenomena.
The issue that the paper brings up is that the same physical process can be interpreted as multiple different computational processes. If the content of consciousness (the hard problem, qualia) is only dependent on the computational process, and not its particular physical instantiation, then which qualia are generated from a particular physical process?
> The issue that the paper brings up is that the same physical process can be interpreted as multiple different computational processes.
I don't think this is relevant to the notion that consciousness is a form of computation.
The assertion that consciousness is a form of computation basically means that the physical process that happens in the brain/body that we recognize as consciousness can be described in terms of a computational process. A consequence of this, if it is true, is that replicating the same computation in a CPU would make the physical process that happens in the CPU just as conscious - assuming that we had identified the correct computation.
In this theory, the thing that would be conscious would be the physical CPU, just like the thing that is conscious is a physical human brain/body. The computation is just an abstract description of the common properties between the CPU and the human brain/body. It's not relevant that we could also describe the process inside the CPU as being a completely different computation - the abstract model is only required to be able to build and program the CPU.
To go back to my mechanical door analogy: we create an abstract model of the computations needed to make a computational system open a door when a person is near. We use this model to create the computational system, and we see the door opening when a person goes near the sensor. Now, we can interpret the computation happening inside the system in many other ways - but that won't change the fact that the door opens when a person is near, in any way.
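The door controller described above can be written down as a trivial program; the distance reading and the 1 m threshold are invented for this sketch.

```python
# A toy model of the automatic-door computation discussed above.
# The sensor reading and the 1.0 m threshold are invented for this sketch.
def door_should_open(distance_to_person_m: float, threshold_m: float = 1.0) -> bool:
    """Open the door when a person is within the threshold distance."""
    return distance_to_person_m <= threshold_m

# Whatever abstract description an observer chooses to impose, the
# measurable effect is fixed: the door opens for nearby people and not otherwise.
print(door_should_open(0.5))  # True: person is near the sensor
print(door_should_open(3.0))  # False: no one nearby
```

Reinterpreting the controller's internal states as some other computation doesn't change which inputs make the door open - that input-output behavior is the observer-independent part.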
I am not claiming that any of this constitutes proof that consciousness must be a computation. What I'm claiming though is that the paper, and similar arguments, are not refuting the right claims, and generally have a misunderstanding of what "computation" actually means, and its relation to physical processes.
The "hard problem" is talking about the thing that it's like to be you, to experience what is happening. I don't know about you, but I only experience one set of things happening at once, i.e. it doesn't feel like I am in two places at once or that there are two completely different versions of my life happening simultaneously.
If the physical thing that is conscious is the CPU, what are the contents of its consciousness if there are multiple interpretations of what it is computing?
Now maybe somehow there are in fact multiple consciousnesses inhabiting the CPU. I don't experience that though, so I don't have a positive reason to believe that that's true.
You are presupposing that there is a single way an outside observer could interpret the way your brain works to produce consciousness. I don't see why we should believe this. The same way, even though we can model the processes in the CPU as multiple computations, perhaps only one of these models is correct in some way, and that is the model we call consciousness. Of course, this becomes highly speculative.
The linked "click bait" article explains this very clearly as well. It clearly explains the methodology: they took the prompt sent to an LLM by a popular open source carb counting iOS app and sent it, together with five different pictures of food that a typical person might take, to all of the frontier models, and checked the responses. They also explain the purpose: to check the possible accuracy of this approach taken by a real app that real people use.
The fact that you somehow perceived this as an attack on LLMs as a technology is a failure entirely on your part. There is nothing in the article that suggests that people shouldn't use LLMs for other purposes - just a statistical verification of the fact that they shouldn't be used for this one particular thing.
I didn't take anything as an attack on LLMs. I took it as a severe misunderstanding of how technology works. I specifically outline that I would like to see the margin of error even when integrating actual apps that claim to achieve results, rather than using tools that don't.
Nothing in my claims perceives anything as an attack on LLMs, which shows a mischaracterisation of my entire point on your part.