
Availability is definitely a factor, but I feel that a far more important aspect is that the YouTube feed is personalised. It's A/B testing you for weeks on end, and has a pretty good idea of how to get maximum engagement. TV was never this targeted, nor was there feedback to ratchet up what it suggested to you.


Kids don’t stand a chance against decades of data/research and billions of dollars weaponized against human psychology to garner as much of your attention as possible at all times.


Kids should own a device with the "adult" bit set to 0, so that they can only use government-approved applications and sites. Why government? Because parents are too lazy or dumb to configure anything; 90% will just let their children access whatever they want, and the remaining 10% will feel like losers who cannot watch the things all their classmates are allowed to watch.


What happens when the kid eventually becomes an adult? They have to buy a new device? That seems like a really great way to create a bunch of unnecessary e-waste.

Also, letting Big Daddy Government control what we show the kids has got to be one of the worst ideas I've heard. Propaganda machines that parents have no power over? No thanks. That seems like the most likely outcome of this sort of measure. Next thing you know, every computer will also have an "activist" and "journalist" bit; once you normalize role-based access controls, the categories will only ever expand.


Ehhh I’m more of a “hybrid model” guy myself. I do think the government should be more involved in regulating what these companies can do to us and how they can use our data, but I’m not really into your vision of how involved they are in apps directly (imagine that kind of power with the Trump administration).

Meanwhile I do think parents should not be expected to literally handle every element of this because it’s just not possible to have eyes on every bit of media/entertainment/etc our kids can find. That being said it is our responsibility to educate our kids on some level, so we can’t just expect to pass the buck entirely to external systems. I do think it’s reasonable to expect some basic guardrails though.

Needs to be a little bit of effort and restriction across the board.


By our generation's “best and brightest”, supposedly.

At least, the most well compensated.

Shame on you, if you work for these organisations.


Because obviously, we can be trusted completely!


> Why not!

Responsive layout would be the biggest reason (mobile for one, but also a wider range of PC monitor aspect ratios these days than the 4:3 that was standard back then), probably followed by the problem of conflating the exact layout details with the content, versus the separation of concerns / ease of being able to move things around.

I mean, it's a perfectly viable thing if these are not requirements and preferences that you and your system have. But it's pretty rare these days that an app or site can say "yeah, none of those matter to me the least bit".


Godot is a great engine, and .Net support is very good. You can't go far wrong with it, especially for small 2D games.


Very interesting post, thank you!

I'd also be curious to know the following: how many new errors or regressions were caused by the bug fixes?


Since the fixit just finished on Friday, I don't have hard numbers from this one I'm afraid :)

Historically though, I would guess maybe 5-10% end up needing some followup fix which is itself usually smaller than the original (maybe a typo in some documentation or some edge case we spot when it hits prod etc).

The smaller the original fixes, the less likely you are to need followups, so that's another reason to prefer working mainly on them!


I think 5-10% is pretty good, it probably means that the codebase is mostly understandable and maintainable. I have definitely worked on some which were full of little traps and landmines just waiting for eager do-gooders to step on, which was sadly a self-fulfilling prophecy for the app.


Heh, good question. In the limit: did you fix 12 bugs, or did you fix 1 bug 12 times?


I've heard this argument before from the perspective of C# having more keywords and language features to be aware of than something else (in my particular argument, the other side was Java).

From this perspective, I can't say I disagree as such. If you look at the full set of language features, it sure is a lot of stuff to know about. The argument that it is too much, and that we should sacrifice expressiveness and signal to noise ratio in the code to keep the language simpler, I don't agree with.


I also spent hours messing around with calculators as a kid. I recall noticing that:

11 * 11 = 121

111 * 111 = 12321

1111 * 1111 = 1234321

and so on, where the largest digit in the answer is the number of digits in the multiplicands.
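
(For anyone wanting to poke at this beyond a calculator, here's a quick Python check; the neat symmetry holds up to nine 1s, after which carrying breaks the pattern.)

    # squares of repunits: 11, 111, 1111, ... up to nine 1s
    for n in range(2, 10):
        repunit = int("1" * n)          # e.g. n=3 -> 111
        print(f"{repunit} * {repunit} = {repunit * repunit}")
    # 11 * 11 = 121
    # 111 * 111 = 12321
    # ...
    # 111111111 * 111111111 = 12345678987654321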


While I certainly had the _concept_ of compound interest taught to me at some abstract mathematical level, the application to real life practical financial scenarios was definitely not done [1]. Economics as a whole was an optional subject.

I think schools and curriculums could do a whole lot better in representing this important facet of life. More broadly, I often feel that "applying all that math you've learned to real things" is a subject that could be taught.

[1] Seriously, having applied math questions like "Johnny earns X per year, with a cost of living of Y. Assuming inflation of Z and average yearly returns of R, what percentage should he be putting away, starting at age 25, so that at age 50 he essentially gets the equivalent of his own salary each month?" would likely cause some lightbulbs to go off in the kids' heads.
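
To make the footnote concrete, here's a rough back-of-the-envelope sketch in Python, working in real (inflation-adjusted) terms. The 4% real return and the perpetuity-style payout are my own illustrative assumptions, not anything from a curriculum:

    # Footnote [1] as a back-of-envelope exercise, in real (inflation-adjusted)
    # salary units. Assumptions (illustrative only): 4% real annual return,
    # 25 years of saving, and the goal of drawing a full salary indefinitely
    # (a perpetuity) from age 50 onwards.
    real_return = 0.04
    years_saving = 25

    # future value, at age 50, of saving 1 salary-unit at the end of each year
    fv_factor = sum((1 + real_return) ** (years_saving - y) for y in range(1, years_saving + 1))

    # a perpetuity paying 1 salary per year needs a pot of 1 / real_return salaries
    required_pot = 1 / real_return

    savings_rate = required_pot / fv_factor
    print(f"required savings rate: {savings_rate:.0%}")   # roughly 60% with these numbers

Which is, incidentally, exactly the kind of sobering lightbulb moment the question is meant to produce.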


> the application to real life practical financial scenarios was definitely not done

Of course it was. You can't teach compound interest without referring to money or banks. That's the whole point of it. Otherwise it's just multiplication.


It... is just multiplication. And I can't talk about GP's experience, but I can tell you that going through scientific schooling and engineering schools in the French system, you'll learn exactly how to calculate the math and never have a single example such as the one mentioned above.

We're here to build bridges, not count stashes of money after all!

You'd probably get those if you went through "economic studies" (which is a different track and where math includes a lot more statistics even in high school).


Not only can you, but I still don't see how the financial "magic of compounding" isn't bullshit for the vast majority of people - you can't really make significant money this way in reasonable time spans (5 years rather than 50).


5 years is "get rich quick!" scam territory. The real aim is to manage finances for the rest of your life which may or may not be 50+ years, but will definitely be in double digits if you're of an age for thinking of managing your savings. If your horizon is shorter than that, you're essentially on your deathbed already.


5 years isn't a reasonable timespan. Compounding over the course of a 35 year career, earning a modest wage, will fund a comfortable 20 year retirement. If that's "bullshit" for most people then too bad. Good things come to those who wait etc.


A 20 year retirement on the back of 35 years of working means dying before you're 80?

Given current life expectancy, and particularly if you find a life partner, the chances of at least one of you surviving through at least 85 are pretty high (like above 60% for the US).


I assumed that a "real" career and substantial savings, after paying off debt, begin at age 30. And the 20 year retirement was me being conservative. In truth, saving only 20% of your take-home for 35 years will be more than enough for 40 years.
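
As a rough sanity check of that shape of claim, here's a sketch in real (inflation-adjusted) salary units; the 5% real return is purely an assumption for illustration, and the result is quite sensitive to it:

    # Save 20% of salary for 35 years, then spend at the same lifestyle level
    # (the remaining 80%) in retirement. All in real terms; the 5% real return
    # is an assumption, not a prediction.
    real_return, save_rate, work_years = 0.05, 0.20, 35

    pot = 0.0
    for _ in range(work_years):                 # accumulation phase
        pot = pot * (1 + real_return) + save_rate
    print(f"pot at retirement: {pot:.1f} years of salary")   # ~18 with these numbers

    spend = 1 - save_rate                       # keep spending what you lived on
    years_funded = 0
    while pot >= spend and years_funded < 100:  # drawdown phase (capped at 100)
        pot = (pot - spend) * (1 + real_return)
        years_funded += 1
    print(f"funds at least {years_funded} years of retirement")

(Drop the assumption to 4% real and the same sketch funds only about 30 years, so the claim does lean on fairly generous long-run returns.)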


I wonder where it will go when Y > X. Maybe it's an open question what the solution is. A) Violent revolution and Johnny taking over the means of production. B) Death.


The problem is that the financial industry is, like, capitalism-maxxing.

How do you teach "financial literacy" in a practical way without referring to specific products, offerings, or corporations? You really can't.

If you talk to people about investing or retirement, they're gonna talk about Fidelity, Vanguard, whatever. Which is very practical. But I'm not so sure we need our government and education system to basically directly endorse these corporations.


> In response to this, Searle argues that it makes no difference. He suggests a variation on the brain simulator scenario: suppose that in the room the man has a huge set of valves and water pipes, in the same arrangement as the neurons in a native Chinese speaker’s brain. The program now tells the man which valves to open in response to input. Searle claims that it is obvious that there would be no understanding of Chinese.

I mean, I guess all arguments eventually boil down to something which is "obvious" to one person to mean A, and "obvious" to me to mean B.


Same. I feel the Chinese room argument is a nice thing to clarify thinking.

Two systems, one feels intuitively like it understands, one doesn’t. But the two systems are functionally identical.

Therefore either my concept of “understanding” is broken, my intuition is wrong, or the concept as a whole is not useful at the edges.

I think it’s the last one. If a bunch of valves can’t understand, but a bunch of chemicals and electrical signals can if it’s in someone’s head, then I am simply applying “does it seem like biology” as part of the definition and can therefore ignore it entirely when considering machines or programs.

Searle seems to just go the other way and I don’t understand why.


First point: if you imagine that the brain is doing something like collapsing the quantum wavefunction, wouldn't you say that this is a functionally relevant difference in addition to an ontologically relevant difference? It's not clear that the characteristic feature of the brain is only to compute in the classical sense. "Understanding," if it leverages quantum mechanics, might also create a guarantee of being here and now (computers and programs have no such guarantees). This is conjecture, but it's meant to stimulate imagination. What we need to get away from is the fallacy that a causal reduction of mental states to "electrical phenomena" means that any set of causes (or any substrate) will do. I don't think that follows.

Second: the philosophically relevant point is that when you gloss over mental states and only point to certain functions (like producing text), you can't even really claim to have fully accounted for what the brain does in your AI. Even if the physical world the brain occupies is practically simulatable, passing a certain speech test in limited contexts doesn't really give you a strong claim to consciousness and understanding if you don't have further guarantees that you're simulating the right aspects of the brain properly. AI, as far as I can tell, doesn't TRY to account for mental states. That's partially why it will keep failing in some critical tasks (in addition to being massively inefficient relative to the brain).


The Chinese room has the outputs being the same; that’s really key in this.

> consciousness and understanding

After decades of this I’ve settled on the view that these words are near useless for anything specific, only vague pointers to rough concepts. I see zero value in nailing down the exact substrates understanding is possible on without a way of looking at two things and saying which one does and which one doesn’t understand. Searle to me is arguing that it is not possible at all to devise such a test and so his definition is useless.


He’s not arguing that it’s not possible to devise such a test. He’s saying, lay out the features of consciousness as we understand them, look for what causes them in the brain, look for that causal mechanism in other systems.

Although for whatever it’s worth most modern AIs will tell you they don’t have genuine understanding (eg no sense of what pleasure is or feels like etc aside from human labeling).


> He’s not arguing that it’s not possible to devise such a test.

The entire point of the thought experiment is that to outside observers it appears the same as if a fluent speaker is in the room. There aren’t questions you can ask to tell the difference.


That's not the entire point, but it is a big part of the premise. The entire point, on the contrary, is that the system inside the room does not contain anything with conscious understanding of Chinese DESPITE passing the Turing Test. It's highlighting precisely that there's an ontological difference between the apparent behavior of the system and the reality of it.


Of course it’s the point: the systems are not distinguishable by behaviour, only by what’s inside them. There are no tests to determine what’s inside, otherwise the whole thing is pointless.

This was why I have the tin of beans comparison.

The room has the property X if and only if there’s a tin of beans inside. You can’t in any way tell the difference between a room that has a tin of beans in and one that doesn’t without looking inside.

You might find that a property that has zero predictive power, makes (by definition) no difference to what either room can do, and has no use for any practical purposes (again by definition) is rather pointless. I would agree.

Searle has a definition of understanding that, to me, cannot be useful for any actual purpose. It is therefore irrelevant to me if any system has his special property just as my tin of beans property is useless.


Again, it’s not an epistemological test. In reality the material difference between a computing machine and a brain is trivial. It’s showing there’s a categorical difference between the two. BTW—ethically it matters a great deal. If one system is conscious and another is not, that gives the first moral status. Among other practical differences such as guarantee of function over long term.


And again you assign a property or not to things that perform indistinguishably. Your definition is useless. It may as well be based on the tin of beans.

> In reality the material difference between a computing machine and a brain is trivial

No it isn’t. You are making the strong statements about how the brain works that you argued against at the start.

> Among other practical differences such as guarantee of function over long term.

Once again ignoring the setup of the argument. The solution to the chinese room isn’t “the trick is to wait long enough”.

I don’t know why you want to argue about this given you so clearly reject the entire concept of the thought experiment.

I find the entire thing to be intellectual wankery. A very simple and ethical solution is that if two things appear conscious from the outside then just treat them both as such. Job done. I don’t need to find excuses like “ah but inside there’s a book!” or “its manipulations are on the syntactic level if we just look inside” or “but it’s just valves!” I can simply not mistreat anything that appears conscious.

All of this feels like a scared response to the idea that maybe we’re not special.


Ok, things are getting a little heated and personal so I'll attempt to engage one more time in good faith.

The premise of the argument is that the Chinese Room passes the Turing Test for Chinese. There are two possibilities for how this happens: 1) the program emulates the brain and has the right relation to the external world more or less exactly, or 2) the program emulates the brain enough to pass the test in some context but fails to emulate the brain perfectly. We know that as it currently stands, we've "passed the Turing Test" but we do not go further and say that brains and AI perform "indistinguishably." Unless there are significant similarities between how brains work and how AIs work, on some fundamental level (case 1), even if they pass the Turing Test, it is possible that in some unanticipated scenario they will diverge significantly. Imagine a system that outputs digits of pi. You can wait until you see enough digits to be satisfied, but unless you know what's causing the output, you can never be sure that you're not witnessing the output of some rational approximation or some cached calculation that will eventually halt. What goes on inside matters a lot if you want a sense of certainty. This is simply a trivial logical point. Leaving that aside, assuming that you do have 1), which I believe we are still very far from, we're still left with the ethical consequences, which it seems you agree do hinge on whether the system is conscious.

You made a really strong claim, which is "I can simply not mistreat anything that appears conscious"--which is showing the difference in our intuitions. We are not beholden to the setup of the Chinese Room. The current scientific and rational viewpoint is at the very least that brains cause minds and they cause our mental world. I'm sure you agree with that. The very point we are disputing is that it doesn't follow that because what's going on on the outside is the same that what goes on on the inside doesn't matter. This is particularly true if we have clear evidence that the things causing the behavior are very different, that one is a physical system with biological causes and the other is a kind of simulation of the first. So when I say that a brain is trivially different from a calculating machine, what I mean is that the brain simply has different physical characteristics from a calculating machine. Maybe you disagree that those differences are relevant but they are, you will agree, obvious. The ontology of a computer program is that it is abstract and can be implemented in any substrate. What you are saying then, in principle, is that if I follow the steps of a program by tracking bits on a page that I'm marking manually, that somehow the right combination of bits (that decode to an insult) is just as morally bad as me saying those words to another human. I think many would find that implausible.

But there are some who hold this belief. Your position is called "ethical behaviorism," and there's an essay I argued against that articulated this viewpoint. You can read it if you want! https://blog.practicalethics.ox.ac.uk/2023/03/eth%C2%ADi%C2%...


I've read through your essay once, and might explore it in more detail later!


I have been engaging in good faith but to be honest am a little frustrated at having to continually point out what the actual chinese room thought experiment is. I think you have continually made a very important error with it.

> What goes on inside matters a lot if you want a sense of certainty. This is simply a trivial logical point

And yet entirely unrelated to this thought experiment. His point is not that the book isn't big enough, that the man inside the room will trip up at some point, or anything of the sort.

Now you might have a different argument about all this than Searle, and that's entirely fine. I'm saying that Searle's definition of understanding is utterly pointless because he defines it as one that is not related to the measurable actions of a system but related to the way in which it works internally.

> The premise of the argument is that the Chinese Room passes the Turing Test for Chinese.

...

> enough to pass the test in some context but fails to emulate the brain perfectly

No. That is a far weaker argument than Searle makes. His argument is not that it'll be hard to tell, or convincing but you can tell the difference, or most people would be fooled.

From Searle, let's dig into this.

https://web.archive.org/web/20071210043312/http://members.ao...

> from the point of view of somebody outside the room in which I am locked—my answers to the questions are absolutely indistinguishable from those of native Chinese speakers.

Already we get to the point of being indistinguishable.

> I have inputs and outputs that are indistinguishable from those of the native Chinese speaker,

Again indistinguishable.

And then he doubles down on this to the point of fully emulating the brain not being enough

> imagine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes.

Searle has a problem - he looks at two different systems and says there is understanding in one and not in another. Then he ties himself in knots trying to distinguish between the two.

> The idea is that while a person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible.

He cannot at all accept any sort of combination, he can't accept any concept of understanding being anything but binary. He cannot accept that it perhaps is not a useful term at all.

> in this paper I have tried to show that a system could have input and output capabilities that duplicated those of a native Chinese speaker and still not understand Chinese, regardless of how it was programmed

A programmed system *cannot* understand. It doesn't matter how it operates or how well, and this is again while duplicating the capabilities of a real person.

As far as I can tell, since he leans heavily into the physical aspect, if we had two machines:

1. Inputs are received via whatever sensors, go through a physical set of components, and drive motors/actuators

2. Inputs are received via whatever sensors, go through a chip running an exact simulation of those same components, and drive motors/actuators

then machine 1 could understand but machine 2 could not because it has a program running rather than just being a physical thing.

This is despite the fact that both simply follow the laws of physics; the very concept of a program is just a description of how certain physical things are arranged.

To go back to my point because I'm rather frustrated yet again just pointing out what Searle explicitly says:

Searle defines understanding in a way that makes it, to me, entirely useless. It provides by definition no predictive power and can by definition not impact anything we want to do.

I am not arguing which of these things understands. I'm saying the term as a whole isn't very useful, and Searle's definition has been pushed by him to a point of being entirely useless because he starts by insisting that certain things cannot understand.


I’m with you 98% of the way and then things take a sharp turn. In your world, the mere behavior determines the understanding. In Searle’s, the CAUSES of the behavior determine the understanding. The causes are knowable. He stipulates a setup where there’s an epistemic boundary to show that you can have apparent behavior but a fundamental difference in causes that can make you have a point of view on whether there is genuine understanding. If you don’t like this term, you can say conscious understanding. As I said before, there has to be a categorical distinction between a system that feels and a system that is pretending to feel. The distinction you make between machine 1 and machine 2 is correct. The stipulation is that machine 1 has physical causes that produce the physical phenomenon of consciousness (think about how various substances alter conscious feelings, such as pain killers and anesthetics), and machine 2 also has physical causes but the physical causes are doing something different, they’re modifying symbols to execute program steps, and if you like the “output” is just other symbols. Those symbols only have meaning as a matter of interpretation and convention and there’s no physical truth to their meaning.

So if you like, one is real and the other is fake. Or, one is physical and the other is symbolic or conventional. One actually had breakfast this morning and the other is lying about having breakfast to pass the Turing Test. One can feel pain, guilt, shame and the other one is just saying that it does because it’s running a program.

Searle says there is an empirical test for which domain a thinking object falls into (your machine 1 and machine 2), even though to an outside observer, in the limit, there is no difference in behavior. They will do the same thing. For all that, if you have a metaphysical value for consciousness and “genuine” feeling, then you think the difference is important. If you don’t, you don’t.

FWIW—I think once AI has a full understanding of its ontology, even if it’s simulating a human brain perfectly, if it knows it’s a program it will probably explain to us why it is or is not necessarily conscious. Perhaps that will be more convincing for you.


Most elegant tin of beans I've seen in a while.

If I understand your argument: if there's no empirical consequence, what's the point of the distinction, right?


lol. Imagine a husband arguing to his wife: if you can't tell that I'm cheating on you, what's the point of the distinction of faithful vs. not?


@Kim_Bruning The point of the experiment is that there is some opaque boundary where the behavior is indistinguishable--that's the empirical stance of behaviorists, what goes on inside "doesn't matter." The empirical boundary of a husband and wife might be home life and time together. If you "pierce" the Chinese Room, you see a guy with an exotic setup. If you pierce a native speaker, you see a brain that is electrochemical, that has microtubules that collapse the wave function (or whatever), just like YOU have, and YOU know you understand (at least relative to English)...these are VERY different things even if they are, externally, yielding the same behavior. So yes, you could hire a private detective and so-on, but the whole point of the "empirically indistinguishable" is that it is empirically indistinguishable relative to some boundary (hence, room). If the Chinese Room was TRULY empirically indistinguishable, then inside it would be a human producing Chinese, not a non-native speaker and a program.

btw--if you'd like to keep the conversation going, email is on my personal webpage in my bio.


You elided the word "Empirical". Say his wife made it empirically as water-tight as she can: for instance she hires a PI who follows him 24/7. The PI finds nothing out of the ordinary. How is this even still cheating?

Maybe he was cheating before or after, sure, but not during. No court would buy that.

...At least, that's how I interpret 'empirical consequence' - something observable or detectable, at very least in principle. Do you mean something different?

(Right this minute I'm coming from an empiricist framework where acts require consequences. If you're approaching this from a realist or rationalist view -which I suspect-, I'd be interested to hear it!)


> First point: if you imagine that the brain is doing something like collapsing the quantum wavefunction, wouldn't you say that this is a functionally relevant difference in addition to an ontologically relevant difference?

I can imagine a lot of things, but the argument did not go this far, it left it as "obvious" well before this stage. Also, when I see trivial simulations of our biological machinery yielding results which are _very similar_, e.g. character or shape recognition, I am left wondering if the people talking about quantum wavefunctions are not the ones that are making extraordinary claims, which would require extraordinary evidence. I can certainly find it plausible that these _could_ be one particular way that we could be superior to the electronics / valves of the argument, but I'm not yet convinced it is a differentiator that actually exists.


The argument doesn’t have to go that far. I think most people have the intuitive, ha, understanding that “understanding” is grounded in some kind of conscious certainty that words have meanings, associations, and even valences like pleasantness or unpleasantness. One of the cruxes of the Chinese Room is that this grounding has physical causes (as all biological phenomena do) rather than computational, purely abstract causes.

There has to be a special motivation to instead cast understanding as “competent use of a given word or concept” (judged by whom, btw?). The practical upshot here is that without this grounding, we keep seeing AI, even advanced AI, make trivial mistakes and require the human to give an account of value (good/bad, pleasant/unpleasant), because these programs obviously don’t have conscious feelings of goodness and badness. Nobody had to teach me that delicious things include Oreos and not cardboard.


> Nobody had to teach me that delicious things include Oreos and not cardboard.

Well, no, that came from billions of years of pre-training that just got mostly hardcoded into us, due to survival / evolutionary pressure. If anything, the fact that AI is as far as it is, after less than 100 years of development, is shocking. I recall my uncle trouncing our C64 in chess, and going on to explain how machines don't have intuition, and the search space explodes combinatorially, which is why they would never beat a competent human. This was ~10 years before Deep Blue. Oh, sure, that's just a party trick. 10 years ago, we didn't have GPT-style language understanding, or image generation (at least, not widely available nor of middling quality). I wonder what we will have in 10, 20, 100 years - whatever it is, I am fairly confident that architectural improvements will lead to large capability improvements eventually, and that current behavior and limitations are just that, current. So, the argument is that they can't ever be truly intelligent or conscious because it's somehow intuitively obvious? I disagree with this argument; I don't think we have any real, scientific idea of what consciousness really is, nor do we have any way to differentiate "real" from "fake".

On the other end of the spectrum, I have seen humans with dementia not able to make sense of the world any more. Are they conscious? What about a dog, rabbit, cricket, bacterium? I am pretty sure at their own level, they certainly feel like they are alive and conscious. I don't have any real answers, but it certainly seems to be a spectrum, and holding on to some magical or esoteric differentiator, like emotions or feelings, seems like wishful thinking to me.


Your vocabulary presupposes the categories you’re asserting are equivalent. The process of evolution and AI training are vastly different. One confers a survival advantage and is suffused with values that are essential to humans, such as morality, the primacy of vision, taste and smell, etc. AI training is an attempt to transfer functions that allow for human survival and flourishing to objects that are not human. AI training, and especially the Turing Test featured in the Chinese room, is about mimicking humans; human evolution is about survival and forms the basis of our aesthetic and moral judgments. One is simply a simulation of the other. Consciousness might not matter to what you concern yourself with as somebody amazed with AI (I am as well), but surely you believe that there is a moral difference between harming a human and harming an LLM, even verbally. What do you think accounts for that, if not consciousness?


> but surely you believe that there is a moral difference between harming a human and harming an LLM, even verbally.

I'm becoming less sure of this over time. As AI becomes more capable, it might start being more comparable to smaller mammals or birds, and then larger ones. It's not a boolean function, but rather a sliding scale.

Despite starting out from very skeptical roots, over time Ethology has found empirical evidence for some form of intelligence in more and more different species.

I do think that this should also inform our ethics somewhat.


As I've argued elsewhere, we should care what the source of the behavior is. The reason we expand ethical concern to dogs and birds, even though they don't have the capability to use language, and why we don't to LLMs, even though they use language very ably, is precisely because we recognize the biological causes of consciousness. The reason we keep getting confused about whether these concerns apply to AI is because we apply a behavioral standard rather than the standard we use everywhere else, which is a biological one. We have higher certainty that dogs are conscious, yes, because of their behavior, but also, and critically, because they share biology with us.


If you're going to refer to biology, be aware that the relevant subfield that defines the biological standard is in fact called Ethology. To attain rigor, Ethology historically rejected anthropomorphism in favor of strict behavioral evidence, seeing as that is the primary empirically measurable evidence available.

On a side note: it's been a pleasure reading through the debates with you, and possibly we can continue over mail!


Exactly. Refuting the premise of the Chinese Room is usually a sign of somebody not even willing to entertain the thought experiment. Refuting Searle's conclusion is where interesting philosophical discussions can be had.

Personally, I'd say that there is a Chinese speaking mind in the room (albeit implemented on a most unusual substrate).


There are two distinct counter-arguments to this way of debunking the Chinese room experiment, not in any specific order.

First, it is tempting to assume that a bunch of chemicals is the territory, that it somehow gives rise to consciousness, yet that claim is neither substantiated nor even scientific. It is a philosophical view called “monistic materialism” (or sometimes “naive materialism”), and perhaps the main reason this view is popular currently is that people uncritically adopt it following learning natural scientific fields, as if they made some sort of ground truth statements about the underlying reality.

The key to remember is that this is not a valid claim in the scope of natural sciences; this claim belongs to the larger philosophy (the branch often called metaphysics). It is not a useless claim, but within the framework of natural sciences it’s unfalsifiable and not even wrong. Logically, from scientific method’s standpoint, even if it was the other way around—something like in monistic idealism, where perception of time-space and material world is the interface to (map of) conscious landscape, which was the territory and the cause—you would have no way of proving or disproving this, just like you cannot prove or disprove the claim that consciousness arises from chemical processes. (E.g., if somebody incapacitates some part of you involved in cognition, and your feelings or ability to understand would change as a result, it’s pretty transparently an interaction between your mind and theirs, just with some extra steps, etc.)

The common alternatives to monistic materialism include Cartesian dualism (some of us know it from church) and monistic idealism (cf. Kant). The latter strikes me as the more elegant of the bunch, as it grants objective existence to the least amount of arbitrary entities compared to the other two.

It’s not to say that there’s one truly correct map, but just to warn against mistakenly trying to make a statement about objective truth, actual nature of reality, with scientific method as cover. Natural sciences do not make claims of truth or objective reality, they make experimentally falsifiable predictions and build flawed models that aid in creating more experimentally falsifiable predictions.

Second, while what the scientific method tries to build is a complete, formally correct and provable model of reality, there are some arguments that such a model is impossible to create in principle. I.e., there will be some parts of the territory that are not covered by the map, and we might not know what those parts are, because this territory is not directly accessible to us: unlike a landmass we can explore in person, in this case all we have is maps, the perception of reality supplied by our mind, and said mind is, self-referentially, part of the very territory we are trying to model.

Therefore, it doesn’t strike me as a contradiction that a bunch of valves don’t understand yet we do. A bunch of valves, like an LLM, could mostly successfully mimic human responses, but the fact that this system mimics human responses is not an indication of it feeling and understanding like a human does, it’s simply evidence that it works as designed. There can be a very different territory that causes similar measurable human responses to arise in an actual human. That territory, unlike the valves, may not be fully measurable, and it can cause other effects that are not measurable (like feeling or understanding). Depending on the philosophical view you take, manipulating valves may not even be a viable way of achieving a system that understands; it has not been shown that biological equivalent of valves is what causes understanding, all we have shown is that those entities measurably change at the same time with some measurable behavior, which isn’t a causative relationship.


It's not mostly mimicking, it's exactly identical. That was always the key point. Indistinguishable from the outside, one thing understands and the other doesn't.

I feel like I could make the same arguments about the chinese room except my definition of "understanding" hinges on whether there's a tin of beans in the room or not. You can't tell from the outside, but that's the difference. Both cases with a person inside answering questions act identically and you can never design a test to tell which room has the tin of beans in.

Now you might then say "I don't care if there's a tin of beans in there, it doesn't matter or make any sort of difference for anything I want to do", in which case I'd totally agree with you.

> just like you cannot prove or disprove the claim that consciousness arises from chemical processes.

Like understanding, I haven't seen a particularly useful definition of consciousness that works around the edges. Without that, talking of a claim like this is pointless.


> talking of a claim like this is pointless.

Not at all. The confusion you expressed in your original comment stems from that claim. If you want to overcome that confusion, we have to talk about that claim.

Your statement was that it’s unclear how a bunch of valves doesn’t understand, but chemical processes do, and maybe you have a wrong intuition. Well, it appears that your intuition is to make this claim of causality, that some sort of object (e.g., valves or neurons), which you believe is part of objective reality, is what would have to cause understanding to exist.

So, I pointed out that assumption of such causality is not a provable claim, it is part of monistic materialism, which is a philosophical view, not scientific fact.

Further hinting at your tendency to assume monistic materialism is calling the systems “functionally identical”. It’s fairly evident that they are not functionally identical if one of them understands and the other doesn’t; it’s easy to make this mistake if you subconsciously already decide that understanding isn’t really a thing that exists (as many monistic materialists do).

> Like understanding, I haven't seen a particularly useful definition of consciousness that works around the edges.

Inability to define consciousness is fine, because logically circular definitions are difficult. However, lack of definition for the phenomenon is not the same thing as denying its objective existence.

You can escape the necessity to admit its existence by waving it away as an illusion or “not really” existing. Which is absolutely fine, as long as you recognize that it’s simply a workaround to not have to define things (if it’s an illusion, whom does it act on?), that conscious illusionism is just as unfalsifiable and unprovable as any other philosophical view about the nature of reality or consciousness, and that logically it’s quite ridiculous to dismiss as illusion literally the only thing that we empirically have direct unmediated access to.

> It's not mostly mimicking, it's exactly identical.

> Both cases with a person inside answering questions act identically and you can never design a test to tell which room has the tin of beans in.

If you constructed a system A that produces some output, and there is a system B, which you did not construct and whose workings you don't fully understand, which produces identical output but is also believed to produce other output that cannot be measured with current technology (a.k.a. feelings and understanding), you have two options: 1) say that if we cannot measure something today then it certainly doesn’t matter, doesn’t exist, etc., or 2) admit that system A could be a p-zombie.


> It’s fairly evident that they are not functionally identical

Then you could tell the difference and the thought experiment is broken. The whole point is that outside observers can’t tell. Not that they’re too stupid; it's that there isn’t a way they could tell, no question they could ask.

> but is also believed to produce other output that cannot be measured with current technology

Are you suggesting that Searle was saying that there was a difference between the rooms and that we just needed more advanced technology to see inside them? Come on.


> The whole point is that outside observers can’t tell.

I tried to explain that outside observers may not observe the entirety of what matters, whether due to current technical limitations or fundamental impossibility. In fact, to assume externally observed behaviour (e.g., of a human) is all that matters strikes me as a pretty fringe view.

> Are you suggesting that Searle was saying that there was a difference between the rooms and that we just needed more advanced technology to see inside them

Perhaps you are trying to read too much into what the experiment itself is. I do not treat it as “Searle tried to tell us something this way”. If he wanted to say something more specific he probably had done it in relevant works. The thought experiment however is very clear and describable in a paragraph and is open to possible interpretations, which is what we are doing now. That is the beauty of thought experiments like this.


I'd be fine if Searle just very simply said "we have a non-material soul and that's why we understand. Anything doing the exact same job but without a soul isn't understanding because understanding is limited entirely to things with souls in my definition".

> A bunch of valves, like an LLM, could mostly successfully mimic human responses,

The argument is not "mostly successfully", it's identically responding. The entire point of the chinese room is that from the outside the two things are impossible to distinguish between.


You’re talking about Cartesian mind-body dualism. It’s absolutely fine not to sneak that view into an otherwise sound thought experiment, as it’s quite irrelevant—the concept of a p-zombie from the Chinese room experiment holds regardless.

> The argument is not "mostly successfully", it's identically responding.

This is a thought experiment. Thought experiments can involve things that may be impossible. For example, the Star Trek Transporter thought experiment involves an existence of a thing that instantly moves a living being: the point of the experiment is to give rise to a discussion about the nature of consciousness and identity.

The thing not being able to exist at all is one possible resolution of the paradox. There may be a limitation we are not aware of.

Similarly, in Searle’s experiment, the system that identically responds might never exist, just like the transporter in all likelihood cannot exist.

> The entire point of the chinese room is that from the outside the two things are impossible to distinguish between.

To a blind person, an orange and a dead mouse are impossible to distinguish between from 10 meters away. If you can’t distinguish between two things, it doesn’t mean the things are the same. Ability to understand, self-awareness and consciousness are things we currently cannot measure. You can either say “these things don’t exist” (we will disagree) or you have to say “the systems can be different”.


You seem confused as to what I’ve said. I know these things cannot exist in reality.

The Chinese room is setup so that you cannot tell the difference from the outside. That’s the point of it.

> If you can’t distinguish between two things, it doesn’t mean the things are the same.

But it does mean that the differences between them are irrelevant to you by definition.

> Ability to understand, self-awareness and consciousness are things we currently cannot measure. You can either say “these things don’t exist”

Unless you have a way they could be measured, where we just lack the technology or skill, your definitions are of things that may as well not exist, because you cannot define them. They are vague words, and they are fine if you accept you have three major categories: “yes, and here's why”, “no, and here's why”, and “no idea”. I am happy saying I’m conscious and the pillow next to me is not. I don’t have a definition clear enough to say yes/no if the pillow was arguing with me.


I would encourage deeply digging into the intuition that brain states and computer states are the same. Start with what you know, and then work backwards and see whether you still think they aren’t different. For example, we have an intuitive understanding of what kinds of flavors (for us) are delicious versus not. Or what kinds of sounds are pleasant versus not. Etc. If I close my eyes, I can see the color purple. I know that Nutella is delicious, and I can imagine its flavor at will. I share Searle’s intuition that the universe would be a strange place if these feelings of understanding (and pleasantness!) were simply functions not of physical states but of abstract program states.

Keep in mind—what counts as a bit is simply a matter of convention. In one computer system, it could be a minute difference in voltage in a transistor. In another, it could be the presence of one element versus another. In another, it could be whether a chamber contains water or not. In another, it could be markings on a page. On and on. On the strong AI thesis, any system that runs steps in this program would not just produce functionally equivalent output to brains, but they would be forced to have mental states too, like imagining the taste of Nutella.

To me, it's implausible that symbolic states FORCE mental states, or put another way that mental states are non-physical (we think about how states like pain, euphoria, drunkenness, etc, are physically modulated through drugs... you'd have to modify this to say that they're really modifying symbolic states somehow). Either the Chinese Room is missing something, our understanding of physical reality is incomplete, OR you have to bite the bullet that the universe creates mental states when systems implement the right program—but then you’re left with the puzzle of how it is that there is a tie between the physical world and the abstract world of symbols (how can causing a mark on a page cause mental states).

So what’s the physical cause for consciousness and understanding that is not computable? If for example you took the hypothesis that “consciousness is a sequence of microtubule-orchestrated collapses of the quantum wavefunction” [1], then you can see a series of physical requirements for consciousness and understanding that forces all conscious beings onto: 1) roughly the same clock (because consciousness shares a cause), and 2) the same reality (because consciousness causes wavefunction collapses). That’s something you could not do merely by simulating certain brain processes in a closed system.

[1] Not saying this is correct, but it invites one to imagine that consciousness could have physical requirements that play into some of the oddities of the (shared) quantum world. https://x.com/StuartHameroff/status/1977419279801954744


I think a lot of it comes down to the domain, language and frameworks, your expectations, as well as prompt engineering. Having said that, I have had a number of excellent experiences in the past few weeks:

- Case 1 was troubleshooting what turned out to be a complex and messy dependency injection issue. I got pulled in to unblock a team member, who was struggling with the issue. My efforts were a dead-end, but Claude (Code) managed to spot a very odd configuration issue. The codebase is a large, legacy one.

- Case 2 was the same codebase: I again got pulled in to unblock a teammate, investigating why some integration tests were passing individually, but not when run as a group. Clearly there was a pretty obvious smoking gun, and I managed to isolate the issue after about 15-30 minutes of debugging. I had set Claude on the goose chase as well, and as I closed the call with my teammate, I noticed it had found the exact same two lines that were causing the issue.

Clearly, it occasionally does insane stuff, or lies its little pants off. The number of times where it "got me" is fairly low, however, and its usefulness to me is extreme. In the cases above, it outdid a teammate who has at least 10 years of experience, and equalled me in one case and outdid me in the other, with over 25 years now. I have a similar wonderment to your situation, but the opposite: "how are people NOT finding value in this?".

