I think what comment-OP above means to point at is this: given what we know (or don't) about awareness, consciousness, intelligence, and the like, let alone the human experience of it all, we do not today have a way to scientifically rule out the possibility that LLMs are self-aware/conscious entities of their own; and that's even before we start arguing about their "intelligence", however that may be understood.
What we do know and have so far, across disciplines, and from the fact that neural nets are modeled after what we've learned about the human brain, is that it isn't an impossibility to propose that LLMs _could_ be more than just "token prediction machines". There can be 10000 ways of arguing that they are indeed simply that, but there are also a few ways of arguing that they could be more than what they seem. We can talk about probabilities, but not make a definitive case one way or the other yet, scientifically speaking. Those few arguments are worth not ignoring or dismissing.
Is this what we are reduced to now, to snap back with a wannabe-witty remark just because you don't like how an idea sounds? Have we completely forgotten and given up on good-faith scientific discourse? Even on HN?
I'm happy to participate in good faith discourse but honestly the idea that LLMs are conscious is ridiculous.
We are talking about a computer program. It does nothing until it is invoked with an input, and then it produces a deterministic output unless it is given a random component to break that determinism.
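To make that concrete, here's a minimal sketch with a toy stand-in for a model (hypothetical names and scores, not a real LLM API): the output is fully determined by the input unless a random component is explicitly injected, and even then, fixing the seed restores determinism.

    # Toy sketch (hypothetical, not a real LLM API): same input -> same output,
    # unless randomness is deliberately injected.
    import random

    def toy_logits(prompt: str) -> dict[str, float]:
        # Stand-in for a forward pass: scores depend only on the input.
        return {"yes": 2.0, "no": 1.0, "maybe": 0.5}

    def generate(prompt: str, seed: int | None = None) -> str:
        scores = toy_logits(prompt)
        if seed is None:
            # Greedy decoding: the same prompt yields the same token, every time.
            return max(scores, key=scores.get)
        # Sampling varies only because of the injected random component;
        # fixing the seed makes even that fully reproducible.
        rng = random.Random(seed)
        tokens, weights = zip(*scores.items())
        return rng.choices(tokens, weights=weights)[0]

    assert generate("hi") == generate("hi")                    # deterministic
    assert generate("hi", seed=42) == generate("hi", seed=42)  # reproducible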
That's all it does. It does not live a life of its own between invocations. It does not have a will of its own. Of course it isn't conscious lol how could anyone possibly believe it's conscious? It's an illusion. Don't be fooled.
Reading what you said literally, you're making a strong statement: that an AI could never be conscious; further, that consciousness depends on free will, that free will is incompatible with determinism, and that all of these claims are obviously self-evident.
But the problem is the narrative around this tech. It is marketed as if we have accomplished a major breakthrough in modeling intelligence. Companies are built on illusions and promises that AGI is right around the corner. The public is being deluded into thinking that the current tech will cure diseases, solve world hunger, and bring worldwide prosperity, when all we have achieved is to throw large amounts of data at a statistical trick that sometimes produces interesting patterns. Which isn't to say that this isn't or can't be useful, but it is a far cry from what is being suggested.
> We can talk about probabilities, but not make a definitive case one way or the other yet, scientifically speaking.
Precisely. But the burden of proof is on the author. They're telling us this is "intelligence", and because the term is so loosely defined, this can't be challenged in either direction. It would be more scientifically honest and accurate to describe what the tech actually is and does, instead of ascribing human-like qualities to it. But that won't make anyone much money, so here we are.
IMO, that's a common misconception. The fact that it seems this way can be attributed to misunderstanding, bias in the data, and perhaps a poor choice of treatment. Sometimes, it's also the best healthcare can do for now.
Medical interventions for mental health issues aren't a forever-crutch. Plenty of people do taper off/change something about their prescriptions after a certain point, but we rarely ever hear those stories. What we do hear is plenty of people getting on meds/being on meds for a long time, which can bias us and make us think that most people who get on meds are on it for life.
Under-nourishment/malnutrition, traumatic incidents/events, genetics, societal conditions, bullying and abuse, and so many other things are also brain-altering. Why do we not consider them so, and why do we turn a blind eye to all of that?
If you're horrified that we are in a world and society where a 7yo has been put in a position where antidepressants help them, yeah, that's understandable. If you're horrified that a kid is taking them, that the parents sought medical intervention for "just a kid", then I'd say you're reacting to the concept of a kid on antidepressants rather than actually listening to the OP and their family's history and story.
Often, people react to the concept of a thing rather than the ground reality of life and its complexities of lived experience. Most people also extrapolate (in either direction) others' lived experiences based on their learnings, understandings, pasts and future ambitions. In this case especially, there's also added stigma around mental health, antidepressants and the locus of personal responsibility when it comes to mental health issues.
The _concept_ of a child on antidepressants suspends trust in parents; that's often assumed and unquestioned, depending on the child's age. Close to 18yo? Supportive parents. 7yo? Horrible parents. I'd argue it also tends to suspend critical thinking and introduce an unshakeable bias: that a 7yo _never [ever]_ needs antidepressants. Why? What makes you say that? What's your evidence and reasoning?
If you feel so horrified by that, can you consider for a moment that the parents recognize the weight and gravity of this decision too? That they had to really think this through, pursue more thorough medical advice than usual, make a judgement call, and will have to live with this decision for the rest of their lives?
OP's responses to multiple comments indicate that they did not make this decision lightly, and that they made sure it was the better thing to do overall. I commend OP's openness and honesty in talking about it. It's certainly inspiring to see a parent care for their child's mental health rather than dismissing it with "oh, the kid's just young and moody, they'll feel better tomorrow."
PS. We (as a society) are always learning more and newer things about mental health and treatments. It might look like we know a lot. Perhaps. But we also don't know so much!
It's really frustrating now that, for every product/service, we have to go through the privacy policy carefully, especially when they're written in increasingly generic verbiage. We pay for the product/service upfront or as a subscription, then a subscription for additional features, and on top of all that, agree to sell all our data and souls too. And Tech does all this blindly while gaslighting itself that "it's making the world a better place."
The worth and value of something, especially in art, is often assigned wrongly in the moment. But on the axis of history, it has a life of its own.
Just 200 years ago, in a world without most of the technologies we take for granted today, art played a fundamentally representative role in stretching the boundaries of ideas, imagination, and possibilities, and by extension, human cognition and social impact.
Preservation is the point. Art is made in the moment. Yes, there can be scientifically proven, longer-lasting/more preservable mediums, inks/paints, etc. Pieces of art capture the inspirations and ideas of their creators and pass that feeling along to future generations, to inspire new ideas for their times. They can be a snapshot of one place at one moment in time, but they can also be timeless.
In some ways, it's the point OC makes — that it's subjective. It's a culture problem.
In our profession, our conventional approach to resolving these kinds of differences is to reduce them to a specific set of conditionally applied rules that everyone _has_ to agree on. Differences in opinion are treated as built on top of a more fundamental set of values that _have_ to be universal, modular, and distinct. Why do we do this? Because that's how we culturally approach problem-solving.
Most industries at large train and groom people to absorb structured value systems whose primary function is to promote productivity (as in, delivery of results). That value system, however, ultimately benefits capital most, not necessarily knowledge or completeness.
Roles and positions ultimately encompass and package a set of values and expectations. So we are left with a small group of people who have practiced valuing a few other aspects but feel isolated and burdened with having to voluntarily take on additional work (because they really care about it), and others unnecessarily pressured to mass-adopt those values and burdened with taking on what feels like additional work that only a small group of people care about.
In the cultural discourse, we are trying to fix minimum thresholds for some values and value systems and, correspondingly, their expectations. In and of itself, that can be a valid ask, but it is never going to be possible: time and resources are limited, and values are a continuum, so fixing one requires compromising on another. This is where we are as a professional culture and community in the larger society today.
The Tech industry refuses to break down the role of a "software engineer/developer" further than what it is today and, consequently, refuses to break down the more complex/ambiguous values and value systems into simpler ones, which would reduce the compromises encompassed in and perceived by different sub-groups and increase the overall satisfaction of developers in the industry. Instead, we've expanded what software developers are responsible for, which has caused more and more people to burn out trying to meet a broader set of expectations and a diminished set of value systems with more compromises to accommodate it.
Ideally, we need an industry and a professional culture that allows for and respects niche values and acknowledges the necessity of more niche roles to focus on different parts of the larger craft of software development.
PS. As a side note, the phrasing in the article is unfair, which OC is pointing out too — there is a false equivalence drawn between "caring for the craft" and "stressing over minutiae." In the context of the discourse around the article, this causes those who value the craft and want to talk about caring for it to be viewed and perceived as the insane weirdos who stress over the minutiae the author was referring to.
Really appreciate this comment and perspective! In the larger context of immigration and brain drain in other countries, it shows how the US also has one, but of a different kind. Ultimately, it's a loss of potential. I'd somewhat disagree with the directionality of the correlative/causal relation, though. What can be said is that the US also experiences a knowledge drain toward plainly lucrative jobs. I'd wager it was/is a cyclical effect that has simply worsened over the decades, and that neither engineers moving to fintech nor low-paying engineering jobs were/are the sole cause.
I think OP's title itself answers the question — they're wealthy, and most if not all things they could really want in the world are a transaction away. "Constraints" are key to finding purpose and direction. There can be a right/optimal set of constraints, but when there seem to be none, any is better than none. Constraints keep us from having some things in life (things or experiences) that we also want. When we can satisfy any want, the wants no longer feel like they matter. The very reason a want existed in the first place is that it was not something possible at the time.
I'd advise OP to strategize smartly, given they have enough money to last a simple and full life: save, invest, donate, and keep the transactions small; i.e., not investing or donating all or a majority of their wealth into one thing, but rather a little bit here and a little bit there, every now and then, gradually and slowly.
Taking it slower is itself a form of constraint. And together with keeping relationships and connections, minimizing the noise in life, and making it simpler toward enjoying its truer pleasures, they can grow richer and live more luxuriously, not just in terms of wealth, but in a safer, more secure, and cozier human experience.
> "Constraints" are key to finding purpose and direction.
I would modify this to "externally imposed constraints w.r.t. socially validated goals". I had come to this conclusion long ago based on my study of philosophy and my own life situation. I had to give up a software career to become a caregiver; my needs and wants being few and frugal, with no dependents, I found myself in a situation where I had a place to stay, clothes to wear, and enough food to eat, and no "goal" in life, i.e. freedom. That was when I realized that every goal had been imposed from outside and I had simply followed a socially validated path, taking it as my own. Breaking out of this cycle means you are suddenly in a situation where you have to define your own goal from a large menu of options, with no constraints pushing you one way or another. This is when a feeling of "emptiness" dawns on you, i.e. everything feels unnecessary/empty/worthless without social validation. Note that it has nothing to do with the amount of wealth you have, but with having enough for oneself based on how you choose to live.
People from the wealthier first-world nations enjoy more international privileges — visa-on-arrival, stress-free travel, favorable currency-exchange rates, dual citizenships, and better societal structures and support for assimilation into foreign cultures.
Immigrants are either fleeing persecution or leaving their countries seeking a better life, facing visa requirements and security checks, usually without enough money, with little privilege, and with de facto distrust from foreign societal structures.
Relatively speaking, the typical expat can move around the world as they wish. Immigrants can't. So yes, when immigrants move, they often do so seeking to live elsewhere permanently.