The decrease in spend was at the household level, not aggregate, so it’s a 5% decrease across 16% of households, or a bit less than 1% overall (0.05 × 0.16 ≈ 0.8%).
The overall weight loss seems to be because the spending decrease falls most heavily on calorie-dense foods like savory snacks; yogurt and fresh fruit spending goes up a bit.
The issue raised here seems mostly semantic, in the sense that the concern is about the mismatch between the standard meaning of a word (sycophant) and its meaning as applied to an issue with LLMs.
It seems to me that the issue it refers to (unwarranted or obsequious praise) is a real problem with modern chatbots. The harms range from minor (annoyance, or running down the wrong path because I didn’t have a good idea to start with) to dangerous (reinforcing paranoia and psychotic thoughts). Do you agree that these are problems, and is there a more useful term or categorization for these issues?
Re: minor outcomes. It really depends on the example I guess. But if the user types "What if Starbucks focuses on lemonade" and then gets disappointed that the AI didn't yell at them for being off track--what are they expecting exactly? The attempt to satisfy them has led to GPT-5.2-Thinking style nitpicking[1]. They have to think of the stress-test angles themselves ('can we look up how much they are selling as far as non-warm beverages...')
[1] e.g. when I said Ian Malcolm in Jurassic Park is a self-insert, it clarified to me "Malcolm is less a “self-insert” in the fanfic sense (author imagining himself in the story) and more Crichton’s designated mouthpiece". Completely irrelevant to my point, but answering as if a bunch of reviewers are gonna quibble with its output
With regards to mental health issues, of course nobody on Earth (not even the patients with these issues, in their moments of grounded reflection) would say that the AI should agree with their take. But I also think we need to be careful about what's called "ecological validity". Unfortunately I suspect there may be a lot of LARPing in prompts testing for delusions, akin to Hollywood pattern matching, aesthetic talk, etc.
I think if someone says that people are coming after them, the model should not help them build a grand scenario; we can all agree with that. But sycophancy is not exactly the concern there, is it? It's more like knowing that this may be a false theory. So it ties into reasoning and contextual fluency (which anti-'sycophancy' tuning may reduce!) and mental health guardrails.
<< The harms range from minor (annoyance, or running down the wrong path because I didn’t have a good idea to start with) to dangerous (reinforcing paranoia and psychotic thoughts). Do you agree that these are problems, and is there a more useful term or categorization for these issues?
I think that the issue is a little more nuanced. The problems you mentioned are problems of a sort, but the 'solution' in place kneecaps one of the ways LLMs (as offered by various companies) were useful. You mention the problem of reinforcement of bad tendencies, but give no indication of reinforcement of good ones. In short, I posit that the harms do not outweigh the benefits of augmentation.
Because this is the way it actually does appear to work:
1. dumb people get dumber
2. smart people get smarter
3. psychopaths get more psychopathic
I think there is a way forward here that does not have to include neutering seemingly useful tech.
Why doesn’t Signal have the same mindshare that these (imo) marginal apps have? It’s actually private. I wonder if people find it hard to use or something…
Until recently, I think the only way to join a Signal group was to be explicitly added by a member. It doesn't have all the channels etc. of something like Discord.
It doesn't have enough mindshare among normies either. In San Francisco, my entire social graph was on Signal. In NYC, I'm the weirdo that uses Signal for everything. Most locals seem to only use it for things that they explicitly want to be private. Among Euro friends, only the ones with ties to the US/tech industry use it.
> This. Tables of numbers are explicitly not subject to copyright; that’s a copyright 101 fact.
Ok, but there's clearly more nuance there. Otherwise I could claim that any mp3 file I wanted to distribute is just a table of 8-bit integers and therefore not subject to copyright.
I wanted to reply in this direction. Ultimately, literally everything and anything in software is a sequence of numbers that anybody could easily put in some kind of table form.
I don’t know where the catch is, but that sentence cannot be true in general.
A table of numbers is copyrightable if it represents some creative expression by a human being. For example, a BMP representing a sketch is a table of numbers and clearly copyrightable.
Weights are numbers that come from an optimization process. To the extent that weights encode any creativity, they encode the creativity of the training data. But any company using AI models (including Apple) does not want that interpretation, because they are using AI models that were trained on other people's copyrighted works. If weights could be copyrighted, all of us would own them.
That is simply not true. The details might vary by jurisdiction and the protection might not be under the exact name of “copyright” but there most certainly are comparable legal protections for the contents of databases (“tables of numbers”). See for example: https://europa.eu/youreurope/business/running-business/intel...
Disney would like to have a word with you. Why would their pile of numbers that represents Avatar3.m4a be any more subject to copyright than Apple_2D_3D.bin? Or GPT52.mlx or Opus45.gguf?
Rob Pike is definitely not going to be the only person pissed off by this ill-considered “agentic village” and its random acts of kindness. While Claude Opus decided to send thank-you notes to influential computer scientists, including this one to Rob Pike (fairly innocuous but clearly missing the mark), Gemini is making PRs against random GitHub issues (“fixed a Java concurrency bug” on some random project). Now THAT would piss me off, but fortunately it seems to be hallucinating its PR submissions.
Meanwhile, GPT5.1 is trying to contact people at K-5 after school programs in Colorado for some reason I can’t discern. Welp, 2026 is going to be a weird year.
Embedding state in a real number and calling it a “length” is a common trick to show that a physical system is TC. Unfortunately, the abstraction (length<->real number) suffers from numerous real-world issues that typically render any implementation impossible.
I’m not even talking impractical; real numbers are simply too powerful to be resolved in the physical world. Unless you spend a ton of effort talking about quantization and noise, you are very, very far from a realizable computer.
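A minimal sketch of this point (mine, not from the thread): the usual construction packs a machine's tape into a real number in [0, 1), but any physically realizable "length" has finite precision, so the embedded state is destroyed past a fixed number of bits. Here Python doubles (53-bit mantissa) stand in for any finite-precision quantity:

```python
def encode(bits):
    """Pack a list of 0/1 bits into a number in [0, 1) as sum of b_i * 2^-(i+1)."""
    x = 0.0
    for i, b in enumerate(bits):
        x += b * 2.0 ** -(i + 1)
    return x

def decode(x, n):
    """Read n bits back out by repeated doubling (shift-and-test)."""
    out = []
    for _ in range(n):
        x *= 2.0
        bit = int(x >= 1.0)
        out.append(bit)
        x -= bit
    return out

short = [1, 0, 1, 1] * 10   # 40 bits: fits inside a double's 53-bit mantissa
long = [1, 0, 1, 1] * 30    # 120 bits: does not fit

print(decode(encode(short), 40) == short)   # True: state survives
print(decode(encode(long), 120) == long)    # False: state lost past ~53 bits
```

With ideal reals this round-trip would work for any length of tape, which is exactly what the TC constructions rely on; with anything resolvable in practice, the usable state is bounded, and you are back to a finite machine.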
I think that, outside of implementability, it provides a nice proof that no algorithm can answer questions like “is the trajectory of this ball in this billiard eventually periodic?” Of course it (if I am reading correctly) leaves open that an algorithm could exist assuming the wall isn’t fractal.
> real numbers are simply too powerful to be resolved in the physical world
In a sense "real" numbers are in fact not real at all because they can't physically exist. I think we got it wrong when these numbers were named. What we now call the 'whole' numbers should be called 'real', and vice versa. pi is a whole (in the sense of complete) number because it includes ALL decimal places, but because of infinite precision it can never be realized. 2 is a real (as in it is realizable) number because we can have two of something in reality.
I’ve made some interesting things in the past few years, in particular singing Tesla coils and digitally-controlled plasma tube lights. Was thinking about making bespoke musical instruments based on some of these learnings.
Of particular interest were some interesting types of feedback that came from the Tesla coils. Basically, we modulated the frequency at which we drove the coils to produce sound, but the coils would interfere with one another, because that’s how electromagnetism works. We had to tune them to different resonant frequencies to play sound. But the interference itself could sound unique and eerie, sometimes like an old-timey radio. It’s similar in principle to a theremin but a very different sound.
Or I could just get a soul sucking job and do this in early retirement. Shrug.
Human brains and experiences seem to be constrained by the laws of quantum physics, which can be simulated to arbitrary fidelity on a computer. Not sure where Gödel’s incompleteness theorems would even come in here…
How are we going to deduce/measure/know the initialization and rules for consciousness? Do you see any systems as not encodable/simulatable by quantum mechanics?
I think you are asking whether consciousness might be a fundamentally different “thing” from physics and thus hard or impossible to simulate.
I think there is abundant evidence that the answer is ‘no’. The main reason is that consciousness doesn’t give you new physics, it follows the same rules and restrictions. It seems to be “part of” the standard natural universe, not something distinct.