Why would you assume culture is immaterial? And to make this less emotional, let's take the micro scale: don't you think the culture of doing engineering affects outcomes team to team within the same company, company to company within the same country, or even country to country within the same company?
I understand your point about misattribution, but it cuts both ways. What about when a company beats its competitors because it executed better, and it executed better because it had a superior organizational culture? Or when it is not successful, and that is due to poor culture?
YC provides the prime examples. It is never product at the expense of who the team is, in what proven way they have worked together, and how they plan to execute at scale.
The first problem is turning engineers into accountability sinks. This was a problem before LLMs too, but it is now a much bigger, structural problem with the democratization of the capacity to produce plausible-looking dumb code. You will be forced to underwrite more and more of that, and be expected to absorb the downsides.
The root cause is the second problem: short of formal verification, you can never exhaustively prove that your code works. You can demonstrate it, and automate that demonstration, for a sensible subset of inputs and states, and hope the state of the world approximately stays that way (spoiler: it won't). This is why 100% test coverage is, in most cases, a bad thing. This is why "sensible" is the key operative attitude, which LLMs suck at right now.
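To make "automate that demonstration for a sensible subset" concrete, here is a minimal sketch; parse_price is a hypothetical function and pytest is an assumption, but the shape is the point:

    import pytest

    def parse_price(text: str) -> int:
        # hypothetical helper: turn "$12.50" into integer cents
        return round(float(text.strip().lstrip("$")) * 100)

    @pytest.mark.parametrize("text,expected", [
        ("$0.00", 0),       # boundary
        ("$12.50", 1250),   # the typical case
        (" $3.99 ", 399),   # the whitespace the real world will send you
    ])
    def test_parse_price_sensible_subset(text, expected):
        # this demonstrates behavior on a sensible subset of inputs;
        # it proves nothing about the inputs outside that subset
        assert parse_price(text) == expected

Chasing 100% coverage just multiplies cases like these without ever closing the gap; picking the sensible subset is the judgment call, and that is the part that doesn't automate.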
The root cause of that one is the third problem: your job is to solve a business problem. If your code is not helping with the business problem, it is not working in the literal sense of the word. It is an artifact that does a thing, but it is not doing work. And since you're downstream of all the self-contradicting, ever-changing requirements in a biased framing of a chaotic world, you can never prove or demonstrate that your code solves a business problem, and that is the end state.
At no point did compilers produce stochastic output. The intent the user expressed was translated down with much, much higher fidelity, repeatability, and explainability. Most important of all, it completely removed the need for the developer to meddle with that output. If anything, it became a verification tool for the developer's own input.
If LLMs are that good, I dare you to skip the programming language and have them code directly in machine code next time. And that is exactly how it is going to feel if we treat them as being as valuable as compilers.
> At no point did compilers produce stochastic output. [...] Most important of all, it completely removed the need for the developer to meddle with that output.
Yes, once the optimizations became sophisticated enough and reliable enough that people no longer needed to think about them or drop down to assembly to get the performance they needed. Do you get the analogy now?
I don't know why you'd think your analogy wasn't clear in the first place. But your analogy can't support your assertion that optimizations will become sophisticated and reliable enough to let us completely forget about the programming language underneath.
If you have any first-principles thinking on why this is more likely than not, I am all ears. My epistemic bet is that it is not going to happen, or that, if we somehow end up there, the language we will have to use to instruct them will be no different from any other high-level programming language, so the point will be moot.
> But your analogy can't support your assertion that optimizations will become sophisticated and reliable enough to let us completely forget about the programming language underneath.
Funny how exactly this applies to instrument playing. Unearned speed only begets sloppiness. The only way to get past a certain velocity is to do meticulous metronome work from a perfectly manageable pace and build up with intention and synchrony. And even then it is not a linear increase; you will need to slow back down to integrate every now and then. (Stetina's "Speed Mechanics for Lead Guitar": 8 bpm up, 4 bpm down.)
At slow, manageable tempos, you can afford to use motions that don't scale to fast tempos. If you only ever play "what you can manage" with meticulous, tiny BPM increments, you'll never have to take the leap of faith and most likely will hit a wall, never getting past like 120-130 BPM 16ths comfortably. Don't ask how I know this.
What got me past that point was short bursts at BPMs way past my comfort zone and building synchrony _after_ I stumbled upon more efficient motions that scaled. IIRC, this is what Shawn Lane advocated as well.
I recommend checking out Troy Grady's (Cracking The Code) videos on YouTube if you're interested in guitar speed picking. Troy's content has cleared up many myths with an evidence-based approach and helped me get past the invisible wall. He recently uploaded a video pertaining to this very topic[0].
> What got me past that point was short bursts at BPMs way past my comfort zone and building synchrony _after_ I stumbled upon more efficient motions that scaled.
This is actually pretty close to what Stetina says. I just probably didn’t do a good job expressing it.
You’re oscillating above and below the comfort zone, and that iteration, like you say, affords insights from both sides, and eventually the threshold grows.
Depends on the instrument. For wind instruments, the motions basically don’t change, and your focus is on synchronizing your mouth with your hands. Tonguing technique is different at high speed but you would typically practice with the same technique at low speed when learning a fast piece.
But the motions do change: at very slow tempos you can move basically one finger at a time, while at faster tempos you have simultaneously overlapping motions.
On a trumpet? A clarinet? No, the motions don't simultaneously overlap. The fingering mechanics are slightly different at speed, but you would still start slow while using the higher speed mechanics and tonguing technique, not jump into high speed practice first.
No one is saying not to practice slow first. This advice is specifically for intermediate or advanced students who are focused on developing speed. Practice slow first, increase tempo slowly next, but when you hit a plateau, you need to add some repetitions that are well outside your comfort zone. You need to feel what it feels like to play fast, then clean it up.
It seems like this is a far more time-efficient methodology for building speed on guitar; I don't know why it wouldn't apply to other instruments like trumpet.
When I was in high school, a friend who played drums in a band would try to pull off these super complicated fast fills. He couldn't pull them off, and I always thought, "why doesn't he play something he can get right?" Well, after months of practice, he was able to pull them off. He was a great drummer, but he worked incredibly hard to get there. It's a little tangential to what you said, but it feels appropriately related.
I guess I'm agreeing while also saying that you can get there by failing a lot at full speed first. Maybe he practiced at half-speed when he was alone and I never saw that part.
One could argue that learned speed has the hours of practice "baked in" so it's actually much slower. And that's not a bad thing IMO.
I think this post only covers one side of the coin. Sure, getting things done fast achieves the outcome, but in the long run you retain and learn less. Learning new stuff takes time and effort.
> the exact same things the sloppy player is doing, but you do it in time and in tune.
It depends on the level we look at it from, but I think there is a fundamental difference between what excellent (professional-grade?) players are doing and what "sloppy" ones are doing.
It is not just done with more precision and care; they will usually have a different mental model of what they're doing, and the path to the better result is also not linear. Good form will give good results, but it won't lead to a professional-level result. You'll need to reinvent how you apply the theory to your exact body, the exact instrument in your hands, what you can and can't do, and adjust from there.
That's why veteran players are still stellar even when you'd assume they don't have the muscle and precision a younger player obviously has.
PS: I left aside the obvious: playing in time and in tune is one thing, conveying an emotion is another. Getting from the former to the latter is considerably hard.
Language is not humanness either; it is a disembodied artifact of our extended cognition, a way of transferring the contents of our consciousness to others, or to ourselves over time. This is precisely what LLMs piggyback on and are therefore exceedingly good at simulating, which is why the accuracy of "is this human" tools is stuck in the 60-70% range (50% is a coin flip) and is going to stay bounded for the foreseeable future.
And I am sorry to be negative but there is so much bad cognitive science in this article that I couldn't take the product seriously.
> LLMs can be scaled almost arbitrarily in ways biological brains cannot: more parameters, more training compute, more depth.
- Capacity of raw compute is irrelevant without mentioning the complexity of the computational task at hand. LLMs can scale - not infinitely - but they solve for O(n^2) tasks. It is also amiss to think human compute = a single human's head. Language itself is both a tool and a protocol of distributed compute among humans. You borrow a lot of your symbolic preprocessing from culture! As said, this is exactly what LLMs piggyback on.
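If I'm reading the O(n^2) point as the quadratic cost of vanilla self-attention over context length, here's a rough sketch of why that bites; the numbers are purely illustrative and constant factors are ignored:

    # vanilla self-attention compares every token with every other token,
    # so the number of pairwise scores grows quadratically with context length
    def pairwise_scores(context_length: int) -> int:
        return context_length * context_length

    for n in (1_000, 10_000, 100_000):
        print(f"{n:>7} tokens -> {pairwise_scores(n):,} scores per layer, per head")
    # 10x more context -> 100x more pairwise work; "just scale it" is not free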
> We are constantly hit with a large, continuous stream of sensory input, but we cannot process or store more than a very small part of it.
- This is called relevance, and we are so frigging good at it! The fact that the machine has to deal with a lot more unprioritized data in a relatively flat O(n^2) problem formulation is a shortcoming, not a feature. The visual cortex is such an opinionated accelerator for processing all that massive data that only the relevant bits need to make it to your consciousness. And this architecture was trained for hundreds of millions of years, over trillions of experimental arms - which were, in parallel, experimenting on everything else too.
> Humans often have to act quickly. Deliberation is slow, so many decisions rely on fast, heuristic processing. In many situations (danger, social interaction, physical movement), waiting for more evidence simply isn't an option.
- Again, a lot of this equivocates conscious processing with the entirety of cognition. Anyone who plays sports or music knows to respect the implicit, embodied cognition that goes into achieving complex motor tasks. We have yet to see a non-massively-fast-forwarded household robot do a mundane kitchen-cleaning task and then go play table tennis with the same motor "cortex". Motor planning and articulation is a fantastically complex computation; just because it doesn't make it to our consciousness, or isn't instrumented exclusively through language, doesn't mean it isn't one.
> Human thinking works in a slow, step-by-step way. We pay attention to only a few things at a time, and our memory is limited.
- Thinking, Fast and Slow by Kahneman is a fantastic way of getting into how much more complex the mechanism is.
The key point here is how good humans are at relevance, as limited as their recall is - because relevance matters, because it is existential. Therefore, when you are using a tool to extend your recall, it is important to see its limitations. Google search having indexed billions of pages is not a feature if it can't bring up the top results well. If it gains the capability to sell me on whatever it brought up being relevant, that still doesn't mean the results are actually relevant. And this is exactly the degradation of relevance we are seeing in our culture.
I don't care whether the language terminal is a human or a machine; if the human was convinced by the low-relevance crap of the machine, it is just a legitimacy-laundering scheme. Therefore this is not a tech problem, it is a problem of culture: we need to be simultaneously cultivating epistemic humility, including quitting the Cartesian tyranny of worshipping explicit verbal cognition assumed to be locked up in a brain, and accepting that we are also embodied and social beings that depend on a lot of distributed compute to solve for agency.
We know from the era of data the power of JOIN: bring in two different data sources about a thing and you can produce an insight neither of them could have provided alone.
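A minimal sketch of what I mean, with made-up tables and numbers (sqlite3 assumed): neither table alone contains the "users acquired via ads churn more" insight; the JOIN is what produces it, and it produces it deterministically.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE signups(user_id INTEGER, source TEXT);
        CREATE TABLE churn(user_id INTEGER, churned INTEGER);
        INSERT INTO signups VALUES (1, 'ads'), (2, 'referral'), (3, 'ads');
        INSERT INTO churn   VALUES (1, 1), (2, 0), (3, 1);
    """)
    # the insight lives in neither table; it appears only after the JOIN
    for source, churn_rate in con.execute("""
        SELECT s.source, AVG(c.churned)
        FROM signups s JOIN churn c ON c.user_id = s.user_id
        GROUP BY s.source
    """):
        print(source, churn_rate)   # e.g. ads 1.0, referral 0.0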
LLMs can be thought of as one big stochastic JOIN. The new insight capability - thanks to their massive recall - is there. The problem is the stochasticity. They can retrieve stuff from the depths and slap it together, but in these use cases we have no clue how relevant their inner rankings or intermediate representations were. Even with the best read of user intent, they can only simulate relevance, not compute it in a grounded and groundable way.
So I take such automatic insight-generation tasks with a massive grain of salt. Their simulation is amusing and feels relevant, but so does a fortune teller doing a mostly cold read with some facts sprinkled in.
> → I solve problems faster by finding similar past situations → I make better decisions by accessing forgotten context → I see patterns that were invisible when scattered across time
All of which makes me skeptical of this claim. I have no doubt they feel productive, but it might just as well be part of that simulation, with all the biases, blind spots, etc. originating from the machine - which could be worse than not having used the tool at all. Not having augmented recall is OK, forgetting things is OK - because memory is not a passive reservoir of data but an active reranker of relevance.
LLMs can't be the final source of insight and wisdom; they are at best sophists or, as Terence Tao put it more kindly, a mere source of cleverness. In this, they can just as well augment our capacity for self-deception, maybe even more than they counterbalance it.
Exercise: whatever amusing insight a machine produces for you, ask for a very strong counter to it. You might be equally amused.
> I'm always surprised how many 'logical' tech people shy away from simple determinism, given how obvious a deterministic universe becomes the more time you spend in computer science, and seem to insist there's some sort of metaphysical influence out there somewhere we'll never understand. There's not.
You might be conflating determinism with causality. Determinism is a metaphysical stance too, because it asserts the absence of free will.
Regardless of the philosophical nuance between the two, you are implicitly taking the vantage point of "god" or Laplace's Demon: infinite knowledge AND infinite computability based on that knowledge.
Tech people ought to know that we can't compute our way out of combinatorial explosion - that we can't even solve a simple 8x8 game called chess algorithmically. We are bound by framing choices, and therefore our models will never be a lossless, unbiased compression of reality. Asserting otherwise is a metaphysical stance, implicitly claiming human agency can sum up to a "godlike", totalizing compute.
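To put a number on the chess point, the standard Shannon-style back-of-the-envelope, with the usual rough figures of ~35 legal moves per position and ~80 plies per game:

    # Shannon's rough estimate of the chess game tree
    branching_factor = 35   # ~legal moves per position
    plies_per_game = 80     # ~half-moves in a typical game
    game_tree = branching_factor ** plies_per_game

    print(len(str(game_tree)) - 1)   # ~123: on the order of 10^123 lines of play
    # versus roughly 10^80 atoms in the observable universe;
    # "compute it all" is not an option for us, or for any machine we build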
In sum, models will never be sophisticated enough, and claiming otherwise has always ended up as a form of totalitarianism - the willful assertion of one's favorite "framing" - which has inflicted a lot of pain in the past. What we need is computational humility. One good thing about tech interviews is that they teach people the resource complexity of computation.
> what are fair ways to extract value from citizens for the shared value of the state?
The right question is who benefits the most from the state's services. For example, if a whole lot of security, legislative, or administrative services go toward protecting capital, then those who hold the most capital need to chip in the most.
> redistribution is usually that “more” people reach a higher standard of living, then adding taxes and friction to processes like automation may conflict with that goal
This is basically the 50-year-old trickle-down argument. But real wages have not kept up with GDP since the 70s, so nothing trickled down. We are demonstrably bad at sharing what we have achieved together; there is no reason to believe more tech will magically get better treatment than that.
Besides, redistribution is not about shifting the curve up but about making it flatter - see the Gini coefficient.
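For concreteness, a minimal sketch of the Gini coefficient (pure Python, incomes made up): it is scale-free, so lifting the whole curve proportionally doesn't move it, only flattening it does.

    def gini(incomes):
        # Gini = sum of |x_i - x_j| over all ordered pairs, divided by 2 * n * total
        n, total = len(incomes), sum(incomes)
        pairwise = sum(abs(a - b) for a in incomes for b in incomes)
        return pairwise / (2 * n * total)

    print(gini([10, 20, 30, 100]))        # 0.4375
    print(gini([100, 200, 300, 1000]))    # 0.4375: everyone 10x richer, inequality unchanged
    print(gini([35, 38, 42, 45]))         # ~0.05: flatter curve, much lower Gini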
> the core benefit of automation, which is to delete non-needed work, make things cheaper, and make the value creator richer.
Except the era of classical capitalism and the inventor's profit is over; since the 70s it has been rentiers' unreciprocated extraction on top of purported value that people didn't necessarily ask for or need in the first place. Likewise, most people aren't dying for AI automation, never mind its structural threats; it isn't even proven that it will deliver a net total productivity gain once the hype cools down, despite being shoved down people's throats.
Let's not kid ourselves: there is little concern for real value creation here, just a capture-the-flag game over a gigantic data-moated compute monopoly. Whatever democratic means enabled proper taxation would have already prevented this type of speculative berserk, the failures of which, I assure you, will be socialized.
So friction = societal consent, internalizing externalized costs, and revealing what is actual value versus monopolist's rent. It is healthy for society, and it is healthy for capitalism.
Actually, real wages have increased a lot since the 70s if you count employer contributions to employee health insurance. The problem is that a lot of that money is being wasted by an inefficient healthcare system, and employers probably shouldn't even be involved in sponsoring group health plans in the first place.
Employers paid for healthcare in the 1970s too, and for a higher percentage of the workforce at that. If premium inflation surpassed the CPI, that is still inflation, not real growth. If there's an inflation problem in delivering a temporally comparable service, that is not a "real wage" item for the employee [1]. So the nominal figure today shouldn't be relevant.
I agree it shouldn't be an employer item either, but whatever employers lose on premiums, they get back in an overall stickier and cheaper labor supply.
[1] One could argue that the productivity of healthcare increased, and the data indeed supports this, with overall life expectancy rising from the 70s to the mid-70s now, plus quality-of-life treatments. But again, most of the spend actually lands at the tail end, in that age group, which raises the workers' premiums without delivering them the benefit. Therefore, not much structural gain for the actual working-age employee.
I don't understand your point. Very few people in their 70s have employer sponsored group health insurance. Most are only on Medicare, perhaps with a commercial Medicare Advantage or Medicare Supplement plan.
My bad, I skipped a chain of thought there. Since Medicare pays less than private insurance, hospitals can and do shift costs (which in reality are the "opportunity cost of profit") to the latter, which pushes private premiums up. Regardless, this is a minor effect. Very little of the inflation is justified by productivity gains; as you said, it is a very inefficient healthcare system. US prices clock in at 2x-4x those of comparable OECD peers, the admin share is higher, etc.
>if you count employer contributions to employee heath insurance
You shouldn't.
>and employers probably shouldn't even be involved in sponsoring group health plans in the first place.
They are free to lobby for socialized medicine, but they don't, because they like how the current system helps lock employees into bad jobs for any amount of healthcare.
If you're trying to understand changes in the share of income going to workers versus employers, then you must count those contributions. For the average family, employers pay $20,143 annually in premiums: https://www.kff.org/affordable-care-act/annual-family-premiu....
From the perspective of the employer, that's real money, no different than if they had paid the $20,143 directly to the employee as wages. It's not the employer's concern what happens to that money after they fork it over.
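A worked example of the accounting point; the cash wage below is invented purely for illustration, the $20,143 is the KFF family premium figure cited above:

    # hypothetical worker: the cash wage is invented for illustration;
    # the $20,143 family premium is the KFF figure cited above
    cash_wages = 70_000
    employer_premium = 20_143
    total_compensation = cash_wages + employer_premium

    print(total_compensation)                                 # 90143: what the employer actually pays out
    print(round(employer_premium / total_compensation, 2))    # 0.22: the share invisible to a wage-only series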
Maybe people would view it more like that if they actually had the option to get paid the cash instead of an insurance plan of the same supposed value. For some employees that is possible to negotiate, but for the vast majority of employees with a healthcare plan it is a big no, unless they are willing to accept a tiny fraction of the insurance's value.
It's unfortunate that he starts with the thinking argument, because it will be nitpicked to death, while the bullshit and computing-freedom arguments are much stronger and, to me personally, irrefutably true.
For those who will take "bullshit" as an argument of taste, I strongly suggest taking a look at the referenced work, and ultimately Frankfurt's, to see that this is actually a pretty technical one. It is not merely the system's own disregard for truth but also its making the user care less about truthfulness, in the name of rhetoric and information ergonomics. It is akin to the sophists, except in this case chatbots couldn't be non-sophists even if they "wanted" to, because they can only mimic relevance, and the political goal they seem to "care" about is merely getting others to use them more - for the time being.
The computing-freedom argument likewise feels deceptively like a matter of taste, but I believe its harsh material consequences are yet to be experienced widely. For example, I was experiencing a regression in gemini-3's coding capabilities after an initial launch boost, a regression I could swear was deliberate, but I realized that if someone went "citation needed", there would be absolutely no way for me to prove it. It is not even a matter of versioning information or output non-determinism; it could even degrade its own performance deterministically based on the input - benchmark tests vs. a tech reporter's account vs. its own slop from a week past on a nobody-like-me's account - and there is absolutely no way for me to know it, nor to make it known. It is a right I waived away the moment I clicked through the "AI can be wrong" TOS. Regardless of how much money I invest, I can't even buy a guarantee on the degree of average aggregate wrongness it will keep performing at, or even knowledge thereof, while being fully accountable for the consequences. Regressing to dependence on closed-everything mainframes is not a computing model I want to be in, yet I cannot seem to escape it due to competitive or organizational pressures.
>For example, I was experiencing a regression in gemini-3's coding capabilities after an initial launch boost, a regression I could swear was deliberate
Can you describe what you mean by this in more detail? Like, do you think there was some kind of canned override put in to add a regression to its responses to whatever your input was? Genuine question.
It is a black box. We don't know what happens on the other side of the RPC call, good or bad, so it could be any number of knobs.
The user has two knobs, the thinking level and the model, so we know there are definitely per-call knobs. Who can tell whether thinking-high actually forks server-side into, e.g., thinking-high-sports-mode versus thinking-high-eco-mode? Or whether there are two slightly different instantiations of the pro model, one with cheaper inference due to whatever hyperparameter versus full-on expensive inference? There are infinite ways to implement this, and zero ways for the end user to prove it.
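To be clear, I have zero evidence any of this exists; the point is only that every variant below would be indistinguishable from my side of the API. A purely hypothetical sketch, every name invented:

    from dataclasses import dataclass

    @dataclass
    class Request:
        model: str            # the two knobs the user actually sees...
        thinking_level: str
        account_tier: str     # ...and whatever the server knows that the user doesn't

    def route(req: Request) -> str:
        # hypothetical server-side dispatch: none of this is observable to the caller
        if req.model == "pro" and req.thinking_level == "high":
            if req.account_tier != "benchmark-or-press":
                return "pro-high-eco"    # invented cheaper-inference variant
            return "pro-high-full"       # invented full-cost variant
        return req.model

    print(route(Request("pro", "high", "nobody-like-me")))       # pro-high-eco
    print(route(Request("pro", "high", "benchmark-or-press")))   # pro-high-full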
> An objective and grounded ethical framework that applies to all agents should be a top priority.
I mean, leaving aside the problems of computability, representability, and comparability of values, or the fact that agency exists in opposition (virus vs. human, gazelle vs. lion) and that even a higher-order framework to resolve those oppositions is itself a form of agency with its own implicit privileged vantage point - why does it sound to me like focusing on agency in itself is just another way of pushing the Protestant work ethic? What happens to non-teleological, non-productive existence, for example?
The critique of anthropocentrism often risks smuggling in misanthropy, whether intended or not; humans will still exist, their claims will count, and they cannot be reduced to mere agency - unless you are their line manager. Anyone who wants to shave that down has to present stronger arguments than centricity. In addition to proving that they can be anything other than anthropocentric - even if done through machines as their extensions - any person who claims to have access to the seat of objectivity sounds like a medieval templar shouting "deus vult" over their favorite proposition.