"Programmer's best friend" is precisely the wrong thing to do though (it says nothing and only makes the reader confused. Are we talking about a language or a pet? I'm not looking for a friend.). They took a step back with that.
The old one was better because it said something about what the language is and how it benefits the user. "Best friend" is not descriptive. "dynamic language with minimal syntax that is easy to read and write" at least tells me something about Ruby, its priorities, and value proposition. I'm very concerned about a language that claims it wants to be my friend.
Dunno, it's a comfy tagline. I never got into Ruby, but it always feels to me like a really ergonomic and cozy language. Sure, the best-friend thing is a stretch, but honestly, it's just a slogan. How many people land on this page with no knowledge of what Ruby is and will confuse it with an app for making friends?
The number of times Matz is mentioned and depicted on the homepage is off-putting. MINASWAN feels too close to WWJD for me. I can't think of another programming language community that does this, and I'm including Wolfram in that assessment.
> I need to press twice to see what the code does when it runs, which isn't a lot
I don't know the exact numbers, but funnel data consistently shows you lose a large percentage of viewers with each click. So if 100 people view the first page, maybe 10 of them click through to the second page, and only 1 of them clicks through to the third. If having customers view the running code is crucial, you'd want it on the very first page, above the fold.
The result of having worked 4 hours to implement the thing is not just that you have the thing; it's that you have the thing and you understand the thing. Having the thing is next to useless if you don't understand it.
At best it plods along as you keep badgering Claude to fix it, until inevitably Claude reaches a point where it can't help, at which point you'll be forced to spend at least the 4 hours you would have originally spent trying to understand it so you can fix it yourself.
At worst the thing will actively break other things you do understand in ways you don't understand, and you'll have to spend at least 4 hours cleaning up the mess.
Either way it's not clear you've saved any time at all.
You do learn how to control Claude Code and architect/orient things around getting it to deliver what you want. That's a skill that is both new and possibly going to be part of how we work for a long time (though it also overlaps with the work tech leads and managers do).
My proto+sqlite+mesh project recently hit the point where it's too big for Claude to maintain a consistent "mental model" of how, e.g., search and the db schemas are supposed to be structured. It kept taking hacky workarounds, like going directly to the db at the storage layer instead of through the API layer, so I hit an insane amount of churn trying to get it to implement some of the features needed to make it production ready.
But now I have some new tricks and intuition for avoiding this situation going forward. Because I do understand the mental model behind what this is supposed to look like at its core, and I need to maintain some kind of human-friendly guard rails, I'm adding integration tests in a separate repo and a README/project "constitution" that Claude can't change but is accountable for maintaining, and configuring it to keep them in context while working on my project.
Kind of a microcosm of startups' reluctance to institute employee handbooks/KPIs/PRDs, followed by the resigned realization that they might truly be useful coordination tools.
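For a rough idea of the guard rail I mean, here's a sketch of one such integration test. The endpoint names and payloads are made up for illustration, not from my actual project; the point is that the test only talks to the public API, so any storage-layer shortcut that breaks the contract fails loudly:

    # lives in the separate tests repo; Claude works on the service repo
    # but is accountable for keeping these green
    import requests

    BASE_URL = "http://localhost:8080"  # assumed local dev instance

    def test_search_returns_indexed_document():
        # create a document through the public API only
        doc = {"title": "guard rails", "body": "keep the model honest"}
        created = requests.post(f"{BASE_URL}/v1/documents", json=doc, timeout=5)
        assert created.status_code == 201

        # query through the public search API; never touch the sqlite file directly
        resp = requests.get(f"{BASE_URL}/v1/search", params={"q": "guard rails"}, timeout=5)
        assert resp.status_code == 200
        assert any(hit["title"] == "guard rails" for hit in resp.json()["hits"])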
I agree with this sentiment a lot. I find my experience matches this. It's not necessarily fast at first, but you learn lessons along the way that develop a new set of techniques and ways of approaching the problem that feel fundamental and important to have learnt.
My fun lesson this week was that there's not a snowball's chance in hell GitHub Copilot can correctly update a Postman collection. I only realised there was a Postman MCP server after battling through that ordeal and eventually making all the tedious edits myself.
Yeah, this is close to my experience with it as well. The AI spits out some tutorial code and it works, and you think all your problems are solved. Then in working with the thing you start hitting problems you would have figured out if you had built the thing from scratch, so you have to start pulling it apart. Then you start realizing some troubling decisions the AI made and you have to patch them, but to do so you have to understand the architecture of the thing, requiring a deep dive into how it works.
At the end of the day, you've spent just as much time gaining the knowledge, but one way was inductive (building it from scratch) while the other is deductive (letting the AI build it and then tearing it apart). Is one better than the other? I don't know. But I don't think one saves more time than the other. The only way to save time is to allow the thing to work without any understanding of what it does.
Respectfully, I think I'm in a better position to decide a) what value this has to me and b) what I choose to learn vs. just letting Opus deal with it. You don't have enough information to say whether I've saved time, because you don't know what I'm doing or what my goals are.
Respectfully, a) I didn't say anything about what value this has to you but moreover...
b) you also don't have enough information to say if it's saved you time, because the costs you will bear are in the future. Systems require maintenance; that's a fact you can't get rid of with AI. And oftentimes, maintaining a system requires more work than building it in the first place. Maintaining systems tends to require a deep understanding of how they work and the tradeoffs that were made when they were built.
But you didn't build the thing, and you didn't even design it, since you left that up to Claude. That makes the AI the only thing on the planet that understands the system, except we know the AI doesn't actually understand anything at all. So no one understands the system you built, including the AI you used. And you expect that this whole process will have saved you time, while you play games?
I just don't see it working out that way, sorry. The artifact the AI spit out will eventually demand you pay the cost in time to understand it, or you will incur future costs for not understanding it as it fails to act as you expect. You'll pay either way in the end.
You're still captive to a product, which means that when CloudCo. increases their monthly GenAI price from $50/mo. to $500/mo., you either lose your service or you pay. By participating in the build process you're giving yourself a fighting chance.
I will quickly forget the details of any given code base within a few months anyway. Having used AI to build a project at least leaves me with very concise, actionable documentation and, as the prompter, a deep understanding of the high-level vision, requirements, and functionality.
> nobody would even begin to suggest this if we were talking about alcohol.
When we talk about alcohol, we explicitly separate presence from impairment using blood alcohol concentration. We set legal thresholds because studies show a sharp increase in crash risk above those levels, relative to sober drivers. If alcohol were evaluated by merely asking "was alcohol present?", we would massively overestimate its causal role, the same way THC's is being overestimated here.
The problem with THC data is not that baseline comparisons are illegitimate; it's that we lack an agreed-upon, time-linked impairment metric comparable to BAC. THC metabolites persist long after intoxication, so presence alone is a weak proxy for risk.
So applying baseline controls to THC is not "apologism"; it's applying the same evidentiary standards we already demand for alcohol, which is the opposite of what you claimed.
> If less than 40% of the population has impairment levels of THC at any given time but 40% of deceased car crash drivers have impairment levels
You're looking at two different populations in this and your other comments, drawing a false equivalence. The study covers a 6-year period, over which 103 people (40%) tested positive for THC. You're saying that because only 20% of people self-report consuming THC in the last year, the study's 40% figure is eye-popping and shocking. But you cannot directly infer elevated risk just because a subgroup has a higher prevalence than the general population, without controlling for exposure and confounders. Especially since the 20% baseline comes from people self-reporting something that makes them criminals.
Moreover, fatal crashes are not randomly distributed across age groups or vehicle types. Younger drivers are less experienced, drive more often, and drive smaller cars with fewer safety features; they are more likely both to use THC and to die in crashes even while sober. So there's a strong sampling bias here that you're not accounting for.
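To make the sampling-bias point concrete, here's a toy calculation; the numbers are invented purely to illustrate the mechanism, not taken from the study. Even if THC had zero causal effect on crash risk, a group that both uses THC more and crashes more for unrelated reasons pulls the THC-positive share among crash victims well above the population rate:

    # invented numbers, assuming THC has ZERO causal effect on crash risk
    young_share, old_share = 0.25, 0.75    # fraction of drivers in each group
    young_thc, old_thc = 0.50, 0.10        # THC-positive rate in each group
    young_risk, old_risk = 4.0, 1.0        # relative fatal-crash rate (sober)

    # THC prevalence in the general driving population
    pop_prev = young_share * young_thc + old_share * old_thc            # 0.20

    # THC prevalence among fatal-crash drivers (crashes weighted by group risk)
    total_crashes = young_share * young_risk + old_share * old_risk
    thc_crashes = young_share * young_risk * young_thc + old_share * old_risk * old_thc
    crash_prev = thc_crashes / total_crashes                            # ~0.33

    print(f"population: {pop_prev:.0%}, crash victims: {crash_prev:.0%}")
    # population: 20%, crash victims: 33%, with no causal effect at all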
And this isn't downplaying the results; it's pointing out the limitations of the study and warning you not to read into it what isn't there. You seem shocked by the results, which should prompt you to dig deeper into the study. I would say the most surprising thing here is that they found nothing changed before and after legalization.
AI is allowing a lot of "non SWEs" to speedrun the failed project lifecycle.
The exuberance of rapid early-stage development is mirrored by the despair of late-stage realizations that you've painted yourself into a corner, you don't understand enough about the code or the problem domain to move forward at all, and your AI coding assistant can't help either because the program is too large for it to reason about fully.
AI lets you make all the classic engineering project mistakes faster.
For reference this is the old one, which is much better: https://www.ruby-lang.org/images/about/screenshot-ruby-lang-... From: https://www.ruby-lang.org/en/about/website/