They also did an MRI scan on Honnold and found that he doesn't have the usual fear response. It's not clear if this was trained away, or if it's something innate.
I recall reading about a certain species of bird where, to impress the females, the males dive to the ground. The closer they get to the ground before pulling out of the dive, the more impressive.
The scientists found there was a gene influencing how daring a bird would be, with the values mostly clustered in two groups, IIRC. But there was a rare variant which made birds much more fearless, causing them to go much lower than the others.
However, they only ever found birds with one copy of that variant. It turned out that if a bird inherited it from both parents, it never pulled out of the dive at all and smacked straight into the ground.
These crazy free solo climbs and the like remind me of those birds.
> If you optimize below 1487 cycles, beating Claude Opus 4.5's best performance at launch, email us at performance-recruiting@anthropic.com with your code (and ideally a resume) so we can be appropriately impressed and perhaps discuss interviewing.
That doesn’t seem snarky to me. They said if you beat Opus, not their best solution. Removing “perhaps” (i.e. MAYBE) would be worse since that assumes everyone wants to interview at Anthropic. I guess they could have been friendlier: “if you beat X, we’d love to chat!”
There's more to employees than their raw ability to get below some performance threshold. If somebody passes the test but lives in a US-sanctioned country with no plans to move, is well known for using the n-word on social media, or has previously broken an NDA, Anthropic probably doesn't want to interview them.
I understand how it can be interpreted as snarky, but how could it have been written better? It's a hard path to walk and recruiting/interviewing is inherently sensitive it seems.
> It's a hard path to walk and recruiting/interviewing is inherently sensitive it seems.
Hiring and interviewing is in a weird place right now. We’re coming off of a period where tech jobs were easy to get and companies were competing for candidates. A lot of candidates quickly got used to the idea of companies working hard to charm and almost beg them to join. When those candidates encounter what it’s like to apply for highly competitive companies who have 1000x more applicants than they’d ever consider, the resulting straightforwardness can be shocking.
>If you optimize below 1487 cycles, beating Claude Opus 4.5's best performance at launch, email us at performance-recruiting@anthropic.com with your code (and ideally a resume) so we can be appropriately impressed and perhaps discuss interviewing.
Not condescending
> If you optimize below 1487 cycles, beating Claude Opus 4.5's best performance at launch, email us at performance-recruiting@anthropic.com with your code so we can schedule an interview.
No fucking shit, I paraphrased Anthropic's comments as
> do better than we have publicly admitted most of humanity can do, and we may deign to interview you
If you tell someone, after they pass a test that 99.999% of humanity cannot pass, that they _may_ get an interview, you are being snarky/condescending.
That's not how paraphrasing works. They probably intentionally held back from guaranteeing an interview, for various reasons. One that seems obvious to me is that with the bar set at "Claude Opus 4.5's best performance at launch", it's plausible that someone could meet it by feeding the problem into an LLM. If a bunch of people do that, they won't want to waste time interviewing them all.
You may want to consider the distribution and quantity of replies before stating that you WILL do something that might just waste more people’s time or not be practical.
The classy thing to do would be responding to every qualifying submission, even if it’s just to thank everyone and let some people know the field was very competitive if an interview won’t be happening.
I like these public challenges, but as someone who has set public questions before: ask any company that has run a public contest for their opinion. The pool is filled with scammers who either bought the solutions through sites like Chegg or sometimes just copied them from Stack Overflow.
I took the "perhaps" as a decision to be considered by the applicant, considering they'd be competent enough to get in at a place of their choice, not just anthropic.
Does the applicant or the employer decide if an interview happens in your experience?
Do you think that applicants really in that level of demand would be given a take-home test instead of being actively recruited?
Seriously, lay out your understanding of a world where an employer chasing after in-demand employees gives them a test expected to take hours, and then hedges its wording instead of saying "we will absolutely hire you if you clear bar X."
It seems like the biggest downside of this world is iteration speed.
If the AT-protocol Instagram wants to add a new feature (e.g. posts now support video!), can they easily update their "file format"? How do they update it in a way that is compatible with every other company that depends on the same format, without the underlying record becoming a mess?
Adding new features is usually not a problem because you can always add optional fields and extend open unions. So, you just change `media: Link | Picture | unknown` to `media: Link | Picture | Video | unknown`.
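The open-union pattern is easy to sketch in TypeScript. This is illustrative only (the type names are made up, not the real lexicon types), but it shows why adding `Video` doesn't break old clients:

```typescript
// Illustrative sketch only: these type names are invented, not the real
// app.bsky lexicon types. The point is the open-union pattern.
type Link = { $type: "link"; uri: string };
type Picture = { $type: "picture"; uri: string; alt: string };
type Video = { $type: "video"; uri: string }; // the newly added variant
// The trailing { $type: string } keeps the union open: variants this
// client doesn't know about still parse, they just aren't rendered.
type Media = Link | Picture | Video | { $type: string };

function render(media: Media): string {
  switch (media.$type) {
    case "link":
      return `link to ${(media as Link).uri}`;
    case "picture":
      return `picture: ${(media as Picture).alt}`;
    case "video":
      return `video at ${(media as Video).uri}`;
    default:
      // Clients compiled before a new variant existed land here.
      return "(unsupported attachment)";
  }
}
```

An old client that predates `Video` simply hits the `default` branch and degrades gracefully instead of crashing on the record.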
You can't remove things, true, so records do accumulate some deprecated fields.
Re: updating safely, the rule is that you can't change which records it would consider valid after it gets used in the wild. So you can't change whether some field is optional or required, you can only add new optional fields. The https://github.com/bluesky-social/goat tool has a linting command that instantly checks whether your changes pass the rules. In general it would be nice if lexicon tooling matures a bit, but I think with time it should get really good because there's explicit information the tooling can use.
If you have to make a breaking change, you can make a new Lexicon. It doesn't have to cause tech debt because you can make all your code deal with a new version, and convert it during ingestion.
That's true if you define the problem as "does my parser crash" and not whether the app is perceived as working correctly. If some platform adds support for video posts, then the next thing that happens is people start making posts that are only video. Meaning that in every other client, users see what appears to be an entirely empty post. Which will be considered a bug.
This is the core argument of Moxie's seminal essay, The Ecosystem Is Moving:
> One of the controversial things we did with Signal early on was to build it as an unfederated service. Nothing about any of the protocols we’ve developed requires centralization; it’s entirely possible to build a federated Signal Protocol-based messenger, but I no longer believe that it is possible to build a competitive federated messenger at all.
That was written in 2016 but it was true then and continues to be true today. Users reject federated open platforms because the coordination costs mean they don't move as fast as proprietary centralized platforms, and they often appear broken even if technically working as designed.
Nothing about that analysis is unique to social media. It is also true of file formats. OpenOffice never took off because new features got added to Office first, so files that used those features would open in semi-corrupted ways in OpenOffice. The fact that OO represented things internally using open unions didn't matter at all.
I disagree that Bluesky is in conflict with The Ecosystem Is Moving. In contrast to most decentralized/distributed protocol projects, they've managed to maintain control of almost all of their infrastructure, with the exception of the personal data servers (PDSes), of which they control 99.01%[1]
Almost all ATProto apps just fetch posts by handle => did:plc => post-type aka "lexicon", so they depend on what Bluesky decides to give them. If someone were to introduce unknowns into the flagship product's "lexicon" they could fix that at the API or Indexing level before shipping this data to the apps that depend on their API.
An actually decentralized network would have to overcome Moxie's criticism of the ecosystem. Can it be done? We'll keep trying.
Well, this doesn't prevent the "flagship" app from shipping things and doesn't slow it down. So it's at least not slowing down development which is the argument the parent post was making.
I've actually observed the exact opposite. Since Bluesky is open source, it's often visible when developers start working on a feature, and they often check in lexicon changes early on. As a result, there have been a few cases where a third-party client added support for these features earlier than the official one, since it already knew the shape of the data.
This wouldn't always work, of course. Yes, if you're developing an app or a client, you better keep up with the ecosystem. But the landscape is competitive and there is no cost to switching. So if something falls behind, you can use something else.
Moxie's argument is that even if something has a flagship app, this doesn't help because if you use a new feature and then your friend complains that they can't see what you posted, the experience is just that the flagship app itself is broken. People don't experience this as, oh well, my friend should just have picked a better client. They experience it as, that's annoying, the video feature doesn't work reliably on Y but it always does on X.
An extreme example of this is WhatsApp and emojis. WhatsApp doesn't use the operating system's text rendering APIs to draw emojis; instead, Meta licenses the Apple emoji font and draws the characters itself. That's because if you do emoji the standards-based, open way, you get these problems:
• People use visual puns that only make sense if emojis look a certain way, without realizing they might look very different to other people.
• People use new emoji, without realizing that old operating systems can't draw them.
The experience in both cases is that it's simply broken and buggy. Version skew can kill platforms, which is why the most successful platforms today restrict which clients can connect and forcibly expire them on a regular basis.
BTW I don't think it's worth generalizing from Bluesky. Bluesky is an X clone whose unique competitive advantage is censoring conservatives even more aggressively than Twitter itself once did. It has no technical edge so they can develop the site as open source without it being a concern to them - they don't care if they leak what they're doing because technical innovation isn't a part of their brand to begin with - and the AT protocol clearly doesn't matter to more than a tiny fraction of its users. The point you're making in the essay is general, but ends up feeling overfit to Bluesky.
There is always one party "in control" of the lexicon and its canonical version.
I think it's important to distinguish this from the "every client adds their own features" thing. Technically yes, each app can add its own things to the open union that it supports better. But it's also on each implementer to consider how this would affect UX in other clients (e.g. if you add your own embed type, it seems reasonable to also prepopulate a link embed that acts as a fallback). The problems you're describing are real, but I think we should give a bit more credit to the app builders, since they're also aware that this is a part of their user experience design.
But still, whoever "owns" the lexicon says what's canonical. Then yes, some other software might not catch up to what's canonical, but that's similar to what happens with any platform that supports multiple clients today. Unless your outlook is that alternative clients in general are not competitive for this reason. I think that's a grim outlook, and if it were true, services wouldn't go to such lengths to intentionally shut down their APIs, which has been the trend with every network.
I think in the longer term the bet is that the benefits unlocked by interop and a more competitive product landscape will become clearer to end users, who will be less interested in joining closed platforms and will develop some intuitions around that. This won't happen soon, so until then, the bet is that interop will allow creating better products. And if that doesn't happen, yes, it's pretty hard for open to compete.
Well, I never personally formed a strong opinion on Moxie's take, although I do understand it. Basically yes his outlook is that any service that doesn't actively ban alternative clients will be outcompeted by those that do.
The reason is that if alt clients are possible then some fraction of the userbase will adopt them. And if some users adopt them that means the experience of other users of the service gets worse, because new features become unreliable and flaky. You think you understand what another person sees and can do, but you don't, and this leads to poor experiences.
Viewed another way the client is an integrated part of the platform and not something that you can enable users to change freely, any more than they could be allowed to change the servers freely. We don't allow the latter because users would do things that broke the service, and so it is also for the former.
Empirically Moxie seems to be correct. Of the 90s-era open protocols, the only ones that survive are the web and email. The web survives but it's never been truly open in an extension sense - the definition of HTML has always been whatever the dominant browser of the era accepts, and this has become more so over time, not less. There are no more plugin APIs, for instance. SMTP survives, barely, because it's the foundation of internet identity. But many people even in corporate contexts now never send an email. It's all migrated to Slack or Teams. And if you look carefully at the Slack API, it's not possible to make a truly complete alternative client with it; the API is only intended for writing bots.
This is grim but I'm not sure it's false and I'm not sure it can be changed. Also, Moxie's essay ends on a positive note. He observes that competition between mobile social networks does still work well despite the lack of federation, because they coalesced around using the user's phone number as identity and address book as the friends list, so you can in fact port your social network to a different network just by signing up. The notification center in the OS provides the final piece of the puzzle, acting as a unified inbox that abstracts the underlying social network.
This is rather mobile specific but seems basically correct to me. So that suggests the key pillar isn't file formats or protocols but ownable identity. It works because telcos do the hard work of issuing portable identities and helping people keep them, and ownership can be swiftly verified over the internet.
Maybe I’m optimistic for no good reason. I feel there’s something slightly different at play here but I struggle to put my finger on it.
Overall I agree with the take that a client for somebody else’s product is usually a degraded experience. You want to deploy features vertically, and even the existence of worse clients is a threat to that. I think one exception is when the main client is intentionally worse for the sake of other goals. E.g. lots of ads. Or just the company being (or becoming) incompetent at making a client. Then alternative clients inject some new life and competition into the product. But if the product owner is good at making clients, they probably don’t benefit from fragmentation.
And still it feels somehow different to me. I think a part of it is that the space itself is vastly expanded in AT. We’re not just talking about a vertical app with a few offshoot clients that do the same thing but poorly. I mean, that still exists, but the proposition is that things can reach way beyond the original product. Products can blend. When products blend, there’s always this inherent coordination problem. Those who make multiple vertical products have to deal with similar things. At some point you have to stagger rollouts, support features partially or in a limited way, or show a simple version and direct to main app for more. Think Threads “bleeding through” Instagram, different kinds of cross-posting that they experiment with between apps. This is valuable but normally only monopolies can build meaningful cross-product bleeding because otherwise you have to negotiate API agreements and it’s just too risky to build on someone else’s turf.
What AT changes is that every product can cross-bleed into every other product. My teal.fm plays show up on my Blento page. So yes, you do have this fragmentation, but you also get something in return. It’s not fragmentation between the main way to show a thing and a couple offshoots, but between a web of products. And there is always a canonical schema so it doesn’t descend into chaos. I think there’s a “there” there.
I actually agree the key pillar is identity. It’s why AT URIs start with identity first. I think one way to look at AT is that the “contact list” analogy is right, but now that we’ve chosen the identity layer, we might as well put all our data into the contact list.
Riffing off the identity thing, one service I wanted for a while is something that issues X.509 certificates based on verified phone numbers. Phone numbers are a pretty great identity, perhaps the most successful private sector identity system ever, but they're expensive and annoying to verify, and the verification isn't portable across systems. A CA that did SMS verification and then gave you a certificate you could use with S/MIME or bind to passkeys or just use to sign software/documents in general, would democratize stable cryptographic identity. People generally can't handle key management directly, it's too easy to lose keys, but issuing transient keys tied to a phone number is much more palatable.
And phone numbers have the good features you want in general:
• Can have >1 of them if you want.
• Anonymous if you want.
• Not tied to any specific provider due to number portability laws.
• Hard to lose; phone companies will accept govt issued ID to get your account back if you lose your SIM and it's tied to a contract.
• Verifiable over the internet.
The only problem with them is they don't yield asymmetric keypairs.
The difficulty is the business model. The people who want to consume such certificates are people who don't want to pay to verify numbers directly, but users don't want to pay either. So who pays?
Most apps reading records will validate a record against the schema for that type. E.g. there's nothing stopping you from making an app.bsky.feed.post record with more than 300 graphemes in the "text" field, but that post won't appear in the "official" app/website because it fails schema validation.
Similarly, there's nothing stopping you from adding another field in your post. It'll just get ignored because the app you're using doesn't know about it. e.g. posts bridged from mastodon by bridgy have an extra field containing the full original post's text, which you can display in your app if desired. reddwarf.app does this with these posts.
Lexicon validation works the same way. The com.tumblr in com.tumblr.post signals who designed the lexicon, but the records themselves could have been created by any app at all. This is why apps always treat records as untrusted input, similar to POST request bodies. When you generate type definitions from a lexicon, you also get a function that will do the validation for you. If some record passes the check, great—you get a typed object. If not, fine, ignore that record.
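To make the "untrusted input" point concrete, here's a hand-rolled sketch of what a generated validator does. The `Post` shape and the 300-grapheme limit mirror the comments above; the function itself is hypothetical, a stand-in for the one lexicon tooling would generate:

```typescript
type Post = { text: string; createdAt: string };

// Stand-in for the validator a lexicon toolchain would generate.
// Returns a typed object on success, or null so the caller can
// simply skip records that don't match the schema.
function validatePost(record: unknown): Post | null {
  if (typeof record !== "object" || record === null) return null;
  const r = record as Record<string, unknown>;
  if (typeof r.text !== "string" || typeof r.createdAt !== "string") return null;
  // Rough stand-in for the 300-grapheme limit (this counts code points).
  if ([...r.text].length > 300) return null;
  // Fields the schema doesn't know about are not copied into the typed view.
  return { text: r.text, createdAt: r.createdAt };
}
```

If a record fails, you ignore it; if it passes, you get a typed object, and extra fields other apps stuffed into the record stay available on the raw input for clients that want them.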
I enjoyed hearing Claude Code creator Boris Cherny talk about "latent demand"[0], which is when users start using your product for something it was not intended for. When that happens, it's a great signal that you should go build that into a full product.
Cowork seems like a great application of that principle.
People like tailwind because it feels like the correct abstraction. It helps you colocate layout and styling, thereby reducing cognitive load.
With CSS you have to add meaningless class names to your html (+remember them), learn complicated (+fragile) selectors, and memorise low level CSS styles.
With tailwind you just specify the styling you want. And if using React, the “cascading” piece is already taken care of.
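A rough before/after sketch of that colocation point (the `signup-button` class name is invented for illustration; the utility names are standard Tailwind):

```html
<!-- Traditional CSS: invent a name here, define the styles in some other file -->
<button class="signup-button">Sign up</button>

<!-- Tailwind: the same styling expressed at the point of use -->
<button class="px-4 py-2 rounded bg-blue-600 text-white">Sign up</button>
```

In the second version the padding, radius, and colors are visible where the element is written, which is the reduced cognitive load being described.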
The point of CSS is specifically to separate styling and semantics, so that they are not tightly coupled.
If you were writing a blog post you would want to be able to change the theme without going through every blog post you ever wrote, no?
If I'm writing a React component, I don't want it tightly coupled to its cosmetic appearance for the same reason. Styling is imposed on elements; intrinsic styles are bad and work against reusability. That's why we all use resets, is it not?
I do agree that the class name system doesn't scale but the solution is not to double down on coupling, but rather to double down on abstraction and find better ways to identify and select elements.
Content should come from your database, Markdown, JSON, models etc.
Presentation is determined by the HTML and CSS together.
So your content and presentation is already separate enough to get the benefits. Breaking up the presentation layer further with premature abstractions spread over multiple files comes at a cost for little payback. I'm sure everyone has worked on sites where you're scared to make CSS file edits because the unpredictable ripple of changes might break unrelated pages.
Styling code near your semantic HTML tags doesn't get in the way, and they're highly related too so you want to iterate and review on them together.
I've never seen a complex website redesign that didn't involve almost gutting the HTML either. CSS isn't powerful enough alone and it's not worth the cost of jumping through hoops trying because it's rare sites need theme switchers. Even blog post themes for the same platform come with their own HTML instead of being CSS-only.
> If you were writing a blog post you would want to be able to change the theme without going through every blog post you ever wrote, no?
Tailwind sites often have a `prose` class specifically for styling post content in the traditional CSS way (especially if you're not in control of how the HTML was generated) and this is some of the simplest styling work. For complex UIs and branded elements though, the utility class approach scales much better.
> I'm sure everyone has worked on sites where you're scared to make CSS file edits because the unpredictable ripple of changes might break unrelated pages.
CSS gives you multiple tools to solve this problem, if you don't use any of them then it's not really CSS's fault.
> Styling code near your semantic HTML tags doesn't get in the way
It does. When I'm working on functionality I don't want to see styles and vice versa. It adds a layer of noise that is not relevant.
If I'm making e.g. a search dropdown, I don't need to see any information about its cosmetic appearance. I do want to see information about how it functions.
Especially the other way around: if I'm styling the search dropdown I don't want to have to track down every JSX element in every sub-component. That's super tedious. All I need to know when I'm styling is the overall structure of the final element tree not of the vdom tree which could be considerably more complex.
> I've never seen a complex website redesign that didn't involve almost gutting the HTML either
Perhaps for a landing page. For a content-based website or web app you often want to adjust the design without touching your components.
> I've never seen a complex website redesign that didn't involve almost gutting the HTML either. CSS isn't powerful enough alone
I recognize your experience. But I would also like to argue that good semantic CSS class names require active development effort. If you inherit a code base where no one has done the work of properly assigning semantic CSS names to tags, then you can't update the external stylesheet without touching the HTML.
https://csszengarden.com/ shows how a clean separation between HTML and CSS can be achieved. This is obviously a simple web site and there is not much cruft that accumulated over the years. But the principles behind it are scalable when people take the separation of content and representation seriously.
I'll add to my sibling commenters and say that there is a long history of critiquing the value of separation of concerns. One of my favorite early talks that sold me on React was "Pete Hunt: React: Rethinking best practices -- JSConf EU" from Oct 2013 [1] that critiqued the separation of concerns of HTML templates + JS popular in the 2000s and early 2010s and instead advocated for componentization as higher return on investment. I think people already saw styling separation of concerns as not particularly valuable at that point as well, just it wasn't clear what component-friendly styling abstraction was going to win.
I do want styles tightly coupled to my React components. The product I work on has tens of thousands of React components.
I don't want to have to update some random CSS file to change one component's appearance. I've had to do this before, and every time it's a huge pain not to affect dozens of random other components. Other engineers encounter the same challenge and write poor CSS to deal with it. This compounds over time and becomes a huge mess.
Having a robust design system that enables the composition of complicated UIs without the need for much customization is the way.
Front end development got taken over by the Enterprise Java camp at some point, so now there is no html and css. There’s 10,000 components, and thus nothing that can be styled in a cascading way.
All these arguments are just disconnects between that camp and the oldskool that still writes at least some html by hand.
When I get sucked into react land for a gig, it starts making sense to just tell this particular div tag to have 2px of padding because the piece of code I’m typing is the only thing that’s ever going to emit it.
Then I go back to my own stuff and lean on css to style my handful of reusable pieces.
You’re kinda late to the party. 15 years ago that was the way to build UIs, but componentization changed that. Now we reason about UIs as blocks, not as pages, so colocation of logic, markup, and style makes the most sense.
Not to say that every component should be unique, generic components can be built in an extensible way, and users can extend those components while applying unique styling.
Theming is also a solved issue through contexts.
Reducing coupling was never a good idea. Markup and styling are intrinsically linked: any change to the markup will most likely require changes to the styling, and vice versa. Instead of pretending we can separate the two, modern UI tools embrace the coupling and make building as efficient as possible.
In the webdev world being late is the same as being early. Just wait for the pendulum to swing back.
Tailwind is like GenZ has discovered the bgcolor="" attribute.
> Markup and styling are intrinsically linked: any change to the markup will most likely require changes to the styling, and vice versa.
No, not vice versa. It's only in one direction. Changing the component requires changing styles, but changing styles doesn't require changing the component if it's merely cosmetic. If I have a button and I want to make it red the button doesn't have to know what color it is.
There’s nothing “gen z” about Tailwind, and there’s no pendulum effect either, and dismissing the very real benefit thousands of people report from Tailwind based on that is very small minded.
That kind of lack of intellectual curiosity is not a great trait for an engineer.
You're talking about separation of concerns (SOC), as opposed to locality of behavior (LOB).
This is the insight that Tailwind and others like HTMX made clear: Separation of concerns is not a universal virtue. It comes with a cognitive cost. Most notably when you have a growing inheritance hierarchy, and you either need 12 files open or tooling that helps you understand which of the 482 classes are in play for the specific case you’re troubleshooting. Vanilla CSS can be like that, especially when it’s not one’s primary skillset. With Tailwind you say ”this button needs to be blue”, and consolidate stuff into CSS later once the right patterns of abstraction become clear. Tailwind makes exploratory building way faster when we’re not CSS experts.
SOC is usually a virtue when teams are split (frontend/backend, etc), but LOB is a virtue when teams are small, full stack, or working on monoliths (this is basically Conway’s law: the shape of the codebase mirrors the shape of the team).
I think the problem is simply that CSS is too restricted for you to style a fixed piece of HTML in any way you want. In practice, achieving some desired layout requires changing the HTML structure. The missing layer would be something that can change the structure of the HTML, like JS or XSLT. In modern frontend development you already have data defined in some JSON, and HTML + CSS combined together form the presentation layer that can't really be separated.
People who have tried both throughout their careers are generally sticking with Tailwind. I didn’t get it at first either, but after using it extensively I would never go back to the old way.
> The point of CSS is specifically to separate styling and semantics, so that they are not tightly coupled.
That was the original point, and it turned out that nobody cares about that 99% of the time. It's premature optimization and it violates "YAGNI". And in addition to not being something most people need, it's just a pain to set and remember and organize class names and organize files.
Remember CSS Zen Garden from the early 2000s? How many sites actually do anything like that? Almost none.
And the beauty of Tailwind is, when you actually do need themes, that's the only stuff you have to name and organize in separate CSS files. Instead of having to do that with literally all of your CSS.
Not only does no one care, but it's not even true. There are effects you simply cannot achieve without including additional elements. So separation of styling and semantics is dead on arrival.
Those are the same selling points as CSS-in-JS libs like Styled Components. Or CSS Modules.
Except your last point about "low-level CSS styles" which I'd argue is a weak point. You really should learn the underlying CSS to gain mastery of it.
Not arguing for one thing over another, just saying Tailwind really never had anything to offer me personally, but maybe if I wasn't already proficient in CSS and the other 2 options didn't exist it might hold some appeal for me.
It’s more about cognitive load, and abstraction level. If you’re trying to make an object spin, it’s much easier to use the tailwind class than it is to remember css keyframes.
Sure, when debugging a complex issue, it’s worth knowing the low-level, but CSS is not a great abstraction for day-to-day work.
You’re right that it’s not much more than a css in js library, but I’ve found myself pleasantly surprised at how efficient I am using it, despite also having years of css experience.
Things like remembering what the flex syntax is, or coming up with a padding system or a colour scheme become very very easy.
I think the editor tooling for tailwind is where most of the benefit comes from.
I also prefer the syntax over direct CSS-in-JS systems. It’s fewer characters, which makes it easier to parse.
Agentic AI companies are doing millions in revenue. Just because agents haven’t spread to the entire economy yet doesn’t mean they are not useful for relatively complex tasks.
And just because people are throwing money at an AI company doesn't mean they have, or will ever have, a marketable product.
The #1 product of nearly every AI company is hope: hope that one day they will replace the need to pay real employees. Hope like that allows a company to cut costs and fund dividends ... in the short term. The long term is some other person's problem. (I'll change my mind the day Bill Gates trusts MS Copilot with his personal banking details.)
When did Hacker News become laggard-adopter/consumer news?
Cal is a consumer of AI - an interesting article for this community, but not of this community. I thought Hacker News was for builders and innovators - people who see the potential of a technology for solving problems big and small, go and tinker and build and explore with it, and sometimes eventually change the world (hopefully for the better). Instead of sitting on the sidelines, grumbling that some particular tech hasn't yet changed the world or met some particular hype.
Incredibly naive to think AI isn’t making real difference already (even without/before replacing labor en masse.)
Actually try to explore the impact a bit. It’s not AGI, but doesn’t have to be to transform. It’s everywhere and will do nothing but accelerate. Even better, be part of proving Cal wrong for 2026.
A Philosophy of Software Design by John Ousterhout is the best software book I have read.
But, from your post it’s not clear specifically what you are looking for. If you think you will level up by learning how to apply numerical modelling techniques, then it’s probably best to focus on that.
I'd bet a lot of people are trying to optimize their codebases for LLMs. I'd be interested to see some examples of your ASI-unlocking codebase in action!
If you're interested, I added my email to my profile 'about' section. I could try to screen record my next feature development on one of my side projects.
I'm also kind of interested to see how others use LLMs for coding. I can speak for myself having worked on both good and bad code-bases; my experience is that it works MUCH better on those 'good' codebases (by my definition).
https://nautil.us/the-strange-brain-of-the-worlds-greatest-s...