>You need to follow up with a set of strict requirements to set expectations, guard rails, and what the final product should do (and not do).
That's usually the very hard part, and it was possible to spend a few days on something like that in the real world even before LLMs. But with LLMs it's worse, because having those requirements isn't enough: some of them won't work for random reasons, and there are no 'rules' that can guarantee results. It's always 'try that' and 'probably this will work'.
Just recently I struggled with the same prompt producing different results between API calls, before I realized that just a few extra '\"' characters and a few spaces in the prompt led the model down a completely different route of logic, which produced opposite answers.
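For the curious, here's a toy illustration (hypothetical prompts, not my actual ones) of the kind of difference I mean: changes a normalizer would call identical, but which still reach the model as different raw input.

```python
import re

# Hypothetical prompts, differing only in a couple of '"' characters and spaces.
prompt_a = 'Summarize the review: the service was fine'
prompt_b = 'Summarize the review: "the service  was fine"'

# Collapse quotes and runs of whitespace: after normalization the prompts are
# identical in content, yet as raw API inputs they are different strings
# (and therefore different token sequences).
normalize = lambda s: re.sub(r'[\s"]+', ' ', s).strip()

print(prompt_a == prompt_b)                        # False: different bytes
print(normalize(prompt_a) == normalize(prompt_b))  # True: same content
```

The point being: nothing in the 'requirements' layer distinguishes these two, but the model can still take a different route on each.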
It looks like some kind of marketing push or 'growth hack', just to get something viral going that gives you an extra reason to pay for a Claude or Tailscale subscription.
I'm personally not even convinced that Claude Code is any better on average than something like Aider + Gemini 3 or another good model. Maybe in some specific cases it actually is better, but in those cases Aider + an Anthropic model via API will most likely work too.
It's one of the issues of the modern web: it's optimized for quick, efficient reading, but a website like this is optimized for a slow, thoughtful experience. Like some books about art history, where you're not just trying to extract the meaning of the words but trying to imagine how that time felt, and to look at things from a different perspective or with different values.
From what I've heard, even if a person is an EU citizen and does not have dual citizenship but was born in some ex-USSR/CIS country (for example, if they migrated with their parents at the age of 1), they will always be considered a higher-risk client by EU banks and will always be under some suspicion. So if that's true, it's not possible to fully stop being associated with it.
I switch a lot between Sonnet and Gemini in Aider - for some reason, some of my coding problems only one of the models is capable of solving, and I don't see any pattern that could tell me upfront which one I should use for a specific need.
I'm not sure about bots, but it looks like they have real people on the payroll, or people who are paid per comment or something like that. And they're trying to push the narrative 'use it now or you will be left behind' in every place where someone could share their experience of using AI tools.