Well, the GP's criteria are quite good. But what you should actually do depends on a lot more than what you wrote in your comment. It could be irrelevant enough to deserve only a trace log, or important enough to warrant a warning.
Also, you should have event logs you can consult when making administrative decisions. That information surely belongs there; you'll want it when deciding whether to switch to another provider or renegotiate something.
Don't use continuous math in either a design system or a constraint solver that you expect random developers to use. Either way it will only lead to problems.
The one problem with using your distro's Postgres is that your upgrade schedule will be dictated by a third party.

And Postgres major-version upgrades are not transparent. So you'll have a one-to-two-hour task every 6 to 18 months, with only limited control over when it happens. That's fine for a lot of people, and completely unthinkable for others.
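For illustration, here's a rough sketch of what that periodic task looks like on Debian/Ubuntu packaging, which wraps `pg_upgrade` in cluster tooling. The version numbers (15 → 16) and the cluster name `main` are assumptions for the example, not something from the comment above:

```shell
# Install the new major version alongside the old one
# (Debian-style packaging keeps each major version in its own cluster).
sudo apt-get install postgresql-16

# The package creates an empty 16/main cluster; drop it so the
# upgraded cluster can take over its place and port.
sudo pg_dropcluster --stop 16 main

# Migrate the old cluster's data and configuration to version 16.
# This is the non-transparent step: the database is down while it runs.
sudo pg_upgradecluster 15 main

# Only after verifying the new cluster works, remove the old one.
sudo pg_dropcluster 15 main
```

The downtime window is dominated by `pg_upgradecluster`, which is exactly the part whose timing the distro's release cadence decides for you.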
IMO, the reason to self-host your database is latency.
Yes, I'd say backups and analysis are table stakes for buying the hosted service, and multi-datacenter failover is a relevant nice-to-have. But the reason to do it yourself is that it's literally impossible to get latency as good on somebody else's computer as what you can build yourself.
> but it will certainly feel like it if labor demand is reduced enough
All the previous productivity multipliers in programming led to increased demand. Do you really think the market is saturated now? And that what saturated it is one of the least impactful "revolutionary" tools we've gotten in our profession?
Keep in mind that looking at statistics won't lead to any real answer; everything is manipulated beyond recognition right now.
Demand for software has historically been tied to demand for software engineering labor. That is no longer true. So demand for software may still go up while demand for labor moves in the other direction.
I also believe most tech companies are pursuing a cost/labor-reduction strategy for a reason, and I think that's because we're closing out a period of innovation. Keeping the lights on, or protecting their moats, requires less labor.
Each of the last productivity multipliers coincided with greatly expanded markets (e.g. the PC revolution, the internet, mobile). Those are at the saturation point, and we've effectively built all the software those things need now. Of course there is still room for innovation in software, but it is not like the past, where we also had to build all the low-hanging fruit at the same time. That doesn't require nearly as many people, and it was already starting to become apparent before anyone knew what an LLM was.
This AI craze swooped in at the right time to help hold up the industry and is the only thing keeping it together right now. We're quickly building all the low-hanging fruit for it, keeping many developers busy (although not like it used to be), but there isn't much low-hanging fruit to build. LLMs don't have the breadth of need that previous computing revolutions had. Once we've added chat interfaces to everything, which is far from a Herculean task, all the low-hanging fruit will be gone. That's quite unlike previous revolutions, where we had to build effectively all the software from scratch, not just slap some lipstick on existing software.
If we want to relive the past, we need a new hardware paradigm that requires all the software to be rewritten for it again. Not an impossible thought, but all the low-hanging hardware directions have also been picked over at this point, so the likelihood of that isn't what it used to be either.
> Each of the last productivity multipliers coincided with greatly expanded markets
They didn't. But it may be relevant that each of them spread slowly enough that we can't clearly separate them.
Anyway, the idea that any one of those large markets is at a saturation point requires some data. AFAIK, everything from mainframe software to phones has (relatively) exploded in popularity every time somebody made it cheaper, so you're claiming that all of those markets just changed (too recently to measure), with no large event to correlate them with.
> That's quite unlike previous revolutions where we had to build all the software from scratch
We have rewritten everything from scratch exactly once since high-level languages were created in the 70s.
Or we can accept it, make a good access control system in an app platform for once, and add the few parts the web standards are still missing so it becomes a good platform.
And none of that requires giving up on an interface focused on reading text.
But if Mozilla focuses on resisting, it can't do that, and honestly, nobody else out there will.