Rather ironic when you consider that Michelangelo, along with many of his contemporaries, is precisely where the term 'Renaissance man' comes from.
This sounds like one of those many ideas that sound great on paper but in reality just create even greater stratification in society. I think you're completely correct that in many places, particularly higher-end ones, people would come together to keep the place looking great, possibly even better, since you get to 'own' it in some ways.
But on the other hand in many 'urban' neighborhoods, there's far less motivation to take care of things - and once you remove the external actors going in there to do what little they already do, these places would fall into an even more pitiful state very rapidly. But I also think we're looking at things superficially. There's a lot of technical work that can't be casually done, like plumbing or electrical, that is currently moderately compensated. In a UBI world, costs for this would likely skyrocket, which would lead to an even higher UBI, which would lead to even higher costs, which would lead to Zimbabwe.
Pessimism aside I would probably actually support it, simply because I think it would be the ultimate expression of liberty - but you have to realize that you're not going to create anything like the same society we have, but with everybody being able to independently support themselves. You're going to completely destroy the contemporary economy and create a new entity that would probably be much closer to something of times long since past when the overwhelming majority of America was self employed. 'The Expanse' offers a realistic take on what UBI would probably entail.
> But on the other hand in many 'urban' neighborhoods, there's far less motivation to take care of things - and once you remove the external actors going in there to do what little they already do, these places would fall into an even more pitiful state very rapidly.
You're letting your prejudice get in the way of making a rational argument. There is no difference between what you choose to call "urban" and any other place, be it rural, suburban or urban. You don't see people taking care of their surroundings because you only get to see a snapshot of its current state, not what others have done in the recent and not so distant past.
Of course OP is being silly in believing UBI will put everyone working on urban waste management out of a job. It's like believing that if one service offers a free tier, all other services will suddenly vanish. But presuming people don't care about their surroundings because they live in an 'urban' neighborhood reflects a problem that's about prejudice, not UBI.
This is rather a tangent, but I spent years living in these areas. Have you ever wondered why so many people who grew up in these sorts of places seem to have so much less 'empathy' for them than those who grew up, e.g., upper middle class? You are probably seeing things through a foreign perspective where you assume everybody is, more or less, like you, and so these awful differences must be caused by reparable externalities. You probably imagine that if you were granted infinite power, you could create a utopia.
But what you learn living in these areas for years is that no - not everybody is like you, or even remotely like it. There are a significant number of people who are simply broken and beyond repair. It reminds me of this video [1] which is from a minister of the UAE speaking on a perfect analog. The one thing I'd certainly agree with you about is that prejudice is bad, but the direction of one's prejudice, good or bad, matters not. We should always form our opinions based on reality, and not ideals.
> You don't see people taking care of their surroundings because you only get to see a snapshot of its current state, not what others have done in the recent and not so distant past.
I think that is what observation actually is: you get to see what others have done in the recent and not so distant past. Or am I missing your point?
I wonder if YouTube is next; not just AI music, but videos. My recommendations are getting completely hijacked by AI generated garbage filled with comments complaining about the exact same thing. Ironically their algorithm is probably currently promoting that as 'engagement.' I see no way that this isn't greatly diminishing the overall 'value' of YouTube. At the minimum they're going to need to start downranking AI generated stuff hard.
I look at TikTok for a few minutes perhaps every six months or so. I did that a few weeks ago and it seemed like half of what it was showing me was AI-generated ring doorbell/dashcam/etc. type stuff.
I have nothing against AI - my computer is doing text-to-image training most nights. However, the kind of videos it was showing me are only entertaining if they’re real. I don’t care to see a fake dog scare away a fake bear from a kid, and I doubt others do either.
However, most people probably can’t tell and think it’s real. That’s only going to get worse. I don’t know where that leaves us.
I had pretty much the exact same experience as the peer comment in this thread. They were videos I was interested in, or at least got clickbaited into, but the fact that they were AI generated destroyed their value. At scale I think the most likely outcome of this is not that people embrace AI for this sort of stuff, but rather that it also destroys the value of genuine content by making people doubtful of the authenticity of anything that seems improbable.
The phone in your pocket can perform arithmetic many orders of magnitude faster than any human, even the fringe autistic savant type. Yet it's still obviously not intelligent.
Excellence at any given task is not indicative of intelligence. I think we set these sort of false goalposts because we want something that sounds achievable but is just out of reach at one moment in time. For instance at one time it was believed that a computer playing chess at the level of a human would be proof of intelligence. Of course it sounds naive now, but it was genuinely believed. It ultimately not being so is not us moving the goalposts, so much as us setting artificially low goalposts to begin with.
So for instance what we're speaking of here is logical processing across natural language, yet human intelligence predates natural language. It poses a bit of a logical problem to then define intelligence as the logical processing of natural language.
The problem is that so far, SOTA generalist models are not excellent at just one particular task. They have a very wide range of tasks they are good at, and a good score on one particular benchmark correlates very strongly with good scores on almost all other benchmarks, even esoteric benchmarks that AI labs certainly didn't train against.
I'm sure, without any uncertainty, that any generalist model able to do what Einstein did would be AGI, as in, that model would be able to perform any cognitive task that an intelligent human being could complete in a reasonable amount of time (here "reasonable" depends on the task at hand; it could be minutes, hours, days, years, etc).
I see things rather differently. Here's a few points in no particular order:
(1) - A major part of the challenge is in not being directed towards something. There was no external guidance for Einstein - he wasn't even a formal researcher at the time of his breakthroughs. An LLM might be able to be handheld towards relativity, though I doubt it, but given the prompt of 'hey find something revolutionary' it's obviously never going to respond with anything relevant, even with substantially greater precision specifying field/subtopic/etc.
(2) - Logical processing of natural language remains one small aspect of intelligence. For example - humanity invented natural language from nothing. The concept of an LLM doing this is a nonstarter since they're dependent upon token prediction, yet we're speaking of starting with 0 tokens.
(3) - LLMs are, in many ways, very much like calculators. They can indeed achieve some quite impressive feats in specific domains, yet then they will completely hallucinate nonsense on relatively trivial queries, particularly on topics where there isn't extensive data to drive their token prediction. I don't entirely understand your extreme optimism towards LLMs given this proclivity for hallucination. Their ability to produce compelling nonsense makes them particularly tedious to use for anything you don't already effectively know the answer to.
> I don't entirely understand your extreme optimism towards LLMs given this proclivity for hallucination
Simply because I don't see hallucinations as a permanent problem. I see that models keep improving more and more in this regard, and I don't see why the hallucination rate can't be arbitrarily reduced with further improvements to the architecture. When I ask Claude about obscure topics, it correctly replies "I don't know", where past models would have hallucinated an answer. When I use GPT 5.2-thinking for my ML research job, I pretty much never encounter hallucinations.
Hahah, well you working in the field probably explains your optimism more than your words! If you pretty much never encounter hallucinations with GPT then you're probably dealing with it on topics where there's less of a right or wrong answer. I encounter them literally every single time I start trying to work out a technical problem with it.
His concept sounds odd. There will always be many hints of something yet to be discovered, simply by the nature of anything worth discovering having an influence on other things.
For instance, spectroscopy enables one to look at the spectra emitted by another 'thing', perhaps the sun, and it turns out that there are little streaks within the spectra that correspond directly to various elements. This is how we're able to determine the elemental composition of things like the sun.
That connection between elements and the patterns in their spectra was discovered in the early 1800s. Those patterns are caused by quantum mechanical interactions, so this was perhaps one of the first big hints of quantum mechanics, yet it'd still be a century before we got to relativity, let alone quantum mechanics.
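To make the 'little streaks' concrete, here's a quick sketch (my own illustration, not from the thread): the visible hydrogen lines catalogued in the 1800s obey the empirical Balmer formula, 1/λ = R(1/2² − 1/n²), whose quantum-mechanical origin wasn't understood for roughly a century.

```python
R = 1.0968e7  # Rydberg constant for hydrogen, in 1/m

def balmer_nm(n):
    """Wavelength (nm) of the visible hydrogen line with upper level n >= 3."""
    inv_wavelength = R * (1 / 4 - 1 / n ** 2)
    return 1e9 / inv_wavelength

print(balmer_nm(3))  # roughly 656 nm: the red H-alpha streak
print(balmer_nm(4))  # roughly 486 nm: the blue-green H-beta streak
```

Those computed wavelengths match the streaks seen in the solar spectrum, which is exactly how the sun's hydrogen content was identified long before anyone knew why the formula worked.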
It's only easy to see precursors in hindsight. The Michelson-Morley tale is a great example of this. In hindsight, their experiment was screaming relativity, because it demonstrated that the speed of light was identical from two perspectives, a result that's very difficult to explain without relativity. Lorentz contraction was just a completely ad-hoc proposal to maintain the assumptions of the time (the luminiferous aether in particular) while also explaining the result. But in general it was not seen as that big of a deal.
There's a very similar parallel with dark matter in modern times. We certainly have endless hints to the truth that will be evident in hindsight, but for now? We are mostly convinced that we know the truth, perform experiments to prove that, find nothing, shrug, adjust the model to be even more esoteric, and repeat onto the next one. And maybe one will eventually show something, or maybe we're on the wrong path altogether. This quote, from Michelson in 1894 (more than a decade before Einstein would come along), is extremely telling of the opinion at the time:
"While it is never safe to affirm that the future of Physical Science has no marvels in store even more astonishing than those of the past, it seems probable that most of the grand underlying principles have been firmly established and that further advances are to be sought chiefly in the rigorous application of these principles to all the phenomena which come under our notice. It is here that the science of measurement shows its importance — where quantitative work is more to be desired than qualitative work. An eminent physicist remarked that the future truths of physical science are to be looked for in the sixth place of decimals." - Michelson 1894
With the passage of time more and more things have been discovered through precision. Through identifying small errors in some measurement and pursuing that to find the cause.
It's not precision that's the problem, but understanding when something has been falsified. For instance the Lorentz transformations work as a perfectly fine ad-hoc solution to Michelson's discovery. All it did was make the aether a bit more esoteric in nature. Why do you then not simply shrug, accept it, and move on? Perhaps even toss some accolades towards Lorentz for 'solving' the puzzle? Michelson himself certainly felt there was no particularly relevant mystery outstanding.
For another parallel, our understanding of the big bang was, and probably is, wrong. There are a lot of problems with the traditional view of the big bang, with the horizon problem [1] being just one among many - areas in space that should not have had time to interact behave like they have. So this was 'solved' by an ad hoc solution - just make the expansion of the universe briefly proceed faster than light at a specific moment, slow down, then start speeding up again (cosmic inflation [2]) - and it all works just fine. So you know what we did? Shrugged, accepted it, and even gave Guth et al a bunch of accolades for 'solving' the puzzle.
This is the problem - arguably the most important principle of science is falsifiability. But when is something falsified? Because in many situations, probably the overwhelming majority, you can instead just use one falsification to create a new hypothesis with that nuance integrated into it. And as science moves beyond singular formulas derived from clear principles or laws and onto broad encompassing models based on correlations from limited observations, this becomes more and more true.
Tests and proofs can only detect issues that you design them to detect. LLMs and other people are remarkably effective at finding all sorts of new bugs you never even thought to test against. Proofs are particularly fragile as they tend to rely on pre/post conditions with clean deterministic processing, but the whole concept just breaks down in practice pretty quickly when you start expanding what's going on in between those, and then there's multithreading...
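A tiny illustration of that point (a hypothetical function and tests of my own, not from the thread): every test we designed passes, yet the function is still wrong for an input class nobody thought to cover.

```python
def median(xs):
    """Intended to return the median of a non-empty list."""
    s = sorted(xs)
    return s[len(s) // 2]  # bug: wrong for even-length lists

# The tests we happened to design -- all odd-length inputs -- pass:
assert median([3, 1, 2]) == 2
assert median([5]) == 5
assert median([9, 7, 8, 6, 5]) == 7

# An input class we never designed a test for exposes the bug:
print(median([1, 2, 3, 4]))  # 3, but the conventional median is 2.5
```

The test suite is green, and you could even 'prove' the postcondition 'returns an element of the list' - the spec itself just never anticipated even-length inputs. That's the sense in which tests and proofs only catch what you design them to catch.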
I generally agree with you on market discussions, but I don't think you're considering this one correctly. Imagine a country responsible for just 10% of global oil production decided to stop producing. What's going to happen to oil prices, assuming no other country starts producing more?
They're going to skyrocket in a seemingly irrational way. But it's completely rational. The reason is that oil is a finite resource that is needed, so there is very minimal price elasticity. People will pay as little as they can, but they simultaneously must have oil, and so the price has a practically uncapped ceiling if that's all that's available. The same is true of housing.
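To put rough numbers on that intuition, here's a toy constant-elasticity demand model (my own illustration; the elasticity value and scale constant are arbitrary assumptions, not data):

```python
# Quantity demanded Q = A * P**(-eps); solving Q = S (supply) for the
# market-clearing price gives P = (A / S)**(1 / eps). A small eps means
# highly inelastic demand, as with oil or housing.

def clearing_price(supply, eps, A=100.0):
    return (A / supply) ** (1.0 / eps)

p_before = clearing_price(100.0, eps=0.1)
p_after = clearing_price(90.0, eps=0.1)  # 10% of supply removed
print(p_after / p_before)  # ~2.87: a 10% supply cut nearly triples the price
```

With inelastic demand, the seemingly modest 10% cut produces a wildly disproportionate price jump - which is the 'seemingly irrational but completely rational' behavior described above.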
You're right that people won't, generally speaking, buy a house for $100 when there's another one for sale for $80. But what you've done there is greatly increase the demand for that $80 house, which is now going to naturally send its price upwards.
---
Finally there's the issue that figures on the percent of homes that are owned by investment groups are misleading, because they aren't just buying homes randomly. They're going to pick up lots of houses in precise areas, and so the impact of prohibiting this behavior will be dramatic in those areas.
Having lived through the various oil crises, I can confidently assert that there's a great deal of demand elasticity.
For example, when the 70s oil crisis hit, people stopped driving to the store for a loaf of bread, but would shop weekly instead. For another, people buy more fuel efficient cars when gas prices are high. For a third, people switch to electric cars.
There are regular major disruptions in the flow of oil. Pump prices change on a daily basis, and that results in the amount of gas available matching the number of gallons customers pay for. No gluts and no shortages.
I don't think this is really accurate, because the traditional state of society, and one that remains in the 'developing world' which is almost certainly still the wide majority of the world at this point, is families living in multi-generational housing with many people contributing. This enables older generations to comfortably 'retire' when they see fit, and provides financial comfort and security. It's basically like decentralized pensions.
This new world of low fertility, small household size or even people living entirely alone, high external dependence, and the consequent broad insecurity - is still extremely new. And I do not think it will survive the test of time.
I think you might be romanticizing multi-generational households a bit. We introduced social security systems precisely because the family systems failed so frequently. In all but the richest families no retirement as we understand it today was possible. Illness or death of the main bread winners was fatal to the whole household and children were expected to work as soon as possible.
It was not because family systems were failing. It came about in the era of the great depression, and the idea was rather unpopular at first, particularly among groups like farmers who had no interest in the new taxes that would come alongside it. Some of the arguments in favor of it were it being a way to get older individuals out of the work force in order to make room for younger workers. You have to keep in mind it was introduced at a time when unemployment rates were upwards of 20%. And retirement was and is absolutely possible. When people own their land and house and have basic maintenance skills, your overhead costs become extremely low.
Of course there's also no reason these things must be mutually exclusive. I think the ideal is to learn from the past, which proved its sustainability over millennia, and work to improve it. In modern times we've instead set out to completely replace it - or at least build up something from scratch, and what we've created just doesn't seem particularly sustainable.
Pre-1960s, the elderly were living in SROs, often windowless, with family (without aid or care), in county poorhouses, or marked as senile and sent to a mental hospital.
Retirement and living with family was viable for many as long as they remained healthy. People imagine Norman Rockwell. Reality was very different.
There are already numerous competitors to YouTube. Of course they have collectively like 1% marketshare, but that's because it's basically impossible to compete against YouTube right now. But if YouTube died, these sites would rapidly become fully competent replacements - all they're missing is the users.
>these sites would rapidly become fully competent replacements
They wouldn't, for two reasons. First, without the capital (which to a large extent comes from ads), nobody could run the herculean infrastructure and software behemoth that is Youtube. Maintaining that infrastructure costs a lot of money. Youtube is responsible for 15% of global internet traffic; it's hard to overstate how much capital and human expertise is required to run that operation. It's like saying we'll replace Walmart with my mom & pop shop and figure the supply chain details out later.
Secondly, content creation has two sides: there aren't just users but also producers, and it's the latter who come first. Youtube is successful because it actually pays its creators, again in large part through ads.
Any potential competitor would have to charge significantly higher fees than most users are willing to pay to run both the business and fund content creators. No Youtube competitor has any economic model at all on how to fund the people who are supposed to entertain the audience.
However, you brought up the distinction between consumers and producers, but I'd argue that such a thing doesn't inherently exist. YouTube was thriving before Google, when it was mostly just a site for people to share videos on. Here [1] is one of, e.g., Veritasium's oldest videos. What it lacks in flair and production quality, it makes up for in content and authenticity.
You don't need 'creators', you simply need people. And I think a general theme among many of the most successful 'creators', is that they weren't really in it for the money. They simply enjoyed sharing videos with people. Like do you think Veritasium in that video could even begin to imagine what his 'channel' would become?
And that's extremely harmful. In theory we have democracies. In practice, if you have the capital, you get to decide what products and services the world's resources are used for.
How would they pay for the infrastructure required to support all those users? I can't stand ads, but when I was younger, no way would I have paid for YT Premium (though to be fair, ads are much, much worse now).
Let me pay usage based, with full transparency in hosting, infra, and energy costs. Like a utility.
Subscription services are like hungry hungry hippos, you give them $10 a month and next year they want $100.
I honestly think if everyone starts paying, it will only make them remove the free tier quicker. I think society is better with youtube free, even if ads are annoying.
Bandwidth transit prices, peering, and other data for ISPs and the like tend to be highly classified (lol), but it's very close to $0. Take Steam for instance. They are responsible for a significant chunk of all internet traffic and transfer data in the exabytes. Recently their revenue/profit data was leaked from a court filing and their total annual costs, including labor/infrastructure/assets/etc, were something like $800 million. [1]
Enabling on site money transfers (as YouTube does) and taking a small cut from each transfer (far less than YouTube's lol level 30% cut) would probably be getting close to enough to cover your costs, especially if you made it a more ingrained/gamey aspect of the system - e.g. give big tippers some sort of swag in comments or whatever, stuff like that. It's not going to be enough to buy too many [more] islands for Sergey and Larry, but such is the price we must all pay.
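As a back-of-envelope check (the $800M figure is the Steam number cited above; the cut size, average tip, and user count are all made-up assumptions of mine):

```python
annual_costs = 800e6   # Steam-scale total annual costs, per the leaked filing
cut = 0.05             # hypothetical 5% platform cut vs YouTube's 30%
avg_tip = 3.00         # hypothetical average tip, in dollars

tips_needed = annual_costs / (cut * avg_tip)
per_user = tips_needed / 2e9  # assuming roughly 2 billion users
print(f"{tips_needed:.2e} tips/year, ~{per_user:.1f} per user per year")
```

Under those assumptions, a couple of small tips per user per year would cover Steam-scale costs, which is the sense in which transfer cuts could 'get close' - though YouTube's actual infrastructure bill is certainly far larger than Steam's.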