I don’t quite understand the full extent of the “AI will create jobs” argument. In prior revolutions, say factory automation, automation created jobs because building and maintaining robots is a whole thing. Building and maintaining AI is also a whole thing, but if you’re talking about wholesale automation of intelligence, the fundamental question I have is:

What jobs will AI create that AI cannot itself do?

In the automation revolution, the bots were largely single-purpose, and bots couldn’t be created by bots. There could and probably will be trillions of jobs created by AI, but they will be done by trillions of agents. How many jobs do you really create if ChatGPT is so multi-purpose that it only takes one, say, 250k-person company to support it?



> What jobs will AI create that AI cannot itself do?

Part of the problem is the definition of "AI" is extremely nebulous. In your case, you seem to be talking about an AGI which can self-improve, while also having some physical interface letting it interact with the real world. This reality may be 6 months away, 6 years away, or 600 years away.

Given the current state of LLMs, it's much more likely they will create jobs, or change workflows in existing jobs, rather than wholesale replace humans. The recent public spectacle of Microsoft's state-of-the-art GitHub Copilot Agent[0] shows we're quite far away from AI agents wholesale replacing even very junior knowledge-work positions.

[0] https://news.ycombinator.com/item?id=44050152


In a sense, that is exactly my question: is anyone writing these think pieces providing specific definitions?


Yeah, but LLMs won't stay at their current state. I don't understand this argument. Is there any particular reason to believe they'll stop getting better at this point?


Yes, I do think there is: LLMs are a paradigm with a certain limited functionality, not a path to AGI. I actually find this assumption of constant, never-ending improvement of LLMs interesting. Almost all technology has diminishing returns in terms of improvements; why would LLMs be the exception? Why not believe that all future iterations of LLMs will be gradual improvements on current behavior, rather than that LLMs can necessarily become super-intelligent AGI?


Most of the adults alive today have lived through a time when CPU speeds doubled every 12–24 months (see Moore's law). This has conditioned many to believe that all information technologies improve exponentially when, in reality, most do not.
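As a back-of-the-envelope illustration of what that conditioning implies (a quick Python sketch; the 18-month doubling period is the usual Moore's-law figure):

  # Doubling every 18 months, sustained for 20 years:
  years = 20
  doublings = years * 12 / 18         # ~13.3 doublings
  print(f"{2 ** doublings:,.0f}x")    # ~10,321x improvement

Very few technologies outside semiconductors have ever sustained a curve like that.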


It kind of reminds me of the solar power chart that one institute made where every year they predict that growth will definitely flatten from here on out and every year it fails to do so.

I don't have any reason to expect LLMs to be flattening or for the technology to be capped out. In fact, I have a lot of reasons to believe the opposite such as the plethora of papers and proposed techniques that haven't even been attempted at scale. (QuietSTaR my beloved, you will have your day.) It just doesn't look like a mature technology, rather the opposite. So my baseline assumption, on that evidential basis, is that number keeps going up, and if this prediction results in a strange-looking future then that says more about my taste than about the prediction.


The question, though, is the rate of growth. To get the kind of “end of scarcity AGI” advertised by Sam Altman probably requires continuous year-over-year exponential growth. Will that happen in the normal course of LLM research? Or will it be more gradual, with the exponential growth coming from future paradigm shifts? I’m arguing the second.


I guess I'm arguing the first, yeah. LLMs are not anywhere close to tapped out as a technology.


> Is there any particular reason to believe that they'll stop getting better at this point?

Are they better now? When ChatGPT came out, I could ask it for the lyrics to The Beatles' "Come Together". Now the response is:

> Sorry, I can't provide the full lyrics to "Come Together" by The Beatles. However, I can summarize the song or discuss its meaning if you'd like.

You can argue that ChatGPT now "knowing" that 9.11 is less than 9.9, or being able to count the 3 r's in "strawberry", means it's better, but which of those am I more likely to ask an LLM?
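(For reference, both gotchas are trivial to check in plain Python:

  print(9.11 < 9.9)                 # True: as numbers, 9.11 < 9.9
  print("strawberry".count("r"))    # 3

which is part of why they're odd things to need an LLM for.)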


I'm sure it still knows; it's just been explicitly told not to divulge them for copyright reasons. IIRC, song lyrics are even explicitly forbidden in the system prompt.


> What jobs will AI create that AI cannot itself do?

Good question, and one where "in the immediate..." answers are not relevant. In the long term, then, what?

One direction: humans are not good for semiconductor fabs. Humans bring contamination of all kinds into a space that shouldn't have any. Humans can't help themselves and tweak things they are not qualified to touch. Humans can't help themselves and don't report changing conditions that they should. Etc., etc. So current fabs contain mountains of automation, and yet they also have lots of humans. Full automation is really difficult. It's more that automation creates the economic conditions that make it manageable to employ the remaining humans.

> How many jobs do you really create if ChatGPT is so multi-purpose that it only takes one, say, 250k-person company to support it?

Is this very relevant? For example, the entertainment industry relies, I expect, on just a handful of data centers for digital distribution. But creating the movies and TV shows employs insane numbers of people. The question is more "What does NextGenChatOpen enable that it cannot exploit itself?"


> What jobs will AI create that AI cannot itself do?

If we are looking long term, PART of the answer may be that asking about jobs is an insufficient question. Pending other unresolved questions, GDP is already no longer a good measure of standard of living. Other measures, such as a happiness index, are in their infancy, but some people are thinking about alternatives.

So perhaps these non-AI jobs are non-jobs: humans living their lives instead of filling jobs. Which means whatever humans like to do: playing games (sometimes competitively), performing, hiking, collecting stamps, painting murals...


https://en.wikipedia.org/wiki/Productivity_paradox

If AI is roughly where IT was in the 1960s, we might actually see decreased productivity for a while until people (yes, people) figure out how to use it effectively.


The question is how long. Information moves very fast these days, and there are typically fewer broad secrets about tool usage and management.


The jobs created by the need to build and maintain robots (and industrial machinery in general) are very few compared to the number of jobs the machines replaced. The new jobs that the industrial and other technological revolutions created were mostly in other economic sectors, like services and commerce.


> What jobs will AI create that AI cannot itself do?

AIs lack skin in the game; they cannot bear responsibility for outcomes. They also don't experience desires or needs, so they depend on humans for that too.

To make an AI useful, you need to apply it to a problem; in other words, it is in a specific problem context that AI shows utility. Like Linux, you need to use it for something to get benefits. Providing this problem space for AI is our part. So you cannot separate AI's usefulness from people: problems are distributed across society and are non-fungible.

I am not very worried about jobs, we tend to prefer growth to efficiency. In a world of AI, humans will remain the differentiating factor between companies.


Jevons Paradox likely applies here. There could be an initial reduction in jobs, but longer term, humans using AI will reduce the cost (increase the efficiency) of those jobs, which will increase demand more than merely satisfy it.

Basically any job that uses a word processor, spreadsheet, drawing tool, etc. will become more efficient, and if Jevons Paradox applies, demand for those things will increase beyond the reduction due to efficiency gains.

I can imagine that for many fields it will be cheaper (in the near term) to have humans use AI rather than try to build fully automated systems that require little or no supervision.
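A minimal sketch of the Jevons trade-off, assuming a toy constant-elasticity demand curve (the elasticity values are illustrative, not empirical):

  # AI cuts the human labor needed per task to `price_cut` of what it was;
  # demand responds as quantity = price_cut ** (-elasticity).
  def human_hours(price_cut, elasticity, base_hours=100.0):
      quantity = price_cut ** (-elasticity)     # demand after the price drop
      return base_hours * quantity * price_cut  # hours per task also drop

  print(human_hours(0.5, 0.5))  # ~70.7  -> inelastic demand: jobs shrink
  print(human_hours(0.5, 1.0))  # 100.0  -> break-even
  print(human_hours(0.5, 1.5))  # ~141.4 -> elastic demand: jobs grow

Whether demand for AI-assisted work sits above or below that break-even elasticity is exactly the open question.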


Agreed - let's say your employees cost $100/yr and your revenue is $150/yr, with a $50/yr profit. Now your employees suddenly become 2X more productive because of AI. Do you...

A) Fire half your staff, now earning $100 profit? ($50/yr to employees, $150/yr revenue)

B) Attempt to double your business, now earning $200 profit? ($100 emp, $300 rev)

C) Double your headcount and earn $400 profit? ($200 emp, $600 rev)

If companies are profit-seeking, B and C should be more common than A!
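The same scenarios as a quick sketch, normalizing the totals above to 100 employees at $1/yr each (toy numbers only):

  # Each employee costs $1/yr and originally brings in $1.50/yr of revenue;
  # AI doubles revenue per employee to $3.00/yr.
  def profit(headcount, revenue_per_emp, cost_per_emp=1.0):
      return headcount * (revenue_per_emp - cost_per_emp)

  print(profit(100, 1.5))  # 50.0  -- baseline
  print(profit(50, 3.0))   # 100.0 -- (A) fire half the staff
  print(profit(100, 3.0))  # 200.0 -- (B) same staff, double the business
  print(profit(200, 3.0))  # 400.0 -- (C) double the headcount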


Firing people is the number one way companies improve short-term profits. Since our stock market is driven by quarterly results, firing people is absolutely what businesses will do. You can watch all the AI layoffs and "not hiring because of AI" happening in real time, right now.


You’re just making things up here. LLMs and other forms of “AI” can’t do most jobs, so it’s silly to speculate about what will happen when they replace humans in jobs they can’t actually perform.

To the extent it can automate tasks under the direction of humans, it’s not even clear it makes those humans more productive, but it is clear that it erodes those humans’ own skill sets (beyond prompt engineering).


They can't do most jobs, but they can still reduce the number of jobs.

For example, even outsourcing giants like Infosys shrank hiring by 20% AND increased personnel utilization from 70% to 85% just by mandating that employees start using code-gen tools, and as a result were able to significantly improve margins.


>so it’s silly to speculate what will happen when it replaces humans in those jobs it can’t actually perform

Why?

If you're waiting around for AI to do these things, then by the time it happens it will hit like a truck. The speed of technological implementation these days is very fast, especially compared to the speed of regulation.

What's more, we are not just seeing improvement in things like LLMs; there are broad improvements in robotics and in generalized AI behavior.


Technically, the article is the one making things up, and I'm responding to those assertions.



