
If you'd asked me which jobs the robots would come for first, I would have put digital artist pretty low on my ranking before now.


It's coming for us, too.

It won't be long before most software engineer positions are eliminated while some are replaced by software "technicians" with enough expertise to command AI to generate working code. Perhaps the technicians will be tasked with building tests and some automation, but even that stuff can be delegated to AI to an extent.

This may seem far off because the present economy is accustomed to paying engineers large sums of money to write apps. Even with the retractions we've been seeing in hiring and venture capital, there's just enough easy money still around, and the capabilities of code-writing AIs aren't quite there yet.

All we need is a significant market correction and the next generation of AI to wipe out a large swath of tech jobs.

The next step, regardless, is applying technologies like DALL-E to web design, and making said technology widely used, open, and affordable. We won't need web designers or even UXD.

Then we won't need as many engineers when AI can solve a lot of common problems in building software. AI can do it better because it won't spend inordinate amounts of time dillydallying over next-gen frameworks, toolchains, and preprocessors. AI won't even have to worry about writing "clean" and maintainable code because those things will no longer matter.


I don't think we'll see this in our lifetimes.

For that scenario to be possible, general AI needs to be developed first.

A huge (and awful) part of software engineering is figuring out what exactly the stakeholders want you to build or fix. Sometimes, they themselves don't even know.

Dealing with ambiguous Jira tickets, poorly reported bugs, non-existent requirements, missing or outdated documentation: these are the "common problems" in building software. Current AI technology isn't even close to being able to sort out these types of problems today, and it won't be until a monumental breakthrough in the field is achieved.

Generating art is "easy" in the sense that art can't be wrong or right, it just is.

Generating the backend of a streaming platform? I'd like to live long enough to see it.


> A huge (and awful) part of software engineering is figuring out what exactly the stakeholders want you to build or fix. Sometimes, they themselves don't even know.

Ask any creative out there what the hard part of their job is.


> A huge (and awful) part of software engineering is figuring out what exactly the stakeholders want you to build or fix. Sometimes, they themselves don't even know.

Yeah, but that part can be learned by anyone without a CS degree.

Perhaps not everything in software can be automated, but I could see a team of 10 programmers being replaced by 1 person (programmer or not) skillful enough to control a bunch of AI software tools.


A tool that 10xs programmer productivity will, if anything, lead to higher demand for programmers, because we're nowhere close to developing 1/10 of the total software the world demands.


Perhaps, but that would be an indirect consequence.


>I don't think we'll see this in our lifetimes.

I already see a clear path that'd take about 20 years to execute properly. That's assuming low-pressure conditions and a very large amount of funding, though, neither of which is typically present in reality.

The result is essentially what GP describes, with a path to AGI in the form of extremely competent tool AI. We're going to hit self-assembling programs before we hit true AGI.

I can't say I'm particularly excited to see such things become reality. Fortunately, humans usually find a way to fuck things up. Our species' collective incompetence is the largest barrier to AGI currently, which may be a blessing in disguise depending on how you look at it.


The backend of a streaming platform is trivially summoned by logging into Twitch. GPT-3 has already demonstrated its "understanding" of the relationship between a problem statement, a variable, and its value (if you haven't seen it, it's worth finding the tweet/gif). Bridging the gap between the words "streaming platform backend" and an ffmpeg command line may involve a bunch more details, but the gap between the two is only going to shrink.
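To make that concrete, here's a rough sketch (in Python) of the kind of translation such a system would have to perform. The ffmpeg flags are real; build_ingest_command and its parameters are invented for illustration:

```python
# Sketch of the translation the parent comment imagines: from the words
# "streaming platform backend" to a concrete ffmpeg invocation. The ffmpeg
# flags are real; build_ingest_command and its parameters are invented.
def build_ingest_command(rtmp_in: str, hls_out: str) -> list:
    # Pull an RTMP stream, transcode to H.264/AAC, and emit HLS segments.
    return [
        "ffmpeg",
        "-i", rtmp_in,          # incoming stream from the broadcaster
        "-c:v", "libx264",      # re-encode video to H.264
        "-preset", "veryfast",  # favor latency over compression
        "-c:a", "aac",          # re-encode audio to AAC
        "-f", "hls",            # package as HLS for viewers
        "-hls_time", "4",       # 4-second segments
        hls_out,
    ]

print(" ".join(build_ingest_command("rtmp://localhost/live/stream", "out/stream.m3u8")))
```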


AI is great for recommendation systems and art because they are fuzzy. "Good enough" results are relatively easy to achieve. There is lots of tolerance for errors, because human preferences are flexible and fuzzy themselves.

Engineering is a different ballgame... If anything, all the code monkeys will simply become QA monkeys/test-engineers, because you need to be really sure that your black box algorithm is actually doing what you think it should be doing.


> software "technicians" with enough expertise to command AI to generate working code

People keep trying to make simplified programming environments for significantly less-trained people and they keep failing. Is mixing in an AI actually going to make it easier to get a result that has no crippling bugs?


No, but when have crippling bugs ever stopped software businesses from shipping it anyway?


>> People keep trying to make simplified programming environments for significantly less-trained people and they keep failing. Is mixing in an AI actually going to make it easier to get a result that has no crippling bugs?

Yeah. I've even worked in one of those environments for a year (not my choice).

I'm of the opinion those kind of environments won't ever work. They'll either be:

1. Extremely cookie-cutter (e.g. make a clone of our "standard app" with a few insignificant little tweaks).

2. Require software engineers to get anything useful out of them, and those engineers will feel like they're working with one hand tied behind their backs (or banging their heads against a wall).

IMHO, one of the main skills of a software engineer is translating user requirements into technical requirements that work, and understanding when they work. I don't think that skill is automatable without a fairy-tale AGI.

> No, but when have crippling bugs ever stopped software businesses from shipping it anyway?

A lot? It depends on your definition of "crippling." Is it something a software engineer will gripe about and say, "I don't want to use this"; something that's awkward but that the people who use it can still get their work done with; or a system literally incapable of performing its function?


And how is having AI involved going to increase the information in the output, if these AIs aren't actually thinking and pouring from their own entropy source into the outputs?


Judging by the current state of DALL-E, the generated software will look good at first impression, but have lots of weird bugs when examined closely. So yeah, not much different to current software dev.


"Physical" engineering fields will probably come first... think AI-generated architecture, with AI-generated structural engineering, plumbing, electrical wiring, etc... with human-guidance of the generative process, and human-review/accountability of final output. Amplification of humans, not obsolescence.

In software, yeah, boilerplate and function-level code generation... I could also see generating trivial UIs for CRUD apps, or no-code data pipelines for small businesses... maybe even generating high-level architectures for new services... but we're far off from AI auto-generating code for enterprise applications or foundational services. The differentiator is making changes within an existing complex domain/code-base, in contrast with generating new assets from nothing.


Most of the math for structural engineering is already done through software; we just don't call it AI. The difficult and valuable part of being a good structural engineer is translating requirements and dealing with clients. The actual math and engineering work is often not much more difficult than what's taught in undergrad, and much of it is offloaded to designers anyway.

Source: My family owns one of the largest civil engineering firms in my home province.


It will never come for us. You think it will, but that’s because you don’t understand software.

Pick any random Jira ticket for a large software project. Could an AI understand and implement that feature into a larger project? Can it correctly deploy it without interruptions to production jobs? Will it correctly implement tests and decent code coverage? If there are regressions will it know how to go in and fix them? If there are bugs reported by users will it be able to investigate and accurately fix them? What about when multiple branches of feature development have to be merged, will it know how to do it correctly? Will it know how to write high performance software or just use shitty random algorithms?

If it can’t do these things AI is basically useless. Because this is basically 90% of software development.


A plane flies and is nothing like a bird.

Most likely, an APP-E or GAME-E that, given a prompt, generates an application or game will not emit C++/JavaScript/Swift/Kotlin but will directly target the pixel space, running a single "function" such as `nextFrame(currentState)` in a 60+ FPS loop.
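In rough Python, the shape of such an artifact might look something like this. Everything named here is invented for illustration; a stub stands in for the opaque blob:

```python
# Hypothetical shape of an APP-E/GAME-E artifact: no source code to inspect,
# just a learned function from (state, inputs) to the next frame of pixels.
import time

class OpaqueBlob:
    """Stub standing in for 2GB of inscrutable learned weights."""
    def initial_state(self):
        return 0
    def next_frame(self, state, inputs):
        # Dummy 320x240 frame; the real blob would render the whole game here.
        pixels = [[(state % 256, 0, 0)] * 320 for _ in range(240)]
        return pixels, state + 1

model = OpaqueBlob()  # imagine: load_blob("mario_but_with_butterflies.bin")
state = model.initial_state()

for _ in range(60):  # one second of "gameplay"
    pixels, state = model.next_frame(state, inputs=None)
    # a real runtime would poll input devices and blit pixels to a window here
    time.sleep(1 / 60)
```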

It will probably be here in the next few years: write a prompt such as "2D game like Mario but with butterflies" and receive a 2GB blob which opens a window accepting inputs and changing the pixels accordingly. Or, for something more serious, a prompt like "invoicing application following the laws of France, Material Design, store the database in AWS using <token>". APP-E or GAME-E doesn't need to totally replace software development, just be good enough to replace it in 99% of use cases.

Bugs/tests could probably be solved by some mechanism for injecting localized prompts: given an already generated binary, fine-tune it according to a list of other prompts.

As for deployment, it's already pretty much solved by the glut of CI/CD solutions out there; not sure why you would need generative statistics for it.

What DALL-E offers is a glimpse of the next 30 years, and probably 99% of the infrastructure required to run it to its full potential is not here yet. Just as in 1992 (3 years after the HTTP proposal, but 2 years before the launch of Netscape) there were only glimpses of what a connected world would look like.


If we had such advanced AI we wouldn’t ask it to build programs, we would just tell it to do those computations directly. So instead of asking for an invoicing application, we’d just ask to generate an invoice to whatever parameters we need, and to remember the reusable data for next time.


Sure, who knows. Although humans allegedly can't juggle more than 7±2 items at once, hence a nice user interface is still required if humans are to be kept in the loop.

My point was that something like APP-E or GAME-E seems very plausible in the near future, and it is more likely to render pixels with the underlying logic encoded in an inscrutable sparse matrix (essentially the consequence of a beefier DALL-E with regard to the data set, the learning modalities, and the attention span) than to write programs to be compiled/interpreted by any current language stack.


I guess the difference is that with art the human can immediately reject/accept and iterate. And a bad image is just crappy; it doesn't break anything.

With software it might take days of testing to verify the result, and then you'd repeat that for every iteration. It would be cheaper to just build the thing!

Where AI might work is in some restricted subset of software, like a web CRUD app where you say "I want an app that stores a TODO list with dates". With the constraint of it being CRUD, the AI just needs to work out the database and the arrangement of fields and so on.

The AI is not programming so much as it is choosing which "rails-like scaffolds" to initiate.
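Something like: the model's whole job reduces to emitting a small spec, and deterministic templates do the rest. A toy sketch (the spec shape and the helper are invented):

```python
# Sketch of the "choosing scaffolds" idea: the model's only job is to fill in
# this spec from the prompt; deterministic codegen does the rest.
# The spec shape and generate_table are invented for illustration.
spec = {
    "app": "todo",
    "resources": {
        "task": {"title": "text", "due_date": "date", "done": "boolean"},
    },
}

def generate_table(name: str, fields: dict) -> str:
    cols = ",\n  ".join(f"{col} {typ.upper()}" for col, typ in fields.items())
    return f"CREATE TABLE {name}s (\n  id INTEGER PRIMARY KEY,\n  {cols}\n);"

for resource, fields in spec["resources"].items():
    print(generate_table(resource, fields))
```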


Simple CRUD apps are mere toys these days. We don't even need AI to quickly generate them; it can be done with a couple of scripts. The only people that would be replaced by an AI that specializes in CRUD would be recent CS graduates and junior developers.

The serious engineers are all working on things that go far deeper, and they could never be replaced.

If you want to build a business big enough to be listed on the NASDAQ, you need real developers, and you need to pay them real money.


Writing software is like writing novels: putting words together to make sentences is easy. Making the story make sense is difficult.

One could think that much of art is just pretty form without sense and that is why DALL-E works.


I don't think that assessment of art is accurate.

Most art is not "pretty form without sense". It actually has sense and meaning more often than not, so we can debate what a particular piece "means".

The difference with engineering is that art's meaning is way more subjective, and that if I "miss the point" or simply disagree with the consensus on its meaning, this doesn't make an airplane go down or a nuclear reactor melt down.


I don't think it will be like that, for two reasons.

One, coming up with a correct description of a program is what computer programming actually is. Implementation is something we're always looking to do faster, so we can describe more behaviors to the computer.

Two, we're nowhere near the scale of software production which would clear market demand. If everyone who writes code for a living woke up and was ten times as productive, there would be more churn than usual while the talent finds its level, but the result would be ten times as much code per year, not five times and 50% unemployment.

Today I wrote a little bit of code to generate a prefix trie and write it out in a particular format, with some extra attention to getting the formatting 'nice'. This took me about three hours.
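The trie-building half is small; a minimal sketch (leaving out the particular output format and the nice-formatting fiddling, which is where the hours actually went):

```python
# Minimal sketch of the trie-building half of the task; the "particular
# format" output and the nice-formatting details are left out.
def build_trie(words: list) -> dict:
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-word marker
    return root

trie = build_trie(["cat", "car", "card"])
# {'c': {'a': {'t': {'$': True}, 'r': {'$': True, 'd': {'$': True}}}}}
```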

It won't be long before something in the nature of Copilot could have gotten this down to, maybe, a half hour for results of the same (minimal, acceptable) quality.

Wonderful! Can't wait, I'll be six times as productive at this kind of task.

This might make it hard, on the margin, for some of the more junior and glue-oriented developers to find work, but I think the main result will be more software gets written, the same way using a structured programming language got people further in a day than writing in assembler did.


I think when AI can do the full job of a programmer we'll have reached the singularity. Programmer will be the last job to go.


Time for me to open up my "AI Code Refurbishing" shop and specialize in fixing all the disasters these "AI technicians" make.


Imma be honest, working as an artist who has to come up with Dall-E prompts and as a programmer who has to maintain a codebase slapped together from GPT-5 output sounds equally horrifying. I think I'll stick to my grug brain engineering.


I am personally bearish on this assumption unless a few hurdles are reached. Being a software engineer involves a lot of translation of intent from a required feature into an efficient and maintainable implementation.

In a good number of cases it is more difficult to communicate what needs to be built rather than actually building the end product.

The recent work with DALL-E 2 echoes a similar problem: coming up with a descriptive prompt can be difficult and takes fine-tuning. Not unlike trying to communicate your intentions to a graphic designer and giving them similar works to draw from.


I agree. Most people fail to see it, because they see all the effort they need to put into producing good results (regardless of their actual job, BTW). Programmers keep thinking their job is secure because, after all, we are the ones who write the software. Even if it's an ML system. (But ML systems don't necessarily need much coding.)

However, software development is probably the most thoroughly documented job: the job with the most information online about how to do it right, the job with the best available training set. There is a lot of quality code (yes, bad code too) and a lot of questions and answers with sample code (Stack Overflow...) available. Maybe we've even already written most of the software needed in the near future; it's just not available to everyone who needs it (because no one knows all the things out there, and also these might be in several pieces in several repos).

Now, the one critical thing I think is still needed, based on how we actually create software, is an agent that has reasonable memory, that can handle back references (to what it has been told earlier), i.e. one that can handle an iterative conversation instead of a static, one-time prompt.

This might be a big leap from where we are now, or it may just seem like one, but AI/ML has kept surprising us with these leaps for the past decade or so. Another thing that may be needed is for it to be able to ask clarifying questions (again, as part of an iterative conversation). I'm not sure about this latter one, but this is definitely how we do the work.
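In other words, less one-shot prompting and more a loop like this, where the full history is fed back in on every turn (model_reply is a stand-in; everything here is invented for illustration):

```python
# Sketch of the iterative, memory-ful interaction described above.
# model_reply is a stand-in for a real code model; the point is that the
# full history, including back references, goes back in on every turn.
def model_reply(history: list) -> str:
    # Echo stub, just to keep the sketch runnable.
    return f"(model saw {len(history)} turns so far)"

history = []
for user_msg in ["Build an invoicing app.", "Actually, make the dates ISO 8601."]:
    history.append({"role": "user", "content": user_msg})
    reply = model_reply(history)  # the model can refer back to earlier turns
    history.append({"role": "assistant", "content": reply})
    print(reply)
```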


The problem with that theory is that writing code is easier than reading code. This is generally not the case in other professions. It is definitely not the case for an artist.

You still need correct code, and the halting problem means you can't, in general, prove whether code does what you want it to. At the end of the day, someone needs to be able to go in and fix shit the AI did wrong, and to do that you need to understand the code the AI wrote.


> The problem with that theory is that writing code is easier than reading code. This is generally not the case in other professions. It is definitely not the case for an artist.

> You still need correct code, and the halting problem means you can't, in general, prove whether code does what you want it to. At the end of the day, someone needs to be able to go in and fix shit the AI did wrong, and to do that you need to understand the code the AI wrote.

This might have been your point, but chances are the "code the AI wrote" will be an unmaintainable mess, so "fixing it" means throwing it away and re-doing it.


The question is how many developers will fix code after AI and how many developers will grow potatoes.


AI has been over promising and under delivering for 50 years.

There's a reason why the general models aren't being released. The second you look under the hood and start poking the unhappy paths you see that it doesn't understand anything and you're talking to something dumber than a hamster.


There's a weird tension between people saying "AI is overblown" and people saying "this is the most magical thing I've seen in my lifetime".

I lean towards the latter but with a healthy dose of "it's deeply weird and hard to get anything useful from". But that doesn't make it any less magical.

And no - it's not "intelligent" in any human sense.

But I can't relate to people who pooh-pooh it as if there's nothing exciting happening. Either they are deliberately cultivating a dismissive air, or they are deeply jaded and weary.

EDIT - There's a 3rd option. People are making a rhetorical point because they perceive a need to correct an imbalance in the general mood. This is actually the most likely explanation, and is often under-appreciated as a motivator in public statements. I've noticed it in myself frequently.


AI can be overblown and it can also be magical.

This was true for the state of the art in 2010: https://xkcd.com/1425/. Today you have a free phone app that does both. Of course, it also classifies a spoon as a large-breasted robin, which is why you need a human in the loop. It's even truer in programming.


That xkcd actually claims the opposite: "some things that many people assume to be trivial CS problems actually require advancements in AI"


That's what I said. The converse is that despite the huge advance, the model is still incredibly fragile and quite useless outside the niche it was trained for.


Programming requires far more breadth and precision than 2D art.

I think that in the very long run programming work will be automated, but by that stage we will either be post-scarcity or reconstituted in a computational substrate.


You say that with such confidence.

I'm looking for ways to hedge my reliance on my skills.


I'm confident that society would be so radically different that trying to predict and prepare is a fool's game.


>This time will be different.

The 20th time you hear that is when you stop caring.


programming is going to get automated with language models in <5 years

however, I find that my job (SWE) is about 1% programming and 99% strategizing, designing & communicating.


Really? How much money are you willing to put on the claim "LLMs are going to produce >80% of newly written production code in 5 years"? If it's less than your yearly salary, then even you don't believe your own assertion.


For github & google, I'd put my own money on it being >80% in 5 years. Maybe not a whole year's salary though, I'm a big baller


Well, you will lose, but I basically don't believe your claim is anything more than hyperbolic bullshit, as no one working as a SWE is actually spending only 1% of their time programming, as per your initial comment.


some software engineers spend most of their time designing, communicating with other teams, and managing operations. It becomes more the case as you get more experienced, depending on your path toward staff engineer.


This is a truism that every senior engineer understands. There is no way to get below 10% coding unless you move into people management/product. At this point, you are no longer a SWE (except by title, maybe).


this thread is funny because I ended up spending my whole day yesterday and today heads down coding


> "This may seem far off..."

After experimenting with GitHub Copilot, I can see that day being 50% (perhaps even just 25%) as far away as it used to feel.


Wouldn't millions of unemployed developers start creating software that would compete/overthrow existing software companies?


Oh people don't realize that all those movies/series about dystopian societies were blueprints for the elite.


AI still can't translate lol


Maybe, but this is replacing fifty cents of stock photo, not digital artists.


Who do you think created that 50-cent stock art?


There is a semantic issue going on here: I thought "digital art" meant fine art, not illustration for blog posts.


Probably not a digital artist. Is there a big fraction of that on stock photo sites?


And digital artists aren't still pretty low on your ranking?

I don't know; after all the predictions about self-driving cars, I'm cautious. Especially considering that back then, it almost seemed obvious that we'd have self-driving cars by now. Cars were certainly capable of driving themselves back in 2016; it just seemed like we needed to iron out a few kinks. How long could that possibly take?

Now, I have no idea when it'll happen.

I'm not necessarily saying that it'll take AI forever to do what humans can do. Rather, I think it's very hard to make good predictions with all the hype and slightly deceptive marketing.


I think self-driving cars might be one of the hardest domains ever, considering they need to do the right thing (or at least avoid doing the wrong thing) 99.99% of the time to be even considered viable.

An AI artist can get away with a 5% success rate and still be considered viable for replacing humans.

Likewise, an AI programmer can be right only 50% of the time with expert oversight (someone sitting there fixing the code), or has to be right 90% of the time with non-expert oversight.


Self-driving cars have been a reality for a few years already. The issue is with laws and people, much less with the tech.


The issue is absolutely with tech. Cars can drive themselves right now in good conditions, with a human taking over ASAP in case the system fails, and even that isn't working 100%. Put that self-driving car on a mountain road or in deep snow and see what happens.

Also, if your bar for accidents is "slightly better than a drunk driver and even less accountable", sure, then we can fully deploy that right now. Unfortunately, this really isn't sufficient, and car companies are fully aware. There's a reason Mercedes made headlines by taking responsibility for the car for ten seconds after disengaging the autopilot, and why they're the only ones doing this so far.


I think this is a comment for which "sources needed" is kind of the bare minimum.

To most people, the "readiness" of self-driving cars doesn't even come about when they would accomplish parity with human drivers across all situations, because they should be better. And we're not even close to parity with human drivers across a large swath of common situations.

Unless you have links showing otherwise...


They work great when conditions are OK. As soon as there is an out-of-the-ordinary parameter, they fail spectacularly.

We can also talk about the AI support-service crap, with accounts banned/locked without the user having any way to know why or to prove they didn't break any terms of the contract.


Plus the approach. I'm not sure we need AI at all for self-driving.


Same. I would have thought the arts would be the last to move to AI. What do you put at the end of the list now? It's not trucking. Or ordering. Or anything related to porn. Eliza is 50+ years old and not actually a good replacement for a psychotherapist yet, but I would imagine the approach could go a long way.

I'm biased by the terrible experience I had trying to get my kids to learn online in the pandemic, but I think schoolteacher might be one of the mass professions least susceptible to being AI-engineered away.

Ethicist is probably a safe career path too, but there aren't that many of those. And Politicians will of course prevent robots from taking over their jobs.


The reason art is such a focus is that it has incredibly low stakes... and people don't want AI making any seemingly important decisions...



