Ask HN: When AI can write all code, what is the role of programming languages?
5 points by jmfldn on Jan 31, 2023 | 13 comments
Now, I know that this future is way off, and that ChatGPT or any descendant of it possibly won't be able to write all the code required to build a big real-world system. This question is based on a (currently hypothetical) future AI system, possibly AGI-based, that can basically write all the code required for a major project.

My question is: what is a programming language for at that point? As a human programmer, I feel like I would need a way to verify what the machine has written, and programming languages are, by their very nature, the human/machine interface. But they're really only of benefit to the human. This AI could, in theory, just flip the bits and write machine code. However, how can we easily verify the resulting software? Automated tests? What if we miss something? Will future, AI-written software be a black box, or will languages still have their place? If so, will they be a different kind of language from the ones we have now? Will a code-writing AI in 2070 be writing assembly but converting it to C for our benefit? Seems unlikely.



Writing the code is not the real problem; understanding what code should be written is the real challenge.

So when AI can tease out my clients' requirements when they have no clue, or, even worse, are adamant they know what they need but are clueless, then I'll be concerned.

And furthermore, try getting regulatory approval in, say, the banking or machinery-safety sector for a system when you can't explain in detail what it will do in any given circumstance.

This problem of explainability already exists. One of our local banks gave a talk at an AI meetup and said they were able to get results with AI/ML that they couldn't roll out, because they could not adequately explain to the regulator the exact results it would give in certain circumstances.


“Writing the code is not the real problem, understanding what code should be written is the real challenge.”

This seems like more of a surprise to 1990s-2020s IT designers and developers than it should be. The legacy of IT development from the 1910s to the 1980s turned out to be a grand exercise in free systems engineering/requirements development for the Personal Computer/Smartphone era. Microsoft/Apple/IBM PC/BlackBerry/etc. rode on the backs of all those fielded legacy systems that had to figure out from scratch what people wanted and how to successfully meet those needs. What was needed in the last 30 years was already sketched out and validated in the Big Iron/landline era. Moore's law and the like from the 1990s did mean some new doors/new solutions were possible, but the requirements "walls and ceilings" were already architected and validated for the post-90s players by their IT forebears.

AI tech does not have the same advantages. AI in 2020 is at the stage of development that computers were at in the 1950s. In the 1950s, the big question was: is a computer just a big punched-card sorter, and if not, what else can it be? Answering that question in the 1950s-1980s took a lot of systems engineering and prototype development (in the guise of early production). The 90s-20s developers could simply exploit all that hard-won early insight, and so we got the "nanosecond Nineties," where you could quickly develop new products with less fumbling around determining which needs were actually important to meet.

AI…welcome to the fumbling 1950s. Probably won’t need the pocket protectors, crew cuts and button-down shirts this time.


I think I see your point, but in my thought experiment, this future AI will be bootstrapped from the current accumulated knowledge of programming and software engineering. I don't think it would be back to square one. At the very least it will be able to write code in whatever human programming languages are around. I'm taking as given in this thought experiment that, given the requirements, humans don't need to write any code whatsoever.

My point then reduces to: what happens next? The machine probably doesn't need the abstraction layer of programming languages. It might even have its own layer(s) which make no sense to us, or it might write machine code directly. The question then hinges on how we, as humans, would validate such a system.


In my thought experiment, let's say that I tease out the requirements but the machine implements absolutely everything. That's the question I'm driving at. How do I verify the software's functional and non-functional requirements have been implemented correctly, and is a programming language required for that so that I can check the machine's work?


>> let's say that I tease out the requirements but the machine implements absolutely everything.

>> How do I verify the software's functional and non-functional requirements have been implemented correctly

If you have the requirements sufficient to have the AI create code, wouldn't those same requirements be sufficient to generate tests and acceptance criteria?
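
As a rough sketch of that idea (mine, not from the thread, with made-up names): a requirement like "a transfer must never leave the source account overdrawn" can be written as an acceptance test before any implementation exists, and the test stays valid no matter who, human or AI, writes the code it exercises.

    # Self-contained illustration; `transfer` is a stand-in for whatever the machine produces.
    import pytest

    def transfer(balances, src, dst, amount):
        """Placeholder implementation of the generated code under test."""
        if amount > balances[src]:
            raise ValueError("insufficient funds")
        balances[src] -= amount
        balances[dst] += amount
        return balances

    def test_transfer_never_overdraws():
        balances = {"a": 100, "b": 0}
        with pytest.raises(ValueError):
            transfer(balances, "a", "b", 150)   # more than the source holds
        assert balances == {"a": 100, "b": 0}   # a rejected transfer changes nothing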


Right, exactly. So, to my original point, are tests enough? Would code, apart from machine code, be irrelevant at this point? How do I inspect the program? Do I need to?


> What is a programming language for at this point?... AI could just flip the bits in theory and write machine code.

I think the wrong question is being asked. Instead of asking "how will a programming language apply," the question should be "how many layers of abstraction are tolerated?"

The amount of abstraction is a kind of cost-benefit analysis. In one scenario, it may be more attractive to forgo tooling at a more abstract layer in favor of using the undergirding tooling at a less abstract layer. For example, a software dev two or more layers deep in plugins/frameworks/libraries realizes that working with the stdlib would actually take less time. Or, in your scenario, a company using an AI system to write high-level scripts realizes that working with the high-level scripts directly would actually take less time.

However, in another scenario, a software dev may realize that abstraction tooling actually saves time versus working "close to the metal." Or, in your scenario, a company may realize that an AI system actually saves time versus having a human write out boilerplate.

> Will a code-writing AI in 2070 be writing assembly...

If we're looking into crystal balls for the answer, mine says ~50 years is a long enough time for drastic changes that don't fit into this conversation at all. But ignoring all that for a minute, the cost-benefit analysis indicates to me that there is plenty of room for bounces and whiplashes. Meaning: a period of lots of AI abstraction, followed by a period of abstraction consolidation/standardization and death.


Stack Overflow banned GPT and ChatGPT answers. Here's an important part of that policy: "The objective nature of the content on Stack Overflow means that if any part of an answer is wrong, then the answer is objectively wrong."

https://stackoverflow.com/help/gpt-policy

Every step in the evolution of programming has met with some degree of controversy, and I think this is normal. We can read in history books (and maybe hear first-hand from a few senior folks on HN) how there was concern about moving from assembly to the first high-level languages. There will probably come a day when AI-written software is sufficient for common use, though it might take a while.


> We can read in history books (and maybe first-hand knowledge from a few senior folks on HN) how there was concern about moving from assembly to the first high-level languages

That's an interesting point; I'd like to hear more about that from someone who was there.

The issue here seems like an extension of that, but an order of magnitude more complex. Going from assembly to a high-level language means being able to understand and trust another program, the compiler, but we're still in a world of high-level programming languages. An AI system writing code, especially if the output is a binary artefact and not source code written in a high-level language, is essentially a black box.

How then do we verify the work, apart from running tests? Running a disassembler? I guess the challenge there is to get something easily parseable by humans. Just thinking about current tech, I'd have the AI write some sort of bytecode, since reversing that (back to Java, for example, thinking of current languages) is easy, and you'll get something fairly readable. Decompiling machine code to C is likely to produce something fairly unreadable.
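
To make the bytecode-versus-machine-code contrast concrete (my own sketch, using Python as a stand-in for current tech): the standard-library dis module disassembles Python bytecode into mnemonics a human can still follow, which is roughly the level of inspectability being asked for here.

    # Minimal sketch: bytecode round-trips to something readable.
    # A native binary pushed through a decompiler rarely comes back this legible.
    import dis

    def clamp(x, lo, hi):
        return max(lo, min(x, hi))

    dis.dis(clamp)  # prints readable stack-machine mnemonics (LOAD_*, CALL*, RETURN_VALUE)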

I think my preoccupation is that I want to be able to read the whole program, not just trust the AI system and a battery of tests. So what then is the role of the language and what languages (if any) do we need in this future world?


To make sure stuff works right. If we forget how to do the stuff we automated, we'll basically wind up in Idiocracy.


I just take that as a given that this is our future. When nobody needs to work, few people will care to make sure stuff works right. E.M. Forster had it right.


I had the same thought!


A variation on the traditional role of a mentor / educational environment.



