Writing the code is not the real problem; understanding what code should be written is the real challenge.
So when AI can tease out my clients' requirements when they have no clue, or, even worse, are adamant they know what they need but are clueless, then I'll be concerned.
And furthermore, try getting regulatory approval in, say, the banking or machinery-safety sector for a system whose behaviour you can't explain in detail for any given circumstance.
This problem of explainability already exists: one of our local banks gave a talk at an AI meetup and said they had got results with AI/ML that they couldn't roll out, because they could not adequately explain to the regulator the exact results it would give in certain circumstances.
“Writing the code is not the real problem; understanding what code should be written is the real challenge.”
This seems like more of a surprise to 1990s-2020s IT designers and developers than it should be. The legacy of IT development from the 1910s to the 1980s turned out to be a grand exercise in free systems engineering/requirements development for the Personal Computer/Smartphone era. Microsoft/Apple/IBM PC/Blackberry/etc. rode on the backs of all those fielded legacy systems, which had to figure out from scratch what people wanted and how to meet those needs successfully. What was needed in the last 30 years had already been sketched out and validated in the Big Iron/landline era. Moore’s law and the like did open new doors and make new solutions possible from the 1990s on, but the requirements “walls and ceilings” had already been architected and validated for the post-90s players by their IT forebears.
AI tech does not have the same advantages. AI in 2020 is at the stage of development that computers were at in the 1950s. Back then, the big question was: is a computer just a big punched-card sorter, and if not, what else can it be? Answering that question in the 1950s-1980s took a lot of systems engineering and prototype development (in the guise of early production). The 90s-20s developers could simply exploit all that hard-won early insight, and so we got the “nanosecond Nineties”, where you could quickly develop new products with less fumbling around determining which needs were actually important to meet.
AI…welcome to the fumbling 1950s. Probably won’t need the pocket protectors, crew cuts and button-down shirts this time.
I think I see your point, but in my thought experiment, this future AI will be bootstrapped from the current accumulated knowledge of programming and software engineering. I don't think it would be back to square one. At the very least it will be able to write code in whatever human programming languages are around. I'm taking as given in this thought experiment that, given the requirements, humans don't need to write any code whatsoever.
My point then reduces to: what happens next? The machine probably doesn't need the abstraction layer of programming languages. It might even have its own layer(s) that make no sense to us, or it might write machine code directly. The question then hinges on how we, as humans, would validate such a system.
In my thought experiment, let's say that I tease out the requirements but the machine implements absolutely everything. That's the question I'm driving at. How do I verify that the software's functional and non-functional requirements have been implemented correctly, and is a programming language required for that so that I can check the machine's work?
>> let's say that I tease out the requirements but the machine implements absolutely everything.
>> How do I verify the software's functional and non-functional requirements have been implemented correctly
If you have the requirements sufficient to have the AI create code, wouldn't those same requirements be sufficient to generate tests and acceptance criteria?
Right, exactly. So, to my original point: are tests enough? Would code, apart from machine code, be irrelevant at this point? How do I inspect the program? Do I need to?
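To make the question concrete, here's a rough sketch of what verifying purely by tests could look like: treat the AI-built system as a black box and encode each requirement as an executable acceptance check. Everything named here (the generated_bank module and its open_account/transfer/balance functions) is invented for illustration, not any real API:

    # Black-box acceptance tests for a hypothetical AI-generated payments module.
    # The module (generated_bank) and its API (open_account, transfer, balance)
    # are invented for illustration; the assertions come only from the stated
    # requirements, never from reading the generated code itself.
    import time
    import pytest
    from generated_bank import open_account, transfer, balance  # hypothetical

    def test_transfer_moves_exact_amount():
        # Functional requirement: a transfer debits the sender and credits the
        # receiver by exactly the requested amount.
        a = open_account(initial=100)
        b = open_account(initial=0)
        transfer(a, b, 40)
        assert balance(a) == 60
        assert balance(b) == 40

    def test_overdraft_is_rejected():
        # Functional requirement: a transfer that would overdraw the sender must
        # fail and leave both balances unchanged.
        a = open_account(initial=10)
        b = open_account(initial=0)
        with pytest.raises(ValueError):
            transfer(a, b, 40)
        assert balance(a) == 10
        assert balance(b) == 0

    def test_transfer_latency_budget():
        # Non-functional requirement: a single transfer completes within 50 ms
        # on the acceptance-test hardware.
        a = open_account(initial=100)
        b = open_account(initial=0)
        start = time.perf_counter()
        transfer(a, b, 1)
        assert time.perf_counter() - start < 0.05

Whether that's enough is exactly the open question: tests like these pin down observable behaviour for the cases someone thought to write down, but say nothing about the circumstances nobody enumerated, which is precisely the explainability gap the regulator in the bank example was worried about.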