Developers who get excited by agentic development put out posts like this. (I get excited too.)
Other developers tend to point out objections in terms of maintainability, scalability, overly complicated solutions, and so on. All of which are valid.
However, this part of AI evolves very quickly. So given these are known problems, why shouldn't we expect rapid improvements in agentic AI systems for software development, to the point where software developers who stick with the old paradigm will indeed be eroded in time? I'm genuinely curious because clearly the speed of advancement is significant.
> Other developers tend to point out objections in terms of maintainability, scalability, overly complicated solutions, and so on. All of which are valid.
I've spent the bulk of my 30+ year career in various in-house dev/management roles and at small to medium sized digital agencies or IT consulting shops.
In that time I have worked on many hundreds of projects, probably thousands.
There are maybe a few dozen that were still in production use for more than 5 years without a major rewrite done or on the way.
I think for a huge number of commercial projects, "maintainability" is something that developers are passionate about, but that is of very little actual value to the client.
Back in the day when I spent a lot of time on comp.lang.perl.misc, there was a well-known piece of advice: "always throw away the first version". My career-long takeaway from that has been to always race to a production-ready proof of concept quickly enough to get it in front of people - ideally the people who are then spending the money that generates the business profits. Then, if it turns out successful, rewrite it from scratch incorporating everything you've learned from the first version - do not be tempted to continually tweak the hastily written code. These days people call something very like that "finding product-market fit", and a common startup plan is to prove a business model, and then sell or be acquired before you need to spend the time/money on that rewrite.
Anecdotally, I find early mover advantage to be overrated (ask anyone who bought Betamax or HD-DVD players). It is significantly cheaper – on average – to exploit what you already know and learn from the mistakes of other, earlier movers.
> However, this part of AI evolves very quickly. So given these are known problems, why shouldn't we expect rapid improvements in agentic AI systems for software development, to the point where software developers who stick with the old paradigm will indeed be eroded in time?
Because writing code has always been the easy part. A senior isn't someone who's better at writing code than a junior - they might well be worse at writing code. AI can now do the easy part, sure. What grounds does that present for believing that it's soon going to be able to do the hard part?
I don’t know what level of experience you have with agentic AI, but the frontier models are also really good at things like product management and data modeling. You can start with a description of the problem and end up with a really solid design plan that you can then give the AI to implement.
So yeah, if you’re starting with a “write me code that does X Y Z” then you aren’t getting the most out of these tools, because you’re right, that’s not the hard part.
Given that the problems are known and given that things are changing rapidly, we should expect them to be solved eventually (by some force)? No, I think the burden of proof is on whoever claims those problems will be solved - they can't just fall back on the never-changing answer of “but why not?”[1]
All I see from “excited” developers is denial that there is a problem. But why would it be a problem that you have to review code generated by a program with the same fine-tooth comb that you use for human review?
[1] Some things change fast, some things never change at all.
In short: yes. It can be done. Clean, almost limitless energy, funded in a way to provide effectively free electricity for ordinary people. Restrictions would have to be in place to prevent true excess, but regulations already handle such matters in other areas.
The ambient vibe of our time, and here on HN, is often really pessimistic. I don't believe such pessimism is realistic. Commercial grade fusion power will come, and we should push very hard to make it happen. It will change the equations at the core of the economy and open up whole new paths for technology -- far beyond the pure digital.
Asking for someone else: is the NIW category still backlogged? If the goal is to get a green card within the STEM-OPT extension timeline of 2 years, for someone who has very high achievements that fit either NIW or EB1A categories, which of these is likely a better option? I'm hearing some rumors that NIW can happen faster these days.
The NIW (EB2) category is still backlogged, and the standard as applied by USCIS has changed significantly; I would argue it now isn't much easier than the EB1A, which used to be one of the reasons to pursue an NIW. With the changing standard and the worsening backlogs, the EB1A, although a higher standard, is the much better option and probably worth pursuing aggressively.
I think it's partly because we all just have way too much to do. Every day. All day. And the harder you work, the more you seem to have to do - on top of the cognitive processing of all the ambient events of our time, which is a heavy load just by itself.
Most of the time, AI tools promise to be timesavers. So it's natural many folks look for shortcuts. We're simply overloaded, partly due to current situations generated by existing machine learning tools deployed elsewhere in the system.
Nothing is ever ideal, but centuries of labor laws have moved us in the right direction. A 4-day workweek would do wonders while still leaving plenty of work to be done.
Your statement is also why I fear this supposed promise that "AI will do all the work, society won't need jobs!". I don't think we're getting the post-work utopia that technocrats love to promise.
I didn't use it much growing up, since they moved west when I was young, but it turns out that "y'all" is surprisingly nifty: a gender-neutral, second-person plural pronoun for a group of people. So I picked it up more in adulthood and put it into my daily vernacular.
It's a term commonly used in some of the "Southern" US states.
And also by Indian Christians (Catholics) in some parts of India, such as Mumbai and nearby areas like Pune and Goa, along or near the western coast of India. I partly grew up there and did some of my schooling there; that's how I know this.
I don't know if there is any historical connection between the usage of that phrase (y'all) in those two areas (of the US and India).
It could have been, via (US) Christian missionaries coming here. There were, and still are, some of them in some parts of India, going back more than 100 years. Again, I know this from experience.
That general area of India, and some other parts, do have a relatively high percentage of Christians.
Wish the UK had its own version of the SBIR/STTR program too -- and open to all, not just Oxbridge and other elites. Mandatory small-business set-asides, especially in large defense procurement, have outsized effects on innovation.
Are marriage-based green card applications still being processed at some pace when the petitioner's spouse is also a green card holder? I hear these days this category is ultra slow with no option to expedite.
This is the intention of tech transfer: to have private-sector entities commercialize the R&D.
What is the alternative? National labs and universities can't commercialize in the same way, including due to legal restrictions at the state and sometimes federal level.
As long as the process and tech transfer agreements are fair and transparent -- and not concentrated in say OpenAI or with underhanded kickbacks to government -- commercialization will benefit productive applications of AI. All the software we're using right now to communicate sits on top of previous, successful, federally-funded tech transfer efforts which were then commercialized. This is how the system works, how we got to this level.
> As long as the process and tech transfer agreements are fair and transparent
I think that's the crux of the point made by the person you're responding to. He does not believe it will be done fairly and transparently, because these AI corporations will have broad control over the technology.
If so, yes indeed, fair point by him/her. It's up to ordinary folks like us to push against unfair tech transfer because yes, federal labs and research institutions would otherwise provide the incumbents an extreme advantage.
Having been in this world though, I didn't see a reluctance in federal labs to work with capable entrepreneurs with companies at any level of scale. From startup to OpenAI to defense primes, they're open to all. So part of the challenge here is simply engaging capable entrepreneurs to go license tech from federal labs, and go create competitors for the greedy VC-funded or defense prime incumbents.
> I didn't see a reluctance in federal labs to work with capable entrepreneurs
My reluctance is that when we talk about fraud, waste, and corruption in government, this is where it happens.
The DoD's budget isn't $1T because they are spending $900B on the troops. It's $1T because $900B of that ends up in the hands of the likes of Lockheed Martin and Raytheon to build equipment we don't need.
I frankly do not trust "entrepreneurs" to not be greedy pigs willing to 100x the cost of anything and everything. There are nearly no checks in place to stop that from happening.
Not that it fully takes away from your argument, but a lot of that high price tag is also due to requiring much better controls on materials to prevent supply chain attacks, a la getting beepers with explosives into the hands of all your leadership.
Yet that's the exact opposite of what's been done with something like the F-35[1], with widely distributed production, typically among countries seen as US allies (at least prior to this year), but with key components still made in China.[2] And the problem is even worse in the larger defense industry.[3] Americans pay an immense premium for a military-industrial complex whose PR is largely divorced from reality; for example, the USS Gerald R. Ford, commissioned in 2017, still isn't combat-ready.[4]
All the more reason to bring such initiatives in-house and not outsource them.
You can hope that a defense company is doing the right things in terms of supply chain attacks, but that's a pretty lucrative corner to cut. They'd not even need to cut it all the time to reap benefits.
The only other alternative is frequent audits of the defense company, which is expensive and wouldn't necessarily solve the problem.
Reasonably, there should be a two-way exchange? It might be okay for companies to piggyback on research funds if that also means that more research insight enters public knowledge.
I’d be happy if they just paid their fair share of tax and stopped acting like they were self-made when they really just piggybacked on public funds and research.
There’s zero acknowledgment or appreciation of public infra and research.
What do you mean universities can't commercialize in the same way (I may have misunderstood what you meant)? Due to Bayh-Dole, universities can patent and license the tech they develop under contract for the government, often helping professors start up companies with funding while simultaneously charging those companies to license the tech. This is also true for national labs run by universities (Berkeley and a few others); the other labs are run under contract by external for-profit companies.
If this were just about tech transfer, in which private firms commercialize public research, I agree. But that's not what Jason Pruet is saying. In the Q&A he notes:
> “Why don’t we just let private industry build these giant engines for progress and science, and we’ll all reap the benefits?” The problem is that if we’re not careful, it could lead us to a very different country than the one we’ve been in.
This isn't about commercialization, it's about control. When access to frontier models and SOTA compute is gated by private interests, academics (and the public) risk getting locked out. Not because of merit, but because their work doesn't align with corporate priorities.
Yes indeed, what a travesty. :) Or they may study misinformation, another affront to civilization itself, because of course we know exactly how it works in this ultra-fast AI era with several competing superpowers.
Often, when the gods bring great prosperity as a gift for men
They do so not out of goodwill towards them
But so that their ruin may be more conspicuous.