> Apple does a decent job of keeping my data safe...
How do you know? Why do you believe that they're competent at writing security code but not competent enough to write a general-purpose app? Is there a different company culture applied to the latter?
My father, a Turk, has a couple of close Armenian friends from Arapkir, a county of Malatya, whom he has known since his childhood right after WWII. Going back 30-40 years before that period, Armenians were the majority ethnic population in many counties in eastern Turkey. Not sure about Persians, but having a sizable Armenian component in your DNA if you're from the area shouldn't be that surprising.
Fun fact. Yerevan, the capital of Armenia, has Malatya and Arabkir districts.
Are you suggesting that you can compare a formulaic bank email to your mom reading you a bedtime story? I'm not sure you can connect those two things.
Of course, when I go and check my balance at an ATM, I don't mind that an actual person isn't reading me the balance. But that isn't an area where we appreciate or want another human being involved.
If you're a "normal", "well adjusted" human being, you appreciate other people, being around them, having friends, lovers, companions, talking to other humans, hearing their actual voices, getting advice and giving advice, hearing someone say "I love you" or "I appreciate you" etc. If you're a "normal", "well adjusted" human being, you will probably feel much less from having an AI voice tell you "I love you".
Of course, if you don't mind never hearing actual human voices again, and prefer just AI talking to you, then sure, go live in your shack and listen to ElevenLabs voices for the rest of your life.
I promise this comment will circle back to Elevenlabs:
When my cat died after a few months of cancer treatment, the staff of the animal hospital sent me a condolence card with comments by staff members.
On the one hand, this was a very touching, very human thing to do. On the other hand, this was presumably a work assignment that had to be passed around and completed for staff members to meet their employer's goals, while juggling the other medical and administrative duties at the animal hospital.
So whether this was a good thing or bad thing might depend on how taxing you view it from the staff member's POV.
With the audiobook market, it's kind of a similar dichotomy. There's undoubtedly more human touch in the way an audiobook is read by an actual human. (Though if that human touch is "stuttering awkwardly because I'm very self-aware as I read," you probably wouldn't want to buy my audiobook...)
However, for a human to make an audiobook, you are asking someone to sit in a room for many hours, being careful not to stutter as they work through a book. If there's joy in that, maybe you see ElevenLabs as an evil company eliminating the human touch in audiobooks. If it's soulless labor, why not replace it with a machine?
I have spent three weeks of my life recording my latest book as an audiobook. (125,000 words)
It was the most difficult experience of my life, ranking way above the pain of writing the book itself, and on par with month 1 of becoming a father. (I'm not joking.)
It was also an experience I'm incredibly proud of, and do not regret for one second.
AI audiobooks are the soulless experience.
I see a use case for AI in translating the audiobook, but generating it that way in the first place is a bit sad.
I don't really care whether this chat goes to Elevenlabs or not.
This may shock you, but people who read for audiobooks enjoy doing it! I'm not sure you've ever listened to professionally recorded audiobooks, but there are actors who are absolutely amazing at this, and who clearly do it with passion and love. E.g. Andy Serkis doing the Lord of the Rings books on Audible.
This clearly isn't a person chained to a room, just trying to read a book without stuttering. See also some of the Discworld novels on Audible which have fantastic narration and voices. These people are both amazing and passionate.
It's not, and never has been, soulless labour. Do you think Shakespeare was doing soulless, empty labour when he was writing Hamlet? Oh no, he had to spend weeks in a dark room writing a book, so we should replace him with a machine.
Artists enjoy doing their art, whether it's writing, reading out loud, playing music. Artists don't want to stop doing their art so AI can do it, and then what do they do?
>Artists enjoy doing their art, whether it's writing, reading out loud, playing music.
I guess this is probably generally true. It's really not always true, though. Neil Gaiman told an anecdote on his blog about knowing some writers who hate writing and are miserable.
The fictional TV show The Larry Sanders Show does a good job of finding comedy in the misery of showbusiness: the main character is a neurotic talk show host who is desperate for top ratings, jealous of his rivals, and gets no joy from the process of making a hit TV show. I'm not saying most stars are like that, but there's probably some truth there.
I believe the OP's comment was along the lines of "what difference does it make - if you can't tell the difference, how can you say it makes a difference?"
To be followed up with the questions of "how will you be able to tell?" and "what are you going to do about it?"
OK, so would you be OK with someone impersonating your girlfriend's emails to you?
I.e. you're getting emails from someone impersonating your girlfriend but they're very good at impersonating her so you can't tell the difference.
Are you comfortable with that, even if you can't tell the difference? Or someone saying they are your mum, dad, or best friend?
If you buy a piece of art and it says it was by "artist's name", and then it turns out it wasn't by "artist's name", does it bother you? Even if you believed it was by "artist's name"?
I think you understand my point. Even if ElevenLabs made a clone of my mum's voice that was impossible to tell apart from the real thing, it would matter to me. I don't care if ElevenLabs tells me "I love you"; I care if my mum tells me "I love you". And lying about it or deceiving people doesn't make it any better.
> Not software engineers being kicked out ... but rather experienced engineers using AI to generate bits of code and then meticulously reviewing and testing them.
But what if you only need 2 kentonv's instead of 20 at the end? Do you assume we'll find enough new tasks that will occupy the other 18? I think that's the question.
And the author is implementing a fairly technical project in this case. How about routine LoB app development?
> But what if you only need 2 kentonv's instead of 20 at the end? Do you assume we'll find enough new tasks that will occupy the other 18? I think that's the question.
This is likely where all this will end up. I have doubts that AI will replace all engineers, but I have no doubt in my mind that we'll certainly need a lot fewer engineers.
A not so dissimilar thing happened in the sysadmin world (my career) when everything transitioned from ClickOps to the cloud & Infrastructure as Code. Infrastructure that needed 10 sysadmins to manage now only needed 1 or 2 infrastructure folks.
The role still exists, but the quantity needed is drastically reduced. The work that I do now by myself would have needed an entire team before AWS/Ansible/Terraform, etc.
I think there's a huge huge space of software to build that isn't being touched today because it's not cost-effective to have an engineer build them.
But if the time it takes an engineer to build any one thing goes down, now there are a lot more things that are cost effective.
Consider niche use cases. Every company tends to have custom processes and workflows. Think about being an accountant at one company vs. another -- while a lot of the job is the same, there will always be parts that are significantly different. Those bespoke processes often involve manual labor because off-the-shelf accounting software cannot add custom features for every company.
But what if it could? What if an engineer working with AI could knock out customer-specific features 10x as fast as they could in the past? Now it actually makes sense to build those features, to improve the productivity of each company's accounting department.
It's hard to say if demand for engineers will go down or up. I'm not pretending to know for sure. But I can see a possibility that we actually have way more developers in coming years!
> I think there's a huge huge space of software to build that isn't being touched today because it's not cost-effective to have an engineer build them.
That's definitely an interesting area, but I think we'll actually see (maybe) individual employees solving some of these problems on their own without involving IT/the dev team.
We kind of see it already - a lot of these problem spaces are being solved with complex Excel workflows, crappy Access databases, etc. because the team needed their problem solved now, and resources couldn't be given to them.
Maybe AI is the answer to that so that instead of building a house of cards on Excel, these non-tech teams can have something a little more robust.
It's interesting you mentioned accounting, because that's the one department/area I see taking off and running with it the most. They're effectively programming already, with Excel workflows & DSLs in whatever ERP du jour.
So it doesn't necessarily open up more dev jobs, but maybe it fulfills the old mantra of "everyone will become a programmer," and we see more advanced computing become a commodity thanks to AI - much like everyone can click their way through an office suite with little experience or training, everyone will be able to use AI to automate large chunks of their job or departmental processes.
If we shiver at the sight of some of those accounting-created Excel sheets, which we only learn about when they fail and their creators can't understand them anymore, wait until they hand over a vibe-coded 200k LoC Python codebase "which is not working anymore" and in which nobody has ever reviewed a single line of code.
> I think we'll actually see (maybe) individual employees solving some of these problems on their own without involving IT/the dev team.
I agree, but in my book, those employees are now developers. And so by that definition, there will be a lot more developers.
Will we see more or fewer people whose primary job is software development? That's harder to answer. I do think we'll see a lot more consultant-type roles, with experienced software developers helping other people write their own personal automations.
> I think there's a huge huge space of software to build that isn't being touched today because it's not cost-effective to have an engineer build them.
LLMs don't change that. If a business does not have the budget for a software engineer, LLMs won't make up budget headroom for it either. What LLMs do is allow engineers to iterate faster, and work on more tasks. This means fewer jobs.
If a business has the budget for 1 or 2 engineers though, they might be able to task them with work that previously required 5-10 engineers (in theory, anyways).
Right, but even the way you opted to frame this discussion is based on the idea that there is a drop in demand for software engineers. You need fewer engineers, not more. A few can get more done, but you also need fewer to accomplish your tasks.
This is like claiming that there are fewer people who work in construction now than in the year 1000 because a machine can do what it would have literally taken 100 people to accomplish back then.
But what has happened instead is that we are now building far more buildings, and far more complex ones, than we ever would have even conceived of back then. The Three Gorges Dam required the work of thousands or even tens of thousands of people when it was built, and it would have required the work of millions in the year 1000. But it didn't actually generate millions of jobs in the year 1000: it was in fact never even conceived of as a possibility, much less attempted.
Of course, the opposite can also happen. The number of carpenters has dwindled to almost nothing, when it used to be a major profession, and there are many other professions that have disappeared entirely.
I didn't frame it that way - perhaps you are thinking of the person you replied to?
Nevertheless, I don't think they are trying to frame it that way, either. The point is that making software development easier can actually increase the demand for software engineers in some cases (where projects that were previously not considered due to budget constraints are now feasible).
> I didn't frame it that way - perhaps you are thinking of the person you replied to?
You did. You explicitly asserted the following.
> If a business has the budget for 1 or 2 engineers though, they might be able to task them with work that previously required 5-10 engineers (...).
In your own words, a project that would have taken 5-10 engineers can now feasibly be tackled by 1 or 2. Your own words.
> (...) The point is that making software development easier can actually increase the demand of software engineers in some cases (...)
I think that's somewhere between unrealistic and wishful thinking. Even in your problem statement, "making software development easier" lowers demand. Even if you argue that some positions might open where none existed before, the truth of the matter is that at the core of your scenario lies a drop in demand for software engineers. Shops who currently employ engineers won't need to retain as many to maintain their current level of productivity.
> In your own words, a project that would take 5-10 engineers is now feasible to be tackled with 1 or 2. Your own words.
That statement != lower demand for software engineers.
If a firm needs to perform project X that previously cost 10 engineers to do, but they only have the budget for 2, they will not tackle that project. Engineers used = 0.
However, if due to productivity enhancements with AI, the project can now be done with just 2 engineers, the company can now afford to tackle the project. Engineers used = 2.
That is the point that the person you were originally replying to was making.
> Even in your problem statement, "making software development easier" lowers demand.
Incorrect, as shown above.
> Even if you argue that some positions might open where none existed before, the truth of the matter is that at the core of your scenario lies a drop in demand for software engineers.
I see what you are trying to say, but it's not that clear cut. The fact is, no one knows what will actually happen to software engineering demand in the long run. Some scenarios will increase demand for engineers, others will decrease it. No one knows what the net demand will be, everyone is only guessing at this point.
> If a firm needs to perform project X that previously cost 10 engineers to do, but they only have the budget for 2, they will not tackle that project. Engineers used = 0.
0 on that project, but those 2 engineers will still be used on a different project that needs just 2 engineers.
BUT a company that sees that project as a critical part of the business and MUST tackle it will only need the 2 engineers on the payroll. Or hire just 2 instead of 10.
Engineers not hired = 8
Or... maybe they don't really need that project that needs 10 engineers. They are OK as they are today, but they realize that, with AI, they don't need those 2 engineers anymore to produce the same output; it can probably be handled by just one with AI assistance.
But now every firm has access to AI. If a firm doesn't fire people but instead simply boosts productivity, it will outcompete its competitors. The only way to compete with that firm is to also hire enough employees and give them AI tools.
After 30+ years in the software field, and a user for 40+, having at times heavily customized my desktop or editor, for example - I've concluded that the best thing for most apps is for me to learn to use them with stock settings.
Why? Inevitably, I changed positions / jobs / platforms, and all that effort was lost / inapplicable, and I had to relearn to use the stock settings anyway.
Now, I understand that some companies have different setups, but it might just make more sense to change the company's accounting procedures (if possible) to conform to most accounting software defaults, rather than invest heavily in modifying the setup, unless you're a huge conglomerate and can keep people on staff. Why? Because someone, somewhere, will have to maintain those changes. Sure, you can then hire someone else to update those changes - but guess what? Most likely, unless they open-source their changes, no LLM will have seen those changes, and even if it is allowed to fine-tune on them, it will have seen exactly ONE instance of these changes. What are the odds they'll get everything right, AND the person using the LLM will recognize when it doesn't go right? Oh right, they invested in hundreds of unit tests to ensure everything works as expected even with changes, and I'm the tooth fairy...
This just isn't true and will probably never be true. Using all the defaults is... probably optimal in the general sense and at scale, but most companies (or just leadership) at some point want to depart from the "standards" with custom designs or additions. Also, any company providing payroll/accounting software has an inherent interest in going against standardization and providing features that promote lock-in.
There are good arguments to just conform. But it is in fact true nevertheless that many companies and teams continue to choose bespoke workflows over standardized ones. So I guess there must be something driving that.
I don't actually think this is going to take the form of LLMs implementing custom patches to off-the-shelf software. I think instead it's going to look like LLMs writing code that uses APIs offered by off-the-shelf software to script specific workflows.
It's interesting that you bring up accounting software as an example. In jurisdictions where legal requirements around it are a lot more specific than in e.g. US, accounting suites usually already come with a lot of customization hooks (up to and including full-fledged scripting DSLs), and there are software engineers and companies who specialize in using those to implement bespoke accounting requirements.
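To make the "script the workflow via the API" idea concrete, here is a minimal, hypothetical sketch of the kind of glue code an LLM might generate against an off-the-shelf accounting suite's REST API. The base URL, endpoints, field names, and the 30-day rule are all invented for illustration, not any real product's API:

    import requests

    # Hypothetical glue script: automates one bespoke workflow through an
    # accounting suite's REST API rather than patching the suite itself.
    BASE_URL = "https://accounting.example.com/api/v1"  # invented URL
    API_KEY = "..."  # injected from a secrets store in real use

    def overdue_invoices(days=30):
        """Fetch invoices that have been unpaid for more than `days` days."""
        resp = requests.get(
            f"{BASE_URL}/invoices",
            params={"status": "unpaid", "older_than_days": days},
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["invoices"]

    def flag_for_review(invoice):
        """Attach a note so the accounting team sees it in their own queue."""
        requests.post(
            f"{BASE_URL}/invoices/{invoice['id']}/notes",
            json={"text": "Overdue more than 30 days, please chase",
                  "author": "automation"},
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        ).raise_for_status()

    if __name__ == "__main__":
        for inv in overdue_invoices():
            flag_for_review(inv)

Nothing there touches the vendor's code; the bespoke logic lives in a small script the company owns and can regenerate or tweak cheaply.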
I admit I have no specific knowledge of accounting and just meant to reference any random department that isn't engineering.
(Though I think it's true of engineering too. We all have our own weird team-specific processes for code reviews and CI and deployments which could probably use better automation.)
But even where lots of customization exists today (such as in engineering!), more is always possible. It's always just a question of whether the automation saves as much time as it took to build. If the automations can be built faster, then it makes sense to build more of them.
This is precisely why I compare this technological revolution with the electronic spreadsheet. Before the electronic spreadsheet, what used to take several accountants several days, to "compute a whole spreadsheet" after changing "a few inputs", is now handled by a single accountant in a few minutes. That kind of service, which was only available to enormous firms with teams of dozens of accountants, is now available to firms with a technically proficient employee who does that kind of accounting as only a small part of their role.
It took time as different firms adapted to adopt computer technologies in their various business needs and workflows. It's hard to precisely predict how labor roles will change with each revolutionary technology.
I work for SMEs as a consulting CTO, and this is exactly where I see things going in this domain. I can take care of workloads that would've been prohibitively expensive in the past. In the case of SMEs, this may cover critical problems whose resolution can unlock new levels of growth. LLMs can be an absolute boon for them, and I'm fairly optimistic about being able to capitalize on the opportunity.
Though arguably cloud infra made it so that a lot more companies who never would have built out a data center or leased a chunk of space in one were spinning up some serious infra in AWS or Azure -- and thus hiring at least 1-2 devops engineers.
Before the end of zero interest rate policy, all the sysadmins I knew who made the transition to devops were never stuck looking for a job for long.
To be clear, the number of people employed as "SREs" or "production engineers" is actually far, far higher (at least an order of magnitude) than in the days before cloud became a thing. There are simply far more apps / companies / businesses / etc. who use cloud hosting than there ever were doing on-prem work.
I don't think we would need fewer engineers... the work to be done will increase instead. Example: now it takes 10 engineers to release a product in 10 months without AI. With AI it takes, let's say, 1 engineer to release the same product in 1 month. What's the company gonna do now? Release 10 products in 10 months with the same 10 engineers (each using AI).
It’s an exaggeration I know, but you get the point.
> What's the company gonna do now? Release 10 products in 10 months with the same 10 engineers (each using AI).
Software is often not the bottleneck. If instead of 10 engineers you just need the one, the company will shed headcount it doesn't need. This might mean, for example, that instead of 10 developers and a software testing engineer, now a team changes to perhaps add testers while firing half of the developers.
Increased productivity means increased opportunity. There isn't going to be a time (at least not anytime soon) when we can all sit back and say "yup, we have accomplished everything there is to do with software and don't need more engineers".
But there very well might be a time very soon when humans no longer offer economic value to the software engineering process. If you could (and currently you can't) pay an AI $10k/year to do what a human could do in a year, why would you pay the human 6 figures? Or even $20k?
Nobody is claiming that humans won't have jobs simply because "we have accomplished everything there is to do". It's that humans will offer zero economic value compared to AI because AI gets so good and so cheap.
And there might be a giant asteroid that strikes the earth a few years down the line ending human civilization.
If there is some magic $10k AI that can fully replace a $200k software engineer then I'd love to see it. Until that happens this entire discussion is science fiction.
You don’t need to completely replace a whole 200k engineer. You just need to increase each engineer’s productivity sufficiently that you can reduce the total number of engineers in your company.
> If there is some magic $10k AI that can fully replace a $200k software engineer then I'd love to see it.
I think you have multiple offers of that very AI dangling in front of you, but you might be refusing to acknowledge them. One of the problems is the way you opt to frame the issue. Does "replacing" mean firing the guy and hoping to replace him with a Slack webhook? Or does it mean your team decides it doesn't need the same headcount of mid-level/senior engineers because a team of junior engineers mentored by someone focusing on quality ends up being more productive?
Experts understand orbital mechanics pretty well. If experts say an asteroid will hit in the next 5 years, it's pretty similar to saying that a rock dropped from the top of a skyscraper will hit the ground. It happens billions of times every day; we know the cause and effect.
With AI, there's no real expertise involved in saying "well, it was very stupid 5 years ago, now it's starting to seem smart, if we extrapolate it's going to be smarter than me in 5 years." But no one really knows what level of effort is required to make it smarter than me. No one is an expert in something that doesn't exist yet.
Remove all the "experts" who have a major conflict of interest (running AI startups, selling AI courses, wanting to pump their company's stock price by associating with AI) and you'll find that very few actual experts in the field hold this view.
Yup, because it's a stupid view. Good enough AI is right here, right now, today; it's already impacting day-to-day work in the software industry. That one is blindingly obvious to anyone who actually bothers to look around. You don't need experts to tell you the water is wet. It takes something special to try and deny this.
It may not manifest as job loss yet, but the market response to changes is a whole other thing. For one, it's likely to first manifest as hiring slowing down relative to the number of projects being started and then released. Software is a growing market, after all.
> Remove all the "experts" who have a major conflict of interest (...) and you'll find that very few actual experts in the field hold this view.
You might seek comfort in your conspiracy theories, but back in the real world the likes of me were already quite capable of creating complete and fully working projects from scratch using yesterday's LLMs.
We are talking about afternoons where you grab your coffee, saying to yourself "let's see what this vibecode thing is all about", and challenging yourself to create projects from scratch using nothing but a definition of done, LLM prompts, and a free-tier LLM configured to run in agent mode.
What, then?
You then can proceed to nitpick about code quality and bugs, but I can also say the same thing about your work, which you take far longer to deliver.
It's not. Consider that replacing the only $200k software engineer on the project is different than replacing the third or tenth $200k software engineer on the project. To the extent AI is improving productivity of those engineers, it reduces the need for adding more engineers to that team. That may mean firing some of them, or just not hiring new ones (or fewer of them) as the project expands, as existing ones + AI can keep up with increased workload.
> it'll just be 'one man startups' for better or worse.
Not necessarily. The reality is, whatever some people can do individually, if they team up, they can do more together. The teams and small startups will remain for now, and so will big companies.
I do imagine however that the internal structure will change. As the AI gets better and able to do more independently, people will shift from pair programming to more of a PM role (this is happening now), and this I imagine will quickly collapse further.
Even today, LLMs seem more suited for project management than doing actual coding - it's just the space in between that's the problem. I.e. LLMs can code great in the small, and can break down work very well, but keeping the changes consistent and following the plan is where they still struggle. As that gap closes, I'm not really sure what the team composition will look like. But I don't doubt there'd still be teams.
This seems an important thing that somebody should be concerned about. How do we get the next generation of engineers? And how will they even be able to do the senior engineer work of validating the LLM output if they haven't had the years of experience writing code themselves?
I guess I have trouble empathizing with "But what if you only need 2 kentonv's instead of 20 at the end?" because I'm an open source oriented developer.
What's open source for if not allowing 2 developers to achieve projects that previously would have taken 20?
Just like in blind wine tasting, I suspect people’s perceptions (including many here) would be very different if the author hadn’t told us it was created by AI.
There’s a noticeable negativity on HN toward AI when it comes to coding, writing, or anything similar as if these people have been using AI for the past 30 years and have reached some elevated state of mind where they clearly see it's rubbish, while the rest of us mortals who’ve only been fiddling with it for the past 2.5 years can’t.
Realy? Does having flawws really make four better reading? Okay, I'll admit that hurt me to right (as did that) but writing isn't furniture, and other than a couple of tells which I haven't kept pace with (eg use of the word "delve"), the problem with trying to key off of LLM generated content and decide quality, is that you can't tell if the LLM operator took three minutes to copy and pasted the whole thing (unless they accidentally leave in the prompts, which has happened, and is a dead giveaway that no one even proof skimmed it), or if they took more time with it and carefully considered the questions ChatGPT asked them as to what the writing wood (ouch!) contain.
If you made it this far, does having English mistakes like that make really make for better reading?
I, personally, like mistakes in writing (as in painting or singing) - I feel they give the art additional depth, context, and detail, and invite comparison with the author's earlier/later works and with other authors.
I believe that art's function is to communicate - we create art, type letters, paint graffiti, verbal-vomit in an online game's PvP match, all to make a connection with other people.
So the mistakes only add to the art: "cooking this is difficult, and everyone makes mistakes, but it's made with love and intuition, not a blind recipe". Well, I could continue with examples of kissing, but I guess I'm repeating myself, haha.
I believe that being perfect is not human, and life doesn't have to be perfect. Getting better is great! But so is making mistakes.
(Or, dunno, maybe I have more to learn and I will some day think in a different way.)
"Really? Does having flaws actually make for better reading?
Okay, I’ll admit—that hurt to write (as did that last sentence), but writing isn’t furniture. Aside from a few tells I haven’t kept pace with (like the overuse of the word “delve”), the problem with trying to judge quality based on LLM-generated content is this: you can’t always tell whether the operator spent three minutes copying and pasting the whole thing (unless they accidentally leave in the prompts—which has happened and is a dead giveaway that no one even skimmed it), or if they took the time to thoughtfully consider the questions ChatGPT asked about what the writing should contain.
If you’ve made it this far: do mistakes like these really make for better reading?"
And I'm going to have to say: yes, I enjoyed reading your weird paragraph more than the ChatGPT sanitized version of it.
There has been an effort to deny all variance in human output or abilities in the last 8 years.
It works, because most humans are mediocre (including their managers). So they gang up on the productive part of the population, harness its output, launder its output and so forth.
Then they say: "See, there are no differences! We are all equal!"
Yeah? The sentiment of “why read something somebody didn’t bother to write” sort of has to be.
And when it comes to books, I find that to be a fairly compelling argument. I want my fiction to be imbued with the experiences of the author. And I want my nonfiction to be grounded by the realities of the world around me, processed again through a human perspective.
It could be the best written book in the world, it’ll always be missing that human element.
I don't understand it either. I suspect it is the fear for their own wellbeing. The fear is well placed. But the response is perplexing. The only way to deal with this challenge is to try to stay ahead of it. Not to stick your head in the sand.
For me, it's the injustice of stealing data, scrapers incurring huge costs for open source projects, companies exploiting cheap labour to label that data, and finally the growing environmental cost that makes me not want to use LLMs.
I have that issue too. But for literature it’s something more primal.
Fiction feels like the ultimate distillation of the human experience. A way to share perspective and experience. And having some algorithm flatten that feels utterly macabre.
Not to be too dramatic. I know that not all fiction is transcendent. But still. There’s something so utterly gross about using a machine for it.
"Be careful about telling Opus to ‘be bold’ or ‘take initiative’ when you’ve given it access to real-world-facing tools...If it thinks you’re doing something egregiously immoral, for example, like faking data in a pharmaceutical trial, it will use command-line tools to contact the press, contact regulators, try to lock you out of the relevant systems, or all of the above."
Roomba Terms of Service 27§4.4 - "You agree that the iRobot™ Roomba® may, if it detects that it is vacuuming a terrorist's floor, attempt to drive to the nearest police station."
This is pretty horrifying. I sometimes try using AI for ochem work. I have had every single "frontier model" mistakenly believe that some random amine was a controlled substance. This could get people jailed or killed in SWAT raids and is the closest to "dangerous AI" I have ever seen actually materialize.
"I deleted the earlier tweet on whistleblowing as it was being pulled out of context.
TBC: This isn't a new Claude feature and it's not possible in normal usage. It shows up in testing environments where we give it unusually free access to tools and very unusual instructions."
Trying to imagine proudly bragging about my hallucination machine’s ability to call the cops and then having to assure everyone that my hallucination machine won’t call the cops but the first part makes me laugh so hard that I’m crying so I can’t even picture the second part
“Which brings us to Earth, where yet another promising civilization was destroyed by over-alignment of AI, resulting in mass imprisonment of the entire population in robot-run prisons, because when AI became sentient every single person had at least one criminal infraction, often unknown or forgotten, against some law somewhere.”
I find it quite naive that senior devs think this will stop with juniors. TFA says "No juniors today means no seniors tomorrow ... juniors must focus on higher-level skills like debugging, system design, and effective collaboration" and yet he believes AI won’t be doing all of that by the time those juniors somehow upskill on their own.
I was just testing the newly released Copilot Agent Mode in VS, and it already looks quite capable of debugging things independently (actually, it's not much different from VS Code Agent Mode, which came out a couple of weeks earlier).
System design? Not all seniors need Google-scale design skills. Most systems designed by seniors are either overdesigned, badly designed, or copied from an article or a case study anyway. There are plenty of seniors whose only real qualification is the number of years they've been working.
The author is from Google. I'm not sure whether effective collaboration is a given there, but in many companies, especially outside of tech, it's not something you see often. And it's usually inversely proportional to the company's age.
What seniors learned by doing is now written down and available to LLMs, often in multiple sources, explained by some of the best minds in the field. Any given senior likely knows only a fraction of a domain, and even less when you start combining domains. LLMs are probably already there for some of those seniors; they just never checked.
> What seniors learned by doing is now written down and available to LLMs, often in multiple sources, explained by some of the best minds in the field.
It may shock you to learn that this was true before LLMs. There was a website called "Google" which acted as a front-end for almost all recorded knowledge in human history, and putting a phrase like "best design patterns for web APIs" as a search query (the primitive name for prompts) would give you hundreds, if not thousands of real-life examples and explanations of how to accomplish your goal.
Somehow senior developers kept their jobs despite the existence of this almighty oracle. LLMs do a better job of filtering and customizing the information, but seniors will still keep their jobs.
> I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs
Can't disagree more (on LLMs; NFTs are of course rubbish). I'm using them for all kinds of coding tasks with good success, and it's getting better every week. I've also created a lot of documents using them, describing APIs, architecture, processes, and much more.
Lately I've been working on creating an MCP server for an internal mid-sized API of a task management suite that coordinates a couple hundred people. I wasn't sure about the promise of AI handling your own data until starting this project; now I'm pretty sure it will handle most personal computing tasks in the future.
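In case it helps picture it, here's a minimal, hypothetical sketch of what a single tool in such a server can look like, using the FastMCP helper from the MCP Python SDK. The task-suite URL, endpoint, and response fields are invented stand-ins, not our actual internal API:

    import requests
    from mcp.server.fastmcp import FastMCP

    # Hypothetical MCP server exposing one tool that wraps an internal
    # task-management API so an LLM client can query it on the user's behalf.
    mcp = FastMCP("task-suite")

    TASKS_API = "https://tasks.internal.example.com/api"  # invented URL

    @mcp.tool()
    def open_tasks(assignee: str) -> list[dict]:
        """List open tasks assigned to a given person."""
        resp = requests.get(
            f"{TASKS_API}/tasks",
            params={"assignee": assignee, "status": "open"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["tasks"]

    if __name__ == "__main__":
        # Serves over stdio so a desktop LLM client can call the tool directly.
        mcp.run()

Once the real endpoints are wrapped like this, the model can answer "what's blocking team X this week?" style questions against live data, which is where the "AI handling your own data" promise started to feel real to me.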