Hacker News

Did AI write all these comments? AI is turning me into a conspiracy theorist. I keep seeing "AI is like having a team of 3-4 people" or "doing the work of 3-4 people" type posts everywhere lately, like it's some kind of meme. I don't even know what it means. I don't think you're saying you've 4x'd your productivity? But maybe you are?


Best I can tell, it’s resulting in less churn, which isn’t the same as work getting done faster. Maybe it’s a phenomenon unique to engineering, but what I’m observing isn’t necessarily work getting done faster — it’s that a smaller number of people are able to manage a much larger footprint because AI tools have gotten really good at relaying existing knowledge.

Little things that historically would get me stuck as I switch between database work, front-end, and infrastructure are no longer impeding me, because the AI tools are so good at conveying the existing knowledge of each discipline. So now, with a flat org, things just get done — there’s no need for sprint masters, knowledge-sharing sessions, or waiting on PR reviews. More people means more coordination, which ultimately takes time. In some situations that’s unavoidable, but in software engineering, most of the patterns, tools, and practices are well established; it’s just a matter of using them effectively without making your head explode.
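The "more people means more coordination" point echoes the classic observation (often attributed to Brooks) that pairwise communication paths grow quadratically with team size. A minimal sketch, purely illustrative and not from the thread:

```python
# Hypothetical illustration: pairwise communication paths in a team
# of n people grow as n*(n-1)/2, so coordination cost outpaces headcount.
def communication_paths(n: int) -> int:
    """Number of distinct pairs in a team of n people."""
    return n * (n - 1) // 2

for size in (2, 4, 8, 16):
    print(size, communication_paths(size))
# 2 -> 1, 4 -> 6, 8 -> 28, 16 -> 120
```

Doubling the team from 8 to 16 more than quadruples the channels, which is the overhead a smaller AI-assisted team avoids.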

I think this relay of knowledge is especially evident when I can’t tell an AI comment from a human one in a technical discussion — a kind of modern Turing Test, or Imitation Game.


I'm not saying anything that hasn't been said a thousand times before. But I find it's evident when I'm getting it to do something I consider myself good at. And that's what's worrying to me. I work in DevOps, and there are a couple of tools I'm really good at. If I just trusted the output, all my configuration would be outdated and set up like a blog example, with all the issues and shortcuts those take (and I see exactly that in the PRs I get from team members who rely heavily on Claude). But if you didn't know the tool, it would look fine. So when I code with the agent, it all looks really good, but I must be missing things, right? For scripts that have no impact if they fail, I LLM the shit out of them.


No need for reviewing pull requests generated by AI? Lol. Lmao, even.


At best I think it means you can have the AI bot review your PR. Even then, it feels like a way to reinforce one's own learned behavior rather than help. These tools do a great job at catching bugs and suggesting coding conventions, but the idea that they can review your ideas is risible.



