
Also it really baffles me how many people are actually on the hype train. It's a lot more than the crypto bros back in the day. Good thing AI still can't reason or innovate stuff. Also, leaking credentials is a felony in my country, so I won't ever attach it to my codebases.

I think the issue is that folks talk past each other. People who find coding agents useful or enjoyable are labeled “on the hype train,” and folks for whom coding agents don't work, or don't fit their workflow, are considered luddites. There are an incredible number of contradicting claims and predictions out there as well, and I believe what we see is folks projecting their reaction to some amalgamation of them onto others. I see a lot of “they” language, and a lot of viral articles about business leadership “shoving AI down our throats,” and it becomes a divisive issue, like the American political scene, with really no one having a real conversation.

I think the reason for the varying claims and predictions is that developers have wildly different standards for what constitutes working code. For developers with a lower threshold, AI is like crack, because gen AI's output is similar to what they would produce, and for them it really is a 10x speedup. For others, especially those who have to fix and maintain that code, it's more like a 10x slowdown.

Hence, in the same thread, you have one developer who claims that Claude writes 99% of their code and another who finds it totally useless. And of course others who are somewhere in the middle.


There's also the effect of different models. Until the most recent models, I felt it was sometimes still easier to write concise algorithms myself (a good algo can be as concise as, or more concise than, a lossy prompt) and leave the "expansion/repetitive" boilerplate code to the LLM. At least for me, the latest models do feel like a "step change" in that the problems can be bigger and/or require less supervision on each problem, depending on the tradeoff you want.

Have you considered that it's a bit dismissive to assume that developers who get value out of AI tools necessarily approve of worse code than you do, or have lower standards?

It's fine to be a skeptic. Or to have tried out these tools and found that they do not work well for your particular use case at this moment in time. But you shouldn't assume that people who do get value out of them are not as good at the job as you are, or are dumber than you are, or slower than you are. That's just not a good practice and is also rude.


I never said anything about anyone being worse or dumber, and definitely not slower. And keep in mind that "worse" is subjective: if something doesn't require edge-case handling or strict correctness, and bugs can be tolerated, then output with those properties isn't worse, is it?

I'm just saying that since there is such a wide range of experiences with the same tools, it's probably likely that developers vary on their evaluations of the output.


Okay, I certainly agree with you that different use cases can dictate different outcomes when using AI tooling. I would just encourage everyone who thinks similarly to be cautious about assuming that someone who experiences a different result with these tools is less skilled or dealing with a less difficult use case, like one that has no edge cases or a greater tolerance for bugs. It's possible that this is the case, but it is just as possible that they have found a way to work with these tools that produces excellent output.

Yeah, I agree. It doesn't really have to do with skill or different use cases; it's just what your threshold is for "working" or "good".

Hard to have a conversation when critics of LLM output so often receive replies like "What, you used last week's model?! No, no, no, this one is a generational leap."

Too many people are invested in AI's success to have a balanced conversation. Things will return to normal after a market shakeout takes down a few of the larger AI companies.


It's all a hype train though. People still believe the AI-will-bring-utopia bullshit while the current infra is being built on debt. The only reason it still exists is that all these AI companies believe in some kind of revenue outside of subscriptions. So it's all about:

Owning the infrastructure and enshittifying it (ads) once enough products are based on AI.

It's the same chokehold Amazon has on its vendors.


Your credentials shouldn't be in your codebase to begin with!

.env files are a thing in tons of codebases

But that's at runtime; secrets are going to be deployed in a secure manner after the code is released.

.env files are used during development as well, and for some things, like PayPal, you don't have to change the credentials, you just enable sandbox mode. If I had some LLM attached to my codebase, it would be able to read those credentials from the .env file (see the sketch below).

This has nothing to do with deployment. I never talked about deployment.
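
To make the scenario concrete, here is a minimal dev-time sketch of the setup being described. It assumes python-dotenv and hypothetical PayPal sandbox variable names (neither is from the thread); the point is that the secrets sit in a plain-text file that anything with read access to the working directory, a coding agent included, can open.

    # dev_config.py - a minimal sketch; assumes python-dotenv
    # (pip install python-dotenv) and hypothetical variable names.
    import os

    from dotenv import load_dotenv

    # Copies KEY=VALUE pairs from ./.env into os.environ.
    # The .env file itself is just plain text on disk.
    load_dotenv()

    # Hypothetical sandbox credentials, per the PayPal example:
    client_id = os.environ["PAYPAL_CLIENT_ID"]
    client_secret = os.environ["PAYPAL_CLIENT_SECRET"]
    sandbox = os.getenv("PAYPAL_MODE", "sandbox") == "sandbox"

    # Note: any tool that can read files in this directory,
    # including an LLM agent with filesystem access, can read
    # .env directly, load_dotenv() or not.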


If you have your PayPal creds in your repository, you are doing it wrong.

.gitignore is a thing

Which every AI tool I’m aware of respects and ignores by default.
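
For illustration, the usual convention looks something like the snippet below (the file names are the common pattern, not something from the thread): the real .env stays untracked, and a placeholder template gets committed instead.

    # .gitignore - keep local secrets out of version control
    .env
    .env.*
    # Re-include a committed template with placeholder values:
    !.env.example

Note that .gitignore only governs what git tracks; whether an AI tool also skips those files is the tool's own policy, as the comment above says.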

Why is it that they can add new env variables then?

If your secrets are in your repo, you've probably already leaked them.

Idk, I still mostly avoid using it, and if I do, I just copy and paste shit into the Claude web version. I won't ever manage agents, as that sounds just as complicated as coding shit myself.

It's not complicated at all. You don't "manage agents". You just type your prompt into a terminal application that can update files, read your docs and run your tests.

As with every new tech there's a hell of a lot of noise (plugins, skills, hooks, MCP, LSP - to quote Karpathy) but most of it can just be disregarded. No one is "behind" - it's all very easy to use.


Shhhh, the original poster is the CEO of an AI-based company. I am sure there is no bias here. /s

The model of capitalism is to use money to build some means of production and hopefully generate more money from that.

We are arriving at a form of techno-feudalism where nobody produces anything. People with the means to produce will pay the owners of AI, who just own and make money by renting shit to people who can't afford to build their own data centers or whatever. It's what companies like Amazon or Microsoft already do. They don't produce; they own and collect.

All of this has nothing to do with capitalism; it's all about selling the big AI lie to producers so people base their products on some AI they don't own. And the cycle repeats: Amazon squeezed actual producing companies out of their money decades ago, and now we will repeat this with AI.

Capitalists have means to produce, feudalists just collect and own.


Most businesses get their inputs from suppliers and add value to produce an output.

Even if you could afford to build your own datacenters you could not make the chips. Any realistic business will have a web of suppliers.

I think before you can diagnose "feudalism" you need to think really carefully about who has power in what parts of the value chain, and why. This will be specific to the business you are in.

I agree, it is a real problem if your suppliers are acting in a monopolistic or market distorting way.


> Capitalists have means to produce, feudalists just collect and own.

I think I get your distinction, but I think reality doesn't sort itself that way. Strictly, the means to produce is the factory, the land, something tangible that effects transformation, while money is just "finance". Capitalists are owners, though, and capitalism is ownerism. "Capital" is conventionally just money.


> Being able to not only look up but automatically integrate things into your codebase that already exist in some form in the training data is incredibly useful.

Until it decides to include code it gathered from a Stack Overflow post from 15 years ago, probably introducing security-related issues, or makes up libraries on the fly, or, even worse, tries to make you install libs that were part of a data-poisoning attack.


It's no different from supervising a naïve junior engineer who also copy/pastes from 15 year old SO posts (a tale as old as time): you need to carefully review and actually grok the code the junior/AI writes. Sometimes this ends up taking longer than writing it yourself, sometimes it doesn't. As with all decisions in delegating work, the trick is knowing ahead of time whether this will be the case.

Naive junior engineers eventually learn and become competent senior engineers. LLMs forget everything they "learn" as soon as the context window gets too big.

Very true! I liken AI to having an endless supply of newly hired interns with near-infinite knowledge but intern-level skills.

There are like a dozen well-established ways to overcome this. Learn how to use the basic tools and patterns my dude.

I have yet to see a junior trying to install random/non-existent libs.

If you forced them to try it from memory without giving them access to the web you sure would.

TL;DR: Yes my lord, I'll build the concentration camps as you desired, as it's not my business to question thy authority. As long as I get paid, I won't question the effects my work has.

What an ignorant way of working. Guess that's who they hire to build the software spying on Amazon drivers so they have to piss into bottles to make deliveries on time.


Not quite. I do not think it was implied. This is why the interview was mentioned and the criterion for quitting.

What is your approach?


> But they are the best paid and everyone wants to move to the US.

Not really. I prefer less pay and a far more affordable cost of living over a dying country rapidly moving into fascism. What's your money worth compared to a life worth living, without tyrannical maniacs ruling over your every move? Fuck the USA.


Sure, but most don't feel that way and simply prefer a higher salary.

That's how Finland made it to being one of the "happiest" countries: people just don't expect anything from anyone, so if by chance something is even slightly above the bare minimum, it's been good.

Oddly enough, I happen to be Finnish, and I formed my view of the world during my first twenty-ish years there. That view has served me well over the subsequent decades.

It's no surprise or secret that I have since left the country.


We were splicing some fiberglass in job training a few years back, and it was honestly pretty cool! The website is also really nice; I remember seeing the color codes on the splicing machine. Mesmerizing piece of technology.

Definitely mesmerising the first time! We have ribbon fibre these days as well which is very cool too.

Thank you :)


You can't possibly expect software engineers to be able to understand human emotions and meaning. We built Palantir and all the other fun tech making people's lives miserable. If software engineers had ethics and understood human meaning, they wouldn't pump out predatory software like it's cow milk. Fuck software engineers (excluding all the OSS devs who actually try to make the world a better place).

That's just another distraction from the class war being waged on the un-wealthy. We all contribute to it in small ways while it is being pushed by those with the means. Collectively, we love to control others.

Palantir wouldn't exist if regular people didn't use it to look up details on an ex all the time to stalk them. /jk

