Hacker News | buddhistdude's comments

could be automated though?

yes, everything can be automated, but people don't always have time to automate everything, so it depends. and the C200 is a home camera, not an outdoor one.

not necessarily worried, but like put on some pants before entering the room

No worries brother


Can one instruct an LLM to pick the parts of the context that will be relevant going forward? And then discard the existing context, replacing it with the new 'summary'?
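
Something like this "summarize, then replace" loop is a minimal sketch of the idea, assuming the OpenAI Node SDK; the model name, prompt wording, and the compactContext helper are illustrative, not an established API:

    import OpenAI from "openai";

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    type Msg = { role: "system" | "user" | "assistant"; content: string };

    // Ask the model which parts of the running context still matter,
    // then discard the old context and seed a fresh one with the summary.
    async function compactContext(history: Msg[]): Promise<Msg[]> {
      const res = await client.chat.completions.create({
        model: "gpt-4o-mini", // placeholder; any chat model works
        messages: [
          ...history,
          {
            role: "user",
            content:
              "List, tersely, only the facts and decisions from this " +
              "conversation that are still relevant going forward.",
          },
        ],
      });
      const summary = res.choices[0].message.content ?? "";
      // The new context is just the distilled summary.
      return [{ role: "system", content: "Summary of prior context:\n" + summary }];
    }

In practice you'd probably keep the system prompt and the most recent turns verbatim and only summarize the middle, since the summary is lossy.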


programmers in my company rarely invent things


And so you would accept “hey, I spun up a create-react-app project, but instead of React I asked an LLM to copy the parts of React I needed, so we have another dependency to maintain instead of tree shaking with webpack” as useful work?


not necessarily, but it's no less creative and inventive than what I believe most programmers are doing most of the time. there are relatively few people who invent new patterns (and they might actually be overrepresented on this website). the rest learn and apply those patterns.


Right, that is well understood, but having an LLM compile together functions under the guise of a custom-built library is hardly a software engineer applying established patterns.


It is exactly the same as applying established patterns - patterns are what the LLMs have trained on.

It seems you haven't really used LLMs for coding. They're super useful and improving every month - you don't have to take my word for it.

And btw - codespin (https://github.com/codespin-ai/codespin), along with its VSCode plugin, is what I use for AI-assisted coding much of the time. That was also generated via an LLM. I wrote it last year, and at that point there weren't many projects it could copy from.


I don’t need to use an LLM for coding because my projects don’t involve rebuilding things that already exist, which would be a waste of time no matter how efficiently I could do it.

Furthermore, it is an application of principles, but the application was done a long time ago by someone else, not by the LLM and not by you. As you claimed, you did none of the work, only went in and tweaked those applied principles.

I’ll tell you what slows me down and why I don’t need an LLM. I had a task to migrate some legacy code from one platform to another. I made the PRs, added some tests, and prepared the deploy files as instructed in the READMEs of the platform I was migrating to. This took me 3-4 days. It then took 26 days to get the code deployed, because five people are gatekeepers of the Helm charts and AWS policies.

Software development isn’t slow because I had to read docs and understand what I’m building, it is slow because we’ve enabled AWS to create red tape and gatekeepers. Your LLM doesn’t speed up that process.

> They're super useful and improving every month - you don't have to take my word for it.

And each month that you continue to invest, your value decreases and you will be out of a job. As you have demonstrated, you don’t need to know how to build a UI library, or even that the UI library you “generated” is just a reskin of something else. If it’s so simple and amazing that you don’t need to know anything, why would I keep you around?

Here’s a fun anecdote: sometimes I pair with my manager pretty casually when working through something. I need to rubber-duck an idea or am stuck finding the documentation for a construct. My manager will often take my problem and chat with an LLM for a few minutes. Every time, I end up finding the answer before he finishes his chat. His solution is often wrong because, by nature, LLMs scramble the possible results to make them look like a unique solution.

Congrats on impressing yourself that an LLM can be a slightly accurate code generator. How does paying a company to do something TabNine was doing years ago make me money? What will you do with all your free time, generate more useless dependencies?


If you think TabNine was doing years ago what LLMs are doing today, then I can't convince you.

We'll talk in a year or so.


No we won’t; we’ll all be laid off, and some young devs will be hired at 1/3 the cost to replace your UI library with something else spit out of an LLM specifically tuned to cobble together JS apps.


It's a matter of truth and not optics.


some of the activities that we're involved in are not limited in complexity, for example driving a car: you can have a huge amount of experience driving a car and still face new situations.

the things that most knowledge workers work on are limited problems, and it is just a matter of time until the machine reaches that level; then our employment will end.

edit: also that doesn't have to be AGI. it just needs to be good enough for the problem.


not fully related to what the parent is saying, but I need to get this off my chest:

isn't this development obviously going to depreciate the value of the human intellect to near zero? which is the thing that virtually all people on this platform base their livelihood on?

there's such a deafening silence around this topic on the internet, where there should be - I don't know what, but not this silence. we don't know what to do, right? and we're avoiding the topic.

with this version they broke the assumed wall of LLM development, which was the last copium we could believe in. the wall is broken, and now it's just a matter of time until your capacity to think is completely unneeded. the machine will do it more accurately and more quickly by orders of magnitude.

am I a doomer? I was recently in my parents' home country, which is completely dysfunctional and where war is on the verge of breaking out. what I learned there is that humans stay ignorant of great dangers until the very moment they are affected. this must've been the case with all the great wars we've had. the water is rising, but until I start to suffocate I don't agree to see it. I make up copes, or I think some are going to drown but I'm safe, or I distract myself.

what are all the software engineers here thinking? what's your cope for this? or are we all freezing in shock right now? this o1 is solving problems that I know many of my colleagues could never solve. what are we hoping for, I wonder? I don't have a future, because my future was the image I had of it, and no image of the future that would be nice to keep around seems plausible at this point.


I wouldn't say there is silence (as in avoidance); there are just folks convinced it's going to completely replace people (with two main subgroups: utopia and dystopia), folks convinced it's a parlor trick that will never amount to more, and folks convinced it's just the next efficiency-increasing tool, where some busy-work we have will go away, but only to make room for increased total output, not to replace everyone wholesale.

Generally these folks have all said their piece and are tired of talking about it every time LLMs come up -> silence (as in nothing more to say), since each group is self-convinced and most don't feel the need to get 100% of folks on board with their view. The dystopian or "doomer" group are the main ones left feeling like they need more of an answer; the rest move on quietly in either excitement or disinterest.


No, he is right; he is saying "taken to the extreme". The point is that the more specific your prompts have to be, the more you are actually contributing to the result yourself, and the less the model is.


Yes, but the build-up isn't manual. You keep patching prompts with responses until you reach the final result. The last prompt will contain almost the whole completed code, obviously.
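
That patch-the-prompt process can literally be a loop. A rough sketch, again assuming the OpenAI Node SDK, with the model name, prompts, and the refine helper as placeholders:

    import OpenAI from "openai";

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    // Fold each response back into the next prompt, so the final prompt
    // ends up carrying nearly the whole program.
    async function refine(task: string, rounds: number): Promise<string> {
      let code = ""; // starts empty; grows with each round
      for (let i = 0; i < rounds; i++) {
        const res = await client.chat.completions.create({
          model: "gpt-4o-mini", // placeholder; any chat model works
          messages: [
            {
              role: "user",
              content:
                "Task: " + task + "\n\nCurrent code:\n" + code +
                "\n\nRevise it and return the full program.",
            },
          ],
        });
        code = res.choices[0].message.content ?? code;
      }
      return code;
    }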


Again, you are missing the "taken to the extreme".

What has happened to HN discourse recently?


Taking things to the extreme is rarely useful in a nuanced discussion, though (it ignores that the optimal approach is rarely at the extremes).


I'm asking the same question. "taken to the extreme". What bullshit measurement is that?


Sounded like sarcasm to me.


I think he means that thoughts have an actual effect on the unfolding of the physical world.


Which is a "self-fulfilling prophecy", also known by the new-age term "manifestation".

