
Our industry never exhibited an abundance of caution, but if you have trouble understanding the value of AI here, consider that you are akin to an assembly language programmer in the 1970s or 80s who couldn't understand why people were so gung-ho about these compilers that just output worse code than they could write by hand. In retrospect, compilers only got better and better, familiarity with programming languages and compilation toolchains became a valuable productivity skill, and the market for assembly language programming stagnated or shrank.

Doesn't it seem plausible to you that, whatever the rate of bugs in AI-generated code today, that rate is only going to go down? Doesn't it then seem reasonable to say that programmers should start familiarizing themselves with these new tools, learning where the pitfalls are and how to avoid them?





> compilers only got better and better

At no point did compilers produce stochastic output. The intent the user expressed was translated down with much, much higher fidelity, repeatability, and explainability. Most important of all, it completely removed the need for the developer to meddle with that output. If anything, it became a verification tool for the developer's own input.

If LLMs are that good, I dare you to skip the programming language and have them emit machine code directly next time. And that is exactly what it is going to feel like if we treat them as being as valuable as compilers.


> At no point did compilers produce stochastic output. [...] Most important of all, it completely removed the need for the developer to meddle with that output.

Yes, once the optimizations became sophisticated enough and reliable enough that people no longer needed to think about them or drop down to assembly to get the performance they needed. Do you get the analogy now?


I don't know why you'd think your analogy wasn't clear in the first place. But your analogy can't support your assertion that optimizations will become sophisticated and reliable enough that one can completely forget about the programming language underneath.

If you have any first-principles thinking on why this is more likely than not, I am all ears. My epistemic bet is that it is not going to happen, or that if we somehow end up there, the language we will have to use to instruct them will not be meaningfully different from any other high-level programming language, so the point will be moot.


> But your analogy can't support your assertion that optimizations will become sophisticated and reliable enough that one can completely forget about the programming language underneath.

Where did I make that assertion?


Here is where I got that impression:

> once the optimizations became sophisticated enough

Either way I am not trying to litigate here. Feel free to correct me if your position was softer.


No, because programmers aren't the ones pushing these wares; it's business magnates and salespeople. Those are the two groups software developers should never trust.

Maybe it would be different if this LLM craze were being pushed by democratic groups, where citizens are allowed to state their objections to such systems and those objections are taken seriously. But what we currently have are business magnates who just want to get richer, with no democratic controls.


> No, because programmers aren't the ones pushing these wares; it's business magnates and salespeople.

This is not correct; plenty of programmers see value in these systems and use them regularly. I'm not really sure what's undemocratic about what's going on, but that seems beside the point: we're presumably mostly programmers here, talking about the technical merits and downsides of an emerging tech.


This seems like an overly reductive worldview. Do you really think there isn't genuine interest in LLM tools among developers? I absolutely agree there are people pushing AI in places where it is unneeded, but I have not found software development to be one of those areas. There are lots of people experimenting and hacking with LLMs because of genuine interest and perceived value.

At my company, there is absolutely no mandate for use of AI tooling, but we have a very large number of engineers who are using AI tools enthusiastically simply because they want to. In my anecdotal experience, those who do tend to be much better engineers than the ones who are most skeptical or anti-AI (though it's very hard to separate how much of this is the AI tooling, and how much is that naturally curious engineers looking for new ways to improve inevitably become better engineers than those who don't).

The broader point is, I think you are limiting yourself when you immediately reduce AI to snake oil being sold by "business magnates". There is surely a lot of hype that will die out eventually, but there is also a lot of potential there that you guarantee you will miss out on when you dismiss it out of hand.


I use AI every day and run my own local models; that has nothing to do with recognizing salespeople acting like salespeople, or con men being con artists.

Also add in the fact that big tech has been extremely damaging to Western society for the last 20 years; there's really little reason to trust them. Especially since we see how they treat those with opinions different from their own (trying to force them out of power, ostracizing them publicly, or in some cases straight-up poisoning people and giving them cancer).

It's not really hard to see how people can be against such actions, is it? Well, buckle up, bro: come post-2028, expect a massive crackdown and regulation of big tech. It's been boiling for quite a while, and there are trillions of dollars to plunder for the public's benefit.


If I have a horse and plow and you show up with a tractor, I will no doubt get a tractor asap. But if you show up with novel amphetamines for you and your horse and scream "Look how productive I am! We'll figure out the long-term downsides, don't you worry! Just more amphetamines probably!", I'm happy to be a late adopter.

A tractor based on a Model T wouldn't have been very compelling either at the time. Not many horse-drawn plows these days though.

I understand that you've convinced yourself that progress is inevitable. I'll ponder it on my commute to Mars. Oh wait, that's still only on the telly.

High-level languages were absolutely indispensable at a time when every hardware vendor had its own bespoke instruction set.

If you only ever target one platform, you might as well do it in assembly; it's just unfashionable. I don't believe you'd lose any 'productivity' compared to e.g. C, assuming equal amounts of experience.


> I don't believe you'd lose any 'productivity' compared to e.g. C, assuming equal amounts of experience.

I'm skeptical, but do you think you'd see no productivity gains from Python, Java, or Haskell?


Those are garbage-collected environments. I have some experience with a garbage-collected 'assembly' (.NET CIL). It is a delight to read and write compared to most C code.

Agree to disagree then! I've done plenty of CIL reading and writing. It's fine, but not what I'd call pleasant, not even compared to C.

Type checking, even something as trivial as C's, is a boon to productivity, especially on large teams, but also when coding solo if you have anything else in your brain.

compilers aren't probabilistic models though

True. The question is whether that's relevant to the trajectory described or not.

Successful compiler optimizations are probabilistic though, from the programmer's point of view. LLMs are internally deterministic too.

What? Do you even know how compilers work?

Are you able to predict with 100% accuracy when a loop will successfully unroll, or various interprocedural or intraprocedural analyses will succeed? They are applied deterministically inside a compiler, but often based on heuristics, and the complex interplay of optimizations in large programs means that sometimes they will not do what you expect them to do. Sometimes they work better than expected, and sometimes worse. Sounds familiar...
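
To make that concrete, here's a rough sketch (hypothetical file name, any recent gcc or clang):

    /* Whether this loop gets unrolled or vectorized depends on the
     * compiler's cost model: optimization level, target flags, and
     * whether the trip count is known at compile time. The result is
     * deterministic for a fixed compiler version and flags, but hard
     * to predict from the source alone; you have to check what
     * actually happened, e.g. with: gcc -O3 -S sum.c */
    double sum(const double *a, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }

Nothing random is happening inside the compiler, but unless you go read the emitted assembly you're still guessing about what the optimizer actually did.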

> Are you able to predict with 100% accuracy when a loop will successfully unroll, or various interprocedural or intraprocedural analyses will succeed?

Yes, because:

> They are applied deterministically inside a compiler

Sorry, but an LLM randomly generating the next token isn't even comparable.

Deterministic complexity =/= randomness.


> Yes, because:

Unless you wrote the compiler, you are 100% full of it. Even as the compiler writer you'd be wrong sometimes.

> Deterministic complexity =/= randomness.

LLMs are also deterministically complex, not random.


> Unless you wrote the compiler, you are 100% full of it. Even then you'd be wrong sometimes

You can check the source code? What's hard to understand? If you find it compiled something wrong, you can walk backwards through the code; if you want to find out what it'll do, walk forwards. LLMs have no such capability.

Sure, maybe you're limited by your personal knowledge of the compiler chain, but again, complexity =/= randomness.

For the same source code and compiler version (+ flags), you get the exact same output every time. The same cannot be said of LLMs, because they use randomness (temperature).

> LLMs are also deterministically complex, not random

What exactly is the temperature setting in your LLM doing, then? If you'd like to argue that the pseudorandom generators our computers use aren't truly random - fine, I agree. But for all practical purposes they're random, especially when you don't control the seed.
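
For illustration, temperature sampling looks roughly like this (a toy sketch with made-up logits, not any particular model's actual code):

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Toy next-token sampler: scale logits by 1/temperature, turn them
     * into weights, then draw one token index at random.
     * Assumes n <= 16 for this toy example. */
    static int sample(const double *logits, int n, double temperature) {
        double weights[16], total = 0.0;
        for (int i = 0; i < n; i++) {
            weights[i] = exp(logits[i] / temperature);
            total += weights[i];
        }
        double r = ((double)rand() / RAND_MAX) * total;
        for (int i = 0; i < n; i++) {
            r -= weights[i];
            if (r <= 0.0) return i;
        }
        return n - 1;
    }

    int main(void) {
        double logits[4] = {2.0, 1.0, 0.5, 0.1}; /* made-up scores */
        srand((unsigned)time(NULL));             /* uncontrolled seed */
        for (int run = 0; run < 5; run++)
            printf("picked token %d\n", sample(logits, 4, 0.8));
        return 0;
    }

Same inputs, different outputs across runs. A compiler with fixed flags will never do that.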


> If you find it compiled something wrong, you can walk backwards through the code; if you want to find out what it'll do, walk forwards. LLMs have no such capability.

Right, so you agree that optimization outputs are not fully predictable in complex programs, and what you're actually objecting to is that LLMs aren't like compiler optimizations in the specific ways you care about - and somehow this is supposed to invalidate my argument that they are alike in the specific ways that I outlined.

I'm not interested in litigating the minutiae of this point: programmers who treat the compiler as a black box (i.e. 99% of them) see probabilistic outputs. The outputs are generally reliable according to certain criteria, but unpredictable.

LLMs are also typically probabilistic black boxes. Their outputs are unpredictable, but somewhat reliable according to certain criteria that you can learn through use. Where the unreliability is problematic, you can often compensate for their pitfalls. The need for this is dropping year over year, just as the need for assembly programming to eke out performance dropped with each year of compiler development. Whether LLMs will become as reliable as compiler optimizations remains to be seen.


> invalidate my argument that they are alike in the specific ways that I outlined

Basketballs and apples are both round, so they're the same thing right? I could eat a basketball and I can make a layup with an apple, so what's the difference?

> programmers who treat the compiler as a black box (i.e. 99% of them) see probabilistic outputs

In reality this is at best the bottom 20% of programmers.

No programmer I've ever talked to has described compilers as probabilistic black boxes - and I'm sorry if your circle does. Unfortunately, there's no probability involved, and all modern compilers are definitionally white boxes (open source).



