
First off, there are a lot of people shooting off their mouths - ignore anyone who hasn't used ChatGPT extensively: it takes some training to learn to use it.

Several senior developer friends have been using ChatGPT quite a bit, and it seems to work well in lots of places:

- isolated algorithms and fiddly bits - it writes complex SQL statements in seconds, for example. LLMs should make quick work of fussy config files.

- finding, diagnosing and fixing bugs (just paste the code and error message - really!) - see the sketch below

- unit tests and examples

- comments and documentation
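
To illustrate the bug-fixing point, here's a hypothetical sketch of the paste-the-code-and-error pattern (the orders table and column names are made up, not from an actual session):

    -- Broken query pasted to the model (hypothetical table):
    --   SELECT customer_id, amount FROM orders GROUP BY customer_id;
    -- Error message pasted alongside it (Postgres wording):
    --   ERROR: column "orders.amount" must appear in the GROUP BY clause
    --   or be used in an aggregate function
    -- The kind of corrected query it typically comes back with:
    SELECT customer_id, SUM(amount) AS total_amount
    FROM orders
    GROUP BY customer_id;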

Professional developers will recognize that we're talking about 50-90% of the LABOR-HOURS that go into software development, and therefore fewer developers needed to get the same work done. Sure, we could just do more - but then we quickly hit other speed limits, where coding isn't the problem. I can see layoffs among the bottom-N% of developers, while more sophisticated developers add LLMs to their toolbox and use this productivity to justify their high $/hour.

I see AI writing code that casual human readers don't really understand, but this is OK because the AI includes comments -- just like developers do for each other today.



Like you, I found that ChatGPT is not really all that great at coding, but great when you ask it to do very specific grunt work. I'm working on a new database, and one thing I found it super useful for is generating test data. I would just tell it: "here's the CREATE TABLE statement, create 50 rows of test data off of it, with all of these specifications: this has to be this, that can only be 1 or 2, yada yada yada."
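
Roughly how that looks in practice (the schema here is a made-up sketch, not my actual tables):

    -- Hypothetical table handed to the model:
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        status      INTEGER NOT NULL,      -- "can only be 1 or 2"
        amount      NUMERIC(10, 2) NOT NULL,
        order_date  DATE NOT NULL
    );

    -- The kind of output it produces; 50 rows of this come back in seconds:
    INSERT INTO orders (id, customer_id, status, amount, order_date) VALUES
        (1, 101, 1, 49.99,  '2023-01-05'),
        (2, 102, 2, 120.00, '2023-01-07'),
        (3, 101, 1, 15.25,  '2023-01-09');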

> Professional developers will recognize that we're talking 50-90% of the LABOR-HOURS that go into software development,

I call it 'dumb coding'. You have the type of programming that requires you to really think, and then there's the type where you just need to write 200 lines of code but you know exactly what to write. If AI could pick up the slack on 'dumb coding' and let us think about 'smart coding', we would all be way, way more efficient.


GPT-4 is on another level, but still very far from being able to do work on anything larger than a medium-sized class.


gpt-4-32k is on yet another level. I think a gpt-4-32m would replace any senior engineer working on a complex code base.


Is the 32k model now available?

> I think a gpt-4-32m would replace any senior engineer working on a complex code base.

...maybe?


> Professional developers will recognize that we're talking 50-90% of the LABOR-HOURS

More like 20-30% at most. And that's not including debugging the output of ChatGPT, which I've found makes subtle mistakes - which will probably eat up all of the time gained.

Writing code isn't the biggest time sink; figuring out what to write is.


I’m sure my org isn’t unique, but we are constantly at max capacity and we have no money to hire new people. We have projects in the queue that will keep us occupied for years. I don’t think even a 50-90% speedup will lead to layoffs. We will just finally be able to get more shit done.


The backlog grows at a faster pace than the company completes work. The backlog is never meant to be completed. Your job security is not based on having a long, well-groomed backlog.


If ChatGPT / GPT-4 or future versions can write unit / functional / integration tests, that's an absolute productivity game changer.


What prompts are they finding useful for creating SQL statements?


GPT-4 is simply outstanding at writing SQL statements. I made a bunch of examples with non-trivial customer revenue metric assessments:

https://www.dropbox.com/s/hdhycf7l00d3sx8/gpt4_attempt_sql_q...

It can do basic math reasonably well (and it succeeds at generation where GPT-3 failed). Interestingly, asking it to verify itself does resolve bugs sometimes. It managed to fix subtle count() denominator bugs and an inflation-adjustment error with not much hinting on my end.

You only see it struggle really hard at the end, when it tries to normalize month ranges correctly. It seemed to run into conceptual problems with how LAST_DAY() was being used and couldn't debug itself.
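
For a sense of the shape of those queries, here's a sketch of a month-normalized revenue metric (assuming a MySQL-style LAST_DAY() and a hypothetical orders table - not the actual examples from the link above):

    -- Monthly revenue per customer, normalized by the number of days in the
    -- month; LAST_DAY() returns the final date of the month containing order_date.
    WITH monthly AS (
        SELECT customer_id,
               LAST_DAY(order_date) AS month_end,
               SUM(amount)          AS revenue
        FROM orders
        GROUP BY customer_id, LAST_DAY(order_date)
    )
    SELECT customer_id,
           month_end,
           revenue,
           revenue / DAY(month_end) AS revenue_per_day
    FROM monthly;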


I tell it that I'm using <database and version> and give it the relevant DDL statements (e.g. CREATE TABLE, etc) then ask it to write the query to do <x> in plain English. It does surprisingly well.

But!!! The first response is rarely dead-on; instead, just like with a junior eng, I need to guide it: use (or don't use) SQL construct <x>, make sure to use index <x>, etc.

Example: to sum the values in a JSONB field, GPT desperately wanted to use a lateral join but that would have made for a very awkward set of VIEWs. So instead I directed it to create a function to perform the summation.
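
Roughly what that function ended up looking like (a sketch assuming Postgres and a JSONB object of numeric values; the names are made up):

    -- Sum the numeric values of a JSONB object like {"a": 1, "b": 2.5},
    -- so views can call it directly instead of relying on a lateral join.
    CREATE OR REPLACE FUNCTION jsonb_sum_values(obj jsonb)
    RETURNS numeric
    LANGUAGE sql
    IMMUTABLE
    AS $$
        SELECT COALESCE(SUM(value::numeric), 0)
        FROM jsonb_each_text(obj)
    $$;

    -- Usage in a view or query:
    -- SELECT id, jsonb_sum_values(totals) AS total FROM my_table;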


Sorry, but no: ChatGPT can only do some very specific and specialized tasks; it doesn’t save meaningful time. It’s a tool in the toolbox, but it’s not a game-changing tool; just one more thing to reach for when you need a complex transformation, or when you need to unblock yourself.

Zero developers will lose their jobs due to LLMs. That’s just yet more needless hype and expectation.



