Hacker News

AI can’t write its own prompts. Picture 10k people using the same prompt who actually need 5000 different things.

No improvement to AI will let it read a vague speaker’s mind, and no improvement will let it get the answers it needs if people don’t know how to answer the necessary questions.

Information has to come from somewhere to differentiate 1 prompt into 5000 different responses. If it’s not coming from the people using the AI, where else can it possibly come from?

If people using the tool don’t know how to be specific enough to get what they want, the tool won’t replace people.

s/the tool/spreadsheets

s/the tool/databases

s/the tool/React

s/the tool/low code

s/the tool/LLMs
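The lines above use sed's `s/old/new/` substitution syntax. Applied literally to the claim (the trailing `/g` is added here so every occurrence is replaced, not just the first):

```shell
# sed's substitution command applied to the sentence above;
# /g replaces all occurrences of "the tool" on the line.
echo "If people using the tool don't know how to be specific enough to get what they want, the tool won't replace people." \
  | sed 's/the tool/LLMs/g'
# prints: If people using LLMs don't know how to be specific enough to get what they want, LLMs won't replace people.
```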



> AI can’t write its own prompts.

What makes you say that? One model can write the prompts of another, and we have seen approaches combining multiple models, and models that can evaluate the result of a prompt and retry with a different one.

> No improvements to AI will let it read vague speakers’ minds. No improvement to AI will let it get answers it needs if people don’t know how to answer the necessary questions.

No, but it can certainly produce output until the human decides it's acceptable. Humans don't need to give precise guidance, or answer technical questions. They just need to judge the output.

I do agree that humans currently still need to be in the loop, both as a primary data source and as validators of the output. But there's no theoretical reason AI, or a combination of AIs, couldn't do this in the future, especially once we move beyond text as the primary I/O mechanism.


I agree with your point, just want to point out that models have been trained on AI generated prompts as synthetic data.



