I think there's a substantive case that AI is currently overhyped because it exploits a weak spot in human cognition. We empathize with inanimate objects. We see faces in random patterns.
If a machine spits out some plausible-looking text (or some cookie-cutter code copy-pasted from Stack Overflow), the human brain is basically hardwired to go "wow, this is a human friend!" The current LLM trend seems designed to capitalize on this tendency to anthropomorphize.
This is the same thing that made chatbots seem amazing 30 years ago: there's a minimum amount of "humanness" you have to put into the text, and the recipient fills in the blanks.
But if nearly everyone else says this has real value to them, and it has produced meaningful code well beyond what's on Stack Overflow, doesn't that just mean your experience isn't representative of the overall value of LLMs?