
That's not true; expensive LLMs think better than you and I do in a lot of domains.

The problem here is that Google is using a very cheap AI, and didn't learn the lessons from Bing search's unhinged results last year.



In what domain have LLMs been demonstrated to so-called “think” consistently better than a college- or even high-school-educated adult?


I have executive dysfunction and get blocked by obsessive worries. Dumping them into an LLM lets me escape the panic and get relaxed and unstuck. It's better at this than any human manager I've ever had.

I've got dyslexia and ADHD that make it hard for me to do long-form software engineering writing like requirements analysis and test plans. With an LLM I can really quickly sketch the use case, create a reasonable list of requirements, break those into stories, and write implementation stubs and unit test cases. It's like having a really decent project manager on the payroll, when before I couldn't manage the complexity of writing a good system spec.
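For the curious, here's a minimal sketch of that kind of pipeline, assuming the OpenAI Python client; the model name, prompts, and use case are just placeholders, not the exact setup I use:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Placeholder use case for illustration only.
    use_case = (
        "A clinic receptionist needs to reschedule a patient's appointment "
        "and notify the assigned doctor."
    )

    # Each step feeds the previous answer back in, mimicking the
    # use case -> requirements -> stories -> test stubs progression.
    steps = [
        "List the functional requirements implied by this use case.",
        "Break those requirements into user stories with acceptance criteria.",
        "Write pytest unit-test stubs for each story.",
    ]

    context = use_case
    for step in steps:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": "You are a careful project manager."},
                {"role": "user", "content": context + "\n\n" + step},
            ],
        )
        context = response.choices[0].message.content
        print("### " + step + "\n" + context + "\n")

The point isn't the specific prompts; it's that each stage's output becomes the next stage's input, which is the part I couldn't hold in my head before.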

Obviously, in both cases it's me doing the thinking. But me on my own, in both cases, would be stuck and completely overwhelmed.

This kind of co-regulation is incredibly valuable, even for me as a fairly educated developer. But perhaps you're right, and it's not real thinking. I would say that this kind of AI-assisted co-regulatory interaction could be called "co-thinking".

The idea is that the LLM has certain cognitive and material weaknesses that I cover for it, like fact-checking, big-picture thinking, and true identity/agency. And at the same time, it's able to cover certain cognitive and communication weaknesses that I have.

The result is that I'm much more technically independent than ever, and I can do things in my career that my disabilities previously prevented. That matters a lot to me, and it's my very personal reason for believing this tech will matter to humanity.


LLMs don’t think, silly goose.


Thinking is when biological brains create new ideas from old thoughts and inputs.

LLMs can take old ideas and inputs, as text, and create text that turns into useful new ideas when a human reads it. The newer LLMs do this in a meaningful way, bullshitting far less than older models and producing genuine criticism and suggestions. The reader does not do the thinking needed to create the new idea; they just decode the text into it.

So either actually meaningful new ideas can be created without thinking, or the LLM is doing a kind of artificial thinking.

Critics will say that we may as well argue that bones can think, because casting bones in a cup influences the prediction in a soothsayer's mind. But the words created by LLMs, especially higher-grade ones, are much more meaningful and thought-like than bones in a cup. They can clearly advance a line of thinking in a way that is analogous to how a brain advances a line of thinking.

Therefore, it's reasonable to say LLMs are capable of limited artificial thought. They can effectively process thoughts that are represented externally, outside any human mind.

Maybe we should call this co-thinking, because it still requires a human as the final mile of the loop, to turn the result back into a real thought.



