Because intent implies: 1. goal-directedness, 2. mental states (beliefs, desires, motivations), and 3. consciousness or awareness.
LLMs lack intent because (1) they have no goals of their own: they do not "want" anything and do not form desires; (2) they have no mental states (they can simulate language about them, but do not actually possess them); and (3) they are not conscious: they do not experience, reflect, or understand in the way that conscious beings do.
Thus, under the philosophical and cognitive definition, LLMs do not have intent.
They can mimic intent, the same way a thermostat is "trying" to keep a room at a certain temperature, but that is only apparent or simulated intent, not the genuine intent we ascribe to humans.
LLMs can make false statements. The distinction between real and simulated intent doesn't seem useful.
LLMs can have objectives. Those objectives can sometimes be advanced via deception. Many people call this kind of deception (and sometimes others) lying.
If these words apply only to humans by definition, then we're going to need some new words or some new definitions. We don't really have a way to talk about what's going on here. Personally, I'm fine using "lying".
Yeah, but lying requires intent to deceive. Why would an LLM want to deceive? If it "deceives", we would have to answer this question. Otherwise, why not avoid assuming malice (especially in the case of LLMs) and just call them hallucinations or mistakes?
These things are constructed in secret. I have no particular reason to grant them any benefit of the doubt. Controlling the output of LLMs in arbitrary ways is certainly worth a lot of money to a lot of parties with all kinds of motivations. Even if LLMs are free of hidden agendas now, that's not a stable situation in the current environment.
I am not saying that it does not "lie", but it does not lie because it wants to lie, or because it has the intent to lie or deceive. It does so because of its system prompts or something else put in place by its developers.
What you said is exactly why I said, "If it 'deceives', we would have to answer this question." The developers made it so.