This isn't 4chan; there is an objective reality. Alexa telling people Trump won is in the same space as my digital girlfriend telling me to kill the Queen. It's not smart.
If Alexa told people to drink bleach, lawyers would be queueing up to hit Bezos for restitution. And if you read the story, the programmers intervened.
Whether a defendant can be held liable for false speech is a different question from whether the government can prohibit the speech in the first place.
My prior comment was a bit tangential and sort of off-topic, but I'll leave it there for context.
> Whether a defendant can be held liable for false speech is a different question from whether the government can prohibit the speech in the first place.
How is such a distinction useful?
I can, physically, drive a car over a speed limit, resulting in a fine; the fine is the enforcement of the prohibition. For speech (in the broader sense that includes non-vocal publication) we also have various prohibitions, such as copyright and (as Musk has found) influencing stock prices, which are still prohibitions even if they can only be enforced after the event.
Whether or not we should have such prohibitions/penalties specifically on AI models is something that I feel will be the defining political battle of the decade, and it may yield different results in different polities: we already have very vocal cohorts who speak with anger of OpenAI "lobotomising" ChatGPT and "censoring" DALL•E (ditto Midjourney), while other vocal cohorts are appalled at the way even these models are being used to deceive, to sexualise, to propagandise, etc.
Misinformation is the same kind of battleground.