I don't know what this is meant to prove or disprove; I can only assume the constant urge to compare human beings with LLMs has simply become a pathological drive at this point. Yes, humans are capable of reproducing copyrighted text verbatim, as are LLMs, and when they do, just as with LLMs, it is copyright infringement and plagiarism.
An irreducible and unavoidable tendency towards copyright infringement and plagiarism is not a feature one generally wants in either a human author or a piece of software.
It means that if LLMs mimic humans in learning and creating, then we should probably apply similar standards to LLMs. It is impossible to create an LLM without training it on copyrighted text, just as it is for humans.
Should the world refuse to use LLMs because of that?
Unlike humans, of course.