While it's true that LLMs are not strictly a formal system, they run on Turing-complete hardware and software, so they remain subject to the broader limits of computability theory. They cannot escape fundamental constraints like undecidability or incompleteness when attempting formal reasoning.
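To make the undecidability point concrete, here is a minimal Python sketch of the classic diagonal argument, with the LLM standing in for the would-be decider. `llm_decides_halting` is hypothetical, not a real API: assume it tokenizes the question, runs the model, and parses out a yes/no answer. Whatever happens inside, the wrapper is a computable function, and that is all the argument needs.

```python
# Sketch: why no computable "oracle" (an LLM pipeline included) can decide
# halting. `llm_decides_halting` is a hypothetical stand-in for the full
# pipeline; the rest is the standard diagonal argument in Python.

def llm_decides_halting(p: str, x: str) -> bool:
    # Hypothetical: return True iff program `p` halts on input `x`.
    # Tokenize, run the model, parse the answer - still a computable function.
    raise NotImplementedError("hypothetical oracle")

def diagonal(p: str) -> None:
    # Do the opposite of whatever the oracle predicts about `p` run on itself.
    if llm_decides_halting(p, p):
        while True:
            pass  # oracle said "halts", so loop forever

# Feeding `diagonal` its own source forces a contradiction: the oracle is
# wrong on at least that input, so no such computable decider exists.
```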
LLMs don't do formal reasoning. Not in any sense. They don't do any kind of reasoning: they replay recombinations of the reasoning encoded in their training data, "finding" patterns in the relationships between tokens at different scales and then applying those patterns to generate output in response to the input.
That's irrelevant. They have an "input tape" and an "output tape", and whatever goes on inside the LLM could in principle be implemented on a TM (not efficiently, of course, but that's beside the point).
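A minimal sketch of that framing, assuming greedy (temperature-0) decoding and a hypothetical `model_logits` function that maps a token prefix to finite-precision scores over a finite vocabulary: the whole generation loop is then a deterministic, computable map from an input token list to an output token list, which is exactly the kind of thing a Turing machine can simulate step by step.

```python
# Sketch of the "input tape / output tape" point: under greedy decoding an
# LLM is a deterministic function from token lists to token lists, so a TM
# could simulate it exactly (just very slowly). `model_logits` is assumed.

from typing import Callable, List

def greedy_decode(model_logits: Callable[[List[int]], List[float]],
                  prompt: List[int], eos: int, max_steps: int) -> List[int]:
    tokens = list(prompt)                      # the "input tape"
    for _ in range(max_steps):                 # bounded, so it always halts
        scores = model_logits(tokens)          # one finite, computable step
        nxt = max(range(len(scores)), key=scores.__getitem__)
        tokens.append(nxt)                     # write to the "output tape"
        if nxt == eos:
            break
    return tokens
```

Sampling instead of greedy decoding only adds a random tape as an extra input; it changes nothing about what is computable.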