That's a pretty big "if". LLMs are by design entirely unlike GoFAI reasoning engines. It's also very debatable whether it makes any sense to try to hack LLMs into reasoning engines when you could just... use a reasoning engine. Or have the LLM defer to one, which would play to their strength as translators.
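To make the "LLM as translator" idea concrete, here's a rough sketch in Python with the z3-solver package. translate_to_z3() is a hypothetical stand-in for the LLM call, stubbed with the assertions a model might plausibly emit for the classic syllogism; the point is that the model only translates, and the solver does the actual deduction.

    from z3 import Solver, Bool, Implies, Not, unsat

    def translate_to_z3(claim: str):
        # Stub for the LLM call. A real version would prompt a model and
        # parse its output into formal assertions. Expected translation of:
        # "All humans are mortal. Socrates is a human. Is Socrates mortal?"
        human = Bool("socrates_is_human")
        mortal = Bool("socrates_is_mortal")
        # Assert the premises plus the *negated* goal; unsat => goal entailed.
        return [Implies(human, mortal), human, Not(mortal)]

    s = Solver()
    s.add(translate_to_z3("All humans are mortal. Socrates is a human."))
    print("entailed" if s.check() == unsat else "not entailed")

Whether the translation step is reliable is its own open problem, but at least the deduction itself becomes checkable instead of vibes.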
I also believe the problem is we don't know what we want: https://news.ycombinator.com/item?id=45509015
If we could make LLMs apply a modest set of logic rules consistently, that alone would be a win.