If you have a widely distributed system and you need tight synchronization, it can be much easier to discipline every node against an outside stable source than to try to build a stable consensus algorithm of your own.
Who actually uses those top-level servers? Aren’t some of them propagating the error, or are all secondary-level servers configured to use dispersed top-level servers? And how do they decide who is right when their sources don’t match?
Is pool.ntp.org dispersed enough to avoid correlated interference and errors?
You can look at who the "Stratum 2" servers are, in the NTP.org pool and elsewhere. Those are servers that sync from Stratum 1 sources like NIST.
Anyone can join the NTP.org pool, so it's hard to make blanket statements about it. I believe there's some monitoring of servers in the pool, but I don't know the details.
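If you're curious what stratum a given pool server claims, you can ask it directly with a raw SNTP query. This is a minimal Python sketch, not a real client (no validation of the response, and 0.pool.ntp.org is just an example hostname):

    import socket

    def ntp_stratum(host, timeout=5):
        # 48-byte SNTP request: first byte 0x1b = LI 0, version 3, mode 3 (client)
        packet = b'\x1b' + 47 * b'\x00'
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(packet, (host, 123))
            data, _ = s.recvfrom(48)
        # Byte 1 of the response is the stratum: 1 = primary, 2+ = secondary
        return data[1]

    print(ntp_stratum('0.pool.ntp.org'))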
For example, Ubuntu systems point to their Stratum 2 timeservers by default, and I'd have to imagine that NIST is probably one of their upstreams.
An NTP server usually has multiple upstream sources and can steer its clock to minimize the error across them, as well as detect misbehaving servers and reject them as "falsetickers". Different NTP implementations may do this a bit differently.
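To give a feel for the falseticker idea: each source effectively reports an interval (its offset plus or minus its error bound), and the client keeps the largest group of sources whose intervals overlap. Here's a toy Python sketch of that majority-overlap idea; it is not ntpd's actual selection code, which uses a Marzullo-style intersection algorithm:

    def find_truechimers(estimates):
        # estimates: list of (offset_seconds, error_bound_seconds) per source
        intervals = [(o - e, o + e) for o, e in estimates]
        best = []
        for lo, hi in intervals:
            # Indices of all intervals that overlap this one
            overlapping = [j for j, (lo2, hi2) in enumerate(intervals)
                           if lo2 <= hi and lo <= hi2]
            if len(overlapping) > len(best):
                best = overlapping
        return best  # indices of sources treated as truechimers

    # Three sources agree within milliseconds; the last is two hours off.
    sources = [(0.012, 0.005), (0.009, 0.004), (0.015, 0.006), (7200.0, 0.005)]
    print(find_truechimers(sources))  # -> [0, 1, 2]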
From my own experience managing large numbers of routers and troubleshooting issues, I will never use pool.ntp.org again. I’ve seen unresponsive servers as well as time that was off by hours or days. Whether you get a good result is pure luck.
Instead I’ll stick to a major operator like Google, Microsoft, or Apple: they run NTP systems designed to handle the scale of all the devices they sell, and those systems are well maintained.
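As a sketch of what that looks like in practice, a chrony config pointed at a single vendor can be as simple as the lines below (hostnames are from Google's public NTP documentation; note that Google smears leap seconds, so don't mix their servers with non-smearing sources):

    # /etc/chrony/chrony.conf -- example, single-vendor upstreams
    server time1.google.com iburst
    server time2.google.com iburst
    server time3.google.com iburst
    server time4.google.com iburst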