Decades ago ... I think it was the Computer Recreations column in "Scientific American" ... the strategy, since computers were less capable then, was to advance all the bodies by some amount of time (call it a "tick"). When bodies got close, the tick got smaller, so the calculations became more precise; further apart, you could run the solar system on generalities.
One way of doing that is, at each time step delta, to perform two integrations of different accuracies, one giving you x_1 and the other x_2. The error is estimated by the difference |x_1 - x_2|, and you make the error match your tolerance by adjusting the time step.
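A minimal sketch of that idea, assuming step doubling with a simple Euler integrator and a single attractor at the origin (the function names and the softening constant are my own, not from the parent comment): take one full step to get x_1, two half steps to get x_2, and shrink or grow dt based on |x_1 - x_2|.

```python
import numpy as np

def accel(pos):
    # Hypothetical force law: single attractor at the origin with GM = 1,
    # plus a tiny constant in r^2 to avoid division by zero.
    r2 = np.dot(pos, pos) + 1e-12
    return -pos / (r2 * np.sqrt(r2))

def euler_step(pos, vel, dt):
    # One explicit Euler step; kept deliberately simple for the sketch.
    return pos + vel * dt, vel + accel(pos) * dt

def adaptive_step(pos, vel, dt, tol=1e-6):
    # x_1: one full step; x_2: two half steps of the same scheme.
    x1, v1 = euler_step(pos, vel, dt)
    xh, vh = euler_step(pos, vel, dt / 2)
    x2, v2 = euler_step(xh, vh, dt / 2)
    err = np.linalg.norm(x1 - x2)
    if err > tol:
        # Reject: halve the step and tell the caller no time advanced.
        return pos, vel, dt / 2, 0.0
    # Accept the more accurate two-half-step result;
    # grow dt when the error is comfortably below tolerance.
    next_dt = dt * 2 if err < tol / 10 else dt
    return x2, v2, next_dt, dt

# Usage: a circular orbit at radius 1 (speed 1 in these units).
pos, vel, dt, t = np.array([1.0, 0.0]), np.array([0.0, 1.0]), 0.01, 0.0
while t < 1.0:
    pos, vel, dt, advanced = adaptive_step(pos, vel, dt)
    t += advanced
```

In a real code you would use a higher-order pair (e.g. embedded Runge-Kutta) rather than Euler, but the accept/reject and dt-adjustment logic is the same.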
Naturally, this difference becomes large when two objects are close together, since the acceleration induces a large change of velocity, and a smaller time step is needed to keep the error under control.
Yes, the author uses a globally-adaptive time stepper, which is only efficient for very small N. There are adaptive time step methods that are local, and those are used for large systems.
If you see bodies flung out after close passes, three solutions are available: reduce the time step, use a higher-order time integrator, or (the most common method) add regularization. Regularization (often called "softening") removes the singularity by adding a constant to the squared distance. So 1 over zero becomes 1 over a small-ish but finite number.
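A short sketch of the softening trick, assuming Plummer-style softening and units with G = 1 (the function name and the value of EPS are illustrative, not from the comment above): every pairwise 1/r^2 becomes 1/(r^2 + eps^2), so the force stays finite even when two bodies overlap.

```python
import numpy as np

EPS = 1e-2   # softening length: the "constant added to the squared distance"
G = 1.0      # hypothetical units with G = 1

def softened_accel(pos, mass):
    """Acceleration on every body from every other body, with softening."""
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[j] - pos[i]
            r2 = np.dot(d, d) + EPS**2           # the added constant
            acc[i] += G * mass[j] * d / r2**1.5  # finite even at d = 0
    return acc

# Two bodies at the same point: without softening this would be 1/0.
pos = np.array([[0.0, 0.0], [0.0, 0.0]])
mass = np.array([1.0, 1.0])
a = softened_accel(pos, mass)      # finite (zero here, since d = 0)

# Nearly overlapping bodies: large acceleration, but bounded by ~G*m/EPS^2.
pos2 = np.array([[0.0, 0.0], [1e-6, 0.0]])
a2 = softened_accel(pos2, mass)
```

The price is that close encounters are no longer physically accurate below the softening length, which is exactly the "fudge" the reply below admits to.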
>Regularization (often called "softening") removes the singularity by adding a constant to the squared distance. So 1 over zero becomes one over a small-ish and finite number.
IIRC that is what I did in the end. It is a fudge, but it works.
The suggested method of trying two time steps and comparing the accelerations does about twice as many calculations per simulation step, which don't require twice the time thanks to coherent memory access: a reasonable price to pay to ensure that every time step is small enough.
Optimistic fixed time steps will work well almost always and almost everywhere, while accumulating errors behind your back at every episode of close approach.
You need to invent calculus to get partial differential equations. You need to invent PDEs to get approximate solutions to PDEs. You need to invent approximate solutions to PDEs to get computational methods. You need to invent computational methods to explore time integration techniques.
The reason uncertainty goes up the closer we try to observe something in physics is that everything is ultimately waves of specific wavelengths. So if you try to "zoom in" on one of the 'humps' of a sine wave, for example, you don't see anything but a line that gets straighter and straighter, which is basically a loss of information. This is what the Heisenberg principle is about. Not the "Say my Name" one, the other one. hahaha.
And yes, that's a dramatic over-simplification of the uncertainty principle, but it conveys the concept well.