One way of doing that is, at each time step delta, to perform two integrations with different accuracies: one gives you x_1 and the other x_2. The error is estimated by the difference |x_1 - x_2|, and you keep it within your tolerance by adjusting the time step.
Naturally, this difference becomes large when two objects are close together, since the acceleration induces a large change in velocity, and a smaller time step is needed to keep the error under control.
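For concreteness, here is a minimal sketch of that step-doubling idea in Python. The crude Euler integrator, the tolerance, and all names are mine for illustration (a real code would use a higher-order embedded pair); the point is just: compare one full step against two half steps and halve dt until the difference is within tolerance.

```python
import numpy as np

G = 1.0  # gravitational constant in simulation units (my assumption)

def accel(pos, mass):
    """Pairwise Newtonian accelerations, plain O(N^2) direct sum."""
    a = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]                       # vectors from body i to every body
        r2 = np.einsum('ij,ij->i', d, d)
        r2[i] = 1.0                            # dodge 0/0 on the self term
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                        # no self-interaction
        a[i] = G * np.sum((mass * inv_r3)[:, None] * d, axis=0)
    return a

def euler_step(pos, vel, mass, dt):
    """One explicit Euler step; deliberately crude, only here to show the error estimate."""
    a = accel(pos, mass)
    return pos + vel * dt, vel + a * dt

def adaptive_step(pos, vel, mass, dt, tol=1e-6, dt_max=1e-2):
    """Advance the system, halving dt until |x_1 - x_2| is within tol."""
    while True:
        p1, v1 = euler_step(pos, vel, mass, dt)        # x_1: one full step
        ph, vh = euler_step(pos, vel, mass, dt / 2)    # x_2: two half steps,
        p2, v2 = euler_step(ph, vh, mass, dt / 2)      #      the more accurate estimate
        if np.max(np.abs(p1 - p2)) <= tol:
            return p2, v2, min(dt * 2, dt_max)         # accept, and let dt grow again
        dt /= 2                                        # error too large: retry smaller
```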
Yes, the author uses a globally-adaptive time stepper, which is only efficient for very small N. There are adaptive time step methods that are local, and those are used for large systems.
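For anyone wondering what "local" means here: a common approach gives each body its own time step based on its current acceleration, quantised to powers of two so whole blocks of bodies can be advanced together. A rough sketch, where the criterion, eta, and all names are generic illustration rather than anything from the linked code:

```python
import numpy as np

def per_body_dt(acc, softening=1e-3, eta=0.02, dt_max=1e-2):
    """Pick an individual time step for each body from |a_i|, then round
    down to a power-of-two fraction of dt_max so bodies sharing a step
    can be advanced together in blocks."""
    a_mag = np.linalg.norm(acc, axis=1)
    dt = eta * np.sqrt(softening / np.maximum(a_mag, 1e-30))  # smaller dt where a is large
    dt = np.minimum(dt, dt_max)
    k = np.ceil(np.log2(dt_max / dt)).astype(int)             # quantise to dt_max / 2^k
    return dt_max / 2 ** k
```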
If you see bodies flung out after close passes, three solutions are available: reduce the time step, use a higher-order time integrator, or (the most common method) add regularization. Regularization (often called "softening") removes the singularity by adding a constant to the squared distance. So 1 over zero becomes one over a small-ish and finite number.
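In practice that is a one-line change to the direct-sum force loop: add eps^2 to the squared separation before raising it to the -3/2 power, so the denominator bottoms out at eps^3 instead of zero. A sketch, where eps, G, and the function name are my own choices:

```python
import numpy as np

G = 1.0  # simulation units (assumption)

def softened_accel(pos, mass, eps=1e-3):
    """Direct-sum gravity with Plummer-style softening:
    a_i = sum_j G * m_j * (r_j - r_i) / (|r_j - r_i|^2 + eps^2)^(3/2)."""
    a = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos - pos[i]
        r2 = np.einsum('ij,ij->i', d, d) + eps ** 2   # never zero now
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                               # still skip the self term
        a[i] = G * np.sum((mass * inv_r3)[:, None] * d, axis=0)
    return a
```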
>Regularization (often called "softening") removes the singularity by adding a constant to the squared distance. So 1 over zero becomes one over a small-ish and finite number.
IIRC that is what I did in the end. It's a fudge, but it works.
The suggested method of trying two time steps and comparing the accelerations roughly doubles the number of calculations per simulation step, though it doesn't take twice the time thanks to coherent memory access: a reasonable price to pay to ensure that every time step is small enough.
An optimistic fixed time step will appear to work well almost always and almost everywhere, while quietly accumulating errors behind your back at every close approach.