 Theory of the N-Body Problem
June 9, 1996
Let's look at the Taylor series expansion again. Recall that it is:

    x(t+h) = x(t) + h f(x(t)) + (1/2) h^2 f'(x(t)) + (1/(2·3)) h^3 f''(x(t)) + ... + (1/n!) h^n f^(n-1)(x(t))

with error term

    E = (1/(n+1)!) h^(n+1) f^(n)(x(τ)),   for some τ in (t, t+h).

Now when 0 < h < 1, each term of the Taylor series gets smaller and smaller and eventually converges toward zero. When h > 1, the terms still converge toward zero because 1/n! will decrease faster than h^n will increase.[1] It might appear that more terms of
the Taylor series would be required to get the same accuracy; however, for the N-body problem this is not true. A star moving at 1/2 mile per minute (h = 1/2) could also be considered to be moving at 30 miles per hour (h = 30). Simply changing the units of measure doesn't change the physical system. The nature of the force function f() cancels out any change in the choice of h (as long as the velocities and masses are changed accordingly).
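To make the units claim concrete, here is a minimal sketch. It assumes a simple forward-Euler fall under constant gravity (the function name fall and the specific numbers are illustrative, not from the text): the same physical motion is integrated once with time in seconds and once with time in minutes, with g rescaled accordingly, and the trajectories agree.

```python
# A sketch (assumed example, not from the text): forward-Euler integration
# of x'' = -g in two different unit systems.  Rescaling the units rescales
# g, v, and the step size together, so the physical trajectory is unchanged.

def fall(g, dt, steps):
    """Forward-Euler integration of x'' = -g, starting from rest at x = 0."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        x += dt * v
        v += dt * -g
    return x

# Same 60 seconds of motion, described two ways:
x_sec = fall(9.8, 1.0, 60)                 # seconds: g = 9.8 m/s^2, step = 1 s
x_min = fall(9.8 * 3600, 1.0 / 60.0, 60)   # minutes: g = 9.8*3600 m/min^2, step = 1/60 min

print(abs(x_sec - x_min) < 1e-6)           # the trajectories agree
```

The step-by-step arithmetic is identical in both unit systems, which is the sense in which the choice of h is cancelled by the corresponding change in the force function.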
So does this mean that the order of the method is meaningless? That it is just as good to use a third-order Taylor series expansion as a seventh-order Adams-Bashforth method? Well, sometimes it is better, sometimes it isn't. The critical parts of the error term are the constant (which is unknown in some methods) and the unknown time at which the derivative of f() is evaluated. So we can't say that one method is always going to be better than another.
We can, however, look at how the error term changes as we change the step size. Say we have two methods, one of O(h^2) and one of O(h^3). If we double the step size, then the first method's error term will change to E1(2h) = 8 c1 h^3 x'''(τ), while the other method will have E2(2h) = 16 c2 h^4 x^(4)(τ). When the ratio (E2(2h)/E2(h)) / (E1(2h)/E1(h)) is evaluated, we see that it is equal to 16/8 = 2. So the higher-order method improves faster than the lower-order method as the step size decreases. This also means that it will get worse faster as you decrease the accuracy.
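The scaling argument can be checked numerically. In this sketch the error terms are taken as E1(h) = c1 h^3 and E2(h) = c2 h^4; the unknown constants cancel in the ratios, so they are simply omitted:

```python
# Hypothetical error terms for an O(h^2) and an O(h^3) method:
# E1(h) = c1 * h^3 and E2(h) = c2 * h^4.  The unknown constants c1, c2
# cancel when we compare a doubled step size against the original.

def growth(order, h):
    """Factor by which an order-`order` method's error grows when h doubles."""
    error = lambda step: step ** (order + 1)  # constant omitted -- it cancels
    return error(2 * h) / error(h)

print(growth(2, 0.5))                   # 8.0  -> the O(h^2) method's error grows 8x
print(growth(3, 0.5))                   # 16.0 -> the O(h^3) method's error grows 16x
print(growth(3, 0.5) / growth(2, 0.5))  # 2.0, the ratio quoted in the text
```

Note that the growth factor is independent of h itself; it depends only on the order of the method, which is why the comparison is meaningful.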
The Taylor series' error-term constant tends to be smaller than those of the other methods, and the Runge-Kutta methods tend to have smaller error-term constants than the Adams-Bashforth or Adams-Moulton methods. Thus, for low accuracy levels, the taylor3 method is usually better than the ab7 method.
The last part of the error term, the evaluation of the nth derivative of f(), is also important. Sometimes this value will be very small, sometimes it will be large. Sometimes it will cancel out the error from a previous step, sometimes it will make things worse. In
1. Consider the case where n = 2h. Then h^(2h) will have 2h h's multiplied together, but (2h)! will also have 2h numbers multiplied together, and half of them will be larger than h. This is about the point where n! will be larger than h^n and thus h^n/n! will be less than one.