
We have demonstrated that disk can be used for dynamic
storage in an "out-of-core" implementation of an astrophysical
treecode. An 80-million-body model can run on a cluster of 16
PC-class systems. Simulating such a model over the age of the
Universe will take a couple of months, but one should recall that the
computer is extremely economical, costing under $60,000. One can
use cost-effective processors, modest amounts of DRAM, and
much larger amounts of disk to address N-body problems that had
heretofore been accessible only on the largest of parallel
supercomputers. On the other hand, one can now imagine integrating
extraordinarily large systems (billions of particles)
on large MPPs with independently addressable disks.
Finally, we observe that memory hierarchies are getting deeper, with
the gap between processor clock rates and memory latency continuing to
widen. Out-of-core methods are designed to tolerate the extreme
latencies of disk systems, but they may also be adapted to make
effective use of caches and memory hierarchies in more traditional
systems. Some approaches to the next generation of "petaflop"
computers [5] will exhibit latencies (measured in clock
ticks) as large as those we observe today in disk systems, so we might
expect that optimal algorithms on those systems will be closely
related to the out-of-core algorithms of today.
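The central latency-tolerance technique alluded to above can be illustrated with a minimal sketch (not the paper's implementation): double-buffering, in which the next block of bodies is prefetched from slow storage while the current block is being processed. The names `read_block`, `storage`, and `compute_on` are hypothetical stand-ins for a real out-of-core I/O layer.

```python
# Sketch of out-of-core latency hiding via double-buffering:
# overlap the (slow) read of block i+1 with computation on block i.
import threading

def read_block(storage, i):
    # Stand-in for a high-latency disk read of block i.
    return storage[i]

def process_out_of_core(storage, compute_on):
    """Visit every block, overlapping each read with the previous compute."""
    n = len(storage)
    if n == 0:
        return
    prefetched = {}

    def prefetch(i):
        prefetched[i] = read_block(storage, i)

    cur = read_block(storage, 0)
    for i in range(n):
        t = None
        if i + 1 < n:
            # Issue the next read asynchronously before computing.
            t = threading.Thread(target=prefetch, args=(i + 1,))
            t.start()
        compute_on(cur)  # computation overlaps the pending read
        if t is not None:
            t.join()
            cur = prefetched.pop(i + 1)
```

The same structure applies one level up the memory hierarchy: replace "disk block" with "cache line or page" and the prefetch with a software or hardware prefetch instruction, which is the adaptation to conventional memory hierarchies suggested above.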

*John Salmon*

Wed Jan 1 23:00:51 PST 1997