Next: machine
Up: Literature on Threads in
Previous: CML
GAML is a parallel implementation of lazy ML [17] on a
shared-memory computer.
- Several G-machines cooperate in reducing a graph concurrently
with the intention of speeding up the reduction.
- Annotations in the source program mark the partitions to be
evaluated in parallel. The annotation is `#' (usually called a
fork or spark).
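As an illustration of the idea (not GAML's actual mechanism), a `#' annotation can be modeled as pushing a thunk onto a shared spark pool from which idle G-machines take work; all names below are hypothetical:

```python
import threading
from collections import deque

spark_pool = deque()          # shared pool of sparked thunks
pool_lock = threading.Lock()

def spark(thunk):
    """Model of the `#' annotation: offer a subexpression for
    parallel evaluation instead of forcing it immediately."""
    with pool_lock:
        spark_pool.append(thunk)

def worker_step():
    """An idle G-machine pops one spark and reduces it, if any."""
    with pool_lock:
        thunk = spark_pool.popleft() if spark_pool else None
    return thunk() if thunk is not None else None

spark(lambda: 2 + 3)          # models `# (2 + 3)' in the source
result = worker_step()        # an idle worker picks up the spark
```

A spark is only a hint: if no worker ever pops the thunk, the expression is still evaluated by whoever ultimately demands its value.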
- Automatic parallelization is possible, given information about
what the compiler is doing. Locking and unlocking of tags ensure
that when sharing takes place, a graph being reduced on one
G-machine cannot simultaneously be reduced by another. For good
performance the G-machines must cooperate, which requires
scheduling of the fork pools.
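The locking of tags can be sketched as a per-node tag that a G-machine must claim under a lock before reducing a shared node; a second machine arriving at the same node waits for the first machine's result rather than reducing it twice. This is a hypothetical model, not GAML's implementation:

```python
import threading

UNEVALUATED, BUSY, EVALUATED = "unevaluated", "busy", "evaluated"

class Node:
    """A shared graph node whose tag is protected by a lock."""
    def __init__(self, thunk):
        self.tag = UNEVALUATED
        self.thunk = thunk
        self.value = None
        self.lock = threading.Lock()
        self.done = threading.Event()

def reduce_node(node):
    """Claim the node's tag; if another G-machine already holds it,
    block until that machine's result is available."""
    with node.lock:
        claimed = node.tag == UNEVALUATED
        if claimed:
            node.tag = BUSY          # this machine now owns the node
    if claimed:
        node.value = node.thunk()    # reduce to weak head normal form
        node.tag = EVALUATED
        node.done.set()              # wake any waiting machines
    else:
        node.done.wait()             # another machine is reducing it
    return node.value

n = Node(lambda: 6 * 7)
v = reduce_node(n)
```

The tag doubles as an update marker: once it reads `EVALUATED`, later demands return the cached value without re-entering the critical section's slow path.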
- Heap management divides the heap space into from-space and
to-space pages; a global manager initially allocates one to-space
page to each processor. A processor that runs out of its heap page
can request a further to-space page. If this request cannot be
fulfilled, there is a page fault and the processor enters the
collector state. All the other processors eventually reach the
collector state as they use up their allocated heap pages.
Stop-and-copy garbage collection is performed after swapping the
to-space and from-space pages of the heap. All running processors
collect garbage in parallel, but program execution is halted.
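A minimal sketch of the page scheme described above, with a global manager handing out fixed-size to-space pages and a per-processor bump allocator; the class names, page size, and return conventions are assumptions for illustration only:

```python
PAGE_SIZE = 4  # words per page; unrealistically small, for illustration

class GlobalManager:
    """Hands out to-space pages until none remain."""
    def __init__(self, n_pages):
        self.free_pages = n_pages

    def request_page(self):
        if self.free_pages == 0:
            return None              # "page fault": caller must collect
        self.free_pages -= 1
        return [None] * PAGE_SIZE

class Processor:
    def __init__(self, manager):
        self.manager = manager
        self.page = manager.request_page()  # initial to-space page
        self.used = 0

    def alloc(self, obj):
        """Bump-allocate in the current page; request a fresh page
        when it fills, or signal that collection must start."""
        if self.used == PAGE_SIZE:
            page = self.manager.request_page()
            if page is None:
                return None          # enter the collector state
            self.page, self.used = page, 0
        self.page[self.used] = obj
        self.used += 1
        return obj

mgr = GlobalManager(n_pages=2)
cpu = Processor(mgr)
allocated = [cpu.alloc(i) for i in range(PAGE_SIZE + 1)]
```

After the fifth allocation the processor holds the manager's last page; the next refusal would send it into the collector state, and the remaining processors would follow as their own pages fill.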
- The stacks for unwinding the spine, for intermediate results of
the arithmetic computations, and for remembering the G-machine
states require constant overflow checks in the parallel
implementation.
- Parallelism is seen to be beneficial. However, as the number of
processors increases, garbage collection time also increases,
because more objects must be copied during collection. More
processors also mean more stacks, and hence a higher cost for the
stack overflow checks.
Ananda Amatya
2/16/1999