Integrating Message-passing and Shared-memory: Early Experience
Author(s)
Kranz, David; Johnson, Kirk; Agarwal, Anant; Kubiatowicz, John; Lim, Beng-Hong
Abstract
This paper discusses some of the issues involved in implementing a shared-address space programming model on large-scale, distributed-memory multiprocessors. Because message-passing mechanisms are much more efficient than shared-memory loads and stores for certain types of interprocessor communication and synchronization operations, we argue for building multiprocessors that efficiently support both shared-memory and message-passing mechanisms. We describe an architecture, Alewife, that integrates support for shared memory and message passing through a simple interface. We expect the compiler and runtime system to cooperate in selecting whichever hardware mechanism is most efficient for each operation. We report on both integrated and exclusively shared-memory implementations of our runtime system and one complete application; the final paper will contain results for other applications as well. The integrated runtime system drastically cuts the cost of communication incurred by scheduling, load balancing, and certain synchronization operations. We also present some preliminary performance results comparing the two systems.
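To illustrate the abstract's central argument, the following C sketch compares the network traffic of handing a small task descriptor to another node through shared-memory loads and stores versus sending it in a single message. This is a back-of-the-envelope cost model only; the traversal counts, function names, and descriptor size are hypothetical and do not reproduce Alewife's actual interface or measured costs.

/* Rough cost model (assumption): a remote cache miss costs a request/reply
 * round trip; an active-style message costs one one-way traversal and
 * notifies the receiver on arrival. Names and numbers are illustrative. */
#include <stdio.h>

#define ROUND_TRIP 2   /* network traversals per remote load or store miss */
#define ONE_WAY    1   /* traversals for one message carrying the payload  */

/* Handoff via shared memory: the producer's stores and the consumer's
 * loads each miss to the remote node, plus a flag write and a poll. */
static int handoff_shared_memory(int words)
{
    int traversals = 0;
    traversals += words * ROUND_TRIP;  /* producer writes the descriptor   */
    traversals += ROUND_TRIP;          /* producer sets the "ready" flag   */
    traversals += ROUND_TRIP;          /* consumer's poll observes the flag */
    traversals += words * ROUND_TRIP;  /* consumer reads the descriptor    */
    return traversals;
}

/* Handoff via message passing: one message carries the descriptor and
 * implicitly signals the receiver, so no separate flag or polling. */
static int handoff_message(int words)
{
    (void)words;  /* payload rides inside the single message */
    return ONE_WAY;
}

int main(void)
{
    int words = 4;  /* hypothetical 4-word task descriptor */
    printf("shared-memory handoff: %d traversals\n", handoff_shared_memory(words));
    printf("message-based handoff: %d traversals\n", handoff_message(words));
    return 0;
}

Under these assumptions the message-based handoff needs one network traversal where the shared-memory version needs on the order of twenty, which is the kind of gap that motivates using messages for scheduling, load balancing, and synchronization while retaining shared memory for fine-grained data access.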
Date issued
1992-10
Series/Report no.
MIT-LCS-TM-478