A MapReduce (MR) technique known as MR message passing enables the development of distributed relaxation algorithms for the Laplace and Poisson equations. While a message-based MR relaxation solver can process data grids in a fault-tolerant, scalable, distributed execution, it may also generate a large number of messages to be routed from mapper to reducer tasks. For larger-scale grids, the volume of this intermediate data in the MR network can become a performance bottleneck and offset the benefits of distributed MR execution. In this paper, we introduce two optimizations, local in-mapper aggregation and strip partitioning, that reduce the volume of MR messages. Specifically, we propose relaxation algorithms for the Laplace equation in MR streaming: a basic message-passing algorithm and algorithms optimized with local aggregation and strip partitioning. We empirically evaluate the effect of these optimizations through experiments on Elastic MR, the Amazon MR cloud. Our results should benefit others who wish to develop and optimize MR streaming algorithms for grid-based models such as PDE solvers, optimization solvers, and cellular automata.
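To make the message-passing idea concrete, the following is a minimal sketch of one Jacobi relaxation sweep for the 2D Laplace equation in MR streaming style (stdin/stdout mapper and reducer scripts, as used by Hadoop Streaming). It is an illustration of the general technique, not the authors' implementation: the grid size N, the "i j value" record format, and the SELF/NBR message tags are all assumptions made for this example. Each mapper emits its cell's value as a message to its four grid neighbors; the reducer averages the incoming messages to produce the new interior values, while Dirichlet boundary cells are passed through unchanged.

```python
#!/usr/bin/env python3
# mapper.py -- hypothetical mapper for one Jacobi sweep (basic message passing).
# Input: one grid cell per line, formatted "i j value" (an assumed format).
import sys

N = 64  # assumed grid dimension; cells with i or j in {0, N-1} are boundary

for line in sys.stdin:
    i, j, v = line.split()
    i, j, v = int(i), int(j), float(v)
    # Marker record so the reducer knows the cell's current value
    # (needed to keep fixed boundary values).
    print(f"{i},{j}\tSELF\t{v}")
    # "Messages": route this cell's value to its four neighbors.
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < N and 0 <= nj < N:
            print(f"{ni},{nj}\tNBR\t{v}")
```

```python
#!/usr/bin/env python3
# reducer.py -- hypothetical reducer: the Jacobi update u'(i,j) = mean of the
# four neighbor values for interior cells; boundary cells keep their value.
# Relies on the MR framework sorting/grouping records by key, as Hadoop
# Streaming does between map and reduce.
import sys

N = 64  # must match the mapper's assumed grid dimension

def flush(key, self_val, nbr_sum, nbr_cnt):
    i, j = map(int, key.split(","))
    on_boundary = i in (0, N - 1) or j in (0, N - 1)
    new_val = self_val if on_boundary else nbr_sum / nbr_cnt
    print(f"{i} {j} {new_val}")

cur, self_val, nbr_sum, nbr_cnt = None, 0.0, 0.0, 0
for line in sys.stdin:
    key, tag, v = line.rstrip("\n").split("\t")
    if key != cur:
        if cur is not None:
            flush(cur, self_val, nbr_sum, nbr_cnt)
        cur, self_val, nbr_sum, nbr_cnt = key, 0.0, 0.0, 0
    if tag == "SELF":
        self_val = float(v)
    else:
        nbr_sum += float(v)
        nbr_cnt += 1
if cur is not None:
    flush(cur, self_val, nbr_sum, nbr_cnt)
```

One sweep can be simulated locally with `cat grid.txt | python3 mapper.py | sort | python3 reducer.py`, iterating until convergence. In this sketch, local in-mapper aggregation would correspond to buffering and pre-combining messages addressed to the same key inside the mapper before emitting them, and strip partitioning to keying by contiguous strips of rows so that most neighbor messages stay within one reducer's partition; both cut the intermediate message volume that the abstract identifies as the bottleneck.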