In this work we investigate the alternating direction method of multipliers (ADMM) for solving regression problems with sparse grids on parallel and distributed systems. ADMM has been used successfully in a number of applications for the parallel processing of large datasets. While the method allows for parallelization both in the data and in the degrees of freedom, research has so far focused mostly on the first approach. In this work we consider and compare both approaches.
On the one hand, we present the grid-splitting algorithm for hierarchical sparse grids, which we employ to deal with vast datasets and high dimensions. The hierarchical basis of sparse grids is inherently difficult to parallelize in the degrees of freedom, as ignoring the hierarchical structure affects stability. Here we use the property that a regular sparse grid of maximum level n in d dimensions can be split into two d-dimensional grids of level n−1 and one (d−1)-dimensional grid of level n. The method also converges if asynchronous one-sided communication is used. It thus increases the robustness of the algorithm and introduces fault tolerance, both essential properties of parallel algorithms for next-generation supercomputers.
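The splitting property above can be checked numerically through the point counts it implies. The following sketch assumes the standard regular sparse grid without boundary points (hierarchical subspaces W_l with level multi-indices l_i ≥ 1 and |l|₁ ≤ n + d − 1); under that assumption, the number of grid points satisfies the same two-plus-one recurrence as the split itself. The function name is illustrative, not from the paper.

```python
from itertools import product

def sparse_grid_points(n, d):
    """Point count of a regular sparse grid of level n in d dimensions
    (hierarchical basis, no boundary points). Each subspace with level
    multi-index l (l_i >= 1) contributes prod(2**(l_i - 1)) points; the
    sparse grid keeps all subspaces with |l|_1 <= n + d - 1."""
    total = 0
    for l in product(range(1, n + 1), repeat=d):
        if sum(l) <= n + d - 1:
            count = 1
            for li in l:
                count *= 2 ** (li - 1)
            total += count
    return total

# Splitting recurrence: a level-n grid in d dimensions has as many points
# as two level-(n-1) grids in d dimensions plus one level-n grid in d-1
# dimensions.
for n in range(2, 6):
    for d in range(2, 5):
        assert sparse_grid_points(n, d) == \
            2 * sparse_grid_points(n - 1, d) + sparse_grid_points(n, d - 1)
```

Applying the recurrence repeatedly yields a binary-tree decomposition of the original grid into smaller, independently solvable subgrids, which is what makes the degree-of-freedom parallelization possible.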
On the other hand, we study the data parallelization of the sparse grid ADMM algorithm using one-sided communication. While the reduction in parallel runtime is smaller than for grid splitting, this method requires no changes to the sparse grid learning algorithm and can be used with existing software. Due to its fast convergence, the method is well suited to large datasets.
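The data-parallel scheme follows the standard consensus-ADMM pattern: each worker fits a local model on its data partition, and a shared consensus variable ties the local models together. The sketch below illustrates this pattern for a generic regularized least-squares problem with dense matrices, not the authors' sparse grid implementation; the function name, the ridge regularization, and the parameter choices are assumptions for illustration.

```python
import numpy as np

def consensus_admm_ridge(blocks, lam=0.1, rho=1.0, iters=300):
    """Data-parallel consensus ADMM for the ridge problem
    (1/2) * sum_i ||A_i x - b_i||^2 + lam * ||x||^2, where each
    block (A_i, b_i) is one data partition. Each partition keeps a
    local model x_i; z is the shared consensus model."""
    d = blocks[0][0].shape[1]
    N = len(blocks)
    z = np.zeros(d)
    xs = [np.zeros(d) for _ in blocks]
    us = [np.zeros(d) for _ in blocks]  # scaled dual variables
    # Precompute the local normal-equation systems (A_i^T A_i + rho I).
    facts = [(A.T @ A + rho * np.eye(d), A.T @ b) for A, b in blocks]
    for _ in range(iters):
        # Local solves: embarrassingly parallel across data partitions.
        for i, (M, Atb) in enumerate(facts):
            xs[i] = np.linalg.solve(M, Atb + rho * (z - us[i]))
        # Consensus step: minimizes
        # lam*||z||^2 + (rho/2) * sum_i ||x_i - z + u_i||^2.
        z = rho * sum(x + u for x, u in zip(xs, us)) / (2 * lam + N * rho)
        # Dual updates push the local models toward consensus.
        for i in range(N):
            us[i] += xs[i] - z
    return z
```

Only the d-dimensional vectors x_i + u_i cross partition boundaries in the consensus step, which is why the pattern maps naturally onto one-sided communication: each worker can deposit its contribution into a remote accumulation buffer without synchronizing with the others.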