Avoiding communication in primal and dual block coordinate descent methods

Title: Avoiding communication in primal and dual block coordinate descent methods
Publication Type: Conference Paper
Year of Publication: 2016
Authors: Devarakonda, A., Fountoulakis, K., Demmel, J., & Mahoney, M. W.
Date Published: 12/2016
Abstract

Primal and dual block coordinate descent methods are iterative methods for solving regularized and unregularized optimization problems. Distributed-memory parallel implementations of these methods have become popular in analyzing large machine learning datasets. However, existing implementations communicate at every iteration, which, on modern data center and supercomputing architectures, often dominates the cost of floating-point computation. Recent results on communication-avoiding Krylov subspace methods suggest that large speedups are possible by re-organizing iterative algorithms to avoid communication. We show how applying similar algorithmic transformations to the regularized least-squares problem leads to primal and dual block coordinate descent methods that communicate only every s iterations (where s is a tuning parameter) instead of every iteration. We derive communication-avoiding variants of the primal and dual block coordinate descent methods which reduce the number of synchronizations by a factor of s on distributed-memory parallel machines without altering the convergence rate. Our communication-avoiding algorithms attain modeled strong scaling speedups of 14x and 165x on a modern supercomputer using MPI and Apache Spark, respectively. Our algorithms attain modeled weak scaling speedups of 12x and 396x on the same machine using MPI and Apache Spark, respectively.
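
To make the setting concrete, the following is a minimal serial sketch of the baseline (communication-unaware) primal block coordinate descent method for the regularized least-squares (ridge) problem. It is not the paper's algorithm: the function name, the random block-selection rule, and the exact block subproblem solve are illustrative assumptions. The comments mark the per-iteration inner products that, in a distributed-memory implementation, would require communication; the paper's s-step variants compute the analogous quantities for s consecutive block updates with a single round of communication.

import numpy as np

def bcd_ridge(A, b, lam, block_size, num_iters, seed=0):
    # Plain primal block coordinate descent for
    #   min_x 0.5*||A x - b||^2 + 0.5*lam*||x||^2
    # (illustrative sketch; not the paper's communication-avoiding method).
    m, n = A.shape
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    r = b - A @ x                                  # residual b - A x, maintained incrementally

    for _ in range(num_iters):
        J = rng.choice(n, size=block_size, replace=False)   # random coordinate block (assumed rule)
        AJ = A[:, J]
        # In a distributed setting, forming this small Gram matrix and the
        # right-hand side below is where communication happens every iteration.
        G = AJ.T @ AJ + lam * np.eye(block_size)
        rhs = AJ.T @ (r + AJ @ x[J])
        xJ_new = np.linalg.solve(G, rhs)            # exact solve of the block subproblem
        r += AJ @ (x[J] - xJ_new)                   # cheap incremental residual update
        x[J] = xJ_new
    return x

# Usage example on a small random regularized least-squares instance.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)
x_hat = bcd_ridge(A, b, lam=0.1, block_size=5, num_iters=200)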

Acknowledgment

We thank Grey Ballard, Penporn Koanantakool, and Evangelos Georganas for helpful comments and discussions. AD is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE 1106400. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract Nos. DE-AC02-05CH11231, DE-SC0010200, and DE-SC0008700. This work is supported by Cray, Inc. and the Defense Advanced Research Projects Agency XDATA program. Research partially funded by ASPIRE Lab industrial sponsors and affiliates Intel, Google, Hewlett-Packard, Huawei, LGE, NVIDIA, Oracle, and Samsung. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.
 

URL: https://arxiv.org/pdf/1612.04003.pdf
ICSI Research Group: Big Data