Speaker: Giovanni Neglia (INRIA Sophia Antipolis)
Time: 2:00 pm - 3:00 pm
Location: Paris-Rennes Room (EIT Digital)
Many learning problems are formulated as the minimization of some loss function on a training set of examples. Distributed gradient methods on a cluster are often used for this purpose. In this talk we discuss how the variability of task execution times at cluster nodes affects system throughput. In particular, a simple but accurate model allows us to quantify how the time to solve the minimization problem depends on the network of information exchanges among the nodes. Interestingly, we show that, even when communication overhead can be neglected, the clique is not necessarily the most effective topology, as commonly assumed in previous works.
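The setting the abstract describes can be illustrated with a small sketch of decentralized gradient descent, where each node mixes its iterate with its neighbors' via a mixing matrix built from the communication topology. This is only a hypothetical illustration of the general technique, not the speaker's model: the Metropolis weights, the quadratic local losses, and the two example topologies (clique and ring) are all assumptions made here for concreteness.

```python
import numpy as np

# Hypothetical illustration (not from the talk): n nodes each hold a local
# quadratic loss f_i(x) = 0.5 * (x - b_i)^2; the global objective
# (1/n) * sum_i f_i(x) is minimized at mean(b). Each step, every node averages
# its neighbors' iterates through a doubly stochastic mixing matrix W derived
# from the topology, then takes a local gradient step.

def mixing_matrix(adj):
    """Metropolis weights for an undirected graph given a 0/1 adjacency matrix."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()  # rows (and, by symmetry, columns) sum to 1
    return W

def decentralized_gd(W, b, lr=0.1, steps=200):
    x = np.zeros_like(b)            # one scalar iterate per node
    for _ in range(steps):
        x = W @ x - lr * (x - b)    # mix with neighbors, then local gradient step
    return x

n = 8
b = np.arange(n, dtype=float)       # local data; the global optimum is b.mean()

clique = np.ones((n, n)) - np.eye(n)
ring = np.zeros((n, n))
for i in range(n):
    ring[i, (i + 1) % n] = ring[i, (i - 1) % n] = 1.0

for name, adj in [("clique", clique), ("ring", ring)]:
    x = decentralized_gd(mixing_matrix(adj), b)
    print(name, "mean iterate:", x.mean(), "max node deviation:", np.max(np.abs(x - b.mean())))
```

Because W is doubly stochastic, the average of the iterates follows plain gradient descent on the global objective and converges to `b.mean()` on both topologies; what the topology changes is how fast individual nodes agree with that average, which is one of the effects the talk's model quantifies in terms of the exchange network.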
Giovanni Neglia received a Ph.D. degree in Computer Science, Electronics and Telecommunications from the University of Palermo, Italy, in 2005. In 2005 he was a research scholar at the University of Massachusetts Amherst, visiting the Computer Networks Research Group. Since 2006 he has been with Inria Sophia Antipolis, France. He is an associate editor of the Elsevier Computer Communications journal and a recipient of several best paper awards (ITC, Greencomm, NetSciCom, VTC, Valuetools, Bionetics). His current research interests include cache networks and distributed optimization.