Introduction

12:00 PM - 1:00 PM EST

Deep neural networks' recent success is as unexpected as it is groundbreaking. The surprise is twofold. First, training large deep nets, a high-dimensional non-convex optimization problem, turns out to be unreasonably easy. Second, grossly overparameterized models, with orders of magnitude more parameters than data examples, generalize unreasonably well. The curse of dimensionality seems to be no more. These double miracles have engendered unchecked growth of deep neural net models, and with it an exploding demand for computational resources. Algorithmic blessings have turned into a computational curse. What holds the promise of breaking the curse yet again? While building more powerful computing hardware to meet the growing demand has become an obvious path forward, more efficient algorithms that beat brute-force up-scaling are equally important. At Cerebras Machine Learning Research, we design the smart brain behind the powerful muscle of our products. By taking advantage of the peculiarities of deep neural networks, we seek to make their computations more efficient. In this talk I will present some recent results of this kind.

Ray and Maria Stata Center
Speakers

Xin Wang, Research Scientist, Cerebras Systems

Xin is a Research Scientist at Cerebras Systems. Over the nearly two decades of his research career, Xin has made a number of key findings and authored numerous papers and patents in a range of areas, including machine learning, neuromorphic engineering, and systems and computational neuroscience. His current research interest is in efficient deep learning.