Understanding Random Deep Networks and Their Pretraining: From Theory to Applications

Wuyang Chen (wuyang (point) chen <at> utexas (point) edu)

UC Berkeley

October 25, 2023, at 11:30 a.m.

Room 3195, André-Aisenstadt Pavilion (simultaneous broadcast on Zoom)


The remarkable advancements in artificial intelligence (AI) owe much of their success to deep learning. Over the past decade, the scientific community has persistently designed and scaled up deep neural networks (DNNs), employing a myriad of pretraining and fine-tuning strategies. Nonetheless, the inherent complexity of network architectures, coupled with their intricate training dynamics, makes it formidably challenging to understand practical DNNs theoretically, both at initialization and throughout training. Moreover, with the growing popularity of foundation models and large language models (LLMs), the gap between deep learning theory and application keeps widening.

This talk centers on this challenge and aims to bridge the gap between the two worlds. We first develop practical principles that characterize how DNN properties (convergence, expressivity, generalization, learning rates, etc.) depend on their architectures, both at random initialization and during pretraining. Subsequently, we pursue broad impact of this theoretical analysis across a wide spectrum of application settings, including but not limited to the design and scaling of foundation models, the solution of scientific problems, and the understanding of LLMs and their in-context learning. More importantly, we target minimal or even zero training cost in our design strategies, facilitating theory-guided acceleration of deep learning.

Recording: https://drive.google.com/file/d/1UZdjxkA5zNRQEDGsgonVkw8xjBXqPgl4/view?usp=drive_link


To receive the weekly announcements by email, visit http://rali.iro.umontreal.ca/rali/?q=fr/node/1631

List of other seminars