Hello everyone,

This Monday (2023/09/25) at noon, Peng Wang will be presenting in EECS room 2311. Please fill out the food form before attending, so we can buy enough pizza for everyone.

If you have research to share, please volunteer to present using this link. Currently, there is no one scheduled for 2023/10/09 (our next seminar date). As a token of gratitude, presenters get to choose a customized meal from a selection of local restaurants, as listed here.

All seminar info is available on the SPEECS website, and a Google Calendar link with dates/times/presenters can be found here. If you have any questions, you can contact Zongyu Li or me directly, or email speecs.seminar-requests@umich.edu. Suggestions are always welcome :)

Speaker: Peng Wang

Topic: Understanding Hierarchical Representations in Deep Networks

Abstract: Over the past decade, deep learning has proven to be a highly effective method for extracting meaningful features from high-dimensional data. This work attempts to unveil the mystery of feature learning in deep networks. Specifically, for a multi-class classification problem, we explore how the features of training data evolve across the intermediate layers of a trained neural network. We investigate this problem using simple deep linear networks trained on nearly orthogonal data, and we analyze how the output features in each layer concentrate around the means of their respective classes. Remarkably, when the deep linear network is trained using gradient descent from a small orthogonal initialization, we theoretically prove that the features exhibit a linear decay in the measure of within-class feature variability as we move from shallow to deep layers. Moreover, our extensive experiments not only validate our theoretical findings numerically but also reveal a similar pattern in deep nonlinear networks, which aligns well with recent empirical studies.
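(For those curious about the "within-class feature variability" mentioned in the abstract: a common way to quantify it is the ratio of within-class to between-class feature scatter. The sketch below is an illustrative assumption on my part, not the talk's exact definition or setup — the metric, network sizes, and data construction are all hypothetical. Note that with *untrained* scaled-orthogonal layers the ratio stays flat across depth; the decay in the abstract is a property of the trained network.)

```python
import numpy as np

def within_class_variability(H, labels):
    """Ratio of within-class to between-class feature scatter.

    H: (N, d) feature matrix; labels: (N,) integer class labels.
    A rough proxy for the within-class variability measure in the
    abstract (the talk may use a different exact definition).
    """
    classes = np.unique(labels)
    mu_global = H.mean(axis=0)
    within = between = 0.0
    for c in classes:
        Hc = H[labels == c]
        mu_c = Hc.mean(axis=0)
        within += ((Hc - mu_c) ** 2).sum()
        between += len(Hc) * ((mu_c - mu_global) ** 2).sum()
    return within / between

# Toy "nearly orthogonal" data: orthonormal class means plus small noise.
rng = np.random.default_rng(0)
K, n, d, L = 3, 20, 16, 5                               # classes, per-class samples, dim, depth
means = np.linalg.qr(rng.normal(size=(d, K)))[0].T      # (K, d), orthonormal rows
labels = np.repeat(np.arange(K), n)
H = means[labels] + 0.05 * rng.normal(size=(K * n, d))

# Push features through a stack of (untrained) scaled-orthogonal linear
# layers and record the variability at each depth. Because each layer is
# a scaled orthogonal map, the ratio is unchanged layer to layer — the
# decay proven in the talk emerges only after training.
ratios = []
for layer in range(L):
    W = 0.8 * np.linalg.qr(rng.normal(size=(d, d)))[0]
    H = H @ W.T
    ratios.append(within_class_variability(H, labels))
print(ratios)
```

Fully collapsed features (every sample sitting exactly on its class mean) give a ratio of zero, and the metric is invariant under rotations of feature space, which makes it a depth-comparable measure.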

Supplementary link: None

Mirror: http://websites.umich.edu/~speecsseminar/presentations/20230925/

Thanks,

Matt Raymond