Upcoming Events

CSE Seminar with University of Washington Ph.D. Candidate Tianyi Zhou


Name: Tianyi Zhou

Date/Time: Thursday, March 25, 2021 at 11:00 am 

BlueJeans Link: https://bluejeans.com/6622130444

Title: Learning like a Human: Curriculum inspired by Learning Dynamics

Abstract: Machine learning (ML) can surpass humans on certain complicated yet specific tasks, given well-processed data. However, most ML methods treat all samples/tasks equally during training, e.g., by taking a random batch per step and running many epochs of SGD over all data. From a human perspective this is highly suboptimal and inefficient: we would never teach children or students in such a way. In contrast, human learning is more strategic, selecting or generating the training content via experienced teachers, collaboration, curiosity/diversity-driven exploration, tracking of memorization, sub-tasking, etc. These strategies remain underexplored in ML. Moreover, most of them can be encoded into the selection and scheduling of data/tasks during learning, a form of intelligence as important as model optimization. A key observation from human learning is that the learning history can indicate the most informative examples/tasks to learn next.

In this talk, I will present curricula built upon training dynamics that can substantially improve ML in the wild, e.g., supervised/semi-supervised/self-supervised learning, robust learning with noisy labels, reinforcement learning, ensemble learning, etc., settings where the data are imperfect and a curriculum can make a big difference. First, we build both empirical and theoretical connections between curriculum learning and the training dynamics of ML models. Our empirical studies show that deep neural networks are fast to memorize some data but equally fast to forget others. Hence, we can accurately identify the easily forgotten data from earlier-stage training dynamics and focus subsequent training on them. Moreover, we find that the consistency of a model's outputs over time for an unlabeled sample indicates the correctness of its pseudo-label in self-supervised learning and predicts future forgetting of learned data. These discoveries are in line with human learning, and combining them leads to more efficient and smarter curricula for a rich class of ML problems. Theoretically, we study how to find a curriculum that optimizes the training dynamics over a data distribution in continuous time. Interestingly, the theoretically derived curriculum matches our empirical strategies and has an insightful interpretation in terms of the tangent/path kernel from deep learning theory. Second, I will give examples of human curriculum strategies that are effective in ML and show how to translate them into discrete-continuous optimization problems that can be solved efficiently. Lastly, I will discuss future directions and potential applications in healthcare, transportation, banking, scientific discovery, social good, etc.
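The forgetting-based idea above, tracking which samples flip from correctly to incorrectly predicted across epochs and prioritizing them in later training, can be illustrated with a minimal sketch. This is not the speaker's implementation; the class and method names below are illustrative assumptions, and the toy correctness data stands in for a real model's per-epoch predictions.

```python
class ForgettingTracker:
    """Count forgetting events: transitions from a correct to an
    incorrect prediction for each sample across training epochs."""

    def __init__(self, n_samples):
        self.prev_correct = [False] * n_samples
        self.forget_counts = [0] * n_samples

    def update(self, correct):
        """Record one epoch's per-sample correctness (list of booleans)."""
        for i, c in enumerate(correct):
            if self.prev_correct[i] and not c:
                self.forget_counts[i] += 1
            self.prev_correct[i] = c

    def hardest(self, k):
        """Return the k sample indices forgotten most often --
        candidates for extra attention in later training."""
        order = sorted(range(len(self.forget_counts)),
                       key=lambda i: -self.forget_counts[i])
        return order[:k]

# Toy usage: per-epoch correctness for 4 samples over 3 epochs.
tracker = ForgettingTracker(4)
tracker.update([True,  True,  False, True])   # epoch 1
tracker.update([True,  False, False, True])   # epoch 2: sample 1 forgotten
tracker.update([True,  True,  False, False])  # epoch 3: sample 3 forgotten

print(tracker.hardest(2))  # -> [1, 3]
```

A real curriculum would feed the `hardest` indices back into the sampler so later epochs concentrate on frequently forgotten data rather than revisiting already-memorized samples uniformly.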


Bio: Tianyi Zhou (https://tianyizhou.github.io) is a Ph.D. candidate in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, advised by Professor Jeff A. Bilmes. His research interests are in machine learning, optimization, and natural language processing. His recent research focuses on transferring human learning strategies, e.g., curriculum and sub-tasking, to machine learning in the wild, where the data are unlabeled, redundant, noisy, biased, or collected via exploration. These techniques can improve supervised/semi-supervised/self-supervised learning, robust learning with noisy data, reinforcement learning, meta-learning, ensemble methods, etc. He has published >50 papers at NeurIPS, ICML, ICLR, AISTATS, NAACL, COLING, KDD, AAAI, IJCAI, Machine Learning (Springer), IEEE TIP, IEEE TNNLS, IEEE TKDE, etc., with >2000 citations. He is the recipient of the Best Student Paper Award at ICDM 2013 and the 2020 IEEE Computer Society Technical Committee on Scalable Computing (TCSC) Most Influential Paper Award.