Jun-Kun Wang

I am in the fifth year of my CS PhD at Georgia Tech, advised by Professor Jacob Abernethy. Before that, I was a research assistant in the CSIE Department at National Taiwan University (NTU), where I worked with Prof. Shou-De Lin and also worked closely with Dr. Chi-Jen Lu at Academia Sinica. My PhD research focuses on the theoretical understanding of modern algorithms and techniques in optimization and deep learning (e.g., Polyak's momentum, Nesterov's momentum, accelerated gradient methods), as well as on designing new algorithms with provable guarantees. I hope to study reinforcement learning in the future!

Fun fact 1: I enjoy long-distance running. I run a lot!

Fun fact 2: I was a top x% reviewer of NeurIPS 2018 and 2019 (where x is the percentage that earns a free NeurIPS registration) and a top 33% reviewer of ICML 2020. Give me a paper and I can quickly provide a good review!

Reviewer of NeurIPS 2016, 2017, 2018, 2019, 2020; ICML 2017, 2018, 2019, 2020, 2021; COLT 2017, 2018, 2019, 2020, 2021; ALT 2017, 2018, 2019, 2020; and ICLR 2021.

Publications (*Corresponding Author, *Presenting Author):

A Modular Analysis of Provable Acceleration via Polyak's Momentum: Training a Wide ReLU Network and a Deep Linear Network
Escape Saddle Points Faster with Stochastic Momentum
Online Linear Optimization with Sparsity Constraints
Revisiting Projection-Free Optimization for Strongly Convex Constraint Sets
Acceleration through Optimistic No-Regret Dynamics
Faster Rates for Convex-Concave Games
On Frank-Wolfe and Equilibrium Computation
Efficient Sampling-based ADMM for Distributed Data
Parallel Least-Squares Policy Iteration
Robust Inverse Covariance Estimation under Noisy Measurements

Technical Reports:

1. Quickly Finding a Benign Region via Heavy Ball Momentum in Non-Convex Optimization