image:wibisono
      Andre Wibisono

GEORGIA INSTITUTE OF TECHNOLOGY

wibisono [at] gatech [dot] edu

I am a postdoctoral researcher in computer science at Georgia Tech, advised by Jake Abernethy. My research interests are in algorithmic methods for machine learning, in particular for problems in optimization, sampling, and game dynamics.

Previously I was a postdoc in the ECE department at UW Madison, where I worked on information theory and statistics, advised by Po-Ling Loh.

I received my PhD in computer science in 2016 and my MA in statistics in 2013 from UC Berkeley, where I was fortunate to be advised by Michael Jordan.

I received my bachelor's degrees in mathematics and in computer science from MIT in 2009. I also received my MEng in computer science from MIT in 2010, advised by Tomaso Poggio.

CV | Google Scholar.


RESEARCH

Fictitious play: Convergence, smoothness, and optimism
Jacob Abernethy, Kevin Lai, and Andre Wibisono
arXiv preprint arXiv:1911.08418, 2019
Proximal Langevin Algorithm: Rapid convergence under isoperimetry
Andre Wibisono
arXiv preprint arXiv:1911.01469, 2019
Last-iterate convergence rates for min-max optimization
Jacob Abernethy, Kevin Lai, and Andre Wibisono
arXiv preprint arXiv:1906.02027, 2019
Rapid convergence of the Unadjusted Langevin Algorithm: Isoperimetry suffices
Santosh Vempala and Andre Wibisono
NeurIPS (Neural Information Processing Systems) 2019
arXiv version | poster
Accelerating Rescaled Gradient Descent: Fast optimization of smooth functions
Ashia Wilson, Lester Mackey, and Andre Wibisono
NeurIPS (Neural Information Processing Systems) 2019
Convexity of mutual information along the Ornstein-Uhlenbeck flow
Andre Wibisono and Varun Jog
ISITA (International Symposium on Information Theory and Applications) 2018
Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem
Andre Wibisono
COLT (Conference on Learning Theory) 2018
Convexity of mutual information along the heat flow
Andre Wibisono and Varun Jog
ISIT (International Symposium on Information Theory) 2018
Information and estimation in Fokker-Planck channels
Andre Wibisono, Varun Jog, and Po-Ling Loh
ISIT (International Symposium on Information Theory) 2017
A variational perspective on accelerated methods in optimization
Andre Wibisono, Ashia Wilson, and Michael Jordan
Proceedings of the National Academy of Sciences, 113, E7351--E7358, 2016. [arXiv version]
Optimal rates for zero-order convex optimization: the power of two function evaluations
John Duchi, Michael Jordan, Martin Wainwright, and Andre Wibisono
IEEE Transactions on Information Theory, 61(5): 2788--2806, May 2015
A Hadamard-type lower bound for symmetric diagonally dominant positive matrices
Christopher Hillar and Andre Wibisono
Linear Algebra and its Applications, 472: 135--141, 2015
Convexity of reweighted Kikuchi approximation
Po-Ling Loh and Andre Wibisono
NIPS (Neural Information Processing Systems) 2014
How to hedge an option against an adversary: Black-Scholes pricing is minimax optimal
Jake Abernethy, Peter Bartlett, Rafael Frongillo, and Andre Wibisono
NIPS (Neural Information Processing Systems) 2013
Streaming variational Bayes
Tamara Broderick, Nicholas Boyd, Andre Wibisono, Ashia Wilson, and Michael Jordan
NIPS (Neural Information Processing Systems) 2013
Maximum entropy distributions on graphs
Christopher Hillar and Andre Wibisono
arXiv preprint arXiv:1301.3321, 2013
Inverses of symmetric, diagonally dominant positive matrices and applications
Christopher Hillar, Shaowei Lin, and Andre Wibisono
arXiv preprint arXiv:1203.6812, 2013
Finite sample convergence rates of zero-order stochastic optimization methods
John Duchi, Michael Jordan, Martin Wainwright, and Andre Wibisono
NIPS (Neural Information Processing Systems) 2012
Minimax option pricing meets Black-Scholes in the limit
Jacob Abernethy, Rafael Frongillo, and Andre Wibisono
STOC (Symposium on Theory of Computing) 2012

THESES

Variational and Dynamical Perspectives on Learning and Optimization
PhD in Computer Science, University of California, Berkeley, May 2016
Maximum Entropy Distributions on Graphs
MA in Statistics, University of California, Berkeley, May 2013
Generalization and Properties of the Neural Response
MEng in Computer Science, Massachusetts Institute of Technology, June 2010