This course studies the complexity of basic algorithmic problems in high-dimensional spaces, including optimization, integration, rounding, centroid computation, learning and sampling. These problems are all hopelessly intractable in general. However, the assumption of convexity leads to polynomial-time algorithms, while still including very interesting special cases, e.g., linear programming, volume computation and many instances of discrete optimization. We will survey the breakthroughs that led to the current state of the art and pay special attention to the discovery that all of the above problems can be reduced to the problem of *sampling* efficiently. In the process of establishing upper and lower bounds on the complexity of sampling in high dimension, we will encounter geometric random walks, isoperimetric inequalities, generalizations of convexity, probabilistic proof techniques and other methods bridging geometry, probability and complexity.
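To make the idea of a geometric random walk concrete, here is a minimal sketch of the *ball walk*, one of the simplest walks for sampling from a convex body given only a membership oracle. The names (`ball_walk`, `in_body`) and the parameter choices below are illustrative, not from the course notes; a real implementation requires a careful choice of the step radius and the number of steps to guarantee mixing.

```python
import math
import random


def ball_walk(in_body, x, delta, steps, rng=random.Random(0)):
    """Run the ball walk for `steps` iterations starting at x.

    At each step, propose a uniform random point in the ball of
    radius `delta` around the current point; move there only if
    the membership oracle `in_body` accepts it, else stay put.
    """
    x = list(x)
    n = len(x)
    for _ in range(steps):
        # Uniform proposal in the delta-ball: Gaussian direction,
        # radius scaled by u^(1/n) so the point is uniform in the ball.
        g = [rng.gauss(0.0, 1.0) for _ in range(n)]
        norm = math.sqrt(sum(c * c for c in g))
        r = delta * rng.random() ** (1.0 / n)
        y = [xi + r * gi / norm for xi, gi in zip(x, g)]
        if in_body(y):  # lazy step: rejected proposals leave x unchanged
            x = y
    return x


# Example: walk inside the cube [-1, 1]^5 (membership is a simple check).
cube = lambda p: all(abs(c) <= 1.0 for c in p)
pt = ball_walk(cube, [0.0] * 5, delta=0.3, steps=2000)
```

After enough steps the walk's distribution approaches the uniform distribution over the body; bounding how many steps suffice is exactly where the isoperimetric inequalities mentioned above enter.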
Notes (including homework exercises).