Theory of Computation Colloquium 2006

Room MiRC 102A at 2:00 p.m. on Monday unless otherwise noted.

Date | Speaker | Topic
Sep 11 | Subhash Khot | New Results for Learning Noisy Parities and Halfspaces
Sep 18 | Ehud Kalai | An Approach to Bounded Rationality
Sep 21 | Alan Frieze | Line of Sight Networks (Joint ACO/TOC Colloquium)
Oct 2 | Juan Vera | On the Average Case Performance of some Approximation Algorithms for the Facility Location Problem
Oct 9 | Roman Vershynin | Beyond Hirsch Conjecture: on Smoothed Complexity of the Simplex Algorithm
Oct 23 | No talk | FOCS '06
Oct 30 | Christine Heitsch | The Challenge of Coding Biological Information in Nucleotide Sequences
Nov 6 | Venkat Guruswami | List Decoding with Optimal Rate: Folded Reed-Solomon Codes
Nov 10 | Santosh Vempala, Richard Karp, Ravi Kannan, Richard Lipton | ARC ThinkTank Distinguished Lectures
Nov 13 | Adam Kalai | Approximation-Algorithms Turned Bandit
Nov 20 | Eric Vigoda, Jennifer Chayes, Christian Borgs, Milena Mihail | Kickoff for the Special Focus on Discrete Random Systems


Sep 11, Subhash Khot, Georgia Tech

New Results for Learning Noisy Parities and Halfspaces

I will present some new results on the learnability of parities and halfspaces in the presence of noise. The results can be informally stated as follows:
(1) In the uniform distribution setting, learning juntas on k variables reduces to learning noisy parities on k variables, and learning DNFs reduces to learning noisy parities on O(log n) variables (the noisy-parity problem itself is sketched below).
(2) Learning halfspaces: given a set of labeled examples in R^n such that some halfspace correctly classifies a 1-ε fraction of the examples, it is NP-hard to find a halfspace that is correct on a 1/2+ε fraction of the examples, for any ε > 0. This is optimal.
(3) It is hard to PAC-learn majorities of halfspaces assuming that the Ajtai-Dwork lattice-based cryptosystem is secure (which amounts to assuming that the shortest vector problem on n-dimensional lattices is hard to approximate within a factor of n^O(1)).
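A minimal sketch of the noisy-parity problem referred to in (1), assuming nothing from the talk beyond the problem statement: the label is the parity of an unknown size-k subset of coordinates, flipped with some noise probability, and brute force over all size-k subsets recovers it inefficiently. The hardness results concern doing better than this.

```python
# Brute-force learning of a noisy parity on k of n variables (a toy sketch;
# the reductions in the talk are about beating this exhaustive search).
import itertools, random

def noisy_parity_samples(n, secret, noise, m, rng):
    """Generate m samples (x, parity of x on `secret`, flipped w.p. `noise`)."""
    samples = []
    for _ in range(m):
        x = [rng.randint(0, 1) for _ in range(n)]
        label = sum(x[i] for i in secret) % 2
        if rng.random() < noise:
            label ^= 1
        samples.append((x, label))
    return samples

def learn_parity_brute_force(samples, n, k):
    """Return the size-k subset whose parity best agrees with the labels."""
    best, best_agree = None, -1
    for S in itertools.combinations(range(n), k):
        agree = sum((sum(x[i] for i in S) % 2) == y for x, y in samples)
        if agree > best_agree:
            best, best_agree = S, agree
    return best

rng = random.Random(0)
samples = noisy_parity_samples(n=10, secret=(1, 4, 7), noise=0.1, m=500, rng=rng)
print(learn_parity_brute_force(samples, n=10, k=3))  # likely recovers (1, 4, 7)
```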

Joint work with Vitaly Feldman, Parikshit Gopalan, and Ashok Kumar Ponnuswami. To appear in FOCS'06.


Sep 18, Ehud Kalai, Northwestern University

An Approach to Bounded Rationality

A central question in game theory, learning, and other fields is how a rational intelligent agent should behave in a complex environment, given that it cannot perform unbounded computations. We study strategic aspects of this question by formulating a simple model of a game with additional costs (computational or otherwise) for each strategy. While a zero-sum game with strategy costs is no longer zero-sum, we show that its Nash equilibria have an interesting structure and the game has a new type of "value." We also show that potential games with strategy costs remain potential games. Both zero-sum and potential games with strategy costs maintain a very appealing property: simple learning dynamics converge to Nash equilibrium.
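To see concretely why strategy costs break the zero-sum property, consider a toy example; the game (matching pennies) and the cost values here are illustrative assumptions, not taken from the talk.

```python
# Matching pennies with hypothetical per-strategy costs: the players'
# payoffs no longer sum to zero, yet (per the talk) equilibria of such
# games retain special structure and a notion of "value".
import itertools

A = [[1, -1], [-1, 1]]          # row player's payoff in the base zero-sum game
row_cost = [0.0, 0.3]           # illustrative cost of each row strategy
col_cost = [0.2, 0.0]           # illustrative cost of each column strategy

for i, j in itertools.product(range(2), range(2)):
    u_row = A[i][j] - row_cost[i]
    u_col = -A[i][j] - col_cost[j]
    print(f"({i},{j}): row={u_row:+.1f}, col={u_col:+.1f}, sum={u_row+u_col:+.1f}")
```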


Sep 21, Alan Frieze, Carnegie Mellon University

Line of Sight Networks

Random geometric graphs have been one of the fundamental models for reasoning about wireless networks: one places n points at random in a region of the plane (typically a square or circle), and then connects pairs of points by an edge if they are within a fixed distance of one another. In addition to giving rise to a range of basic theoretical questions, this class of random graphs has been a central analytical tool in the wireless networking community.
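The standard model just described is simple enough to state in a few lines of code; the values of n and the connection radius r below are illustrative.

```python
# Random geometric graph: n uniform points in the unit square, with an
# edge between every pair of points at distance at most r.
import random, math

def random_geometric_graph(n, r, rng):
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if math.dist(pts[i], pts[j]) <= r]
    return pts, edges

rng = random.Random(1)
pts, edges = random_geometric_graph(n=200, r=0.1, rng=rng)
print(len(edges))
```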

For many of the primary applications of wireless networks, however, the underlying environment has a large number of obstacles, and communication can only take place among nodes when they are close in space and when they have line-of-sight access to one another --- consider, for example, urban settings or large indoor environments. In such domains, the standard model of random geometric graphs is not a good approximation of the true constraints, since it is not designed to capture the line-of-sight restrictions.

Here we propose a random-graph model incorporating both range limitations and line-of-sight constraints, and we prove asymptotically tight results for k-connectivity. Specifically, we consider points placed randomly on a grid (or torus), such that each node can see up to a fixed distance along the row and column it belongs to. (We think of the rows and columns as "streets" and "avenues" among a regularly spaced array of obstructions.) Further, we show that when the probability of node placement is a constant factor larger than the threshold for connectivity, near-shortest paths between pairs of nodes can be found, with high probability, by an algorithm using only local information. In addition to analyzing connectivity and k-connectivity, we also study the emergence of a giant component, as well as an approximation question, in which we seek to connect a set of given nodes in such an environment by adding a small set of additional "relay" nodes.
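A simplified sketch of the line-of-sight model as described above; the grid size, placement probability p, and sight range w are illustrative, and torus wrap-around is omitted for brevity.

```python
# Line-of-sight graph on an n x n grid: each cell holds a node with
# probability p, and two nodes are adjacent when they share a row or a
# column ("street" or "avenue") and lie within distance w of each other.
import random

def line_of_sight_graph(n, p, w, rng):
    nodes = [(r, c) for r in range(n) for c in range(n) if rng.random() < p]
    edges = []
    for i, (r1, c1) in enumerate(nodes):
        for r2, c2 in nodes[i + 1:]:
            if (r1 == r2 and abs(c1 - c2) <= w) or \
               (c1 == c2 and abs(r1 - r2) <= w):
                edges.append(((r1, c1), (r2, c2)))
    return nodes, edges

rng = random.Random(2)
nodes, edges = line_of_sight_graph(n=40, p=0.15, w=5, rng=rng)
print(len(nodes), len(edges))
```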

Joint work with Jon Kleinberg, R. Ravi and Warren Debaney.


Oct 2, Juan Vera, Georgia Tech (via CMU)

On the Average Case Performance of some Approximation Algorithms for the Facility Location Problem

In combinatorial optimization, a popular approach to NP-hard problems is the design of approximation algorithms. These algorithms typically run in polynomial time and are guaranteed to produce a solution which is within a known multiplicative factor of optimal. Conventional wisdom holds that "in practice" approximation algorithms will produce solutions closer to optimal than their proven guarantees. In this talk, we use the rigorous-analysis-of-heuristics framework to investigate this conventional wisdom.

We analyze the performance of three related approximation algorithms for the uncapacitated facility location problem (from [Jain, Mahdian, Markakis, Saberi, Vazirani, 2003] and [Mahdian, Ye, Zhang, 2002]) when each is applied to instances created by placing n points uniformly at random in the unit square. We find that, with high probability, these three algorithms do not find asymptotically optimal solutions, while, also with high probability, a simple plane-partitioning heuristic does find an asymptotically optimal solution.
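A hedged sketch of what such a plane-partitioning heuristic might look like; the cell size, opening cost, and assignment rule here are assumptions for illustration, not the paper's exact heuristic.

```python
# Plane-partitioning heuristic (illustrative version): split the unit
# square into s x s cells, open one facility at the center of each
# nonempty cell, and connect every point to its cell's facility.
import random, math

def partition_heuristic(points, s, opening_cost):
    cells = {}
    for x, y in points:
        key = (min(int(x * s), s - 1), min(int(y * s), s - 1))
        cells.setdefault(key, []).append((x, y))
    total = opening_cost * len(cells)
    for (cx, cy), pts in cells.items():
        fx, fy = (cx + 0.5) / s, (cy + 0.5) / s   # facility at cell center
        total += sum(math.dist(p, (fx, fy)) for p in pts)
    return total

rng = random.Random(3)
points = [(rng.random(), rng.random()) for _ in range(1000)]
print(partition_heuristic(points, s=10, opening_cost=0.5))
```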

Joint work with Abraham D. Flaxman and Alan M. Frieze.


Oct 9, Roman Vershynin, UC Davis

Beyond Hirsch Conjecture: on Smoothed Complexity of the Simplex Algorithm

Smoothed analysis of algorithms is concerned with the expected running time of an algorithm under slight random perturbations of arbitrary inputs. Spielman and Teng proved that the (shadow-vertex) simplex method has polynomial smoothed complexity: under a slight random perturbation of an arbitrary linear program, the simplex method finds the solution after a walk on the polytope(s) whose expected length is polynomial in the number of constraints n, the number of variables d, and the inverse standard deviation of the perturbation, 1/sigma.
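For concreteness, the smoothed setup can be sketched as follows. This uses scipy's general-purpose LP solver rather than the shadow-vertex method, and the dimensions and sigma are illustrative; it shows only the perturbation model, not the algorithm under analysis.

```python
# Smoothed-analysis setup: take an arbitrary LP, add Gaussian noise of
# standard deviation sigma to the constraint data, and solve the
# perturbed instance.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d, sigma = 50, 5, 0.01           # constraints, variables, noise level

c = rng.standard_normal(d)          # arbitrary objective
A = rng.standard_normal((n, d))     # arbitrary constraint matrix
b = np.ones(n)

A_pert = A + sigma * rng.standard_normal(A.shape)   # the smoothed instance
b_pert = b + sigma * rng.standard_normal(b.shape)

res = linprog(c, A_ub=A_pert, b_ub=b_pert, bounds=[(-1, 1)] * d)
print(res.status, res.fun)
```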

Our main result is that the length of the walk in the simplex method is actually polylogarithmic, rather than polynomial, in the number of constraints n. This shows that the Hirsch conjecture bound of n-d on the length of a walk on a polytope is not a limitation for smoothed linear programming: random perturbations create short paths between vertices.

Smoothed analysis is related to theoretical open problems of independent interest, such as estimating the condition number of a random matrix, and the problem of "decoupling" phase-I from phase-II in linear programming. We will discuss some approaches to these problems based on concentration inequalities.


Oct 30, Christine Heitsch, Georgia Tech

The Challenge of Coding Biological Information in Nucleotide Sequences

In the nucleus, lengthy DNA molecules have a canonical double-stranded helix structure well-adapted for information storage and retrieval. In the laboratory, short single-stranded DNA sequences have a variety of possible applications, ranging from microarrays to nanomolecular structures and DNA computation. Furthermore, ongoing discoveries highlight the many vital regulatory and catalytic functions performed by different RNA molecules, other than mediating the production of proteins from DNA. Thus, increasing interest is focused on the self-bonding of RNA molecules and on the design, analysis, and prediction of RNA secondary structures.

With the goal of understanding and manipulating the structural / functional information encoded by the selective base pair hybridization of single-stranded DNA and RNA molecules, we consider algorithms for the design of RNA secondary structures and DNA code words based on discrete mathematical models. Our combinatorial approach relies on reducing the problem of designing an RNA sequence with a specified set of base pairings to the question of creating a set of nucleotide code words of sufficient quality. Thus, the essential challenge in both cases is producing sets of short oligonucleotides whose elements are strongly differentiated from each other with respect to the biochemical energetics.
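As a crude sketch of the code-word design task, Hamming distance below stands in as a toy proxy for the biochemical energetics; this simplification, and the greedy construction, are assumptions for illustration rather than the talk's actual model.

```python
# Greedily collect length-L words over {A,C,G,T} that are pairwise far
# apart in Hamming distance, i.e., strongly differentiated code words.
import itertools

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def greedy_dna_code(length, min_dist, limit):
    code = []
    for word in itertools.product("ACGT", repeat=length):
        if all(hamming(word, w) >= min_dist for w in code):
            code.append(word)
            if len(code) == limit:
                break
    return ["".join(w) for w in code]

print(greedy_dna_code(length=6, min_dist=4, limit=10))
```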


Nov 6, Venkat Guruswami, University of Washington

List Decoding with Optimal Rate: Folded Reed-Solomon Codes

Suppose you want to communicate a message of k packets over a noisy communication channel. So you judiciously encode it as a redundant collection of n = ck packets and transmit these. What is the smallest number of correct packets one needs to receive in order to have any hope of recovering the message?

Well, clearly one needs at least k correct packets. In this talk, I will describe an encoding scheme that attains this information-theoretic limit: for any desired constant eps > 0, it enables recovery of the message as long as at least k(1+eps) packets are received intact. The location of the correct packets and the errors on the remaining packets can be picked adversarially by the channel.
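The trade-off can be checked with back-of-the-envelope arithmetic; the particular values of k, c, and eps below are illustrative.

```python
# Encode k packets into n = c*k; the scheme tolerates any error pattern
# as long as k*(1+eps) packets survive, i.e., a fraction R*(1+eps) of
# the transmission, where R = k/n is the rate.
k, c, eps = 100, 4, 0.05
n = c * k
rate = k / n
needed = k * (1 + eps)              # correct packets required
tolerated_errors = n - needed       # adversarial corruptions survived
print(f"rate {rate:.2f}, need {needed:.0f}/{n} intact, "
      f"tolerate {tolerated_errors:.0f} errors "
      f"({tolerated_errors/n:.0%} error fraction, vs. 1-R = {1-rate:.0%})")
```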

This achieves the optimal trade-off (called "capacity") between redundancy and error-resilience for a malicious noise model where the channel can corrupt the transmitted symbols arbitrarily subject to a bound on the total number of errors. These results are obtained in an error-recovery model called list decoding. The talk will introduce and motivate the problem of list decoding, and then give a peek into the algebraic ideas and constructions that lead to the above result.

Based on joint work with Atri Rudra.


Nov 13, Adam Kalai, Weizmann Institute

Approximation-Algorithms Turned Bandit

In a multi-armed bandit problem, a decision maker makes a sequence of decisions, online, in the face of uncertainty. For example, consider a traveling salesman who repeatedly must choose a route to visit a set of cities. Each period, she decides on a route and then finds out the total time that it took. Her goal is to achieve low average time over many periods. More generally, the decision-maker has a known (possibly infinite) set of decisions D, and there is an unknown sequence of cost functions chosen from a known family of functions F.

Unfortunately, many problems, like the TSP example above, cannot be solved efficiently offline, even if everything were known in advance, and one must use approximation algorithms instead. We give a general reduction, showing how to solve *online* linear optimization problems (ones where costs scale linearly, like the online TSP example) using an approximation algorithm designed for the *offline* (full-information) setting. Performance is guaranteed to be nearly as good as the average performance of the best single decision, e.g., the best route, if one had to use the same decision each period, where the best is chosen with the benefit of hindsight.
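For intuition, here is the classical follow-the-perturbed-leader reduction in its simplest form, where the offline oracle is exact; the talk's contribution is making this style of reduction work when the oracle is only an approximation algorithm, which this sketch does not capture.

```python
# Follow-the-perturbed-leader: each period, ask the offline oracle for
# the best decision against the (randomly perturbed) cumulative costs
# seen so far, then pay this period's cost for that decision.
import random

def fpl(decisions, cost_seq, oracle, eta, rng):
    dim = len(cost_seq[0])
    cum = [0.0] * dim                 # cumulative cost vector so far
    total = 0.0
    for cost in cost_seq:
        noise = [rng.uniform(0, 1 / eta) for _ in range(dim)]
        perturbed = [a + z for a, z in zip(cum, noise)]
        d = oracle(decisions, perturbed)          # offline solver call
        total += sum(di * ci for di, ci in zip(d, cost))
        cum = [a + b for a, b in zip(cum, cost)]
    return total

def exact_oracle(decisions, cost):   # stand-in for an approximation algorithm
    return min(decisions, key=lambda d: sum(di * ci for di, ci in zip(d, cost)))

rng = random.Random(4)
decisions = [(1, 0), (0, 1)]          # two "routes" encoded as 0/1 vectors
costs = [(rng.random(), rng.random()) for _ in range(200)]
print(fpl(decisions, costs, exact_oracle, eta=0.1, rng=rng))
```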

Our results generalize prior work which gave similar results for the case where the offline problem could be solved exactly, or for special types of approximation algorithms. The use of approximation algorithms presents interesting new difficulties for which we provide geometric solutions.


Maintained by Dani Denton, Tejas Iyer and Santosh Vempala. ({denton,ti,vempala} at cc dot gatech dot edu)