Reading Group: Schedule for Fall 2004

The group usually meets on Mondays at 3:00 in CoC 358.

Date Presenter Presenting
Aug 30th Deeparnab Chakrabarty An Approximate Max-Steiner-Tree-Packing Min-Steiner-Cut Theorem
Lap Chi Lau


Given an undirected multigraph G and a set of required vertices S ⊆ V(G), the STEINER TREE PACKING problem is to find a largest collection of *edge-disjoint* Steiner trees (1), each over the required set S. This problem is APX-hard (no PTAS unless P=NP), and this paper gives a constant-factor approximation. In this talk, I shall present the main theorem of the paper, which states the following: if the set S is 26k-edge-connected, then there are at least k edge-disjoint Steiner trees. In other words, if the minimum size of a Steiner cut (2) is 26k, then there are k edge-disjoint Steiner trees.

Interestingly, if the set S is the whole of V(G), so that a Steiner tree is actually a spanning tree, then it follows as a simple corollary of the Nash-Williams-Tutte theorem (3) that if a graph is 2k-edge-connected then there are k edge-disjoint spanning trees. It is conjectured that the same holds for Steiner trees, but before this paper no bound of the form (constant)·k was known. This paper uses a result of Frank et al. which states: if a graph is 3k-edge-connected and there are no edges between Steiner vertices (i.e., vertices outside S), then the graph has k edge-disjoint Steiner trees.

1. A Steiner tree over a required set S is a tree in G which connects all the vertices in S.
2. A Steiner cut is an edge cut which separates the required set S, i.e. both sides of the cut contain a vertex of S.
3. Nash-Williams-Tutte theorem: A graph G has k edge-disjoint spanning trees iff for every partition P = {V_1, V_2, ..., V_t} of the vertex set, |E(P)| >= k(t-1), where E(P) is the set of edges whose endpoints lie in different parts of P.
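
As a quick sanity check of (3), consider K_4 (a worked example, not from the paper): the partition of the vertex set into four singletons has t = 4 and |E(P)| = 6, so the condition |E(P)| >= k(t-1) = 3k allows at most k = 2. And indeed K_4 decomposes into two edge-disjoint spanning trees, e.g. the paths 1-2-3-4 and 3-1-4-2, so the bound is tight here.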


Tuesday, Sept 7th, 4:45
(Note change of date and time)
Nikhil Devanur Stochastic Optimization is (almost) as easy as deterministic optimization.
C. Swamy and D.B. Shmoys. (To appear in FOCS 2004).


In this paper, the authors consider two-stage stochastic optimization with recourse. Such problems model situations where there is uncertainty in the data. For instance, consider the vertex cover problem. In the first stage, only a probability distribution over the set of possible graphs is given, and the algorithm is allowed to pick some vertices (at the first-stage cost). In the second stage, a graph is realized from the probability distribution, and the algorithm may have to pick additional vertices (at the second-stage cost) to cover all the edges. The goal is to minimize the first-stage cost plus the expected second-stage cost.
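
To make the objective concrete, here is a minimal brute-force sketch of the two-stage model on a toy instance (my illustration, not from the paper; the graph scenarios, costs, and probabilities are made up):

    from itertools import chain, combinations

    # Toy two-stage stochastic vertex cover, solved by brute force.
    # Vertices 0..3; two equally likely scenarios (edge sets).
    VERTICES = range(4)
    SCENARIOS = [([(0, 1), (1, 2)], 0.5),   # (edges, probability)
                 ([(2, 3), (0, 3)], 0.5)]
    COST1 = 1.0   # cost of buying a vertex in stage 1
    COST2 = 3.0   # inflated cost of buying a vertex in stage 2

    def subsets(items):
        items = list(items)
        return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

    def second_stage_cost(stage1, edges):
        # Cheapest augmentation of the first-stage set covering all edges of this scenario.
        best = float('inf')
        for extra in subsets(set(VERTICES) - set(stage1)):
            cover = set(stage1) | set(extra)
            if all(u in cover or v in cover for u, v in edges):
                best = min(best, COST2 * len(extra))
        return best

    best = min(
        (COST1 * len(s1) + sum(p * second_stage_cost(s1, e) for e, p in SCENARIOS), s1)
        for s1 in subsets(VERTICES)
    )
    print(best)   # (optimal expected total cost, optimal first-stage vertex set)

On this instance, buying vertices 1 and 3 up front (total cost 2.0) beats waiting for the scenario to be revealed, because second-stage vertices cost three times as much.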

They consider stochastic versions of set-cover-type problems, in particular set cover, vertex cover and facility location, and prove improved approximation ratios for these. The main technique involves rounding and a modification of the ellipsoid algorithm to solve a linear program of exponential size.


Sept 13th Nayantara Bhatnagar Solving Convex Programs by Random Walks
Dimitris Bertsimas and Santosh Vempala

Minimizing a convex function over a convex set in n-dimensional space is a basic problem with many interesting special cases. The authors present a simple sampling-based algorithm for convex optimization, given only a separation oracle for the set.

It is a theorem (Grunbaum) that for any convex set K in R^n, any halfspace containing the centroid contains at least a 1/e fraction of the volume of K. The authors generalize this to show that for isotropic* convex sets, any halfspace containing a point at distance t from the centroid contains at least a (1/e - t) fraction of the volume. Sampling uniformly from an isotropic convex set bounding the set of interest gives an approximate centroid, so a hyperplane separating the approximate centroid from the convex set cuts off at least a constant fraction of the volume; a rough sketch of the resulting cutting-plane loop follows the footnote.
* A convex set is isotropic if its centroid is at the origin and, for every unit vector u, the expectation of (u . x)^2 over the set is 1.
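
A rough Python sketch of the cutting-plane loop for the feasibility version (my illustration, not the paper's algorithm verbatim: the target set K and iteration counts are made up, and naive rejection sampling in 2-D stands in for the random walk the paper uses):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2                                # dimension (kept tiny for rejection sampling)
    cuts_A, cuts_b = [], []              # accumulated cuts: a @ x <= b

    def oracle(z):
        # Separation oracle for a made-up target set K: the ball of radius 0.3
        # around (0.5, 0.5). Returns None if z is in K, else a halfspace
        # (a, b) with a @ z > b but a @ x <= b for all x in K.
        c = np.array([0.5, 0.5])
        d = np.linalg.norm(z - c)
        if d <= 0.3:
            return None
        a = (z - c) / d
        return a, a @ c + 0.3

    def sample_P(num=500):
        # Rejection-sample uniform points from P = box [-1,1]^n plus the cuts.
        # Fine in 2-D; a random walk such as hit-and-run replaces this in general.
        pts = []
        while len(pts) < num:
            x = rng.uniform(-1.0, 1.0, size=n)
            if all(a @ x <= b for a, b in zip(cuts_A, cuts_b)):
                pts.append(x)
        return np.array(pts)

    for step in range(60):
        z = sample_P().mean(axis=0)      # approximate centroid of P
        cut = oracle(z)
        if cut is None:
            print("feasible point found:", z)
            break
        cuts_A.append(cut[0])            # cut z off; K stays inside P,
        cuts_b.append(cut[1])            # and P loses a constant volume fraction

Each cut passes near the approximate centroid, so by the generalized Grunbaum bound the outer set P shrinks by a constant volume factor per iteration.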


Sept 20th Nikhil Devanur A Polynomial Time Algorithm for Computing the Arrow-Debreu Market Equilibrium for Linear Utilities.
Kamal Jain. (To appear in FOCS 2004.)


In this paper, the author presents a polynomial-time algorithm for computing the market equilibrium in the Arrow-Debreu model when the agents have linear utilities. Previously, only FPTASes were known for this problem. The author formulates the problem as a convex program and shows that it can be solved using the ellipsoid method and Diophantine approximation. However, these methods were only known to give an exact solution when the polyhedron is "well-described". This paper shows that this requirement can be weakened, and that the weakening is sufficient to get an exact answer for the problem considered.
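
For reference, the model in question (standard background, not specific to this paper): in the linear Arrow-Debreu market, each agent i arrives with an endowment of goods and a linear utility u_i(x_i) = \sum_j u_{ij} x_{ij}. Prices p constitute an equilibrium if every agent sells her endowment and spends the proceeds only on goods j maximizing her bang-per-buck ratio u_{ij}/p_j, and the market clears, i.e. the total demand for each good equals its supply.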


Sept 27th Yan Ding Extracting Randomness from Few Independent Sources
Barak, Impagliazzo and Wigderson

This paper is concerned with the problem of _deterministic_ extraction of near-perfect randomness from _independent_ weak random sources. It is well known that deterministic extraction of randomness from a single general weak source is impossible, while explicit deterministic extractors from two independent sources were previously known only for entropy rate greater than 1/2.
In this paper, the authors construct, for _any constant_ $\delta > 0$, a deterministic extractor from $poly(1/\delta)$ independent sources of entropy rate $\delta$. The construction employs tools from additive number theory, in particular a recent result of Bourgain, Katz and Tao.
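
The Bourgain-Katz-Tao result in question is (as I recall it) the finite-field sum-product estimate: for every $\delta > 0$ there is an $\epsilon > 0$ such that every set $A \subseteq F_p$ ($p$ prime) with $p^{\delta} < |A| < p^{1-\delta}$ satisfies $\max(|A+A|, |A \cdot A|) \ge |A|^{1+\epsilon}$. Informally, no medium-sized subset of a prime field is even approximately closed under both addition and multiplication, and this rigidity is what the extractor construction exploits.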


Oct 4th Parikshit Gopalan On the List and Bounded Distance Decodability of Reed-Solomon Codes
Qi Cheng and Daqing Wan

An [n,k]_q Reed-Solomon code consists of the evaluations of all polynomials of degree at most k at n distinct points in the finite field F_q. Presently, the best known decoding algorithms can recover the list of all codewords that agree with a received word in $t > \sqrt{nk}$ positions. It is not known whether one can do better, or even whether the size of the list remains polynomially bounded for $t < \sqrt{nk}$.

In this paper, the authors show a connection between the decoding problem for Reed-Solomon codes and the discrete logarithm problem over certain extension fields of F_q. They show, for some parameter T (where $k < T < \sqrt{nk}$), that if there is a list-decoding algorithm that recovers all codewords with T agreements, then there is an efficient algorithm for the discrete logarithm over an extension of F_q.
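
For concreteness, a minimal Python sketch of the code itself (the parameters q = 17, n = 8, k = 3 and the corrupted positions are made up for illustration):

    # Reed-Solomon encoding by polynomial evaluation over the prime field F_17.
    q, n, k = 17, 8, 3
    points = list(range(1, n + 1))       # n distinct evaluation points in F_q

    def encode(coeffs):
        # Evaluate the polynomial with the given k+1 coefficients at each point.
        assert len(coeffs) == k + 1
        return [sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q for x in points]

    def agreements(u, v):
        return sum(a == b for a, b in zip(u, v))

    c = encode([5, 1, 0, 2])             # codeword for 5 + x + 2x^3
    r = c[:]
    r[0], r[1] = (r[0] + 1) % q, (r[1] + 3) % q    # corrupt two positions
    print(agreements(c, r), (n * k) ** 0.5)        # 6 agreements vs sqrt(nk) ~ 4.9

Here the received word r agrees with c in t = 6 > sqrt(nk) positions, so known list-decoding algorithms would recover c; the open question above concerns agreement below that threshold.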

Oct 11th Vangelis Markakis Tolls for heterogeneous selfish users in multicommodity networks and generalized congestion games
Fleischer, Jain and Mahdian
Edge pricing of multicommodity networks for heterogeneous selfish users
Karakostas and Kolliopoulos
A model of selfish routing is considered, in which network users route traffic on paths that minimize their own cost (i.e., latency on the edges of the path plus tolls), given the other users' choices. A multicommodity flow resulting from such a routing is called a Nash equilibrium or a Nash flow. Using LP duality, the authors show that it is possible to impose tolls on the edges so that a Nash flow minimizes the social cost, i.e., the total latency of the network. The tolls can be computed in polynomial time under certain assumptions on the edge latency functions. This answers an open question posed by Cole, Dodis and Roughgarden (STOC '03). It is also shown that the same result holds even if the social cost is any nondecreasing function of the congestion of the network. Finally, their results are extended to general congestion games.
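
A worked example of toll-induced optimality in the simplest homogeneous setting (Pigou's example; textbook background, not from these papers): one unit of traffic chooses between two parallel links with latencies l_1(x) = 1 and l_2(x) = x. Without tolls, the Nash flow puts all traffic on link 2 (every user faces cost 1), giving social cost 1, while the optimum splits the traffic evenly, giving social cost (1/2)(1) + (1/2)(1/2) = 3/4. Imposing the toll tau_2 = 1/2 on link 2 makes its users face cost x + 1/2; in equilibrium x + 1/2 = 1, so x = 1/2 and the Nash flow coincides with the optimal flow.
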
Oct 18th Fall Break
Oct 27th Tejas Iyer Multilinear Formulas for Determinant and Permanent are of Super-Polynomial Size
Ran Raz, STOC 2004
An arithmetic formula is multilinear if the polynomial computed by each of its subformulas is multilinear, i.e., has degree at most one in each variable (for example, x_1 x_2 + x_1 x_3 is multilinear, while x_1^2 + x_2 is not). It is shown that any multilinear arithmetic formula for the permanent or the determinant of an n x n matrix is of size super-polynomial in n.

Nov 8th David Cash Perfect Zero-Knowledge Arguments for NP Using any One-Way Permutation
Moni Naor, Rafail Ostrovsky, Ramarathnam Venkatesan, Moti Yung. Journal of Cryptology, 1996
In cryptography it is often interesting to seek the minimal assumptions required to build secure primitives. This paper provides the first construction of perfect zero-knowledge proofs with computational soundness (i.e., arguments) from general assumptions. It also introduces _interactive hashing protocols_, which have proven to be independently interesting. Perfect zero-knowledge arguments allow a poly-time prover to convince a verifier that a statement is true without leaking any other information. This version of zero-knowledge proofs is interesting in cryptography, as it models the task where computationally limited users must authenticate themselves to a more powerful untrusted server.

Nov 15th Eric Vigoda A mildly impractical algorithm for the permanent
Jerrum, Sinclair, Vigoda + some recent work
I'll present the proof (or as much as desired) of the randomized polynomial-time algorithm to approximate the permanent of an arbitrary non-negative matrix. I'll also sketch some recent improvements (with Vijay and some others) which lead to an O(n^7)-time algorithm (ignoring a few log factors).
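
For reference, the permanent of an n x n matrix A is per(A) = \sum_{\sigma} \prod_i A_{i,\sigma(i)}, i.e. the determinant without the signs; computing it exactly is #P-hard even for 0/1 matrices (Valiant), which is why one settles for approximation. A brute-force Python baseline for tiny matrices (illustration only, n! time):

    from itertools import permutations
    from math import prod

    def permanent(A):
        # Exact permanent: sum over all permutations s of prod_i A[i][s[i]].
        # Takes n! time; the JSV algorithm approximates this quantity in
        # randomized polynomial time for non-negative A.
        n = len(A)
        return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

    print(permanent([[1, 1], [1, 1]]))  # 2: counts the perfect matchings of K_{2,2}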

Nov 22nd Aranyak Mehta AdWords and Generalized On-line Matching
Aranyak Mehta, Amin Saberi, Vijay Vazirani, Umesh Vazirani
Advertising is the main source of revenue for search engine companies such as Google. Each advertiser makes bids on specific AdWords to Google and specifies his maximum daily budget, and is charged whenever he wins a query placed on one of his chosen AdWords. Should Google simply award each query to the highest bidder whose budget is not exhausted? It is easy to show that this strategy may achieve only half the value of the optimal allocation. We introduce the notion of a tradeoff-revealing LP and provide two algorithms achieving a competitive ratio of 1 - 1/e for this problem (which can be viewed as a generalization of the on-line matching problem). Our algorithms take into consideration both the bid and the leftover budget of advertisers. The optimal tradeoff between bid and budget is given by the dual of the tradeoff-revealing LP. This is joint work with Amin Saberi, Umesh Vazirani, and Vijay Vazirani.
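
A minimal Python sketch of this kind of bid-scaling rule. Caveat: the scaling function psi(f) = 1 - e^(f-1), where f is the fraction of the budget already spent, is my recollection of the paper's tradeoff function, and the advertisers, bids and budgets below are made up:

    import math

    budgets = {"a1": 4.0, "a2": 4.0}
    spent = {adv: 0.0 for adv in budgets}

    def psi(f):
        # Tradeoff between bid and leftover budget: discount advertisers
        # that have already spent a large fraction f of their budget.
        return 1.0 - math.exp(f - 1.0)

    def allocate(query_bids):
        # Award the query to the eligible advertiser maximizing bid * psi(fraction spent).
        best, best_score = None, 0.0
        for adv, bid in query_bids.items():
            bid = min(bid, budgets[adv] - spent[adv])  # can't charge beyond the budget
            score = bid * psi(spent[adv] / budgets[adv])
            if bid > 0 and score > best_score:
                best, best_score = adv, score
        if best is not None:
            spent[best] += min(query_bids[best], budgets[best] - spent[best])
        return best

    for q in range(8):
        print(allocate({"a1": 1.0, "a2": 0.9}))

Note how the rule interpolates between pure greedy (always take the highest bid) and budget balancing (favor advertisers with money left over).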

Dec 1st, 4:30 Yan Ding Cryptography in NC^0
Applebaum, Ishai, Kushilevitz, FOCS 2004

This paper provides strong evidence for the existence of cryptographic primitives (e.g. one-way functions, pseudorandom generators, etc.) that can be computed by uniform NC^0 circuits, in which each output bit depends on a _constant_ number of input bits. (An NC^0 circuit has polynomial size, constant depth, and constant fan-in.) More precisely, the paper shows that any OWF or PRG in Parity-L (a class which contains Log-Space) can be converted into one in NC^0. The existence of such primitives in Parity-L follows from almost all number-theoretic or algebraic hardness assumptions.

The technical tool of the paper is a method for randomized constant-degree and constant-locality encoding of functions in Parity-L that preserves security.

