Ümit V. Çatalyürek



Below are a few "older" software projects. For more recent software, please visit the TDA Lab's Software page.

PaToH v3.3

PaToH (Partitioning Tool for Hypergraphs) is a multilevel hypergraph partitioning tool that I developed during my doctoral studies at Bilkent University (1994-1999). It was the fastest hypergraph partitioner when I wrote it, and it is probably still the fastest sequential partitioner today.

Important features of PaToH:

  • Fast, stable multilevel hypergraph partitioner,
  • Hypergraph partitioning with fixed cells,
  • Multi-constraint hypergraph partitioner.

Below you will find the latest binary distributions of PaToH for Linux and Mac OS X.

Below you will find older binary distributions of PaToH for different architectures.

Please note that some distributions are not updated as often as others, if at all. If you are interested in running PaToH on a different or newer platform, please notify me, and I will attempt to prepare a version for your preferred platform, provided I have access to it. Each distribution also contains a version of the manual for that specific distribution. Here is the most recent manual in PDF format.

You can also use PaToH via Zoltan for hypergraph partitioning or via Mondriaan for matrix partitioning.

PaToH Matlab Interface: The aim of the PaToH Matrix Partitioning Interface is to provide sparse matrix partitioning routines in Matlab. The partitioning routines are based on hypergraph models (see related publications below) and use the PaToH hypergraph partitioning tool within a MEX function. Apart from the MEX function that builds a hypergraph and calls PaToH, everything else operates on matrices and vectors and is implemented in Matlab.
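To illustrate the hypergraph models behind these routines, here is a minimal sketch of the column-net model for 1D rowwise matrix partitioning (as described in the related publications below). This is illustrative Python, not PaToH's or the interface's actual code; the function names are my own.

```python
# Sketch of the column-net hypergraph model for 1D rowwise matrix
# partitioning: rows become vertices, columns become nets, and net j
# connects every row i with a nonzero a_ij. The connectivity-1 cut of a
# row partition equals the communication volume of the corresponding
# parallel sparse matrix-vector multiplication.

def column_net_hypergraph(nonzeros, ncols):
    """nonzeros: iterable of (i, j) coordinates of nonzero entries."""
    nets = [set() for _ in range(ncols)]
    for i, j in nonzeros:
        nets[j].add(i)          # net j spans vertex (row) i
    return nets

def connectivity_minus_1_cut(nets, part):
    """part[i] = part id of row i; returns sum over nets of (lambda - 1)."""
    cut = 0
    for pins in nets:
        parts = {part[i] for i in pins}
        if parts:
            cut += len(parts) - 1
    return cut

# Tiny 4x4 example: partition rows {0,1} | {2,3}. Only column 1 is cut
# (its rows 0, 1, 2 span both parts), so the cut, and the communication
# volume, is 1.
nz = [(0, 0), (0, 1), (1, 1), (2, 1), (2, 2), (3, 2), (3, 3)]
nets = column_net_hypergraph(nz, 4)
print(connectivity_minus_1_cut(nets, [0, 0, 1, 1]))  # → 1
```

PaToH minimizes exactly this connectivity-1 metric subject to a balance constraint on part weights.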

Download: PaToH Matlab Interface source files and the manual.

Latest Changes:

  • Nov 7, 2019: v3.3 brings a major improvement in fixed-cell partitioning (~20% on average on my test set).
  • Oct 1, 2019: a rare bug in partitioning with fixed cells is fixed.
  • Sep 29, 2019: patoh executable now gives warning when a cell is repeated in the net list and ignores repeated cells.
  • Aug 29, 2019: yet another very minor final-output bug fixed: multi-constraint partitioning's final reported cut cost did not take net weights into account.
  • Jun 20, 2019: a very minor corner-case bug fixed in the patoh executable; it occurred only when k=2, perfect balance was requested, and some nets were filtered by the threshold.
  • Mar 22, 2011: a new parameter "balance" is added to allow users to relax balance constraints during recursive bisections.
  • Mar 13, 2011: v3.2 includes multi-constraint partitioning with non-unit net weights and target part weights. A minor bug fixed in V-cycle for disconnected hypergraphs.
  • Oct 28, 2010: A small typo is fixed in Matlab Interface.
  • Aug 13, 2010: Matlab Interface license info added.
  • Jun 27, 2010: Minor bug fix to improve balance for disconnected hypergraphs.
  • May 31, 2010: A new unified interface for regular, multi-constraint and fixed-vertex hypergraph partitioning that also allows the user to specify target part weights (ratios).
  • Oct 9, 2009: Small load imbalance improvement for partitioning hypergraphs with isolated vertices into non-power of two parts. Also added a cuttype option to PaToH Matlab MEX interface.
  • Jun 28, 2009: Matlab Interface is available!
  • Nov 24, 2008: Added PaToH_Refine_Bisec to interface.
  • May 12, 2006: Fixed the memory alignment on 64-bit libraries, improving the performance significantly on IA64 architectures. Thanks to Duraid Madina for noticing the alignment problem on IA64 and suggesting the fix!
  • Jan 10, 2005: Changed the default behavior to continue partitioning, with default values, if the chosen algorithm is not implemented in the particular partitioner. Also, a minor balance improvement was made.
  • Nov 11, 2004: Mac OS X distribution has been added.
  • Aug 15, 2002: Fixed a couple of small bugs and changed the default values of a few parameters.

Data & Results

  • Exp-1: ISPD98: 2% actual area results comparison with hMeTiS and UCLA MLPart
  • Exp-2: Comparison of hMeTiS and PaToH on 134 hypergraphs arising from different areas: sparse matrix-vector multiplication, LP, and VLSI (including ISPD98).

Related Publications:

  • A matrix partitioning interface to PaToH in MATLAB. B. Ucar, U.V. Catalyurek, and C. Aykanat. Parallel Computing, Vol. 36, No. 5-6, pp. 254-272, Jun 2010.
  • On scalability of hypergraph models for sparse matrix partitioning. B. Ucar and U.V. Catalyurek. Proceedings of the PDP 2010: 18th Euromicro International Conference on Parallel, Distributed and Network-Based Computing, 2010.
  • On two-dimensional sparse matrix partitioning: Models, Methods, and a Recipe. U. Catalyurek, C. Aykanat and B. Ucar. SIAM Journal on Scientific Computing, Vol. 32, No. 2, pp. 656-683, 2010.
  • A Hypergraph-Partitioning Approach for Coarse-Grain Decomposition. U. Catalyurek and C. Aykanat. ACM/IEEE SC2001, Denver, CO, November 2001. [ps] [pdf]
  • A Fine-Grain Hypergraph Model for 2D Decomposition of Sparse Matrices. U. Catalyurek and C. Aykanat. Proceedings of International Parallel and Distributed Processing Symposium (IPDPS), 8th International Workshop on Solving Irregularly structured Problems in Parallel (Irregular 2001), San Francisco, April 2001. [ps] [pdf]
  • Hypergraph Models for Sparse Matrix Partitioning and Reordering. Ü. V. Çatalyürek, Ph.D. thesis, Computer Engineering and Information Science, Bilkent University, November 1999.
  • Hypergraph-Partitioning Based Decomposition for Parallel Sparse-Matrix Vector Multiplication. U. Catalyurek and C. Aykanat. IEEE Transactions on Parallel and Distributed Systems, Vol. 10, No. 7, pp. 673-693, 1999. [ps] [pdf]
  • Decomposing irregularly sparse matrices for parallel matrix-vector multiplications. U. Catalyurek and C. Aykanat. Lecture Notes in Computer Science, vol. 1117, pp. 75-86, 1996. [ps] [pdf]
  • Decomposing linear programs for parallel solution. A. Pinar, U. Catalyurek, C. Aykanat, and M. Pinar. Lecture Notes in Computer Science, vol. 1041, pp. 473-482, 1996. [ps] [pdf]
  • A hypergraph model for mapping repeated sparse matrix-vector product computations onto multicomputers. Ü. V. Çatalyürek and C. Aykanat, Proceedings of International Conference on High Performance Computing, HiPC ’95, Goa, India, December 1995. [ps] [pdf]



Zoltan, developed and maintained by Sandia National Laboratories, is a toolkit for load balancing and parallel data management. Over the last couple of years, I have had the pleasure of collaborating with the Zoltan team. As an outcome of this collaboration, we developed a parallel multilevel hypergraph partitioning algorithm, as well as distance-1 and distance-2 coloring algorithms. Zoltan release v2.0 (and later) includes implementations of these algorithms, and its source code is distributed under the GNU Lesser General Public License.

More information and the distribution of Zoltan can be found at the project web site.



DataCutter is a component-based middleware framework designed to support coarse-grain data-flow execution in heterogeneous environments. In DataCutter, an application's processing structure is implemented as a set of components, called filters, that exchange data through logical streams. A stream denotes a unidirectional data flow from one filter (the producer) to another (the consumer). A filter may only read data from its input streams and write data to its output streams.

The DataCutter runtime system supports both data- and task-parallelism. Processing, network, and data copying overheads are minimized by the ability to place filters on different platforms. DataCutter's filtering service performs all steps necessary to instantiate filters on the desired hosts, to connect all logical endpoints, and to call the filters' interface functions for processing work. Data exchange between two filters on the same host takes place via memory copy operations, while a message-passing communication layer (e.g., TCP sockets or MPI) is used for communication between filters on different hosts.
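The filter-stream model above can be sketched in a few lines: filters run concurrently and communicate only through unidirectional streams. This is an illustrative sketch using threads and queues, not DataCutter's actual C++ API; the names Stream, producer, and doubler are my own.

```python
# Minimal sketch of the filter-stream model: each filter is a concurrent
# unit that only reads from its input streams and writes to its output
# streams. Streams are modeled as thread-safe FIFO queues.
import threading
import queue

END = object()  # end-of-stream marker

class Stream:
    """A unidirectional data flow from one filter to another."""
    def __init__(self):
        self.q = queue.Queue()
    def write(self, item):
        self.q.put(item)
    def read(self):
        return self.q.get()

def producer(out):
    """Source filter: emits a sequence of work items."""
    for x in range(5):
        out.write(x)
    out.write(END)

def doubler(inp, out):
    """Intermediate filter: reads input stream, writes transformed output."""
    while (x := inp.read()) is not END:
        out.write(2 * x)
    out.write(END)

def run_pipeline():
    s1, s2 = Stream(), Stream()
    threads = [threading.Thread(target=producer, args=(s1,)),
               threading.Thread(target=doubler, args=(s1, s2))]
    for t in threads:
        t.start()
    result = []
    while (x := s2.read()) is not END:   # consumer filter (main thread)
        result.append(x)
    for t in threads:
        t.join()
    return result

print(run_pipeline())  # → [0, 2, 4, 6, 8]
```

In DataCutter the same structure spans hosts: filters on one host exchange data by memory copies, while cross-host streams go over TCP or MPI.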

More information and the distribution of DataCutter can be found at the project web site.



STORM is a middleware, built using DataCutter, that is designed to provide basic database support for 1) selection of the data of interest, and 2) transfer of data from storage nodes to compute nodes for processing.

STORM is designed as a set of coupled services. The query service provides an entry point for clients to submit queries to the database middleware. It is responsible for parsing the client query to determine which services should be instantiated to answer the query. The metadata service maintains information about datasets, and about the indexes and user-defined filters associated with them. The data source service provides a view of a dataset as virtual tables to other services. Efficient execution of select operations is supported by the indexing and filtering services; the filtering service supports efficient execution of user-defined filters. The purpose of the partition generation service is to let an application developer export the data distribution scheme employed in a parallel client program. The data mover transfers the selected data elements to the client based on the partitioning information returned by the partition generation service. STORM has been integrated and distributed with the NSF Middleware Initiative since Release 5.
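STORM's two core operations, selection of the data of interest and partition-aware transfer, can be sketched abstractly as follows. This is a conceptual illustration only; the function names select and partition are hypothetical and do not correspond to STORM's actual interface.

```python
# Conceptual sketch of STORM's two basic database operations:
# (1) selection: a user-defined filter keeps only the tuples of interest;
# (2) transfer: selected tuples are routed to compute nodes according to
#     a client-supplied partitioning scheme.
# All names here are illustrative, not STORM's real API.

def select(dataset, predicate):
    """Filtering service: keep only tuples satisfying the predicate."""
    return [row for row in dataset if predicate(row)]

def partition(rows, num_nodes, key):
    """Partition generation + data mover: map each tuple to a compute
    node, mimicking the client program's data distribution scheme."""
    buckets = {n: [] for n in range(num_nodes)}
    for row in rows:
        buckets[key(row) % num_nodes].append(row)
    return buckets

# Dataset of (id, value) tuples: select values above a threshold, then
# distribute the survivors by id across 2 compute nodes.
data = [(0, 3.5), (1, 9.1), (2, 7.2), (3, 1.0)]
selected = select(data, lambda r: r[1] > 3.0)
moved = partition(selected, 2, key=lambda r: r[0])
print(moved)  # → {0: [(0, 3.5), (2, 7.2)], 1: [(1, 9.1)]}
```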

More information and the distribution of STORM can be found at the project web site.



Developing scalable algorithms for relationship analysis on social network graphs (also known as semantic graphs) has been identified as an important need by the Department of Homeland Security. Unlike graphs arising in most scientific applications, a scale-free graph has most of its vertices connected to only a small number of other vertices, while a few vertices, known as hubs, are connected to a very large number of other vertices. This extreme irregularity requires novel data structures and databases to store these large graphs. We have recently started developing a framework that will support storage, retrieval and processing of large scale-free semantic graphs.
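The scale-free structure described above, a few high-degree hubs among many low-degree vertices, arises naturally under preferential attachment, where new vertices prefer to connect to already well-connected ones. The sketch below illustrates this effect; it is a standalone toy, not part of our framework.

```python
# Sketch: growing a graph by preferential attachment produces the
# scale-free structure described above: a few highly connected hubs
# and many low-degree vertices.
import random

def preferential_attachment(n, m, seed=0):
    """Grow an n-vertex graph where each new vertex attaches to m
    existing vertices, chosen with probability proportional to degree."""
    rng = random.Random(seed)
    # Start from a small clique of m+1 vertices.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # 'ends' lists every edge endpoint, so sampling it uniformly picks
    # a vertex with probability proportional to its current degree.
    ends = [v for e in edges for v in e]
    for v in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(ends))
        for t in targets:
            edges.append((v, t))
            ends += [v, t]
    return edges

edges = preferential_attachment(200, 2)
deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1
# Hubs: the maximum degree far exceeds the typical (median) degree.
print(max(deg.values()), sorted(deg.values())[len(deg) // 2])
```

Storing such a graph efficiently is what makes the data-structure problem hard: a layout tuned for the many low-degree vertices performs poorly on the hubs, and vice versa.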


©2006- Umit V. Catalyurek. Last Update: Mar 22, 2011