Tactix Project


The Tactix project combines Prof. Pu's previous work in the areas of extended transaction processing and Internet information processing.

Extended Transaction Processing

Calton Pu has introduced the idea of epsilon serializability (ESR) for asynchronous transaction processing. ESR is a generalization of classic serializability, with efficient divergence control methods (which generalize concurrency control) and consistency restoration methods (which generalize crash recovery). ESR has wide applicability in situations where some inconsistency can be tolerated in exchange for high performance, availability, and autonomy. In addition to several published papers and others in preparation, he is leading the effort to design and implement ESR support in a commercial transaction processing system.
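The core intuition behind ESR can be sketched in a few lines of code: a query declares how much inconsistency (epsilon) it can tolerate, and divergence control admits the read only while concurrent updates keep the possible error within that bound. The class and method names below are illustrative assumptions, not the actual Tactix implementation.

```python
# Illustrative sketch of epsilon-serializability divergence control.
# EpsilonCounter and its methods are hypothetical names for illustration.

class EpsilonCounter:
    """A shared counter that lets read-only queries run concurrently with
    updates, as long as the inconsistency a query may observe stays within
    its declared epsilon bound."""

    def __init__(self, value=0):
        self.value = value
        self.pending_divergence = 0  # inconsistency from in-flight updates

    def update(self, delta):
        # Each uncommitted update adds |delta| to the divergence that
        # concurrent queries might observe.
        self.pending_divergence += abs(delta)
        self.value += delta

    def commit_updates(self):
        # Once the updates commit, the value is consistent again.
        self.pending_divergence = 0

    def query(self, epsilon):
        # Divergence control: admit the read only if the possible error is
        # within the query's tolerance; otherwise the query must wait.
        if self.pending_divergence > epsilon:
            raise RuntimeError("divergence exceeds epsilon; query must wait")
        return self.value

c = EpsilonCounter(100)
c.update(3)
print(c.query(epsilon=5))   # tolerant query proceeds despite the update
c.commit_updates()
print(c.query(epsilon=0))   # strict (serializable) query allowed after commit
```

Setting epsilon to zero recovers classic serializability, which is the sense in which ESR generalizes it.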

This work will continue in collaboration with researchers at several institutions around the world, including Columbia University, IBM T. J. Watson Research Center, and the University of Massachusetts.

One continuing project is the TAM (Transaction Activity Model) work.

Continual Queries project (CQ)

The Harvest Project is building an integrated set of tools to gather, extract, organize, search, cache, and replicate relevant information across the Internet. The Continual Queries (CQ) project extends Harvest to handle heterogeneous information producers and a large number of information consumers. The main goal of CQ is to propagate update information from producers to consumers in an efficient and scalable way.
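The continual-query idea above can be sketched as a simple standing-query engine: a consumer registers a query once, and updates at producers are pushed only to the consumers whose queries they satisfy. This is a minimal sketch under assumed names, not the CQ system itself.

```python
# Minimal sketch of continual queries as standing queries over updates.
# ContinualQueryEngine and its API are illustrative assumptions.

class ContinualQueryEngine:
    def __init__(self):
        self.subscriptions = []  # (predicate, callback) pairs

    def register(self, predicate, callback):
        # Consumer side: install a standing query once.
        self.subscriptions.append((predicate, callback))

    def publish(self, item):
        # Producer side: propagate an update only to interested consumers,
        # instead of having every consumer re-poll every producer.
        for predicate, callback in self.subscriptions:
            if predicate(item):
                callback(item)

engine = ContinualQueryEngine()
hits = []
engine.register(lambda doc: "transaction" in doc["keywords"], hits.append)
engine.publish({"title": "ESR paper", "keywords": ["transaction", "epsilon"]})
engine.publish({"title": "Weather",   "keywords": ["forecast"]})
print([d["title"] for d in hits])  # ['ESR paper']
```

Filtering at publication time is what makes the approach scale with many consumers: irrelevant updates never cross the network.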

The main approach in the CQ project is to explicitly describe the facilities provided by information producers and the queries asked by information consumers. From the producer side, we need descriptions of the kind of information available (data quality), how it is represented (database schema), how it can be accessed (query languages supported), and how much it costs (billing). From the consumer side, we need to describe the query in terms of the information quality required, willingness to pay, and its syntactic representation. CQ then matches each consumer query to suitable information sources, passing the query along and returning the answer.
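The matching step can be sketched as filtering producer descriptions against the consumer's stated requirements. The record fields below (quality, cost, supported languages) are illustrative assumptions standing in for the producer and consumer descriptions discussed above, not the real CQ schema.

```python
# Hypothetical sketch of CQ-style matching of consumer queries
# to producer descriptions (field names are illustrative).

producers = [
    {"name": "stock-feed",  "topic": "stocks", "quality": 0.9, "cost": 5, "languages": {"SQL"}},
    {"name": "news-digest", "topic": "stocks", "quality": 0.6, "cost": 1, "languages": {"keyword"}},
]

def match(query, producers):
    """Return producers whose advertised capabilities satisfy the query's
    quality, budget, and query-language requirements."""
    return [p for p in producers
            if p["topic"] == query["topic"]
            and p["quality"] >= query["min_quality"]
            and p["cost"] <= query["max_cost"]
            and query["language"] in p["languages"]]

query = {"topic": "stocks", "min_quality": 0.8, "max_cost": 10, "language": "SQL"}
print([p["name"] for p in match(query, producers)])  # ['stock-feed']
```

Once a producer is selected, the system forwards the query in a language that producer supports and relays the answer back to the consumer.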

This matching process involves many technical challenges. In collaboration with the CQ project, the DIOM Project plans to develop an adaptive methodology and toolkits for the integration and access of heterogeneous information sources in large-scale, rapidly growing networking environments (such as the Internet). The information sources can be structured data repositories (such as databases) or semi-structured and unstructured data (e.g., WWW pages, news articles). The Diorama system consists of several components that extract properties from unstructured data, or collect data through other information brokers/mediators, and convert the gathered information into a common object model, DIOM.
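The wrapper pattern described above can be sketched as one small extractor per source type, each mapping its raw input into a shared object model. The class and field names below are illustrative assumptions, not the actual DIOM model.

```python
# Sketch of per-source wrappers mapping heterogeneous data into a
# common object model (names are illustrative, not the real DIOM).
import re
from dataclasses import dataclass, field

@dataclass
class DiomObject:
    source: str
    title: str
    attributes: dict = field(default_factory=dict)

def wrap_database_row(row):
    # Structured source: fields map directly onto the common model.
    return DiomObject(source="db", title=row["title"], attributes=dict(row))

def wrap_web_page(url, html):
    # Semi-structured source: extract a property (here, the <title> tag).
    m = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    title = m.group(1).strip() if m else url
    return DiomObject(source="www", title=title, attributes={"url": url})

objs = [
    wrap_database_row({"title": "TAM report", "year": 1995}),
    wrap_web_page("http://example.edu/cq", "<html><title>CQ Project</title></html>"),
]
print([o.title for o in objs])  # ['TAM report', 'CQ Project']
```

Because every wrapper emits the same object type, downstream components such as query matching can treat databases, web pages, and other mediators uniformly.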


We are looking for good people to join our project and work with us. If you are interested in a postdoc, research programmer, or graduate student assistant position, we would like to hear from you. See our job ad for more information.
