BIG Cache Abstraction for Cache Networks
Eman Ramadan¹, Arvind Narayanan¹, Zhi-Li Zhang¹, Runhui Li², Gong Zhang²
¹University of Minnesota  ²Huawei Future Network Theory Lab

In this paper, we advocate the notion of BIG cache as an innovative abstraction for effectively utilizing the distributed storage and processing capacities of all servers in a cache network. The BIG cache abstraction is proposed to address, in part, the problem of (cascade) thrashing in a hierarchical network of cache servers, where cache resources at intermediate servers are known to be poorly utilized, especially under classical cache replacement policies such as LRU. We lay out the advantages of the BIG cache abstraction and make a strong case for it, both from a theoretical standpoint and through simulation analysis. We also develop the dCLIMB cache algorithm to minimize the overheads of moving objects across distributed cache boundaries, and we present a simple yet effective heuristic for the cache allotment problem that arises in the design of the BIG cache abstraction.
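The poor utilization of intermediate caches under LRU that motivates this work can be illustrated with a small simulation. The sketch below is not the paper's simulation setup; it is a minimal, hypothetical two-level hierarchy (an edge cache in front of a parent cache, both running LRU with leave-copy-everywhere) fed by a Pareto-skewed request stream. The edge cache absorbs the popular objects, so the parent sees only the filtered, low-popularity misses and achieves a far lower hit ratio, which is the "wasted intermediate capacity" phenomenon the BIG cache abstraction targets.

```python
import random
from collections import OrderedDict


class LRUCache:
    """A simple LRU cache tracking object IDs only (no payloads)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        """Return True on a hit and refresh the object's recency."""
        if key in self.store:
            self.store.move_to_end(key)
            return True
        return False

    def put(self, key):
        """Insert an object, evicting the least-recently-used one if full."""
        if key in self.store:
            self.store.move_to_end(key)
            return
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # evict LRU entry
        self.store[key] = True


random.seed(42)
edge, parent = LRUCache(50), LRUCache(50)  # illustrative sizes, not from the paper
edge_hits = parent_hits = 0
num_requests = 10_000

for _ in range(num_requests):
    # Pareto-skewed popularity: a few objects dominate the request stream.
    obj = min(int(random.paretovariate(0.8)), 1_000)
    if edge.get(obj):
        edge_hits += 1
    elif parent.get(obj):
        parent_hits += 1
        edge.put(obj)          # copy down to the edge on a parent hit
    else:
        parent.put(obj)        # leave-copy-everywhere on a miss
        edge.put(obj)

print(f"edge hit ratio:   {edge_hits / num_requests:.2%}")
print(f"parent hit ratio: {parent_hits / num_requests:.2%}")
```

Running this, the parent's hit ratio comes out well below the edge's even though both devote identical capacity, since the edge strips the skew out of the traffic before it reaches the parent. Pooling the two capacities as one logical "BIG" cache is the abstraction's answer to exactly this imbalance.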