College of Computing News

Alexandros Daglis Finds New Beginnings at the End of Moore’s Law

The number of transistors on a computer chip is no longer expected to double roughly every two years as Moore’s law winds down. Most computer scientists see this as a problem, but new School of Computer Science Assistant Professor Alexandros Daglis sees it as an opportunity.

“There is a shifting balance of computing resources,” he said. “The advancements in raw computing power are tapering off, so networks have the time to catch up.”

The possibilities of hardware

Daglis has always been fascinated by what can make hardware faster. Although he got into computer science by programming video games as a teenager, he quickly discovered the appeal of hardware during his undergraduate years at the National Technical University of Athens.

“Caching and locality, they just felt so natural to me,” he said. “This is why computers work. The fundamental techniques that make computers so fast really piqued my interest.”

His undergraduate thesis explored how to improve caching. Yet during his Ph.D. at École polytechnique fédérale de Lausanne (EPFL), Daglis realized there were bigger problems to tackle.

Catching up with Moore

Under Moore’s law, central processing units (CPUs) became faster while networks lagged. As the paradigm shifts, networks can finally close the gap. According to Daglis, now is the time to lower latency, the delay in moving data across the network, and to increase bandwidth, the rate at which data can flow.
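To see why latency and bandwidth are separate targets, it helps to model transfer time as a fixed per-message delay plus payload size divided by bandwidth. The short Python sketch below illustrates the distinction; the latency and bandwidth figures are illustrative assumptions, not measurements from Daglis’s work.

    # First-order model: transfer_time = latency + size / bandwidth.
    # All constants are illustrative assumptions, not measured values.
    def transfer_time(size_bytes, latency_s, bandwidth_bytes_per_s):
        return latency_s + size_bytes / bandwidth_bytes_per_s

    LATENCY = 10e-6       # assumed 10-microsecond per-message delay
    BANDWIDTH = 12.5e9    # assumed ~100 Gb/s link, i.e. 12.5 GB/s

    # A 64-byte request is dominated by latency...
    print(f"64 B message: {transfer_time(64, LATENCY, BANDWIDTH) * 1e6:.2f} us")
    # ...while a 1 GB bulk transfer is dominated by bandwidth.
    print(f"1 GB transfer: {transfer_time(1e9, LATENCY, BANDWIDTH):.3f} s")

For the tiny messages common in datacenter applications, nearly all the time goes to the fixed delay, which is why latency is the number Daglis singles out.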

As a computer architect, Daglis wants to rethink the fundamentals of how communication-intensive systems, such as social network applications running in datacenters, function. Users want to retrieve a small amount of data, like a message or a friend request, fast. Yet the existing network isn’t set up to serve such requests efficiently, according to Daglis.

“Fundamental latency bounds are catching up: we’re getting to speed-of-light data propagation within a datacenter’s internal network soon,” Daglis said. “But the way we build systems precludes leveraging the full potential of these faster networks. Network protocols are too slow for them, and the long-established interfaces our computing resources rely on to tap into the network are too slow as well.”
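The physical bound Daglis mentions is easy to estimate with back-of-the-envelope arithmetic. In the sketch below, the cable distance and the software overhead are assumptions chosen for illustration.

    # Speed-of-light propagation across a datacenter versus assumed
    # per-message software overhead. All numbers are illustrative.
    LIGHT_IN_FIBER = 2.0e8     # roughly two-thirds of c, in m/s
    CABLE_DISTANCE = 300       # assumed one-way path, in meters
    SOFTWARE_OVERHEAD = 20e-6  # assumed protocol/interface cost, seconds

    propagation = CABLE_DISTANCE / LIGHT_IN_FIBER
    print(f"Propagation: {propagation * 1e6:.1f} us")        # ~1.5 us
    print(f"Software:    {SOFTWARE_OVERHEAD * 1e6:.1f} us")  # ~20 us

Under these assumptions, the wire itself costs a microsecond or two while protocols and interfaces cost an order of magnitude more, exactly the imbalance Daglis wants to attack.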

So Daglis wants to create a new paradigm: co-designing hardware and networks. With hardware under pressure to evolve and networks improving, it’s the perfect time to revisit system design.

“Networking’s legacy is blocking us from unleashing the true power of modern networks,” Daglis said.

Daglis believes moving higher-level operations closer to the CPU’s network endpoint is one effective way to better leverage growing network capabilities. For example, software predominantly handles decisions that balance incoming network messages across a server CPU’s many cores, but enabling the network endpoint to make these decisions can yield significant latency gains. This means transitioning from traditional CPU-centric computing to network- and memory-centric computing, which could have impacts across software, systems architecture, and algorithms.
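A toy sketch helps make the contrast concrete. In the conventional model below, a software dispatcher touches every incoming message before a worker core sees it; in the endpoint-steered model, the network interface places each message directly into a core’s queue. This is an illustrative simplification, not Daglis’s actual design, and both steering policies are assumptions.

    # Toy contrast between software-dispatched and endpoint-steered
    # message delivery. Purely illustrative.
    from collections import deque

    NUM_CORES = 4

    def software_dispatch(messages):
        # One dispatcher core receives every message first (an extra
        # hop that adds latency), then hands it to a worker core.
        queues = [deque() for _ in range(NUM_CORES)]
        for i, msg in enumerate(messages):
            queues[i % NUM_CORES].append(msg)   # assumed round-robin
        return queues

    def endpoint_steer(messages):
        # The network endpoint chooses the core itself and delivers
        # the message straight into that core's queue; no dispatcher.
        queues = [deque() for _ in range(NUM_CORES)]
        for msg in messages:
            queues[hash(msg) % NUM_CORES].append(msg)  # assumed hashing
        return queues

    requests = [f"request-{n}" for n in range(8)]
    print(software_dispatch(requests))
    print(endpoint_steer(requests))

In the second model, the work of balancing moves off the CPU entirely, which is where the latency gain Daglis describes comes from.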

For now, though, Daglis is taking a step back and focusing on how he can drastically improve the performance of communication-intensive systems. To do this, he plans to leverage relevant new technologies that are becoming commercially available, such as “smart” programmable network interface controllers and switches. Yet his goals are still ambitious.

“It’s interesting to explore the extent of immediate performance gains we can achieve by properly leveraging new commercial system components. However, it’s important to think about what we can do in the longer term that is not just incremental, but fundamentally different from existing computing systems, drawing on the increasingly heterogeneous hardware resources used for computation and networking,” he said. “My vision of co-designing the two will enable a much broader portfolio of functionality.”