Anish Saxena

Student Awarded NVIDIA Fellowship

Ph.D. student Anish Saxena has been named a recipient of the 2025-2026 NVIDIA Graduate Fellowship Program for his research on large language model (LLM) training.

NVIDIA awards students doing “outstanding work relevant to NVIDIA technologies” up to $60,000 each.

Saxena, a fourth-year Ph.D. student in the School of Computer Science, is a computer architect focusing on data movement optimization for large language model training. His research goal is to help overcome the GPU memory wall.

“In these very large models, you have billions of parameters, and when trying to do generative inference, you fetch all these parameters from memory. If you can optimize the data movement at the model architecture, system software, and hardware levels, you can speed this process up a lot,” Saxena said.

Saxena said he is motivated to make these models more efficient so that those with limited resources can still use and research them.

“Being able to run these models efficiently, even with limited resources, I think, would be a very impactful way to democratize LLM research,” he said.

Saxena interned at NVIDIA last summer and will return this year as part of the fellowship. He described his previous internship as an enjoyable and productive experience and said that he will work with the same lab this summer.

“I think the biggest positive I’ve gotten out of this experience so far is seeing that I have a vision in something, and that people believe in that vision. It’s motivating to keep working toward that knowing that people are rooting for you,” he said.

Saxena said he hopes to continue identifying pressing problems in production LLM systems and providing elegant, practical solutions that improve training and serving efficiency. He believes the NVIDIA fellowship will help him find these problems and allow him to work with eminent experts to solve them effectively.

Saxena said his advisor, Professor Moinuddin Qureshi, has motivated him to research a wide variety of problems, and he thanked him for his continued support.

“What is most impressive about Anish is his incredible drive for research and passion for learning,” Qureshi said. “When he joined my group, he hit the ground running. For the past year, he has been working on memory systems for ML and developing very good insights. This is an important emerging area, as the growth of ML models and ML systems is primarily constrained by memory capacity and bandwidth. I am very excited to see what Anish will develop.”