Student Researchers Make the Most of Their Allocation

By Megan Johnson, NCSA

When ACCESS launched last fall, one of the program’s new features was an expansion of the types of allocations awarded, easing the requirements in some of the application processes. For example, the new Explore ACCESS allocation option made it very simple for smaller projects to get an allocation. Another group that benefits significantly from the less restrictive requirements is student researchers: graduate students are eligible to be principal investigators (PIs) on Explore ACCESS allocations. One such student research team from the University of Illinois Urbana-Champaign (UIUC) made the most of the student allocation options. Not only did the allocation allow them to complete their work, it also gave them the resources to produce a paper that was accepted for a poster presentation at the 2023 ICML Federated Learning workshop. Beyond aiding the research itself, opportunities like these can strengthen a young researcher’s career, providing a strong foundation of practical research skills and a jumpstart on their future.

The team was given an allocation on Delta, NCSA’s GPU-based supercomputer. Their project focused on Federated Learning (FL). Federated Learning is a way to train machine learning models by having many different computers work together while preserving the privacy of the data local to each machine. These computers are called clients, and they team up to train a model with the aid of a central server. This differs from more traditional methods of training a model, where all the data is sent directly to the server. In FL, only updates to the model are exchanged between the clients and the server. This is beneficial in situations where one wants to keep data private, like with medical information, because the client computers don’t need to share their data with the server. In short, FL allows an artificial intelligence (AI) model to be trained without having to share sensitive or protected information.
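The client/server exchange described above can be sketched in a few lines of Python. This is a minimal, illustrative FedAvg-style round using NumPy, not the team’s actual algorithm: each hypothetical client takes one local training step on its own data and returns only updated model weights, which the server averages. The linear-regression loss and the learning rate are stand-in assumptions for whatever model the clients would really train.

```python
import numpy as np

def train_on_client(global_weights, client_data, lr=0.1):
    """Local training on one client: update the model on private data
    and return only the new weights -- the data itself never leaves."""
    X, y = client_data
    w = global_weights.copy()
    # One gradient step on a simple linear-regression loss
    # (an illustrative stand-in for real local training).
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_round(global_weights, clients):
    """Server side: collect each client's updated weights and
    aggregate them (here, a plain FedAvg-style mean)."""
    updates = [train_on_client(global_weights, data) for data in clients]
    return np.mean(updates, axis=0)

# Simulate three clients, each holding its own private dataset.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w))

# Run federated training rounds; only weights cross the network.
w = np.zeros(2)
for _ in range(100):
    w = federated_round(w, clients)
```

After enough rounds, the aggregated model fits the combined data even though the server never saw any client’s dataset, which is the privacy property the article describes.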


Rishub Tamirisa is one of the student researchers who worked on this project. He said they came to ACCESS because they knew they needed more power for their work. “We chose ACCESS because the GPUs are better than most GPUs in typical university lab clusters (NVIDIA A100s vs. V100s). For federated learning, or any large model training, having the best GPUs at one’s disposal is critical for doing efficient research.”

The team was pleasantly surprised at how easy it was to get started. “Getting an ACCESS allocation through the application was very straightforward. We gave details about our project and got approval quickly,” Tamirisa said.

His group’s project aimed to refine the existing method of FL. “The goal of our research was to introduce a new method for federated learning,” Tamirisa explained. “Federated learning aims to solve the problem of having multiple models trained on different datasets learn from each other during training, without sharing training data (thus preserving privacy). Our paper introduced a new algorithm for doing this that achieved higher accuracy on existing benchmarks than prior methods.”

The multiple-GPU setup via Delta made training significantly more efficient and enabled faster research iteration on our ideas. In just three months, we went from initial research formulation to implementation, resulting in work accepted at ICML, one of the top ML conferences worldwide.

– Rishub Tamirisa, student researcher, UIUC

While there are many research domains that could benefit from time on a supercomputer, when it comes to machine learning, supercomputers are quickly becoming essential. “Federated learning, in particular, is expensive,” said Tamirisa, “since it requires training multiple models in parallel and aggregating their results. ACCESS helped us get much faster results because of the high-end GPU access.”

You can read more about this story here: Delta Powers Student Research


Project Details

Resource Provider Institution: National Center for Supercomputing Applications (NCSA)
Affiliations: University of Illinois Urbana-Champaign
Funding Agency: NSF
Grant or Allocation Number(s): OCI 2005572

The science story featured here was enabled by the ACCESS program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296.

Sign up for ACCESS news and updates.

Receive our monthly newsletter with ACCESS program news in your inbox. Read past issues.