The Georgia Institute of Technology, Sandia National Laboratories, and the Pacific Northwest National Laboratory are jointly launching a new research center to solve some of the most challenging problems in artificial intelligence (AI) today, thanks to $5.5 million in funding from the U.S. Department of Energy (DoE).
AI enables computer systems to automatically learn from experience without being explicitly programmed. Such technology can perform tasks that formerly only a human could: see, identify patterns, make decisions, and act.
The new co-design center, known as the Center for ARtificial Intelligence-focused Architectures and Algorithms (ARIAA), funded by DoE’s Office of Science, will promote collaboration between scientists at the three organizations as they develop core technologies important for the application of AI to DoE mission priorities, such as cybersecurity, electric grid resilience, graph analytics, and scientific simulations.
PNNL Senior Research Scientist Roberto Gioiosa will be the center’s director and will lead the overall vision, strategy, and research direction. Tushar Krishna, an assistant professor in Georgia Tech’s School of Electrical and Computer Engineering (ECE), and Siva Rajamanickam, a computer scientist at Sandia, will serve as deputy directors.
Each institution brings to the collaboration a unique strength: PNNL has expertise in power grid simulation, chemistry, and cybersecurity and has research experience in computer architecture and programming models, as well as computing resources such as systems for testing emerging architectures. Sandia has expertise in software simulation of computer systems, machine learning algorithms, graph analytics, and sparse linear algebra, and will provide access to computer facilities and testbed systems to support early access to emerging computing architectures for code development and testing. Georgia Tech has expertise in modeling and developing custom accelerators for machine learning and sparse linear algebra and will develop and provide access to novel hardware prototypes.
“New projects like the center were made possible by the strategic collaboration between Sandia and Georgia Tech for the past few years,” said Sandia’s Rajamanickam.
Special-purpose hardware can enable AI tasks to run faster and use less energy than on conventional computing devices such as CPUs and GPUs. ARIAA is centered around a concept known as “co-design,” which refers to the need for researchers to weigh and balance the capabilities of hardware and software, and to navigate the vast space of possible architectures and algorithms to best solve the problems at hand.
Krishna’s lab will lead the effort on architecting and evaluating reconfigurable hardware accelerators that can adapt to the rapidly evolving needs of AI applications. In particular, running sparse computations efficiently will be a key focus. Sparse computations are critical to several computational areas of interest to the DoE because they greatly reduce the number of computations on problems with large amounts of data. One way of thinking about sparsity is that there might be millions or even billions of users on a social media site, but a user cares about updates only from a few hundred friends.
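The savings from sparsity can be made concrete with a back-of-the-envelope calculation. The sketch below (illustrative only, not ARIAA code; the user and follower counts are hypothetical figures chosen for the social media analogy above) compares the work required for a dense all-pairs approach against a sparse approach that touches only the connections that actually exist:

```python
# Illustrative sketch of why sparsity reduces computation.
# Hypothetical numbers: a site with 1,000,000 users where each
# user follows only ~200 others (as in the social media analogy).

n_users = 1_000_000
avg_follows = 200

# Dense approach: examine every possible user pair.
dense_ops = n_users * n_users

# Sparse approach: one operation per stored "follows" edge.
sparse_ops = n_users * avg_follows

print(f"dense:   {dense_ops:,} operations")
print(f"sparse:  {sparse_ops:,} operations")
print(f"savings: {dense_ops // sparse_ops:,}x fewer operations")
```

In this toy case the sparse approach does 5,000 times less work, which is why hardware that handles sparse data efficiently matters for large-scale DoE workloads.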
Krishna, the ON Semiconductor Junior Professor in ECE, works on custom hardware accelerators for AI. In 2015, he co-developed the Eyeriss Deep Learning ASIC (in collaboration with MIT), one of the earliest prototypes to demonstrate real-time inference on AlexNet, a state-of-the-art deep neural network at the time. More recently, his lab has been working on an analytical microarchitectural design-space exploration tool for AI accelerators known as MAESTRO (developed in collaboration with NVIDIA) and a reconfigurable AI accelerator platform known as MAERI. Both of these will be leveraged to perform co-design as part of the ARIAA center.
“Georgia Tech provides a great environment to carry out research in hardware-software co-design due to a rich collaborative environment across ECE and the College of Computing, and vibrant research centers such as Machine Learning at Georgia Tech (ML@GT) and the Center for Research into Novel Computing Hierarchies (CRNCH) that bring together researchers with experience in algorithms, compilers, architecture, circuits, and novel devices, fostering collaboration and innovation,” said Krishna.
- Adapted from an article by the Pacific Northwest National Laboratory
Research News
Georgia Institute of Technology
177 North Avenue
Atlanta, Georgia 30332-0181 USA
Media Relations Contact: John Toon (404-894-6986) (jtoon@gatech.edu)