3 Indian-origin scientists win top US award for AI, supercomputing
The award recognizes early-career scientists

May 05, 2026, 04:58 pm

What's the story

Three Indian-origin researchers have received the prestigious 2025 Outstanding Postdoctoral Performance Awards from Argonne National Laboratory. The award recognizes early-career scientists who are advancing scientific knowledge and contributing to national missions in energy and security. This year's recipients are Kiran Kumar Yalamanchi, FNU Shilpika, and Krishna Teja Chitty-Venkata, whose work spans artificial intelligence (AI), high-performance computing, and sustainability.

Research focus

Yalamanchi's work in computational science

Yalamanchi's research focuses on computational science, where he integrates traditional physics-based modeling with machine learning, concentrating in particular on fluid dynamics and energy applications. One of his major contributions is the development of multimodal foundation models that can analyze different types of data to predict behavior and design new materials. He also uses AI for "inverse molecular design," identifying and creating fuel molecules with improved efficiency and sustainability.

AI transparency

Shilpika's contributions to digital twin development

Shilpika, who works at Argonne's Leadership Computing Facility, is focused on making complex algorithms more transparent. She has developed a "digital twin" of Aurora, one of the world's most powerful exascale supercomputers. This virtual replica mimics the real system and lets engineers and scientists monitor performance, predict failures, and optimize operations in real time. Her work is crucial for maintaining efficiency and trust as supercomputers become increasingly complex.


AI optimization

Chitty-Venkata's focus on AI efficiency

Chitty-Venkata's research focuses on making AI systems more efficient. He has worked on techniques such as pruning and quantization to streamline neural networks without significantly compromising performance. He has also developed open-source tools like LLM-Inference-Bench, which measures how efficiently AI models run on high-performance systems. These efforts are crucial for scaling AI in a cost-effective and environmentally sustainable manner.
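To give a sense of what quantization means in practice, here is a minimal, hypothetical sketch (not Argonne's actual tooling or LLM-Inference-Bench): a neural network's 32-bit floating-point weights are mapped to 8-bit integers, shrinking storage roughly fourfold while keeping values close to the originals.

```python
# Minimal illustration of symmetric post-training quantization.
# This is a simplified sketch for intuition only; real toolkits
# handle per-channel scales, calibration data, and hardware kernels.

def quantize(weights, num_bits=8):
    """Map a list of floats to signed integers with a shared scale."""
    qmax = 2 ** (num_bits - 1) - 1           # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    """Recover approximate float weights from the quantized ints."""
    return [q * scale for q in q_weights]

weights = [0.52, -1.30, 0.07, 0.91]
q, scale = quantize(weights)
approx = dequantize(q, scale)
# Each recovered weight stays close to the original, but each value
# now fits in one byte instead of four -- the kind of saving that
# makes large models cheaper to serve.
```

Pruning is the complementary idea: weights near zero are removed entirely rather than stored at lower precision, and the two techniques are often combined.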
