AI can speed up decisions, but can't be fully trusted
At the India AI Impact Summit 2026, Vianai founder and ex-Infosys CEO Vishal Sikka called out major flaws in large language models (LLMs).
He said that while LLMs can speed up decisions and rebuild services, they still aren't reliable enough for businesses to trust them fully.
Hallucinations and safety checks
Sikka pointed to "hallucinations", instances where the AI simply makes things up, as a key reason enterprises hesitate to adopt these systems.
He stressed the need for serious safety checks so AI systems don't act unpredictably, likening the situation to how nuclear technology is tightly regulated for everyone's safety.
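Sikka didn't spell out what those checks look like, but one common enterprise pattern is to verify that an answer is grounded in trusted source material before it goes out. The sketch below is purely illustrative: the is_grounded helper, the word-overlap heuristic, and the 0.5 threshold are assumptions made for this example, not anything Vianai or the summit described.

```python
# Illustrative only: a toy "grounding" check of the kind enterprises layer on
# top of LLM output. The function name, heuristic, and threshold are assumed
# for this sketch, not taken from Sikka's remarks.

def is_grounded(sentence: str, sources: list[str], min_overlap: float = 0.5) -> bool:
    """Treat a sentence as grounded if enough of its words appear in some source."""
    words = {w.lower().strip(".,") for w in sentence.split() if len(w) > 3}
    if not words:
        return True  # nothing substantive to verify
    for src in sources:
        src_words = {w.lower().strip(".,") for w in src.split()}
        if len(words & src_words) / len(words) >= min_overlap:
            return True
    return False

sources = ["Acme Corp reported revenue of $2.1 billion in fiscal 2025."]
draft = [
    "Acme Corp reported revenue of $2.1 billion in fiscal 2025.",
    "Acme Corp also acquired three startups last quarter.",  # unsupported claim
]
for sentence in draft:
    status = "ok" if is_grounded(sentence, sources) else "possible hallucination"
    print(f"{status}: {sentence}")
```

Real deployments use far stronger techniques (retrieval citations, secondary verifier models, human review), but the basic idea is the same: don't let unverified claims reach a business decision.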
Power consumption of AI systems
He also criticized how much power LLMs consume.
Every little prompt lights up massive data centers, while the human brain runs on just 15-20 W, roughly what a laptop draws at idle.
Sikka called this "completely absurd" and urged redesigns that make AI more efficient and scalable.
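To see why he calls the gap absurd, a rough back-of-envelope comparison helps. Only the 15-20 W brain figure comes from his remarks; the per-query energy and accelerator power numbers below are assumed ballpark values, since public estimates vary widely.

```python
# Back-of-envelope sketch of the power gap Sikka is pointing at. The figures
# are rough assumptions for illustration, not measurements from the talk.

BRAIN_POWER_W = 20          # roughly the 15-20 W figure cited for the human brain
QUERY_ENERGY_WH = 0.3       # assumed energy for one LLM query, data-center side
ACCELERATOR_POWER_W = 700   # typical peak draw of a single high-end AI accelerator

# How long could a 20 W brain run on the energy of one LLM query?
brain_seconds_per_query = QUERY_ENERGY_WH * 3600 / BRAIN_POWER_W
print(f"One query's energy ≈ {brain_seconds_per_query:.0f} s of brain operation")

# Power ratio of one accelerator to one brain, ignoring cooling and networking.
print(f"One accelerator draws ≈ {ACCELERATOR_POWER_W / BRAIN_POWER_W:.0f}x the brain's power")
```

Even under these generous assumptions, a single query buys the equivalent of under a minute of brain-level "thinking", which is the kind of mismatch driving calls for more efficient designs.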