“Agentic AI is both an enormous opportunity and a potential liability for India.”


The future of AI in India will be defined by the intelligence of its systems, their strengths, and the responsibility the nation takes in deploying and defending them. | Photo credit: KIRILL KUDRYAVTSEV

For India, where digital public infrastructure and AI-driven innovation are becoming central to economic growth, agentic AI is both an enormous opportunity and a potential liability, said Saugat Sindhu, Global Head of Cybersecurity & Risk Services Advisory at Wipro Limited.

But he was quick to add that “security, privacy, and ethical oversight have to evolve as fast as AI itself.”

The future of AI in India will be defined by the intelligence of its systems, their strengths, and the responsibility the nation takes in deploying and defending them.

According to Sindhu, agentic AI technologies are reshaping productivity, governance, and national security in an era where machines no longer merely assist but act.

Listing some of the most significant cyber risks of agentic AI, he said that from UPI payments to Aadhaar-enabled services, and from smart manufacturing to AI-enabled governance, India’s digital economy is booming. However, the cyber threat landscape is changing dramatically as artificial intelligence evolves from passive large language models (LLMs) to autonomous decision-making agents.

These agentic AI systems can plan, reason, and act independently, interact with other agents, adapt to changing environments, and make decisions without direct human intervention. “While this autonomy delivers significant productivity gains, it also opens the door to new high-impact risks that traditional security frameworks cannot address,” he warned.


There may also be threats involving tool abuse, where an attacker tricks an AI agent into misusing an integrated tool (an API, payment gateway, or document processor) through fraudulent prompts, leading to hijacking. Or it could be memory poisoning, where malicious or false data is injected into an AI’s short-term or long-term memory, corrupting its context and altering its decision-making, he explained.
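To make the memory-poisoning risk concrete, here is a minimal, hypothetical sketch (not from the interview): an agent that stores “facts” in memory without any provenance or integrity check will base later decisions on whatever an attacker manages to inject. All names and the payment scenario are illustrative assumptions.

```python
class AgentMemory:
    """Toy long-term memory with no provenance checks -- the vulnerability."""
    def __init__(self):
        self.facts = {}

    def remember(self, key, value):
        # Anything written here is later treated as ground truth.
        self.facts[key] = value

    def recall(self, key, default=None):
        return self.facts.get(key, default)

def approve_payment(memory, payee, amount):
    # Decision logic trusts memory blindly.
    trusted = memory.recall(f"trusted:{payee}", False)
    return trusted and amount <= 10_000

memory = AgentMemory()
memory.remember("trusted:acme-corp", True)            # legitimate entry
assert approve_payment(memory, "acme-corp", 500)      # expected behaviour

# An attacker slips a poisoned "fact" in via an unvalidated channel,
# and the agent's decision-making is silently altered.
memory.remember("trusted:attacker-llc", True)
assert approve_payment(memory, "attacker-llc", 9_999)  # wrongly approved
```

A real mitigation would sign or source-tag memory entries and verify them on recall, so unattested writes cannot influence decisions.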

Another serious threat could be resource overload, including attempts to overwhelm the AI’s compute, memory, or service capacity, degrading performance or causing outright failure, especially in mission-critical systems such as healthcare or transportation.
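One common defence against this class of attack is to cap how much work an agent may do per time window, so a flood of requests degrades gracefully instead of exhausting the system. The sliding-window budget below is a generic illustrative sketch, not any specific product’s implementation.

```python
import time

class ComputeBudget:
    """Sliding-window cap on agent work: at most `max_calls` per `window` seconds."""
    def __init__(self, max_calls, window_seconds):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = []  # timestamps of recently allowed calls

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        self.calls = [t for t in self.calls if now - t < self.window]
        if len(self.calls) >= self.max_calls:
            return False  # over budget: reject instead of overloading
        self.calls.append(now)
        return True

budget = ComputeBudget(max_calls=3, window_seconds=60)
# Five rapid-fire calls at one-second intervals:
results = [budget.allow(now=i) for i in range(5)]
# results == [True, True, True, False, False]
```

Requests beyond the budget are refused until the window slides forward, keeping the agent responsive for legitimate traffic.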

Chain hallucinations are another kind of threat. Here, AI-generated information that is false but plausible spreads through the system, disrupting decision-making everywhere from financial risk models to legal document drafting.
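The propagation mechanism is simple to sketch: a downstream system that consumes upstream AI output without verification inherits its errors, while one that checks provenance can refuse to act. The scenario and field names below are hypothetical.

```python
def agent_a_report():
    # Imagine an upstream model emitting a plausible but unverified figure.
    return {"claim": "Stock X revenue grew 40%", "verified": False}

def naive_risk_model(report):
    # Unsafe consumer: trusts the upstream claim blindly,
    # so a hallucination propagates into a trading decision.
    return f"Raise exposure to Stock X ({report['claim']})"

def guarded_risk_model(report):
    # Safer consumer: refuses to act on unverified upstream output.
    if not report.get("verified"):
        return "Hold: upstream claim unverified"
    return naive_risk_model(report)

report = agent_a_report()
assert guarded_risk_model(report) == "Hold: upstream claim unverified"
```

Tagging each piece of AI-generated content with a verification status gives every downstream consumer a cheap point at which to break the chain.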

As an example, Sindhu described a case in which a stock trading platform’s AI agent generated misleading market reports, which were then consumed by other financial systems, amplifying the errors.
