India teams power our global AI ambitions, says IBM Research AI head

“Our research teams here are actually at the heart of much of the global code work. Indian software teams are also leading the development of watsonx Orchestrate, a digital labor automation tool, and watsonx.data, the data component of IBM’s watsonx platform,” Sriram Raghavan, vice-president for AI at IBM Research, told Mint in an interview.

Raghavan, who leads a global team of over 750 research scientists and engineers across all IBM Research locations, including India, was in Mumbai to attend the company’s annual India flagship event, which is being held in the city this year.

“India is like a microcosm for IBM. Every part of IBM is represented here: research labs, software labs, systems labs, and we are continuing to grow,” he said.

For instance, IBM, which is a partner of India’s AI Mission and the country’s Semiconductor Mission, has installed watsonx on the Centre for Development of Advanced Computing’s (C-DAC) Airavat graphics processing unit (GPU) infrastructure, which “startups and ecosystem partners can use”.

On 23 September, Prime Minister Narendra Modi met top tech leaders, including IBM chief executive officer (CEO) Arvind Krishna and Google CEO Sundar Pichai, in New York, where he discussed topics such as AI, quantum computing, biotechnology and life sciences, and semiconductor technologies.

Also Read: What turned IBM from tech titan to cautionary tale

Raghavan pointed out that IBM has a strong public-private ecosystem in New York, where its lab in Albany works closely with the State University of New York and the New York State nanotechnology center. “We’re applying lessons from this to help the Indian government build similar ecosystems,” he explained.

Closer home, IBM is collaborating with L&T Semiconductor Technologies Ltd, combining its expertise in semiconductor intellectual property (IP) with L&T’s industry knowledge in a bid to foster innovation in semiconductor solutions.

AI Evolution

Raghavan underlined that AI is gaining serious attention across hardware, programming and enterprise applications. “Companies want fit-for-purpose models that are efficient, scalable and affordable, which is IBM’s focus too,” he said.

IBM’s AI approach, according to him, comprises three key elements: the Granite series (IBM’s flagship family of open and proprietary large language models, or LLMs); the InstructLab open-source project for customizing models; and the watsonx platform for integrating, managing and securely deploying AI models across different environments, including on-premises infrastructure, public clouds and IBM Cloud.
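Because IBM publishes Granite checkpoints openly on Hugging Face, they can be pulled into a standard open-source stack. The snippet below is a minimal, illustrative sketch of loading one such model with the Hugging Face transformers library; the specific model ID and generation settings are assumptions for illustration, not guidance given in the interview.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint name; substitute whichever Granite variant is available to you.
MODEL_ID = "ibm-granite/granite-3.0-2b-instruct"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,   # smaller memory footprint on supported hardware
    device_map="auto",            # place layers on available devices automatically
)

# Build a chat-style prompt and generate a short completion.
messages = [{"role": "user", "content": "In one sentence, what is a fit-for-purpose AI model?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same checkpoint could equally be served through watsonx or another runtime; the point of the sketch is only that the model weights themselves are openly downloadable.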

Like Meta Platforms Inc., IBM believes in releasing models as open source. “The real value comes in managing and optimizing those models, much like we’ve done with Red Hat and Linux,” said Raghavan.

When generative AI (GenAI) emerged, it gave rise to concerns that the models were closed, proprietary and potentially dangerous.

“Hence, we (IBM and Meta) launched the AI Alliance (in December 2023) to emphasize the value of an open approach, and many Indian companies too have joined the movement, recognizing that AI is too important to be developed behind closed doors,” Raghavan said.

Also Read: Quantum-centric supercomputers to soon be a reality: IBM’s Dario Gil

The AI Alliance now includes IIT-Bombay, AI4Bharat (IIT-Madras), IIT-Jodhpur, Infosys Ltd, KissanAI, People+AI, and Sarvam AI.

“By keeping models open, we invite more eyes to help innovate and build better safeguards. It’s not the model that poses the risk, but how it’s used,” Raghavan insisted.

He underlined that the US government has acknowledged this approach in recent executive orders, recognizing that overly restrictive measures would stifle innovation, especially in academia and startups.

According to Raghavan, foundational technologies should be open in order to foster collaboration and drive new ideas, even as clients continue to pay for enterprise-grade support, security and management. “Monetization will come from managing AI applications,” he explained.

But are enough companies moving from pilots to the production stage, and how are they getting the return on investment (ROI) from GenAI? “Our priority is cost, performance, security and skills as clients move from proof-of-concept (POC) to production,” Raghavan asserted. He cited an IBM study, which revealed that 10-20% of companies had scaled at least one AI use case. He acknowledged that the number is growing, but challenges remain, especially in regulated industries.

Also Read: Let’s see if AI can work its magic to close education deficiencies

“Successful companies focus on key areas with clear ROI potential rather than spreading efforts too thinly across multiple POCs. This targeted approach allows them to scale up efficiently and realize meaningful returns. As companies scale their AI use cases, they discover the importance of balancing technology, process and culture,” he said.

AI Use Cases

According to him, IBM sees three key use case categories: customer care, application modernization, and digital labor and business automation. “Customer care is a natural fit, even before Gen AI. Everyone wants better customer service at a lower cost. The real value comes from creating fit-for-purpose models tailored to specific needs,” he explained, adding that a customer service model, for instance, doesn’t need to solve complex problems, which helps keep costs down.

Application modernization is also critical, especially as enterprises deal with massive legacy codebases. “For example, IBM’s watsonx Code Assistant for COBOL (an old programming language) helps modernize mainframe code, allowing developers to work with older languages more easily. We’re extending this to Java, another language crucial to enterprises. Digital labor, or business automation, covers processes like supply chain, finance, and HR. Our watsonx Orchestrate suite is designed to streamline these operations using AI,” he elaborated.

Raghavan acknowledged, though, that as companies adopt AI, they face challenges in three areas: skills, trust and cost. “That’s where watsonx.governance comes in—it helps automate model governance, ensuring proper usage, tracking data, and running risk assessments.”

When asked about the debate surrounding AI acquiring enhanced “reasoning” abilities, Raghavan admitted that it’s “a nuanced topic”. Models don’t reason like humans with logic, he explained. Instead, they learn by example. “While current AI can reason in specific domains like IT systems or code, general-purpose reasoning remains out of reach.”

Also Read: GenAI has a killer app. It’s coding, says Databricks AI head Naveen Rao

Domain-specific reasoning, too, is “incredibly useful”, according to Raghavan. For example, AI can improve IT automation or help fix code issues by learning from examples, making it a practical and valuable approach.

He concluded: “We are also seeing the shift from models that simply provide answers to those that ‘think’ before responding. These models, which engage in System 2 behavior (to borrow Daniel Kahneman’s framing), can self-criticize and refine their responses. This will drive more complex AI tasks but raise costs, as inference times increase with more in-depth reasoning.”
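To make the cost point concrete, here is a conceptual sketch (not an IBM API) of a draft-critique-revise loop of the kind Raghavan describes. The function name and the `call_llm` stand-in are hypothetical; the takeaway is simply that self-critique multiplies the number of inference calls per answer.

```python
from typing import Callable

def answer_with_self_critique(call_llm: Callable[[str], str], question: str) -> str:
    """Illustrative 'System 2' pattern: draft an answer, critique it, then revise it."""
    draft = call_llm(f"Answer the question:\n{question}")
    critique = call_llm(
        f"Question:\n{question}\n\nDraft answer:\n{draft}\n\n"
        "Point out any errors, gaps or unsupported claims in the draft."
    )
    revised = call_llm(
        f"Question:\n{question}\n\nDraft answer:\n{draft}\n\nCritique:\n{critique}\n\n"
        "Rewrite the answer, fixing the issues raised in the critique."
    )
    # Three model calls instead of one: potentially better answers, higher inference cost.
    return revised

if __name__ == "__main__":
    # Trivial stand-in model so the sketch runs end to end without any weights.
    echo_model = lambda prompt: f"[model output for: {prompt[:40]}...]"
    print(answer_with_self_critique(echo_model, "Why do longer inference times raise costs?"))
```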
