Enterprises must build flexible AI foundations as models outpace infrastructure, says SUSE’s Abhinav Puri

Artificial intelligence models are evolving at a pace that far outstrips traditional enterprise technology cycles, forcing organisations to rethink how they design and operate their infrastructure.

Abhinav Puri, VP and General Manager of Portfolio Solutions and Services, SUSE, tells TechObserver.in’s Mohd Ujaley that the answer lies not in betting on any single model or vendor, but in building composable, open foundations that allow AI systems to evolve independently of the underlying platform.

“It’s not about betting on the right model; it’s about building a flexible platform,” Puri says.


Edited Excerpts:

AI models are evolving faster than traditional enterprise technology cycles. How are organisations adapting their infrastructure to keep pace without constantly rebuilding their systems?

Most teams are standardising on Linux and Kubernetes as a stable base and then abstracting everything above that. Models, guardrails and tooling are treated as interchangeable services rather than baked-in dependencies. That way the infrastructure stays largely the same even as models and evaluation tools change underneath.
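As a rough illustration of this pattern (not taken from the interview), the sketch below assumes an OpenAI-compatible inference gateway exposed as a stable in-cluster service; the URL, environment variables and model name are hypothetical. The point is that the application only knows the service address, so the model behind it can be swapped by changing configuration rather than code.

```python
# Minimal sketch: the application talks to whatever model sits behind a
# stable service URL, so swapping models is a configuration change, not
# a code change. Endpoint, model name and env vars are illustrative.
import os
import requests

# Both values would typically point at an in-cluster gateway
# (e.g. a Kubernetes Service fronting an inference server).
INFERENCE_URL = os.environ.get(
    "INFERENCE_URL", "http://llm-gateway.ai.svc/v1/chat/completions"
)
MODEL_NAME = os.environ.get("MODEL_NAME", "default-model")

def complete(prompt: str) -> str:
    """Send a chat completion to whichever model is currently configured."""
    resp = requests.post(
        INFERENCE_URL,
        json={
            "model": MODEL_NAME,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(complete("Summarise this invoice in one sentence."))
```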

In India, some local firms are bypassing heavy, expensive setups in favour of small language models that handle regional dialects efficiently. By blending these lean models with India Stack principles, they are building AI that is high-performance yet cost-effective at a massive, multilingual scale.

New models and tools are challenging incumbents on cost and capability. What criteria should enterprises use today to decide which AI models to deploy and when to switch?

Enterprises are getting much more pragmatic: cost per token, latency, risk profile, deployment flexibility and how easy it is to swap the model out later. Accuracy still matters, but avoiding lock-in and being able to move fast are now just as important as raw capability.
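One way to picture this pragmatism is a simple weighted scorecard across candidate models. The figures and weights below are placeholders, not benchmark data; the point is that selection becomes an explicit trade-off rather than a bet on raw capability alone.

```python
# Illustrative scorecard only: candidate models, prices and weights are
# placeholders. Lower cost and latency are better, so they are normalised
# against the best candidate before weighting.
candidates = {
    "general-llm":    {"cost_per_1k_tokens": 0.0100, "p95_latency_ms": 900, "accuracy": 0.92, "swap_ease": 0.5},
    "fine-tuned-slm": {"cost_per_1k_tokens": 0.0008, "p95_latency_ms": 150, "accuracy": 0.89, "swap_ease": 0.9},
}

weights = {"cost": 0.3, "latency": 0.2, "accuracy": 0.3, "swap_ease": 0.2}

def score(m: dict) -> float:
    best_cost = min(c["cost_per_1k_tokens"] for c in candidates.values())
    best_latency = min(c["p95_latency_ms"] for c in candidates.values())
    return (
        weights["cost"] * (best_cost / m["cost_per_1k_tokens"])
        + weights["latency"] * (best_latency / m["p95_latency_ms"])
        + weights["accuracy"] * m["accuracy"]
        + weights["swap_ease"] * m["swap_ease"]
    )

for name, m in sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: {score(m):.3f}")
```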

Another emerging decision criterion is whether a problem actually requires a general-purpose LLM at all. Many enterprise workloads, such as customer support, document classification, compliance review, or regional language interactions, are better served by domain-specific or task-optimised language models, including smaller models or fine-tuned LLMs.

These models often deliver higher accuracy, lower latency and significantly reduced cost because they are trained or adapted for a narrow problem space rather than broad reasoning. By aligning model choice to the specific use case, organisations can avoid over-engineering their AI stack while still achieving production-grade outcomes.

In a composable architecture, this allows enterprises to mix general LLMs for complex reasoning with specialised models for high-volume, repeatable tasks and swap either as requirements evolve.
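A minimal sketch of that mix-and-match routing is shown below. The task labels and model identifiers are hypothetical; in practice, routing might key off a classifier, request metadata or separate API paths, but the principle of sending narrow, repeatable tasks to specialised models and defaulting to a general LLM is the same.

```python
# Sketch of task-based routing in a composable stack. Task labels and
# model identifiers are illustrative placeholders.
SPECIALISED_MODELS = {
    "document_classification": "classifier-slm",
    "customer_support": "support-ft-llm",
    "regional_language": "indic-slm",
}
GENERAL_MODEL = "general-llm"

def pick_model(task: str) -> str:
    """Route repeatable, narrow tasks to specialised models; default to the general LLM."""
    return SPECIALISED_MODELS.get(task, GENERAL_MODEL)

print(pick_model("document_classification"))  # -> classifier-slm
print(pick_model("multi_step_reasoning"))     # -> general-llm
```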

We see that a key focus for Indian enterprises today is building a composable AI stack. By using open APIs and modular containerisation, enterprises can transition from experimentation to production-grade AI while maintaining full sovereignty over their data and choice of hardware.

Open-source software plays a growing role in AI development globally. What advantages does this bring to enterprises, and where do companies still need tighter controls?

Open source gives transparency, faster innovation and far more control over where and how AI runs. Building on open principles offers the transparency and speed needed for enterprises to stay ahead, without getting stuck in a vendor’s walled garden. By building on an open source foundation, organisations can better audit model provenance and ensure every bit of code is secure and compliant.

Enterprises still need a rock-solid, managed platform to handle supply chain security and governance. The goal is to marry the rapid innovation of the community with the stability of enterprise-grade lifecycle management. By leveraging open source foundations, India can position itself to build a strong Digital Public Infrastructure that taps into the best that public and private sectors have to offer, while ensuring public services are fit-for-purpose.

The government has emphasised open-source repositories as part of its AI strategy. How does this influence enterprise adoption, particularly around security, accountability and compliance?

That emphasis gives enterprises confidence that open source is a long-term, credible foundation, not a riskier option. At the same time, it raises expectations around security, auditability and accountability, which means companies still need enterprise-grade platforms around community projects.

Ensuring that every tool has a verified provenance and meets strict compliance standards is essential for production use. For Indian enterprises, this also means having the ability to integrate local and sector-specific initiatives while keeping data sovereign and accountability clear.

As AI workloads expand across edge and cloud environments, how are organisations rethinking security architectures to maintain visibility and control?

Security is shifting from perimeter-based thinking to policy-driven control planes. Organisations want consistent identity, policy and observability across environments, with visibility not just into infrastructure but into model behaviour and risk at runtime.
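As an illustrative sketch of policy-driven control with runtime visibility (the rules and log fields are placeholders, not SUSE tooling), a thin wrapper can check every model call against policy and emit an audit record regardless of where the model runs.

```python
# Sketch of a policy-driven wrapper around model calls: every request is
# checked against simple rules and logged, so model behaviour stays visible
# at runtime across environments. Rules and fields are placeholders.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-audit")

BLOCKED_TERMS = {"aadhaar", "account_number"}  # illustrative data-handling rule

def guarded_call(model_fn, prompt: str, user_id: str) -> str:
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        log.warning("policy_block user=%s time=%s",
                    user_id, datetime.now(timezone.utc).isoformat())
        raise PermissionError("Prompt violates data-handling policy")
    response = model_fn(prompt)
    # Emit an audit record for every call so behaviour is observable at runtime.
    log.info("model_call user=%s prompt_chars=%d response_chars=%d",
             user_id, len(prompt), len(response))
    return response

# Example: wrap any model-calling function
print(guarded_call(lambda p: "ok", "Summarise this report.", user_id="u-123"))
```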

As edge AI, or the practice of processing data locally on devices like sensors and cameras instead of in the cloud, takes flight in India, it has the potential to transform areas ranging from smart agriculture to advanced manufacturing. This shift makes an open, thoughtful and secure approach essential, as it allows firms to maintain strict data sovereignty by keeping sensitive information within local jurisdictional borders.

What does a ‘future-proof’ AI platform realistically look like for large enterprises, given ongoing uncertainty around regulation, models and infrastructure?

It’s not about betting on the right model. It’s about building a flexible platform. Linux and Kubernetes underneath, open standards on top, pluggable models and tools, and governance baked in from day one. That’s what lets enterprises adapt as regulation, infrastructure and AI capabilities keep changing.


