TechArena

Red Hat Boosts Enterprise AI Across the Hybrid Cloud with Red Hat AI

Red Hat

Red Hat has announced the latest updates to Red Hat AI, its portfolio of products and services designed to help accelerate the development and deployment of AI solutions across the hybrid cloud. Red Hat AI provides an enterprise AI platform for model training and inference that delivers increased efficiency, a simplified experience and the flexibility to deploy anywhere across a hybrid cloud environment.  

Even as businesses look for ways to reduce the cost of deploying large language models (LLMs) at scale across a growing number of use cases, they still face the challenge of integrating those models with the proprietary data that drives those use cases, and of accessing that data wherever it lives, whether in a data centre, across public clouds or at the edge.

Encompassing both Red Hat OpenShift AI and Red Hat Enterprise Linux AI (RHEL AI), Red Hat AI addresses these concerns by providing an enterprise AI platform that enables users to adopt more efficient, optimised models tuned on business-specific data, which can then be deployed across the hybrid cloud for both training and inference on a wide range of accelerated compute architectures.

Red Hat OpenShift AI

Red Hat OpenShift AI provides a complete AI platform for managing predictive and generative AI (gen AI) lifecycles across the hybrid cloud, including machine learning operations (MLOps) and LLMOps capabilities. The platform provides the functionality to build predictive models and tune gen AI models, along with tools to simplify AI model management, from data science and model pipelines and model monitoring to governance and more.
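Model serving on OpenShift AI is typically expressed through KServe custom resources. A minimal sketch of an `InferenceService` manifest follows; the name, namespace, runtime and storage location shown are illustrative assumptions, not details from the announcement:

```yaml
# Hypothetical example: serving a model on OpenShift AI via KServe.
# All names, the namespace and the storage URI below are illustrative.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: granite-demo          # illustrative service name
  namespace: my-ai-project    # illustrative data science project
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM            # assumes a vLLM serving runtime is configured
      storageUri: s3://models/granite-demo   # illustrative model location
```

Applying a manifest like this (for example with `oc apply -f inferenceservice.yaml`) asks the platform to stand up a model server endpoint for the referenced model.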

Red Hat OpenShift AI 2.18, the latest release of the platform, adds new updates and capabilities to support Red Hat AI’s aim of bringing better optimised and more efficient AI models to the hybrid cloud. Key features include:

RHEL AI

Part of the Red Hat AI portfolio, RHEL AI is a foundation model platform to more consistently develop, test and run LLMs to power enterprise applications. RHEL AI provides customers with Granite LLMs and InstructLab model alignment tools that are packaged as a bootable Red Hat Enterprise Linux server image and can be deployed across the hybrid cloud.
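InstructLab tuning is driven by taxonomy entries: short YAML files of seed examples that the toolchain expands into synthetic training data. A minimal sketch of a knowledge `qna.yaml` entry is below; the domain, context and question/answer values are invented for illustration:

```yaml
# Hypothetical InstructLab taxonomy entry (knowledge qna.yaml).
# All field values are illustrative assumptions.
version: 3
created_by: example-user
domain: product-support
seed_examples:
  - context: |
      Acme's warranty covers hardware defects for two years from purchase.
    questions_and_answers:
      - question: How long does Acme's hardware warranty last?
        answer: Two years from the date of purchase.
document_outline: Summary of the Acme warranty policy.
```

With an entry like this in place, the `ilab` CLI can generate synthetic training data from the seed examples (`ilab data generate`) and tune a Granite model on the result (`ilab model train`).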

Launched in February 2025, RHEL AI 1.4 added several new enhancements including:

Red Hat AI InstructLab on IBM Cloud

Increasingly, enterprises are looking for AI solutions that prioritise accuracy and data security while keeping costs and complexity as low as possible. Red Hat AI InstructLab, deployed as a service on IBM Cloud, is designed to simplify, scale and help improve the security footprint of AI model training and deployment. By simplifying InstructLab model tuning, organisations can build more efficient models tailored to their unique needs while retaining control of their data.

No-cost AI Foundations training

AI is a transformative opportunity that is redefining how enterprises operate and compete. To support organisations in this dynamic landscape, Red Hat now offers AI Foundations online training courses at no cost. Red Hat is providing two AI learning certificates that are designed for experienced senior leaders and AI novices alike, helping educate users of all levels on how AI can help transform business operations, streamline decision-making and drive innovation. The AI Foundations training guides users on how to apply this knowledge when using Red Hat AI.

Joe Fernandes, vice president and general manager, AI Business Unit, Red Hat, said: “Red Hat knows that enterprises will need ways to manage the rising cost of their generative AI deployments, as they bring more use cases to production and run at scale. They also need to address the challenge of integrating AI models with private enterprise data and be able to deploy these models wherever their data may live. Red Hat AI helps enterprises address these challenges by enabling them to leverage more efficient, purpose-built models, trained on their data and enable flexible inference across on-premises, cloud and edge environments.”
