Red Hat going all in on its AI strategy with Lightspeed expansion, OpenShift AI updates, and more


At its annual Red Hat Summit event, the popular open source company made several announcements regarding its AI strategy. 

It expanded Red Hat Lightspeed to more products, revealed updates to OpenShift AI, launched Red Hat Enterprise Linux (RHEL) AI, and introduced Podman AI Lab. 

Red Hat Lightspeed expands to OpenShift and RHEL

This generative AI offering was first incorporated into Ansible at the end of 2023. 

According to Red Hat, OpenShift Lightspeed changes how developers interact with OpenShift by applying generative AI to the deployment and management processes. For instance, when a cluster is at capacity, Lightspeed might suggest the user enable autoscaling. 
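To give a sense of what such a suggestion amounts to in practice, here is a minimal sketch of enabling cluster autoscaling on OpenShift with the Kubernetes Python client. The resource kind and API group follow OpenShift's autoscaling.openshift.io/v1 API, but the node limits are purely illustrative values, and Lightspeed itself surfaces this kind of advice through its own interface rather than code like this.

```python
# Minimal sketch: enabling cluster autoscaling on OpenShift with the
# Kubernetes Python client (pip install kubernetes). The limits shown
# are illustrative values, not recommendations.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in a pod
api = client.CustomObjectsApi()

cluster_autoscaler = {
    "apiVersion": "autoscaling.openshift.io/v1",
    "kind": "ClusterAutoscaler",
    "metadata": {"name": "default"},  # OpenShift expects a single object named "default"
    "spec": {
        "resourceLimits": {"maxNodesTotal": 12},
        "scaleDown": {"enabled": True},
    },
}

# ClusterAutoscaler is cluster-scoped, so no namespace is passed.
api.create_cluster_custom_object(
    group="autoscaling.openshift.io",
    version="v1",
    plural="clusterautoscalers",
    body=cluster_autoscaler,
)
```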

In RHEL, Lightspeed will help users simplify their Linux operations by answering common questions and solving problems, such as flagging when a fix for a CVE has been released so that the user can apply it, or helping a user with limited command-line knowledge schedule patching for a maintenance window.

“Red Hat Lightspeed puts AI to work, instantly, enabling the rapid acquisition of new skills while scaling existing expertise, from building the foundation of the hybrid cloud with Red Hat Enterprise Linux to bringing cloud-native applications to life with Red Hat OpenShift to managing distributed environments with Red Hat Ansible Automation Platform,” said Ashesh Badani, senior vice president and chief product officer at Red Hat.

OpenShift AI gets several updates in version 2.9

OpenShift AI is an offering that allows companies to build and deploy AI-powered applications throughout their hybrid cloud environments. 

With the release of OpenShift AI 2.9, there is greater support for deploying applications at the edge, including inference capabilities in those resource-limited environments.

OpenShift AI also now supports the use of multiple model servers, which allows users to run AI on a single platform for multiple use cases. Support was added for KServe, including the vLLM and text generation inference server (TGIS) serving engines for LLMs, as well as the Caikit-nlp-tgis runtime.
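As an illustration of how a model might be deployed through KServe, the sketch below creates an InferenceService with the Kubernetes Python client. The InferenceService schema is KServe's serving.kserve.io/v1beta1 API, but the runtime name, model format, storage URI, and namespace are assumptions for illustration; the actual names depend on the ServingRuntimes configured on a given cluster.

```python
# Minimal sketch: deploying an LLM through a KServe InferenceService.
# Runtime/format names, storage URI, and namespace are placeholders.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

inference_service = {
    "apiVersion": "serving.kserve.io/v1beta1",
    "kind": "InferenceService",
    "metadata": {"name": "granite-demo", "namespace": "demo-project"},
    "spec": {
        "predictor": {
            "model": {
                "modelFormat": {"name": "vLLM"},          # assumed format name
                "runtime": "vllm-runtime",                 # assumed runtime name
                "storageUri": "s3://models/granite-7b",    # hypothetical location
            }
        }
    },
}

api.create_namespaced_custom_object(
    group="serving.kserve.io",
    version="v1beta1",
    namespace="demo-project",
    plural="inferenceservices",
    body=inference_service,
)
```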

Additionally, there are new features that support model development, including updated project workspaces, additional workbench images, and enhanced CUDA support.

Other new additions in OpenShift AI 2.9 include monitoring visualizations, new accelerator profiles, and distributed workloads with Ray, using CodeFlare and KubeRay.
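For context on the distributed-workloads piece, the sketch below shows the Ray side of such a workload. It assumes a Ray cluster has already been provisioned (for example by CodeFlare and KubeRay on OpenShift AI) and that its client endpoint is reachable at the placeholder address shown.

```python
# Minimal sketch of a distributed workload with Ray, assuming an existing
# Ray cluster whose client endpoint is the placeholder address below.
import ray

ray.init(address="ray://raycluster-head.demo-project.svc:10001")  # placeholder address

@ray.remote
def score_batch(batch):
    # Stand-in for real work, e.g. running inference on a shard of data.
    return sum(batch) / len(batch)

batches = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
futures = [score_batch.remote(b) for b in batches]  # fan tasks out across the cluster
print(ray.get(futures))                             # gather results: [2.0, 5.0, 8.0]

ray.shutdown()
```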

“Bringing AI into the enterprise is no longer an ‘if,’ it’s a matter of ‘when,’” said Badani. “Enterprises need a more reliable, consistent and flexible AI platform that can increase productivity, drive revenue and fuel market differentiation. Red Hat’s answer for the demands of enterprise AI at scale is Red Hat OpenShift AI, making it possible for IT leaders to deploy intelligent applications anywhere across the hybrid cloud while growing and fine-tuning operations and models as needed to support the realities of production applications and services.” 

Red Hat Enterprise Linux AI released

With RHEL AI, users get access to a platform for developing, testing, and deploying generative AI models.

It includes supported versions of IBM's open source Granite LLM family and InstructLab's model alignment tools. 


InstructLab is a joint effort between IBM and Red Hat, based on IBM Research’s Large-scale Alignment for chatBots (LAB) approach, which uses data and skills tied to a taxonomy to generate synthetic data to train models. 

“The InstructLab project aims to put LLM development into the hands of developers by making, building and contributing to an LLM as simple as contributing to any other open source project,” Red Hat said in its announcement.
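To show what such a contribution might look like, here is a minimal sketch of the kind of seed data InstructLab's taxonomy holds: a handful of question-and-answer examples filed under a skill, which the LAB pipeline then expands into synthetic training data. The field names mirror the project's published qna.yaml examples but should be treated as illustrative, and the contributor name, skill, and taxonomy path are hypothetical.

```python
# Minimal sketch of an InstructLab-style taxonomy seed file. Field names and
# the taxonomy path are illustrative, not a definitive schema.
import yaml  # pip install pyyaml

qna_entry = {
    "created_by": "example-contributor",  # hypothetical contributor
    "task_description": "Summarize a changelog entry in one sentence.",
    "seed_examples": [
        {
            "question": "Summarize: 'Fixed a race condition in the scheduler.'",
            "answer": "A scheduler race condition was fixed.",
        },
        {
            "question": "Summarize: 'Added TLS support to the metrics endpoint.'",
            "answer": "The metrics endpoint now supports TLS.",
        },
    ],
}

# A contributor would save this as qna.yaml under the matching taxonomy
# directory, e.g. compositional_skills/writing/summarization/ (path is illustrative).
with open("qna.yaml", "w") as f:
    yaml.safe_dump(qna_entry, f, sort_keys=False)
```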

RHEL AI is available as an RHEL image that can be deployed individually, or it can be accessed through OpenShift AI.

Red Hat also offers enterprise product distribution, 24×7 production support, and extended lifecycle support.

“To truly lower the entry barriers for AI innovation, enterprises need to be able to expand the roster of who can work on AI initiatives while simultaneously getting these costs under control. With InstructLab alignment tools, Granite models and RHEL AI, Red Hat aims to apply the benefits of true open source projects – freely accessible and reusable, transparent and open to contributions – to GenAI in an effort to remove these obstacles,” the company said.

Red Hat announces Podman AI Lab

Podman AI Lab is an extension to Podman Desktop, which is a graphical interface for building and deploying containers and Kubernetes workloads.

With Podman AI Lab, developers gain access to a similar graphical interface for building and deploying generative AI workloads on their own devices. 

It includes a sample library with templates for common generative AI applications that can be built upon, including chatbots, text summarizers, code generators, object detection, and audio-to-text.

The AI Lab also includes a playground environment where users can interact with and observe their models. 
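As a rough illustration of experimenting against a locally served model, the sketch below sends a chat request to an OpenAI-compatible endpoint on a local port, which is how Podman AI Lab's model services are typically exposed; the port and model name are placeholders, not values defined by the product.

```python
# Minimal sketch: calling a locally served model through an OpenAI-compatible
# chat endpoint. The port and model identifier are placeholders.
import requests

resp = requests.post(
    "http://localhost:35000/v1/chat/completions",  # placeholder port
    json={
        "model": "local-model",  # placeholder model identifier
        "messages": [
            {"role": "user", "content": "Summarize this commit message: 'Refactor build pipeline.'"}
        ],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```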

“The AI era is here, but for many application developers, building intelligent applications presents a steep learning curve. Podman AI Lab presents a familiar, easy-to-use tool, and playground environment to apply AI models to their code and workflows safely and more securely, without requiring costly infrastructure investments or extensive AI expertise,” said Sarwar Raza, vice president and general manager of the application developer business unit at Red Hat.


