By Alex Reiniger, RHEL Business Development Manager at Red Hat
Africa is set to benefit greatly from adopting AI-driven and AI-enabled technologies. CEOs across Sub-Saharan Africa are already recognising the potential of generative artificial intelligence (GenAI) to accelerate growth and competitiveness, with many anticipating that it will enhance the quality of their businesses’ products and services.
For Africa to contribute to and participate in the global AI revolution, we must take a pragmatic approach. For businesses to build their own applications and solutions, they need access to the tools and systems that enable them to do so. That approach begins with a conversation about the operating system, the layer that manages a machine’s hardware and software resources. Through the values of openness, agility, and security, we can build the AI-enabled applications and services of the future.
Managing resources in the cloud
The role of the operating system in modern business is not the same as it was at the start of the millennium. It is constantly evolving, especially across Africa as the continent continues to digitally transform and adopt cloud-based products and services.
Where most organisations previously only needed to deploy an operating system in a single, on-premises environment, today they need a solution capable of running smoothly across hybrid architectures: on-premises (private cloud), hyperscalers (public cloud), and edge, the latter of which is particularly significant for AI workloads. The operating system is one of the most important means by which businesses can manage hybrid computing resources. Linux, which remains the operating system with the largest user base in the world, enshrines standards that enhance portability, interoperability, and openness across cloud environments, thanks to its maturity and community support.
By sharing the same critical infrastructure at the operating system level, businesses can achieve greater predictability, scalability, and security. These elements are important as they are also what will determine when a business can realise its AI ambitions.
A solid foundation for AI implementations
The modern role of the operating system now means it serves as the foundation on which businesses can build the next generation of AI-enabled applications and solutions. With an operating system like Red Hat Enterprise Linux (RHEL), businesses can start to leverage AI models, including large language models (LLMs), for their specific use cases.
Arguably, LLMs and subsequent innovations are what brought GenAI to the top of people’s minds and drove the corporate prioritisation of AI in general. Though AI for predictive and generative purposes has been around for a long time – long before products such as ChatGPT – it is now applicable to a wide range of enterprise use cases. That said, LLMs come with disadvantages and challenges for businesses. They require significant computational resources and high bandwidth, and owing to their growing size and complexity, organisations may have difficulty maintaining them and overseeing their decision-making processes.
The secret to a successful AI implementation is knowing exactly what you want the AI to accomplish, which is why many organisations may instead derive greater value from building and deploying small language models (SLMs) or from using pretrained LLMs – a good example being RHEL AI with its IBM Granite models. Both SLMs and pretrained LLMs offer benefits ranging from reduced costs to specialised functionality, enabling businesses to build solutions that deliver the desired outcomes.
Whether businesses in Africa want to build an LLM- or SLM-powered solution, they need platforms that accelerate AI adoption while abstracting the complexities of delivery and bringing agility and flexibility to the development and deployment processes.
Transparency, agility, and openness
With all that said, what role does the operating system play in building, deploying and maintaining AI models? Today, the operating system has evolved to meet the needs of AI, helping businesses build an IT stack from top to bottom – from the AI models at the top down to optimisation of the hardware itself.
Red Hat Enterprise Linux (RHEL) AI, a foundation model platform for developing, testing and running Granite family LLMs (in addition to other open-source models), enables portability across hybrid cloud environments, lowers development costs, and is easy to onboard. For example, organisations can deploy a chatbot to handle customer interactions or staff queries in a short amount of time. Thanks to the open-source nature of Granite, organisations can contribute directly to AI development, while also accessing technical support that lets them fine-tune their applications.
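To make the chatbot example concrete: RHEL AI serves models through an OpenAI-compatible inference API (via vLLM), so a minimal chatbot client can be a plain HTTP call. The sketch below is illustrative only – the endpoint URL and the `granite-7b-lab` model name are assumptions, not documented defaults, and would depend on how a given deployment is configured.

```python
import json
from urllib.request import Request, urlopen

# Hypothetical local endpoint for an OpenAI-compatible inference
# server (e.g. vLLM as used by RHEL AI). Adjust to your deployment.
ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-compatible chat completion payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful support assistant."},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.2,  # keep answers focused for support use cases
    }

def ask(payload: dict) -> str:
    """POST the payload to the inference server and return the model's reply."""
    req = Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Build (but do not send) a sample staff-query request.
payload = build_chat_request("granite-7b-lab", "How do I reset my VPN password?")
```

Because the API follows the widely used chat-completions schema, the same client code works whether the model behind the endpoint is an SLM or a larger pretrained LLM.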
Platforms such as RHEL AI are how we bring openness and value to Africa. With RHEL, businesses can establish an AI/ML ‘factory’ thanks to enterprise-grade Granite models, GPU support, and specific libraries and tools for deploying and tuning models. This is how businesses can reinforce agility and transparency, and embed throughout their organisation the values that will lead to innovation and a new generation of open, responsible and impactful AI-enabled applications and services.