Africa’s First Multilingual Small Language Model (SLM) Gets Even Smaller – Thanks to Top African Innovators

Africa’s first multilingual Small Language Model (SLM), InkubaLM, has now been shrunk by up to 75%, without sacrificing performance. That’s a big deal for a continent where compute power, internet access, and high-end devices are often out of reach. It’s proof that leaner, locally optimised models can unlock AI’s potential across Africa.

This remarkable achievement emerged from the Buzuzu-Mavi Challenge, a collaboration between Lelapa AI and Zindi, which drew over 490 data scientists, researchers and machine learning (ML) engineers from 61 countries. The challenge? Make InkubaLM smaller, faster and more efficient, while keeping its performance strong.

Why Size (Still) Matters

In Africa, innovation must match infrastructure. With more than 70% of smartphone users on entry-level devices and internet penetration in Sub-Saharan Africa still hovering around 33%, heavy, compute-intensive models don’t stand a chance. That’s where resource-efficient models like InkubaLM come in. They unlock real-world use cases, such as:

  • A farmer accessing climate updates in their home language
  • Students using AI-powered educational tools on budget smartphones
  • Call centres and clinics offering multilingual support without cloud dependency

Originally designed to support African languages like isiXhosa, Swahili and Hausa, InkubaLM became even more agile thanks to the challenge participants, who reduced the model size by up to 75% while maintaining strong translation quality.

The Winners

The competition highlighted the brilliance of African AI talent:

🥇 1st Place: Yvan Carré (Cameroon) used adapter heads, quantisation and knowledge distillation to create a compressed, adaptive and efficient version of InkubaLM that performed well across tasks.
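For readers curious what the quantisation piece of such a pipeline can look like, here is a minimal, hypothetical sketch using PyTorch's post-training dynamic quantisation on a toy network. It is not the winning code, and the adapter-head and distillation steps are not shown; it simply illustrates how int8 weights shrink a model's footprint.

```python
# Minimal sketch of post-training dynamic quantisation, one of the techniques named above.
# A toy two-layer network stands in for the model; this is not the winning pipeline,
# and the adapter-head and distillation steps are omitted.
import os
import tempfile

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 512))

# Replace nn.Linear weights with int8 versions; activations are quantised on the fly.
quantised = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def size_on_disk_mb(m: nn.Module) -> float:
    """Serialise the state dict to a temporary file and report its size in MB."""
    with tempfile.NamedTemporaryFile(delete=False) as f:
        torch.save(m.state_dict(), f.name)
    size = os.path.getsize(f.name)
    os.remove(f.name)
    return size / 1e6

print(f"fp32: {size_on_disk_mb(model):.1f} MB  ->  int8: {size_on_disk_mb(quantised):.1f} MB")
```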

🥈 2nd Place: Stefan Strydom (South Africa) aggressively shrank the model to just 40M parameters by trimming vocabulary size, reducing layer depth, and using shared embeddings.
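As a rough illustration of how far trimming vocabulary, depth and embeddings can go, the sketch below builds a toy causal-LM configuration with Hugging Face Transformers. The numbers are hypothetical, not the winning entry's actual architecture, but they land in the same tens-of-millions range.

```python
# Toy causal-LM configuration illustrating the shrink-by-architecture idea:
# a trimmed vocabulary, fewer layers, and tied (shared) input/output embeddings.
# The numbers are hypothetical, not the winning entry's actual architecture.
from transformers import LlamaConfig, LlamaForCausalLM

config = LlamaConfig(
    vocab_size=16_000,         # trimmed from a larger multilingual vocabulary
    hidden_size=512,
    intermediate_size=1_408,
    num_hidden_layers=6,       # reduced layer depth
    num_attention_heads=8,
    num_key_value_heads=8,
    tie_word_embeddings=True,  # share the input and output embedding matrices
)
model = LlamaForCausalLM(config)

n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")  # roughly 27M with these settings
```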

🥉 3rd Place: Team AI_Buzz – Abdourahamane Ide Salifou, Mubarak Muhammad, and Victor Olufemi (Nigeria and Niger) blended multiple datasets and distilled a performant student model with just 177M parameters.
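The distillation step in an approach like this typically trains the smaller student to match the teacher's softened output distribution. Below is a generic sketch of that loss, using the standard recipe rather than Team AI_Buzz's exact code.

```python
# Generic knowledge-distillation loss: the small student is trained to match the
# teacher's softened output distribution, blended with the usual cross-entropy.
# This is the standard recipe, not Team AI_Buzz's exact code.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Weighted sum of soft-target KL loss (teacher -> student) and hard-label loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1)
    )
    return alpha * soft + (1 - alpha) * hard

# Toy shapes: batch of 2, sequence length 4, vocabulary of 10.
student_logits = torch.randn(2, 4, 10, requires_grad=True)
teacher_logits = torch.randn(2, 4, 10)
labels = torch.randint(0, 10, (2, 4))
print(distillation_loss(student_logits, teacher_logits, labels))
```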

The Buzuzu-Mavi Challenge reaffirmed what we’ve always believed at Lelapa AI: Africa has the talent. All five winners were African, a powerful testament to the depth of machine learning and data science expertise across the continent.

In Their Words

“Language is more than just communication, it’s a carrier of culture, identity, and knowledge. By supporting low-resource languages, we empower communities to participate fully in the digital world.” – Yvan Carré

“Building tools for people in their own languages is critical to making the technology accessible to more people.” – Stefan Strydom

A shared theme across the winning entries was a deep commitment to accessibility and inclusion. Participants consistently emphasised the importance of building AI that works in people’s native languages to ensure it truly serves and reaches those who need it most.

A Step Forward for African AI 

As Pelonomi Moiloa, Lelapa AI’s CEO and co-founder, puts it: “Optimising language models for Africa goes beyond technical achievement, it reflects our deeper mission at Lelapa AI: to build AI that is inclusive, accessible, and grounded in African realities. And when African talent is trusted with meaningful challenges, the results are not just outstanding, they’re a glimpse into the future we’re building.”

And from Zindi’s CEO and co-founder, Celina Lee: “It is a joy and a privilege for us at Zindi to partner with Lelapa AI on the Buzuzu-Mavi Challenge. Seeing the impact that our incredible community of AI builders can have on a truly African problem is inspiring and rewarding in its own right, but even better, these solutions showcase what African innovators can do in the language model space. In a world where the state of the art requires ever larger language models, we’re proud to show the world that more can be done with less.”

What’s Next?

Some of the top submissions will be integrated into future versions of InkubaLM. InkubaLM isn’t production-ready yet: it’s a base model, a building block. But it’s open-source and ready to be explored, adapted, and fine-tuned by anyone committed to language equity.

Explore InkubaLM on HuggingFace

The challenge may be over. But the mission to make African AI smaller, smarter, and more inclusive has only just begun.

Note: The original development of InkubaLM was supported by compute provided through Microsoft’s AI for Good initiative.

