Meta-Qualcomm Partnership – Llama 2 For The Masses!

What is Llama 2?

Llama 2 is Meta's second baby: its very own large language model, built to compete with OpenAI's ChatGPT and others. The main difference is that Llama 2 is open-source and free for both research and commercial use. Meta hopes this will attract users across tech, academia, and beyond.

What happened to Llama (1)?

A week after it was announced, while Meta was still fielding requests for access, the model was leaked on 4chan and became available to download via torrent. Within days, it became the foundation for a wave of innovation, upgrades, and improvements across the AI space, with Llama serving as the base model.

Stanford University researchers even created a Llama variant that cost them only $600 to develop, proving that building language models can be both cost- and time-efficient.

The majority of these Llama variants can run on mobile devices, opening the door to individuals operating their own personal LLMs. A programmer, together with the open-source community, successfully ported the model's inference code to C++ (the llama.cpp project), enabling execution on mobile hardware. It can even run, albeit quite slowly, on a Google Pixel 5.

The Partnership – Qualcomm and Meta

With the previous example in mind, the partnership between Qualcomm and Meta could be the gateway to regular folks having their very own offline-accessible LLMs living on their mobile devices.

Meta gave Qualcomm unfettered access to Llama 2, and the chipmaker has begun building AI capabilities directly into its silicon, using its Hexagon processor to add dedicated AI functions.

Qualcomm calls the approach "micro tile inferencing," and it adds tensor cores, SegNet processing, scalar operations, and more to the AI processor. All of this is packed into a Snapdragon mobile chip!

Consequently, the chipmaker can bake in hardware-specific optimizations, allowing Llama 2 to run more efficiently on its chips than it would on generic hardware.

Given the projected 2024 release timeframe, Qualcomm will likely pursue additional partnerships to align with the debut of its Snapdragon 8 Gen 3 chip.

What does this mean for us?

This team-up means Llama 2 will run directly on devices powered by the new AI-enabled Snapdragon chips. By running Llama 2 on-device, app makers can save money and keep user information safe – no need to send data off to remote servers.

What's even better is that these models can work without an internet connection. You can personalize them based on your preferences, with everything stored on your device. Llama 2 fits right into the Qualcomm AI Stack, a set of tools for making AI models run more efficiently on-device.

The open-source community will undoubtedly make significant contributions to the (almost) entirely open Llama 2. This initiative, combined with the substantial industry momentum towards on-device AI, marks the beginning of a series of efforts aimed at fostering a dynamic on-device AI ecosystem.

Apple helped kick off the trend with the Neural Engine in its chips, including the M1, and now a new generation of processors is set to make on-device AI even more accessible.

Conclusion

"Fast" cannot even begin to describe the pace at which AI is expanding. Soon, we will have our own personalized AI tools and applications, fueled by powerful language models, right at our fingertips. Stay tuned for more updates like this!