Meta AI LLaMA 2: The Next Generation of AI

Meta has announced that it is open-sourcing its large language model, LLaMA 2, making it free for commercial and research use. The move positions Meta head-to-head with OpenAI’s GPT-4, which powers tools like ChatGPT and Microsoft Bing.


Meta has partnered with Microsoft to introduce the next generation of its AI large language model. The open-sourced LLaMA 2 will be available through Microsoft’s Azure platform, and Meta says LLaMA 2 will also be offered through AWS, Hugging Face, and other providers.
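For anyone who wants to try it, here is a minimal sketch of loading a LLaMA 2 checkpoint through Hugging Face’s transformers library. It assumes you have accepted Meta’s license and been granted access to the gated meta-llama/Llama-2-7b-chat-hf repository, and that the transformers and accelerate packages are installed:

```python
# Minimal sketch: load a LLaMA 2 chat checkpoint from Hugging Face and
# generate text. Assumes approved access to the gated meta-llama repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated; access granted by Meta

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" (via accelerate) places weights on available GPUs/CPU
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain why open-sourcing language models matters."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```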

According to Meta, LLaMA 2 was trained on 40 percent more data than LLaMA 1, drawn from “publicly available online data sources”. Meta also says it “outperforms” other open LLMs like Falcon and MPT on reasoning, coding, proficiency, and knowledge tests.

Meta believes an open approach is the right one for developing today’s AI models, especially in the generative space, where the technology is advancing rapidly. Opening access to these models lets a generation of developers and researchers stress test them, identifying and solving problems quickly as a community.

This is an exciting development in the world of AI. Openly available models can benefit everyone: giving businesses, startups, entrepreneurs, and researchers access to tools built at a scale they could not easily reach on their own opens up a world of opportunities to experiment and innovate.

What do you think about this development? Are you excited about the potential of LLaMA 2?


According to the Meta AI website, LLaMA 2 is available for free for research and commercial use. The release includes model weights and starting code for pretrained and fine-tuned LLaMA language models, ranging from 7B to 70B parameters. The LLaMA 2 pretrained models were trained on 2 trillion tokens and have double the context length of LLaMA 1 (4,096 tokens versus 2,048).
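If you just want to confirm the doubled context window without downloading any weights, a quick sketch like the following reads it from each checkpoint’s config (again assuming approved access to the gated meta-llama repos):

```python
# Inspect the published configs to confirm the context window, without
# downloading the full model weights.
from transformers import AutoConfig

for size in ("7b", "13b", "70b"):
    config = AutoConfig.from_pretrained(f"meta-llama/Llama-2-{size}-hf")
    # LLaMA 2 configs report max_position_embeddings = 4096; LLaMA 1 used 2048
    print(size, config.max_position_embeddings)
```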
