Something to look forward to: Tech giants like Microsoft and Google, alongside OpenAI, have been making headlines with their AI research and advancements. Not to be outdone, Mark Zuckerberg and Meta have thrown their hat into the AI ring with the release of their new natural language model, LLaMA. The model reportedly outperforms GPT-3 on most benchmarks despite being only one-tenth of GPT-3's total size.
Announced in a blog post on Friday, Meta's Large Language Model Meta AI (LLaMA) is designed with research teams of all sizes in mind. At just 10% the size of GPT-3 (the third-generation Generative Pre-trained Transformer), LLaMA provides a small but high-performing model that even the smallest research teams can leverage, according to Meta.
The reduced model size means that small teams with limited resources can still run the model and contribute to advancements in AI and machine learning.