Meta AI (formerly Facebook AI) is the organization within Meta (formerly Facebook) that is responsible for the company's AI research, developing AI systems and engaging with the wider research and academic communities by collaborating, publishing papers, presenting at conferences, and open-sourcing some of its tools. Meta AI aims to push the boundaries of AI to create a more connected world and build safe and responsible systems.
Meta AI conducts fundamental and applied research to advance the field of AI and find ways to incorporate new technology into Meta's products. The research division openly collaborates with others in the community and publishes in peer-reviewed journals and conferences. Key research areas of Meta AI include the following:
- Computer vision
- Conversational AI
- Integrity
- Natural language processing
- Ranking and recommendations
- Systems research
- Theory
- Speech and audio
- Human and machine intelligence
- Reinforcement learning
- Robotics
Meta AI dates back to December 2013 and the formation of Facebook's AI Research (FAIR) laboratory, led by Yann LeCun. On June 2, 2022, Meta announced it would be reorganizing its AI research with a new decentralized structure for Meta AI. Previously, teams like AI Platform, AI for Product, and AI4AR drew on FAIR's research and brought AI into Meta's products under the oversight of Jerome Pesenti. Pesenti stated that the centralized nature of the organization made it challenging to integrate research. The new model distributes the ownership of AI systems back to Meta's product groups, with the company believing this will accelerate adoption across the company. Teams tasked with driving AI advancement into products will be known as AI Innovation Centers. Changes with the new structure include the following:
- The Responsible AI organization joining the Social Impact team
- The AI for Product teams moving to the product engineering team
- The AI4AR team joining the XR team in Reality Labs
- FAIR becoming a part of Reality Labs Research
On February 24, 2023, Meta announced Large Language Model Meta AI (LLaMA), a 65-billion-parameter large language model. Three days later, on February 27, Meta announced it was revamping its AI unit into a top-level product group focused on generative AI. This started by pulling teams working on generative AI across the company into a single group focused on building technology for all of its products. The group initially focused on creative tools before moving into developing AI personas. The company is exploring AI tools with text for WhatsApp and Messenger, images for Instagram and ad formats, and video and multi-modal experiences. The new product group will report to chief product officer Chris Cox and will be led by vice president of AI and machine learning Ahmad Al-Dahle. On March 3, 2023, a week after the announcement of LLaMA, the model was leaked online. A downloadable torrent of the AI system was posted on 4chan before spreading to other online AI communities.
Before founding an AI research lab, Facebook had already started using basic machine learning techniques to decide what users saw on their news feeds. Additionally, some Facebook engineers were experimenting with convolutional neural networks.
MIT Technology Review reported that Facebook was planning to launch an AI lab in September 2013. On December 9, 2013, the company announced Yann LeCun had joined to lead a newly formed AI group. A professor at New York University, LeCun is an expert in deep learning and machine learning, working on AI since the 1980s. He developed an early version of the back-propagation algorithm, which would go on to be one of the most popular ways to train neural networks. While working at AT&T Bell Laboratories, he created the convolutional network model, which mimics aspects of the visual cortex of living beings and provides a pattern recognition system for machines.
Facebook CEO Mark Zuckerberg announced the news of LeCun and the new AI research group at the Neural Information Processing Systems Conference in Lake Tahoe, California. In his own announcement, LeCun stated the long-term goal of bringing major advances in AI. He continued working at NYU on a part-time basis, with Facebook building a new facility in New York City, close to NYU's main campus. In addition to the new facility, Facebook's AI group would have locations in Menlo Park, California, and London, United Kingdom.
The group would go on to be called Facebook’s AI Research lab (FAIR). In an interview five years after its formation, LeCun stated:
You wouldn’t be able to run Facebook without deep learning... It’s very, very deep in every aspect of the operation
In February 2023, Meta announced a 65-billion-parameter large language model called Large Language Model Meta AI (LLaMA), designed to help researchers advance their AI work. Unlike language models from OpenAI/Microsoft and Google that power conversational chatbots, LLaMA is not a system users can talk to; it is a research tool for those working in the field. Meta is releasing LLaMA under a noncommercial license focused on research use cases, with access granted to groups like universities, NGOs, and industry labs. Like other large language models, LLaMA takes a sequence of words as input and predicts the next word, recursively generating text. LLaMA was trained using text from twenty languages, focusing on those with Latin and Cyrillic alphabets. In the announcement of LLaMA, Meta stated:
models such as LLaMA enable others in the research community who don’t have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field.
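The next-word prediction loop described above can be sketched in a few lines. The bigram table below is an invented stand-in for a real multi-billion-parameter network, and all names here are illustrative, not part of Meta's release:

```python
# Toy sketch of autoregressive generation, the scheme LLaMA uses:
# the model repeatedly predicts the next token given the sequence so
# far and appends it. BIGRAM is a made-up stand-in for a real model.
BIGRAM = {
    "the": "model",
    "model": "predicts",
    "predicts": "the",
    "next": "word",
}

def generate(prompt: list[str], steps: int) -> list[str]:
    """Greedily extend `prompt` one token at a time."""
    tokens = list(prompt)
    for _ in range(steps):
        # A real model conditions on the full sequence, not just
        # the last token, and samples from a probability distribution.
        nxt = BIGRAM.get(tokens[-1])
        if nxt is None:
            break
        tokens.append(nxt)
    return tokens

print(generate(["the"], 3))  # ['the', 'model', 'predicts', 'the']
```

A real system differs mainly in the predictor: instead of a lookup table, a transformer scores every vocabulary token, and the loop samples from that distribution rather than taking a fixed successor.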
As the size of the model impacts the computing power and resources required to test new approaches, Meta is making LLaMA available in several sizes:
- 7 billion parameters
- 13 billion parameters
- 33 billion parameters
- 65 billion parameters
The release came alongside a paper with more details on the model titled "LLaMA: Open and Efficient Foundation Language Models." In the paper, Meta claims the 13-billion-parameter model (LLaMA-13B) performs better than OpenAI's popular GPT-3 model on most benchmarks, while the largest model, LLaMA-65B, is "competitive with the best models," such as DeepMind's Chinchilla-70B and Google's PaLM-540B.
On March 3, 2023, a week after the announcement of LLaMA, the model was leaked. A downloadable torrent of the system was posted on 4chan before spreading to other online AI communities. On March 6, 2023, Meta announced it would continue to release its AI tools to approved researchers despite the leak to unauthorized users. In a statement, the company said:
While the model is not accessible to all, and some have tried to circumvent the approval process, we believe the current release strategy allows us to balance responsibility and openness.