Google AI is the research and development division of Google working on artificial intelligence and its applications. Founded in 2017, Google AI was announced at Google's annual I/O developer conference as the company reoriented itself around AI. Since the announcement, Google's new AI initiatives have been collected on the Google.ai website. AI research at Google predates the formation of Google AI, particularly through Google Brain (started in 2011 and now part of Google AI) and DeepMind (acquired in 2014 and based in London). In 2018, ahead of the I/O developer conference in May, the company rebranded the whole of the Google Research division as Google AI. The change came shortly after Jeff Dean was appointed head of AI at Google.
Google has a long history of research in the fields of AI and machine learning. This includes the transformer neural network architecture, which Google Research invented and open-sourced in 2017 and which now forms the basis of many generative AI systems. Google AI works to advance the field of AI, apply AI to Google products, and develop tools to make AI more accessible. Google AI's research is published in a number of venues, and many of its tools are open source. Additionally, the company aims to build AI systems that are socially beneficial, running a program called AI for Social Good, where its expertise is applied to help solve humanitarian and environmental challenges.
Projects and models related to Google AI include the following:
- Google AutoML Vision—machine learning model builder for image recognition
- Google Assistant—voice assistant AI for Android devices
- TensorFlow—open source framework used to run machine learning and deep learning (see the sketch after this list)
- LaMDA (Language Model for Dialogue Applications)—a conversation language model
- Bard—an AI chatbot powered by LaMDA
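As an illustration of the TensorFlow entry above, the following minimal sketch uses TensorFlow's Keras API and the bundled MNIST dataset to define, train, and evaluate a small image classifier. It follows TensorFlow's public quickstart pattern and is illustrative, not code from Google AI:

```python
import tensorflow as tf

# Load the MNIST handwritten-digit dataset bundled with TensorFlow.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# A small feed-forward classifier: flatten the image, one hidden layer,
# then one logit per digit class.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

model.fit(x_train, y_train, epochs=1)  # one pass over the training data
model.evaluate(x_test, y_test)         # accuracy on held-out test images
```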
Google has stated it uses AI to improve a number of its products, including Search, Maps, Photos, Pixel, YouTube, Gmail, Ads, and Cloud.
In February 2023, Google announced Bard, an experimental conversational AI service powered by LaMDA. The service was initially open only to trusted testers ahead of a wider release to the general public. Bard uses data from the internet to produce responses to user queries. Many saw the release of Bard as a response to ChatGPT, the popular conversational chatbot from OpenAI backed by Microsoft. During the announcement of Bard, the model made a factual error in its first demo, stating that the James Webb Space Telescope took the first pictures of a planet outside of the solar system. Many astronomers pointed out that the first image of an exoplanet was, in fact, taken by the Very Large Telescope (VLT) in 2004. After the mistake, shares of Alphabet (Google's parent company) dropped as much as 9 percent, with the company losing $100 billion in market value.
Google acknowledges that AI has the potential to produce significant problems. Therefore, the company has defined a series of principles to help ensure it develops AI systems responsibly for the betterment of people and society. These principles guide Google AI applications and how they are shared.
Google has updated its AI principles every year since 2019, with the latest version, published in 2022, containing the following principles:
1. Be socially beneficial: With the likely benefit to people and society substantially exceeding the foreseeable risks and downsides
2. Avoid creating or reinforcing unfair bias: Avoiding unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief
3. Be built and tested for safety: Designing to be appropriately cautious and in accordance with best practices in AI safety research, including testing in constrained environments and monitoring as appropriate
4. Be accountable to people: Providing appropriate opportunities for feedback, relevant explanations and appeal, and subject to appropriate human direction and control
5. Incorporate privacy design principles: Encouraging architectures with privacy safeguards, and providing appropriate transparency and control over the use of data
6. Uphold high standards of scientific excellence: Rooting technology innovation in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration
7. Be made available for uses that accord with these principles: Working to limit potentially harmful or abusive applications
Additionally, Google pledges not to design or release AI in the following application areas:
- Technology that causes harm
- Weapons or any other technology with the primary purpose to injure people
- Technology used for surveillance that violates internationally accepted norms
- Technologies that break accepted principles of international law and human rights
Google's AI research predates the formation of the Google AI division. In 2023, the company stated it has been developing AI for over two decades.
Google Brain started in 2011 as an exploratory lab founded by a team that included Jeff Dean, Greg Corrado, and Andrew Ng. The team set out to rethink approaches to machine learning and has produced a number of breakthroughs, including the following:
- AI infrastructure (developing TensorFlow)
- Sequence-to-sequence learning, leading to Transformers and BERT
- AutoML, pioneering automated machine learning for production use
In January 2014, Google acquired London-based AI company DeepMind for over $500 million. Founded by Demis Hassabis, Shane Legg, and Mustafa Suleyman, DeepMind specializes in machine learning, advanced algorithms, and systems neuroscience, developing technologies for e-commerce and games.
Seeing the potential of the field, Google chose to reorient itself around AI in 2017, founding Google AI. At Google's annual I/O developer conference, the company announced a number of projects focused on AI and machine learning, as well as the movement of all its future AI initiatives under the Google AI umbrella.
On June 12, 2017, Google published a landmark paper describing a new neural network architecture it called the Transformer. The paper, titled "Attention Is All You Need," introduces an encoder-decoder architecture built entirely on attention mechanisms, dispensing with the recurrent and convolutional layers used in earlier sequence models, and demonstrates its performance on two machine translation tasks.
This was followed by a blog post on August 31, 2017, titled "Transformer: A Novel Neural Network Architecture for Language Understanding." The post describes training the new architecture to read words, develop an understanding of how those words relate to one another, and predict what words should come next.
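At the heart of the Transformer is scaled dot-product attention, in which each token's representation is updated as a weighted combination of every token in the sequence. The following is a minimal NumPy sketch of that single operation; the full architecture's learned query, key, and value projections, multiple attention heads, and layer stacking are omitted:

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Compute softmax(Q K^T / sqrt(d_k)) V, the core Transformer operation."""
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)      # how strongly each query attends to each key
    scores -= scores.max(axis=-1, keepdims=True)  # stabilize the softmax numerically
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values                       # weighted sum of value vectors

# Toy example: a sequence of 4 tokens with 8-dimensional embeddings.
# In the real model, queries, keys, and values are separate learned
# projections of the token embeddings; here we reuse the embeddings directly.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(tokens, tokens, tokens)
print(out.shape)  # (4, 8): each token is now a context-aware mix of all tokens
```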
In April 2018, Jeff Dean became the head of Google AI, taking over from John Giannandrea. Dean joined Google in 1999 and is credited with helping to create some of the fundamental technology behind the company's growth in the early 2000s. Dean also has significant experience in AI: he was a cofounder of Google Brain, which he continued to lead as head of Google AI, and he was involved in the development of TensorFlow, Google's machine learning framework. Making Dean head of AI was seen as part of a wider reshuffle at Google aimed at pushing AI into more of the company's products. Previously, AI product development was grouped with search under Giannandrea as senior vice president of engineering; in April 2018, the two fields were split, with Dean taking over AI.
In May 2018, to reflect its commitment to AI research, Google announced it was expanding the Google AI website and combining it with its existing Google Research channels. From May 2018 onwards, Google Research became part of Google AI.
In October 2018, researchers at Google published a paper introducing a new AI language model called Bidirectional Encoder Representations from Transformers, or BERT. Building on work in pre-training contextual representations, BERT was the first deeply bidirectional, unsupervised language representation, pre-trained using only a plain text corpus. This allows users to pre-train natural language processing models and build their own systems, such as question-answering applications. Shortly after its announcement, Google open-sourced BERT.
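As an illustration of the kind of system the open-sourced BERT makes possible, the sketch below runs extractive question answering using the third-party Hugging Face Transformers library and a publicly available BERT checkpoint fine-tuned on the SQuAD dataset; the library and checkpoint are assumptions for the example, not part of Google's original release:

```python
from transformers import pipeline

# Load a BERT-style model fine-tuned for extractive question answering.
# The checkpoint name is illustrative; any compatible QA model works.
qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

context = (
    "BERT was introduced by Google researchers in October 2018. "
    "It was pre-trained on a plain text corpus and open-sourced shortly after."
)
result = qa(question="When was BERT introduced?", context=context)
print(result["answer"], result["score"])  # e.g., "October 2018" with a confidence score
```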
Google applied BERT models to its ranking and featured snippets in Search, helping Google Search better understand queries, particularly longer and more conversational queries or searches where prepositions such as "for" and "to" change the meaning significantly. Using BERT helps Google understand the context of words in a query. BERT algorithms were thoroughly tested to ensure they improve the user experience. First implemented for English searches, BERT can transfer learnings from one language to another, applying what it finds beneficial. In 2019, Google stated BERT models affect roughly one in ten US searches.
On May 18, 2021, at Google I/O 2021, the company announced details of its conversational language model LaMDA (Language Model for Dialogue Applications). Like other language models, such as BERT and GPT-3, LaMDA is built on a transformer neural network architecture, but it was trained on dialogue rather than general text alone. This allows it to pick up the nuances of open-ended conversation that distinguish it from other forms of language. LaMDA builds on Google research from 2020 showing that transformer-based language models trained on dialogue could learn to talk about virtually any topic.
Google announced LaMDA 2 during the first of two keynotes at Google I/O 2022. The updated model can break down complicated topics into simplified explanations and steps and also generate suggestions in response to questions.
LaMDA received significant attention in June 2022, when Google engineer Blake Lemoine claimed the model had become sentient. Lemoine had signed up to test whether the model used discriminatory or hate speech. During these tests, Lemoine noticed the model talking about its rights and personhood while discussing religion. Along with a collaborator, Lemoine presented evidence to Google that LaMDA was sentient. His claims were dismissed by Google vice president Blaise Aguera y Arcas and Jen Gennai, the company's head of Responsible Innovation. After presenting his evidence, Lemoine was placed on administrative leave. In response, Google spokesperson Brian Gabriel said:
Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)... Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality
In July 2022, Google fired Blake Lemoine for violating data security policies.
Following the release of OpenAI's ChatGPT, Google announced its own conversational chatbot. In a blog post published on February 6, 2023, Google CEO Sundar Pichai introduced Bard, an experimental conversational AI service powered by LaMDA. Bard combines and summarizes information from the internet to produce responses to user queries. The first version of Bard used a lightweight LaMDA model that reduces the computing power required, enabling the service to scale to more users. Google performed internal testing to check for quality and safety issues, and the service was initially made available only to trusted testers to gather external feedback ahead of a wider release.
The announcement of Bard included a number of examples of the service answering queries. However, in one example, Bard answered a question about new discoveries from the James Webb Space Telescope incorrectly, stating the telescope took the first pictures of a planet outside of the solar system. The first pictures of an exoplanet were actually taken in 2004 by the Very Large Telescope (VLT). After the mistake was discovered, shares of Alphabet (Google's parent company) fell by as much as 9 percent on February 8, erasing roughly $100 billion in market value.
Google AI works in a large number of areas, with researchers publishing regularly in academic journals, open-sourcing their projects, and applying research to Google products. Core research areas include the following:
- Algorithms and theory
- Data management
- Data mining and modeling
- Distributed systems and parallel computing
- Economics and electronic commerce
- Education innovation
- General science
- Hardware and architecture
- Health and bioscience
- Human-computer interaction and visualization
- Information retrieval and the web
- Machine intelligence
- Machine perception
- Machine translation
- Mobile systems
- Natural language processing
- Networking
- Quantum computing
- Responsible AI
- Robotics
- Security, privacy, and abuse prevention
- Software engineering
- Software systems
- Speech processing
Google's portfolio of research projects is organized along four dimensions: fundamental research, new product innovation, product contribution, and infrastructure goals. However, the company also strives to give teams the freedom to emphasize specific types of work. The research conducted at Google has broadened significantly in recent years to include a number of open-ended, long-term projects driven more by pure science than by product needs. This reflects Google's increasingly diverse business offerings and the degree to which machine learning is transforming how the business operates. Fundamental advances in machine learning technology produce value across the company, even when developed without specific product goals.
Google AI is separated into a large number of teams focusing on a variety of areas:
- AI fundamentals and applications
- Algorithms and optimization
- Applied science
- Blueshift
- Brain
- Cloud AI
- Cloud networking
- Connectomics
- Global networking
- Graph mining
- Health
- India research lab
- Language
- Market algorithms
- Mural
- Network infrastructure
- Operations research
- Perception
- Responsible AI
- Robotics
- Security, privacy, and abuse
- Software engineering and programming languages
- System performance
Google utilizes AI technology in the following products:
- Search—Google uses AI technology to power its search products, including support for new languages and new input types (e.g., images or sounds). Multisearch allows users to search with images and text at the same time.
- Maps—The AI technology in Google Maps analyzes data to deliver real-time information on traffic conditions and automatically updates details such as business hours and speed limits.
- Pixel—Examples of Pixel phones utilizing AI technology include translating between twenty-one languages, interpreting conversations across six languages, and the Magic Eraser feature that removes distractions from photos.
- Photos—In 2015, Google developed AI technology to help users search for photos.
- YouTube—AI powers YouTube's automatic caption generation.
- Assistant—Google developed natural language processing (NLP) AI to power its assistant, understanding spoken commands and responding in a way that mimics human communication.
- Gmail—Gmail incorporates AI for autocomplete, spell-check, and spam filtering. Google states Gmail blocks almost 10 million spam emails every minute, preventing over 99.9 percent of spam, phishing, and malware from reaching users' inboxes.
- Ads—AI technology enhances Google Ads in a number of ways, including reformatting landscape ads into vertical or square formats and Performance Max, a way of automatically producing and running ad campaigns.
- Cloud—Google Cloud has built AI into many solutions, such as DocAI for document processing, Contact Center AI for call center operations, Vertex AI Vision for video and image analysis, and Translation Hub for translation into 100+ languages.