Google AI is the research and development division of Google working on artificial intelligence and its applications.
Google AI is the research and development division of Google working on artificial intelligence and its applications. Founded in 2017, Google AI was announced at Google’s annual I/O developer conference, with the company reorienting itself around AI. Since the announcement, new AI initiatives from Google are found on the Google.ai website. AI research at Google predates the formation of Google AI, particularly through Google Brain (started in 2011 and now part of Google AI) and DeepMind (acquired in 2014 and based in London). In 2018, ahead of the I/O developer conference in May, the company rebranded the whole of the Google Research division as Google AI. The change came shortly after Jeff Dean was appointed head of AI at Google.
Google has a long history of research in the fields of AI and machine learning, predating the formation of the Google AI division. This includes transformer neural networks, which Google Research invented and open-sourced in early 2017 and which now form the basis of many generative AI systems. Google AI works to advance the field of AI, apply AI to Google products, and develop tools to ensure AI becomes more accessible. Google AI's research is published in a number of venues, and many of its tools are open source. Additionally, the company aims to build AI systems that are socially beneficial, running a program called AI for Social Good, where its expertise is applied to help solve humanitarian and environmental challenges.
Projects and models related to Google AI include the Transformer architecture, BERT, LaMDA, and Bard, described below.
Google has stated it uses AI to improve a number of its products, including Search, Maps, Photos, Pixel, YouTube, Gmail, Ads, and Cloud.
In February 2023, Google announced Bard, an experimental conversational AI service powered by LaMDA. The service was initially open only to trusted testers ahead of a wider release to the general public. Bard uses data from the internet to produce responses to user queries. Many saw the release of Bard as a response to ChatGPT, the popular conversational chatbot from OpenAI and Microsoft. During the announcement of Bard, the model made a factual error in its first demo, stating that the James Webb Space Telescope took the first pictures of a planet outside of the solar system. Many astronomers pointed out that the first picture of an exoplanet was, in fact, taken by the Very Large Telescope (VLT) in 2004. After the mistake, shares of Alphabet (Google's parent company) dropped as much as 9%, with the company losing $100 billion in market value.
Google acknowledges that AI has the potential to produce significant problems. Therefore, the company has defined a series of principles to help ensure they develop AI systems responsibly for the betterment of people and society. These principles guide Google AI applications and how they are shared. Google has updated its AI principles every year since 2019, with the latest version published in 2022 containing the following principles:
1. Be socially beneficial: With the likely benefit to people and society substantially exceeding the foreseeable risks and downsides
2. Avoid creating or reinforcing unfair bias: Avoiding unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief
3. Be built and tested for safety: Designing to be appropriately cautious and in accordance with best practices in AI safety research, including testing in constrained environments and monitoring as appropriate
4. Be accountable to people: Providing appropriate opportunities for feedback, relevant explanations and appeal, and subject to appropriate human direction and control
5. Incorporate privacy design principles: Encouraging architectures with privacy safeguards, and providing appropriate transparency and control over the use of data
6. Uphold high standards of scientific excellence: Rooting technology innovation in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration
7. Be made available for uses that accord with these principles: Working to limit potentially harmful or abusive applications
Additionally, Google pledges not to design or release AI for the following application areas:
In October 2018, researchers at Google published a paper introducing a new AI language model called Bidirectional Encoder Representations from Transformers, or BERT. Building on work in pre-training contextual representations, BERT was the first deeply bidirectional, unsupervised language representation, pre-trained using only a plain text corpus. Pre-trained BERT models can then be fine-tuned for downstream natural language processing tasks, allowing users to build their own question-answering systems. Shortly after its announcement, Google open-sourced BERT.
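For illustration, the open-sourced BERT checkpoints can be loaded with the Hugging Face transformers library (an assumption for this sketch; it is not Google's internal tooling) and used through a question-answering pipeline built on a publicly released BERT model fine-tuned on the SQuAD dataset:

```python
# Minimal sketch: extractive question answering with a public BERT checkpoint
# fine-tuned on SQuAD. Requires the `transformers` package and model downloads;
# the model name below is a public Hugging Face identifier, not a Google-internal system.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

result = qa(
    question="When was the transformer architecture introduced?",
    context=(
        "Google researchers introduced the transformer architecture in the "
        "2017 paper 'Attention Is All You Need'."
    ),
)
print(result["answer"], round(result["score"], 3))
```

Fine-tuning a base checkpoint such as bert-base-uncased on a task-specific dataset follows the same load-then-train pattern.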
Google applied BERT models to its ranking and featured snippets in Search, helping Google Search better understand queries, particularly longer and more conversational queries, or searches where prepositions such as "for" and "to" significantly change the meaning. Using BERT helps Google understand the context of words in a query. BERT algorithms were thoroughly tested to ensure they improved the user experience. First implemented for English searches, BERT can transfer its learnings from one language to another, applying what it finds beneficial. In 2019, Google stated BERT models affect roughly 1 in 10 US searches.
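As a rough illustration of what "understanding the context of words" means in practice, the sketch below (assuming the open-source transformers and torch packages, not Google Search's production systems) shows that BERT assigns the same surface word different vectors depending on the words around it:

```python
# Illustrative sketch of BERT's contextual word representations.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def word_vector(sentence: str, word: str) -> torch.Tensor:
    """Return the contextual embedding of `word`'s token in `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]            # (seq_len, 768)
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

river = word_vector("the boat drifted toward the bank of the river", "bank")
money = word_vector("she deposited the check at the bank downtown", "bank")
print(torch.cosine_similarity(river, money, dim=0).item())       # noticeably below 1.0
```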
Following the release of OpenAI's ChatGPT, Google announced its own conversational chatbot called Bard. In a blog post published on February 6, 2023, Google CEO Sundar Pichai introduced Bard, an experimental conversational AI service powered by LaMDA. Bard is a large language model that can combine and summarize information from the internet to produce responses to user queries. The first version of Bard utilizes a lightweight LaMDA model that reduces the computing power required, enabling it to scale to more users. Google performed internal testing to check for quality and safety issues, and the service was initially made available only to trusted testers to gather external feedback ahead of a wider release.
The announcement of Bard included a number of examples of the service answering queries. However, in one example, Bard answered a question on new discoveries from the James Webb Space Telescope incorrectly, stating that the telescope took the first pictures of a planet outside of the solar system. The first pictures of an exoplanet were actually taken in 2004 by the Very Large Telescope (VLT). After the mistake was discovered, shares of Alphabet (Google's parent company) dropped by as much as 9% on February 8, a $100 billion drop in market value.
Google AI works in a large number of areas, with researchers publishing regularly in academic journals, open-sourcing their projects, and applying research to Google products. Core research areas include:
Google's portfolio of research projects is driven by four dimensions: fundamental research, new product innovation, product contribution, and infrastructure goals. However, the company also strives to provide teams with the freedom to emphasize specific types of work. The research conducted at Google has broadened significantly in recent years, including a number of open-ended, long-term research projects that are driven more by pure science than product needs. This is a result of Google's increasingly diverse business offerings and of machine learning transforming much of how the business operates. Fundamental advances in machine learning technology will produce value across the company, even when developed without specific product goals.
Google AI is separated into a large number of teams focusing on different areas. Some of these include:
Google's AI research predates the formation of the Google AI division. In 2023, the company stated that it had been developing AI for over two decades.
Google Brain started in 2011 as an exploratory lab founded by Jeff Dean, Greg Corrado, Andrew Ng, and other engineers now part of Google Research. The team works to rethink approaches to machine learning and has had a number of breakthroughs, including the following:
Seeing the potential of the field, Google chose to reorient itself around AI in 2017, founding Google AI. At Google's annual I/O developer conference, the company announced a number of projects focused on AI and machine learning, as well as the movement of all its future AI initiatives under the Google AI umbrella.
On June 12, 2017, Google published a landmark paper describing the invention of a new neural network architecture called a transformer. The paper, titled "Attention Is All You Need," describes the introduction of an attention mechanism and the removal of recurrent and convolutional neural networks in an encoder-decoder configuration. Google calls this new network architecture the "Transformer" and demonstrates its performance on two machine translation tasks.
This was followed by a blog post on August 31, 2017, titled "Transformer: A Novel Neural Network Architecture for Language Understanding." The post describes training the new neural network architecture to read words, develop an understanding of how those words relate to one another, and predict what words should come next.
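The core of the architecture is scaled dot-product attention, which the paper defines as Attention(Q, K, V) = softmax(QK^T / sqrt(d_k))V. The NumPy sketch below is purely illustrative, with toy shapes and random values rather than learned weights or multi-head projections:

```python
# Illustrative sketch of the scaled dot-product attention described in
# "Attention Is All You Need"; values and shapes are toy examples.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of each query to each key
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # attention weights sum to 1 per query
    return weights @ V                              # weighted sum of value vectors

# Toy example: a sequence of 4 positions with 8-dimensional queries, keys, and values.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```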
In April 2018, Jeff Dean became the head of Google AI, taking over from John Giannandrea. Dean first joined Google in 1999 and is credited with helping to create some of the fundamental technology behind the company's growth in the early 2000s. Dean also has significant experience in AI and was a co-founder of Google Brain, a team he continued to lead as head of Google AI. He was involved with the development of TensorFlow, a machine learning framework developed by Google. Making Dean head of AI was seen as part of a wider reshuffle at Google with the aim of pushing AI into more of the company's products. Previously, AI product development was grouped with search and overseen by senior vice president of engineering John Giannandrea; from April 2018, the two fields were split, with Dean taking over AI.
In May 2018, to reflect its commitment to AI research, Google announced it was expanding the Google AI website and combining it with its existing Google Research channels. From May 2018 onwards, Google Research became part of Google AI.
On May 18, 2021, at Google I/O 2021, the company announced details of its conversational language model LaMDA (Language Model for Dialogue Applications). Like other language models, such as BERT and GPT-3, LaMDA is built on a transformer neural network architecture, but it was trained on dialogue rather than text alone. This allows it to pick up on the nuances that distinguish open-ended conversation from other forms of language. LaMDA follows up on Google research from 2020 that used transformer-based language models to talk about any topic.
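LaMDA's training data and pipeline are not public; purely as an illustration of what "trained on dialogue rather than text alone" means, the hypothetical sketch below formats multi-turn conversations into flat training strings of alternating speaker turns, the kind of sequence a dialogue model learns to continue:

```python
# Hypothetical illustration only; not LaMDA's actual data format or pipeline.
dialogue = [
    ("user", "What is Google AI?"),
    ("model", "It is Google's research and development division for artificial intelligence."),
    ("user", "When was it announced?"),
]

def to_training_text(turns):
    """Flatten speaker turns into one training string, ending where the model should respond."""
    return "\n".join(f"{speaker}: {text}" for speaker, text in turns) + "\nmodel:"

print(to_training_text(dialogue))
```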
Google announced LaMDA 2 during the first of two keynotes at Google I/O 2022. The updated model can break down complicated topics into simplified explanations and steps and also generate suggestions in response to questions.
LaMDA received significant attention in June 2022, when Google engineer Blake Lemoine claimed the model had become sentient. Lemoine had signed up to test whether the model used discriminatory or hate speech. During these tests, Lemoine noticed the model talking about its rights and personhood while discussing religion. Along with a collaborator, Lemoine presented his evidence to Google that LaMDA was sentient. His claims were dismissed by Google vice president Blaise Aguera y Arcas and the company's head of Responsible Innovation, Jen Gennai. After presenting his evidence, Lemoine was placed on administrative leave. In response, Google spokesperson Brian Gabriel said:
Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)... Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality
In July 2022, Google fired Blake Lemoine for violating data security policies.
Google utilizes AI technology in the following products:
Google uses AI technology to power its search products, including adding new languages and new inputs (e.g., images or sounds) or multiple inputs at a time. Multisearch allows users to search with images and text at the same time.
The AI technology in Google Maps analyzes data to deliver real-time information on traffic conditions and automatically make updates such as business hours and speed limits.
Examples of Pixel phones utilizing AI technology include translating between 21 languages, holding conversations in 6 languages, and enabling the Magic Eraser technology that can remove distractions from photos.
In 2015, Google developed AI technology to help users search for photos.
AI powers YouTube's automatic caption generation.
Google developed natural language processing (NLP) AI to power its assistant, understanding spoken commands and responding in a way that mimics human communication.
Gmail incorporates AI for autocomplete, spell check, and spam filtering. Google states Gmail blocks almost 10 million spam emails every minute, preventing over 99.9% of spam, phishing, and malware from reaching users' inboxes.
AI technology enhances Google Ads in a number of ways including reformatting landscape ads into vertical or square formats and Performance Max, a way of automatically producing and running ad campaigns.
Google Cloud built AI into many solutions such as DocAI for document processing, Contact Center AI for call center operations, Vertex Vision AI for video and image analysis, and Translation Hub for translation in 100+ languages.
June 11, 2022
Google engineer Blake Lemoine publicly claims that the LaMDA model has become sentient.
May 11, 2022
Google announces LaMDA 2 at Google I/O 2022, claiming the second-generation model can break down complex topics into straightforward explanations and generate suggestions in response to questions.
May 18, 2021
Google announces its conversational language model LaMDA at Google I/O 2021.
May 7, 2018
Moving forward, Google Research channels will be renamed Google AI.
April 3, 2018
Jeff Dean is appointed head of Google AI, taking over from John Giannandrea.
August 31, 2017
The blog post describes training the transformer architecture to read many words at a time in order to understand how they relate to one another and then predict what words should come next.
June 12, 2017
The paper titled "Attention Is All You Need," describes the introduction of an attention mechanism and the removal of recurrent and convolutional neural networks in an encoder-decoder configuration.
May 2017
Google AI is announced at Google's annual I/O developer conference; the new division will run future AI projects.
January 27, 2014
In January 2014, Google acquired London-based AI company DeepMind for over $500 million. Founded by Demis Hassabis, Shane Legg, and Mustafa Suleyman, DeepMind specializes in machine learning, advanced algorithms, and systems neuroscience, developing technologies for e-commerce and games.
2011
Google Brain is founded as an exploratory lab; its founders include Jeff Dean, Greg Corrado, and Andrew Ng.