Hugging Face is a company that develops social artificial intelligence (AI) chatbot applications and natural language processing (NLP) technologies to facilitate AI-powered communication. The company's platform can analyze tone and word usage to infer what a chat is about and enable the system to respond based on emotions. Hugging Face's platform allows users to build, train, and deploy NLP models, with the intent of making these models more accessible to users.
Hugging Face was established in 2016 by Clement Delangue, Julien Chaumond, and Thomas Wolf. The company is based in Brooklyn, New York. An estimated 5,000 organizations use the Hugging Face platform to integrate artificial intelligence into their products and workflows. These organizations have included Microsoft, Bloomberg, Typeform, the Allen Institute for AI, Meta AI, Graphcore, Intel, and Grammarly.
Hugging Face's platform allows users to build, train, and deploy machine learning (ML) and artificial intelligence (AI) models using open-source tooling. It includes the following components:
- Hugging Face Hub, where users can create, discover, and collaborate on ML projects
- Tasks, where users can post problems and other users can develop ML and AI applications to solve them, spanning audio, vision, and language
- Transformers, a natural language processing library that is open to all ML models and includes support for other libraries, such as Flair, Asteroid, ESPnet, and Pyannote (see the sketch after this list)
- The Inference API, which allows users to serve their models directly from Hugging Face's infrastructure and run NLP models with just a few lines of code
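As a rough illustration of how the Transformers library and the Hub fit together, the short sketch below loads a pretrained sentiment-analysis model from the Hub and runs it locally. The model identifier shown is a publicly hosted example; any compatible model ID from the Hub could be substituted.

```python
# A minimal sketch of using the open-source Transformers library to pull a
# pretrained NLP model from the Hugging Face Hub and run it locally.
# The model identifier below is a publicly hosted example model; any
# compatible model ID from the Hub could be used instead.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Hugging Face makes it easy to share NLP models."))
# Expected shape of the output: [{'label': 'POSITIVE', 'score': 0.99...}]
```

The same pattern extends to audio, vision, and other tasks by changing the task name and model.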
The company's main platform also includes its chatbot applications, which allow users to interact with the artificial intelligence the company developed. To accomplish this, Hugging Face developed its own natural language processing (NLP) model called Hierarchical Multi-Task Learning (HMTL) and managed a library of pre-trained NLP models under PyTorch-Transformers. As of September 2019, the chatbot applications are only available on iOS. These applications are Chatty, Talking Dog, Talking Egg, and Boloss, and they are intended to be digital companions that entertain users.
HuggingChat is an AI-powered, open-source chatbot with 30 billion parameters, based on the latest LLaMA model developed by the OpenAssistant project. Considered a ChatGPT clone, HuggingChat is designed to be quick and straightforward and to offer an open-source chatbot model available to anyone. Hugging Face's goal with HuggingChat is to make the chatbot small and efficient enough to run on consumer hardware. Data privacy is strict on HuggingChat: messages are stored only to display them to the user and are not shared for research or training purposes, and users are not authenticated or identified using cookies.
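As a hedged sketch of what interacting with an OpenAssistant-style model looks like programmatically (rather than through the HuggingChat web interface), the example below uses the huggingface_hub InferenceClient; the model ID, prompt format, and token are illustrative assumptions, since the exact model behind HuggingChat has changed over time.

```python
# A hedged sketch of querying an OpenAssistant-style chat model through
# Hugging Face's hosted inference service. The model ID, prompt format,
# and token below are illustrative placeholders, not the exact
# configuration used by the HuggingChat web app.
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="OpenAssistant/oasst-sft-6-llama-30b",  # illustrative model ID
    token="hf_...",  # a user access token
)

reply = client.text_generation(
    "<|prompter|>What is Hugging Face?<|endoftext|><|assistant|>",
    max_new_tokens=200,
)
print(reply)
```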
Hugging Face's Expert Acceleration Program helps users build ML solutions with guidance from ML experts. The program includes dedicated support from ML engineers to guide the development and implementation of ML models; assistance from these experts from research to production, answering questions and finding solutions as models are developed and deployed; and flexible communication channels that make it easy to seek expert guidance whenever it is needed.
Similar to the regular hub on Hugging Face's platform, the Private Hub allows users to experiment, collaborate, train, and develop ML models with a private group of collaborators rather than a public group.
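The Private Hub itself is a managed offering, but the basic private-collaboration workflow can be sketched with the public huggingface_hub client, as below; the organization, repository name, and file path are hypothetical.

```python
# A minimal sketch of private collaboration using the huggingface_hub client.
# The organization name, repository name, and file path are hypothetical.
from huggingface_hub import create_repo, upload_file

# Create a repository visible only to its owners and collaborators.
create_repo("my-org/internal-sentiment-model", private=True, exist_ok=True)

# Share an artifact (e.g., trained weights) with the private group.
upload_file(
    path_or_fileobj="model.safetensors",
    path_in_repo="model.safetensors",
    repo_id="my-org/internal-sentiment-model",
)
```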
Hugging Face's Inference Endpoints service allows users to deploy transformers, diffusers, or any model on a dedicated, managed infrastructure. It allows users to deploy models using production-ready APIs without needing to deal with infrastructure or MLOps while using a fully managed production solution with a pay-as-you-go structure. Its secure offline endpoints are only available through a direct connection to a user's virtual private cloud (VPC).
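As a rough sketch, a deployed Inference Endpoint exposes an HTTPS URL that can be called like any production API; the endpoint URL and token below are placeholders for the values shown in a user's own deployment.

```python
# A hedged sketch of calling a deployed Inference Endpoint over HTTPS.
# The endpoint URL and access token are placeholders; each deployment
# receives its own URL in the Inference Endpoints dashboard.
import requests

ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"
HEADERS = {"Authorization": "Bearer hf_..."}  # user access token

response = requests.post(
    ENDPOINT_URL,
    headers=HEADERS,
    json={"inputs": "Inference Endpoints handle the serving infrastructure."},
)
response.raise_for_status()
print(response.json())
```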
Hugging Face's AutoTrain solution automatically trains, evaluates, and deploys ML models without requiring users to write code. Users define the task the model needs to accomplish and upload the necessary data, and AutoTrain finds the best models for that data and task and trains them automatically.
Hugging Face and ServiceNow partnered to develop StarCoder, an open-source language model for code, offering a free AI code-generation alternative to GitHub's Copilot, DeepMind's AlphaCode, and Amazon's CodeWhisperer. StarCoder was trained on more than 1 trillion tokens of source code and text from GitHub repositories spanning over eighty programming languages; the resulting model has 15.5 billion parameters and a context window of 8,192 tokens.
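As a hedged sketch of using StarCoder for code completion with the Transformers library: bigcode/starcoder is the checkpoint published by the BigCode project, and loading the full 15.5-billion-parameter model requires substantial GPU memory and acceptance of the model's license on the Hub.

```python
# A minimal sketch of code completion with StarCoder via Transformers.
# "bigcode/starcoder" is the checkpoint published by the BigCode project;
# loading the full 15.5B-parameter model needs a large GPU and acceptance
# of the model license on the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```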