Patent attributes
Techniques are disclosed for providing an avatar personalized for a specific person based on known data from a relatively large population of individuals and a relatively small data sample of the specific person. Auto-encoder neural networks are used in a novel manner to capture latent-variable representations of facial models. Once such models are developed, a very limited data sample of a specific person may be used in combination with convolutional neural networks or statistical filters, and driven by audio/visual input during real-time operation, to generate a realistic avatar of the specific individual's face. In some embodiments, conditional variables may be encoded (e.g., gender, age, body-mass index, ethnicity, emotional state). In other embodiments, different portions of a face may be modeled separately and combined at run time (e.g., face, tongue, and lips). Models in accordance with this disclosure may be used to generate resolution-independent output.
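The disclosure describes the architecture at a high level and includes no source code. The following is a minimal sketch, assuming a PyTorch implementation, of how an auto-encoder with encoded conditional variables might capture a latent-variable representation of a facial model. All class names, dimensions, and parameters (e.g., ConditionalFaceAutoencoder, num_vertices, cond_dim) are hypothetical and not taken from the patent.

```python
# Illustrative sketch only: a conditional auto-encoder that maps a flattened
# 3D facial mesh, together with conditioning variables (e.g., gender, age,
# body-mass index, ethnicity, emotional state), to a compact latent code
# and back. Names and sizes are hypothetical, not from the disclosure.
import torch
import torch.nn as nn


class ConditionalFaceAutoencoder(nn.Module):
    def __init__(self, num_vertices=5000, cond_dim=5, latent_dim=128):
        super().__init__()
        mesh_dim = num_vertices * 3  # x, y, z per vertex
        # Encoder: mesh + condition vector -> latent code
        self.encoder = nn.Sequential(
            nn.Linear(mesh_dim + cond_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, 256),
            nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Decoder: latent code + condition vector -> reconstructed mesh
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 1024),
            nn.ReLU(),
            nn.Linear(1024, mesh_dim),
        )

    def forward(self, mesh, cond):
        z = self.encoder(torch.cat([mesh, cond], dim=-1))
        recon = self.decoder(torch.cat([z, cond], dim=-1))
        return recon, z


def train_step(model, optimizer, mesh, cond):
    """One reconstruction-loss update on a batch of facial meshes."""
    recon, _ = model(mesh, cond)
    loss = nn.functional.mse_loss(recon, mesh)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = ConditionalFaceAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Random stand-ins for a batch of 8 population meshes and their
    # condition vectors; real training would use the large population data.
    mesh = torch.randn(8, 5000 * 3)
    cond = torch.randn(8, 5)
    print(train_step(model, opt, mesh, cond))
```

In such a sketch, the latent code learned from the large population could later be fit to the small sample of a specific person, with the decoder driven at run time by features derived from audio/visual input; the separate per-region models described in the disclosure (face, tongue, lips) could each follow the same pattern and be combined when rendering.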