SBIR/STTR Award attributes
We have created a new human interface based on (A) the micro- and dynamic motions of the body (head, eyes, hands, etc.) and (B) a new way for objects and content to respond to that motion predictively: by analyzing the path of the human motion, we can predict user intent and cause objects to respond accordingly. This not only lets users interact with objects more easily and at higher speeds than any other system; it also lets us measure how much confidence the user has while interacting with menus, decision trees, and assessments (testing). This interaction style also hyper-activates the parts of the brain responsible for cognition and memory, accelerating learning and retention.

We then apply this to 360° video, adding an interactive software layer built on this interaction technology that enables interaction with objects in the video itself. Interaction happens through gaze, touch, mouse cursor, or motion of the eyes or body, and the platform works the same way across all devices. Because we measure the user's confidence and test decisions continuously throughout the process, we know not only the final answer but also the second and third choices, and the percentage weight of each.
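The core idea of predicting intent from the motion path, and of ranking every candidate choice with a percentage rather than recording only the final selection, can be illustrated with a minimal sketch. Everything below (the function name, the alignment-and-distance scoring rule, the sample coordinates) is an illustrative assumption, not the platform's actual algorithm:

```python
# Hypothetical sketch: extrapolate the direction of recent pointer/gaze
# motion and score each on-screen target by how well it aligns with that
# path. Normalizing the scores yields a ranked list of likely intents
# (1st, 2nd, 3rd choice) with a percentage for each.
import math

def rank_targets(path, targets):
    """path: recent (x, y) motion samples; targets: {name: (x, y)}.
    Returns [(name, percent)] sorted by estimated likelihood."""
    (x0, y0), (x1, y1) = path[-2], path[-1]      # last motion segment
    dx, dy = x1 - x0, y1 - y0
    speed = math.hypot(dx, dy) or 1.0
    scores = {}
    for name, (tx, ty) in targets.items():
        vx, vy = tx - x1, ty - y1                # vector from cursor to target
        dist = math.hypot(vx, vy) or 1.0
        # Cosine of the angle between motion direction and target direction:
        # 1.0 = moving straight at it, negative = moving away from it.
        align = (dx * vx + dy * vy) / (speed * dist)
        # Favor targets that are both well aligned and nearby.
        scores[name] = max(align, 0.0) / (1.0 + dist)
    total = sum(scores.values()) or 1.0
    return sorted(((n, 100.0 * s / total) for n, s in scores.items()),
                  key=lambda kv: kv[1], reverse=True)

# Cursor moving up and to the right, toward target "A":
ranked = rank_targets([(0, 0), (2, 1), (4, 2)],
                      {"A": (10, 5), "B": (10, -5), "C": (-5, 0)})
```

In this toy run, "A" lies directly on the extrapolated path and receives the highest percentage, "B" a smaller share, and "C" (behind the motion) zero, mirroring the first/second/third-choice breakdown described above.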