Patent 9973874 was granted by the United States Patent and Trademark Office and assigned to DTS in May 2018.
The methods and apparatus described herein represent full 3D audio mixes (azimuth, elevation, and depth) as "sound scenes" whose decoding process facilitates head tracking. A sound scene can be rendered for the listener's orientation (e.g., yaw, pitch, roll) and 3D position (e.g., x, y, z), and re-rendered when either changes. Because an audio object can be rendered in both the near field and the far field, depth can be fully rendered not just for individual objects but for any spatial audio mix decoded with active steering/panning, such as Ambisonics or matrix encoding. This enables full translational head tracking (e.g., user movement) beyond simple rotation in the horizontal plane, i.e., six-degrees-of-freedom (6-DOF) tracking and rendering.
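The core geometric step behind 6-DOF rendering can be illustrated with a short sketch: translate a source position into the listener's head-relative frame (handling x/y/z movement), apply the inverse of the listener's yaw/pitch/roll rotation (handling head orientation), then convert to azimuth, elevation, and distance, where distance would select near-field vs. far-field rendering. This is a minimal illustration of the general idea, not the patented method; the function names and the Z-Y-X Euler-angle convention are assumptions.

```python
import math

def rotation_matrix(yaw, pitch, roll):
    """Rotation matrix for listener orientation, R = Rz(yaw) @ Ry(pitch) @ Rx(roll).

    Angles in radians; the axis convention is an assumption for this sketch.
    """
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

def relative_source_position(source_xyz, listener_xyz, yaw, pitch, roll):
    """Express a source position in the listener's head-relative frame.

    Translation accounts for 3D listener movement; the inverse (transposed)
    rotation accounts for head orientation -- together, 6-DOF tracking.
    """
    # Translate so the listener sits at the origin.
    d = [s - l for s, l in zip(source_xyz, listener_xyz)]
    R = rotation_matrix(yaw, pitch, roll)
    # Inverse rotation of an orthonormal matrix is its transpose.
    return [sum(R[r][c] * d[r] for r in range(3)) for c in range(3)]

def to_azimuth_elevation_distance(xyz):
    """Convert head-relative Cartesian coordinates to spherical ones.

    Distance would drive near-field vs. far-field rendering; azimuth and
    elevation would drive panning / HRTF selection.
    """
    x, y, z = xyz
    dist = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)
    elevation = math.asin(z / dist) if dist > 0 else 0.0
    return azimuth, elevation, dist

# Example: the listener steps 1 m toward a source 2 m ahead on the x-axis;
# the source's direction is unchanged but its distance halves (near-field cue).
rel = relative_source_position((2.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.0, 0.0, 0.0)
az, el, dist = to_azimuth_elevation_distance(rel)
```

A renderer built on this would re-run the transform each time the head tracker reports a new pose, then choose near-field or far-field filters based on `dist`.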