Determination of spatialized virtual acoustic scenes from legacy audiovisual media
Abstract:
An audio system generates virtual acoustic environments with three-dimensional (3-D) sound from legacy video with two-dimensional (2-D) sound. The system relocates sound sources within the video from 2-D into a 3-D geometry to create an immersive 3-D virtual scene of the video that can be viewed using a headset. Accordingly, an audio processing system obtains a video that includes flat mono or stereo audio generated by one or more sources in the video. The system isolates the audio from each source by segmenting the individual audio sources. Reverberation is removed from each source's audio to obtain that source's direct sound component. The direct sound component is then re-spatialized to the 3-D local area of the video, generating the 3-D audio based on acoustic characteristics obtained for the local area in the video.
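For illustration only, the following is a minimal sketch of the processing chain the abstract describes (source isolation, dereverberation, re-spatialization), assuming a mono input, known per-source time-frequency masks, and known 3-D source positions. All function names, parameters, and the simple mask-based separation, spectral-subtraction dereverberation, and ITD/level-panning spatialization are hypothetical stand-ins; the patent does not disclose specific algorithms.

```python
import numpy as np

SAMPLE_RATE = 48_000
SPEED_OF_SOUND = 343.0  # m/s

def isolate_source(mix_stft: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Segment one source from the mixture with a time-frequency mask
    (a stand-in for a learned source-separation model)."""
    return mix_stft * mask

def remove_reverberation(source_stft: np.ndarray, wet_estimate: np.ndarray) -> np.ndarray:
    """Crude spectral subtraction of an estimated reverberant component to
    approximate the direct-sound component."""
    direct_mag = np.maximum(np.abs(source_stft) - np.abs(wet_estimate), 0.0)
    return direct_mag * np.exp(1j * np.angle(source_stft))

def spatialize(direct_signal: np.ndarray, position: np.ndarray,
               listener: np.ndarray) -> np.ndarray:
    """Re-spatialize a time-domain direct signal to a 3-D position using
    simple interaural time and level differences (a stand-in for full
    HRTF-based binaural rendering driven by room acoustic characteristics)."""
    offset = position - listener
    distance = np.linalg.norm(offset)
    azimuth = np.arctan2(offset[1], offset[0])
    # Interaural time difference approximated from head width (~0.09 m).
    itd_samples = int(abs(np.sin(azimuth)) * 0.09 / SPEED_OF_SOUND * SAMPLE_RATE)
    gain = 1.0 / max(distance, 1.0)       # simple distance attenuation
    pan = 0.5 * (1.0 + np.sin(azimuth))   # 0 = hard left, 1 = hard right
    left = np.pad(direct_signal, (itd_samples if pan > 0.5 else 0, 0))
    right = np.pad(direct_signal, (itd_samples if pan <= 0.5 else 0, 0))
    out = np.zeros((max(len(left), len(right)), 2))
    out[:len(left), 0] = left * gain * (1.0 - pan)
    out[:len(right), 1] = right * gain * pan
    return out

# Toy usage: one source, an all-pass mask, and a guessed 3-D position.
mix = np.random.randn(SAMPLE_RATE)                   # 1 s of placeholder audio
stft = np.fft.rfft(mix.reshape(100, -1), axis=1)     # crude framed spectrum
src = isolate_source(stft, np.ones_like(stft))
direct = remove_reverberation(src, 0.1 * src)        # assumed 10% wet estimate
signal = np.fft.irfft(direct, axis=1).reshape(-1)
binaural = spatialize(signal, np.array([1.0, 0.5, 0.0]), np.zeros(3))
```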