G11B27/22

Audio-driven viewport selection

An example device includes a memory device and a processor coupled to the memory device. The memory device is configured to store audio spatial metadata associated with a soundfield, along with video data. The processor is configured to identify one or more foreground audio objects of the soundfield using the audio spatial metadata stored to the memory device, and to select, based on the identified one or more foreground audio objects, one or more viewports associated with the video data. Display hardware coupled to the processor and the memory device is configured to output a portion of the video data associated with the one or more viewports selected by the processor.
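The selection logic in this abstract can be sketched roughly as follows. This is a hypothetical illustration, not the patented implementation: the energy-ratio threshold, the `AudioObject`-style dicts, and the fixed 90-degree viewport width are all assumptions.

```python
# Hedged sketch: pick foreground audio objects by energy, then map their
# azimuths to viewport indices. All thresholds and field names are assumed.

def identify_foreground(objects, energy_ratio=0.5):
    """Treat objects whose energy exceeds a fraction of the peak as foreground."""
    peak = max(o["energy"] for o in objects)
    return [o for o in objects if o["energy"] >= energy_ratio * peak]

def select_viewports(objects, viewport_width_deg=90):
    """Map each foreground object's azimuth to the index of the viewport covering it."""
    fg = identify_foreground(objects)
    return sorted({int(o["azimuth_deg"] % 360) // viewport_width_deg for o in fg})

objects = [
    {"azimuth_deg": 10,  "energy": 0.9},   # loud source near the front
    {"azimuth_deg": 200, "energy": 0.1},   # quiet background source
]
print(select_viewports(objects))  # [0] — only the loud frontal source drives selection
```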

LAYERED CODING OF AUDIO WITH DISCRETE OBJECTS
20220262373 · 2022-08-18

A first layer of data having a first set of Ambisonic audio components can be decoded where the first set of Ambisonic audio components is generated based on ambience and one or more object-based audio signals. A second layer of data is decoded having at least one of the one or more object-based audio signals. One of the object-based audio signals is subtracted from the first set of Ambisonic audio components. The resulting Ambisonic audio components are rendered to generate a first set of audio channels. The one or more object-based audio signals are spatially rendered to generate a second set of audio channels. Other aspects are described and claimed.
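The subtraction step described above can be illustrated numerically. This is a loose sketch under assumptions, not the patented decoder: it uses first-order Ambisonics (ACN channel order, horizontal-only encoding) and treats the first layer as ambience plus one encoded object.

```python
import math

# Illustrative sketch: encode a mono object into first-order Ambisonic
# components [W, Y, Z, X], mix with ambience to form the first layer, then
# subtract the object's contribution to recover the ambience for rendering.

def encode_foa(sample, azimuth_rad):
    """Encode one mono sample at a horizontal direction into [W, Y, Z, X]."""
    return [sample, sample * math.sin(azimuth_rad), 0.0, sample * math.cos(azimuth_rad)]

ambience = [0.2, -0.1, 0.0, 0.3]                 # ambient FOA components
obj_sample, obj_azimuth = 0.5, math.pi / 4       # discrete object from the second layer
obj_foa = encode_foa(obj_sample, obj_azimuth)

layer1 = [a + o for a, o in zip(ambience, obj_foa)]    # first layer: ambience + object
residual = [m - o for m, o in zip(layer1, obj_foa)]    # subtract the discrete object

print([round(r, 6) for r in residual])  # recovers the ambience components
```

The residual Ambisonic components would then be rendered to one set of audio channels, while the discrete object is spatially rendered separately, as the abstract describes.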

Layered coding of audio with discrete objects
11430451 · 2022-08-30

A first layer of data having a first set of Ambisonic audio components can be decoded where the first set of Ambisonic audio components is generated based on ambience and one or more object-based audio signals. A second layer of data is decoded having at least one of the one or more object-based audio signals. One of the object-based audio signals is subtracted from the first set of Ambisonic audio components. The resulting Ambisonic audio components are rendered to generate a first set of audio channels. The one or more object-based audio signals are spatially rendered to generate a second set of audio channels. Other aspects are described and claimed.

COMPUTER-IMPLEMENTED METHOD, COMPUTER PROGRAM AND APPARATUS FOR VIDEO PROCESSING AND FOR GENERATING A THUMBNAIL FROM A VIDEO SEQUENCE, AND VIDEO SURVEILLANCE SYSTEM COMPRISING SUCH AN APPARATUS

A computer-implemented method of video processing is provided. The method comprises obtaining a first video sequence of a target area comprising a first predetermined object or activity of interest and obtaining a second video sequence of the target area comprising a second predetermined object or activity of interest. The method further comprises determining whether a recording period of the first video sequence and a recording period of the second video sequence overlap for a time period; and in a case where the recording periods of the first and second video sequences overlap for a time period, defining at least one first video clip using frames of the first and/or second video sequence(s) from at least the time period of overlap.
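The overlap test at the heart of this method can be sketched briefly. This is a minimal illustration under assumptions: the interval convention (overlap requires a strictly positive duration) and the tuple-based return value are choices made here, not details from the claims.

```python
from datetime import datetime, timedelta

# Hedged sketch: determine whether two recording periods overlap, and if so,
# return the shared time period from which a first video clip could be defined.

def overlap(start_a, end_a, start_b, end_b):
    """Return the overlapping period of two recordings, or None if disjoint."""
    start = max(start_a, start_b)
    end = min(end_a, end_b)
    return (start, end) if start < end else None

t0 = datetime(2024, 1, 1, 12, 0)
seq1 = (t0, t0 + timedelta(minutes=10))                          # first sequence
seq2 = (t0 + timedelta(minutes=6), t0 + timedelta(minutes=15))   # second sequence

clip_period = overlap(*seq1, *seq2)
print(clip_period)  # 12:06–12:10 — frames from this span would form the clip
```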

Method and apparatus for extracting highlight of sporting event
11238288 · 2022-02-01

A method capable of automatically extracting a highlight from a video of a sporting event is provided. The method for highlight extraction may include: identifying a video including a sporting event, log information that sequentially records events occurring in the sporting event, and a keyword related to the video; tagging the video with game information related to the video; extracting at least one piece of log information corresponding to the keyword and determining at least one frame that corresponds to the log information extracted from the tagged video; and creating a highlight video by combining the at least one determined frame.
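The keyword-matching step can be sketched as follows. This is a hypothetical illustration: the log format (timestamped event strings), the fixed clip window, and the frame rate are assumptions, not details from the patent.

```python
# Hedged sketch: match a keyword against sequential event logs, then collect
# the frame indices around each matching event to assemble a highlight.

def extract_highlight(log, keyword, fps=30, window_s=5):
    """Return frame indices for log events whose description matches the keyword."""
    frames = []
    for t_seconds, event in log:
        if keyword.lower() in event.lower():
            start = int(t_seconds * fps)
            frames.extend(range(start, start + window_s * fps))
    return frames

log = [
    (12.0, "Goal by player 10"),
    (95.0, "Yellow card"),
    (300.0, "Goal by player 7"),
]
highlight = extract_highlight(log, "goal")
print(len(highlight))  # 300 frames: two 5-second goal clips at 30 fps
```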

Method and system for indexing video data using a data processing unit

A method for processing video data is performed by a data processing unit (DPU). The method includes obtaining, by the DPU of an edge device, video data; processing the video data to obtain video data chunks and indexing attributes; generating indexing metadata based on the video data chunks and the indexing attributes; processing the video data chunks and indexing attributes to generate contextual attributes; generating contextual metadata based on the contextual attributes and the video data chunks; associating the indexing metadata and the contextual metadata with the video data chunks; and storing the indexing metadata, contextual metadata, and the video data chunks in storage.
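The chunk-and-index flow can be sketched loosely. Everything here is an assumption made for illustration: the chunk size, the attribute names, and the list-based "storage" stand in for the patent's actual data model and DPU offload.

```python
import hashlib

# Hedged sketch: split video data into chunks, derive per-chunk indexing
# metadata, attach stand-in contextual metadata, and store all three together.

def index_video(video_bytes, chunk_size=4):
    """Chunk raw bytes and associate indexing and contextual metadata with each chunk."""
    storage = []
    for offset in range(0, len(video_bytes), chunk_size):
        chunk = video_bytes[offset:offset + chunk_size]
        indexing_md = {                        # indexing metadata per chunk
            "offset": offset,
            "digest": hashlib.sha256(chunk).hexdigest()[:8],
        }
        contextual_md = {"size": len(chunk)}   # stand-in for contextual attributes
        storage.append({"chunk": chunk, "index": indexing_md, "context": contextual_md})
    return storage

stored = index_video(b"frame-data-example")
print(len(stored))  # 18 bytes in 4-byte chunks -> 5 stored entries
```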