H04N7/52

Systems and methods for categorizing motion events

The various embodiments described herein include methods, devices, and systems for categorizing motion events. In one aspect, a method is performed at a camera device. The method includes: (1) capturing a plurality of video frames via an image sensor of the camera device, the plurality of video frames corresponding to a scene in a field of view of the camera device; (2) sending the video frames to a remote server system in real-time; (3) while sending the video frames to the remote server system in real-time: (a) determining that motion has occurred within the scene; (b) in response to determining that motion has occurred within the scene, characterizing the motion as a motion event; and (c) generating motion event metadata for the motion event; and (4) sending the generated motion event metadata to the remote server system concurrently with the video frames.
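The flow in the abstract (stream frames, detect motion in-line, emit metadata alongside the stream) can be sketched roughly as follows. This is an illustrative toy, not the patented implementation: `detect_motion`, `process_stream`, the mean-absolute-difference check, and the metadata fields are all hypothetical stand-ins.

```python
def detect_motion(prev_frame, cur_frame, threshold=10.0):
    # Toy motion check: mean absolute difference between two grayscale
    # frames represented as flat lists of pixel intensities.
    diff = sum(abs(a - b) for a, b in zip(prev_frame, cur_frame)) / len(cur_frame)
    return diff > threshold

def process_stream(frames, threshold=10.0):
    # While "sending" each frame (the real-time upload is stood in for by
    # appending to sent_frames), characterize detected motion as a motion
    # event and generate metadata to send concurrently with the frames.
    sent_frames, metadata = [], []
    prev = None
    for i, frame in enumerate(frames):
        sent_frames.append(frame)
        if prev is not None and detect_motion(prev, frame, threshold):
            metadata.append({"event": "motion", "frame_index": i})
        prev = frame
    return sent_frames, metadata
```

In this sketch the metadata list is produced during the same pass that ships the frames, mirroring step (3) happening "while sending".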

METHOD OF SYNCHRONIZING DATA AND ELECTRONIC DEVICE AND SYSTEM FOR IMPLEMENTING THE SAME
20170264792 · 2017-09-14

A method of synchronizing data and an electronic device and system for implementing the same are provided. The head mounted device includes: a housing including a surface; a connection device connected to the housing to detachably connect the housing to a portion of a user's head; a display exposed through a portion of the surface; a motion sensor located at the housing or connected to the housing configured to provide a first signal representing a movement of the housing; a communication circuit; a processor electrically connected to the display and the communication circuit; and a memory electrically connected to the processor and configured to store instructions, wherein the processor is configured to execute the instructions to receive the first signal from the motion sensor, to transmit first information based on the first signal using the communication circuit, to transmit second information including a time related to the first signal using the communication circuit, to receive multimedia data and third information related to the multimedia data corresponding to a time using the communication circuit, to discard a portion of the multimedia data based on the second information and the third information, and to display an image on the display using the multimedia data whose portion is discarded and to output audio using an audio output module.
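The core synchronization idea, discarding the portion of the multimedia data whose timestamps fall behind the motion-related time, can be illustrated with a minimal sketch. The function name, the `(timestamp, payload)` chunk representation, and the comparison rule are assumptions for illustration only.

```python
def discard_stale(media_chunks, motion_time):
    # media_chunks: list of (timestamp, payload) tuples received over the
    # communication circuit. Drop chunks older than the device's motion
    # timestamp so the displayed image stays in sync with head movement.
    return [(t, p) for (t, p) in media_chunks if t >= motion_time]
```

Only the surviving chunks would then be passed on for display and audio output.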

METHOD AND APPARATUS FOR DECODING AN ENHANCED VIDEO STREAM
20210409782 · 2021-12-30

A method of decoding an enhanced video stream composed of base layer video access units and enhancement layer video access units, each access unit comprising a plurality of syntax structures, includes passing the syntax structures of the base layer access units to a base layer buffer, passing syntax structures of the enhancement layer access units to an enhancement layer buffer, outputting the syntax structures passed to the base layer buffer in a predetermined sequence, outputting the syntax structures passed to the enhancement layer buffer in a predetermined sequence, and recombining the sequences of syntax structures output by the base layer buffer and the enhancement layer buffer respectively to form a complete enhanced access unit, composed of base layer syntax structures and enhancement layer syntax structures in a predetermined sequence.
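The buffering-and-recombination step can be sketched as two FIFO buffers drained according to a predetermined interleaving order. This is a simplified illustration under assumed names (`recombine`, the `"base"`/`"enh"` tags, and string placeholders for syntax structures), not the claimed decoder.

```python
from collections import deque

def recombine(base_units, enh_units, order):
    # base_units / enh_units: syntax structures already passed to the base
    # layer buffer and enhancement layer buffer, in their output sequences.
    # order: predetermined sequence of "base" / "enh" tags describing how a
    # complete enhanced access unit interleaves the two layers.
    base_q, enh_q = deque(base_units), deque(enh_units)
    out = []
    for layer in order:
        out.append(base_q.popleft() if layer == "base" else enh_q.popleft())
    return out
```

Each call consumes the two buffers in lockstep with the predetermined order to form one complete enhanced access unit.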

Image processing device and image processing method
11196995 · 2021-12-07

An image processing device includes a buffer for receiving encoded image data, and a processor to execute instructions that cause the processor to: decode the encoded image data from the buffer to generate quantized transform coefficient data; inversely quantize the quantized transform coefficient data using a 16×16 quantization matrix to generate predicted error data, the 16×16 quantization matrix includes a duplicate of at least one of two elements adjacent to each other from an 8×8 quantization matrix; and combine the predicted error data with a predicted image to generate decoded image data.

REAL-TIME SOUND FIELD SYNTHESIS BY MODIFYING PRODUCED AUDIO STREAMS
20230276189 · 2023-08-31

A client device is disclosed that receives, from a server, a live video stream and a production quality live ambisonic audio stream generated during performance of a live event at a venue. The live ambisonic audio stream is generated from audio channels captured by audio capture devices disposed at the venue. The audio channels captured at the event, and modified by a producer, can be compared to audio captured by an ambisonic microphone positioned within the event space to determine the phase and relative amplitude of those channels as received by a particular ambisonic microphone channel. In this manner, raw and/or produced audio channels captured at the event can be shifted and mixed together to generate a production quality ambisonic stream.
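Determining the phase and relative amplitude of a produced channel as received by an ambisonic microphone channel is commonly done with a lagged cross-correlation; the brute-force sketch below illustrates that general technique. Function name, signal representation, and the correlation-based gain estimate are all assumptions, not details from the disclosure.

```python
def estimate_lag_gain(produced, mic):
    # Find the integer sample lag that maximizes the cross-correlation
    # between a produced channel and an ambisonic microphone channel,
    # then estimate the relative amplitude at that lag.
    n = len(produced)
    best_lag, best = 0, float("-inf")
    for lag in range(-n + 1, n):
        c = sum(produced[i] * mic[i + lag]
                for i in range(n) if 0 <= i + lag < n)
        if c > best:
            best, best_lag = c, lag
    num = sum(produced[i] * mic[i + best_lag]
              for i in range(n) if 0 <= i + best_lag < n)
    den = sum(produced[i] ** 2
              for i in range(n) if 0 <= i + best_lag < n)
    return best_lag, num / den
```

With lag and gain in hand, each produced channel could be shifted and scaled before being mixed into the synthesized ambisonic stream.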

Apparatus, systems and methods for packet based transmission of multiple data signals

Apparatus, systems and methods for receiving one or more input signals and providing output signals in various video, audio, data and mixed formats are described. One or more input processors receive the input signals. Each of the input processors provides one or more packetized signals corresponding to one or more of the input signals received at the input processor. Each output processor can receive one or more packetized signals and generate one or more output signals. The output signals correspond to one or more of the input signals, additional locally generated signals or data relating to the signals or any combination of such signals. Use of a packet router according to the invention allows input signals encoded as one set of packetized signals to be recombined to provide additional packetized signals incorporating the same or different combinations of the packetized signals.
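The packet-router idea (packetized input signals recombined into different output combinations) can be sketched as a simple subscription table mapping each output to the sources it wants. The `route_packets` name, `(source_id, payload)` tuples, and dict-based subscriptions are hypothetical stand-ins for the described apparatus.

```python
def route_packets(packets, subscriptions):
    # packets: list of (source_id, payload) produced by the input processors.
    # subscriptions: output name -> set of source ids that output combines.
    # The same packetized signal can feed any number of outputs, letting one
    # set of input packets be recombined into different output signals.
    outputs = {out: [] for out in subscriptions}
    for src, payload in packets:
        for out, wanted in subscriptions.items():
            if src in wanted:
                outputs[out].append((src, payload))
    return outputs
```

Here a single stream of packets yields a mixed audio/video output and an audio-only output without re-encoding the inputs.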