H04N5/92

ON-VEHICLE RECORDING CONTROL APPARATUS AND RECORDING CONTROL METHOD
20240265749 · 2024-08-08

An on-vehicle recording control apparatus includes: a captured data acquisition unit configured to acquire video data of surroundings of a vehicle captured by a camera; an event detection unit configured to detect an event based on acceleration applied to the vehicle; an operation controller configured to receive event recording operation based on user operation; and a recording controller configured to: record the acquired video data; generate and store, from the acquired video data, event data of a longer retroactive period when the operation controller receives the event recording operation and acceleration that is equal to or larger than a predetermined value and not determined as an event is detected before receiving the event recording operation, as compared to a case in which the acceleration that is equal to or larger than the predetermined value and not determined as an event is not detected before receiving the event recording operation.
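The claim's core logic is a loop-recording buffer whose retroactive save window lengthens when an above-threshold acceleration that was not judged an event precedes the user's manual operation. A minimal sketch of that logic, with illustrative threshold and window sizes (none of these values come from the patent):

```python
from collections import deque

ACCEL_THRESHOLD = 0.5   # "predetermined value" in the claim (illustrative units)
SHORT_LOOKBACK = 10     # frames saved when no prior acceleration spike occurred
LONG_LOOKBACK = 30      # frames saved when a spike preceded the operation

class RecordingController:
    """Hypothetical sketch of the claimed recording controller."""

    def __init__(self, buffer_size=60):
        self.buffer = deque(maxlen=buffer_size)  # loop-recorded video frames
        self.spike_before_operation = False

    def on_frame(self, frame, acceleration, is_event):
        self.buffer.append(frame)
        # Acceleration at or above the threshold that was NOT determined to be
        # an event arms the longer retroactive period for a later manual save.
        if acceleration >= ACCEL_THRESHOLD and not is_event:
            self.spike_before_operation = True

    def on_event_recording_operation(self):
        # User-initiated event recording: choose the retroactive period based
        # on whether an unclassified spike was detected beforehand.
        lookback = LONG_LOOKBACK if self.spike_before_operation else SHORT_LOOKBACK
        event_data = list(self.buffer)[-lookback:]
        self.spike_before_operation = False
        return event_data
```

The choice of a `deque` with `maxlen` mirrors the loop-recording behavior: old frames are discarded automatically, and only the retroactive slice is copied out when an event is saved.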

IMAGE COMPRESSION APPARATUS AND METHOD
20240267539 · 2024-08-08

Provided is an image compression method performed by an apparatus including at least one processor and at least one memory that stores instructions executable by the at least one processor. The method includes receiving event information of a captured image; encoding an image frame from the captured image; generating a meta-frame by encoding a mapping table corresponding to the event information; generating a transmission packet by combining the meta-frame with the encoded image frame; and transmitting the generated transmission packet, wherein the mapping table includes a first mapping table for encoding an object type for classifying at least one object included in the event information and a second mapping table for encoding a situation class for classifying a situation of the at least one object.
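The packet structure described here, a meta-frame built from two mapping tables prepended to the encoded image payload, can be sketched as follows. The table contents, codes, and packet layout are all illustrative assumptions; the abstract does not specify them:

```python
import struct

# Illustrative mapping tables (contents are assumptions, not from the patent).
OBJECT_TYPE_TABLE = {"person": 0, "vehicle": 1, "animal": 2}      # first table
SITUATION_CLASS_TABLE = {"normal": 0, "loitering": 1, "intrusion": 2}  # second

def make_meta_frame(event):
    # Encode the event's object type and situation class via the two
    # mapping tables into a compact two-byte meta-frame.
    obj_code = OBJECT_TYPE_TABLE[event["object_type"]]
    sit_code = SITUATION_CLASS_TABLE[event["situation"]]
    return struct.pack("BB", obj_code, sit_code)

def make_packet(meta_frame, encoded_frame):
    # Transmission packet: meta-frame length (big-endian u16), the
    # meta-frame itself, then the encoded image payload.
    return struct.pack("!H", len(meta_frame)) + meta_frame + encoded_frame

event = {"object_type": "vehicle", "situation": "intrusion"}
packet = make_packet(make_meta_frame(event), b"\x00\x01")
```

Encoding the event semantics as table indices rather than text keeps the meta-frame a fixed, tiny overhead per transmitted frame.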

Methods and Systems for Customizing Virtual Reality Data

An exemplary virtual reality system (system) accesses metadata descriptive of a plurality of surface data frame sequences that each depict a different view of a three-dimensional (3D) scene, and identifies a set of experience parameters descriptive of a particular virtual reality experience providable to a user by a media player device that processes a particular virtual reality dataset that is customized to the particular virtual reality experience. Based on the metadata and the identified set of experience parameters, the system selects surface data frame sequences for inclusion in a frame sequence subset upon which the particular virtual reality dataset is based. The system then includes an entry corresponding to the particular virtual reality dataset within an experience selection data structure configured to facilitate dynamic selection of different entries by the media player device as the media player device provides different virtual reality experiences to the user.
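One plausible reading of the selection step is ranking candidate surface data frame sequences by how well their views match the experience parameters. A sketch under the assumption that each sequence's metadata carries a view position and that the experience parameters include a viewpoint (both assumptions for illustration):

```python
from math import dist

def select_frame_sequences(metadata, experience_params, subset_size=2):
    # Rank sequences by distance between their view position and the
    # viewpoint implied by the experience parameters, then keep the
    # closest ones as the frame sequence subset.
    viewpoint = experience_params["viewpoint"]
    ranked = sorted(metadata, key=lambda seq: dist(seq["view_position"], viewpoint))
    return [seq["id"] for seq in ranked[:subset_size]]

metadata = [
    {"id": "seq-a", "view_position": (0.0, 0.0)},
    {"id": "seq-b", "view_position": (5.0, 0.0)},
    {"id": "seq-c", "view_position": (1.0, 1.0)},
]
subset = select_frame_sequences(metadata, {"viewpoint": (0.2, 0.2)})
```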

Automatic Processing of Double-System Recording

A method for automatically producing a video and audio mix at a first portable electronic device. The method receives a request to capture video and audio, performs a network discovery process to find a second portable electronic device, and sends a message to the second device indicating when to start recording audio for a double-system recording session. The method then initiates the recording session so that both devices record concurrently. When the first device stops recording video and audio, it signals the second device to stop recording for the identified recording session. When the first device receives an audio track from the second device containing an audio signal recorded during the recording session, it automatically generates a mix of video and audio in which one of the audio signals from the two tracks is ducked relative to the other.
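The final mixing step ducks one audio track relative to the other. A minimal sketch of sample-level ducking, with an assumed activity threshold and duck gain (neither is specified in the abstract):

```python
DUCK_GAIN = 0.2   # attenuation applied to the ducked track (illustrative)

def mix_with_ducking(primary, secondary, threshold=0.1):
    # Where the primary track carries signal above the threshold, the
    # secondary track is attenuated (ducked); elsewhere it passes through.
    mixed = []
    for p, s in zip(primary, secondary):
        gain = DUCK_GAIN if abs(p) > threshold else 1.0
        mixed.append(p + s * gain)
    return mixed

out = mix_with_ducking([0.5, 0.0], [0.4, 0.4])
```

Real implementations smooth the gain changes (attack/release envelopes) to avoid audible pumping; this sketch switches gain per sample only to show the relationship between the tracks.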

Methods and Systems for Customizing Virtual Reality Data

An exemplary virtual reality system (system) generates an experience selection data structure configured to facilitate dynamic selection of different entries included within the experience selection data structure by a media player device as the media player device provides different virtual reality experiences to a user by processing different virtual reality datasets corresponding to different entries that the media player device selects. The system provides the experience selection data structure to the media player device and detects that the media player device selects an entry by way of the experience selection data structure. The entry corresponds to a particular virtual reality dataset that is customized to a particular virtual reality experience. In response to the selection of the entry, the system provides, to the media player device, the particular virtual reality dataset that is customized to the particular virtual reality experience.
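The provide/detect/deliver exchange described above can be sketched as a lookup keyed by the entries in the selection structure. All names and the dataset shape are illustrative assumptions:

```python
class VirtualRealitySystem:
    """Hypothetical sketch of the claimed provider-side exchange."""

    def __init__(self):
        self.selection_structure = []   # entries offered to the media player
        self.datasets = {}              # entry -> customized VR dataset

    def add_entry(self, entry, dataset):
        # Include an entry in the experience selection data structure,
        # bound to the dataset customized for that experience.
        self.selection_structure.append(entry)
        self.datasets[entry] = dataset

    def on_entry_selected(self, entry):
        # The system detects the player's selection and provides the
        # corresponding customized virtual reality dataset.
        return self.datasets[entry]

system = VirtualRealitySystem()
system.add_entry("seated-view", {"sequences": ["seq-a", "seq-c"]})
dataset = system.on_entry_selected("seated-view")
```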

METHOD OF USING CUBE MAPPING AND MAPPING METADATA FOR ENCODERS
20180343470 · 2018-11-29

Described herein is a method and apparatus for using cube mapping and mapping metadata with encoders. Video data, such as 360 video data, is sent by a capturing device to an application, such as video editing software, which generates cube mapped video data and mapping metadata from the 360 video data. An encoder then applies the mapping metadata to the cube mapped video data to minimize or eliminate search regions when performing motion estimation, to minimize or eliminate neighbor regions when performing intra coding prediction, and to assign zero weights to edges having no relational meaning.
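The pruning the encoder performs can be sketched with a face-adjacency table derived from the mapping metadata: candidate regions on faces that share no meaningful edge with the current face are dropped, which is equivalent to assigning them zero weight. The adjacency table below is an assumed cube layout for illustration:

```python
# Cube-face adjacency derived from mapping metadata (assumed layout).
FACE_ADJACENCY = {
    "front": {"left", "right", "top", "bottom"},
    "left": {"front", "back", "top", "bottom"},
    "back": {"left", "right", "top", "bottom"},
}

def prune_search_regions(current_face, candidate_faces):
    # Keep motion-estimation candidates only on the current face or on
    # faces that share a meaningful edge with it; everything else is
    # eliminated from the search (zero weight).
    return [f for f in candidate_faces
            if f == current_face or f in FACE_ADJACENCY[current_face]]

kept = prune_search_regions("front", ["front", "left", "back"])
```

Shrinking the candidate set this way is what saves encoder work: motion search and intra prediction never cross an edge that is discontinuous in the cube-mapped frame.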

Distinguishing HEVC pictures for trick mode operations

Assistance information related to a tier framework may describe signaling for extractable and decodable sub-sequences based on picture interdependencies. This may allow a video application to efficiently select pictures when performing a given trick mode.
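In a tier framework, pictures at tier N depend only on pictures at the same or lower tiers, so the pictures at or below a chosen tier form an extractable, decodable sub-sequence. A sketch of trick-mode picture selection under that assumption (the tier values and picture list are illustrative):

```python
def select_pictures_for_trick_mode(pictures, max_tier):
    # Pictures at tier <= max_tier form a decodable sub-sequence, since
    # they reference only pictures at the same or lower tiers.
    return [p["poc"] for p in pictures if p["tier"] <= max_tier]

pictures = [
    {"poc": 0, "tier": 1},   # e.g. a random-access picture
    {"poc": 4, "tier": 2},
    {"poc": 2, "tier": 3},   # highest tier: droppable for fast playback
    {"poc": 8, "tier": 1},
]
fast_forward = select_pictures_for_trick_mode(pictures, max_tier=2)
```

Lowering `max_tier` yields a sparser but still decodable picture set, which is exactly what a fast-forward or fast-reverse trick mode needs.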

REPRODUCTION DEVICE, REPRODUCTION METHOD, AND RECORDING MEDIUM

The present technology relates to a reproduction device, a reproduction method, and a recording medium that enable content having a wide dynamic range of brightness to be displayed with an appropriate brightness. A recording medium, on which the reproduction device of one aspect of the present technology performs reproduction, records coded data of an extended video that is a video having a second brightness range that is wider than a first brightness range, brightness characteristic information that represents a brightness characteristic of the extended video, and brightness conversion definition information used when performing a brightness conversion of the extended video to a standard video that is a video having the first brightness range. The reproduction device decodes the coded data and converts the extended video obtained by decoding the coded data to the standard video on the basis of the brightness conversion definition information.
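The brightness conversion definition information could, for example, take the form of a knee-point curve: below the knee, extended-video brightness passes through unchanged; above it, brightness is compressed toward the standard range. The knee parameters below are purely illustrative, not values from the patent:

```python
def convert_to_standard(sample, knee_point, knee_slope):
    # Brightness conversion from extended (wider range) to standard video:
    # pass-through below the knee point, compression above it, as defined
    # by the conversion definition information.
    if sample <= knee_point:
        return sample
    return knee_point + (sample - knee_point) * knee_slope

# Convert two extended-video samples (normalized brightness) to the
# standard range using assumed knee parameters.
sdr = [convert_to_standard(x, knee_point=0.8, knee_slope=0.1) for x in (0.5, 1.8)]
```

Carrying the knee parameters alongside the coded data, rather than baking the conversion into the video, is what lets one recording serve both extended-range and standard-range displays.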

STORING METADATA RELATED TO CAPTURED IMAGES

The present disclosure relates to user-selected metadata related to images captured by a camera of a client device. User-selected metadata may include contextual information and/or information provided by a user when the images are captured. In various implementations, a free form input may be received at a first client device of one or more client devices operated by a user. A task request may be recognized from the free form input, and it may be determined that the task request includes a request to store metadata related to one or more images captured by a camera of the first client device. The metadata may be selected based on content of the task request. The metadata may then be stored, e.g., in association with one or more images captured by the camera, in computer-readable media. The computer-readable media may be searchable by the metadata.
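The flow described here, recognizing a task request in free-form input, selecting metadata from its content, storing it with captured images, and searching by it, can be sketched as below. The trigger phrasing, class names, and storage shape are all assumptions for illustration:

```python
import re

def recognize_task_request(free_form_input):
    # Recognize a request to store metadata; the "remember ... as <label>"
    # phrasing is an assumed example, not the patent's actual grammar.
    match = re.search(r"remember (?:this|these) as (.+)", free_form_input)
    return match.group(1) if match else None

class ImageStore:
    """Hypothetical store of images searchable by user-selected metadata."""

    def __init__(self):
        self.records = []  # (image_id, metadata) pairs

    def store(self, image_id, metadata):
        # Store the metadata in association with a captured image.
        self.records.append((image_id, metadata))

    def search(self, metadata):
        # The stored media are searchable by the user-selected metadata.
        return [img for img, meta in self.records if meta == metadata]

store = ImageStore()
label = recognize_task_request("remember this as where I parked")
store.store("img-001", label)
```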