G11B27/326

INFORMATION PROCESSING APPARATUS, INFORMATION RECORDING MEDIUM, INFORMATION PROCESSING METHOD, AND PROGRAM
20190058855 · 2019-02-21

There is provided an information processing apparatus that reproduces data recorded on a recording medium, the apparatus including circuitry configured to convert a color space of an image recorded on the recording medium and to superimpose a main content image and a sub-content image recorded on the recording medium, wherein, when the color space of the main content image corresponds to the BT.709 format, the circuitry determines to convert the color spaces of the sub-content image and the main content image into the same color space, and wherein, when the color space of the main content image corresponds to the BT.2020 format, the circuitry determines not to convert the color space of the sub-content image into the BT.2020 format.
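The conversion decision described in this abstract can be sketched as a small routine. This is a minimal illustration only; the function name and the returned plan shape are assumptions, not the patent's actual implementation:

```python
def plan_color_conversion(main_color_space: str) -> dict:
    """Decide how to handle the sub-content image's color space before
    superimposing it on the main content image.

    BT.709 main content: convert both images into the same color space.
    BT.2020 main content: do not convert the sub-content into BT.2020.
    """
    if main_color_space == "BT.709":
        return {"convert_sub": True, "target": "BT.709"}
    if main_color_space == "BT.2020":
        return {"convert_sub": False, "target": None}
    raise ValueError(f"unsupported color space: {main_color_space}")
```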

Virtual panoramic thumbnail to summarize and visualize video content in video surveillance and in connected home business

A system including a video sequence embodied in memory, a processor that detects a moving object or person within a field of view of the video sequence, a processor that tracks the moving object or person and identifies a plurality of frames that summarize movement of the moving object or person within a time interval of the video sequence, and a processor that combines the identified plurality of frames into a thumbnail image.
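The frame-selection and frame-combination steps can be sketched as follows. Frames are modeled here as simple 2-D pixel grids, and both helper names are assumptions; a real system would select frames from the tracked object's trajectory rather than at even spacing:

```python
def summarize_movement(track, n_frames=4):
    """Pick n_frames indices evenly spaced over the tracked interval."""
    if len(track) <= n_frames:
        return list(range(len(track)))
    step = (len(track) - 1) / (n_frames - 1)
    return [round(i * step) for i in range(n_frames)]

def combine_frames(frames):
    """Place the selected frames side by side into one panoramic thumbnail
    (row-wise concatenation of equal-height frames)."""
    height = len(frames[0])
    return [sum((f[row] for f in frames), []) for row in range(height)]
```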

Systems and methods of providing content differentiation between thumbnails

A method and system of providing content differentiation between thumbnails. One method includes receiving a first video and selecting a first thumbnail from the first video. The method includes receiving a second video and selecting a second thumbnail from the second video. The method includes determining whether a first scene of the first video matches a second scene of the second video. The method includes comparing the first thumbnail and the second thumbnail to determine a thumbnail similarity when the first scene of the first video matches the second scene of the second video. The method includes selecting a new thumbnail to replace either the first thumbnail or the second thumbnail when the thumbnail similarity meets or exceeds a predetermined threshold. The method includes displaying the new thumbnail and the other of the first thumbnail and the second thumbnail.
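The replacement logic in this abstract can be sketched as a single function. The name, the pluggable similarity function, and the list of alternate thumbnails are all assumptions made for illustration:

```python
def choose_thumbnails(thumb1, thumb2, same_scene, similarity_fn, threshold, alternates):
    """Return the pair of thumbnails to display.

    When both videos show a matching scene and the two thumbnails are at
    least as similar as the threshold, swap the second thumbnail for the
    first sufficiently different alternate; otherwise keep both as-is.
    """
    if same_scene and similarity_fn(thumb1, thumb2) >= threshold:
        for alt in alternates:
            if similarity_fn(thumb1, alt) < threshold:
                return thumb1, alt
    return thumb1, thumb2
```

A toy similarity function over flat pixel lists (fraction of equal pixels) is enough to exercise the logic.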

Extracting high quality images from a video

Various embodiments calculate a score for each frame of a video segment based on various subject-related factors associated with a subject (e.g., a face or other object) captured in a frame, relative to corresponding factors of the subject in other frames of the video segment. The highest-scoring frame can then be extracted by comparing the score of each frame with the scores of all other frames of the segment, and the extracted frame can be transcoded as an image for display via a display device. The score calculation, extraction, and transcoding actions are performed automatically and without user intervention, which improves on previous approaches that are primarily manual, tedious, and time-consuming.
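The score-and-extract step reduces to an argmax over per-frame scores. A minimal sketch, with the scoring function left pluggable since the abstract does not fix the subject-related factors:

```python
def extract_best_frame(frames, score_fn):
    """Score every frame of the segment and return the highest-scoring
    frame along with its score."""
    scores = [score_fn(f) for f in frames]
    best = max(range(len(frames)), key=scores.__getitem__)
    return frames[best], scores[best]
```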

Image capture device, recording device, and display control device

An image capture device for recording HDR (high dynamic range) image data obtained through image capture performs control so as to, when encoding HDR image data obtained by capturing an image with an image sensor, divide part of the HDR image data corresponding to a coding area to be encoded into a plurality of divided HDR image data, encode each of the plurality of divided HDR image data by using encoding means, and record the plurality of divided HDR image data that are encoded on a recording medium in a predetermined recording format.
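The divide-encode-record flow can be sketched over a flat byte buffer. Here `zlib` stands in for the device's real encoder, and the "recording format" is simply a list of (offset, compressed tile) records; both are assumptions for illustration:

```python
import zlib

def encode_hdr_tiles(data: bytes, tile_size: int):
    """Split the coding area into fixed-size tiles, encode each tile
    independently, and return the records to write to the medium."""
    records = []
    for offset in range(0, len(data), tile_size):
        tile = data[offset:offset + tile_size]
        records.append((offset, zlib.compress(tile)))
    return records
```

Independent per-tile encoding is what allows the divided HDR image data to be processed in parallel or recovered piecewise.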

CORRELATION OF RECORDED VIDEO PRESENTATIONS AND ASSOCIATED SLIDES

Techniques are disclosed for performing a computer-implemented processing of slide presentation videos to automatically generate index locations corresponding to particular slides within a slide presentation video. In embodiments, a slide presentation video is uploaded to a video processing system. The video processing system performs an image analysis to identify each slide within the slide presentation and determine a time window for each occurrence of each slide. An audio analysis is performed to adjust the time window to the start of a sentence that precedes the introduction of the slide. A user interface includes one or more selectable links associated with each slide that link to a corresponding location within the slide presentation video. Similarly, a processed slide presentation video includes selectable links to index to the corresponding slide of the presentation.
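The audio-based adjustment step, moving each slide's index point back to the start of the sentence that precedes the slide's appearance, can be sketched with times in seconds. The function name and input shapes are assumptions:

```python
def adjust_slide_windows(slide_windows, sentence_starts):
    """For each (start, end) slide window, move the start back to the
    latest sentence start that does not come after it; leave the start
    unchanged when no sentence precedes the slide."""
    adjusted = []
    for start, end in slide_windows:
        preceding = [t for t in sentence_starts if t <= start]
        adjusted.append((max(preceding) if preceding else start, end))
    return adjusted
```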

Method for storing multi-lens recording file and multi-lens recording apparatus
12131755 · 2024-10-29

A method for storing a multi-lens recording file and a multi-lens recording apparatus are provided. When at least one lens is driven for loop recording, a total recording capacity for loop recording is calculated based on setting data of the at least one lens being driven. When a remaining capacity of a storage space is less than the total recording capacity, a file cleaning action is performed on multiple recorded files stored in the storage space for the at least one lens. The file cleaning action includes deleting recorded files stored in the storage space, based on recording time and lens number, until the remaining capacity is no less than the total recording capacity. When the remaining capacity of the storage space is no less than the total recording capacity, loop recording is performed through the at least one lens being driven and the currently recorded file is stored.
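The file cleaning action amounts to deleting the oldest recordings, ordered by recording time and then lens number, until enough space is free. A minimal sketch, with files modeled as dicts (the field names are assumptions):

```python
def clean_for_loop_recording(files, remaining, needed):
    """Delete recorded files, oldest first (by recording time, then lens
    number), until the remaining capacity is no less than the total
    recording capacity needed for loop recording.

    files: list of {"time": ..., "lens": ..., "size": ...} records.
    Returns (files kept, updated remaining capacity).
    """
    files = sorted(files, key=lambda f: (f["time"], f["lens"]))
    while remaining < needed and files:
        victim = files.pop(0)       # oldest recording goes first
        remaining += victim["size"]
    return files, remaining
```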

ROBUST TRACKING OF OBJECTS IN VIDEOS

The present disclosure is directed toward systems and methods for tracking objects in videos. For example, one or more embodiments described herein utilize various tracking methods in combination with an image search index made up of still video frames indexed from a video. One or more embodiments described herein utilize a backward and forward tracking method that is anchored by one or more key frames in order to accurately track an object through the frames of a video, even when the video is long and may include challenging conditions.
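The anchoring idea, tracking both backward and forward from key frames so every frame gets a position derived from the nearest anchor, can be sketched crudely. A real tracker would refine the position frame by frame; this assumption-laden sketch just propagates each nearest key-frame position:

```python
def track_from_keyframes(n_frames, keyframes):
    """Assign each frame index the object position from the nearest key
    frame, propagating anchors both forward and backward in time.

    keyframes: {frame_index: position}. Returns {frame_index: position}.
    """
    anchors = sorted(keyframes)
    track = {}
    for i in range(n_frames):
        nearest = min(anchors, key=lambda a: abs(a - i))
        track[i] = keyframes[nearest]
    return track
```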

Mobile terminal and method for controlling the same

The present invention relates to a mobile terminal capable of capturing videos, and a method of controlling the same. The mobile terminal includes a display unit capable of outputting a first video captured in response to a preset user input and of outputting a timeline of the first video in a camera preview mode, a camera capable of capturing a second video consecutive to the first video in response to a preset user input, and a controller capable of storing the first video and the second video as one full video and of outputting, in the camera preview mode, a timeline of the full video in which the timeline of the second video follows the timeline of the first video.
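The combined-timeline behavior, where the second video's timeline follows the first's within one full video, can be sketched as a mapping between the full timeline and the two segments. The function names and the tuple layout are assumptions:

```python
def full_timeline(duration_first, duration_second):
    """Build the full-video timeline: the second video's timeline starts
    where the first video's timeline ends."""
    return [("first", 0.0, duration_first),
            ("second", duration_first, duration_first + duration_second)]

def locate(timeline, t):
    """Map a time on the full timeline to (segment name, local time)."""
    for name, start, end in timeline:
        if start <= t < end:
            return name, t - start
    raise ValueError("time outside full video")
```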