H04N5/92

Spherical video editing
11380362 · 2022-07-05

Systems and methods provide for editing of spherical video data. In one example, a computing device can receive a spherical video (or a video associated with an angular field of view greater than an angular field of view associated with a display screen of the computing device), such as via a built-in spherical video capturing system or by acquiring the video data from another device. The computing device can display the spherical video data. While the spherical video data is displayed, the computing device can track the movement of an object (e.g., the computing device, a user, a real or virtual object represented in the spherical video data, etc.) to change the position of the viewport into the spherical video. The computing device can generate a new video from the new positions of the viewport.
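
The viewport-repositioning step described above can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: frames are modeled as equirectangular pixel grids, the tracked movement is reduced to a yaw angle per frame, and the function names (`viewport_column_range`, `render_edited_video`) are hypothetical.

```python
# Hypothetical sketch: extracting a viewport from equirectangular frames
# using a tracked yaw angle per frame. Frame layout and names are assumptions.

def viewport_column_range(yaw_deg, frame_width, viewport_width):
    """Map a yaw angle (degrees) to a horizontal pixel window, wrapping at the seam."""
    center = int((yaw_deg % 360.0) / 360.0 * frame_width)
    start = (center - viewport_width // 2) % frame_width
    return [(start + i) % frame_width for i in range(viewport_width)]

def render_edited_video(frames, yaw_track, viewport_width):
    """Build a new flat video by sampling each spherical frame at the tracked yaw."""
    out = []
    for frame, yaw in zip(frames, yaw_track):
        cols = viewport_column_range(yaw, len(frame[0]), viewport_width)
        out.append([[row[c] for c in cols] for row in frame])
    return out
```

A real implementation would also account for pitch and for the distortion of the equirectangular projection; this sketch only shows how tracked motion drives the crop.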

Display control device, display control method, and program

From captured images, a scene desired by a viewer is appropriately detected. A display control device includes: an image information acquisition unit 60 that acquires information on a plurality of images captured during movement; a set time designation unit 66 that designates a plurality of set times based on the information on the images; a thumbnail acquisition unit 68 that acquires the images captured at the set times as thumbnails; and a display image generation unit 70 that generates a display image to be displayed on a display unit so that the plurality of acquired thumbnails are arrayed in a predetermined direction in image-capturing time-series order while being displayed side by side in a direction different from the predetermined direction, based on position information indicating positions at which the images are captured.
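
The two-axis layout can be sketched roughly as below. This is a loose interpretation, not the patent's implementation: capture time orders thumbnails along one axis (columns), and position information is reduced to a hypothetical "lane" index that picks the row.

```python
# Illustrative sketch: arrange thumbnails so capture time runs along columns
# and capture position (reduced here to an integer lane) picks the row.

def layout_thumbnails(thumbs):
    """thumbs: list of (capture_time, position_lane, image_id), one per set time.
    Returns {(row, col): image_id} with columns in time-series order."""
    ordered = sorted(thumbs, key=lambda t: t[0])  # image-capturing time-series order
    grid = {}
    for col, (_, lane, image_id) in enumerate(ordered):
        grid[(lane, col)] = image_id              # side-by-side by position lane
    return grid
```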

Data serialization for post-capture editing of artificial reality effects

In one embodiment, the system may receive a serialized data stream generated by serializing data chunks including data from a video stream and contextual data streams associated with the video stream. The contextual data streams may include a first computed data stream and a sensor data stream. The system may extract the video data stream and one or more contextual data streams from the serialized data stream. The system may generate a second computed data stream based on the sensor data stream in the extracted contextual data streams. The system may compare the second computed data stream to the first computed data stream extracted from the serialized data stream to select a computed data stream based on one or more pre-determined criteria. The system may render an artificial reality effect for display with the extracted video data stream based at least in part on the selected computed data stream.
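
The comparison-and-selection step can be sketched as follows. This is a hedged sketch under stated assumptions: the "computed" stream is modeled as a running average of sensor samples, and the pre-determined criterion is modeled as a simple drift tolerance; the actual streams and criteria in the system are not specified at this level of detail.

```python
# Sketch: recompute a data stream from the extracted sensor stream, compare it
# to the first computed stream from the serialized file, and select one.
# The running-average model and the drift criterion are assumptions.

def recompute(sensor_stream):
    """Hypothetical second computed stream: running average of sensor samples."""
    out, acc = [], 0.0
    for i, s in enumerate(sensor_stream, 1):
        acc += s
        out.append(acc / i)
    return out

def select_stream(first_computed, sensor_stream, tolerance=0.05):
    second_computed = recompute(sensor_stream)
    drift = max(abs(a - b) for a, b in zip(first_computed, second_computed))
    # Criterion: keep the recorded stream unless it drifts past the tolerance.
    return first_computed if drift <= tolerance else second_computed
```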

Combining video streams having different information-bearing levels

A method includes recording a first video stream characterized by a first value of a first quality characteristic. The method includes determining that the first video stream satisfies a trigger criterion. The trigger criterion characterizes a threshold amount of video content change information. The method includes, in response to determining that the first video stream satisfies the trigger criterion, obtaining a second video stream characterized by a second value of a second quality characteristic. The second video stream includes scene information also included in the first video stream. The second value of the second quality characteristic is indicative of a higher quality video stream than the first value of the first quality characteristic. The method includes generating a third video stream by adding information from the second video stream to the first video stream. The third video stream corresponds to a higher quality version of the first video stream.
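
The trigger-and-combine flow can be sketched as below. This is an illustrative model, not the claimed method: frames are reduced to scalar intensity values, "video content change information" is modeled as mean absolute frame-to-frame difference, and "adding information" is modeled as substituting high-quality frames where available. The trigger value is invented.

```python
# Sketch: trigger on content change, then merge a higher-quality stream into
# the first stream to produce the third stream. All models are assumptions.

def content_change(frames):
    """Mean absolute frame-to-frame difference (frames as scalar intensities)."""
    if len(frames) < 2:
        return 0.0
    diffs = [abs(b - a) for a, b in zip(frames, frames[1:])]
    return sum(diffs) / len(diffs)

def combine(low_q_frames, high_q_frames, trigger=10.0):
    if content_change(low_q_frames) < trigger:
        return list(low_q_frames)            # trigger not satisfied: keep first stream
    # Third stream: high-quality frames where available, first stream elsewhere.
    return [h if h is not None else l
            for l, h in zip(low_q_frames, high_q_frames)]
```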

Increasing dynamic range of a virtual production display

Disclosed here are various techniques to increase the dynamic range of an image recorded from a display. A processor performing preprocessing splits an input image containing both bright and dark regions into two images: image A, containing the bright regions, and image B, containing the dark regions. The display presents image A and image B in alternating fashion. A camera is synchronized with the display to record image A and image B independently. In postprocessing, a processor obtains the recorded images A and B. The processor increases the pixel values of recorded image A to obtain a brightened version of image A. Finally, the processor increases the pixel values of the image recorded from the display by combining the brightened recording of image A with the recording of image B.
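
A minimal sketch of the split-and-recombine idea, assuming 8-bit-style pixel values, an invented split threshold, and an invented gain for the bright exposure:

```python
# Sketch of the alternating-frame HDR idea. The threshold and gain values are
# illustrative assumptions, not parameters from the disclosure.
import numpy as np

def split(image, threshold=128):
    bright = np.where(image >= threshold, image, 0)   # image A: bright regions
    dark = np.where(image < threshold, image, 0)      # image B: dark regions
    return bright, dark

def recombine(recorded_a, recorded_b, gain=4):
    # Boost the separately recorded bright frame, then merge with the dark one;
    # the result can exceed the range of a single recorded exposure.
    return recorded_a.astype(np.int32) * gain + recorded_b
```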

Increasing dynamic range of a virtual production display

The processor obtains a first pixel value and a second pixel value of the display. The processor determines a desired pixel value range that exceeds the second pixel value of the display. The processor obtains a threshold between the first pixel value of the display and the second pixel value of the display. The processor obtains a function mapping the desired pixel value range to a range between the threshold and the second pixel value. The processor applies the function to an input image prior to displaying the input image on the display. The display presents the image. Upon recording the presented image, the processor determines a region within the recorded image having a pixel value between the threshold and the second pixel value. The processor increases the dynamic range of the recorded image by applying an inverse of the function to the pixel value of the region.

Increasing dynamic range of a virtual production display

A processor performing preprocessing obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value of the virtual production display and a second pixel value of the virtual production display. Upon detecting a region of the input image having an original pixel value above the threshold, the processor modifies the region according to predetermined steps, producing a pattern unlikely to occur within the input image, where the pattern corresponds to a difference between the original pixel value and the threshold. The processor can replace the region of the input image with the pattern to obtain a modified image. The virtual production display can present the modified image. A processor performing postprocessing detects the pattern within the modified image displayed on the virtual production display. The processor calculates the original pixel value of the region by reversing the predetermined steps. The processor replaces the pattern in the modified image with the original pixel value.
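
A loose sketch of the pattern idea follows. The encoding scheme here is entirely invented (an above-threshold pixel becomes a two-pixel marker: the threshold value followed by the excess); it only illustrates "predetermined, reversible steps producing a detectable pattern." It also leans on the "unlikely to occur" assumption: a natural pixel exactly equal to the threshold would confuse this toy decoder.

```python
# Invented encode/decode sketch over a 1-D run of pixel values.
# Pattern: each above-threshold pixel -> (threshold, excess) marker pair.

def encode(values, threshold):
    out = []
    for v in values:
        if v > threshold:
            out.extend([threshold, v - threshold])  # detectable dip after the cap
        else:
            out.append(v)
    return out

def decode(encoded, threshold):
    out, i = [], 0
    while i < len(encoded):
        if encoded[i] == threshold and i + 1 < len(encoded):
            out.append(threshold + encoded[i + 1])  # reverse the predetermined steps
            i += 2
        else:
            out.append(encoded[i])
            i += 1
    return out
```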

Dynamic range of a virtual production display

A processor obtains an input image containing both bright and dark regions. The processor obtains a threshold between a first pixel value and a second pixel value of the display. Upon detecting a region of the input image having an original pixel value above the threshold, the processor can create a data structure including a location of the region in the input image and the original pixel value of the region. The data structure occupies less memory than the input image. The display presents the input image, including the region of the image having the original pixel value above the threshold. The processor sends the data structure to a camera, which records the presented image. The processor performing postprocessing obtains the data structure and the recorded image and increases the dynamic range of the recorded image by modifying the recorded image based on the data structure.
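
The sidecar-data-structure idea can be sketched as below. This is a simplified model, assuming a 1-D pixel array and a data structure that is just a list of (location, original value) records; the actual structure and transport are not specified here.

```python
# Sketch: record clipped locations and original values in a small sidecar
# structure, then restore them in the recorded image during postprocessing.

def build_sidecar(image, threshold):
    """List of (index, original_value) for pixels above the threshold.
    Far smaller than the image whenever few pixels exceed the threshold."""
    return [(i, v) for i, v in enumerate(image) if v > threshold]

def restore(recorded, sidecar):
    out = list(recorded)
    for i, original in sidecar:
        out[i] = original          # re-expand the clipped region from the sidecar
    return out
```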