
STEREO IMAGING DEVICE
20200099914 · 2020-03-26

A stereo imaging device includes a first image sensor and a second image sensor, each capable of outputting a captured image signal. A signal generating unit generates, from the two image signals of the two image sensors, two parallax detection signals for detecting a parallax, and also generates, from the image signal fed from one of the two image sensors, a monitoring signal to be outputted to a monitor. A first reduction processing circuit reduces the monitoring signal at a preset reduction ratio and outputs it. A second reduction processing circuit converts an arbitrary range of the image indicated by the parallax detection signal at an arbitrary reduction ratio and outputs the resulting parallax detection signal.
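
The two reduction paths in this abstract can be sketched as follows; the function names, nearest-neighbour sampling, and integer arithmetic are illustrative assumptions, not the patent's circuitry:

```python
def reduce_fixed(image, ratio):
    """Nearest-neighbour downscale of a 2-D image by a preset integer ratio
    (the first reduction path, for the monitoring signal)."""
    return [row[::ratio] for row in image[::ratio]]

def reduce_roi(image, top, left, height, width, out_h, out_w):
    """Downscale an arbitrary rectangular range of the image to an arbitrary
    output size (the second reduction path, for the parallax signal)."""
    out = []
    for r in range(out_h):
        src_r = top + r * height // out_h
        row = []
        for c in range(out_w):
            src_c = left + c * width // out_w
            row.append(image[src_r][src_c])
        out.append(row)
    return out

# Example: an 8x8 gradient image
img = [[r * 8 + c for c in range(8)] for r in range(8)]
monitor = reduce_fixed(img, 2)                 # preset 1/2 reduction -> 4x4
parallax = reduce_roi(img, 2, 2, 4, 4, 2, 2)   # 4x4 range reduced to 2x2
```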

Mobile terminal and operating method thereof
10594927 · 2020-03-17

A mobile terminal includes: a display; and a controller configured to: cause the display to display a plurality of videos captured by a 360-degree camera; generate a 360-degree video by combining or stitching the plurality of videos; and cause the display to display a stitching region corresponding to a focused photographing object when the focused photographing object included in the 360-degree video is placed in the stitching region that is a boundary region in which at least two of the plurality of videos are connected.
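
The boundary-region test above can be roughly illustrated as below, modelling each stitching seam as an angular position in the 360-degree panorama. The 1-D angular model, the function names, and ignoring the 0/360 wraparound are all simplifying assumptions:

```python
def overlaps(a_lo, a_hi, b_lo, b_hi):
    """True if the intervals [a_lo, a_hi] and [b_lo, b_hi] overlap."""
    return a_lo < b_hi and b_lo < a_hi

def in_stitching_region(obj_span, seam_angles, halfwidth):
    """True if the focused object's angular span overlaps any stitching
    region, taken as a band of +/- halfwidth degrees around each seam."""
    return any(
        overlaps(obj_span[0], obj_span[1], s - halfwidth, s + halfwidth)
        for s in seam_angles
    )

# Example: two seams (two-camera rig), 5-degree half-width bands
seams = [90.0, 270.0]
hit = in_stitching_region((88.0, 95.0), seams, 5.0)    # object on a seam
miss = in_stitching_region((100.0, 120.0), seams, 5.0)  # object clear of seams
```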

Device and method of dimensioning using digital images and depth data
10587858 · 2020-03-10

A device and method of dimensioning using digital images and depth data is provided. The device includes a camera and a depth sensing device whose fields of view generally overlap. Segments of shapes belonging to an object identified in a digital image from the camera are identified. Based on respective depth data, from the depth sensing device, associated with each of the segments of the shapes belonging to the object, it is determined whether each of the segments is associated with a same shape belonging to the object. Once all the segments are processed to determine their respective associations with the shapes of the object in the digital image, dimensions of the object are computed based on the respective depth data and the respective associations of the shapes.
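
A minimal sketch of the two decisions described above, under an assumed pinhole-camera model with hypothetical values (the patent does not specify these formulas): segments are grouped by comparing their depths, and a pixel extent is converted to a metric dimension using depth and focal length:

```python
def same_shape(depth_a, depth_b, tol=0.05):
    """Treat two segments as belonging to the same planar shape of the
    object if their depths agree within `tol` metres (assumed heuristic)."""
    return abs(depth_a - depth_b) <= tol

def metric_length(pixel_length, depth, focal_px):
    """Pinhole model: real length = pixel length * depth / focal length
    (focal length expressed in pixels)."""
    return pixel_length * depth / focal_px

# Example: a 200 px edge seen at 1.0 m with a 500 px focal length
edge = metric_length(200, 1.0, 500)  # 0.4 m
```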

METHODS AND APPARATUS FOR GENERATING A THREE-DIMENSIONAL RECONSTRUCTION OF AN OBJECT WITH REDUCED DISTORTION
20200074698 · 2020-03-05

Methods, systems, and computer readable media for generating a three-dimensional reconstruction of an object with reduced distortion are described. In some aspects, a system includes at least two image sensors, at least two projectors, and a processor. Each image sensor is configured to capture one or more images of an object. Each projector is configured to illuminate the object with an associated optical pattern and from a different perspective. The processor is configured to perform the acts of receiving, from each image sensor, for each projector, images of the object illuminated with the associated optical pattern and generating, from the received images, a three-dimensional reconstruction of the object. The three-dimensional reconstruction has reduced distortion due to the received images of the object being generated when each projector illuminates the object with an associated optical pattern from the different perspective.

PERSON RECOGNITION DEVICE AND METHOD
20200074204 · 2020-03-05

Person recognition device for person re-identification in a monitoring region, having a camera apparatus and an evaluation module, wherein the camera apparatus comprises a first camera unit and a second camera unit, wherein the first camera unit is configured to record a first monitoring image of a portion of the monitoring region, wherein the second camera unit is configured to record a second monitoring image of the portion of the monitoring region, wherein the camera apparatus is configured to feed the first monitoring image and the second monitoring image to the evaluation module, wherein the evaluation module is configured to re-identify a person in the monitoring region based on the first monitoring image and the second monitoring image, wherein the second camera unit has a cut-off filter for wavelength cutting of incident light in a stop band.

Stereo image generation and interactive playback
10540818 · 2020-01-21

Video data of an environment may be prepared for stereoscopic presentation to a user in a virtual reality or augmented reality experience. According to one method, a plurality of locations distributed throughout a viewing volume may be designated, at which a plurality of vantages are to be positioned to facilitate viewing of the environment from proximate the locations. For each location, a plurality of images of the environment, captured from viewpoints proximate the location, may be retrieved. For each location, the images may be reprojected to a three-dimensional shape and combined to generate a combined image. The combined image may be applied to one or more surfaces of the three-dimensional shape to generate a vantage. The vantages may be stored such that the vantages can be used to generate stereoscopic viewpoint video of the scene, as viewed from at least two virtual viewpoints corresponding to viewpoints of an actual viewer's eyes within the viewing volume.

Method and apparatus for processing stereoscopic video

A method of processing a stereoscopic video includes determining whether a current frame of a stereoscopic video is a video segment boundary frame; determining whether an image error is included in the current frame when the current frame is the video segment boundary frame; and processing the current frame by removing, from the current frame, a post inserted object (PIO) included in the current frame when the image error is included in the current frame.

Three-dimensional (3D) image rendering method and apparatus

A three-dimensional (3D) image rendering method and apparatus are provided. The 3D image rendering method includes determining optical images associated with candidate viewpoint positions in a viewing zone, determining virtual rays that intersect a pixel of a display panel based on the determined optical images, and assigning a pixel value to the pixel based on the respective distances between the optical elements of an optical layer and the points at which the rays intersect the optical layer.
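
The assignment step can be sketched under an assumed 1-D geometry: for each candidate viewpoint's virtual ray, measure the distance from its intersection with the optical layer to the nearest optical element centre, and let the closest ray win the pixel. The names, the periodic element layout, and the values are illustrative, not the patent's method:

```python
def nearest_element_distance(x, pitch):
    """Distance from intersection position x to the nearest optical element
    centre, assuming element centres repeat at k * pitch."""
    k = round(x / pitch)
    return abs(x - k * pitch)

def assign_viewpoint(intersections, pitch):
    """Return the index of the candidate viewpoint whose virtual ray
    intersects the optical layer closest to an optical element."""
    dists = [nearest_element_distance(x, pitch) for x in intersections]
    return dists.index(min(dists))

# Example: three candidate rays hitting the layer at these positions,
# with optical elements spaced at a pitch of 1.0
best = assign_viewpoint([0.9, 2.05, 3.4], pitch=1.0)
```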

Enhanced material detection by stereo beam profile analysis

A detector (110) for determining at least one material property of at least one object (112) is proposed. The detector (110) comprises at least one projector (116) configured for illuminating the object (112) with at least one illumination pattern (118) comprising a plurality of illumination features (120); at least one first camera (122) having at least one first sensor element, wherein the first sensor element has a matrix of first optical sensors, the first optical sensors each having a light-sensitive area, wherein each first optical sensor is designed to generate at least one sensor signal in response to an illumination of its respective light-sensitive area by a reflection light beam propagating from the object (112) to the first camera (122), wherein the first camera (122) is configured for imaging at least one first reflection image comprising a plurality of first reflection features generated by the object (112) in response to illumination by the illumination features (120), wherein the first camera (122) is arranged such that the first reflection image is imaged under a first direction of view to the object (112); at least one second camera (124) having at least one second sensor element, wherein the second sensor element has a matrix of second optical sensors, the second optical sensors each having a light-sensitive area, wherein each second optical sensor is designed to generate at least one sensor signal in response to an illumination of its respective light-sensitive area by a reflection light beam propagating from the object (112) to the second camera (124), wherein the second camera (124) is configured for imaging at least one second reflection image comprising a plurality of second reflection features generated by the object (112) in response to illumination by the illumination features (120), wherein the second camera (124) is arranged such that the second reflection image is imaged under a second direction of view to the object (112), wherein the 
first direction of view and the second direction of view differ; at least one evaluation device (126) configured for evaluating the first reflection image and the second reflection image, wherein the evaluation comprises matching the first reflection features and the second reflection features and determining a combined material property of matched pairs of first and second reflection features by analysis of their beam profiles.
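
The matching step can be roughly sketched as pairing each first-camera reflection feature with the second-camera feature whose beam profile is most similar. The greedy strategy and the normalised-correlation similarity below are illustrative assumptions; the patent's actual beam-profile analysis is not specified here:

```python
def correlation(p, q):
    """Normalised dot product of two beam profiles (lists of intensities)."""
    num = sum(a * b for a, b in zip(p, q))
    den = (sum(a * a for a in p) * sum(b * b for b in q)) ** 0.5
    return num / den if den else 0.0

def match_features(first, second):
    """Greedy match: for each first-camera profile, pick the most similar
    second-camera profile. Returns a list of (i, j) index pairs."""
    pairs = []
    for i, p in enumerate(first):
        j = max(range(len(second)), key=lambda k: correlation(p, second[k]))
        pairs.append((i, j))
    return pairs

# Example: two features per camera, listed in a different order
first = [[1.0, 0.2, 0.0], [0.0, 0.2, 1.0]]
second = [[0.0, 0.3, 1.0], [1.0, 0.3, 0.0]]
pairs = match_features(first, second)
```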

MULTI-CAMERA SCENE REPRESENTATION INCLUDING STEREO VIDEO FOR VR DISPLAY
20190349561 · 2019-11-14

This invention encompasses a device capable of taking two sets of videos or pictures, each from a slightly different perspective than the other, and using software to combine these two sets of media into one three-dimensional image that can be shared with others. One embodiment of the invention calls for a tray with a hand grip that holds two cell phones and can adjust them to approximately an interpupillary distance, such that a user can take a picture with the device and have a recipient of the message view either the user or an object the user is pointing the device at in three-dimensional view. The software also has image recognition abilities such that it can build a three-dimensional environment through the one-sided capture of an image, then pull data from an image recognition database to complete a three-dimensional representation of the object. Dual Bluetooth with close-range detection for shutter control was developed and tested successfully.