H04N13/156

Automated feature analysis of a structure

An automated structural feature analysis system is disclosed. A 3D device emits a volume-scanning 3D beam that scans a structure to generate 3D data associated with the distance between the 3D device and each end point of the 3D beam on the structure. An imaging device captures an image of the structure to generate image data depicting the structure. A controller fuses the 3D data generated by the 3D device with the image data generated by the imaging device to determine the distance between the 3D device and each end point of the 3D beam on the structure, and to determine the distance between points on the image. The controller then generates a sketch image of the structure for display to the user.
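
The abstract does not specify how the 3D data and image data are fused. As a rough illustrative sketch only, the pairing of beam end points with image pixels can be modeled with a calibrated pinhole camera: each 3D point is projected into the image, and its Euclidean distance from the device is attached to the resulting pixel. The function name `fuse_points_with_image` and the intrinsics `fx, fy, cx, cy` are assumptions for illustration, not the patent's method.

```python
import math

def fuse_points_with_image(points_3d, fx, fy, cx, cy):
    """Project 3D beam end points (in the camera frame) onto the image
    plane and pair each resulting pixel with that point's distance.

    points_3d: iterable of (X, Y, Z) tuples with Z > 0 (depth axis).
    fx, fy, cx, cy: pinhole intrinsics of the imaging device.
    Returns a list of ((u, v), distance) pairs.
    """
    fused = []
    for x, y, z in points_3d:
        # Pinhole projection: map the 3D point to pixel coordinates.
        u = fx * x / z + cx
        v = fy * y / z + cy
        # Euclidean distance from the device to the beam end point.
        fused.append(((u, v), math.sqrt(x * x + y * y + z * z)))
    return fused

# A point 2 m straight ahead projects onto the principal point (cx, cy).
pairs = fuse_points_with_image([(0.0, 0.0, 2.0)],
                               fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

With the fused pairs in hand, the distance between any two image points can be estimated from the 3D distances attached to their nearest projected pixels.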

Region monitoring system and control method for region monitoring system

A region monitoring system includes: a first communication control unit that transmits, to a monitoring server, captured-image data from a monitoring camera that monitors a region in which a plurality of houses are located; and a camera communication control unit that transmits, to the monitoring server, captured-image data from an in-vehicle camera of a vehicle located in the region.

Methods and systems of automatic calibration for dynamic display configurations
11544031 · 2023-01-03

Systems and methods are described for capturing, using a forward-facing camera associated with an augmented reality (AR) head-mounted display (HMD), images of portions of first and second display devices in an environment, the first and second display devices displaying first and second portions of content related to an AR presentation, and displaying a third portion of content related to the AR presentation on the AR HMD, the third portion determined based upon the images of the portions of the first and second display devices captured using the forward-facing camera. Moreover, the first and second display devices may be active stereo displays, and the AR HMD may simultaneously function as shutter glasses.
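
The abstract leaves open how the third portion is determined from the captured images. One minimal sketch, assuming the camera images let the system estimate which horizontal spans of the content (as fractions of its full width) the two external displays already cover, is to render the uncovered gap between them on the HMD. The function `third_portion` and the span representation are hypothetical illustrations, not the patented calibration procedure.

```python
def third_portion(display1_span, display2_span):
    """Given the horizontal spans of content shown on two external displays
    (each a (start, end) pair in fractions of the full content width, as
    estimated from the HMD camera images), return the remaining span that
    the AR HMD should render, or None if there is no uncovered gap."""
    covered = sorted([display1_span, display2_span])
    gap_start = covered[0][1]   # right edge of the left-most display
    gap_end = covered[1][0]     # left edge of the right-most display
    if gap_start >= gap_end:
        return None             # the displays already overlap or abut
    return (gap_start, gap_end)

# Displays cover [0, 0.4) and [0.7, 1.0]; the HMD fills [0.4, 0.7).
gap = third_portion((0.0, 0.4), (0.7, 1.0))
```

In a full system this estimate would be refreshed continuously as the displays or the viewer move, which is what makes the calibration "dynamic".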

TECHNIQUES FOR GENERATING LIGHT FIELD DATA BY COMBINING MULTIPLE SYNTHESIZED VIEWPOINTS
20220408070 · 2022-12-22

Techniques for efficiently generating and displaying light-field data are disclosed. In one particular embodiment, the techniques may be realized as a method for generating light-field data, the method comprising receiving input image data, synthesizing a first plurality of viewpoints based on the input image data, synthesizing a second plurality of viewpoints based on cached image data, combining the first and second plurality of viewpoints, yielding a plurality of blended viewpoints, displaying the plurality of blended viewpoints, and caching image data associated with the plurality of blended viewpoints.
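
The per-frame loop the abstract enumerates (synthesize from input, synthesize from cache, blend, display, cache the blend) can be sketched as below. The toy cyclic-shift "synthesis" and simple averaging "blend", and the names `synthesize_viewpoints`, `blend_viewpoints`, and `light_field_frame`, are placeholders assumed for illustration; the actual synthesis and blending operators are not specified in the abstract.

```python
def synthesize_viewpoints(image, offsets):
    """Toy viewpoint synthesis: cyclically shift a 1-D image by each offset."""
    n = len(image)
    return [[image[(i + off) % n] for i in range(n)] for off in offsets]

def blend_viewpoints(first, second):
    """Average corresponding viewpoints; where the cache has no counterpart,
    keep the freshly synthesized viewpoint unchanged."""
    blended = []
    for idx, view in enumerate(first):
        if idx < len(second):
            blended.append([(a + b) / 2 for a, b in zip(view, second[idx])])
        else:
            blended.append(list(view))
    return blended

def light_field_frame(input_image, cache, offsets):
    """One frame: synthesize viewpoints from the input and from cached
    data, blend them, cache the blend for the next frame, and return it."""
    first = synthesize_viewpoints(input_image, offsets)   # from input data
    second = cache["views"]                               # from cached data
    blended = blend_viewpoints(first, second)
    cache["views"] = blended                              # cache the blend
    return blended

cache = {"views": []}                    # empty cache on the very first frame
frame1 = light_field_frame([1.0, 2.0, 3.0], cache, offsets=[0, 1])
frame2 = light_field_frame([1.0, 2.0, 3.0], cache, offsets=[0, 1])
```

The point of the structure is that the second plurality of viewpoints reuses cached work from earlier frames, so temporally stable content need not be fully re-synthesized each frame.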

APPARATUS AND METHOD FOR NOISE REDUCTION FROM A MULTI-VIEW IMAGE
20220405890 · 2022-12-22

An image processing apparatus is coupled to a plurality of image capturing devices. The image processing apparatus reduces noise in an epipolar image while generating a three-dimensional image from a multi-view image. The image processing apparatus divides the multi-view image into a flat region and a non-flat region, generates the epipolar image from the multi-view image, replaces an epipolar line in the epipolar image corresponding to the flat region with an average pixel value of the multi-view image, and replaces an epipolar line in the epipolar image corresponding to the non-flat region with a pixel value of a center-view image obtained from a centrally located image capturing device among the plurality of image capturing devices.
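
As a simplified sketch of the replacement rule, assume the epipolar-plane image is stored with one row per view and one column per position along the epipolar direction, and that a per-column flat/non-flat mask has already been computed. Flat columns are overwritten with the across-view average; non-flat columns with the center view's pixel. The function `denoise_epi` and this data layout are assumptions for illustration, not the patent's exact representation.

```python
def denoise_epi(epi, flat_mask):
    """epi: epipolar-plane image as a list of rows, one row per view,
    each row holding pixel values along the epipolar direction.
    flat_mask: per-column booleans marking flat-region positions.

    Flat columns are replaced by the average pixel value across all views;
    non-flat columns by the pixel from the centrally located view."""
    num_views, width = len(epi), len(epi[0])
    center = num_views // 2                 # centrally located device
    out = [row[:] for row in epi]
    for j in range(width):
        if flat_mask[j]:
            avg = sum(epi[v][j] for v in range(num_views)) / num_views
            for v in range(num_views):
                out[v][j] = avg             # flat: multi-view average
        else:
            for v in range(num_views):
                out[v][j] = epi[center][j]  # non-flat: center-view value
    return out

# Three views, two columns: column 0 is flat, column 1 is not.
cleaned = denoise_epi([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
                      flat_mask=[True, False])
```

Averaging suppresses sensor noise where the scene is flat (all views should agree), while the center view preserves genuine disparity structure where it is not.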

Stereoscopic visualization camera and platform

A stereoscopic imaging apparatus and platform are disclosed. An example stereoscopic imaging apparatus includes a main objective assembly and left and right lens sets defining respective parallel left and right optical paths for light received from a target surgical site through the main objective assembly. Each of the left and right lens sets includes a front lens, first and second zoom lenses configured to be movable along the optical path, and a lens barrel configured to receive the light from the second zoom lens. The example stereoscopic imaging apparatus also includes left and right image sensors configured to convert the light, after it passes through the lens barrel, into image data indicative of the received light. The example apparatus further includes a processor configured to convert the image data into stereoscopic video signals or video data for display on a display monitor.