H04N13/293

REAL-TIME OMNIDIRECTIONAL STEREO MATCHING METHOD USING MULTI-VIEW FISHEYE LENSES AND SYSTEM THEREOF
20220321859 · 2022-10-06 ·

Provided is a real-time omnidirectional stereo matching method in a camera system including a first pair of fisheye cameras including first and second fisheye cameras provided to perform shooting in opposite directions and a second pair of fisheye cameras including third and fourth fisheye cameras provided to perform shooting in opposite directions and in which the first pair of fisheye cameras and the second pair of fisheye cameras are vertically provided, including receiving fisheye images of a subject captured through the first to the fourth fisheye cameras; selecting one fisheye camera from among fisheye cameras for each pixel of a preset reference fisheye image among the fisheye images using a sweep volume for preset distance candidates; generating a distance map for all pixels using the reference fisheye image and a fisheye image of the one fisheye camera; and performing real-time stereo matching on the fisheye images using the distance map.
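The per-pixel camera selection described above can be sketched as a minimum-cost search over a sweep volume. The sketch below is a minimal illustration, not the patented method: it assumes a precomputed cost volume of shape (cameras, distance candidates, height, width) and simply picks, for each pixel of the reference fisheye image, the camera and distance candidate with the lowest matching cost.

```python
import numpy as np

def select_camera_and_distance(costs):
    """Pick, per pixel, the camera and distance candidate with the
    lowest matching cost.

    costs: array of shape (n_cameras, n_distances, H, W) holding a
    photometric matching cost between the reference fisheye image and
    each other fisheye camera, evaluated at each preset distance
    candidate of the sweep volume.
    Returns (camera_index, distance_index) maps, each of shape (H, W).
    """
    n_cams, n_dists, h, w = costs.shape
    flat = costs.reshape(n_cams * n_dists, h, w)
    best = np.argmin(flat, axis=0)   # index into flattened (cam, dist) axis
    cam_map = best // n_dists        # which camera won at each pixel
    dist_map = best % n_dists        # which distance candidate won
    return cam_map, dist_map
```

The resulting `dist_map` plays the role of the distance map used for the subsequent stereo matching; how the costs themselves are computed on the fisheye projections is outside this sketch.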

Method for image processing of image data for varying image quality levels on a two-dimensional display wall

A captured scene of a live action scene, captured while a display wall is positioned to be part of the live action scene, may be processed. To perform the processing, image data of the live action scene having a live actor and the display wall displaying a first rendering of a precursor image is received. Further, precursor metadata for the precursor image displayed on the display wall and display wall metadata for the display wall are determined. An image matte is accessed, where the image matte indicates a first portion associated with the live actor and a second portion associated with the precursor image on the display wall. Image quality levels for display wall portions of the display wall in the image data are determined, and pixels associated with the display wall in the image data are adjusted to the image quality levels.
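The matte-driven adjustment can be illustrated with a minimal sketch. This is an assumption-laden stand-in, not the patented processing: the matte is modeled as a boolean mask (True for display-wall pixels, False for the live actor) and the "image quality level" as a single scale factor applied only to wall pixels.

```python
import numpy as np

def adjust_display_wall_pixels(frame, matte, wall_quality):
    """Adjust only the display-wall portion of a frame, leaving the
    live-actor portion untouched.

    frame: float image of shape (H, W, 3); matte: boolean mask of
    shape (H, W), True where a pixel belongs to the display wall
    (the precursor image), False where it belongs to the live actor;
    wall_quality: scale factor standing in for a per-portion
    image-quality adjustment.
    """
    out = frame.copy()
    out[matte] = out[matte] * wall_quality  # touch wall pixels only
    return out
```

In practice the adjustment would be driven by the precursor and display wall metadata rather than a single scalar, but the matte-gated update pattern is the same.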

Estimation of object properties in 3D world

Objects within two-dimensional video data are modeled by three-dimensional models as a function of object type and motion through manually calibrating a two-dimensional image to the three spatial dimensions of a three-dimensional modeling cube. Calibrated three-dimensional locations of an object in motion in the two-dimensional image field of view of a video data input are determined and used to determine a heading direction of the object as a function of the camera calibration and determined movement between the determined three-dimensional locations. The two-dimensional object image is replaced in the video data input with an object-type three-dimensional polygonal model having a projected bounding box that best matches a bounding box of an image blob, the model oriented in the determined heading direction. The bounding box of the replacing model is then scaled to fit the object image blob bounding box, and rendered with extracted image features.
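Two of the steps above, deriving a heading direction from successive calibrated 3D locations and scaling the model's bounding box to the image blob's bounding box, can be sketched simply. The functions below are illustrative assumptions (x/z taken as the ground-plane axes, boxes given as (width, height) pairs), not the patent's implementation.

```python
import math

def heading_direction(p_prev, p_curr):
    """Heading angle (radians, measured in the ground plane) from two
    calibrated 3D positions (x, y, z) of the tracked object, assuming
    x and z are the ground axes."""
    dx = p_curr[0] - p_prev[0]
    dz = p_curr[2] - p_prev[2]
    return math.atan2(dz, dx)

def scale_to_blob(model_box, blob_box):
    """Per-axis scale factors that fit the 3D model's projected
    bounding box (w, h) onto the image blob's bounding box (w, h)."""
    return (blob_box[0] / model_box[0], blob_box[1] / model_box[1])
```

The scaled, heading-oriented polygonal model is then rendered with the image features extracted from the original blob.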

Image reconstruction method, system, device and computer-readable storage medium
11257283 · 2022-02-22 ·

Image reconstruction methods, systems, devices, and computer-readable storage media are provided. The method includes: acquiring a multi-angle free-perspective image combination, parameter data of the image combination, and virtual viewpoint position information based on user interaction, where the image combination includes multiple groups of texture images and depth maps that are synchronized at multiple angles and have corresponding relationships; selecting a corresponding group of texture images and depth maps in the image combination at a user interaction moment based on a preset rule according to the virtual viewpoint position information and the parameter data of the image combination; and combining and rendering the selected corresponding group of texture images and depth maps in the image combination at the user interaction moment based on the virtual viewpoint position information and parameter data corresponding to the corresponding group of texture images and depth maps in the image combination at the user interaction moment.

Microscope video processing device and medical microscope system

The present invention is intended to convert a video input from a surgical microscope into a three-dimensional video. A microscope video processing device 100 includes: a microscope video acquisition unit that acquires a microscope video output from a surgical microscope 200; a video conversion unit that converts the microscope video acquired by the microscope video acquisition unit into a three-dimensional video; a surgical instrument position determination unit that determines the position of a surgical instrument in the three-dimensional video converted by the video conversion unit; a distance calculation unit that calculates a distance between a patient's preset surgery target region and the position of the surgical instrument determined by the surgical instrument position determination unit; and a video output unit that outputs to a display unit an output video in which distance information indicative of the distance calculated by the distance calculation unit is displayed in the three-dimensional video.
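The distance calculation unit's core computation can be sketched in a few lines. This is a minimal illustration under the assumption that both the preset target region and the detected instrument tip are available as 3D points in the same coordinate frame and units (e.g. millimeters).

```python
import math

def instrument_target_distance(target, tip):
    """Euclidean distance between the preset surgery target region
    and the instrument position located in the 3D video.

    target, tip: (x, y, z) points in the same coordinate frame.
    """
    return math.dist(target, tip)

def overlay_label(distance_mm):
    """Distance string as it might be overlaid on the output video."""
    return f"{distance_mm:.1f} mm"
```

In the described device this value would be recomputed per frame as the instrument position determination unit updates the instrument's location.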

METHOD AND APPARATUS FOR UPDATING NAVIGATION MAP
20170228933 · 2017-08-10 ·

A method and an apparatus for updating a navigation map are disclosed. The method includes: fusing captured three-dimensional (3D) data and two-dimensional (2D) image data of a street view to generate 3D fused data representing the street view; and updating the navigation map in real time according to the 3D fused data. Thus, the disclosure provides a way to update the navigation map in real time.
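The fusion step can be sketched as attaching 2D image colors to captured 3D points via a projection. Everything here is an illustrative assumption (the `project` callback, the dict-based pixel lookup), not the patented pipeline.

```python
def fuse_3d_with_2d(points, image, project):
    """Attach a 2D image color to each captured 3D point.

    points: iterable of (x, y, z) points from the 3D capture;
    image: mapping from (u, v) pixel coordinates to a color value
    (a stand-in for indexing into a real image array);
    project: callback projecting a 3D point to (u, v) coordinates.
    Returns a list of ((x, y, z), color) pairs, i.e. the '3D fused
    data'; points that project outside the image are dropped.
    """
    fused = []
    for p in points:
        uv = project(p)
        if uv in image:               # keep only points with a color
            fused.append((p, image[uv]))
    return fused
```

The fused point/color pairs would then drive the real-time map update, e.g. by replacing or refreshing the corresponding street-view geometry.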

ELECTRONIC DEVICE AND METHOD FOR OUTPUTTING ELECTRONIC DOCUMENT IN ELECTRONIC DEVICE

An electronic device is provided. The electronic device includes a display configured to output a screen on which a web browser is executed, an input device comprising input circuitry configured to integrate with the display or be independent of the display, a communication circuit configured to establish a communication channel with a network via a wired or wireless communication connection, a processor configured to be electrically connected with the communication circuit, the display, and the input device, and a memory configured to store a program and instructions for the web browser and be electrically connected with the processor. The memory stores the instructions which, when executed by the processor, cause the electronic device to perform at least one operation comprising: displaying the screen where the web browser is executed on the display, receiving a web document via the communication circuit, displaying content on a first region based on a first code, and displaying an object capable of interacting with the content on a second region based on a second code.