Patent classifications
H04N13/296
Display control apparatus, method for controlling display control apparatus, and storage medium
State information indicating the states of a plurality of imaging apparatuses 100-x used for generating a virtual viewpoint image is acquired. Based on the state information, at least one image type is determined from a plurality of image types, each indicating a display format for displaying the states of the imaging apparatuses 100-x. The states of the plurality of imaging apparatuses 100-x are then displayed according to the determined image type.
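As a loose illustration of the claim language, the sketch below picks a display format ("image type") from acquired camera-state information and renders accordingly. All names (ImageType, CameraState, determine_image_type) and the selection policy are invented assumptions, not part of the abstract.

```python
# Hypothetical sketch: choosing a display format from acquired camera states.
from dataclasses import dataclass
from enum import Enum, auto

class ImageType(Enum):
    LIST_VIEW = auto()      # tabular listing of per-camera status
    MAP_OVERLAY = auto()    # states drawn at camera positions on a venue map

@dataclass
class CameraState:
    camera_id: str
    connected: bool
    error_code: int  # 0 means healthy

def determine_image_type(states: list[CameraState]) -> ImageType:
    """Pick a display format from the aggregate camera state information."""
    # Assumed policy: if any camera reports a fault, a map overlay makes the
    # faulty unit's location obvious; otherwise a compact list suffices.
    if any(s.error_code != 0 or not s.connected for s in states):
        return ImageType.MAP_OVERLAY
    return ImageType.LIST_VIEW

def display_states(states: list[CameraState]) -> None:
    image_type = determine_image_type(states)
    print(f"Rendering {len(states)} camera states as {image_type.name}")

display_states([CameraState("cam-01", True, 0), CameraState("cam-02", False, 7)])
```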
Method and system to calibrate a camera clock using flicker
A method operable by circuitry that includes an image processor, an image sensor, and another clock-dependent device. The method includes measuring a flicker using the image sensor, adjusting a clock rate of the circuitry according to the measured flicker, and operating the clock-dependent device using the adjusted clock rate.
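The abstract leaves the calibration arithmetic unstated; one plausible reading is that mains-powered lighting flickers at a known frequency (twice the grid frequency, e.g. 100 Hz on a 50 Hz grid), so counting local clock ticks across one flicker period reveals the oscillator's true rate. A minimal sketch under that assumption, with measure_flicker_period_ticks() as a hypothetical stand-in for the sensor measurement:

```python
# Minimal sketch of flicker-based clock calibration (assumed reference:
# mains lighting flickering at exactly twice the grid frequency).

NOMINAL_TICK_HZ = 24_000_000      # assumed nominal oscillator rate
TRUE_FLICKER_HZ = 100.0           # known reference: 2 x 50 Hz mains

def measure_flicker_period_ticks() -> float:
    """Placeholder: one flicker period as counted in local clock ticks."""
    return 240_480.0  # e.g. a slightly fast oscillator

def calibrated_tick_rate() -> float:
    measured_ticks = measure_flicker_period_ticks()
    # An exact clock would count NOMINAL_TICK_HZ / TRUE_FLICKER_HZ ticks per
    # flicker period; the ratio of measured to expected is the correction.
    expected_ticks = NOMINAL_TICK_HZ / TRUE_FLICKER_HZ
    correction = measured_ticks / expected_ticks
    return NOMINAL_TICK_HZ * correction  # effective true rate of the oscillator

print(f"calibrated clock rate: {calibrated_tick_rate():.0f} Hz")  # 24048000 Hz
```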
FREE VIEWPOINT VIDEO GENERATION AND INTERACTION METHOD BASED ON DEEP CONVOLUTIONAL NEURAL NETWORK
A Free Viewpoint Video (FVV) generation and interaction method based on a deep Convolutional Neural Network (CNN) includes the steps of: acquiring multi-viewpoint data of a target scene with a synchronous shooting system built around a multi-camera array, to obtain groups of synchronous video frame sequences from a plurality of viewpoints, and rectifying the baselines of the sequences at pixel level in batches; extracting, via encoding and decoding network structures, the features of each group of viewpoint images fed into a designed and trained deep CNN model, to obtain deep feature information of the scene, and combining that information with the input images to generate a virtual viewpoint image between each pair of adjacent physical viewpoints at every moment; and synthesizing all viewpoints into frames of the FVV, according to the time and spatial position of the viewpoints, by means of stitching matrices. The method eliminates the need for camera rectification and depth image calculation.
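To make the encoder-decoder structure concrete, here is a toy PyTorch sketch of a network that maps two adjacent physical viewpoints to one virtual in-between view. The layer sizes, the blending rule, and the lack of training are all assumptions for illustration; the abstract does not disclose the actual architecture.

```python
# Toy encoder-decoder echoing the abstract's structure; not the patented net.
import torch
import torch.nn as nn

class ViewSynthesisNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: extract deep features from the concatenated viewpoint pair.
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample features back to an image-sized correction.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
        x = torch.cat([left, right], dim=1)   # (N, 6, H, W)
        features = self.encoder(x)            # deep scene features
        residual = self.decoder(features)     # same spatial size as the input
        # "Combine the information with the input images": assumed here as a
        # simple blend of the two views plus a learned correction.
        return 0.5 * (left + right) + residual

net = ViewSynthesisNet()
left, right = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(net(left, right).shape)  # torch.Size([1, 3, 64, 64])
```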
Process and apparatus for the capture of plenoptic images between arbitrary planes
A process and an apparatus for the plenoptic capture of photographic or cinematographic images of an object or a 3D scene (10) of interest are based on a correlated light emitting source and correlation measurement, along the lines of "Correlation Plenoptic Imaging" (CPI). A first image sensor (Da) and a second image sensor (Db) detect images along the paths of a first light beam (a) and a second light beam (b), respectively. A processing unit (100), operating on the intensities detected by the synchronized image sensors (Da, Db), is configured to retrieve the propagation direction of light by measuring spatio-temporal correlations between the light intensities detected in the image planes of at least two arbitrary planes (P′, P″; D′b, D″a) chosen in the vicinity of the object or within the 3D scene (10).
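The core measurement in CPI is a spatio-temporal intensity correlation between the two synchronized sensors, G(x_a, x_b) = ⟨I_a I_b⟩ − ⟨I_a⟩⟨I_b⟩, estimated over a stack of frames. A hedged numerical sketch of that covariance estimate follows, using random stand-in data where a real device would use the frames from Da and Db:

```python
# Covariance of intensities from two synchronized sensors over T shots.
import numpy as np

def intensity_correlation(frames_a: np.ndarray, frames_b: np.ndarray) -> np.ndarray:
    """Return G(xa, xb) = <Ia Ib> - <Ia><Ib> for 1D sensor rows.

    frames_a: (T, Na) intensities from sensor Da over T synchronized shots.
    frames_b: (T, Nb) intensities from sensor Db over the same T shots.
    """
    mean_a = frames_a.mean(axis=0)                    # <Ia>, shape (Na,)
    mean_b = frames_b.mean(axis=0)                    # <Ib>, shape (Nb,)
    # <Ia(xa) Ib(xb)> averaged over shots, via a per-frame outer product.
    cross = np.einsum("ta,tb->ab", frames_a, frames_b) / frames_a.shape[0]
    return cross - np.outer(mean_a, mean_b)           # shape (Na, Nb)

rng = np.random.default_rng(0)
shared = rng.random((1000, 1))                        # correlated light component
g = intensity_correlation(shared + 0.1 * rng.random((1000, 64)),
                          shared + 0.1 * rng.random((1000, 48)))
print(g.shape)  # (64, 48); in CPI the structure of G encodes light direction
```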
VIRTUAL REALITY INTERACTION METHOD, DEVICE AND SYSTEM
A method and system for aligning the exposure center points of multiple cameras in a VR system are provided. The method includes: acquiring image data of a first frame type at a preset frame rate; adjusting VTS data of the first frame type, the VTS data varying with the exposure parameters, so as to fix the time interval between the exposure center point of the first frame type and an FSIN synchronization signal in the VR system; acquiring image data of a second frame type at the same preset frame rate; and adjusting VTS data of the second frame type according to the VTS data of the first frame type, fixing the time interval between the exposure center point of the second frame type and the FSIN synchronization signal, so as to complete the alignment of the cameras' exposure center points.
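The VTS adjustment can be read as simple line-count arithmetic: when the exposure time changes, the exposure center drifts relative to the FSIN pulse, and stretching or shrinking the frame's total line count (VTS) by a compensating amount cancels the drift. The sketch below assumes a simplified rolling-shutter model in which the center moves by half the exposure change; real sensor register sequences differ.

```python
# Illustrative arithmetic only: not a vendor register sequence.
NOMINAL_VTS = 1250   # assumed total lines per frame at the target frame rate

def vts_for_fixed_center(exposure_lines: int, prev_exposure_lines: int,
                         prev_vts: int = NOMINAL_VTS) -> int:
    """Shift VTS to cancel the exposure-center drift of a new exposure.

    Assumed model: the exposure center moves earlier by half of any exposure
    increase, so lengthening the frame by the same number of lines moves it
    back, keeping the center-to-FSIN interval constant.
    """
    drift_lines = (exposure_lines - prev_exposure_lines) / 2
    return round(prev_vts + drift_lines)

# Exposure grows by 100 lines -> frame temporarily stretched by 50 lines.
print(vts_for_fixed_center(exposure_lines=400, prev_exposure_lines=300))  # 1300
```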
SYSTEM AND METHOD FOR DETERMINING DEPTH PERCEPTION IN VIVO IN A SURGICAL ROBOTIC SYSTEM
A system and method for generating a depth map from image data in a surgical robotic system that employs a robotic subsystem having a camera assembly with first and second cameras for generating the image data. The system and method generate a plurality of depth maps from the image data and then convert them into a single combined depth map having associated distance data. The camera assembly can then be controlled based on the distance data in the combined depth map.
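The abstract does not say how the per-camera depth maps are merged; a common choice, used purely as an assumption in this sketch, is a per-pixel confidence-weighted average, with a central-region median serving as the distance statistic that drives the camera assembly.

```python
# Assumed fusion rule: confidence-weighted average of K depth maps.
import numpy as np

def combine_depth_maps(depth_maps: np.ndarray, confidences: np.ndarray) -> np.ndarray:
    """Confidence-weighted per-pixel average of K depth maps, each (H, W)."""
    weights = confidences / confidences.sum(axis=0, keepdims=True)
    return (depth_maps * weights).sum(axis=0)

def working_distance_mm(combined: np.ndarray) -> float:
    """Distance statistic for camera control (assumed: median depth of the
    central image region, where the surgical target usually sits)."""
    h, w = combined.shape
    center = combined[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]
    return float(np.median(center))

rng = np.random.default_rng(1)
maps = 50.0 + rng.random((2, 120, 160))   # two stereo-derived depth maps, mm
conf = 0.5 + rng.random((2, 120, 160))    # per-pixel confidence
fused = combine_depth_maps(maps, conf)
print(f"combined map {fused.shape}, distance = {working_distance_mm(fused):.1f} mm")
```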