Patent classifications
H04N13/232
SYSTEMS AND METHODS FOR MULTI-MODAL SENSING OF DEPTH IN VISION SYSTEMS FOR AUTOMATED SURGICAL ROBOTS
Systems and methods for multi-modal sensing of three-dimensional position information of the surface of an object are disclosed. In particular, multiple visualization modalities are each used to collect distinct positional information about the surface of an object. The positional information computed by each modality is combined using weighting factors to compute a final, weighted three-dimensional position. In various embodiments, a first depth may be recorded using fiducial markers, a second depth may be recorded using a structured light pattern, and a third depth may be recorded using a light-field camera. Weighting factors may be applied to each of the recorded depths and a final, weighted depth may be computed.
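The fusion step this abstract describes amounts to a per-pixel weighted average of the depth maps produced by the three modalities. A minimal sketch of that idea follows; the function name, variable names, and the choice of scalar confidences are illustrative, not taken from the patent:

```python
# Hedged sketch of weighted multi-modal depth fusion.
# All names are illustrative; the patent does not specify this API.
import numpy as np

def fuse_depths(depths, weights):
    """Combine per-modality depth maps into one weighted depth map.

    depths  : list of H x W arrays (e.g. fiducial, structured-light, light-field)
    weights : list of scalars or H x W confidence maps, one per modality
    """
    depths = np.stack([np.asarray(d, dtype=float) for d in depths])
    weights = np.stack(
        [np.broadcast_to(np.asarray(w, dtype=float), depths.shape[1:]) for w in weights]
    )
    # Normalized weighted average, computed independently at each pixel
    return (weights * depths).sum(axis=0) / weights.sum(axis=0)

# Example: three 2x2 depth maps with scalar per-modality confidences
d_fiducial = np.full((2, 2), 10.0)
d_structured = np.full((2, 2), 12.0)
d_lightfield = np.full((2, 2), 11.0)
fused = fuse_depths([d_fiducial, d_structured, d_lightfield], [0.5, 0.3, 0.2])
```

Per-pixel confidence maps (rather than scalars) would let each modality dominate in the regions where it is most reliable, which matches the abstract's notion of applying weighting factors to each recorded depth.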
LENS APPARATUS AND IMAGE PICKUP APPARATUS
A lens apparatus includes a first optical system, a second optical system, a first focus adjusting unit configured to simultaneously adjust focus of the first optical system and the second optical system, and a second focus adjusting unit configured to adjust a relative shift of focus positions of the first optical system and the second optical system. The first focus adjusting unit is connected to both the first optical system and the second optical system. The second focus adjusting unit is connected to one of the first optical system and the second optical system.
IMAGE SENSORS AND SENSING METHODS TO OBTAIN TIME-OF-FLIGHT AND PHASE DETECTION INFORMATION
Indirect time-of-flight (i-ToF) image sensor pixels, i-ToF image sensors including such pixels, stereo cameras including such image sensors, and sensing methods to obtain i-ToF detection and phase detection information using such image sensors and stereo cameras. An i-ToF image sensor pixel may comprise a plurality of sub-pixels, each sub-pixel including a photodiode, a single microlens covering the plurality of sub-pixels and a read-out circuit for extracting i-ToF phase signals of each sub-pixel individually.
INTEGRATED MICROOPTIC IMAGER, PROCESSOR, AND DISPLAY
An optical system for displaying light from a scene includes an active optical component that includes a first plurality of light directing apertures, an optical detector, a processor, a display, and a second plurality of light directing apertures. The first plurality of light directing apertures is positioned to provide an optical input to the optical detector. The optical detector is positioned to receive the optical input and convert the optical input to an electrical signal corresponding to intensity and location data. The processor is connected to receive the data from the optical detector and process the data for the display. The second plurality of light directing apertures is positioned to provide an optical output from the display.
DEPTH CODEC FOR REAL-TIME, HIGH-QUALITY LIGHT FIELD RECONSTRUCTION
Techniques to facilitate compression of depth data and real-time reconstruction of high-quality light fields. A parameter space of values for a line, pairs of endpoints on different sides of the line, and a palette index for each pixel of a pixel tile of a depth image is sampled. Values for the line, the pairs of endpoints, and the palette index that minimize an error are determined and stored.
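The per-tile fit this abstract outlines can be pictured as a small search: try candidate dividing lines and candidate depth values for each side of the line, and keep the combination with the lowest reconstruction error. The following brute-force sketch uses a tiny invented search space (a few line orientations and a flat depth value per side) purely to illustrate the minimize-error-over-parameters idea; the actual codec's parameterization with endpoint pairs and per-pixel palette indices is richer:

```python
# Illustrative brute-force tile fit: pick the (line, per-side depth) pair
# that minimizes squared reconstruction error. Names and the reduced
# search space are assumptions, not the patent's actual parameterization.
import itertools
import numpy as np

def fit_tile(tile, depth_candidates, angles=(0, 45, 90, 135)):
    h, w = tile.shape
    ys, xs = np.mgrid[0:h, 0:w]
    best = None
    for angle in angles:                      # candidate line orientations
        theta = np.deg2rad(angle)
        proj = xs * np.cos(theta) + ys * np.sin(theta)
        side = proj > proj.mean()             # split the tile along the line
        for a, b in itertools.product(depth_candidates, repeat=2):
            recon = np.where(side, a, b)      # one flat depth value per side
            err = float(((recon - tile) ** 2).sum())
            if best is None or err < best[0]:
                best = (err, angle, a, b)
    return best  # (error, line angle, depth on each side of the line)

# A tile with a horizontal depth edge is fit exactly by the 90-degree line
tile = np.array([[1.0, 1.0], [5.0, 5.0]])
err, angle, a, b = fit_tile(tile, (1.0, 5.0))
```

Replacing the flat per-side values with depths interpolated between stored endpoints would bring the sketch closer to the codec described, at the cost of a larger parameter space to sample.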
METHOD FOR GENERATING LAYERED DEPTH DATA OF A SCENE
The invention relates to layered depth data. In multi-view images, there is a large amount of redundancy between images. The Layered Depth Video (LDV) format is a well-known formatting solution for multi-view images that reduces the amount of redundant information between images. In LDV, a reference central image is selected, and the information brought by the other images of the multi-view set, mainly regions occluded in the central image, is provided. However, the LDV format contains a single horizontal occlusion layer and thus fails to render viewpoints that uncover multi-layer dis-occlusions. The invention uses light field content, which offers disparities in every direction and enables a change of viewpoint in a plurality of directions distinct from the viewing direction of the considered image, making it possible to render viewpoints that may uncover multi-layer dis-occlusions, as can occur in complex scenes viewed with a wide inter-axial distance.
Three-dimensional microscopic imaging method and system
Provided are a 3D microscopic imaging method and a 3D microscopic imaging system. The method includes: acquiring a first PSF of a 3D sample from the object plane to the plane of a main camera sensor and a second PSF of the 3D sample from the object plane to the plane of a secondary camera sensor, and generating a first forward projection matrix corresponding to the first PSF and a second forward projection matrix corresponding to the second PSF; acquiring a light field image captured by the main camera sensor and a high resolution image captured by the secondary camera sensor; and generating a reconstruction result of the 3D sample by processing the light field image, the first forward projection matrix, the high resolution image, and the second forward projection matrix according to a preset reconstruction algorithm.
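The abstract leaves the "preset algorithm" unspecified. One common way to fuse two measurements that share a forward model is an alternating multiplicative (Richardson-Lucy-style) update, in which each measurement corrects the volume estimate through its own projection matrix. The sketch below illustrates that general pattern only; it is an assumption, not necessarily the patent's algorithm, and all names are invented:

```python
# Hedged sketch of joint reconstruction from two measurements via
# alternating multiplicative updates. This is a generic technique,
# not necessarily the patent's "preset algorithm".
import numpy as np

def reconstruct(y_lf, y_hr, A_lf, A_hr, n_iters=50, eps=1e-12):
    """y_lf, y_hr : measured images, flattened to non-negative vectors
    A_lf, A_hr : forward projection matrices (measurement = A @ volume)"""
    x = np.ones(A_lf.shape[1])                # initial volume estimate
    for _ in range(n_iters):
        for A, y in ((A_lf, y_lf), (A_hr, y_hr)):
            ratio = y / (A @ x + eps)         # measured / predicted
            norm = A.sum(axis=0)              # per-voxel sensitivity
            x *= (A.T @ ratio) / (norm + eps) # multiplicative correction
    return x

# Toy example: with identity forward models, the estimate converges
# to the (consistent) measurements themselves
x = reconstruct(np.array([2.0, 3.0]), np.array([2.0, 3.0]),
                np.eye(2), np.eye(2))
```

In a real light-field microscope, `A_lf` would be tall and sparse (many angular samples per voxel) while `A_hr` would cover only the in-focus plane at high lateral resolution, which is what lets the second camera sharpen the reconstruction.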