Patent classifications
G03B35/08
Distance measuring camera
A distance measuring camera 1 includes a first optical system OS1 that collects light from a subject 100 to form a first subject image, a second optical system OS2 that collects the light from the subject 100 to form a second subject image, an imaging part S that captures the first subject image formed by the first optical system OS1 and the second subject image formed by the second optical system OS2, and a distance calculating part 4 that calculates the distance to the subject 100 based on the first and second subject images captured by the imaging part S. The distance calculating part 4 calculates the distance to the subject 100 based on an image magnification ratio between the magnification of the first subject image and the magnification of the second subject image.
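The idea of recovering distance from a magnification ratio can be sketched with a simple thin-lens model. The focal lengths, the ratio definition, and the closed-form inversion below are illustrative assumptions, not the patent's actual optics: with lateral magnification m_i = f_i / (a - f_i) for each optical system, the ratio mr = m1/m2 can be solved for the subject distance a.

```python
def magnification(focal_len, subject_dist):
    """Thin-lens lateral magnification for a subject at subject_dist."""
    return focal_len / (subject_dist - focal_len)

def distance_from_magnification_ratio(mr, f1, f2):
    """Recover subject distance a from the ratio mr = m1/m2.

    Solving mr = [f1 / (a - f1)] / [f2 / (a - f2)] for a gives:
        a = f1 * f2 * (mr - 1) / (mr * f2 - f1)
    """
    return f1 * f2 * (mr - 1.0) / (mr * f2 - f1)

# Simulate two optical systems with different (assumed) focal lengths
# observing a subject 1000 mm away, then invert the measured ratio.
f1, f2, true_dist = 50.0, 80.0, 1000.0
mr = magnification(f1, true_dist) / magnification(f2, true_dist)
est = distance_from_magnification_ratio(mr, f1, f2)
print(round(est, 3))  # recovers ~1000.0 mm
```

Note that the ratio only carries distance information when the two systems differ (here, in focal length); two identical systems would give mr = 1 at every distance.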
LIGHTING ASSEMBLY FOR PRODUCING REALISTIC PHOTO IMAGES
An example lighting assembly may comprise: a mounting frame comprising a plurality of vertical bars positioned on an imaginary cylindrical surface; a plurality of horizontal joists attached to the vertical bars; a plurality of lighting fixtures attached to the mounting frame; and a plurality of camera mounts attached to the mounting frame; wherein the lighting fixtures and camera mounts are positioned to form a pre-defined grid configuration.
Stereo viewing
The invention relates to creating and viewing stereo images, for example stereo video images, also called 3D video. At least three camera sources with overlapping fields of view are used to capture a scene so that an area of the scene is covered by at least three cameras. At the viewer, a camera pair is chosen from the multiple cameras to form a stereo pair that best matches the locations the user's eyes would have if they were at the place of the camera sources. That is, a pair is chosen so that the disparity created by the camera sources resembles the disparity the user's eyes would have at that location. If the user tilts their head, or the view orientation is otherwise altered, a new pair can be formed, for example by switching one of the cameras. The viewer device then forms the images of the video frames for the left and right eyes by picking the best source for each area of each image to achieve realistic stereo disparity.
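The pair-selection step can be sketched as a nearest-match search: given the rig's camera positions and the viewer's (virtual) eye positions, pick the ordered pair of cameras minimizing the total squared distance to the eyes. The coordinates and the cost function below are illustrative assumptions, not the patent's actual selection criterion.

```python
from itertools import permutations

def pick_stereo_pair(camera_positions, left_eye, right_eye):
    """Choose the ordered camera pair whose positions best match the
    viewer's eye positions (minimal total squared distance)."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    return min(
        permutations(range(len(camera_positions)), 2),
        key=lambda pair: sq_dist(camera_positions[pair[0]], left_eye)
                       + sq_dist(camera_positions[pair[1]], right_eye),
    )  # -> (index serving the left eye, index serving the right eye)

# Three cameras on a rig; the eyes roughly align with cameras 0 and 2.
cams = [(-0.06, 0.0, 0.0), (0.0, 0.06, 0.0), (0.06, 0.0, 0.0)]
pair = pick_stereo_pair(cams, left_eye=(-0.032, 0.0, 0.0),
                        right_eye=(0.032, 0.0, 0.0))
print(pair)  # (0, 2)
```

When the head tilts, the eye positions change and re-running the search may switch one camera of the pair, as the abstract describes.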
Image sensor modules including primary high-resolution imagers and secondary imagers
Image sensor modules include primary high-resolution imagers and secondary imagers. For example, an image sensor module may include a semiconductor chip including photosensitive regions defining, respectively, a primary camera and a secondary camera. The image sensor module may include an optical assembly that does not substantially obstruct the field-of-view of the secondary camera. Some modules include multiple secondary cameras that have a field-of-view at least as large as the field-of-view of the primary camera. Various features are described to facilitate acquisition of signals that can be used to calculate depth information.
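The depth calculation enabled by a primary/secondary camera pair is, in the simplest rectified case, standard triangulation: depth z = f * B / d for focal length f (pixels), baseline B, and disparity d (pixels). The numbers below are illustrative assumptions, not values from the patent.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulated depth for a rectified camera pair: z = f * B / d,
    with f and d in pixels and the baseline B in metres."""
    return focal_px * baseline_m / disparity_px

# e.g. an assumed f = 1200 px, a 10 mm baseline between primary and
# secondary imager, and a measured disparity of 24 px:
z = depth_from_disparity(24.0, 1200.0, 0.010)
print(z)  # 0.5 (metres)
```

The small baselines typical of such modules mean disparities shrink quickly with distance, which is why signal-acquisition features that improve disparity precision matter for depth quality.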
Stereoscopic camera with fluorescence visualization
A stereoscopic camera with fluorescence visualization is disclosed. An example stereoscopic camera includes a visible light source, a near-infrared light source, and a near-ultraviolet light source. The stereoscopic camera also includes a light filter assembly having left and right filter magazines positioned respectively along left and right optical paths and configured to selectively enable certain wavelengths of light to pass through. Each of the left and right filter magazines includes an infrared cut filter, a near-ultraviolet cut filter, and a near-infrared bandpass filter. A controller of the camera is configured to provide for a visible light mode, an indocyanine green (“ICG”) fluorescence mode, and a 5-aminolevulinic acid (“ALA”) fluorescence mode by synchronizing the activation of the light sources with the selection of the filters. A processor of the camera combines image data from the different modes to enable fluorescence emission light to be superimposed on visible light stereoscopic images.
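The source/filter synchronization can be sketched as a lookup table keyed by mode. The mode names and the specific source-filter pairings below are hypothetical readings of the abstract, not the patent's actual control logic.

```python
# Hypothetical mode table: each mode pairs an illumination source with
# the filter to rotate into both the left and right filter magazines.
MODES = {
    "visible":          {"source": "visible",          "filter": "ir_cut"},
    "icg_fluorescence": {"source": "near_infrared",    "filter": "nir_bandpass"},
    "ala_fluorescence": {"source": "near_ultraviolet", "filter": "nuv_cut"},
}

def configure(mode):
    """Return the (source, filter) pair to activate for a requested mode."""
    cfg = MODES[mode]
    return cfg["source"], cfg["filter"]

print(configure("icg_fluorescence"))  # ('near_infrared', 'nir_bandpass')
```

Keeping the pairing in one table makes it easy to guarantee that a light source is never active with a mismatched filter, which is the essence of the synchronization the controller performs.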
Projector, 3D sensing module and method for fabricating the projector
A projector, a 3D sensing module and a method for fabricating the projector are provided. The 3D sensing module includes the projector and a receiver. The projector is configured to project a light beam to an object, and the receiver is configured to receive the light beam reflected from the object. The projector includes a circuit board, electronic components, a holder and a lens module. The circuit board has a plurality of first bonding pads and a plurality of second bonding pads on a top surface of the circuit board. The electronic components are bonded on the first bonding pads. The holder has a cavity and third bonding pads bonded on and electrically connected to the second bonding pads. The lens module is disposed in the cavity of the holder.
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
An information processing apparatus includes: a plurality of stereo cameras arranged so that the directions of their baselines intersect each other; a depth estimation unit that estimates, from images captured by the plurality of stereo cameras, the depth of an object included in the captured images; and an object detection unit that detects the object based on the depth estimated by the depth estimation unit and on the reliability of that depth, the reliability being determined according to the angle between the direction of an edge line of the object and the baseline directions of the plurality of stereo cameras.
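The reliability criterion reflects the aperture problem: disparity is measured along the baseline, so an edge parallel to the baseline yields unreliable depth, while an edge perpendicular to it is most reliable. A minimal sketch, assuming reliability proportional to |sin| of the edge-to-baseline angle (the patent does not specify this exact weighting):

```python
import math

def depth_reliability(edge_angle_rad, baseline_angle_rad):
    """Reliability of depth along an edge: highest when the edge is
    perpendicular to the baseline, zero when parallel (aperture problem)."""
    return abs(math.sin(edge_angle_rad - baseline_angle_rad))

def pick_rig(edge_angle_rad, baseline_angles_rad):
    """Index of the stereo rig whose baseline gives the most reliable
    depth for an edge at the given orientation."""
    return max(range(len(baseline_angles_rad)),
               key=lambda i: depth_reliability(edge_angle_rad,
                                               baseline_angles_rad[i]))

# Two rigs with intersecting (here perpendicular) baselines: a horizontal
# edge (0 rad) is unreliable for the horizontal-baseline rig, so the
# vertical-baseline rig is preferred.
print(pick_rig(0.0, [0.0, math.pi / 2]))  # 1
```

This illustrates why the apparatus arranges the baselines to intersect: whatever the edge orientation, at least one rig sees it at a usable angle.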