Patent classifications
G02B23/2415
Rigid Scope Device
A single solid-state image sensing device is disposed for a first optical system and a second optical system provided in a rigid endoscope. A first image, formed by a first light beam emerging from the observation target and passing through the first optical system, and a second image, formed by a second light beam emerging from the observation target and passing through the second optical system, are formed on the imaging surface of the solid-state image sensing device. The solid-state image sensing device then converts the first image and the second image into electric signals. In the picture display unit, a first picture corresponding to the first image and a second picture corresponding to the second image are displayed on the display surface based on the electric signals obtained by the solid-state image sensing device.
Stereoscopic-vision endoscope optical system and endoscope using the same
A stereoscopic-vision endoscope optical system includes a pair of objective optical systems, a pair of relay optical systems, and a pair of image forming optical systems. The image forming optical system includes a first lens unit, a first optical-path bending element, and a second optical-path bending element. The objective optical system and the relay optical system are disposed in a first optical path. A second optical path is formed between the first optical-path bending element and the second optical-path bending element. A third optical path is formed between the second optical-path bending element and a final image. The second optical path is positioned farther from the central axis than the first optical path. The third optical path is positioned closer to the central axis than the second optical path.
APPARATUS FOR THE OPTICAL MANIPULATION OF A PAIR OF LANDSCAPE STEREOSCOPIC IMAGES
Apparatus (38) for the optical manipulation of a pair of landscape stereoscopic images (L, R), which apparatus (38) comprises: (i) a camera (36) which has its own focus lens (40); (ii) an enclosed housing (4), three ports (6, 8, 10) in the housing (4) with one port being a photographic interface port which forms a photographic interface (12) to the camera (36), and the other two ports being human interface ports which form a human interface (18), said three ports (6, 8, 10) allowing the light to pass from the human interface to the photographic interface (12) for camera recording, or from the photographic interface (12) to the human interface (18) for each eye of the human, in a direction parallel to that of light entering the other said interface without left-right image inversion between the photographic interface (12) and the human interface (18); and (iii) at least four reflective surfaces (20, 22, 24, 26) which direct light along three mutually perpendicular axes, each of said surfaces having an edge lying on a flat plane (28), said plane (28) also including a division line (30) between adjacent landscape stereoscopic images presented at said photographic interface (12), whereby the apparatus (38) causes landscape stereoscopic images which are side by side with a left eye image left of a right eye image and with shortest dimensions adjacent and which are at the human interface (18) to become stacked one image above the other at the photographic interface (12), and wherein: (iv) the apparatus (38) causes the left and right eye images which are stacked one image above the other at the photographic interface (12) to emerge as parallel light towards the camera (36); and (v) the focus lens of the camera (36) is on an optical axis passing through the centre of the photographic interface port and is focussed for infinity distance to receive the parallel light conveying the two stereoscopic images.
3D scanning of nasal tract with deflectable endoscope
An apparatus includes a shaft, an imaging head, and a processor. The shaft includes a distal end sized to fit through a human nostril into a human nasal cavity. The imaging head includes an image sensor assembly, a plurality of light sources, and a plurality of collimators. At least some of the light sources are positioned adjacent to the image sensor assembly. Each collimator is positioned over a corresponding light source of the plurality of light sources. The processor is configured to activate the light sources in a predetermined sequence. The image sensor assembly is configured to capture images of a surface illuminated by the light sources as the light sources are activated in the predetermined sequence.
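The capture procedure this abstract describes — activating light sources in a predetermined sequence while the image sensor assembly captures a frame per activation — can be sketched as a simple control loop. This is an illustrative sketch only; the `activate` and `capture` callables are hypothetical stand-ins for the device's hardware interfaces, not part of the disclosure.

```python
# Minimal sketch of the sequenced-illumination capture loop.
# `light_sources` is the predetermined activation sequence;
# `activate` and `capture` are hypothetical hardware callables.

def capture_sequence(light_sources, activate, capture):
    """For each source in the predetermined sequence: switch it on,
    capture one frame of the illuminated surface, switch it off."""
    frames = []
    for src in light_sources:
        activate(src, on=True)
        frames.append((src, capture()))
        activate(src, on=False)
    return frames
```

With stub functions in place of hardware, the loop yields one `(source, frame)` pair per activation, in sequence order.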
Next generation endoscope
An endoscope system includes a shaft portion having a proximal end and a distal end and defining a longitudinal axis. The system also includes a first image receiver coupled to the shaft portion. The first image receiver is directed toward a first direction to receive an image of a first portion of the interior of a lumen. The system also includes a second image receiver coupled to the shaft portion. The second image receiver is directed toward a second direction to receive an image of a second portion of the interior of the lumen. The first direction is generally opposite the second direction. The system further includes a monitor, wherein the image of the first portion and the image of the second portion are displayed simultaneously on the monitor.
Imaging system and method
An imaging device (010, 10, 110) comprises a first optical system (020, 20, 120) at a distal end of the imaging device, a second optical system (080, 80, 180) towards the proximal end of the imaging device, and a sensor (074, 74, 174) at the proximal end of the imaging device. The first and second optical systems and the sensor are aligned along a common longitudinal axis. The first optical system is or comprises one or more reflective and/or refractive optical components (24, 124; 22, 122) symmetrically and/or coaxially arranged with respect to the longitudinal axis, and the second optical system comprises one or more reflective and/or refractive optical components (24, 124; 22, 122) for focussing incident light towards the sensor. A calibration system (200) and method for calibrating such an imaging device, and a method of processing image data obtained from such an imaging device are also provided.
Continuous fiber optic functionality monitoring and self-diagnostic reporting system
Disclosed herein are a system, apparatus, and method directed to detecting damage to an optical fiber of a medical device. The optical fiber includes one or more core fibers, each including a plurality of sensors configured to (i) reflect a light signal based on received incident light, and (ii) alter the reflected light signal for use in determining a physical state of the optical fiber. The system also includes a console having a non-transitory computer-readable medium storing logic that, when executed, causes operations of providing a broadband incident light signal to the optical fiber, receiving reflected light signals of different spectral widths of the broadband incident light from one or more of the plurality of sensors, identifying at least one unexpected spectral width or a lack of an expected spectral width, and determining that damage has occurred to the optical fiber based on the identification.
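The detection step above — flagging damage on an unexpected spectral width or the absence of an expected one — amounts to matching observed reflection peaks against an expected set. The following is a hedged sketch of that comparison only; the wavelengths, tolerance, and function names are hypothetical and not taken from the disclosure.

```python
# Illustrative sketch: each sensor (e.g., a fiber Bragg grating) is
# assumed to reflect a known center wavelength within a tolerance.
# All values below are hypothetical.

EXPECTED_NM = [1540.0, 1545.0, 1550.0, 1555.0]  # expected reflection peaks (nm)
TOL_NM = 0.5                                    # matching tolerance (nm)

def detect_damage(observed_nm, expected_nm=EXPECTED_NM, tol=TOL_NM):
    """Return (damaged, missing, unexpected): peaks expected but not
    seen, and peaks seen but not expected, either of which indicates
    damage under the logic described in the abstract."""
    missing = [e for e in expected_nm
               if not any(abs(o - e) <= tol for o in observed_nm)]
    unexpected = [o for o in observed_nm
                  if not any(abs(o - e) <= tol for e in expected_nm)]
    return bool(missing or unexpected), missing, unexpected
```

A reflection set matching all expected peaks reports no damage; a missing peak or a stray peak reports damage along with which wavelengths triggered it.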
COMPOUND-EYE ENDOSCOPE
A compound-eye endoscope includes two or more image-capturing modules, a sub-frame, and an outer shell. The two or more image-capturing modules are formed in the same outer shape, and each of the image-capturing modules includes a lens barrel housing an optical system, an image sensor, and a sensor holding member that relatively fixes the lens barrel and the image sensor. The sub-frame relatively fixes each of the two or more image-capturing modules. The outer shell accommodates and fixes the sub-frame and the two or more image-capturing modules.
Method of operating observation device, observation device, and recording medium
An imaging condition set in a first region of a three-dimensional model of a subject and an imaging condition set in a second region of the three-dimensional model are different from each other. A processor of an observation device determines whether or not the imaging condition that has been set in the first region or the second region including a position on the three-dimensional model is satisfied. The position is identified on the basis of a position of the imaging device and a posture of the imaging device. The processor displays observation information on a display on the basis of a result of the determination. The observation information represents whether or not the first region or the second region including the position on the three-dimensional model has been observed.
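The determination described above — testing a position on the model against the imaging condition of whichever region contains it — can be sketched as a per-region condition lookup. This is a minimal sketch under stated assumptions: the condition is taken to be a maximum camera-to-surface distance, and all names and thresholds are hypothetical, not from the abstract.

```python
# Hedged sketch: each region of the 3D model carries its own imaging
# condition (here, a hypothetical maximum camera distance), and a
# surface position identified from the camera pose is tested against
# the condition of the region containing it.
import math

REGION_CONDITIONS = {            # illustrative per-region conditions
    "first_region": {"max_distance": 50.0},
    "second_region": {"max_distance": 20.0},
}

def is_observed(region, camera_pos, surface_pos):
    """True if the imaging condition set in `region` is satisfied for
    the surface point identified from the camera position/posture."""
    cond = REGION_CONDITIONS[region]
    return math.dist(camera_pos, surface_pos) <= cond["max_distance"]
```

Because the two regions carry different conditions, the same camera distance can count as "observed" in one region and "not observed" in the other, which is what the displayed observation information reflects.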
METHOD AND SYSTEM FOR AUTOMATICALLY OPTIMIZING 3D STEREOSCOPIC PERCEPTION, AND MEDIUM
Provided by the present invention are a method and system for automatically optimizing 3D stereoscopic perception, and a medium. The method comprises the following steps, executed successively: step 1: given the current left and right images, calculating a stereo disparity to generate a disparity map; step 2: calculating a depth value for each individual pixel by using the calculated disparity; step 3: calculating a depth distance of a target to be observed; step 4: obtaining corresponding left- and right-image displacement values by using the depth distance calculated in step 3; and step 5: applying the obtained image displacement values to a 3D display. The beneficial effect of the present invention is that the method for automatically optimizing 3D stereoscopic perception alleviates the fatigue and dizziness that are readily induced during use of a 3D endoscope.
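Steps 2–4 above follow the standard pinhole stereo relation, where depth is recovered from disparity as Z = f·b/d (focal length f in pixels, baseline b). The sketch below illustrates those steps under that assumption; the disparity map itself (step 1) is taken as given, and the displacement rule in step 4, along with all parameter values, is a hypothetical example rather than the abstract's actual formula.

```python
# Minimal sketch of steps 2-4, assuming a pinhole stereo model.
# f is focal length in pixels, b the stereo baseline; both values
# are illustrative, as is the convergence target z_screen.
import numpy as np

def depth_from_disparity(disparity, f=700.0, b=4.0e-3):
    """Step 2: per-pixel depth Z = f * b / d (disparity d in pixels)."""
    d = np.maximum(disparity, 1e-6)  # guard against division by zero
    return f * b / d

def target_depth(depth_map, roi):
    """Step 3: depth distance of the target to be observed, here the
    median depth inside a hypothetical region of interest."""
    r0, r1, c0, c1 = roi
    return float(np.median(depth_map[r0:r1, c0:c1]))

def displacement_for_depth(z, f=700.0, b=4.0e-3, z_screen=0.05):
    """Step 4 (illustrative rule): horizontal half-shift, in pixels,
    that moves the target's disparity toward a comfortable
    convergence depth z_screen."""
    return f * b * (1.0 / z - 1.0 / z_screen) / 2.0
```

Applying the resulting displacement symmetrically to the left and right images (step 5) shifts the target toward the display's comfort zone, which is how such methods reduce vergence-accommodation strain.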