Patent classifications
H04N13/257
Color night vision cameras, systems, and methods thereof
Disclosed are improved methods, systems, and devices for color night vision that reduce the number of intensifiers and/or decrease noise. In some embodiments, color night vision is provided in a system in which multiple spectral bands are maintained, filtered separately, and then recombined in a unique three-lens filtering setup. An illustrative four-camera night vision system is unique in that its first three cameras separately filter different bands using a subtractive Cyan, Magenta, and Yellow (CMY) color filtering process, while its fourth camera is used to sense either additional IR illuminators or a luminance channel to increase brightness. In some embodiments, the color night vision is implemented to distinguish details of an image in low light. The unique application of the three-lens subtractive CMY filtering allows for better photon scavenging and preservation of important color information.
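A rough sense of why subtractive filtering scavenges more photons, and how three CMY-filtered channels could be recombined into RGB, is sketched below. The ideal-filter assumption (cyan passes green+blue, magenta passes red+blue, yellow passes red+green) and the recombination algebra are illustrative only; the patent does not publish its recombination math.

```python
import numpy as np

def cmy_to_rgb(cyan, magenta, yellow):
    """Recombine three subtractive-filtered images into an RGB estimate.

    Assumes ideal filters: cyan passes G+B, magenta passes R+B,
    yellow passes R+G, so each channel collects roughly two thirds of
    the visible photons instead of one third (the photon-scavenging
    advantage). Illustrative only; not the patent's exact method.
    """
    red   = (magenta + yellow  - cyan)    / 2.0
    green = (cyan    + yellow  - magenta) / 2.0
    blue  = (cyan    + magenta - yellow)  / 2.0
    return np.clip(np.stack([red, green, blue], axis=-1), 0.0, None)

# Example with synthetic single-pixel readings
c, m, y = np.array([0.8]), np.array([0.9]), np.array([0.7])
print(cmy_to_rgb(c, m, y))  # approximate RGB estimate
```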
Surgical navigation with stereovision and associated methods
A surgical guidance system has two cameras that provide a stereo image stream of a surgical field, and a stereo viewer. The system has a 3D surface extraction module that generates a first 3D model of the surgical field from the stereo image stream; a registration module for co-registering annotating data with the first 3D model; and a stereo image enhancer for graphically overlaying at least part of the annotating data onto the stereo image stream to form an enhanced stereo image stream for display, where the enhanced stream improves a surgeon's perception of the surgical field. The registration module has an alignment refiner that adjusts registration of the annotating data with the 3D model based upon matching of features within the 3D model and features within the annotating data, and, in an embodiment, a deformation modeler that deforms the annotating data based upon a determined tissue deformation.
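As a rough illustration of the alignment-refinement step, the sketch below fits a rigid transform to matched feature points between the 3D model and the annotating data. The choice of the Kabsch algorithm and the assumption that correspondences are already established are illustrative; the patent does not limit the refiner to this method, and tissue deformation is not modeled here.

```python
import numpy as np

def refine_alignment(model_pts, annot_pts):
    """Rigid (rotation + translation) fit of annotation features to
    matched 3D-model features via the Kabsch algorithm.

    model_pts, annot_pts: (N, 3) arrays of corresponding points.
    A minimal stand-in for an "alignment refiner"; correspondences
    are assumed given.
    """
    mu_m, mu_a = model_pts.mean(axis=0), annot_pts.mean(axis=0)
    H = (annot_pts - mu_a).T @ (model_pts - mu_m)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_m - R @ mu_a
    return R, t  # apply as: aligned = annot_pts @ R.T + t
```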
Stereo imaging miniature endoscope with single imaging and conjugated multi-bandpass filters
An endoscope includes a housing with a distal end insertable into a cavity; an image capture device at the distal end to obtain 3D images and process them to form a video signal; and a folded substrate folded into a U-shape having first and second legs. The image capture device includes a detector and a lens system with right and left multi-bandpass filters, the right pass bands being complements of the left pass bands. The lens system receives the 3D images, including right and left images. The detector faces the lens system to obtain the right and left images. A processing circuit faces the proximal end behind the detector to process signals from the detector. The folded substrate carries the detector at an outer side of the first leg, facing the lens system, and the processing circuit at an outer side of the second leg, facing the proximal end.
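One way a single detector behind conjugated (complementary) multi-bandpass filters could recover separate right and left images is per-pixel linear unmixing, sketched below. The mixing matrices, their availability from calibration, and the least-squares separation are assumptions for illustration; the patent does not specify this separation step.

```python
import numpy as np

def unmix_stereo(raw, mix_left, mix_right):
    """Separate left and right views recorded through conjugated
    multi-bandpass filters on a single detector.

    raw:       (H, W, C) sensor channels per pixel.
    mix_left:  (C, 3) channel response to the left-pupil sub-bands.
    mix_right: (C, 3) channel response to the right-pupil sub-bands.
    Mixing matrices are assumed known from calibration (illustrative).
    Solves raw = mix_left @ left + mix_right @ right per pixel.
    """
    height, width, chans = raw.shape
    A = np.hstack([mix_left, mix_right])            # (C, 6)
    B = raw.reshape(-1, chans).T                    # (C, H*W)
    X, *_ = np.linalg.lstsq(A, B, rcond=None)       # (6, H*W)
    left  = X[:3].T.reshape(height, width, 3)
    right = X[3:].T.reshape(height, width, 3)
    return left, right
```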
Stereoscopic visualization camera and platform
A stereoscopic imaging apparatus and platform are disclosed. An example stereoscopic imaging apparatus includes a main objective assembly and left and right lens sets defining respective parallel left and right optical paths for light received from a target surgical site through the main objective assembly. Each of the left and right lens sets includes a front lens, first and second zoom lenses configured to be movable along the optical path, and a lens barrel configured to receive the light from the second zoom lens. The example stereoscopic imaging apparatus also includes left and right image sensors configured to convert the light, after it passes through the lens barrel, into image data indicative of the received light. The example stereoscopic visualization camera further includes a processor configured to convert the image data into stereoscopic video signals or video data for display on a display monitor.
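As a minimal sketch of that last processing step, the snippet below packs synchronized left and right sensor frames into a single side-by-side stereoscopic frame. Side-by-side packing and the frame dimensions are assumptions; the patent's processor may emit a different stereoscopic video format.

```python
import numpy as np

def to_side_by_side(left_frame, right_frame):
    """Pack synchronized left/right sensor frames into one
    side-by-side stereoscopic video frame (one common packing
    for stereo display monitors)."""
    assert left_frame.shape == right_frame.shape
    return np.concatenate([left_frame, right_frame], axis=1)

# Example: two 1080p sensor frames become one 1080x3840 stereo frame
left  = np.zeros((1080, 1920, 3), dtype=np.uint8)
right = np.zeros((1080, 1920, 3), dtype=np.uint8)
stereo = to_side_by_side(left, right)   # shape (1080, 3840, 3)
```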
FREE VIEWPOINT VIDEO GENERATION AND INTERACTION METHOD BASED ON DEEP CONVOLUTIONAL NEURAL NETWORK
A Free Viewpoint Video (FVV) generation and interaction method based on a deep Convolutional Neural Network (CNN) includes the steps of: acquiring multi-viewpoint data of a target scene with a synchronous shooting system built on a suitably arranged multi-camera array, to obtain groups of synchronous video frame sequences from a plurality of viewpoints, and rectifying the baselines of the sequences at pixel level in batches; extracting, via encoding and decoding network structures, features of each group of viewpoint images input into a designed and trained deep CNN model to obtain deep feature information of the scene, and combining that information with the input images to generate a virtual viewpoint image between each pair of adjacent physical viewpoints at every moment; and synthesizing all viewpoints into frames of the FVV, based on the time and spatial position of the viewpoints, using stitching matrices. The method eliminates the need for camera rectification and depth-image calculation.
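A toy version of the view-synthesis step is sketched below: two adjacent physical views pass through an encoder-decoder CNN that outputs one virtual in-between view. The PyTorch framing, layer sizes, and direct image-to-image prediction are illustrative assumptions; the patent's trained network and its feature-combination scheme are not reproduced here.

```python
import torch
import torch.nn as nn

class ViewSynthesisNet(nn.Module):
    """Toy encoder-decoder: two adjacent views (stacked along the
    channel axis) in, one virtual intermediate view out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, left_view, right_view):
        x = torch.cat([left_view, right_view], dim=1)   # (N, 6, H, W)
        return self.decoder(self.encoder(x))            # (N, 3, H, W)

# Synthesize a virtual viewpoint between two adjacent 256x256 camera frames
net = ViewSynthesisNet()
left, right = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
virtual = net(left, right)
```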
Electronic device and method for controlling the same
An electronic device (100) and a method for controlling the electronic device (100) are provided. The electronic device (100) includes a time-of-flight (TOF) module (20), a color camera (30), a monochrome camera (40), and a processor (10). The TOF module (20) is configured to capture a depth image of a subject. The color camera (30) is configured to capture a color image of the subject. The monochrome camera (40) is configured to capture a monochrome image of the subject. The processor (10) is configured to obtain a current brightness of ambient light in real time, and to construct a three-dimensional image of the subject according to the depth image, the color image, and the monochrome image when the current brightness is less than a first threshold.
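The brightness-gated control logic described in the abstract could look roughly like the sketch below: under the first threshold, the monochrome image supplies luminance detail, the color image supplies chroma, and the depth image supplies geometry. The threshold value, the lux unit, the fusion rule, and the above-threshold fallback are all assumptions; the patent does not give them.

```python
import numpy as np

def construct_3d_image(depth, color, mono, brightness, first_threshold=10.0):
    """Brightness-gated fusion of depth, color, and monochrome images.

    first_threshold (assumed lux value) and the simple luma/chroma
    fusion are illustrative placeholders, not the patent's method.
    """
    if brightness < first_threshold:
        # Low light: take luminance from the monochrome camera,
        # chroma from the color camera.
        luma = mono.astype(np.float32)
        chroma = color.astype(np.float32) - color.mean(axis=-1, keepdims=True)
        rgb = np.clip(luma[..., None] + chroma, 0, 255).astype(np.uint8)
    else:
        # Enough ambient light: the abstract leaves this case open;
        # a plausible fallback is to use the color camera alone.
        rgb = color
    return {"rgb": rgb, "depth": depth}  # textured data for 3D reconstruction

# Example: synthetic 480x640 frames in a dim scene (brightness 3 < 10)
depth = np.random.rand(480, 640).astype(np.float32)
color = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
mono  = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
out = construct_3d_image(depth, color, mono, brightness=3.0)
```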