Patent classification: H04N23/16
Method and apparatus for learning human pose estimation in low-light conditions
Provided is an apparatus for learning human pose estimation that configures a dataset by simultaneously capturing a well-lit image and a low-light image, performing annotation on the well-lit image, and transferring the annotation to the low-light image. By using the well-lit image in the dataset as the input of a teacher model and the low-light image as the input of a student model, the student model learns to estimate human pose with high accuracy in low-light conditions by exploiting the privileged information of the teacher model.
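The teacher-student transfer described above might be sketched as follows, assuming a heatmap-based pose estimator and mean-squared-error losses; the function name, the `alpha` weighting, and the loss form are illustrative assumptions, not the patent's claimed method:

```python
import numpy as np

def distillation_loss(student_heatmaps, teacher_heatmaps, gt_heatmaps, alpha=0.5):
    """Combined training loss for the low-light student model.

    supervised : keypoint loss against the annotations transferred from
                 the well-lit image to the paired low-light image.
    privileged : term pulling the student's heatmaps (low-light input)
                 toward the teacher's heatmaps (well-lit input).
    All arrays have shape (num_joints, H, W).
    """
    supervised = np.mean((student_heatmaps - gt_heatmaps) ** 2)
    privileged = np.mean((student_heatmaps - teacher_heatmaps) ** 2)
    return (1 - alpha) * supervised + alpha * privileged
```

In this reading, `alpha` trades off direct supervision against the teacher's privileged information; the paired capture guarantees the two inputs share the same ground-truth pose.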
MICROSCOPE ENABLING DIFFERENTIAL PHASE CONTRAST IMAGING OF OBLIQUE FOCAL PLANE AND METHOD FOR OPERATING SAME
Proposed are a microscope enabling differential phase contrast imaging of an oblique focal plane and a method for operating the same, which can image a large-area 3D surface by horizontally scanning and photographing a focal plane that is tilted by applying oblique plane microscopy (OPM) technology, which tilts the focal plane of the microscope's imaging area. The microscope includes a sample photographing module for photographing a sample and an imaging generator for generating a sample image on the basis of the image captured by the sample photographing module, wherein the sample photographing module includes a light source unit for radiating light onto the sample, an objective lens for collecting light transmitted through the sample, and an imaging information acquisition unit for obtaining imaging information of the sample from the transmitted light.
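Differential phase contrast is conventionally computed from a pair of images taken under complementary half-pupil illumination. A minimal sketch of that standard computation (not the patent's specific optical path; the function name and `eps` regularizer are illustrative):

```python
import numpy as np

def dpc_image(i_left, i_right, eps=1e-8):
    """Standard DPC contrast from two complementary half-pupil images.

    The normalized difference (I_L - I_R) / (I_L + I_R) is roughly
    proportional to the phase gradient of the sample along the axis
    separating the two illumination halves. `eps` avoids division by
    zero in dark regions.
    """
    i_left = np.asarray(i_left, dtype=float)
    i_right = np.asarray(i_right, dtype=float)
    return (i_left - i_right) / (i_left + i_right + eps)
```

Scanning the tilted focal plane horizontally and stacking such DPC frames would then build up the large-area 3D surface image the abstract describes.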
IMAGING APPARATUS
An imaging apparatus includes: a light splitting element that includes a first splitting surface that splits incident light into a first reflected light and a first transmitted light, and a second splitting surface that splits the first transmitted light into a second reflected light and a second transmitted light; a first sensor that images the first reflected light; a second sensor that images the second reflected light; and a third sensor that images the second transmitted light. One of the first sensor, the second sensor, and the third sensor is a polarization sensor, another one is a visible light sensor, and yet another one is an infrared light sensor.
MIXED REALITY FOR LASER SAFETY EYEWEAR
A system for safe mixed-reality visualization of one or more laser beams. The system comprises a headset component configured to fit over a user's eyes. The headset component may comprise a plurality of cameras configured to generate images of an environment. The headset component may further comprise a mountable optical device configured to be applied to one or more of the cameras. Each mountable optical device may comprise a mounting component and one or more inner optical elements operatively coupled to the mounting component. The one or more inner optical elements may comprise an optical filter configured to reduce saturation caused by the one or more laser beams in the image generated by the camera. The headset component may further comprise a display component, communicatively coupled to a computing device, configured to display a combined image to the eyes of the user.
Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints
Systems in accordance with embodiments of the invention can perform parallax detection and correction in images captured using array cameras. Due to the different viewpoints of the cameras, parallax results in variations in the position of objects within the captured images of the scene. Methods in accordance with embodiments of the invention provide an accurate account of the pixel disparity due to parallax between the different cameras in the array, so that appropriate scene-dependent geometric shifts can be applied to the pixels of the captured images when performing super-resolution processing. In a number of embodiments, generating depth estimates considers the similarity of pixels in multiple spectral channels. In certain embodiments, generating depth estimates involves generating a confidence map indicating the reliability of depth estimates.
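Two pieces of the pipeline above are easy to sketch: converting a disparity estimate into depth given the camera geometry, and deriving a per-pixel confidence from how clearly the best matching hypothesis wins. Both functions are illustrative assumptions about one common formulation, not the claimed implementation:

```python
import numpy as np

def depth_from_disparity(disparity, baseline, focal_length, eps=1e-8):
    """Pinhole-stereo relation: depth = baseline * focal / disparity.
    `disparity` is in pixels, `baseline` in meters, `focal_length` in pixels."""
    return baseline * focal_length / (np.asarray(disparity, dtype=float) + eps)

def confidence_map(costs):
    """Confidence from a matching-cost volume of shape (num_hypotheses, H, W).

    A depth estimate is considered reliable when the best (lowest) cost is
    clearly separated from the second-best; the ratio test below yields a
    value near 1 for unambiguous matches and near 0 for flat cost curves.
    """
    sorted_costs = np.sort(np.asarray(costs, dtype=float), axis=0)
    best, second = sorted_costs[0], sorted_costs[1]
    return 1.0 - best / (second + 1e-8)
```

The resulting confidence map can then gate which depth estimates are trusted when computing the scene-dependent geometric shifts used in super-resolution processing.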
Apparatus and method for obtaining image employing color separation lens array
Provided is an apparatus for obtaining an image, the apparatus including an image sensor and a signal processing unit. The signal processing unit includes a demosaicing unit configured to reconstruct a green signal at full pixel resolution by using the input image, a sharpening filter unit configured to generate a first image by sharpening the reconstructed green signal in each preset direction, a direction image generation unit configured to generate a second image by removing the base band and extracting only the detail band, a gray detection unit configured to detect a gray region of the white-balance-processed input image, an edge detection unit configured to detect an edge direction of the white-balance-processed input image, and a selection unit configured to generate a third image by blending the first image and the second image based on the detected gray region and the detected edge direction.
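The selection unit's final step, blending the direction-sharpened image with the detail-band image under control of the gray and edge detections, might be sketched as a per-pixel weighted average; the weighting rule and function name here are illustrative assumptions, not the patent's claimed blending logic:

```python
import numpy as np

def blend_images(first, second, gray_mask, edge_strength):
    """Per-pixel blend of the two intermediate images.

    first         : direction-sharpened image (from the sharpening filter unit)
    second        : detail-band image (from the direction image generation unit)
    gray_mask     : 1.0 where the gray detection unit flags a gray region
    edge_strength : edge response in [0, 1] from the edge detection unit

    In gray regions with strong edges the sharpened image is favored;
    elsewhere the detail-band image dominates.
    """
    w = np.asarray(gray_mask, dtype=float) * np.asarray(edge_strength, dtype=float)
    return w * np.asarray(first, dtype=float) + (1.0 - w) * np.asarray(second, dtype=float)
```

Since `w` varies per pixel, the blend can follow the detected edge direction locally rather than applying one global mix to the whole frame.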