Patent classifications
G06V10/145
Detection of structured light for depth sensing
In one embodiment, a computing system may access a first image and a second image of at least a common portion of an environment while a light emission with a predetermined emission pattern is projected by a projector. The first and second images are respectively captured by a first and a second detector that are respectively separated from the projector by a first and a second distance. The system may determine that a first portion of the first image corresponds to a second portion of the second image. The system may compute, using triangulation, a first depth value associated with the first portion and a second depth value associated with the second portion. The system may determine that the first and second depth values match in accordance with one or more predetermined criteria, and generate a depth map of the environment based on at least one of the depth values.
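The triangulation and cross-validation described in this abstract can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the focal length, baselines, disparities, and tolerance are assumed values, and `depth_from_disparity` is simply the classic `depth = f * b / d` relation.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic triangulation: depth = focal_length * baseline / disparity."""
    return focal_px * baseline_m / disparity_px

def validate_depth(d1, d2, tolerance=0.01):
    """Accept the pair only if the two estimates agree within tolerance (metres)."""
    return abs(d1 - d2) <= tolerance

f = 800.0            # focal length in pixels (assumed)
b1, b2 = 0.05, 0.10  # projector-to-detector baselines (assumed)

d1 = depth_from_disparity(f, b1, 20.0)  # first detector observes 20 px disparity
d2 = depth_from_disparity(f, b2, 40.0)  # second detector observes 40 px disparity

if validate_depth(d1, d2):
    depth_map_value = (d1 + d2) / 2  # fuse the matching estimates into the depth map
```

Because the two detectors sit at different baselines, a genuine correspondence yields proportionally different disparities but the same depth, which is what the match criterion exploits.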
Systems and Methods of Locating a Control Object Appendage in Three Dimensional (3D) Space
Methods and systems for capturing motion and/or determining the shapes and positions of one or more objects in 3D space utilize cross-sections thereof. In various embodiments, images of the cross-sections are captured using a camera based on reflections therefrom or shadows cast thereby.
MEDICAL IMAGE PROCESSING APPARATUS
Provided is a medical image processing apparatus that generates a color image by using one type of specific color image obtained by imaging a subject with specific monochromatic light. The medical image processing apparatus (10) includes an image acquisition unit (medical image acquisition unit (11)) that acquires a specific color image (56) obtained by imaging a subject with specific monochromatic light, and a color image generation unit (12) that generates a color image from the specific color image by assigning the specific color image (56) to a plurality of color channels and adjusting a balance of each of the color channels.
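The channel-assignment step this abstract describes can be sketched as follows. This is a hedged illustration only: the per-channel gains are assumed values standing in for the "balance adjustment" of the abstract, and the nested-list image representation is chosen for simplicity.

```python
def colorize(mono, gains=(1.0, 0.8, 0.6)):
    """Assign one monochrome image to three colour channels, scaling each
    channel by a gain and clamping to the 8-bit range.
    mono: 2-D list of intensities in [0, 255]; returns H x W x 3 nested lists."""
    return [
        [[min(255, int(px * g)) for g in gains] for px in row]
        for row in mono
    ]

mono = [[100, 200], [50, 255]]
rgb = colorize(mono)
# rgb[0][0] == [100, 80, 60]
```

Each output pixel is the same monochrome intensity replicated into R, G, and B, with the gains controlling the overall colour balance.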
Method for In-Ovo Fertilisation Determination and Gender Determination on a Closed Egg
The invention relates to the fields of process engineering and agriculture and concerns a method for in-ovo fertilisation determination and gender determination on a closed egg, the aim of the invention being to specify such a method. This aim is achieved by a method in which a closed egg is positioned and candled and/or illuminated; next, an image of the closed egg is recorded, the captured data are evaluated, and the position of the cardiovascular system located in the egg is calculated. A detection unit is adjusted to the calculated position of the cardiovascular system by means of a positioning unit, and subsequently the blood is stimulated; the blood-specific and blood-foreign absorption spectra are then detected and selected, the fertilisation is ascertained, the spectra containing blood-foreign information are compensated by a compensation method, and the spectra are classified for gender determination.
METHOD FOR AUGMENTING A SCENE IN REAL SPACE WITH PROJECTED VISUAL CONTENT
One variation of the method includes: serving setup frames to a projector facing a scene; at a peripheral control module comprising a camera facing the scene, recording a set of images during projection of corresponding setup frames onto the scene by the projector and a baseline image depicting the scene in the field of view of the camera; calculating a pixel correspondence map based on the set of images and the setup frames; transforming the baseline image into a corrected color image, depicting the scene in the field of view of the camera, based on the pixel correspondence map; linking visual assets to discrete regions in the corrected color image; generating augmented reality frames depicting the visual assets aligned with these discrete regions; and serving the augmented reality frames to the projector to cast depictions of the visual assets onto surfaces, in the scene, corresponding to these discrete regions.
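One plausible realisation of the pixel-correspondence step is binary-stripe (Gray-code-style) decoding: each setup frame contributes one bit per camera pixel, and the concatenated bits identify the projector column that pixel sees. This is a hedged sketch under that assumption; the actual setup frames and decoding in the patent are not specified here, and the captured bit values are invented for illustration.

```python
def decode_correspondence(captured_bits):
    """captured_bits: for each camera pixel, a list of bits (MSB first), one bit
    per setup frame, indicating whether that pixel saw light in that frame.
    Returns the projector column index each camera pixel corresponds to."""
    return [int("".join(str(b) for b in bits), 2) for bits in captured_bits]

# Two camera pixels observed over three binary-stripe setup frames:
captured = [[1, 0, 1], [0, 1, 1]]
columns = decode_correspondence(captured)  # [5, 3]
```

With the per-pixel projector coordinates in hand, the baseline image can be warped into the projector's frame, which is what the corrected color image represents.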
ANALYSIS AND DEEP LEARNING MODELING OF SENSOR-BASED OBJECT DETECTION DATA IN BOUNDED AQUATIC ENVIRONMENTS
Techniques for analysis and deep learning modeling of sensor-based object detection data in bounded aquatic environments are described, including capturing an image from a sensor disposed substantially above a waterline, the sensor being housed in a structure electrically coupled to a light housing; converting the image into data, the data being digitally encoded; evaluating the data to separate background data from foreground data; generating tracking data from the data after the background data is removed, the tracking data being evaluated to determine whether a head or a body is detected by comparing the tracking data to classifier data; tracking the head or the body relative to the waterline if the head or the body is detected in the tracking data; and determining a state associated with the head or the body.
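The background/foreground separation step can be sketched with a simple running-average background model and thresholded differencing. This is only a minimal illustration of the general technique, assuming a 1-D pixel row for brevity; the learning rate `alpha` and the `threshold` are assumed parameters, not values from the patent.

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential moving average background model: the background slowly
    absorbs whatever the sensor currently sees."""
    return [b * (1 - alpha) + f * alpha for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, threshold=25):
    """1 where the frame departs from the background model, else 0."""
    return [1 if abs(f - b) > threshold else 0 for b, f in zip(bg, frame)]

bg = [10.0, 10.0, 10.0]
frame = [10.0, 200.0, 12.0]        # a bright foreground object at pixel 1
mask = foreground_mask(bg, frame)  # [0, 1, 0]
bg = update_background(bg, frame)  # background model updated for the next frame
```

Pixels flagged in the mask would then be grouped and compared against classifier data to decide whether they form a head or a body.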
Using a scene illuminating infrared emitter array in a video monitoring camera for depth determination
A method generates depth maps at a camera having illuminators, a lens assembly, an image sensing element, a processor, and memory. The illuminators operate in a first mode to provide illumination, the lens assembly focuses incident light on the image sensing element, the memory stores programs for execution by the processor, and the processor executes the programs to control operation of the camera. The method reconfigures the illuminators to operate in a second mode, where each of a plurality of subsets of the illuminators provides illumination of a scene separately. For each subset, the process activates the illuminators in the subset without activating illuminators not in the subset and receives reflected illumination from the scene incident on the lens assembly and focused onto the image sensing element. The measured light intensity values of the received reflected illumination at the image sensing element are transmitted to a remote server for processing.
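The per-subset capture loop of the second operating mode can be sketched as below. The hardware calls (`activate`, `capture`, `transmit`) are placeholder callables standing in for the camera's illuminator driver, sensor readout, and server upload; none of them is a real API from the patent.

```python
def capture_per_subset(illuminators, subsets, activate, capture, transmit):
    """For each subset: light only that subset, read the sensor, and send the
    measured intensities to the remote server for depth processing."""
    for subset in subsets:
        for led in illuminators:
            activate(led, on=(led in subset))  # only this subset illuminates the scene
        transmit(subset, capture())            # ship intensities for server-side depth

# Usage with stub hardware callables (assumed illuminator names):
log = []
leds = ["A", "B", "C", "D"]
subsets = [{"A", "B"}, {"C", "D"}]
capture_per_subset(
    leds, subsets,
    activate=lambda led, on: log.append((led, on)),
    capture=lambda: [0.5] * 4,
    transmit=lambda s, img: log.append(("tx", sorted(s))),
)
```

Because each subset illuminates the scene from a different position, the resulting intensity images carry the shading variation the server needs to infer depth.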
Eye tracking system with holographic film decoder
A volume holographic film (such as a photopolymer) that is pre-recorded with patterns is subsequently used to encode LED or low-power laser light reflections from an eye into a binary pattern that can be read at very high speeds by a relatively simple complementary metal-oxide-semiconductor (CMOS) sensor, which may be similar to a high-framerate, low-resolution mouse sensor. The low-resolution mono images from the film are translated into eye poses using, for instance, a look-up table that correlates binary patterns to X, Y positions, or using a pre-trained convolutional neural network to robustly interpret many variations of the binary patterns for conversion to X, Y positions.
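The look-up-table variant of the decode is straightforward to sketch: a binary pattern read off the sensor indexes directly into a table of (X, Y) eye poses. The table entries below are invented for illustration; a real system would populate the table during calibration.

```python
# Hypothetical calibration table: binary sensor pattern -> (X, Y) eye pose.
POSE_LUT = {
    0b1010: (0.10, -0.05),
    0b0110: (0.00, 0.20),
    0b1100: (-0.15, 0.05),
}

def decode_pose(pattern, lut=POSE_LUT):
    """Return the (X, Y) gaze position for a binary sensor pattern,
    or None for an unrecognised pattern."""
    return lut.get(pattern)

decode_pose(0b0110)  # -> (0.0, 0.2)
```

A constant-time dictionary lookup is what makes the very high read-out speeds mentioned in the abstract plausible; the convolutional-network variant trades that speed for robustness to pattern variation.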
System and method for biometrics identification and pupillometry
A system for photonic illumination for biometric identification and pupillometry. The system includes at least one light source configured to emit a coherent beam comprising a spatial superposition of at least a first wavelength and a second wavelength, where the first wavelength is in the visible spectrum and the second wavelength is in the infrared spectrum. The system further includes an LCD dot array disposed along the optical path of the beam such that the beam provides a spatially selective illumination having a cross section comprising an array of discrete pixels, and an optical system for illuminating a human eye with the spatially selective illumination and for reflecting infrared light in the spatially selective illumination towards a camera disposed along an optical axis, such that an image obtained by the camera is a superimposition of the illuminated eye and the infrared light reflected by the optical system.
TOUCH PANEL DEVICE
A touch panel device according to an embodiment of the present invention is provided with: a panel having a contact surface for receiving a contact; a light emitting portion that inputs, to the panel, input light that is transmitted through the panel; a light receiving portion that detects output light including reflected light of evanescent light that has been generated, on the contact surface, from the input light; and a control portion that calculates and outputs a contact position on the basis of the output light detected by the light receiving portion and a relationship, stored in advance, between positions on the contact surface and the output light.
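The control portion's position calculation relies on a stored relationship between contact positions and output light; one plausible realisation is a nearest-neighbour match against a calibration table, sketched below. The table entries, positions, and light signatures are all assumed values for illustration, not data from the patent.

```python
# Hypothetical calibration table: contact position -> expected output-light signature.
CALIBRATION = {
    (10.0, 20.0): [0.9, 0.1, 0.0],
    (50.0, 20.0): [0.1, 0.9, 0.1],
    (90.0, 60.0): [0.0, 0.2, 0.9],
}

def locate_contact(measured, table=CALIBRATION):
    """Return the stored contact position whose output-light signature is
    closest (squared L2 distance) to the measured signature."""
    def dist(sig):
        return sum((m - s) ** 2 for m, s in zip(measured, sig))
    return min(table, key=lambda pos: dist(table[pos]))

locate_contact([0.15, 0.85, 0.1])  # -> (50.0, 20.0)
```

A denser calibration table, or interpolation between the nearest entries, would refine the reported contact position beyond the stored grid.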