Patent classifications
G06V10/145
Apparatus and methods for safe navigation of robotic devices
Apparatus and methods for navigation of a robotic device configured to operate in an environment comprising objects and/or persons. Locations of objects and/or persons may change prior to and/or during operation of the robot. In one embodiment, a bistatic sensor comprises a transmitter and a receiver. The receiver may be spatially displaced from the transmitter. The transmitter may project a pattern on a surface in the direction of robot movement. In one variant, the pattern comprises an encoded portion and an information portion. The information portion may be used to communicate information related to robot movement to one or more persons. The encoded portion may be used to determine the presence of one or more objects in the path of the robot. The receiver may sample a reflected pattern and compare it with the transmitted pattern. Based on a similarity measure breaching a threshold, an indication of object presence may be produced.
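The comparison step above can be sketched as a similarity check between the transmitted and reflected patterns. This is a minimal illustration, assuming normalized cross-correlation as the similarity measure and a hypothetical threshold; the abstract does not specify either.

```python
import numpy as np

def object_present(transmitted: np.ndarray, reflected: np.ndarray,
                   threshold: float = 0.8) -> bool:
    """Compare the sampled reflection with the transmitted pattern.

    Returns True (object indicated) when the similarity measure breaches
    the threshold, i.e. the reflection no longer matches the projected
    pattern well enough. Normalized cross-correlation is one possible
    similarity measure; the threshold value here is an assumption.
    """
    t = (transmitted - transmitted.mean()) / (transmitted.std() + 1e-12)
    r = (reflected - reflected.mean()) / (reflected.std() + 1e-12)
    similarity = float(np.mean(t * r))
    return similarity < threshold

pattern = np.tile([0.0, 1.0], 50)
print(object_present(pattern, pattern))      # False: reflection matches
print(object_present(pattern, 1 - pattern))  # True: pattern disrupted
```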
Method and apparatus for automated plant necrosis
A method of real-time plant selection and removal from a plant field, including: capturing a first image of a first section of the plant field; segmenting the first image into regions indicative of individual plants within the first section; selecting the optimal plants for retention based on the first image and the previously thinned plant field sections; sending instructions to a plant removal mechanism to remove the plants corresponding to the unselected regions of the first image before the machine passes them; and repeating the aforementioned steps for a second section of the plant field adjacent the first section in the direction of machine travel.
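The selection step can be sketched as a greedy thinning pass over segmented plant positions. The function name, the centimeter units, and the minimum-spacing criterion are illustrative assumptions, not the patented selection criterion; the `last_kept` argument stands in for state carried over from previously thinned sections.

```python
# Hypothetical sketch: given plant centroids (cm along the row) from
# image segmentation, greedily retain plants at a minimum spacing and
# mark the rest for removal.
def select_for_retention(centroids, min_spacing_cm=15.0, last_kept=None):
    """Return (kept, removed) lists; `last_kept` carries state from the
    previously thinned section so spacing holds across section borders."""
    kept, removed = [], []
    for x in sorted(centroids):
        if last_kept is None or x - last_kept >= min_spacing_cm:
            kept.append(x)
            last_kept = x
        else:
            removed.append(x)
    return kept, removed

kept, removed = select_for_retention([2.0, 10.0, 20.0, 24.0, 40.0])
print(kept)     # [2.0, 20.0, 40.0]
print(removed)  # [10.0, 24.0]
```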
INSPECTION METHOD AND INSPECTION APPARATUS
An inspection method according to the embodiments includes applying light of a light source to an inspection target; receiving light from the inspection target to obtain a first image of the inspection target by a sensor; based on an image of a first pattern comprising repetitive patterns unresolvable at the wavelength of the light source in the first image, calculating a deviation of luminance values with respect to each of the first regions in the first pattern by a processor; obtaining a second image of the inspection target by the sensor; correcting luminance values of the second image by the processor based on the deviations of the luminance values; and comparing the repetitive patterns of the corrected second image with each other by a comparer.
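The correction step can be sketched as follows, under the assumption that the first image of the sub-resolution pattern should appear uniform, so each region's deviation from the global mean is treated as a sensor or illumination artifact and subtracted from the second image before comparison. Region shape and the square-block layout are assumptions for illustration.

```python
import numpy as np

def region_deviations(first_image, region_size):
    """Per-region mean luminance deviation from the global mean."""
    h, w = first_image.shape
    rs = region_size
    dev = np.zeros((h // rs, w // rs))
    global_mean = first_image.mean()
    for i in range(h // rs):
        for j in range(w // rs):
            block = first_image[i*rs:(i+1)*rs, j*rs:(j+1)*rs]
            dev[i, j] = block.mean() - global_mean
    return dev

def correct(second_image, dev, region_size):
    """Subtract each region's deviation from the second image."""
    corrected = second_image.astype(float).copy()
    rs = region_size
    for i in range(dev.shape[0]):
        for j in range(dev.shape[1]):
            corrected[i*rs:(i+1)*rs, j*rs:(j+1)*rs] -= dev[i, j]
    return corrected

first = np.full((4, 4), 100.0)
first[:2, :2] = 104.0                 # one region reads too bright
dev = region_deviations(first, 2)
corrected = correct(first.copy(), dev, 2)
print(np.ptp(corrected))              # 0.0: the region artifact is removed
```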
Methods for optical amplified imaging using a two-dimensional spectral brush
An apparatus and method for ultrafast real-time optical imaging that can be used for imaging dynamic events such as microfluidics or laser surgery is provided. The apparatus and methods encode spatial information from a sample into a back reflection of a two-dimensional spectral brush that is generated from a light source with a two-dimensional disperser and mapped into the time domain with a temporal disperser. The temporal waveform is preferably captured by an optical detector and converted to an electrical signal that is digitized and processed to provide two-dimensional and three-dimensional images. The produced signals can be optically or electronically amplified. Detection may be improved with correlation matching against a database in the time domain or the spatial domain. Embodiments for endoscopy, microscopy and simultaneous imaging and laser ablation with a single fiber are illustrated.
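The time-domain correlation-matching step can be sketched as picking the database waveform with the highest normalized cross-correlation against the captured temporal waveform. The database contents and names here are invented for illustration.

```python
import numpy as np

# Hypothetical sketch of correlation matching in the time domain:
# return the database entry best correlated with the captured waveform.
def best_match(waveform, database):
    def norm(v):
        v = v - v.mean()
        return v / (np.linalg.norm(v) + 1e-12)
    scores = {name: float(norm(waveform) @ norm(ref))
              for name, ref in database.items()}
    return max(scores, key=scores.get)

t = np.linspace(0, 1, 200)
db = {"edge": np.sign(t - 0.5), "ripple": np.sin(20 * t)}
print(best_match(np.sign(t - 0.5) + 0.05, db))  # edge
```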
MACHINE LEARNING-BASED GENOTYPING PROCESS OUTCOME PREDICTION USING AGGREGATE METRICS
The technology disclosed relates to systems and methods for avoiding post-processing of an image from extension of probes on beads during a sample evaluation run. The method includes receiving an image of beads in an array of tiles on a beadchip. The method includes calculating averages, over each tile in an image channel, of full width at half maximum (FWHM) values of images of the beads. The method includes predicting, from the average FWHM values of the tiles, a likelihood-of-failure score for the sample evaluation run. A trained classifier can be used to predict the likelihood-of-failure score. The method includes reporting the likelihood-of-failure score for the sample evaluation run.
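The aggregation-then-prediction flow can be sketched as below. The per-tile averaging follows the abstract; the logistic model is a stand-in assumption for the trained classifier, and the weight and focus values are invented.

```python
import numpy as np

def tile_average_fwhm(fwhm_values_per_tile):
    """fwhm_values_per_tile: list of arrays, one per tile in a channel."""
    return np.array([np.mean(v) for v in fwhm_values_per_tile])

def likelihood_of_failure(avg_fwhm, weight=2.0, focus_fwhm=3.0):
    # Stand-in for the trained classifier: larger average FWHM
    # (blurrier bead images) maps to a higher failure score.
    z = weight * (avg_fwhm.mean() - focus_fwhm)
    return 1.0 / (1.0 + np.exp(-z))

tiles = [np.array([2.9, 3.1]), np.array([3.0, 3.0])]
score = likelihood_of_failure(tile_average_fwhm(tiles))
print(round(score, 2))  # 0.5 for in-focus tiles
```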
METHOD AND SYSTEM FOR IDENTIFYING AN INDIVIDUAL WITH INCREASED BODY TEMPERATURE
A method of identifying one or more individuals with increased body temperature in a group of individuals, the method comprising the steps of: providing a scene (50) comprising one or more individuals; acquiring a plurality of successive sets of images of the scene (60), a set comprising a visual image and a thermal image, wherein in a set of images a content of the thermal image corresponds to a content of the visual image; detecting a feature of at least one of the individuals in at least one of the visual images (70); tracking at least one of the individuals based on the detected feature in the respective visual images of at least two of the sets of images (80); selecting a subset of the sets of images (100); using the feature in determining a measured temperature associated with the individual based on at least one of the thermal images of the subset (110); and, if the measured temperature exceeds a threshold value, providing an alarm (120), wherein providing the alarm comprises indicating the individual(s) associated with the alarm (130).
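The final decision step can be sketched as follows, assuming each tracked individual has temperature readings taken from the selected subset of thermal images. The median aggregation and the threshold value are assumptions; the abstract only requires that a measured temperature be compared against a threshold and the individual be indicated.

```python
from statistics import median

# Hypothetical sketch: per tracked individual, aggregate the temperatures
# read from the subset of thermal images and alarm on threshold breach.
def check_individuals(readings, threshold_c=37.5):
    """readings: {individual_id: [temps from subset of thermal images]}"""
    alarms = []
    for ident, temps in readings.items():
        measured = median(temps)   # robust against a single bad frame
        if measured > threshold_c:
            alarms.append((ident, measured))
    return alarms

print(check_individuals({"person-1": [36.8, 36.9, 37.0],
                         "person-2": [37.9, 38.1, 38.0]}))
# [('person-2', 38.0)]
```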
Multi-distance information processing using retroreflected light properties
In some examples, a method may include receiving, by a light capture device, retroreflected light that indicates at least one retroreflective property of a retroreflective article, wherein the retroreflected light is captured at a first distance. The method may include determining a first set of information based at least in part on the at least one retroreflective property of the retroreflective article. The method may include receiving, from the light capture device, an image that includes at least one object, wherein the image is captured at a second distance. The method may include determining, based at least in part on a spatially resolvable property of the retroreflective article, a second set of information that corresponds to the object in the image. The method may include performing, by a computing device, at least one operation based at least in part on the second set of information.
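The two-stage, two-distance decode can be sketched as a coarse class read at the longer distance followed by a finer decode within that class at the closer distance. All codes and labels below are invented placeholders for the first and second sets of information.

```python
# Sketch with invented codes: the retroreflective property read at the
# first (longer) distance selects a coarse class; the spatially
# resolvable property read at the second (closer) distance is decoded
# within that class to yield the second set of information.
FAR_CODES = {0b01: "traffic-sign", 0b10: "worker-vest"}
NEAR_CODES = {("traffic-sign", 0x2A): "speed-limit-40",
              ("worker-vest", 0x07): "crew-id-7"}

def decode(far_bits, near_bits):
    first = FAR_CODES.get(far_bits)
    if first is None:
        return None, None
    second = NEAR_CODES.get((first, near_bits))
    return first, second

print(decode(0b01, 0x2A))  # ('traffic-sign', 'speed-limit-40')
```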
Eye tracking system with holographic film decoder
A volume holographic film (such as a photopolymer), pre-recorded with patterns, is used to encode LED or low-power laser light reflections from an eye into a binary pattern that can be read at very high speeds by a relatively simple complementary metal-oxide-semiconductor (CMOS) sensor, similar to a high-framerate, low-resolution mouse sensor. The low-resolution mono images from the film are translated into eye poses using, for instance, a lookup table that correlates binary patterns to X, Y positions, or a pre-trained convolutional neural network that robustly interprets many variations of the binary patterns for conversion to X, Y positions.
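The lookup-table path can be sketched as thresholding the low-resolution mono frame into a binary pattern and mapping it to an (X, Y) eye pose. The patterns, poses, and threshold below are invented for illustration.

```python
# Minimal sketch of the lookup-table decode: a tiny mono frame is
# reduced to a binary pattern, and a pre-built table maps the pattern
# to an (X, Y) eye pose.
def frame_to_pattern(frame, threshold=128):
    """Flatten a small mono frame to a tuple of 0/1 values."""
    return tuple(1 if px >= threshold else 0
                 for row in frame for px in row)

LUT = {
    (0, 1, 1, 0): (0.0, 0.0),    # gaze centered (invented entry)
    (1, 1, 0, 0): (-0.4, 0.1),   # gaze left (invented entry)
}

def eye_pose(frame):
    return LUT.get(frame_to_pattern(frame))

print(eye_pose([[30, 200], [210, 40]]))  # (0.0, 0.0)
```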