Patent classifications
G06V10/147
Method and device for eye metric acquisition
The present disclosure relates to a method and a device for acquisition of a metric of an eye (1) located in an acquisition space (29). The device comprises at least one light source (11) configured to emit light towards the acquisition space, a camera (15) configured to receive light from the acquisition space (29) to generate image data, and an analyzing unit (14) configured to extract at least one metric from the image data. The camera (15) is configured to receive light from the acquisition space via at least two light paths (17, 19) which are differently angled with respect to the optical axis of the camera, the light of at least one path being received via a first mirror (21). The camera receives light from an overlapping portion of the acquisition space via the first and second paths, so as to allow the camera to receive at least two representations of a single eye. This metric may be used for e.g. eye tracking or autorefraction/accommodation.
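Receiving two differently angled representations of the same eye enables, among other things, locating the eye by triangulation. The sketch below is illustrative only: the ray origins and angles are assumed for the example and are not taken from the patent, which does not specify this computation.

```python
# Illustrative only: two angled light paths give the camera two apparent
# directions to the same eye; intersecting the corresponding rays yields
# the eye's position. Geometry here (origins, angles) is assumed.
import math

def triangulate_2d(p1, theta1, p2, theta2):
    """Intersect two rays, each given as (origin point, heading angle),
    in the plane, and return the intersection point."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve p1 + t*d1 == p2 + s*d2 for t via the 2D cross product
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Eye at (0, 1): a direct view from the origin plus a second, mirrored path
eye = triangulate_2d((0.0, 0.0), math.pi / 2, (1.0, 0.0), 3 * math.pi / 4)
```

The same intersection generalizes to 3D as a least-squares closest point between two skew rays; the planar case keeps the example short.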
Methods and arrangements for identifying objects
In some arrangements, product packaging is digitally watermarked over most of its extent to facilitate high-throughput item identification at retail checkouts. Imagery captured by conventional or plenoptic cameras can be processed (e.g., by GPUs) to derive several different perspective-transformed views—further minimizing the need to manually reposition items for identification. Crinkles and other deformations in product packaging can be optically sensed, allowing such surfaces to be virtually flattened to aid identification. Piles of items can be 3D-modelled and virtually segmented into geometric primitives to aid identification, and to discover locations of obscured items. Other data (e.g., including data from sensors in aisles, shelves and carts, and gaze tracking for clues about visual saliency) can be used in assessing identification hypotheses about an item. Logos may be identified and used—or ignored—in product identification. A great variety of other features and arrangements are also detailed.
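Deriving perspective-transformed views of captured imagery amounts to applying a 3x3 homography to image coordinates. A minimal sketch, applying an assumed homography to the corner points of an image region (a full GPU image warp is out of scope here, and the matrix values are purely illustrative):

```python
# Hedged sketch: a perspective-transformed view is produced by mapping
# coordinates through a 3x3 homography H in homogeneous coordinates.
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through homography H and return Nx2 results."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # lift to homogeneous
    mapped = pts_h @ H.T                               # apply H
    return mapped[:, :2] / mapped[:, 2:3]              # divide out w

# A mild perspective tilt: identity plus a small projective term
H = np.array([[1.0,   0.0, 0.0],
              [0.0,   1.0, 0.0],
              [0.001, 0.0, 1.0]])
corners = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 100.0], [0.0, 100.0]])
warped = apply_homography(H, corners)
```

In practice, several candidate homographies would be applied to whole images (e.g., on a GPU) and each resulting view passed to the watermark detector.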
System and method for refining dimensions of a generally cuboidal 3D object imaged by 3D vision system and controls for the same
A system and method for estimating dimensions of an approximately cuboidal object from a 3D image of the object acquired by an image sensor of the vision system processor is provided. An identification module, associated with the vision system processor, automatically identifies a 3D region in the 3D image that contains the cuboidal object. A selection module, associated with the vision system processor, automatically selects 3D image data from the 3D image that corresponds to approximate faces or boundaries of the cuboidal object. An analysis module statistically analyzes, and generates statistics for, the selected 3D image data that correspond to approximate cuboidal object faces or boundaries. A refinement module chooses statistics that correspond to improved cuboidal dimensions from among cuboidal object length, width and height. The improved cuboidal dimensions are provided as dimensions for the object. A user interface displays a plurality of interface screens for setup and runtime operation.
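One way to read "statistically analyzes the selected 3D image data" is to take robust per-axis statistics of the points rather than raw extrema, which noise and outliers corrupt. The sketch below is an assumption about what such an analysis could look like, not the patent's method; the percentile choices are illustrative.

```python
# Hypothetical sketch: refine cuboid dimensions from noisy 3D points by
# using per-axis percentile statistics instead of outlier-sensitive min/max.
# The function name and parameters are illustrative, not from the patent.
import numpy as np

def refine_cuboid_dims(points, lo=2.0, hi=98.0):
    """Estimate length, width, height of an approximately axis-aligned
    cuboid from an Nx3 point cloud, sorted longest-first."""
    points = np.asarray(points, dtype=float)
    lows = np.percentile(points, lo, axis=0)    # robust lower face per axis
    highs = np.percentile(points, hi, axis=0)   # robust upper face per axis
    dims = highs - lows                         # extent along x, y, z
    return tuple(sorted(dims, reverse=True))    # length >= width >= height

# Example: a noisy 2 x 1 x 0.5 box plus a few gross outliers
rng = np.random.default_rng(0)
box = rng.uniform([0, 0, 0], [2, 1, 0.5], size=(5000, 3))
outliers = np.array([[10.0, 10.0, 10.0], [-5.0, -5.0, -5.0]])
dims = refine_cuboid_dims(np.vstack([box, outliers]))
```

A real system would first rotate the points into the box's own frame (e.g., via a fitted plane or principal axes) before taking per-axis statistics.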
Techniques for predictive sensor reconfiguration
Systems and methods for optimizing sensory signal capturing by reconfiguring robotic device configurations. A method includes determining at least one predicted future sensor reading for a robotic device based on navigation path data of the robotic device, wherein the robotic device is deployed with at least one sensor, wherein each predicted future sensor reading is an expected value of a future sensory signal; determining an optimized sensor configuration based on the at least one predicted future sensor reading, wherein the optimized sensor configuration optimizes capturing of sensor signals by the at least one sensor; and reconfiguring the at least one sensor based on the optimized sensor configuration, wherein reconfiguring the at least one sensor further comprises modifying at least one sensor parameter of the at least one sensor based on the optimized sensor configuration.
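The claimed loop (predict a future reading from the navigation path, derive an optimized configuration, then modify a sensor parameter) can be sketched as follows. Everything here is an assumption for illustration: the brightness map, the exposure/gain heuristics, and the class names are not from the patent.

```python
# Illustrative sketch only: predict future sensor readings along a planned
# path, then pick sensor parameters (exposure, gain) before arrival.
from dataclasses import dataclass

@dataclass
class SensorConfig:
    exposure_ms: float
    gain: float

def predict_future_brightness(path_waypoints, light_map):
    """Expected scene brightness at upcoming waypoints (the 'predicted
    future sensor reading'), looked up from a known environment map."""
    return [light_map.get(wp, 0.5) for wp in path_waypoints]

def optimize_config(predicted_brightness):
    """Compensate for the darkest predicted scene so the camera is
    reconfigured ahead of time rather than reactively."""
    darkest = min(predicted_brightness)
    exposure = min(33.0, 8.0 / max(darkest, 0.05))  # cap at one 30 fps frame
    gain = 1.0 if darkest > 0.3 else 4.0            # boost gain in the dark
    return SensorConfig(exposure_ms=exposure, gain=gain)

# A path leading from a bright area into a dark corridor
light_map = {(0, 0): 0.9, (1, 0): 0.6, (2, 0): 0.1}
cfg = optimize_config(predict_future_brightness([(0, 0), (1, 0), (2, 0)], light_map))
```

The point of the predictive step is that reconfiguration (which may take several frames to settle) completes before the robot reaches the poorly lit region.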
DISPLAY DEVICE, NON-CONTACT KEY, AND INPUT DEVICE
The present invention provides a display device, which includes a frame having an accommodating cavity and a display module disposed in the accommodating cavity. The display module includes a first light source, an optical unit, an imaging unit arranged on a side of the optical unit facing away from the first light source, and a lens array arranged on a side of the imaging unit facing away from the first light source. In accordance with a preset pattern, light emitted by the first light source passes through the optical unit, the imaging unit and the lens array to form a preset floating image in a floating display area outside the accommodating cavity. In addition, the present invention also provides a non-contact key and an input device including the above display module.
LINE RECOGNITION MODULE, FABRICATING METHOD THEREOF AND DISPLAY DEVICE
The present disclosure provides a line recognition module, a fabricating method thereof and a display device. An improved collimating effect is achieved simply by fabricating an optical sensing structure on a substrate in the line recognition module and then directly fabricating, on top of it, at least two light-shielding layers and a light-transmitting layer with relatively simple structures; the resulting device is light and thin, which reduces the difficulty of processing the device. This avoids the yield loss caused by blistering when a separate collimating structure is attached to the line recognition module with an optically clear adhesive (OCA). Moreover, since the film layers forming the collimating structure are fabricated directly on the optical sensing structure, the collimating structure can be produced with generic equipment already used for fabricating film layers on an array substrate, without adding new fabrication equipment.