Patent classification: G06T7/74
Determination of position of a head-mounted device on a user
There is provided a method and system for determining whether a head-mounted device for extended reality (XR) is correctly positioned on a user, and optionally performing a position correction procedure if the head-mounted device is determined to be incorrectly positioned. Embodiments include: performing eye tracking by estimating, based on a first image of a first eye of the user, a position of a pupil in two dimensions; determining whether the estimated position of the pupil of the first eye is within a predetermined allowable area in the first image; and, if the estimated position of the pupil of the first eye is inside the predetermined allowable area, concluding that the head-mounted device is correctly positioned on the user, or, if the estimated position of the pupil of the first eye is outside the predetermined allowable area, concluding that the head-mounted device is incorrectly positioned on the user.
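A minimal sketch of the decision step described above, assuming the predetermined allowable area is an axis-aligned rectangle in image coordinates (the abstract leaves its shape open); function names and values are illustrative:

```python
def check_headset_position(pupil_xy, allowable_area):
    """Return True if the estimated 2-D pupil position falls inside the
    predetermined allowable area (here a simple axis-aligned box)."""
    x_min, y_min, x_max, y_max = allowable_area
    x, y = pupil_xy
    return x_min <= x <= x_max and y_min <= y <= y_max

# Hypothetical values: pupil centre estimated from a 640x480 eye image,
# allowable area chosen as the central region of the frame.
pupil = (410.0, 260.0)
area = (160, 120, 480, 360)
if check_headset_position(pupil, area):
    print("Head-mounted device correctly positioned")
else:
    print("Head-mounted device incorrectly positioned; run position correction")
```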
Performing 3D reconstruction via an unmanned aerial vehicle
In some examples, an unmanned aerial vehicle (UAV) employs one or more image sensors to capture images of a scan target and may use distance information from the images to determine respective locations in three-dimensional (3D) space of a plurality of points of a 3D model representative of a surface of the scan target. The UAV may compare a first image with a second image to determine a difference between a current frame-of-reference position for the UAV and an estimate of an actual frame-of-reference position for the UAV. Further, based at least on the difference, the UAV may determine, while the UAV is in flight, an update to the 3D model including at least one of an updated location of at least one point in the 3D model, or a location of a new point in the 3D model.
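One plausible reading of the update step, sketched below: the difference between the current and estimated frame-of-reference positions is treated as a simple translation applied to the existing model points before new points are appended. The abstract does not commit to a translation-only correction; this is an illustrative simplification with invented values.

```python
import numpy as np

def update_model(points, current_ref, estimated_ref, new_points=()):
    """Shift existing 3-D model points by the difference between the current
    frame-of-reference position and the estimate of the actual one, then
    append any newly observed points."""
    correction = (np.asarray(estimated_ref, dtype=float)
                  - np.asarray(current_ref, dtype=float))
    updated = np.asarray(points, dtype=float) + correction
    if len(new_points):
        updated = np.vstack([updated, np.asarray(new_points, dtype=float)])
    return updated

# Hypothetical in-flight update: the UAV discovers its actual position is
# 0.3 m east of where its current frame of reference places it.
model = [[10.0, 2.0, 5.0], [11.0, 2.5, 5.2]]
print(update_model(model, current_ref=(0, 0, 30), estimated_ref=(0.3, 0, 30)))
```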
Usage and technique analysis of surgeon / staff performance against a baseline to optimize device utilization and performance for both current and future procedures
A situationally aware surgical system configured for use during a surgical procedure performed on a patient by an operating clinician is disclosed, including a surgical instrument configured to generate a signal and a cloud-based analytics subsystem including a memory and a control circuit. The memory is configured to store a plurality of baseline variables. The control circuit is configured to receive the signal, determine a type of surgical procedure being performed based, at least in part, on the received signal, determine that a baseline variable of the plurality of baseline variables corresponds to the determined type of surgical procedure, determine a procedural variable of the surgical procedure based, at least in part, on the received signal, compare the determined procedural variable to the corresponding baseline variable, and generate an alert for the operating clinician based, at least in part, on the comparison.
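A toy sketch of the comparison-and-alert logic, with a hypothetical baseline store keyed by procedure type; the procedure name, variable, values, and tolerance are all invented for illustration:

```python
# Hypothetical baseline store: procedure type -> {variable: (expected, tolerance)}
BASELINES = {
    "colorectal_anastomosis": {"staple_firing_force_N": (45.0, 10.0)},
}

def check_against_baseline(procedure_type, variable, measured):
    """Compare a procedural variable to its stored baseline and return an
    alert string when the deviation exceeds the tolerance, else None."""
    expected, tol = BASELINES[procedure_type][variable]
    if abs(measured - expected) > tol:
        return (f"Alert: {variable} = {measured} deviates from baseline "
                f"{expected} ± {tol} for {procedure_type}")
    return None

print(check_against_baseline("colorectal_anastomosis",
                             "staple_firing_force_N", 62.0))
```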
Eye gaze tracking system, associated methods and computer programs
An eye tracking system configured to: receive a plurality of right-eye-images of a right eye of a user; receive a plurality of left-eye-images of a left eye of the user, each left-eye-image corresponding to a right-eye-image in the plurality of right-eye-images; detect a pupil and determine an associated pupil-signal for each of the plurality of right-eye-images and each of the plurality of left-eye-images; calculate a right-eye-pupil-variation of the pupil-signals for the plurality of right-eye-images and a left-eye-pupil-variation of the pupil-signals for the plurality of left-eye-images; and determine a right-eye-weighting and a left-eye-weighting based on the right-eye-pupil-variation and the left-eye-pupil-variation. For one or more right-eye-images and one or more corresponding left-eye-images, the eye tracking system can: determine at least one right-eye-gaze-signal based on the right-eye-image and at least one left-eye-gaze-signal based on the corresponding left-eye-image; and calculate a combined-gaze-signal from a weighted sum of the right-eye-gaze-signal and the left-eye-gaze-signal using the right-eye-weighting and the left-eye-weighting.
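The abstract says only that the weightings are based on the two pupil-variations; inverse-variance weighting, sketched below, is one natural choice that favours the more stably tracked eye. All names and values are illustrative:

```python
import numpy as np

def eye_weights(right_pupil_signals, left_pupil_signals):
    """Derive per-eye weights from the pupil-signal variations; the eye whose
    pupil-signal varies less (i.e. is tracked more stably) gets the larger
    weight. Inverse-variance weighting is an assumption, not the claimed rule."""
    var_r = np.var(right_pupil_signals)
    var_l = np.var(left_pupil_signals)
    w_r, w_l = 1.0 / (var_r + 1e-9), 1.0 / (var_l + 1e-9)
    total = w_r + w_l
    return w_r / total, w_l / total

def combined_gaze(gaze_r, gaze_l, w_r, w_l):
    """Combined-gaze-signal as the weighted sum of the two gaze signals."""
    return w_r * np.asarray(gaze_r) + w_l * np.asarray(gaze_l)

# Illustrative data: the left eye's pupil-signal varies more, so it is
# down-weighted in the combined gaze estimate.
w_r, w_l = eye_weights([2.0, 2.1, 2.0, 2.1], [2.0, 3.5, 1.2, 2.9])
print(w_r, w_l, combined_gaze((0.10, 0.20), (0.14, 0.26), w_r, w_l))
```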
Deep learning-based feature extraction for LiDAR localization of autonomous driving vehicles
In one embodiment, a method for extracting point cloud features for use in localizing an autonomous driving vehicle (ADV) includes selecting a first set of keypoints from an online point cloud, the online point cloud generated by a LiDAR device on the ADV for a predicted pose of the ADV, and extracting a first set of feature descriptors from the first set of keypoints using a feature learning neural network running on the ADV. The method further includes locating a second set of keypoints on a pre-built point cloud map, each keypoint of the second set corresponding to a keypoint of the first set; extracting a second set of feature descriptors from the pre-built point cloud map; and estimating a position and orientation of the ADV based on the first set of feature descriptors, the second set of feature descriptors, and the predicted pose of the ADV.
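The final pose-estimation step, once keypoint correspondences between the online cloud and the map are available, can be illustrated with a closed-form rigid alignment (Kabsch/SVD). The patent estimates pose from learned feature descriptors produced by a neural network; that part is omitted here, and the correspondences are assumed given:

```python
import numpy as np

def estimate_pose(online_kpts, map_kpts):
    """Rigid alignment (Kabsch/SVD) of corresponding keypoints as a stand-in
    for the pose-estimation step: returns rotation r and translation t
    mapping online points onto the map frame."""
    p, q = np.asarray(online_kpts, float), np.asarray(map_kpts, float)
    p_c, q_c = p - p.mean(0), q - q.mean(0)
    u, _, vt = np.linalg.svd(p_c.T @ q_c)       # cross-covariance SVD
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = q.mean(0) - r @ p.mean(0)
    return r, t

# Toy correspondence check: map keypoints are the online keypoints rotated
# 90 degrees about z and shifted; the solver should recover that motion.
rng = np.random.default_rng(0)
online = rng.normal(size=(8, 3))
rot = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
map_pts = online @ rot.T + np.array([1.0, 2.0, 0.5])
r, t = estimate_pose(online, map_pts)
print(np.round(r, 3), np.round(t, 3))
```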
Generation method for training dataset, model generation method, training data generation apparatus, inference apparatus, robotic controller, model training method and robot
One aspect of the present disclosure relates to a generation method for a training dataset, comprising: capturing, by one or more processors, a target object provided with a marker unit that is recognizable under a first illumination condition; and acquiring, by the one or more processors, a first image in which the marker unit is recognizable and a second image obtained by capturing the target object under a second illumination condition.
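A schematic of the data-collection loop this implies: the first image (marker recognizable) yields the label, and the second image, captured under the second illumination condition, becomes the training input. `capture` and `detect_marker` are hypothetical callables standing in for camera control and marker-based labeling:

```python
def build_training_pairs(capture, detect_marker, n_samples):
    """Collect (input image, label) pairs: labels come from the marker-visible
    image, inputs from the image taken under the second illumination."""
    dataset = []
    for _ in range(n_samples):
        img_marker = capture(illumination="first")   # marker recognizable
        img_plain = capture(illumination="second")   # same scene, marker hidden
        label = detect_marker(img_marker)            # e.g. object pose from marker
        dataset.append((img_plain, label))
    return dataset
```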
System and method for fusing information of a captured environment
A method, apparatus and computer program product for fusing information about a captured environment, performed by a device comprising a processor and a memory device, the method comprising: receiving one or more distance readings related to the environment from a LiDAR device emitting light at a predetermined wavelength; receiving an image captured by a multi-spectral camera that is sensitive at least to visible light and to the predetermined wavelength; identifying within the image points or areas having the predetermined wavelength; identifying one or more objects within the image; identifying a correspondence between each of the light points or areas and one of the readings; associating each object with a distance, based on the readings and the points or areas within the object; and outputting an indication of each object and the distance associated with it.
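A sketch of the association step, assuming objects are axis-aligned boxes in image coordinates and each identified light point carries one distance reading. Aggregating by the median is one plausible choice; the abstract says only that the distance is based on the readings and the points within the object:

```python
import numpy as np

def associate_distances(object_boxes, laser_points, readings):
    """For each detected object (axis-aligned box in image coordinates),
    collect the LiDAR-wavelength points that fall inside it and associate
    the object with the median of their distance readings."""
    results = []
    pts = np.asarray(laser_points, dtype=float)
    rds = np.asarray(readings, dtype=float)
    for (x0, y0, x1, y1) in object_boxes:
        inside = ((pts[:, 0] >= x0) & (pts[:, 0] <= x1) &
                  (pts[:, 1] >= y0) & (pts[:, 1] <= y1))
        dist = float(np.median(rds[inside])) if inside.any() else None
        results.append(((x0, y0, x1, y1), dist))
    return results

boxes = [(100, 50, 220, 180)]
pts = [(150, 90), (160, 100), (400, 300)]
print(associate_distances(boxes, pts, [12.4, 12.6, 40.0]))
# -> [((100, 50, 220, 180), 12.5)]
```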
Gemstone container, lighting device and imaging system and method
Systems and methods for evaluating gemstones are disclosed. The system includes a gemstone holding case, an illumination pad and an image capturing and gemstone evaluation device. The case includes a transparent viewing window in a top wall, a translucent bottom wall and a compressible light-diffusing pad for receiving gemstones thereon. The case and pad are dimensioned such that, when closed, the viewing window presses the gemstones into the pad, holding them in place and level and allowing for in situ gem imaging. The illumination pad includes embedded LEDs for selectively illuminating the portion of the pad on which the case is placed. The controllable lighting, translucent case and diffusive pad serve to evenly light the gems and reduce unwanted light, thereby improving the images and the analysis performed using the image capturing and gemstone evaluation device. Methods for analyzing gemstones under diffuse lighting using the system are also disclosed.
Cheek retractor and mobile device holder
The present disclosure provides methods, computing device readable media, devices, and systems that utilize a cheek retractor and/or a mobile device holder for case assessment and/or dental treatments. One cheek retractor includes a first and a second lip holder, both including imaging markers of a predetermined size for determining a scale of the user's teeth, where each imaging marker is located a predefined distance from the remaining imaging markers, and where each lip holder holds a cheek away from the user's mouth to expose the user's teeth. A mobile device holder can include elements to receive a mobile device for capturing images of the user's teeth.
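The scale determination the markers enable is simple ratio arithmetic: a known physical separation divided by the measured pixel separation gives millimetres per pixel. The values below are invented for illustration:

```python
def mm_per_pixel(marker_px_distance, marker_mm_distance):
    """The markers sit a predefined physical distance apart, so the ratio of
    that distance to their measured pixel separation gives the image scale."""
    return marker_mm_distance / marker_px_distance

# Hypothetical values: markers 40 mm apart appear 500 px apart in the photo.
scale = mm_per_pixel(500.0, 40.0)   # 0.08 mm per pixel
tooth_width_mm = 110.0 * scale      # a tooth spanning 110 px is ~8.8 mm wide
print(f"{tooth_width_mm:.1f} mm")
```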
Self-rectification of stereo camera
Embodiments include a method for self-rectification of a stereo camera, wherein the stereo camera comprises a first camera and a second camera. The method comprises creating image pairs from first images taken by the first camera and second images taken by the second camera, respectively, such that each image pair comprises two images taken at essentially the same time by the first camera and the second camera, respectively. The method further comprises creating, for each image pair, matching point pairs from corresponding points in the two images of the image pair, such that each matching point pair comprises one point from each of the first and second images of the respective image pair. For each matching point pair, a disparity is calculated, yielding a plurality of disparities for each image pair, and the resulting plurality of disparities is taken into account for the self-rectification.
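One plausible reading of how the disparities feed the self-rectification, sketched below: in a well-rectified pair the vertical component of each matching point pair's disparity should be near zero, so its distribution (e.g. the median) is a natural correction signal. Names and values are illustrative:

```python
import numpy as np

def pair_disparities(matches):
    """Per matching point pair (x1, y1, x2, y2), compute the disparity between
    the point in the first image and the corresponding point in the second.
    The horizontal component carries depth; the vertical component measures
    residual misalignment usable as a self-rectification signal."""
    m = np.asarray(matches, dtype=float)   # shape (n, 4)
    dx = m[:, 0] - m[:, 2]                 # horizontal disparity (depth cue)
    dy = m[:, 1] - m[:, 3]                 # vertical disparity (misalignment)
    return dx, dy

matches = [(120, 84, 96, 85), (300, 210, 281, 212), (50, 400, 31, 401)]
dx, dy = pair_disparities(matches)
print("median vertical disparity:", np.median(dy))  # correction signal
```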