Patent classifications
G06T7/246
HOLISTIC CAMERA CALIBRATION SYSTEM FROM SPARSE OPTICAL FLOW
Holistic systems and methods are used for calibrating image capture devices. An image capture device includes a lens, an image sensor, an inertial measurement unit (IMU), and an image signal processor (ISP). The image sensor detects images as frames and the IMU captures motion data. The ISP detects one or more key points on the frames and matches the one or more key points between the frames. The ISP computes one or more calibration parameters. The one or more calibration parameters are based on the matched key points and a model. The model includes an optical component, an IMU component, and a sensor component. The ISP performs a calibration using the calibration parameters.
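The keypoint detection and matching step described in this abstract is a standard building block of optical-flow-based calibration. The following is a minimal illustrative sketch, not the patented pipeline: it matches keypoint descriptors between two frames by greedy nearest-neighbor distance, with all names and the `max_dist` threshold being assumptions for illustration.

```python
import numpy as np

def match_keypoints(desc_a, desc_b, max_dist=0.5):
    """Greedy nearest-neighbor matching of keypoint descriptors
    between two frames (each row is one descriptor vector)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            matches.append((i, j))
    return matches

# Toy descriptors: frame B's keypoints are slightly perturbed copies of A's.
rng = np.random.default_rng(0)
desc_a = rng.random((5, 8))
desc_b = desc_a + 0.01 * rng.standard_normal((5, 8))
print(match_keypoints(desc_a, desc_b))
```

In a real calibration system the matched pairs would feed a bundle-adjustment-style optimizer over the optical, IMU, and sensor model parameters; the sketch covers only the matching step.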
MONITORING OF DENTITION
A method for acquiring at least one two-dimensional image of a part of the arches of a patient includes steps carried out by the patient or by another person who is not a dental health professional, for example: placing a dental separator in the mouth of the patient in order to separate the lips of the patient and improve the visibility of the teeth during the acquisition of said at least one two-dimensional image, and acquiring, in a mouth-closed position and with a personal image acquisition apparatus, said at least one two-dimensional image.
STABILIZATION OF FACE IN VIDEO
Placement of a face depicted within a video may be determined. One or more stabilization options for the video may be obtained. The stabilization option(s) may include an angle stabilization option, a position stabilization option, and/or a size stabilization option. The video may be stabilized based on the placement of the face and the stabilization option(s).
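Position stabilization of this kind typically smooths the estimated face placement over time so that frame crops do not jitter. A minimal moving-average sketch, offered as an illustration under assumed names and window size rather than as the described implementation:

```python
from collections import deque

def stabilize_positions(centers, window=3):
    """Smooth per-frame face centers with a moving average so that
    crop placement changes gradually instead of jittering."""
    buf = deque(maxlen=window)
    smoothed = []
    for x, y in centers:
        buf.append((x, y))
        sx = sum(p[0] for p in buf) / len(buf)
        sy = sum(p[1] for p in buf) / len(buf)
        smoothed.append((sx, sy))
    return smoothed

# Jittery face centers across four frames
print(stabilize_positions([(100, 50), (104, 52), (98, 49), (102, 51)]))
```

Angle and size stabilization would apply the same smoothing idea to the face's rotation and scale before deriving the per-frame crop.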
LOOP CLOSURE DETECTION METHOD AND SYSTEM, MULTI-SENSOR FUSION SLAM SYSTEM, ROBOT, AND MEDIUM
The present invention provides a loop closure detection method and system, a multi-sensor fusion SLAM system, a robot, and a medium. The system runs on a mobile robot and comprises a similarity detection unit, a visual pose solving unit, and a laser pose solving unit. With the loop closure detection system, the multi-sensor fusion SLAM system, and the robot provided in the present invention, the speed and accuracy of loop closure detection can be significantly improved in cases such as a change in the viewing angle of the robot, a change in environmental brightness, and weak texture.
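Similarity detection for loop closure commonly compares a compact descriptor of the current view against descriptors of past keyframes. The sketch below is illustrative only and not the patented similarity detection unit; the function name and the cosine-similarity threshold are assumptions:

```python
import numpy as np

def find_loop_candidate(current, keyframes, threshold=0.9):
    """Return the index of the past keyframe descriptor most similar to
    the current one (cosine similarity), or None if none exceeds the
    threshold."""
    best_i, best_s = None, threshold
    for i, kf in enumerate(keyframes):
        s = float(np.dot(current, kf) /
                  (np.linalg.norm(current) * np.linalg.norm(kf)))
        if s > best_s:
            best_i, best_s = i, s
    return best_i

keyframes = [np.array([1.0, 0.0, 0.0]),
             np.array([0.0, 1.0, 0.0]),
             np.array([0.7, 0.7, 0.1])]
current = np.array([0.72, 0.69, 0.08])
print(find_loop_candidate(current, keyframes))
```

In a fused system, a candidate found this way would be verified by the visual and laser pose solving units before the loop constraint is accepted.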
INFORMATION PROCESSING APPARATUS, SENSING APPARATUS, MOBILE OBJECT, AND METHOD FOR PROCESSING INFORMATION
An information processing apparatus includes an input interface, a processor, and an output interface. The input interface obtains observation data from an observation space. The processor detects a detection target included in the observation data. The processor maps coordinates of the detected target to coordinates in a virtual space, tracks the position and velocity of a material point representing the target in the virtual space, and maps coordinates of the tracked material point in the virtual space to coordinates in a display space. The processor sequentially observes the size of the target in the display space and estimates the size of the target at the present time on the basis of the observed value of the size at the present time and past estimated values of the size. The output interface outputs information based on the coordinates of the material point mapped to the display space and the estimated size of the target.
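Estimating a present size from the current observation and past estimates can be done with a simple recursive filter. The exponential-smoothing sketch below is an illustration under assumed names and gain, not the patented estimator:

```python
def estimate_size(prev_estimate, observed, gain=0.3):
    """Blend the current observed size with the previous estimate;
    `gain` (a hypothetical parameter) weights the new observation."""
    return prev_estimate + gain * (observed - prev_estimate)

# Noisy per-frame size observations of a target whose true size is ~10.0
obs = [9.0, 11.0, 10.5, 9.8, 10.2]
est = obs[0]
for o in obs[1:]:
    est = estimate_size(est, o)
print(round(est, 3))
```

The recursive form keeps only the latest estimate in memory, which suits frame-by-frame processing; a Kalman filter would be the natural extension when an observation-noise model is available.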
INFORMATION PROCESSING DEVICE, PROGRAM, AND METHOD
An information processing device includes a control unit configured to track an object across images input in time series, using a tracking result obtained by performing tracking in units of a tracking region corresponding to a specific part of the object.
DEVICE AND COMPUTER-IMPLEMENTED METHOD FOR OBJECT TRACKING
A device and computer-implemented method for object tracking. The method comprises providing a sequence of digital images and determining a sequence of relational graph embeddings, wherein a first relational graph embedding of the sequence comprises a first object embedding representing a first object in a first digital image of the sequence of digital images, wherein the first relational graph embedding comprises a first relation embedding of a relation for the first object embedding, and wherein the first relation embedding relates the first object embedding to embeddings representing other objects of the first digital image in the first relational graph embedding and to embeddings in a second relational graph embedding of the sequence that represent objects of a second digital image of the sequence of digital images.