G06T7/262

ROBOTIC SYSTEMS AND METHODS FOR NAVIGATION OF LUMINAL NETWORK THAT DETECT PHYSIOLOGICAL NOISE

Provided are robotic systems and methods for navigation of a luminal network that detect physiological noise. In one aspect, the system includes a set of one or more processors configured to receive first and second image data from an image sensor located on an instrument, detect a set of one or more points of interest in the first image data, and identify a set of first locations and a set of second locations respectively corresponding to the set of points in the first and second image data. Based on the set of first locations and the set of second locations, the set of processors is further configured to detect a change of location of the instrument within the luminal network caused by movement of the luminal network relative to the instrument.
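The comparison of the two location sets can be sketched as below. This is a minimal illustration, not the patented method: the function name, the mean-displacement statistic, and the pixel threshold are all assumptions; the actual system may use a richer motion model to separate physiological noise from instrument motion.

```python
import numpy as np

def detect_luminal_motion(first_locations, second_locations, threshold_px=2.0):
    """Flag movement of the luminal network relative to the instrument.

    first_locations / second_locations: (N, 2) pixel coordinates of the
    same points of interest in the first and second image frames.
    Returns (moved, mean_displacement_in_pixels).
    """
    first = np.asarray(first_locations, dtype=float)
    second = np.asarray(second_locations, dtype=float)
    # Per-point displacement between the two frames.
    displacements = np.linalg.norm(second - first, axis=1)
    mean_disp = float(displacements.mean())
    return mean_disp > threshold_px, mean_disp
```

A consistent shift of all tracked points between frames suggests the lumen moved around the scope (e.g. with breathing) rather than the scope moving through the lumen.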

Detecting motion in images
11625840 · 2023-04-11

In general, the subject matter described in this disclosure can be embodied in methods, systems, and program products for detecting motion in images. A computing system receives first and second images that were captured by a camera. The computing system generates, using the images, a mathematical transformation that indicates movement of the camera from the first image to the second image. The computing system generates, using the first image and the mathematical transformation, a modified version of the first image that presents the scene that was captured by the first image from a position of the camera when the second image was captured. The computing system determines a portion of the first image or second image at which a position of an object in the scene moved, by comparing the modified version of the first image to the second image.
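The compensate-then-compare idea can be illustrated as follows. For simplicity this sketch stands in a pure translation for the full mathematical transformation (the real method would estimate something like a homography from matched features); the function name and the difference threshold are assumptions.

```python
import numpy as np

def moving_region(first, second, dx, dy, diff_thresh=30):
    """Locate scene motion after compensating for camera motion.

    first, second: 2-D grayscale frames; (dx, dy) is the estimated camera
    translation in pixels (a stand-in for the full transformation).
    Returns a boolean mask of pixels that still differ after compensation.
    """
    # Re-render the first frame from the camera position of the second frame.
    warped = np.roll(np.roll(first, dy, axis=0), dx, axis=1)
    # Residual differences now reflect object motion, not camera motion.
    return np.abs(warped.astype(int) - second.astype(int)) > diff_thresh
```

If the only change between frames was the camera translation, the mask is empty; pixels that survive the compensation mark where an object itself moved in the scene.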

IMPROVING THE RESOLUTION OF A CONTINUOUS WAVELET TRANSFORM
20220319022 · 2022-10-06

A computer-implemented method of decoding a signal. The method includes receiving a signal (which may be an electromagnetic signal), sampling the received signal to generate an input waveform having magnitude and phase components, applying a transform operation to the input waveform to generate a first decoded signal, and outputting the first decoded signal. The transform operation includes pre-processing the input waveform to generate a mirrored inverted waveform and applying a continuous wavelet transform to the mirrored inverted waveform to generate the first decoded signal. This allows inversion of the frequency and temporal resolution of the continuous wavelet transform, thereby enabling improved temporal and frequency decoding of a signal. The method is particularly suitable for signal filters and filtering units.
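The pipeline of "pre-process, then apply a CWT" might look like the sketch below. Note the hedges: the abstract does not define "mirrored inverted", so time-reversal plus amplitude inversion is one plausible reading, and the hand-rolled Morlet wavelet and convolution-based CWT are simplifications standing in for whatever transform the method actually specifies.

```python
import numpy as np

def morlet(length, scale, w=5.0):
    """Complex Morlet wavelet sampled at `length` points for a given scale."""
    t = np.arange(-(length // 2), length // 2) / scale
    return np.exp(1j * w * t) * np.exp(-t**2 / 2) / np.sqrt(scale)

def mirrored_inverted_cwt(signal, scales):
    """CWT of the mirrored, amplitude-inverted input.

    Time-reversal plus negation is an assumed interpretation of the
    "mirrored inverted waveform" pre-processing step.
    """
    pre = -np.asarray(signal, dtype=float)[::-1]   # mirror and invert
    n = len(pre)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        kernel = morlet(min(10 * int(s), n), s)
        # Convolution with the scaled wavelet gives one row of the scalogram.
        out[i] = np.convolve(pre, kernel, mode="same")
    return out
```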

Methods and systems for determining human body model parameters, human body models based on such parameters and simulating human bodies based on such body models

A method that simulates a response of a target person to one or more simulation inputs comprises: contacting a body of a first test subject with a force-sensing probe; measuring a first force applied by the probe to the body of the first test subject; obtaining a first relative movement comprising a first movement of the probe relative to the body of the first test subject; determining one or more model parameters associated with the body of the first test subject based on the measured first force and the first relative movement; incorporating the one or more model parameters into a target model of at least a portion of a body of a target person; obtaining a simulation input for application to the target model; and simulating a response of the target person in response to the simulation input using the target model.
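The step of determining model parameters from a measured force and relative movement can be illustrated with the simplest possible case: fitting a single linear stiffness. This is an illustrative assumption, not the claimed method; the described system may fit far richer (e.g. viscoelastic or nonlinear) body-model parameters.

```python
import numpy as np

def estimate_stiffness(forces, displacements):
    """Least-squares linear stiffness k (F ≈ k·x) from probe measurements.

    forces: probe forces (N); displacements: probe movement relative to
    the body (m). A one-parameter spring model is assumed for illustration.
    """
    x = np.asarray(displacements, dtype=float)
    f = np.asarray(forces, dtype=float)
    # Closed-form least squares for F = k * x with no intercept term.
    return float(x @ f / (x @ x))
```

Once fitted, such a parameter can be incorporated into the target model, so that simulated inputs (forces) produce simulated responses (displacements) for the target person.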

Real-time visual object tracking for unmanned aerial vehicles (UAVs)

Embodiments described herein provide various examples of real-time visual object tracking. In one aspect, a process is disclosed for performing a local re-identification of a target object that was detected earlier in a video but later lost during tracking. This process begins by receiving a current video frame of the video and a predicted location of the target object. The process then places a current search window in the current video frame centered on or in the vicinity of the predicted location of the target object. Next, the process extracts a feature map from an image patch within the current search window. The process further retrieves a set of stored feature maps computed at a set of previously-determined locations of the target object from a set of previously-processed video frames in the video. The process next computes a set of correlation maps between the feature map and each of the set of stored feature maps. The process then attempts to re-identify the target object locally in the current video frame based on the set of computed correlation maps.
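The correlation-map step above can be sketched with FFT-based cross-correlation. This is a simplified stand-in for the real pipeline: the circular correlation, the peak-score decision rule, and all names and thresholds here are assumptions, and a deployed tracker would typically use learned feature maps rather than raw pixels.

```python
import numpy as np

def correlation_map(feature, template):
    """Dense correlation of a stored template against the current feature map.

    Uses FFT-based circular correlation (a simplification; real trackers
    often normalize and window the correlation).
    """
    F = np.fft.fft2(feature)
    T = np.fft.fft2(template, s=feature.shape)   # zero-pad to feature size
    return np.real(np.fft.ifft2(F * np.conj(T)))

def reidentify(feature, stored_templates, score_thresh):
    """Best (score, (row, col)) over all stored templates, or None if no
    correlation peak clears the threshold (target not re-identified)."""
    best = None
    for tpl in stored_templates:
        corr = correlation_map(feature, tpl)
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        score = corr[peak]
        if best is None or score > best[0]:
            best = (score, peak)
    return best if best is not None and best[0] >= score_thresh else None
```

Keeping several stored templates from previously-processed frames makes the re-identification robust to the appearance changes that caused the target to be lost in the first place.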
