G06T7/62

Management and display of object-collection data

An object identification and collection method is disclosed. The method includes receiving a pick-up path that identifies a route in which to guide an object-collection system over a target geographical area to pick up objects, determining a current location of the object-collection system relative to the pick-up path, and guiding the object-collection system along the pick-up path over the target geographical area based on the current location. The method further includes capturing images in a direction of movement of the object-collection system along the pick-up path, identifying a target object in the images, tracking movement of the target object through the images, determining that the target object is within range of an object picker assembly on the object-collection system based on the tracked movement of the target object, and instructing the object picker assembly to pick up the target object.
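The in-range determination described above can be sketched as a simple check on the latest tracked position against the picker's reach. This is illustrative only, not the patent's method; the coordinate frame, the `reach` parameter, and the use of the final track point are all assumptions.

```python
import math

def in_picker_range(track, picker_pos, reach):
    """Return True when the most recently tracked position of the
    target object falls within the picker assembly's reach."""
    x, y = track[-1]          # latest tracked position
    px, py = picker_pos
    return math.hypot(x - px, y - py) <= reach

# Target object tracked across three frames, closing on the picker.
track = [(5.0, 0.0), (3.0, 0.0), (0.8, 0.0)]
print(in_picker_range(track, (0.0, 0.0), reach=1.0))  # True
```

A real system would also use the track's velocity to time the pick-up, rather than position alone.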

Methods For Improved Measurements Of Brain Volume and Changes In Brain Volume
20180012354 · 2018-01-11

Methods of the disclosure may include obtaining a first set of medical images at a first time point and a second set of medical images at a second time point, each set including at least two medical images. First and second algorithms may be used to calculate, respectively, first and third brain volume (BV) values at the first time point based on two or more images from the first set of medical images and second and fourth BV values at the second time point based on two or more images from the second set of medical images. A mathematical weight may be applied to at least one of the first, second, third, or fourth BV values. The first and third BV values may be averaged, and the second and fourth BV values may be averaged to determine overall BV values at the first and second time points, respectively.
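The weighting and averaging of the two algorithms' brain-volume (BV) estimates can be illustrated with a minimal sketch. The equal default weight and the example values are assumptions for illustration, not from the disclosure.

```python
def overall_bv(bv_a, bv_b, weight_a=0.5):
    """Weighted combination of two brain-volume estimates produced
    by two different algorithms at the same time point."""
    return weight_a * bv_a + (1.0 - weight_a) * bv_b

# First time point: algorithm 1 -> 1200 mL, algorithm 2 -> 1180 mL.
t1 = overall_bv(1200.0, 1180.0)   # 1190.0
# Second time point: 1150 mL and 1160 mL.
t2 = overall_bv(1150.0, 1160.0)   # 1155.0
change = t2 - t1                  # -35.0 mL
print(t1, t2, change)
```

Combining two independent estimates per time point reduces the variance of the measured change relative to either algorithm alone.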

Systems and methods for machine learning based physiological motion measurement

A system for physiological motion measurement is provided. The system may acquire a reference image corresponding to a reference motion phase of an ROI and a target image of the ROI corresponding to a target motion phase, wherein the reference motion phase may be different from the target motion phase. The system may identify one or more feature points relating to the ROI from the reference image, and determine a motion field of the feature points from the reference motion phase to the target motion phase using a motion prediction model. An input of the motion prediction model may include at least the reference image and the target image. The system may further determine a physiological condition of the ROI based on the motion field.
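The feature-point motion-field step can be sketched by treating the motion prediction model as a black box that maps the reference and target images to a dense displacement field, then sampling that field at the feature points. The stub model and the `(x, y)` point convention are assumptions; the patent's model is a trained network, not shown here.

```python
def motion_field(model, ref_img, tgt_img, feature_pts):
    """Per-point displacement from the reference motion phase to the
    target phase. `model` stands in for the motion prediction model,
    whose input includes both the reference and target images."""
    field = model(ref_img, tgt_img)   # dense field: field[y][x] = (dx, dy)
    return [field[y][x] for (x, y) in feature_pts]

def stub_model(ref_img, tgt_img):
    """Toy stand-in predicting a uniform 1-pixel shift in x; a real
    model would predict a spatially varying field."""
    h, w = len(ref_img), len(ref_img[0])
    return [[(1.0, 0.0) for _ in range(w)] for _ in range(h)]

ref = [[0.0] * 4 for _ in range(4)]
tgt = [[0.0] * 4 for _ in range(4)]
print(motion_field(stub_model, ref, tgt, [(0, 0), (3, 2)]))  # [(1.0, 0.0), (1.0, 0.0)]
```

A physiological condition (e.g. regional wall motion) would then be derived from the magnitudes and directions of these per-point displacements.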

System and method for local three dimensional volume reconstruction using a standard fluoroscope
11707241 · 2023-07-25

A system and method for constructing fluoroscopic-based three dimensional volumetric data from two dimensional fluoroscopic images including a computing device configured to facilitate navigation of a medical device to a target area within a patient and a fluoroscopic imaging device configured to acquire a fluoroscopic video of the target area about a plurality of angles relative to the target area. The computing device is configured to determine a pose of the fluoroscopic imaging device for each frame of the fluoroscopic video and to construct fluoroscopic-based three dimensional volumetric data of the target area in which soft tissue objects are visible using a fast iterative three dimensional construction algorithm.
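The per-frame pose estimation followed by iterative volume construction can be shown as a control-flow skeleton. Both callables and the scalar "volume" are toy stand-ins, not the patent's fast iterative three-dimensional construction algorithm.

```python
def reconstruct(frames, estimate_pose, backproject, n_iter=3):
    """Skeleton of the described pipeline: a pose is estimated for
    every frame of the fluoroscopic video, then a volume is refined
    by iteratively accumulating backprojections of each frame from
    its estimated pose."""
    poses = [estimate_pose(f) for f in frames]
    volume = 0.0
    for _ in range(n_iter):
        for frame, pose in zip(frames, poses):
            volume += backproject(frame, pose)
    return poses, volume

# Toy stand-ins: frames are scalars, pose estimation is identity,
# and "backprojection" just adds the frame value.
frames = [1, 2, 3]
poses, volume = reconstruct(frames, lambda f: f, lambda f, p: f)
print(poses, volume)  # [1, 2, 3] 18.0
```

The key structural point is that pose recovery per frame comes first; the reconstruction loop then reuses those fixed poses on every iteration.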

Systems and methods for volumetric sizing

A method for computing dimensions of an object in a scene includes: controlling, by a processor, a depth camera system to capture at least a frame of the scene, the frame including a color image and a depth image arranged in a plurality of pixels; detecting, by the processor, an object in the frame; determining, by the processor, a ground plane in the frame, the object resting on the ground plane; computing, by the processor, a rectangular outline bounding a projection of a plurality of pixels of the object onto the ground plane; computing, by the processor, a height of the object above the ground plane; and outputting, by the processor, computed dimensions of the object in accordance with a length and a width of the rectangular outline and the height.
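The dimension computation can be sketched from 3-D object points once the ground plane is known. For simplicity the sketch assumes the ground plane is z = 0 and uses an axis-aligned bounding rectangle; the patent's rectangular outline (e.g. a minimum-area rectangle) and plane fitting are not reproduced here.

```python
def object_dimensions(points):
    """Length and width from a bounding rectangle of the (x, y)
    projection of the object's points onto the ground plane z = 0,
    and height as the maximum elevation above that plane."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    zs = [p[2] for p in points]
    length = max(xs) - min(xs)
    width = max(ys) - min(ys)
    height = max(zs)
    return length, width, height

# Points from a 2 x 1 footprint with a peak 0.5 above the ground.
box = [(0, 0, 0), (2, 0, 0), (2, 1, 0), (0, 1, 0), (1, 0.5, 0.5)]
print(object_dimensions(box))  # (2, 1, 0.5)
```

In practice the object's pixels come from the depth image, so each pixel must first be deprojected to a 3-D point using the camera intrinsics before this step.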

SYSTEMS AND METHODS FOR MESH AND TACK DIAGNOSIS

A system for surgical mesh analysis includes a display screen, a processor, and a memory. The memory has instructions stored thereon, which when executed by the processor, cause the system to: access an image of a surgical site (602). The image includes the surgical mesh. The surgical mesh includes a grid of cells. The instructions, when executed by the processor, further cause the system to detect a first desired location on the surgical mesh in the image (604); detect a second desired location on the surgical mesh in the image (606); determine a distance between the first desired location and the second desired location along the surgical mesh in the image (608); and display, on the display screen, the determined distance (610).
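The along-the-mesh distance measurement can be sketched as the length of a polyline through intermediate points on the mesh, converted to physical units. The intermediate grid node and the `mm_per_px` scale (which would come from the known cell pitch of the mesh grid) are illustrative assumptions, not the patent's method.

```python
import math

def path_length_mm(points_px, mm_per_px):
    """Distance between two desired locations measured along a
    polyline of intermediate image points, in millimetres."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points_px, points_px[1:]):
        total += math.hypot(x1 - x0, y1 - y0)
    return total * mm_per_px

# Two desired locations joined via one intermediate grid node:
# 50 px + 30 px = 80 px, at an assumed 0.1 mm per pixel.
print(path_length_mm([(0, 0), (30, 40), (60, 40)], mm_per_px=0.1))  # 8.0
```

Measuring along the mesh rather than straight-line matters when the mesh is draped over tissue, since the geodesic path is longer than the chord between the two locations.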
