G06T7/77

HAND-EYE CALIBRATION OF CAMERA-GUIDED APPARATUSES
20220383547 · 2022-12-01 ·

The invention describes a generic framework for hand-eye calibration of camera-guided apparatuses, wherein the rigid 3D transformation between the apparatus and the camera must be determined. An example of such an apparatus is a camera-guided robot.
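
The abstract does not disclose the solver, but hand-eye calibration is conventionally posed as the matrix equation AX = XB, where A and B are relative motions of the apparatus (e.g. the robot gripper) and the camera, and X is the sought rigid transformation. Below is a minimal NumPy sketch of a least-squares solution in the style of Park and Martin; `solve_hand_eye`, the motion conventions, and the rotation helper are illustrative assumptions, not the framework claimed in the patent.

```python
import numpy as np

def rot_log(R):
    """Axis-angle vector (axis * angle) of a rotation matrix."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-10:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2 * np.sin(theta)) * w

def solve_hand_eye(As, Bs):
    """Estimate X from A_i X = X B_i in the least-squares sense.

    As, Bs: lists of 4x4 relative motions of the apparatus and camera.
    Needs at least two motion pairs with non-parallel rotation axes.
    """
    # Rotation: R_A R_X = R_X R_B implies log(R_A) = R_X log(R_B);
    # solve the resulting orthogonal Procrustes problem via SVD.
    M = np.zeros((3, 3))
    for A, B in zip(As, Bs):
        M += np.outer(rot_log(A[:3, :3]), rot_log(B[:3, :3]))
    U, _, Vt = np.linalg.svd(M)
    Rx = U @ np.diag([1, 1, np.linalg.det(U @ Vt)]) @ Vt
    # Translation: (R_A - I) t_X = R_X t_B - t_A, stacked over all pairs.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.hstack([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, t
    return X
```

Solving the rotation first and then the translation (rather than jointly) is the classical two-stage decomposition of the AX = XB problem.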

METHOD AND APPARATUS FOR MODELING AN ENVIRONMENT PROXIMATE AN AUTONOMOUS SYSTEM

A method and apparatus for modeling the environment proximate an autonomous system. The method and apparatus access vision data, assign semantic labels to points in the vision data, process points identified as a drivable surface (ground), and perform an optimization over the identified points to form a surface model. The model is subsequently used for detecting objects, planning, and mapping.
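
As a sketch of the ground-modeling step: once points carry semantic labels, fitting even a simple plane z = a·x + b·y + c over the ground-labeled points yields a surface model against which other points can be measured (e.g. obstacle height). The label id and function names below are hypothetical, and the patent's optimization is presumably richer than a single plane.

```python
import numpy as np

GROUND = 1  # hypothetical semantic label id for "drivable surface"

def fit_ground_plane(points, labels):
    """Least-squares plane z = a*x + b*y + c over ground-labeled points.

    points: (N, 3) array of 3D points; labels: (N,) semantic label ids.
    Returns the coefficients (a, b, c).
    """
    g = points[labels == GROUND]
    A = np.column_stack([g[:, 0], g[:, 1], np.ones(len(g))])
    coeffs, *_ = np.linalg.lstsq(A, g[:, 2], rcond=None)
    return coeffs

def height_above_ground(points, coeffs):
    """Signed height of each point above the fitted surface."""
    a, b, c = coeffs
    return points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)
```

A downstream detector could then treat points whose height exceeds some threshold as candidate obstacles.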

THREE-DIMENSIONAL MODELING AND ASSESSMENT OF CARDIAC TISSUE
20220370033 · 2022-11-24 ·

A system for patient cardiac imaging and tissue modeling. The system includes a patient imaging device that can acquire patient cardiac imaging data. A processor is configured to receive the cardiac imaging data. A user interface and display allow a user to interact with the cardiac imaging data. The processor includes fat identification software that interacts with a trained learning network to identify fat tissue in the cardiac imaging data and to map the fat tissue onto a three-dimensional model of the heart. A preferred system uses an ultrasound imaging device as the patient imaging device. Another preferred system uses an MRI or CT imaging device as the patient imaging device.
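
The fat-mapping step can be illustrated as sampling a per-voxel fat segmentation (e.g. a thresholded output of the trained network) at the vertices of a heart mesh. The function name and the nearest-voxel scheme below are illustrative assumptions, not the patented method.

```python
import numpy as np

def map_fat_to_mesh(fat_mask, vertices):
    """Label each mesh vertex with the fat segmentation of its nearest voxel.

    fat_mask: 3D 0/1 array (1 = fat tissue), e.g. a thresholded network output.
    vertices: (N, 3) mesh vertex positions, given in voxel coordinates.
    Returns an (N,) array of per-vertex fat labels for coloring the 3D model.
    """
    idx = np.rint(vertices).astype(int)                    # nearest voxel
    idx = np.clip(idx, 0, np.array(fat_mask.shape) - 1)    # stay in bounds
    return fat_mask[idx[:, 0], idx[:, 1], idx[:, 2]]
```

A renderer would then color vertices flagged 1 differently from the rest of the heart model.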

Cross-device supervisory computer vision system

A supervisory computer vision (CV) system may include a secondary CV system running in parallel with a native CV system on a mobile device. The secondary CV system is configured to run less frequently than the native CV system. CV algorithms are then run on these less-frequent sample images, generating information for localizing the device to a reference point cloud (e.g., provided over a network) and for transforming between a local point cloud of the native CV system and the reference point cloud. AR content may then be consistently positioned relative to the convergent CV system's coordinate space and visualized on a display of the mobile device. Various related algorithms facilitate the efficient operation of this system.
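
For corresponding points, the transform between the native system's local point cloud and the reference point cloud can be sketched as a least-squares rigid alignment (the Kabsch/Umeyama construction); `rigid_align` below is an illustrative stand-in for whatever the secondary CV system actually computes.

```python
import numpy as np

def rigid_align(local_pts, ref_pts):
    """Least-squares rigid transform (R, t) mapping local -> reference.

    local_pts, ref_pts: (N, 3) arrays of corresponding points.
    Minimizes sum ||ref_i - (R @ local_i + t)||^2 via SVD (Kabsch).
    """
    mu_l, mu_r = local_pts.mean(0), ref_pts.mean(0)
    H = (local_pts - mu_l).T @ (ref_pts - mu_r)    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution.
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_r - R @ mu_l
    return R, t
```

Applying (R, t) to AR content anchored in the local coordinate space repositions it consistently in the reference point cloud's space.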

Multi-device object tracking and localization

Methods, systems, and devices for multi-device object tracking and localization are described. A device may transmit a request message associated with a target object to a set of devices within a target area. The request message may include an image of the target object, a feature of the target object, or at least a portion of a trained model associated with the target object. Subsequently, the device may receive response messages from the set of devices based on the request message. The response messages may include a portion of a captured image including the target object, location information of the devices, a pose of the devices, or temporal information of the target object detected within the target area by the devices. In some examples, the device may determine positional information with respect to the target object based on the one or more response messages.
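
The request/response exchange can be sketched as plain message types; every field name below is an assumption read off the abstract, and `estimate_position` is a deliberately naive stand-in (an average of the reporting devices' locations) for real multi-view localization.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class TrackRequest:
    """Broadcast to devices in the target area (hypothetical message shape)."""
    target_id: str
    image: Optional[bytes] = None           # image of the target object
    features: Optional[List[float]] = None  # feature descriptor of the target
    model_chunk: Optional[bytes] = None     # portion of a trained model

@dataclass
class TrackResponse:
    """Returned by a device that processed the request."""
    device_id: str
    image_crop: Optional[bytes] = None            # captured image portion
    location: Optional[Tuple[float, ...]] = None  # device location (x, y, z)
    pose: Optional[Tuple[float, ...]] = None      # device orientation
    timestamp: Optional[float] = None             # when the target was seen

def estimate_position(responses):
    """Toy positional estimate: average the locations of devices that
    reported one. A real system would triangulate using pose as well."""
    locs = [r.location for r in responses if r.location is not None]
    n = len(locs)
    return tuple(sum(c) / n for c in zip(*locs))
```

In practice the requesting device would weight responses by timestamp and detection confidence rather than averaging uniformly.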

APPARATUS, METHOD AND COMPUTER PROGRAM PRODUCT FOR GENERATING LOCATION INFORMATION OF AN OBJECT IN A SCENE
20220366573 · 2022-11-17 ·

An apparatus for generating location information of an object in a scene includes circuitry configured to: acquire image data of a scene from an image capture device; acquire predicted location information of an object in the scene indicative of a region of the scene in which the object is predicted to be located at a given time; detect one or more properties of the object from the image data, the properties of the object indicative of an observed location of the object in the scene; and generate location information of the object in the scene using the predicted location information and the one or more properties of the object.
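
Combining a predicted location with an observed one is, in its simplest form, a variance-weighted average (the scalar-gain case of a Kalman update). The function below is an illustrative sketch of that fusion step, not the circuitry claimed in the application.

```python
import numpy as np

def fuse_location(pred, pred_var, obs, obs_var):
    """Variance-weighted fusion of predicted and observed 2D positions.

    pred, obs: (x, y) positions; pred_var, obs_var: scalar variances
    expressing how uncertain each source is. Returns the fused position
    and its (reduced) variance.
    """
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    K = pred_var / (pred_var + obs_var)  # gain: trust the observation
                                         # more when the prediction is vague
    fused = pred + K * (obs - pred)
    fused_var = (1 - K) * pred_var
    return fused, fused_var
```

With equal variances the result is the midpoint; as the observation's variance shrinks, the fused location moves toward the detection.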

DISPLAYING BLOOD VESSELS IN ULTRASOUND IMAGES

A method and apparatus for identifying and displaying blood vessels in ultrasound images are described. In some embodiments, the method is implemented by a computing device and includes receiving ultrasound images that include a blood vessel, and determining, with a neural network implemented at least partially in hardware of the computing device, diameters of the blood vessel in the ultrasound images. The diameters include a respective diameter of the blood vessel for each ultrasound image of the ultrasound images. The method includes determining a blood vessel diameter based on the diameters of the blood vessel, selecting a color based on the blood vessel diameter, and indicating, in one of the ultrasound images, the blood vessel with an indicator having the color.
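
The diameter-reduction and color-selection steps can be sketched as follows; the thresholds and the median reduction are illustrative assumptions (the abstract does not say how the per-image diameters are combined or which colors are used).

```python
def vessel_diameter(per_image_diameters):
    """Reduce per-image diameter estimates to a single value.

    Uses the median for robustness to outlier frames; the patent's
    reduction rule is not specified in the abstract.
    """
    s = sorted(per_image_diameters)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def vessel_color(diameter_mm, small=2.0, large=4.0):
    """Map a vessel diameter (mm) to an indicator color.

    The thresholds are hypothetical, e.g. flagging vessels too small
    for a given procedure.
    """
    if diameter_mm < small:
        return "red"
    if diameter_mm < large:
        return "yellow"
    return "green"
```

A display layer would then draw the vessel indicator in the returned color on the selected ultrasound frame.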