G06T1/0014

Method and tracking system for tracking a medical object
11540884 · 2023-01-03

The disclosure relates to a method and a tracking system for tracking a medical object. Herein, image data obtained by an imaging method and a predetermined target position are acquired for the medical object. The image data is used to detect the medical object automatically with an image processing algorithm and to track its position in a time-resolved manner. Furthermore, it is indicated when, or that, the detected medical object has reached the target position. A plurality of the detected positions of the medical object and the associated detection times are stored in a database.
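The storage-and-indication step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the class name, 2D positions, tolerance parameter, and SQLite table layout are all assumptions.

```python
import math
import sqlite3
import time

class ObjectTracker:
    """Hypothetical sketch: store time-resolved detections and flag arrival."""

    def __init__(self, target_position, tolerance=1.0):
        self.target = target_position      # predetermined target (x, y)
        self.tolerance = tolerance         # distance considered "reached"
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE detections (x REAL, y REAL, t REAL)")

    def record_detection(self, position, detection_time=None):
        """Store a detected position with its detection time in the database."""
        t = time.time() if detection_time is None else detection_time
        self.db.execute("INSERT INTO detections VALUES (?, ?, ?)",
                        (position[0], position[1], t))
        return self.target_reached(position)

    def target_reached(self, position):
        """Indicate whether the object is within tolerance of the target."""
        dx = position[0] - self.target[0]
        dy = position[1] - self.target[1]
        return math.hypot(dx, dy) <= self.tolerance
```

Each call to `record_detection` both persists the (position, time) pair and returns the reached-target indication for that detection.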

Image signal processor, operating method thereof, and image processing system including the image signal processor

An image signal processor, an operating method thereof, and an image processing system are provided. The image processing system includes: a control processor configured to generate and output setting information corresponding to N (where N is an integer of 2 or more) image frames; and an image signal processor configured to perform image processing on the N image frames received from an image sensor based on the setting information, and generate an interrupt signal and transmit the interrupt signal to the control processor based on completion of the image processing performed on the N image frames.
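The frame-batching and interrupt handshake described above can be sketched as below. All names (`configure`, `receive_frame`, the callback standing in for the interrupt line) are illustrative assumptions, not the patent's terminology.

```python
class ImageSignalProcessor:
    """Hypothetical sketch: process N frames per setting, then signal completion."""

    def __init__(self, on_interrupt):
        self.on_interrupt = on_interrupt   # callback standing in for the IRQ line
        self.n_frames = 0
        self.processed = []

    def configure(self, setting_info, n_frames):
        """Control processor supplies setting information for the next N frames."""
        self.setting = setting_info
        self.n_frames = n_frames
        self.processed.clear()

    def receive_frame(self, frame):
        """Process one sensor frame; raise the interrupt after the Nth frame."""
        self.processed.append((self.setting, frame))
        if len(self.processed) == self.n_frames:
            self.on_interrupt(len(self.processed))
```

The interrupt fires exactly once per configured batch, after all N frames have been processed under the same setting information.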

Method for Treating Plants in a Field

A method for treating plants in a field, in which a specific crop is planted, has the following steps: selecting a treatment tool for treating plants; capturing an image of the field, the image being correlated with positional information; determining a position of a plant to be treated in a field, using a neural network into which the captured image is fed, wherein the neural network has multiple specific classes and a general class, the crop belongs to one of the specific classes, and plants not corresponding to the crop belong to both one of the specific classes and to the general class, or at least belong to the general class; directing the treatment tool to the position of the plant; and treating the plant using the treatment tool.
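The class-membership decision above (crop in a specific class; non-crop plants in another specific class and/or the general class) can be sketched as a small rule. The function name, score dictionary, and threshold are illustrative assumptions; the actual network and classes are defined by the patent.

```python
def should_treat(class_scores, crop_class, general_class, threshold=0.5):
    """Return True when the detected plant is not the crop.

    class_scores maps class name -> network score; a plant is treated
    when it belongs to the general class, or to a specific class other
    than the crop.
    """
    memberships = {c for c, s in class_scores.items() if s >= threshold}
    if general_class in memberships:
        return True
    return any(c != crop_class for c in memberships)
```

A plant scored only as the crop is left alone; anything falling into another specific class or the general catch-all class is directed to the treatment tool.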

INDOOR NAVIGATION METHOD, INDOOR NAVIGATION EQUIPMENT, AND STORAGE MEDIUM
20220404153 · 2022-12-22

An indoor navigation method is provided, including: receiving an instruction for navigation, and collecting an environment image; extracting an instruction room feature and an instruction object feature carried in the instruction, and determining a visual room feature, a visual object feature, and a view angle feature based on the environment image; fusing the instruction object feature and the visual object feature with a first knowledge graph representing an indoor object association relationship to obtain an object feature, and determining a room feature based on the visual room feature and the instruction room feature; and determining a navigation decision based on the view angle feature, the room feature, and the object feature.
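The knowledge-graph fusion step can be illustrated with a deliberately simple stand-in: the graph is assumed to be a dict mapping an object to related objects, and "fusion" is reduced to a relatedness score over visible objects. This is a sketch of the idea only, not the patent's fusion mechanism.

```python
def fuse_object_features(instruction_obj, visual_objs, knowledge_graph):
    """Score visible objects by their relatedness to the instructed object.

    knowledge_graph: dict mapping object name -> list of associated objects
    (a minimal stand-in for the indoor object association relationship).
    """
    related = set(knowledge_graph.get(instruction_obj, [])) | {instruction_obj}
    return {obj: (1.0 if obj in related else 0.0) for obj in visual_objs}
```

In the abstract's terms, a visible "pillow" would score highly when the instruction mentions a "bed" the graph associates it with, steering the navigation decision toward the right room.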

SCENE PERCEPTION SYSTEMS AND METHODS

Scene perception systems and methods are described herein. In certain illustrative examples, a system combines data sets associated with imaging devices included in a dynamic multi-device architecture and uses the combined data sets to perceive a scene (e.g., a surgical scene) imaged by the imaging devices. To illustrate, the system may access tracking data for imaging devices capturing images of a scene and fuse, based on the tracking data, data sets respectively associated with the imaging devices to generate fused sets of data for the scene. The tracking data may represent a change in a pose of at least one of the imaging devices that occurs while the imaging devices capture images of the scene. The fused sets of data may represent or be used to generate perceptions of the scene. In certain illustrative examples, scene perception is dynamically optimized using a feedback control loop.

OPTICAL AXIS CALIBRATION OF ROBOTIC CAMERA SYSTEM
20220392012 · 2022-12-08

A method, instructions for which are executed from a computer-readable medium, calibrates a robotic camera system having a digital camera connected to an end-effector of a serial robot. The end-effector and camera move within a robot motion coordinate frame ("robot frame"). The method includes acquiring, using the camera, a reference image of a target object on an image plane having an optical coordinate frame, and receiving input signals, including a depth measurement and joint position signals. Separate roll and pitch offsets of a target point within the reference image are determined with respect to the robot frame while moving the robot. Offsets are also determined with respect to the x, y, and z axes of the robot frame while moving the robot through another motion sequence. The offsets are stored in a transformation matrix, which is used to control the robot during subsequent operation of the camera system.
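Packing the calibrated offsets into a transformation matrix can be sketched as building a 4x4 homogeneous transform. The roll-about-x / pitch-about-y convention chosen here is an assumption for illustration; the patent does not specify the rotation order.

```python
import math

def offsets_to_transform(roll, pitch, x, y, z):
    """Build a 4x4 homogeneous transform from angular and linear offsets.

    Assumed convention: R = Ry(pitch) @ Rx(roll), with the x/y/z offsets
    in the translation column.
    """
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    return [
        [cp,  sp * sr, sp * cr, x],
        [0.0, cr,      -sr,     y],
        [-sp, cp * sr, cp * cr, z],
        [0.0, 0.0,     0.0,     1.0],
    ]
```

With zero roll and pitch the rotation block reduces to the identity, and only the linear offsets remain in the last column.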

CUSTOMIZING CLEANING CYCLES
20220392011 · 2022-12-08

A method includes receiving, at an artificial intelligence (AI) accelerator of a computing system, image data of one or more objects from an image sensor and performing an AI operation on the image data at the AI accelerator of the computing system using an AI model. The method further includes determining a custom cleaning cycle at the AI accelerator in response to performing the AI operation.

CONTEXTUAL POLICY-BASED COMPUTER VISION CONTROL

A tool for providing contextual policy-based computer vision control. The tool identifies one or more camera devices in an environment. The tool determines a plurality of vision areas in a field of view within the environment. The tool determines one or more contextual policies for the plurality of vision areas in the field of view. The tool determines one or more vision augmentations for the plurality of vision areas based, at least in part, on an aggregate computer vision capability of the one or more camera devices in the environment and the one or more contextual policies for the plurality of vision areas in the field of view. The tool applies the one or more vision augmentations to at least one of the one or more camera devices in the environment.

Service providing system, service providing method and management apparatus for service providing system

A service providing system including: a mobile robot configured to act in response to an instruction from a user through wireless communication and including a sensor having a capability corresponding to a human sensing capability to perceive an external world; a target identifying unit configured to identify whether an action target of the robot is a human being or another robot; a user apparatus control portion configured to output the signal detected by the sensor in a first manner when the action target is identified to be a human being and output the signal detected by the sensor in a second manner when the action target is identified to be another robot; and a user apparatus configured to act in a manner perceivable by the user based on an output signal.

HAND-EYE CALIBRATION OF CAMERA-GUIDED APPARATUSES
20220383547 · 2022-12-01

The invention describes a generic framework for hand-eye calibration of camera-guided apparatuses, wherein the rigid 3D transformation between the apparatus and the camera must be determined. An example of such an apparatus is a camera-guided robot.
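Hand-eye calibration is classically posed as solving A·X = X·B, where A is a relative motion of the robot, B the corresponding relative motion observed by the camera, and X the unknown rigid transform between them. The sketch below only checks that a candidate X satisfies this equation; it is an illustration of the problem setup, not the framework's solver, and the 4x4 nested-list representation is an assumption.

```python
def matmul4(p, q):
    """Multiply two 4x4 homogeneous transforms given as nested lists."""
    return [[sum(p[i][k] * q[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def satisfies_hand_eye(a, b, x, tol=1e-9):
    """Check the hand-eye equation A·X ≈ X·B element-wise."""
    ax, xb = matmul4(a, x), matmul4(x, b)
    return all(abs(ax[i][j] - xb[i][j]) <= tol
               for i in range(4) for j in range(4))
```

In practice several (A, B) motion pairs are collected and X is solved for jointly; a checker like this is still useful for validating a calibration result.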