G06T7/20

Method for supporting a user, computer program product, data medium and imaging system
11576557 · 2023-02-14

A method for supporting a user, a corresponding computer program product, a corresponding data medium, and a corresponding imaging system are provided. According to the method, a three-dimensional (3D) data set depicting a target object is provided, and at least one two-dimensional (2D) image of the target object is automatically acquired. The 2D image and the 3D data set are automatically registered with each other by a 2D/3D registration. A spatial direction in which the 2D/3D registration exhibits greatest uncertainty is automatically specified. A signal for aligning an instrument that is provided for the purpose of examining the target object is then automatically generated and output as a function of the specified spatial direction in order to support the user.
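The abstract does not specify how the direction of greatest registration uncertainty is determined. As an illustrative sketch only (not the patented method), one common approach is to take the eigenvector belonging to the largest eigenvalue of the registration's translational covariance; for a single X-ray view this is typically the depth direction along the beam:

```python
import numpy as np

def direction_of_greatest_uncertainty(covariance: np.ndarray) -> np.ndarray:
    """Return the unit spatial direction in which a 3x3 translational
    covariance of a 2D/3D registration is largest (hypothetical sketch)."""
    eigenvalues, eigenvectors = np.linalg.eigh(covariance)  # ascending order
    return eigenvectors[:, -1]  # eigenvector of the largest eigenvalue

# Illustrative covariance: uncertainty dominated by the z axis,
# i.e. depth along the imaging beam
cov = np.diag([0.1, 0.2, 2.5])
direction = direction_of_greatest_uncertainty(cov)
```

A guidance signal could then, for example, ask the user to align the instrument so that its axis avoids the returned direction.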

Method and device for carrying out eye gaze mapping

The invention relates to a device and a method for performing an eye gaze mapping (M), in which at least one point of vision (B) and/or a viewing direction of at least one person (10) in relation to at least one scene recording (S) of a scene (12) viewed by the at least one person (10) is mapped onto a reference (R). At least a part of an algorithm (A1, A2, A3) for performing the eye gaze mapping (M) is thereby selected from multiple predetermined algorithms (A1, A2, A3) as a function of at least one parameter (P), and the eye gaze mapping (M) is performed on the basis of the at least one part of the algorithm (A1, A2, A3).
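The parameter-dependent algorithm selection described above can be sketched as a simple dispatch table. The algorithm names and the selection parameter below are illustrative assumptions, not taken from the patent:

```python
# Hypothetical mapping algorithms (stand-ins for A1, A2, A3)
def map_by_marker(point, scene):
    return ("marker", point)

def map_by_features(point, scene):
    return ("features", point)

def map_manually(point, scene):
    return ("manual", point)

ALGORITHMS = {
    "marker_present": map_by_marker,
    "high_texture": map_by_features,
}

def perform_gaze_mapping(point, scene, parameter: str):
    """Select (a part of) the mapping algorithm as a function of a
    parameter, then map the point of vision onto the reference."""
    algorithm = ALGORITHMS.get(parameter, map_manually)
    return algorithm(point, scene)

result = perform_gaze_mapping((320, 240), None, "marker_present")
```

The parameter could encode scene properties (e.g. marker visibility or texture richness), so that each scene recording is mapped with the algorithm best suited to it.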

Model learning device, model learning method, and recording medium
11580784 · 2023-02-14

A model learning device is provided with: an error-added movement locus generation unit that adds an error to movement locus data for action learning, which represents the movement locus of a subject and is assigned an action label (information representing the subject's action), thereby generating error-added movement locus data; and an action recognition model learning unit that learns a model, using at least the error-added movement locus data and learning data created on the basis of the action label, by which the action of a subject can be recognized from the subject's movement locus. Thus, it is possible to provide a model by which the action of a subject can be recognized with high accuracy on the basis of the movement locus estimated from a camera image.
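The error-addition step can be sketched as perturbing labelled trajectory data before training, so the model tolerates the estimation error of camera-derived loci. The Gaussian noise model and array shapes below are illustrative assumptions:

```python
import numpy as np

def add_locus_error(locus: np.ndarray, sigma: float, seed=None) -> np.ndarray:
    """Return a copy of an (N, 2) movement locus with Gaussian
    error added (hypothetical noise model)."""
    rng = np.random.default_rng(seed)
    return locus + rng.normal(0.0, sigma, size=locus.shape)

# Illustrative labelled locus: three positions of a subject, label "walking"
locus = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])
noisy = add_locus_error(locus, sigma=0.05, seed=0)

# Both (locus, "walking") and (noisy, "walking") would enter the
# training set for the action-recognition model
```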

Obtaining image data of an object in a scene

A method and processor system are provided which analyze a depth map, which may be obtained from a range sensor capturing depth information of a scene, to identify where an object is located in the scene. Accordingly, a region of interest may be identified in the scene which includes the object, and image data may be selectively obtained of the region of interest, rather than of the entire scene containing the object. This image data may be acquired by an image sensor configured for capturing visible light information of the scene. By only selectively obtaining the image data within the region of interest, rather than all of the image data, improvements may be realized in the computational complexity of a possible further processing of the image data, the storage of the image data and/or the transmission of the image data.
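A minimal sketch of the idea, assuming a simple depth-threshold segmentation (the patent does not fix a particular object-detection method): locate the object in the depth map, derive a bounding-box region of interest, and read out only that crop from the visible-light frame:

```python
import numpy as np

def region_of_interest(depth: np.ndarray, near: float, far: float):
    """Bounding box (top, bottom, left, right) of pixels whose
    depth falls within [near, far] (hypothetical segmentation)."""
    mask = (depth > near) & (depth < far)
    rows, cols = np.nonzero(mask)
    return rows.min(), rows.max() + 1, cols.min(), cols.max() + 1

# Illustrative depth map: background at 5 m, object at about 1 m
depth = np.full((8, 8), 5.0)
depth[2:5, 3:6] = 1.0

top, bottom, left, right = region_of_interest(depth, near=0.5, far=2.0)

color = np.zeros((8, 8, 3))               # full visible-light frame
roi = color[top:bottom, left:right]       # only this crop is stored/sent
```

Processing, storing, or transmitting only `roi` instead of `color` yields the reductions in computation, storage, and bandwidth the abstract describes.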

Systems and methods for scanning a patient in an imaging system

The present disclosure relates to a method for scanning a patient in an imaging system. The imaging system may include one or more cameras directed at the patient. The method may include obtaining a position of each of the camera(s) relative to the imaging system. The method may also include obtaining image data of the patient captured by the camera(s), wherein the image data may correspond to a first view with respect to the patient. The method may further include generating projection image data of the patient based on the image data and the position of each of the camera(s) relative to the imaging system, wherein the projection image data may correspond to a second view with respect to the patient different from the first view. The method may further include generating control information for scanning the patient based on the projection image data of the patient.
