Person counting image processing apparatus, method, and storage medium
11568557 · 2023-01-31

To improve calculation efficiency by properly reducing the number of regression regions based on the size of an object to be detected, a processing apparatus has at least one processor or circuit configured to function as: a size obtain unit configured to obtain a size of a predetermined object in the image; a set unit configured to divide the image into a plurality of regions based on the size obtained by the size obtain unit, and to set regression regions for estimating a number of the predetermined object, wherein the set unit is configured to inhibit setting regression regions smaller than a predetermined minimum size corresponding to the predetermined object; and an estimate unit configured to estimate the number of the predetermined object by performing a regression process on the regression regions set by the set unit.
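The region-setting step described above can be sketched as follows. This is an illustrative reading of the abstract, not the patent's implementation: the function name, the `scale` factor, and square tiling are assumptions.

```python
# Hypothetical sketch of the region-setting step: regression regions are
# sized from the detected object size, but never set smaller than a
# predetermined minimum size (the "inhibit" behaviour in the abstract).

def set_regression_regions(img_w, img_h, object_size, min_size=32, scale=4):
    """Tile the image into square regression regions.

    Each region's side is `scale` times the object size, clamped up to
    `min_size` so that no region smaller than the minimum is set.
    """
    side = max(object_size * scale, min_size)
    regions = []
    for y in range(0, img_h, side):
        for x in range(0, img_w, side):
            # clip the region to the image bounds
            regions.append((x, y, min(x + side, img_w), min(y + side, img_h)))
    return regions

# A small object (4 px) would yield 16-px regions; the minimum size
# inhibits that, so 32-px regions are set instead.
regions = set_regression_regions(640, 480, object_size=4, min_size=32)
```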

System and method for image segmentation

Methods and systems for image processing are provided. Image data may be obtained. The image data may include a plurality of voxels corresponding to a first plurality of ribs of an object. A first plurality of seed points may be identified for the first plurality of ribs. The first plurality of identified seed points may be labelled to obtain labelled seed points. A connected domain of a target rib of the first plurality of ribs may be determined based on at least one rib segmentation algorithm. A labelled target rib may be obtained by labelling, based on a hit-or-miss operation, the connected domain of the target rib, wherein the hit-or-miss operation may be performed using the labelled seed points to hit the connected domain of the target rib.
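The hit-or-miss labelling step can be illustrated with a minimal sketch: the rib label carried by a seed point is propagated to the connected domain that the seed hits. This is an assumed simplification (2-D arrays, name `label_by_hit`), not the patent's actual segmentation pipeline.

```python
# Minimal sketch: a labelled seed point "hits" a connected domain, and
# the domain inherits the seed's rib label. Array layout is an assumption.

import numpy as np

def label_by_hit(components, seeds):
    """components: int array, 0 = background, k = k-th connected domain.
    seeds: list of (index_tuple, rib_label) pairs.
    Returns an array where each hit domain carries its rib label."""
    labelled = np.zeros_like(components)
    for point, rib_label in seeds:
        comp_id = components[point]
        if comp_id:  # the seed actually hits a connected domain
            labelled[components == comp_id] = rib_label
    return labelled

components = np.array([[0, 1, 1],
                       [0, 0, 2],
                       [3, 0, 2]])
seeds = [((0, 1), 7), ((1, 2), 9)]  # rib labels 7 and 9
out = label_by_hit(components, seeds)
```

Domain 3 is never hit by a seed, so it stays unlabelled.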

Medical environment monitoring system

A system and a method are described for monitoring a medical care environment. In one or more implementations, a method includes identifying a first subset of pixels within a field of view of a camera as representing a bed. The method also includes identifying a second subset of pixels within the field of view of the camera as representing an object (e.g., a subject such as a patient or medical personnel; a bed; a chair; a patient tray; medical equipment; etc.) proximal to the bed. The method also includes determining an orientation of the object within the bed.

METHOD AND APPARATUS FOR TRAINING IMAGE PROCESSING MODEL
20230028237 · 2023-01-26

A method for training an image processing model is provided. After an augmented image is obtained, a soft label of the augmented image is obtained, and the image processing model is trained under the guidance of the soft label, to improve performance of the image processing model. In addition, according to the method, the image processing model is trained under the guidance of a soft label with a relatively high score, selected from among the soft labels of the augmented image, to further improve performance of the image processing model.
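The selection idea can be sketched as follows: among the soft labels produced for several augmented views, keep the one whose top score is highest and use it to guide training. The cross-entropy guidance loss and all names here are assumptions for illustration, not the patent's exact formulation.

```python
# Hedged sketch: pick the highest-scoring soft label among augmented
# views, then use it as the training target for the model's predictions.

import math

def select_soft_label(soft_labels):
    """Pick the soft label whose highest class score is largest."""
    return max(soft_labels, key=lambda probs: max(probs))

def soft_label_loss(student_probs, soft_label):
    """Cross-entropy of the model's predictions against the chosen label."""
    return -sum(t * math.log(p) for t, p in zip(soft_label, student_probs))

soft_labels = [
    [0.5, 0.3, 0.2],  # low-confidence view
    [0.8, 0.1, 0.1],  # high-confidence view -> selected
]
chosen = select_soft_label(soft_labels)
loss = soft_label_loss([0.7, 0.2, 0.1], chosen)
```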

DYNAMIC FACIAL HAIR CAPTURE OF A SUBJECT

Embodiments of the present disclosure are directed to methods and systems for generating three-dimensional (3D) models and facial hair models representative of subjects (e.g., actors or actresses) using facial scanning technology. Methods according to embodiments may be useful for performing facial capture on subjects with dense facial hair. Initial subject facial data, including facial frames and facial performance frames (e.g., images of the subject collected from a capture system), can be used to accurately predict the structure of the subject's face underneath their facial hair to produce a reference 3D facial shape of the subject. Likewise, image processing techniques can be used to identify facial hairs and generate a reference facial hair model. The reference 3D facial shape and reference facial hair model can subsequently be used to generate performance 3D facial shapes and a performance facial hair model corresponding to a performance by the subject (e.g., reciting dialog).

INFORMATION PROCESSING DEVICE, CONTROL METHOD, AND STORAGE MEDIUM
20230021345 · 2023-01-26

The information processing device 4 includes an acquisition unit 40A, a structural feature point extraction unit 41A, a common coordinate system transformation unit 42A, and a structural feature point integration unit 43A. The acquisition unit 40A is configured to acquire captured images Im generated by plural image acquisition devices 5 and positional relation information Ip indicative of a positional relation among the plural image acquisition devices 5. The structural feature point extraction unit 41A is configured to extract, from each of the captured images Im, intra-image coordinate values Pi of structural feature points of a target structure observed through a display device, the display device displaying a virtual object superimposed on a view. The common coordinate system transformation unit 42A is configured to convert, based on the positional relation information Ip, the intra-image coordinate values Pi for each of the structural feature points extracted from each of the captured images Im into individual coordinate values Pc indicating coordinate values in a common coordinate system, which is a common three-dimensional coordinate system. The structural feature point integration unit 43A is configured to determine, for each of the structural feature points, a representative coordinate value Pcr in the common coordinate system based on the individual coordinate values Pc in the common coordinate system.
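The transformation and integration steps can be sketched as follows. Representing each device's positional relation as a 4x4 homogeneous pose and using the mean as the representative coordinate Pcr are assumptions made for illustration; the abstract does not fix either choice.

```python
# Sketch: per-image feature-point coordinates are mapped into a common
# 3D coordinate system via each camera's pose (the positional relation
# information Ip), then reduced to one representative coordinate Pcr.

import numpy as np

def to_common(points_cam, pose):
    """pose: 4x4 camera-to-common transform; points_cam: (N, 3) array."""
    homog = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (homog @ pose.T)[:, :3]

def representative(individual_coords):
    """Representative coordinate Pcr from individual coordinates Pc."""
    return np.mean(individual_coords, axis=0)

# Two cameras observing the same structural feature point.
pose_a = np.eye(4)
pose_b = np.eye(4)
pose_b[:3, 3] = [1.0, 0.0, 0.0]  # camera B offset by 1 m along x
pc_a = to_common(np.array([[0.0, 0.0, 2.0]]), pose_a)
pc_b = to_common(np.array([[-1.1, 0.0, 2.0]]), pose_b)
pcr = representative(np.vstack([pc_a, pc_b]))
```

The two observations disagree slightly (measurement noise), and the representative coordinate averages them out.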

Methods and systems for dynamic coronary roadmapping

Methods are provided for dynamically visualizing information in image data of an object of interest of a patient, which include an offline phase and an online phase. In the offline phase, first image data of the object of interest acquired with a contrast agent is obtained, with an interventional device present in the first image data. The first image data is used to generate a plurality of roadmaps of the object of interest. A plurality of reference locations of the device in the first image data is determined, wherein the plurality of reference locations correspond to the plurality of roadmaps. In the online phase, live image data of the object of interest acquired without a contrast agent is obtained with the device present in the live image data, and a roadmap is selected from the plurality of roadmaps. A location of the device in the live image data is determined. The reference location of the device corresponding to the selected roadmap and the location of the device in the live image data are used to transform the selected roadmap to generate a dynamic roadmap of the object of interest. A visual representation of the dynamic roadmap is overlaid on the live image data for display. In embodiments, the first image data of the offline phase covers different phases of the cardiac cycle of the patient, and the plurality of roadmaps generated in the offline phase covers the different phases of the patient's cardiac cycle. Related systems and program storage devices are also described and claimed.
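The online phase can be sketched in simplified form: select the roadmap whose cardiac phase best matches the live frame, then shift it by the displacement between that roadmap's reference device location and the device location detected live. Phase-based selection and a pure translation are assumptions; the patent's transform may be richer.

```python
# Illustrative sketch (assumed, simplified) of the online phase of
# dynamic coronary roadmapping: roadmap selection + device-based shift.

def select_roadmap(roadmaps, live_phase):
    """roadmaps: list of dicts with 'phase', 'points', 'ref_location'."""
    return min(roadmaps, key=lambda r: abs(r["phase"] - live_phase))

def dynamic_roadmap(roadmap, live_device_loc):
    """Translate the roadmap by the device displacement."""
    dx = live_device_loc[0] - roadmap["ref_location"][0]
    dy = live_device_loc[1] - roadmap["ref_location"][1]
    return [(x + dx, y + dy) for x, y in roadmap["points"]]

roadmaps = [
    {"phase": 0.1, "points": [(10, 10), (20, 30)], "ref_location": (12, 15)},
    {"phase": 0.6, "points": [(11, 12), (22, 33)], "ref_location": (14, 18)},
]
selected = select_roadmap(roadmaps, live_phase=0.55)
warped = dynamic_roadmap(selected, live_device_loc=(16, 20))
```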

COMPUTER-IMPLEMENTED METHOD FOR EVALUATING AN ANGIOGRAPHIC COMPUTED TOMOGRAPHY DATASET, EVALUATION DEVICE, COMPUTER PROGRAM AND ELECTRONICALLY READABLE DATA MEDIUM

At least one vascular tree supplying at least a part of the hollow organ in the computed tomography dataset is segmented, and a tree structure is determined from the blood vessel segmentation result, up to the order made possible by that result. Perfusion information for each edge in the tree structure is assigned as at least one of the computed tomography data assigned to the corresponding blood vessel segment or at least one value derived therefrom. Adjacent hollow organ segments of the hollow organ are defined based on supply by adjacent blood vessels in the tree structure, and the tree structure and the perfusion information are analyzed to determine hemodynamic information to assign to the hollow organ segments. At least a part of the hemodynamic information is then visualized in at least one of the computed tomography dataset or a visualization dataset derived therefrom.
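The assignment steps can be sketched as follows. Using the mean CT value of a vessel segment as its edge's perfusion information, and averaging over supplying edges per organ segment, are assumptions chosen purely for illustration.

```python
# Simplified, assumed sketch: each tree edge carries perfusion
# information derived from the CT data of its vessel segment, and each
# hollow organ segment aggregates the info of the vessels supplying it.

def assign_perfusion(tree_edges, ct_values):
    """tree_edges: {edge_id: [voxel ids]} -> per-edge mean CT value."""
    return {e: sum(ct_values[v] for v in vox) / len(vox)
            for e, vox in tree_edges.items()}

def organ_segment_hemodynamics(organ_segments, edge_perfusion):
    """organ_segments: {segment: [supplying edge ids]} -> mean perfusion."""
    return {s: sum(edge_perfusion[e] for e in edges) / len(edges)
            for s, edges in organ_segments.items()}

ct_values = {0: 100.0, 1: 120.0, 2: 80.0, 3: 90.0}
tree_edges = {"e1": [0, 1], "e2": [2, 3]}
organ_segments = {"seg_a": ["e1"], "seg_b": ["e1", "e2"]}
perf = assign_perfusion(tree_edges, ct_values)
hemo = organ_segment_hemodynamics(organ_segments, perf)
```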

STAIN-FREE DETECTION OF EMBRYO POLARIZATION USING DEEP LEARNING

Disclosed herein include systems, devices, and methods for detecting embryo polarization from a 2D image generated from a 3D image of an embryo that is not fluorescently labeled using a convolutional neural network (CNN), e.g., deep CNN.

DESIGN OPTIMIZATION AND USE OF CODEBOOKS FOR DOCUMENT ANALYSIS
20230028992 · 2023-01-26

A method of generating and optimizing a codebook for document analysis comprises: receiving a first set of document images; extracting a plurality of keypoint regions from each document image of the first set of document images; calculating local descriptors for each keypoint region of the extracted keypoint regions; clustering the local descriptors such that each center of a cluster of local descriptors corresponds to a respective visual word; generating a codebook containing a set of visual words; and optimizing the codebook by maximizing mutual information (MI) between a target field of a second set of document images and at least one visual word of the set of visual words.
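The optimization criterion can be sketched as follows: score each visual word by the mutual information between its presence in a document image and the target field's value, then keep the highest-MI words. Binary presence features and this plug-in MI estimator are assumptions made for illustration.

```python
# Hedged sketch of MI-based codebook optimization: a word that predicts
# the target field gets high MI; an uninformative word gets MI near zero.

import math
from collections import Counter

def mutual_information(word_presence, field_values):
    """MI (in nats) between a binary word-presence variable and a field."""
    n = len(word_presence)
    joint = Counter(zip(word_presence, field_values))
    p_w = Counter(word_presence)
    p_f = Counter(field_values)
    mi = 0.0
    for (w, f), c in joint.items():
        p_wf = c / n
        mi += p_wf * math.log(p_wf * n * n / (p_w[w] * p_f[f]))
    return mi

# Word A perfectly predicts the field; word B is independent of it.
field = ["invoice", "invoice", "receipt", "receipt"]
word_a = [1, 1, 0, 0]
word_b = [1, 0, 1, 0]
mi_a = mutual_information(word_a, field)
mi_b = mutual_information(word_b, field)
```

An optimized codebook would retain word A (MI = ln 2) and could drop word B (MI = 0).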