IMAGE PROCESSING SYSTEM AND METHOD
A computer-implemented method of determining a pose of each of a plurality of objects includes, for each given object: using image data and associated depth information to estimate a pose of the given object. The method includes iteratively updating the estimated poses by: sampling, for each given object, a plurality of points from a predetermined model of the given object transformed in accordance with the estimated pose of the given object; determining first occupancy data for each given object dependent on positions of the points sampled from the predetermined model, relative to a voxel grid containing the given object; determining second occupancy data for each given object dependent on positions of the points sampled from the predetermined models of the other objects, relative to the voxel grid containing the given object; and updating the estimated poses of the plurality of objects to reduce an occupancy penalty.
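The occupancy step above can be sketched as follows. This is a minimal stand-in, not the patented method: the helper names (`voxelize`, `occupancy_penalty`) and the interpenetration-count penalty are illustrative assumptions.

```python
import numpy as np

def voxelize(points, grid_min, voxel_size, grid_shape):
    """Occupancy data: mark the voxels covered by points sampled from a model."""
    occ = np.zeros(grid_shape, dtype=bool)
    idx = np.floor((points - grid_min) / voxel_size).astype(int)
    # keep only points that fall inside the voxel grid
    ok = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    occ[tuple(idx[ok].T)] = True
    return occ

def occupancy_penalty(own_occ, other_occ):
    """Penalty grows with voxels claimed by both objects (interpenetration)."""
    return int(np.logical_and(own_occ, other_occ).sum())
```

An update step would then perturb each estimated pose, re-sample model points, and keep perturbations that reduce the summed penalty across object pairs.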
Device and method for detecting clinically important objects in medical images with distance-based decision stratification
A method for performing a computer-aided diagnosis (CAD) includes: acquiring a medical image set; generating a three-dimensional (3D) tumor distance map corresponding to the medical image set, each voxel of the tumor distance map representing a distance from the voxel to a nearest boundary of a primary tumor present in the medical image set; and performing neural-network processing of the medical image set to generate a predicted probability map to predict presence and locations of oncology significant lymph nodes (OSLNs) in the medical image set, wherein voxels in the medical image set are stratified and processed according to the tumor distance map.
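A distance map and stratification of the kind described can be sketched as below. This is a brute-force toy (distance to the nearest tumor voxel rather than to the tumor boundary, and a single near/far split); the function names and threshold are assumptions for illustration.

```python
import numpy as np

def tumor_distance_map(tumor_mask, spacing=1.0):
    """Each voxel gets the Euclidean distance to the nearest tumor voxel."""
    tumor_idx = np.argwhere(tumor_mask)
    coords = np.argwhere(np.ones_like(tumor_mask, dtype=bool))
    # brute-force nearest-neighbour distance; fine only for tiny volumes
    d = np.linalg.norm(coords[:, None, :] - tumor_idx[None, :, :], axis=-1).min(axis=1)
    return (d * spacing).reshape(tumor_mask.shape)

def stratify(distance_map, threshold):
    """Distance-based stratification: split voxels into near/far groups."""
    return distance_map <= threshold, distance_map > threshold
```

The near and far voxel groups could then be routed to separately trained network branches, as the abstract's stratified processing suggests.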
IMAGE ANALYSIS METHOD, IMAGE GENERATION METHOD, LEARNING-MODEL GENERATION METHOD, ANNOTATION APPARATUS, AND ANNOTATION PROGRAM
Usability in annotating an image of a subject derived from a living body is improved. An image analysis method is implemented by one or more computers and includes: displaying a first image of a subject derived from a living body; acquiring information regarding a first region based on a first annotation added to the first image by a user (S101); specifying, based on the information regarding the first region, a similar region either in a region of the first image different from the first region or in a second image obtained by capturing a region including at least part of the region of the subject captured in the first image (S102, S103); and displaying a second annotation in a second region corresponding to the similar region in the first image (S104).
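The similar-region search could, in the simplest case, be a patch-matching scan like the sketch below. Sum-of-squared-differences matching and the `find_similar_region` signature are assumptions, not the patent's method, and the overlap test is deliberately naive.

```python
import numpy as np

def find_similar_region(image, annot_box):
    """Slide the annotated patch over the image and return the most similar
    location that does not overlap the first annotation (SSD; illustrative)."""
    r0, c0, h, w = annot_box
    patch = image[r0:r0 + h, c0:c0 + w]
    best, best_pos = None, None
    for r in range(image.shape[0] - h + 1):
        for c in range(image.shape[1] - w + 1):
            if abs(r - r0) < h and abs(c - c0) < w:
                continue  # skip windows overlapping the first region
            ssd = float(((image[r:r + h, c:c + w] - patch) ** 2).sum())
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

A second annotation would then be drawn at the returned position.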
IMAGE PROCESSING METHOD, ELECTRONIC DEVICE, AND STORAGE MEDIUM
An image processing method includes: determining a first face mask image that does not contain hair from a target image, and obtaining a first face region that does not contain hair from the target image according to the first face mask image; filling a preset grayscale color outside the first face region to generate an image to be sampled; performing down-sampling on the image to be sampled to obtain sampling results, and obtaining remaining sampling results by removing one or more sampling results in which the color is the preset grayscale color; obtaining a target color by calculating a mean color value of the remaining sampling results and performing a weighted summation of a preset standard face color and the mean color value; and rendering pixels in a face region of the target image according to the target color.
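The sampling and blending steps can be condensed into a few lines. The strided down-sampling, the 50/50 blend weight, and the function name are illustrative assumptions; the abstract does not specify them.

```python
import numpy as np

def estimate_face_color(face_pixels, fill_gray, standard_color, weight=0.5, step=4):
    """Down-sample, drop fill-colored samples, average the rest, then blend
    the mean with a preset standard face color."""
    samples = face_pixels[::step]                    # naive down-sampling
    keep = ~np.all(samples == fill_gray, axis=1)     # remove fill-color samples
    mean_color = samples[keep].mean(axis=0)
    return weight * np.asarray(standard_color, float) + (1 - weight) * mean_color
```

The returned target color would then be used to render the face region.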
MULTI-SCALE TRANSFORMER FOR IMAGE ANALYSIS
The technology employs a patch-based multi-scale Transformer (300) that is usable with various imaging applications. This avoids the constraint of a fixed input image size and predicts quality effectively on a native-resolution image. A native-resolution image (304) is transformed into a multi-scale representation (302), enabling the Transformer's self-attention mechanism to capture information on both fine-grained detailed patches and coarse-grained global patches. Spatial embedding (316) is employed to map patch positions to a fixed grid, in which patch locations at each scale are hashed to the same grid. A separate scale embedding (318) is employed to distinguish patches coming from different scales in the multi-scale representation. Self-attention (508) is performed to create a final image representation. In some instances, prior to performing self-attention, the system may prepend a learnable classification token (322) to the set of input tokens.
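The hashing of patch locations at each scale onto one fixed grid can be sketched as below. The grid size and the row-major index layout are assumptions for illustration, not values from the patent.

```python
import numpy as np

def hash_patch_positions(n_rows, n_cols, grid=10):
    """Map each patch's (row, col) at a given scale onto a fixed grid, so
    patches from different scales share one spatial-embedding table."""
    rows = np.floor(np.arange(n_rows) * grid / n_rows).astype(int)
    cols = np.floor(np.arange(n_cols) * grid / n_cols).astype(int)
    r, c = np.meshgrid(rows, cols, indexing="ij")
    return r * grid + c  # index into a grid*grid spatial-embedding table
```

A coarse 5×5 patch layout and a fine 10×10 layout thus index the same 100-entry embedding table, with a separate scale embedding telling them apart.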
SELF-ADAPTIVE MULTI-SCALE RESPIRATORY MONITORING METHOD BASED ON CAMERA
A self-adaptive multi-scale respiratory monitoring method based on a camera relates to the technical field of video image signal identification and processing. To overcome the defect that neither the locally optimal nor the globally optimal respiratory signal can be acquired at a single image scale, a method is provided: (1) acquiring a respiratory monitoring object in real time; (2) performing multi-scale regular pre-segmentation on a video image, performing local respiratory signal identification and extraction on each pre-segmented unit area at each scale, and defining each unit area that outputs a local respiratory signal as a target area; and (3) comparing the local respiratory signals extracted from the target areas pre-segmented at each scale, determining an optimal segmentation scale, and outputting the local respiratory signal extracted from the target area at the optimal segmentation scale as the monitored respiratory signal. Reliability is improved, and intelligent monitoring is realized.
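The scale-selection step can be sketched as below: segment each frame into s×s blocks, extract a per-block intensity trace, and keep the trace with the strongest periodic component. The FFT-peak quality measure is an assumption; the abstract does not say how signals are compared.

```python
import numpy as np

def best_scale_signal(frames, scales):
    """Pre-segment frames into s*s blocks per scale, extract a mean-intensity
    trace per block, and return the trace with the largest spectral peak."""
    best_power, best_trace = -1.0, None
    for s in scales:
        h, w = frames.shape[1] // s, frames.shape[2] // s
        for i in range(s):
            for j in range(s):
                trace = frames[:, i*h:(i+1)*h, j*w:(j+1)*w].mean(axis=(1, 2))
                spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
                power = spectrum.max() if len(spectrum) > 1 else 0.0
                if power > best_power:
                    best_power, best_trace = power, trace
    return best_trace
```

A finer scale wins when the breathing motion occupies only part of the frame, since whole-frame averaging dilutes the signal.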
PIXEL LOCATION CALIBRATION IMAGE CAPTURE AND PROCESSING
Disclosed are systems and methods for correcting non-uniformity in active-matrix organic light-emitting diode (AMOLED) and other emissive displays, using iterative processing of images of calibration patterns containing features of coarse and fine granularity to successively generate a high-resolution estimate of the panel pixel locations.
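The coarse-then-fine refinement idea can be illustrated in one dimension: a coarse pass locates the neighborhood of the best estimate, and a fine pass searches within it. The helper name and the generic `measure` objective are assumptions, not the patented procedure.

```python
def refine_estimate(measure, coarse_step, fine_step, lo, hi):
    """Coarse pass finds the best coarse candidate; fine pass refines
    within one coarse step of it (1-D stand-in for coarse/fine patterns)."""
    coarse = min(range(lo, hi, coarse_step), key=measure)
    fine_lo = max(lo, coarse - coarse_step)
    fine_hi = min(hi, coarse + coarse_step)
    return min(range(fine_lo, fine_hi, fine_step), key=measure)
```

In the display-calibration setting, `measure` would score how well a candidate pixel location explains the captured calibration image, and the same refinement would run per axis.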
METHOD AND APPARATUS FOR 3D OBJECT DETECTION AND SEGMENTATION BASED ON STEREO VISION
A method, apparatus and system for 3D object detection and segmentation are provided. The method comprises the steps of: extracting multi-view 2D features based on multi-view images captured by a plurality of cameras; generating a 3D feature volume based on the multi-view 2D features; and performing a depth estimation, a semantic segmentation, and a 3D object detection based on the 3D feature volume. The method, apparatus, and system of the disclosure are faster, computation friendly, flexible, and more practical to deploy on vehicles, drones, robots, mobile devices, or mobile communication devices.
VOLUMETRIC SAMPLING WITH CORRELATIVE CHARACTERIZATION FOR DENSE ESTIMATION
Systems and techniques are described herein for performing optical flow estimation for one or more frames. For example, a process can include determining an optical flow prediction associated with a plurality of frames. The process can include determining a position of at least one feature associated with a first frame and determining, based on the position of the at least one feature in the first frame and the optical flow prediction, a position estimate of a search area for searching for the at least one feature in a second frame. The process can include determining, from within the search area, a position of the at least one feature in the second frame.
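The flow-guided search described above can be sketched as follows: the predicted flow centres a small search window in the second frame, and the feature is matched inside it. The SSD template match, window radius, and function name are illustrative assumptions.

```python
import numpy as np

def search_position(prev_pos, flow, frame2, template, radius=2):
    """Centre a search window in frame 2 at prev_pos + flow, then return the
    best-matching template position inside the window (SSD match)."""
    cy, cx = prev_pos[0] + flow[0], prev_pos[1] + flow[1]
    h, w = template.shape
    best, best_pos = None, None
    for r in range(max(0, cy - radius), min(frame2.shape[0] - h, cy + radius) + 1):
        for c in range(max(0, cx - radius), min(frame2.shape[1] - w, cx + radius) + 1):
            ssd = float(((frame2[r:r + h, c:c + w] - template) ** 2).sum())
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

Restricting the match to a flow-predicted window is what makes this cheaper than a dense full-frame search.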
Quantitative imaging for instantaneous wave-free ratio
Systems and methods for analyzing pathologies utilizing quantitative imaging are presented herein. Advantageously, the systems and methods of the present disclosure utilize a hierarchical analytics framework that identifies and quantifies biological properties/analytes from imaging data and then identifies and characterizes one or more pathologies based on the quantified biological properties/analytes. This hierarchical approach of using imaging to examine underlying biology as an intermediary to assessing pathology provides many analytic and processing advantages over systems and methods that are configured to directly determine and characterize pathology from underlying imaging data.