Patent classifications
G06V10/755
Semiautomatic drawing tool for image segmentation
A semiautomatic drawing tool is configured to generate an image segmentation contour that corresponds to a visible edge within a digital image. Individual contour points define the shape and location of the image segmentation contour, and the semiautomatic drawing tool determines the locations of these contour points based both on user input and on the location of a steep luminance gradient that corresponds to the visible edge. The semiautomatic drawing tool determines the location of a particular point of the image segmentation contour based on the current location of an input device pointer, e.g., a cursor, the location of the steepest luminance gradient, and the locations of previously selected points of the image segmentation contour.
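The point-selection rule described above can be sketched as a scoring function over candidate pixels near the cursor. This is a minimal illustration, not the patented method: the window size and the three weights (gradient strength, cursor distance, smoothness to the previous point) are assumptions.

```python
import numpy as np

def next_contour_point(image, cursor, prev_point, search_radius=5,
                       w_gradient=1.0, w_cursor=0.5, w_smooth=0.25):
    """Pick the next contour point near the cursor (hypothetical scoring).

    Candidates are pixels in a square window around the cursor; each is
    scored by (a) local luminance gradient magnitude, (b) distance to the
    cursor, and (c) distance to the previously selected contour point.
    """
    gy, gx = np.gradient(image.astype(float))   # per-axis derivatives
    grad_mag = np.hypot(gx, gy)                 # luminance gradient magnitude
    cy, cx = cursor
    best, best_score = cursor, -np.inf
    for y in range(max(0, cy - search_radius),
                   min(image.shape[0], cy + search_radius + 1)):
        for x in range(max(0, cx - search_radius),
                       min(image.shape[1], cx + search_radius + 1)):
            score = (w_gradient * grad_mag[y, x]
                     - w_cursor * np.hypot(y - cy, x - cx)
                     - w_smooth * np.hypot(y - prev_point[0], x - prev_point[1]))
            if score > best_score:
                best_score, best = score, (y, x)
    return best
```

With a cursor placed a couple of pixels away from a sharp vertical edge, the returned point snaps onto the edge column rather than the cursor position.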
DETAILED SPATIO-TEMPORAL RECONSTRUCTION OF EYELIDS
Methods and systems for reconstructing an eyelid are provided. A method of reconstructing an eyelid includes obtaining one or more images of the eyelid, generating one or more image input data for the one or more images of the eyelid, generating one or more reconstruction data for the one or more images of the eyelid, and reconstructing a spatio-temporal digital representation of the eyelid using the one or more image input data and the one or more reconstruction data.
Methods and systems for extracting a blood vessel
A method for determining a centerline of a blood vessel in an image associated with a subject is provided. The method includes obtaining a centerline model for identifying the centerline of a blood vessel, and identifying the centerline of the blood vessel based on the centerline model.
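The abstract leaves the centerline model unspecified; as a toy stand-in, a centerline for a roughly vertical vessel mask can be taken as the per-row center of mass of the segmented pixels. This sketch is an assumption for illustration, not the claimed model.

```python
import numpy as np

def naive_centerline(mask):
    """Per-row centerline of a roughly vertical binary vessel mask.

    For each image row, the centerline x-coordinate is the mean column
    index of the vessel pixels in that row; rows with no vessel pixels
    are skipped.
    """
    points = []
    for y in range(mask.shape[0]):
        xs = np.flatnonzero(mask[y])
        if xs.size:
            points.append((y, float(xs.mean())))
    return points
```

A learned centerline model would replace this heuristic with a tracker or regressor, but the output format (an ordered list of centerline points) is the same.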
Method and system for quantifying semantic variance between neural network representations
A method and system for quantifying semantic variance between neural network representations is provided. The two neural network representations to be compared are first extracted; the weight of each filter in an intermediate layer corresponding to each semantic concept is learned on a reference dataset using the Net2Vec method; the set IoU of each representation is then calculated for all semantic concepts in the reference dataset; and finally the variances between the set IoUs of the two representations over all the semantic concepts are integrated to obtain the semantic variance between the two neural network representations. The method addresses the lack of an accurate measure of the variance between neural network representations at the level of semantic information.
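The last two steps can be sketched directly: compute a dataset-level set IoU per concept, then integrate per-concept differences into one score. The integration used here (mean absolute difference) is an assumption; the abstract does not specify how the variances are combined.

```python
import numpy as np

def set_iou(act_masks, concept_masks):
    """Dataset-level set IoU between binarized activation masks and
    ground-truth concept masks, pooled over all images."""
    inter = sum(np.logical_and(a, c).sum()
                for a, c in zip(act_masks, concept_masks))
    union = sum(np.logical_or(a, c).sum()
                for a, c in zip(act_masks, concept_masks))
    return inter / union if union else 0.0

def semantic_variance(ious_a, ious_b):
    """Integrate per-concept set-IoU differences between two
    representations into a single score (mean absolute difference;
    the exact integration rule is an assumption)."""
    return float(np.mean([abs(a - b) for a, b in zip(ious_a, ious_b)]))
```

Identical representations yield per-concept IoUs that match exactly, so the semantic variance is zero; diverging representations push the score up.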
A MEDICAL SCANNING SYSTEM AND METHOD FOR DETERMINING SCANNING PARAMETERS BASED ON A SCOUT IMAGE
A medical scanning system and method for determining scanning parameters based on a scout image. The system includes: a scanned object description module for describing the shape of a scanned object on an initial image; an adjustment module for aligning the shape of the scanned object with a pre-stored average shape; a principal component analysis module for extracting the principal components of the aligned shape of the scanned object; a desired shape acquisition module for imparting weight parameters to said principal components, acquiring a plurality of new shapes, and, from said plurality of new shapes, determining the new shape with the maximum cost function value as the desired shape; and a scanning parameter setting module for setting scanning parameters based on the desired shape.
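The desired-shape search described above generates candidate shapes as the mean shape plus weighted principal components and keeps the candidate with the highest cost. A minimal sketch, assuming shapes are flattened landmark vectors and the weight grid and cost function are supplied by the caller:

```python
import numpy as np

def desired_shape(mean_shape, components, weight_grid, cost_fn):
    """Generate candidate shapes mean + wᵀ·PC over a grid of weight
    vectors and return the candidate maximizing cost_fn.

    components has shape (k, d): k principal components of d-dimensional
    flattened shape vectors.
    """
    best_shape, best_cost = None, -np.inf
    for w in weight_grid:                       # w has shape (k,)
        shape = mean_shape + components.T @ w   # candidate shape, shape (d,)
        c = cost_fn(shape)
        if c > best_cost:
            best_cost, best_shape = c, shape
    return best_shape
```

In the patented system the cost function would score how well a candidate shape fits the scout image; here it is a plain callable so the search logic can be tested in isolation.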
Analysis and Characterization of Epithelial Tissue Structure
Methods for non-invasive or minimally invasive assessment of epithelial tissue structure are disclosed. Digital imaging and processing are used to identify cell locations. More specifically, disclosed are an automated algorithm that may be used to identify epithelial tissue structure, and/or to specify the coordinates/locations of cells in the epithelial tissue structure, through non-invasive or minimally invasive imaging, and the use of this information to extract values of parameters related to epithelial structure.
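As a toy illustration of automated cell localization, bright cell nuclei in a processed image can be marked as strict local maxima above an intensity threshold. This is an assumed stand-in for the disclosed algorithm, which is not specified in the abstract.

```python
import numpy as np

def cell_centers(image, threshold):
    """Mark pixels that are the unique maximum of their 3x3 neighborhood
    and at least `threshold` bright as candidate cell locations."""
    centers = []
    for y in range(1, image.shape[0] - 1):
        for x in range(1, image.shape[1] - 1):
            patch = image[y - 1:y + 2, x - 1:x + 2]
            if (image[y, x] >= threshold
                    and image[y, x] == patch.max()
                    and (patch == patch.max()).sum() == 1):
                centers.append((y, x))
    return centers
```

The returned coordinate list is exactly the kind of cell-location output from which structure-related parameters (e.g. nearest-neighbor spacing, local density) could then be computed.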
Classification of Image Data from Synthetic Aperture Radar Images and Electro-Optical Images with Multi-Modal Fusion
Systems and methods are disclosed for classifying objects using electro-optical and synthetic aperture radar images through multi-modal feature alignment and fusion. A computing system acquires and preprocesses image data, then aligns features across modalities using a multi-modal alignment engine. A cross-modal attention fusion network extracts and integrates complementary information using transformer-based attention mechanisms. A modality-specific feature extraction framework processes EO and SAR images through specialized branches, ensuring optimal feature representation. An adaptive fusion decision system dynamically determines the best fusion strategy based on image quality and confidence scores. A self-supervised consistency controller enforces alignment between EO and SAR features using contrastive learning. The fused representations are processed by a neural network to generate object classifications. This system improves accuracy and robustness in environments where one modality may be degraded or missing, enhancing applications such as remote sensing, surveillance, and autonomous navigation.
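The attention-based fusion step can be sketched with single-head scaled dot-product cross-attention plus a confidence-weighted blend. Projection matrices, the contrastive consistency loss, and the real quality estimator are omitted; the scalar quality scores here are assumptions standing in for the adaptive fusion decision system.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, key_feats, value_feats):
    """Single-head scaled dot-product cross-attention: one modality's
    tokens attend to the other modality's tokens."""
    d = query_feats.shape[-1]
    attn = softmax(query_feats @ key_feats.T / np.sqrt(d))
    return attn @ value_feats

def fuse(eo, sar, eo_quality=1.0, sar_quality=1.0):
    """Blend EO-attends-to-SAR and SAR-attends-to-EO features, weighted
    by per-modality quality/confidence scores (assumed scalars).

    Both feature matrices must share the token count and dimension for
    this simple average; that is an assumption of the sketch.
    """
    eo_att = cross_attention(eo, sar, sar)
    sar_att = cross_attention(sar, eo, eo)
    w = eo_quality / (eo_quality + sar_quality)
    return w * eo_att + (1 - w) * sar_att
```

Lowering `sar_quality` (e.g. for a degraded SAR acquisition) shifts the fused representation toward the EO-driven view, which mirrors the robustness claim in the abstract.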
Image analysis system for forensic facial comparison
An automatic forensic facial comparison (FFC) system operates on a questioned image (I1) and a reference image (I2), captured by image acquisition means from a subject, and comprises processing means configured to carry out FFC stages: at least one mandatory morphological analysis stage (11), and optionally a holistic comparison stage (12), and/or an image overlay stage (13), and/or a photo-anthropometry stage (14), and/or a decision-making stage (15). For each stage (11, 12, 13, 14) corresponding to an FFC method, the processing means calculate an overall indicator value for the stage carried out. In the final decision-making stage (15), the processing means apply soft computing to calculate a fuzzy value, obtained as the sum of the overall indicator values of the stages previously carried out (11, 12, 13, 14), each value weighted by a weight, based on a data set supporting the decision-making stage (15), indicative of the degree of reliability of each stage and of the quality of the starting images (I1, I2).
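The decision stage reduces to a weighted sum of per-stage indicator values. A minimal sketch, assuming indicators and weights are keyed by stage name and the weights already encode stage reliability and image quality:

```python
def ffc_fuzzy_score(stage_indicators, stage_weights):
    """Decision stage (15): weighted sum of the overall indicator values
    of the stages actually carried out.

    stage_indicators: {stage_name: indicator value} for executed stages.
    stage_weights:    {stage_name: weight} reflecting stage reliability
                      and input-image quality (assumed precomputed).
    """
    assert set(stage_indicators) <= set(stage_weights)
    return sum(stage_weights[s] * v for s, v in stage_indicators.items())
```

Stages that were skipped simply do not appear in `stage_indicators`, so they contribute nothing to the fuzzy value, matching the "each stage previously carried out" wording.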