Patent classifications
G06V10/46
Point-set kernel clustering
A computer-implemented clustering method is disclosed for image segmentation, social network analysis, computational biology, market research, search engines, and other applications. At the heart of the method is a point-set kernel that measures the similarity between a data point and a set of data points. The method has a procedure that employs the point-set kernel to expand from a seed point into a cluster, and finally identifies all clusters in the given dataset. Applied to image segmentation, the method identifies several segments in the image, where points in each segment have high similarity to one another but low similarity to points in other segments. The method is both effective and efficient, which enables it to deal with large-scale datasets. In contrast, existing clustering methods are either efficient or effective; and even the efficient ones have difficulty dealing with large-scale datasets without massive parallelization.
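The seed-expansion procedure above can be illustrated with a minimal sketch of a point-set kernel. The Gaussian pairwise kernel, the averaging over the set, and the acceptance threshold are all illustrative assumptions; the abstract does not specify these choices:

```python
import math

def gaussian_kernel(x, y, bandwidth=1.0):
    # Gaussian similarity between two 1-D points (assumed kernel form).
    return math.exp(-((x - y) ** 2) / (2 * bandwidth ** 2))

def point_set_kernel(point, cluster, bandwidth=1.0):
    # Point-set kernel: similarity between a point and a set of points,
    # taken here as the mean pairwise kernel value over the set.
    return sum(gaussian_kernel(point, c, bandwidth) for c in cluster) / len(cluster)

def grow_cluster(seed, data, threshold=0.5, bandwidth=1.0):
    # Expand from a seed: repeatedly absorb the unassigned point most
    # similar to the current cluster, while similarity stays above threshold.
    cluster = [seed]
    remaining = [p for p in data if p != seed]
    while remaining:
        best = max(remaining, key=lambda p: point_set_kernel(p, cluster, bandwidth))
        if point_set_kernel(best, cluster, bandwidth) < threshold:
            break
        cluster.append(best)
        remaining.remove(best)
    return cluster
```

Running the full method would repeat this expansion from new seeds drawn from the still-unassigned points until every point belongs to a cluster.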
PIXEL-LEVEL BASED MICRO-FEATURE EXTRACTION
Techniques are disclosed for extracting micro-features at a pixel level based on characteristics of one or more images. Importantly, the extraction is unsupervised, i.e., performed independently of any training data that defines particular objects, allowing a behavior-recognition system to forgo a training phase and allowing object classification to proceed without being constrained by specific object definitions. A micro-feature extractor that does not require training data is adaptive and self-trains while performing the extraction. The extracted micro-features are represented as a micro-feature vector that may be input to a micro-classifier, which groups objects into object-type clusters based on the micro-feature vectors.
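A minimal sketch of training-free, pixel-level feature extraction. The specific statistics chosen here (mean, variance, mean absolute horizontal/vertical gradients) are assumptions for illustration; the disclosure does not enumerate the exact micro-features:

```python
def micro_feature_vector(patch):
    # patch: 2-D list of pixel intensities (0-255) for one object region.
    # Returns a micro-feature vector computed without any training data.
    pixels = [p for row in patch for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    # Mean absolute gradients capture local texture/edge strength.
    h_grad = [abs(row[j + 1] - row[j]) for row in patch for j in range(len(row) - 1)]
    v_grad = [abs(patch[i + 1][j] - patch[i][j])
              for i in range(len(patch) - 1) for j in range(len(patch[0]))]
    return [mean, var,
            sum(h_grad) / len(h_grad) if h_grad else 0.0,
            sum(v_grad) / len(v_grad) if v_grad else 0.0]
```

Such vectors could then be fed to a clustering-based micro-classifier to group objects by type without predefined object definitions.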
System and method for eye-tracking
A system for eye-tracking according to an embodiment of the present invention includes a data collection unit that acquires face information and location information of a user from an image captured by a photographing device installed at each of one or more points set within a three-dimensional space, and an eye-tracking unit that estimates, from the face information and the location information, the location of an area gazed at by the user in the three-dimensional space, and maps spatial coordinates corresponding to the location of the area onto a three-dimensional map corresponding to the three-dimensional space.
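One plausible reading of the gaze-estimation step is casting a ray from the user's estimated head position along the gaze direction and intersecting it with a known surface of the three-dimensional map. A minimal sketch, assuming the gazed surface is the plane z = wall_z; the plane model and coordinate convention are assumptions, not details from the abstract:

```python
def estimate_gaze_point(head_pos, gaze_dir, wall_z):
    # Intersect the gaze ray head_pos + t * gaze_dir with the plane z = wall_z.
    # head_pos, gaze_dir: (x, y, z) tuples in map coordinates.
    x, y, z = head_pos
    dx, dy, dz = gaze_dir
    if dz == 0:
        return None  # gaze is parallel to the wall plane
    t = (wall_z - z) / dz
    if t <= 0:
        return None  # the plane is behind the user
    return (x + t * dx, y + t * dy, wall_z)
```

The returned coordinates would then be recorded on the three-dimensional map as the gazed-at area.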
Co-heterogeneous and adaptive 3D pathological abdominal organ segmentation using multi-source and multi-phase clinical image datasets
The present disclosure describes a computer-implemented method for processing clinical three-dimensional images. The method includes: training a fully supervised segmentation model using a labelled image dataset containing images of a disease at a predefined set of contrast phases or modalities, to allow the segmentation model to segment images at the predefined set of contrast phases or modalities; finetuning the fully supervised segmentation model through co-heterogeneous training and adversarial domain adaptation (ADA) using an unlabelled image dataset containing clinical multi-phase or multi-modality image data, to allow the segmentation model to segment images at contrast phases or modalities other than the predefined set; and further finetuning the fully supervised segmentation model using domain-specific pseudo-labelling to identify pathological regions missed by the segmentation model.
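The domain-specific pseudo-labelling step might be sketched as confidence-thresholded self-labelling, where the current model's confident predictions become labels and uncertain pixels are ignored during finetuning. The threshold value and the per-pixel probability representation are assumptions for illustration:

```python
def pseudo_labels(probs, threshold=0.9):
    # probs: per-voxel foreground probabilities from the current model.
    # Returns pseudo labels: 1 for confident foreground, 0 for confident
    # background, None for uncertain voxels (ignored in the finetuning loss).
    labels = []
    for p in probs:
        if p >= threshold:
            labels.append(1)
        elif p <= 1 - threshold:
            labels.append(0)
        else:
            labels.append(None)
    return labels
```

Only the confidently labelled voxels would contribute to the loss, which is what lets the model be finetuned on unlabelled clinical data.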
Systems and methods for screenshot linking
A system for analyzing screenshots can include a computing device including a processor coupled to a memory and a display screen configured to display content. The system can include an application stored on the memory and executable by the processor. The application can include a screenshot receiver configured to access, from storage, a screenshot of the content displayed on the display screen, captured using a screenshot function of the computing device, the screenshot including an image and a predetermined marker. The application can include a marker detector configured to detect the predetermined marker included in the screenshot. The application can include a link identifier configured to identify, using the predetermined marker, a link to a resource mapped to the image included in the screenshot, the resource being accessible by the computing device via the link.
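A minimal sketch of the marker-to-link lookup performed by the marker detector and link identifier. The `SSLINK:` prefix, the marker-id table, and the example URL are all hypothetical; in practice the marker would be decoded from the screenshot's pixels (e.g., an embedded visual code) rather than from text:

```python
MARKER_PREFIX = "SSLINK:"  # hypothetical marker format

LINK_TABLE = {  # hypothetical marker-id -> resource mapping
    "a1b2": "https://example.com/product/42",
}

def extract_marker(decoded_text):
    # Stand-in for the marker detector: scan decoded screenshot content
    # for a token carrying the predetermined marker prefix.
    for token in decoded_text.split():
        if token.startswith(MARKER_PREFIX):
            return token[len(MARKER_PREFIX):]
    return None

def resolve_link(marker_id):
    # Stand-in for the link identifier: map the marker to its resource.
    return LINK_TABLE.get(marker_id)
```

The computing device could then open the resolved link, connecting the static screenshot image back to a live resource.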
SYSTEMS AND METHODS TO PROCESS ELECTRONIC IMAGES TO DETERMINE HISTOPATHOLOGY QUALITY
A computer-implemented method for processing an electronic image may include receiving, by an artificial intelligence (AI) system at an electronic storage of the AI system, one or more digital whole slide images (WSIs), and extracting one or more vectors of features from one or more foreground tiles of the one or more digital WSIs. The method may include running a trained machine learning model on the one or more vectors of features and determining, based on an output of the trained machine learning model, whether one or more quality issues are present in the one or more digital WSIs.
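The foreground-tile selection step can be sketched as a simple tissue-fraction filter over tiles, assuming white pixels are slide background; the actual criterion used by the system is not specified in the abstract:

```python
def foreground_tiles(tiles, background_value=255, min_tissue_fraction=0.5):
    # tiles: list of 2-D intensity grids cut from a whole slide image.
    # Keep indices of tiles whose fraction of non-background (non-white)
    # pixels meets the threshold; only these are passed to the model.
    kept = []
    for idx, tile in enumerate(tiles):
        pixels = [p for row in tile for p in row]
        tissue = sum(1 for p in pixels if p < background_value)
        if tissue / len(pixels) >= min_tissue_fraction:
            kept.append(idx)
    return kept
```

Feature vectors would then be extracted only from the kept tiles before the trained quality model is run.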
Real time region of interest (ROI) detection in thermal face images based on heuristic approach
Embodiments herein provide a method and system for real-time ROI detection in thermal face images based on a heuristic approach. Once detected, the ROI of a thermal image is further used to detect the temperature of the subject corresponding to the ROI. Unlike state-of-the-art techniques, the heuristic approach is computationally less intensive and provides fast, accurate ROI detection even for occluded faces in a crowd, with a plurality of subjects scanned in a single thermal image. The heuristics applied do not focus on face detection but directly on point-of-interest detection. Once the point of interest (ROI) is detected, it may be used for a plurality of applications, such as subject tracking and the like, not limited to subject or object temperature sensing, since the method disclosed herein is easily implementable on low-power devices.
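A minimal sketch of a heuristic point-of-interest detector that scans a thermal frame for the warmest window, with no face detection involved. The window size and the mean-temperature criterion are illustrative assumptions, not the patent's specific heuristics:

```python
def hottest_roi(frame, size=2):
    # frame: 2-D grid of temperature readings from a thermal camera.
    # Returns the top-left corner of the size x size window with the
    # highest mean temperature, plus that mean: a crude point-of-interest
    # heuristic cheap enough for low-power devices.
    best, best_mean = None, float("-inf")
    for i in range(len(frame) - size + 1):
        for j in range(len(frame[0]) - size + 1):
            window = [frame[i + di][j + dj]
                      for di in range(size) for dj in range(size)]
            m = sum(window) / len(window)
            if m > best_mean:
                best, best_mean = (i, j), m
    return best, best_mean
```

Applied per subject region, the detected window could drive temperature readout or tracking without ever running a face detector.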
System and Method for Improved Generation of Avatars for Virtual Try-On of Garments
A system and a method for improved generation of 3D avatars for virtual try-on of garments are provided. Inputs from a first user type are received, via a first input unit, for generating one or more garment types in a graphical format. Further, a 3D avatar of a second user type is generated in a semi-automatic or an automatic manner based on capturing a first input type or a second input type, respectively, received via a second input unit. The first input type comprises measurements of body specifications of the second user type, and the second input type comprises body images of the second user type. Further, the generated garments are rendered on the generated 3D avatar of the second user type to carry out a virtual try-on operation.
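A minimal sketch of the semi-automatic path, deriving avatar scale parameters from body measurements. The measurement keys and the template body dimensions are hypothetical; a real system would deform a full template mesh rather than compute three scalars:

```python
def avatar_params(measurements,
                  base_height=170.0, base_chest=95.0, base_waist=80.0):
    # measurements: dict with hypothetical keys 'height_cm', 'chest_cm',
    # 'waist_cm' entered by the second user type.
    # Returns scale factors to apply to an assumed template avatar mesh.
    return {
        "height_scale": measurements["height_cm"] / base_height,
        "chest_scale": measurements["chest_cm"] / base_chest,
        "waist_scale": measurements["waist_cm"] / base_waist,
    }
```

The scaled avatar would then serve as the body onto which the graphically generated garments are rendered for the try-on operation.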