Patent classifications
G06V10/32
LUMBAR SPINE ANATOMICAL ANNOTATION BASED ON MAGNETIC RESONANCE IMAGES USING ARTIFICIAL INTELLIGENCE
A system for automated comprehensive assessment of clinical lumbar MRIs includes an MRI standardization component that reads MRI data from raw lumbar MRI files and uses an artificial intelligence (AI) model to convert the raw MRI data into a standardized format. A core assessment component automatically generates MRI assessment results, including multi-tissue anatomical annotation, multi-pathology detection, and multi-pathology progression prediction, based on the structured MRI data package. The core assessment component contains a semantic segmentation module that utilizes a deep learning AI model to generate MRI assessment results containing multi-tissue anatomical annotation, a pathology detection module to generate multi-pathology detection, and a pathology progression prediction module to generate multi-pathology progression prediction. A model optimization component archives clinical MRI data and MRI assessment results based on comments provided by a specialist, and periodically optimizes the deep learning AI model of the core assessment component.
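The three-component architecture above (standardization, core assessment, model optimization) can be sketched as a simple pipeline. This is a hypothetical illustration of the data flow only; the class names, method names, and stubbed return values are assumptions, not taken from the patent, and each stub stands in for a deep-learning model.

```python
# Hypothetical sketch of the pipeline: standardization -> core assessment.
# All names and return values are illustrative stand-ins for AI models.

class MRIStandardizer:
    def standardize(self, raw_file):
        # A real system would parse the raw MRI file with an AI model;
        # here we just wrap it in a structured package.
        return {"series": raw_file, "format": "standardized"}

class CoreAssessor:
    def assess(self, package):
        # Each module would run its own deep-learning model.
        return {
            "annotation": self.segment(package),    # multi-tissue annotation
            "pathologies": self.detect(package),    # multi-pathology detection
            "progression": self.predict(package),   # progression prediction
        }
    def segment(self, package):  return ["vertebra", "disc", "nerve"]
    def detect(self, package):   return ["stenosis"]
    def predict(self, package):  return {"stenosis": "stable"}

def run_pipeline(raw_file):
    package = MRIStandardizer().standardize(raw_file)
    return CoreAssessor().assess(package)

result = run_pipeline("lumbar_scan.raw")
```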
Systems and methods for mobile image capture and content processing of driver's licenses
Systems and methods are provided for processing and extracting content from an image captured using a mobile device. In one embodiment, an image is captured by a mobile device and corrected to improve the quality of the image. The corrected image is then further processed by adjusting the image, identifying the format and layout of the document, binarizing the image and extracting the content using optical character recognition (OCR). Multiple methods of image adjusting may be implemented to accurately assess features of the document, and a secondary layout identification process may be performed to ensure that the content being extracted is properly classified.
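One step in the pipeline above, binarizing the corrected image before OCR, can be sketched with a simple global threshold. A real system would likely use adaptive thresholding and an OCR engine such as Tesseract; the threshold value and toy image here are assumptions for illustration only.

```python
# Minimal sketch of the binarization step with a global threshold
# (illustrative; production systems use adaptive methods).
import numpy as np

def binarize(gray: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Map pixels above the threshold to white (255), the rest to black (0)."""
    return np.where(gray > threshold, 255, 0).astype(np.uint8)

# Toy 2x3 "image": light background with a dark stroke down the middle.
img = np.array([[200, 40, 210],
                [190, 35, 220]], dtype=np.uint8)
binary = binarize(img)
```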
System and method for access control using a plurality of images
Aspects of the invention provide, in some aspects, a method of face recognition that includes receiving plural frames of a video stream imaging a candidate individual, e.g., in the field of view of a camera, and generating for each of those frames a score of the image and/or of the candidate therein. This can include, for example, a score (or count) indicative of the number of individuals present in the frame, a pose of the candidate individual (e.g., face-on or otherwise), blur in the image, and so forth. The method further includes selecting, based on the respective scores of the frames, a subset of the frames for matching by a face recognizer against a set of one or more images of designated individuals. That set may be of individuals approved for access, individuals to be denied access, or otherwise. An output is generated, according to the method, based on such matching by the face recognizer.
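The frame-selection step above can be sketched as scoring each frame and keeping the best ones for the face recognizer. The scoring formula and the precomputed quality measures below are stand-ins; in a real system they would come from detectors running on the frame pixels.

```python
# Hedged sketch of frame selection: score frames by face count, pose
# frontalness, and blur, then keep the top-k for the recognizer.
# The weighting is illustrative, not from the patent.

def select_frames(frames, k=2):
    def score(f):
        one_face = 1.0 if f["num_faces"] == 1 else 0.0  # prefer a single face
        return one_face + f["pose_frontalness"] - f["blur"]
    return sorted(frames, key=score, reverse=True)[:k]

frames = [
    {"id": 0, "num_faces": 1, "pose_frontalness": 0.9, "blur": 0.1},
    {"id": 1, "num_faces": 2, "pose_frontalness": 0.8, "blur": 0.2},
    {"id": 2, "num_faces": 1, "pose_frontalness": 0.5, "blur": 0.6},
]
best = select_frames(frames, k=1)  # frame 0: single face, frontal, sharp
```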
IMAGE NORMALIZATION PROCESSING
Methods, systems, electronic devices, and computer-readable storage media for image normalization processing are provided. In one aspect, an image normalization processing method includes: normalizing a feature map by respectively using K normalization factors to obtain K candidate normalized feature maps; for each of the K normalization factors, determining a first weight value for the normalization factor; and determining a target normalized feature map corresponding to the feature map based on the candidate normalized feature map corresponding to each of the K normalization factors and the first weight value for each of the K normalization factors. The K candidate normalized feature maps and the K normalization factors have a one-to-one correspondence, and K is an integer greater than 1.
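The weighted combination over K normalization factors can be sketched as follows, in the style of switchable-normalization schemes. The choice of K = 3 factors (instance, layer, and batch normalization), the tensor layout, and the softmax weighting are assumptions for illustration, not taken from the claims.

```python
# Sketch of combining K candidate normalized feature maps with learned
# first weight values (softmax-normalized). Shapes and factor choices
# are assumptions.
import numpy as np

def normalize(x, axes):
    mean = x.mean(axis=axes, keepdims=True)
    std = x.std(axis=axes, keepdims=True) + 1e-5
    return (x - mean) / std

def target_normalized(x, weights):
    # x: (N, C, H, W). K = 3 candidate maps, one per normalization factor.
    candidates = [
        normalize(x, (2, 3)),      # instance norm: per sample, per channel
        normalize(x, (1, 2, 3)),   # layer norm: per sample
        normalize(x, (0, 2, 3)),   # batch norm: per channel
    ]
    w = np.exp(weights) / np.exp(weights).sum()  # first weight values
    return sum(wk * ck for wk, ck in zip(w, candidates))

x = np.random.randn(2, 4, 8, 8)
y = target_normalized(x, weights=np.array([0.5, 0.2, 0.3]))
```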
IMAGE DETECTION METHOD AND APPARATUS, COMPUTER DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
The present application provides an image detection method performed by a server. The method includes: intercepting a first image and a second image at a preset time interval from a video stream; performing pixel matching on the first image and the second image to obtain a value of total matching pixels between the first image and the second image; performing picture content detection on the second image in response to determining, based on the value of total matching pixels, that a preset matching condition is satisfied; and determining that the video stream is abnormal in response to the picture content detection finding no picture content in the second image. In this way, an image recognition manner can be used to perform detection on image pictures of the video stream at the preset time interval.
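The pixel-matching check above can be sketched by comparing two frames grabbed at a fixed interval and counting matching pixels; a high match ratio (a frozen picture) would trigger the subsequent content check. The tolerance and threshold values are illustrative assumptions.

```python
# Minimal sketch of the pixel-matching condition between two frames.
# Thresholds are illustrative, not from the patent.
import numpy as np

def match_ratio(a: np.ndarray, b: np.ndarray, tol: int = 5) -> float:
    """Fraction of pixels whose values differ by at most `tol`."""
    return float((np.abs(a.astype(int) - b.astype(int)) <= tol).mean())

def matching_condition_met(frame1, frame2, threshold=0.99):
    # Nearly all pixels identical -> picture may be frozen; run the
    # picture content detection next.
    return match_ratio(frame1, frame2) >= threshold

f1 = np.full((4, 4), 100, dtype=np.uint8)  # toy frame at time t
f2 = f1.copy()                             # identical frame at t + interval
frozen = matching_condition_met(f1, f2)
```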
MEDICAL IMAGE SYNTHESIS FOR MOTION CORRECTION USING GENERATIVE ADVERSARIAL NETWORKS
A computer system is configured to remove motion artifacts in medical images using a generative adversarial network (GAN). The computer system instantiates the GAN having one or more generative network(s) and one or more discriminative network(s) that are pitted against each other to train a generative model and a discriminative model. The training uses a training dataset including a plurality of medical images that are previously classified as without significant motion artifacts for diagnostic purposes. The discriminative model is trained to classify medical images as real or artificial. The generative model is trained to enhance the quality of a medical image and remove motion artifacts by producing a medical image directly from a post-contrast image, without using a pre-contrast mask.
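The adversarial objective described above can be written schematically as follows. The tiny stand-in "networks" here are purely illustrative (a real system would use convolutional models in a deep-learning framework); only the structure of the two losses, with the generator mapping a post-contrast image directly to a cleaned image, reflects the text.

```python
# Schematic of the GAN objective: discriminator scores real vs. generated
# images; generator produces a corrected image from a post-contrast input
# without a pre-contrast mask. All "networks" are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def discriminator(img, w):
    # Stand-in: logistic score of a weighted pixel sum ("real" probability).
    return 1 / (1 + np.exp(-(img * w).sum()))

def generator(post_contrast, g):
    # Stand-in: additive correction of the post-contrast image.
    return post_contrast + g

real = rng.normal(size=(4, 4))                     # artifact-free training image
post = real + rng.normal(scale=0.3, size=(4, 4))  # motion-corrupted input
w, g = rng.normal(size=(4, 4)), np.zeros((4, 4))

fake = generator(post, g)
# Discriminator: classify real as real, generated as artificial.
d_loss = -np.log(discriminator(real, w)) - np.log(1 - discriminator(fake, w))
# Generator: fool the discriminator into scoring its output as real.
g_loss = -np.log(discriminator(fake, w))
```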
METHOD FOR AUTOMATIC SEGMENTATION OF CORONARY SINUS
Method, executed by a computer, for identifying a coronary sinus of a patient, comprising: receiving a 3D image of a body region of the patient; extracting 2D axial images of the 3D image taken along respective axial planes, 2D sagittal images of the 3D image taken along respective sagittal planes, and 2D coronal images of the 3D image taken along respective coronal planes; applying an axial neural network to each 2D axial image to generate a respective 2D axial probability map, a sagittal neural network to each 2D sagittal image to generate a respective 2D sagittal probability map, and a coronal neural network to each 2D coronal image to generate a respective 2D coronal probability map; generating, based on the 2D probability maps, a 3D mask of the coronary sinus of the patient.