Patent classifications
G06T2207/20132
LARGE-SCALE AUTOMATED IMAGE ANNOTATION SYSTEM
Systems and methods for automating image annotations are provided, such that a large-scale annotated image collection may be efficiently generated for use in machine learning applications. In some aspects, a mobile device may capture image frames, identify items appearing in the image frames, and detect objects in three-dimensional space across those image frames. Cropped images associated with each item may be created, which may then be correlated to the detected objects. A unique identifier may then be captured that is associated with the detected object, and labels are automatically applied to the cropped images based on data associated with that unique identifier. In some contexts, images of products carried by a retailer may be captured, and item data may be associated with such images based on that retailer's item taxonomy, for later classification of other/future products.
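The labeling step described above can be sketched in illustrative Python. The dictionary shapes, the barcode-style identifier, and all field names here are assumptions for demonstration, not structures given in the patent:

```python
def label_crops(crops_by_object, identifiers, taxonomy):
    """Apply taxonomy labels to cropped images grouped by detected object.

    `identifiers` maps a detected object to its captured unique identifier
    (e.g. a scanned barcode); `taxonomy` maps identifiers to item data.
    """
    labeled = {}
    for obj, crops in crops_by_object.items():
        # Look up the object's identifier, then that identifier's item data.
        item_data = taxonomy.get(identifiers.get(obj), {})
        labeled[obj] = [(crop, item_data) for crop in crops]
    return labeled

labels = label_crops(
    crops_by_object={"obj1": ["crop_a.png", "crop_b.png"]},
    identifiers={"obj1": "0012345"},
    taxonomy={"0012345": {"category": "beverages", "brand": "Acme"}},
)
```

Every crop correlated to an object inherits the item data resolved through that object's identifier, which is how one captured identifier can label many cropped images at once.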
Software-based image processing using an associated machine learning model
Techniques for applying one or more machine learning models to a sub-region comprising less than all of an image scene are described. An example includes receiving a first sub-region of an image; analyzing the received first sub-region of the image using at least one indicated machine learning model; and outputting a result of the analysis.
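A minimal sketch of the receive/analyze/output flow, with an image as a nested list and the box layout and toy "model" chosen purely for illustration:

```python
def analyze_subregion(image, box, model):
    """Run a model on one sub-region of an image rather than the full scene.

    `box` is (x, y, width, height) and `model` is any callable taking the
    cropped sub-region -- both conventions are assumptions, not the patent's.
    """
    x, y, w, h = box
    subregion = [row[x:x + w] for row in image[y:y + h]]
    return model(subregion)

# Toy usage: a "model" that just reports the mean intensity of the crop.
scene = [[10 * r + c for c in range(10)] for r in range(10)]
mean = lambda sr: sum(sum(row) for row in sr) / sum(len(row) for row in sr)
result = analyze_subregion(scene, (2, 3, 4, 4), model=mean)
# → 48.5
```

Cropping before inference is what lets the model cost scale with the sub-region rather than the full scene.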
METHOD FOR DETERMINING A DIAGNOSTICALLY RELEVANT SECTIONAL PLANE
A computer-implemented method for determining an orientation of at least one diagnostically relevant sectional plane for heart imaging in a three-dimensional magnetic resonance imaging image dataset comprises: providing the three-dimensional image dataset; applying a trained function to the three-dimensional image dataset to determine a position of at least one landmark; determining the orientation of the at least one diagnostically relevant sectional plane as a function of the at least one landmark; and providing the orientation of the at least one diagnostically relevant sectional plane.
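One illustrative way to turn landmark positions into a plane orientation is to take the unit normal of the plane spanned by three landmarks. This cross-product construction is an assumption for demonstration; the patent does not specify the geometric rule:

```python
def plane_from_landmarks(p1, p2, p3):
    """Unit normal of the plane through three 3-D landmark positions."""
    u = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(p1, p3)]
    # Cross product u x v gives a vector perpendicular to the plane.
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    return [c / norm for c in n]

normal = plane_from_landmarks((0, 0, 0), (1, 0, 0), (0, 1, 0))
# → [0.0, 0.0, 1.0]
```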
IMAGE PROCESSING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
An image processing method includes: performing target object detection on an initial image to obtain an object detection result, and performing image saliency detection on the initial image to obtain a saliency detection result; cropping the initial image based on the object detection result and the saliency detection result to obtain a corresponding cropped image; acquiring an image template for indicating an image style, and acquiring layer information corresponding to the image template; and adding the layer information to the cropped image based on the image template to obtain a target image corresponding to the image style indicated by the image template.
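The step of combining the object detection result and the saliency detection result into one crop region could be sketched as a box union. The union rule and the (x0, y0, x1, y1) box layout are illustrative stand-ins for the patent's unspecified combination:

```python
def merge_crop_box(object_box, saliency_box):
    """Combine an object-detection box and a saliency box into one crop
    region by taking their union; boxes are (x0, y0, x1, y1)."""
    return (min(object_box[0], saliency_box[0]),
            min(object_box[1], saliency_box[1]),
            max(object_box[2], saliency_box[2]),
            max(object_box[3], saliency_box[3]))

crop = merge_crop_box((10, 10, 50, 60), (30, 5, 70, 55))
# → (10, 5, 70, 60)
```

The cropped image would then receive the template's layer information to produce the styled target image.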
MACHINE LEARNING PIPELINE FOR DOCUMENT IMAGE QUALITY DETECTION AND CORRECTION
A computing system receives, from a client device, an image of a content item uploaded by a user of the client device. The computing system divides the image into one or more overlapping patches. The computing system identifies, via a first machine learning model, one or more distortions present in the image based on the image and the one or more overlapping patches. The computing system determines that the image meets a threshold level of quality. Responsive to the determining, the computing system corrects, via a second machine learning model, the one or more distortions present in the image based on the image and the one or more overlapping patches. Each patch of the one or more overlapping patches is corrected. The computing system reconstructs the image of the content item based on the one or more corrected overlapping patches.
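The patch-division step can be sketched as a sliding window whose stride is smaller than the patch size, which is what makes neighboring patches overlap. Patch size and stride values are illustrative, and the sketch assumes the image is at least one patch wide and tall:

```python
def overlapping_patches(height, width, patch, stride):
    """Return (top, left) origins of overlapping square patches covering
    an image; stride < patch produces the overlap."""
    tops = list(range(0, height - patch + 1, stride))
    lefts = list(range(0, width - patch + 1, stride))
    # Make sure the bottom and right edges are always covered.
    if tops[-1] != height - patch:
        tops.append(height - patch)
    if lefts[-1] != width - patch:
        lefts.append(width - patch)
    return [(t, l) for t in tops for l in lefts]

origins = overlapping_patches(64, 64, patch=32, stride=16)
# 3 x 3 = 9 overlapping patches
```

After per-patch correction, reconstruction would blend the corrected patches back at these same origins.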
IMAGE FUSION METHOD AND BIFOCAL CAMERA
Embodiments of the present application provide an image fusion method and a bifocal camera. The method includes: acquiring a thermal image captured by the thermal imaging lens and a visible light image captured by the visible light lens; determining a first focal length when the thermal imaging lens captures the thermal image and a second focal length when the visible light lens captures the visible light image; determining a size calibration parameter and a position calibration parameter of the thermal image according to the first focal length and the second focal length; adjusting a size of the thermal image according to the size calibration parameter, and moving the adjusted thermal image to the visible light image according to the position calibration parameter for registration with the visible light image, to obtain to-be-fused images; and fusing the to-be-fused images to generate a bifocal fused image.
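A simplified sketch of the calibration step: scale the thermal image by the ratio of the two focal lengths so apparent object sizes match, then position it with a center alignment plus a fixed offset. The linear-scale model and the centering-plus-offset rule are assumptions standing in for the patent's unspecified calibration parameters:

```python
def calibrate_thermal(thermal_size, visible_size, f_thermal, f_visible, offset):
    """Size and position of the thermal image after calibration.

    `offset` is a fixed position-calibration correction in pixels.
    """
    scale = f_visible / f_thermal           # size calibration parameter
    w, h = thermal_size
    scaled = (round(w * scale), round(h * scale))
    vw, vh = visible_size
    sx, sy = scaled
    # Center the scaled thermal image on the visible image, then apply
    # the position calibration offset.
    pos = ((vw - sx) // 2 + offset[0], (vh - sy) // 2 + offset[1])
    return scaled, pos

scaled, pos = calibrate_thermal((160, 120), (640, 480),
                                f_thermal=8.0, f_visible=16.0, offset=(4, -2))
# → scaled == (320, 240), pos == (164, 118)
```

With the two images registered this way, the pixel-level fusion step can operate on aligned coordinates.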
System and method for identifying target regions prior to organs at risk segmentation
A method and device for generating a three dimensional (3D) bounding box of a region of interest (ROI) of a patient include receiving a two dimensional (2D) maximum intensity projection (MIP) image that is an axial view of the patient and a 2D MIP image that is a sagittal view of the patient. A first 2D bounding box of the ROI of the patient and a second 2D bounding box of the ROI of the patient are detected using the 2D MIP images. A 3D MIP image of the patient is received, and the 3D bounding box of the ROI of the patient is generated using the 3D MIP image, the first 2D bounding box, and the second 2D bounding box. The 3D MIP image including the 3D bounding box is provided.
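The fusion of the two 2D boxes into a 3D box can be sketched by assigning each view's axes to patient coordinates: the axial MIP constrains the x and y extents, the sagittal MIP the y and z extents, and the shared y extent is intersected. The axis convention and intersection rule are illustrative assumptions:

```python
def bbox_3d(axial_box, sagittal_box):
    """Fuse a 2D box from the axial MIP with one from the sagittal MIP
    into a 3D box; 2D boxes are (min_a, min_b, max_a, max_b)."""
    ax0, ay0, ax1, ay1 = axial_box       # x = left-right, y = ant-post
    sy0, sz0, sy1, sz1 = sagittal_box    # y = ant-post,  z = sup-inf
    # The y extent appears in both views; take the intersection.
    y0, y1 = max(ay0, sy0), min(ay1, sy1)
    return (ax0, y0, sz0, ax1, y1, sz1)

box = bbox_3d((10, 20, 60, 80), (25, 5, 75, 40))
# → (10, 25, 5, 60, 75, 40)
```

The 3D MIP image would then be used to refine or validate this initial 3D box.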
Method and system for postural analysis and measuring anatomical dimensions from a radiographic image using machine learning
A method for use of machine learning in computer-assisted anatomical prediction. The method includes: identifying, with a processor, parameters in a plurality of training images to generate a training dataset, the training dataset having data linking the parameters to respective training images; training at least one machine learning algorithm based on the parameters in the training dataset and validating the trained machine learning algorithm; identifying, with the processor, digitized points on a plurality of anatomical landmarks in a radiographic image of a person's skeleton displayed on a screen by determining anatomical relationships of adjacent bony structures, as well as dimensions of at least a portion of a body of the skeleton in the displayed image, using the validated machine learning algorithm and a scale factor for the displayed image; and making an anatomical prediction of the person's skeletal alignment based on the determined anatomical dimensions and a known morphological relationship.
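The role of the scale factor can be illustrated with a minimal measurement between two digitized landmark points; the millimetres-per-pixel calibration value here is an assumed example, not a figure from the patent:

```python
import math

def measure_mm(p1, p2, scale_mm_per_px):
    """Distance between two digitized landmark points in millimetres,
    given the displayed image's scale factor."""
    return math.dist(p1, p2) * scale_mm_per_px

# Two landmarks 300 pixels apart on an image calibrated at 0.5 mm/px.
length = measure_mm((100, 200), (100, 500), scale_mm_per_px=0.5)
# → 150.0
```

Converting pixel distances to anatomical dimensions this way is what lets the predicted landmark positions feed a physical skeletal-alignment measurement.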
Method and system for estimating the trajectory of an object on a map
A method is disclosed for estimating a trajectory of an object on a map given a sequence of traces for the moving object. Each trace of the object includes information defining a position measured at a given time for the object, as well as information as to an area of accuracy around the measured position. The method processes pairs of successive traces, corresponding to two positions successive in time in the sequence of measured positions for the moving object. For each trace of a pair of successive traces, the method defines road segments on the map within the area of accuracy of the trace. For each road segment within the area of accuracy of the first trace of a pair of traces and each road segment within the area of accuracy of the second trace of the pair, the method determines at least one candidate path between the two road segments. A neural network and a neural graph model are used to compute the most probable sequence of candidate paths to estimate the trajectory of the object on the map.
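The final selection step can be illustrated by brute-force scoring of candidate-path combinations, where each trace pair contributes a set of (path, probability) options. This exhaustive product is only a stand-in for the patent's neural graph model and is feasible for short sequences only:

```python
from itertools import product

def most_probable_sequence(candidate_paths):
    """Pick the highest-probability sequence of candidate paths.

    `candidate_paths` holds, per pair of successive traces, a list of
    (path, probability) options; probabilities are assumed independent.
    """
    best_seq, best_p = None, -1.0
    for combo in product(*candidate_paths):
        p = 1.0
        for _, prob in combo:
            p *= prob
        if p > best_p:
            best_seq, best_p = [path for path, _ in combo], p
    return best_seq, best_p

seq, p = most_probable_sequence([
    [("A-B", 0.9), ("A-C-B", 0.1)],    # paths for the first trace pair
    [("B-D", 0.4), ("B-E-D", 0.6)],    # paths for the second trace pair
])
# most probable sequence: ["A-B", "B-E-D"]
```

A learned model replaces the independent-probability assumption with scores that account for road connectivity and motion dynamics.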
Data processing apparatus and method
A medical image data processing apparatus is provided and includes processing circuitry to receive medical image data in respect of at least one subject; receive non-image data; generate a filter based on the non-image data; and apply the filter to the medical image data, wherein the filter limits a region of the medical image data.
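A minimal sketch of applying such a region-limiting filter, assuming the filter reduces to a rectangular region derived from the non-image data (for example, the scanned body part recorded in a header); the rectangle representation and zero fill are illustrative choices:

```python
def apply_region_filter(image, region):
    """Zero out pixels outside a rectangular region of the image data.

    `region` is (row0, col0, row1, col1), half-open on the max edges.
    """
    r0, c0, r1, c1 = region
    return [[val if r0 <= r < r1 and c0 <= c < c1 else 0
             for c, val in enumerate(row)]
            for r, row in enumerate(image)]

image = [[1] * 4 for _ in range(4)]
filtered = apply_region_filter(image, (1, 1, 3, 3))
# only the central 2 x 2 block survives
```

Limiting the region this way restricts later processing to the anatomically relevant part of the dataset.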