Patent classifications
G06V30/2528
AUTOMATED DETECTION AND TYPE CLASSIFICATION OF CENTRAL VENOUS CATHETERS
A system for automated detection and type classification of central venous catheters. The system includes an electronic processor that is configured to, based on an image, generate a segmentation of a potential central venous catheter using a segmentation method and extract, from the segmentation, one or more image features associated with the potential central venous catheter. The electronic processor is also configured to, based on the one or more image features, determine, using a first classifier, whether the image includes a central venous catheter and determine, using a second classifier, a type of central venous catheter included in the image.
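The two-stage structure of the claim can be sketched as follows. This is a toy illustration, not the patented implementation: the feature tuple, thresholds, and catheter-type labels are all assumptions standing in for the trained classifiers.

```python
# Hypothetical two-stage pipeline mirroring the claim: a first classifier
# decides whether a central venous catheter (CVC) is present; only then
# does a second classifier assign its type. Segmentation and feature
# extraction are assumed to have happened upstream.

def detect_cvc(features):
    """First classifier: binary presence decision (toy linear rule).
    Assumed features: (tube_length_px, mean_intensity)."""
    length, intensity = features
    return length > 100 and intensity > 0.5

def classify_cvc_type(features):
    """Second classifier: type decision, run only after a positive detection."""
    length, _ = features
    if length > 400:
        return "PICC"
    if length > 200:
        return "internal jugular"
    return "subclavian"

def classify_image(features):
    if not detect_cvc(features):
        return None  # no catheter found; the type classifier is skipped
    return classify_cvc_type(features)
```

Gating the type classifier on the detector's output means the second model never sees images without a catheter, which is the point of the two-classifier split.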
Using motion-based features to match video sequences
A method is performed at a computing system having one or more processors and memory. The method includes receiving a first video clip having three or more image frames and computing a first hash pattern, including: (i) computing a temporal sequence of differential frames and (ii) for each differential frame: identifying a respective plurality of feature points and computing a respective hash value that represents spatial positioning of the respective feature points with respect to each other. The method includes receiving a second video clip having three or more image frames and computing a second hash pattern by applying steps (i) and (ii) to the three or more image frames. The method includes computing a distance between the first hash pattern and the second hash pattern and determining that the first and second video clips match when the computed distance is less than a threshold distance.
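Steps (i) and (ii) and the final distance comparison can be sketched in miniature. Frames are modeled as 2-D lists of brightness values, and "feature points" are simplified to the k strongest pixels of each differential frame; a real system would use a proper detector, so treat every choice below as an assumption.

```python
# Toy version of the claimed motion-hash pipeline.

def differential_frames(frames):
    """Step (i): temporal sequence of frame-to-frame absolute differences."""
    return [
        [[abs(b - a) for a, b in zip(ra, rb)] for ra, rb in zip(f0, f1)]
        for f0, f1 in zip(frames, frames[1:])
    ]

def frame_hash(frame, k=3):
    """Step (ii): hash the spatial layout of the k strongest points.
    Encoding positions (not values) keeps the hash insensitive to
    uniform brightness changes."""
    points = sorted(
        ((v, y, x) for y, row in enumerate(frame) for x, v in enumerate(row)),
        reverse=True,
    )[:k]
    return hash(tuple((y, x) for _, y, x in points))

def hash_pattern(frames):
    return [frame_hash(d) for d in differential_frames(frames)]

def clips_match(clip_a, clip_b, threshold=1):
    """Distance = number of positions where the per-frame hashes differ;
    the clips match when it falls below the threshold."""
    pa, pb = hash_pattern(clip_a), hash_pattern(clip_b)
    distance = sum(ha != hb for ha, hb in zip(pa, pb))
    return distance < threshold
```

A clip compared against itself yields distance 0 and matches; clips whose motion lands in different spatial positions produce different per-frame hashes and fail the threshold test.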
HOMOGRAPHY ERROR CORRECTION
An object tracking system that includes a sensor that is configured to capture frames of at least a portion of a global plane for a space. The system is configured to receive a first frame from the sensor, to identify a pixel location within the first frame, and to determine an estimated sensor location for the sensor by applying a homography to the pixel location. The homography includes coefficients that translate between pixel locations in a frame from the sensor and (x,y) coordinates in the global plane. The system is further configured to determine an actual sensor location for the sensor and to determine a location difference between the estimated sensor location and the actual sensor location. The system is further configured to compare the location difference to a difference threshold level and to recompute the homography in response to determining that the location difference exceeds the difference threshold level.
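The core check the abstract describes — apply the homography, compare to the known sensor location, and flag a recompute when the error exceeds the threshold — can be written down directly. The 3x3 matrix form and the Euclidean difference metric are standard assumptions, not details from the patent.

```python
# Minimal sketch: the homography is a 3x3 matrix of coefficients mapping
# pixel coordinates to (x, y) positions on the global plane.

def apply_homography(H, px, py):
    """Projective transform of a pixel location into plane coordinates."""
    x = H[0][0] * px + H[0][1] * py + H[0][2]
    y = H[1][0] * px + H[1][1] * py + H[1][2]
    w = H[2][0] * px + H[2][1] * py + H[2][2]
    return (x / w, y / w)

def needs_recompute(H, pixel, actual_xy, threshold):
    """Compare the estimated sensor location against the actual one and
    report whether the difference exceeds the threshold level."""
    ex, ey = apply_homography(H, *pixel)
    ax, ay = actual_xy
    diff = ((ex - ax) ** 2 + (ey - ay) ** 2) ** 0.5
    return diff > threshold
```

When `needs_recompute` returns True, the system would re-estimate the homography coefficients from fresh correspondences.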
Method and System for Controlling Machines Based on Object Recognition
A method includes: capturing one or more images of an unorganized collection of items inside a first machine; determining one or more item types of the unorganized collection of items from the one or more images, comprising: dividing a respective image in the one or more images into a respective plurality of sub-regions; performing feature detection on the respective plurality of sub-regions to obtain a respective plurality of regional feature vectors, wherein a regional feature vector for a sub-region indicates characteristics for a plurality of predefined local item features for the sub-region; generating an integrated feature vector by combining the respective plurality of regional feature vectors; and applying a plurality of binary classifiers to the integrated feature vector; and selecting a machine setting for the first machine based on the determined one or more item types in the unorganized collection of items.
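The sub-region / integrated-vector / binary-classifier chain can be sketched on a toy grayscale image. The (mean, max) regional features, the 2x2 grid, and the threshold classifiers below are all illustrative assumptions in place of the trained models.

```python
# Toy version of the claimed recognition step.

def sub_regions(image, n=2):
    """Divide the image (a list of rows) into an n x n grid, row-major."""
    h, w = len(image), len(image[0])
    rh, rw = h // n, w // n
    return [
        [row[c * rw:(c + 1) * rw] for row in image[r * rh:(r + 1) * rh]]
        for r in range(n) for c in range(n)
    ]

def regional_features(region):
    """Regional feature vector: (mean, max) brightness of the sub-region."""
    values = [v for row in region for v in row]
    return [sum(values) / len(values), max(values)]

def integrated_feature_vector(image):
    """Combine the regional vectors by concatenation."""
    vec = []
    for region in sub_regions(image):
        vec.extend(regional_features(region))
    return vec

# One binary classifier per item type (illustrative threshold rules).
CLASSIFIERS = {
    "delicates": lambda v: sum(v) / len(v) < 50,
    "heavy": lambda v: sum(v) / len(v) >= 50,
}

def detect_item_types(image):
    vec = integrated_feature_vector(image)
    return {name for name, clf in CLASSIFIERS.items() if clf(vec)}
```

Running one binary classifier per type, rather than a single multi-class model, lets the system report several item types present in the same unorganized load.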
SYSTEM AND METHOD FOR AUTOMATED DIAGNOSIS OF SKIN CANCER TYPES FROM DERMOSCOPIC IMAGES
Disclosed is a content-based image retrieval (CBIR) system and related methods that serve as a diagnostic aid for diagnosing whether a dermoscopic image correlates to a skin cancer type. Systems and methods according to aspects of the invention use as a reference a set of images of pathologically confirmed benign or malignant past cases from a collection of different classes that are of high similarity to the unknown new case in question, along with their diagnostic profiles. Systems and methods according to aspects of the invention predict what class of skin cancer is associated with a particular patient skin lesion, and may be employed as a diagnostic aid for general practitioners and dermatologists.
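The retrieval-and-vote core of a CBIR diagnostic aid can be sketched once images are reduced to feature vectors. The Euclidean metric, k value, and majority vote below are common CBIR choices, assumed here rather than taken from the disclosure.

```python
# Sketch of the retrieval step: find the reference cases most similar to a
# query feature vector and report their confirmed diagnoses. Feature
# extraction from dermoscopic images is assumed to happen upstream.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def most_similar_cases(query, reference_cases, k=3):
    """reference_cases: list of (feature_vector, diagnosis) pairs from
    pathologically confirmed past cases."""
    ranked = sorted(reference_cases, key=lambda c: euclidean(query, c[0]))
    return ranked[:k]

def predict_class(query, reference_cases, k=3):
    """Majority vote over the diagnoses of the k most similar past cases."""
    votes = [dx for _, dx in most_similar_cases(query, reference_cases, k)]
    return max(set(votes), key=votes.count)
```

Returning the similar cases themselves (not just the vote) is what makes this usable as a diagnostic aid: the practitioner can inspect the matched past cases and their diagnostic profiles.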
IDENTIFYING AND TREATING PLANTS USING DEPTH INFORMATION IN A SINGLE IMAGE
A farming machine includes one or more image sensors for capturing an image as the farming machine moves through the field. A control system accesses an image captured by the one or more sensors and identifies a distance value associated with each pixel of the image. The distance value corresponds to a distance between a point and an object that the pixel represents. The control system classifies pixels in the image as crop, plant, ground, etc. based on the visual information in the pixels. The control system generates a labelled point cloud using the labels and depth information, and identifies features about the crops, plants, ground, etc. in the point cloud. The control system generates treatment actions based on any of the depth information, visual information, point cloud, and feature values. The control system actuates a treatment mechanism based on the classified pixels.
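The fusion of per-pixel class labels with per-pixel depth into a labelled point cloud, and a treatment decision driven by a feature of that cloud, can be sketched as below. The `weed` label, the mean-depth feature, and the reach threshold are illustrative assumptions.

```python
# Sketch of the labelling step: combine per-pixel class labels with
# per-pixel distance values into a labelled point cloud, then derive a
# simple feature to drive a treatment decision.

def labelled_point_cloud(labels, depths):
    """labels/depths: 2-D grids of equal shape; yields (x, y, z, label)."""
    return [
        (x, y, depths[y][x], labels[y][x])
        for y in range(len(labels))
        for x in range(len(labels[0]))
    ]

def mean_depth(cloud, label):
    zs = [z for _, _, z, lbl in cloud if lbl == label]
    return sum(zs) / len(zs) if zs else None

def treatment_action(cloud, target_label="weed", max_reach=1.5):
    """Actuate the treatment mechanism only when the target plants are
    within the mechanism's assumed reach."""
    d = mean_depth(cloud, target_label)
    return d is not None and d <= max_reach
```

Keeping both the label and the depth per point is what lets the control system gate treatment on geometry (distance to the plant) as well as on appearance.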
Neural network-based classification method and classification device thereof
A neural network-based classification method, including: obtaining a neural network and a first classifier; inputting input data to the neural network to generate a feature map; cropping the feature map to generate a first cropped part and a second cropped part of the feature map; inputting the first cropped part to the first classifier to generate a first probability vector; inputting the second cropped part to a second classifier to generate a second probability vector, wherein weights of the first classifier are shared with the second classifier; and performing a probability fusion on the first probability vector and the second probability vector to generate an estimated probability vector for determining a class of the input data.
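The crop / shared-weight classifier / fusion path can be shown end to end with the network stubbed out. The backbone, the crop split, and the averaging fusion rule are all assumptions; only the weight sharing between the two heads is taken from the claim.

```python
import math

# Sketch of the claimed inference path with a stubbed backbone.

def feature_map(x):
    """Stand-in backbone: a 4-element feature map from an input scalar."""
    return [x, x + 1, x * 2, x - 1]

def crop(fmap):
    """Split the feature map into two cropped parts."""
    mid = len(fmap) // 2
    return fmap[:mid], fmap[mid:]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

SHARED_WEIGHTS = [[0.5, -0.2], [0.1, 0.3]]  # one matrix serves both heads

def classifier(part):
    """Linear head + softmax; the same weights classify both cropped parts,
    as the claim requires."""
    logits = [sum(w * v for w, v in zip(row, part)) for row in SHARED_WEIGHTS]
    return softmax(logits)

def predict(x):
    p1, p2 = (classifier(p) for p in crop(feature_map(x)))
    fused = [(a + b) / 2 for a, b in zip(p1, p2)]  # probability fusion
    return fused.index(max(fused)), fused
```

Averaging the two probability vectors is the simplest fusion rule; because the heads share weights, the second classifier adds a second view of the feature map without adding parameters.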
IDENTIFYING TARGETS WITHIN IMAGES
Methods of detecting and/or identifying an artificial target within an image are provided. These methods comprise: applying to a region of the image a primary classification algorithm for performing a feature extraction of the image region, the primary classification algorithm being based on a spectral profile defined by one or more spectral signatures with one or more features in at least part of the infrared spectrum; obtaining a relation between the extracted features of the image region and the spectral profile; verifying whether a level of confidence of the obtained relation between the extracted features and the spectral profile is higher than a first predetermined confirmation level; and, in case of a positive (or true) result of said verification, determining that the image region corresponds to an artificial target to be detected, thereby obtaining a confirmed artificial target. Systems and computer programs are also provided that are suitable for performing said methods.
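The verification step — relate extracted features to the spectral profile and confirm only above a confidence level — reduces to a similarity test. Cosine similarity is assumed here as the relation measure; the patent does not specify one.

```python
# Toy version of the verification step: compare extracted spectral features
# of an image region against a reference IR spectral profile and confirm
# the target only when similarity exceeds the confirmation level.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def confirm_target(extracted, profile, confirmation_level=0.9):
    """True when the region's features match the spectral profile with a
    confidence above the first predetermined confirmation level."""
    return cosine_similarity(extracted, profile) > confirmation_level
```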
SYSTEM AND METHOD FOR EYEWEAR SIZING
Provided is a process for generating specifications for lenses of eyewear based on locations of extents of the eyewear determined through a pupil location determination process. Some embodiments capture an image and determine, using computer vision image recognition functionality, the pupil locations of a human's eyes based on the captured image depicting the human wearing eyewear.
DETERMINING VANISHING POINTS BASED ON FEATURE MAPS
In some implementations, a method is provided. The method includes obtaining an image depicting an environment where an autonomous driving vehicle (ADV) is located. The method also includes determining, using a first neural network, a plurality of line indicators based on the image. The plurality of line indicators represent one or more lanes in the environment. The method further includes determining, using a second neural network, a vanishing point within the image based on the plurality of line indicators. The second neural network is communicatively coupled to the first neural network. The plurality of line indicators is determined simultaneously with the vanishing point. The method further includes calibrating one or more sensors of the autonomous driving vehicle based on the vanishing point.
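The geometric relationship the second stage learns can be illustrated without the networks: lane lines converge at the vanishing point, so given line indicators as slope/intercept pairs (a hypothetical output format for the first network), the vanishing point is their intersection.

```python
# Geometric sketch of the second stage only: intersect two lane-line
# indicators, each given as (slope, intercept) for y = m*x + b.
# The neural networks themselves are out of scope here.

def vanishing_point(line_a, line_b):
    """Intersection of two lines; None when they are parallel."""
    (m1, b1), (m2, b2) = line_a, line_b
    if m1 == m2:
        return None  # parallel lane lines never converge in the image
    x = (b2 - b1) / (m1 - m2)
    return (x, m1 * x + b1)
```

A learned second network would in effect regress this intersection robustly from many noisy line indicators rather than from an exact two-line solve.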