Patent classifications
G06V10/446
Analyzing integral images with respect to HAAR features
Subject matter disclosed herein relates to arrangements and techniques that provide for identifying objects within an image such as the face position of a user of a portable electronic device. An application specific integrated circuit (ASIC) is configured to locate objects within images. The ASIC includes an image node configured to process an image and a search node configured to search the image for an object in the image. The search node includes an integral image generation unit configured to generate an integral image of the image and a Haar feature evaluation unit configured to evaluate search windows of the integral image with respect to Haar-like features. The ASIC also includes an ensemble node configured to confirm the presence of the object in the image.
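The integral-image and Haar-feature evaluation described above can be sketched in software (a minimal illustration of the standard technique, not the patented ASIC; the function names and the two-rectangle "edge" feature are our choices):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle [y, y+h) x [x, x+w) using 4 lookups."""
    total = ii[y + h - 1, x + w - 1]
    if x > 0:
        total -= ii[y + h - 1, x - 1]
    if y > 0:
        total -= ii[y - 1, x + w - 1]
    if x > 0 and y > 0:
        total += ii[y - 1, x - 1]
    return total

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

Because every rectangle sum costs only four table lookups, sliding a search window over the integral image evaluates each Haar-like feature in constant time regardless of window size.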
Method and apparatus for generating strong classifier for face detection
Embodiments of the present invention disclose methods for generating a strong classifier for face detection. The methods include determining, according to a size of a prestored image training sample, a parameter of a weak classifier of the image training sample; obtaining a sketch value of each of the weak classifiers of the image training sample; calculating a weighted classification error of each of the weak classifiers according to the sketch value and an initial weight of the image training sample; obtaining at least one optimal weak classifier according to the weighted classification errors; and generating a strong classifier for face detection according to the optimal weak classifiers. The embodiments of the present invention further disclose an apparatus for generating a strong classifier for face detection. The embodiments have the advantages of improving robustness against noise and reducing the false detection rate of face detection.
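The weighted-error selection step above follows the usual boosting recipe; a sketch of it, assuming AdaBoost-style ±1 labels and voting weights (the formulas are textbook AdaBoost, not taken from the patent):

```python
import numpy as np

def weighted_error(preds, labels, weights):
    """Weighted classification error of one weak classifier."""
    return np.sum(weights * (preds != labels))

def best_weak_classifier(all_preds, labels, weights):
    """Select the weak classifier with the lowest weighted error."""
    errors = [weighted_error(p, labels, weights) for p in all_preds]
    best = int(np.argmin(errors))
    return best, errors[best]

def classifier_weight(err, eps=1e-10):
    """AdaBoost voting weight alpha = 0.5 * ln((1 - err) / err)."""
    return 0.5 * np.log((1 - err + eps) / (err + eps))
```

The strong classifier is then the sign of the alpha-weighted sum of the selected weak classifiers' outputs.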
Technologies for robust two-dimensional gesture recognition
Technologies for performing two-dimensional gesture recognition are described. In some embodiments the technologies include systems, methods, and computer readable media for performing two-dimensional gesture recognition on one or more input images. In some embodiments the technologies use an object detector to detect one or more suspected gestures in an input image, and to generate a first set of hits correlating to detected gestures in the input image. At least a portion of false positive hits may then be removed by the application of one or more filters to the first set of hits. Custom hand gesture filters are also described.
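The false-positive filtering of the first set of hits can be illustrated with a simple geometric filter (the confidence and aspect-ratio thresholds here are illustrative assumptions; the patent's custom hand gesture filters are not specified in this abstract):

```python
def filter_hits(hits, min_conf=0.5, max_aspect=2.0):
    """Remove likely false positives from a first set of detector hits.

    Each hit is (x, y, w, h, confidence). Hits with low confidence, or with
    an aspect ratio implausible for a hand-shaped region, are dropped.
    Thresholds are illustrative, not from the patent.
    """
    kept = []
    for (x, y, w, h, conf) in hits:
        aspect = max(w, h) / min(w, h)
        if conf >= min_conf and aspect <= max_aspect:
            kept.append((x, y, w, h, conf))
    return kept
```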
Robust object recognition from moving platforms by combining form and motion detection with bio-inspired classification
Described is a system for object recognition from moving platforms. The system receives a video captured from a moving platform as input. The video is processed with a static object detection module to detect static objects in the video, resulting in a set of static object detections. The video is also processed with a moving object detection module to detect moving objects in the video, resulting in a set of moving object detections. The set of static object detections and the set of moving object detections are fused, resulting in a set of detected objects. The set of detected objects is classified with an object classification module, resulting in a set of recognized objects that are then output.
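The fusion of the static and moving detection sets can be sketched as a union with overlap-based deduplication (a common fusion strategy; the IoU threshold and box format are our assumptions, as the abstract does not specify the fusion rule):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def fuse_detections(static_dets, moving_dets, thresh=0.5):
    """Union of static and moving detections; a moving detection that
    overlaps an existing static detection is treated as a duplicate."""
    fused = list(static_dets)
    for m in moving_dets:
        if all(iou(m, s) < thresh for s in static_dets):
            fused.append(m)
    return fused
```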
Tracking For Detection Of TEE Probe In Fluoroscopy Medical Imaging
A probe pose is detected in fluoroscopy medical imaging. The pose of the probe is detected through a sequence of fluoroscopic images. The detection relies on an inference framework for visual tracking over time. By applying visual tracking, the pose through the sequence is consistent, or the pose at one time guides the detection of the probe at another time. Single-frame drop-out of detection may be avoided. Verification using detection of the tip of the probe, and/or weighting of possible detections by separate detection of markers on the probe, may further improve the accuracy.
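The idea of letting the pose at one time guide detection at another can be sketched as selecting, per frame, the candidate that balances detector score against temporal consistency (the scoring form and the smoothness weight are illustrative assumptions, not the patent's inference framework):

```python
def select_pose(candidates, prev_pose, smoothness=0.1):
    """Pick the pose candidate balancing detector score against consistency.

    candidates: list of ((x, y), score) per frame; prev_pose: (x, y) chosen
    in the previous frame. A candidate far from the previous pose is
    penalized, which suppresses single-frame detection drop-outs.
    """
    def cost(cand):
        (x, y), score = cand
        px, py = prev_pose
        dist = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
        return score - smoothness * dist
    return max(candidates, key=cost)[0]
```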
System and method for component detection
A method and system include an imaging device configured to capture image data of a vehicle that includes one or more components of interest. The method and system include a memory device configured to store an image detection algorithm based on one or more image templates corresponding to the one or more components of interest. The method and system also include an image processing unit operably coupled to the imaging device and the memory device. The image processing unit is configured to determine one or more shapes of interest in the image data, using the image detection algorithm, that correspond to the one or more components of interest, and to determine one or more locations of the one or more shapes of interest relative to the vehicle.
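Template-based shape detection of this kind is commonly done by sliding the template over the image and scoring each position; a brute-force sketch using normalized cross-correlation (one plausible matching metric; the abstract does not name the algorithm):

```python
import numpy as np

def match_template(image, template):
    """Slide a template over the image and return the best-match top-left
    (y, x), scored by normalized cross-correlation."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos
```

In practice a library routine (e.g. OpenCV's `matchTemplate`) would replace the explicit double loop.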
AUTOMATIC DETERMINATION OF THE PRESENCE OF BURN-IN OVERLAY IN VIDEO IMAGERY
Systems, methods, and computer systems for the automatic determination of the presence or absence of burn-in overlay data are provided. The systems, methods, and computer systems implement mask generation, edge detection, and feature vector generation methods that are combined with machine learning classifiers to rapidly and automatically determine the presence or absence of burn-in overlays in an image, for the purpose of removal or other obfuscation of burn-in overlay data, so as to maintain confidential or classified information while allowing release of the remaining image data.
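The edge-detection and feature-vector stages feeding a classifier can be sketched as gradient magnitudes summarized into a fixed-length histogram (the difference operators and bin count are generic choices; the abstract does not fix the operators used):

```python
import numpy as np

def edge_feature_vector(img, bins=8):
    """Central-difference gradient magnitudes summarized as a normalized
    histogram, usable as a feature vector for a machine-learning classifier."""
    gx = img[:, 2:] - img[:, :-2]             # horizontal differences
    gy = img[2:, :] - img[:-2, :]             # vertical differences
    mag = np.hypot(gx[1:-1, :], gy[:, 1:-1])  # align the interior regions
    hist, _ = np.histogram(mag, bins=bins, range=(0, mag.max() + 1e-9))
    return hist / hist.sum()                  # normalized feature vector
```

Text overlays produce strong, regular edges, so such a histogram separates burned-in frames from clean ones well enough for a simple classifier.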
METHOD FOR TESTING SUITABILITY OF IMAGE FOR TRAINING OR RECOGNIZING NOSE PRINT OF COMPANION ANIMAL
The present disclosure provides a method of testing the suitability of an image for training or recognizing the nose print of a companion animal. The method includes obtaining an image including a face of the companion animal, extracting a nose region of the companion animal from the image, extracting feature points representing a contour of the nose from the nose region, and determining whether the nose region is frontal based on a positional relationship of the feature points.
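The frontal-view check from the positional relationship of contour points can be illustrated by a symmetry test: in a frontal view, the left and right contour points sit roughly equidistant from the nose center (the point names and tolerance are illustrative; the patent does not specify the exact criterion):

```python
def is_frontal(left_pt, right_pt, center_pt, tol=0.15):
    """Judge whether a nose region is frontal from contour feature points.

    Points are (x, y). The region is considered frontal if the horizontal
    distances from the center point to the left and right contour points
    differ by no more than the relative tolerance.
    """
    d_left = abs(center_pt[0] - left_pt[0])
    d_right = abs(right_pt[0] - center_pt[0])
    return abs(d_left - d_right) / max(d_left, d_right) <= tol
```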
ULTRASOUND ANALYSIS METHOD AND DEVICE
The invention provides an ultrasound data processing method (30) for detecting the presence of an intravascular object in a vessel lumen based on analysis of acquired intravascular ultrasound (IVUS) data of the lumen. The method comprises receiving (32) data comprising multiple frames, each frame containing data for a plurality of radial lines corresponding to different circumferential positions around the IVUS device body, and reducing (34) the data to a single representative value for each radial line in each frame. These representative values are subsequently processed to derive (36), for at least each frame, a value representative of the probability of presence of an object within the given frame. Based on the probability values, a region within the data occupied by an intravascular object, for instance a consecutive set of frames occupied by an object, is determined (38).
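The two reduction steps above can be sketched as follows, assuming the per-line representative value is the mean intensity and the per-frame score is the fraction of lines deviating from a device-free baseline (both are illustrative choices; the patent leaves the statistic and scoring rule open):

```python
import numpy as np

def reduce_radial_lines(frames):
    """Reduce IVUS data to one representative value per radial line per frame.

    frames: array of shape (n_frames, n_lines, n_samples); the mean intensity
    along each radial line is used as the representative value.
    """
    return frames.mean(axis=2)

def frame_object_probability(reduced, baseline):
    """Per-frame probability-like score: the fraction of radial lines whose
    representative value deviates notably from a device-free baseline."""
    deviation = np.abs(reduced - baseline)
    return (deviation > 0.5 * baseline).mean(axis=1)
```

Thresholding these per-frame scores then yields the consecutive set of frames occupied by the object.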