Patent classifications
G06K9/46
PATTERN RECOGNITION DEVICE, PATTERN RECOGNITION METHOD, AND COMPUTER PROGRAM PRODUCT
According to an embodiment, a pattern recognition device is configured to divide an input signal into a plurality of elements, convert the divided elements into feature vectors having the same dimensionality to generate a set of feature vectors, and evaluate the set of feature vectors using a recognition dictionary including models corresponding to respective classes, to output a recognition result representing a class or a set of classes to which the input signal belongs. The models each include sub-models each corresponding to one of possible division patterns in which a signal to be classified into a class corresponding to the model can be divided into a plurality of elements. A label expressing a model including a sub-model conforming to the set of feature vectors, or a set of labels expressing a set of models including sub-models conforming to the set of feature vectors is output as the recognition result.
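The evaluation step above can be sketched as follows: each model holds several sub-models, one per possible division pattern for its class, and the input's feature-vector set is scored against every sub-model, the label of the best-matching model being returned. This is an illustrative toy, not the patent's actual algorithm; the cosine-similarity scoring and all names are assumptions.

```python
# Hypothetical sketch: match a set of feature vectors against models whose
# sub-models each describe one division pattern of the class signal.
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def score_submodel(feature_set, submodel):
    # A sub-model is a list of prototype vectors, one per element of a
    # particular division pattern; element counts must match to conform.
    if len(feature_set) != len(submodel):
        return float("-inf")
    return sum(cosine(f, p) for f, p in zip(feature_set, submodel)) / len(submodel)

def recognize(feature_set, dictionary):
    # dictionary: {label: [submodel, ...]}; the best sub-model scores its model.
    best_label, best_score = None, float("-inf")
    for label, submodels in dictionary.items():
        score = max(score_submodel(feature_set, s) for s in submodels)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```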
TRAFFIC-COUNTING SYSTEM AND METHOD THEREOF
Traffic-counting methods and apparatus are disclosed. The methods may include, in a view of traffic comprising moving objects, identifying first and second regions of interest (ROIs). The methods may also include obtaining first and second image data respectively representing the first and second ROIs. The methods may also include analyzing the first and second image data over time. The methods may further include, based on the analyses of the first and second image data, counting the moving objects and determining moving directions of the moving objects.
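The two-ROI scheme above implies a simple direction rule: an object that triggers the first ROI before the second is moving one way, and vice versa. The sketch below illustrates that rule under the simplifying (and assumed) condition of one object in view at a time, with detection abstracted to per-frame booleans.

```python
# Hypothetical direction-aware counting from two regions of interest.
def count_traffic(roi1_hits, roi2_hits):
    """roiN_hits: per-frame booleans saying whether motion was seen in ROI N.
    Returns (count_1_to_2, count_2_to_1), assuming one object at a time."""
    counts = {"1->2": 0, "2->1": 0}
    pending = None  # which ROI fired first for the current object
    for a, b in zip(roi1_hits, roi2_hits):
        if pending is None:
            if a and not b:
                pending = "1"
            elif b and not a:
                pending = "2"
        elif pending == "1" and b:
            counts["1->2"] += 1   # reached ROI 2 after ROI 1
            pending = None
        elif pending == "2" and a:
            counts["2->1"] += 1   # reached ROI 1 after ROI 2
            pending = None
    return counts["1->2"], counts["2->1"]
```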
COMPUTER-READABLE STORAGE MEDIUM STORING IMAGE PROCESSING PROGRAM AND IMAGE PROCESSING APPARATUS
A computation unit calculates luminance differences of individual pixel pairs in a feature area and calculates, based thereon, a local feature value formed from bit values respectively corresponding to the pixel pairs. Specifically, the computation unit calculates a specific luminance difference for a specific pixel pair corresponding to a specific bit value and then compares the result with a specified range including a zero point of luminance difference. Then a first value is assigned to the specific bit value when the specific luminance difference is greater than the upper bound of the specified range. A second value is assigned to the same when the specific luminance difference is smaller than the lower bound of the specified range. A predetermined one of the first and second values is assigned to the same when the specific luminance difference falls in the specified range.
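The three-way bit assignment described above, a binary descriptor with a dead zone around zero luminance difference, can be sketched directly. The threshold, the concrete first/second/default values, and the function names are illustrative assumptions, not taken from the patent.

```python
# Sketch of the described bit assignment: compare a pixel pair's luminance
# difference against a dead zone [-t, +t] around zero.
def bit_for_pair(lum_a, lum_b, t=4, first=1, second=0, default=0):
    diff = lum_a - lum_b
    if diff > t:        # above the upper bound of the specified range
        return first
    if diff < -t:       # below the lower bound
        return second
    return default      # inside the range: a predetermined fixed value

def local_feature(pixel_pairs, image, t=4):
    # pixel_pairs: [((y1, x1), (y2, x2)), ...]; image: 2-D list of luminances.
    # Packs one bit value per pair into an integer descriptor.
    bits = 0
    for (y1, x1), (y2, x2) in pixel_pairs:
        bits = (bits << 1) | bit_for_pair(image[y1][x1], image[y2][x2], t)
    return bits
```

The dead zone makes the descriptor stable for near-equal pixel pairs, where plain sign comparison would flip bits under small amounts of noise.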
AUTOMATED SELECTION OF SUBJECTIVELY BEST IMAGE FRAMES FROM BURST CAPTURED IMAGE SEQUENCES
A “Best of Burst Selector,” or “BoB Selector,” automatically selects a subjectively best image from a single set of images of a scene captured in a burst or continuous capture mode, captured as a video sequence, or captured as multiple images of the scene over any arbitrary period of time and any arbitrary timing between images. This set of images is referred to as a burst set. Selection of the subjectively best image is achieved in real-time by applying a machine-learned model to the burst set. The machine-learned model of the BoB Selector is trained to select one or more subjectively best images from the burst set in a way that closely emulates human selection based on subjective subtleties of human preferences. Images automatically selected by the BoB Selector are presented to a user or saved for further processing.
CONVOLUTIONAL NEURAL NETWORK ON PROGRAMMABLE TWO DIMENSIONAL IMAGE PROCESSOR
A method is described that includes executing a convolutional neural network layer on an image processor having an array of execution lanes and a two-dimensional shift register. The executing of the convolutional neural network includes loading a plane of image data of a three-dimensional block of image data into the two-dimensional shift register. The executing of the convolutional neural network also includes performing a two-dimensional convolution of the plane of image data with an array of coefficient values by sequentially: concurrently multiplying within the execution lanes respective pixel and coefficient values to produce an array of partial products; concurrently summing within the execution lanes the partial products with respective accumulations of partial products being kept within the two-dimensional shift register for different stencils within the image data; and, effecting alignment of values for the two-dimensional convolution within the execution lanes by shifting content within the two-dimensional shift register array.
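The arithmetic being parallelized above is an ordinary stencil convolution: every output accumulates pixel-times-coefficient partial products over a window. The plain-Python reference below mirrors only that math; the processor described performs the multiplies and sums concurrently across execution lanes, which this scalar loop does not model.

```python
# Scalar reference for the stencil computation: "valid" 2-D convolution
# (really cross-correlation, as is conventional for CNN layers).
def conv2d_valid(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for y in range(oh):
        for x in range(ow):
            acc = 0
            for ky in range(kh):
                for kx in range(kw):
                    # one partial product per (pixel, coefficient) pair
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            out[y][x] = acc
    return out
```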
METHOD AND APPARATUS FOR AUTOMATED PLACEMENT OF A SEAM IN A PANORAMIC IMAGE DERIVED FROM MULTIPLE CAMERAS
A method, apparatus and computer program product are provided to generate a panoramic view derived from multiple cameras and automatically place a seam in that panoramic view in a computationally efficient manner. In regards to a method, images captured by at least two cameras are received. Each camera has a different, but partially overlapping field of view. The method determines a seam location and scale factor to be used when combining the images together to minimize errors at the seam between the two images. In some example implementations, the seam location and scale factor may be recalculated in response to a manual or automatic trigger. In some additional example implementations, motion associated with an image element near a seam location is detected, and the seam location is moved in a direction opposite that of the direction of motion.
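The final behavior described, moving the seam opposite to detected motion, can be sketched as a one-line update rule. The step size, clamping to the overlap region, and all names are hypothetical; the patent does not specify these details.

```python
# Illustrative seam update: shift the seam away from an approaching object
# so the moving element does not straddle the seam.
def update_seam(seam_x, motion_dx, overlap_min, overlap_max, step=8):
    if motion_dx > 0:        # object moving right -> move seam left
        seam_x -= step
    elif motion_dx < 0:      # object moving left -> move seam right
        seam_x += step
    # keep the seam inside the cameras' overlapping field of view
    return max(overlap_min, min(overlap_max, seam_x))
```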
BIOMETRIC IDENTIFICATION BY GARMENTS HAVING A PLURALITY OF SENSORS
Biometric identification methods and apparatuses (including devices and systems) for uniquely identifying an individual based on wearable garments including a plurality of sensors, including but not limited to sensors having multiple sensing modalities (e.g., movement, respiratory movements, heart rate, ECG, EEG, etc.).
MOVING OBJECT DETECTION METHOD IN DYNAMIC SCENE USING MONOCULAR CAMERA
The present invention relates to a method for detecting a moving object in a dynamic scene using a monocular camera installed on a moving platform such as a vehicle, and for warning the driver of a dangerous situation. The method can detect a moving object in a dynamic scene using only the monocular camera, without requiring a stereo camera.
GENERATING IMAGE FEATURES BASED ON ROBUST FEATURE-LEARNING
Techniques for increasing robustness of a convolutional neural network based on training that uses multiple datasets and multiple tasks are described. For example, a computer system trains the convolutional neural network across multiple datasets and multiple tasks. The convolutional neural network is configured for learning features from images and accordingly generating feature vectors. By using multiple datasets and multiple tasks, the robustness of the convolutional neural network is increased. A feature vector of an image is used to apply an image-related operation to the image. For example, the image is classified or indexed, or objects in the image are tagged, based on the feature vector. Because the robustness is increased, the accuracy of the generated feature vectors is also increased. Hence, the overall quality of an image service that relies on the image-related operation is enhanced.
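A common way to realize multi-dataset, multi-task training of the kind described is to feed a shared feature extractor from interleaved batches, so every task shapes the shared features. The scheduler below is a framework-agnostic stand-in; the round-robin policy and names are assumptions, not the patent's method.

```python
# Hypothetical batch scheduler for multi-dataset, multi-task training:
# yields (task_name, batch) pairs round-robin across dataset loaders.
from itertools import cycle, islice

def interleave_batches(dataset_loaders, steps):
    """dataset_loaders: {task_name: iterator of batches}.
    Yields (task_name, batch) pairs for `steps` training steps."""
    tasks = cycle(dataset_loaders.items())
    for name, loader in islice(tasks, steps):
        yield name, next(loader)
```

In a full training loop, each yielded batch would update the shared trunk plus that task's head, so gradients from all tasks accumulate in the shared features.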
IMAGE MEASUREMENT DEVICE
The device includes a probe that can be arranged in an imaging field of view; a horizontal drive section for bringing the probe into contact with a side surface of a workpiece on a stage; a display section for displaying a model image; a contact position designation section for receiving designation of contact target position information in the model image; a characteristic amount information setting section for setting characteristic amount information; a measurement setting information storage section for storing a plurality of pieces of contact target position information and the characteristic amount information; and a measurement control section for identifying the position and attitude of the workpiece from a workpiece image by using the characteristic amount information, and for identifying, based on the identified position and attitude, a plurality of contact target positions on the side surface of the workpiece that the probe should contact.
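The last step described, mapping contact target positions taught on the model image onto the live workpiece, amounts to applying the identified position and attitude as a rigid transform. The sketch below assumes a 2-D translation (tx, ty) plus rotation theta; the names and the rigid-transform reading are illustrative.

```python
# Hypothetical mapping of taught contact targets into stage coordinates
# using the workpiece's identified position (tx, ty) and attitude (theta).
import math

def transform_contact_targets(targets, tx, ty, theta):
    """targets: [(x, y), ...] in model-image coordinates.
    Returns the corresponding points in stage coordinates."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in targets]
```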