G06V30/2504

Image matching device

An image matching device that performs matching between a first image and a second image includes: a frequency characteristic acquisition unit configured to acquire a frequency characteristic of the first image and a frequency characteristic of the second image; a frequency characteristic synthesizing unit configured to synthesize the frequency characteristic of the first image and the frequency characteristic of the second image to generate a synthesized frequency characteristic; a determination unit configured to perform frequency transformation on the synthesized frequency characteristic to calculate a correlation coefficient map whose resolution coincides with a target resolution, and perform matching between the first image and the second image based on a matching score calculated from the correlation coefficient map; and a regulation unit configured to regulate the target resolution based on the matching score.
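A matcher of this kind can be sketched with phase-only correlation, a common instance of synthesizing two images' frequency characteristics; the function name, the low-frequency crop used to set the target resolution, and the peak-as-score choice are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def match_score(img1, img2, target_res):
    """Frequency-domain matching sketch (hypothetical helper).

    The normalized cross-power spectrum of the two images' DFTs plays the
    role of the synthesized frequency characteristic; inverse-transforming
    a low-frequency crop yields a correlation coefficient map at the
    target resolution, whose peak serves as the matching score."""
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    # Synthesized frequency characteristic: normalized cross-power spectrum.
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12
    # Crop around DC so the correlation map matches the target resolution.
    h, w = target_res
    c = np.fft.fftshift(cross)
    ch, cw = c.shape[0] // 2, c.shape[1] // 2
    c = c[ch - h // 2: ch + (h + 1) // 2, cw - w // 2: cw + (w + 1) // 2]
    corr = np.fft.ifft2(np.fft.ifftshift(c)).real
    return corr.max()
```

Lowering `target_res` trades localization sharpness for speed, which is the kind of trade-off a regulation unit could drive from the observed matching score.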

Target identification in large image data

A machine receives a large image having large image dimensions that exceed memory threshold dimensions. The large image includes metadata. The machine adjusts an orientation and a scaling of the large image based on the metadata. The machine divides the large image into a plurality of image tiles, each image tile having tile dimensions smaller than or equal to the memory threshold dimensions. The machine provides the plurality of image tiles to an artificial neural network. The machine identifies, using the artificial neural network, at least a portion of the target in at least one image tile. The machine identifies the target in the large image based on at least the portion of the target being identified in at least one image tile.
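The tiling step can be illustrated with a minimal helper (hypothetical name and layout; the metadata-based orientation/scaling adjustment and the neural-network stage are omitted):

```python
import numpy as np

def split_into_tiles(image, max_h, max_w):
    """Divide an image into tiles whose dimensions never exceed the
    memory threshold dimensions (max_h, max_w). Each tile is returned
    with its top-left offset so detections can be mapped back into
    large-image coordinates."""
    tiles = []
    h, w = image.shape[:2]
    for y in range(0, h, max_h):
        for x in range(0, w, max_w):
            tiles.append(((y, x), image[y:y + max_h, x:x + max_w]))
    return tiles
```

Edge tiles are simply smaller than the threshold rather than padded; a real pipeline might pad them to a fixed network input size instead.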

Image Classification Attack Mitigation

Concepts and technologies disclosed herein are directed to image classification attack mitigation. According to one aspect of the concepts and technologies disclosed herein, a system can obtain an original image and reduce a resolution of the original image to create a reduced resolution image. The system can classify the reduced resolution image and output a first classification. The system also can classify the original image via deep learning image classification and output a second classification. The system can compare the first classification and the second classification. In response to determining that the first classification and the second classification match, the system can output the second classification of the original image. In response to determining that the first classification and the second classification do not match, the system can output the first classification of the original image.
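The comparison logic described here reduces to a small decision function; the downscaling and classifier callables below are placeholders supplied by the caller, not the disclosed system's components:

```python
def mitigate_classification(original, downscale, classify_low, classify_deep):
    """Sketch of the disclosed comparison logic (all callables are
    caller-supplied assumptions)."""
    reduced = downscale(original)
    first = classify_low(reduced)        # classification of reduced image
    second = classify_deep(original)     # deep-learning classification
    # Matching labels: trust the deep classifier. A mismatch suggests an
    # adversarial perturbation (which downscaling tends to destroy), so
    # fall back to the reduced-resolution label.
    return second if first == second else first
```

The intuition is that many adversarial perturbations are high-frequency and do not survive resolution reduction, so disagreement between the two classifiers is a useful attack signal.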

Multiscale feature representations for object recognition and detection

Embodiments of the present invention are directed to a computer-implemented method for multiscale representation of input data. A non-limiting example of the computer-implemented method includes a processor receiving an original input. The processor downsamples the original input into a downscaled input. The processor runs a first convolutional neural network (“CNN”) on the downscaled input. The processor runs a second CNN on the original input, where the second CNN has fewer layers than the first CNN. The processor merges the output of the first CNN with the output of the second CNN and provides a result following the merging of the outputs.
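A skeleton of the described merge, with caller-supplied stand-ins for the two CNNs and naive stride-2 downsampling (both assumptions; a real system would use learned networks and proper resampling):

```python
import numpy as np

def multiscale_forward(x, deep_net, shallow_net, merge=np.add):
    """Illustrative skeleton: deep_net (more layers) runs on a 2x-downscaled
    input, shallow_net (fewer layers) on the full-resolution original, and
    the two outputs are merged. Both nets are placeholders for trained CNNs
    producing outputs of matching shape."""
    downscaled = x[::2, ::2]  # naive stride-2 downsampling
    return merge(deep_net(downscaled), shallow_net(x))
```

The point of the arrangement is cost balance: the expensive deep network sees fewer pixels, while the cheap shallow network preserves fine detail from the original resolution.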

Techniques for Detecting Text
20230196807 · 2023-06-22 ·

In some examples, a system for detecting text in an image includes a memory device to store a text detection model trained using images of up-scaled text, and a processor configured to perform text detection on an image to generate original bounding boxes that identify potential text in the image. The processor is also configured to generate a secondary image that includes up-scaled portions of the image associated with bounding boxes below a threshold size, and perform text detection on the secondary image to generate secondary bounding boxes that identify potential text in the secondary image. The processor is also configured to compare the original bounding boxes with the secondary bounding boxes to identify original bounding boxes that are false positives, and generate an image file that includes the original bounding boxes, wherein those original bounding boxes that are identified as false positives are removed.
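The false-positive filter can be sketched as an overlap check between the two box sets, assuming the secondary bounding boxes have already been mapped back to original-image coordinates; the IoU threshold and the `(x1, y1, x2, y2)` box format are illustrative choices, not the patent's:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix = max(0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def filter_false_positives(original_boxes, secondary_boxes, min_iou=0.5):
    """Keep only original boxes corroborated by a box from the up-scaled
    secondary pass; uncorroborated boxes are treated as false positives."""
    return [b for b in original_boxes
            if any(iou(b, s) >= min_iou for s in secondary_boxes)]
```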

Classifying camera images to generate alerts
11682233 · 2023-06-20 ·

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving mission specific data. Converting the mission specific data into attribute classes recognizable by image recognition classifiers, where the image recognition classifiers are pre-trained to detect objects corresponding to the attribute classes within digital images. Obtaining, from a wearable camera system, low resolution images of a scene and high resolution images of the scene. Detecting, within the low resolution images using a low resolution image classifier, an object that corresponds to one of the attribute classes. In response to detecting the object that corresponds to one of the attribute classes, providing, for presentation to a user of the wearable camera system by a presentation system, a first alert indicating a potential detection of the object, and providing the high resolution images to a high resolution image classifier. Obtaining, from the high resolution image classifier, a confirmation of the detected object from the low resolution images. In response to obtaining the confirmation, providing, for presentation to the user by the presentation system, a second alert indicating confirmation that the object has been detected.
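The two-stage alert flow can be sketched as follows; every callable is a caller-supplied assumption standing in for the classifiers and the presentation system:

```python
def cascade_detect(low_frames, high_frames, low_clf, high_clf, alert):
    """Two-stage detection sketch: a fast low-resolution classifier raises
    a provisional alert, then the high-resolution classifier confirms it.
    low_clf returns a detected attribute-class label or None; alert is the
    stand-in presentation system."""
    for low, high in zip(low_frames, high_frames):
        label = low_clf(low)
        if label is None:
            continue
        alert("first", label)             # potential detection
        if high_clf(high) == label:
            alert("second", label)        # confirmed detection
```

The design keeps the expensive high-resolution classifier idle until the cheap low-resolution pass flags something, which matters on a power-constrained wearable.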

NEURAL NETWORKS FOR COARSE- AND FINE-OBJECT CLASSIFICATIONS
20220374650 · 2022-11-24 ·

Aspects of the subject matter disclosed herein include methods, systems, and other techniques for training, in a first phase, an object classifier neural network with a first set of training data, the first set of training data including a first plurality of training examples, each training example in the first set of training data being labeled with a coarse-object classification; and training, in a second phase after completion of the first phase, the object classifier neural network with a second set of training data, the second set of training data including a second plurality of training examples, each training example in the second set of training data being labeled with a fine-object classification.
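The phase schedule amounts to sequential training over two labeled datasets; `model` and `train_step` below are placeholders for a real network and an optimizer step:

```python
def two_phase_training(model, coarse_data, fine_data, train_step):
    """Phase schedule from the abstract: train on coarse-object labels to
    completion, then continue training the same model on fine-object
    labels. Data are (example, label) pairs; train_step mutates model."""
    for example, coarse_label in coarse_data:      # phase 1: coarse labels
        train_step(model, example, coarse_label)
    for example, fine_label in fine_data:          # phase 2: fine labels
        train_step(model, example, fine_label)
    return model
```

Training coarse-to-fine in this way lets the network learn broad category structure (e.g. "vehicle") before specializing to subcategories (e.g. "sedan").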

SYSTEM AND METHOD FOR DETECTING OBJECTS IN AN AUTOMOTIVE ENVIRONMENT

Advanced driver assistance systems (ADAS) and methods for detecting objects, such as traffic lights and speed signs, in an automotive environment are disclosed. In an embodiment, the ADAS includes a camera system for capturing image frames of at least a part of the surroundings of a vehicle, a memory comprising image processing instructions, and a processing system for detecting one or more objects in a coarse detection followed by a fine detection. The coarse detection includes detecting the presence of the one or more objects in non-consecutive image frames, where the non-consecutive image frames are determined by skipping one or more of the image frames. Upon detection of the presence of the one or more objects in the coarse detection, fine detection of the one or more objects is performed in a predetermined number of neighboring image frames of the frame in which the presence of the objects was detected in the coarse detection.
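The frame-skipping logic can be sketched as follows; the `skip` and `neighborhood` parameters and the boolean `detect` callable are assumptions for illustration:

```python
def coarse_then_fine(frames, detect, skip=3, neighborhood=2):
    """Coarse pass: run `detect` only on every (skip+1)-th frame.
    Fine pass: when the coarse pass fires, re-run `detect` on the
    neighboring frames of the hit to localize the object in time."""
    hits = set()
    for i in range(0, len(frames), skip + 1):      # coarse detection
        if detect(frames[i]):
            lo = max(0, i - neighborhood)
            hi = min(len(frames), i + neighborhood + 1)
            for j in range(lo, hi):                # fine detection
                if detect(frames[j]):
                    hits.add(j)
    return sorted(hits)
```

Skipping frames in the coarse pass cuts per-frame compute roughly by the skip factor while the fine pass recovers frame-accurate detections around each hit.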

METHOD AND PROGRAM FOR IMAGE-BASED STATUS RESOLUTION SERVICES
20170318225 · 2017-11-02 ·

A method and program product includes capturing at least one image of an operations display of at least one peripheral device. The operations display at least presents information associated with a status encountered by the at least one peripheral device. The at least one image is communicated to at least one computing system. The at least one computing system is at least configured for processing the at least one image for extracting features of the at least one peripheral device, determining a model of the at least one peripheral device from the extracted features, determining at least a status encountered by the peripheral device from the extracted features, and creating a resolution to the encountered status. The resolution to the encountered status is received from the at least one computing system and displayed.
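The server-side steps reduce to a pair of lookups in this toy sketch; the dictionaries stand in for the feature-extraction and model-determination stages, which the abstract leaves unspecified, and all keys and names are hypothetical:

```python
def resolve_status(features, model_db, resolution_db):
    """Lookup sketch of the server-side pipeline: extracted features
    determine the peripheral's model and its displayed status code, and
    the (model, status) pair selects a resolution to send back."""
    model = model_db[features["model_signature"]]
    status = features["status_code"]
    return resolution_db[(model, status)]
```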