Patent classifications
G06V10/443
Real-world object-based image authentication method and system
A real-world object-based method and system for authenticating a person in order to permit access to a secured resource is disclosed. The system and method are configured to collect, in real time, image data from an end user that includes objects in his or her environment. At least one object is selected and its image data stored for subsequent authentication sessions, during which the system can determine whether there is a match between the new image data and the image data previously collected and stored in a database. If there is a match, the system verifies the identity of the person and can further be configured to automatically grant the person access to one or more services, features, or information for which he or she is authorized.
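The enroll-then-match flow described in this abstract can be sketched with a toy descriptor. Everything below is an illustrative assumption, not the patented method: the grayscale histogram descriptor, the histogram-intersection similarity, and the 0.8 threshold are all stand-ins for whatever image matching the system actually uses.

```python
def histogram(pixels, bins=8):
    """Coarse grayscale histogram used as a toy object descriptor (an assumption)."""
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels) or 1
    return [c / total for c in h]

def similarity(a, b):
    """Histogram intersection: 1.0 means identical distributions."""
    return sum(min(x, y) for x, y in zip(a, b))

enrolled = {}  # user -> stored descriptor from the enrollment session

def enroll(user, pixels):
    """First session: select an object and store its descriptor."""
    enrolled[user] = histogram(pixels)

def authenticate(user, pixels, threshold=0.8):
    """Later session: match new image data against the stored descriptor."""
    ref = enrolled.get(user)
    return ref is not None and similarity(ref, histogram(pixels)) >= threshold
```

On a match above the threshold the caller would then grant the user access to the services for which he or she is authorized.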
MACHINE LEARNING IMAGE PROCESSING
A machine learning image processing system performs natural language processing (NLP) and auto-tagging for an image matching process. The system facilitates an interactive process, e.g., through a mobile application, to obtain an image and supplemental user input from a user to execute an image search. The supplemental user input may be provided from a user as speech or text, and NLP is performed on the supplemental user input to determine user intent and additional search attributes for the image search. Using the user intent and the additional search attributes, the system performs image matching on stored images that are tagged with attributes through an auto-tagging process.
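The interplay of supplemental user input and auto-tagged images can be sketched as a tag-overlap ranking. The vocabulary set standing in for NLP intent parsing and the tag dictionaries are hypothetical; real auto-tagging would come from a trained model.

```python
def extract_attributes(utterance, vocabulary):
    """Keep only tokens that are known attribute words
    (a crude stand-in for NLP over the supplemental user input)."""
    return {t.strip(".,?!").lower() for t in utterance.split()} & vocabulary

def match_images(tagged_images, attributes):
    """Rank auto-tagged images by how many requested attributes their tags cover."""
    scored = [(len(attributes & tags), name) for name, tags in tagged_images.items()]
    return [name for score, name in sorted(scored, key=lambda s: (-s[0], s[1]))
            if score > 0]
```

A query like "I want a red leather sofa" would thus surface images tagged with the most of those attributes first.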
Image evaluation device, image evaluation method, and image evaluation program
An image evaluation device includes a determination result acquisition unit configured to acquire a result of determining the presence or absence of a difference between an object image, which is one of a plurality of images (three or more) obtained by imaging substantially the same spatial region, and each of the reference images, which are the images other than the object image among the plurality; and an evaluation index acquisition unit configured to acquire an evaluation index for the plurality of images on the basis of at least one of the number of determinations that the difference is present and the number of determinations that the difference is absent between the object image and each reference image.
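The counting step behind the evaluation index can be sketched as follows. The abstract only says the index is based on the counts of "difference present" and "difference absent" determinations; the specific index below (fraction of references judged identical to the object image) is one plausible choice, not the claimed formula.

```python
def evaluation_index(diff_flags):
    """diff_flags[i] is True when a difference was determined between the
    object image and reference image i. Returns the fraction of reference
    images judged identical to the object image (an assumed index)."""
    if not diff_flags:
        raise ValueError("need at least one reference image")
    absent = sum(1 for f in diff_flags if not f)
    return absent / len(diff_flags)
```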
SYSTEMS AND METHODS FOR IDENTIFYING AN EVENT IN DATA
The present disclosure includes systems, apparatuses, and methods for event identification. In some aspects, a method includes receiving data including text and performing natural language processing on the received data to generate processed data that indicates one or more sentences. The method also includes generating, based on a first keyword set, a second keyword set having more keywords than the first keyword set. The method further includes, for each of the first and second keyword sets: detecting one or more keywords and one or more entities included in the processed data, determining one or more matched pairs based on the detected keywords and entities, and extracting one or more sentences from a document based on the sentences indicated by the processed data. The method may also include outputting at least one extracted sentence.
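The keyword-set expansion and matched-pair extraction can be sketched as below. The synonym-lookup expansion and the token-level matching rule are assumptions; the abstract does not say how the second keyword set is generated or how pairs are matched.

```python
def expand_keywords(first_set, related_terms):
    """Grow the first keyword set into a larger second set
    (synonym lookup is an assumed expansion mechanism)."""
    second_set = set(first_set)
    for kw in first_set:
        second_set |= set(related_terms.get(kw, ()))
    return second_set

def extract_sentences(sentences, keywords, entities):
    """Keep sentences containing at least one (keyword, entity) matched pair."""
    hits = []
    for s in sentences:
        tokens = {t.strip(".,").lower() for t in s.split()}
        if tokens & keywords and tokens & entities:
            hits.append(s)
    return hits
```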
METHOD AND APPARATUS FOR RETRIEVING TARGET
A method and an apparatus for retrieving a target are provided. The method may include: obtaining at least one image and a description text of a designated object; extracting image features of the image and text features of the description text by using a pre-trained cross-media feature extraction network; and matching the image features with the text features to determine an image that contains the designated object.
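The final matching step, once the cross-media network has embedded both modalities into a shared space, can be sketched as nearest-neighbor retrieval by cosine similarity. The embeddings and the cosine metric are assumptions; the abstract does not specify the matching function.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(image_embeddings, text_embedding):
    """Return the image whose embedding is closest to the text embedding.
    Both are assumed to come from the pre-trained cross-media network."""
    return max(image_embeddings,
               key=lambda name: cosine(image_embeddings[name], text_embedding))
```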
OBJECT RECOGNITION METHOD AND APPARATUS, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM
Provided is an object recognition method which includes obtaining a first visible-light image acquired by a first camera device and a second visible-light image acquired by a second camera device; performing exposure processing on the first visible-light image according to luminance information of a bright area image of the first visible-light image, and performing exposure processing on the second visible-light image according to luminance information of a dark area image of the first visible-light image and/or the second visible-light image, where a dark area image is an area image having a luminance value less than or equal to a preset value; and performing target object detection on the first and second visible-light images obtained after the exposure processing, and recognizing and verifying a target object according to the detection result.
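The per-region exposure step can be sketched with a simple gain model. The dark-area selection rule (luminance at or below a preset value) follows the abstract; the mean-to-target gain formula, the preset of 128, and the target mean are illustrative assumptions.

```python
def region_gain(pixels, target_mean, dark, preset=128):
    """Gain that drives the mean luminance of the selected region toward
    target_mean. 'dark=True' selects pixels with luminance <= preset, per
    the abstract's dark-area rule; the gain formula itself is assumed."""
    region = [p for p in pixels if (p <= preset) == dark]
    if not region:
        return 1.0
    mean = sum(region) / len(region)
    return target_mean / mean if mean else 1.0

def expose(pixels, gain):
    """Apply the gain, clipping to the 8-bit luminance range."""
    return [min(255, round(p * gain)) for p in pixels]
```

Target detection would then run on the two re-exposed images.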
Method and System for Implementing Adaptive Feature Detection for VSLAM Systems
A method includes receiving a first image, receiving a motion dataset, determining a motion level, determining an initialization state, and determining a tracking level. In a first condition, the method includes generating a first image pyramid, detecting a plurality of features in the first image pyramid using a first detector threshold, and generating a first set of detected keypoints from the plurality of features. In a second condition, the method includes generating a second image pyramid, detecting the plurality of features in the second image pyramid using a second detector threshold, the second detector threshold being less restrictive than the first detector threshold, and generating a second set of detected keypoints. In a third condition, the method includes detecting the plurality of features in the first image according to the first detector threshold and generating a third set of detected keypoints.
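The core adaptive idea, relaxing the detector threshold when tracking conditions worsen so that weaker features still pass, can be sketched as below. The condition logic, the numeric thresholds, and the toy 1-D "detector" are illustrative assumptions, not the claimed VSLAM pipeline.

```python
def select_threshold(initialized, tracking_good, strict=40, relaxed=20):
    """Use the stricter detector threshold only when the system is
    initialized and tracking well; otherwise relax it so weaker features
    survive detection (the numeric values are assumptions)."""
    return strict if (initialized and tracking_good) else relaxed

def detect(responses, threshold):
    """Toy detector: indices whose corner response exceeds the threshold."""
    return [i for i, r in enumerate(responses) if r > threshold]
```

By construction, the keypoints found under the strict threshold are a subset of those found under the relaxed one.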
RECOGNITION APPARATUS, RECOGNITION METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
According to one embodiment, a recognition apparatus includes processing circuitry. The processing circuitry generates a first feature quantity exhibiting a feature of sensor data based on the sensor data, converts the first feature quantity into a second feature quantity exhibiting a feature contributing to identification of a class of the sensor data, generates a significant feature quantity exhibiting a feature that is significant in the identification of the class based on a cross-correlation between the first feature quantity and the second feature quantity, generates an integrated feature quantity considering features of the first feature quantity and the second feature quantity, based on the second feature quantity and the significant feature quantity, and identifies the class based on the integrated feature quantity.
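The significant-feature and integration steps can be sketched on plain feature vectors. The scalar normalized cross-correlation, the correlation-weighted significant feature, and the element-wise-sum integration rule are all assumptions; the abstract leaves these operations unspecified.

```python
import math

def correlation(f1, f2):
    """Scalar normalized cross-correlation between two feature vectors."""
    dot = sum(a * b for a, b in zip(f1, f2))
    n1 = math.sqrt(sum(a * a for a in f1))
    n2 = math.sqrt(sum(b * b for b in f2))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def significant_feature(f1, f2):
    """Weight the raw first feature by its correlation with the
    class-discriminative second feature (an assumed construction)."""
    w = correlation(f1, f2)
    return [w * a for a in f1]

def integrated_feature(f2, sig):
    """Element-wise sum as one simple integration rule (an assumption);
    a classifier would then identify the class from this vector."""
    return [a + b for a, b in zip(f2, sig)]
```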
Image processing method and image processing system
An image processing method includes analyzing multiple image data with an image processing device based on an Illumination-invariant Feature Network (IF-NET) to generate corresponding sets of eigenvectors, in which the image data include first image data related to at least one first feature of the sets of eigenvectors and second image data related to at least one second feature of the sets of eigenvectors; choosing a corresponding first training set of tiles and a second training set of tiles from the first image data and the second image data with the image processing device based on IF-NET, and computing on both training sets of tiles to generate at least one loss value; and adjusting IF-NET based on the at least one loss value. An image processing system is also disclosed herein.
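The loss computation over the two training sets of tiles can be sketched with a triplet-style loss. The triplet form and the margin are assumptions; the abstract only states that a loss value is computed from the tile sets and used to adjust IF-NET.

```python
def l2(u, v):
    """Euclidean distance between two tile embeddings."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pull tiles of the same scene (under different illumination) together
    and push tiles of different scenes apart (an assumed loss form)."""
    return max(0.0, l2(anchor, positive) - l2(anchor, negative) + margin)
```

The network parameters would then be adjusted to reduce this loss.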
METHOD AND DEVICE FOR GENERATING VEHICLE PANORAMIC SURROUND VIEW IMAGE
The present disclosure relates to a method for generating a panoramic surround view image of a vehicle, comprising: acquiring actual original images of the external environment of a first part and a second part of the vehicle hinged to each other; processing the actual original images to obtain respective actual independent surround view images of the first part and the second part; obtaining coordinates of respective hinge points of the first part and the second part; determining matched feature point pairs in the actual independent surround view images of the first part and the second part; calculating a distance between the two points in each matched feature point pair accordingly, and taking matched feature point pairs with a distance less than a preset first threshold as successfully matched feature point pairs; and taking the rotation angle corresponding to the maximum number of successfully matched feature point pairs as a candidate rotation angle of the first part relative to the second part. The present disclosure further provides a device for generating a panoramic surround view image of a vehicle and an intelligent vehicle.
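The pair-filtering and angle-voting steps at the end of this method can be sketched as follows. The 2-D point representation and the vote-count dictionary are illustrative; the abstract does not fix how candidate angles are enumerated.

```python
from math import hypot

def keep_matched_pairs(pairs, threshold):
    """Keep pairs whose endpoint distance is below the preset first
    threshold; these are the successfully matched feature point pairs."""
    return [((x1, y1), (x2, y2)) for (x1, y1), (x2, y2) in pairs
            if hypot(x2 - x1, y2 - y1) < threshold]

def candidate_angle(counts_per_angle):
    """Pick the rotation angle with the most successfully matched pairs
    as the candidate rotation of the first part relative to the second."""
    return max(counts_per_angle, key=counts_per_angle.get)
```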