Patent classifications
G06V2201/09
OBJECT RECOGNITION
The subject technology provides object recognition systems and methods that can be used to identify objects of interest in an image. An image, such as a live preview, may be generated by a display component of the electronic device, and an object of interest may be detected in the image. The detected object of interest may be classified using a classification model. Subsequent to classification, a confidence level in identifying the object of interest may be determined. In response to determining that the confidence level does not meet a confidence level threshold for identifying the object of interest, a request for a user input is generated. Based on the user input, the object of interest is identified using an object recognition model.
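The confidence-gated flow described in this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the classifier stub, the threshold value, and all names are assumptions.

```python
# Illustrative sketch of confidence-gated object recognition: classify,
# check the confidence against a threshold, and fall back to user input.
# The stub classifier, threshold, and names are assumptions, not the
# patent's actual implementation.
CONFIDENCE_THRESHOLD = 0.8  # assumed threshold for illustration

def classify(image_region):
    """Stand-in classifier returning (label, confidence)."""
    # A real system would run a trained classification model here.
    return "mug", 0.65

def recognize_object(image_region, ask_user):
    label, confidence = classify(image_region)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label
    # Confidence below threshold: request user input and use the
    # user-supplied hint to identify the object.
    return ask_user(f"Low confidence for '{label}'. What is this object?")

result = recognize_object("stub-image", lambda prompt: "coffee mug")
```

Because the stub classifier's confidence (0.65) falls below the assumed threshold, the sketch routes through the user-input branch.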
Systems and methods for detecting logos in a video stream
A method for identifying a logo within at least one image includes identifying an area containing the logo within the at least one image, extracting logo features from the area by analyzing image gradient vectors associated with the at least one image, and using a machine learning model to identify the logo from the extracted logo features, wherein the machine learning model is trained to identify at least one target logo based on received image data containing the logo features.
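The gradient-vector feature extraction this abstract describes can be approximated by a coarse orientation histogram (HOG-like), with a nearest-neighbour lookup standing in for the trained model. The tiny image grid, the bin count, and the matching rule are all illustrative assumptions.

```python
import math

# Sketch of gradient-based logo features: finite-difference gradients
# are accumulated into an orientation histogram, then matched against
# stored templates by L1 distance. Data and names are assumptions.

def orientation_histogram(image, bins=4):
    """Normalized histogram of gradient orientations over a 2-D grid."""
    hist = [0.0] * bins
    for y in range(len(image) - 1):
        for x in range(len(image[0]) - 1):
            gx = image[y][x + 1] - image[y][x]   # horizontal gradient
            gy = image[y + 1][x] - image[y][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            angle = math.atan2(gy, gx) % math.pi  # orientation in [0, pi)
            hist[min(int(angle / math.pi * bins), bins - 1)] += mag
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def nearest_logo(features, templates):
    """Return the template label with the smallest L1 feature distance."""
    return min(templates, key=lambda name: sum(
        abs(a - b) for a, b in zip(features, templates[name])))

horizontal_edges = [[0, 0, 0], [9, 9, 9], [0, 0, 0]]
templates = {"logo_a": orientation_histogram(horizontal_edges),
             "logo_b": [0.25, 0.25, 0.25, 0.25]}
match = nearest_logo(orientation_histogram(horizontal_edges), templates)
```

A production system would replace the nearest-neighbour step with the trained machine learning model the abstract refers to.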
METHOD AND DEVICE FOR CLUSTERING PHISHING WEB RESOURCES BASED ON VISUAL CONTENT IMAGE
A method and a computing device for clustering phishing web resources based on images of visual content thereof are provided. The method comprises: receiving references to a plurality of phishing web resources; generating, for a given phishing web resource of the plurality of phishing web resources, at least one image of a visual content of the given phishing web resource; analyzing the at least one image associated with the given phishing web resource, the analyzing comprising identifying contours of elements of the visual content of the given phishing web resource within the at least one image; conducting pairwise comparison between the contours associated with the given phishing web resource and contours of stored clusters of visual content images; and storing, in a database, data indicative of an association between the given phishing web resource and a respective cluster of visual content images.
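The pairwise contour comparison and cluster assignment can be sketched as below. Contours are simplified to lists of bounding boxes, and the distance metric and threshold are assumptions made for illustration only.

```python
# Minimal sketch of the clustering step: each web-resource screenshot is
# reduced to element contours (here, bounding boxes), compared pairwise
# against stored clusters, and assigned to the closest one. The box
# representation, metric, and threshold are illustrative assumptions.

def contour_distance(a, b):
    """Mean absolute coordinate difference between two box lists."""
    return sum(abs(p - q) for box_a, box_b in zip(a, b)
               for p, q in zip(box_a, box_b)) / max(len(a), 1)

def assign_cluster(contours, clusters, threshold=10.0):
    """Return the id of a matching cluster, creating one if none is close."""
    for cluster_id, stored in clusters.items():
        if contour_distance(contours, stored) < threshold:
            return cluster_id
    new_id = len(clusters)
    clusters[new_id] = contours
    return new_id

clusters = {}
page_a = [(0, 0, 100, 40), (10, 60, 90, 200)]   # e.g. banner + login form
page_b = [(1, 0, 101, 42), (11, 61, 92, 201)]   # near-identical phishing kit
id_a = assign_cluster(page_a, clusters)
id_b = assign_cluster(page_b, clusters)
```

Two screenshots produced by the same phishing kit yield nearly identical contour sets, so both land in the same cluster.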
NAVIGATION DEVICE
To enable guidance to be given more understandably, depending on the guidance location, the navigation device includes: an image recognition unit that analyzes the state of a guidance object serving as a landmark for a guidance location, through image recognition on an image captured of the direction forward of the vehicle; a guidance information generating unit that generates guidance information including a supplementary explanation of the state of the guidance object, depending on the result of the analysis; and an output processing unit that outputs the guidance information.
System for verifying the identity of a user
A system receives an image including a live facial image of the user and an identity document including a photograph of the user. Moreover, the system calculates a facial match score by comparing facial features in the live facial image to facial features in the photograph. The system recognizes data objects and characters in the identity document using optical character recognition (OCR) and computer vision, and then identifies, based on the recognized data objects and characters, a type of the identity document. Further, the system calculates a document validity score by comparing the recognized characters and data objects to character strings and data objects known to be present in the identified type of the identity document. Additionally, the system determines and outputs the user's identity verification status based on comparing the facial match score to a facial match threshold and comparing the document validity score to a document validity threshold.
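The final decision step of this system combines two independent scores against their thresholds. The sketch below assumes illustrative threshold values and score inputs; the real scoring pipelines (facial comparison, OCR, document validation) are omitted.

```python
# Hedged sketch of the verification decision: both the facial match
# score and the document validity score must clear their thresholds.
# The threshold constants and input scores are illustrative assumptions.
FACIAL_MATCH_THRESHOLD = 0.90
DOCUMENT_VALIDITY_THRESHOLD = 0.85

def verification_status(facial_match_score, document_validity_score):
    """Return the identity verification status from the two scores."""
    if (facial_match_score >= FACIAL_MATCH_THRESHOLD
            and document_validity_score >= DOCUMENT_VALIDITY_THRESHOLD):
        return "verified"
    return "not verified"

status = verification_status(0.94, 0.91)
```

Requiring both thresholds to pass means a convincing face match cannot compensate for an invalid or tampered document, and vice versa.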
Electronic device for providing information on item based on category of item
Electronic devices are disclosed. A first device stores items, parent categories, images, child categories, and product information for each item. The first device receives a search image from a second device, determines a parent category and a child category of the search item, and identifies a first database, from among the databases, matching the determined parent category. When the child category is determined, the first device identifies a subset of the stored items in the first database that match the search image, based on at least one feature of the received search image and the determined child category, and transmits information on the identified subset of the stored items to the second device. The second device transmits the image of a first item to the first device and receives a transmission indicating one or more second items matching the transmitted image, for display.
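The category-routed lookup can be sketched as a two-level dictionary: the parent category selects the database, the child category narrows the candidate items. All category names and items below are assumptions for illustration, and the image-feature matching step is omitted.

```python
# Illustrative sketch of routing a search to a category-specific
# database: parent category -> database, child category -> candidates.
# The data and names are assumptions, not the patent's actual schema.
databases = {
    "apparel": {"shirts": ["striped shirt", "plain shirt"],
                "shoes": ["running shoe"]},
    "electronics": {"phones": ["phone x"]},
}

def find_matches(parent_category, child_category):
    """Return candidate items matching both determined categories."""
    first_db = databases.get(parent_category, {})
    return first_db.get(child_category, [])

matches = find_matches("apparel", "shirts")
```

In the described system, the returned candidates would then be ranked against features extracted from the search image before results are transmitted back to the second device.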
Universal object recognition
Large-scale instance recognition is provided that can take advantage of channel-wise pooling. A received query image is processed to extract a set of features that can be used to generate a set of region proposals. The proposed regions of image data are processed using a trained classifier to classify the regions as object or non-object regions. Extracted features for the object regions are processed using feature correlation against extracted features for a set of object images, each representing a classified object. Matching tensors generated from the comparison are processed using a spatial verification network to determine match scores for the various object images with respect to a specific object region. The match scores are used to determine which objects, or types of objects, are represented in the query image. Information or content associated with the matching objects can be provided as part of a response.
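The feature-correlation matching stage can be approximated as below, with cosine similarity standing in for the patent's matching tensors and spatial verification network. The feature vectors and object names are assumptions for illustration.

```python
import math

# Rough sketch of the matching stage: features for an object region are
# correlated (here, cosine similarity) against features of classified
# object images, and the best-scoring object wins. The vectors are
# illustrative assumptions; the spatial verification network is
# replaced by a simple similarity score.

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (math.sqrt(sum(a * a for a in u))
            * math.sqrt(sum(b * b for b in v)))
    return dot / norm if norm else 0.0

object_images = {"sneaker": [0.9, 0.1, 0.0], "bottle": [0.0, 0.2, 0.9]}
region_features = [0.8, 0.2, 0.1]

match_scores = {name: cosine(region_features, feats)
                for name, feats in object_images.items()}
best_match = max(match_scores, key=match_scores.get)
```

In the described system, each (region, object image) pair would instead produce a matching tensor scored by the spatial verification network, giving the match scores that drive the final response.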
Electronic device
An electronic device includes a camera to capture an image, and a processor to input an image acquired by photographing a detergent container into a trained model to acquire detergent information corresponding to the detergent container, and to guide an amount of detergent dispensed based on washing information corresponding to the detergent information. The trained model is a neural network trained using images of a plurality of detergent containers.
Microtome
A microtome has a sectioning apparatus that comprises a microtome blade, for sectioning histological samples into thin prepared sections. The microtome is characterized in that an optical reading apparatus is present which reads an optically readable image pattern on the microtome blade, generates analog or digital image signals corresponding to the read image pattern and conveys the image signals to a control apparatus of the microtome.
Automated classification and interpretation of life science documents
A computer-implemented tool for automated classification and interpretation of documents, such as life science documents supporting clinical trials, is configured to perform a combination of raw text, document construct, and image analyses to enhance classification accuracy by enabling a more comprehensive machine-based understanding of document content. The combination of analyses provides context for classification, as compared to conventional automated classification tools, by leveraging relative spatial relationships among text and image elements, identifying characteristics and formatting of elements, and extracting additional metadata from the documents. Natural language processing (NLP) is applied to associate text with tokens, and relevant differences and similarities between protocols are identified.