Patent classifications
G06V10/17
Electronic device for performing payment and operation method therefor
Disclosed is an electronic device for processing a touch input. The electronic device may comprise: a touch screen; a biometric sensor disposed so as to overlap at least a part of the touch screen; and a processor for acquiring biometric information of a user, by using the biometric sensor, from an input relating to an object displayed through the touch screen, receiving a payment command associated with a payment function for the object, and performing the payment function for a product corresponding to the object by using the biometric information according to the payment command. Various other embodiments may be provided.
FINGERPRINT SENSING SYSTEM AND METHOD USING THRESHOLDING
A fingerprint sensing system for sensing a finger surface of a finger, comprising: an array of sensing elements arranged under a sensing surface, each sensing element in the array of sensing elements being configured to sense a property indicative of a distance between the sensing element and the finger surface; and read-out circuitry coupled to the array of sensing elements and configured to provide, for each sensing element in the array of sensing elements, a timing indication indicative of a time when a value of the property sensed by the sensing element reached a predefined threshold value.
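The per-element timing scheme in the abstract above can be sketched in code. The following is an illustrative simulation, not the patented circuitry: each sensing element's property is sampled over time, and the read-out reports the first sample index at which the predefined threshold is reached (elements nearer the finger surface, i.e. under ridges, cross earlier). The function name and toy traces are assumptions.

```python
import numpy as np

def threshold_timing_map(samples, threshold):
    """Given per-element sample traces (elements x time steps), return for
    each sensing element the index of the first time step at which the
    sensed property reached the threshold, or -1 if it never did."""
    crossed = samples >= threshold          # boolean (elements, steps)
    first = crossed.argmax(axis=1)          # index of first True per element
    first[~crossed.any(axis=1)] = -1        # mark elements that never cross
    return first

# Toy traces for three sensing elements: a ridge (fast rise),
# a valley (slow rise), and one that never reaches the threshold.
traces = np.array([
    [0.2, 0.6, 0.9, 1.0],   # ridge: crosses at step 1
    [0.1, 0.3, 0.5, 0.8],   # valley: crosses at step 3
    [0.0, 0.1, 0.2, 0.3],   # never crosses
])
print(threshold_timing_map(traces, 0.6))  # [ 1  3 -1]
```

Because crossing time encodes distance to the finger surface, the resulting timing map can itself serve as a grayscale fingerprint image.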
System and method for displaying objects of interest at an incident scene
A system and method for displaying an image of an object of interest located at an incident scene. The method includes receiving, from an image capture device, a first video stream of the incident scene, and displaying the video stream. The method includes receiving an input indicating a pixel location in the video stream, and detecting the object of interest in the video stream based on the pixel location. The method includes determining an object class, an object identifier, and metadata for the object of interest. The metadata includes the object class, an object location, an incident identifier corresponding to the incident scene, and a time stamp. The method includes receiving an annotation input for the object of interest, and associating the annotation input and the metadata with the object identifier. The method includes storing, in a memory, the object of interest, the annotation input, and the metadata.
Method to verify identity using a previously collected biometric image/data
A system for remote identity verification including a computing device configured to capture a first image of the user at a first distance and a second image at a second distance, then process the images to create one or more facemaps. The facemaps are processed to verify that the images were captured from a live person. If the facemaps represent a live person, the facemaps and a user identifying code are sent to a trusted image server. The trusted image server is configured to, using the user ID code, retrieve a trusted image from a database and generate a trusted image facemap, then compare the captured image facemaps to the trusted image facemap. Responsive to a match between the captured image facemaps and the trusted image facemap, the server sends a message to the computing device, a third-party server, or both providing notice of the match.
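The matching step described above (comparing captured facemaps to the trusted-image facemap) is commonly implemented as a similarity test between feature vectors. A minimal sketch, assuming facemaps are fixed-length vectors compared by cosine similarity against a threshold; the representation and threshold are assumptions, since the abstract does not specify them.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def facemaps_match(captured, trusted, threshold=0.9):
    """Return True when the captured facemap matches the trusted one."""
    return cosine_similarity(captured, trusted) >= threshold

print(facemaps_match([0.1, 0.9, 0.3], [0.1, 0.9, 0.3]))  # True
print(facemaps_match([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # False
```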
Interactive assistant
An interactive troubleshooting assistant and method for troubleshooting a system in real time to repair (fix) one or more problems in the system is disclosed. The interactive troubleshooting assistant and method may include receiving multimodal inputs from sensors, wearable devices, a person, etc., which may be input into a feature extractor including attention layers and pre-processing units of a cloud computing system hosted by one or more servers, such as a private cloud system. A pre-processing unit converts the raw multimodal input into a structured form so that an attention layer can weight the features provided by the pre-processing unit according to their importance. The weighted extracted features may be provided to an actions predictor, which generates the most suitable action based on the weighted extracted features generated by the feature extractor from the multimodal inputs. After the most suitable action is performed, the interactive troubleshooting assistant considers new information from the multimodal inputs so that it can provide the next recommended action. The interactive troubleshooting assistant may repeat these operations until the repair is completed.
Method and system for augmented feature purchase
A computer-implemented augmented reality-based method provides a user device access to data content by processing a camera image captured by the user device to identify candidate visual areas, which are then processed to identify visual features indicative of available data content. The identified features are compared to visual features of database objects indicative of available data content, and a determination is made regarding the object that corresponds to the identified features. Instructions are then generated to augment a display of the camera image with selectable image features. In response to a selection of one of the selectable image features, a determination is made regarding whether the user device has permission to access the data content associated with the selected image feature. If not, a payment settlement arrangement is implemented for the user device to gain access permission to the data content.
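The selection-time access check in the abstract above can be sketched as a permission lookup with a payment fallback. All names, the permission store, and the returned action strings are hypothetical, for illustration only.

```python
# Hypothetical permission store: set of (device_id, feature_id) pairs
# that have already been granted access.
PERMISSIONS = {("device-1", "feature-a")}

def on_feature_selected(device_id, feature_id):
    """Handle selection of a selectable image feature: grant access when
    the device already has permission, otherwise start payment settlement."""
    if (device_id, feature_id) in PERMISSIONS:
        return "grant_access"
    return "start_payment_settlement"

print(on_feature_selected("device-1", "feature-a"))  # grant_access
print(on_feature_selected("device-2", "feature-a"))  # start_payment_settlement
```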
System, method, apparatus, and computer program product for utilizing machine learning to process an image of a mobile device to determine a mobile device integrity status
A system, apparatus, method and computer program product are provided for determining a mobile device integrity status. Images of a mobile device captured by the mobile device and using a reflective surface are processed with various trained models, such as neural networks, to verify authenticity, detect damage, and to detect occlusions. A mask may be generated to enable identification of concave occlusions or blocked corners of an object, such as a mobile device, in an image. Images of the front and/or rear of a mobile device may be processed to determine the mobile device integrity status such as verified, not verified, or inconclusive. A user may be prompted to remove covers, remove occlusions, and/or move the mobile device closer to the reflective surface. A real-time response relating to the mobile device integrity status may be provided. The trained models may be trained to improve the accuracy of the mobile device integrity status.
Multi-sensor analysis of food
In an embodiment, a method for estimating a composition of food includes: receiving a first three-dimensional (3D) image; identifying food in the first 3D image; determining a volume of the identified food based on the first 3D image; and estimating a composition of the identified food using a millimeter-wave radar.
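A minimal sketch of how the two measurements above might combine: the 3D image yields a volume, and the millimeter-wave radar yields a composition estimate (here reduced to a water fraction). The density constant and the linear mass breakdown are illustrative assumptions, not the patented method.

```python
def estimate_composition(volume_cm3, radar_water_fraction, density_g_cm3=1.0):
    """Combine a volume (from the 3D image) with a radar-derived water
    fraction to give a rough mass breakdown. Constants are illustrative."""
    mass_g = volume_cm3 * density_g_cm3
    water_g = mass_g * radar_water_fraction
    return {"mass_g": mass_g, "water_g": water_g, "solids_g": mass_g - water_g}

print(estimate_composition(150.0, 0.6))
# {'mass_g': 150.0, 'water_g': 90.0, 'solids_g': 60.0}
```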
TECHNIQUES FOR DOCUMENT CREATION BASED ON IMAGE SECTIONS
In an embodiment, an image reception system is communicatively coupled to an image analysis system and is configured to receive a digital image and analyze the pixels of the digital image to determine one or more regions in the digital image. For each region in the one or more regions in the digital image, the image analysis system recognizes the content in the region. A document creation system communicatively coupled to the image analysis system is configured to create a digital document based on the recognized content for the one or more regions. In some embodiments, the image analysis system is further configured to analyze the digital image to detect one or more of the following: region markers, tables, headers.
BARCODE READERS WITH 3D CAMERA(S)
Methods and systems include using three-dimensional imaging apparatuses to capture three-dimensional images and analyze the resulting three-dimensional image data to enhance captured two-dimensional images for scanning-related processes such as object identification, symbology detection, object recognition model training, and identifying improper scan attempts or other actions performed by an operator. Imaging systems such as bi-optic readers, handheld scanners, and machine vision systems are described that use three-dimensional imaging apparatuses to capture three-dimensional images and use them together with captured two-dimensional images.