Patent classifications
G06F16/5838
Preserving authentication under item change
Apparatuses and methods associated with preserving authentication under item change are disclosed herein. In embodiments, a method includes acquiring digital image data of an image of at least a portion of a target physical object; extracting features from the image data to form a digital fingerprint; querying a database system to seek a matching record based on the digital fingerprint; and, based on an amount of difference between the digital fingerprint and a stored digital fingerprint in the database, updating the database system to output a new indication of a match to the physical object for any new samples that are not matchable to the stored digital fingerprint within a first predetermined similarity threshold, provided the new samples are matchable to the digital fingerprint within a second predetermined similarity threshold. Other embodiments may be disclosed or claimed.
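As a rough illustration of the two-threshold matching rule described in this abstract, the following sketch decides whether a new sample still matches an item whose fingerprint has drifted. The function names, feature-vector representation, and Euclidean distance metric are assumptions for illustration, not details from the patent.

```python
import math

def fingerprint_distance(a, b):
    """Euclidean distance between two feature vectors (illustrative metric)."""
    return math.dist(a, b)

def matches_item(new_sample, stored_fp, current_fp, t1, t2):
    """Apply the two-threshold rule sketched in the abstract.

    A new sample matches the item if it matches the stored fingerprint
    within threshold t1, or, failing that, still matches the most recently
    acquired fingerprint within threshold t2 (the item changed gradually).
    """
    if fingerprint_distance(new_sample, stored_fp) <= t1:
        return True
    return fingerprint_distance(new_sample, current_fp) <= t2
```

Under a rule like this, the database record can be updated to track the current fingerprint, so authentication survives gradual wear or alteration of the item.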
Methods and systems for content processing
Mobile phones and other portable devices are equipped with a variety of technologies by which existing functionality can be improved, and new functionality can be provided. Some aspects relate to visual search capabilities, and determining appropriate actions responsive to different image inputs. Others relate to processing of image data. Still others concern metadata generation, processing, and representation. Yet others concern user interface improvements. Other aspects relate to imaging architectures, in which a mobile phone's image sensor is one in a chain of stages that successively act on packetized instructions/data, to capture and later process imagery. Still other aspects relate to distribution of processing tasks between the mobile device and remote resources (“the cloud”). Elemental image processing (e.g., simple filtering and edge detection) can be performed on the mobile phone, while other operations can be referred out to remote service providers. The remote service providers can be selected using techniques such as reverse auctions, through which they compete for processing tasks. A great number of other features and arrangements are also detailed.
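The reverse-auction mechanism mentioned in this abstract, by which remote service providers compete for processing tasks, can be sketched very simply: the task goes to the lowest qualifying bid. Provider names and the bid structure below are invented for illustration.

```python
def select_provider(bids):
    """Pick the provider offering the lowest bid for a processing task.

    In a reverse auction, providers compete by underbidding each other,
    so the task is awarded to the cheapest bid.
    """
    provider, _price = min(bids.items(), key=lambda kv: kv[1])
    return provider

# Hypothetical per-task bids (dollars) from three remote OCR providers.
bids = {"ocr-service-a": 0.05, "ocr-service-b": 0.03, "ocr-service-c": 0.04}
```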
Appliance for processing food and method of operating same
The present application relates to a method of operating a cooking appliance in which a food category of a food item can be assigned automatically based on features extracted from an image of the food item. To improve the assignment, the method is provided with self-learning capability.
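One simple way to realize the self-learning assignment this abstract describes is a nearest-centroid classifier whose category centroids are nudged toward confirmed examples. The feature vectors, category names, and learning rate below are assumptions, not details from the application.

```python
import math

class FoodClassifier:
    """Nearest-centroid food-category assignment with simple self-learning."""

    def __init__(self, centroids):
        self.centroids = centroids  # category name -> feature vector

    def assign(self, features):
        """Assign the category whose centroid is closest to the features."""
        return min(self.centroids,
                   key=lambda c: math.dist(features, self.centroids[c]))

    def learn(self, features, category, rate=0.1):
        """Move the category's centroid toward a user-confirmed example."""
        c = self.centroids[category]
        self.centroids[category] = [x + rate * (f - x)
                                    for x, f in zip(c, features)]
```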
Art image characterization and system training in the loupe art platform
The Loupe system defines Loupe Visual Art DNA for art images presented to a user, so as to maximize and customize the experience of viewing art images delivered onto digital displays, TVs, and other screens, facilitating transitions between artworks with and without human interaction. The Loupe recommendations engine uses both human- and machine-curated data to determine which factors of art images will appeal to a user viewing them. The Loupe system gathers data about visual perception, historical and academic provenance, and the emotion or intention represented in an image. The gathered data is analyzed with deep-learning and AI algorithms to inform recommendations and select art images to present to a user. The user may purchase fine art prints, or select originals, of a displayed artwork image, if the artist elects to make it available for sale, through the Loupe integrated electronic marketplace.
Hyperzoom attribute analytics on the edge
A computer vision processor of a camera extracts attributes of persons or vehicles from hyperzooms generated from image frames. The hyperzooms represent traffic patterns. The extracting is performed using a feature extractor of an on-camera convolutional neural network (CNN) that includes an inverted residual structure. The attributes include at least colors of clothing of the persons or colors of the vehicles. Mobile semantic segmentation models of the CNN are generated using the hyperzooms and the attributes. Attribute analytics are generated by executing the mobile semantic segmentation models while obviating network usage by the camera. The attribute analytics are stored in a key-value database located on a memory card of the camera. A query is received from a server instance specifying one or more of the attributes. The attribute analytics are filtered using the specified attributes to obtain a portion of the traffic patterns.
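The on-camera filtering step at the end of this abstract can be sketched with a plain dict standing in for the key-value database on the camera's memory card. The record fields and attribute names below are assumptions for illustration.

```python
def filter_analytics(analytics, query):
    """Return stored records whose attributes match every queried value."""
    return [
        record for record in analytics
        if all(record.get(key) == value for key, value in query.items())
    ]

# Hypothetical attribute-analytics records stored on the camera.
analytics = [
    {"type": "person", "clothing_color": "red", "frame": 12},
    {"type": "vehicle", "vehicle_color": "blue", "frame": 15},
    {"type": "person", "clothing_color": "blue", "frame": 20},
]
```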
Electronic device for providing information on item based on category of item
Electronic devices are disclosed. A first device stores items, parent categories, images, child categories, and product information for each item. The first device receives a search image from a second device, determines a parent category and a child category of the search item, and identifies a first database, from among its databases, matching the determined parent category. When the child category is determined, the first device identifies a subset of the stored items in the first database that match the search image, based on at least one feature of the received search image and the determined child category, and transmits information on the identified subset to the second device. The second device transmits the image of a first item to the first device and receives a transmission indicating one or more second items matching the transmitted image, for display.
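The two-stage lookup this abstract describes — the parent category selects which database to search, then the child category plus image features narrow the candidates — might look like the sketch below. The field names and the exact-match feature comparison are simplifying assumptions; a real system would compare extracted feature vectors.

```python
def find_matching_items(databases, parent_category, child_category, image_feature):
    """Search only the database for the parent category, then filter by
    child category and a (simplified) feature comparison."""
    db = databases[parent_category]
    return [
        item["id"] for item in db
        if item["child"] == child_category and item["feature"] == image_feature
    ]

# Hypothetical per-parent-category databases.
databases = {
    "clothing": [
        {"id": "shirt-1", "child": "shirts", "feature": "striped"},
        {"id": "shirt-2", "child": "shirts", "feature": "plain"},
        {"id": "hat-1", "child": "hats", "feature": "striped"},
    ],
    "electronics": [
        {"id": "phone-1", "child": "phones", "feature": "black"},
    ],
}
```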
Telecommunication call management and monitoring system with voiceprint verification
Disclosed is a secure telephone call management system for authenticating users of a telephone system in an institutional facility. Authentication of the users is accomplished by using a personal identification number, preferably in conjunction with speaker-independent voice recognition and speaker-dependent voice identification. When a user first enters the system, the user speaks his or her name, which is used as a sample voice print. During each subsequent use of the system, the user is required to speak his or her name. Voice identification software is used to verify that the provided speech matches the sample voice print. The secure system includes accounting software to limit access based on funds in a user's account or other related limitations. Management software implements widespread or local changes to the system and can modify or set any number of user account parameters.
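The two-factor flow this abstract describes — PIN check followed by voiceprint comparison — reduces to something like the sketch below. The similarity function and threshold are placeholders for real speaker-verification scoring, and all names are invented.

```python
def authenticate(user_id, pin, voice_sample, pin_store, voiceprints,
                 similarity, threshold=0.8):
    """Authenticate a caller by PIN, then by comparing the spoken name
    against the voiceprint sample enrolled on first use."""
    if pin_store.get(user_id) != pin:
        return False  # wrong or unknown PIN: reject before voice check
    return similarity(voice_sample, voiceprints[user_id]) >= threshold
```

An account-balance check, as mentioned in the abstract, would be a third gate in the same chain before the call is connected.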
System and method for learning scene embeddings via visual semantics and application thereof
The present teaching relates to methods, systems, and programming for responding to an image-related query. Information related to each of a plurality of images is received, wherein the information represents concepts co-existing in the image. Visual semantics for each of the plurality of images are created based on the information related thereto. Representations of scenes of the plurality of images are obtained via machine learning, based on the visual semantics of the plurality of images, wherein the representations capture concepts associated with the scenes.
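A minimal version of "visual semantics from co-existing concepts" is a co-occurrence count over per-image concept lists, which downstream embedding learning (for example, factorizing the count matrix) could consume. The concept labels below are invented for illustration.

```python
from collections import Counter
from itertools import combinations

def concept_cooccurrence(image_concepts):
    """Count how often each pair of concepts co-exists in the same image."""
    counts = Counter()
    for concepts in image_concepts:
        for a, b in combinations(sorted(set(concepts)), 2):
            counts[(a, b)] += 1
    return counts

# Hypothetical per-image concept lists.
images = [
    ["beach", "umbrella", "person"],
    ["beach", "person", "dog"],
    ["kitchen", "person"],
]
```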
Suggested actions for images
- Juan Carlos Anorga
- David Lieb
- Madhur Khandelwal
- Evan Millar
- Timothy Novikoff
- Mugdha Kulkarni
- Leslie Ikemoto
- Jorge Verdu
- Jingyu Cui
- Sharadh Ramaswamy
- Raja Ratna Murthy Ayyagari
- Marc Cannon
- Alexander Roe
- Shaun Tungseth
- Songbo Jin
- Matthew Bridges
- Ruirui Jiang
- Jeremy Selier
- Austin Suszek
- Gang Song
Implementations relate to causing a command to be executed based on an image. In some implementations, a computer-implemented method includes obtaining and programmatically analyzing an image to determine suggested actions. The method causes a user interface to be displayed that includes user interface elements corresponding to default actions, and to suggested actions that are determined based on analyzing the image. The method receives user input indicative of selection of a particular action from the default actions and the suggested actions. The method causes a command to be executed by a computing device for the particular action that was selected.
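The flow this abstract describes — analyze the image, derive suggested actions, merge them with the defaults into one menu — can be sketched as follows. The label-to-action rules and action names are invented for illustration.

```python
def build_action_menu(image_labels, default_actions, suggestion_rules):
    """Combine default actions with suggestions triggered by image labels,
    preserving order and avoiding duplicates."""
    suggested = [action for label, action in suggestion_rules
                 if label in image_labels]
    menu = list(default_actions)
    for action in suggested:
        if action not in menu:
            menu.append(action)
    return menu

# Hypothetical rules mapping detected labels to suggested actions.
rules = [
    ("document", "scan_to_pdf"),
    ("face", "share_with_person"),
    ("landmark", "show_info"),
]
```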
Wearable apparatus with universal wireless controller and monitoring technology comprising pandemic detection feature
Disclosed is a wearable apparatus configured for monitoring a user's environment and items in the user's environment. The wearable apparatus comprises a plurality of communication circuits configured to communicate using a plurality of communication protocols. The wearable device is configured to use sensors to automatically detect organisms in the user's environment and issue warnings related to a detected organism. The wearable device is configured to communicate with a plurality of health sensors, which may be separate modules, using a plurality of communication protocols, and to combine the health readings into user health status data. The user health status data may be transmitted along with verification tags to remote devices.
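Combining readings from separate sensor modules into one user-health-status record, as this abstract describes, can be sketched simply. Field names, units, and the tag format are assumptions for illustration.

```python
def combine_health_readings(readings, verification_tag):
    """Merge per-module readings into a single status record carrying the
    verification tag for transmission to remote devices."""
    status = {"verification_tag": verification_tag}
    for module_reading in readings:
        status.update(module_reading)
    return status

# Hypothetical readings from three separate health-sensor modules.
readings = [
    {"heart_rate_bpm": 72},
    {"temperature_c": 36.8},
    {"spo2_pct": 98},
]
```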