Patent classifications
G06F16/58
Restoring integrity of a social media thread from a social network export
The disclosed technology addresses the need in the art for a service that can ingest a social network export and restore the integrity of threads within the social network export. The present technology can unite images in the social network export with the caption from the initial post and with any comments within the thread. Likewise, images in the social network export can be enhanced to include metadata that reflects when the image was posted and any other contextual information that the social network provides in the export file.
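As an illustration only (the abstract does not specify an export format), restoring thread integrity might be sketched as follows, assuming a hypothetical JSON export in which each post carries an image reference, a caption, a timestamp, and its comments:

```python
import json  # social network exports are commonly JSON; the structure here is assumed

# Hypothetical export: one post with its image, caption, timestamp, and comments.
export_json = """
{"posts": [{"id": "p1", "image": "beach.jpg", "caption": "Sunset!",
            "posted_at": "2021-06-01T19:30:00Z",
            "comments": [{"author": "ann", "text": "Beautiful"}]}]}
"""

def restore_threads(export_text):
    """Unite each image with its original caption, comments, and
    posting metadata so the thread reads as one unit again."""
    export = json.loads(export_text)
    return [
        {"image": post["image"],
         "metadata": {"posted_at": post["posted_at"]},
         "caption": post["caption"],
         "comments": post["comments"]}
        for post in export["posts"]
    ]

threads = restore_threads(export_json)
print(threads[0]["caption"])  # Sunset!
```

The metadata dictionary attached to each image is where any other contextual information from the export file would be carried along.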
Systems and methods for screenshot linking
A system for analyzing screenshots can include a computing device including a processor coupled to a memory and a display screen configured to display content. The system can include an application stored on the memory and executable by the processor. The application can include a screenshot receiver configured to access, from storage, a screenshot of the content displayed on the display screen, the screenshot captured using a screenshot function of the computing device and including an image and a predetermined marker. The application can include a marker detector configured to detect the predetermined marker included in the screenshot. The application can include a link identifier configured to identify, using the predetermined marker, a link to a resource mapped to the image included in the screenshot, the resource accessible by the computing device via the link.
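A minimal sketch of the marker-detector and link-identifier pair, assuming a hypothetical byte signature embedded in the screenshot data and an assumed registry mapping markers to resource links:

```python
# Assumed 4-byte signature that precedes a 2-byte marker id in the image data.
MARKER_PREFIX = b"\x89MRK"

# Hypothetical registry: predetermined marker -> link to the mapped resource.
marker_to_link = {
    b"\x89MRK\x00\x01": "https://example.com/article/42",
}

def detect_marker(screenshot_bytes):
    """Marker detector: scan the screenshot data for the signature."""
    idx = screenshot_bytes.find(MARKER_PREFIX)
    if idx == -1:
        return None
    return screenshot_bytes[idx:idx + 6]  # prefix (4 bytes) + marker id (2 bytes)

def identify_link(screenshot_bytes):
    """Link identifier: resolve the detected marker to its resource link."""
    marker = detect_marker(screenshot_bytes)
    return marker_to_link.get(marker)

shot = b"...pixels..." + b"\x89MRK\x00\x01" + b"...more pixels..."
print(identify_link(shot))  # https://example.com/article/42
```

A real implementation would more likely encode the marker visually (e.g., a steganographic pattern in the pixels) rather than as raw bytes, but the detect-then-resolve flow is the same.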
Coalescing Notifications Associated with Interactive Digital Content
The technology described herein is capable of generating and presenting graphical user interfaces for displaying shared content, configuring space objects (also simply called spaces), posting digital content items to various spaces, inviting other users to contribute digital content items to various spaces, forking digital content items posted in one space or post object to another space or post object, contextual searching, posting rich comments in association with a post including graphical and textual data, and so forth. Further, the technology may provide suggestive search based on the spaces associated with a user, generate and exchange data with other nodes on a computer network, generate notification data including notifications reflecting updates posted to spaces by various users, and coalesce related comments to reduce the number of notifications that each user receives and/or through which a user may have to navigate or scroll.
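One way coalescing might work, sketched with assumed notification fields (space, post, actor): related comment notifications on the same post are grouped into a single summary notification.

```python
from collections import defaultdict

# Hypothetical notification records, one per comment posted to a space.
notifications = [
    {"space": "design", "post": "p1", "actor": "amy"},
    {"space": "design", "post": "p1", "actor": "bo"},
    {"space": "eng", "post": "p7", "actor": "cy"},
]

def coalesce(notifications):
    """Group notifications by (space, post) and emit one summary each."""
    groups = defaultdict(list)
    for n in notifications:
        groups[(n["space"], n["post"])].append(n["actor"])
    return [
        {"space": space, "post": post,
         "summary": f"{actors[0]} and {len(actors) - 1} other(s) commented"
                    if len(actors) > 1 else f"{actors[0]} commented"}
        for (space, post), actors in groups.items()
    ]

for note in coalesce(notifications):
    print(note["summary"])
# amy and 1 other(s) commented
# cy commented
```

Three raw notifications become two, which is the reduction in scrolling the abstract describes.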
DISPLAY APPARATUS AND METHOD FOR PERSON RECOGNITION AND PRESENTATION
Provided are a display apparatus and a person recognition and presentation method. The display apparatus includes a display and a controller that is in communication with the display. The controller is configured to: obtain associated information of a display interface of the display and generate a scenario image for recognition in response to a user command; obtain facial feature information for recognition in the scenario image; obtain similar facial feature information when a matching confidence level of pre-stored facial feature information in a database with the facial feature information for recognition does not exceed a preset confidence level; obtain average-person recognition data; generate a sharing control uniquely matching the facial feature information for recognition; and control the display to present the average-person recognition data and the sharing control on a current display interface.
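The confidence-threshold fallback can be illustrated with a toy sketch (all feature vectors, the similarity measure, and the threshold value are assumptions, not the apparatus's actual method): when the best database match stays below the preset confidence level, the method falls back to the most similar entry plus average-person recognition data.

```python
import math

PRESET_CONFIDENCE = 0.8  # assumed preset confidence level

# Hypothetical pre-stored facial feature vectors.
database = {
    "alice": [0.9, 0.1, 0.2],
    "bob":   [0.1, 0.9, 0.3],
}

def cosine(a, b):
    """Cosine similarity, standing in for a matching confidence level."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def recognize(features):
    best_name, best_conf = max(
        ((name, cosine(features, vec)) for name, vec in database.items()),
        key=lambda item: item[1],
    )
    if best_conf > PRESET_CONFIDENCE:
        return {"match": best_name, "confidence": best_conf}
    # Confidence too low: surface similar features and generic data instead.
    return {"similar": best_name, "average_person": True}
```

For example, `recognize([0.9, 0.1, 0.2])` matches "alice" outright, while an ambiguous query triggers the average-person fallback.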
Guided information viewing and storage features within web browsers
The present disclosure relates to non-transitory computer readable mediums (CRMs) for guided viewing of annotations and the process of organizing and connecting annotations of web documents within web browsers. The rationale for creating and using such a computer readable medium is discussed in detail within this disclosure. Throughout this explanation, various steps are dissected and explained in the context of exemplary embodiments to elaborate on the relevant data structures and on the architectures, messaging patterns, and use cases that provide the context for those data structures.
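One plausible shape for connected annotations (the actual data structures are not given in the abstract; the fields and linking scheme below are assumptions): each annotation records where it anchors in a web document and points to the next annotation, so the browser can step a reader through a guided-viewing path.

```python
# Hypothetical annotation store: each entry anchors to a document selector
# and links to the next annotation in the guided-viewing order.
annotations = {
    "a1": {"url": "https://example.com/doc", "selector": "#intro",
           "note": "Start here", "next": "a2"},
    "a2": {"url": "https://example.com/doc", "selector": "#method",
           "note": "Key step", "next": None},
}

def guided_path(annotations, start):
    """Follow 'next' links to yield annotation notes in guided order."""
    current = start
    while current is not None:
        ann = annotations[current]
        yield ann["note"]
        current = ann["next"]

print(list(guided_path(annotations, "a1")))  # ['Start here', 'Key step']
```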
PRIORITIZED DEVICE ACTIONS TRIGGERED BY DEVICE SCAN DATA
Systems, methods, devices, server computers, storage media, and instructions for prioritized device actions triggered by device scan data are described. In one embodiment, a mobile device performs a method that involves executing a messaging application with an image capture interface and a scanning input. An associated scanning mode comprises capturing scan data from a plurality of input/output modules of the device, analyzing the scan data to identify one or more scan data patterns by matching at least a portion of the scan data against a set of data patterns, and selecting a priority system action based on the results of the matching. In some embodiments, the priority system action is selected based on a priority ranking for identified scan data types.
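The match-then-rank selection might look like the following sketch (the pattern set, matchers, action names, and priority numbers are all assumptions for illustration):

```python
# Hypothetical pattern set: (scan data type, matcher, system action,
# priority rank) - a lower rank wins when multiple patterns match.
PATTERNS = [
    ("qr_code", lambda d: d.startswith(b"QR:"), "open_link",     1),
    ("barcode", lambda d: d.startswith(b"BC:"), "show_product",  2),
    ("audio",   lambda d: d.startswith(b"AU:"), "identify_song", 3),
]

def select_priority_action(scan_data):
    """Match scan data against the pattern set, then pick the system
    action whose scan data type has the highest priority ranking."""
    matches = [(action, rank) for _, matcher, action, rank in PATTERNS
               if matcher(scan_data)]
    if not matches:
        return None
    return min(matches, key=lambda m: m[1])[0]

print(select_priority_action(b"QR:https://example.com"))  # open_link
```

Real scan data would come from camera, microphone, and sensor modules rather than tagged byte strings, but the priority-ranked dispatch is the same idea.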
Fast annotation of samples for machine learning model development
Computer systems and associated methods are disclosed to implement a model development environment (MDE) that allows a team of users to perform iterative model experiments to develop machine learning (ML) media models. In embodiments, the MDE implements a media data management interface that allows users to annotate and manage training data for models. In embodiments, the MDE implements a model experimentation interface that allows users to configure and run model experiments, which include a training run and a test run of a model. In embodiments, the MDE implements a model diagnosis interface that displays the model's performance metrics and allows users to visually inspect media samples that were used during the model experiment to determine corrective actions to improve model performance for later iterations of experiments. In embodiments, the MDE allows different types of users to collaborate on a series of model experiments to build an optimal media model.
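A toy sketch of one model experiment iteration (the "model" here is just a fixed decision threshold, an assumption standing in for a real training run), showing the train-run/test-run split and the performance metric a user would inspect between iterations:

```python
def run_experiment(test_samples, threshold):
    """Toy experiment: the 'training run' fixes a decision threshold;
    the 'test run' scores it on annotated samples. A real MDE would
    train and evaluate an ML media model here."""
    predict = lambda value: value > threshold
    correct = sum(1 for value, label in test_samples if predict(value) == label)
    return {"threshold": threshold, "accuracy": correct / len(test_samples)}

# Annotated test samples: (sample value, ground-truth label).
annotated = [(0.2, False), (0.9, True), (0.7, True), (0.1, False)]

# Two iterations of the experiment with different configurations.
experiments = [run_experiment(annotated, t) for t in (0.5, 0.8)]
best = max(experiments, key=lambda e: e["accuracy"])
print(best["threshold"], best["accuracy"])  # 0.5 1.0
```

Comparing the accuracy metric across iterations is what lets the team choose a corrective action (here, keeping the lower threshold) for the next experiment.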
USING TRACKING PIXELS TO DETERMINE AREAS OF INTEREST ON A ZOOMED IN IMAGE
A system and method for enhancing searching capabilities is disclosed. The system and method can receive an image and metadata associated with the image. An intensity map of a grayscale vector may be generated corresponding to the image. An HTML code snippet may be placed at a coordinate location within the image, and a browsing activity associated with the image may be detected. The HTML code snippet may be activated in response to detecting the browsing activity at the coordinate location. An interactive page may be rendered on a user interface, the interactive page including the image and the metadata associated with the image. The code snippet output may be correlated with the metadata to generate image browsing track data. A user browsing profile may be generated including the image browsing track data.
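The correlation step might be sketched as follows (event and metadata field names are assumptions): each snippet activation reports its coordinate and event type, which is joined with the image's metadata to form browsing track data.

```python
# Assumed metadata for one image.
image_metadata = {"image_id": "img42", "tags": ["shoes", "red"]}

# Hypothetical snippet outputs: one record per activation at a coordinate.
snippet_events = [
    {"image_id": "img42", "x": 120, "y": 80, "event": "zoom"},
    {"image_id": "img42", "x": 300, "y": 200, "event": "hover"},
]

def build_browsing_track(events, metadata):
    """Correlate snippet output with image metadata into track data."""
    return [
        {"coords": (e["x"], e["y"]), "event": e["event"],
         "tags": metadata["tags"]}
        for e in events if e["image_id"] == metadata["image_id"]
    ]

profile = {"user": "u1",
           "track": build_browsing_track(snippet_events, image_metadata)}
print(len(profile["track"]))  # 2
```

The resulting track records which regions of the image drew attention, tied to the metadata terms that make the data searchable.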
ADVERSARIALLY ROBUST VISUAL FINGERPRINTING AND IMAGE PROVENANCE MODELS
The present disclosure relates to systems, methods, and non-transitory computer readable media that utilize a deep visual fingerprinting model with parameters learned from robust contrastive learning to identify matching digital images and image provenance information. For example, the disclosed systems utilize an efficient learning procedure that leverages training on bounded adversarial examples to more accurately identify digital images (including adversarial images) with a small computational overhead. To illustrate, the disclosed systems utilize a first objective function that iteratively identifies augmentations to increase contrastive loss. Moreover, the disclosed systems utilize a second objective function that iteratively learns parameters of a deep visual fingerprinting model to reduce the contrastive loss. With these learned parameters, the disclosed systems utilize the deep visual fingerprinting model to generate visual fingerprints for digital images, retrieve and match digital images, and provide digital image provenance information.
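The interplay of the two objective functions can be illustrated with a drastically simplified 1-D sketch (pure Python, illustrative only; the disclosed systems use deep models and proper gradient methods): the first objective searches bounded perturbations that increase contrastive loss, and the second updates the fingerprint parameter to reduce that worst-case loss.

```python
def fingerprint(w, x):
    return w * x  # toy stand-in for a deep visual fingerprinting model

def contrastive_loss(w, anchor, positive, negative):
    # Pull the matching pair together, push the non-matching pair apart.
    pull = (fingerprint(w, anchor) - fingerprint(w, positive)) ** 2
    push = (fingerprint(w, anchor) - fingerprint(w, negative)) ** 2
    return pull - push

def worst_case(w, anchor, positive, negative, bound, steps=20):
    """First objective: bounded perturbation of the positive example
    that maximizes the contrastive loss (grid search in this toy)."""
    candidates = [positive + bound * (2 * i / steps - 1)
                  for i in range(steps + 1)]
    return max(candidates,
               key=lambda p: contrastive_loss(w, anchor, p, negative))

def train(anchor, positive, negative, bound, lr=0.01, iters=5):
    """Second objective: update w against the worst-case perturbation
    found at each iteration (finite-difference gradient)."""
    w, eps = 1.0, 1e-4
    for _ in range(iters):
        adv = worst_case(w, anchor, positive, negative, bound)
        grad = (contrastive_loss(w + eps, anchor, adv, negative)
                - contrastive_loss(w - eps, anchor, adv, negative)) / (2 * eps)
        w -= lr * grad
    return w

w0 = 1.0
w1 = train(anchor=1.0, positive=1.1, negative=3.0, bound=0.2)
adv0 = worst_case(w0, 1.0, 1.1, 3.0, 0.2)
adv1 = worst_case(w1, 1.0, 1.1, 3.0, 0.2)
```

After training, the contrastive loss under the worst-case perturbation is lower than before, which is the adversarial-robustness property the learned fingerprints are meant to have.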