G06F40/169

Descriptor uniqueness for entity clustering

A mechanism is provided in a data processing system to implement a cognitive natural language processing (NLP) system with descriptor uniqueness identification to support named entity mention clustering. The mechanism annotates a set of documents from a corpus of documents for entity types and mentions, collects descriptor usages from all documents in the corpus of documents, analyzes the descriptor usages to classify the descriptors as base terms or modifier terms, generates compatibility scores for the descriptors, and performs entity merging of entity clusters based on the compatibility scores.
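The abstract names the pipeline stages but not the algorithms behind them. A minimal sketch of the classify/score/merge steps, assuming a head-vs-modifier position heuristic for classification and a Jaccard overlap for the compatibility score (both are illustrative choices, not taken from the disclosure):

```python
from itertools import combinations

def classify_descriptors(usages):
    """Classify each descriptor as a 'base' or 'modifier' term.

    `usages` maps a descriptor to the positions it was observed in
    ("head" when it ends a mention, "mod" when it precedes another
    term).  A descriptor used mostly as a head is treated as a base
    term.  (Heuristic assumption; the patent does not fix a rule.)
    """
    labels = {}
    for term, positions in usages.items():
        heads = sum(1 for p in positions if p == "head")
        labels[term] = "base" if heads >= len(positions) / 2 else "modifier"
    return labels

def compatibility(mention_a, mention_b):
    """Jaccard overlap of two descriptor sets (illustrative score)."""
    a, b = set(mention_a), set(mention_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def merge_clusters(clusters, threshold=0.5):
    """Greedily merge entity clusters whose descriptor sets score at or
    above `threshold`."""
    merged = [set(c) for c in clusters]
    changed = True
    while changed:
        changed = False
        for i, j in combinations(range(len(merged)), 2):
            if compatibility(merged[i], merged[j]) >= threshold:
                merged[i] |= merged[j]
                del merged[j]
                changed = True
                break
    return merged
```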

Sharing screen content in a mobile environment
11573810 · 2023-02-07

Systems and methods are provided for sharing a screen from a mobile device. For example, a method includes receiving, at a second mobile device, an image of a screen captured from a first mobile device and determining whether to trigger an automated action. The method may also include displaying, responsive to not triggering the automated action, annotation data generated for the image with the image on a display of the second mobile device, the annotation data including at least one visual cue corresponding to content in the image relevant to a user of the second mobile device. The method may further include, responsive to triggering the automated action, determining that a mobile application associated with the image is installed on the second mobile device and replaying user input actions received with the image on the second mobile device starting from a reference screen associated with the mobile application.
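The two branches in the claim (annotate-and-display vs replay-on-installed-app) can be sketched as a single dispatch function. Everything here, including the dict-shaped `image` and the `"reference_screen"` marker, is an assumed stand-in; the patent does not specify an API:

```python
def handle_shared_screen(image, installed_apps, replay_actions=None):
    """Decide how a received screen image is handled on the second device.

    Returns a (mode, payload) tuple.  All names are illustrative.
    """
    app = image.get("source_app")
    if replay_actions and app in installed_apps:
        # Automated action: replay the sender's input actions starting
        # from the app's reference screen instead of showing the image.
        return ("replay", {"app": app, "actions": replay_actions,
                           "start": "reference_screen"})
    # Default: display the image together with annotation data, i.e.
    # visual cues pointing at content relevant to the receiving user.
    cues = [c for c in image.get("annotations", []) if c.get("relevant")]
    return ("display", {"image": image["pixels"], "cues": cues})
```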

Managing content item collections

Disclosed are systems, methods, and non-transitory computer-readable storage media for managing content item collections. For example, in an embodiment, a client device may receive first user input selecting a content item collection. The client device may generate a graphical user interface for presenting the content item collection. The content item collection may include one or more tiles. Each tile may correspond to a content item embedded into the content item collection and stored by a content management system. The client device may present the content item collection including the one or more tiles. The client device may present, within each of the one or more tiles, an image representing the corresponding content item.
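The collection/tile relationship described above maps naturally onto a small data model. A sketch with hypothetical field names (the disclosure does not name its data structures), where `render` stands in for the graphical user interface:

```python
from dataclasses import dataclass, field

@dataclass
class Tile:
    """One tile in a collection; wraps a content item stored by the
    content management system.  Field names are illustrative."""
    item_id: str
    preview_image: str  # image representing the embedded content item

@dataclass
class ContentCollection:
    title: str
    tiles: list = field(default_factory=list)

    def add_item(self, item_id, preview_image):
        self.tiles.append(Tile(item_id, preview_image))

    def render(self):
        """Return a textual stand-in for presenting the collection."""
        lines = [f"# {self.title}"]
        lines += [f"[{t.preview_image}] {t.item_id}" for t in self.tiles]
        return "\n".join(lines)
```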

Automatic reminders in a mobile environment
11704136 · 2023-07-18

Systems and methods are provided for suggesting reminders from content displayed on a mobile device. An example method may include analyzing content generated by a first mobile application and displayed on a display of a mobile device, and determining that the content suggests an event, the event including at least one entity. The method may also include providing an assistance window requesting confirmation for adding a reminder for the event in a second mobile application responsive to determining that the content suggests the event, and adding the reminder via the second mobile application responsive to receiving the confirmation. In some implementations the first mobile application is a messaging application.
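The "content suggests an event including at least one entity" step could be approximated with a trivial pattern over message text. The regex below is a toy placeholder for what would really be an NLP model, and the confirm/add flow in the second application is omitted:

```python
import re

# Tiny pattern for "<verb> <entity> ... at <time>" phrases such as
# "call Mom at 7pm".  Purely illustrative; a real system would use
# trained entity and event models.
EVENT_RE = re.compile(
    r"\b(call|meet|pay|email)\s+(\w+)\b.*?"
    r"\b(at\s+\d{1,2}(?::\d{2})?\s*(?:am|pm))\b", re.I)

def suggest_reminder(message_text):
    """Return a proposed reminder dict if the text suggests an event,
    else None (no assistance window is shown)."""
    m = EVENT_RE.search(message_text)
    if not m:
        return None
    action, entity, when = m.groups()
    return {"title": f"{action.capitalize()} {entity}", "when": when}
```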

OBFUSCATING TRAINING DATA
20180005626 · 2018-01-04

Examples disclosed herein involve obfuscating training data. An example method includes computing a sequence of acoustic features from audio data of training data, the training data comprising the audio data and a corresponding text transcript; mapping the acoustic features to acoustic model states to generate annotated feature vectors, the annotated feature vectors comprising the acoustic features and corresponding context from the text transcript; and providing a randomized sequence of the annotated feature vectors as obfuscated training data to an audio analysis system.
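The key idea, randomizing the order of annotated feature vectors so the utterance cannot be reconstructed while frame-level training pairs survive, can be shown end to end. The "features" and the frame-to-word alignment below are stand-ins for real acoustic features and acoustic-model state mapping:

```python
import random

def obfuscate_training_data(audio_frames, transcript_words, seed=0):
    """Sketch of the disclosed flow: pair each acoustic feature with its
    transcript context, then emit the annotated vectors in randomized
    order as obfuscated training data."""
    # "Acoustic features": one toy feature per frame (mean amplitude);
    # real systems would compute MFCC-style feature vectors.
    features = [sum(frame) / len(frame) for frame in audio_frames]
    # Annotate each feature with its context word (alignment assumed).
    annotated = [{"feature": f, "context": w}
                 for f, w in zip(features, transcript_words)]
    # Randomize the sequence; the annotation travels with each vector,
    # so an audio analysis system can still train on frame-level pairs.
    rng = random.Random(seed)
    rng.shuffle(annotated)
    return annotated
```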

Remote Creation of a Playback Queue for a Future Event
20180004714 · 2018-01-04

Example embodiments involve remote creation of a playback queue for an event. An example implementation involves a cloud computing system receiving, from a first mobile device, one or more messages representing an instruction to create a playlist for an event. In response, the system creates the playlist in data storage. The system sends, to multiple second mobile devices, one or more respective invitations to the event, wherein each invitation indicates: a time and date for the event and a link to a web interface including controls to add audio tracks to the playlist for the event. The system receives respective sets of input data, each set indicating audio tracks selected via the web interface on a respective second mobile device and, in response, adds the respective audio tracks to the playlist. During the event, the system causes the playlist to be queued in a queue of a media playback system.
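The create/invite/collect/queue flow reads like a small service API. An in-memory stand-in for the cloud computing system, with invented method and field names (the abstract specifies messages and behavior, not an interface):

```python
class PlaylistService:
    """In-memory sketch of the cloud system in the abstract: create a
    playlist for an event, invite guest devices, collect their track
    picks via a link, and queue the playlist during the event."""

    def __init__(self):
        self.playlists = {}   # event_id -> list of audio tracks
        self.invites = {}     # event_id -> list of sent invitations
        self.queue = []       # the media playback system's queue

    def create_playlist(self, event_id):
        self.playlists[event_id] = []

    def invite(self, event_id, devices, when):
        # Each invitation indicates the event time/date and a link to
        # the web interface for adding tracks.
        self.invites[event_id] = [
            {"device": d, "when": when, "link": f"/events/{event_id}/add"}
            for d in devices]

    def add_tracks(self, event_id, tracks):
        # One call per set of input data from a second mobile device.
        self.playlists[event_id].extend(tracks)

    def start_event(self, event_id):
        # During the event, cause the playlist to be queued.
        self.queue = list(self.playlists[event_id])
        return self.queue
```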

VISUAL RECOGNITION USING SOCIAL LINKS
20180004719 · 2018-01-04

System, method and architecture for providing improved visual recognition by modeling visual content, semantic content and an implicit social network representing individuals depicted in a collection of content, such as visual images, photographs, etc., which network may be determined based on co-occurrences of individuals represented by the content, and/or other data linking the individuals. In accordance with one or more embodiments, using images as an example, a relationship structure may comprise an implicit structure, or network, determined from co-occurrences of individuals in the images. A kernel jointly modeling content, semantic and social network information may be built and used in automatic image annotation and/or determination of relationships between individuals, for example.
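The implicit network part is concrete enough to sketch: link weights come from how often two individuals co-occur in the same image. The combined score below is a toy linear blend, not the kernel the patent builds:

```python
from collections import Counter
from itertools import combinations

def build_social_network(photo_tags):
    """Derive the implicit social network from co-occurrences: each time
    two people appear in the same photo, their link weight increases."""
    links = Counter()
    for people in photo_tags:
        for a, b in combinations(sorted(set(people)), 2):
            links[(a, b)] += 1
    return links

def joint_score(visual_sim, a, b, links, alpha=0.5):
    """Toy stand-in for a kernel jointly modeling visual similarity and
    social-link strength (the actual kernel is not specified here)."""
    pair = tuple(sorted((a, b)))
    social = links[pair] / (1 + links[pair])  # squashed co-occurrence
    return alpha * visual_sim + (1 - alpha) * social
```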