Patent classifications
H04M1/7243
USER INTERFACE FOR MULTI-USER COMMUNICATION SESSION
The present disclosure generally relates to user interfaces for multi-user communication sessions. In some examples, a device initiates a live stream in a communication session. In some examples, a device transitions between streaming live audio and live video. In some examples, a device enables synchronizing media playback during a live stream. In some examples, a device displays synchronized media playback and plays a reaction from a first participant of the communication session.
DATA PROCESSING METHOD, TERMINAL DEVICE AND SERVER DEVICE
The present invention discloses a data processing method, a terminal device and a server device. The method comprises acquiring messages and adding and displaying list items for the messages in an electronic content list. With this method, list items for messages can be provided in an electronic content list, offering a new mode of operation for reading the messages.
Unified message search
The disclosed embodiments include computerized methods, systems, and devices, including computer programs encoded on a computer storage medium, for generating terms of a search query based on a user's spoken utterances, identifying multiple cross-platform messages based on the generated terms, and generating, via a presentation device, a single interface that enables the user to interact with the identified messages. Based on a spoken utterance, the disclosed embodiments may determine user-specified search terms and/or criteria and, based on those terms and criteria, may obtain cross-platform message data that corresponds to the search query. The communications device may generate one or more interface elements, each describing a corresponding cross-platform message, and may present them within a unified graphical user interface or voice-user interface.
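The core operation the abstract describes (matching messages drawn from multiple platforms against utterance-derived search terms and rendering them as elements of a single unified list) can be sketched in a few lines. This is a minimal illustration, not the patented implementation; the `Message` type, the all-terms-must-match rule, and the display format are all assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Message:
    platform: str   # e.g. "email", "sms", "chat"
    sender: str
    text: str

def search_messages(messages: List[Message], terms: List[str]) -> List[Message]:
    """Return messages from any platform whose text contains every search term."""
    lowered = [t.lower() for t in terms]
    return [m for m in messages if all(t in m.text.lower() for t in lowered)]

def interface_elements(results: List[Message]) -> List[str]:
    """One display string per matched message, for a single unified list."""
    return [f"[{m.platform}] {m.sender}: {m.text}" for m in results]

inbox = [
    Message("email", "Ana", "Quarterly report attached"),
    Message("sms", "Ben", "Report is ready for review"),
    Message("chat", "Cal", "Lunch tomorrow?"),
]
results = search_messages(inbox, ["report"])
```

Here both the email and the SMS match the term "report" even though they originate on different platforms, and each becomes one element of the unified interface.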
Systems and methods for selecting media items
A device includes an image capture device configured to capture a first video. The device includes a memory configured to store one or more videos. The device further includes a processor coupled to the memory. The processor is configured to concatenate the first video and a second video to generate a combined video. The second video is included in the one or more videos or is accessible via a network. The second video is selected by the processor based on a similarity of a first set of characteristics with a second set of characteristics. The first set of characteristics corresponds to the first video. The second set of characteristics corresponds to the second video.
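The selection step the abstract describes (comparing a set of characteristics of the captured video against the characteristics of stored videos, picking the most similar one, and concatenating the two) can be sketched as follows. This is an assumed illustration: the dictionary-of-scores representation of "characteristics," the choice of cosine similarity, and the frame-list concatenation stand in for whatever the patent's processor actually computes.

```python
import math
from typing import Dict, List

def similarity(a: Dict[str, float], b: Dict[str, float]) -> float:
    """Cosine similarity between two characteristic vectors
    (hypothetical scores such as scene type, motion, color)."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_and_concatenate(captured: dict, library: List[dict]) -> List[str]:
    """Pick the stored video whose characteristics best match the captured
    video's, then append its frames to produce the combined video."""
    best = max(library, key=lambda v: similarity(captured["traits"], v["traits"]))
    return captured["frames"] + best["frames"]

captured = {"traits": {"beach": 0.9, "motion": 0.2}, "frames": ["c1", "c2"]}
library = [
    {"traits": {"beach": 0.8, "motion": 0.1}, "frames": ["a1"]},
    {"traits": {"city": 1.0}, "frames": ["b1"]},
]
combined = select_and_concatenate(captured, library)
```

The beach-scene video in the library scores highest against the captured beach footage, so its frames are the ones concatenated into the combined video.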
SYSTEM AND METHOD FOR TEXT-BASED DELIVERY OF SALES PROMOTIONS WITH DEFERRED TEXT-TO-CALL INTERACTIONS
A system and method for messaging-triggered sales lead redirection use an interaction control server to facilitate initial communications between potential buyers and sales representatives of sellers. In an embodiment, the system comprises a triggering application installed on the mobile phone of a user (a potential buyer); a media gateway server, which provides context-aware advertising and through which potential buyers may be connected directly with sales representatives of a seller; and an interaction control server, which controls the messaging between the mobile device and the media gateway server.
AUTOMATIC CAMERA SELECTION IN A COMMUNICATION DEVICE
A method, a first communication device and a computer program product for selecting an active camera from a front facing camera and a rear facing camera for use during a video communication session. A request is detected, via a processor, to transition to a video communication session between the first communication device and a second communication device. The first communication device receives, from the second communication device, first context identifying data that identifies which of at least one front facing camera or at least one rear facing camera to activate. The first context identifying data is generated at the second communication device based on information within the exchanged communication. At least one front facing camera or at least one rear facing camera identified by the received first context identifying data is selected and activated.
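The decision at the heart of this abstract (deriving context identifying data from the content of the exchanged communication and using it to pick the front or rear camera) can be sketched minimally. The keyword heuristic below is an invented stand-in; the patent does not specify how the second device analyzes the communication, only that the resulting context data identifies which camera to activate.

```python
# Hypothetical phrases suggesting the remote party wants to see the
# caller's surroundings, which would favor the rear-facing camera.
REAR_HINTS = ("show me", "look at", "what does it look like")

def infer_camera(last_message: str) -> str:
    """Generate context identifying data ("front" or "rear") from the text
    of the exchanged communication, as the second device might."""
    text = last_message.lower()
    return "rear" if any(h in text for h in REAR_HINTS) else "front"

def select_active_camera(context_data: str) -> str:
    """On the first device: activate the camera identified by the received
    context identifying data, defaulting to the front-facing camera."""
    return context_data if context_data in ("front", "rear") else "front"
```

A message like "Show me the view from your window" would yield rear-camera context data, while an ordinary greeting defaults to the front-facing (selfie) camera.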