Patent classifications
G06F16/63
SYSTEM AND METHOD FOR RECOMMENDING BACKGROUND MUSIC FOR BOOKS USING MACHINE LEARNING MODELS
A system and a method for recommending background music that corresponds to an extracted text from a book, based on an emotion and a topic relevant to the extracted text, using machine learning models are provided. The method includes: (i) determining, using a first trained machine learning model, the emotion from the extracted text that corresponds to a paragraph of the book; (ii) assigning, using a word similarity technique, a similarity score to emotion-words based on the emotion; (iii) determining the emotion-words that exceed a threshold to obtain a subset of emotion-words; (iv) determining a query using the subset of emotion-words and the emotion; (v) retrieving, using the query, songs that match any of the words in the query; and (vi) recommending background music based on top-ranked songs for the extracted text from the book.
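A minimal sketch of the claimed pipeline, not the patent's implementation: the emotion classifier, the word-similarity function, the emotion-word list, the 0.2 threshold, and the song index are all stand-ins invented for illustration.

```python
def classify_emotion(paragraph: str) -> str:
    """Stand-in for the first trained machine learning model (step i)."""
    return "melancholy" if "rain" in paragraph.lower() else "joy"

def word_similarity(a: str, b: str) -> float:
    """Stand-in for the word-similarity technique (step ii)."""
    overlap = len(set(a) & set(b))
    return overlap / max(len(set(a) | set(b)), 1)

EMOTION_WORDS = ["sad", "wistful", "gloomy", "happy", "bright", "tender"]

def recommend(paragraph: str, song_index: dict[str, set[str]], top_k: int = 3):
    emotion = classify_emotion(paragraph)                              # step (i)
    scored = {w: word_similarity(emotion, w) for w in EMOTION_WORDS}   # step (ii)
    subset = [w for w, s in scored.items() if s > 0.2]                 # step (iii)
    query = set(subset) | {emotion}                                    # step (iv)
    # step (v): retrieve songs whose tags match any word in the query
    hits = [(song, len(tags & query)) for song, tags in song_index.items()
            if tags & query]
    hits.sort(key=lambda item: item[1], reverse=True)                  # rank
    return [song for song, _ in hits[:top_k]]                          # step (vi)
```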
PROVIDING RELATED QUERIES TO A SECONDARY AUTOMATED ASSISTANT BASED ON PAST INTERACTIONS
Systems and methods for providing audio data from an initially invoked automated assistant to a subsequently invoked automated assistant are described. An initially invoked automated assistant may be invoked by a user utterance, followed by audio data that includes a first query. The first query is provided to a secondary automated assistant for processing. Subsequently, the user can submit a query that is related to the first query. In response, the initially invoked automated assistant provides the subsequent query to the same secondary automated assistant, in lieu of providing it to other secondary automated assistants, based on similarity between the first query and the subsequent query.
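A sketch of the routing idea under simple assumptions: query similarity is approximated with token overlap (Jaccard), and the threshold, class names, and handler interface are invented for illustration.

```python
def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

class PrimaryAssistant:
    def __init__(self, secondaries: dict[str, callable], threshold: float = 0.4):
        self.secondaries = secondaries                 # name -> handler function
        self.threshold = threshold
        self.history: list[tuple[str, str]] = []       # (query, secondary used)

    def route(self, query: str, default: str) -> str:
        # If a past query is sufficiently similar, reuse the same secondary
        # assistant instead of the default routing.
        for past_query, assistant in reversed(self.history):
            if jaccard(query, past_query) >= self.threshold:
                chosen = assistant
                break
        else:
            chosen = default
        self.history.append((query, chosen))
        return self.secondaries[chosen](query)
```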
AUDIO PROCESSING METHOD AND APPARATUS
Disclosed are an audio processing method and an electronic apparatus. The audio processing method is applied to a conference system, and the conference system includes at least one audio capturing device. The audio processing method includes: receiving at least one segment of audio captured by the at least one audio capturing device (S1710); determining voices of a plurality of targets in the at least one segment of audio (S1720); and performing voice recognition on a voice of each of the plurality of targets, to obtain semantics corresponding to the voice of each target (S1730). Voice recognition is separately performed on voices of different targets, thereby improving accuracy of voice recognition.
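A toy sketch of the claimed flow: split a captured audio segment by speaker (diarization) and run recognition per speaker. Both helpers are stand-ins; a real system would use dedicated diarization and speech recognition models.

```python
from typing import Dict, List

def diarize(audio_segment: bytes) -> Dict[str, List[bytes]]:
    """Stand-in for S1720: map each target (speaker) to their voice chunks."""
    midpoint = len(audio_segment) // 2
    return {"speaker_1": [audio_segment[:midpoint]],
            "speaker_2": [audio_segment[midpoint:]]}

def recognize(voice_chunks: List[bytes]) -> str:
    """Stand-in for S1730: per-speaker voice recognition yielding semantics."""
    return f"<transcript of {sum(len(c) for c in voice_chunks)} bytes>"

def process_conference_audio(segments: List[bytes]) -> Dict[str, str]:
    transcripts: Dict[str, str] = {}
    for segment in segments:                           # S1710: received segments
        for speaker, chunks in diarize(segment).items():
            transcripts[speaker] = recognize(chunks)
    return transcripts
```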
Displaying information related to content playing on a device
A computer-implemented method includes: detecting whether a user is watching media content; after detecting that the user is watching the media content, presenting on a user device a first affordance providing a first user-selectable election to receive information on entities relevant to the media content; in response to user selection of the first election: sampling at the user device program information from the media content, including one or more of audio signals and subtitles, and sending the program information to a server, which identifies the media content, generates one or more second user-selectable elections for the identified media content, and sends to the user device one or more second affordances providing the second user-selectable elections; displaying the second affordances on the user device; and in response to user selection of one of the second affordances, displaying on the user device information on a respective entity relevant to the media content.
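A rough sketch of the client/server exchange described above; the sampling format, the identification logic, and the affordance structure are placeholders, not the patented method or a real API.

```python
def sample_program_info(audio_signal: bytes, subtitles: str) -> dict:
    """Client side: package an audio sample and subtitle text for the server."""
    return {"audio_len": len(audio_signal), "subtitles": subtitles[:200]}

def identify_and_build_elections(program_info: dict) -> list[dict]:
    """Server side (placeholder): identify the content, return second affordances."""
    title = "Identified show" if program_info["subtitles"] else "Unknown show"
    return [{"label": f"Cast of {title}", "entity": "cast"},
            {"label": f"Soundtrack of {title}", "entity": "music"}]

def on_user_opts_in(audio: bytes, subtitles: str) -> None:
    info = sample_program_info(audio, subtitles)
    affordances = identify_and_build_elections(info)   # normally a server round-trip
    for affordance in affordances:
        print("Display affordance:", affordance["label"])
```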
Voice command processing method and electronic device utilizing the same
A voice command processing method provides a unified voice control interface to access and control Internet of Things (IoT) devices and to configure values of attributes of graphical user interface (GUI) elements, attributes of applications, and attributes of the IoT devices. When a voice command comprises an expression of a percentage or a fraction of a baseline value of an attribute, or an exact value of the attribute of an IoT device, the unified voice control interface sets the attribute of the IoT device in response to the percentage, the fraction, or the exact value in the voice command.
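An illustrative parser for the three value forms the abstract mentions (percentage, fraction, exact value); the command grammar and the 255-level baseline in the example are assumptions.

```python
import re
from fractions import Fraction

def resolve_value(expression: str, baseline: float) -> float:
    """Turn a spoken value expression into an attribute setting."""
    expression = expression.strip().lower()
    if expression.endswith("%"):                        # percentage, e.g. "50%"
        return baseline * float(expression[:-1]) / 100.0
    if re.fullmatch(r"\d+\s*/\s*\d+", expression):      # fraction, e.g. "1/4"
        return baseline * float(Fraction(expression.replace(" ", "")))
    return float(expression)                            # exact value, e.g. "40"

# Example: setting a light with a hypothetical 255-level brightness baseline
print(resolve_value("50%", baseline=255))   # 127.5
print(resolve_value("1/4", baseline=255))   # 63.75
print(resolve_value("40", baseline=255))    # 40.0
```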
Data transfers from memory to manage graphical output latency
Systems and methods of transferring data from memory to manage graphical output latency are provided. A computing device having a display receives an acoustic signal that carries a query. The device determines that a wireless controller device is in a first state. Based on receipt of the acoustic signal and the determination that the wireless controller device is in the first state, the device establishes a first interaction mode for a graphical user interface rendered for display via the display. The device sets a prefetch parameter to a first value and prefetches the corresponding amount of electronic content items. The device then establishes a second interaction mode, overrides the first value of the prefetch parameter with a second value, and prefetches a second amount of electronic content items corresponding to the second value.
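A sketch of the mode-dependent prefetching idea; the mode names, prefetch values, and class interface are invented for illustration and are not taken from the patent.

```python
VOICE_ONLY_PREFETCH = 2      # first interaction mode: small prefetch
CONTROLLER_PREFETCH = 8      # second interaction mode: larger prefetch

class ContentPrefetcher:
    def __init__(self, fetch_item):
        self.fetch_item = fetch_item          # callable: index -> content item
        self.prefetch_count = VOICE_ONLY_PREFETCH

    def set_mode(self, controller_active: bool) -> None:
        # Override the prefetch parameter when the interaction mode changes.
        self.prefetch_count = (CONTROLLER_PREFETCH if controller_active
                               else VOICE_ONLY_PREFETCH)

    def prefetch(self, start_index: int = 0) -> list:
        return [self.fetch_item(i)
                for i in range(start_index, start_index + self.prefetch_count)]

# Usage: a small buffer in voice-only mode, more once a controller becomes active.
prefetcher = ContentPrefetcher(fetch_item=lambda i: f"item-{i}")
print(prefetcher.prefetch())                  # 2 items
prefetcher.set_mode(controller_active=True)
print(prefetcher.prefetch())                  # 8 items
```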
CONTENT ATOMIZATION
Organizing and publishing content in a content management system wherein content, including text, images and video, is received and segmented into content atoms. One or more tags are associated with the content atoms to allow device specific presentation of the content atoms.
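A minimal sketch of segmenting content into "content atoms" and tagging them for device-specific presentation; the atom fields, tag names, and length-based tagging rule are assumptions, not the claimed system.

```python
from dataclasses import dataclass, field

@dataclass
class ContentAtom:
    kind: str                       # "text", "image", or "video"
    payload: str
    tags: set = field(default_factory=set)

def atomize(paragraphs: list[str]) -> list[ContentAtom]:
    atoms = []
    for paragraph in paragraphs:
        atom = ContentAtom(kind="text", payload=paragraph)
        # Tag short atoms as suitable for small screens, long ones for desktop.
        atom.tags.add("mobile" if len(paragraph) < 200 else "desktop")
        atoms.append(atom)
    return atoms

def render_for(device: str, atoms: list[ContentAtom]) -> list[str]:
    """Return only the atoms tagged for the requesting device."""
    return [atom.payload for atom in atoms if device in atom.tags]
```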