Method and apparatus for generating speech
A speech generation method and apparatus are disclosed. The speech generation method includes obtaining, by a processor, a linguistic feature and a prosodic feature from an input text, determining, by the processor, a first candidate speech element through a cost calculation and a Viterbi search based on the linguistic feature and the prosodic feature, generating, by a speech element generator implemented in the processor, a second candidate speech element based on the linguistic feature or the prosodic feature and the first candidate speech element, and outputting, by the processor, an output speech by concatenating the second candidate speech element and a speech sequence determined through the Viterbi search.
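The first stage here (cost calculation plus Viterbi search over candidate units) is classic unit selection. The sketch below illustrates only that stage; target_cost, concat_cost, and the per-position inventory of candidate units are hypothetical stand-ins, and the patent's second-stage candidate generation and final concatenation are not modeled.

```python
def viterbi_select(targets, inventory, target_cost, concat_cost):
    """Pick one unit per target position minimizing target + join costs.

    targets:   list of (linguistic, prosodic) feature tuples, one per position.
    inventory: list (per position) of candidate speech units (hashable).
    """
    # best[u] = (accumulated cost, path) for the best path ending in unit u
    best = {u: (target_cost(targets[0], u), [u]) for u in inventory[0]}
    for t in range(1, len(targets)):
        new_best = {}
        for u in inventory[t]:
            # choose the predecessor minimizing accumulated + concatenation cost
            prev, (cost, path) = min(
                best.items(), key=lambda kv: kv[1][0] + concat_cost(kv[0], u))
            new_best[u] = (cost + concat_cost(prev, u)
                           + target_cost(targets[t], u), path + [u])
        best = new_best
    # return the lowest-cost unit sequence
    return min(best.values(), key=lambda v: v[0])[1]
```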
SKILL DISPATCHING METHOD AND APPARATUS FOR SPEECH DIALOGUE PLATFORM
A skill dispatching method for a speech dialogue platform, including: receiving, by a central control dispatching service, a semantic result, sent by a data distribution service, of recognizing a user's voice; dispatching, by the central control dispatching service, a plurality of skill services related to the semantic result in parallel, and obtaining skill parsing results from the plurality of skill services; sorting the skill parsing results based on the priorities of the skill services, and exporting the result with the highest priority to a skill realization discrimination service; when realization fails, selecting the result with the highest priority among the remaining skill parsing results and exporting it to the skill realization discrimination service; and when realization succeeds, sending that result to the data distribution service for feedback to the user. The method improves skill dispatching efficiency, reduces delay, and improves the user experience.
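A rough sketch of that dispatch-then-fall-through loop, under stated assumptions: skill_services are hypothetical objects exposing a priority attribute and a parse method, and can_realize stands in for the skill realization discrimination service.

```python
import concurrent.futures

def dispatch(semantic_result, skill_services, can_realize):
    """Query all skills in parallel, then try results in priority order."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        results = list(pool.map(
            lambda s: (s.priority, s.parse(semantic_result)), skill_services))
    # Highest priority first; fall through to the next result on failure.
    for _, parsed in sorted(results, key=lambda r: r[0], reverse=True):
        if can_realize(parsed):        # skill realization discrimination
            return parsed              # fed back to the user
    return None                        # no skill could realize the request
```

Parallel dispatch is what buys the latency reduction: the slowest skill, not the sum of all skills, bounds the parsing phase.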
SYSTEM AND METHOD FOR IMPROVING NAMED ENTITY RECOGNITION
A method includes training a set of teacher models. Training the set of teacher models includes, for each individual teacher model of the set, training the individual teacher model to transcribe unlabeled audio samples and predict a pseudo-labeled dataset having multiple labels. At least some of the unlabeled audio samples contain named entity (NE) audio data. At least some of the labels include transcribed NE labels corresponding to the NE audio data. The method also includes correcting at least some of the transcribed NE labels using user-specific NE textual data. The method further includes retraining the set of teacher models based on the pseudo-labeled dataset from a selected one of the teacher models, where the selected teacher model predicts the pseudo-labeled dataset more accurately than the other teacher models in the set.
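The correction step (snapping transcribed NE labels to user-specific NE text) is not specified in detail; a minimal sketch, assuming fuzzy string matching against the user's entity list, might look like this:

```python
import difflib

def correct_ne_labels(tokens, ne_spans, user_entities, cutoff=0.8):
    """Snap hypothesized named-entity token spans to the closest
    user-specific entity when the match is strong; else leave them alone."""
    lookup = {e.lower(): e for e in user_entities}
    corrected = list(tokens)
    # Right-to-left so earlier spans' indices survive length changes.
    for start, end in sorted(ne_spans, reverse=True):
        hypothesis = " ".join(tokens[start:end]).lower()
        match = difflib.get_close_matches(hypothesis, list(lookup),
                                          n=1, cutoff=cutoff)
        if match:
            corrected[start:end] = lookup[match[0]].split()
    return corrected
```

For example, with tokens "call doctor smyth tomorrow", NE span (1, 3), and user entity "Doctor Smith", the misrecognized span is rewritten to "Doctor Smith".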
Generation of business process model
One embodiment provides a method, including: obtaining at least one video capturing images of a writing capture device used during a business process design session, wherein the images comprise portions of the process flow; obtaining at least one audio recording corresponding to the business process design session; identifying intended business process model shapes; determining at least one business process model shape missing from the process flow provided on the writing capture device; identifying task dependencies for pairs of business process model shapes; and generating a business process model from (i) the intended business process model shapes, (ii) the at least one missing business process model shape, and (iii) the identified task dependencies.
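A minimal sketch of the final assembly step, assuming shapes and dependencies have already been extracted upstream; ProcessModel and build_model are illustrative names, not the patented method:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessModel:
    shapes: set = field(default_factory=set)   # tasks, gateways, events
    edges: set = field(default_factory=set)    # (from, to) task dependencies

def build_model(detected_shapes, inferred_missing, dependencies):
    """Merge shapes seen on the writing surface with shapes inferred as
    missing, then wire in the task dependencies recovered from audio/video."""
    model = ProcessModel()
    model.shapes.update(detected_shapes)
    model.shapes.update(inferred_missing)
    for a, b in dependencies:
        if a in model.shapes and b in model.shapes:  # drop dangling edges
            model.edges.add((a, b))
    return model
```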
Conditional camera control via automated assistant commands
Implementations set forth herein relate to an automated assistant that can control a camera according to one or more conditions specified by a user. A condition can be satisfied when, for example, the automated assistant detects that a particular environmental feature is present. In this way, the user can rely on the automated assistant to identify and capture certain moments without having to constantly monitor the camera's viewing window. In some implementations, a condition for the automated assistant to capture media data can be based on application data and/or other contextual data associated with the automated assistant. For instance, a relationship between content in a camera viewing window and other content of an application interface can be a condition upon which the automated assistant captures certain media data using a camera.
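One way to picture the conditional capture loop; camera.preview_frame and camera.capture are hypothetical API names, and each condition is a predicate derived from the user's spoken command:

```python
import time

def monitor_and_capture(camera, conditions, poll_s=0.5, timeout_s=60.0):
    """Poll the camera preview and capture as soon as every user-specified
    condition holds for the current frame (e.g. "when everyone is smiling")."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        frame = camera.preview_frame()              # hypothetical camera API
        if all(cond(frame) for cond in conditions):
            return camera.capture()                 # conditions met: take the shot
        time.sleep(poll_s)
    return None                                     # no qualifying moment seen
```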
Time asynchronous spoken intent detection
An embodiment of a spoken intent detection device includes technology to detect a phrase in an electronic representation of an audio stream based on a pre-defined vocabulary, associate a time stamp with the detected phrase, and classify a spoken intent based on a sequence of detected phrases and the respective associated time stamps. Other embodiments are disclosed and claimed.
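A toy sketch of matching time-stamped phrase detections against per-intent phrase sequences; the max_gap_s reset is an assumption, since the abstract only says the classifier uses both the detected phrases and their time stamps:

```python
def detect_intent(phrase_events, intent_rules, max_gap_s=5.0):
    """phrase_events: list of (timestamp, phrase) from a keyword spotter,
    in arrival order. intent_rules: map of intent -> required phrase sequence.
    Phrases may arrive asynchronously; both order and spacing matter."""
    for intent, pattern in intent_rules.items():
        i, last_t = 0, None
        for t, phrase in phrase_events:
            if last_t is not None and t - last_t > max_gap_s:
                i, last_t = 0, None          # too long a gap: restart the match
            if phrase == pattern[i]:
                i, last_t = i + 1, t
                if i == len(pattern):
                    return intent            # full sequence observed in time
    return None

events = [(0.0, "turn on"), (1.2, "the lights")]
print(detect_intent(events, {"lights_on": ["turn on", "the lights"]}))
```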
DETECTING AN IN-FIELD EVENT
Examples are disclosed that relate to methods, computing devices, and systems for detecting an in-field event. One example provides a method comprising, during a training phase, receiving one or more training data streams. The training data stream(s) include an audio input comprising a semantic indicator. The audio input is processed to recognize the semantic indicator. A subset of data is selected and used to train a machine learning model to detect the in-field event, and the method further comprises outputting the trained machine learning model. During a run-time phase, the method comprises receiving one or more run-time input data streams. The trained machine learning model is used to detect a second instance of the in-field event in the one or more run-time input data streams. The method further comprises outputting an indication of the second instance of the in-field event.
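The abstract does not say how the training subset is selected; one plausible, purely illustrative reading is a time window around each recognized semantic indicator:

```python
def select_training_windows(stream, indicator_times, window_s=10.0):
    """Keep only the data surrounding each recognized semantic indicator
    (e.g. a spoken phrase marking the in-field event) for model training.

    stream: list of (timestamp, sample) pairs from a training data stream."""
    selected = []
    for t in indicator_times:
        lo, hi = t - window_s, t + window_s
        selected.append([(ts, x) for ts, x in stream if lo <= ts <= hi])
    return selected
```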
Multimodal sentiment classification
Sentiment classification can be implemented by an entity-level multimodal sentiment classification neural network. The neural network can include left, right, and target entity subnetworks. The neural network can further include an image network that generates representation data that is combined and weighted with data output by the left, right, and target entity subnetworks to output a sentiment classification for an entity included in a network post.
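The abstract leaves the combine-and-weight step open; the sketch below assumes a simple softmax attention in which the image representation scores the left, target, and right text representations (all hypothetical fixed-size vectors of the same dimension):

```python
import numpy as np

def fuse(left_h, target_h, right_h, image_h):
    """Weight each text subnetwork output by its affinity with the image
    representation, then combine into one vector for classification."""
    text = np.stack([left_h, target_h, right_h])   # shape (3, d)
    scores = text @ image_h                        # affinity with the image
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax over the three parts
    return weights @ text                          # weighted combination, (d,)
```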
USING A SMARTPHONE TO CONTROL ANOTHER DEVICE BY VOICE
A method and system for implementing a speech-enabled interface of a host device via an electronic mobile device in a network are provided. The method includes establishing a communication session between the host device and the mobile device via a session service provider. According to some embodiments, a barcode can be used to pair the host device and the mobile device. Furthermore, the present method and system employ the voice interface in conjunction with speech recognition and natural language processing to interpret voice input for the host device, which can be used to perform one or more actions related to the host device.
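A minimal sketch of how a barcode could carry the session bootstrap data; the payload fields, token scheme, and base64 encoding are assumptions, not the patented protocol:

```python
import base64
import json
import secrets

def make_pairing_payload(host_id, session_url):
    """Host side: mint a one-time token and encode the pairing info that
    would be rendered as a barcode/QR code on the host device's screen."""
    token = secrets.token_urlsafe(16)
    payload = json.dumps({"host": host_id, "url": session_url, "token": token})
    return token, base64.urlsafe_b64encode(payload.encode()).decode()

def parse_pairing_payload(scanned):
    """Mobile side: decode the scanned barcode to find the session service
    and the token proving the phone was physically present at the host."""
    return json.loads(base64.urlsafe_b64decode(scanned.encode()))
```

The mobile device would then present the token to the session service provider, which brokers the session with the host; voice captured on the phone flows through speech recognition and NLP before being mapped to host-device actions.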
Spoken language understanding models
Techniques for using a federated learning framework to update machine learning models for a spoken language understanding (SLU) system are described. The system determines which labeled data is needed to update the models based on the models generating an undesired response to an input. The system identifies users to solicit labeled data from, and sends a request to a user device asking the user to speak an input. The device generates labeled data from the spoken input, and updates the on-device models using the spoken input and the labeled data. The updated model data is provided to the system so it can update the system-level (global) models.
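A toy federated round under stated assumptions, using logistic regression as a stand-in for the on-device SLU models; only the weight delta leaves the device, consistent with the federated framework described:

```python
import numpy as np

def local_update(global_weights, features, labels, lr=0.1, epochs=5):
    """On-device step: fit a toy logistic model on the newly labeled spoken
    input and return only the weight delta (the raw audio stays local)."""
    w = global_weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-features @ w))          # sigmoid
        w -= lr * features.T @ (preds - labels) / len(labels)
    return w - global_weights

def server_aggregate(global_weights, deltas):
    """System side: average the device deltas into the global model."""
    return global_weights + np.mean(deltas, axis=0)
```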