Patent classifications
G10L25/48
Method and system for identifying recipients of a reward associated with a conversion
The present teaching relates to a method and system for evaluating a conversion. The method extracts meta-information including a conversion parameter and a reward. The meta-information corresponds to a conversion associated with an advertisement displayed previously by a plurality of entities. The method receives a plurality of claims for the conversion from one or more entities, and selects a claim corresponding to an entity from the plurality of claims based on the conversion parameter and information included in the plurality of claims. Further, the method transmits information related to the selected claim.
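The abstract does not fix a particular selection rule, but one plausible reading is last-touch attribution within a time window, where the conversion parameter is the attribution window and each claim carries the time the entity displayed the advertisement. A minimal sketch under those assumptions (all names hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    entity_id: str
    display_timestamp: float  # when this entity displayed the advertisement

def select_claim(claims: list[Claim], conversion_time: float,
                 attribution_window: float) -> Optional[Claim]:
    """Select the eligible claim with the most recent ad display
    before the conversion (last-touch attribution)."""
    eligible = [c for c in claims
                if 0 <= conversion_time - c.display_timestamp <= attribution_window]
    if not eligible:
        return None
    return max(eligible, key=lambda c: c.display_timestamp)
```

Other selection policies (first-touch, weighted multi-touch) would fit the same interface by changing the final `max`.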
Synthesizing higher order conversation features for a multiparty conversation
Technology is provided for identifying synthesized conversation features from recorded conversations. The technology can identify, for each of one or more utterances, data for multiple modalities, such as acoustic data, video data, and text data. The technology can extract features, for each particular utterance of the one or more utterances, from each of the data for the multiple modalities associated with that particular utterance. The technology can also apply a machine learning model that receives the extracted features and/or previously synthesized conversation features and produces one or more additional synthesized conversation features.
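The per-utterance pipeline described here — per-modality data, feature extraction, then a model that consumes extracted features plus previously synthesized features — can be sketched as follows. The pooling step, the fixed-size vectors, and the dict-based utterance layout are assumptions for illustration, not the patented implementation:

```python
import numpy as np

def extract_features(utterance: dict) -> np.ndarray:
    """Concatenate feature vectors from the acoustic, video, and text
    modalities of one utterance (each assumed already vectorized)."""
    acoustic = np.asarray(utterance["acoustic"], dtype=float)
    video = np.asarray(utterance["video"], dtype=float)
    text = np.asarray(utterance["text"], dtype=float)
    return np.concatenate([acoustic, video, text])

def synthesize_conversation_features(utterances, model, prior_features=None):
    """Pool per-utterance features over the conversation and apply a
    model that may also receive previously synthesized features."""
    per_utt = np.stack([extract_features(u) for u in utterances])
    inputs = per_utt.mean(axis=0)  # simple mean pooling (assumed)
    if prior_features is not None:
        inputs = np.concatenate([inputs, np.asarray(prior_features, dtype=float)])
    return model(inputs)
```

In practice `model` would be a trained network; any callable over the pooled vector fits the sketch.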
Hybrid learning system for natural language understanding
An agent automation system includes a memory configured to store a natural language understanding (NLU) framework and a processor configured to execute instructions of the NLU framework to cause the agent automation system to perform actions. These actions comprise: generating an annotated utterance tree of an utterance using a combination of rules-based and machine-learning (ML)-based components, wherein a structure of the annotated utterance tree represents a syntactic structure of the utterance, and wherein nodes of the annotated utterance tree include word vectors that represent semantic meanings of words of the utterance; and using the annotated utterance tree as a basis for intent/entity extraction of the utterance.
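An annotated utterance tree as described — syntactic structure in the tree shape, word vectors at the nodes — might be modeled like this. The node fields, dependency labels, and the role-based extraction rule are illustrative assumptions, not the framework's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class UtteranceNode:
    word: str
    word_vector: list[float]      # semantic meaning of the word
    dep_label: str = "root"       # syntactic relation to the parent
    children: list["UtteranceNode"] = field(default_factory=list)

def extract_intent_entity_nodes(node: UtteranceNode,
                                roles=("verb", "dobj")) -> list[UtteranceNode]:
    """Walk the annotated utterance tree and collect nodes whose
    syntactic role suggests intent/entity content (rules-based step)."""
    hits = [node] if node.dep_label in roles else []
    for child in node.children:
        hits.extend(extract_intent_entity_nodes(child, roles))
    return hits
```

An ML-based component would then score or classify the collected subtrees using their word vectors.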
Automatic dubbing method and apparatus
An automatic dubbing method is disclosed. The method comprises: extracting speeches of a voice from an audio portion of a media content (504); obtaining a voice print model for the extracted speeches of the voice (506); processing the extracted speeches by utilizing the voice print model to generate replacement speeches (508); and replacing the extracted speeches of the voice with the generated replacement speeches in the audio portion of the media content (510).
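The four numbered steps (504–510) form a linear pipeline, which can be sketched generically by passing the stage implementations in as callables. The function names and the replace-in-place splicing strategy are assumptions for illustration:

```python
def automatic_dub(media_audio, extract_speech, build_voice_print,
                  generate_replacement, splice):
    """Generic four-stage dubbing pipeline matching steps 504-510."""
    speeches = extract_speech(media_audio)                    # step 504
    voice_print = build_voice_print(speeches)                 # step 506
    replacements = [generate_replacement(s, voice_print)
                    for s in speeches]                        # step 508
    return splice(media_audio, speeches, replacements)        # step 510
```

Real implementations would use speaker diarization for step 504 and a voice-conversion or TTS model conditioned on the voice print for step 508.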
Named entity recognition method, named entity recognition equipment and medium
A named entity recognition method, named entity recognition equipment, and a medium are disclosed, the method including: acquiring a voice signal; extracting a voice feature vector from the voice signal; extracting, from the text result obtained by performing voice recognition on the voice signal, a text feature vector; splicing the voice feature vector and the text feature vector to obtain a composite feature vector for each word in the voice signal; and processing the composite feature vector of each word in the voice signal through a deep learning model to obtain a named entity recognition result.
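The splicing step is plain vector concatenation of the per-word acoustic and text features before they reach the model. A minimal sketch, assuming fixed-size vectors per word (the dimensions and names are illustrative):

```python
import numpy as np

def splice_features(voice_vec, text_vec) -> np.ndarray:
    """Concatenate a word's voice feature vector and text feature
    vector into one composite feature vector."""
    return np.concatenate([np.asarray(voice_vec, dtype=float),
                           np.asarray(text_vec, dtype=float)])

def composite_sequence(voice_vecs, text_vecs) -> np.ndarray:
    """Build the per-word composite sequence fed to the deep model."""
    return np.stack([splice_features(v, t)
                     for v, t in zip(voice_vecs, text_vecs)])
```

The resulting `(num_words, voice_dim + text_dim)` matrix is what a sequence-labeling model (e.g. a BiLSTM-CRF) would consume.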
Image forming system
There is provided an image forming system that includes an image forming apparatus including a first variable mechanism that varies from a first state to a second state, or from the second state to the first state, when physically operated by a user, a first detection unit configured to detect the variation of the state of the first variable mechanism, and a sound collection unit. The image forming system further includes a generation unit configured to generate, when the variation is detected by the first detection unit, first statistical information regarding at least one sound wave obtained by the sound collection unit in a period based on a timing when the variation is detected.
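The "first statistical information" over a period keyed to the detection timing could be as simple as summary statistics of the captured samples in a window after the event. A sketch under that assumption (window length, statistics chosen, and names are all illustrative):

```python
import statistics

def first_statistical_info(samples, sample_rate, detect_time, window=1.0):
    """Mean and population std-dev of sound samples in a `window`-second
    period starting at the detected state-variation timing."""
    start = int(detect_time * sample_rate)
    end = start + int(window * sample_rate)
    segment = samples[start:end]
    return {"mean": statistics.fmean(segment),
            "stdev": statistics.pstdev(segment)}
```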
Fraudulent call detection
There is disclosed in one example a mobile telephone, including: a hardware platform including a processor and a memory; a telecommunication transceiver; and instructions encoded within the memory to instruct the processor to: identify a call made via the telecommunication transceiver; analyze the call and assign the call a predicted local reputation according to the analysis, including a legitimacy confidence score; if the legitimacy confidence score is less than a first threshold, terminate the call; if the legitimacy confidence score is greater than a second threshold, cease analysis of the call; and if the legitimacy confidence score is between the first and second thresholds, continue analysis of the call.
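The two-threshold decision logic is explicit in the abstract and reduces to a three-way branch on the legitimacy confidence score. A direct sketch (action names are illustrative):

```python
def handle_call(legitimacy_score: float, low: float, high: float) -> str:
    """Route a call per the two-threshold policy: terminate clearly
    fraudulent calls, stop analyzing clearly legitimate ones, and
    keep analyzing anything in between."""
    if legitimacy_score < low:
        return "terminate"
    if legitimacy_score > high:
        return "cease_analysis"
    return "continue_analysis"
```

The interesting engineering is in producing the score (local reputation analysis); the routing itself is this small.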
Audio content production, audio sequencing, and audio blending system and method
Some embodiments include a production content server system with a computing device processing operations including causing a content reader server to couple to a content source using a wired or wireless link, and downloading at least one content file associated with content retrieved from the content source, where the content file includes audio and/or video. The operations include transcoding at least a portion of the at least one content file with dynamic range compression to a specified dynamic range, equalization, and duration, and processing at least one content audio file from the at least one content file. The operations further include storing the at least one content audio file in a production content database. Some embodiments include processing a production break audio file, including blending the at least one production break audio file with at least one other content file.
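One common way to blend a break audio file with adjacent content is a linear crossfade over an overlap region. A minimal sketch on raw sample lists, assuming that interpretation (the patent does not specify the blend curve):

```python
def crossfade(a: list, b: list, overlap: int) -> list:
    """Blend the tail of `a` into the head of `b` with a linear
    crossfade over `overlap` samples."""
    assert 0 < overlap <= len(a) and overlap <= len(b)
    head = a[:len(a) - overlap]
    mixed = []
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)  # ramp from a toward b
        mixed.append((1 - w) * a[len(a) - overlap + i] + w * b[i])
    return head + mixed + b[overlap:]
```

The output length is `len(a) + len(b) - overlap`; an equal-power (cosine) curve is a common alternative to the linear ramp.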