Patent classifications
G06F40/42
Non-transitory computer-readable recording medium, control system, and method of controlling information processing apparatus
A non-transitory computer-readable recording medium stores instructions executable by a controller of an information processing apparatus. The instructions cause the controller to perform operations. The operations include: displaying an edit screen, the edit screen being configured to receive a translation instruction, the edit screen including a text area and a print area, the print area being configured to place therein a target text string to be printed; upon receiving on the edit screen an operation for inputting the target text string, displaying the target text string in a first language in the text area; upon receiving on the edit screen the translation instruction for translating the target text string displayed in the text area from the first language into a second language, translating the target text string to obtain a translation data piece representing a translated text string in the second language.
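The claimed flow (text area, print area, translation instruction) can be sketched as a small controller class. Everything here — the `EditScreen` class, the dictionary-backed `translate()` stub, and the language codes — is a hypothetical illustration, not the patent's implementation.

```python
# Minimal sketch of the edit-screen translation flow. The translate() stub
# stands in for a real translation service (hypothetical, for illustration).

def translate(text, src, dst):
    table = {("en", "ja"): {"Hello": "こんにちは"}}
    return table.get((src, dst), {}).get(text, text)

class EditScreen:
    """Holds a text area (editable string) and a print area (placed string)."""

    def __init__(self, first_language="en"):
        self.text_area = ""
        self.print_area = ""
        self.language = first_language

    def input_text(self, target_text):
        # Operation for inputting the target string: show it in the text area.
        self.text_area = target_text

    def request_translation(self, second_language):
        # Translation instruction: translate the text area's contents and
        # return the translation data piece (the translated string).
        return translate(self.text_area, self.language, second_language)

screen = EditScreen("en")
screen.input_text("Hello")
print(screen.request_translation("ja"))  # → こんにちは
```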
DISTILLING TRANSFORMERS FOR NEURAL CROSS-DOMAIN SEARCH
A distillation system extracts knowledge from a large pre-trained sequence-to-sequence neural transformer model into a smaller bi-encoder. The pre-trained sequence-to-sequence neural transformer model is trained to translate data from a first domain into a second domain on a large corpus. A teacher model is generated from the pre-trained model by fine-tuning the pre-trained neural transformer model on a smaller translation task with true translation pairs. The fine-tuned model is then used to generate augmented data values which are used with the true translation pairs to train the bi-encoder. The bi-encoder is used to perform cross-domain searches.
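The distillation pipeline can be illustrated end to end with toy components: a "teacher" that generates augmented translation pairs from unlabeled queries, and a bi-encoder that indexes both true and augmented pairs for cross-domain (query-to-code) search. The bag-of-tokens encoder and the rule-based teacher stub are stand-ins for the neural models; none of this is the patent's actual architecture.

```python
# Toy sketch of the distillation pipeline: teacher generates augmented
# pairs, which join the true pairs to build a bi-encoder search index.

from collections import Counter
import math
import re

def teacher_translate(query):
    # Stand-in for the fine-tuned seq2seq teacher (hypothetical rule).
    return "def " + "_".join(query.lower().split()) + "():"

def encode(text):
    # Bi-encoder stub: both domains share a bag-of-tokens embedding.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# True translation pairs (first domain -> second domain).
true_pairs = [("sort a list", "def sort_list(xs): return sorted(xs)")]

# Augmented pairs generated by the teacher from unlabeled queries.
unlabeled = ["reverse a string", "read a file"]
augmented = [(q, teacher_translate(q)) for q in unlabeled]

# "Train" the bi-encoder index on true + augmented pairs.
index = [(code, encode(code)) for _, code in true_pairs + augmented]

def search(query):
    qv = encode(query)
    return max(index, key=lambda item: cosine(qv, item[1]))[0]

print(search("reverse a string"))  # → def reverse_a_string():
```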
Machine translation method, device, and computer-readable storage medium
A machine translation method includes: receiving to-be-processed information expressed in a source language; encoding the to-be-processed information, and generating an expression vector sequence of the to-be-processed information; and predicting feature information of a target foresight word at a first moment by using a prediction model. The feature information includes at least one of a part of speech or a word category of the target foresight word. The method also includes: determining a context vector corresponding to the first moment in the expression vector sequence according to the feature information of the target foresight word; and decoding the context vector by using a decoder, to obtain target content that corresponds to the context vector and is expressed in a target language.
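The foresight-guided step can be made concrete: a predicted feature of the next target word (here a POS tag) biases the attention scores over the expression vector sequence, which shapes the context vector handed to the decoder. The tag set, bias table, and toy vectors are illustrative assumptions, not the patent's model.

```python
# Schematic of foresight-guided attention: the predicted POS tag of the
# next target word adds a bias toward source positions with that tag.

import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

# Expression vector sequence from the encoder (one vector per source token).
encoder_states = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
source_pos_tags = ["NOUN", "VERB", "NOUN"]

def predict_foresight_feature(step):
    # Stand-in for the prediction model: guess the next target word's POS.
    return "VERB" if step % 2 else "NOUN"

def context_vector(step, query):
    tag = predict_foresight_feature(step)
    # Raw attention scores plus a bias toward source words sharing the tag.
    scores = [
        sum(q * h for q, h in zip(query, state)) + (1.0 if t == tag else 0.0)
        for state, t in zip(encoder_states, source_pos_tags)
    ]
    weights = softmax(scores)
    return [sum(w * s[d] for w, s in zip(weights, encoder_states))
            for d in range(len(query))]

print(context_vector(step=1, query=[1.0, 0.0]))
```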
PREDICTING FUTURE TRANSLATIONS
Technology is disclosed for snippet pre-translation and dynamic selection of translation systems. Pre-translation uses snippet attributes such as characteristics of a snippet author, snippet topics, snippet context, expected snippet viewers, etc., to predict how many translation requests for the snippet are likely to be received. An appropriate translator can be dynamically selected to produce a translation of a snippet either as a result of the snippet being selected for pre-translation or from another trigger, such as a user requesting a translation of the snippet. Some translators can generate high-quality translations after a period of time, while others can generate lower-quality translations sooner. Dynamic selection of translators involves dynamically selecting machine or human translation, e.g., based on a quality of translation that is desired. Translations can be improved over time by employing better machine or human translators, such as when a snippet is identified as being more popular.
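The two steps — predicting request volume from snippet attributes, then picking a translator tier — can be sketched as follows. The scoring weights, attribute names, and translator tiers are made up for illustration; a real system would learn them from request logs.

```python
# Sketch of pre-translation demand prediction and dynamic translator
# selection. All thresholds and weights are hypothetical.

def predict_request_count(snippet):
    # Toy linear score over snippet attributes.
    score = 0.0
    score += 50 * snippet.get("author_followers", 0) / 10_000
    score += 30 if snippet.get("topic") in {"news", "sports"} else 0
    score += 5 * snippet.get("expected_viewers_langs", 1)
    return score

def select_translator(snippet, quality_needed="high"):
    demand = predict_request_count(snippet)
    if demand > 40 and quality_needed == "high":
        return "human"        # slow, highest quality, for popular snippets
    if demand > 10:
        return "neural_mt"    # pre-translate with a strong MT system
    return "fast_mt"          # translate on demand, lower quality

snippet = {"author_followers": 5000, "topic": "news", "expected_viewers_langs": 3}
print(select_translator(snippet))  # prints "human"
```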
System and method for defining dialog intents and building zero-shot intent recognition models
A system and method of creating the natural language understanding component of a speech/text dialog system. The method involves a first step of defining user intents in the form of intent flow graphs. Next, (context, intent) pairs are created from each of the intent flow graphs and stored in a training database. A paraphrase task is then generated from each (context, intent) pair and also stored in the training database. A zero-shot intent recognition model is trained using the (context, intent) pairs in the training database to recognize user intents from the paraphrase tasks in the training database. Once trained, the zero-shot intent recognition model is applied to user queries to generate semantic outputs.
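The pair-creation step can be illustrated with a small graph: each path prefix through the intent flow graph becomes the context, and the next reachable node becomes the intent. The graph shape and the paraphrase-task template below are illustrative assumptions, not the patent's format.

```python
# Sketch: derive (context, intent) pairs and paraphrase tasks from an
# intent flow graph (hypothetical banking-dialog example).

intent_flow_graph = {
    "greet": ["ask_balance", "transfer_money"],
    "ask_balance": ["end"],
    "transfer_money": ["confirm_transfer"],
    "confirm_transfer": ["end"],
}

def context_intent_pairs(graph, root="greet"):
    pairs = []
    stack = [(root, [])]
    while stack:
        node, context = stack.pop()
        for nxt in graph.get(node, []):
            if nxt != "end":
                pairs.append((tuple(context + [node]), nxt))
                stack.append((nxt, context + [node]))
    return pairs

def paraphrase_task(pair):
    context, intent = pair
    return (f"Given the dialog so far {list(context)}, "
            f"rephrase a user turn expressing '{intent}'.")

pairs = context_intent_pairs(intent_flow_graph)
for p in pairs:
    print(paraphrase_task(p))
```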
Two Way Communication Assembly
A two way communication assembly includes a display housing that is positionable on a support surface such that the display housing is visible to a pair of users. A pair of displays and a pair of QWERTY keyboards are each integrated into opposite sides of the display housing. A translation unit is integrated into the display housing, and the translation unit stores a database comprising a plurality of languages spoken around the world. The translation unit translates language between the QWERTY keyboards to enable a patient who speaks a first language to communicate with a caregiver who speaks a second language. In this way the translation unit facilitates communication between the caregiver and the patient.
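The translation unit's routing behavior — text typed on one side's keyboard appears, translated, on the other side's display — can be sketched in a few lines. The dictionary-backed `translate()` stands in for the stored language database; the class and language pairs are hypothetical.

```python
# Minimal sketch of the two-way translation unit's routing logic.

def translate(text, src, dst):
    table = {("en", "es"): {"How are you?": "¿Cómo está?"},
             ("es", "en"): {"Bien, gracias.": "Fine, thank you."}}
    return table.get((src, dst), {}).get(text, text)

class CommunicationAssembly:
    def __init__(self, lang_a, lang_b):
        self.langs = {"A": lang_a, "B": lang_b}
        self.displays = {"A": "", "B": ""}

    def type_on(self, side, text):
        # Keyboard input on one side appears, translated, on the other display.
        other = "B" if side == "A" else "A"
        self.displays[other] = translate(text, self.langs[side], self.langs[other])

unit = CommunicationAssembly("en", "es")
unit.type_on("A", "How are you?")
print(unit.displays["B"])  # → ¿Cómo está?
```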
Profile-based natural language message generation and selection
In some embodiments, text for user consumption may be generated based on an intended user action category and a user profile. In some embodiments, an action category, a plurality of text seeds, and a profile comprising feature values may be obtained. Context values may be generated based on the feature values, and text generation models may be obtained based on the text seeds. In some embodiments, messages may be generated using the text generation models based on the action category and the context values. Weights associated with the messages may be determined, and a first text message of the messages may be sent to an address associated with the profile based on the weights. A first expected allocation value may then be updated based on a reaction value obtained in response to the first message.
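The loop described above — generate candidates from seeds, select one by weight, update an expected value from the reaction — can be sketched as follows. The seed templates, the weighted random selection, and the exponential-moving-average update are illustrative assumptions, not the patent's method.

```python
# Sketch of profile-based message generation, weighted selection, and
# reaction-driven value update. All names and parameters are hypothetical.

import random

def generate_messages(action_category, context_values, seeds):
    # Text-generation stub: fill each seed with the category and context.
    return [seed.format(action=action_category, **context_values) for seed in seeds]

def select_message(messages, weights, rng):
    # Weighted random choice among candidate messages.
    return rng.choices(messages, weights=weights, k=1)[0]

def update_expected_value(expected, reaction, alpha=0.2):
    # Exponential moving average toward the observed reaction value.
    return (1 - alpha) * expected + alpha * reaction

seeds = ["Hi {name}, ready to {action}?",
         "{name}, don't forget to {action} today!"]
context = {"name": "Avery"}
messages = generate_messages("renew your plan", context, seeds)

rng = random.Random(0)
sent = select_message(messages, weights=[0.7, 0.3], rng=rng)
expected = update_expected_value(expected=0.5, reaction=1.0)
print(sent)
print(expected)  # → 0.6
```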