G06F40/58

Neural network model compression method, corpus translation method and device

A method for compressing a neural network model includes: obtaining a set of training samples including a plurality of pairs of training samples, each pair of the training samples including source data and target data corresponding to the source data; training an original teacher model by using the source data as an input and using the target data as verification data; training intermediate teacher models based on the set of training samples and the original teacher model, one or more intermediate teacher models forming a set of teacher models; training multiple candidate student models based on the set of training samples, the original teacher model, and the set of teacher models, the multiple candidate student models forming a set of student models; and selecting a candidate student model of the multiple candidate student models as a target student model according to training results of the multiple candidate student models.
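The teacher–student training described above is a form of knowledge distillation. A minimal sketch of the two core pieces, soft-target distillation loss and final student selection, is below; the temperature value and the toy selection criterion (lowest validation loss) are illustrative assumptions, not details from the abstract.

```python
import math

def softmax(logits, temperature=1.0):
    """Softened probabilities; a higher temperature flattens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the teacher's soft targets and the student's predictions."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

def select_target_student(candidates, validation_losses):
    """Pick the candidate student model with the best (lowest) training result."""
    best = min(range(len(candidates)), key=lambda i: validation_losses[i])
    return candidates[best]
```

In the method above, the intermediate teachers would supply additional soft targets of graduated capacity between the original teacher and the candidate students.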

TRANSLATION SUPPORT DEVICE THAT GENERATES UNTRANSLATED PORTION INFORMATION INDICATING UNTRANSLATED PORTION IN TRANSLATED DOCUMENT, AND IMAGE FORMING APPARATUS

A translation support device includes a storage device and a control device. The storage device stores therein an original document file in which original document data is recorded, and a translated document file in which translated document data, representing a translated document translated from an original document represented by the original document data, is recorded. The control device includes a processor, and acts as a detector and a generator, when the processor executes a control program. The detector detects, through comparison between the original document file and the translated document file, a same portion contained in common in both of the files, as an untranslated portion. The generator generates untranslated portion information indicating the untranslated portion.
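The detector's core idea, flagging text that appears verbatim in both the original and the translated document as untranslated, can be sketched as follows. Line-level comparison and the returned (index, text) format are assumptions for illustration.

```python
def detect_untranslated(original_lines, translated_lines):
    """Lines appearing verbatim in both documents are flagged as untranslated.

    Returns a list of (line_index_in_translation, text) pairs, which could
    feed the generator that produces untranslated portion information.
    """
    original_set = {line.strip() for line in original_lines if line.strip()}
    untranslated = []
    for i, line in enumerate(translated_lines):
        stripped = line.strip()
        if stripped and stripped in original_set:
            untranslated.append((i, stripped))
    return untranslated
```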

Electronic device and method for controlling the electronic device thereof

An electronic device is provided. The electronic device includes a memory configured to store a speech translation model and at least one processor electronically connected with the memory. The at least one processor is configured to train the speech translation model based on first information related to conversion between a speech in a first language and a text corresponding to the speech in the first language, and second information related to conversion between a text in the first language and a text in a second language corresponding to the text in the first language, and the speech translation model is trained to convert a speech in the first language into a text in the second language and output the text.
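The two supervision signals compose into an end-to-end speech-to-translated-text behaviour. The toy sketch below replaces the trained model with lookup tables built from the two kinds of training pairs; this is purely illustrative of the data flow, not of the actual neural training.

```python
def train_speech_translation(speech_text_pairs, text_text_pairs):
    """Toy 'training': build lookups from the two supervision signals.

    speech_text_pairs: (speech, L1 text) pairs -- speech/text conversion info.
    text_text_pairs:   (L1 text, L2 text) pairs -- translation info.
    """
    recognize = dict(speech_text_pairs)
    translate = dict(text_text_pairs)

    def speech_to_l2_text(speech):
        # The end-to-end behaviour the model is trained for:
        # speech in the first language -> text in the second language.
        l1_text = recognize.get(speech)
        return translate.get(l1_text) if l1_text is not None else None

    return speech_to_l2_text
```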

AUTOMATIC INTERPRETATION SERVER AND METHOD BASED ON ZERO UI

Provided is a method performed by an automatic interpretation server based on a zero user interface (UI), which communicates with a plurality of terminal devices having a microphone function, a speaker function, a communication function, and a wearable function. The method includes connecting terminal devices disposed within a designated automatic interpretation zone, receiving a voice signal of a first user from a first terminal device among the terminal devices within the automatic interpretation zone, matching a plurality of users placed within a speech-receivable distance of the first terminal device, and performing automatic interpretation on the voice signal and transmitting results of the automatic interpretation to a second terminal device of at least one second user corresponding to a result of the matching.
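The matching step, selecting users within a speech-receivable distance of the first terminal, can be sketched as a simple radius check. The 2-D coordinates and the Euclidean-distance criterion are assumptions; the abstract does not specify how distance is measured.

```python
import math

def match_nearby_users(user_positions, terminal_position, receivable_distance):
    """Return the ids of users whose position lies within the
    speech-receivable radius of the first user's terminal."""
    tx, ty = terminal_position
    matched = []
    for user_id, (x, y) in user_positions.items():
        if math.hypot(x - tx, y - ty) <= receivable_distance:
            matched.append(user_id)
    return sorted(matched)
```

The interpretation results would then be sent to the terminals of the matched users.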

Enhanced graphical user interface for voice communications
11574633 · 2023-02-07

Enhanced graphical user interfaces for transcription of audio and video messages are disclosed. Audio data may be transcribed, and the transcription may include emphasized words and/or punctuation corresponding to emphasis of user speech. Additionally, the transcription may be translated into a second language. A message spoken by a user depicted in one or more images of video data may also be transcribed and provided to one or more devices.
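Rendering emphasized words and emphasis punctuation in a transcript might look like the sketch below. The per-word acoustic energy scores, the threshold, and uppercase-plus-exclamation rendering are all hypothetical choices; the abstract does not specify how emphasis is detected or displayed.

```python
def transcribe_with_emphasis(words, energies, emphasis_threshold=0.8):
    """Capitalize words whose (hypothetical) acoustic energy exceeds the
    threshold, and append an exclamation mark if the final word is emphasized."""
    rendered = [w.upper() if e >= emphasis_threshold else w
                for w, e in zip(words, energies)]
    sentence = " ".join(rendered)
    if energies and energies[-1] >= emphasis_threshold:
        sentence += "!"
    return sentence
```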

Natural language processing engine for translating questions into executable database queries
11573957 · 2023-02-07

A system and method for translating questions into database queries are provided. A text to database query system receives a natural language question and a structure in a database. Question tokens are generated from the question and query tokens are generated from the structure in the database. The question tokens and query tokens are concatenated into a sentence and a sentence token is added to the sentence. A BERT network generates question hidden states for the question tokens, query hidden states for the query tokens, and a classifier hidden state for the sentence token. A translatability predictor network determines if the question is translatable or untranslatable. A decoder converts a translatable question into an executable query. A confusion span predictor network identifies a confusion span in the untranslatable question that causes the question to be untranslatable. An auto-correction module auto-corrects the tokens in the confusion span.
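The input construction and the translatability gating described above can be sketched as follows. The `[CLS]`/`[SEP]` token names follow BERT convention, and the routing dictionary is a hypothetical interface; the actual networks are not reproduced here.

```python
def build_model_input(question_tokens, schema_tokens):
    """Concatenate question and database-structure tokens into one sentence,
    with a leading sentence token, mirroring the BERT-style input above."""
    return ["[CLS]"] + question_tokens + ["[SEP]"] + schema_tokens + ["[SEP]"]

def route_question(question_tokens, schema_tokens, is_translatable,
                   confusion_span=None):
    """Gate decoding on the translatability prediction: decode translatable
    questions; send untranslatable ones to auto-correction with their
    predicted confusion span."""
    tokens = build_model_input(question_tokens, schema_tokens)
    if is_translatable:
        return {"action": "decode", "input": tokens}
    return {"action": "auto_correct", "confusion_span": confusion_span}
```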

Machine translation method, device, and computer-readable storage medium

A machine translation method includes: receiving to-be-processed information expressed in a source language; encoding the to-be-processed information, and generating an expression vector sequence of the to-be-processed information; and predicting feature information of a target foresight word at a first moment by using a prediction model. The feature information includes at least one of a part of speech or a word category of the target foresight word. The method also includes: determining a context vector corresponding to the first moment in the expression vector sequence according to the feature information of the target foresight word; and decoding the context vector by using a decoder, to obtain target content that corresponds to the context vector and is expressed in a target language.
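The context-vector step above amounts to an attention-weighted sum over the encoder's expression vector sequence. A minimal sketch is below; in the described method the attention weights would be conditioned on the predicted foresight word's features (part of speech or word category), which is omitted here.

```python
def context_vector(expression_vectors, attention_weights):
    """Weighted sum of the encoder's expression vectors for one decoding
    moment; the weights stand in for foresight-word-conditioned attention."""
    dim = len(expression_vectors[0])
    return [
        sum(w * vec[d] for w, vec in zip(attention_weights, expression_vectors))
        for d in range(dim)
    ]
```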
