G10L13/00

DEVICE INCLUDING SPEECH RECOGNITION FUNCTION AND METHOD OF RECOGNIZING SPEECH
20180005627 · 2018-01-04

A device including a speech recognition function which recognizes speech from a user includes: a loudspeaker which outputs speech to a space; a microphone which collects speech in the space; a first speech recognition unit which recognizes the speech collected by the microphone; a command control unit which issues a command for controlling the device, based on the speech recognized by the first speech recognition unit; and a control unit which prohibits the command control unit from issuing the command, based on the speech to be output from the loudspeaker.
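
The prohibition logic can be sketched as a simple flag-based model (all class and method names here are invented for illustration, not taken from the patent): while the loudspeaker is active, recognized phrases are discarded instead of being turned into commands, so the device does not react to its own output.

```python
class VoiceCommandDevice:
    """Toy model of the abstract's control flow; names are illustrative."""

    def __init__(self):
        self._speaker_playing = False

    def start_playback(self, text):
        # Loudspeaker begins outputting speech to the space.
        self._speaker_playing = True

    def stop_playback(self):
        self._speaker_playing = False

    def on_recognized(self, phrase):
        # First speech recognition unit has recognized mic input.
        if self._speaker_playing:
            # Control unit prohibits command issuance: the recognized
            # phrase may be the device's own loudspeaker output.
            return None
        return {"command": phrase}  # command control unit issues command
```

In a real device the prohibition would likely be time-windowed or content-aware rather than a single boolean, but the flag captures the claimed relationship between the units.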

SOUND CONTROL DEVICE, SOUND CONTROL METHOD, AND SOUND CONTROL PROGRAM
20180005617 · 2018-01-04

A sound control device includes: a reception unit that receives a start instruction indicating a start of output of a sound; a reading unit that reads a control parameter that determines an output mode of the sound, in response to the start instruction being received; and a control unit that causes the sound to be output in a mode according to the read control parameter.
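
A minimal sketch of the three claimed units, assuming a dictionary-backed parameter store (the store and its keys are invented for illustration): a start instruction triggers reading the control parameter, which then determines the output mode.

```python
CONTROL_PARAMS = {"mode": "fade_in", "volume": 0.6}  # stored parameters

def on_start_instruction(param_store=CONTROL_PARAMS):
    # Reception unit has received the start instruction; the reading
    # unit now reads the control parameter in response.
    params = dict(param_store)
    return output_sound(params)         # control unit

def output_sound(params):
    # Output the sound in the mode selected by the read parameter.
    return f"output mode={params['mode']} volume={params['volume']}"
```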

COMPUTING DEVICE AND CORRESPONDING METHOD FOR GENERATING DATA REPRESENTING TEXT
20180011834 · 2018-01-11

An example method involves (i) accessing first data representing text, wherein the text defines at least one position representing a particular type of grammatical break between two portions of the text; (ii) identifying, from among the at least one position, a position that is closest to a target position within the text; (iii) based on the identified position within the text, generating second data that represents a proper subset of the text, wherein the proper subset extends from an initial position within the text to the identified position within the text; and (iv) providing output based on the generated second data.
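
Steps (ii) and (iii) reduce to picking the break position nearest a target and slicing the text there. A hedged sketch, with the sample text and break positions invented for illustration:

```python
def truncate_at_break(text, break_positions, target):
    """Steps (ii)-(iii): find the break position closest to the target
    and return the proper subset of the text up to that position."""
    closest = min(break_positions, key=lambda p: abs(p - target))
    return text[:closest]

text = "Hello world. This is long. Tail."
# Positions of sentence-level grammatical breaks in this sample text.
breaks = [12, 26]
```

With a target of 20, the break at 26 is nearest, so the subset ends after the second sentence rather than mid-sentence.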

METHOD, ELECTRONIC DEVICE, AND RECORDING MEDIUM FOR NOTIFYING OF SURROUNDING SITUATION INFORMATION
20180012073 · 2018-01-11

According to various embodiments, a method for notifying a user of surrounding situation information by an electronic device may comprise the operations of: monitoring a value indicating a movement of the electronic device; determining whether the electronic device is in a stopped state, on the basis of the value indicating the movement of the electronic device; acquiring surrounding situation information of the electronic device, of which the user is to be notified, when the electronic device is in the stopped state; and outputting the surrounding situation information.
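
The operations above can be sketched with a threshold-based stopped-state test (the threshold and the information source are invented assumptions; a real device would read accelerometer values and a scene-analysis pipeline):

```python
def is_stopped(motion_values, threshold=0.05):
    # Monitor values indicating movement; stopped if all stay small.
    return all(abs(v) < threshold for v in motion_values)

def notify_surroundings(motion_values, acquire_info):
    # Acquire and output surrounding situation info only when stopped.
    if is_stopped(motion_values):
        return f"notice: {acquire_info()}"
    return None
```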

MOBILE ELECTRONIC DEVICE AND OPERATION METHOD THEREFOR
20180013882 · 2018-01-11

An operation method for a mobile electronic device is provided. The operation method includes: transmitting a calling phone number to a wireless audio product from an operating system of the mobile electronic device via wireless communication, wherein the mobile electronic device is wirelessly connected to the wireless audio product; transmitting the calling phone number to application software of the mobile electronic device by the wireless audio product; searching for a caller name corresponding to the calling phone number by the application software of the mobile electronic device; transmitting the caller name to the wireless audio product by the application software of the mobile electronic device via wireless communication; and playing the caller name by the wireless audio product.
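
The round trip described above (OS → audio product → app → audio product) collapses to a lookup and a playback step; this sketch flattens the wireless hops into direct calls, with the contact store and function names invented for illustration:

```python
CONTACTS = {"+15550100": "Alice"}  # app-side contact store (illustrative)

def resolve_caller_name(number, contacts=CONTACTS):
    # Application software searches for the caller name by number.
    return contacts.get(number, "Unknown caller")

def on_incoming_call(number):
    # In the patent, the number travels OS -> audio product -> app and
    # the name travels back; here the hops are direct calls.
    name = resolve_caller_name(number)
    return f"audio product plays: {name}"
```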

AUTOMATIC INTERPRETATION METHOD AND APPARATUS

Provided are an automated interpretation method, apparatus, and system. The automated interpretation method includes encoding a voice signal in a first language to generate a first feature vector, decoding the first feature vector to generate a first language sentence in the first language, encoding the first language sentence to generate a second feature vector with respect to a second language, decoding the second feature vector to generate a second language sentence in the second language, controlling generation of a candidate sentence list based on any one or any combination of the first feature vector, the first language sentence, the second feature vector, and the second language sentence, and selecting, from the candidate sentence list, a final second language sentence as a translation of the voice signal.
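
The staged pipeline can be sketched with stub encoder/decoder objects standing in for the neural models (the stubs, translation table, and scoring function are invented; only the encode → decode → encode → decode → candidate-list → select flow mirrors the abstract):

```python
class StubASR:
    """Stand-in for speech encoding/decoding in the first language."""
    def encode(self, signal):
        return tuple(signal)            # first feature vector
    def decode(self, vec):
        return " ".join(vec)            # first language sentence

class StubMT:
    """Stand-in for translation encoding/decoding into the second language."""
    TABLE = {"hello world": ["hallo welt", "hallo, welt"]}
    def encode(self, sentence):
        return sentence.lower()         # second feature vector
    def decode(self, vec):
        return self.TABLE[vec][0]       # second language sentence
    def alternates(self, sentence):
        return self.TABLE[sentence.lower()][1:]

def interpret(voice_signal, asr, mt, score):
    src_vec = asr.encode(voice_signal)
    src_sent = asr.decode(src_vec)
    tgt_vec = mt.encode(src_sent)
    tgt_sent = mt.decode(tgt_vec)
    candidates = [tgt_sent] + mt.alternates(src_sent)  # candidate list
    return max(candidates, key=score)   # final second language sentence
```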

Removal of identifying traits of a user in a virtual environment

A virtual environment platform may receive, from a user device, a request to access a virtual reality (VR) environment and may verify, based on the request, a user of the user device to allow the user device access to the VR environment. The virtual environment platform may receive, after verifying the user of the user device, user voice input and user handwritten input from the user device. The virtual environment platform may generate processed user speech by processing the user voice input, wherein a characteristic of the processed user speech and a corresponding characteristic of the user voice input are different and may generate formatted user text by processing the user handwritten input, wherein the formatted user text is machine-encoded text. The virtual environment platform may cause the processed user speech to be audibly presented and the formatted user text to be visually presented in the VR environment.
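
The key processing claim, that "a characteristic of the processed user speech and a corresponding characteristic of the user voice input are different", can be illustrated with a toy pitch normalizer (the per-frame pitch representation and target value are assumptions; real anonymization would operate on the audio signal itself):

```python
def anonymize_pitch(frame_pitches_hz, target_hz=150.0):
    """Shift the mean pitch of a voice to a fixed target so the output
    characteristic differs from the user's input characteristic."""
    if not frame_pitches_hz:
        return []
    mean = sum(frame_pitches_hz) / len(frame_pitches_hz)
    shift = target_hz - mean
    return [p + shift for p in frame_pitches_hz]
```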

Cooking management system with wireless voice engine server
11710485 · 2023-07-25

The disclosed technology provides computer-to-wireless-voice integration methods and systems. In some implementations, the methods and systems deliver real-time voice instructions to users of required time-sensitive actions and ensure that such directives are received and that a recipient effectively acts on them. The systems and methods include receiving a notification of an event from a terminal in a wireless active voice engine (WAVE) system, determining an active voice directive corresponding to the event with a WAVE module, converting the active voice directive into a voice event via a directive converter, and notifying a targeted recipient of the active voice directive corresponding to the event with a communications module. In some implementations, the systems and methods include sending a confirmation event via the receiver to the communications module that the active voice directive was received by the targeted recipient and communicating that the active voice directive has been completed.
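
The event-to-voice flow can be sketched end to end (the directive table, the TTS wrapper, and all function names are invented for illustration; only the event → directive → voice event → notify → confirm sequence follows the abstract):

```python
DIRECTIVES = {"timer_done": "Remove the tray from the oven now."}

def handle_event(event, notify):
    directive = DIRECTIVES[event]              # WAVE module lookup
    voice_event = f"<tts>{directive}</tts>"    # directive converter
    notify(voice_event)                        # communications module
    return {"event": event, "status": "delivered"}

def confirm_completed(delivery):
    # Recipient confirms the directive was received and acted on.
    delivery["status"] = "completed"
    return delivery
```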