Patent classifications
G09B19/06
System to evaluate dimensions of pronunciation quality
The present invention provides a system for determining the language proficiency of a user in an evaluated language. A machine learning engine may be trained using audio file variables from a plurality of audio files together with human-generated scores for comprehensibility, accentedness, and intelligibility for each audio file. The system may receive an audio file from a user and determine a plurality of audio file variables from the audio file. The system may apply the audio file variables to the machine learning engine to determine comprehensibility, accentedness, and intelligibility scores for the user. The system may determine one or more projects and/or classes for the user based on the user's comprehensibility score, accentedness score, and/or intelligibility score.
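The scoring flow described above (features extracted from an audio file, applied to an engine trained on human-scored files) can be sketched minimally. The feature names, values, and the nearest-neighbour stand-in for the trained engine are all illustrative assumptions, not taken from the patent.

```python
import math

# Hypothetical training data: per-file audio features (speech rate, pause ratio,
# pitch variance) paired with human-generated scores for comprehensibility,
# accentedness, and intelligibility. Values are illustrative only.
TRAIN = [
    ([3.1, 0.20, 0.5], (8.0, 3.0, 8.5)),
    ([2.4, 0.35, 0.9], (5.0, 6.0, 5.5)),
    ([3.8, 0.10, 0.3], (9.0, 2.0, 9.0)),
    ([2.0, 0.40, 1.1], (4.0, 7.0, 4.5)),
]

def score_user(features):
    """Nearest-neighbour stand-in for the trained engine: return the
    (comprehensibility, accentedness, intelligibility) scores of the
    most similar training file."""
    _, scores = min(TRAIN, key=lambda row: math.dist(row[0], features))
    return scores

comp, acc, intel = score_user([3.0, 0.22, 0.55])
```

A production system would replace the lookup with a trained regression or neural model, but the interface (features in, three scores out) is the same.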
System of language learning with augmented reality
Education curricula materials include encoded indicia such as a QR code that contains information related to identifying requested augmented reality image data from a server over a network. By scanning the QR code, a computer uses its decoding software to create a data set for transmitting to the server. The data set may include an identifier for selected augmented reality image data associated with the user's curriculum, information about the curriculum at issue, the academic level of the user, and any other data necessary to ensure that the most appropriate augmented reality image data is transmitted back to the computer. The server transmits comprehensive augmented reality image data back to the computer for viewing on a computerized display accessible by a student. Part of the content may include an interactive pedagogical agent that helps the student with a part of the instruction related to a portion of the curriculum.
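The request-building step (decoded QR payload plus user context assembled into a data set for the server) can be sketched as follows. The payload format and field names are hypothetical, since the abstract does not specify them.

```python
# Minimal sketch of assembling the data set sent to the AR server from a
# decoded QR string; the 'key=value;key=value' payload format is an assumption.
def build_ar_request(decoded_qr: str, academic_level: str) -> dict:
    """Turn a decoded QR string like 'curriculum=bio101;asset=cell_model'
    into the data set transmitted to the server."""
    fields = dict(pair.split("=") for pair in decoded_qr.split(";"))
    return {
        "asset_id": fields["asset"],          # which AR image data to return
        "curriculum": fields["curriculum"],   # the curriculum at issue
        "academic_level": academic_level,     # tailors the returned content
    }

request = build_ar_request("curriculum=bio101;asset=cell_model", "grade-9")
```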
Method for inputting Chinese characters using mortise and tenon joint structures for Chinese characters formed by building members
Disclosed are mortise and tenon joint structures for Chinese characters formed by building members having the same or different configurations, and a method for inputting Chinese characters by using the building members. The Chinese characters are de-structured such that each Chinese character consists of one or more radicals which are formed by the mortise and tenon joint structures or variants thereof. The variants of the mortise and tenon joint structures are formed by shift, rotation, or a combination of shift and rotation of the building members of the mortise and tenon joint structures. This establishes an input method mapping between Latin letters and Chinese characters.
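The letter-to-member-to-radical pipeline can be sketched as two lookup tables. The patent does not publish its member inventory or key assignments, so every mapping below is hypothetical.

```python
# Illustrative only: hypothetical key assignments and member inventory.
MEMBER_KEYS = {          # Latin letter -> building member (or variant)
    "h": "horizontal bar",
    "v": "vertical bar",
    "r": "vertical bar rotated 90 degrees",  # a variant formed by rotation
}

RADICALS = {             # joined members -> the radical they form
    ("horizontal bar", "vertical bar"): "十",
}

def input_radical(keys):
    """Map a Latin-letter sequence to building members, then to a radical."""
    members = tuple(MEMBER_KEYS[k] for k in keys)
    return RADICALS.get(members)
```

Full characters would then be composed from one or more radicals produced this way.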
Method of interactive foreign language learning through mutual voice conversation using a voice recognition function and a TTS function
Disclosed is an interactive foreign language learning method that enables foreign language learning through conversation between a learner and a terminal having a screen, such as a smartphone, tablet computer, notebook computer, or personal computer (PC), based on a video containing foreign language sound expressions, such as movies, dramas, or news, through speech transmission using the speech recognition and TTS functions of the terminal. Upon determining that speech input by a learner in a speech waiting state for speech recognition matches a previously stored voice command, the terminal performs the operation corresponding to the voice command and enters the speech waiting state again. Upon determining that the speech input by the learner does not match a voice command, the terminal allows the learner to perform foreign language learning in learning modes according to learner selection, such as a learning mode in which the learner speaks after the terminal, a conversation mode in which the terminal and the learner alternately speak sentences, and an intermediate learning mode, changing the learning modes in response to the learner's voice commands. The learner can thereby perform interactive foreign language learning through speech transmission between the terminal and the learner while minimizing screen touches, and can have actual conversations with other learners performing foreign language learning using the same application program implementing the learning method.
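The core dispatch described above (match recognized speech against stored voice commands, execute and return to the waiting state on a match, otherwise pass the speech to the active learning mode) can be sketched as a small state function. The command names and mode labels are illustrative assumptions.

```python
def handle_speech(utterance, commands, current_mode):
    """If recognized speech matches a stored voice command, switch modes and
    return to the speech waiting state; otherwise treat it as learning input
    for the currently active mode."""
    if utterance in commands:
        return commands[utterance], "waiting"       # command handled
    return current_mode, "learning_input"           # speech goes to the mode

# Hypothetical command vocabulary for illustration.
commands = {"repeat mode": "repeat", "conversation mode": "conversation"}
mode, state = handle_speech("conversation mode", commands, "repeat")
```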
UTTERANCE EVALUATION APPARATUS, UTTERANCE EVALUATION METHOD, AND PROGRAM
A stable evaluation result is obtained from spoken audio for any sentence. A speech evaluation device (1) outputs a score evaluating the speech of an input voice signal spoken by a speaker in a first group. A feature extraction unit (11) extracts an acoustic feature from the input voice signal. A conversion unit (12) converts the acoustic feature of the input voice signal into the acoustic feature of a speaker in a second group speaking the same text as the input voice signal. An evaluation unit (13) calculates a score that indicates a higher evaluation as the distance between the acoustic feature before conversion and the acoustic feature after conversion becomes shorter.
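The evaluation step (shorter feature distance yields a higher score) can be illustrated with a simple distance-to-score mapping. The exp(-d) form and the toy feature vectors are assumptions; the patent only requires that the score increase as the distance shrinks.

```python
import math

def evaluation_score(feat_before, feat_after):
    """Score speech so that a smaller distance between the speaker's acoustic
    features and the converted (second-group) features gives a higher value.
    The exp(-d) mapping is an illustrative choice, not the patent's formula."""
    d = math.dist(feat_before, feat_after)  # Euclidean distance
    return math.exp(-d)                     # 1.0 when identical, -> 0 as d grows

close = evaluation_score([1.0, 2.0], [1.0, 2.1])  # features barely changed
far = evaluation_score([1.0, 2.0], [4.0, 6.0])    # features far apart
```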
TEACHING LITERARY CONCEPTS THROUGH MEDIA
Methods, systems, and storage media for teaching literary concepts are disclosed. Exemplary implementations may: generate a first list comprising a plurality of literary concepts; receive a first selection comprising at least one literary concept of the plurality of literary concepts; generate a second list comprising a plurality of media based on the first selection; receive a second selection comprising at least one media of the plurality of media; and generate a lesson plan comprising the at least one media based on the second selection.
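The two-selection flow above (concepts chosen from a first list, media chosen from a concept-filtered second list, lesson plan built from the second selection) can be sketched directly. The concept names, media items, and activity format are hypothetical.

```python
# Hypothetical catalogue for illustration only.
MEDIA_BY_CONCEPT = {
    "irony": ["film clip A", "song B"],
    "metaphor": ["poem C"],
}

def media_for(selected_concepts):
    """Second list: media relevant to the chosen literary concepts."""
    return [m for c in selected_concepts for m in MEDIA_BY_CONCEPT.get(c, [])]

def lesson_plan(selected_media):
    """Lesson plan built from the second selection."""
    return {"media": selected_media,
            "activities": [f"discuss {m}" for m in selected_media]}

plan = lesson_plan(media_for(["irony"])[:1])  # pick one item from the second list
```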
Method and system for adaptive language learning
Methods and systems provide an adaptive method of language learning using automatic speech recognition that allows a user to learn a new language using only their voice, without using their hands or eyes. The system may be implemented in an application for a smartphone. Each lesson comprises a series of questions that adapt to the user's knowledge. The questions ask for the translation of a word or phrase by playing an audio prompt in the origin language, recording the user speaking the translation in the target language, indicating whether the utterance was correct or incorrect, and providing feedback related to the user's utterance. Each user response is evaluated in real time, and the application provides individualized feedback to the user based on their response. Subsequent questions in the lesson and future lessons are dynamically ordered to adapt to the user's knowledge.
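The dynamic ordering described above can be sketched as a simple reordering rule: items the user answered incorrectly move to the front of the remaining queue so they recur sooner. The rule and the sample vocabulary are illustrative assumptions; the patent's actual adaptation logic is not specified in the abstract.

```python
def reorder_questions(queue, results):
    """Move questions the user missed to the front of the remaining queue.
    `results` maps question -> True (correct) / False (incorrect); unseen
    questions keep their original relative order."""
    missed = [q for q in queue if results.get(q) is False]
    rest = [q for q in queue if results.get(q) is not False]
    return missed + rest

queue = ["hola", "adios", "gracias"]
results = {"adios": False, "hola": True}
reordered = reorder_questions(queue, results)
```

A fuller system would also weight items by response latency or pronunciation score, but the queue-reordering interface stays the same.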