Interactive situational teaching system for use in K12 stage
20210150924 ยท 2021-05-20
Inventors
Cpc classification
G09B5/065
PHYSICS
G09B5/14
PHYSICS
G09B5/067
PHYSICS
International classification
G09B5/06
PHYSICS
G09B5/12
PHYSICS
G09B5/14
PHYSICS
Abstract
Provided is an interactive situational teaching system for use in K12 stage, comprising a computer apparatus and a scenario creating apparatus, an image acquiring apparatus and a user terminal connected to the computer apparatus, wherein the computer apparatus is configured to receive an operation instruction from the user terminal to control the scenario creating apparatus and the image acquiring apparatus, and the computer apparatus is capable of synthesizing and saving situational audio/video information obtained from the image acquiring apparatus and user audio/video information obtained from the user terminal as an audio/video file, and is also capable of presenting the audio/video file via the scenario creating apparatus. By using the system of the present invention, the experience and interest of a K12 stage user participating in interactive situational teaching are further enhanced, and the system is also applicable to solving the problem of coursework submission for interactive situational teaching.
Claims
1. An interactive situational teaching system for use in K12 stage, comprising a computer apparatus and a scenario creating apparatus, an image acquiring apparatus and a user terminal connected to the computer apparatus, wherein the image acquiring apparatus comprises a camera for remotely acquiring situational audio/video information of situational teaching; the scenario creating apparatus comprises a projection device and a sound device, and is configured to project a predetermined scenario stored in the computer apparatus or an actual scenario obtained by the image acquiring apparatus to a target area to display a situational teaching scenario; the user terminal comprises a recording apparatus and a videoing apparatus, and is configured to acquire user audio/video information and send an operation instruction from a user to the computer apparatus; and the computer apparatus is configured to receive the operation instruction from the user terminal, control the scenario creating apparatus and the image acquiring apparatus, and synthesize and save the situational audio/video information obtained from the image acquiring apparatus and the user audio/video information obtained from the user terminal as an audio/video file.
2. The system according to claim 1, wherein the computer apparatus comprises a situational audio/video extracting unit, a user audio/video acquiring unit, and an information synthesizing and saving unit, wherein the situational audio/video extracting unit is configured to extract, according to the preset information set based on a teaching goal, a segment of the situational audio/video information acquired from the image acquiring apparatus that is related to the preset information, such as a video segment, an audio segment or a screenshot image, and establish an association relationship between the preset information and a segment in an order; the user audio/video acquiring unit is configured to segment, according to the preset information set based on the teaching goal, the user audio/video information acquired by the user terminal, and establish an association relationship between the preset information and a segment; and the information synthesizing and saving unit is configured to synthesize, according to the preset information, the situational audio/video information and the user audio/video information respectively processed by the situational audio/video extracting unit and the user audio/video acquiring unit into an audio/video file, and save the audio/video file to the computer apparatus.
3. The system according to claim 2, wherein the situational audio/video extracting unit further comprises an information presetting unit, an information comparing unit, a data extracting unit, and a data saving unit, wherein the information presetting unit is configured to take key points as preset information according to the teaching goal, particularly outline text information of the teaching goal, and set an audio and/or image corresponding to the preset information as reference information; the information comparing unit is configured to compare the situational audio/video information with the audio and/or image of the reference information, to acquire a time node of the situational audio/video information corresponding to the preset information; the data extracting unit is configured to extract situational audio/video information corresponding to the preset information based on the time node according to a preset rule of, for example, extracting an image at a fixed time interval, extracting a video segment or an audio segment at a fixed time interval, etc.; and the data saving unit is configured to save the extracted situational audio/video information in an order, and establish a corresponding association relationship with the preset information.
4. The system according to claim 3, wherein the user audio/video acquiring unit further comprises an audio recognizing unit, a text comparing unit, and a segment marking unit, wherein the audio recognizing unit is configured to recognize and convert an audio in the obtained user audio/video information into a text content according to a speech recognition model, and establish a corresponding association relationship between the text content and the user audio/video information according to time information, such as digital time stamp information; the text comparing unit is configured to perform search comparison on the text content according to the preset information, and establish a corresponding association relationship for the text content according to the preset information; the segment marking unit is configured to establish, according to the corresponding association relationships respectively obtained by the audio recognizing unit and the text comparing unit, a corresponding association relationship between the preset information and the user audio/video information based on the text content, and perform segment marking on the user audio/video information according to the key points of the preset information.
5. The system according to claim 4, wherein the information synthesizing and saving unit further comprises a corresponding relationship processing unit, a data compression processing unit, a time fitting processing unit, and a data synthesis processing unit, wherein the corresponding relationship processing unit is configured to associate, according to the corresponding association relationship with the preset information, the user audio/video information subjected to the segment marking with the situational audio/video information segment extracted by the situational audio/video extracting unit, and establish a corresponding relationship between the user audio/video information and the situational audio/video information; the data compression processing unit is configured to perform compression processing on the corresponding situational audio/video information according to a preset rule based on the duration of a user audio/video information segment to meet the time requirement of the preset rule; the time fitting processing unit is configured to perform fitting processing on the user audio/video information based on the segment marking according to the compressed situational audio/video information, for example, add an idle time between segments to complete the play of the situational audio/video information; and the data synthesis processing unit is configured to synthesize, according to the corresponding relationship, the user audio/video information and the situational audio/video information after the fitting processing to form an audio/video file.
6. The system according to claim 5, wherein the synthesized audio/video file is played by the scenario creating apparatus.
7. The system according to claim 6, wherein the synthesized audio/video file is submitted to a teacher as a coursework of situational teaching.
8. The system according to claim 7, wherein the recording apparatus and the videoing apparatus of the user terminal are apparatuses built in or provided external to the user terminal.
9. The system according to claim 8, wherein the user terminal is a desktop computer, a notebook computer, a smart phone, or a PAD.
10. The system according to claim 9, wherein the user audio/video information is a summative explanation recorded in the order of the key points of the teaching goal, according to the requirements of the teaching goal, after the user completes the learning or practice of the situational teaching.
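As an illustration of the segment-marking flow recited in claim 4 above — transcribing the user's audio and locating the key points of the preset information in the transcript — the following Python sketch shows one minimal possibility. The transcript format, the names, and the plain substring matching are assumptions for illustration only; the claims do not prescribe any particular speech recognition model or text comparison method.

```python
def mark_segments(transcript, key_points):
    """Mark where each key point of the preset information first occurs in
    the user's recording. `transcript` is a list of (seconds, text) pairs,
    as a speech recognizer with timestamps might produce; a plain substring
    search stands in for real text comparison."""
    marks = {}
    for key_point in key_points:
        for seconds, text in transcript:
            if key_point in text:
                marks[key_point] = seconds  # first occurrence wins
                break
    return marks
```

The marks produced this way would then drive the segment marking of the user audio/video information, with each marked time serving as a segment boundary.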
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0043] The accompanying drawings illustrate one or more embodiments of the present invention and, together with the written description, serve to explain the principles of the invention. Wherever possible, the same reference numbers are used throughout the drawings to refer to the same or like elements of an embodiment.
DETAILED DESCRIPTION OF EMBODIMENTS
[0049] The present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments of the present invention are shown. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure is thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Like reference numerals refer to like elements throughout.
[0050] The specific embodiments of the present invention will be further described in detail below in combination with the accompanying drawings. It should be understood that the embodiments described herein are used only to explain the present invention, rather than limit the present invention. Various variations and modifications made by those skilled in the art without departing from the spirit of the present invention shall fall into the scope of the independent claims and dependent claims of the present invention.
[0052] The image acquiring apparatus 30 comprises at least one camera 301 for remotely acquiring situational audio/video information of situational teaching. The camera 301 may be provided with a camera of an audio acquiring apparatus, or may have an audio acquiring apparatus that is separately provided. Preferably, the camera 301 is a high definition camera.
[0053] The scenario creating apparatus 20 comprises a projection device 201 and a sound device 203, and is configured to project a predetermined scenario stored in the computer apparatus 10 or an actual scenario obtained by the image acquiring apparatus 30 to a target area to display a situational teaching scenario. Preferably, the scenario creating apparatus 20 further comprises an augmented reality (AR) display apparatus 204 for displaying image information to be projected in an AR manner after the image information is processed, so that a user can view it by using a corresponding viewing device.
[0054] The user terminal 40 comprises a recording apparatus 401 and a videoing apparatus 402, and is configured to acquire user audio/video information and send an operation instruction from the user to the computer apparatus. The interactive situational teaching system may be provided with a plurality of user terminals 40, or may allow any permitted user to access the system through a user terminal 40. In many intelligent user terminals, the recording apparatus 401 and the videoing apparatus 402 are already integrated, but for higher-quality audio/video data or other reasons, peripheral recording and videoing apparatuses such as high-fidelity microphones or high-definition cameras may be used. According to the present invention, a user uses the user terminal 40 to perform learning in the interactive situational teaching. When the user completes the learning or practice in the situational teaching, or before the end of the learning, a summative explanation is given in the order of the key points of a teaching goal according to the requirements of the teaching goal, to form the user audio/video information described below. Specifically, the user terminal 40 may be a desktop computer, a notebook computer, a smart phone, or a PAD, but is not limited thereto; any device that provides the functions described below can be used.
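The operation instruction sent from the user terminal 40 to the computer apparatus could, for example, be serialized as a small JSON message. The Python sketch below is purely illustrative: the message format, field names, and the newline-delimited transport are assumptions, not part of the disclosure.

```python
import json

def encode_instruction(action: str, params: dict) -> bytes:
    """Serialize one operation instruction as newline-delimited JSON,
    ready to be sent over a network connection to the computer apparatus."""
    return (json.dumps({"action": action, "params": params}) + "\n").encode("utf-8")

# e.g. request that a flower-observation learning session be started
# (the action name and topic value are hypothetical):
message = encode_instruction("start_learning", {"topic": "flower_blooming"})
```

On the computer apparatus side, a matching decoder would read one line at a time and dispatch on the `action` field to control the scenario creating apparatus or the image acquiring apparatus.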
[0055] The user terminal 40 may comprise: a processor, a network module, a control module, a display module, and an intelligent operating system. The user terminal may be provided with a variety of data interfaces for connecting to various extension devices and accessory devices via a data bus. The intelligent operating system comprises Windows, Android and its derivatives, and iOS, on which application software can be installed and run so as to realize the functions of various types of application software, services, and application program stores/platforms under the intelligent operating system.
[0056] The user terminal 40 may be connected to the Internet via RJ45/Wi-Fi/Bluetooth/2G/3G/4G/G.hn/ZigBee/Z-Wave/RFID, connected to other terminals or other computers and devices via the Internet, and connected to various extension devices and accessory devices by using a variety of data interfaces or bus modes, such as 1394/USB/serial/SATA/SCSI/PCI-E/Thunderbolt/data card interfaces, and by using audio/video interfaces such as HDMI/YPbPr/SPDIF/AV/DVI/VGA/TRS/SCART/DisplayPort, so as to constitute a conference/teaching device interaction system. The functions of voice control and motion control are realized by using a sound capture control module and a motion capture control module in the form of software, or in the form of on-board data bus hardware; the display, projection, voice access, audio/video playing, and digital or analog audio/video input and output functions are realized by connecting to a display/projection module, a microphone, a sound device and other audio/video devices via the audio/video interfaces; the image access, sound access, use control and screen recording of an electronic whiteboard, and an RFID reading function are realized by connecting to a camera, a microphone, the electronic whiteboard and an RFID reading device via the data interfaces, and a mobile storage device, a digital device and other devices can be accessed, managed and controlled via corresponding interfaces; and the functions of manipulation, interaction and screen casting between multi-screen devices are realized by means of DLNA/IGRS technologies and Internet technologies.
[0057] In the present invention, the processor of the user terminal 40 is defined to include, but is not limited to: an instruction execution system, such as a computer/processor-based system, an application specific integrated circuit (ASIC), a computing device, or a hardware and/or software system capable of fetching or acquiring logic from a non-transitory storage medium or a non-transitory computer-readable storage medium and executing the instructions contained therein. The processor may further comprise any controller, state machine, microprocessor, Internet-based entity, service or feature, or any other analog, digital, and/or mechanical implementation thereof.
[0058] In the present invention, the computer-readable storage medium is defined to include, but is not limited to: any medium capable of containing, storing or maintaining programs, information and data. The computer-readable storage medium includes any of a number of physical media, such as electronic, magnetic, optical, electromagnetic or semiconductor media. More specific examples of memories suitable for the computer-readable storage medium, the user terminal and the server include, but are not limited to: a magnetic computer disk (such as a floppy disk or a hard drive), a magnetic tape, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a compact disc (CD) or digital video disc (DVD), a Blu-ray disc, a solid state drive (SSD), and a flash memory.
[0059] The computer apparatus 10 is configured to receive the operation instruction from the user terminal 40, control the scenario creating apparatus 20 and the image acquiring apparatus 30, and synthesize and save the situational audio/video information obtained from the image acquiring apparatus 30 and the user audio/video information obtained from the user terminal 40 as an audio/video file. The computer apparatus 10 may be any commercial or home computer device that meets actual needs, such as an ordinary desktop computer, a notebook computer, or a tablet computer. The above functions of the computer apparatus 10 are performed and implemented by its functional units.
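The synthesis performed by the computer apparatus 10 — pairing situational segments with the user segments that share a key point of the preset information, in the order of the key points — can be sketched as follows. This is a simplified model for illustration; the segment identifiers and the play-list representation are assumptions, and the actual functional units of the computer apparatus 10 are described in claims 2 to 5.

```python
def build_playlist(key_points, situational_segments, user_segments):
    """Order the segments for the synthesized audio/video file: for each
    key point, play the matching situational segment first, then the
    user's explanation of it. Segments are looked up by key point; key
    points with no matching segment are simply skipped."""
    playlist = []
    for key_point in key_points:
        if key_point in situational_segments:
            playlist.append(("situational", situational_segments[key_point]))
        if key_point in user_segments:
            playlist.append(("user", user_segments[key_point]))
    return playlist
```

A downstream muxing step would then concatenate the listed segments, after the compression and time-fitting processing of claim 5, into the final audio/video file.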
[0060] The user terminal 40 of the user is connected to the computer apparatus 10 in a wired or wireless manner through a network or a data cable to receive, or actively carry out, the learning of a situational teaching subject. For example, using the system of the present invention, the user can perform situational learning on topics such as observing the blooming of a flower in the season when it is in bloom, such as spring, observing the changes of red leaves in autumn, observing lightning in stormy weather, or observing seed germination. As an example, the process of observing the blooming of a flower is taken as a teaching scenario. After the user sends a learning instruction via the user terminal 40, the computer apparatus 10 receives the instruction and invokes a camera 301 for observing the flower. The camera 301 may be a camera specially set up in the field or indoors, or may be, for example, a public monitoring camera in a botanical garden or a forest, and these cameras may be invoked according to a license agreement. Some flowers take a long time to bloom, while others, such as the night-blooming cereus, bloom in a short time. Specifically, according to the content of the syllabus of the situational teaching, the time at which the camera 301 starts monitoring and acquiring situational audio/video information is set. For example, audio/video information may be regularly monitored and acquired from the appearance of buds, and a corresponding acquisition time interval may be set according to the blooming speed of the flower. The acquired situational audio/video information may be displayed regularly or irregularly by the scenario creating apparatus 20 so that the real-time status and situational changes can be observed.
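The timed acquisition described above — starting from the appearance of buds and sampling at an interval matched to the blooming speed — might be scheduled as in the following sketch. The function name and the example interval values are illustrative assumptions; the disclosure only requires that the start time and interval be configurable per the syllabus.

```python
from datetime import datetime, timedelta

def capture_schedule(start: datetime, window: timedelta, interval: timedelta):
    """Timestamps at which the camera 301 grabs audio/video, from the
    start of budding until the end of the observation window (inclusive)."""
    times = []
    t = start
    while t <= start + window:
        times.append(t)
        t += interval
    return times

# A slow-blooming flower: one capture per hour over a three-day window.
schedule = capture_schedule(datetime(2021, 4, 1, 6, 0), timedelta(days=3), timedelta(hours=1))
```

A fast-blooming flower such as the night-blooming cereus would instead use a short window with an interval of minutes or seconds.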
[0065] Preferred embodiments of the present invention introduced above are intended to make the spirit of the present invention more apparent and easier to understand, but not to limit the present invention. Any updates, replacements and improvements made within the spirit and principles of the present invention should be regarded as within the scope of protection of the claims of the present invention.
INDUSTRIAL APPLICABILITY
[0066] By using the system of the present invention, the experience and interest of a K12 stage user participating in interactive situational teaching are further enhanced, and the system is also applicable to solving the problem of coursework submission for interactive situational teaching.
[0067] The foregoing description of the exemplary embodiments of the present invention has been presented only for the purposes of illustration and description and is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching.
[0068] The embodiments were chosen and described in order to explain the principles of the invention and their practical application so as to enable others skilled in the art to utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated. Alternative embodiments will become apparent to those skilled in the art to which the present invention pertains without departing from its spirit and scope. Accordingly, the scope of the present invention is defined by the appended claims rather than by the foregoing description and the exemplary embodiments described therein.