CONTROL DEVICE, ELECTRONIC MUSICAL INSTRUMENT SYSTEM, AND CONTROL METHOD
20220084491 · 2022-03-17
CPC classification
G10L15/22
PHYSICS
G10H2240/305
G10H1/0075
G06F3/167
International classification
G10H5/00
Abstract
Provided is a control device which controls an electronic musical instrument, comprising: an acquisition means which acquires, from a dialogue engine that understands the intention of an utterance of a user on the basis of the utterance and generates first data in which the intention is stated, the first data generated in response to the utterance; a storage means which stores conversion data in which the first data and a control command for controlling the electronic musical instrument are associated with each other; and a conversion means which generates, on the basis of the acquired first data and the conversion data, second data suitable for a control interface of the electronic musical instrument to be controlled, and transmits the second data to the electronic musical instrument.
Claims
1. A control device that controls an electronic musical instrument, comprising: an acquisition means that acquires first data which is generated in response to an utterance of a user from a dialogue engine which understands an intention of the utterance on a basis of the utterance and generates the first data in which the intention is stated; a storage means that stores conversion data which is data in which the first data is correlated with a control command for controlling the electronic musical instrument; and a conversion means that generates second data which is suitable for a control interface of the electronic musical instrument to be controlled on a basis of the first data that has been acquired and the conversion data and transmits the second data to the electronic musical instrument.
2. The control device according to claim 1, wherein the conversion means generates the second data comprising one of a command for changing a parameter set in the electronic musical instrument to be controlled and a command for reading the parameter that has been set on a basis of the first data.
3. The control device according to claim 1, wherein the conversion means acquires a response from the electronic musical instrument in response to the second data, converts the response to third data for causing the dialogue engine to generate a response utterance, and transmits the third data to the dialogue engine.
4. The control device according to claim 1, wherein the storage means stores the conversion data for each of a plurality of electronic musical instruments, and wherein the conversion means selects the corresponding conversion data when it is detected that one of the plurality of electronic musical instruments has been connected to the control device.
5. The control device according to claim 1, wherein the storage means stores a history of a parameter set in the past in the electronic musical instrument on a basis of the second data, and wherein the conversion means generates the second data for restoring the parameter with reference to the history when an intention indicating that the parameter set in the electronic musical instrument to be controlled is to be restored is stated in the first data that has been acquired.
6. An electronic musical instrument system comprising: an electronic musical instrument that comprises a predetermined interface; a voice input means that transmits vocal sound uttered by a user to a dialogue engine which understands an intention of an utterance of the user on a basis of the utterance and generates first data in which the intention is stated; an acquisition means that acquires the first data generated in response to the utterance from the dialogue engine; a storage means that stores conversion data in which the first data is correlated with a control command for controlling the electronic musical instrument; and a conversion means that generates second data which is suitable for the predetermined interface on a basis of the first data that has been acquired and the conversion data and transmits the second data to the electronic musical instrument.
7. A control method which is performed by a control device that controls an electronic musical instrument, the control method comprising: an acquisition step of acquiring first data which is generated in response to an utterance of a user from a dialogue engine which understands an intention of the utterance on a basis of the utterance and generates the first data in which the intention is stated; and a conversion step of generating second data which is suitable for a control interface of the electronic musical instrument to be controlled on a basis of conversion data which is data in which the first data is correlated with a control command for controlling the electronic musical instrument and the first data that has been acquired and transmits the second data to the electronic musical instrument.
8. A non-transitory computer readable medium storing a program for causing a computer to perform the control method according to claim 7.
9. A control method which is performed by a control device that controls an electronic musical instrument, the control method comprising: a step of acquiring and storing a parameter which is set in the electronic musical instrument when the electronic musical instrument has been connected to the control device; a step of acquiring an instruction for changing at least a parameter of the electronic musical instrument from a user; a step of generating a control command for changing the parameter that has been instructed on a basis of the instruction and transmitting the control command to the electronic musical instrument; and a step of updating the parameter that has been stored with a changed parameter.
10. The control device according to claim 1, wherein understanding the intention of the utterance means understanding a subjective expression of the utterance on a basis of information set in advance in the storage means.
11. The electronic musical instrument system according to claim 6, wherein the second data comprises one of a command for changing a parameter set in the electronic musical instrument to be controlled and a command for reading the parameter that has been set on a basis of the first data.
12. The electronic musical instrument system according to claim 6, wherein the conversion means acquires a response from the electronic musical instrument in response to the second data, converts the response to third data for causing the dialogue engine to generate a response utterance, and transmits the third data to the dialogue engine.
13. The electronic musical instrument system according to claim 6, wherein the storage means stores the conversion data for each of a plurality of electronic musical instruments, and wherein the conversion means selects the corresponding conversion data when it is detected that one of the plurality of electronic musical instruments has been connected to the conversion means.
14. The electronic musical instrument system according to claim 6, wherein the storage means stores a history of a parameter set in the past in the electronic musical instrument on a basis of the second data, and wherein the conversion means generates the second data for restoring the parameter with reference to the history when an intention indicating that the parameter set in the electronic musical instrument to be controlled is to be restored is stated in the first data that has been acquired.
15. The electronic musical instrument system according to claim 6, wherein understanding the intention of the utterance means understanding a subjective expression of the utterance on a basis of information set in advance in the storage means.
16. The control method according to claim 7, wherein the second data comprises one of a command for changing a parameter set in the electronic musical instrument to be controlled and a command for reading the parameter that has been set on a basis of the first data.
17. The control method according to claim 7, wherein the conversion means acquires a response from the electronic musical instrument in response to the second data, converts the response to third data for causing the dialogue engine to generate a response utterance, and transmits the third data to the dialogue engine.
18. The control method according to claim 7, wherein the storage means stores the conversion data for each of a plurality of electronic musical instruments, and wherein the conversion means selects the corresponding conversion data when it is detected that one of the plurality of electronic musical instruments has been connected to the control device.
19. The control method according to claim 7, wherein the storage means stores a history of a parameter set in the past in the electronic musical instrument on a basis of the second data, and wherein the conversion means generates the second data for restoring the parameter with reference to the history when an intention indicating that the parameter set in the electronic musical instrument to be controlled is to be restored is stated in the first data that has been acquired.
20. The control method according to claim 7, wherein understanding the intention of the utterance means understanding a subjective expression of the utterance on a basis of information set in advance in the acquisition step.
Description
BRIEF DESCRIPTION OF DRAWINGS
DESCRIPTION OF EMBODIMENTS
First Embodiment
[0036] Hereinafter, an exemplary embodiment will be described with reference to the accompanying drawings. The following embodiment can be appropriately modified according to a configuration or various conditions of a system and the disclosure is not limited to the embodiment.
[0038] The electronic musical instrument system according to this embodiment includes a control device 10 that transmits and receives a control command to and from an electronic musical instrument 20, a server device 30 that takes charge of a voice interaction, and a voice input and output device 40.
[0039] The voice input and output device 40 is a device that receives an instruction for the electronic musical instrument 20, which has been uttered from a user, by vocal sound and transmits the received instruction to the server device 30. The voice input and output device 40 also has a function of reproducing voice data which is transmitted from the server device 30.
[0040] The server device 30 is a dialogue engine that understands content (an intention) of an utterance of a user on the basis of voice data transmitted from the voice input and output device 40, converts the utterance into a general-purpose data exchange format, and transmits the converted data to the control device 10. The server device 30 also has a function of generating voice data on the basis of data transmitted from the control device 10.
[0041] The control device 10 is a device that generates a control signal for controlling the electronic musical instrument 20 on the basis of data acquired from the server device 30 and transmits the control signal. As a result, parameters of musical sound which is output from the electronic musical instrument 20 can be changed or various effects can be added to the musical sound. The control device 10 also has a function of converting a response transmitted from the electronic musical instrument 20 into a format which can be analyzed by the server device 30. As a result, information acquired from the electronic musical instrument 20 can be provided to a user by vocal sound.
[0042] The control device 10 and the electronic musical instrument 20 are connected via a predetermined interface which is specialized for connection of an electronic musical instrument. The control device 10 and the server device 30 are connected via a network, and the server device 30 and the voice input and output device 40 are connected via a network.
[0043] The electronic musical instrument 20 is a synthesizer including a performance operator which is a keyboard instrument and a sound source. In this embodiment, the electronic musical instrument 20 generates musical sound based on a performance operation which is performed on the keyboard instrument and outputs the generated musical sound from a speaker which is not illustrated. The electronic musical instrument 20 changes parameters of musical sound on the basis of a control signal transmitted from the control device 10. In this embodiment, a synthesizer is exemplified as the electronic musical instrument 20, and another device may be employed. An object to be changed is not limited to parameters of musical sound.
[0044] For example, an object to be changed may be a reproduction tempo of a musical piece, a tempo of a metronome, selection of a musical piece, reproduction start or reproduction stop of a musical piece, start (note-on) and stop (note-off) of sound emission, control of a pitch bend, selection of a tone, or recording start or recording stop of performance. This change may be performed during performance (during emission of sound).
[0045] The electronic musical instrument 20 can return information on the basis of a control signal transmitted from the control device 10. For example, currently set musical sound parameters, the tempo, the title of a musical piece, instrument information, or the like may be returned.
[0046] The configuration of the control device 10 will be described below.
[0047] The control device 10 is a small computer such as a smartphone, a mobile phone, a tablet computer, a personal digital assistant, a notebook computer, or a wearable computer (such as a smartwatch). The control device 10 includes a central processing unit (CPU) 101, an auxiliary storage device 102, a main storage device 103, a communication unit 104, and a short-range communication unit 105.
[0048] The CPU 101 is an arithmetic operation device that takes charge of control which is performed by the control device 10.
[0049] The auxiliary storage device 102 is a rewritable nonvolatile memory. A program which is executed by the CPU 101 or data which is used by the control program is stored in the auxiliary storage device 102. The auxiliary storage device 102 may store an application into which the program which is executed by the CPU 101 is packaged. The auxiliary storage device may store an operating system for executing such an application.
[0050] The main storage device 103 is a memory to which the program which is executed by the CPU 101 or the data which is used by the control program is loaded. The following processes are performed by loading the program stored in the auxiliary storage device 102 to the main storage device 103 and causing the CPU 101 to execute the program.
[0051] The communication unit 104 is a communication interface for transmitting and receiving data to and from the server device 30. The control device 10 and the server device 30 are communicatively connected to each other via a wide area network such as the Internet or a LAN. The network is not limited to a single network and any type of network may be used as long as data can be transmitted and received therethrough.
[0052] The short-range communication unit 105 is a radio communication interface that transmits and receives a signal to and from an electronic musical instrument 20. For example, Bluetooth (registered trademark) Low Energy (BLE) can be employed as a radio communication mode, but another mode may be employed. When BLE is used for connection with the electronic musical instrument 20, an MIDI over Bluetooth Low Energy (BLE-MIDI) standard may be used. In this embodiment, wireless connection is used for connection between the control device 10 and the electronic musical instrument 20, but wired connection may be used. In this case, the short-range communication unit 105 is replaced with a wired connection interface.
[0053] The configuration illustrated in
[0054] A hardware configuration of an electronic musical instrument 20 will be described below with reference to
[0055] The electronic musical instrument 20 is a device that synthesizes musical sound on the basis of an operation which is performed on a performance operator (a keyboard instrument), and amplifies and outputs the synthesized musical sound. The electronic musical instrument 20 includes a short-range communication unit 201, a CPU 202, a ROM 203, a RAM 204, a performance operator 205, a DSP 206, a D/A converter 207, an amplifier 208, and a speaker 209.
[0056] The short-range communication unit 201 is a radio communication interface that transmits and receives a signal to and from the control device 10. In this embodiment, the short-range communication unit 201 is wirelessly connected to the short-range communication unit 105 of the control device 10 and transmits and receives a message based on an MIDI standard. Details of data which is transmitted and received will be described later.
[0057] The CPU 202 is an arithmetic operation device that takes charge of control which is performed by the electronic musical instrument 20. Specifically, the CPU performs the processes which are described in this specification, processes of synthesizing musical sound using the DSP 206, which will be described later, on the basis of scanning of or operations performed on the performance operator 205, and the like.
[0058] The ROM 203 is a rewritable nonvolatile memory. A control program which is executed by the CPU 202 or data which is used by the control program is stored in the ROM 203.
[0059] The RAM 204 is a memory to which the control program which is executed by the CPU 202 or data which is used by the control program is loaded. The processes which will be described later are performed by loading the program stored in the ROM 203 to the RAM 204 and causing the CPU 202 to execute the program.
[0060] The configuration illustrated in
[0061] The performance operator 205 is an interface that receives a performance operation from a performer. In this embodiment, the performance operator 205 includes a keyboard instrument that is used for performance and an input interface (for example, a knob or a push button) that designates musical sound parameters or the like.
[0062] The DSP 206 is a microprocessor that is specialized for processing a digital signal. In this embodiment, the DSP 206 performs processes specialized for processing a voice signal under the control of the CPU 202. Specifically, the DSP performs synthesis of musical sound, addition of effects to musical sound, and the like on the basis of a performance operation and outputs a voice signal. The voice signal output from the DSP 206 is converted to an analog signal by the D/A converter 207, is amplified by the amplifier 208, and then is output from the speaker 209.
[0063] The server device 30 will be described below.
[0064] The server device 30 is, for example, a computer such as a personal computer, a workstation, a general-purpose server device, or a dedicated server device. The server device 30 includes a CPU, a main storage device, an auxiliary storage device, and a communication unit similarly to the control device 10. The hardware configuration is the same as that of the control device 10 except that a short-range communication unit is not provided and thus detailed description thereof will be omitted. In the following description, an arithmetic operation device of the server device 30 is referred to as a CPU 301.
[0065] A hardware configuration of the voice input and output device 40 will be described below with reference to
[0066] The voice input and output device 40 is a so-called smart speaker including a means that inputs and outputs vocal sound and a means that communicates with the server device 30. For example, an Amazon Echo (registered trademark) or a Google Home (registered trademark) can be used as the voice input and output device 40.
[0067] When a user utters vocal sound to the voice input and output device 40, the voice input and output device 40 communicates with a predetermined server device (the server device 30 in this embodiment) and the server device performs a process corresponding to the utterance. In the server device, a service for cooperating with the voice input and output device 40 is performed. The service (also referred to as a skill) can be developed by a third party or a user. In this embodiment, it is assumed that a service for controlling an electronic musical instrument is performed by the server device 30.
[0068] The voice input and output device 40 includes a microcomputer 401, a communication unit 402, a microphone 403, and a speaker 404.
[0069] The microcomputer 401 is a one-chip microcomputer into which an arithmetic operation device, a main storage device, and an auxiliary storage device are packaged. The microcomputer 401 performs front-end processing of vocal sound. Specifically, the microcomputer 401 performs a process of recognizing the position (the position relative to the device) of a user having uttered vocal sound, a process of separating voices uttered by a plurality of users, a process of setting the directivity of the microphone 403, which will be described later, on the basis of the position of a user, a noise reduction process, an echo cancellation process, a process of generating voice data which is transmitted to the server device 30, a process of reproducing voice data received from the server device 30, and the like.
[0070] The communication unit 402 is a communication interface that transmits and receives data to and from the server device 30. The voice input and output device 40 and the server device 30 are communicatively connected to each other via a wide area network such as the Internet or a LAN. The network is not limited to a single network and any type of network may be used as long as it can realize transmission and reception of data.
[0071] The microphone 403 and the speaker 404 are means that acquire vocal sound uttered by a user and provide vocal sound to the user.
[0072] Functional blocks of the control device 10, the electronic musical instrument 20, the server device 30, and the voice input and output device 40 will be described below with reference to
[0073] The functional blocks of the voice input and output device 40 will be first described below.
[0074] A voice input means 4011 of the voice input and output device 40 converts an electrical signal input from the microphone 403 to voice data and transmits the voice data to the server device 30 via the network.
[0075] A voice output means 4012 acquires voice data from the server device 30 and outputs the acquired voice data via the speaker 404.
[0076] The functional blocks of the server device 30 will be described below.
[0077] In the server device 30, a service for cooperating with the voice input and output device 40 is performed as described above. Specifically, the server device 30 recognizes vocal sound, understands, for example, an intention indicating “what” and “how,” and performs processing based on the understanding.
[0078] In this embodiment, the server device 30 provides data for controlling an electronic musical instrument to the control device 10 on the basis of the understood intention. The server device 30 generates voice data indicating the result of processing on the basis of data transmitted from the control device 10 and returns the generated voice data to the voice input and output device 40.
[0079] A voice recognition means 3011 of the server device 30 performs a process of recognizing voice data transmitted from the voice input and output device 40 and understands an intention of an utterance of a user (hereinafter referred to as a user utterance; the content of the user utterance is referred to as "user utterance text"). For example, it is assumed that a user has uttered "set a tempo to 120." In this case, an intention indicating that a value <120> is set to the parameter "tempo" is understood. Recognition of vocal sound and understanding of an intention can be performed using existing techniques. For example, the content of a user utterance may be converted to information indicating "what" and "how" using a model which has been subjected to machine learning in advance.
[0080] The voice recognition means 3011 may understand an intention of a subjective expression on the basis of information set in advance and convert the intention to a numerical value. For example, when "slightly set the tempo down" has been uttered and information indicating "slight (a little) in tempo is 3 BPM" is stored in advance, an intention indicating that "the parameter of tempo is set down by a value <3>" can be understood. When "slightly set reverb up" has been uttered and information indicating "slight (a little) in reverb is 3 dB" is stored in advance, an intention indicating that "the parameter of reverb is set up by a value <3>" can be understood. When "slightly set the high of the equalizer down" has been uttered and information indicating "the high represents 12 kHz" and "slight (a little) in the equalizer is 3 dB" is stored in advance, an intention indicating that "the parameter of 12 kHz of the equalizer is set down by a value <3>" can be understood.
[0081] In addition, information indicating what genre of music an expression such as a “light piece of music” or a “calm piece of music” represents may be stored in advance and be used.
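The preset mapping described above can be sketched as a simple lookup table. The table contents and the intent format below are illustrative assumptions, not the actual data held by the server device 30:

```python
# Hypothetical preset table: (parameter, subjective word) -> adjustment amount.
# The concrete values follow the examples in the text (3 BPM, 3 dB).
SUBJECTIVE_TABLE = {
    ("tempo", "slightly"): 3,
    ("reverb", "slightly"): 3,
    ("equalizer", "slightly"): 3,
}

def resolve(parameter: str, word: str, direction: str) -> dict:
    """Convert a subjective expression into a numeric intent."""
    amount = SUBJECTIVE_TABLE[(parameter, word)]
    sign = 1 if direction == "up" else -1
    return {"parameter": parameter, "delta": sign * amount}

print(resolve("tempo", "slightly", "down"))   # {'parameter': 'tempo', 'delta': -3}
print(resolve("reverb", "slightly", "up"))    # {'parameter': 'reverb', 'delta': 3}
```

A real dialogue engine would populate such a table from user- or vendor-supplied configuration rather than hard-coded constants.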
[0082] A conversion means 3012 converts an intention output from the voice recognition means 3011 to data in a format which can be understood by the control device 10 and converts a response transmitted from the control device 10 to voice data.
[0083] Data described in a general-purpose data exchange format is transmitted and received between the server device 30 and the control device 10. In this embodiment, data is exchanged over a communication protocol such as HTTPS or MQTT, using data in the form of JavaScript Object Notation (JSON) (hereinafter referred to as JSON data). When MQTT is used as the protocol, data in an arbitrary format (for example, JSON, XML, enciphered binary, or Base64) can be stored in the payload.
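As a rough illustration of such an exchange, the following sketch serializes a hypothetical piece of first data as JSON. The key names ("command", "parameter", "value") are assumptions chosen to match the "put"/"get" commands and parameter IDs mentioned later, not a schema taken from the actual system:

```python
import json

# Hypothetical shape of the "first data" sent from the server device 30
# to the control device 10 for the utterance "set the tempo to 100".
first_data = {
    "command": "put",      # "put" = change a parameter, "get" = read it
    "parameter": "tempo",  # parameter ID used to look up conversion data
    "value": 100,
}

payload = json.dumps(first_data)
# Over HTTPS this string would be the request body; over MQTT it would
# be stored in the message payload.
print(payload)
```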
[0084] The functional blocks of the control device 10 will be described below.
[0085] The electronic musical instrument 20 to be controlled is not designed on the premise of control using vocal sound, and thus does not include a voice interface. The control device 10 therefore converts, using a conversion means 1011, between data transmitted from the server device 30 (JSON data generated on the basis of a user utterance) and data based on an interface of the electronic musical instrument 20. In this embodiment, the interface of the electronic musical instrument 20 is an MIDI interface and the data based on the interface is an MIDI message.
[0086] The conversion means 1011 includes data for performing the aforementioned conversion (hereinafter referred to as conversion data) and performs the conversion with reference to the conversion data. Details of the conversion data will be described later.
[0087] The functional blocks of the electronic musical instrument 20 will be described below.
[0088] A control signal receiving means 2022 of the electronic musical instrument 20 is a means that receives an MIDI message converted by the control device 10 and processes the received MIDI message. A control signal transmitting means 2021 is a means that generates a response corresponding to the received MIDI message and transmits the generated response.
[0089] Processes until a corresponding MIDI message is transmitted to the electronic musical instrument 20 after a user has uttered vocal sound will be described below.
[0090] First, when a user utters vocal sound to the voice input and output device 40, the voice input means 4011 detects the voice and acquires the content of the user utterance (Step S1). For example, the voice input means 4011 detects a word for returning from a standby state (a wake word) and acquires the content of a subsequent utterance. The acquired user utterance text is converted to voice data and the voice data is transmitted to the server device 30 via the network.
[0091] Upon acquiring the voice data, the server device 30 (the voice recognition means 3011) performs voice recognition and converts the content of the user utterance to natural language text. The intention of the text is understood on the basis of a service set in advance (Step S2).
[0092] For example, when a user utterance is "set the tempo to 100," understanding of an intention is performed on the result of recognition of the user utterance and the intention indicating that "the "tempo" is "set" to "100"" is understood. This service is realized using known techniques and is set up in advance by a user.
[0093] Then, the conversion means 3012 generates JSON data on the basis of the acquired intention (Step S3).
[0094] Then, the control device 10 (the conversion means 1011) converts the received JSON data to an MIDI message (Step S4).
[0095] This conversion is performed with reference to conversion data stored in advance.
[0096] A conversion method will be described below.
[0097] The conversion data is data in which a parameter ID described in the JSON data is correlated with an address, a data length, and bit arrangement information in the MIDI interface.
[0098] In this embodiment, when “command” described in the JSON data is “put,” a record in which the parameter ID (“tempo” herein) matches is identified and an address, a data length, and bit arrangement information are acquired. Then, an MIDI message for writing a value to be set (100 herein) to the acquired address is generated.
[0099] The data length and the bit arrangement information are used to generate the data which is to be written to the electronic musical instrument 20. For example, when the value is 100 (0x64), the data length is 4 bytes, and the bit arrangement information indicates that "the four lower bits of each byte are valid," the data which is to be written to the designated address is obtained by distributing the bits of 0x64 across the four bytes, four bits per byte (00000000 00000000 00000110 00000100, that is, 0x00 0x00 0x06 0x04). It is possible to change the tempo by writing the generated data to the address corresponding to the tempo in the electronic musical instrument 20.
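Assuming that "four lower bits are valid" means the value is distributed one nibble per byte, as is common in system exclusive data for electronic musical instruments, the packing can be sketched as follows:

```python
def pack_nibbles(value: int, length: int) -> list[int]:
    """Distribute `value` across `length` bytes, with only the four
    lower bits of each byte carrying data (one nibble per byte)."""
    out = []
    for i in reversed(range(length)):
        out.append((value >> (4 * i)) & 0x0F)
    return out

# 100 (0x64) packed into 4 bytes -> [0x00, 0x00, 0x06, 0x04]
print(pack_nibbles(100, 4))
```

Restricting each transmitted byte to a nibble keeps every byte below 0x80, which MIDI requires for data bytes inside an exclusive message.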
[0100] An MIDI message may be, for example, a message for writing data (also referred to as DT1), which is used in the MIDI standard.
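For illustration, a DT1-style exclusive message of the kind used by Roland devices can be assembled as below. The device ID, model ID, and address are placeholders, and the actual electronic musical instrument 20 may use a different layout:

```python
def roland_checksum(payload: bytes) -> int:
    # Roland-style checksum: the sum of the address bytes, data bytes,
    # and the checksum itself must be a multiple of 128.
    return (128 - (sum(payload) % 128)) % 128

def build_dt1(device_id: int, model_id: int, address: bytes, data: bytes) -> bytes:
    # DT1 ("data set 1") exclusive message layout:
    # F0 41 <dev> <model> 12 <address> <data> <checksum> F7
    body = address + data
    return (bytes([0xF0, 0x41, device_id, model_id, 0x12])
            + body + bytes([roland_checksum(body), 0xF7]))

# Write the nibble-packed value 100 to a placeholder "tempo" address.
msg = build_dt1(0x10, 0x42,
                address=bytes([0x01, 0x00, 0x00]),
                data=bytes([0x00, 0x00, 0x06, 0x04]))
print(msg.hex(" "))
```

The checksum lets the receiving instrument reject a corrupted write before it changes any parameter.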
[0101] When the conversion is completed, the conversion means 1011 transmits the generated MIDI message to the electronic musical instrument 20. Accordingly, the parameter (such as the tempo) is changed on the basis of the user utterance.
[0102] Although not illustrated in
[0103] As described above, with the electronic musical instrument system according to the first embodiment, it is possible to control an electronic musical instrument using vocal sound. Accordingly, it is possible to greatly improve convenience when a musical instrument that is played with both hands, such as a guitar or a drum, is played. Without changing an interface or firmware of an existing electronic musical instrument, the electronic musical instrument can be made to cope with a voice command. An existing voice input and output device 40 and an existing server device 30 that provide an existing voice service can be used to control an electronic musical instrument.
[0104] In the first embodiment, an example in which the tempo is set has been described above, but other parameters may be set as long as they are parameters which are used by the electronic musical instrument 20. For example, a current tone, a current sound volume, a type of an effect, or ON/OFF of a metronome function may be set.
Second Embodiment
[0105] In the first embodiment, an example in which an arbitrary parameter is set for the electronic musical instrument 20 has been described above. In a second embodiment, parameters which are currently set for the electronic musical instrument 20 are inquired.
[0106] The hardware configuration and the functional configuration of the electronic musical instrument system according to the second embodiment are the same as in the first embodiment; thus, description thereof will be omitted and only differences from the first embodiment will be described below. In the following description, steps which are not mentioned are the same as in the first embodiment.
[0107] In the second embodiment, a user gives a user utterance for inquiring about parameters, such as “what tempo is set?” or “what is the current tempo?” By performing understanding of an intention on the user utterance, an intention indicating that the “tempo” is to be “acquired” is obtained in Step S2.
[0108]
[0109] In Step S4, an MIDI message indicating that “a set tempo is inquired” is generated.
[0110] In this embodiment, when the command described in the JSON data is “get,” a record in which the parameter ID (“tempo” herein) matches is identified and an address, a data length, and bit arrangement information are acquired. Then, an MIDI message for reading a value from the acquired address is generated.
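Step S4 for a “get” command could be sketched as follows. The conversion-data record, device ID, and model ID are hypothetical, and the RQ1 layout assumed here is the Roland-style exclusive format with command byte 0x11.

```python
# Hypothetical conversion-data record: parameter ID -> (address, data length)
CONVERSION_DATA = {
    "tempo": ((0x01, 0x00, 0x00, 0x24), 4),
}

def build_rq1(device_id, model_id, param_id):
    """Assemble an RQ1-style read request:
    F0 41 <dev> <model> 11 <address> <size> <checksum> F7."""
    address, length = CONVERSION_DATA[param_id]
    body = list(address) + [0x00, 0x00, 0x00, length]  # requested size field
    checksum = (128 - sum(body) % 128) % 128
    return bytes([0xF0, 0x41, device_id, *model_id, 0x11,
                  *body, checksum, 0xF7])
```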
[0111] The method of generating the MIDI message is the same as in the first embodiment except that a message for requesting data is used instead of a message for writing data. The MIDI message may be, for example, a message for requesting data (also referred to as RQ1), which is used in the MIDI standard.
[0112] When data is requested, the second embodiment is the same as the first embodiment in that an address or a data length is designated and a message is generated.
[0113]
[0114] In Step S5, conversion from the MIDI message to the JSON message is performed. In this step, a value of the parameter stored in the designated address is acquired using the conversion data which is described above in the first embodiment.
[0115] The JSON data generated in this step is data in which the read value of the parameter is substituted into the dotted line part in
[0116] Then, the server device 30 (the conversion means 3012) generates voice data which is provided to a user on the basis of the received JSON data (Step S6). The voice data can be generated using existing techniques. For example, the conversion means 3012 generates voice data indicating that “the tempo is 120” on the basis of the received JSON data (in which the object “tempo”:120 is correlated with the key “option”).
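The conversion of the returned payload back into JSON (Step S5) can be sketched as follows, assuming the same nibblized byte layout as in the first embodiment and a hypothetical JSON shape with the value placed under the key "option".

```python
import json

def decode_nibblized(payload):
    """Reassemble an integer from bytes whose lower four bits are valid."""
    value = 0
    for b in payload:
        value = (value << 4) | (b & 0x0F)
    return value

def reply_to_json(param_id, payload):
    return json.dumps({"option": {param_id: decode_nibblized(payload)}})

# A payload of 00 00 07 08 decodes to tempo 120
print(reply_to_json("tempo", bytes([0x00, 0x00, 0x07, 0x08])))
```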
[0117] The generated voice data is transmitted to the voice input and output device 40 (the voice output means 4012) and is output via the speaker (Step S7).
[0118] In this embodiment, an example in which the value of a parameter is read out by voice without modification has been described, but the control device 10 may replace a numerical value with a character string before transmitting it to the server device 30. For example, a numerical value indicating a tone may be replaced with a tone name to generate the JSON data. Such a mapping may be a part of the aforementioned conversion data.
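A minimal sketch of such a replacement follows; the tone numbers and names below are hypothetical, and a real table would be part of the conversion data for each instrument model.

```python
# Hypothetical mapping from numeric tone values to tone names
TONE_NAMES = {0: "Grand Piano", 4: "Electric Piano", 16: "Drawbar Organ"}

def tone_option(value):
    """Replace a numeric tone value with its name when one is known,
    falling back to the raw number otherwise."""
    return {"tone": TONE_NAMES.get(value, value)}
```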
Third Embodiment
[0119] In the first and second embodiments, it is assumed that a single electronic musical instrument 20 is connected to the control device 10. However, since the address of a parameter, a tone name, and the like are specific to each electronic musical instrument, it is difficult to connect a plurality of electronic musical instruments 20 to the control device 10 when a single piece of conversion data is used. In a third embodiment, connection of a plurality of electronic musical instruments 20 is enabled by automatically selecting conversion data.
[0120] The control device 10 according to the third embodiment stores a plurality of pieces of conversion data in the auxiliary storage device 102, and the control device 10 detects connection between the control device 10 and the electronic musical instrument 20 and selects conversion data corresponding to the connected electronic musical instrument 20 when the electronic musical instrument 20 is connected to the control device 10.
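One way to select conversion data on connection is to key it on the instrument's identity. The sketch below assumes the MIDI Universal Identity Reply (F0 7E <dev> 06 02 <manufacturer> <family> ... F7); the table entries and file names are hypothetical.

```python
# Hypothetical: (manufacturer ID, family code LSB) -> conversion-data file
CONVERSION_TABLES = {
    (0x41, 0x0B): "synth_a.json",
    (0x41, 0x42): "drum_b.json",
}

def select_conversion_data(identity_reply):
    """Pick the conversion data matching a Universal Identity Reply."""
    manufacturer = identity_reply[5]
    family_lsb = identity_reply[6]
    return CONVERSION_TABLES.get((manufacturer, family_lsb))
```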
[0121]
[0122] In the third embodiment, a parameter table specific to an electronic musical instrument is correlated with conversion data (see
[0123] Then, in Step S10, the control device 10 generates an MIDI message for setting the extracted parameters in the electronic musical instrument 20 and transmits the generated MIDI message.
[0124] In this way, by describing an arbitrary parameter in the parameter table, it is possible to set a predetermined parameter in the electronic musical instrument 20 at the timing at which the electronic musical instrument 20 is connected, without uttering vocal sound. The parameter table may be prepared in advance or may be dynamically updated.
[0125] In the aforementioned example, default parameters which are set in the electronic musical instrument 20 are described in the parameter table. On the other hand, details of the parameter table may be synchronized with details of the parameters set in the electronic musical instrument 20.
[0126] For example, at a timing at which the electronic musical instrument 20 is connected to the control device 10, the control device 10 may acquire all the parameters set in the electronic musical instrument 20 and record the acquired parameters in the parameter table. The parameter table may also be updated with the parameters used when the MIDI message for setting a parameter in the electronic musical instrument 20 is generated in Step S4. With this configuration, the control device 10 can always ascertain the newest parameters which are set in the electronic musical instrument 20.
[0127] At the timing at which the electronic musical instrument 20 is connected to the control device 10, the control device 10 may transmit all the stored parameters to the electronic musical instrument 20 and set the parameters therein. With this method, it is also possible to synchronize the parameters set in the electronic musical instrument 20 with the parameters stored in the control device 10.
[0128] It is preferable to use a parameter table which differs depending on the types of the electronic musical instruments to be connected. Accordingly, even when a different type of electronic musical instrument is connected, parameters such as a sound volume can be set to appropriate values on the basis of characteristics of the electronic musical instrument.
Fourth Embodiment
[0129] A fourth embodiment is an embodiment in which the control device 10 can store details of parameters of an electronic musical instrument which have been set immediately before and cancel settings (undo).
[0130] In the fourth embodiment, similarly to the third embodiment, the control device 10 stores a plurality of pieces of conversion data for each electronic musical instrument. Each of the plurality of pieces of conversion data is correlated with an undo table which is specific to an electronic musical instrument 20 (see
[0131] The undo table is updated at a timing immediately after an electronic musical instrument 20 is connected to the control device 10 and at a timing immediately before an MIDI message is transmitted to the electronic musical instrument 20. For example, when the tempo is changed from 100 to 120, information indicating tempo=100 is recorded as a previous value of the tempo. The previous value of the tempo may be acquired from the electronic musical instrument 20.
[0132] The undo table is used when a user gives an utterance indicating that a parameter change made by a previous utterance should be undone. In this embodiment, two types of undo can be performed: “undo for restoring the parameters to the values before the change” and “undo for restoring the parameters to the initial values (the values at the time of connection).” For example, when a user utters “restore” as illustrated in
[0133] In this embodiment, when such a command is received, the control device 10 acquires parameters to be set with reference to the undo table, generates an MIDI message for setting the parameters in the electronic musical instrument 20, and transmits the MIDI message to the electronic musical instrument 20 in Step S4. Accordingly, the parameters changed by the user are restored to original values.
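The two kinds of undo described above can be sketched with a small table that keeps the initial values (at connection time) and the values before the most recent change; the class and method names are assumptions for illustration.

```python
class UndoTable:
    def __init__(self, initial):
        self.initial = dict(initial)    # values at the time of connection
        self.previous = dict(initial)   # values before the last change
        self.current = dict(initial)

    def record_change(self, param, new_value):
        """Called immediately before a set message is transmitted."""
        self.previous[param] = self.current.get(param)
        self.current[param] = new_value

    def undo(self):
        """Parameters to restore for 'undo the last change'."""
        return dict(self.previous)

    def reset(self):
        """Parameters to restore for 'back to the initial values'."""
        return dict(self.initial)
```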
MODIFIED EXAMPLES
[0134] The aforementioned embodiments are only examples and the disclosure can be appropriately modified without departing from the gist thereof. For example, the aforementioned embodiments may be combined.
[0135] In the aforementioned embodiments, a synthesizer is exemplified as the electronic musical instrument 20, but an electronic piano, electronic drums, an electronic wind instrument or the like may be connected.
[0136] A target to which a control signal is transmitted may not be an electronic musical instrument in which a sound source is incorporated. For example, a control signal may be transmitted to a device that adds an effect to an input voice (an effector) or a device that amplifies vocal sound (a musical instrument amplifier such as a guitar amplifier).
[0137] In the aforementioned embodiments, an electronic musical instrument that transmits and receives a message in the MIDI standard has been described, but a message in another standard may be used.
[0138] In the aforementioned embodiments, the JSON format is used for exchange of data between the control device 10 and the server device 30, but another format may be used.
[0139] When the server device 30 has a function of storing and caching information acquired in the past, a response may be generated using the stored information. For example, when a command indicating “set the tempo to 120” was transmitted to an electronic musical instrument in the past, the information may be cached by the conversion means 3012. When a user then utters “what is the current tempo?”, a response may be generated using the cached information.
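Such caching could be sketched as follows (the names are hypothetical): the cache is filled whenever a set command is sent and consulted before querying the instrument.

```python
class ParameterCache:
    """Remember the last value sent for each parameter so that simple
    'what is ...?' questions can be answered without a round trip."""
    def __init__(self):
        self._values = {}

    def on_set(self, param, value):
        self._values[param] = value

    def lookup(self, param):
        return self._values.get(param)
```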
[0140] In the aforementioned embodiments, a single application is executed by the control device 10, but when there is an existing control program for controlling the electronic musical instrument 20, transmission and reception of an MIDI message may be performed via an API of the control program 1012 as illustrated in
[0141] In the aforementioned embodiments, a single electronic musical instrument 20 is connected to the control device 10, but a plurality of electronic musical instruments 20 may be connected to the control device 10. In this case, an electronic musical instrument 20 that transmits and receives an MIDI message to and from the control device 10 may be designated. For example, when a user gives an utterance indicating that a musical instrument is changed (for example, “switch to Drum A”), the server device 30 may generate JSON data in which data indicating that the electronic musical instrument 20 is switched is described and transmit the JSON data to the control device 10.
[0142] In the aforementioned embodiments, the control device 10, the electronic musical instrument 20, and the voice input and output device 40 are independent from each other, but these devices may be unified. For example, as illustrated in
REFERENCE SIGNS LIST
[0143] 10: Control device [0144] 20: Electronic musical instrument [0145] 30: Server device [0146] 40: Voice input and output device