EMULATING A VIRTUAL INSTRUMENT FROM A CONTINUOUS MOVEMENT VIA A MIDI PROTOCOL

20220270576 · 2022-08-25


Abstract

The present invention relates to methods and systems for creating a sound effect out of a continuous movement, in particular by detecting a continuous movement through a force sensor in a device. A method is shown for creating a sound effect out of a continuous movement. The method comprises a step of providing a first device, whereby the device is adapted to detect continuous movement and a no-movement state. The method further comprises the step of defining at least one first parameter of movement, in particular a first axis of movement of said continuous movement. A further step comprises assigning at least one first midi-channel to the first axis of movement. A baseline value is defined for the no-movement state, and along that first axis of movement a range of values relative to said baseline value is defined. This range of values relative to said baseline value is reflective of a continuous movement along that first axis of movement. A sound effect is then output relative to the detected continuous movement. One aspect or additional embodiment of the present invention comprises the step of defining at least one first parameter of movement, whereby said first parameter of movement is an angular range in one axis X, Y, Z of an orientation in space of the first device (99.1) adapted to detect continuous movement (A.1) and a no-movement state.

Claims

1. A method for creating a sound effect out of a continuous movement, comprising the steps of: a. providing a first device (99.1) adapted to detect continuous movement (A.1) and a no-movement state; b. defining at least one first parameter of movement, in particular whereby the first parameter of movement is a first axis of movement (X.1) of the continuous movement; c. assigning at least one first midi-channel to the first parameter of movement (X.1); d. defining a baseline value for the no-movement state, and defining along the first parameter of movement (X.1) a range of values relative to the baseline value and reflective of a continuous movement along the first parameter of movement; e. outputting a sound effect relative to the detected continuous movement.

2. The method according to claim 1, whereby the first parameter of movement is an angular range in one axis X, Y, Z of an orientation in space of the first device (99.1) adapted to detect continuous movement (A.1) and a no-movement state.

3. The method according to claim 1, where a single musical note is attributed to a wedge-shaped sector defining a particular angle relative to a predetermined origin within a movement range 130 of an operator and the device is adapted to detect movement within a particular wedge-shaped sector and relate it to the single musical note.

4. The method according to claim 1, wherein the device (99.1) is further adapted to detect an end and/or a start of the no-movement state.

5. The method according to claim 1, whereby at least one second device (99.2) is provided adapted to detect a second continuous movement (A.2) and a second no-movement state.

6. The method according to claim 1, whereby a sound volume is attributed to a speed of a continuous movement.

7. The method according to claim 1, further comprising assigning a midi-note-on to an end of the no-movement state.

8. The method according to claim 1, whereby the outputting is performed by an outputting device.

9. The method according to claim 1, further comprising one of receiving at least one first midi-channel with an outputting device and receiving a plurality of midi-channels from a plurality of devices (99.1, 99.2) adapted to detect continuous movement (A.1, A.2; B.1, B.2; C.1; C.2) and a no-movement state, such that a plurality of midi-channels is generated from the plurality of continuous movements detected.

10. The method according to claim 9, whereby a priority is attributed to the midi-channels received by the outputting device, whereby priority is attributed to the midi-channel with the greatest change in continuous movement.

11. The method according to claim 9, whereby the receiving is a wireless receiving, in particular a wireless receiving by means of short-wavelength radio waves, even more particularly a Bluetooth protocol.

12. The method according to claim 1, whereby at least one second axis (Y.1) and/or at least one third axis (Z.1) is defined for the continuous movement (A.1).

13. The method according to claim 1, whereby the first device (99.1) adapted to detect continuous movement (A.1) and a no-movement state is assigned to an anatomical plane of the user (F, G, H) and the sound effect relative to the detected continuous movement in that anatomical plane is a predetermined sound effect for that plane (F, G, H).

14. The method according to claim 13, whereby a plurality of devices is provided and to each device an anatomical plane of the user (F, G, H) is assigned and the sound effect relative to the detected continuous movement in that anatomical plane is a predetermined sound effect for that plane (F, G, H).

15. The method according to claim 1, whereby the midi-channel is a midi-CC channel and the values are values ranging from 0 to 127.

16. The method according to claim 15, where the baseline value is set at 64 and for a movement in a first direction (f1) along the first axis of movement (X.1) the range of values relative to the baseline value ranges from 0 to 63 and for a movement in a second direction (f2) along the first axis of movement (X.1) the range of values relative to the baseline value ranges from 65 to 127.

17. The method according to claim 1, whereby step a. of providing a first device (99.1) adapted to detect continuous movement (A.1) and a no-movement state comprises providing a device with a processing unit adapted to recognize a pre-learned movement sequence out of force signal(s) detected by at least one sensor for generating a force signal from the at least one detected force, in particular by applying a machine learning algorithm, and to convert the movement sequence into a digital auditory signal, in particular a MIDI-signal.

18. The method according to claim 1, whereby the device is adapted to be affixed to an extremity of a user.

19. The method according to claim 1, whereby at least one second parameter of movement is defined as an orientation in space of the first device (99.1) adapted to detect continuous movement (A.1) and a no-movement state.

20. A system for managing transmissions of a plurality of devices adapted to detect a movement and generate a movement-specific midi signal, in particular a midi-on note and/or a midi-off note and/or a midi-cc channel with values ranging from 0 to 127, whereby a. the transmissions are wirelessly transmitted from the plurality of devices to an output unit; b. each signal comprises information convertible to a sound effect by the output unit; c. each signal is output with a latency between a force sensing and output by the output unit of maximally 30 ms, in particular of between 10 and 20 ms; d. each signal is packed in a transmission pack consisting of four information blocks selected from the group consisting of a midi-on note, a midi-off note, and a midi-cc channel; and wherein e. the transmission packs are prioritized in that the transmissions with signals containing the highest variation are preferred, and/or f. the transmission packs with midi-on information blocks are prioritized.
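
The prioritization rule of claim 20 (e) and (f) can be sketched as a simple sort: packs carrying midi-on blocks are sent first, and the remaining packs are ordered by descending signal variation. This is a non-authoritative sketch; the `Pack` structure and its field names are illustrative assumptions, not part of the disclosure.

```python
# Sketch: order transmission packs so that packs containing a midi-on
# information block are preferred, and ties are broken by the highest
# signal variation (claim 20, items e and f). The Pack fields are
# illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Pack:
    blocks: list            # up to four information blocks per pack
    variation: int = 0      # magnitude of change carried by the signals


def prioritize(packs):
    """Return packs ordered: midi-on packs first, then by descending variation."""
    def key(pack):
        has_note_on = any(block == "note_on" for block in pack.blocks)
        return (0 if has_note_on else 1, -pack.variation)
    return sorted(packs, key=key)
```

In a transmission queue, `prioritize` would be applied before each send window so that note onsets, which are the most latency-sensitive events, are never delayed behind continuous-controller updates.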

Description

BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS

[0060] FIG. 1 depicts schematically an embodiment of the present invention;

[0061] FIG. 2 shows a schematic representation of a device according to the present invention;

[0062] FIG. 3 shows a schematic representation of a network setup for a working of the method of the present invention;

[0063] FIG. 4a shows a sample assignment for string instrument simulation, and

[0064] FIG. 4b shows a sample assignment for piano simulation.

DETAILED DESCRIPTION OF THE INVENTION

[0065] FIG. 1 shows schematically how the method of the present invention can be implemented. This example works with two devices, namely a first device 99.1 and a second device 99.2. These devices 99.1, 99.2 are operated by a user 100.

[0066] For this specific example the devices 99.1, 99.2 can be assumed to be either held in one hand each, or affixed to the left or the right arm, for instance by means of a strap. In the present example a left-handed user 100 has affixed a first device 99.1 to the left wrist by means of a strap. The second device 99.2 is also affixed to a wrist, namely the right wrist of the user 100. For the sake of simplified illustration, the areas of movement are defined by four quadrants. A first quadrant corresponds to movement that is easily accessible to the first device by moving the left arm and hand. This quadrant is to the left of the median plane M of the user and above the horizontal plane H of the user 100. The first device performs a continuous movement A.1. The method of the present example in this simplified illustration defines a first axis of movement X.1 of said continuous movement A.1. In the present example, the first axis of movement X.1 corresponds to the x-axis of a Cartesian coordinate system. By means of this invention it is possible to represent the continuous movement A.1 as consisting of vectors in a Cartesian, three-dimensional coordinate system.

[0067] At the same time a second device performs a second movement A.2. This movement can also be subdivided into a plurality of axial movements, whereby the axes correspond to the axes of a Cartesian coordinate system, with a first axis X.2 and a second axis Z.2 shown for illustrative purposes in FIG. 1. The movement of the second device 99.2 also illustrates an acceleration, i.e., a start of a continuous movement.

[0068] In the context of the present invention, the start of a continuous movement would be used to generate a midi-note-on signal. At the same time and subsequently, the continuous movement would be used to generate a midi-CC-signal. This signal is attributed with a value representative of the axis along which the movement is performed. The axis is defined at the time point of starting the movement in the present example and has a value of between 0 and 127, where 64 is defined as the baseline, i.e., the value where no movement exists. Depending on the direction along an axis in which the movement is performed, a value higher or lower than 64 is given to the respective movement.
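
The baseline encoding described above can be sketched as follows; this is a non-authoritative illustration assuming a normalized displacement input, and the function name and `sensitivity` constant are assumptions, not part of the disclosure.

```python
# Sketch: map a signed displacement along one axis to a 7-bit MIDI CC value.
# Baseline 64 encodes the no-movement state; values above 64 encode movement
# in one direction, values below 64 movement in the opposite direction.
# The `sensitivity` scaling factor is an illustrative assumption.

def displacement_to_cc(displacement, sensitivity=64.0):
    """Convert a normalized displacement in [-1.0, 1.0] to a CC value 0..127."""
    value = round(64 + displacement * sensitivity)
    return max(0, min(127, value))   # clamp to the valid MIDI CC range
```

With this mapping, no movement yields exactly 64, full deflection in one direction saturates at 127, and full deflection in the other direction saturates at 0, matching the ranges recited in claim 16.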

[0069] FIG. 2 shows a sample arrangement of a device adapted to detect continuous movement. The sample device 10 has a casing 21 in which a number of electrical components are arranged. Central to the device 10 is a nine-axis sensor 20 capable of detecting the continuous movement as well as a no-movement state. The nine-axis sensor 20 is equipped with a number of integrated orientation and movement sensors, such as an accelerometer, preferably a three-axial accelerometer; a gyroscope, preferably a three-axial gyroscope; and a geomagnetic sensor, preferably a three-axial geomagnetic sensor. The required chipsets of the sensors can be integrated into a single chip.

[0070] The sensor can be operationally integrated in the device 10 by means of interfaces connecting it to the power supply units and controller or processing units. The exemplary device 10 further comprises a signal processing unit 16 as controller, which is in a functional relationship with the nine-axis sensor 20 and receives and processes all the information provided by the nine-axis sensor 20. Most modern sensors come equipped with firmware already adapted to provide a first parameterization of the detected sensor data. If that is not the case, or if further parameterization is required or desired, the signal processing unit 16 can be adapted to provide the desired or required parameterization.

[0071] In the present example, the device is powered by an accumulator 17 functionally connected to a charging circuit 18 adapted to wirelessly charge the accumulator 17. For connecting the device 10 with a charging cable to a socket, a charging connector 19 is also provided. Many presently available charging contacts, such as the one used in the present implementation, are also capable of acting as a data transfer contact to which a charging/data connector, for instance a Micro-USB connector, can be connected for data transfer with the device 10. To this end, respective slits can be provided on the casing 21 of the device 10.

[0072] The present example also features a user interface 15. In its most basic manifestation, the user interface 15 can be a simple on/off button used to put the device into an operational state or turn it off. More sophisticated types of devices can come equipped with a touchscreen that provides access to a plurality of functions of the device. Such a user interface 15 can be used, for instance, to select an operational mode of the device 10, such as the specific instrument that is to be simulated by the device 10. The user interface 15 can also be adapted to provide the device 10 with access to further auxiliary gadgets and devices, for instance for linking a number of devices together. In a particular example, a number of devices can be attributed to a specific channel, such that each device recognizes the other devices belonging to the same channel. This can be useful, for instance, when a plurality of devices is used by more than one person, to prevent the devices from confounding each other and misrepresenting particular types of movement in their representation as music notes. In this example, all devices with the same channel know that they belong, for instance, to “string instrument No. 1”, whereas all the devices with another channel identify themselves as “string instrument No. 2”. For other embodiments, the channels can be attributed to a particular dancer or entertainer and the movements can be processed within the context of the channel they are attributed to.

[0073] The present device 10 further comprises a memory unit 14 for storing various instrument types and instrument attributions. This memory 14 can be characterized as a removable type of memory, such as an SD-card, or it can be fixedly integrated in the device 10. The device further comprises a microprocessor system 13.

[0074] The device has wireless connectivity, in the present example a Bluetooth unit 12 and a respective antenna 11. The Bluetooth unit 12 follows the Bluetooth 5.0 standard.

[0075] FIG. 3 shows how a number of devices 10.1, 10.2, 10.3 can be used together with a number of smartphones 30.1, 30.2 and connected by means of a cloud service 40 with a number of computers 41.1, 41.2, 41.3. The devices 10.1, 10.2, 10.3 are connected by means of a wireless Bluetooth connection with the smartphones 30.1, 30.2, which can provide access, for instance, to the operation modes and to the capabilities of the devices 10.1, 10.2, 10.3. The smartphones can be connected by means of a mobile network with a cloud database 40 that can provide a repository for instrument settings and note sets (as shown in the examples of FIGS. 4a, 4b, below) and can be used as a distribution system for content generated on computers 41.1, 41.2, 41.3.

[0076] By means of the setup shown in FIG. 3, a distribution of different types of instrument configurations can be established.

[0077] For this example, all three axes of movement in the Cartesian coordinate system are used for generating three midi-CC-signals for outputting a sound effect. In this example a movement along the y-axis is used to trigger a midi-on note and a tone, and to determine the tone length by means of a relative midi-CC-channel. The absolute midi-CC-value determines the pitch of the tone.

[0078] A relative midi-CC-message outputs a speed of orientational change of the sensor. The original position of orientation does not matter; the relative midi-CC-message reflects the relative change of orientation.

[0079] An absolute midi-CC-message outputs an exact orientation of the sensor in space in terms of the x-, y-, or z-axis. The absolute midi-CC-message reflects the absolute orientation of the sensor regardless of speed and relative change of orientation.
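
The two message types described in the preceding paragraphs can be sketched as follows. This is a non-authoritative illustration: the angle range, sample interval, and `max_rate` normalization constant are assumptions, not values from the disclosure.

```python
# Sketch of the two CC message types described above. An absolute CC message
# encodes the sensor's orientation on one axis directly; a relative CC
# message encodes the speed of orientation change between two samples,
# centered on the baseline value 64. Angle bounds and max_rate are
# illustrative assumptions.

def absolute_cc(angle_deg, lo=-90.0, hi=90.0):
    """Map an orientation angle to an absolute CC value 0..127."""
    frac = (angle_deg - lo) / (hi - lo)
    return max(0, min(127, round(frac * 127)))


def relative_cc(prev_deg, curr_deg, dt_s, max_rate=360.0):
    """Map the rate of orientation change (deg/s) to a CC value around 64."""
    rate = (curr_deg - prev_deg) / dt_s
    return max(0, min(127, round(64 + (rate / max_rate) * 63)))
```

A stationary sensor thus produces a relative value of exactly 64 regardless of its absolute orientation, which is what allows the relative channel to drive note-on/note-off logic independently of where the movement started.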

[0080] For the simulation of a string instrument, the value of a relative midi-CC-channel in the y-axis is determined by a left-right movement. As soon as this value is higher than 64 (for instance 65 or 66, whereby the threshold value can be predetermined), a midi-on note is triggered. This midi-on note is maintained as long as no midi-off note is triggered; the midi-off note is not triggered for as long as the value remains above 64. As soon as the value reaches 64, a midi-off note is triggered. If the value drops below 64, though, a further midi-on note is triggered, which is maintained for as long as the value remains below 64. This simulates the exact behavior of bowing. The tone pitch is controlled with the second hand and a second device, which on a real string instrument would be holding the strings and also be used to control pitch. These are predetermined to be connected with an absolute value of a y-axis, which can be defined in the present example as generating high midi-CC-values for as long as the hand points upwards and low midi-CC-values as soon as or for as long as the hand points downwards. These midi-CC-values are linked to a pitch value of the midi-on note triggered by the relative midi-CC-value.
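
The bowing logic above amounts to a small state machine over the relative CC stream: any departure from the baseline 64 (in either direction) starts a note, and a return to 64 ends it. The following sketch assumes samples pass through 64 on reversal; the class and event names are illustrative assumptions.

```python
# Sketch of the bowing state machine described above: a note-on fires when
# the relative CC value leaves the baseline of 64 in either direction, and
# a note-off fires when the value returns to 64. Assumes the value stream
# passes through 64 when the bow direction reverses.

class BowTracker:
    """Track bowing state from a stream of relative CC values (baseline 64)."""

    def __init__(self):
        self.playing = False

    def feed(self, cc):
        """Return a list of MIDI events ('note_on'/'note_off') for one sample."""
        events = []
        if cc == 64:                      # bow at rest
            if self.playing:
                events.append("note_off")
                self.playing = False
        else:                             # bow moving in either direction
            if not self.playing:
                events.append("note_on")
                self.playing = True
        return events
```

Feeding the tracker the sequence 64, 70, 72, 64, 60 yields a note-on when the bow starts, nothing while it keeps moving, a note-off when it rests at 64, and a fresh note-on when it departs in the opposite direction, mirroring the back-and-forth of bowing.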

[0081] For this particular example the octaves can be mapped to the values 0 to 127, and it can be adjustable by a user or predetermined by the device or software whether a value spans between 1 and 8 octaves. The more octaves a value is set for, the more the resolution of the notes can be increased. In a particular example, this means that a high resolution is achieved if many octaves are placed on an axis, for instance the y-axis. All the octaves are placed in order and subsequent notes are equidistant.
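
The equidistant octave mapping described above can be sketched as a simple binning of the CC range. This is a non-authoritative illustration; the base note (middle C, MIDI note 60) and the function name are assumptions.

```python
# Sketch: distribute the semitones of a configurable number of octaves
# equidistantly over the CC value range 0..127, as described above.
# The 12-semitone octave is standard MIDI; the base_note default (60,
# middle C) is an illustrative assumption.

def cc_to_semitone(cc, octaves, base_note=60):
    """Map a CC value 0..127 to a MIDI note number spanning `octaves` octaves."""
    span = octaves * 12            # semitones covered by the full axis
    step = cc * span // 128        # equidistant bins over the CC range
    return base_note + step
```

Note that with more octaves mapped onto the same 0..127 range, each note occupies a narrower CC band, so movements must be correspondingly more precise to hit a given note.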

[0082] In an alternative or additional aspect, each note is attributed an angular range in a particular axis with regard to an orientation of the sensor or sensing device. For instance, an angular range of between 0 and 5 degrees is attributed to a note A, an angular range of between 5 and 10 degrees to a note B, etc. The skilled artisan readily understands that this attribution is only explained as an illustrative example and ultimately is discretionary for the performance or type of instrument the method is intended to simulate.
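
The angular attribution above can be sketched as a lookup of fixed-width sectors. This is a non-authoritative illustration using the 5-degree sectors from the example in the text; the note list and function name are assumptions.

```python
# Sketch: each note owns a fixed-width angular sector (5 degrees, per the
# example above), and the device's orientation angle selects the note.
# The NOTES list is an illustrative assumption.

NOTES = ["A", "B", "C", "D", "E", "F", "G"]
SECTOR_DEG = 5.0


def angle_to_note(angle_deg):
    """Return the note whose sector contains the angle, or None if outside."""
    index = int(angle_deg // SECTOR_DEG)
    if 0 <= index < len(NOTES):
        return NOTES[index]
    return None
```

An angle of 2 degrees falls in the first sector (note A), 7.5 degrees in the second (note B), and angles outside the mapped arc produce no note.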

[0083] A fast movement generates a high cc-value in the x-, y-, or z-axis, or in all of them summed up. This cc-value is mapped to the volume value of a sound. This leads to louder sounds for fast movements and quieter sounds for slow movements.

[0084] FIG. 4a is provided to illustrate an assignment of notes workable in the context of the present invention for a string instrument implementation. The note-on is controlled by movement in the x-axis relative to the operator 120 inside the movement range 130. The pitch is controlled by means of movement in the y-axis. The orientation of the sensing device inside the movement range 130 determines which musical note is output.

[0085] Of course, the more notes are arranged in a given arc, the more precise the movements have to be to hit the correct note. The representation shown in FIG. 4a is thus a sample implementation of the present invention.

[0086] The musical notes are arranged in wedge-shaped sectors with a particular angle relative to a predetermined origin. Orienting the device at that specific angle results in emission of the note attributed to that wedge-shaped sector. Movement in the x-axis generates the midi note-on, and the pitch is controlled by movement in the y-axis.

[0087] It has surprisingly been found that attributing the notes inside a movement range 130 of an operator 120 and attributing the notes to a particular orientation of the sensing device results in an intuitive approach that is fast to learn for operators 120.

[0088] The attributed notes, the definition of the note-on and the pitch all feel surprisingly natural and intuitive to a string instrument player, providing an excellent training device that is easily storable and can be carried everywhere.

Example 2 Piano

[0089] For a piano simulation, a virtual keyboard is defined close to or around a horizontal plane of the user. Depending on the orientation of the hand with the device, a different tonal sound is played. The keyboard is therefore an imaginary keyboard around the user. The tonal sound is triggered with a relative midi-CC in the y-axis as soon as the hand is moved with a threshold intensity, and it remains for as long as the movement persists.

[0090] The respective sample representation of a piano implementation in FIG. 4b follows a different approach than the one depicted for the string instrument above in FIG. 4a.

[0091] In this arrangement, the note-on is determined by movement in the y-axis, whereas the pitch is controlled by movement in the x-axis. To suit piano players used to operating a piano along a horizontal axis, the circular arrangement around the operator 120 inside the movement range 130 is chosen as axial and normal to the operator. As with the string instrument above, the wedge-shaped sectors define musical notes. This has been found to provide the most intuitive approach for a piano simulation.

Example 3 Guitar

[0092] For this particular example, sectors are defined around the wrist rotation axis of the hand where the device is held or affixed. Each string is mapped to a particular position angle of the wrist. For instance, five strings with different tonal pitches can be mapped to particular wrist rotations. In this way, the user can trigger the sound effects by rotating the wrist in a movement similar to letting the hand drop on the strings of a real guitar. The second hand can be used to control pitch for each string. This can generate an adequate simulation of playing a guitar in the air.