Performance analysis method
11600252 · 2023-03-07
Assignee
Inventors
CPC classification
G10H2210/091
PHYSICS
G10H2210/385
PHYSICS
G06N7/01
PHYSICS
G10H2210/165
PHYSICS
G10H2250/311
PHYSICS
International classification
Abstract
A performance analysis method according to the present invention includes generating information related to a performance tendency of a user, from observed performance information relating to a performance of a musical piece by the user and inferred performance information that occurs when the musical piece is performed based on a specific tendency.
Claims
1. A performance analysis method for musical piece information containing a plurality of pieces of unit information representing contents of a musical piece, the method comprising: generating a plurality of pieces of inferred performance information corresponding to a plurality of different candidate tendencies, for each unit period of a plurality of unit periods in the musical piece information, from the musical piece information using first and second neural networks by, for each of the plurality of different candidate tendencies: generating, with the first neural network, feature information based on pieces of unit information, from among the plurality of pieces of unit information, included in a predetermined analysis period, which includes a current unit period; generating, with the second neural network, a linear predictor coefficient based on the generated feature information and past pieces of inferred performance information from a previous unit period; and calculating the inferred performance information from the generated linear predictor coefficient and the past pieces of inferred performance information; generating, for each unit period, observed performance information from a time series of plural pieces of instruction information output by a performance apparatus, the observed performance information being a variable relating to a tendency of a performance of the musical piece; and generating, for each unit period, performance tendency information related to a performance tendency of a user from the generated plurality of pieces of inferred performance information and the generated observed performance information by comparing each of the plurality of pieces of inferred performance information respectively generated for the plurality of different candidate tendencies with the generated observed performance information.
2. The performance analysis method according to claim 1, wherein one candidate tendency of the plurality of candidate tendencies is taken as the performance tendency of the user according to a posterior probability of the observed performance information being observed in a state where each of the plurality of candidate tendencies and the musical piece information are provided.
3. The performance analysis method according to claim 2, wherein the one candidate tendency is taken as the performance tendency of the user according to the posterior probability corresponding to each of the plurality of candidate tendencies and an occurrence probability of the one candidate tendency.
4. A performance analysis apparatus comprising: a storage unit storing musical piece information containing a plurality of pieces of unit information representing contents of a musical piece; and a control unit including at least one processor, configured to: generate a plurality of pieces of inferred performance information corresponding to a plurality of different candidate tendencies, for each unit period of a plurality of unit periods in the musical piece information, from the musical piece information using first and second neural networks by, for each of the plurality of different candidate tendencies: generating, with the first neural network, feature information based on pieces of unit information, from among the plurality of pieces of unit information, included in a predetermined analysis period, which includes a current unit period; generating, with the second neural network, a linear predictor coefficient based on the generated feature information and past pieces of inferred performance information from a previous unit period; and calculating the inferred performance information from the generated linear predictor coefficient and the past pieces of inferred performance information; generate, for each unit period, observed performance information from a time series of plural pieces of instruction information output by a performance apparatus, the observed performance information being a variable relating to a tendency of a performance of the musical piece; and generate, for each unit period, performance tendency information related to a performance tendency of a user from the generated plurality of pieces of inferred performance information and the generated observed performance information by comparing each of the plurality of pieces of inferred performance information respectively generated for the plurality of different candidate tendencies with the generated observed performance information.
5. A non-transitory storage medium storing a program executable by a computer to execute a performance analysis method for musical piece information containing a plurality of pieces of unit information representing contents of a musical piece, the method comprising: generating a plurality of pieces of inferred performance information corresponding to a plurality of different candidate tendencies, for each unit period of a plurality of unit periods in the musical piece information, from the musical piece information using first and second neural networks by, for each of the plurality of different candidate tendencies: generating, with the first neural network, feature information based on pieces of unit information, from among the plurality of pieces of unit information, included in a predetermined analysis period, which includes a current unit period; generating, with the second neural network, a linear predictor coefficient based on the generated feature information and past pieces of inferred performance information from a previous unit period; and calculating the inferred performance information from the generated linear predictor coefficient and the past pieces of inferred performance information; generating, for each unit period, observed performance information from a time series of plural pieces of instruction information output by a performance apparatus, the observed performance information being a variable relating to a tendency of a performance of the musical piece; and generating, for each unit period, performance tendency information related to a performance tendency of a user from the generated plurality of pieces of inferred performance information and the generated observed performance information by comparing each of the plurality of pieces of inferred performance information respectively generated for the plurality of different candidate tendencies with the generated observed performance information.
Description
BRIEF DESCRIPTION OF DRAWINGS
(1)
(2)
(3)
(4)
(5)
(6)
(7)
DESCRIPTION OF EMBODIMENTS
First Embodiment
(8)
(9) The control device 11 includes a processing circuit such as a CPU (Central Processing Unit), for example. The control device 11 is realized by a single or multiple chips (processors), for example. The storage device 12 stores a computer program that is executed by the control device 11 and various types of data that are used by the control device 11. For example, a known recording medium such as a semiconductor recording medium or magnetic recording medium or a combination of multiple types of recording media can be freely employed as the storage device 12.
(10) The storage device 12 of the present embodiment stores musical piece information S representing the contents of a musical piece. This musical piece information S designates the pitch, intensity and sounding period (sounding time and continuation length) for each of a plurality of notes constituting the musical piece. The musical piece information S can be configured in various forms, and, for example, a file (SMF: Standard MIDI File) in MIDI (Musical Instrument Digital Interface) format in which instruction data that designates the pitch and intensity and instructs sounding or silencing and time data that designates the sounding time point of each instruction data are arrayed in time series is a favorable example of the musical piece information S.
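The note attributes described above (pitch, intensity, sounding time, and continuation length) can be sketched as a simple data structure grouped into unit periods. This is an illustrative representation only; the class and function names below are hypothetical and are not taken from the patent, which uses SMF/MIDI data.

```python
# Hypothetical sketch of musical piece information S: each note carries the
# pitch, intensity, and sounding period described above, and notes are
# grouped into unit periods U_m by onset time. Names are illustrative.
from dataclasses import dataclass

@dataclass
class Note:
    pitch: int        # MIDI note number (0-127)
    velocity: int     # intensity (0-127)
    onset: float      # sounding time, in beats
    duration: float   # continuation length, in beats

def to_unit_periods(notes, period_beats=1.0):
    """Group notes into unit periods by onset time (unit information U_m)."""
    if not notes:
        return []
    n_periods = int(max(n.onset for n in notes) // period_beats) + 1
    periods = [[] for _ in range(n_periods)]
    for n in notes:
        periods[int(n.onset // period_beats)].append(n)
    return periods

piece = [Note(60, 90, 0.0, 1.0), Note(64, 80, 1.0, 0.5), Note(67, 85, 1.5, 0.5)]
units = to_unit_periods(piece)
```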
(11) As illustrated in
(12) The performance apparatus 13 in
(13) The performance analysis apparatus 100 of the first embodiment infers a tendency (hereinafter, “performance tendency”) E.sub.m of the performance that occurs when the user performs the musical piece with the performance apparatus 13. The performance tendency E.sub.m is a musical expression or performance mannerism unique to a performer. Generation of the performance tendency E.sub.m is executed for every unit period Q.sub.m. The time series of M performance tendencies E.sub.1 to E.sub.M corresponding to different unit periods Q.sub.m signifies the temporal transition of the performance tendency that occurs when the user performs the musical piece. The performance analysis apparatus 100 of the first embodiment infers one of a plurality of different types (K types) of performance tendencies (hereinafter, “candidate tendencies”) as the performance tendency E.sub.m of the user.
(14) As illustrated in
(15) As illustrated in
(16) As described above, in the case where performance speed (tempo) is selected as the performance tendency, candidate tendencies C.sub.k include, for example, (i) speed becomes gradually faster, (ii) speed becomes gradually slower, (iii) speed is variable, (iv) speed is steady (flat), and the like in the unit period Q.sub.m, and K types of such candidate tendencies C.sub.k are set.
(17) The performance inference unit 21 of the first embodiment is, as illustrated in
(18)
(19) The inferred performance information y.sub.m.sup.k of the first embodiment is generated by an autoregressive process represented by the following equation (1).
(20) y.sub.m.sup.k=a.sub.m1.sup.k·y.sub.m−1.sup.k+a.sub.m2.sup.k·y.sub.m−2.sup.k+ . . . +a.sub.mP.sup.k·y.sub.m−P.sup.k  (1)
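Assuming the linear autoregressive form implied by the surrounding description (the inferred performance information of the unit period computed from the linear predictor coefficients and the past P values), a single step of equation (1) can be sketched numerically; the coefficient values here are placeholders (in the patent they are produced by the second neural network Nb).

```python
# Minimal sketch of one autoregressive step of equation (1), assuming the
# linear form y_m = sum_j a_mj * y_{m-j} over the past P values.
def ar_step(coeffs, past):
    """coeffs: a_m1..a_mP; past: y_{m-1}..y_{m-P}, most recent first."""
    assert len(coeffs) == len(past)
    return sum(a * y for a, y in zip(coeffs, past))

# Illustrative example with P = 3 and tempo values in BPM.
y_next = ar_step([0.5, 0.3, 0.2], [120.0, 118.0, 116.0])
```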
(21) The coefficient generation unit 31 in
(22) That is, this neural network N.sub.k learns using training data based on the performance of a predetermined performer, so as to be able to predict (output) the performance information y.sub.m.sup.k of the unit period Q.sub.m from the musical piece information S and the performance information of one or more past unit periods prior to the unit period Q.sub.m. That is, the K neural networks N.sub.k include a representation learned from training data including a candidate tendency indicating that the tempo becomes gradually slower and a representation learned from training data including a candidate tendency indicating that the tempo becomes gradually faster, such as described above. Note that training data can be generated by various methods, and can, for example, be generated based on a performance by one specific performer, a performance by a specific performer (or typical performer) of a musical piece of a specific genre, or the like.
(23) As illustrated in
(24) The first neural network Na, as illustrated in
(25) As illustrated in
(26) The second neural network Nb in
(27) Accordingly, the second neural network Nb outputs a linear predictor coefficient a.sub.mj.sup.k appropriate for the P pieces of inferred performance information y.sub.m−1.sup.k to y.sub.m−P.sup.k and the feature information F.sub.m.sup.k based on the target tendency C.sub.k. That is, the second neural network Nb is trained such that the tendency of a predetermined performance is reflected in the linear predictor coefficient a.sub.mj.sup.k that it outputs.
(28) The computational processing unit 32 of
(29) As is clear from the above description, the performance information generation unit G.sub.k of the first embodiment generates, for the unit period Q.sub.m, inferred performance information y.sub.m.sup.k in which the candidate tendency C.sub.k is reflected, by providing the musical piece information S (plural pieces of unit information U.sub.m−w to U.sub.m+w) and the past P pieces of inferred performance information y.sub.m−1.sup.k to y.sub.m−P.sup.k to the neural network N.sub.k. Processing for generating the inferred performance information y.sub.m.sup.k from the feature information F.sub.m.sup.k corresponding to the unit period Q.sub.m and the past P pieces of inferred performance information y.sub.m−1.sup.k to y.sub.m−P.sup.k is sequentially executed in time-series order for each of the M unit periods Q.sub.1 to Q.sub.M within the musical piece. The time series of M pieces of inferred performance information y.sub.1.sup.k to y.sub.M.sup.k that is generated by the performance information generation unit G.sub.k with the above processing is equivalent to the temporal change in performance speed that occurs when the musical piece is performed based on the candidate tendency C.sub.k.
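The sequential per-unit-period generation described above can be sketched as a loop: a coefficient generator yields linear predictor coefficients for each unit period, and the next inferred value is their weighted sum over the past P values. The coefficient generator below is a trivial stand-in (fixed smoothing weights) for the neural networks Na and Nb; all names and values are illustrative, not taken from the patent.

```python
# Sketch of the autoregressive generation loop over M unit periods.
# coeff_fn stands in for the neural networks that would produce the
# linear predictor coefficients a_mj^k for each unit period.
def generate_series(coeff_fn, y_init, n_periods):
    """coeff_fn(m, past) -> list of P coefficients; y_init: P initial values."""
    past = list(y_init)           # y_{m-1} ... y_{m-P}, most recent first
    series = []
    for m in range(n_periods):
        coeffs = coeff_fn(m, past)
        y_m = sum(a * y for a, y in zip(coeffs, past))
        series.append(y_m)
        past = [y_m] + past[:-1]  # shift the window of past values
    return series

# Fixed smoothing weights as a stand-in coefficient generator (P = 3).
tempos = generate_series(lambda m, past: [0.7, 0.2, 0.1],
                         [120.0, 120.0, 120.0], 4)
```

With a constant past and weights summing to one, the generated tempo stays flat, corresponding to a "steady" candidate tendency.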
(30) As is clear from the above description, the performance inference unit 21 of the first embodiment generates inferred performance information y.sub.m.sup.k that occurs when the musical piece is performed based on each candidate tendency C.sub.k, by providing the musical piece information S to each of the K neural networks N.sub.1 to N.sub.K in which the different candidate tendencies C.sub.k are reflected.
(31) The performance observation unit 22 in
(32) The tendency generation unit 23 infers the performance tendency E.sub.m of the user from the K pieces of inferred performance information y.sub.m.sup.1 to y.sub.m.sup.K generated by the performance inference unit 21 and the observed performance information x.sub.m generated by the performance observation unit 22. Specifically, the tendency generation unit 23 compares each of the K pieces of inferred performance information y.sub.m.sup.1 to y.sub.m.sup.K with the observed performance information x.sub.m. The tendency generation unit 23 then generates, as the performance tendency E.sub.m of the user, the candidate tendency C.sub.k corresponding to the inferred performance information y.sub.m.sup.k that is most similar to the observed performance information x.sub.m among the K pieces of inferred performance information y.sub.m.sup.1 to y.sub.m.sup.K. Generation of the performance tendency E.sub.m by the tendency generation unit 23 is sequentially executed for every unit period Q.sub.m. Accordingly, a performance tendency E.sub.m such as shown in
(33) The tendency generation unit 23 of the first embodiment generates the performance tendency E.sub.m according to a posterior probability p (x.sub.m|U.sub.m, C.sub.k) of the observed performance information x.sub.m being observed under the condition that the candidate tendency C.sub.k and the musical piece information S are provided. The posterior probability p(x.sub.m|U.sub.m, C.sub.k) is a conditional probability of the observed performance information x.sub.m being observed when a note that is specified by the unit information Um is performed based on the candidate tendency C.sub.k. Specifically, the tendency generation unit 23, as represented by the following equation (2), selects the candidate tendency C.sub.k at which the posterior probability p(x.sub.m|U.sub.m, C.sub.k) is maximized, among the K types of candidate tendencies C.sub.1 to C.sub.K, as the performance tendency E.sub.m of the user. Note that the probability distribution of the posterior probability p(x.sub.m|U.sub.m, C.sub.k) is a normal distribution, for example. As is clear from the above description, the control device 11 of the first embodiment functions as an element (performance analysis unit) that specifies the performance tendency E.sub.m from observed performance information x.sub.m relating to the performance of a musical piece by a user.
(34) E.sub.m=argmax.sub.k p(x.sub.m|U.sub.m, C.sub.k)  (2)
(35)
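The selection rule of equation (2) can be sketched as follows: the candidate tendency whose inferred performance information best explains the observation is chosen, assuming, as the text notes, a normal distribution for the posterior probability. The candidate names and the noise scale `sigma` are illustrative assumptions, not values from the patent.

```python
import math

# Sketch of equation (2): argmax over candidate tendencies of the posterior
# p(x_m | U_m, C_k), modeled as a Gaussian around each candidate's
# inferred performance information y_m^k.
def select_tendency(observed, inferred_by_candidate, sigma=1.0):
    """inferred_by_candidate: {candidate_name: y_m^k}; returns the argmax."""
    def log_likelihood(y):
        return (-((observed - y) ** 2) / (2 * sigma ** 2)
                - math.log(sigma * math.sqrt(2 * math.pi)))
    return max(inferred_by_candidate,
               key=lambda k: log_likelihood(inferred_by_candidate[k]))

# Observed tempo 118 BPM vs. three candidates' predictions.
best = select_tendency(118.0, {"accelerando": 122.0,
                               "ritardando": 117.5,
                               "steady": 120.0})
```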
(36) When the performance analysis processing is started, the control device 11 selects the earliest unselected unit period Q.sub.m among the M unit periods Q.sub.1 to Q.sub.M within the musical piece (S1). The control device 11 executes performance inference processing S2, performance observation processing S3, and tendency processing S4 for the unit period Q.sub.m. The performance inference processing S2 is executed by the performance inference unit 21, the performance observation processing S3 is executed by the performance observation unit 22, and the tendency processing S4 is executed by the tendency generation unit 23. Note that the performance inference processing S2 may be executed after execution of the performance observation processing S3.
(37) The performance inference processing S2 is processing (S21 to S23) for generating K pieces of inferred performance information y.sub.m.sup.1 to y.sub.m.sup.K corresponding to the different candidate tendencies C.sub.k for the unit period Q.sub.m, by providing the musical piece information S and the past P pieces of inferred performance information y.sub.m−1.sup.k to y.sub.m−P.sup.k to each neural network N.sub.k. Note that at the stage at which the first unit period Q.sub.1 within the musical piece is selected as the earliest unselected unit period, inferred performance information y.sub.0 prepared as an initial value is provided to each neural network N.sub.k together with the musical piece information S.
(38) Specifically, the coefficient generation unit 31 of the performance information generation unit G.sub.k generates, with the first neural network Na, feature information F.sub.m.sup.k that depends on the plural pieces of unit information U.sub.m−w to U.sub.m+w corresponding to the analysis period A.sub.m surrounding the unit period Q.sub.m within the musical piece information S (S21). The coefficient generation unit 31 generates, with the second neural network Nb, a linear predictor coefficient a.sub.mj.sup.k that depends on the feature information F.sub.m.sup.k and the past P pieces of inferred performance information y.sub.m−1.sup.k to y.sub.m−P.sup.k (S22). The computational processing unit 32 then generates the inferred performance information y.sub.m.sup.k of the unit period Q.sub.m from the linear predictor coefficient a.sub.mj.sup.k and the past P pieces of inferred performance information y.sub.m−1.sup.k to y.sub.m−P.sup.k (S23).
(39) The performance observation processing S3 is processing for generating the observed performance information x.sub.m of the unit period Q.sub.m from the time series of the plural pieces of instruction information Z that are output by the performance apparatus 13. The tendency processing S4 is processing for inferring the performance tendency E.sub.m of the user from the K pieces of inferred performance information y.sub.m.sup.1 to y.sub.m.sup.K generated in the performance inference processing S2 and the observed performance information x.sub.m generated in the performance observation processing S3.
(40) When the tendency processing S4 is executed, the control device 11 determines whether the above processing (S2 to S4) has been completed for all (M) of the unit periods Q.sub.1 to Q.sub.M within the musical piece (S5). If there is an unprocessed unit period Q.sub.m (S5: NO), the control device 11 newly selects the unit period Q.sub.m+1 immediately after the unit period Q.sub.m that is selected at the current point in time (S1), and executes the performance inference processing S2, the performance observation processing S3, and the tendency processing S4. On the other hand, when processing is completed for all the unit periods Q.sub.1 to Q.sub.M within the musical piece (S5: YES), the control device 11 ends the performance analysis processing of
(41) As described above, in the first embodiment, it is possible to generate the performance tendency E.sub.m of the user from observed performance information x.sub.m relating to the performance of the musical piece by the user. In the first embodiment, each of the K pieces of inferred performance information y.sub.m.sup.1 to y.sub.m.sup.K that occur when the musical piece is performed based on the different candidate tendencies C.sub.k is compared with the observed performance information x.sub.m. Accordingly, one of the K types of candidate tendencies C.sub.1 to C.sub.K can be generated as the performance tendency E.sub.m of the user.
(42) According to the first embodiment, it is possible to appropriately generate inferred performance information y.sub.m.sup.k that occurs when the musical piece is performed based on the candidate tendency C.sub.k, by providing the musical piece information S of the musical piece to the neural network N.sub.k in which candidate tendency C.sub.k is reflected. Also, one of the K types of candidate tendencies C.sub.1 to C.sub.K is generated as the performance tendency E.sub.m of the user, according to the posterior probability p(x.sub.m|U.sub.m, C.sub.k) of the observed performance information x.sub.m being observed under the condition that the candidate tendency C.sub.k and the musical piece information S are provided. Accordingly, it is possible to appropriately generate a most likely performance tendency E.sub.m among the K types of candidate tendencies C.sub.1 to C.sub.K.
(43) The following effects can thereby be obtained. For example, when driving another apparatus such as a video apparatus or a lighting apparatus in synchronization with the performance of the performance apparatus 13, the performance tendency E.sub.m can be provided to the other apparatus. Thus, the other apparatus is able to display video or control lighting, based on the performance tendency E.sub.m. Accordingly, when synchronizing a performance and another apparatus, the occurrence of delays in driving the other apparatus can be prevented.
(44) Also, in the case where, for example, a performance of a person is carried out in conjunction with an automatic performance by a machine, if the observed performance information x.sub.m can be acquired from the performance of the person and the performance tendency E.sub.m can be generated therefrom, the automatic performance can be performed based on the performance tendency E.sub.m, thus enabling the combined performance of the performance of a person and the performance of a machine. That is, since the automatic performance can be carried out based on the performance tendency E.sub.m of the person, the occurrence of performance delays or the like can be prevented, and the performance speed (tempo) of the automatic performance can be reliably aligned with the performance of the person.
Second Embodiment
(45) A second embodiment of the present invention will now be described. Note that, in the embodiments illustrated below, signs used in the description of the first embodiment will be used for elements whose operation or function is similar to the first embodiment, and a detailed description of those elements will be omitted as appropriate.
(46) The tendency generation unit 23 of the first embodiment, as shown in equation (2) above, selects the candidate tendency C.sub.k at which the posterior probability p(x.sub.m|U.sub.m, C.sub.k) of the observed performance information x.sub.m being observed under the condition that the candidate tendency C.sub.k and the musical piece information S are provided is maximized as the performance tendency E.sub.m of the user. The tendency generation unit 23 of the second embodiment, as represented by the following equation (3), selects one of the K types of candidate tendencies C.sub.1 to C.sub.K as the performance tendency E.sub.m of the user, according to the posterior probability p (x.sub.m|U.sub.m, C.sub.k) and an occurrence probability π.sub.k of the candidate tendency C.sub.k.
(47) E.sub.m=argmax.sub.k π.sub.k·p(x.sub.m|U.sub.m, C.sub.k)  (3)
(48) The occurrence probability π.sub.k is the probability of the candidate tendency C.sub.k occurring, and is individually set for every candidate tendency C.sub.k. Specifically, the occurrence probability π.sub.k of a candidate tendency C.sub.k that is readily observed in the performance of a large number of performers is set to a large numerical value, and the occurrence probability π.sub.k of an atypical candidate tendency C.sub.k whose performers are limited in number is set to a small numerical value. For example, the provider of the performance analysis apparatus 100 appropriately sets the occurrence probability π.sub.k of each candidate tendency C.sub.k, with reference to statistical material of the performance tendencies of the musical piece. Note that the occurrence probability π.sub.k of each candidate tendency C.sub.k may be set to a numerical value instructed by the user of the performance analysis apparatus 100.
(49) As is clear from equation (3), the tendency generation unit 23 of the second embodiment selects, as the performance tendency E.sub.m of the user, the candidate tendency C.sub.k that maximizes the product of the posterior probability p(x.sub.m|U.sub.m, C.sub.k) and the occurrence probability π.sub.k, among the K types of candidate tendencies C.sub.1 to C.sub.K. Accordingly, there is a tendency for a candidate tendency C.sub.k with a larger occurrence probability π.sub.k to be more readily selected as the performance tendency E.sub.m of the user.
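The effect of the occurrence probability in equation (3) can be sketched by weighting each candidate's likelihood by its prior before taking the argmax. The Gaussian likelihood and all numerical values below are illustrative assumptions carried over from the first-embodiment sketch.

```python
import math

# Sketch of equation (3): argmax over candidates of
# pi_k * p(x_m | U_m, C_k), computed in log space.
def select_with_prior(observed, inferred, priors, sigma=1.0):
    """inferred, priors: dicts keyed by candidate; returns the MAP candidate."""
    def score(k):
        log_lik = -((observed - inferred[k]) ** 2) / (2 * sigma ** 2)
        return log_lik + math.log(priors[k])
    return max(inferred, key=score)

# The atypical candidate fits the observation slightly better, but its low
# occurrence probability lets the common candidate win, as paragraph (49)
# describes.
inferred = {"steady": 119.0, "rare_rubato": 118.2}
priors = {"steady": 0.9, "rare_rubato": 0.1}
best = select_with_prior(118.0, inferred, priors)
```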
(50) Operations other than those of the tendency generation unit 23 are similar to the first embodiment. Accordingly, similar effects to the first embodiment are also realized in the second embodiment. Also, in the second embodiment, the occurrence probability π.sub.k of each candidate tendency C.sub.k is taken into consideration in generating the performance tendency E.sub.m in addition to the posterior probability p(x.sub.m|U.sub.m, C.sub.k). Accordingly, there is an advantage in that the performance tendency E.sub.m of the user can be inferred with high accuracy, based on the tendency of each of the K types of candidate tendencies C.sub.1 to C.sub.K being readily observed in an actual performance situation.
Third Embodiment
(51)
(52) The tendency generation unit 23 generates the performance tendency E.sub.m of the user, by comparing the inferred performance information y.sub.m generated by the performance inference unit 21 with the observed performance information x.sub.m generated by the performance observation unit 22. The performance tendency E.sub.m of the third embodiment is an index of the correlation (e.g., similarity) between the inferred performance information y.sub.m and the observed performance information x.sub.m. That is, an evaluation value indicating the degree of approximation of a tendency of the performance of the user and the reference tendency C.sub.REF is generated as the performance tendency E.sub.m. For example, if a tendency of a model performance is set as the reference tendency C.sub.REF, the performance tendency E.sub.m can be used as an index of the skill level (closeness to the model performance) of the performance of the user.
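The third embodiment's evaluation value can be sketched as an index of correlation between the inferred series (reflecting the reference tendency C.sub.REF) and the observed series. A Pearson correlation coefficient is one plausible choice; the patent names correlation/similarity as the index but does not fix a specific formula, so this is an assumption.

```python
# Sketch of an index of the correlation between inferred performance
# information (reference tendency C_REF) and observed performance
# information, here a Pearson correlation coefficient in [-1, 1].
def correlation_index(inferred, observed):
    n = len(inferred)
    mi = sum(inferred) / n
    mo = sum(observed) / n
    cov = sum((a - mi) * (b - mo) for a, b in zip(inferred, observed))
    norm_i = sum((a - mi) ** 2 for a in inferred) ** 0.5
    norm_o = sum((b - mo) ** 2 for b in observed) ** 0.5
    return cov / (norm_i * norm_o)

# Both series slow down together, so the index approaches 1
# (the user's performance is close to the model performance).
skill = correlation_index([120.0, 118.0, 116.0], [119.0, 117.5, 115.0])
```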
(53) As is clear from the above description, according to the third embodiment, an index of the relationship between the performance of the user and the reference tendency C.sub.REF can be generated as the performance tendency E.sub.m. Accordingly, the performance tendency E.sub.m according to the third embodiment differs from the performance tendency E.sub.m according to the first and second embodiments. Note that, as is clear from the illustration of the third embodiment, a configuration that generates K pieces of inferred performance information y.sub.m.sup.1 to y.sub.m.sup.K corresponding to different candidate tendencies C.sub.k and a configuration that selects one of K types of candidate tendencies C.sub.1 to C.sub.K as the performance tendency E.sub.m of the user are not essential in the present embodiment.
(54) Variations
(55) Illustrated below are modes of specific variations that are appended to the modes illustrated above. Two or more modes freely selected from those illustrated below may be combined as appropriate as long as there are no mutual inconsistencies.
(56) (1) In the configuration (first embodiment or second embodiment) for selecting one of K types of candidate tendencies C.sub.1 to C.sub.K for every unit period Q.sub.m, the candidate tendency C.sub.k that is selected as the performance tendency E.sub.m can be changed every unit period Q.sub.m. On the other hand, the K types of candidate tendencies C.sub.1 to C.sub.K include combinations that readily transition and combinations that do not readily transition. Taking the above circumstances into consideration, a configuration that takes a probability (hereinafter, “transition probability”) τ of one of any two types of candidate tendencies C.sub.k transitioning to the other candidate tendency C.sub.k into consideration in generating the performance tendency E.sub.m is also favorable.
(57) Specifically, the transition probability τ of one candidate tendency C.sub.k1 of the combination transitioning to the other candidate tendency C.sub.k2 is set for every combination obtained by selecting two types from the K types of candidate tendencies C.sub.1 to C.sub.K (k1=1 to K, k2=1 to K). For example, the transition probability τ is set for all combinations obtained by selecting two types of candidate tendencies C.sub.k from the K types of candidate tendencies C.sub.1 to C.sub.K while allowing duplication. The transition probability τ of the combination is set to a larger numerical value, as the likelihood of transitioning from the one candidate tendency C.sub.k1 of the combination to the other candidate tendency C.sub.k2 increases.
(58) The tendency generation unit 23 selects one of the K types of candidate tendencies C.sub.1 to C.sub.K as the performance tendency E.sub.m of the user, taking the transition probability τ into consideration in addition to the posterior probability p(x.sub.m|U.sub.m, C.sub.k). Specifically, the tendency generation unit 23 selects one of the K types of candidate tendencies C.sub.k as the performance tendency E.sub.m, such that a candidate tendency C.sub.k having a higher probability τ of transitioning from the candidate tendency C.sub.k selected as the immediately previous performance tendency E.sub.m−1 is more readily selected as the performance tendency E.sub.m of the unit period Q.sub.m. According to the above configuration, it is possible to transition the performance tendency E.sub.m with a natural combination that reflects the transition of tendencies in actual performances. Note that the performance tendency E.sub.m of the user may be generated taking the occurrence probability π.sub.k of the second embodiment into consideration in addition to the posterior probability p(x.sub.m|U.sub.m, C.sub.k) and the transition probability τ.
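The transition-aware selection described above can be sketched by adding a log transition probability to each candidate's score, biasing the current choice toward candidates that readily follow the previously selected tendency. The combination rule, candidate names, and numerical values are illustrative; the patent does not fix an exact formula.

```python
import math

# Sketch of variation (1): combine the (Gaussian-modeled) posterior with
# the transition probability tau from the previously selected tendency.
def select_with_transition(observed, inferred, prev, tau, sigma=1.0):
    """tau[(k_prev, k)]: probability of transitioning from k_prev to k."""
    def score(k):
        log_lik = -((observed - inferred[k]) ** 2) / (2 * sigma ** 2)
        return log_lik + math.log(tau[(prev, k)])
    return max(inferred, key=score)

inferred = {"steady": 119.0, "accelerando": 118.2}
tau = {("steady", "steady"): 0.8, ("steady", "accelerando"): 0.2,
       ("accelerando", "steady"): 0.3, ("accelerando", "accelerando"): 0.7}

# The previous tendency was "steady", and the steady->steady transition is
# likely, so "steady" is kept even though "accelerando" fits slightly better.
best = select_with_transition(118.0, inferred, prev="steady", tau=tau)
```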
(59) (2) In the aforementioned embodiments, the analysis period A.sub.m centered on the unit period Q.sub.m is illustrated, but the relationship between the unit period Q.sub.m and the analysis period A.sub.m is not limited to that illustrated above. The number of unit periods within the analysis period A.sub.m that are located prior to the unit period Q.sub.m may be different from the number of unit periods located thereafter.
(60) (3) For example, it is also possible to realize the performance analysis apparatus 100 with a server apparatus that communicates with a terminal apparatus (e.g., mobile phone or smartphone) via a communication network such as a mobile communication network or the Internet. Specifically, the performance analysis apparatus 100 sequentially generates the performance tendency E.sub.m from instruction information Z and musical piece information S received from the terminal apparatus, and transmits the performance tendencies E.sub.m to the terminal apparatus. Note that, in a configuration in which observed performance information x.sub.m generated by a performance observation unit 22 within the terminal apparatus is transmitted from the terminal apparatus to the performance analysis apparatus 100, the performance observation unit 22 is omitted from the performance analysis apparatus 100.
(61) (4) In the aforementioned embodiments, the performance speed of the musical piece is illustrated as the inferred performance information y.sub.m.sup.k, but the variable that is represented by the inferred performance information y.sub.m.sup.k is not limited to that illustrated above. For example, any variable relating to a musical expression that can differ by performer, such as the performance intensity of the musical piece, can be utilized as the inferred performance information y.sub.m.sup.k. The observed performance information x.sub.m is similarly not limited to performance speed. That is, various types of variables (e.g., performance intensity) relating to musical expressions can be utilized as the observed performance information x.sub.m.
(62) (5) In the above embodiments, the neural network N is divided into two networks, but a single neural network can also be used. That is, the inferred performance information y.sub.m.sup.k of the next unit period Q.sub.m can also be predicted with one neural network N, from the musical piece information S and the past inferred performance information y.sub.m−1.sup.k to y.sub.m−P.sup.k. Also, although the first neural network Na is used in order to extract the feature information F.sub.m.sup.k from the musical piece information S, the feature information F.sub.m.sup.k can also be extracted by analyzing the musical piece information S without using a neural network.
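The autoregressive prediction step that the two-network configuration performs can be sketched as below. This is an illustrative sketch under assumptions: the function name and the numeric values are hypothetical, and the predictor coefficients (which, per the claims, are generated per unit period by the second neural network from the feature information) are given here as placeholders.

```python
import numpy as np

def predict_next(coeffs: np.ndarray, past: np.ndarray) -> float:
    """Linear prediction of the inferred performance information:
    y_m = sum_p coeffs[p] * past[p],
    where past[0] is y_{m-1}, past[1] is y_{m-2}, and so on,
    back to y_{m-P}.
    """
    return float(np.dot(coeffs, past))

coeffs = np.array([0.6, 0.3, 0.1])       # hypothetical linear predictor coefficients (P = 3)
past = np.array([120.0, 118.0, 121.0])   # e.g. recent performance speeds in BPM
print(predict_next(coeffs, past))        # -> 119.5
```

Because the coefficients are regenerated every unit period from the feature information, the prediction adapts to the local content of the musical piece rather than applying one fixed filter throughout.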
(63) (6) In the above embodiments, the inferred performance information y.sub.m.sup.k of the unit period Q.sub.m is predicted by using a neural network, but a learner such as a support vector machine, a self-organizing map, or a learner trained by reinforcement learning, for example, can also be used to perform such prediction, apart from the above neural network.
(64) (7) The following configurations, for example, can be appreciated from the embodiments illustrated above.
(65) Mode 1
(66) A performance analysis method according to a favorable mode (mode 1) of the present invention is a method in which a computer generates a performance tendency of a user, from observed performance information relating to a performance of a musical piece by the user and inferred performance information that occurs when the musical piece is performed based on a specific tendency. According to the above mode, it is possible to generate a performance tendency of a user from observed performance information relating to a performance of a musical piece by the user.
(67) Mode 2
(68) In a favorable example (mode 2) of mode 1, generation of the performance tendency includes performance inference processing for generating, for each of a plurality of different candidate tendencies, inferred performance information that occurs when the musical piece is performed based on the candidate tendency, and tendency processing for generating one of the plurality of candidate tendencies as the performance tendency of the user, by comparing each of the plural pieces of inferred performance information respectively generated for the plurality of candidate tendencies with the observed performance information. In the above mode, each of plural pieces of inferred performance information that occur when a musical piece is performed based on different candidate tendencies is compared with observed performance information. Accordingly, it is possible to generate one of the plurality of candidate tendencies as the performance tendency of the user.
(69) Mode 3
(70) In a favorable example (mode 3) of mode 2, in the performance inference processing, the inferred performance information is generated, by providing, for each of the plurality of candidate tendencies, musical piece information representing contents of the musical piece to a neural network in which the candidate tendency is reflected. In the above mode, it is possible to appropriately generate inferred performance information that occurs when the musical piece is performed based on a candidate tendency, by providing musical piece information to a neural network in which the candidate tendency is reflected.
(71) Mode 4
(72) In a favorable example (mode 4) of mode 2 or 3, in the tendency processing, one of the plurality of candidate tendencies is generated as the performance tendency of the user, according to a posterior probability of the observed performance information being observed under a condition that each of the candidate tendencies and the musical piece information are provided. According to the above mode, it is possible to appropriately generate a most likely performance tendency among the plurality of candidate tendencies.
(73) Mode 5
(74) In a favorable example (mode 5) of mode 4, in the tendency processing, one of the plurality of candidate tendencies is generated as the performance tendency of the user, according to the posterior probability corresponding to each of the candidate tendencies and an occurrence probability of the candidate tendency. According to the above mode, there is an advantage in that a performance tendency of a user can be inferred with high accuracy, based on the tendency of whether each of a plurality of candidate tendencies is readily observed in an actual performance situation, for example.
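The combination of posterior and occurrence probabilities described in modes 4 and 5 can be sketched as follows. This is an illustrative sketch, not the claimed implementation: the function name and all numeric values are hypothetical, with `posterior` holding the posterior probability for each candidate tendency and `occurrence` holding the corresponding occurrence probabilities π.sub.k.

```python
def select_with_occurrence(posterior, occurrence):
    """Select the candidate tendency whose posterior probability,
    weighted by its occurrence probability pi_k, is largest."""
    scores = [p * pi for p, pi in zip(posterior, occurrence)]
    return scores.index(max(scores))

posterior = [0.4, 0.35, 0.25]   # hypothetical posteriors for K = 3 candidates
occurrence = [0.2, 0.5, 0.3]    # hypothetical occurrence probabilities pi_k
print(select_with_occurrence(posterior, occurrence))  # -> 1
```

Here the weighted scores are [0.08, 0.175, 0.075], so candidate 1 is selected despite candidate 0 having the higher raw posterior, reflecting how a tendency that is readily observed in actual performance situations is favored.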
(75) Mode 6
(76) In a favorable example (mode 6) of mode 1, generation of the performance tendency includes performance inference processing for generating inferred performance information that occurs when the musical piece is performed based on the specific tendency, and tendency processing for generating the performance tendency of the user by comparing the inferred performance information with the observed performance information. According to the above mode, an index of the relationship between a performance of a user and a specific tendency can be generated as the performance tendency.
(77) Mode 7
(78) A program according to a favorable mode (mode 7) of the present invention causes a computer to function as a performance analysis unit that generates a performance tendency of a user, from observed performance information relating to a performance of a musical piece by the user and inferred performance information that occurs when the musical piece is performed based on a specific tendency. According to the above mode, it is possible to generate the performance tendency of a user from observed performance information relating to a performance of a musical piece by the user.
(79) The program according to mode 7 is provided in the form of storage in a computer-readable recording medium and is installed on a computer, for example. The recording medium is, for example, a non-transitory recording medium, favorable examples of which include an optical recording medium (optical disk) such as a CD-ROM, and can encompass a recording medium of any known format such as a semiconductor recording medium or magnetic recording medium. Note that non-transitory recording media include any recording media excluding transitory propagating signals, and do not preclude volatile recording media. Also, the program may be provided to a computer in the form of distribution via a communication network.
REFERENCE SIGNS LIST
(80)
100 Performance analysis apparatus
11 Control apparatus
12 Storage apparatus
13 Performance apparatus
21 Performance inference unit
22 Performance observation unit
23 Tendency generation unit
31 Coefficient generation unit