METHOD AND SYSTEM FOR IMPROVING A PHYSIOLOGICAL CONDITION OF A SUBJECT
20230355918 · 2023-11-09
Inventors
CPC classification
G16H20/70
PHYSICS
H04S7/302
ELECTRICITY
H04S2400/11
ELECTRICITY
A61M21/02
HUMAN NECESSITIES
A61M21/0094
HUMAN NECESSITIES
H04R2203/12
ELECTRICITY
International classification
Abstract
A method for improving a physiological condition of a subject, e.g. a human or animal, is disclosed. The method comprises providing an audio signal to the subject, wherein the audio signal is associated with a virtual sound source having a shape and a position relative to the subject. The virtual sound source is defined by a plurality of virtual points, each virtual point having a position relative to the subject. The audio signal comprises audio signal components for the respective virtual points of the virtual sound source, wherein each audio signal component has been determined based on the virtual position of its associated virtual point such that the audio signal is perceived by the subject as originating from the virtual sound source having said shape and said position relative to the subject. A system for performing the method is also disclosed.
Claims
1. A method for improving a physiological condition of a subject, e.g. a human or animal, the method comprising: providing an audio signal to the subject, wherein the audio signal is associated with a virtual sound source having a shape and a position relative to the subject, wherein the virtual sound source is defined by a plurality of virtual points, each virtual point having a position relative to the subject, and wherein the audio signal comprises audio signal components for the respective virtual points of the virtual sound source, wherein each audio signal component has been determined based on the position of its associated virtual point such that the audio signal is perceived by the subject as originating from the virtual sound source having said shape and said position relative to the subject.
2. The method according to claim 1, wherein the method is a non-therapeutic method.
3. The method according to claim 1, wherein the audio signal is obtained by obtaining virtual sound source information defining the respective positions of the virtual points relative to the subject, the virtual points defining the virtual sound source having said shape and said position relative to the subject, obtaining an input audio signal, and determining the respective audio signal components for the respective virtual points based on the input audio signal and based on the respective positions of the virtual points, wherein for each audio signal component respectively associated with a virtual point, determining the audio signal component comprises: modifying the input audio signal to obtain a modified audio signal component using a signal delay operation introducing a time delay, wherein the time delay is based on the position of the virtual point associated with the audio signal component relative to the shape of the virtual sound source; and determining the audio signal component based on a combination of the input audio signal, or of an inverted and/or attenuated or amplified version of the input audio signal, and the modified audio signal component; and combining the determined audio signal components to obtain the audio signal.
4. The method according to claim 3, wherein the input audio signal is an audio signal produced by a tuning fork.
5. The method according to claim 1, comprising: providing the audio signal to the subject using a plurality of loudspeakers, and determining a loudspeaker audio signal for each loudspeaker, wherein each loudspeaker audio signal is determined based on the audio signal components, and providing the loudspeaker audio signals to the respective loudspeakers.
6. The method according to claim 5, wherein determining the loudspeaker audio signal for each loudspeaker comprises, for each loudspeaker audio signal, attenuating each audio signal component based on a loudspeaker specific coefficient in order to obtain a loudspeaker specific set of attenuated audio signal components and combining the attenuated audio signal components in the loudspeaker specific set of attenuated audio signal components.
7. The method according to claim 5, wherein the plurality of loudspeakers comprises a loudspeaker in front of the subject and a loudspeaker behind the subject and a loudspeaker to the right of the subject and a loudspeaker to the left of the subject and a loudspeaker above the subject and a loudspeaker below the subject.
8. The method according to claim 5, wherein the plurality of loudspeakers comprises at least eight loudspeakers: a loudspeaker above the subject; a loudspeaker in front of, below the subject; a loudspeaker in front of, to the left of, above the subject; a loudspeaker in front of, to the right of, above the subject; a loudspeaker behind, above the subject; a loudspeaker behind, to the left of, below the subject; a loudspeaker behind, to the right of, below the subject; and a loudspeaker below the subject.
9. The method according to claim 1, wherein the virtual sound source is shaped as a cube or a pyramid or a sphere.
10. The method according to claim 1, wherein the audio signal is configured such that it is perceived by the subject that said virtual sound source is surrounding the subject.
11. The method according to claim 1, comprising providing the audio signal to the subject using a plurality of loudspeakers that surround the subject.
12. The method according to claim 1, wherein the audio signal is provided to the subject for at least one minute.
13. The method according to claim 1, wherein the virtual sound source associated with the audio signal changes shape and/or position while the audio signal is provided to the subject, and thus wherein the respective positions relative to the subject of the respective virtual points defining the virtual sound source change while the audio signal is provided to the subject, such that the audio signal is perceived by the subject as originating from the virtual sound source having a varying position and/or orientation relative to the subject.
14. The method according to claim 1, wherein one or more virtual points of the virtual sound source are virtually positioned at a depth below the subject, wherein the audio signal is obtained by for each audio signal component associated with a virtual point that is positioned at a virtual depth below the subject, adding depth characteristics to the audio signal component in question comprising modifying the audio signal component in question using a time delay operation introducing a time delay, a signal attenuation and a signal feedback operation in order to obtain a modified version of the audio signal component and combining the modified version of the audio signal component with the audio signal component in question, wherein the signal attenuation is performed in dependence of the virtual depth below the subject of the virtual point associated with the audio signal component in question.
15. The method according to claim 1, wherein one or more virtual points of the virtual sound source are virtually positioned at a height above the subject, wherein the audio signal is obtained by for each audio signal component associated with a virtual point that is positioned at a virtual height above the subject, adding height characteristics to the audio signal component in question comprising modifying the audio signal component in question using a signal inverting operation, a signal delay operation introducing a time delay and a signal attenuation to obtain a modified version of the audio signal component and combining the modified version of the audio signal component with the audio signal component in question, wherein the signal attenuation is performed in dependence of the virtual height of the virtual sound source.
16. The method according to claim 1, wherein one or more virtual points of the virtual sound source are virtually positioned at a virtual distance from the subject, wherein the audio signal is obtained by for each audio signal component associated with a virtual point that is positioned at a virtual distance from the subject, adding distance characteristics to the audio signal component in question comprising modifying the audio signal component in question using a first signal delay operation introducing a first time delay, a first signal attenuation operation and a signal feedback operation in order to obtain a first modified version of the audio signal component and combining the first modified version of the audio signal component with the audio signal component in question to obtain a second modified version of the audio signal component and performing a second signal attenuation, wherein the first and second signal attenuation are performed in dependence of the virtual distance from the subject.
17. A system for improving a physiological condition of a subject, e.g. a human or animal, the system comprising: a data processing system for determining an audio signal associated with a virtual sound source having a shape and a position relative to the subject, wherein the virtual sound source is defined by a plurality of virtual points, each virtual point having a position relative to the subject, and wherein the audio signal comprises audio signal components for the respective virtual points of the virtual sound source, the data processing system being configured to determine each audio signal component based on the position of its associated virtual point such that the audio signal is perceived by the subject as originating from the virtual sound source having said shape and said position relative to the subject, and the system comprising one or more loudspeakers for providing the determined audio signal to the subject.
18. A computer readable medium comprising instructions which, when executed by a processor, cause the processor to perform a method comprising: providing an audio signal to a subject, wherein the audio signal is associated with a virtual sound source having a shape and a position relative to the subject, wherein the virtual sound source is defined by a plurality of virtual points, each virtual point having a position relative to the subject, and wherein the audio signal comprises audio signal components for the respective virtual points of the virtual sound source, wherein each audio signal component has been determined based on the virtual position of its associated virtual point such that the audio signal is perceived by the subject as originating from the virtual sound source having said shape and said position relative to the subject.
19. The method of claim 16 and further comprising performing a second signal delay operation introducing a second time delay on the second modified version of the audio signal component.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0085] Aspects of the invention will be explained in greater detail by reference to exemplary embodiments shown in the drawings, in which:
DETAILED DESCRIPTION OF THE DRAWINGS
[0123] In the figures, identical reference numerals refer to identical or similar elements. Further, a flow chart may be understood to depict both an embodiment of a method, in that several steps are depicted, and an embodiment of a system, such as a circuit, that is configured to process signals as depicted in the flow chart. Further, elements that are indicated by dashed lines are optional elements.
[0125] The method 10 for determining the audio signal may comprise
[0126] obtaining virtual sound source information 6 defining the respective positions of the virtual points relative to the dimensional shape of the virtual sound source and relative to the subject 2, the virtual points defining the virtual sound source having said shape and said position relative to the subject 2, and
[0127] obtaining an input audio signal 8, and
[0128] determining the respective audio signal components for the respective virtual points based on the input audio signal 8 and based on the respective positions of the virtual points, wherein
[0129] for each audio signal component respectively associated with a virtual point, determining the audio signal component comprises
[0130] modifying the input audio signal 8 to obtain a modified audio signal component using a signal delay operation introducing a time delay, wherein the time delay is based on the defined position of the virtual point associated with the audio signal component relative to the dimensional shape of the virtual sound source; and
[0131] determining the audio signal component based on a combination, e.g. a summation, of the input audio signal 8, or of an inverted and/or attenuated or amplified version of the input audio signal 8, and the modified audio signal component, and
[0132] combining the determined audio signal components to obtain the audio signal.
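The per-point delay-and-combine steps of method 10 can be sketched as follows. This Python sketch is not part of the original disclosure: the sample rate, the rounding of delays to whole samples, and the function names are assumptions for illustration only; the per-point delay values are taken as given here (the disclosure derives them from each virtual point's position).

```python
FS = 96000  # sample rate in Hz (an assumption; the text elsewhere mentions 96 kHz)

def component_for_point(x, dt, fs=FS):
    """One audio signal component: the input signal x summed with a copy of
    itself delayed by dt seconds (dt assumed derived from the point's position)."""
    d = max(0, int(round(dt * fs)))                     # integer sample delay
    delayed = [0.0] * d + x[:len(x) - d] if d else list(x)
    return [a + b for a, b in zip(x, delayed)]          # combine, e.g. summation

def audio_signal(x, delays):
    """Combine the per-point components into the final audio signal."""
    comps = [component_for_point(x, dt) for dt in delays]
    return [sum(samples) for samples in zip(*comps)]
```

For an impulse input, a one-sample delay yields a component with two unit peaks, and summing components for several points superposes their delay patterns.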
[0133] The method referred to herein provides an accessible and efficient way to improve the physiological condition, e.g. to improve the homeostasis, of a subject 2, by means of encoding virtual sound source information 6 into sound waves propagating from a sound output medium, e.g. loudspeakers 4. It should be understood that the described improved physiological effects may be obtained using the whole of the described methods; and/or using any separate part of the described methods; and/or using any other method for obtaining an audio signal that is perceived by a subject as originating from a virtual sound source having a shape, be it a prior-art method or a future method. The methods described herein for determining and/or generating the audio signal may include digital processing of sound signals, analogue circuits that modify sound signals, and/or combinations thereof with methods of acoustic modification and generation of sound, in order to obtain a sound projection of a defined dimensional shape, size and density.
[0135] In an embodiment, the virtual sound source 10 is shaped as a pyramid as depicted. It should be understood that the method is not limited to one type of shape, and the claimed effects comprise the encoding of shape in an audio signal, as distinct from other commonly described attributes of sound such as its pitch, loudness, timbre, etc.; and that embodiments may include any type and/or combination of shape and any spatial transformation of such shape.
[0136] In an embodiment, the loudspeakers 4 may be placed surrounding the subject 2 vertically and horizontally, i.e. surrounding the subject equally from above, below, front, back, left and right, with each loudspeaker positioned at an equal radius from the center where the subject 2 is positioned. It should be understood that the method is not limited to one loudspeaker configuration and/or a fixed number and position of loudspeakers, and that embodiments may include any number of loudspeakers in any spatial configuration.
[0137] In an embodiment, the loudspeakers 4 used for such a configuration may be omnidirectional, i.e. with an equal distribution of the audible frequency range across angles up to 90 degrees off-axis, to achieve optimal coherence between the configured loudspeakers. It should be understood that the method may include obtaining the described effects with any other combination of loudspeakers and/or with any other types of loudspeakers or sound transducers, including but not limited to vibro-transducers, bone-conduction transducers and headphones. It should also be understood that the invention may include configurations of devices that project sound within the human audible frequency range as well as devices that project in the ultrasonic range (>20 kHz) and infrasonic range (<20 Hz), which exceed the generally regarded human audible frequency range.
[0138] In an embodiment, the subject 2 is placed in the center of a loudspeaker configuration, thus enabling the subject 2 to receive the acoustic summation of the audio signal equally from all sides. It should be understood that the method may include positioning of the subject 2 in any other position or posture, including lying, sitting, standing and/or moving in space; and that the subject 2 may experience the described physiological effect of the projected sound shape while being physically positioned inside or outside of the virtual sound source 10.
[0140] The processing comprises, in an embodiment, associating the input audio signal with a distinct shape, i.e. modifying the input audio signal based on the virtual sound source information and generating audio signal components for the respective virtual points that define the virtual sound source. Optionally, a spatial wave transform operation is performed when determining each audio signal component. Such a spatial wave transform is described with reference to
[0141] The audio signal 12 provided to the subject 2, which the subject 2 perceives as originating from a virtual sound source 10 having a shape and position, may be said to form a projection of the virtual sound source 10 with that shape. The virtual sound source 10, besides a shape, also has a position relative to the subject 2 and may also have a certain density. The virtual points may define the density of the virtual sound source 10 in that a higher density of virtual points per volume corresponds to a higher density of the virtual sound source 10. The physiological response to the sound shape projection 12, for example indicating improved homeostasis, may be measured by a significant decrease 14 in Alpha-wave mean power and a significant decrease 16 in Alpha:Beta-wave power ratio in the Brain Activity; and a significant decrease 18 in LF:HF power ratio in the Heart Rate Variability (HRV), where LF stands for "Low Frequency" and HF for "High Frequency". The described effects, i.e. improvement of the physiological state of the subject 2, may be observable within a short exposure period to the audio signal, e.g. less than 5 minutes.
[0142] The experiences described by the subject 2 as a result of being provided the audio signal are associated with feelings in the subject 2 of deep relaxation 20, i.e. significantly more relaxed and less nervous after exposure than before exposure; increased confidence 22, i.e. more confident and less anxious after exposure than before exposure; and increased happiness 24, i.e. happier and less frustrated and/or less depressed after exposure than before exposure.
[0145] The audio signal 12 may then be distributed to several loudspeakers 4 using a signal distribution matrix 13 as will be explained in more detail below.
[0146] The acoustical summation 30 of the audio output signals 28 thus obtained for each discrete loudspeaker z_n in a loudspeaker configuration results in a sound shape projection 32, i.e. a sound source that has a distinct shape and size and is positioned at a particular distance, height and depth in relation to the subject 2. The generated audio signal 12, once played out by a loudspeaker system 4, can be considered a projection of the virtual sound source's shape, irrespective of how many loudspeakers are used and irrespective of the position of the observer 2 relative to the loudspeakers 4. The described sound shape projection (at least partially) overrules the spatio-spectral properties of the individual loudspeaker(s) and creates a coherent spatial projection of the sound signal by means of its size and shape. This is also described in patent applications NL2024434 and NL2025950, which describe a method to associate an audio signal with a virtual sound source, the contents of which should be considered included in this disclosure in their entirety.
[0148] The virtual sound source information 6 can then subsequently be used to modify the input audio signal 8 by, optionally, applying a 'spatial wave transform' 38 relative to the dimensional shape of the virtual sound source, e.g. to determine a plurality of audio signal components 26_x respectively associated with the virtual points as defined by the virtual sound source information. The respective positions of the virtual points may be denoted in Cartesian coordinates (x, y, z).
[0149] The audio signal components 26 are further modified based on the distance, height and depth, relative to the subject 2, of their associated virtual points. The resulting audio signal components 26 may then be input to a 'signal distribution matrix' 13, which takes as input the optionally modified audio signal components y(t)_n and the particle positions, i.e. the virtual position of each determined point on the virtual shape generated by the particle grid generator, optionally denoted in Cartesian coordinates (x, y, z).
[0150] The signal distribution matrix 13 can then distribute the audio signal to a plurality of loudspeakers 4 as described in more detail below.
[0151] Once the audio signal 8 is provided using the loudspeakers 4 to the subject 2, the subject 2 will perceive the audio signal 8 as if it originates from a virtual sound source 10 having the shape as output by the shape generator 34.
[0153] In light of this system, it is clear that the method for improving the physiological condition of a subject may comprise generating an input audio signal based on sound waves hitting such a pressure-velocity transducer, amplifying the input audio signal as generated by the pressure-velocity transducer, converting the analogue input audio signal into a digital version, processing the audio signal based on the virtual sound source information in the manners described herein, converting the digital audio signal as output by the data processing system into an analogue version, and amplifying the resulting audio signal before feeding it to a plurality of loudspeakers. Herein, amplifying the resulting audio signal may comprise separately amplifying each loudspeaker audio signal.
[0154] The system and method as depicted in
[0155] In another embodiment, the audio input signal(s) 46 may have been output by a recording process in which sounds have been acquired or generated prior to playback and stored onto a readable digital or analogue storage medium;
[0156] In another embodiment, the audio input signal(s) 48 have been output by means of a digital or analogue synthesis process, acquired prior to playback and stored onto a digital or analogue storage medium; and/or acquired in real-time and/or optionally converted into a digital signal.
[0157] This disclosure also relates to a computer processing unit 100, also referred to as a data processing system, that executes computer programs and/or code portions designed to modify an audio input signal and generate modified audio signal components associated with points on a virtual shape; and to generate audio signal components associated with a discrete loudspeaker as part of a loudspeaker configuration, i.e. audio output signals.
[0159] The virtual points may be equally distributed over the surface of the virtual sound source 10. A higher density of the virtual points on such surface corresponds to a higher resolution.
[0160] It should be appreciated that the virtual sound source 10 can be defined to be hollow. In such case, the virtual sound source information 6 does not define virtual points “inside” the virtual sound source 10, but only on the external surfaces and edges of the virtual sound source 10. The virtual sound source 10 can also be “solid”. In such case, the virtual sound source information 6 defines, in addition to virtual points on the exterior surfaces and edges of the virtual sound source 10, virtual points “inside” the virtual sound source 10, which may be equally distributed across the interior volume of the virtual sound source 10.
[0161] In an embodiment, a virtual sound source 10 has a geometric shape, i.e. a pure dimensional shape, or may be semi-geometric, irregular or organically shaped. It should be understood that the virtual sound source 10 may have any form and that any method may be used to determine the shape of the virtual sound source and the virtual points constituting that shape.
[0162] The density of the virtual points may also be referred to as the resolution of the virtual points and/or the ‘grid resolution’.
[0164] An infinite lattice L can be defined as

L = a.(Z.v_1 + Z.v_2 + Z.v_3)

[0165] where Z is the ring of integers, v_1, v_2, v_3 describe three vectors, and the constant a relates to the minimal increment, so that a planar slice of the lattice is given by [0166] L = {points (x, y) such that x = a.n.(v_1.x) + a.m.(v_2.x), y = a.n.(v_1.y) + a.m.(v_2.y), with n, m integers}.
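A finite window of the planar lattice slice can be generated as in the following sketch (not part of the original disclosure; the function name and the symmetric index window are assumptions for illustration):

```python
def lattice_points(a, v1, v2, nmax, mmax):
    """Points (x, y) with x = a.n.(v1.x) + a.m.(v2.x) and
    y = a.n.(v1.y) + a.m.(v2.y), for n in [-nmax, nmax], m in [-mmax, mmax]."""
    return [(a * (n * v1[0] + m * v2[0]),
             a * (n * v1[1] + m * v2[1]))
            for n in range(-nmax, nmax + 1)
            for m in range(-mmax, mmax + 1)]
```

With the unit vectors v_1 = (1, 0) and v_2 = (0, 1) and a = 1, this produces the integer grid within the chosen window.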
[0167] As sound is considered to propagate symmetrically in all directions, the patterns of overlapping or tangent circles generated by the lattice are considered, where a sphere is centered around each virtual point of the grid. The radius of the circles may be increased to influence the generated patterns of the sound propagation in space, which are further described in the following examples.
The k-circle has radius

k*R/res

[0172] where R is the radius of the actual shape, and the k-circle has 6*k points on it. The 0-circle is the center point, while the res-circle is the actual shape.
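The concentric circle grid can be sketched as follows (not part of the original disclosure; the function name and the equal angular spacing of the 6*k points are assumptions for illustration):

```python
import math

def circle_grid(R, res):
    """Concentric circle grid: the k-circle (k = 1..res) has radius k*R/res
    and carries 6*k points; the 0-circle is the single center point and
    the res-circle is the actual shape."""
    points = [(0.0, 0.0)]                       # the 0-circle: center point
    for k in range(1, res + 1):
        radius = k * R / res
        for i in range(6 * k):
            angle = 2.0 * math.pi * i / (6 * k)
            points.append((radius * math.cos(angle), radius * math.sin(angle)))
    return points
```

For resolution res the grid holds 1 + 3*res*(res + 1) points, e.g. 19 points for res = 2.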
The k-th ring has side length

k*L/res

[0175] where L is the side length of the actual shape, and has 9*k points, or 3*k for each edge.
The k-th ring has side length

k*L/res

[0177] where L is the side length of the actual shape, and has 8*k points, or 2*k per edge.
The k-th ring has radius

k*R/res

[0179] where R is the radius of the actual shape, and has 5*k points, or k per edge.
The k-th ring has radius

k*R/res

[0181] where R is the radius of the actual shape, and has 6*k points, or k per edge.
[0182] To determine the number of points within a shape, i.e. the grid resolution, let res denote the resolution, an integer >= 0. If res = 0, only one point is positioned in the center. If res = 1, one point is positioned at the center and one on each vertex; etc.
[0184] The k-sphere (0 < k < res) has radius k*a and, in an embodiment, the sphere is composed of 3*k circles joining at different heights, with 6*k points on each circle.
[0190] In an embodiment, a shape can be a swarm, i.e. a cluster of bounded points that bounce within the area or boundaries of a dimensional shape, or that form an infinite, deterministic or probabilistic transformation of the shape.
x = r(a + cos v) cos u
y = r(a + cos v) sin u
z = r sin v
[0192] A torus shape can then transform into three types by modifying the parameters r and a. If a = 1, a 'horn torus' is formed; if a < 1, a 'spindle torus' is formed; if a > 1, a 'ring torus' is formed.
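The torus parametrisation above can be sketched directly (not part of the original disclosure; the function name is an assumption for illustration):

```python
import math

def torus_point(r, a, u, v):
    """Point on the torus x = r(a + cos v)cos u, y = r(a + cos v)sin u,
    z = r sin v. a = 1 gives a horn torus, a < 1 a spindle torus,
    a > 1 a ring torus."""
    return (r * (a + math.cos(v)) * math.cos(u),
            r * (a + math.cos(v)) * math.sin(u),
            r * math.sin(v))
```

For example, with r = 1 and a = 2 (a ring torus), the point at u = v = 0 lies at (3, 0, 0) on the outer equator.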
r = a.u + b:

x = (a.u + b)*cos(u)
y = (a.u + b)*sin(u)
z = 0
x = r.cos(a.u)
y = r.sin(b.u)
z = u, with r, a, b fixed
[0195] or the helicoid variant
x = r.cos(a.u)
y = r.sin(a.u)
z = u
[0196] where, for instance: −1 ≤ r ≤ 1 and −π ≤ u ≤ π, or else u in (−inf, +inf).
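The helix and helicoid parametrisations can be sketched as follows (not part of the original disclosure; the function names are assumptions for illustration):

```python
import math

def helix_point(r, a, b, u):
    """Helix: x = r.cos(a.u), y = r.sin(b.u), z = u, with r, a, b fixed."""
    return (r * math.cos(a * u), r * math.sin(b * u), u)

def helicoid_point(r, a, u):
    """Helicoid variant: x = r.cos(a.u), y = r.sin(a.u), z = u."""
    return (r * math.cos(a * u), r * math.sin(a * u), u)
```

Sampling u over, e.g., [−π, π] yields the virtual points of a helical virtual sound source.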
[0204] Each loudspeaker k is associated with a loudspeaker coefficient a_k. In the depicted embodiment, determining loudspeaker audio signal z_k for loudspeaker k comprises attenuating each audio signal component y_n based on loudspeaker coefficient a_k in order to obtain a loudspeaker specific set of attenuated audio signal components. A loudspeaker coefficient for a loudspeaker may be determined based on a distance between the loudspeaker in question and the virtual point. Attenuating each audio signal component y_n based on loudspeaker coefficient a_k may involve simply a multiplication y_n*a_k. In such case, the loudspeaker specific set of attenuated audio signal components for loudspeaker k may be described by: {y_1*a_k; y_2*a_k; y_3*a_k; . . . ; y_N*a_k}, wherein N denotes the total number of virtual points defined for the virtual sound source. Subsequently, the audio signal components in this set are combined, e.g. summed, in order to arrive at the loudspeaker audio signal z_k for loudspeaker k. This method is performed for all loudspeakers k.
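The distribution step described in paragraph [0204] can be sketched as follows (not part of the original disclosure; the function name and the per-sample summation are assumptions for illustration, and the coefficients a_k are taken as given here):

```python
def loudspeaker_signals(components, coefficients):
    """For each loudspeaker k: attenuate every audio signal component y_n
    by the loudspeaker coefficient a_k (y_n * a_k), then combine (sum) the
    loudspeaker-specific set of attenuated components into z_k."""
    outputs = []
    for a_k in coefficients:
        attenuated = [[sample * a_k for sample in y_n] for y_n in components]
        outputs.append([sum(samples) for samples in zip(*attenuated)])
    return outputs
```

With two components and coefficients a_1 = 0.5 and a_2 = 1.0, the first loudspeaker receives the half-scaled sum of the components and the second the full-scale sum.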
[0205] In this disclosure, values in the triangles, i.e. in the attenuation or amplification operations, may be understood to indicate a constant with which a signal is multiplied. These constants are often indicated by “a” or “b”. Thus, if such value is larger than 1, then a signal amplification is performed. If such value is smaller than 1, then a signal attenuation is performed.
[0206] The signal distribution matrix 13 may have a multiplier and a summation at each position where an input line to which an output signal of a multiplier is supplied, crosses an output, as shown in
[0207] Each output line may further comprise a signal attenuator having as attenuation coefficient:

a = 1/N^2

[0208] where N is the number of audio signal components y_n in the signal distribution matrix, and the obtained attenuation a translates to gain G in decibels (dB) as

G(dB) = 10 log_10(a)
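As a worked instance of the attenuation formula (not part of the original disclosure; the function name is an assumption for illustration):

```python
import math

def matrix_output_attenuation(N):
    """Attenuation coefficient a = 1/N^2 for N audio signal components,
    together with the corresponding gain G = 10*log10(a) in dB."""
    a = 1.0 / N ** 2
    return a, 10.0 * math.log10(a)
```

For N = 10 components this gives a = 0.01, i.e. a gain of −20 dB on each output line.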
[0209] It should be understood that the modification of the input audio signal into audio signal components and then into loudspeaker audio signals, x -> y_n -> z_k, may be performed for a pre-calculated shape of a virtual sound source, and/or for a shape that is transformed in real-time, i.e. the shape, size and density and/or the position and rotation of the shape in space are subject to changes in real-time, generated by a controller, a pre-automated set of data executed in real-time and/or a real-time computer-generated process.
[0211] In such embodiment, the method comprises a spatial wave transform 64, which means that, for the determination of each audio signal component, the input audio signal x(t) is modified to obtain a modified audio signal component using a signal delay operation introducing a time delay and determining the audio signal component based on a combination, e.g. a summation, of the input audio signal, or of an inverted and/or attenuated or amplified version of the input audio signal, and the modified audio signal component. The formula for determining the time delay that is introduced for determining the modified audio signal component may be given by
Δt = V.x_n/v

[0212] wherein V is the dimensional volume of the shape and x_n denotes a coefficient for point n on the virtual shape, each point having a relative spatial position denoted in Cartesian coordinates (x, y, z); and v is a constant relating to the speed of sound through a medium. The determination of the audio signal components by means of a spatial wave transform is also described in patent applications NL2024434 and NL2025950, the contents of which should be considered included in this disclosure in their entirety.
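The delay formula Δt = V·x_n/v can be sketched as follows (not part of the original disclosure; the function name is an assumption, and the default speed of sound in air is an assumption since the text only refers to "a medium"):

```python
def spatial_wave_delay(V, x_n, v=343.0):
    """Time delay dt = V * x_n / v for point coefficient x_n of a virtual
    shape with dimensional volume V; v defaults to the speed of sound
    in air (343 m/s, an assumption)."""
    return V * x_n / v
```

The delay thus scales linearly with both the shape's volume and the per-point coefficient, and inversely with the speed of sound in the medium.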
[0213] It should be appreciated that the determination of a plurality of audio signal components respectively associated with virtual points of a virtual sound source may be referred to as shape encoding 66.
[0214] The obtained audio signal components associated with the respective virtual points of the virtual sound source may be further modified by what is referred to as depth encoding 68, height encoding 70 and distance encoding 72 in
[0215] It should be understood that the embodiments described herein may be performed in alternative orders and using process flows that differ from those that are illustrated, and that not all steps are required in every embodiment. In other words, one or more steps may be omitted or replaced, performed in different orders or in parallel with one another, and/or additional steps may be added.
[0217] In the depicted embodiment, the input audio signal x(t) is modified (see lower branch of
[0218] The attenuation operation 79 after the summation operation 78 may comprise decreasing the gain G of the audio signal with −6 dB. The cut-off frequency f.sub.c for the high pass filter in dependence of point n on a virtual shape may be determined as
f.sub.c=v/V2(1−r.sub.n/R) for r.sub.n/R≤0.5
f.sub.c=v/V2(r.sub.n/R) for r.sub.n/R>0.5
[0219] where v is a constant relating to the speed of sound through a medium, V is the dimensional volume of a virtual shape, r.sub.n denotes the spherical radius from the center of a virtual shape to point n, and R denotes the spherical radius from the center of the shape passing through the vertices where two or more edges of a virtual shape meet. In case of two or more values for R, the largest value R is considered.
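The piecewise cut-off formula may be computed as in the sketch below. Note that the grouping of the terms is an assumption: the formula is read here as f.sub.c = (v/V)·2·(1 − r.sub.n/R) below the midpoint and (v/V)·2·(r.sub.n/R) above it; the actual grouping in the embodiment may differ.

```python
def cutoff_frequency(r_n, R, V, v=343.0):
    """High-pass cut-off f_c for point n on a virtual shape, reading the
    formula as f_c = (v/V) * 2 * (1 - r_n/R) for r_n/R <= 0.5 and
    f_c = (v/V) * 2 * (r_n/R) for r_n/R > 0.5."""
    ratio = r_n / R
    if ratio <= 0.5:
        return (v / V) * 2.0 * (1.0 - ratio)
    return (v / V) * 2.0 * ratio
```

Under this reading the cut-off is symmetric about r.sub.n/R = 0.5, where it equals v/V.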
[0220]
[0221]
[0222] Furthermore, a modified signal component y(t).sub.n is obtained comprising a summation, e.g. a combination, of the first, resp. second, modified audio signal components, and an attenuation operation in dependence of the number P of modified audio signal components associated with one and the same point on a virtual shape, where
a=1/P.sup.2, and
G(dB)=10 log.sub.10(a)
[0223] and, optionally, a high-pass filter operation using the formula described above for obtaining the cut-off frequency f.sub.c for the high-pass filter in dependence of point n on a virtual shape.
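The attenuation in dependence of the number P of modified audio signal components (a = 1/P.sup.2, G(dB) = 10 log.sub.10(a)) may be sketched as follows; the helper name is illustrative only.

```python
import math

def component_attenuation(P):
    """Attenuation for a point receiving P modified audio signal
    components: a = 1/P**2, expressed in dB as G = 10*log10(a)."""
    a = 1.0 / P ** 2
    return a, 10 * math.log10(a)
```

For P=2 this yields a=0.25, i.e. approximately -6 dB.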
[0224] It should be appreciated that the flow chart of
[0225]
[0226] Adding the depth characteristics to the audio signal component in
[0227] In this embodiment, the signal attenuation is defined by parameter "b". If b=0, no depth of the virtual point below the subject will be encoded; if b=1, a maximum depth for the virtual point associated with the audio signal component will be encoded.
[0228] The value "a", with which the result of the combination of the modified audio signal and the input audio signal is optionally attenuated or amplified 94, equals
a=(1−b)x
[0229] where x is a multiplication factor that corrects the signal gain G depending on the amount of signal feedback b, which influences the steepness of a high-frequency dissipation curve. By varying the value of b, preferably between 0 and 1, a change in depth is added to the audio signal.
[0230] Preferably, the time delay Δt that is introduced by the time delay operation is as short as possible, e.g. shorter than 0.00007 seconds, preferably shorter than 0.00005 seconds, more preferably shorter than 0.00002 seconds, and most preferably approximately 0.00001 seconds. In case of a digital sample rate of 96 kHz, the time delay may be 0.00001 seconds.
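One possible reading of the depth-encoding branch is a short feedback delay with feedback factor b, followed by scaling with a = (1 − b)·x. The sketch below is an assumed topology for illustration, not the exact flow chart of the embodiment.

```python
import numpy as np

def encode_depth(sig, b, x=1.1, fs=96000, dt=1e-5):
    """Hypothetical reading of the depth-encoding branch: a short feedback
    delay with feedback factor b, the result scaled by a = (1 - b) * x.
    b = 0 encodes no depth, b = 1 a maximum depth."""
    d = max(1, int(round(dt * fs)))        # delay in samples (about 10 us)
    y = sig.astype(float).copy()
    for i in range(d, len(y)):
        y[i] += b * y[i - d]               # signal feedback b
    a = (1.0 - b) * x                      # gain correction a = (1 - b) * x
    return a * y
```

With b=0 the signal passes through unchanged apart from the factor x; with larger b the feedback tail, and thus the encoded depth, grows.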
[0231] It should be appreciated that the flow chart of
[0232]
[0233]
[0234]
[0235] In this embodiment, if b=0, no height characteristics will be added to the audio signal component; if b=1, a maximum height of the virtual point will be perceived. If the first attenuation operation is performed, the gain G of the value "a" of attenuation 148 may be equal to
a=(1−b)x
[0236] where x is a multiplication factor that corrects the signal gain G depending on the amount of attenuation b, which influences the steepness of a low-frequency dissipation curve. By varying the value of b, preferably between 0 and 1, a change in height can be added to an audio signal component.
[0237] Preferably, the time delay Δt that is introduced by the time delay operation 142 is as short as possible, e.g. shorter than 0.00007 seconds, preferably shorter than 0.00005 seconds, more preferably shorter than 0.00002 seconds, and most preferably approximately 0.00001 seconds. In case of a digital sample rate of 96 kHz, the time delay may be 0.00001 seconds.
[0238]
[0239]
[0240]
[0241] In dependence of the distance of the virtual point associated with the audio signal component in question, the value b, the attenuation constant for operation 162, and the value a, the attenuation constant for operation 168, are varied. These constants may be understood as values with which a signal is multiplied. Thus, if such a value is larger than 1, a signal amplification is performed; if it is smaller than 1, a signal attenuation is performed. When b=0 and a=1, no distance will be encoded; when b=1 and a=0, a maximum distance will be encoded. The gain G of value a may relate to the value of b as
a=(1−b)x
[0242] where the value for x is a multiplication factor applied to the amount of signal feedback that influences the steepness of a high-frequency dissipation curve.
[0243] Preferably, the time delay Δt1 that is introduced by the time delay operation 160 is as short as possible, e.g. shorter than 0.00007 seconds, preferably shorter than 0.00005 seconds, more preferably shorter than 0.00002 seconds, and most preferably approximately 0.00001 seconds. In case of a digital sample rate of 96 kHz, the time delay may be 0.00001 seconds.
[0244] The optional time delay Δt2 that is introduced by the time delay operation 170 creates a Doppler effect associated with movement of the virtual sound source. The time delay may be determined as
Δt2=r/v
[0245] wherein r is the distance between the position of the virtual point associated with the audio signal component in question, denoted in Cartesian coordinates (x, y, z), and the subject, which may be expressed as a vantage point (x, y, z); and v is a constant expressing the speed of sound through a medium.
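The Doppler delay follows directly from the formula; a short sketch (the function name is illustrative, `math.dist` computes the Euclidean distance):

```python
import math

def doppler_delay(point, vantage, v=343.0):
    """Delay dt2 = r / v, where r is the Euclidean distance between the
    virtual point and the subject's vantage point (both in metres)."""
    r = math.dist(point, vantage)
    return r / v
```

As the virtual point moves relative to the vantage point, r and hence the delay change, which produces the Doppler effect referred to above.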
[0246] It should be appreciated that the flow chart of
[0247]
[0248]
[0249]
[0250] The resulting audio output signal is the summation of the audio signal components y.sub.n″″, yielding an audio signal with spectral modifications such that it closely resembles the resonance of a sound source with a distinct shape, i.e. the projection of a virtual sound source with a dimensional shape, size and density at a particular distance, height and depth in relation to the subject: a 'sound shape projection'.
[0251] The shape data used to obtain the modified audio signals y may be pre-calculated and stored on a readable digital or analogue storage medium, and/or generated and/or modified in real time and provided to the system as a data-streaming input. In another embodiment, the shape data comprises pre-recorded signals of a sounding object of a particular shape, size and material(s), captured at a defined angle and distance to the object and describing attributes of the acoustic propagation of the object in space. In another embodiment, the shape data comprises the acquired spectral modification data of a sound signal originating from a sounding object of a particular shape, size and material(s), captured at a defined angle and distance, i.e. the ratio of amplitudes between all frequencies or frequency bands that are attributes of the acoustic propagation of the object in space.
[0252] In an embodiment of the invention, the audio signal processing and/or code portions used in the invention may include other methods known in the art to obtain modified audio signal(s) and to encode (parts of) the shape data in the modified audio signal, including real-time FFT (Fast Fourier Transform) Analysis, Ray Tracing, Bandpass Filtering Synthesis and Convolution Synthesis. In another embodiment, the acquired shape data may be input to a sound signal generating device to modify a generated audio signal, such as a sine-wave signal, by applying methods known in the art, such as Additive Synthesis.
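As an illustration of the Additive Synthesis mentioned above, a generated tone with a fundamental pitch and a set of harmonics might be sketched as follows; the helper name and harmonic layout are assumptions, not the system's actual synthesis code.

```python
import numpy as np

def additive_tone(f0, harmonic_amps, fs=96000, dur=1.0):
    """Additive synthesis of a tone: a fundamental pitch f0 plus
    higher-order harmonics with the given relative amplitudes
    (the 'timbre' of the generated sound)."""
    t = np.arange(int(fs * dur)) / fs
    sig = np.zeros_like(t)
    for k, amp in enumerate(harmonic_amps, start=1):
        sig += amp * np.sin(2 * np.pi * k * f0 * t)   # k-th harmonic
    return sig
```

For example, `additive_tone(440.0, [1.0, 0.5, 0.25])` generates a 440 Hz fundamental with second and third harmonics at half and a quarter of its amplitude.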
[0253]
[0254]
[0255]
[0256]
[0257] In an embodiment of the invention, the audio input signal may be one or several musical tones or rhythmic pulsations, i.e. a sound signal with a steady periodic oscillation, a 'pitch' or 'pulse', and a 'timbre', meaning a distinguishable structure of higher-order harmonics to a fundamental pitch which are present in the sound and characterize the sound source. In an embodiment, a musical tone, such as obtained by an unweighted tuning fork, has been recorded in a studio to obtain the audio signal that is input for the system to improve the physiological condition of a subject as described herein. It should be understood that the invention may include the use of other audio input signals of any other character and time duration, originating from other sound sources and/or obtained by any other means, including the repetition of the signal and various time exposures of the listener to (repetitions of) such audio signal.
[0258] The conducted experiments referred to in this disclosure were performed with the sound stimuli of an unweighted tuning fork as the audio input signal. The musical tone of the tuning fork was repeated several times in its entirety during a total exposure period of 5 minutes.
[0259]
[0260] Several experiments have been conducted to test subjects' responses to an audio signal as described herein. The experiments involved four "tasks", referred to as task 1, task 2, task 3 and task 4. Task 1 involved providing subjects with a reference signal that does not project a virtual sound source as described herein (see
[0261] The exact parameters used in the flow charts of
[0262] The values for Δt, a, and b in building blocks
[0263] [y] represents the vertical axis, i.e. height, in the Cartesian coordinates (x, y, z) of a virtual point n. If [y].sub.n<0, then depth D.sub.n is defined as D.sub.n=−y and height H.sub.n=0; if [y].sub.n>0, then height H.sub.n is defined as H.sub.n=y and D.sub.n=0.
[0264] The coefficient b for operation 88 in
G(b)=10 log.sub.10(P.sub.b/P.sub.o)
[0266] where P.sub.b is the power value b and P.sub.o is the reference power P.sub.o=1
[0267] The coefficient a for operation 94 in
G(a)=(10 log.sub.10(P.sub.a/P.sub.o))x
[0269] where P.sub.a is the power value a and P.sub.o is the reference power P.sub.o=1
[0270] The coefficient b for operation 144 in
b=(1/H.sub.O)H.sub.n
[0271] where H.sub.O is the threshold height (in m) and the attenuation gain G(dB) is given by
G(b)=10 log.sub.10(P.sub.b/P.sub.o)
[0272] where P.sub.b is the power value b and P.sub.o is the reference power (P.sub.o=1)
[0273] The coefficient a for operation 148 in
G(a)=(10 log.sub.10(P.sub.a/P.sub.o))x
[0275] where P.sub.a is the power value a and P.sub.o is the reference power P.sub.o=1
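The conversion from a height (or, analogously, depth) value to the coefficient b and its attenuation gain, per the formulas above, may be sketched as follows; the helper name is illustrative.

```python
import math

def coefficient_and_gain(value, threshold):
    """b = (1/threshold) * value (e.g. height H_n against threshold
    height H_O), with attenuation gain G(b) = 10*log10(P_b/P_o)
    where the reference power P_o = 1."""
    b = value / threshold
    return b, 10 * math.log10(b)
```

For example, a point at half the threshold height yields b=0.5 and a gain of roughly -3 dB.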
[0276] The values for Δt1, a, b, and Δt2 in building blocks 160, 168, 162, 170, respectively in
[0277] Delay time Δt1 is as small as possible but >0, see explanation at
[0278] The coefficient b for operation 162 in
r.sub.O->n=√((x.sub.n−x.sub.O).sup.2+(y.sub.n−y.sub.O).sup.2+(z.sub.n−z.sub.O).sup.2)
[0281] and the attenuation gain G(b) in dB is given by
G(b)=10 log.sub.10(P.sub.b/P.sub.o)
[0282] where P.sub.b is the power value b and P.sub.o is the reference power P.sub.o=1
[0283] The coefficient a for operation 168 in
a=P.sub.o(1/r.sub.O->n)x
[0284] where P.sub.o is the reference power level, and the obtained coefficient a translates to gain G(a) in dB as
G(a)=(10 log.sub.10(P.sub.a/P.sub.o))x
[0285] where x is a multiplication factor, in the case of performing tasks 2-4 set to a value of x=1.1. Delay time Δt2 for operation 170 in
Δt2=r.sub.O->n/v
[0286] where v is the propagation speed of sound travelling through a medium, in the case of tasks 2-4 set to 343 m/sec, i.e. the speed of sound through air at an average temperature of 20° C. and a humidity of ca. 50%.
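Taken together, the distance-encoding quantities for a virtual point may be computed as in the sketch below; the function name and the return convention are assumptions for illustration.

```python
import math

def distance_encoding(p_n, p_o, x=1.1, v=343.0, P_o=1.0):
    """Distance r_{O->n} between virtual point p_n and origin/vantage p_o,
    coefficient a = P_o * (1/r) * x, its gain G(a) = (10*log10(a/P_o)) * x,
    and the delay dt2 = r / v (v = 343 m/s for air at 20 degrees C)."""
    r = math.sqrt(sum((pn - po) ** 2 for pn, po in zip(p_n, p_o)))
    a = P_o * (1.0 / r) * x
    g_a = (10 * math.log10(a / P_o)) * x
    dt2 = r / v
    return r, a, g_a, dt2
```

For a point 5 m from the vantage point this gives a = 0.22 and a delay of about 14.6 ms.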
[0287] The loudspeaker coefficients a (y.sub.n->z.sub.n) in building block 13 in
[0288] A loudspeaker configuration (see
TABLE-US-00001
<setup>
  <grid>
    <speaker speakerType="satellite" z="0" y="1.26" x="0" ch="1" id="A"/>
    <speaker speakerType="satellite" z="0.59" y="0.41" x="-1.02" ch="2" id="B"/>
    <speaker speakerType="satellite" z="0.59" y="0.41" x="1.02" ch="3" id="C"/>
    <speaker speakerType="satellite" z="-1.18" y="0.41" x="0" ch="4" id="D"/>
    <speaker speakerType="satellite" z="1.18" y="-0.41" x="0" ch="5" id="E"/>
    <speaker speakerType="satellite" z="-0.59" y="-0.41" x="1.02" ch="6" id="F"/>
    <speaker speakerType="satellite" z="-0.59" y="-0.41" x="-1.02" ch="7" id="G"/>
    <speaker speakerType="satellite" z="0" y="-1.26" x="0" ch="8" id="H"/>
    <!-- Top Layer -->
    <shape speakers="D B A" type="projectionTriangle"/>
    <shape speakers="B C A" type="projectionTriangle"/>
    <shape speakers="C D A" type="projectionTriangle"/>
    <!-- Mid Layer -->
    <shape speakers="E C B" type="projectionTriangle"/>
    <shape speakers="E F C" type="projectionTriangle"/>
    <shape speakers="F D C" type="projectionTriangle"/>
    <shape speakers="F G D" type="projectionTriangle"/>
    <shape speakers="G B D" type="projectionTriangle"/>
    <shape speakers="G E B" type="projectionTriangle"/>
    <!-- Bottom Layer -->
    <shape speakers="H E G" type="projectionTriangle"/>
    <shape speakers="H F E" type="projectionTriangle"/>
    <shape speakers="H G F" type="projectionTriangle"/>
    <!-- Six tetrahedrons filling the inside -->
    <shape speakers="F E C D" type="tetrahedron"/>
    <shape speakers="G E D B" type="tetrahedron"/>
    <shape speakers="B D C E" type="tetrahedron"/>
    <shape speakers="F G E D" type="tetrahedron"/>
    <shape speakers="G F E H" type="tetrahedron"/>
    <shape speakers="C D B A" type="tetrahedron"/>
    <projectionPoint z="0" y="2" x="0"/>
  </grid>
  <routing value="1 2 3 4 5 6 7 8"/>
  <center z="0" y="0" x="0"/>
</setup>
[0289] If a point associated with y.sub.n is located at a projection angle of a loudspeaker projection plane, or is located within a loudspeaker volume, consisting of loudspeakers z.sub.n on a particular face within a particular volume of the loudspeaker configuration, then for all loudspeakers not contained in the loudspeaker configuration shape
a(y.sub.n->z.sub.n)=0
[0290] and for each loudspeaker located on the loudspeaker configuration shape, the distance r.sub.n for y.sub.n->z.sub.n is determined as
r.sub.n=√((x.sub.yn−x.sub.zn).sup.2+(y.sub.yn−y.sub.zn).sup.2+(z.sub.yn−z.sub.zn).sup.2)
Furthermore
Σr=r.sub.1+r.sub.2+ . . . +r.sub.n
and the amplitude of signal y.sub.n for each loudspeaker z.sub.n is determined as
a(y.sub.n->z.sub.n)=1/(Σr/r.sub.n)
and the attenuation gain in dB is determined as
G(y.sub.n->z.sub.n)=10 log.sub.10(P.sub.a/P.sub.o)
and thus
y.sub.n(a)=Σa(y.sub.n->z.sub.n)=1
which yields equal power panning.
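The equal power panning above can be checked numerically: a(y.sub.n->z.sub.n) = 1/(Σr/r.sub.n) = r.sub.n/Σr, so the coefficients over the loudspeakers of the enclosing configuration shape sum to 1. The sketch below follows the formula as written (note that, read literally, it assigns the larger amplitude to the more distant loudspeaker); the helper name is illustrative.

```python
import math

def panning_amplitudes(point, speakers):
    """Amplitudes a(y_n -> z_n) = 1 / (sum(r) / r_n) = r_n / sum(r) for
    the loudspeakers z_n on the enclosing configuration shape; by
    construction the coefficients sum to 1 (equal power panning)."""
    rs = [math.dist(point, s) for s in speakers]   # distances r_n
    total = sum(rs)                                # sum(r)
    return [r / total for r in rs]
```

For a point at distances 1, 2 and 3 m from three loudspeakers the amplitudes are 1/6, 2/6 and 3/6, which indeed sum to 1.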
[0291] The attenuation a of each obtained loudspeaker signal z.sub.n in
[0292]
[0293] Compared to the input audio signal, the audio output signal comprising stereo sound projection shows an increase in power ratio of the fundamental to the first harmonic of 1:0.0003 (−35 dB) and from the fundamental to the second harmonic of 1:0.00008 (−41 dB).
[0294] By reproduction of the audio signal using a “standard” method, such as a stereo sound system, one may conclude that some of the recorded information is modified, i.e. the strength or presence of occurring harmonics in the spectrum of the recorded sound source is partially diminished or obscured by the propagation of the output medium.
[0295]
[0296] Compared to the input audio signal shown in
[0297]
[0298] Compared to the audio input signal (see
[0299]
[0300] Compared to the audio input signal (see
[0301] By reproduction of the audio signal using a sound shape projection, one may conclude that the recorded information is modified such that the resulting spectrum resembles the resonance of the projected shape, e.g. the resonance of a sound source with a shape of a pyramid, cube or sphere, and that the strength or presence of the occurring harmonics in the spectrum of the audio input source may be increased or decreased due to the shape projection, and that such sound shape projection (at least partially) overrules the spatio-spectral properties resulting from propagation of the output medium, i.e. the individual loudspeakers.
[0302] In an embodiment, a virtual sound source may be shaped as a pyramid, a cube or a sphere. The data regarding the physiological response of the human body after exposure to the projection of such shapes referred to in this disclosure were obtained with the projection of these three shapes as examples. These three shapes were chosen as fundamental basic geometries because of their relation to natural processes (Y Li et al., 2015) and crystallisation processes (C Park et al., 2010), and because they have been the subject of prior physiological experiments (I R Kumar et al., 2005). The effects referred to as the effects of sound shapes are the observable effects that are obtained to a significant degree with any of these shapes, i.e. the general effect of attributing shape to sound, as distinct from the effect of the rhythm, pitch or timbral information commonly present in sound. The accurate projection of shape produces a distinct difference and/or increase of such effect in comparison to projection of the same audio signal without specifically taking into account the shape of the projected sound object, e.g. using standard methods known in the art such as stereo sound projection. Although the effects of each distinct shape referred to in this disclosure may show distinct differences, the claims on the method described herein refer to those observable and measurable effects that the projections of the shapes have in common. It should be understood that the method refers to the audible production of shape, and thus may include any geometrical and/or non-geometrical shape, mathematically coherent projections of shapes of any spatial dimensions, and/or shapes that transform in periodic oscillations, such as spirals.
[0303]
[0304] On the right is shown the summary of the measured physiological effects observed after 5 minutes of the sound exposure. No distinguishable effect can be observed in the mean power of the subject's Alpha-wave activity when comparing post-exposure to pre-exposure, hereinafter also referred to as the 'base condition'. The Alpha:Beta-wave ratio of the brain activity and the LF:HF ratio of the heart-rate variability slightly decreased when comparing post-exposure to task 1 with the base condition, though not significantly.
[0305]
[0306] On the right is shown the summary of the measured physiological effects observed after 5 minutes of the sound exposure. A significant decrease can be observed in Alpha-wave mean power and in the Alpha:Beta-wave power ratio of the Brain Activity; and, a significant decrease in LF:HF power ratio of the Heart Rate Variability; the measured effects of which indicate improved homeostasis of the subject.
[0307]
[0308] On the right is shown the summary of the measured physiological effects observed after 5 minutes of the sound exposure. A significant decrease can be observed in Alpha-wave mean power and a very significant decrease in the Alpha:Beta-wave power ratio of the Brain Activity; and, a significant decrease in LF:HF power ratio of the Heart Rate Variability; the measured effects of which indicate improved homeostasis of the subject.
[0309]
[0310] On the right is shown the summary of the measured physiological effects observed after 5 minutes of the sound exposure. A significant decrease can be observed in Alpha-wave mean power and in the Alpha:Beta-wave power ratio of the Brain Activity; and, a significant decrease in LF:HF power ratio of the Heart Rate Variability; the measured effects of which indicate improved homeostasis of the subject.
[0311] Importantly, the observed effect of the sound shape projection in tasks 2-4 is that all observed effects significantly increase compared to task 1 and compared to the base condition; that is, the decrease in mean power of the Alpha-wave activity of the brain, the significant decrease in the difference ratio slope between the Alpha:Beta-wave powers, and the significant decrease in the difference ratio slope of the LF:HF power ratio of the Heart Rate Variability.
[0312] A decrease in the Alpha:Beta-wave power ratio may be interpreted as enhanced relaxation (International Medical Journal 23(1):1-3, April 2016; R F Navea et al., conference paper, Project Einstein 2015, De La Salle University, Manila) and is indicative of enhanced cohesion in brain waves. Lower levels of Alpha-waves at the left front central were significantly associated with higher levels of self-acceptance, environmental mastery, personal growth and total Psychological Well-Being (H L Urry et al., Psychol Sci. 2004 June; 15(6):367-72), also suggesting a positive effect on the cardiovascular and respiratory systems in accordance with mood induction (Matti Grohn et al., Proceedings of the 18th International Conference on Auditory Display, Atlanta, GA, USA, June 18-21, 2012). A decrease in Alpha-wave activity is also reported to relate to higher levels of oxygen in the blood (H Yuan et al., Neuroimage. 2010 Feb. 1; 49(3):2596). The results indicate participants were in a state of enhanced concentration, i.e. immersion (S. Lim et al., Sensors (Basel) 2019 Apr. 8; 19(7):1669). Furthermore, while immersed in a VR experience, Alpha-wave activity has been shown to decrease during arithmetic tasks compared to purely mental tasks, also suggesting attention directed inwards (Elisa Magosso et al., Computational Intelligence and Neuroscience, Volume 2019).
[0313] It is generally accepted that the activities of the autonomic nervous system (ANS), which consists of the sympathetic (SNS) and parasympathetic nervous systems (PNS), are reflected in the low-(LF) and high-frequency (HF) bands in heart rate variability (HRV) while the ratio of the powers in those frequency bands, the so called LF:HF power ratio, has been used to quantify the degree of sympathovagal balance (Sin-Ae Park et al; Int J Environ Res Public Health. 2017 September; 14(9): 1087).
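The LF:HF power ratio referred to above may be computed from an HRV power spectral density as sketched below. The band limits of 0.04-0.15 Hz (LF) and 0.15-0.4 Hz (HF) are the conventionally used values, assumed here rather than stated in this disclosure.

```python
import numpy as np

def lf_hf_ratio(freqs, psd, lf=(0.04, 0.15), hf=(0.15, 0.40)):
    """LF:HF power ratio of heart rate variability from a power spectral
    density, summing power over the conventional LF and HF bands (Hz)."""
    freqs, psd = np.asarray(freqs), np.asarray(psd)
    lf_power = psd[(freqs >= lf[0]) & (freqs < lf[1])].sum()
    hf_power = psd[(freqs >= hf[0]) & (freqs < hf[1])].sum()
    return lf_power / hf_power
```

A ratio above 1 is commonly read as relative sympathetic dominance; a decreasing ratio, as reported for tasks 2-4, as a shift toward parasympathetic (vagal) activity.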
[0314] High resolution audio stimuli have been shown to enhance relaxation compared to low resolution audio stimuli, with a decrease in Alpha-wave power (T. Harada et al., International Medical Journal 23(1):1-3, April 2016). In view of the same results observed in the conducted experiments, the method described in this disclosure may be considered a novel method to obtain high resolution audio signals, more specifically with shape information encoded in the signal, which shows a marked difference in results compared to stereo sound projection using the same high-fidelity audio equipment.
[0315]
[0316] Furthermore, the obtained data suggest that in the Alpha-wave range there is a significant difference between base condition and tasks 2-4. Alpha slightly decreases between base state and stereo sound projection (task 1: N=12 p≤0.061), and then significantly decreases, with variation, between task 1 and sound shape projections (task 2: N=12 p≤0.01, task 3: N=12 p≤0.023, task 4: N=12 p≤0.059). As shown, significant results are obtained between base condition and task 2, and between base condition and task 3; and, to a lesser extent, between base condition and task 4.
[0317] A p-value less than 0.05 is typically considered statistically significant. It indicates strong evidence against the null hypothesis, as there is less than a 5% probability of observing such results if the null hypothesis were correct, i.e. if the results were random (Saul McLeod, https://www.simplypsychology.org/p-value.html, retrieved 22 Jul. 2020).
[0318]
[0319]
[0320] The experimental data referred to herein are obtained from a study with the goal to explore whether the presentation of sound shape projection, i.e. the provision of an audio signal configured such that it is perceived by the subject as originating from a virtual sound source having a shape, has an effect on physiological measures (EEG, HRV); and to explore whether different sound shape projections have different effects on those physiological measures. The study was conducted with a total of N=50 subjects, of which 22 female and 28 male. All subjects were healthy young adults between 20 and 40 years old. Subjects declared that they did not suffer from any mental or health issues and were not taking any medication regularly. The study was conducted according to the Helsinki Ethics Declaration.
[0321] EEG (Electroencephalogram) data was collected by Prof. Dr. Thomas Feiner and Frank Hegger and processed by Dr. Anat Barnea. HRV (Heart Rate Variability) data was collected by Bertram Reinberg and recorded in parallel to the EEG recordings. The observations considered for statistics were the mean amplitude of the frequency bands averaged over all e=19 electrodes (per subject, per band). No spatial localization of the signals was considered besides Left:Right.
[0322] The experiment was conducted in a sound-proofed, acoustically treated studio environment with omnidirectional loudspeakers placed above, below and around the subjects as depicted in
[0323] Subjects were exposed to a stereo sound projection (task 1) or a sound shape projection (task 2, 3 or 4). Each sound stimulus was played for an epoch of 5 minutes. All subjects were also monitored for 5 minutes in the base condition (no sound, with eyes closed) and for 5 minutes of sound stimuli. The presentation of shapes (tasks 1-4) was randomly intermingled between subjects. The experiment was conducted 'blind-to-blind': both the participants and the doctors taking the physiological measurements were unaware of which task was playing and of the characteristics of the sound samples. Participants were asked to answer assessment questionnaires before and after their exposure to the sound stimuli.
[0324] Each condition, including the base condition, was measured for 5 minutes, of which the last minute was analyzed. Subjects with noisy artifacts were removed before running the statistical analysis.
[0325]
[0326]
[0327] All participants were required to answer the BECK Depression Inventory (BDI) before enrolling in the experiment; the average score of all participants was 7.45, which is considered normal. The BDI is used for the concurrent validity of ratings in clinical and nonclinical subjects with regard to the Hamilton Psychiatric Rating Scale for Depression (HRSD) (Aaron T. Beck, Clinical Psychology Review, Volume 8, Issue 1, 1988, Pages 77-100).
[0328] All participants were requested to answer the Multidimensional Mood Questionnaire (MDMQ) directly before and after each sound stimulus to monitor their well-being and emotional response. Statistical analysis of the results from all questionnaires was conducted in SPSS 25.0 using the paired t-test method.
[0329]
[0330]
[0331]
[0332]
[0333]
[0334]
[0335]
[0336]
[0337] As shown in
[0338] The memory elements 104 may include one or more physical memory devices such as, for example, local memory 108 and one or more bulk storage devices 110. The local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code. A bulk storage device may be implemented as a hard drive or other persistent data storage device. The processing system 100 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from the bulk storage device 110 during execution.
[0339] Input/output (I/O) devices depicted as an input device 112 and an output device 114 optionally can be coupled to the data processing system. Examples of input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, a touch-sensitive display, or the like. Examples of output devices may include, but are not limited to, a monitor or a display, speakers, or the like. Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers.
[0340] In an embodiment, the input and the output devices may be implemented as a combined input/output device (illustrated in
[0341] A network adapter 116 may also be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks. The network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system 100, and a data transmitter for transmitting data from the data processing system 100 to said systems, devices and/or networks. Modems, cable modems, and Ethernet cards are examples of different types of network adapter that may be used with the data processing system 100.
[0342] As pictured in
[0343] Various embodiments of the invention may be implemented as a program product for use with a computer system, where the program(s) of the program product define functions of the embodiments (including the methods described herein). In one embodiment, the program(s) can be contained on a variety of non-transitory computer-readable storage media, where, as used herein, the expression “non-transitory computer readable storage media” comprises all computer-readable media, with the sole exception being a transitory, propagating signal. In another embodiment, the program(s) can be contained on a variety of transitory computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., flash memory, floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored. The computer program may be run on the processor 102 described herein.
[0344] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0345] The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of embodiments of the present invention has been presented for purposes of illustration, but is not intended to be exhaustive or limited to the implementations in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the present invention. The embodiments were chosen and described in order to best explain the principles and some practical applications of the present invention, and to enable others of ordinary skill in the art to understand the present invention for various embodiments with various modifications as are suited to the particular use contemplated.
[0346] The inventors acknowledge dr. Claire Glanois and dr. Galit Fuhrmann Alpert for their contributions to this disclosure.