A METHOD, COMPUTER PROGRAM PRODUCT AND DEVICE FOR CLASSIFYING SOUND AND FOR TRAINING A PATIENT
20210215776 · 2021-07-15
Inventors
- Thomas Erik Amthor (Hamburg, DE)
- Annerieke Heuvelink-Marck (Eindhoven, NL)
- Ron Dotsch (Haarlem, NL)
- Sanne Nauts (Eindhoven, NL)
- Privender Kaur Saini (Veldhoven, NL)
- Ozgur Tasar (Eindhoven, NL)
CPC classification
G01R33/543
PHYSICS
A61B5/055
HUMAN NECESSITIES
A61M2205/59
HUMAN NECESSITIES
A61M21/02
HUMAN NECESSITIES
A61B2503/06
HUMAN NECESSITIES
G01R33/283
PHYSICS
International classification
G01R33/28
PHYSICS
A61M21/02
HUMAN NECESSITIES
Abstract
It is an object of the invention to increase the predictability of the MRI exam for the patient. This object is achieved by a method for classifying sound of a magnetic resonance imaging sequence into a sound category, wherein the magnetic resonance imaging sequence comprises one or more sound blocks, wherein individual sound blocks have signal characteristics and wherein sound blocks having similar signal characteristics are to be classified into the same sound category, the method comprising the steps of: receiving information about one or more gradient waveforms to be used in the magnetic resonance imaging sequence; using a classification algorithm to map the waveform information to a sound category; and allocating a visual to the sound category.
Claims
1. A method for increasing the predictability for a patient of a future magnetic resonance imaging exam, wherein the magnetic resonance imaging exam comprises one or more magnetic resonance sequences, wherein the one or more magnetic resonance sequences, in operation, produce sounds which comprise one or more sound blocks, wherein individual sound blocks have signal characteristics, wherein sound blocks having similar signal characteristics are classified into the same sound category, wherein a different visual is allocated to each individual sound category, and wherein a similar combination of sound categories and visuals is planned to be used in the future magnetic resonance exam of the patient, wherein the method comprises a training phase for training a patient in associating different visuals with different sound categories, the training phase comprising the steps of: receiving data comprising a plurality of sound categories and visuals allocated to each of the sound categories; and providing to the patient, simultaneously or within a time interval of less than 60 seconds, a sound from a sound category and a visual allocated to the sound category; and wherein the method further comprises a subsequent MRI data acquisition phase comprising acquiring MRI data by means of an MRI sequence using an MRI system, wherein the visual corresponding to a particular sound block of a particular category is displayed prior to the generation of that sound block by the MRI system due to the implementation of an MRI sequence.
2. The method according to claim 1, the method further comprising: providing a plurality of the visuals to the patient; receiving a user input from the patient, wherein the user input comprises a selection of a visual from the plurality of visuals; and, in response to the user input, providing a sound to the patient, wherein the sound is a sound from the sound category to which the selected visual is allocated.
3. A computer program product, wherein the computer program product comprises executable program code instructions stored on a non-transitory computer readable medium for causing a computer to carry out the steps of the method according to claim 1.
4. A system for increasing the predictability of a future magnetic resonance imaging exam for a patient, wherein the magnetic resonance imaging exam comprises one or more magnetic resonance sequences, wherein the one or more magnetic resonance sequences, in operation, produce sounds which comprise one or more sound blocks, wherein individual sound blocks have specific signal characteristics, wherein sound blocks having similar signal characteristics are classified into the same sound category, wherein a different visual is allocated to each individual sound category, and wherein a similar combination of sound categories and visuals is planned to be used in the future magnetic resonance exam of the patient, wherein the system comprises a training device part for training a patient in associating different visuals with different sound categories, the training device part comprising: a plurality of input receiving means configured for receiving an input from a patient, wherein each of the input receiving means displays one of the different visuals; a sound producing module configured for producing a sound in response to user input received from an input receiving means selected by a user, wherein the produced sound is a sound in the sound category corresponding to the visual displayed on the user selected input receiving means; and a data storage comprising the sound categories or sounds representative of the sound categories, wherein the data storage further provides for a link between the sound categories and the visuals allocated to them, and wherein the system further comprises an MRI data acquisition part comprising: a gradient system configured for producing magnetic field gradients in accordance with the MRI sequence, wherein the use of the gradient system to implement the MRI sequence results in production of the sound blocks; a data storage comprising a plurality of sound categories and visuals allocated to each of the sound categories; and a display or display means configured to display a visual corresponding to the sound category of a particular sound block prior to the generation of that sound block due to the implementation of the MRI sequence.
5. (canceled)
Description
BRIEF DESCRIPTION OF THE FIGURES
DETAILED DESCRIPTION OF THE INVENTION
[0063] The information about the gradient waveforms is first processed by a prediction module 130. The prediction module is configured for mapping waveform information to a sound. This could, for example, be achieved by means of an algorithm containing a magneto-mechanical model of the gradient coil (and possibly its surrounding elements) to calculate the mechanical response to the gradient waveforms. This mechanical response can be used as a prediction of the audible sound. Calculation of the mechanical response could be performed as a function of frequency and amplitude for each of the three gradient directions separately. The complete response of the MRI sequence can then be simulated by a linear superposition of the responses for all involved frequencies. The system can be modeled using coupled magneto-mechanical differential equations, which may be solved with available Multiphysics toolboxes. Examples of commercial Multiphysics products are ANSYS and COMSOL.
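The frequency-domain superposition described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the per-axis transfer functions are assumed placeholders, whereas a real system would derive them from a magneto-mechanical model of the gradient coil.

```python
import numpy as np

def predict_sound(gx, gy, gz, fs, transfer_fns):
    """Predict an audible sound waveform from three gradient waveforms.

    gx, gy, gz   -- gradient waveforms for the three directions (1-D arrays)
    fs           -- sampling rate in Hz
    transfer_fns -- dict axis -> callable(freqs) giving a (complex) mechanical
                    frequency response; these are illustrative assumptions
    """
    n = len(gx)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    total = np.zeros(n)
    for axis, wave in (("x", gx), ("y", gy), ("z", gz)):
        spectrum = np.fft.rfft(wave)               # per-axis frequency content
        response = transfer_fns[axis](freqs)       # mechanical response per frequency
        # linear superposition of the responses of all three gradient directions
        total += np.fft.irfft(spectrum * response, n)
    return total
```

With a flat (unit) transfer function the prediction simply reproduces the input waveform, which gives a quick sanity check of the superposition.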
[0064] The information about the audible sound is then processed by a classification module 140. The purpose of this classification is to group parts of the waveform together to form blocks that correspond to the sound categories identified by the human ear and to sort these blocks into categories of different types of sound.
Examples of sound block categories are:
[0065] click sound
[0066] hammering sound
[0067] long chirp
[0068] monotonous humming
[0069] etc.
[0070] Classification can be done either by mapping of known features, such as power and center frequency, to a previously defined set of categories, or by using machine learning to perform this mapping based on a test data set of sound blocks labelled by humans. After defining or creating the sound categories a different visual will be allocated to each of the different sound categories.
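The feature-based variant mentioned above (mapping known features such as power and center frequency to a previously defined set of categories) might look like the following sketch. The thresholds and the rule order are purely illustrative assumptions, not values taken from the text.

```python
import numpy as np

def block_features(samples, fs):
    """Compute the power and spectral center frequency of a sound block."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    power = spectrum.sum() / len(samples)
    center = (freqs * spectrum).sum() / spectrum.sum()  # spectral centroid
    return power, center

def classify_block(samples, fs, duration):
    """Map a sound block to one of the predefined sound categories.

    The decision rules below are hypothetical examples of a feature-to-category
    mapping; a trained classifier could replace them.
    """
    power, center = block_features(samples, fs)
    if duration < 0.05:                 # very short burst
        return "click sound"
    if center > 1000 and duration > 1.0:  # long, high-pitched block
        return "long chirp"
    if center < 300:                    # sustained low-frequency content
        return "monotonous humming"
    return "hammering sound"
```

A machine-learning alternative would learn the same mapping from a test data set of sound blocks labelled by humans, as the paragraph notes.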
[0071] It should be noted that some types of algorithms, e.g. algorithms based on artificial intelligence, may be capable of directly classifying waveform information into a sound category without the need for a separate prediction module 130.
[0072] The sound classification algorithm could comprise two steps:
[0073] 1. Identification of distinct blocks, possibly separated by silence (sound blocks) and
[0074] 2. Classification of the individual sound blocks.
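Step 1 above (identifying distinct blocks, possibly separated by silence) can be illustrated with a simple amplitude-threshold segmentation. The frame length and threshold are assumed values chosen for the sketch; they are not specified in the text.

```python
import numpy as np

def find_sound_blocks(signal, fs, frame_len=0.01, threshold=0.01):
    """Split a (predicted) sound signal into blocks separated by silence.

    Returns a list of (start_time, end_time) tuples in seconds.
    """
    frame = max(1, int(frame_len * fs))
    n_frames = len(signal) // frame
    blocks, start = [], None
    for i in range(n_frames):
        loud = np.abs(signal[i * frame:(i + 1) * frame]).max() > threshold
        if loud and start is None:
            start = i * frame / fs                      # block begins
        elif not loud and start is not None:
            blocks.append((start, i * frame / fs))      # block ends at silence
            start = None
    if start is not None:                               # block runs to the end
        blocks.append((start, n_frames * frame / fs))
    return blocks
```

Each returned block would then be passed to the classifier of step 2.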
[0075] Also, the algorithm could be simpler. The algorithm could for example directly assign a visual to a type of sequence, e.g. a square to a T1w sequence and a triangle to a FLAIR sequence.
[0076] In addition to the sound category, each sound block could be assigned a sound intensity value and the absolute timestamps of the start and end of the block. A sound block could for example be described by the following data set: start time, end time (alternatively: duration), sound category and, optionally, sound parameters (e.g., intensity, center frequency).
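The data set described above could be represented as a small record type. The field names follow the text; the types and the derived duration property are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SoundBlock:
    """One sound block as described in the text: timestamps, category,
    and optional sound parameters."""
    start_time: float                         # seconds, absolute timestamp
    end_time: float                           # seconds (alternatively: duration)
    category: str                             # e.g. "click sound", "long chirp"
    intensity: Optional[float] = None         # optional sound parameter
    center_frequency: Optional[float] = None  # optional sound parameter

    @property
    def duration(self) -> float:
        return self.end_time - self.start_time
```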
[0077] Using the expected and previous sound blocks (e.g., covering a time span of ±10, 20, 30, 40, 50 or 60 seconds relative to the current time), a visualization engine 150 translates the blocks and their properties into visual objects (visuals) to be presented to the patient on a display device 155. The visualization may also include a marker representing the current time. The visualization may be updated in real time, i.e. the visual objects move across the screen, showing the patient which sound blocks are to be expected and when to expect them.
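One way the visualization engine could place sound-block visuals on a moving timeline is sketched below. The ±30 second window is one assumed value from the ranges given above; the current-time marker sits at the horizontal center of the screen.

```python
def block_to_screen_x(block_start, now, window=30.0, screen_width=800):
    """Map a sound block's start time to a horizontal pixel position.

    The current time is drawn at the center of the screen; blocks more than
    `window` seconds away from `now` are off-screen (returns None).
    Updating `now` in real time makes the visuals move across the screen.
    """
    offset = block_start - now
    if abs(offset) > window:
        return None
    # linear mapping: -window -> 0 px, 0 -> center, +window -> screen_width px
    return (offset + window) / (2 * window) * screen_width
```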
[0079] The visualization 240 can have many different shapes.
[0082] After the learning phase comes the scanning phase, during which patients are in the bore of the MRI-scanner. During this phase, patients receive real-time feedback about the sequence that is currently being performed. Additionally, they may be presented with information about the sequences that will be performed after this.
[0083] Before the scanning phase commences, the MR technologist sets up the session and schedules the relevant pulse sequences 512. The system imports the scheduled pulse sequences into a processing unit 520 that consists of three elements:
[0084] 1. A database 522 that stores assets (visuals e.g., images like symbols and/or animations, depending on embodiment) and their associations with specific pulses (the same associations as in the database of the learning phase);
[0085] 2. A buffer 524 which is filled with visual renderings of the planned pulse sequences, ready to be transferred to the in-bore display when needed; and
[0086] 3. A selector mechanism 526 that selects which rendering in the buffer to transfer to the (in-bore) display, depending on which pulse sequence the MR technologist decides to execute.
[0087] Once the scheduled pulse sequences are loaded into the processing unit, each pulse sequence is automatically converted into visual renderings of the sequence based on the specific pulses present in the sequence, their temporal order, their timing and possibly other properties such as loudness, pitch or subjective discomfort. These visual renderings are synchronized with the MRI sequence. The visual renderings of the pulse sequences are stored in a session buffer, along with an ID that matches them to a specific sequence.
[0088] When the scanning phase starts, the MR technologist initiates the first pulse sequence on the MRI tech console 512. This is communicated in advance to the processing unit 520. The processing unit in turn selects the visual rendering associated with the pulse sequence from the buffer 524 and transfers it to the unit controlling the (in-bore) display, which in turn displays the visual renderings at the moment the pulse sequence starts.
[0089] After the sequence has been completed, the MR technologist may decide either to continue with the next planned pulse sequence or to repeat a previous sequence 514 (for instance because there was excessive movement during the scan, resulting in reduced image quality). The next sequence (either the planned one or any previous sequence) is then communicated to the processing unit just before it is initiated. The same logic as described above is repeated for every pulse sequence, until all planned sequences and unplanned repetitions have been completed and the session is terminated by the MR technologist.
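The buffer-and-selector logic described in the preceding paragraphs can be sketched as a simple keyed store: renderings are filed per sequence ID, and the technologist's choice of the next (or repeated) sequence selects which rendering is sent to the in-bore display. The class and method names are illustrative assumptions.

```python
class RenderingBuffer:
    """Sketch of the session buffer (524) plus selector mechanism (526)."""

    def __init__(self):
        self._renderings = {}  # sequence ID -> pre-rendered visual

    def store(self, sequence_id, rendering):
        """Fill the buffer with a rendering of a planned pulse sequence."""
        self._renderings[sequence_id] = rendering

    def select(self, sequence_id):
        """Return the rendering for the sequence the technologist initiates.

        Repeating a previous sequence selects the same rendering again;
        an unknown ID raises KeyError, mirroring an unscheduled sequence.
        """
        return self._renderings[sequence_id]
```

Because repeats simply re-select an already-buffered rendering, an unplanned repetition (e.g. after patient movement) needs no re-rendering step.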
[0091] The visuals used 602a, 602b can serve as buttons, and a user may create music by pressing these buttons and combining sounds, akin to a children's musical toy. The physical device (toy) also comprises a sound producing module 604, e.g. a speaker, in order to produce the sound. Further, the physical device comprises a data storage comprising the sound categories or sounds representative of the sound categories and their link to the visuals allocated to them. The device may be made such that new data can easily be uploaded to the device in order to adapt the device to different MRI exams and their respective sounds. The software application or physical device 620 can be made available in the waiting room of a hospital or can be provided to the patient to be used at home. Whenever a (pediatric) patient presses a button on the musical toy, the accompanying MRI sound is generated. Instead of using arbitrary symbols, the symbols may also be linked to the frequency and/or pitch of the sound. For example, the sound categories can be associated with animations of a character jumping rope at the same frequency as the sounds in the sound category.
[0093] Also, the software application 610 or physical device 620 could be used to display a movie, wherein each visual in the movie is displayed while a sound from the sound category to which it is allocated is played simultaneously or within a certain time interval.
[0094] The relationship between visuals and sound blocks used for a specific patient may be stored. In this way, the same visual/sound block combination could be used when a patient comes for a rescan or follow-up.
[0095] Whilst the invention has been illustrated and described in detail in the drawings and foregoing description, such illustrations and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments.