MACHINE LEARNING BASED DETECTION OF MOTION CORRUPTED MAGNETIC RESONANCE IMAGING
20230186464 · 2023-06-15
Inventors
- Abraam Shawki Ibrahim Soliman (Eindhoven, NL)
- Jeroen Van Gemert (Breda, NL)
- Elwin De Weerdt (Tilburg, NL)
CPC classification
G01R33/5611
PHYSICS
G01R33/5608
PHYSICS
A61B5/055
HUMAN NECESSITIES
G01R33/56509
PHYSICS
G01R33/5615
PHYSICS
International classification
A61B5/055
HUMAN NECESSITIES
Abstract
The present disclosure relates to a method comprising: receiving (201) acquired k-space data of an object, reconstructing (203) an image from the acquired k-space data, generating (205) reconstructed k-space data from the reconstructed image, determining (207) delta k-space data as a difference between the acquired k-space data and the reconstructed k-space data, splitting (209) the acquired k-space data into one or more data chunks, wherein each data chunk of the data chunks comprises a set of one or more samples having a set of k-space coordinates, for each set of k-space coordinates of the one or more sets of k-space coordinates, selecting (211), from the delta k-space data, a residual data set having the set of k-space coordinates, and inputting (213) at least part of the data chunks and corresponding residual data sets to a trained machine learning model, thereby obtaining from the trained machine learning model probabilities of motion corruption for each of the data chunks of the acquired k-space data.
Claims
1. A medical analysis system for enabling a magnetic resonance image reconstruction, the medical analysis system comprising a processor and at least one memory storing machine executable instructions, the processor being configured for controlling the medical analysis system, wherein execution of the machine executable instructions causes the processor to: provide a trained machine learning model, the trained machine learning model being configured to detect motion corrupted data; receive acquired k-space data of an object; reconstruct an image from the acquired k-space data; generate reconstructed k-space data from the reconstructed image; determine delta k-space data as a difference between the acquired k-space data and the reconstructed k-space data; split the acquired k-space data into one or more data chunks, wherein each data chunk of the data chunks comprises a set of one or more samples having a set of k-space coordinates; for each set of k-space coordinates of the one or more sets of k-space coordinates, select, from the delta k-space data, a residual data set having the set of k-space coordinates; input at least part of the data chunks and corresponding residual data sets to the trained machine learning model, thereby obtaining from the trained machine learning model probabilities of motion corruption for each of the input data chunks.
2. The system of claim 1, wherein execution of the machine executable instructions further causes the processor to perform the inputting by inputting one residual data set and one data chunk having the same set of k-space coordinates to the trained machine learning model.
3. The system of claim 1, wherein execution of the machine executable instructions further causes the processor to perform the inputting by repeatedly inputting one residual data set and one data chunk having the same set of k-space coordinates to the trained machine learning model until all selected residual data sets are processed.
4. The system of claim 1, wherein execution of the machine executable instructions further causes the processor to perform the inputting by inputting multiple residual data sets and associated multiple data chunks to the trained machine learning model.
5. The system of claim 1, wherein the set of k-space coordinates of a data chunk are contiguous with respect to their acquisition time or related by certain physiological measurements.
6. The system of claim 1, wherein the acquired k-space data results from subjecting the object to a number K>=1 of shots of a predefined pulse sequence, wherein each data chunk of the data chunks comprises some or all samples of a single shot.
7. The system of claim 1, wherein the acquired k-space data results from subjecting the object to a number K of shots of a predefined pulse sequence, wherein each data chunk of the data chunks comprises samples of multiple shots selected according to a predefined sequence criterion.
8. The system of claim 1, wherein the trained machine learning model is a deep neural network.
9. The system of claim 1, wherein execution of the machine executable instructions further causes the processor to use the output of the trained machine learning model to generate a weighting map, the weighting map comprising a weight for each set of k-space coordinates of the one or more sets of k-space coordinates, the weight indicating whether k-space data having the set of k-space coordinates is corrupted with motion.
10. The system of claim 9, wherein execution of the machine executable instructions further causes the processor to use the weighting map in an iterative re-weighted least squares coil combination scheme for reconstructing a motion corrected image.
11. The system of claim 9, wherein execution of the machine executable instructions further causes the processor to select sets of k-space coordinates of the sets of k-space coordinates whose weights are higher than a predefined threshold, using the k-space data representing the selected sets for reconstructing a motion corrected image.
12. The system of claim 1, wherein execution of the machine executable instructions further causes the processor to: receive a training data set, the training data set comprising data chunks, each data chunk being labelled as motion corrupted or not; train the machine learning model using the training data set; and provide the trained machine learning model.
13. A magnetic resonance imaging, MRI, system comprising the system of claim 1, the MRI system being configured to acquire the k-space data.
14. A method comprising: receiving acquired k-space data of an object; reconstructing an image from the acquired k-space data; generating reconstructed k-space data from the reconstructed image; determining delta k-space data as a difference between the acquired k-space data and the reconstructed k-space data; splitting the k-space data into one or more data chunks, wherein each data chunk of the data chunks comprises a set of one or more samples having a set of k-space coordinates; for each set of k-space coordinates of the one or more sets of coordinates, selecting, from the delta k-space data, a residual data set having the set of k-space coordinates; inputting at least part of the data chunks and corresponding residual data sets to a trained machine learning model, thereby obtaining from the trained machine learning model probabilities of motion corruption for each of the input data chunks.
15. A computer program product comprising machine executable instructions for execution by a processor, which instructions enable the processor to perform the method of claim 14.
16. The system of claim 8, wherein the deep neural network is a convolutional neural network.
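By way of a non-limiting illustration only, the weighting-map use of claims 9 and 11 may be sketched as below. The weight definition (1 minus the corruption probability), the mask layout, and all names are illustrative assumptions and not part of the claims:

```python
import numpy as np

# Illustrative sketch of claims 9 and 11: build a per-coordinate weighting
# map from per-chunk corruption probabilities, then keep only coordinates
# whose weight exceeds a threshold. Weight = 1 - probability is an assumption.
def weighting_map(shape, chunk_masks, probabilities):
    """High weight where motion corruption is unlikely."""
    weights = np.zeros(shape)
    for mask, prob in zip(chunk_masks, probabilities):
        weights[mask] = 1.0 - prob
    return weights

# Two chunks covering four k-space coordinates.
masks = [np.array([True, True, False, False]),
         np.array([False, False, True, True])]
w = weighting_map((4,), masks, probabilities=[0.1, 0.9])
keep = w > 0.5          # claim 11: select coordinates above a threshold
```

The selected coordinates (`keep`) would then feed a motion-corrected reconstruction, for example the iterative re-weighted least squares coil combination scheme of claim 10.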
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0030] In the following, preferred embodiments of the invention will be described, by way of example only, and with reference to the drawings.
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0037] In the following, like numbered elements in the figures are either similar elements or perform an equivalent function. Elements which have been discussed previously will not necessarily be discussed in later figures if the function is equivalent.
[0038] Various structures, systems and devices are schematically depicted in the figures for purposes of explanation only and so as to not obscure the present invention with details that are well known to those skilled in the art. Nevertheless, the attached figures are included to describe and explain illustrative examples of the disclosed subject matter.
[0040] It will be appreciated that the methods described herein are at least partly non-interactive and automated by way of computerized systems. For example, these methods can further be implemented in software 121 (including firmware), hardware, or a combination thereof. In examples, the methods described herein are implemented in software, as an executable program, and are executed by a special or general-purpose digital computer, such as a personal computer, workstation, minicomputer, or mainframe computer.
[0041] The processor 103 is a hardware device for executing software, particularly that stored in memory 107. The processor 103 can be any custom made or commercially available processor, a central processing unit (CPU), an auxiliary processor among several processors associated with the control system 111, a semiconductor based microprocessor (in the form of a microchip or chip set), or generally any device for executing software instructions. The processor 103 may control the operation of the scanning imaging system 101.
[0042] The memory 107 can include any one or combination of volatile memory elements (e.g., random access memory (RAM, such as DRAM, SRAM, SDRAM, etc.)) and non-volatile memory elements (e.g., ROM, erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), programmable read only memory (PROM)). Note that the memory 107 can have a distributed architecture, where various components are situated remote from one another but can be accessed by the processor 103. Memory 107 may store an instruction or data related to at least one other constituent element of the medical analysis system 100.
[0043] The control system 111 may further comprise a display device 125 which displays characters and images and the like e.g. on a user interface 129. The display device 125 may be a touch screen display device.
[0044] The medical analysis system 100 may further comprise a power supply 108 for powering the medical analysis system 100. The power supply 108 may for example be a battery or an external source of power, such as electricity supplied by a standard AC outlet.
[0045] The scanning imaging system 101 may comprise at least one of MRI, CT and PET-CT imagers. The control system 111 may or may not be an integral part of the scanning imaging system 101. In other terms, the control system 111 may or may not be external to the scanning imaging system 101.
[0046] The scanning imaging system 101 comprises components that may be controlled by the processor 103 in order to configure the scanning imaging system 101 to provide image data to the control system 111. The configuration of the scanning imaging system 101 may enable the operation of the scanning imaging system 101. The operation of the scanning imaging system 101 may, for example, be automatic.
[0047] The connection between the control system 111 and the scanning imaging system 101 may for example comprise a BUS Ethernet connection, WAN connection, or Internet connection etc.
[0048] In one example, the scanning imaging system 101 may be configured to provide output data such as images in response to a specified measurement. The control system 111 may be configured to receive data such as MR image/k-space data from the scanning imaging system 101. For example, the processor 103 may be adapted to receive information (automatically or upon request) from the scanning imaging system 101 in a compatible digital form so that such information may be displayed on the display device 125. Such information may include operating parameters, alert notifications, and other information related to the use, operation and function of the scanning imaging system 101.
[0049] The medical analysis system 100 may be configured to communicate via a network 130 with other scanning imaging systems 131 and/or databases 133. The network 130 comprises for example a wireless local area network (WLAN) connection, WAN (Wide Area Network) connection, LAN (Local Area Network) connection or a combination thereof. The databases 133 may comprise information relating to patients, scanning imaging systems, anatomies, scan geometries, scan parameters, scans etc. The databases 133 may for example comprise an electronic medical record (EMR) database comprising patients' EMRs, a Radiology Information System database, a medical image database, a PACS, a Hospital Information System database and/or other databases comprising data that can be used for planning a scan geometry. The databases 133 may, for example, comprise training datasets for the training performed by the present subject matter.
[0050] The memory 107 may further comprise an artificial intelligence (AI) component 150. The AI component 150 may or may not be part of software component 121. The AI component 150 may, for example, comprise a trained machine learning model 160. The trained machine learning model 160 may be configured to receive at least a data chunk and to output a probability for each input data chunk to be motion corrupted.
[0052] Acquired k-space data m of an object may be received in step 201. In one example, the k-space data may be received from a remote MRI system. The MRI system may, for example, be remotely connected to the computer system via a network such as the Internet. This may be advantageous as it may enable centralized processing of acquired k-space data. For example, the method of
[0053] In one example, the k-space data may be acquired in accordance with a single-shot acquisition, so that the entire k-space matrix is filled in one execution of the pulse sequence. A shot is a unit representing the data collected in response to a single excitation pulse. In another example, the k-space data may be acquired in accordance with a multi-shot acquisition, so that more than one execution of the pulse sequence is performed to collect all the desired k-space data of the k-space matrix. A data chunk may be part of the data of a single shot or may comprise selected data points that belong to different shots based on a certain sequence criterion. For example, in case of a periodic motion of the object (e.g. respiratory or cardiac motion), using different shots may enable capturing specific behavior in the cyclic motion. In one example, each data chunk of the one or more data chunks may be assigned an index indicative of the set of k-space coordinates that is represented by the data chunk. Upon receiving the acquired k-space data, a pre-processing method may be applied to the acquired k-space data. The pre-processing method comprises steps 203 to 211.
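As a non-limiting sketch of the per-shot chunking option above, the following groups k-space lines into one chunk per shot, assuming a simple interleaved multi-shot ordering; the function name and interleave pattern are illustrative:

```python
import numpy as np

# Illustrative sketch: one data chunk per shot, assuming the shots acquire
# interleaved k-space lines (an assumed, common multi-shot ordering).
def chunks_by_shot(n_lines, n_shots):
    """Return, per shot, the k-space line indices acquired by that shot."""
    return [np.arange(shot, n_lines, n_shots) for shot in range(n_shots)]

# Eight phase-encode lines acquired in two interleaved shots.
chunks = chunks_by_shot(n_lines=8, n_shots=2)
```

A sequence-criterion-based chunking would instead pick lines across shots, e.g. lines acquired at the same phase of a cyclic motion.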
[0054] An MR image p may be reconstructed in step 203 from the acquired k-space data. For that, an inverse Fourier transformation may, for example, be performed on the k-space data. The k-space data may be transformed into image-domain data which may, for example, be a two-dimensional (2-D) or three-dimensional (3-D) data set. Exemplary image reconstruction methods may include parallel imaging, Fourier reconstruction, constrained image reconstruction, compressed sensing, or the like, or a variation thereof, or any combination thereof.
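Step 203 may, for a fully sampled Cartesian acquisition, be sketched with a centred 2-D inverse FFT; this is only one of the reconstruction options listed above, and all names are illustrative:

```python
import numpy as np

# Minimal sketch of step 203: inverse-Fourier reconstruction of a fully
# sampled Cartesian k-space matrix (centred FFT convention assumed).
def reconstruct_image(kspace):
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))

# Round trip on a toy image: forward-transform, then reconstruct.
rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(image)))
recon = reconstruct_image(kspace)
```

For a real image the reconstruction returns it up to numerical precision, with a negligible imaginary part.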
[0055] Reconstructed k-space data kr may be generated in step 205 from the reconstructed image p. The reconstructed k-space data may be generated by multiplying an encoding matrix E with the reconstructed image p: kr=E.p. The reconstructed k-space data may be provided as a reconstructed k-space matrix having the same size as the acquired k-space matrix.
[0056] Delta k-space data R may be determined in step 207. The delta k-space data R may be obtained as the difference between the acquired k-space data m and the reconstructed k-space data kr: R=m−E.p. The delta k-space data may be provided as a self-consistency matrix. The self-consistency may be determined using motion residuals, where the motion residuals denote a difference between the reconstructed k-space kr using all the acquired data and the measured or acquired k-space m.
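Steps 205 and 207 may be sketched as follows, assuming the encoding matrix E is a plain centred 2-D FFT; a real encoding operator would also include coil sensitivities and the sampling pattern:

```python
import numpy as np

# Illustrative sketch of steps 205 and 207: re-encode the reconstructed
# image p and form the delta k-space R = m - E.p. The plain-FFT encoding
# operator is an assumption made for brevity.
def encode(p):
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(p)))

def delta_kspace(m, p):
    """Self-consistency residual R = m - E.p."""
    return m - encode(p)

rng = np.random.default_rng(1)
p = rng.standard_normal((8, 8))
m = encode(p)                       # perfectly consistent acquired data
m_bad = m.copy()
m_bad[3, :] += 5.0                  # corrupt one k-space line (motion-like)

R_clean = delta_kspace(m, p)
R_bad = delta_kspace(m_bad, p)
```

A corrupted k-space line shows up directly in the delta k-space R, while consistent data yields a (near-)zero residual.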
[0057] The k-space data m may, for example, be provided or stored as a k-space matrix or map, wherein each point or sample of the k-space data has its own k-space coordinates in the k-space matrix. The k-space data m may be split (or divided) in step 209 into a number N of distinct data chunks m.sup.c, where c=1, . . . , N and N&gt;=1. Each data chunk of the data chunks may comprise samples that have a respective set of k-space coordinates S.sup.c in the k-space matrix. The set of k-space coordinates S.sup.c may be contiguous or non-contiguous coordinates in the k-space matrix. This may be advantageous as it may enable detecting movements with a higher degree of freedom. Each data chunk of the data chunks may comprise one or more data points.
[0058] Assuming for simplification of the description that N=2, the data chunk m.sup.1 may comprise one data point having the k-space coordinates S.sup.1, e.g. (kx1, ky1), and the data chunk m.sup.2 may comprise multiple data points having a set of k-space coordinates S.sup.2, e.g. (kx2, ky2) and (kx4, ky4).
[0059] For each set of k-space coordinates S.sup.c of the N sets of k-space coordinates, the corresponding residual data set R.sup.c is selected in step 211 from the delta k-space data R. The selected residual data set R.sup.c comprises samples having the set of k-space coordinates S.sup.c. The residual data set may be referred to as a data chunk residual R.sup.c. Following the above example, a data point of the delta k-space data R having the coordinates (kx1, ky1) of the set S.sup.1 may be selected, and data points of the delta k-space data R having the coordinates (kx2, ky2) and (kx4, ky4) of the set S.sup.2 may be selected. In one example, the data chunk residual R.sup.c of data chunk c may be defined using a selection mask M.sup.c to select all the data points from chunk c as follows: R.sup.c=M.sup.c(m−E.p). The selection mask M.sup.c of data chunk residual R.sup.c may be determined using the set of k-space coordinates of samples of data chunk c. The selection mask may, for example, be user defined. The data chunk residual R.sup.c may vary in size; for instance, it is a 3D matrix for a 2D multicoil acquisition and a 4D matrix for a 3D multicoil acquisition.
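The mask-based selection R.sup.c=M.sup.c(m−E.p) may be sketched as follows; representing M.sup.c as a boolean array of the same shape as the k-space matrix is an illustrative assumption:

```python
import numpy as np

# Illustrative sketch of step 211: apply a binary selection mask M^c to the
# delta k-space to obtain the data chunk residual R^c = M^c (m - E.p).
def chunk_residual(delta, mask):
    """Select the residual samples belonging to one data chunk."""
    return delta[mask]

delta = np.arange(16, dtype=float).reshape(4, 4)   # stand-in for m - E.p
mask1 = np.zeros((4, 4), dtype=bool)
mask1[0, 0] = True                                  # chunk 1: one sample
mask2 = np.zeros((4, 4), dtype=bool)
mask2[1, 1] = mask2[3, 3] = True                    # chunk 2: two samples

R1 = chunk_residual(delta, mask1)
R2 = chunk_residual(delta, mask2)
```

This mirrors the N=2 example above: one chunk with a single coordinate, one chunk with two non-contiguous coordinates.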
[0060] The execution of the pre-processing method may thus result in N residual data sets R.sup.c and N corresponding measured data sets m.sup.c, that is, N pairs (R.sup.1, m.sup.1), . . . , (R.sup.N, m.sup.N) of residual data sets and measured data sets. Following the above example, the execution of the pre-processing method may result in two pairs (R.sup.1, m.sup.1) and (R.sup.2, m.sup.2) associated with the sets of k-space coordinates S.sup.1 and S.sup.2 respectively.
[0061] In step 213, at least part of the N pairs may be input to the trained machine learning model 160, in order to obtain data indicative of motion corrupted data points. Step 213 may enable an inference of the trained machine learning model. Step 213 may be performed in accordance with one or more inference examples.
[0062] In a first inference example, one pair of the N pairs e.g. (R.sup.2, m.sup.2) is input to the trained machine learning model. The trained machine learning model may provide an output indicating a probability that the data chunk m.sup.2 is motion corrupted.
[0063] In a second inference example, all N pairs may be provided as input to the trained machine learning model. The trained machine learning model may output N probabilities p.sup.c, each indicating the probability that a respective data chunk m.sup.c is motion corrupted. The inference example used in step 213 may be chosen based on the training method (e.g. as described with reference to
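Either inference example feeds (residual, chunk) pairs to the trained model. In the sketch below, the trained CNN is replaced by a hypothetical stand-in that scores a chunk by its relative residual energy, purely for illustration:

```python
import numpy as np

# Hypothetical stand-in for the trained model 160: scores a data chunk by
# its relative residual energy. The real model is a trained CNN; this toy
# scoring function only illustrates the (residual, chunk) -> probability
# interface of step 213.
def motion_probability(residual, chunk):
    """Toy score in [0, 1]; a large residual gives a high probability."""
    energy = np.sum(np.abs(residual) ** 2) / (np.sum(np.abs(chunk) ** 2) + 1e-12)
    return float(energy / (1.0 + energy))

pairs = [
    (np.zeros(4), np.ones(4)),          # self-consistent chunk
    (10.0 * np.ones(4), np.ones(4)),    # large residual: likely corrupted
]
probs = [motion_probability(R, m) for R, m in pairs]
```

The self-consistent pair scores 0, while the pair with a large residual scores close to 1.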
[0064] The method of
[0066] In step 301, a training set may be received e.g. by the control system 111. For example, the training set may be obtained from one or more sources. For example, the training set may be retrieved by the control system 111 from the databases 133.
[0067] In order to enable the first inference example, the training set may comprise entries, wherein each entry of the entries comprises a pair (R.sup.c, m.sup.c) of a data chunk and a residual data set having the same k-space coordinates and a label L.sup.c indicating whether the data chunk m.sup.c is motion corrupted. Thus, each entry of the training set may comprise a triplet (R.sup.c, m.sup.c, L.sup.c). In this case, the input layer of the CNN may comprise a number of nodes equal to the number of coordinates of the set of coordinates times 2.
[0068] In order to enable the second inference example, the training set may comprise entries, wherein each entry comprises a triplet (R.sup.1 . . . R.sup.N, m.sup.1 . . . m.sup.N, L.sup.1 . . . L.sup.N) of all N data chunks, the associated residual sets and the labels of each of the data chunks. In this case, the number of all coordinates of the samples of the N data chunks defines the number of nodes in the input layer of the CNN. For example, each data point of the pair of data chunk and residual data set may be associated with a respective node of the CNN.
[0069] In step 303, the machine learning model 160 may be trained using the received training set. The CNN may, for example, comprise groups of weights, e.g. weights from the input layer to a first hidden layer, from the first to the second hidden layer, etc. Before the training of the CNN, the weights may be initialized with random values. The training may be performed in order to search for optimal parameters (e.g. weights and biases) of the CNN and minimize the classification error or residuals. For example, the training set is used as input for a forward pass through the CNN. This enables computing the data loss in the output layer of the CNN, e.g. by a loss function (cost function). The data loss measures the compatibility between a prediction and the ground truth label. After computing the data loss, the data loss may be minimized by changing the weights and biases of the CNN. This may for example be performed by back-propagating the loss into every layer and neuron by gradient descent.
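The training loop described above (random initialisation, forward pass, loss, gradient-descent back-propagation) may be sketched as follows. For brevity, the CNN is replaced by a single logistic unit trained on a toy residual-energy feature; this is an assumed stand-in, not the claimed model:

```python
import numpy as np

# Illustrative stand-in for step 303: a single logistic unit trained by
# gradient descent on a toy feature (residual energy per chunk). The update
# pattern mirrors what the text describes for the CNN.
rng = np.random.default_rng(2)

# Toy training set: feature = residual energy, label = motion corrupted?
X = np.array([[0.1], [0.2], [0.15], [2.0], [2.5], [3.0]])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

w = rng.standard_normal(1)          # random weight initialisation
b = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):                # gradient-descent training loop
    p = sigmoid(X @ w + b)          # forward pass
    grad = p - y                    # d(cross-entropy loss)/d(logit)
    w -= lr * (X.T @ grad) / len(y) # back-propagated weight update
    b -= lr * grad.mean()

pred = sigmoid(X @ w + b)
```

After training, low-energy chunks score below 0.5 and high-energy chunks above 0.5, i.e. the classification error has been minimized on this toy set.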
[0070] In one example, the training set of step 301 may continuously be enhanced using additional data. For example, the training set may be updated by adding the processed data chunks and their associated probabilities. Additionally or alternatively, the training set may be updated by adding further triplets. Step 303 may regularly be repeated, e.g. as soon as the training set has been updated. The retrained CNN may then, for example, be used instead of the trained CNN in the method of
[0072] Reconstructed k-space data 403 is determined by reconstructing an image from the acquired k-space data, and converting the image to k-space. The reconstructed k-space data 403 is subtracted from the acquired k-space data 401 to get a self-consistency matrix 404. The residuals or data points of the self-consistency matrix 404 are grouped into four residual data sets 405A, 405B, 405C and 405D corresponding to the data chunk positions of data chunks 402A, 402B, 402C and 402D respectively. This may result in four pairs (402A, 405A), (402B, 405B), (402C, 405C), (402D, 405D) of data chunks and associated residual data sets. The data chunk and the residual data set of each pair of the pairs have the same set of k-space coordinates.
[0073] The acquired k-space and the self-consistency matrix are subsequently fed to a CNN 410 as shown in
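The pipeline of this example may be sketched end-to-end as follows. The residual-energy score stands in for the CNN 410, the plain-FFT encoding is an assumption, and, for illustration only, the reconstruction is assumed to have suppressed the corruption so that the residual concentrates in the corrupted chunk:

```python
import numpy as np

# End-to-end illustrative sketch: four data chunks, a self-consistency
# matrix, and a per-chunk corruption score standing in for the CNN 410.
def encode(p):
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(p)))

def decode(k):
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k)))

rng = np.random.default_rng(3)
truth = rng.standard_normal((8, 8))
clean = encode(truth)
acquired = clean.copy()
acquired[0:2, :] += 3.0              # corrupt chunk A (rows 0-1), motion-like

# Assumption for illustration: the reconstruction suppressed the corruption,
# so re-encoding it approximately returns the clean k-space.
recon_k = encode(decode(clean))
self_consistency = acquired - recon_k   # delta k-space R

# Four data chunks of two k-space rows each (402A-402D in the figure).
chunk_rows = [(0, 2), (2, 4), (4, 6), (6, 8)]
scores = [float(np.sum(np.abs(self_consistency[a:b]) ** 2))
          for a, b in chunk_rows]       # chunk A should dominate
```

The corrupted chunk yields a residual energy orders of magnitude above the consistent chunks, which is the separation the trained CNN is meant to learn.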
[0075] Within the bore 706 of the magnet there is also a set of magnetic field gradient coils 710 which is used during acquisition of magnetic resonance data to spatially encode magnetic spins of a target volume within the imaging volume or examination volume 708 of the magnet 704. The magnetic field gradient coils 710 are connected to a magnetic field gradient coil power supply 712. The magnetic field gradient coils 710 are intended to be representative. Typically, magnetic field gradient coils 710 contain three separate sets of coils for the encoding in three orthogonal spatial directions. A magnetic field gradient power supply supplies current to the magnetic field gradient coils. The current supplied to the magnetic field gradient coils 710 is controlled as a function of time and may be ramped or pulsed.
[0076] MRI system 700 further comprises an RF coil 714 at the subject 718 and adjacent to the examination volume 708 for generating RF excitation pulses. The RF coil 714 may include for example a set of surface coils or other specialized RF coils. The RF coil 714 may be used alternately for transmission of RF pulses as well as for reception of magnetic resonance signals e.g., the RF coil 714 may be implemented as a transmit array coil comprising a plurality of RF transmit coils. The RF coil 714 is connected to one or more RF amplifiers 715.
[0077] The magnetic field gradient coil power supply 712 and the RF amplifier 715 are connected to a hardware interface of control system 111. The memory 107 of control system 111 may for example comprise a control module. The control module contains computer-executable code which enables the processor 103 to control the operation and function of the magnetic resonance imaging system 700. It also enables the basic operations of the magnetic resonance imaging system 700 such as the acquisition of magnetic resonance data.
[0078] As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as an apparatus, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a ‘circuit’, ‘module’ or ‘system’. Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer executable code embodied thereon.
[0079] Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A ‘computer-readable storage medium’ as used herein encompasses any tangible storage medium which may store instructions which are executable by a processor of a computing device. The computer-readable storage medium may be referred to as a computer-readable non-transitory storage medium. The computer-readable storage medium may also be referred to as a tangible computer readable medium. In some embodiments, a computer-readable storage medium may also be able to store data which is able to be accessed by the processor of the computing device. Examples of computer-readable storage media include, but are not limited to: a floppy disk, a magnetic hard disk drive, a solid state hard disk, flash memory, a USB thumb drive, Random Access Memory (RAM), Read Only Memory (ROM), an optical disk, a magneto-optical disk, and the register file of the processor. Examples of optical disks include Compact Disks (CD) and Digital Versatile Disks (DVD), for example CD-ROM, CD-RW, CD-R, DVD-ROM, DVD-RW, or DVD-R disks. The term computer readable-storage medium also refers to various types of recording media capable of being accessed by the computer device via a network or communication link. For example, a data may be retrieved over a modem, over the internet, or over a local area network. Computer executable code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
[0080] A computer readable signal medium may include a propagated data signal with computer executable code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
[0081] A ‘computer memory’ or ‘memory’ is an example of a computer-readable storage medium. A computer memory is any memory which is directly accessible to a processor. A ‘computer storage’ or ‘storage’ is a further example of a computer-readable storage medium. A computer storage is any non-volatile computer-readable storage medium. In some embodiments computer storage may also be computer memory or vice versa.
[0082] A ‘processor’ as used herein encompasses an electronic component which is able to execute a program or machine executable instruction or computer executable code. References to the computing device comprising ‘a processor’ should be interpreted as possibly containing more than one processor or processing core. The processor may for instance be a multi-core processor. A processor may also refer to a collection of processors within a single computer system or distributed amongst multiple computer systems. The term computing device should also be interpreted to possibly refer to a collection or network of computing devices each comprising a processor or processors. The computer executable code may be executed by multiple processors that may be within the same computing device or which may even be distributed across multiple computing devices.
[0083] Computer executable code may comprise machine executable instructions or a program which causes a processor to perform an aspect of the present invention. Computer executable code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the ‘C’ programming language or similar programming languages and compiled into machine executable instructions. In some instances, the computer executable code may be in the form of a high-level language or in a pre-compiled form and be used in conjunction with an interpreter which generates the machine executable instructions on the fly.
[0084] The computer executable code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
[0085] Aspects of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block or a portion of the blocks of the flowchart, illustrations, and/or block diagrams, can be implemented by computer program instructions in form of computer executable code when applicable. It is further understood that, when not mutually exclusive, combinations of blocks in different flowcharts, illustrations, and/or block diagrams may be combined. These computer program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0086] These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
[0087] The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0088] A ‘user interface’ as used herein is an interface which allows a user or operator to interact with a computer or computer system. A ‘user interface’ may also be referred to as a ‘human interface device’. A user interface may provide information or data to the operator and/or receive information or data from the operator. A user interface may enable input from an operator to be received by the computer and may provide output from the computer to the operator. In other words, the user interface may allow an operator to control or manipulate a computer, and the interface may allow the computer to indicate the effects of the operator's control or manipulation. The display of data or information on a display or a graphical user interface is an example of providing information to an operator. A keyboard, mouse, trackball, touchpad, pointing stick, graphics tablet, joystick, gamepad, webcam, headset, gear stick, steering wheel, pedals, wired glove, dance pad, remote control, and accelerometer are all examples of user interface components which enable the receiving of information or data from an operator.
[0089] A ‘hardware interface’ as used herein encompasses an interface which enables the processor of a computer system to interact with and/or control an external computing device and/or apparatus. A hardware interface may allow a processor to send control signals or instructions to an external computing device and/or apparatus. A hardware interface may also enable a processor to exchange data with an external computing device and/or apparatus. Examples of a hardware interface include, but are not limited to: a universal serial bus, IEEE 1394 port, parallel port, IEEE 1284 port, serial port, RS-232 port, IEEE-488 port, Bluetooth connection, wireless local area network connection, TCP/IP connection, Ethernet connection, control voltage interface, MIDI interface, analog input interface, and digital input interface.
[0090] A ‘display’ or ‘display device’ as used herein encompasses an output device or a user interface adapted for displaying images or data. A display may output visual, audio, and/or tactile data. Examples of a display include, but are not limited to: a computer monitor, a television screen, a touch screen, a tactile electronic display, a Braille screen, a cathode ray tube (CRT), a storage tube, a bistable display, electronic paper, a vector display, a flat panel display, a vacuum fluorescent display (VFD), a light-emitting diode (LED) display, an electroluminescent display (ELD), a plasma display panel (PDP), a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, a projector, and a head-mounted display.
[0092] While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments.
[0093] Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word ‘comprising’ does not exclude other elements or steps, and the indefinite article ‘a’ or ‘an’ does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.
LIST OF REFERENCE NUMERALS
[0094] 100 medical analysis system
[0095] 101 scanning imaging system
[0096] 103 processor
[0097] 107 memory
[0098] 108 power supply
[0099] 109 bus
[0100] 111 control system
[0101] 121 software
[0102] 125 display
[0103] 129 user interface
[0104] 133 databases
[0105] 150 AI component
[0106] 160 machine learning model
[0107] 201-213 method steps
[0108] 301-303 method steps
[0109] 401 acquired k-space data
[0110] 402A-D data chunks
[0111] 403 reconstructed k-space data
[0112] 404 self-consistency matrix
[0113] 405A-D residual data sets
[0114] 410 CNN
[0115] 411 probability map
[0116] 700 magnetic resonance imaging system
[0117] 704 magnet
[0118] 706 bore of magnet
[0119] 708 imaging zone
[0120] 710 magnetic field gradient coils
[0121] 712 magnetic field gradient coil power supply
[0122] 714 radio-frequency coil
[0123] 715 RF amplifier
[0124] 718 subject