Radar-based Cross-sectional Image Reconstruction of Subject
20220137208 · 2022-05-05
Inventors
- Raghed El Bardan (Centerville, VA, US)
- Albert Dirienzo (Cazenovia, NY, US)
- Dhaval Malaviya (Centerville, VA, US)
CPC classification
G01S7/2923
PHYSICS
G01S7/2806
PHYSICS
International classification
Abstract
One or more aspects of this disclosure relate to the use of an impulse radio ultra-wideband (IR-UWB) radar to reconstruct a cross-sectional image of a subject in a noninvasive fashion. This image is reconstructed based on the pre- and post-processing of recorded waveforms that are collected by the IR-UWB radar after being reflected off the subject. Furthermore, a novel process is proposed to approximate the different tissues' dielectric constants and, accordingly, reconstruct a subject's cross-sectional image.
Claims
1. A process comprising: generating one or more waveforms; transmitting, via one or more transmit antennas, the one or more waveforms; receiving as signals, via one or more receive antennas, reflections of the one or more waveforms; generating a time-delayed copy of the received signals; autocorrelating the received signals with the time-delayed copy of the received signals; applying a k-point moving average; averaging all received signals; and blocking a DC component by subtracting the averaged signals from each signal.
2. The process of claim 1, wherein the k-point moving average is applied to remove outliers and short-term fluctuations.
3. The process of claim 1, wherein subtracting the averaged signals from each signal removes clutter and static objects.
4. A process comprising: receiving as signals, via one or more receive antennas, reflections of one or more waveforms; sampling the signals as M signals in N sampling time units, wherein the N sampling time units represent N-elements in a received waveform b.sub.i, where ∀i∈{1, 2, . . . , M}; for each of the M signals, determining the distances at which the N-elements are sampled; determining a reflection coefficient, Γ.sub.i,j, of each of the N-elements at a j-th medium boundary between mediums; determining, for each medium, the medium's dielectric constant, ∀j, using a vector network analyzer; and constructing an M×N matrix, ε, of the computed dielectric constants.
Description
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
[0008] Certain specific configurations of a modeling system and components thereof, are described below with reference to the accompanying figures.
[0016] It will be recognized by the person skilled in the art, given the benefit of this disclosure, that the exact arrangement, sizes, and positioning of the components in the figures are not necessarily to scale or required. The particular sizes and angles of one component relative to another may vary to provide a desired response or output from the component or other structures.
DETAILED DESCRIPTION
[0017] Systems, methods, and computer-readable media are described that facilitate the imaging of internal structures of living organisms. Dielectric constants of the different tissues, organs, and fluids that constitute a subject may be estimated at their corresponding depths. These inferences are based on the pre- and post-processing of recorded waveforms that are collected by a radar (e.g., an ultra-wideband radar) used with one or more transmitter-receiver pairs. For reference, the disclosure uses the term IR-UWB radar as an example type of radar. It is appreciated that other radar systems may be used. A waveform is defined as the shape and form of a signal, such as a wave moving in a physical medium.
[0018] For multiple pairs of receivers, the pairs may be controlled to operate as one or more active phased arrays and mounted on- or off-body, with the signals being reflected off the subject. In the case where two or more IR-UWB radars are used, they may operate simultaneously, transmitting similar Gaussian-modulated pulses. When used in active phased arrays, the pulses may include phase shifts in their dispersal patterns and be processed with phased-array processing. Alternatively, only one radar transmitter/receiver antenna pair may be used instead. While a single transmitter/receiver pair has a simpler architecture, its associated computational complexity increases.
[0019] The procedure through which the recorded waveforms are processed in order to extract these dielectric constants and, consequently, construct the subject's cross-sectional image is described in the process below.
[0020] Various tools and techniques may be used with the techniques being based on machine learning (e.g., regression, decision trees, random forest, SVM, gradient-boosting algorithms, neural nets, Markov decision process, etc.), signal processing (e.g., sampling, filtering, autocorrelation, adaptive noise cancellation, etc.), statistics (e.g., pre-processing and post-processing metrics that involve the computation of means, modes, etc.), and logic analysis (e.g., conditional statements to narrow down a list of choices, a 1-to-1 mapping function, etc.).
[0026] Also, the output of the pulse repetition frequency generator 503 is received by a range delay circuit, which adds a delay as instructed by the controller 501, and output to the receiver 509. The receiver selectively decodes the received waveforms based on the bins associated with the distances d.sub.1-d.sub.n. The output of receiver 509 is converted from an analog signal into a digital signal via A/D converter 510. The results are interpreted by signal processor 511 and exchanged with the controller 501 and storage 502.
[0027] For a phased-array setup, the output of the pulse repetition frequency generator 503 is output to transmitter 504 (optionally also using range delay 508 to adjust each phase) and the resulting signals sent to respective output transmitter antennas 505a-505c. The signals reflect off various structures in subject 506 and are received by receiver antennas 507a-507c, respectively. The remaining processing is similar to the process described above but based on a phased-array combination of signals.
[0029] Let F.sub.s denote the fast-time sampling rate at which each reflected signal is sampled; each element in the resulting N-elements vector represents the reflected signal's sample in its corresponding range bin. Let f.sub.s denote the slow-time sampling rate at which the N-elements received waveforms are recorded, so that the sampling time vector in this dimension is [0, 1/f.sub.s, . . . , (M−1)/f.sub.s]. The N-elements received waveforms are collected over a period of M units of sampling time in this dimension. As a result, an M×N matrix is obtained and may be represented by:
[a.sub.1,1, a.sub.1,2, . . . , a.sub.1,N; a.sub.2,1, a.sub.2,2, . . . , a.sub.2,N; . . . ; a.sub.M,1, a.sub.M,2, . . . , a.sub.M,N]
[0030] where a.sub.i,j, for i∈{1, 2, . . . , M} and j∈{1, 2, . . . , N}, denotes the normalized amplitude (or amplitude) of the i-th reflected signal sampled at the j-th time unit. The recording of a reflected signal or received waveform may be referred to as slow-time sampling (with a sampling frequency of f.sub.s). On the other hand, fast-time sampling (with a sampling frequency of F.sub.s) denotes the rate at which the samples that comprise a given received waveform are collected. Note here that F.sub.s>>f.sub.s is a valid assumption.
[0031]
[0032] The following describes various processes for processing received waveforms.
[0033] Phase 1: Pre-Processing
[0034] An autocorrelation routine may be used to strengthen the time-lagged signals in the matrix. This may be done by taking the correlation of a signal (i.e., a column in the matrix) with a delayed copy of itself as a function of delay. In order to do this, the autocorrelation of a signal a.sub.j (where a.sub.j=[a.sub.1,j, a.sub.2,j, . . . , a.sub.M,j]) is computed based on:
Σ.sub.i=1.sup.M a.sub.i,j a.sub.i-τ,j, ∀j∈{1, 2, . . . , N}.
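A minimal sketch of this lag-sum autocorrelation, with out-of-range samples treated as zero (function name illustrative):

```python
import numpy as np

# Lag-sum autocorrelation of one slow-time column a_j:
# r[tau] = sum_i a[i] * a[i - tau], with samples outside the record taken as 0.
def autocorr(a):
    n = len(a)
    return np.array([np.sum(a[tau:] * a[:n - tau]) for tau in range(n)])

r = autocorr(np.array([1.0, 2.0, 3.0, 4.0]))
# r = [30.0, 20.0, 11.0, 4.0]
```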
[0035] Alternatively, one may compute the autocorrelation from the raw signal, e.g., a.sub.j, using two Fast Fourier transforms (FFTs) according to:
IFFT[FFT[a.sub.j](FFT[a.sub.j])*],
[0036] where IFFT is the inverse FFT and (.)* is the complex conjugate of (.). The short-term fluctuations may be smoothed and longer-term trends may be highlighted by applying a simple low-pass FIR filter, e.g., a k-point moving average filter, in both dimensions. This filter takes k samples of input at a time, computes the average of those k samples, and produces a single output point. The background clutter may be removed by subtracting the average of all waveforms from each signal, yielding a matrix χ.
[0037] The static DC component may be blocked by subtracting the average of all columns in χ from each column in χ.
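The smoothing, clutter-removal, and DC-blocking steps above can be sketched as follows; a minimal NumPy illustration in which the function name, the k value, and the toy matrix are all illustrative:

```python
import numpy as np

# Sketch of the pre-processing chain: k-point moving average along fast time,
# clutter removal (subtract the average of all waveforms from each row), and
# DC blocking (subtract the average of all columns of chi from each column).
def preprocess(A, k=3):
    kernel = np.ones(k) / k
    # k-point moving average smooths short-term fluctuations in each waveform.
    smoothed = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, A)
    # Clutter removal: subtract the mean waveform (row-wise average) per bin.
    chi = smoothed - smoothed.mean(axis=0, keepdims=True)
    # DC blocking: subtract each row's mean across columns.
    return chi - chi.mean(axis=1, keepdims=True)

M, N = 4, 8
A = np.arange(M * N, dtype=float).reshape(M, N)
out = preprocess(A, k=3)
# Both the per-bin and per-waveform means of the output are (near) zero.
assert np.allclose(out.mean(axis=0), 0.0)
```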
[0038] Phase 2: Processing
[0039] The sampling times for any N-elements received waveform b.sub.i, ∀i∈{1, 2, . . . , M}, are given by t.sub.n=n/F.sub.s, n∈{0, 1, . . . , N−1}. This said, one may compute the corresponding distances at which the elements of the recorded waveform (b.sub.i) are sampled as follows:
d.sub.n=(V·t.sub.n)/2, where V=C/√∈.sub.r,
[0040] where V is the signal's propagation speed in a given medium, C is the speed of light in vacuum, ∈.sub.r is the dielectric constant of the medium, and the factor of two accounts for the round trip of the reflected signal.
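A minimal sketch of this depth computation (the function name and parameter values are illustrative):

```python
# Map fast-time sample indices to round-trip depths:
# t_n = n / F_s, V = C / sqrt(eps_r), d_n = V * t_n / 2.
C = 299_792_458.0  # speed of light in vacuum (m/s)

def sample_depths(n_samples, F_s, eps_r):
    V = C / eps_r ** 0.5  # propagation speed in the medium
    # Divide by two because the pulse travels to the reflector and back.
    return [V * (n / F_s) / 2.0 for n in range(n_samples)]

# Example: 4 samples at a 10 GHz fast-time rate in free space (eps_r = 1).
depths = sample_depths(4, 10e9, 1.0)
# depths[1] is about 0.015 m (1.5 cm per fast-time sample).
```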
[0041] Reflected signal and recorded waveform are used interchangeably in this disclosure.
[0042] One may compute the reflection coefficient of each N-elements received waveform (b.sub.i) at the j-th medium boundary, Γ.sub.i,j, according to the following equation:
Γ.sub.i,j=A.sub.i,j.sup.ref/A.sub.i,j-1.sup.inc,
[0043] where A.sub.i,j.sup.ref denotes the amplitude of the reflected signal (b.sub.i) at the boundary of medium j and A.sub.i,j-1.sup.inc represents the amplitude of the incident signal at the boundary of medium j−1. The reflection coefficient is defined as a parameter that describes how much of an electromagnetic wave is reflected by an impedance discontinuity in the transmission medium.
[0044] One may compute A.sub.i,j-1.sup.inc based on:
A.sub.i,j-1.sup.inc=A.sub.0.sup.inc−Σ.sub.k=1.sup.j-1A.sub.i,k.sup.ref,
[0045] where A.sub.0.sup.inc is the transmitted signal's amplitude.
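The two relations above can be combined into a short sketch: the incident amplitude at each boundary is the transmitted amplitude minus the amplitudes already reflected at earlier boundaries, and the reflection coefficient is the ratio of reflected to incident amplitude (the function name and example amplitudes are illustrative):

```python
# Compute Gamma_{i,j} = A_ref[j] / A_inc[j-1], where
# A_inc[j-1] = A0_inc - sum of the reflected amplitudes at earlier boundaries.
def reflection_coefficients(A_ref, A0_inc):
    gammas = []
    for j in range(len(A_ref)):
        A_inc = A0_inc - sum(A_ref[:j])  # incident amplitude at boundary j
        gammas.append(A_ref[j] / A_inc)
    return gammas

# Example: transmitted amplitude 1.0, reflected amplitudes at three boundaries.
g = reflection_coefficients([0.2, 0.1, 0.05], 1.0)
# g = [0.2, 0.125, 0.0714...] -- each later boundary sees a weaker incident wave.
```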
[0046] Γ.sub.i,j is also given by:
Γ.sub.i,j=(√∈.sub.r,j-1−√∈.sub.r,j)/(√∈.sub.r,j-1+√∈.sub.r,j),
[0047] where ∈.sub.r,j-1 and ∈.sub.r,j denote the dielectric constants of the propagation media on either side of the j-th boundary. Solving for the j-th medium's dielectric constant yields:
∈.sub.r,j=∈.sub.r,j-1((1−Γ.sub.i,j)/(1+Γ.sub.i,j)).sup.2,
[0048] where ∈.sub.r,j-1 is known from the previous boundary, starting from the first medium (e.g., air).
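This inversion can be applied recursively along the depth profile; a minimal sketch, assuming the first medium is air (∈.sub.r=1) and using the standard relation ∈.sub.r,j=∈.sub.r,j-1((1−Γ)/(1+Γ)).sup.2 (the function name is illustrative):

```python
# Recover each medium's dielectric constant from the boundary reflection
# coefficients: eps_{r,j} = eps_{r,j-1} * ((1 - G) / (1 + G)) ** 2.
def dielectric_profile(gammas, eps_first=1.0):
    eps = [eps_first]  # dielectric constant of the first medium (e.g., air)
    for g in gammas:
        eps.append(eps[-1] * ((1.0 - g) / (1.0 + g)) ** 2)
    return eps

# Example: negative Gamma indicates a transition into a denser medium.
profile = dielectric_profile([-0.2, -0.1])
# profile = [1.0, 2.25, 3.3611...]
```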
[0049] These steps may be repeated for the remaining M−1 recorded waveforms. Consequently, an M×N matrix, ε, may be constructed that is filled with the computed dielectric constants. Accordingly, one obtains:
ε=[∈.sub.i,j], i∈{1, 2, . . . , M}, j∈{1, 2, . . . , N}.
[0050] Phase 3: Post-Processing
[0051] For each column in ε, a clustering method (e.g., k-means, hierarchical clustering, a mixture of Gaussians, etc.) may be applied and, accordingly, the centroid of the formed clusters and the number of nodes (elements) that each cluster is made of may be saved. Clustering is a technique for finding similarity groups in data, called clusters. Here, it attempts to group propagation media in a population together by the similarity of their dielectric properties (constants). Clustering is often called an unsupervised learning approach, as the dataset is unlabeled and no class values denoting an a priori grouping of the data instances are given. K-means is a method of vector quantization that aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean.
[0052] The centroid of a finite set of k points x.sub.1, x.sub.2, . . . , x.sub.k is given by their arithmetic mean, (x.sub.1+x.sub.2+ . . . +x.sub.k)/k.
[0053] If only one cluster forms in column j, j∈{1, 2, . . . , N}, then its centroid value constitutes ∈.sub.r,j; if more than one cluster forms, the centroid of the cluster with the maximum number of elements is selected. Accordingly, one obtains:
∈.sub.r=[∈.sub.r,1, ∈.sub.r,2, . . . , ∈.sub.r,N].
[0054] Furthermore, one applies the clustering method again, but on ∈.sub.r this time in order to cluster its equal and/or approximate elements together. Then, the centroid value of each cluster substitutes the values of nodes or elements that are attached to it. Accordingly, ∈.sub.r is updated.
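The per-column clustering step can be sketched with a minimal one-dimensional k-means (all function names, the choice k=2, and the toy ε matrix are illustrative; a library implementation such as k-means from a standard ML package could be substituted):

```python
import numpy as np

# Minimal 1-D k-means: alternate nearest-centroid assignment and mean update.
def kmeans_1d(x, k=2, iters=20):
    centroids = np.linspace(x.min(), x.max(), k)
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = x[labels == c].mean()
    return centroids, labels

# For each column of eps, keep the centroid of the most populous cluster.
def column_centroids(eps, k=2):
    out = []
    for j in range(eps.shape[1]):
        centroids, labels = kmeans_1d(eps[:, j], k)
        counts = np.bincount(labels, minlength=k)
        out.append(centroids[np.argmax(counts)])
    return np.array(out)

# Toy 4x2 dielectric matrix: column 0 has one outlier, column 1 is uniform.
eps = np.array([[1.0, 9.0], [1.1, 9.2], [1.2, 9.1], [5.0, 9.0]])
eps_r = column_centroids(eps, k=2)
# eps_r[0] is the centroid of the dominant low cluster (about 1.1).
```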
[0055] A grayscale and/or color-map matching scheme may be used that assigns unique values in ∈.sub.r to unique grayscale color codes. Note here that a grayscale color consists of equal intensities of each color in the RGB format. In order to do that, one, for example, may record the dielectric constant in ∈.sub.r and convert that number into its hexadecimal representation (e.g., a 0 maps to #000000 in grayscale color code and 10 maps to #0A0A0A in grayscale color code).
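The grayscale mapping described above can be sketched as follows; a minimal illustration (the function name and byte clamping are illustrative) in which an integer-valued dielectric constant is repeated across the R, G, and B channels of a hex color code:

```python
# Map a dielectric-constant value to a grayscale hex code with equal
# R, G, B intensities, e.g., 0 -> "#000000" and 10 -> "#0A0A0A".
def grayscale_code(value):
    v = int(round(value)) & 0xFF  # clamp to one byte for illustration
    return "#{0:02X}{0:02X}{0:02X}".format(v)

codes = [grayscale_code(e) for e in (0, 10)]
# codes = ["#000000", "#0A0A0A"]
```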
[0056] The color codes may be obtained for the values in ∈.sub.r and, eventually, an image may be constructed. For example, an image may be represented by the following vector of color codes:
[0057] [#A0A0A0; #E1E1E1; #FFFFFF; #A4A4A4; #363636; #898989]
[0058] One or more aspects may include:
[0059] A process comprising: the generation and transmission of waveforms as detailed above; and the receiving, as signals, of reflections of the one or more waveforms.
[0060] A process comprising: the sampling of any N-elements received waveform b.sub.i, ∀i∈{1, 2, . . . , M}; the evaluation of the depths or distances at which the elements of the waveform (b.sub.i) are sampled based on the previous equations; the computation of the reflection coefficient of each N-elements received waveform (b.sub.i) at the j-th medium boundary, Γ.sub.i,j, according to the aforementioned equations; the evaluation of the j-th propagation medium's dielectric constant, ∀j, based on the previous equation; the direct measurement of medium j's dielectric constant, ∀j, using a vector network analyzer; the repetition of the aforementioned steps for the remaining M−1 recorded waveforms; and the construction of an M×N matrix, ε, filled using the computed dielectric constants.
[0061] A process comprising: the application of a clustering method (e.g., k-means, hierarchical clustering, a mixture of Gaussians, etc.) on columns in ε and, accordingly, the recording of the centroid of each cluster along with the number of elements each cluster is made of; the selection of the one cluster that admits the maximum number of nodes or elements if there is more than one cluster in a column, and the recording of its centroid value; the construction of ∈.sub.r=[∈.sub.1, ∈.sub.2, . . . , ∈.sub.N] in which each entry is the result of the clustering method; the use of clustering or a classification method (e.g., decision trees and random forests) on ∈.sub.r in order to group its equal and/or approximate elements together or classify them; the substitution of the values of elements in any cluster by its centroid value; the update of ∈.sub.r; the implementation of a grayscale color-map matching scheme that assigns unique values in ∈.sub.r to unique grayscale color codes by taking the dielectric constant in ∈.sub.r and converting that number into its RGB hexadecimal representation; the display of the color codes as obtained for values in ∈.sub.r and, eventually, the obtaining of an image; and the implementation of a personalized color-map matching scheme that is user-defined and not limited to grayscale color codes in order to highlight certain functionalities (e.g., highlighting blood flow going from the heart to the extremities with a color that is different from that of its flow back to the heart).
[0062] A process comprising: a UWB-based radar (sensor) or some other type of radar (e.g., Doppler radar), a lidar (which stands for Light Detection and Ranging and is a remote sensing method that uses light in the form of a pulsed laser to measure ranges), or a camera-based sensor, whether on-body or off-body; one or more UWB sensors, each supplied with either a single pair of transmit and receive antennas or an active phased-array antenna which, in turn, is composed of many radiating elements, each with its own transmitter-receiver unit and a phase shifter (the radiation beams, in this case, are formed by shifting the phase of the signal emitted from each radiating element to provide constructive/destructive interference and steer the beams in the desired direction); the setup of UWB sensors in such a way that two or more dimensions are imaged, where each sensor can cover or target one dimension of the subject to be imaged by transmitting and receiving waveforms and, later, processing the information collected regarding the different propagation media's dielectric constants; (although not necessary from a system-functioning perspective) the synchronization of the UWB sensors' receivers' sweep via a master oscillator that sets the pulse repetition frequency of the sensors' emitted pulses and a controller issuing the sweep or scan commands; the transmission of signals' reflections to a signal processor and then a storage unit (e.g., a cloud, mobile phone, tablet, etc.) using wireless or wired connectivity; the pre- and post-processing of signals' reflections as described above using, but not limited to, learning methods (e.g., regression, decision trees, random forest, SVM, gradient-boosting algorithms, neural nets, Markov decision process, etc.) and signal processing techniques (e.g., sampling, filtering, autocorrelation, adaptive noise cancellation, etc.); the reconstruction of one or more cross-sectional subject images, each corresponding to the dimension that the sensor is covering, also as described above; and the fusion of multiple reconstructed one-dimensional images, or of the information which was used to build those images (e.g., reflection coefficients and dielectric constants obtained in each imaging dimension), using a Kalman filtering approach in order to obtain a more complex, complete, and meaningful image of the subject that is of higher dimension.
[0063] A process comprising: the sensor as a stand-alone device used in both on-body and off-body imaging architectures, including but not limited to being mounted on a wall, in a bed, etc. in off-body architectures; and the use of a machine, such as medical imaging equipment or robots (used by doctors in surgeries, for example), with one or more sensors respectively mounted on or built into one or more moving robotic arms.
[0064] A process comprising: the imaging of a subject or any part or organ thereof (e.g., legs and hands in humans and animals, etc.); the real-time detection and tracking of organs, tissues, bones, fluids, and/or physiological abnormalities (e.g., functional, organic, metabolic, etc.), including but not limited to tumors, based on the reconstructed image (tumors, for example, have dielectric properties that differ from those of the body organ they are attached to or exist in); and the providing of feedback (by the physician), based on the localization and detection of a target (a tumor, for example) as well as the tracking performance, to make better and more efficient clinical decisions in terms of preventive (e.g., screening), predictive, and/or diagnostic measures (e.g., using this imaging functionality to detect the onset of an illness or a disease, such as tumors in the lungs or edema, that might otherwise go undetected, and, accordingly, allow physicians to assess and investigate these conditions more thoroughly).
[0065] A process comprising: the fusion of the reconstructed image (or the data processed leading to the image reconstruction) with different information coming from different sources and/or sensors to make more reliable inferences on particular phenomena of interest and draw new learnings.
[0066] A process comprising: a separate radar chip (e.g., a silicon chip) and its corresponding transmitter-receiver antennas whether single or in the form of a phased array, or one application-specific integrated circuit (ASIC) that incorporates a radar, antennas of any suitable form, required hardware components, and any firmware or software setup including the processes to perform the required task.
[0067] A process comprising: the ability to use machine learning methods and/or artificial intelligence techniques (e.g., regression, decision trees, random forest, gradient-boosting algorithms, neural nets, Markov decision process, etc.) on the reflected signals in order to infer robust and useful information which, in turn, can be fed back to a controller that would automatically adjust the antenna beams (one or more) so as to maximize the signal-to-noise ratio (intelligent beamforming).