METHOD AND APPARATUS FOR WIRELESS PORTABLE ULTRASOUND IMAGING
20220202394 · 2022-06-30
Inventors
- Lawrence Trong-Huan Le (Edmonton, CA)
- Edmond Hok-Ming Lou (Edmonton, CA)
- Kim-Cuong Thi Nguyen (Edmonton, CA)
- Paul William Major (Sherwood Park, CA)
- Neelambar Reddy Kaipatur (Edmonton, CA)
CPC classification
- A61B8/12 (HUMAN NECESSITIES)
- A61B8/0858 (HUMAN NECESSITIES)
- A61B8/4455 (HUMAN NECESSITIES)
- A61B8/5261 (HUMAN NECESSITIES)
- A61B8/4281 (HUMAN NECESSITIES)
- B06B1/067 (PERFORMING OPERATIONS; TRANSPORTING)
- A61B8/5223 (HUMAN NECESSITIES)
- A61C19/043 (HUMAN NECESSITIES)
- G06T7/187 (PHYSICS)
International classification
- A61B8/00 (HUMAN NECESSITIES)
- A61B8/12 (HUMAN NECESSITIES)
- B06B1/06 (PERFORMING OPERATIONS; TRANSPORTING)
Abstract
Presented is a wireless portable ultrasound acquisition system for dental imaging, having an ultrasound probe with a control switch connected through a cable to a portable acquisition unit that communicates wirelessly with a smart tablet or phone display to display the ultrasound images. The system uses ultrasound signals to create images of the alveolar bone structure and the boundaries of the enamel, dentin and gingiva of a patient.
Claims
1. An apparatus for imaging an oral structure of upper and lower jaws at facial and lingual surfaces of a patient, the apparatus comprising: a) an ultrasound probe comprising an array of piezoelectric transducer crystals operating at an ultrasonic frequency of at least 20 megahertz; b) a probe tip configured for housing the array of crystals, the probe tip configured for rotating and bending; c) a gel pad comprising one or both of polymer and hydrogel configured to be disposed on the probe tip and positioned between the array of crystals and the oral structure; d) a battery; and e) a control switch configured for controlling the operation of the apparatus.
2. The apparatus as set forth in claim 1, further comprising a handle, the probe tip rotatably attached to the handle.
3. The apparatus as set forth in claim 1, wherein the gel pad comprises low ultrasonic attenuation at the ultrasonic frequency and is safe for use in the oral structure of the patient, the gel pad configured to cover the array, the gel pad further configured to be shaped to conform to the oral structure to be imaged.
4. The apparatus as set forth in claim 1, comprising an ultrasound data acquisition unit, the acquisition unit comprising: a) a microcontroller or digital signal processor or an application specific integrated circuit (“ASIC”) operatively coupled to the array and configured to control ultrasound signal generation, ultrasound signal acquisition, processing of acquired ultrasound signals and communication of the acquired ultrasound signals; and b) a wireless communications transceiver module operatively coupled to the microcontroller or digital signal processor or ASIC, the transceiver module configured to wirelessly transmit the acquired ultrasound signals to a peripheral smart device comprising a visual display.
5. The apparatus as set forth in claim 4, further comprising a control foot pedal configured for wireless communication with the transceiver module, the foot pedal configured to control the operation of the apparatus.
6. The apparatus as set forth in claim 4, wherein the transceiver module is configured to communicate using one or more of Bluetooth®, Wi-Fi®, Wi-Fi Direct® and ZigBee® communications protocols.
7. The apparatus as set forth in claim 4, wherein the microcontroller or digital signal processor or ASIC is configured to multiplex ultrasound signals transmitted to the array.
8. The apparatus as set forth in claim 4, wherein the microcontroller or digital signal processor or ASIC further comprises an analog-to-digital converter configured to digitize ultrasound signals received from the array.
9. The apparatus as set forth in claim 4, wherein the peripheral smart device comprises one or more of a general purpose computer, a personal digital assistant, a smart phone, a smart television and a computing tablet.
10. The apparatus as set forth in claim 9, wherein the peripheral smart device comprises an iOS® or Android® operating system.
11. The apparatus as set forth in claim 4, wherein the acquisition unit comprises a battery management circuit.
12. The apparatus as set forth in claim 4, wherein the peripheral smart device comprises a memory further comprising software code segments configured to cause the peripheral smart device to carry out one or more steps comprising: a) enhancing ultrasound signals representing images of alveolar bone structure and boundaries of enamel, dentin and gingiva of a patient using a noise removal filter, a contrast enhancement, an edge enhancement, and machine learning; b) identifying peaks (global maxima) and troughs (global minima) of one or more of cementoenamel junctions, alveolar bone crests and gingival sulcus of the patient using object detection and recognition; c) calculating changes in bone level or pocket depth of the patient using measurements between ultrasound images of different periods; d) comparing the ultrasound images of the patient with one or more of CBCT images of an oral structure of the patient and enhancing visualization of soft and hard tissues of the oral structure; e) eliminating artifacts caused by multiple reflections of ultrasonic waves in the ultrasonic images of the oral structure; f) calculating ultrasonic velocity for the hard tissues; and g) correcting the detected thickness of the hard tissues.
13. The apparatus as set forth in claim 12, wherein the software code segments are configured to cause the peripheral smart device to carry out the step of detecting boundary and segments of the oral structure using one or more of multi-label graph cut approach, contrast enhancement, a homomorphic filter, and machine learning.
14. The apparatus as set forth in claim 12, wherein the software code segments are configured to cause the peripheral smart device to carry out the step of extracting interest landmarks of the oral structure using a combination of region extraction, edge detection, local maximum and/or local minimum localization and one or more of adaptive median filtering, homomorphic filtering, and contrast enhancement.
15. The apparatus as set forth in claim 12, wherein the software code segments are configured to cause the peripheral smart device to carry out the step of measuring changes of the oral structure over a period of time using the measurements from ultrasound images of different periods of time.
16. The apparatus as set forth in claim 12, wherein the software code segments are configured to cause the peripheral smart device to carry out the step of fusing the ultrasound images of the oral structure with one or more of CBCT images of the oral structure using a combination of region extraction, edge detection, probability-based set registration, and one or more of adaptive median filtering, homomorphic filtering, and contrast enhancement.
17. The apparatus as set forth in claim 12, wherein the software code segments are configured to cause the peripheral smart device to carry out the step of predicting and removing the multiple reflections artifacts.
18. The apparatus as set forth in claim 12, wherein the software code segments are configured to cause the peripheral smart device to carry out the step of calculating the ultrasonic velocity of the hard tissues.
19. The apparatus as set forth in claim 12, wherein the software code segments are configured to cause the peripheral smart device to carry out the step of correcting the detected thickness of the hard tissues.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF EMBODIMENTS
[0069] In this description, references to “one embodiment”, “an embodiment”, or “embodiments” mean that the feature or features being referred to are included in at least one embodiment of the technology. Separate references to “one embodiment”, “an embodiment”, or “embodiments” in this description do not necessarily refer to the same embodiment and are also not mutually exclusive unless so stated and/or except as will be readily apparent to those skilled in the art from the description. For example, a feature, structure, act, etc. described in one embodiment can also be included in other embodiments but is not necessarily included. Thus, the present technology can include a variety of combinations and/or integrations of the embodiments described herein.
[0070] A method and apparatus for wireless ultrasound imaging is provided for qualitative and quantitative assessment of dental conditions and, in particular, the tooth-periodontal complex.
[0072] In some embodiments, apparatus 100 can comprise wireless portable ultrasound acquisition system 2 for dental imaging, comprising an ultrasound probe 1 with a control switch 3, which can be connected through a cable to a portable acquisition unit that can communicate wirelessly with a smart tablet or phone display 5, using one or both of Wi-Fi Direct® and Bluetooth®, to display the ultrasound images. The control switch can be used to turn the image acquisition on and off. In addition, pedal 6 can also connect to the ultrasound acquisition system to control image acquisition. In some embodiments, the ultrasound acquisition unit can comprise battery 4, and can be configured to operate in emission and reception. The ultrasound probe can operate at a minimum frequency of 20 MHz and can comprise a small-scale multi-array transducer 7 with a matching layer. A layer of hydrogel 8 can also be incorporated to act as a delay line between the transducer and the gum.
[0075] Consider a segment, $\hat{l}$, measured from the ultrasonograph. The true length, $l$, can then be recovered as $l = C\,\hat{l}$, where
[0076] $C$ is the correction factor and $\theta$ is the acute angle $\hat{l}$ makes with the direction perpendicular to the plate (or the direction parallel to the ultrasound beam). The behavior of $C$ as a function of $\theta$ is shown in the accompanying figure.
[0077] In some embodiments, apparatus 100 can provide a portable and improved ultrasonic imaging system constructed to facilitate imaging of the tooth-periodontium complex and qualitative and quantitative assessment of the tooth-periodontal structures of a dental client or a pet animal, in a non-invasive manner.
[0079] In some embodiments, smart device 5 can comprise a processor and a memory, the memory further comprising software code segments configured to cause the smart device to carry out one or more processes on ultrasonic images obtained by apparatus 100, as described herein.
Noise Removal
[0080] In some embodiments, smart device 5 can comprise software code segments configured to cause the smart device to enhance ultrasound signals representing images of the alveolar bone structure and the boundaries of the enamel, dentin, and gingiva of a patient. To accomplish this, different noise filtering techniques for ultrasound imaging can be used, namely linear filtering (such as a Gaussian filter) and nonlinear filtering (such as adaptive median filtering and homomorphic filtering):
[0081] A Gaussian filter is a convolution operation that can be applied to each image pixel with a 2×2 Gaussian kernel to remove high-frequency noise.
[0082] The adaptive median filter can operate in a rectangular window area $S_{xy}$ centered on the pixel (x, y). The output of the adaptive median filter is a new value that replaces the value of the pixel at (x, y) for each filtering window. The adaptive median filter can remove noise while keeping edges relatively sharp.
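By way of illustration only (not part of the original disclosure), the following Python sketch shows one common adaptive median variant in which the window $S_{xy}$ grows until its median is judged not to be an impulse; the window sizes and the NumPy implementation are assumptions made for the example.

```python
import numpy as np

def adaptive_median(img, s_max=7):
    """Adaptive median filter sketch: the window S_xy centered on (x, y)
    grows until its median is not an impulse; the pixel is replaced only
    if it is itself an impulse, which preserves edges."""
    out = img.astype(float).copy()
    pad = s_max // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            for s in range(3, s_max + 1, 2):      # grow the window: 3, 5, 7, ...
                h = s // 2
                win = padded[y + pad - h:y + pad + h + 1,
                             x + pad - h:x + pad + h + 1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                if zmin < zmed < zmax:            # median is not an impulse
                    if not (zmin < img[y, x] < zmax):
                        out[y, x] = zmed          # replace impulse pixel
                    break
            else:
                out[y, x] = zmed                  # fall back to the last median
    return out
```

This per-pixel loop is written for clarity rather than speed; a production version would vectorize or use a compiled implementation.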
[0083] Homomorphic filtering is a process that can comprise three stages: (i) calculating the Fourier transform of the logarithmically compressed image, (ii) applying a high-pass filter function, and (iii) constructing the inverse Fourier transform of the image. As a result, homomorphic filtering can normalize the brightness across the image and enhance contrast. In the homomorphic filtering process, the filter typically has a circularly symmetric shape, centered at the (0, 0) coordinate in the frequency domain. Here, a Gaussian high-pass filter can be used to build the homomorphic function.
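A minimal sketch of the three-stage homomorphic process is given below, assuming a Gaussian high-pass-style (high-emphasis) transfer function; the cutoff `d0` and the gains `gamma_l`/`gamma_h` are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def homomorphic_filter(img, d0=30.0, gamma_l=0.5, gamma_h=2.0):
    """Homomorphic filtering sketch following the three stages in [0083]."""
    log_img = np.log1p(img.astype(float))            # logarithmic compression
    F = np.fft.fftshift(np.fft.fft2(log_img))        # stage (i): FFT, zero frequency at center
    H, W = img.shape
    v, u = np.mgrid[0:H, 0:W]
    d2 = (u - W / 2) ** 2 + (v - H / 2) ** 2         # distance from the centered (0, 0) frequency
    Hf = (gamma_h - gamma_l) * (1 - np.exp(-d2 / (2 * d0 ** 2))) + gamma_l
    out = np.fft.ifft2(np.fft.ifftshift(F * Hf)).real  # stages (ii) and (iii)
    return np.expm1(out)                             # undo the log compression
```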
Contrast Enhancement
[0084] Due to the inherent properties of ultrasound images and the approximate selection of the initial region, the region of interest (“ROI”) is inhomogeneous and has low contrast. The reflection from the alveolar bone is scattered by its rough surfaces, and the corresponding bone boundary is less focused and blurred. Therefore, a linear contrast enhancement approach was applied to enhance the contrast of the images by expanding the original intensity values of the image linearly, thus allowing better detection of the bone boundary. An example of a noise-removed and contrast-enhanced image is given in the figures.
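The linear expansion of intensity values can be sketched as follows; the percentile clipping used here for robustness to outlier pixels is an assumption, not part of the original description.

```python
import numpy as np

def linear_contrast_stretch(img, lo_pct=1, hi_pct=99):
    """Linear contrast enhancement sketch: map the original intensity
    range linearly onto the full 8-bit range."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = (img.astype(float) - lo) / max(hi - lo, 1e-9)
    return (np.clip(out, 0, 1) * 255).astype(np.uint8)
```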
[0085] In some embodiments, smart device 5 can comprise software code segments configured to cause the smart device to identify peaks and troughs of one or more of cementoenamel junctions (“CEJ”), gingival margin and alveolar bone crests of a patient using object detection and recognition.
Image Preprocessing
[0086] As described above, image enhancement can be accomplished using one or more of a Gaussian filter, an adaptive median filter, homomorphic filtering, and contrast enhancement.
Image Segmentation Using Multi-Label Graph Cut
[0087] To obtain an accurate and reproducible detection of the CEJ location, an initial approximate region of interest consisting of the CEJ and part of the enamel and cementum was manually selected and utilized in the proposed approach. K-means clustering can be used for the identification of foreground and background regions within the initial region of interest. K-means (K=2) was used to set two pre-classified labels and build the initial graph, since using all of the pixels as the reference for segmentation may slow down execution. K-means partitions pixel intensities into two initial clusters based on their similarity to the clustering centers. The centers were adjusted based on the average intensity of the pixels. This step was repeated until convergence was reached.
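A minimal sketch of this K-means (K=2) pre-classification step is given below, assuming one-dimensional intensity clustering; the percentile-based initialization is an assumption made for the example.

```python
import numpy as np

def kmeans_intensity(roi, k=2, n_iter=50):
    """K-means sketch for pre-classifying foreground/background labels
    inside the initial region of interest, as in [0087]."""
    pix = roi.reshape(-1).astype(float)
    centers = np.percentile(pix, np.linspace(25, 75, k))   # initial cluster centers
    for _ in range(n_iter):
        # assign each pixel to the nearest center by intensity similarity
        labels = np.argmin(np.abs(pix[:, None] - centers[None, :]), axis=1)
        # adjust centers to the average intensity of their pixels
        new = np.array([pix[labels == j].mean() if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):                      # repeat until convergence
            break
        centers = new
    return labels.reshape(roi.shape), centers
```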
Edge Detection and Enhancement
[0088] Edges are important for differentiating the various types of tissue (gingiva, bone, enamel) in an image. The strength of an edge is calculated from the intensity gradient, that is, the change in intensity in the direction of steepest ascent. Edge enhancement can be done by convolution with first-order derivative kernels (such as the Sobel kernel or the derivative kernels used by the Canny detector) or second-order derivative kernels (such as the Laplacian kernel or the LoG (Laplacian of Gaussian) filter).
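As an illustrative sketch, the gradient magnitude can be computed with first-order Sobel kernels as follows; SciPy's `ndimage.sobel` is used here for convenience.

```python
import numpy as np
from scipy import ndimage

def gradient_magnitude(img):
    """Edge-strength sketch: Sobel kernels give the intensity gradient;
    its magnitude is the change in the direction of steepest ascent."""
    gx = ndimage.sobel(img.astype(float), axis=1)   # horizontal derivative
    gy = ndimage.sobel(img.astype(float), axis=0)   # vertical derivative
    return np.hypot(gx, gy)
```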
Feature Selection
[0089] After clustering the region using graph cut segmentation, the function extracts every point in the foreground region and then detects the edge corresponding to the upper border of the enamel, cementum and alveolar bone. Since enamel, cementum, and alveolar bone are strong ultrasound reflectors, their intensities are very high in comparison with the gingiva and thus easy to detect. Based on the small V-shaped characteristic of the CEJ/gingival margin/alveolar bone crest, our method calculates the absolute value of the change along the vertical coordinate axis relative to the location of the previous point; the point with the largest absolute value of change is taken as the CEJ/gingival margin/alveolar bone crest. In other words, for the upper line of $n$ elements $u(i)$, with $i = 1, \ldots, n$, the differential $u'(i) = u(i+1) - u(i)$ is computed.
[0090] From that, the CEJ/gingival margin/alveolar bone crest was selected as the point corresponding to the maximum absolute value of the differential, $|u'(i)|_{\max}$. Finally, after transforming the pixel location from the ROI coordinates back into the original image coordinates, the function marks the CEJ/gingival margin/alveolar bone crest in the original image.
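A minimal sketch of this landmark selection, assuming the upper border u(i) has already been extracted as a one-dimensional array of vertical coordinates:

```python
import numpy as np

def detect_crest(upper_line):
    """Landmark sketch for [0089]-[0090]: the V-shaped CEJ/gingival margin/
    alveolar bone crest is taken at the largest absolute change of the
    vertical coordinate along the upper border."""
    u = np.asarray(upper_line, dtype=float)
    du = np.diff(u)                    # u'(i) = u(i+1) - u(i)
    i = int(np.argmax(np.abs(du)))     # index of max |u'(i)|
    return i, du[i]
```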
[0091] In some embodiments, changes in pocket depth (A), alveolar bone level relative to the CEJ (B), or gingiva thickness at the CEJ (C) of the patient (as shown in the figures) can be calculated using measurements from ultrasound images acquired at different times.
[0092] In some embodiments, smart device 5 can compare the ultrasound images of a patient with one or more of CBCT images and/or MRI images of the oral structure and enhance visualization of the soft and hard tissues of the oral structure by means of Coherent Point Drift (“CPD”) registration.
Region-Growing Segmentation
[0093] This method is a common and effective approach to image segmentation. The user specifies a seed point inside the object to be segmented. Consider a pixel $f$ as a seed point with an intensity $I_f$. The pixels neighboring $f$ are evaluated to determine whether they should also be considered part of the object. To do so, a tolerance, ±t, is set for the lower and upper limits. The “flood fill” region-growing algorithm will add a neighboring pixel $q$ to pixel $f$'s region if $I_q$ is inside the interval $[I_f - t,\ I_f + t]$. The process is repeated recursively for the other neighbors of $f$ to expand from the seed pixel to a coherent region.
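A sketch of this flood-fill scheme is given below; the recursion of the description is replaced by an explicit stack, and 4-connectivity is assumed for the example.

```python
import numpy as np

def region_grow(img, seed, tol):
    """Flood-fill region-growing sketch for [0093]: starting from a user
    seed f = (row, col), add neighbors q whose intensity I_q lies inside
    [I_f - t, I_f + t], expanding outward to a coherent region."""
    H, W = img.shape
    i_f = float(img[seed])
    mask = np.zeros((H, W), dtype=bool)
    stack = [seed]
    while stack:
        y, x = stack.pop()
        if not (0 <= y < H and 0 <= x < W) or mask[y, x]:
            continue
        if abs(float(img[y, x]) - i_f) > tol:     # outside the tolerance interval
            continue
        mask[y, x] = True
        stack.extend([(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)])
    return mask
```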
Coherent Point Drift (CPD) Registration
[0094] The method considers the alignment of two point sets as a probability density estimation problem. By maximizing the likelihood, CPD can fit the Gaussian mixture model (“GMM”) centroids of the moving point set to the fixed point set. The GMM probability density function, $p$, is

$p(x) = w\,\frac{1}{N} + (1-w)\sum_{m=1}^{M}\frac{1}{M}\,\frac{1}{(2\pi\sigma^{2})^{D/2}}\exp\!\left(-\frac{\lVert x - y_{m}\rVert^{2}}{2\sigma^{2}}\right)$
[0095] where $D$ is the dimension of the point sets, $N$ and $M$ are the numbers of points in the fixed and moving point sets, $y_m$ are the GMM centroid locations, $\sigma^2$ is the isotropic covariance of the Gaussian components, and the weight $w$ ($0 \le w \le 1$) provides flexible control in the presence of severe outliers and missing points. In rigid registration, the coherence constraint is imposed by re-parameterizing the GMM centroid locations with rigid parameters, and a closed-form solution of the maximization step of the expectation-maximization (“EM”) algorithm can be derived in arbitrary dimensions. The EM algorithm used to optimize the likelihood function can comprise two steps: an E-step to compute the probabilities and an M-step to update the transformation. Another advantage of CPD is that it can preserve the topological structure of the point sets, because the GMM centroids are moved coherently as a group.
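For illustration, a compact NumPy sketch of the rigid CPD E- and M-steps (closed-form rotation update via SVD, following the published CPD formulation) is given below; it is a simplified example under the definitions above, not the implementation of the disclosure.

```python
import numpy as np

def rigid_cpd(X, Y, w=0.1, n_iter=50):
    """Rigid CPD sketch: fit GMM centroids Y (moving, M x D) to the fixed
    point set X (N x D); w is the uniform-outlier weight, 0 <= w <= 1."""
    N, D = X.shape
    M, _ = Y.shape
    R, s, t = np.eye(D), 1.0, np.zeros(D)
    sigma2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum() / (D * M * N)
    for _ in range(n_iter):
        T = s * Y @ R.T + t                                     # transformed centroids (M x D)
        # E-step: posteriors P[m, n] = p(centroid m | point x_n)
        d2 = ((X[None, :, :] - T[:, None, :]) ** 2).sum(axis=2)  # (M, N)
        num = np.exp(-d2 / (2 * sigma2))
        c = (2 * np.pi * sigma2) ** (D / 2) * (w / (1 - w)) * (M / N)
        P = num / (num.sum(axis=0, keepdims=True) + c)
        # M-step: closed-form rigid update (rotation via SVD)
        Np = P.sum()
        mu_x = X.T @ P.sum(axis=0) / Np
        mu_y = Y.T @ P.sum(axis=1) / Np
        Xh, Yh = X - mu_x, Y - mu_y
        A = Xh.T @ P.T @ Yh
        U, _, Vt = np.linalg.svd(A)
        Cm = np.eye(D)
        Cm[-1, -1] = np.linalg.det(U @ Vt)                      # keep a proper rotation
        R = U @ Cm @ Vt
        s = np.trace(A.T @ R) / (P.sum(axis=1) @ (Yh ** 2).sum(axis=1))
        t = mu_x - s * (R @ mu_y)
        sigma2 = max(((P.sum(axis=0) @ (Xh ** 2).sum(axis=1))
                      - s * np.trace(A.T @ R)) / (Np * D), 1e-12)
    return R, s, t
```

The closed-form M-step is what the description refers to as re-parameterizing the GMM centroid locations with rigid parameters.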
An Example of Coherent Point Drift Registration Between US and CBCT
[0098] In some embodiments, smart device 5 can comprise software code segments configured to cause the smart device to eliminate artifacts caused by multiple reflections of ultrasonic waves in the raw ultrasonic signals by means of predictive deconvolution.
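A sketch of one standard predictive (gap) deconvolution scheme is shown below, assuming a Wiener prediction filter solved from the trace autocorrelation with a Levinson-type Toeplitz solver; the `gap` and `flen` parameters are assumptions tied to the period of the multiples, and the trace is assumed longer than `gap + flen`.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_deconvolution(trace, gap, flen, eps=1e-3):
    """Gap (prediction-error) deconvolution sketch for one A-scan.
    gap: prediction distance in samples (roughly the multiple period);
    flen: prediction filter length; eps: pre-whitening factor."""
    n = len(trace)
    r = np.correlate(trace, trace, mode="full")[n - 1:]   # one-sided autocorrelation
    col = r[:flen].copy()
    col[0] *= 1.0 + eps                                   # stabilize the normal equations
    g = r[gap:gap + flen]                                 # right-hand side at lag `gap`
    f = solve_toeplitz(col, g)                            # Wiener prediction filter
    pred = np.convolve(trace, f)[:n]                      # pred[t] ~ trace[t + gap]
    out = trace.astype(float).copy()
    out[gap:] -= pred[:n - gap]                           # subtract the predictable part
    return out
```

The prediction distance acts as the lag beyond which energy is treated as predictable (multiple reflections) rather than primary reflection.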
[0101] In some embodiments, smart device 5 can comprise software code segments configured to cause the smart device to calculate the velocity of ultrasound signals in hard tissues of the patient and to correct the detected thickness of the alveolar bone of the patient. The corrected velocity is:
[0102] where the corrected thickness is:
Image Segmentation Using Machine Learning
[0103] The proposed machine learning method primarily consists of an encoder component and a decoder component, which capture the image features and construct and localize the segmentation labels, respectively. All parameters of the neural networks were initialized and computed using the training data, with the parameter values updated iteratively to minimize a cost function. Although not used for computing the neural network parameters, the validation set was also utilized during training to determine when to stop the parameter updates, to prevent overfitting.
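By way of example only, a minimal PyTorch encoder-decoder of this kind (one pooling stage, one skip connection) might look as follows; the layer widths and class count are illustrative assumptions, and even input dimensions are assumed so the skip connection shapes match.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Minimal encoder-decoder sketch for multi-label segmentation of
    ultrasound images (e.g., gingiva/bone/enamel/background classes)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.enc1 = self._block(1, 16)
        self.enc2 = self._block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = self._block(32, 16)
        self.head = nn.Conv2d(16, n_classes, 1)

    @staticmethod
    def _block(c_in, c_out):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        e1 = self.enc1(x)                            # encoder captures image features
        e2 = self.enc2(self.pool(e1))
        d1 = self.up(e2)                             # decoder localizes the labels
        d1 = self.dec1(torch.cat([d1, e1], dim=1))   # skip connection
        return self.head(d1)                         # per-pixel class logits

# Usage sketch: logits = TinySegNet()(torch.randn(1, 1, 64, 64))  -> (1, 4, 64, 64)
```

Training such a network would minimize a cost function (e.g., cross-entropy) on the training data, with early stopping monitored on the validation set as described above.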
[0104] In some embodiments, smart device 5 can comprise software code segments configured to detect boundaries and segments of the oral structure using a multi-label graph cut optimization approach or machine learning.
[0105] The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans can implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the embodiments described herein.
[0106] Embodiments implemented in computer software can be implemented in software, firmware, middleware, microcode, hardware description languages, or any combination thereof. A code segment or machine-executable instructions can represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment can be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. can be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
[0107] The actual software code or specialized control hardware used to implement these systems and methods is not limiting of the embodiments described herein. Thus, the operation and behavior of the systems and methods were described without reference to the specific software code, it being understood that software and control hardware can be designed to implement the systems and methods based on the description herein.
[0108] When implemented in software, the functions can be stored as one or more instructions or code on a non-transitory computer-readable or processor-readable storage medium. The steps of a method or algorithm disclosed herein can be embodied in a processor-executable software module, which can reside on a computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable media include both computer storage media and tangible storage media that facilitate the transfer of a computer program from one place to another. Non-transitory processor-readable storage media can be any available media that can be accessed by a computer. By way of example, and not limitation, such non-transitory processor-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible storage medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer or processor. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm can reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which can be incorporated into a computer program product.
[0109] Although a few embodiments have been shown and described, it will be appreciated by those skilled in the art that various changes and modifications can be made to these embodiments without changing or departing from their scope, intent or functionality. The terms and expressions used in the preceding specification have been used herein as terms of description and not of limitation, and there is no intention in the use of such terms and expressions of excluding equivalents of the features shown and described or portions thereof, it being recognized that the invention is defined and limited only by the claims that follow.