APPARATUS FOR REAL-TIME VISUALIZING A MOVEMENT OF A LOWER JAW VERSUS AN UPPER JAW IN A CRANIOMAXILLOFACIAL AREA OF A PATIENT IN DENTAL DIAGNOSTICS
20220133248 · 2022-05-05
Assignee
Inventors
- Wolf BLECHER (Hemsbach, DE)
- Franziska RIVERSA (Modautal, DE)
- Ulrich SCHULZE-GANZLIN (Lorsch, DE)
- Kai LINDENBERG (Wersau, DE)
- Christian Beckhaus (Darmstadt, DE)
CPC classification
G06F3/011
PHYSICS
A61B6/462
HUMAN NECESSITIES
A61B6/5247
HUMAN NECESSITIES
A61B6/463
HUMAN NECESSITIES
International classification
A61B6/00
HUMAN NECESSITIES
Abstract
The present invention relates to an apparatus (20) for real-time visualizing a movement of a lower jaw (12) versus an upper jaw (14) in a craniomaxillofacial area (16) of a patient (18) in dental diagnostics, comprising: an input interface (32) for receiving a two-dimensional image signal (26) of the craniomaxillofacial area of the patient from a camera (40) and a three-dimensional jaw model (28) of the craniomaxillofacial area of the patient being precalculated based on volume data; a registration unit (34) for registering the three-dimensional jaw model with the two-dimensional image signal in a first jaw position of the lower jaw versus the upper jaw and in a second jaw position of the lower jaw versus the upper jaw; an imaging unit (36) for generating a two-dimensional depth-view (30) of the craniomaxillofacial area from the three-dimensional jaw model based on the conducted registrations and the two-dimensional image signal, said two-dimensional depth-view including a structure underlying an image area of the two-dimensional image signal; and an output interface (38) for outputting the two-dimensional depth-view. The present invention further relates to a system (10) and method for real-time visualizing a movement of a lower jaw (12) versus an upper jaw (14) in a craniomaxillofacial area (16) of a patient (18) in dental diagnostics.
Claims
1. Apparatus (20) for real-time visualizing a movement of a lower jaw (12) versus an upper jaw (14) in a craniomaxillofacial area (16) of a patient (18) in dental diagnostics, comprising: an input interface (32) for receiving a two-dimensional image signal (26) of the craniomaxillofacial area of the patient from a camera (40) and a three-dimensional jaw model (28) of the craniomaxillofacial area of the patient being precalculated based on volume data; a registration unit (34) for registering the three-dimensional jaw model with the two-dimensional image signal in a first jaw position of the lower jaw versus the upper jaw and in a second jaw position of the lower jaw versus the upper jaw; an imaging unit (36) for generating a two-dimensional depth-view (30) of the craniomaxillofacial area from the three-dimensional jaw model based on the conducted registrations and the two-dimensional image signal, said two-dimensional depth-view including a structure underlying an image area of the two-dimensional image signal; and an output interface (38) for outputting the two-dimensional depth-view.
2. Apparatus (20) according to claim 1, wherein the registration unit (34) is configured to register the three-dimensional jaw model (28) in a first jaw position with closed bite and in a second jaw position with fully opened bite.
3. Apparatus (20) according to claim 1, wherein the registration unit (34) is configured to register the three-dimensional jaw model (28) with the two-dimensional image signal (26) based on an image recognition of a tooth of the patient (18) within the two-dimensional image signal (26); a user input; and/or a feature extraction of soft tissue contours within the two-dimensional image signal of the patient.
4. Apparatus (20) according to claim 1, wherein the imaging unit (36) is configured to generate the two-dimensional depth-view (30) corresponding to a view of the three-dimensional jaw model (28) from an angle of view of the two-dimensional image signal (26).
5. Apparatus (20) according to claim 1, wherein the input interface (32) is configured to receive the two-dimensional image signal (26) from the camera (40) of augmented reality glasses (22); and the output interface (38) is configured to output the two-dimensional depth-view (30) on a display (42) of the augmented reality glasses.
6. Apparatus (20) according to claim 1, wherein the output interface (38) is configured to semi-transparently output the two-dimensional depth-view (30); and/or output the two-dimensional depth-view (30) only in an area of the temporomandibular joints of the patient (18).
7. Apparatus (20) according to claim 1, with an angle unit (44) for determining an angle of view of the camera (40) in relation to the three-dimensional jaw model (28), wherein the registration unit (34) is configured to register the three-dimensional jaw model (28) with the two-dimensional image signal (26) based on the determined angle of view.
8. Apparatus (20) according to claim 7, wherein the input interface (32) is configured to receive orientation data with information on an orientation of the camera (40) in an external coordinate system; and the angle unit (44) is configured to determine the angle of view based on the orientation data.
9. Apparatus (20) according to claim 1, wherein the input interface (32) is configured to receive a user input with information on a desired view; and the imaging unit (36) is configured to generate the two-dimensional depth-view (30) based on the user input.
10. Apparatus (20) according to claim 1, wherein the input interface (32) is configured to receive the three-dimensional jaw model (28) via a wireless communication connection.
11. System (10) for visualizing a movement of a lower jaw (12) versus an upper jaw (14) in a craniomaxillofacial area (16) of a patient (18) in dental diagnostics, comprising: an apparatus (20) according to any one of the preceding claims; and augmented reality glasses (22) having a camera (40) for generating the two-dimensional image signal (26) and a display (42) for displaying the two-dimensional depth-view (30), wherein the display is preferably semi-transparent.
12. System (10) according to claim 11, wherein the augmented reality glasses (22) include an orientation sensor (46) for determining an angle of view of the camera (40).
13. System (10) according to claim 12, wherein the apparatus (20) is integrated into the augmented reality glasses (22).
14. Method for real-time visualizing a movement of a lower jaw (12) versus an upper jaw (14) in a craniomaxillofacial area (16) of a patient (18) in dental diagnostics, comprising the steps of: receiving (S10) a two-dimensional image signal (26) of the craniomaxillofacial area of the patient from a camera (40) and a three-dimensional jaw model (28) of the craniomaxillofacial area of the patient being precalculated based on volume data; registering (S12) the three-dimensional jaw model with the two-dimensional image signal in a first jaw position of the lower jaw versus the upper jaw and in a second jaw position of the lower jaw versus the upper jaw; generating (S14) a two-dimensional depth-view (30) of the craniomaxillofacial area from the three-dimensional jaw model based on the conducted registrations and the two-dimensional image signal, said two-dimensional depth-view including a structure underlying an image area of the two-dimensional image signal; and outputting (S16) the two-dimensional depth-view.
15. Computer program product with program code for executing the steps of the method according to claim 14 when the program code is executed on a computer.
Description
[0032] These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter and the accompanying drawings.
[0042] The present invention proposes to generate a two-dimensional depth-view 30 of the craniomaxillofacial area based on the real-time two-dimensional image signal 26 and the three-dimensional jaw model 28. This two-dimensional depth-view 30 represents a view of the three-dimensional jaw model 28 from an angle of view that is equivalent to the current angle of view of the two-dimensional image signal. The three-dimensional jaw model 28 is processed to obtain a view from an angle of view similar to the angle of view of the two-dimensional image signal. The current position of the patient's jaw as viewed in the two-dimensional image signal is reflected in the two-dimensional depth-view 30.
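Generating a view of the three-dimensional jaw model "from an angle of view equivalent to the current angle of view of the two-dimensional image signal" amounts to projecting the model through a camera model posed by the registration. The patent does not prescribe a projection method; the following is a minimal sketch using a simple pinhole camera in NumPy, where `project_points` and all parameter names are illustrative assumptions, not terminology from the disclosure.

```python
import numpy as np

def project_points(points_3d, rotation, translation, focal, center):
    """Project 3-D jaw-model points into the 2-D image plane of a pinhole camera.

    points_3d   : (N, 3) array of model coordinates
    rotation    : (3, 3) rotation matrix (camera pose from the registration)
    translation : (3,) translation vector
    focal       : focal length in pixels
    center      : (cx, cy) principal point in pixels
    """
    cam = points_3d @ rotation.T + translation      # model frame -> camera frame
    uv = cam[:, :2] / cam[:, 2:3]                   # perspective divide
    return uv * focal + np.asarray(center)          # pixel coordinates

# A point straight ahead of the camera projects onto the principal point.
pts = np.array([[0.0, 0.0, 10.0]])
uv = project_points(pts, np.eye(3), np.zeros(3), focal=800.0, center=(320.0, 240.0))
```

Any renderer that can pose a virtual camera (surface or volume rendering of the model) could replace this point projection; the essential input is the pose obtained from the registration step.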
[0043] This generated two-dimensional depth-view can then be overlaid over the observed real-time image. Thereby, an augmented reality view of the patient's mandibular movement can be realized. The two-dimensional depth-view 30 preferably is a sort of overlay for the two-dimensional image signal 26 to be displayed on a screen of the augmented reality glasses. A dentist or another caregiver can then see the patient through the augmented reality glasses and obtain an overlay image augmenting his observation with the structures underlying the currently viewed area. During the diagnosis it is possible that the dentist communicates with the patient and directs the movements of the patient's jaw. This makes it possible to accurately diagnose the individual jaw movement and determine an optimal position of an occlusal splint.
[0045] Via the input interface 32, the two-dimensional image signal of the craniomaxillofacial area of the patient is received from a camera. The two-dimensional image signal preferably corresponds to an output signal of a camera. Further, the three-dimensional jaw model is received, e.g. from a database including the precalculated model. It is possible that this three-dimensional jaw model is precalculated based on volume data of the patient obtained in a scan (preferably an X-ray tomography scan) carried out prior to the diagnosis. For this, the input interface may be configured for wireless communication with an external database storing the three-dimensional jaw model.
[0046] The registration unit 34 performs an image registration procedure. A first registration is performed when the lower jaw is in a first jaw position versus the upper jaw and another registration is performed when the lower jaw is in a second jaw position versus the upper jaw. Based on the two jaw positions, the features of the two-dimensional image signal are mapped in real time to the features in the three-dimensional jaw model. The two representations of the patient's craniomaxillofacial area are mapped to one another. The image registration thereby includes a mapping or matching of the coordinate systems of the two representations. According to the present invention, the registration unit 34 performs two separate image registration procedures, in which the two-dimensional image signal and the three-dimensional jaw model are transformed into one coordinate system.
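The disclosure leaves the registration algorithm open. One common way to realize the described mapping of two coordinate systems, once corresponding landmarks (e.g. tooth points) have been identified in both representations, is the Kabsch/SVD method for estimating a rigid transform. The sketch below is an illustrative assumption, not the claimed method; `rigid_register` is a hypothetical name.

```python
import numpy as np

def rigid_register(src, dst):
    """Estimate rotation r and translation t such that dst ≈ src @ r.T + t,
    from two sets of corresponding landmark points (Kabsch/SVD method)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against a reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(axis=0) - src.mean(axis=0) @ r.T
    return r, t

# Recover a known pose: landmarks rotated 90° about z and shifted along x.
theta = np.pi / 2
r_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
src = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
dst = src @ r_true.T + np.array([2.0, 0.0, 0.0])
r_est, t_est = rigid_register(src, dst)
```

Running this procedure once per registered jaw position yields the two transforms the imaging unit later works with.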
[0047] For the image registration, different algorithms can be used. For instance, it is possible to make use of intensity-based or feature-based registration algorithms. The basis for the registration may be an image recognition of a tooth of the patient. Ideally, a tooth can be visually observed and recognized in the two-dimensional image signal and is also represented in the three-dimensional jaw model. The characteristic form of a tooth can be identified in both the two-dimensional image signal and the three-dimensional jaw model. A tooth forms a suitable target for registering the three-dimensional jaw model with the two-dimensional image signal.
[0048] In the imaging unit 36, the two registrations are used to generate the two-dimensional depth-view of the craniomaxillofacial area. This two-dimensional depth-view corresponds to a real-time visualization of the features included in the three-dimensional jaw model for the area of the two-dimensional image signal and from an equivalent angle of view on the three-dimensional jaw model. In other words, a real-time representation of the underlying bones and teeth etc. for the currently viewed craniomaxillofacial area of the patient is calculated. For this, the imaging unit 36 may be configured to algorithmically determine the intermediate steps between the two registrations. In particular, it is possible to interpolate between the two registrations. In this process standard image processing and movement calculation approaches can be used.
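For the interpolation between the two registrations, one standard approach (an illustrative assumption; the disclosure only says interpolation is possible) is to interpolate the translation linearly and the rotation by spherical linear interpolation (slerp) of unit quaternions. The helper names below are hypothetical.

```python
import numpy as np

def slerp(q0, q1, s):
    """Spherical linear interpolation between two unit quaternions (w, x, y, z)."""
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:                    # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                 # nearly parallel: fall back to lerp
        q = q0 + s * (q1 - q0)
        return q / np.linalg.norm(q)
    omega = np.arccos(dot)
    return (np.sin((1 - s) * omega) * q0 + np.sin(s * omega) * q1) / np.sin(omega)

def interpolate_pose(t0, t1, q0, q1, s):
    """Pose between the two registered jaw positions (s = 0: first, s = 1: second)."""
    return (1 - s) * t0 + s * t1, slerp(q0, q1, s)

# Halfway between the identity and a 90° rotation about z is a 45° rotation.
q_closed = np.array([1.0, 0.0, 0.0, 0.0])
q_open = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
t_mid, q_mid = interpolate_pose(np.zeros(3), np.array([0.0, -10.0, 0.0]),
                                q_closed, q_open, 0.5)
```

In practice the parameter `s` would be estimated in real time from the observed jaw opening in the two-dimensional image signal.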
[0049] The output interface 38 is configured for outputting the two-dimensional depth-view. Preferably, the outputting is performed in real time. The output interface 38 may be connected to a head-up display of the augmented reality glasses. Thereby, it is possible that a semi-transparent output is performed to overlay the two-dimensional depth-view on the observed portion of the craniomaxillofacial area of the patient. It is possible that the two-dimensional depth-view is only output in an area of the temporomandibular joints of the patient to allow the dentist to obtain information on the dysfunction.
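The semi-transparent, area-restricted output described here can be sketched as alpha blending under a boolean mask. This is a minimal NumPy illustration of the idea, not the claimed output interface; `overlay_depth_view` and its parameters are assumptions.

```python
import numpy as np

def overlay_depth_view(frame, depth_view, mask, alpha=0.5):
    """Blend the rendered depth-view into the camera frame.

    frame      : (H, W, 3) float image from the glasses' camera
    depth_view : (H, W, 3) float rendering of the jaw model
    mask       : (H, W) bool array, True where the overlay is shown
                 (e.g. only around the temporomandibular joints)
    alpha      : overlay opacity (0 = invisible, 1 = opaque)
    """
    out = frame.copy()
    out[mask] = (1 - alpha) * frame[mask] + alpha * depth_view[mask]
    return out

frame = np.zeros((4, 4, 3))
depth = np.ones((4, 4, 3))
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                 # restrict the overlay to a small region
blended = overlay_depth_view(frame, depth, mask, alpha=0.5)
```

On optical see-through glasses the blending is done physically by the semi-transparent display, so only the masked depth-view pixels would actually be rendered.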
[0052] The camera 40 is used for generating the two-dimensional image signal 26, and the display 42 is used for displaying the two-dimensional depth-view. The display 42 is a head-up display that is at least semi-transparent for visible light in order to allow the person wearing the augmented reality glasses 22 to view the reality and, at the same time, view the additional information displayed on the display 42. The camera 40 is connected to the augmented reality glasses so that the angle of view of the glasses corresponds to the angle of view of the camera 40.
[0053] In the illustrated embodiment, the system 10 further includes an orientation sensor 46, which preferably includes an internal sensor for determining an orientation of the glasses in an external coordinate system.
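An orientation reading from such a sensor (e.g. yaw, pitch and roll from an IMU) can be converted into the camera's rotation in the external coordinate system with a standard Euler-angle composition. The convention below (intrinsic z-y-x) is an assumption for illustration; the actual sensor convention would have to match.

```python
import numpy as np

def rotation_from_euler(yaw, pitch, roll):
    """Camera orientation in an external frame from z-y-x Euler angles,
    the kind of reading an orientation sensor in the glasses might provide."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return rz @ ry @ rx

r = rotation_from_euler(np.pi / 2, 0.0, 0.0)   # head turned 90° to the left
```

The resulting rotation can seed or stabilize the angle-of-view determination of the angle unit 44, reducing the work left to image-based registration.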
[0055] In an embodiment, it is possible that the two-dimensional depth-view is generated based on user input received via the input interface. Thereby, the two-dimensional depth-view can be individualized based on the current needs of the dentist. For instance, a dentist may choose to only overlay a certain area and may specify this area via user input. The user input may, e.g., be obtained via a wireless signal from an interface device such as a tablet or the like.
[0057] The foregoing discussion discloses and describes merely exemplary embodiments of the present disclosure. As will be understood by those skilled in the art, the present disclosure may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Accordingly, the description is intended to be illustrative, and not limiting of the scope of the disclosure or of the claims. The disclosure, including any readily discernible variants of the teachings herein, defines, in part, the scope of the foregoing claim terminology such that no inventive subject matter is dedicated to the public.
[0058] In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single element or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
[0059] In so far as embodiments of the disclosure have been described as being implemented, at least in part, by software-controlled data processing apparatus, it will be appreciated that a non-transitory machine-readable medium carrying such software, such as an optical disk, a magnetic disk, semiconductor memory or the like, is also considered to represent an embodiment of the present disclosure. Further, such software may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
[0060] The elements of the disclosed devices, circuitry and system may be implemented by corresponding hardware and/or software elements, for instance appropriate circuits. A circuit is a structural assemblage of electronic components including conventional circuit elements, integrated circuits including application-specific integrated circuits, standard integrated circuits, application-specific standard products, and field-programmable gate arrays. Further, a circuit includes central processing units, graphics processing units, and microprocessors which are programmed or configured according to software code. A circuit does not include pure software, although a circuit does include the above-described hardware executing software.