Determining information about a patient's face
10398867 · 2019-09-03
Assignee
Inventors
- Elizabeth Powell Margaria (Pittsburgh, PA, US)
- Jonathan Sayer Grashow (Pittsburgh, PA, US)
- Rudolf Maria Jozef Voncken (Eindhoven, NL)
- Dmitry Nikolayevich Znamenskiy (Eindhoven, NL)
CPC classification
A61B5/0053
HUMAN NECESSITIES
A61B5/70
HUMAN NECESSITIES
G06T19/00
PHYSICS
A61B1/24
HUMAN NECESSITIES
A61B2034/105
HUMAN NECESSITIES
International classification
A61B5/107
HUMAN NECESSITIES
A61B1/24
HUMAN NECESSITIES
A61M13/00
HUMAN NECESSITIES
G06T19/00
PHYSICS
A61B5/00
HUMAN NECESSITIES
Abstract
An electronic apparatus includes a compilation unit structured to receive a plurality of different 3-D models of a patient's face, to compare the different 3-D models of the patient's face and to determine additional information about the patient's face based on the comparison, wherein the additional information includes at least one of a location of hard tissue, a depth of soft tissue, and a compliance of soft tissue, and wherein the patient's face is manipulated between the different 3-D models.
Claims
1. An electronic apparatus comprising: an air blower structured to generate an airflow that applies pressure to a patient's face; a face scanning unit structured to generate a plurality of 3-D models of the patient's face, wherein the plurality of 3-D models comprises at least a first 3-D model of the patient's face in an absence of the airflow, and a second 3-D model of the patient's face when the airflow is applying pressure to the patient's face; and a compilation unit structured to: receive the plurality of 3-D models, and determine additional information about the patient's face based on a comparison of the plurality of 3-D models, wherein the additional information includes at least one of a location of hard tissue, a depth of soft tissue, and a compliance of soft tissue and/or hard tissue.
2. The electronic apparatus of claim 1, further comprising: an output unit structured to provide the additional information to a user.
3. The electronic apparatus of claim 1, wherein the compilation unit is structured to correlate the plurality of 3-D models of the patient's face based on landmarks on the patient's face.
4. The electronic apparatus of claim 3, wherein the landmarks include at least one of a sellion, eye corners and a glabella.
5. The electronic apparatus of claim 1, wherein: in a third 3-D model of the patient's face, the patient's teeth are shown, and in a fourth 3-D model of the patient's face, the patient's teeth are not shown.
6. The electronic apparatus of claim 5, wherein a lip retractor is used to show the patient's teeth.
7. The electronic apparatus of claim 1, further comprising: an air cushion having a body in a shape of a cushion of a patient interface device and including a plurality of apertures formed therein, wherein the air cushion is structured to provide the airflow generated by the air blower to the patient's face through the plurality of apertures.
8. The electronic apparatus of claim 1, further comprising: a laser pointer structured to generate a laser dot on the patient's face in an area where the airflow is applying pressure to the patient's face.
9. The electronic apparatus of claim 1, wherein the air blower is structured to modulate the airflow.
10. The electronic apparatus of claim 1, wherein the patient's face is manipulated by the airflow applying pressure to one or more portions of the patient's face.
11. A method implemented on a computer system comprising a non-transitory computer readable medium having computer code stored thereon for determining additional information about a patient's face, the method comprising: generating a first 3-D model of a patient's face; generating an airflow that applies pressure to the patient's face; generating at least a second 3-D model of the patient's face when the airflow is applying pressure to the patient's face; and determining additional information about the patient's face based on a comparison of the first 3-D model and at least the second 3-D model, wherein the additional information includes at least one of a location of hard tissue, a depth of soft tissue and a compliance of soft tissue and/or hard tissue.
12. The method of claim 11, wherein the second 3-D model is generated for the airflow applying pressure to a first portion of the patient's face, the method further comprises: causing the airflow to apply pressure to a second portion of the patient's face; and generating a third 3-D model of the patient's face, wherein the additional information is determined based on the first 3-D model, the second 3-D model, and the third 3-D model being compared.
13. The method of claim 11, wherein determining the additional information based on the comparison comprises: correlating the first 3-D model and at least the second 3-D model based on landmarks on the patient's face.
14. The method of claim 11, wherein: in a third 3-D model of the patient's face, the patient's teeth are shown, and in a fourth 3-D model of the patient's face, the patient's teeth are not shown, such that the additional information is further determined based on the third 3-D model and the fourth 3-D model.
15. The method of claim 11, wherein the airflow is provided through an air cushion having a body in a shape of a cushion of a patient interface device and including a plurality of apertures formed therein.
16. The method of claim 11, further comprising: generating a laser dot on the patient's face in an area where the airflow is applying pressure to the patient's face; and modulating the airflow to cause oscillations of the laser dot on the patient's face, wherein the additional information is determined based on an amplitude and a frequency of the oscillations.
17. The method of claim 11, wherein the patient's face is manipulated by the airflow applying pressure to one or more portions of the patient's face.
18. The electronic apparatus of claim 8, wherein the air blower is further structured to: modulate the airflow to cause oscillations of the laser dot on the patient's face, wherein the additional information is determined based on an amplitude and a frequency of the oscillations.
19. The electronic apparatus of claim 18, wherein: the plurality of 3-D models further comprises at least a third 3-D model of the patient's face; the second 3-D model corresponds to the airflow applying pressure to a first location of the patient's face and the third 3-D model corresponds to the airflow applying pressure to a second location of the patient's face; and the air blower being structured to modulate the airflow comprises the air blower generating the airflow having a first amplitude when applying pressure to the first location and a second amplitude when applying pressure to the second location, wherein the additional information is determined based on the first 3-D model, the second 3-D model, and the third 3-D model.
20. The electronic apparatus of claim 1, further comprising an output unit structured to output data used to construct a patient interface device for the patient based on the additional information.
21. The method of claim 16, further comprising: modulating the airflow such that the airflow has a first amplitude when applying pressure to a first location of the patient's face, wherein the second 3-D model is generated for the airflow applying pressure to the first location of the patient's face; modulating the airflow such that the airflow has a second amplitude when applying pressure to a second location of the patient's face; generating a third 3-D model of the patient's face when the airflow is applying pressure to the second location of the patient's face, wherein the additional information is determined based on the first 3-D model, the second 3-D model, and the third 3-D model.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
(9) As used herein, the singular form of a, an, and the include plural references unless the context clearly dictates otherwise. As used herein, the statement that two or more parts or components are coupled shall mean that the parts are joined or operate together either directly or indirectly, i.e., through one or more intermediate parts or components, so long as a link occurs. As used herein, directly coupled means that two elements are directly in contact with each other. As used herein, fixedly coupled or fixed means that two components are coupled so as to move as one while maintaining a constant orientation relative to each other.
(10) Directional phrases used herein, such as, for example and without limitation, top, bottom, left, right, upper, lower, front, back, and derivatives thereof, relate to the orientation of the elements shown in the drawings and are not limiting upon the claims unless expressly recited therein.
(11) As employed herein, the terms processor, processing unit, and similar terms shall mean a programmable analog and/or digital device that can store, retrieve and process data; a controller; a control circuit; a computer; a workstation; a personal computer; a microprocessor; a microcontroller; a microcomputer; a central processing unit; a mainframe computer; a mini-computer; a server; a networked processor; or any suitable processing device or apparatus.
(12) As employed herein, the term additional information about a patient's face means information in addition to the external geometry of the patient's face and includes information such as, without limitation, information on the location of hard tissue, information on the depth of soft tissue and information on the compliance of soft tissue.
(13) A system 2 adapted to provide a regimen of respiratory therapy to a patient is generally shown in
(14) A BiPAP device is a bi-level device in which the pressure provided to the patient varies with the patient's respiratory cycle, so that a higher pressure is delivered during inspiration than during expiration. An auto-titration pressure support system is a system in which the pressure varies with the condition of the patient, such as whether the patient is snoring or experiencing an apnea or hypopnea. For present purposes, pressure/flow generating device 4 is also referred to as a gas flow generating device, because flow results when a pressure gradient is generated. The present invention contemplates that pressure/flow generating device 4 is any conventional system for delivering a flow of gas to an airway of a patient or for elevating a pressure of gas at an airway of the patient, including the pressure support systems summarized above and non-invasive ventilation systems.
(15) In the illustrated example system 2 of
(16) A schematic diagram of an electronic apparatus 20 for determining additional information about the patient's face is shown in
(17) Electronic apparatus 20 includes a face scanning unit 22, a compilation unit 24 and an output unit 26. Face scanning unit 22, compilation unit 24 and output unit 26 may share a housing and form a single device. However, it is also contemplated that face scanning unit 22, compilation unit 24 and output unit 26 may be located in different housings in different devices without departing from the scope of the disclosed concept.
(18) Face scanning unit 22 is structured to generate 3-D models of the patient's face by, for example, scanning the patient's face. Face scanning unit 22 may be, without limitation, a 3-D optical scanner, a camera, a push-pin array or any other device suitable for generating 3-D models of the patient's face. Face scanning unit 22 is structured to generate multiple 3-D models of the patient's face by, for example, scanning the patient's face at different times, and to output the different 3-D models of the patient's face to compilation unit 24.
(19) Compilation unit 24 is structured to receive multiple different 3-D models of the patient's face from face scanning unit 22. For example, in a first 3-D model of the patient's face, the patient's face is not manipulated during scanning, whereas in a second 3-D model of the patient's face, the patient's face is manipulated in some manner during scanning so that the first 3-D model of the patient's face and the second 3-D model of the patient's face are different.
(20) Referring to
(21) Referring back to
(22) In some exemplary embodiments of the disclosed concept, compilation unit 24 is structured to compare the different 3-D models of the patient's face by correlating the different 3-D models of the patient's face. Compilation unit 24 may detect anatomical landmarks on the patient's face to facilitate the correlation. Some anatomical landmarks such as, without limitation, the sellion, eye corners and glabella remain unchanged even when a patient changes expressions. Thus, these landmarks can be used to correlate different 3-D models of the patient's face where the patient changes expressions between 3-D models.
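The landmark-based correlation described above can be sketched as a least-squares rigid alignment. The snippet below is an illustrative sketch only, not part of the patent: it uses the standard Kabsch algorithm, and the landmark coordinates (standing in for the sellion, eye corners and glabella) are hypothetical.

```python
import numpy as np

def kabsch_align(src_landmarks, dst_landmarks):
    """Find rotation R and translation t that best map the source
    landmarks onto the destination landmarks (Kabsch algorithm)."""
    src = np.asarray(src_landmarks, dtype=float)
    dst = np.asarray(dst_landmarks, dtype=float)
    src_c = src - src.mean(axis=0)           # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical expression-invariant landmarks from a first 3-D model.
model_a = np.array([[0.0, 0.0, 0.0], [-3.0, -1.0, 1.0],
                    [3.0, -1.0, 1.0], [0.0, 1.5, 0.5]])
# A second model of the same face in a different pose: rotated and shifted.
theta = np.radians(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
model_b = model_a @ R_true.T + np.array([1.0, 2.0, 3.0])

R, t = kabsch_align(model_a, model_b)
aligned = model_a @ R.T + t                  # model_a brought into model_b's frame
assert np.allclose(aligned, model_b, atol=1e-8)
```

Once the models share a common frame, point-by-point differences between them reflect actual deformation of the face rather than a change in head pose.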
(23)
(24)
(25) Referring back to
(26) The additional information is useful in determining an optimally fitting patient interface device for the patient. Additional information such as, without limitation, the location of hard tissue, the depth of soft tissue and the compliance of soft tissue and/or hard tissue can affect how a patient interface device fits a patient. For example, an area of a patient's face where hard tissue is located and the soft tissue has little depth or compliance can be a concern for irritation if a patient interface device applies pressure to that area. The additional information can be used to select and/or design a patient interface device that does not apply pressure, or applies less pressure, to an area of the patient's face where hard tissue is located and the depth and compliance of soft tissue are low, thus resulting in a better fit of the patient interface device than if it were selected or designed based on the external geometry of the patient's face alone. Algorithms that determine the fit between a patient interface device and the patient can employ the additional information in order to more accurately optimize the fit of a patient interface device for a patient.
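As an illustration only (the patent does not specify any fit algorithm), a scoring heuristic along these lines might weight cushion contact pressure against tissue depth and compliance; every name, field and threshold below is a hypothetical stand-in, not a clinical value.

```python
def irritation_risk(contact_points):
    """Score each cushion contact point: higher pressure over shallower,
    stiffer soft tissue yields a higher irritation-risk score.
    Illustrative heuristic only; not from the patent."""
    risks = []
    for p in contact_points:
        # Risk grows with pressure and shrinks with tissue depth and
        # compliance; floors avoid division by zero for bony areas.
        risk = (p["pressure"]
                / max(p["tissue_depth_mm"], 0.1)
                / max(p["compliance"], 0.01))
        risks.append(risk)
    return risks

# Two hypothetical contact points: one over the nasal bridge (shallow,
# stiff tissue), one over the cheek (deep, compliant tissue).
points = [
    {"pressure": 2.0, "tissue_depth_mm": 1.0, "compliance": 0.05},
    {"pressure": 2.0, "tissue_depth_mm": 8.0, "compliance": 0.50},
]
risks = irritation_risk(points)
assert risks[0] > risks[1]   # same pressure, but the bony area scores worse
```

A selection or customization step could then prefer candidate devices whose highest-risk contact points fall below some chosen threshold.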
(27) It is contemplated that the additional information may be used to select, adjust or customize a patient interface device for the patient that optimally fits the patient. Furthermore, it is contemplated that the additional information may be used to create a custom patient interface device for the patient that optimally fits the patient.
(28) It is contemplated that the patient's face may be manipulated in any suitable manner to generate the different 3-D models of the patient's face. As shown in
(29) In some other example embodiments of the disclosed concept, the patient's face is manipulated by using airflow. In some example embodiments of the disclosed concept, the patient's face is manipulated by placing the patient in a wind tunnel and allowing the airflow of the wind to deform the patient's face. In some other example embodiments of the disclosed concept, an airflow is generated and blown only towards selected areas of the patient's face.
(30)
(31)
(32) Modified face scanning unit 50 is structured to generate different 3-D models of the patient's face. To this end, 3-D camera 52 takes 3-D images of the patient's face for use in generating 3-D models of the patient's face. Air blower 54 is structured to generate airflow in the direction of the patient's face. The airflow manipulates the patient's face by causing deformation of a portion of the patient's face. By taking a 3-D image of the patient's face when air blower 54 is not generating airflow to manipulate a portion of the patient's face and taking another 3-D image when air blower 54 is generating airflow to manipulate a portion of the patient's face, different 3-D models of the patient's face can be generated by modified face scanning unit 50.
(33) Air blower 54 may be any device suitable for generating airflow and blowing the air on a portion of the patient's face. In some exemplary embodiments of the disclosed concept, an air directing member 58, such as a conduit, may be attached to air blower 54 in order to facilitate directing the airflow to a selected portion of the patient's face. In some exemplary embodiments of the disclosed concept, air blower 54 is structured to modulate the airflow by, for example and without limitation, periodically increasing and decreasing the amplitude of the generated airflow. The modulated airflow causes variations in the deformation of the patient's face, which can assist in determining additional information about the patient's face when different 3-D models of the patient's face are compared.
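A comparison of the at-rest and under-airflow models might be sketched as a per-vertex displacement computation. The function below is illustrative only (the patent names no algorithm) and assumes the two meshes have already been aligned and share vertex correspondence.

```python
import numpy as np

def deformation_field(rest_vertices, loaded_vertices):
    """Per-vertex displacement magnitude between a scan at rest and a
    scan under airflow pressure. Assumes aligned meshes with matching
    vertex order; illustrative sketch, not from the patent."""
    rest = np.asarray(rest_vertices, dtype=float)
    loaded = np.asarray(loaded_vertices, dtype=float)
    return np.linalg.norm(loaded - rest, axis=1)

# Hypothetical three-vertex patch: under airflow, the first vertex sinks
# noticeably, the second slightly, and the third not at all.
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
loaded = rest + np.array([[0.0, 0.0, 0.5], [0.0, 0.0, 0.1], [0.0, 0.0, 0.0]])
disp = deformation_field(rest, loaded)
# Large displacement suggests deep, compliant soft tissue; near-zero
# displacement suggests hard tissue close to the skin surface.
```

The displacement map is one plausible intermediate from which tissue depth and compliance estimates could be derived when the airflow pressure is known.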
(34) Laser pointer 56 is structured to generate a laser dot on the patient's face in the area where air blower 54 is blowing air on the patient's face. The laser dot can be used to triangulate the distance from air blower 54 to the patient's face to more accurately calculate the position of air blower 54. Additionally, the modulated airflow generated by air blower 54 will cause lateral oscillations of the laser dot on the patient's face. The amplitude and frequency of these oscillations can be translated into additional information about the patient's face such as, without limitation, depth fluctuations of soft tissue in the area of the airflow which indicate properties of the soft tissue, such as depth and compliance.
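Translating the laser-dot oscillations into an amplitude and a frequency might, for example, use a Fourier transform of the recorded dot-displacement trace. The sketch below is illustrative only (the patent names no signal-processing method), and the synthetic 5 Hz trace stands in for real measurements.

```python
import numpy as np

def dot_oscillation(signal, sample_rate_hz):
    """Estimate the dominant oscillation frequency (Hz) and amplitude of a
    laser-dot displacement trace recorded under modulated airflow.
    Illustrative sketch, not from the patent."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                          # remove the DC offset
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate_hz)
    k = np.argmax(np.abs(spectrum[1:])) + 1   # dominant non-DC bin
    amplitude = 2.0 * np.abs(spectrum[k]) / len(x)
    return freqs[k], amplitude

# Synthetic trace: 0.8 mm oscillation at 5 Hz, sampled at 100 Hz for 2 s.
fs = 100.0
t = np.arange(0.0, 2.0, 1.0 / fs)
trace = 0.8 * np.sin(2.0 * np.pi * 5.0 * t)

freq, amp = dot_oscillation(trace, fs)
assert abs(freq - 5.0) < 0.5
assert abs(amp - 0.8) < 0.05
```

Under a known airflow modulation, a larger oscillation amplitude at the driving frequency would indicate more compliant soft tissue at that spot, and a smaller amplitude would indicate stiffer or shallower tissue.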
(35) Modified face scanning unit 50 may further include control buttons 60. A user of modified face scanning unit 50 may use control buttons 60 to operate or adjust settings of modified face scanning unit 50.
(36) Modified face scanning unit 50 may be used in conjunction with compilation unit 24 previously described and shown in
(37) In addition to the manners of manipulating the patient's face that have already been described, it is contemplated that the patient's face may be manipulated in any suitable manner without departing from the scope of the disclosed concept. For example and without limitation, the patient's face may also be manipulated by pressing on the patient's face, having the patient change expressions in any suitable manner (e.g., without limitation, having a patient blow up or suck in their cheeks) or having the patient change positions (e.g., without limitation, standing up and lying down) to have the changed effect of gravity manipulate the patient's face.
(38)
(39) In operation 70, a 3-D model of the patient's face is generated. The 3-D model may be generated using any suitable type of device such as, without limitation, the face scanning unit 22 of
(40) In operation 76, the different 3-D models of the patient's face are provided to a compilation unit such as compilation unit 24 shown in
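The overall method (baseline scan, airflow applied, second scan, comparison in a compilation unit) can be sketched as a short orchestration routine. All class and function names below are hypothetical stand-ins for the units described in the text, and the fake scanner and blower exist only to make the sketch self-contained.

```python
def determine_additional_info(scanner, blower, compare):
    """Hypothetical orchestration of the method's operations:
    a baseline scan (operation 70), a scan under airflow, and a
    comparison of the two models (operation 76)."""
    model_rest = scanner.scan()        # 3-D model with no airflow applied
    blower.start()                     # apply airflow pressure to the face
    model_loaded = scanner.scan()      # 3-D model under airflow pressure
    blower.stop()
    return compare(model_rest, model_loaded)

# Minimal stand-ins so the sketch runs; a real system would wrap the
# face scanning unit and the air blower described in the text.
class FakeScanner:
    def __init__(self, scans):
        self._scans = iter(scans)
    def scan(self):
        return next(self._scans)

class FakeBlower:
    def start(self): pass
    def stop(self): pass

info = determine_additional_info(
    FakeScanner([[0.0, 0.0], [0.2, 0.05]]),   # rest scan, then loaded scan
    FakeBlower(),
    lambda a, b: [y - x for x, y in zip(a, b)],  # per-point displacement
)
assert info == [0.2, 0.05]
```

The `compare` callable is the hook where a compilation unit's landmark correlation and displacement analysis would plug in.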
(41) The present disclosed concept can be embodied in an electronic apparatus, such as, for example and without limitation, a mobile device, a mobile computer, a tablet computer, a peripheral device etc. The present disclosed concept can also be embodied as computer readable codes on a tangible computer readable recording medium. The computer readable recording medium is any data storage device that can store data which can be thereafter read by a computer system. Examples of the computer readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices.
(42) It is contemplated that the additional information determined about a patient face in conjunction with any of the embodiments, combination of embodiments, or modification of embodiments of the disclosed concept described herein can be used by, for example and without limitation, a caregiver, technician, or patient in the process of selecting a patient interface device, adjusting a patient interface device, customizing a patient interface device or creating a patient interface device.
(43) In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word comprising or including does not exclude the presence of elements or steps other than those listed in a claim. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The word a or an preceding an element does not exclude the presence of a plurality of such elements. The mere fact that certain elements are recited in mutually different dependent claims does not indicate that these elements cannot be used in combination.
(44) Although the invention has been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred embodiments, it is to be understood that such detail is solely for that purpose and that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present invention contemplates that, to the extent possible, one or more features of any embodiment can be combined with one or more features of any other embodiment.