ADVANCED HEAD DISPLAY UNIT FOR FIRE FIGHTERS

20210375053 · 2021-12-02

    Abstract

    In this patent, an advanced head display unit designed primarily for fire fighters is disclosed. The advanced augmented reality/virtual reality (AR/VR) head display unit improves coordination between teammates through eye tracking coupled with augmented reality features. This allows one fire fighter to know where another fire fighter is looking and helps coordinate tasks by dividing the scene into sectors and visibly marking each sector. Further, this system helps determine where to aim the hose with a smart target system. Further, multiple sensors are utilized together to triangulate the location of a victim's voice. Additional advantages are also disclosed herein.

    Claims

    1. A method comprising: performing a scene understanding on a head display unit wherein the scene contains items; performing eye tracking of a first user wearing the head display unit wherein the head display unit has eye tracking capabilities and wherein the first user looks at at least one of the items; and analyzing the eye tracking data of the first user.

    2. The method of claim 1 further comprising dividing the scene into at least two portions wherein: a first portion of the scene is displayed with a first digital marking for the first user; and a second portion of the scene is displayed with a second digital marking for a second user.

    3. The method of claim 1 further comprising: determining a relative location of the first head display unit worn by the first user as compared to a second head display unit worn by a second user; determining the second head display unit's pointing direction; and displaying a first digital line on the first head display unit wherein the digital line originates in proximity to the second head display unit and extends in the pointing direction of the second head display unit.

    4. The method of claim 1 further comprising: determining a look angle direction of a second user wearing a second head display unit with eye tracking capabilities; displaying a digital line on the first head display unit wherein the digital line originates in proximity to the second head display unit and extends in the look angle of the second user.

    5. The method of claim 1 further comprising: determining a convergence point of a second user in the scene; displaying a digital line on the first head display unit wherein the digital line originates in proximity to the second head display unit and extends to the convergence point of the second user.

    6. The method of claim 1 further comprising: determining a pointing direction of an object held by a second user; and displaying a digital line on the first head display unit wherein the digital line originates in proximity to the second user and extends in the pointing direction of the object.

    7. The method of claim 1 further comprising: providing a first digital mark for a first item for the first user; and providing a second digital mark for the first item for a second user.

    8. The method of claim 1 further comprising: providing a first set of digital marks to cause a first smooth tracking eye pattern for a first user; and providing a second set of digital marks to cause a second smooth tracking eye pattern for the first user.

    9. The method of claim 1 further comprising recording a first user's fixation locations of items in the scene and displaying a set of digital objects at the fixation locations.

    10. The method of claim 9 further comprising displaying the set of digital objects to a second user.

    11. The method of claim 10 further comprising placing a digital object in proximity to a moving item within the scene, which enables the first user to perform smooth tracking of the moving item.

    12. The method of claim 10 further comprising placing an appearing-disappearing digital object in proximity to a moving item within the scene, which enables the first user to perform saccades of the moving item.

    13. The method of claim 1 further comprising wherein the head display unit comprises at least one forward looking infrared camera and wherein an artificial intelligence algorithm uses the data from the at least one forward looking infrared camera to determine the optimum aim point for a fire hose and wherein the optimum aim point is displayed on the head display unit.

    14. The method of claim 1 further comprising wherein the head display unit comprises a laser range finder and wherein the laser range finder generates a 3D image of the items and wherein a digital 3D image of the items is displayed on the head display unit to the first user.

    15. The method of claim 1 further comprising wherein the first user's head display unit contains a first acoustic sensor and a first position locator and a second user's head display unit contains a second acoustic sensor and a second position locator and wherein data from the first user's head display unit and data from the second user's head display unit are utilized to triangulate the location of a sound.

    16. The method of claim 1 further comprising wherein the first user's head display unit contains a hologram generator wherein a second user can view the hologram with the naked eye.

    17. The method of claim 1 further comprising wherein the first user's left hand is located on the first user's left thigh; wherein the first user's right hand is located on the first user's right thigh; wherein the first user's left hand, right hand, left thigh and right thigh are within a camera's field of view; wherein a movement of a finger of the first user's right hand causes a first digital keystroke; and wherein a movement of a finger of the first user's left hand causes a second digital keystroke.

    18. The method of claim 17 wherein the movement comprises at least one of the group consisting of: a tapping motion; a lifting motion; and a dragging motion.

    19. The method of claim 17 wherein which digital keystroke is caused is determined by at least one of the group consisting of: a position of a finger on a thigh; a speed of a finger movement; and an artificial intelligence algorithm.

    20. The method of claim 1 further comprising registering a digital volume-subtending 3D cursor to an item wherein the shape of the 3D cursor comprises one of the group consisting of: a three-dimensional geometric object; and a halo surrounding the item.

    Description

    BRIEF DESCRIPTION OF FIGURES

    [0032] FIG. 1 illustrates a fire fighter head display unit.

    [0033] FIG. 2 illustrates the head display unit showing digital lines.

    [0034] FIG. 3 illustrates the head display unit showing a simulated line of fire of a hose.

    [0035] FIG. 4 illustrates the head display unit showing coordination between sectors of spray of water or fire repellant.

    [0036] FIG. 5A illustrates a top down view of two fire fighters.

    [0037] FIG. 5B illustrates a cross sectional view of the first fire fighter in the y-z plane.

    [0038] FIG. 5C illustrates a cross sectional view of the second fire fighter in the x-z plane.

    [0039] FIG. 5D illustrates what the first fire fighter would see when looking at the second fire fighter.

    [0040] FIG. 5E illustrates a 3D picture-in-picture type display wherein a first user can see what a second user is looking at.

    [0041] FIG. 6A illustrates the head display unit showing use of a smooth tracking visual aid marker.

    [0042] FIG. 6B illustrates a flow chart to supplement FIG. 6A.

    [0043] FIG. 7A illustrates the head display unit showing use of a saccades visual aid marker.

    [0044] FIG. 7B illustrates a flow chart to supplement FIG. 7A.

    [0045] FIG. 8 illustrates use of a millimeter wave (MMW) sensor to help the fire fighter see through walls.

    [0046] FIG. 9 illustrates generation of multiple holographic flashes.

    [0047] FIG. 10 illustrates the placement of a digital 3D cursor into the physical world.

    [0048] FIG. 11A illustrates a user who is using their thighs as a surface to type.

    [0049] FIG. 11B illustrates what the user sees on their head display unit while typing.

    DETAILED DESCRIPTION OF FIGURES

    [0050] FIG. 1 illustrates a fire fighter head display unit. 100 illustrates the head display unit, which in this embodiment is in the form of a helmet with sensors mounted on the helmet. Multiple acoustic sensors 102 are shown, one for each quadrant. A multipurpose laser range finder/marker 104 is shown. A forward looking infrared (FLIR)/TV camera 106 is shown. A near-infrared (IR) unit 108 (preferred embodiment is 850 nm) is shown. A hologram generator 110 is shown. An eye tracking system 112 is shown. The eye tracking is important for a variety of aspects of this patent, for example, assessing alertness, enhancing viewing during the human eye's smooth tracking and saccades movements, and alerting a first user to where a second user is looking. A digital magnetic compass 114 is shown. A laser receiver 116 is shown. An extended reality unit 118 is shown.
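    The quadrant-mounted acoustic sensors 102 feed the voice-triangulation feature summarized in the abstract. As an illustrative sketch only (the planar geometry, known sensor positions, and grid-search solver below are assumptions for illustration, not the disclosed implementation), a sound source can be located from time differences of arrival across sensors:

```python
import math

C = 343.0  # speed of sound in air (m/s)

def simulate_arrivals(source, sensors, t_emit=0.0):
    """Arrival time at each sensor for a sound emitted at `source`."""
    return [t_emit + math.dist(source, s) / C for s in sensors]

def triangulate(sensors, arrivals, bounds, step=0.05):
    """Grid-search the point whose predicted time differences of
    arrival (relative to sensor 0) best match the measured ones."""
    meas = [t - arrivals[0] for t in arrivals]
    best, best_err = None, float("inf")
    x = bounds[0]
    while x <= bounds[1]:
        y = bounds[2]
        while y <= bounds[3]:
            d0 = math.dist((x, y), sensors[0])
            # Sum of squared TDOA residuals against sensors 1..n
            err = sum(
                ((math.dist((x, y), s) - d0) / C - m) ** 2
                for s, m in zip(sensors[1:], meas[1:]))
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best
```

The grid search is merely the simplest solver that demonstrates the idea; a fielded system would use sensors on multiple users' helmets and a faster least-squares solver.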

    [0051] FIG. 2 illustrates the head display unit showing digital lines. The head display unit 200 is shown, which contains a left eye display 202 and a right eye display 204. A fire fighter can see a left eye view of the terrain 206 and a right eye view of the terrain 208. The convergence point in the left eye display 210 is shown. The convergence point in the right eye display 212 is also shown. Please see U.S. Pat. No. 9,349,183, METHOD AND APPARATUS FOR THREE DIMENSIONAL VIEWING OF IMAGES, which is incorporated by reference in its entirety, for additional details regarding convergence. In the left eye display 202, there is a first digital line 214 illustrating the pointing direction of the user's head display unit. In the right eye display 204, there is a second digital line 216 illustrating the pointing direction of the user's head display unit. In the left eye display 202, there is a third digital line 218 illustrating the look angle of the left eye, which goes to convergence point 210. In the right eye display 204, there is a fourth digital line 220 illustrating the look angle of the right eye, which goes to convergence point 212. Note that this figure illustrates lines where the user is looking. However, digital objects other than lines (e.g., dots) can be utilized. Furthermore, as taught later in this patent, the digital lines that are displayed on a first user's head display unit could be those of a second user. Furthermore, a composite line displaying the general HDU pointing directions or look angles could also be utilized if one person were trying to determine in general where a group of users is looking.
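    The convergence points 210 and 212 can be computed by intersecting the two eyes' gaze rays. A minimal sketch, assuming each eye's position and gaze direction are available from the eye tracking system in a common coordinate frame (the closest-approach construction is a standard geometric method, not taken from this disclosure):

```python
import numpy as np

def convergence_point(p_left, d_left, p_right, d_right):
    """Midpoint of the shortest segment between the two gaze rays,
    taken as the binocular convergence point."""
    p1, d1 = np.asarray(p_left, float), np.asarray(d_left, float)
    p2, d2 = np.asarray(p_right, float), np.asarray(d_right, float)
    d1 /= np.linalg.norm(d1)
    d2 /= np.linalg.norm(d2)
    b = d1 @ d2
    w = p1 - p2
    denom = 1.0 - b * b
    if abs(denom) < 1e-9:  # parallel gaze: no finite convergence
        return None
    # Ray parameters minimising |(p1 + t1*d1) - (p2 + t2*d2)|
    t1 = (b * (d2 @ w) - (d1 @ w)) / denom
    t2 = ((d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```

When the two rays actually cross, the midpoint is the crossing itself; when measurement noise makes them skew, the midpoint is a reasonable estimate.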

    [0052] FIG. 3 illustrates the head display unit showing a simulated line of fire of a hose. The head display unit 300 is shown, which contains a left eye display 302 and a right eye display 304. Wearing the head display unit 300, the fire fighter can see a left eye view of the terrain 306 and a right eye view of the terrain 308. The aim point of the user's hose in the left eye display 310 is shown. The aim point of the user's hose in the right eye display 312 is also shown. 314 illustrates the trajectory of the water (or other fire repellant) stream through the air in the left eye field of view. 316 illustrates the trajectory of the water (or other fire repellant) stream through the air in the right eye field of view. 318 illustrates the spot where the water (or other fire repellant) hits the target in the left eye field of view. 320 illustrates the spot where the water (or other fire repellant) hits the target in the right eye field of view. Note that hash marks can be used along the trajectories to show distance markers. The system improves performance by adding a target, which can adjust for a variety of factors (e.g., distance, wind, nozzle type, etc.) to assist with aiming.
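    The displayed trajectory and impact spot can be approximated with simple drag-free ballistics. The sketch below assumes a known nozzle exit speed, elevation angle and nozzle height; real hose streams experience drag and break-up, so this is illustrative only:

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def stream_trajectory(v0, elev_deg, nozzle_h=1.5, dt=0.01):
    """Sample the water stream's arc (drag ignored) until it
    returns to ground level; hash marks can be drawn at the
    returned (horizontal distance, height) points."""
    theta = math.radians(elev_deg)
    vx, vz = v0 * math.cos(theta), v0 * math.sin(theta)
    pts, t = [], 0.0
    while True:
        x, z = vx * t, nozzle_h + vz * t - 0.5 * G * t * t
        if z < 0:
            break
        pts.append((x, z))
        t += dt
    return pts

def impact_range(v0, elev_deg, nozzle_h=1.5):
    """Horizontal distance at which the stream hits the ground."""
    theta = math.radians(elev_deg)
    vx, vz = v0 * math.cos(theta), v0 * math.sin(theta)
    # Solve nozzle_h + vz*t - G*t^2/2 = 0 for the positive root
    t = (vz + math.sqrt(vz * vz + 2 * G * nozzle_h)) / G
    return vx * t
```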

    [0053] FIG. 4 illustrates the head display unit showing coordination between sectors of spray of water or fire repellant. The head display unit 400 is shown, which contains a left eye display 402 and a right eye display 404. Through the lenses of the head display unit 400, the fire fighter can see a left eye view of the scene 406 and a right eye view of the scene 408. Simulated line pair 410 (in a "V" shape) is the left eye view designating a first teammate's sector of hosing. Simulated line pair 412 (in a "V" shape) is the left eye view designating the sector of hosing of the fire fighter who is viewing the HDU 400 shown in this image. Simulated line pair 414 (in a "V" shape) is the left eye view designating a second teammate's sector of hosing. Simulated line pair 416 (in a "V" shape) is the right eye view designating the first teammate's sector of hosing. Simulated line pair 418 (in a "V" shape) is the right eye view designating the sector of hosing of the fire fighter who is viewing the HDU 400 shown in this image. Simulated line pair 420 (in a "V" shape) is the right eye view designating the second teammate's sector of hosing.
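    One simple way to generate the "V"-shaped sector pairs is to split the team's overall azimuth coverage evenly among the teammates. A hypothetical sketch (equal-width sectors and degree units are assumptions for illustration):

```python
def assign_sectors(left_az, right_az, names):
    """Split the azimuth interval [left_az, right_az] (degrees)
    into one equal "V" sector per teammate; each returned pair of
    angles bounds the two legs of that teammate's "V"."""
    width = (right_az - left_az) / len(names)
    return {
        name: (left_az + i * width, left_az + (i + 1) * width)
        for i, name in enumerate(names)
    }
```

Each teammate's HDU would then render the pair of lines bounding their own sector, plus the neighbours' boundaries, as in FIG. 4.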

    [0054] FIG. 5A illustrates a top down view of two fire fighters. 500 illustrates a first fire fighter. 501 illustrates the pointing direction of the first fire fighter's head display unit. 502 illustrates a second fire fighter. 503 illustrates the pointing direction of the second fire fighter's head display unit. For illustrative purposes, the x-axis and y-axis are shown. Assume that the second fire fighter 502 and the first fire fighter 500 are 500 feet away from one another, which is too far for the first fire fighter 500 to see the detail of the second fire fighter 502, but the first fire fighter 500 wants to know where the second fire fighter is looking.

    [0055] FIG. 5B illustrates a cross sectional view of the first fire fighter in the y-z plane. 500 illustrates the first fire fighter. 501 illustrates the pointing direction of the first fire fighter's head display unit. For illustrative purposes, the y-axis and z-axis are shown.

    [0056] FIG. 5C illustrates a cross sectional view of the second fire fighter in the x-z plane. 502 illustrates the second fire fighter. 503 illustrates the pointing direction of the second fire fighter's head display unit. For illustrative purposes, the x-axis and z-axis are shown.

    [0057] FIG. 5D illustrates what the first fire fighter would see when looking at the second fire fighter. 504 illustrates the head display unit of the first fire fighter. 502A illustrates the second fire fighter in the left eye display. 502B illustrates the second fire fighter in the right eye display. 505A illustrates a digital line displayed on the right eye portion of the head display unit of the first fire fighter, which corresponds to the pointing direction of the second fire fighter, which is shown as 503 in FIGS. 5A and 5C. This simple scenario is meant to illustrate the inventive step. In practice, changes in height, forward position, side position and pointing direction are accounted for in real time. Note that this was shown for the pointing direction of the HDU; however, it could be shown for the look angle or the aim direction (e.g., of the fire hose).
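    Rendering the digital line 505A requires transforming the second user's position and pointing direction into the first user's display. A sketch assuming a pinhole camera model with made-up intrinsics (focal length and principal point) and a known world-to-camera rotation; none of these specifics come from the disclosure:

```python
import numpy as np

def project(point_world, cam_pos, cam_rot, f=800.0, cx=640.0, cy=360.0):
    """Pinhole projection of a world point into the first user's
    display (cam_rot is the world-to-camera rotation matrix)."""
    p = cam_rot @ (np.asarray(point_world, float) - cam_pos)
    if p[2] <= 0:
        return None  # behind the viewer
    return (cx + f * p[0] / p[2], cy - f * p[1] / p[2])

def pointing_line_2d(hdu2_pos, hdu2_dir, cam_pos, cam_rot, length=5.0):
    """Screen-space endpoints of the line that originates at the
    second HDU and extends along its pointing direction."""
    hdu2_pos = np.asarray(hdu2_pos, float)
    d = np.asarray(hdu2_dir, float)
    d = d / np.linalg.norm(d)
    a = project(hdu2_pos, cam_pos, cam_rot)
    b = project(hdu2_pos + length * d, cam_pos, cam_rot)
    return a, b
```

The same projection applies to a look-angle line or a hose aim line; only the direction vector changes.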

    [0058] FIG. 5E illustrates a 3D picture-in-picture type display wherein a first user can see what a second user is looking at. 506A illustrates what user 502 is viewing in the left eye display. 506B illustrates what user 502 is viewing in the right eye display. Thus, this would be a 3D picture in picture.

    [0059] FIG. 6A illustrates the head display unit showing use of a smooth tracking visual aid marker. The head display unit 600 is shown, which contains a left eye display 602 and a right eye display 604. Through the lenses of the head display unit 600, the fire fighter can see a left eye view of the terrain 606 and a right eye view of the terrain 608. A left eye view of the smooth tracking visual aid marker is shown at a first time point 610, a second time point 612, a third time point 614, a fourth time point 616, a fifth time point 618 and a sixth time point 620. A right eye view of the smooth tracking visual aid marker is shown at a first time point 622, a second time point 624, a third time point 626, a fourth time point 628, a fifth time point 630 and a sixth time point 632. The visual aid marker can take multiple shapes, sizes, colors and visual appearances, such as a round circle, arrow, etc. The data received would be uploaded into the continuous situational awareness system. Multiple fire fighters could see the same marker (from different viewpoints). This is useful because there may be something of interest in the field of view (e.g., a victim) that is slowly moving, but hard to see. The visual tracker helps identify such an item. FIG. 6B illustrates a flow chart to supplement FIG. 6A. In the first step 634, in some situations, the camera can detect small movements of an item of interest that the human eye does not detect or does not detect well. Next 636, the HDU display can place a smooth tracking dot in close proximity to a small moving object. Finally 638, the smooth tracking dot moves in a continuous fashion so as to help the human eye follow the subtle movement.
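    The smooth tracking dot of step 636 can be driven by exponentially smoothing the tracked object's detected positions, so the marker glides continuously rather than jumping with detection noise. A minimal sketch (the smoothing factor is an assumption, not taken from the disclosure):

```python
def smooth_marker_path(object_path, alpha=0.3):
    """Exponentially smoothed marker positions that trail a small
    moving object, giving the eye a continuous target to follow."""
    mx, my = object_path[0]
    out = [(mx, my)]
    for x, y in object_path[1:]:
        # Move the marker a fraction `alpha` toward the new detection
        mx += alpha * (x - mx)
        my += alpha * (y - my)
        out.append((mx, my))
    return out
```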

    [0060] FIG. 7A illustrates the head display unit showing use of a saccades visual aid marker. The head display unit 700 is shown, which contains a left eye display 702 and a right eye display 704. Through the lenses of the head display unit 700, the fire fighter can see a left eye view of the terrain 706 and a right eye view of the terrain 708. In this scenario, there are three items that need to be tracked by a single fire fighter. It is easy for a fire fighter to forget one of the targets. Therefore, this technique is useful. A left eye view of a first saccadian tracking visual aid marker 710 is shown during a first time interval (e.g., time interval between 0 seconds and 2 seconds), but subsequently disappears (e.g., immediately after the 0 seconds to 2 seconds time interval has passed). A left eye view of a second saccadian tracking visual aid marker 712 is shown during a second time interval (e.g., time interval between 2 seconds and 4 seconds), but subsequently disappears (e.g., immediately after the 2 seconds to 4 seconds time interval has passed). A left eye view of a third saccadian tracking visual aid marker 714 is shown during a third time interval (e.g., time interval between 4 seconds and 6 seconds), but subsequently disappears (e.g., immediately after the 4 seconds to 6 seconds time interval has passed). A right eye view of a first saccadian tracking visual aid marker 716 is shown during a first time interval (e.g., time interval between 0 seconds and 2 seconds), but subsequently disappears (e.g., immediately after the 0 seconds to 2 seconds time interval has passed). A right eye view of a second saccadian tracking visual aid marker 718 is shown during a second time interval (e.g., time interval between 2 seconds and 4 seconds), but subsequently disappears (e.g., immediately after the 2 seconds to 4 seconds time interval has passed). 
A right eye view of a third saccadian tracking visual aid marker 720 is shown during a third time interval (e.g., time interval between 4 seconds and 6 seconds), but subsequently disappears (e.g., immediately after the 4 seconds to 6 seconds time interval has passed). This process would then repeat so the fire fighter would repeat monitoring of the items. For example, a left eye view of a first saccadian tracking visual aid marker 710 is shown during a fourth time interval (e.g., time interval between 6 seconds and 8 seconds), but subsequently disappears (e.g., immediately after the 6 seconds to 8 seconds time interval has passed) and a right eye view of a first saccadian tracking visual aid marker 716 is shown during a fourth time interval (e.g., time interval between 6 seconds and 8 seconds), but subsequently disappears (e.g., immediately after the 6 seconds to 8 seconds time interval has passed). And so on. The saccadian tracking visual aid marker (e.g., circle shown in this figure) can take multiple shapes, sizes, colors and visual appearances, such as an arrow, etc. The data received would be uploaded into the continuous situational awareness system. Multiple fire fighters could see the same marker. FIG. 7B illustrates a flow chart to supplement FIG. 7A. In the processing block 722, some situations require that multiple objects need to be tracked by a single fire fighter. And, of course, the fire fighter is vulnerable to human error and improper sequencing and tracking. In the second step 724, the HDU displays a first saccadian visual aid marker at the first object for a first time interval, and a second saccadian visual aid marker at a second object during a second time interval. And so on. Finally, the saccadian visual aid marker sequence described in processing block 724 repeats for additional rounds. 
    Combinations of smooth tracking visual aid markers (e.g., shown continuously or shown for specific intervals) and saccadian visual aid markers would optimize a fire fighter's ability to monitor multiple targets.
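    The interval-based marker schedule of FIG. 7A reduces to selecting which target's marker is visible at a given time. A sketch using the two-second intervals from the example above (the interval length is configurable):

```python
def active_marker(t, n_targets, interval=2.0):
    """Index of the saccade marker visible at time t: target 0 for
    the first interval, target 1 for the next, cycling forever so
    the user repeatedly revisits every target."""
    return int(t // interval) % n_targets
```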

    [0061] FIG. 8 illustrates use of a millimeter wave (MMW) sensor to help the fire fighter see through walls. A fire fighter 800 is shown wearing a helmet 802 equipped with a MMW sensor 804. A wall 806 is shown. A victim 808 is shown hidden behind the wall. The MMW sensor 804 on the helmet 802 can detect the victim 808, who can be displayed on the head display unit (not shown). The preferred MMW sensor is available from Intelligent Automation Incorporated, Rockville, Md.

    [0062] FIG. 9 illustrates generation of multiple holographic flashes. A fire fighter 900 is shown wearing a helmet 902 equipped with a holographic generator 904. The holographic generator (e.g., femtosecond laser hologram) 904 emits pulses to simulate flashes 906. This can serve as a notification of the position of the fire fighter to other individuals (not wearing a HDU). In some embodiments, a police officer could use this apparatus to help disperse crowds. For example, one police officer could generate images of many police officers as a scare tactic. In some embodiments, this could be used for entertainment (e.g., generation of Disney characters).

    [0063] FIG. 10 illustrates the placement of a digital 3D cursor into the physical world. 1000 illustrates an object on a desk. 1001 illustrates an extended reality display wherein the extended reality glasses show the objects on the desk as well as the 3D cursor surrounding the object 1000. 1002 illustrates the 3D cursor in the left eye display. 1003 illustrates the 3D cursor in the right eye display. Thus, in this embodiment, a 3D cursor can be placed onto a physical object. For example, in the event of a prolonged rescue situation, certain items need to be accounted for at all times (e.g., an oxygen tank). This embodiment discloses a method of placing a 3D cursor around the physical items. The 3D cursor could take the form of a 3D geometric object or a halo surrounding the item. The placement of the 3D cursor could be via hand gestures, a tool or a voice command. The user could implement a command to show or hide all 3D cursors. This could be helpful for object tracking. Additional description of the 3D cursor is disclosed in METHOD AND APPARATUS FOR THREE DIMENSIONAL VIEWING OF IMAGES, U.S. Pat. No. 9,980,691, which is incorporated by reference in its entirety. Additionally, features include those described in U.S. patent application Ser. No. 16/785,606, IMPROVING IMAGE PROCESSING VIA A MODIFIED SEGMENTED STRUCTURE, which is incorporated by reference in its entirety. In another, non-fire-fighter example, this could represent the concept of a family who has a housekeeper. The family needs the kitchen to be cleaned, but there are many areas in the kitchen, and the housekeeper needs to know which areas to focus on. Therefore, the family could indicate high priority areas by placing a series of volume cursors over the areas (e.g., oven, toaster, dishwasher, etc.). Medium and low priority areas could be marked in a similar manner.
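    One way to fit a halo-style 3D cursor around a physical item is to compute a bounding sphere from the item's sensed 3D points (e.g., from the laser range finder's 3D image). A sketch, with the centroid-plus-maximum-distance construction and the padding margin as assumptions:

```python
import numpy as np

def halo_cursor(points, margin=0.1):
    """Centre and radius of a halo that surrounds an item's 3D
    points: the centroid, plus the largest point distance from it,
    padded by `margin` so the halo clears the item."""
    pts = np.asarray(points, float)
    centre = pts.mean(axis=0)
    radius = np.linalg.norm(pts - centre, axis=1).max() + margin
    return centre, radius
```

The returned sphere can be rendered as a ring or translucent shell in both eye displays, registered to the item's physical location.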

    [0064] FIG. 11A illustrates a user who is using their thighs as a surface to type. 1100 illustrates the hands of a user which are touching the user's thighs. Note that touching the thighs is preferred because the user would have tactile sensation from both the skin on the thighs and the fingertips. 1101 illustrates the head display unit.

    [0065] FIG. 11B illustrates what the user sees on their head display unit while typing. 1101 illustrates the head display unit. 1102 illustrates a digital keyboard on the head display unit. 1103 illustrates downward facing cameras, which show the user's hands while they are on the user's lap.
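    For the thigh-typing keystroke determination (claims 17-19), one listed factor is the position of a finger on a thigh. A hypothetical sketch mapping a camera-tracked tap position, normalized to the thigh's key area, onto a QWERTY grid (the layout and normalization are assumptions for illustration):

```python
def key_at(x, y, rows=("qwertyuiop", "asdfghjkl", "zxcvbnm")):
    """Map a tap position (x, y in [0, 1) within the thigh's key
    area) to a key: y selects the row, x the column within it."""
    row = rows[min(int(y * len(rows)), len(rows) - 1)]
    col = min(int(x * len(row)), len(row) - 1)
    return row[col]
```

In practice, the claims note that finger speed and an artificial intelligence algorithm could refine this position-only mapping.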