Headset computer that uses motion and voice commands to control information display and remote devices
11947387 · 2024-04-02
Inventors
- Jeffrey J. Jacobsen (Hollister, CA, US)
- Christopher Parkinson (Richland, WA, US)
- Stephen A. Pombo (Campbell, CA, US)
CPC classification
- G06F3/011 (PHYSICS)
- G05D1/224 (PHYSICS)
- G05D1/223 (PHYSICS)
- G06F3/167 (PHYSICS)
International classification
- G09G5/00 (PHYSICS)
- G05D1/00 (PHYSICS)
Abstract
A wireless hands-free portable headset computer with a micro display arranged near but below a wearer's eye in a peripheral vision area not blocking the wearer's main line of sight. The headset computer can display an image or portions of an image, wherein the portions can be enlarged. The headset computer also can be equipped with peripheral devices, such as light sources and cameras that can emit and detect, respectively, visible light and invisible radiation, such as infrared radiation and ultraviolet radiation. The peripheral devices are controllable by the wearer by voice command or by gesture. The headset computer also can be broken down into component parts that are attachable to another article worn by an individual, such as a helmet or respirator mask.
Claims
1. A headset computer system comprising: a first housing including a processor; a second housing including a boom with an integrated microdisplay and microphone; a third housing including a power supply, each of the housings separately packaged and individually attachable to a headgear; one or more signal and power connections between the first, second and third housings; and fasteners for separately attaching and detaching the first, second and third housing to and from the headgear; wherein the headgear has two pads disposed thereon, wherein the first housing is attachable to the headgear via a first pad of the two pads, and wherein the third housing is attachable to the headgear via a second pad of the two pads.
2. The headset computer system of claim 1, wherein the headgear is a helmet.
3. The headset computer system of claim 1, wherein the fasteners include hook and loop fasteners.
4. The headset computer system of claim 1, wherein the first housing further encloses noise cancellation circuits.
5. The headset computer system of claim 4, wherein the noise cancellation circuits are configured to reduce background noise of a rebreather.
6. The headset computer system of claim 1, wherein the boom is configured to support the microdisplay.
7. The headset computer system of claim 1, wherein the fasteners include a mechanical clip configured to attach the boom to the headgear.
8. The headset computer system of claim 1, wherein the first, second, and third housings are configured to be retrofittable to the headgear.
9. The headset computer system of claim 1, wherein the first housing further includes a camera, light source, or combination thereof.
10. A method comprising: retrofitting headgear to include headset computer functionality, the retrofitting including implementing the headset computer functionality via a first housing, second housing, and third housing, each of the housings separately packaged and individually attachable to the headgear, the implementing employing a processor, a boom with an integrated microdisplay and microphone, and a power supply, the first housing including the processor, the second housing including the boom with the integrated microdisplay and microphone, the third housing including the power supply, the implementing further employing one or more signal and power connections between the first, second and third housings, and the retrofitting further including employing fasteners for separately attaching and detaching the first, second and third housing to and from the headgear.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The foregoing will be apparent from the following more particular description of example embodiments of the disclosure, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating various embodiments.
DETAILED DESCRIPTION
(14) As will be explained in detail below, the headset computer 100 can be used in a number of different ways.
(15) Viewport into 3-D Virtual Space
(16) One function performed by the headset computer 100 is to provide a graphical viewport and/or window into a 3-D virtual space. The graphical viewport determines which information is presented on the microdisplay 120. In this mode, for example, a movement of the wearer's head can bring a different section of that 3-D virtual space into view on the microdisplay 120.
(18) It will be understood that the 3-D virtual space may include various elements such as computer desktops, application windows, photographs, 3-D object models, or any other type of digital image object. It should be further understood that these image objects can be positioned next to, behind, or overlaid on one another in the 3-D virtual space.
(19) The user can manipulate the various image objects by giving commands using the headset computer 100. In one example, the user can ask for a level of enlargement of a particular area of interest within one of the objects. The location and size of the window area may be selected by tracking head motions, voice commands, and/or hand gestures. For example, the user may specify a position and a magnification and/or zoom level to be applied to a particular application software window. The result is similar to using a magnifying glass seamlessly over a large area, except that the head tracker, gesture detector, and/or voice input detection select which area is shown on the microdisplay 120 and at what level of magnification. Thus, using this feature the user can move his head left, right, up, or down and then select a particular one of the image objects 300, 310, 320 through 340 to be active.
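As a rough illustration of how head tracking might drive such a viewport, the following sketch maps a head yaw/pitch pose to a crop window within a large virtual canvas. The canvas size, angle ranges, and scaling conventions here are illustrative assumptions, not values taken from the patent.

```python
# Illustrative sketch: mapping head pose to a viewport within a large
# virtual display. Field names, ranges, and scale factors are assumptions.

from dataclasses import dataclass

@dataclass
class Viewport:
    x: int       # left edge of the crop, in virtual-canvas pixels
    y: int       # top edge of the crop
    width: int
    height: int

CANVAS_W, CANVAS_H = 8192, 4608       # large virtual display area
DISPLAY_W, DISPLAY_H = 854, 480       # microdisplay resolution (WVGA)
YAW_RANGE, PITCH_RANGE = 90.0, 60.0   # head angles (degrees) mapped to canvas

def viewport_for_pose(yaw_deg: float, pitch_deg: float, zoom: float = 1.0) -> Viewport:
    """Center a crop on the point the wearer's head is aimed at.

    zoom > 1.0 shrinks the crop, which magnifies the content when the
    crop is scaled up to the microdisplay. Screen-down-positive pitch
    convention is assumed.
    """
    # Normalize head angles to [0, 1] across the configured range.
    u = min(max((yaw_deg + YAW_RANGE / 2) / YAW_RANGE, 0.0), 1.0)
    v = min(max((pitch_deg + PITCH_RANGE / 2) / PITCH_RANGE, 0.0), 1.0)

    crop_w = int(DISPLAY_W / zoom)
    crop_h = int(DISPLAY_H / zoom)

    # Clamp so the crop stays inside the virtual canvas.
    x = min(max(int(u * CANVAS_W - crop_w / 2), 0), CANVAS_W - crop_w)
    y = min(max(int(v * CANVAS_H - crop_h / 2), 0), CANVAS_H - crop_h)
    return Viewport(x, y, crop_w, crop_h)

# Looking slightly left and up, at 2x magnification:
print(viewport_for_pose(yaw_deg=-15.0, pitch_deg=10.0, zoom=2.0))
```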
(20) The user 200 can also issue commands to retain a piece of a large image that he wishes to magnify, freezing that portion on the screen and setting it aside, and then go back and look at another area of that image, or even request another level of magnification for that other area. In this way, the user can view the same portion of an image at different levels of magnification, and/or view different pieces of a larger image at different levels of magnification, and then switch between them by merely moving his head left or right, up or down.
(21) In yet another example, the wearer may issue voice commands to manipulate the position of the various image objects in the 3-D virtual space. For example, he may select an image object by moving his head, but then issue a voice command such as "move object up" or "move object A behind object B". This causes the head tracker to control the relative position of the selected image object(s) within the 3-D virtual space, rather than navigating the wearer's view among the objects in the 3-D space.
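A minimal sketch of how such spoken commands might be dispatched once recognized is shown below; the command grammar and the object model are assumptions made for illustration only.

```python
# Illustrative sketch: dispatching recognized voice commands that
# reposition image objects in the 3-D virtual space. The grammar and
# object positions are assumptions for illustration.

import re

# Each named object has an [x, y, z] position in the 3-D virtual space.
objects = {"A": [0.0, 0.0, 0.0], "B": [1.0, 0.0, 0.0]}

def handle_command(utterance: str) -> None:
    # "move object A up" -> nudge object A in the stated direction
    m = re.fullmatch(r"move object (\w+) (up|down|left|right)", utterance)
    if m:
        name, direction = m.groups()
        objects[name][0] += {"left": -0.1, "right": 0.1}.get(direction, 0.0)
        objects[name][1] += {"down": -0.1, "up": 0.1}.get(direction, 0.0)
        return
    # "move object A behind object B" -> push A deeper along z than B
    m = re.fullmatch(r"move object (\w+) behind object (\w+)", utterance)
    if m:
        name, ref = m.groups()
        objects[name][2] = objects[ref][2] + 1.0
        return
    print(f"unrecognized command: {utterance}")

handle_command("move object A up")
handle_command("move object A behind object B")
print(objects)
```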
(22) It will be understood that the wearer 200 thus has access to a virtual desktop in any form factor that can be represented in a 3-D virtual space, i.e., he may be working on a 360° surface that wraps around his head, or may be given the impression that he is working in a 3-D space with a long depth of field.
(23) In another example, the user 200 may turn his head to the lower left, causing the window 350 to become active. This window may be a 3-D model of an object such as an engine. The user may then proceed to manipulate this 3-D model using voice, head tracking, and/or hand gesture commands to manipulate the viewpoint in 3-D space. The wearer may also issue a command to manipulate the model itself, such as saying "rotate object 90° horizontal", causing the representation of the engine to rotate in 3-D space.
(24) The view of the displayed image on the microdisplay 120 does not require the user to be physically oriented as if he were looking in any particular direction. For example, the user may view an image as if it were projected on the wall of a room while sitting or standing, yet the wearer may himself be physically oriented differently, such as lying down.
(25) Hands-Free Synthetic Vision
(27) Using the headset computer 100, the wearer can thus experience hands-free synthetic vision: for example, a far-infrared view showing the heat signatures of individuals or objects on the other side of a wall or other obstruction.
(28) The headset computer 100 can also include a built-in laser range finder, allowing the wearer to aim the laser at a distant point and ask the headset computer to determine the distance to it.
(29) In a further example, a volume of space can be estimated by the wearer aiming the laser at three or more points and asking the headset computer to figure out the distances between them. These functions can be useful in applications such as surveying or material estimation. This can be accomplished without the wearer actually moving about and without measuring implements other than the laser range finder built into the headset computer 100.
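As a worked illustration of the geometry involved, the sketch below converts range readings taken along the wearer's line of sight into 3-D points, then computes the pairwise distances and the area they span. The spherical-to-Cartesian convention and the sample readings are assumptions, not values from the patent.

```python
# Illustrative sketch: turning laser range readings plus head orientation
# into 3-D points, then pairwise distances and the spanned area.

import math
from itertools import combinations

def to_cartesian(range_m: float, yaw_deg: float, pitch_deg: float):
    """Convert a range reading along the wearer's line of sight to (x, y, z)."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (
        range_m * math.cos(pitch) * math.cos(yaw),
        range_m * math.cos(pitch) * math.sin(yaw),
        range_m * math.sin(pitch),
    )

# Three points the wearer lased: (range in meters, head yaw, head pitch)
readings = [(4.2, -10.0, 0.0), (5.1, 25.0, 0.0), (4.8, 5.0, 30.0)]
points = [to_cartesian(*r) for r in readings]

for (i, p), (j, q) in combinations(enumerate(points), 2):
    print(f"distance p{i}-p{j}: {math.dist(p, q):.2f} m")

# Area of the triangle they span (half the cross-product magnitude):
ax, ay, az = (points[1][k] - points[0][k] for k in range(3))
bx, by, bz = (points[2][k] - points[0][k] for k in range(3))
cx, cy, cz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
print(f"triangle area: {0.5 * math.sqrt(cx**2 + cy**2 + cz**2):.2f} m^2")
```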
(30) Components Retrofittable to Helmet
(32) Camera(s), laser(s), and other peripherals can also be mounted to the helmet 500. Instead of requiring the wearer to wear a dedicated headset under the helmet, this packaging approach can implement headset computer functionality without the user having to become comfortable with new headgear. In addition, operation with certain types of headgear (such as a rebreather) is not affected. This particular end use may be improved if the on-board electronics also provide noise cancellation. For example, if the wearer is using a rebreather, the rebreather tends to make a lot of background noise that would otherwise interfere with voice inputs or sound recording. The on-board electronics may include noise cancellation circuits or programming that eliminate the background noise of the rebreather. A similar approach can be used to cancel out other background noises to allow for clearer recording of voices or other sounds.
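One generic way such removal of steady background noise is often done in software is spectral subtraction. The sketch below is a simplified, non-overlapping-frame version of that textbook technique; it is not the specific circuit or algorithm described in the patent, and the frame size and synthetic demo signal are assumptions.

```python
# Illustrative sketch: simple spectral subtraction to suppress steady
# background noise (e.g., a rebreather) from a microphone signal.

import numpy as np

def spectral_subtract(signal: np.ndarray, noise_sample: np.ndarray,
                      frame: int = 512) -> np.ndarray:
    """Subtract the average noise magnitude spectrum, frame by frame."""
    # Average noise spectrum over whole frames of the noise-only sample.
    chunks = noise_sample[: len(noise_sample) // frame * frame].reshape(-1, frame)
    noise_mag = np.abs(np.fft.rfft(chunks, axis=1)).mean(axis=0)

    out = np.zeros_like(signal)
    for start in range(0, len(signal) - frame + 1, frame):
        spec = np.fft.rfft(signal[start:start + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)   # floor at zero
        # Rebuild the frame with the original phase and reduced magnitude.
        out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame)
    return out

# Synthetic demo: a tone buried in broadband noise.
rng = np.random.default_rng(0)
t = np.arange(16000) / 16000.0
noise = 0.5 * rng.standard_normal(t.size)
clean = 0.3 * np.sin(2 * np.pi * 440 * t)
denoised = spectral_subtract(clean + noise, noise)
```

A production implementation would use overlapped, windowed frames and a noise-floor estimate updated over time; this version keeps only the core subtraction step.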
(33) Headset Computer Controls Remote Vehicle, Receives and Displays Images from the Remote Vehicle
(34) In yet another implementation, the voice, head motion, and/or hand gesture inputs received from the sensors located within the headset computer 100 can be used to derive a remote control command. That control command can then be sent over a wireless interface to control a remote vehicle, robot, or other object. In this end use, the input device may further include a wireless joystick and/or mouse to provide additional inputs to control the vehicle.
(35) In one example, a voice input to the headset computer can generate a control command to control the path of the vehicle. Voice commands such as "turn right", "turn left", "move forward", "move backward", "stop", and so forth can be included in the processing capabilities of the headset computer 100. Similarly, head tracking inputs can generate a control command to control the path of the vehicle or, more commonly, the direction of the camera on the vehicle. In this way, the user can have the experience of being physically located on the vehicle. This is accomplished by having the camera on the vehicle transmit video, preferably wirelessly, back to the headset computer. The video received from the remote vehicle can then be displayed on the display within the headset computer.
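A minimal sketch of this voice-to-vehicle command path might look as follows; the command vocabulary, message format, network address, and transport are all assumptions for illustration.

```python
# Illustrative sketch: translating recognized voice phrases into remote
# vehicle commands sent over a wireless link. Message format and
# transport are assumptions.

import json
import socket

VOICE_TO_COMMAND = {
    "turn right":    {"cmd": "yaw",      "value": 15},   # degrees
    "turn left":     {"cmd": "yaw",      "value": -15},
    "move forward":  {"cmd": "velocity", "value": 1.0},  # m/s
    "move backward": {"cmd": "velocity", "value": -1.0},
    "stop":          {"cmd": "velocity", "value": 0.0},
}

def send_voice_command(utterance: str,
                       vehicle_addr=("192.168.1.50", 9000)) -> None:
    command = VOICE_TO_COMMAND.get(utterance)
    if command is None:
        return  # not a vehicle command; ignore
    payload = json.dumps(command).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, vehicle_addr)  # fire-and-forget datagram

send_voice_command("turn right")
```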
(36) In yet another example, a wireless handheld controller 610 can be used to provide additional control inputs for the vehicle.
(37) Using this arrangement, a person can control a vehicle such as the unmanned aerial vehicle 620.
(38) In the absence of a separate user input device, the camera on the headset computer 100 may detect the user's hand gestures as control inputs. The wearer can also give the vehicle certain commands by speech. For example, if the wearer says "freeze", that can be detected by the headset computer, which then translates the spoken command into one or more commands to control the flight path of the unmanned aerial vehicle: to stop doing everything else and simply hover, or to follow a circular flight path around a current point of interest.
(39) In other examples, a voice command such as "return to base" can cause the vehicle to follow a complex programmed flight path. Another example can be "circle at a specific altitude", which can cause the vehicle to follow a geo-stable circle around its present location. This relieves the user of having to tediously and continuously provide commands via the handheld controller.
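A command like "circle at a specific altitude" could, for instance, be expanded into a geo-stable ring of waypoints around the vehicle's current position. The sketch below uses a small-offset flat-earth approximation; the radius, point count, and coordinates are assumptions.

```python
# Illustrative sketch: expanding a "circle at altitude" voice command into
# a geo-stable ring of waypoints around the vehicle's current position.

import math

EARTH_RADIUS_M = 6_371_000.0

def circle_waypoints(lat_deg: float, lon_deg: float, altitude_m: float,
                     radius_m: float = 50.0, points: int = 12):
    """Waypoints (lat, lon, alt) evenly spaced on a circle around a point."""
    waypoints = []
    for i in range(points):
        bearing = 2 * math.pi * i / points
        d_north = radius_m * math.cos(bearing)
        d_east = radius_m * math.sin(bearing)
        # Small-offset approximation: meters -> degrees of lat/lon.
        dlat = math.degrees(d_north / EARTH_RADIUS_M)
        dlon = math.degrees(d_east / (EARTH_RADIUS_M * math.cos(math.radians(lat_deg))))
        waypoints.append((lat_deg + dlat, lon_deg + dlon, altitude_m))
    return waypoints

for wp in circle_waypoints(37.39, -121.98, altitude_m=120.0)[:3]:
    print(wp)
```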
(40) Other voice commands and handheld-controller commands can be used to control other aspects of the vehicle's capabilities, performance, and/or path of travel.
(41) In one embodiment, the vehicle 620 may itself contain a camera that transmits its video output wirelessly back to the headset computer 100. Video carried back to the headset computer 100 is then displayed on the microdisplay 120. The wearer's head movements and/or gestures may then be used in a natural way to control the position, attitude, pan, zoom, magnification, light spectral sensitivities, or other capabilities of the camera on the remote vehicle. The user's head movements are tracked by the on-board electronics of the headset computer 100 and translated into commands that are sent back to aim the camera of the unmanned vehicle. As an example, if the wearer looks to the left, that motion is detected by the head tracker in the headset computer and translated into a "camera move left" command. That command is then sent wirelessly to the remote vehicle, causing the camera on the remote vehicle to pan to the left.
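A minimal sketch of translating tracked head-yaw changes into camera pan commands might look like this; the threshold value and command strings are assumptions.

```python
# Illustrative sketch: translating tracked head-yaw changes into pan
# commands for the remote vehicle's camera.

PAN_THRESHOLD_DEG = 5.0  # ignore small involuntary head motion

def head_motion_to_pan(prev_yaw_deg: float, curr_yaw_deg: float) -> str | None:
    """Return a pan command when the head has turned past the threshold."""
    delta = curr_yaw_deg - prev_yaw_deg
    if delta <= -PAN_THRESHOLD_DEG:
        return "camera pan left"
    if delta >= PAN_THRESHOLD_DEG:
        return "camera pan right"
    return None  # below threshold: no command sent

print(head_motion_to_pan(0.0, -12.0))  # wearer looks left -> "camera pan left"
```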
(42) Returning the video stream from the vehicle and displaying it on the microdisplay gives the wearer a visual experience as if he were, for example, a miniature pilot inside an unmanned aerial vehicle.
(43) In yet another function, the user can, for example, use speech commands to control other peripherals that the vehicle itself might contain, such as the cameras and other sensors carried by an unmanned aerial vehicle.
(44) System Description
(46) The headset computer device 100 can be used in various ways. It can be used as a completely contained, head-mounted, fully functional portable personal computer/smartphone with full connectivity to external computers and networks through short- and/or long-range wireless links such as Bluetooth, WiFi, cellular, LTE, WiMax, or other wireless radios.
(47) Device 100 can also be used as a remote display for a streaming video signal provided by a remote host computer. The host may be, for example, a laptop, cell phone, Blackberry, iPhone™, or other computing device having lesser or greater computational complexity than the device 100 itself. The host then provides information to the device 100 to be displayed. The device 100 and host are connected via one or more suitable wireless connections, such as provided by a Bluetooth, WiFi, cellular, LTE, WiMax, or other wireless radio link. The host may itself be further connected to other networks, such as through a wired or wireless connection to the Internet.
(51) The camera, motion tracking and audio inputs to the device 100 are interpreted as user commands in various ways to control operation of the local processor, the microdisplay, or the external host.
(52) Head movement tracking and/or vocal commands can also be provided by the user 1050 to manipulate the settings of camera 1060. For example, a user vocal command, such as "zoom" or "pan", can be recognized by the local processor and cause the camera 1060 to zoom in or out.
(54) Among the commands that can be carried out on the local processor and/or the remote host 200 is one to select a field of view 300 within the virtual display. Thus, it should be understood that a very large format virtual display area might be associated with operating system or application software running on the device 100 or on the host 200. However, only the portion of that large virtual display area that falls within the field of view is returned to and actually displayed by the remote control display device 120, as selected by voice, hand gesture, or head motion commands.
(56) The device 100 may also include an eye pod assembly 4000 that includes the aforementioned microdisplay 4010 (e.g., the microdisplay 1010 and boom 1008 described above).
(57) Device system 100 may also receive inputs from external input devices such as a wireless mouse, track ball, or keyboard that may be wirelessly connected through the Bluetooth interface 4108.
(58) Software in the WLAN/BT front end 4108, the OMAP 4100 and/or host 200 may be used to interpret hand gestures detected by the camera or other sensors. A camera board 4060 may optionally provide video input, as well.
(59) The OMAP processor 4100 may include a central processing unit and on-chip memory, such as Random Access Memory (RAM), non-volatile memory, and/or Read Only Memory (ROM). The OMAP may be a Texas Instruments model OMAP 3530 processor, or a newer version of such a multimedia processor sold by Texas Instruments, Inc. The OMAP 4100 may typically execute an embedded operating system such as a particular version of Microsoft Windows™. The OMAP 4100 is generally a more powerful, and more power-consuming, processor than the WLAN/BT interface 4108.
(60) In this example, a TPS 65950 power/audio companion chip, also available from Texas Instruments, provides audio, USB, keypad control and battery charging functions to the system.
(61) The WLAN/BT interface 4108 may be a model LBEE 1W8 NEC-interface circuit, a Bluetooth circuit such as is available from CSR, Ltd. of Cambridge, United Kingdom, or another radio module with similar or greater capabilities.
(62) The display driver may be a model KCD-A 910 display driver available from Kopin Corporation of Westborough, Massachusetts.
(63) The microdisplay 4010, also available from Kopin, can include models CyberDisplay 230K, WQVGA, VGA, WVGA, SVGA, or other manufacturers' acceptable microdisplays.
(64) An NCS module 4400 takes raw microphone signal data as input and outputs audio data with the background noise removed. It provides the cleaned audio signal to the audio companion chip 4102 and from there to the OMAP processor 4100. Voice recognition is performed in software on the OMAP processor 4100, using the cleaned-up microphone signals fed in by the NCS 4400.
(65) The teachings of all patents, published applications and references cited herein are incorporated by reference in their entirety.
(66) While this disclosure has described several example embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.