HEAD-MOUNTED DISPLAY FOR NAVIGATING VIRTUAL AND AUGMENTED REALITY

20220146841 · 2022-05-12

    Inventors

    Cpc classification

    International classification

    Abstract

    Locomotion-based motion sickness has long been a complaint amongst virtual reality gamers and drone pilots. Traditional head-mounted display experiences require a handheld controller (e.g. thumbstick, touchpad, gamepad or keyboard) for locomotion. Teleportation compromises immersive presence, while smooth navigation leads to sensory imbalances that can cause dizziness and nausea (even when using room-scale sensor systems). Designers have therefore had to choose between comfort and immersion. The invention is a hands-free, body-based navigation technology that puts the participant's body in direct control of movement through virtual space. Participants lean forward to advance in space; lean back to reverse; tip left or right to strafe/sidestep; and rotate to look around. In some embodiments, the more a participant leans, the faster they go. Because the interactions were designed to respond to natural bearing and balancing instincts, movement coordination is intuitive and vection-based cybersickness is reduced.

    Claims

    1. A system for minimizing a person's discomfort whilst navigating a collection of visually displayable digital objects, comprising: a head-mounted display (HMD) device configured to visually display one or more digital objects to a person wearing the HMD device, the HMD device being associated with one or more microprocessors and one or more sensors, wherein the HMD device, in combination with the microprocessors and sensors, is configured to cause at least one of the objects to: increase in size when the person leans towards the object; and decrease in size when the person leans away from the object.

    2. The system of claim 1, wherein the HMD device is configured to vary the object's displayed size in response to a lean angle of the person.

    3. The system of claim 2, wherein the HMD device is configured to cause the object to increase in display size at a faster rate as the person leans at a greater angle towards the object.

    4. The system of claim 3, wherein the object's increased display size causes the object to appear closer to the person.

    5. The system of claim 2, wherein the HMD device is configured to cause the object to decrease in display size at a faster rate as the person leans at a greater angle away from the object.

    6. The system of claim 5, wherein the object's decreased display size causes the object to appear further away from the person.

    7. The system of claim 2, wherein the HMD device is configured to cause the object to maintain its apparent size when the person leans towards the object past a specified threshold lean angle.

    8. The system of claim 2, wherein the HMD device is configured to cause the object to maintain its apparent size when the person leans away from the object past a specified threshold lean angle.

    9. The system of claim 2, wherein the one or more digital objects are displayed in a virtual environment.

    10. The system of claim 2, wherein the one or more digital objects are displayed as an overlay to a person's view of their physical environment.

    11. A method for minimizing a person's discomfort whilst navigating a collection of visually displayable digital objects, comprising: visually displaying, by a computer processor associated with one or more sensors, one or more digital objects to a person wearing a head-mounted display (HMD) device; and causing, by the computer processor, at least one of the objects to: increase in size when the person leans towards the object; and decrease in size when the person leans away from the object.

    12. The method of claim 11, further comprising varying, by the computer processor, the object's displayed size in response to a lean angle of the person.

    13. The method of claim 12, further comprising causing the object to increase in display size at a faster rate as the person leans at a greater angle towards the object.

    14. The method of claim 13, wherein the object's increased display size causes the object to appear closer to the person.

    15. The method of claim 12, further comprising causing the object to decrease in display size at a faster rate as the person leans at a greater angle away from the object.

    16. The method of claim 15, wherein the object's decreased display size causes the object to appear further away from the person.

    17. The method of claim 12, further comprising maintaining an apparent size of the object when the person leans towards the object past a specified threshold lean angle.

    18. The method of claim 12, further comprising maintaining an apparent size of the object when the person leans away from the object past a specified threshold lean angle.

    19. The method of claim 12, further comprising displaying the one or more digital objects in a virtual environment.

    20. The method of claim 12, further comprising displaying the one or more digital objects as an overlay to a person's view of their physical environment.

    21. One or more statutory computer readable storage media (CRM) comprising instructions that, when executed by a computer associated with a head-mounted display (HMD) device and one or more sensors, are capable of visually displaying a collection of digital objects to a person and causing at least one of the objects to: increase in size when the person leans towards the object; and decrease in size when the person leans away from the object.

    22. The CRM of claim 21, wherein the instructions are capable of varying the object's displayed size in response to a lean angle of the person.

    23. The CRM of claim 22, wherein the instructions are capable of causing the object to increase in display size at a faster rate as the person leans at a greater angle towards the object.

    24. The CRM of claim 23, wherein the object's increased display size causes the object to appear closer to the person.

    25. The CRM of claim 22, wherein the instructions are capable of causing the object to decrease in display size at a faster rate as the person leans at a greater angle away from the object.

    26. The CRM of claim 25, wherein the object's decreased display size causes the object to appear further away from the person.

    27. The CRM of claim 22, wherein the instructions are capable of causing the object to maintain its apparent size when the person leans towards the object past a specified threshold lean angle.

    28. The CRM of claim 22, wherein the instructions are capable of causing the object to maintain its apparent size when the person leans away from the object past a specified threshold lean angle.

    29. The CRM of claim 22, wherein the instructions are capable of causing the one or more digital objects to be displayed in a virtual environment.

    30. The CRM of claim 22, wherein the instructions are capable of causing the one or more digital objects to be displayed as an overlay to a person's view of their physical environment.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0112] FIG. 1 illustrates a representative prior-art handheld computing device.

    [0113] FIG. 2 illustrates a handheld computing device with three videos playing simultaneously.

    [0114] FIG. 3 illustrates a representative model of a virtual environment.

    [0115] FIGS. 4A, 14A and 15 illustrate a “pivot down” motion or “pivoted down” posture.

    [0116] FIGS. 4B and 16 illustrate a “pivoted up” posture or “pivot up” motion.

    [0117] FIG. 4C illustrates a “pivot left” motion or “pivoted left” posture.

    [0118] FIG. 4D illustrates a “pivot right” motion or “pivoted right” posture.

    [0119] FIGS. 4E, 14E and 17B illustrate a “tip right” motion or “tipped right” posture.

    [0120] FIGS. 4F and 17A illustrate a “tip left” motion or “tipped left” posture.

    [0121] FIGS. 4G and 18 illustrate an “aim left” motion or “aimed left” posture.

    [0122] FIGS. 4H, 14H and 18 illustrate an “aim right” motion or “aimed right” posture.

    [0123] FIG. 5A illustrates a “slide left” motion.

    [0124] FIG. 5B illustrates a “slide right” motion.

    [0125] FIG. 5C illustrates a “slide down” motion.

    [0126] FIG. 5D illustrates a “slide up” motion.

    [0127] FIG. 5E illustrates a “pull” motion.

    [0128] FIG. 5F illustrates a “push” motion.

    [0129] FIGS. 6A, 6B and 6C illustrate a pivot down interaction sequence and related visual display states. FIG. 6A represents a device starting posture while FIGS. 6B and 6C represent interaction sequence transition states.

    [0130] FIGS. 6D, 6E and 6F illustrate a pivot up interaction sequence and related visual display states. FIG. 6D represents a device starting posture while FIGS. 6E and 6F represent interaction sequence transition states.

    [0131] FIGS. 7A, 7B and 7C illustrate virtual camera location and orientation states corresponding to interaction sequence states illustrated in FIGS. 6A, 6B and 6C respectively.

    [0132] FIGS. 7D, 7E and 7F illustrate virtual camera location and orientation states corresponding to interaction sequence states illustrated in FIGS. 6D, 6E and 6F respectively.

    [0133] FIGS. 8A, 8B and 8C illustrate a pivot right interaction sequence and related visual display states. FIG. 8A represents a device starting posture while FIGS. 8B and 8C represent interaction sequence transition states.

    [0134] FIGS. 8D, 8E and 8F illustrate a pivot left interaction sequence and related visual display states. FIG. 8D represents a device starting posture while FIGS. 8E and 8F represent interaction sequence transition states.

    [0135] FIGS. 9A, 9B and 9C illustrate virtual camera location and orientation states corresponding to interaction sequence states illustrated in FIGS. 8A, 8B and 8C respectively.

    [0136] FIGS. 9D, 9E and 9F illustrate virtual camera location and orientation states corresponding to interaction sequence states illustrated in FIGS. 8D, 8E and 8F respectively.

    [0137] FIGS. 10A, 10B and 10C illustrate a tip right interaction sequence and related visual display states. FIG. 10A represents a device starting posture while FIGS. 10B and 10C represent interaction sequence transition states.

    [0138] FIGS. 10D, 10E and 10F illustrate a tip left interaction sequence and related visual display states. FIG. 10D represents a device starting posture while FIGS. 10E and 10F represent interaction sequence transition states.

    [0139] FIGS. 11A, 11B and 11C illustrate virtual camera location and orientation states corresponding to interaction sequence states illustrated in FIGS. 10A, 10B and 10C respectively.

    [0140] FIGS. 11D, 11E and 11F illustrate virtual camera location and orientation states corresponding to interaction sequence states illustrated in FIGS. 10D, 10E and 10F respectively.

    [0141] FIG. 12 illustrates a representative model of an architectural scale manifold vista VE.

    [0142] FIG. 13 is a block diagram of an exemplary hardware configuration model for a device implementing the participant experience described in reference to FIGS. 1-12 and 14-18.

    [0143] Like reference numerals in the various drawings indicate like elements. As described above, like reference numerals apply to all like elements in the drawings, including like elements that appear without reference numeral indicia in a given view.

    DETAILED DESCRIPTION

    [0144] The present invention comprises techniques for using, and configuring for use, a handheld computing device with simple human body “language” integrating ordinary locomotion, orientation and stabilization gestures to navigate and explore polylinear audio and video streams produced for display through multiple video panes and virtual speakers that are spatially distributed in a virtual environment of architectural scale and manifold vistas, for peripatetic discovery and perusal with proprioceptive feedback; all without the need for button, keyboard, joystick or touchscreen interaction.

    [0145] FIG. 1 illustrates a representative prior-art handheld computing device 100 with a visual display 101, auditory displays (i.e. speakers) 102 on the left and 103 on the right, headphones 104 with auditory displays 106 on the left and 107 on the right, a representative data transport connection 105 between device 100 and headphones 104, and a wireless data transport signal 108 to and/or from device 100. The inventors intend device 100 and its constituent components to be recognized as such throughout FIGS. 2-10F, whether or not identified by reference numeral indicia in each of said figures.

    [0146] If building an embodiment of the invention for an Apple iOS device, designer-directed engineers may construct aspects of the system using Apple's Xcode developer tools [available at the time of writing in the Apple App Store] and a computer programming language such as Objective-C. Xcode provides ready-to-use libraries of code that engineers may use in building embodiments of the invention, with application programming interfaces (each an “API”) to those code libraries. Apple provides a wealth of technical resources on developing apps for iOS devices on the iOS Dev Center [at the time of writing via http://developer.apple.com/]. Apple's online iOS Developer Library includes getting-started guides, sample code, technical notes, articles and training videos.

    Virtual Environment

    [0147] FIG. 2 illustrates device 100 with three videos playing simultaneously. Video pane 113 is representative of a broadcast soccer match. Video pane 116 is representative of a live videoconference stream. Video pane 119 is representative of a locally stored documentary film about bird songs.

    [0148] FIG. 3 illustrates a representative model of a virtual environment 110 containing video panes 113 and 116 to the north and video pane 119 to the east. Video pane 113 is associated with two spatially situated virtual speakers for channels of audio 114 (left) and 115 (right); video pane 116 with audio 117 (left) and 118 (right); video pane 119 with audio channels 120 (left) and 121 (right). Virtual camera 111 is located in the southwest region of the space with a lens orientation 112 facing towards the northeast. The inventors intend video panes 113, 116 and 119, and their respective virtual speakers 114, 115, 117, 118, 120 and 121 and virtual camera 111 and orientation 112 to be recognized as such throughout FIGS. 3-11F, whether or not identified by reference numeral indicia in each of said figures.

    [0149] Such a VE may be built using an OpenGL API such as OpenGL ES version 2.0. Techniques for 3D engineering are taught in books such as Beginning iPhone Games Development by Peter Bakhirev, P J Cabrera, Ian Marsh, Scott Penberthy, Ben Britten Smith and Eric Wing [Apress, 2010].

    [0150] Video may be built into a VE by texture mapping frames of video during playback onto an object surface in the VE. Code libraries by Dr. Gerard Allan for texturing of streamed movies using OpenGL on iOS are available from Predictions Software Ltd [at the time of writing via http://www.predictions-software.com/], including APIs to features engineered in support of instantiation of preferred embodiments of the present invention.

    [0151] A single movie file may be used as a texture atlas for multiple synchronized video panes in a VE. To accomplish this, a portion of each video frame (e.g. a top-left quadrant, a top-right quadrant, a bottom-left quadrant or a bottom-right quadrant) may be texture mapped to a separate video pane in the VE. By dividing each video frame into four regions, only one movie file need be streamed at a time to produce a polylinear video experience comprising four unique video panes.
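    The quadrant-to-pane mapping described above can be sketched as follows. This is an illustrative example, not code from any referenced library; the function name and the bottom-left OpenGL texture-coordinate convention are the author's assumptions.

```python
# Illustrative sketch: each video pane samples one quadrant of a single
# streamed movie frame used as a texture atlas. Texture coordinates follow
# the OpenGL convention with (0, 0) at the bottom-left of the frame.

def quadrant_uv(quadrant):
    """Return (u_min, v_min, u_max, v_max) for one atlas quadrant."""
    uvs = {
        "top-left":     (0.0, 0.5, 0.5, 1.0),
        "top-right":    (0.5, 0.5, 1.0, 1.0),
        "bottom-left":  (0.0, 0.0, 0.5, 0.5),
        "bottom-right": (0.5, 0.0, 1.0, 0.5),
    }
    return uvs[quadrant]

# One movie file streamed once thus feeds four synchronized panes, each
# pane texture-mapped with its own quadrant's coordinates.
```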

    [0152] This technique theoretically improves responsiveness to participant interaction by reducing processor load and increasing rendering speed.

    [0153] An OpenAL API may be used for building 3D spatialized audio into a VE. Techniques for buffering and spatializing streamed audio files are taught in books such as Learning Core Audio: A Hands-On Guide to Audio Programming for Mac and iOS by Chris Adamson and Kevin Avila [Addison-Wesley Professional, 2012]. The OpenAL 1.1 specification and programmer's guide are available from Creative Labs, Inc. [at the time of writing via http://connect.creativelabs.com/openal/].

    [0154] OpenAL enables a designer to specify the location of each virtual speaker in the VE, the direction the virtual speaker is facing in the VE, a roll-off factor (i.e. the attenuation range of a virtual speaker), a reference distance (i.e. the distance that a virtual speaker's volume would normally fall by half), a maximum distance (i.e. the distance at which the virtual speaker becomes completely inaudible) and other parameters. OpenAL on iOS currently supports production of up to 32 tracks of simultaneously playing sounds, all ultimately rendered down to a left-to-right stereo mix.
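    The interaction of the roll-off factor, reference distance and maximum distance can be illustrated with OpenAL's default inverse-distance-clamped attenuation model. The following is a minimal sketch of that gain formula; the function name is the author's.

```python
def speaker_gain(distance, reference_distance, rolloff_factor, max_distance):
    """Gain of a virtual speaker under OpenAL's AL_INVERSE_DISTANCE_CLAMPED
    model: the listener distance is clamped to [reference, max] and gain
    falls off inversely beyond the reference distance."""
    d = max(reference_distance, min(distance, max_distance))
    return reference_distance / (
        reference_distance + rolloff_factor * (d - reference_distance))

# With a roll-off factor of 1.0, gain is 1.0 at the reference distance and
# falls to 0.5 at twice the reference distance, matching the "volume would
# normally fall by half" description above.
```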

    Device Posture & Motion

    [0155] FIGS. 4A, 14A and 15 illustrate a “pivot down” motion or “pivoted down” posture, in this case a rotation of a device around its x-axis such that the top edge of the device moves further from an upright participant and/or the bottom edge of the device moves closer to the participant's body. If the device is head-mounted, then pivot down occurs when the participant drops their face down towards the ground. Certain preferred embodiments of the invention use x-axisometer data to determine the degree of pivot down of a device, comparing said data to sensor reference data that indicates both a pivot down origin and a pivot down neutral zone threshold. If the x-axisometer data indicates a pivot down greater than the pivot down origin but less than the pivot down neutral zone threshold, then the pivot down does not cause a change of location of the virtual camera in the VE. If the x-axisometer data indicates pivot down greater than the pivot down neutral zone threshold, then the pivot down causes a change of location of the virtual camera in the VE.

    [0156] In one preferred embodiment, the pivot down origin is preset at Δ25° pivoted down from vertical and the pivot down neutral zone threshold is preset at Δ10° from the origin. Pivot down origin preference, however, varies from participant to participant. Some prefer to look down at a device; others prefer to hold the device up high with arms outstretched. In an alternate preferred embodiment, the pivot down origin is established on the fly by identifying the average resting posture of the device for a given participant during app launch. Origins and neutral zone thresholds may also be set and/or calibrated, individually or en masse, by participants in a preferences panel. For clarity's sake, pivot down neutral zones are not limited to any particular device posture ranges and may be established, for example, near an origin at or near the bottom of a device's pivot down range. Furthermore, there is no limit to the number of neutral zones that may be employed in a single embodiment. FIG. 15 illustrates a pivot down neutral zone between a horizontal origin at 0° and a pivot down neutral zone threshold angle (a) at 5°. It also illustrates a pivot down neutral zone between the pivot down angles of a vertical origin at 90° and a neutral zone threshold (c) at 50°. The location of the virtual camera in the VE would not change as a result of a pivot down being detected between either pair of neutral zone boundaries (e.g. between 0° and 5° pivoted down; and between 50° and 90° pivoted down).
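    The neutral-zone test described above can be sketched as follows, using the two example zones of FIG. 15. The zone boundaries come from the figure description; the function names are the author's.

```python
# Neutral zones for pivot down, per the FIG. 15 example: between 0° and 5°
# pivoted down, and between 50° and 90° pivoted down.
PIVOT_DOWN_NEUTRAL_ZONES = [(0.0, 5.0), (50.0, 90.0)]

def in_neutral_zone(pivot_down_degrees, zones=PIVOT_DOWN_NEUTRAL_ZONES):
    """True if the detected pivot lies inside any neutral zone."""
    return any(lo <= pivot_down_degrees <= hi for lo, hi in zones)

def pivot_moves_camera(pivot_down_degrees):
    """The virtual camera changes location only outside every neutral zone."""
    return not in_neutral_zone(pivot_down_degrees)
```

As the paragraph notes, any number of zones may be listed, including zones anchored at the bottom of the device's pivot range.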

    [0157] X-axisometer data may be derived, for example, from an iOS UI Accelerometer object data feed. Apple's Accelerometer Filter sample code implements a low and high pass filter with optional adaptive filtering. This readily adoptable code smooths out raw accelerometer data, which can then be converted to an angular value. Apple, Google, Microsoft and other device platform manufacturers also provide sensor fusion algorithm APIs, which can be programmatically employed to smooth out the stream of sensor data. Apple's Core Motion API uses gyroscope data to smooth out accelerometer data, providing interpolation and fine grain corrections for x-axisometer data free from delayed response and drift.
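    A minimal sketch of the smoothing-and-conversion step described above: an exponential low-pass filter over raw accelerometer samples, followed by conversion of gravity components to an angular value. The filter constant and function names are illustrative assumptions, not Apple's actual sample code.

```python
import math

class LowPassFilter:
    """Exponential low-pass filter in the spirit of accelerometer smoothing;
    alpha near 0 smooths heavily, alpha near 1 tracks raw samples quickly."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha
        self.value = None

    def update(self, sample):
        if self.value is None:
            self.value = sample  # seed the filter with the first raw sample
        else:
            self.value += self.alpha * (sample - self.value)
        return self.value

def pitch_degrees(accel_y, accel_z):
    """Convert smoothed gravity components along the device's y- and z-axes
    to an x-axisometer angle in degrees (0° flat, 90° held vertical)."""
    return math.degrees(math.atan2(accel_y, accel_z))
```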

    [0158] In certain preferred embodiments, when the device is held perpendicular to the physical ground—the pivot down origin in certain preferred embodiments—then the virtual camera view is established parallel with the virtual ground in the VE, as if looking straight ahead. When the device is pivoted down from the pivot down origin, then the virtual camera is rotated around the virtual camera's x-axis to moderately drop the vertical center of the virtual camera closer to the virtual ground, as if the participant dropped their gaze slightly downward. The adjusted angle of the virtual camera need not have a 1:1 correlation with the posture of the device. In a preferred embodiment, the degree of vertical drop of the virtual camera view is dampened in comparison to the degree of pivot down by a factor of five. For every Δ1° of pivot down, the camera view is angled down by Δ0.2°. This provides sufficient feedback for the participant to maintain awareness of their movement of the device while softening that feedback enough to avoid distraction. In certain preferred embodiments, the degree of angle down of the virtual camera view is capped at a maximum drop of Δ10° down from straight ahead to stabilize the participant experience while the virtual camera is changing position.
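    The 5:1 dampening and Δ10° cap described in this paragraph can be sketched as follows (function and parameter names are the author's):

```python
def camera_drop_degrees(pivot_down_degrees, damping=5.0, cap=10.0):
    """Dampened virtual-camera drop: Δ1° of device pivot down yields Δ0.2°
    of camera drop, capped at Δ10° down from straight ahead."""
    return min(pivot_down_degrees / damping, cap)
```

The same mapping, with its sign inverted, serves the pivot up case described next.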

    [0159] FIGS. 4B and 16 illustrate a “pivoted up” posture or “pivot up” motion, in this case a rotation of a device around its x-axis such that the top edge of the device moves closer to an upright participant and/or the bottom edge of the device moves further from the participant's body. If the device is head-mounted, then pivot up occurs when the participant lifts their face up towards the sky. Preferred embodiments of the invention use x-axisometer data to determine the degree of pivot up of a device, comparing said data to sensor reference data that indicates both a pivot up origin and a pivot up neutral zone threshold. If the x-axisometer data indicates a pivot up greater than the pivot up origin but less than the pivot up neutral zone threshold, then the pivot up does not cause a change of location of the virtual camera in the VE. If the x-axisometer data indicates pivot up greater than the pivot up neutral zone threshold, then the pivot up causes a change of location of the virtual camera in the VE.

    [0160] In one preferred embodiment, the pivot up origin is preset at Δ25° pivoted up from vertical and the pivot up neutral zone threshold is preset at Δ10° from the origin. Pivot up origin preference, however, varies from participant to participant. Some prefer to look down at a device; others prefer to hold the device up high with arms outstretched. In an alternate preferred embodiment, the pivot up origin is established on the fly by identifying the average resting posture of the device for a given participant during app launch. Origins and neutral zone thresholds may also be set and/or calibrated, individually or en masse, by participants in a preferences panel. For clarity's sake, pivot up neutral zones are not limited to any particular device posture ranges and may be established, for example, near an origin at or near the top of a device's pivot up range. Furthermore, there is no limit to the number of neutral zones that may be employed in a single embodiment. FIG. 16 illustrates a pivot up neutral zone between a horizontal origin at 0° and a pivot up neutral zone threshold angle (d) at 5°. It also illustrates a pivot up neutral zone between the pivot up angles of a vertical origin at 90° and a neutral zone threshold (f) at 60°. The location of the virtual camera in the VE would not change as a result of a pivot up being detected between either pair of neutral zone boundaries (e.g. between 0° and 5° pivoted up; and between 60° and 90° pivoted up).

    [0161] In certain preferred embodiments, when the device is held perpendicular to the physical ground—the pivot up origin in certain preferred embodiments—then the virtual camera view is established parallel with the virtual ground in the VE, as if looking straight ahead. When the device is pivoted up from the pivot up origin, then the virtual camera is rotated around the virtual camera's x-axis to moderately raise the vertical center of the virtual camera away from the virtual ground, as if the participant raised their gaze slightly upward. The adjusted angle of the virtual camera need not have a 1:1 correlation with the posture of the device. In a preferred embodiment, the degree of vertical rise of the virtual camera view is dampened in comparison to the degree of pivot up by a factor of five. For every Δ1° of pivot up, the camera view is angled up by Δ0.2°. This provides sufficient feedback for the participant to maintain awareness of their movement of the device while softening that feedback enough to avoid distraction. In certain preferred embodiments, the degree of angle up of the virtual camera view is capped at a maximum rise of Δ10° up from straight ahead to stabilize the participant experience while the virtual camera is changing position.

    [0162] FIG. 4C illustrates a “pivot left” motion or “pivoted left” posture, in this case a rotation of a device counter-clockwise around its y-axis such that (a) the right edge of the device moves closer to the ground, (b) the left edge of the device moves further from the ground and/or (c) the device is aimed left. In certain embodiments of the invention, when the device is held in a vertical posture with the device's y-axis parallel to a v-axis, then pivot left interactions and aim left interactions may result in identical v-axisometer data. It would be understood by a person of ordinary skill in the art that coincidentally matching sensor-data results in such idiosyncratic circumstances have no bearing on the novelty of employing the presently disclosed techniques to obtain reliable results across all circumstances.

    [0163] When the device is in a tipped up posture between vertical and horizontal, pivot left motions are difficult to humanly distinguish from tip left motions. In other words, participants intending to tip left have a tendency to pivot left at the same time. For these reasons, certain embodiments of the invention specifically avoid mapping any interaction results to y-axisometer data. Mapping of y-axisometer data to unique interaction results, when performed in the context of the present invention, must be carefully considered so as not to degrade the simplicity, ease and comfort of the participant experience.

    [0164] Y-axisometer data independent of v-axisometer data may be derived, for example, from an iOS UI Accelerometer object data feed. Apple's Accelerometer Filter sample code implements a low and high pass filter with optional adaptive filtering. This readily adoptable code smooths out raw accelerometer data, which can then be converted to an angular value. Apple, Google, Microsoft and other device platform manufacturers also provide sensor fusion algorithm APIs, which can be programmatically employed to smooth out the stream of sensor data. Apple's Core Motion API uses gyroscope data to smooth out accelerometer data, providing interpolation and fine grain corrections for y-axisometer data free from delayed response and drift.

    [0165] FIG. 4D illustrates a “pivot right” motion or “pivoted right” posture, in this case a rotation of a device clockwise around its y-axis such that (a) the left edge of the device moves closer to the ground, (b) the right edge of the device moves further from the ground and/or (c) the device is aimed right. In certain embodiments of the invention, when the device is held in a vertical posture with the device's y-axis parallel to a v-axis, then pivot right interactions and aim right interactions may result in identical v-axisometer data. It would be understood by a person of ordinary skill in the art that coincidentally matching sensor-data results in such idiosyncratic circumstances have no bearing on the novelty of employing the presently disclosed techniques to obtain reliable results across all circumstances.

    [0166] When the device is in a tipped up posture between vertical and horizontal, pivot right motions are difficult to humanly distinguish from tip right motions. In other words, participants intending to tip right have a tendency to pivot right at the same time. For these reasons, certain embodiments of the invention specifically avoid mapping any interaction results to y-axisometer data. Mapping of y-axisometer data to unique interaction results, when performed in the context of the present invention, must be carefully considered so as not to degrade the simplicity, ease and comfort of the participant experience.

    [0167] FIGS. 4E, 14E and 17B illustrate a “tip right” motion or “tipped right” posture, in this case a rotation of a device like an automobile steering wheel in a clockwise direction around its z-axis. If the device is head-mounted, then tip right occurs when the participant drops their head to the right such that the right side of their face is lower than the left side of their face. Certain preferred embodiments of the invention use z-axisometer data to determine the degree of tip right of a device, comparing said data to sensor reference data that indicates both a tip right origin and a tip right neutral zone threshold. If the z-axisometer data indicates a tip right greater than the tip right origin but less than the tip right neutral zone threshold, then the tip right does not cause a change of location of the virtual camera in the VE. If the z-axisometer data indicates tip right greater than the tip right neutral zone threshold, then the tip right causes a change of location of the virtual camera in the VE.

    [0168] In one preferred embodiment, the tip right origin is preset at Δ0° (i.e. vertical) and the tip right neutral zone threshold is preset at Δ10° from the origin. When it comes to left/right tip centering, level (as detected by a device) isn't necessarily the same as a person's perceived sense of level. Some people hold one shoulder higher than another, some rest their head at a slight angle, others stand square. As a result, the world is framed differently for different people. Thus, calibrating tip origin to a participant's resting position can yield more comfortable interactions because it cuts down on inadvertent input. In a preferred embodiment, the tip right origin is established on the fly by identifying the average resting posture of the device for a given participant during app launch. Origins and neutral zone thresholds may also be set and/or calibrated, individually or en masse, by participants in a preferences panel. For clarity's sake, tip right neutral zones are not limited to any particular device posture ranges and may be established, for example, near an origin at or near the far right of a device's tip right range. Furthermore, there is no limit to the number of neutral zones that may be employed in a single embodiment. FIG. 17B illustrates a tip right neutral zone between a vertical origin at 0° and a tip right neutral zone threshold angle (i) at 5°. It also illustrates a tip right neutral zone between the tip right angles of a horizontal origin at 90° and a neutral zone threshold (j₁) at 80°. The location of the virtual camera in the VE would not change as a result of a tip right being detected between either pair of neutral zone boundaries (e.g. between 0° and 5° tipped right; and between 80° and 90° tipped right).
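    The on-the-fly origin calibration described above might be sketched as follows (a hypothetical illustration; the names and the choice of simple averaging are the author's):

```python
def calibrate_origin(resting_samples):
    """Estimate a participant's personal tip origin as the average resting
    tip angle sampled during app launch."""
    return sum(resting_samples) / len(resting_samples)

def tip_moves_camera(tip_degrees, origin_degrees, neutral_threshold=10.0):
    """A tip moves the virtual camera only once it leaves the neutral zone
    around the calibrated origin."""
    return abs(tip_degrees - origin_degrees) > neutral_threshold

# A participant who rests with a slight 4° head tilt gets an origin of 4°,
# so small tips around that natural posture cause no inadvertent movement.
```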

    [0169] Z-axisometer data may be derived, for example, from an iOS UIAccelerometer object data feed. Apple's Accelerometer Filter sample code implements a low and high pass filter with optional adaptive filtering. This readily adoptable code smooths out raw accelerometer data, which can then be converted to an angular value. Apple, Google, Microsoft and other device platform manufacturers provide optional sensor fusion algorithm APIs, which can be programmatically employed to smooth out the stream of sensor data. Apple's Core Motion API uses gyroscope data to smooth out accelerometer data, providing interpolation and fine grain corrections for z-axisometer data free from delayed response and drift.
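
    The filter-then-convert pipeline above can be illustrated with an exponential low-pass filter and an angle conversion. The smoothing constant and axis conventions are assumptions, not values from Apple's sample code:

```python
import math

def low_pass(prev, raw, alpha=0.1):
    """Exponential low-pass filter in the spirit of the sample code cited
    above: blend each raw accelerometer reading into a running value."""
    return prev + alpha * (raw - prev)

def tilt_angle_deg(ax, ay):
    """Convert filtered accelerometer x/y components (in g) into a z-axis
    tilt angle in degrees. A sketch; production code would also account
    for the z component and device orientation."""
    return math.degrees(math.atan2(ax, ay))
```

    For example, a device held with gravity split equally across the x and y axes reads as a 45° tip.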

    [0170] In certain preferred embodiments, when the device is held perpendicular to the physical ground—the tip right origin in certain preferred embodiments—then the virtual camera view is established parallel with the virtual ground in the VE, as if looking straight ahead without cocking one's head left or right. When the device is tipped right from the tip right origin, then the virtual camera is moderately rotated counter-clockwise around the virtual camera's z-axis to tip the left side of the virtual camera closer to the virtual ground, somewhat compensating for the difference between the virtual horizon and the real horizon as a result of tipping the device. The adjusted angle of the virtual camera need not have a 1:1 inverse correlation with the posture of the device. In a preferred embodiment, the degree of counter-clockwise rotation of the virtual camera view is dampened in comparison to the degree of tip right by a factor of five. For every Δ1° of tip right, the camera view is rotated counter-clockwise by Δ0.2°. This provides sufficient feedback for the participant to maintain awareness of their movement of the device while softening that feedback enough to avoid distraction. In certain preferred embodiments, the degree of counter-clockwise rotation of the virtual camera view is capped at a maximum rotation of Δ10° left from level to stabilize the participant experience while the virtual camera is changing position.
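
    The dampened, capped counter-rotation described in the paragraph above reduces to a division and a clamp. A minimal sketch using the stated factor-of-five dampening and Δ10° cap:

```python
def camera_roll_deg(tip_right_deg, damping=5.0, cap=10.0):
    """Counter-clockwise camera roll compensating for device tip right:
    Δ0.2° of roll per Δ1° of tip (dampening factor of five), capped at
    Δ10° from level, per the embodiment described above."""
    roll = tip_right_deg / damping
    return min(roll, cap)
```

    A Δ1° tip produces a Δ0.2° roll; tips past Δ50° all saturate at the Δ10° cap.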

    [0171] FIGS. 4F and 17A illustrate a “tip left” motion or “tipped left” posture, in this case a rotation of a device like an automobile steering wheel in a counter-clockwise direction around its z-axis. If the device is head-mounted, then tip left occurs when the participant drops their head to the left such that the left side of their face is lower than the right side of their face. Certain preferred embodiments of the invention use z-axisometer data to determine the degree of tip left of a device, comparing said data to sensor reference data that indicates both a tip left origin and a tip left neutral zone threshold. If the z-axisometer data indicates a tip left greater than the tip left origin but less than the tip left neutral zone threshold, then the tip left does not cause a change of location of the virtual camera in the VE. If the z-axisometer data indicates tip left greater than the tip left neutral zone threshold, then the tip left causes a change of location of the virtual camera in the VE.

    [0172] In one preferred embodiment, the tip left origin is preset at Δ0° (i.e. vertical) and the tip left neutral zone threshold is preset at Δ10° from the origin. When it comes to left/right tip centering, level (as detected by a device) isn't necessarily the same as a person's perceived sense of level. Some people hold one shoulder higher than another, some rest their head at a slight angle, others stand square. As a result, the world is framed differently for different people. Thus, calibrating tip origin to a participant's resting position can yield more comfortable interactions because it cuts down on inadvertent input. In a preferred embodiment, the tip left origin is established on the fly by identifying the average resting posture of the device for a given participant during app launch. Origins and neutral zone thresholds may also be set and/or calibrated, individually or en masse, by participants in a preferences panel. For clarity's sake, tip left neutral zones are not limited to any particular device posture ranges and may be established, for example, near an origin at or near the far left of a device's tip left range. Furthermore, there is no limit to the number of neutral zones that may be employed in a single embodiment. FIG. 17A illustrates a tip left neutral zone between a vertical origin at 0° and a tip left neutral zone threshold angle (g) at 5°. It also illustrates a tip left neutral zone between the tip left angles of a horizontal origin at 90° and a neutral zone threshold (h.sub.1) at 80°. The location of the virtual camera in the VE would not change as a result of a tip left being detected between either pair of neutral zone boundaries (e.g. between 0° and 5° tipped left; and between 80° and 90° tipped left).

    [0173] In certain preferred embodiments, when the device is held perpendicular to the physical ground—the tip left origin in certain preferred embodiments—then the virtual camera view is established parallel with the virtual ground in the VE, as if looking straight ahead without cocking one's head left or right. When the device is tipped left from the tip left origin, then the virtual camera is moderately rotated clockwise around the virtual camera's z-axis to tip the right side of the virtual camera closer to the virtual ground, somewhat compensating for the difference between the virtual horizon and the real horizon as a result of tipping the device. The adjusted angle of the virtual camera need not have a 1:1 inverse correlation with the posture of the device. In a preferred embodiment, the degree of clockwise rotation of the virtual camera view is dampened in comparison to the degree of tip left by a factor of five. For every Δ1° of tip left, the camera view is rotated clockwise by Δ0.2°. This provides sufficient feedback for the participant to maintain awareness of their movement of the device while softening that feedback enough to avoid distraction. In certain preferred embodiments, the degree of clockwise rotation of the virtual camera view is capped at a maximum rotation of Δ10° right from level to stabilize the participant experience while the virtual camera is changing position.

    [0174] FIGS. 4G and 18 illustrate an “aim left” motion or “aimed left” posture, in this case a rotation of a device in a counter-clockwise direction around a v-axis. If the participant and device are located in Austin, Tex., then a relevant v-axis may be expressed by a plumb line that extends perpendicular to the Earth's surface at their location in Austin 151 between the sky and the center of the planet 150. An upright participant can accomplish this manipulation by rotating their body and the device to the left whilst holding the device directly in front of them. If the device is head-mounted, then aim left occurs when the participant turns their head to the left. Certain preferred embodiments of the invention use v-axisometer data to determine the aim of a device. If the v-axisometer data indicates a device orientation to the left of the most recent aim, then the aim left causes the orientation of the virtual camera to be rotated counter-clockwise in the VE. In another embodiment, the v-axisometer data may be compared to sensor reference data that indicates an aim left origin and/or an aim left neutral zone threshold. In such an embodiment, if the v-axisometer data indicates an aim left greater than the aim left origin but less than the aim left neutral zone threshold, then the aim left does not cause a change of orientation of the virtual camera in the VE. For clarity's sake, aim left neutral zones are not limited to any particular device posture ranges and may be established, for example, near an origin at or near the far left of a device's aim left range. Furthermore, there is no limit to the number of neutral zones that may be employed in a single embodiment. It should also be understood that while neutral zones may be used to wholly dampen virtual movement in response to device movement, such dampening need not be as extreme as totally stopping movement.
For additional clarity's sake, aim left thresholds established for the purposes of applying a particular speed transformation equation (such as dampening or wholly stopping movement) can logically be used for amplifying (as taught above in the Inventor's Lexicon section) the virtual movement in response to device movement in that aim left range. FIG. 18 illustrates a practical clarifying framework for deploying such a numerical transform function, in this case making it easier for a participant to turn around in the VE while seated, for example, in a stationary non-swivel chair. From device aim left angle (k) to device aim left angle (m), the leftward change in aim angle of the virtual camera matches the leftward change in aim angle of the device; whereas from aim left angle (m) to aim left angle (n), the leftward change in aim angle of the virtual camera increases at a greater rate of change than the leftward change in aim angle of the device.
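
    The piecewise 1:1-then-amplified mapping of FIG. 18 can be sketched as a transfer function. The breakpoint and gain below are illustrative assumptions (the specification labels its angles (k), (m) and (n) without assigning values):

```python
def aim_left_gain(device_delta_deg, breakpoint_deg=60.0, amplify=2.0):
    """Map a leftward change in device aim to a leftward change in virtual
    camera aim: 1:1 up to the breakpoint (angle (m) in FIG. 18), then
    amplified beyond it, so a seated participant in a non-swivel chair can
    turn fully around. Breakpoint and gain are hypothetical."""
    if device_delta_deg <= breakpoint_deg:
        return device_delta_deg                      # 1:1 region, (k) to (m)
    extra = device_delta_deg - breakpoint_deg        # amplified region, (m) to (n)
    return breakpoint_deg + extra * amplify
```

    With these assumed values, turning the device 90° left turns the virtual camera 120° left.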

    [0175] A v-axisometer sensor reference data origin may be established based on the real-world compass, based on a starting position of an app, based on the resting posture of a device, or based on user preference. Using the compass as an origin enables all participants, wherever located on the planet to engage in a common audiovisual composition with components mapped to specific ordinal referents. In one preferred embodiment, content designed to be located on the east side of a VE requires all participants to aim east in the real world to access such content. Alternately, content designed to be associated with a specific location in the world could base the location of objects in the VE on the relative location of the participant in the real world to such reference location. For example, a video from Kyoto, Japan could appear on the east side of a VE for participants in North America, while on the west side of a VE for participants in China. In a videoconferencing embodiment, the v-axisometer origin may be established based on the posture of the device upon launching the videoconference app, or subsequently calibrated to match the relative configuration of conference attendees.

    [0176] V-axisometer data may be derived, for example, from an iOS Core Location object data feed. Magnetic heading may be used rather than true heading to avoid usage of a GPS sensor. At the time of writing, iOS magnetometer data is limited to Δ1° resolution accuracy. To resolve visible jerkiness in perspective rendering, a preferred embodiment averages the five most recent v-axisometer data results to provide relatively smooth animation transitions between each discrete orientation reading. Apple, Google, Microsoft and other device platform manufacturers also provide sensor fusion algorithm APIs, which can be programmatically employed to smooth out the stream of sensor data. Apple's Core Motion API uses gyroscope data to smooth out magnetometer data, providing interpolation and fine grain corrections for v-axisometer data free from delayed response, drift and magnetic interference.
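
    The five-sample averaging described above can be expressed as a small rolling buffer. A sketch for illustration; note that a naive arithmetic mean misbehaves across the 0°/360° compass wraparound, which real heading-smoothing code must handle:

```python
from collections import deque

class HeadingSmoother:
    """Average the five most recent v-axisometer (heading) readings to
    smooth the Δ1°-resolution magnetometer stream, as described above.
    Wraparound at 0°/360° is deliberately not handled in this sketch."""

    def __init__(self, window=5):
        self.readings = deque(maxlen=window)  # keeps only the newest N readings

    def update(self, heading_deg):
        self.readings.append(heading_deg)
        return sum(self.readings) / len(self.readings)
```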

    [0177] FIGS. 4H, 14H and 18 illustrate an “aim right” motion or “aimed right” posture, in this case a rotation of a device in a clockwise direction around a v-axis. If the participant and device are located in Austin, Tex., then a relevant v-axis may be expressed by a plumb line that extends perpendicular to the Earth's surface at their location in Austin 151 between the sky and the center of the planet 150. An upright participant can accomplish this manipulation by rotating their body and the device to the right whilst holding the device directly in front of them. If the device is head-mounted, then aim right occurs when the participant turns their head to the right. Certain preferred embodiments of the invention use v-axisometer data to determine the aim of a device. If the v-axisometer data indicates a device orientation to the right of the most recent aim, then the aim right causes the orientation of the virtual camera to be rotated clockwise in the VE. In another embodiment, the v-axisometer data may be compared to sensor reference data that indicates an aim right origin and/or an aim right neutral zone threshold. In such an embodiment, if the v-axisometer data indicates an aim right greater than the aim right origin but less than the aim right neutral zone threshold, then the aim right does not cause a change of orientation of the virtual camera in the VE. For clarity's sake, aim right neutral zones are not limited to any particular device posture ranges and may be established, for example, near an origin at or near the far right of a device's aim right range. Furthermore, there is no limit to the number of neutral zones that may be employed in a single embodiment. It should also be understood that while neutral zones may be used to wholly dampen virtual movement in response to device movement, such dampening need not be as extreme as totally stopping movement.
For additional clarity's sake, aim right thresholds established for the purposes of applying a particular speed transformation equation (such as dampening or wholly stopping movement) can logically be used for amplifying (as taught above in the Inventor's Lexicon section) the virtual movement in response to device movement in that aim right range. FIG. 18 illustrates a practical clarifying framework for deploying such a numerical transform function, in this case making it easier for a participant to turn around in the VE while seated, for example, in a stationary non-swivel chair. From device aim right angle (k) to device aim right angle (p), the rightward change in aim angle of the virtual camera matches the rightward change in aim angle of the device; whereas from aim right angle (p) to aim right angle (q), the rightward change in aim angle of the virtual camera increases at a greater rate of change than the rightward change in aim angle of the device.

    [0178] FIG. 5A illustrates a “slide left” motion, in this case moving the device to the left in a straight line along its x-axis. Slide left motions may be detected using an API such as Apple's Core Motion Manager. Slide left motions may be used to initiate video interactions ranging from basic media transport functions (such as pause, fast-forward, rewind, skip forward and skip back) to traversing links from a video to related content (whether or not such related content is video), traversing seamless expansions, engaging interactive advertisements or otherwise directing the flow of a video or the experience.

    [0179] FIG. 5B illustrates a “slide right” motion, in this case moving the device to the right in a straight line along its x-axis. Slide right motions may be detected using an API such as Apple's Core Motion Manager. Slide right motions may be used to initiate video interactions ranging from basic media transport functions (such as pause, fast-forward, rewind, skip forward and skip back) to traversing links from a video to related content (whether or not such related content is video), traversing seamless expansions, engaging interactive advertisements or otherwise directing the flow of a video or the experience.

    [0180] FIG. 5C illustrates a “slide down” motion, in this case moving the device down in a straight line along its y-axis. Slide down motions may be detected using an API such as Apple's Core Motion Manager. Slide down motions may be used to initiate video interactions ranging from basic media transport functions (such as pause, fast-forward, rewind, skip forward and skip back) to traversing links from a video to related content (whether or not such related content is video), traversing seamless expansions, engaging interactive advertisements or otherwise directing the flow of a video or the experience.

    [0181] FIG. 5D illustrates a “slide up” motion, in this case moving the device up in a straight line along its y-axis. Slide up motions may be detected using an API such as Apple's Core Motion Manager. Slide up motions may be used to initiate video interactions ranging from basic media transport functions (such as pause, fast-forward, rewind, skip forward and skip back) to traversing links from a video to related content (whether or not such related content is video), traversing seamless expansions, engaging interactive advertisements or otherwise directing the flow of a video or the experience.

    [0182] FIG. 5E illustrates a “pull” motion, in this case moving the device in a straight line along its z-axis in the direction of the front of the device (i.e. closer to the participant). Pull motions may be detected using an API such as Apple's Core Motion Manager. In certain preferred embodiments, locking the current location and/or orientation of a virtual camera in a VE (a “view lock”) may be accomplished with a pull motion so that a device may be subsequently moved or laid down without changing the current location or orientation of the virtual camera. In other preferred embodiments, a pull motion is used to disengage a view lock. In certain embodiments, one or more origins are determined by the posture of the device upon disengagement of the view lock; while in certain embodiments, one or more origins are unaffected by disengaging a view lock. A pull motion may be used to both engage and disengage a view lock. View lock may also be engaged and/or disengaged with a button press or touch screen tap.

    [0183] Movements may be performed in succession (e.g. pull then push) to effect results. In certain embodiments, a pull-based movement sequence is used to jump the virtual camera to an optimal viewing location (but not necessarily optimal orientation) in relation to content in view. In certain embodiments, such a gesture both jumps the virtual camera to this optimal viewing location and engages a view lock. The view lock may be used to establish a view lock neutral zone or to extend the range of a neutral zone already in place around one or more axes of device movement.
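
    The view-lock behavior described in the preceding paragraphs can be sketched as a simple toggle that gates camera updates. The class and method names are hypothetical, not from the specification:

```python
class ViewLock:
    """Sketch of view lock: a pull (or push) gesture toggles the lock, and
    while locked, device motion leaves the virtual camera's location and
    orientation unchanged, so the device can be moved or laid down."""

    def __init__(self):
        self.locked = False
        self.camera = {"location": (0.0, 0.0), "aim_deg": 0.0}

    def gesture(self):
        # The same gesture may both engage and disengage the lock.
        self.locked = not self.locked

    def move_camera(self, location, aim_deg):
        if not self.locked:  # device motion is ignored while locked
            self.camera = {"location": location, "aim_deg": aim_deg}
```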

    [0184] In other preferred embodiments, a pull motion may be used to initiate video interactions ranging from basic media transport functions (such as pause, fast-forward, rewind, skip forward and skip back) to traversing links from a video to related content (whether or not such related content is video), traversing seamless expansions, engaging interactive advertisements or otherwise directing the flow of a video or the experience.

    [0185] FIG. 5F illustrates a “push” motion, in this case moving the device in a straight line along its z-axis in the direction of the back of the device (i.e. further from the participant). Push motions may be detected using an API such as Apple's Core Motion Manager. In certain preferred embodiments, a push motion is used to engage a view lock so that a device may be subsequently moved or laid down without changing the current location or orientation of the virtual camera. In other preferred embodiments, a push motion is used to disengage a view lock. In certain embodiments, one or more origins are determined by the posture of the device upon disengagement of the view lock; while in certain embodiments, one or more origins are unaffected by disengaging a view lock. A push motion may be used to both engage and disengage a view lock. View lock may also be engaged and/or disengaged with a button press or screen tap.

    [0186] Movements may be performed in succession (e.g. push then pull) to effect results. In certain embodiments, a push-based movement sequence is used to jump the virtual camera to an optimal viewing location (but not necessarily optimal orientation) in relation to content in view. In certain embodiments, such a gesture both jumps the virtual camera to this optimal viewing location and engages a view lock. The view lock may be used to establish a view lock neutral zone or to extend the range of a neutral zone already in place around one or more axes of device movement.

    [0187] In other preferred embodiments, a push motion may be used to initiate video interactions ranging from basic media transport functions (such as pause, fast-forward, rewind, skip forward and skip back) to traversing links from a video to related content (whether or not such related content is video), traversing seamless expansions, engaging interactive advertisements or otherwise directing the flow of a video or the experience.

    Interaction Sequences

    [0188] FIGS. 6A, 6B and 6C illustrate a pivot down interaction sequence and related visual display states. FIG. 6A represents a device starting posture while FIGS. 6B and 6C represent interaction sequence transition states. FIGS. 7A, 7B and 7C illustrate virtual camera posture 111 and orientation 112 states in VE 110 corresponding to interaction sequence states illustrated in FIGS. 6A, 6B and 6C respectively.

    [0189] FIG. 6A represents a device starting posture and FIG. 7A illustrates the virtual camera starting in the southwest region of the VE facing north. Videos mapped to video panes 113, 116 and 119 are playing. Video pane 113 and a portion of video pane 116 are visible on visual display 101. Virtual speakers 114 and 115 are directly ahead, while virtual speakers 117, 118, 120 and 121 are to the right. Auditory devices 102 and/or 106 emphasize (e.g. display at a higher relative volume) sounds virtually emanating from virtual speaker 114 while auditory devices 103 and/or 107 emphasize sounds virtually emanating from virtual speakers 115, 117, 118, 120 and 121. In other words, sounds to the left of the center of focus of the virtual camera in the VE are produced for a participant as if they're coming from the left; and sounds to the right of the center of focus of the virtual camera in the VE are produced for a participant as if they're coming from the right. Sounds emanating from virtual speakers closer to the virtual camera, such as 114 and 115, are emphasized over sounds emanating from virtual speakers farther from the virtual camera, such as 118 and 120. Devices with a single audio display, capable of monophonic sound only, may be limited to the latter distance-based distinction; however this limitation can be remedied by attaching stereo headphones 104 to the device.
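
    The left/right and distance-based sound emphasis described above can be sketched as a pan and volume computation. The linear pan law and inverse-distance falloff are illustrative assumptions; the specification does not prescribe particular equations:

```python
import math

def speaker_mix(cam_pos, cam_aim_deg, spk_pos):
    """Pan a virtual speaker left/right by its angle off the virtual
    camera's center of focus, and attenuate its volume with distance, in
    the spirit of the display states described above. North (+y) is 0°
    and angles increase clockwise; these conventions are assumptions."""
    dx = spk_pos[0] - cam_pos[0]
    dy = spk_pos[1] - cam_pos[1]
    dist = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy))            # speaker bearing in the VE
    off_axis = (bearing - cam_aim_deg + 180) % 360 - 180  # wrap into -180..180
    pan = max(-1.0, min(1.0, off_axis / 90.0))            # -1 hard left, +1 hard right
    volume = 1.0 / (1.0 + dist)                           # closer speakers are louder
    return pan, volume
```

    A speaker directly to the camera's right pans hard right; a nearer speaker returns a higher volume than a farther one.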

    [0190] FIG. 6B illustrates a transitory pivot down interaction state and FIG. 7B illustrates that the virtual camera has moved north. Video mapped to video panes 113, 116 and 119 continue to play. Video pane 113 is displayed larger on visual display 101, while sounds from virtual speakers 114 and 115 are produced louder.

    [0191] FIG. 6C illustrates a second pivot down interaction state and FIG. 7C illustrates that the virtual camera has moved further north. Video mapped to video panes 113, 116 and 119 continue to play. Video pane 113 now fills the visual display 101, while sounds from virtual speakers 114 and 115 are produced even louder. In comparison with the state illustrated in FIG. 6A, the sounds from virtual speakers 114 and 115 are stereoscopically more distinct because the relative angle between the virtual camera orientation and each virtual speaker is pronounced.

    [0192] In a preferred embodiment, velocity of virtual camera forward movement is related to pivot down in the following manner. First, data about the device's x-axis rotation posture is compared against an origin to determine whether the device is pivoted down—by subtracting an origin from the raw x-axisometer data to determine relative pivot down (if any). Second, if the relative pivot down is greater than a maximum pivot down of Δ50° then the relative pivot down is set to 50°. Third, the relative pivot down is compared against a neutral zone threshold. If the relative pivot down is greater than the threshold, then the threshold is subtracted from the relative pivot down to determine active pivot down. The active pivot down value is multiplied by (−cos (((v-axisometer data)+90.0)/180.0*Pi)) to determine a basis of travel along one vector of the floor of the VE; and the active pivot down value is multiplied by (−sin (((v-axisometer data)+90.0)/180.0*Pi)) to determine a basis of travel along the other vector of the floor of the VE. These bases of travel are normalized for consistency across devices with varying processor speeds, divided by a dampening factor of 60 and then added to each of the current location point variables. If the newly calculated location is outside the bounds of the VE, then the new location is set inside the bounds of the VE.
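
    The sequence of equations above translates into code as follows. This sketch paraphrases the paragraph's steps (clamp, threshold, project onto the floor, dampen, bound); variable names are the author's and the frame-rate normalization step is omitted:

```python
import math

def step_forward(loc, pivot_raw_deg, origin_deg, neutral_deg,
                 heading_deg, bounds, damping=60.0):
    """One movement update following the pivot-down equations above:
    clamp relative pivot to Δ50°, subtract the neutral zone threshold,
    project the active pivot onto the VE floor using the v-axisometer
    heading, divide by the dampening factor of 60, and keep the new
    location inside the VE bounds."""
    relative = min(pivot_raw_deg - origin_deg, 50.0)   # cap at maximum pivot down
    if relative <= neutral_deg:
        return loc                                     # inside the neutral zone
    active = relative - neutral_deg
    rad = (heading_deg + 90.0) / 180.0 * math.pi
    dx = active * (-math.cos(rad)) / damping           # travel along one floor vector
    dy = active * (-math.sin(rad)) / damping           # travel along the other vector
    (xmin, xmax), (ymin, ymax) = bounds
    x = min(max(loc[0] + dx, xmin), xmax)              # clamp inside the VE
    y = min(max(loc[1] + dy, ymin), ymax)
    return (x, y)
```

    For example, with a vertical origin, a Δ10° neutral zone and a Δ20° pivot down while facing heading 0°, the camera advances by 10/60 of a unit per update.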

    [0193] Thus, in this embodiment, the virtual camera moves forward proportionally faster as the device is pivoted down farther from the origin. FIG. 15 clarifies that such an equation could be applied across a range of pivot down postures, such as those bounded by angle (a) and angle (b). In other preferred embodiments, the virtual camera moves forward at a fixed rate regardless of degree of pivot down. In other preferred embodiments, the forward movement of the virtual camera is speed limited, as exemplified by pivot down angle (b) of FIG. 15, which correlates to a maximum forward movement speed in the VE. A variety of equations may be used to translate pivot down data into virtual camera forward movement including but not limited to linear, exponential, geometric and other curved functions. FIG. 15 illustrates a practical clarifying framework for deploying curved and other such dynamic functions. In the example, speed of forward movement in the VE increases from device pivot down angle (a) to device pivot down angle (b) and then decreases from device pivot down angle (b) to device pivot down angle (c). Conversely, speed increases from pivot down angle (c) to pivot down angle (b) and then decreases from pivot down angle (b) to pivot down angle (a).
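
    As one concrete instance of the curved, speed-limited family of functions mentioned above, consider a quadratic ease-in between two pivot angles with a hard cap. The breakpoints and the quadratic shape are illustrative assumptions (FIG. 15's angles (a) and (b) are unvalued in the specification):

```python
def forward_speed(pivot_deg, a=10.0, b=50.0, max_speed=1.0):
    """Hypothetical curved transfer function: no movement below angle a,
    speed ramping up quadratically between a and b, and a maximum speed
    cap at and beyond b, echoing the speed-limited variant above."""
    if pivot_deg <= a:
        return 0.0
    t = min((pivot_deg - a) / (b - a), 1.0)  # normalize pivot into 0..1
    return max_speed * t * t                 # quadratic ease-in, capped
```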

    [0194] FIGS. 6D, 6E and 6F illustrate a pivot up interaction sequence and related visual display states. FIG. 6D represents a device starting posture while FIGS. 6E and 6F represent interaction sequence transition states. FIGS. 7D, 7E and 7F illustrate virtual camera location 111 and orientation 112 states corresponding to interaction sequence states illustrated in FIGS. 6D, 6E and 6F respectively.

    [0195] FIG. 6D represents a device starting posture and FIG. 7D illustrates the virtual camera starting in the northwest region of the VE facing north. Videos mapped to video panes 113, 116 and 119 are playing. Video pane 113 fills the visual display 101. Sounds emanating from virtual speaker 114 are produced for display primarily by auditory devices 102 and/or 106 (as if coming from the left); while sounds emanating from virtual speakers 115, 117, 118, 120 and 121 are produced for display primarily by auditory devices 103 and/or 107 (as if coming from the right). Sounds from virtual speakers 114 and 115 are produced relatively louder than the other virtual speakers farther from the virtual camera. Devices with a single audio display, capable of monophonic sound only, may be limited to the latter distance-based distinction; however this limitation can be remedied by attaching stereo headphones 104 to the device.

    [0196] FIG. 6E illustrates a transitory pivot up interaction state and FIG. 7E illustrates that the virtual camera has moved south. Video mapped to video panes 113, 116 and 119 continue to play. Video pane 113 is displayed smaller on visual display 101, while sounds from virtual speakers 114 and 115 are produced quieter.

    [0197] FIG. 6F illustrates a second pivot up interaction state and FIG. 7F illustrates that the virtual camera has moved further south. Video mapped to video panes 113, 116 and 119 continue to play. Video pane 113 and a portion of video pane 116 are now visible on visual display 101, while sounds from virtual speakers 114 and 115 are produced even quieter. In comparison with the state illustrated in FIG. 6D, the sounds from virtual speakers 114 and 115 are stereoscopically less distinct because the relative angle between the virtual camera orientation and each virtual speaker is reduced.

    [0198] In a preferred embodiment, velocity of virtual camera backward movement is related to pivot up in the following manner. First, data about the device's x-axis rotation posture is compared against an origin to determine whether the device is pivoted up—by subtracting an origin from the raw x-axisometer data to determine relative pivot up (if any). Second, if the relative pivot up is greater than a maximum pivot up of Δ50° then the relative pivot up is set to 50°. Third, the relative pivot up is compared against a neutral zone threshold. If the relative pivot up is greater than the threshold, then the threshold is subtracted from the relative pivot up to determine active pivot up. The active pivot up value is multiplied by (−cos (((v-axisometer data)+90.0)/180.0*Pi)) to determine a basis of travel along one vector of the floor of the VE; and the active pivot up value is multiplied by (−sin (((v-axisometer data)+90.0)/180.0*Pi)) to determine a basis of travel along the other vector of the floor of the VE. These bases of travel are normalized for consistency across devices with varying processor speeds, divided by a dampening factor of 60 and then added to each of the current location point variables. If the newly calculated location is outside the bounds of the VE, then the new location is set inside the bounds of the VE.

    [0199] Thus, in this embodiment, the virtual camera moves backward proportionally faster as the device is pivoted up farther from the origin. FIG. 16 clarifies that such an equation could be applied across a range of pivot up postures, such as those bounded by angle (d) and angle (e). In other preferred embodiments, the virtual camera moves backward at a fixed rate regardless of degree of pivot up. In other preferred embodiments, the backward movement of the virtual camera is speed limited, as exemplified by pivot up angle (e) of FIG. 16, which correlates to a maximum backward movement speed in the VE. A variety of equations may be used to translate pivot up data into virtual camera backward movement including but not limited to linear, exponential, geometric and other curved functions. FIG. 16 illustrates a practical clarifying framework for deploying curved and other such dynamic functions. In the example, speed of backward movement in the VE increases from device pivot up angle (d) to device pivot up angle (e) and then decreases from device pivot up angle (e) to device pivot up angle (f). Conversely, speed increases from pivot up angle (f) to pivot up angle (e) and then decreases from pivot up angle (e) to pivot up angle (d).

    [0200] FIGS. 8A, 8B and 8C illustrate an aim right interaction sequence and related visual display states. FIG. 8A represents a device starting posture while FIGS. 8B and 8C represent interaction sequence transition states. FIGS. 9A, 9B and 9C illustrate virtual camera location 111 and orientation 112 states in VE 110 corresponding to interaction sequence states illustrated in FIGS. 8A, 8B and 8C respectively.

    [0201] FIG. 8A represents a device starting posture and FIG. 9A illustrates the virtual camera starting in the west region of the VE facing north. Videos mapped to video panes 113, 116 and 119 are playing. Video pane 113 fills a portion of visual display 101. Sounds emanating from virtual speaker 114 are produced for display primarily by auditory devices 102 and/or 106 (as if coming from the left); while sounds emanating from virtual speakers 115, 117, 118, 120 and 121 are produced for display primarily by auditory devices 103 and/or 107 (as if coming from the right). Sounds from virtual speakers 114 and 115 are produced relatively louder than the other virtual speakers farther from the virtual camera. Devices with a single audio display, capable of monophonic sound only, may be limited to the latter distance-based distinction; however this limitation can be remedied by attaching stereo headphones 104 to the device.

    [0202] FIG. 8B illustrates a transitory aim right interaction state and FIG. 9B illustrates that the virtual camera orientation has rotated clockwise to face northeast. Videos mapped to video panes 113, 116 and 119 continue to play. Video pane 116 is now centered on visual display 101 with portions of video panes 113 and 119 to the left and right. Sound from virtual speaker 114 is produced to be more clearly coming from the left.

    [0203] FIG. 8C illustrates a second aim right interaction state and FIG. 9C illustrates that the virtual camera has rotated further clockwise to face directly east. Videos mapped to video panes 113, 116 and 119 continue to play. Video pane 119 is now centered on the visual display 101. Sounds from virtual speakers 114 and 115 are both now produced as if coming from the left while sounds from virtual speakers 120 and 121 are now produced as if coming from straight ahead.

    [0204] FIGS. 8D, 8E and 8F illustrate an aim left interaction sequence and related visual display states. FIG. 8D represents a device starting posture while FIGS. 8E and 8F represent interaction sequence transition states. FIGS. 9D, 9E and 9F illustrate virtual camera location 111 and orientation 112 states corresponding to interaction sequence states illustrated in FIGS. 8D, 8E and 8F respectively.

    [0205] FIG. 8D represents a device starting posture and FIG. 9D illustrates the virtual camera starting in the west region of the VE facing east. Videos mapped to video panes 113, 116 and 119 are playing. Video pane 119 fills a portion of visual display 101. Sounds emanating from virtual speakers 114, 115, 117 and 118 are produced for display primarily by auditory devices 102 and/or 106 (as if coming from the left); while sounds emanating from virtual speakers 120 and 121 are generally centered between left and right, though somewhat stereoscopically distinct. Sounds from virtual speakers 114 and 115 are produced relatively louder than the other virtual speakers farther from the virtual camera. Devices with a single audio display, capable of monophonic sound only, may be limited to the latter distance-based distinction; however, this limitation can be remedied by attaching stereo headphones 104 to the device.

    [0206] FIG. 8E illustrates a transitory aim left interaction state and FIG. 9E illustrates that the virtual camera orientation has rotated counter-clockwise to face northeast. Videos mapped to video panes 113, 116 and 119 continue to play. Video pane 116 is now centered on visual display 101 with portions of video panes 113 and 119 to the left and right. Sound from virtual speaker 115 is produced to be more centrally sourced and less clearly coming from the left.

    [0207] FIG. 8F illustrates a second aim left interaction state and FIG. 9F illustrates that the virtual camera has rotated further counter-clockwise to face directly north. Videos mapped to video panes 113, 116 and 119 continue to play. Video pane 113 is now centered on the visual display 101 and neither video pane 116 nor 119 is visible. Sounds emanating from virtual speakers 120 and 121 are now produced for display primarily by auditory devices 103 and/or 107 (as if coming from the right), while sounds from virtual speakers 114 and 115 are now generally centered between left and right, though somewhat stereoscopically distinct.

    [0208] FIGS. 10A, 10B and 10C illustrate a tip right interaction sequence and related visual display states. FIG. 10A represents a device starting posture while FIGS. 10B and 10C represent interaction sequence transition states. FIGS. 11A, 11B and 11C illustrate virtual camera location 111 and orientation 112 states in VE 110 corresponding to interaction sequence states illustrated in FIGS. 10A, 10B and 10C respectively.

    [0209] FIG. 10A represents a device starting posture and FIG. 11A illustrates the virtual camera starting in the southwest region of the VE facing north. Videos mapped to video panes 113, 116 and 119 are playing. Video pane 113 and a portion of video pane 116 are visible on visual display 101. Virtual speakers 114 and 115 are directly ahead, while virtual speakers 117, 118, 120 and 121 are to the right. Auditory devices 102 and/or 106 emphasize (e.g. display at a higher relative volume) sounds virtually emanating from virtual speaker 114 while auditory devices 103 and/or 107 emphasize sounds virtually emanating from virtual speakers 115, 117, 118, 120 and 121.

    [0210] FIG. 10B illustrates a transitory tip right interaction state and FIG. 11B illustrates that the virtual camera has moved eastward. Videos mapped to video panes 113, 116 and 119 continue to play. Video panes 113 and 116 are centered on visual display 101, while sounds from virtual speakers 120 and 121 are produced louder on the right than before. The virtual camera has rotated around its z-axis counter-clockwise to bring the horizon in the VE closer to parallel with the real ground—counter-balancing the tip right of the device.

    [0211] FIG. 10C illustrates a second tip right interaction state and FIG. 11C illustrates that the virtual camera has moved further east. Videos mapped to video panes 113, 116 and 119 continue to play. Video pane 116 is now centered in the visual display 101, while sounds from virtual speakers 120 and 121 are produced even louder on the right.

    [0212] In a preferred embodiment of the invention, tip right of a device will not result in movement of the virtual camera if the virtual camera is currently moving forward or backward in response to pivot down or pivot up interactions. It is generally easier for participants to do one thing at a time, and such separating of pivot axes reduces the chances of accidental actuation and simplifies the overall user experience. While rightward virtual camera movement is suppressed during forward and backward movement, counter-clockwise rotation of the virtual camera to fluidly maintain the virtual horizon is not suppressed. This maintains the illusion of multidirectional control without evidencing the aforementioned suppression. For skilled 3D navigators, however, enabling movement along both axes simultaneously can provide more interaction control.
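The axis-separation rule of [0212] can be sketched as a small update step: lateral movement is gated off while fore/aft movement is active, but the horizon-stabilizing counter-rotation is always applied. The state dictionary, the `roll_gain` parameter, and the strafe-speed callback are illustrative assumptions.

```python
def update_camera(state, pivot_active, tip_right_deg, strafe_speed_fn, roll_gain=1.0):
    """Sketch of the suppression behavior: while the virtual camera is
    moving forward or backward (pivot_active), lateral movement from a
    tip right is suppressed, but the counter-rotation that keeps the
    virtual horizon level is never suppressed. Names are illustrative.
    """
    # horizon-stabilizing roll is applied unconditionally
    state["camera_roll"] = -roll_gain * tip_right_deg
    if pivot_active:
        return state                    # suppress strafing during fore/aft movement
    # otherwise apply lateral movement derived from the tip angle
    state["x"] += strafe_speed_fn(tip_right_deg)
    return state
```

For skilled navigators, an embodiment enabling both axes would simply skip the `pivot_active` check.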

    [0213] In a preferred embodiment, velocity of virtual camera movement to the right is related to tip right using the following equations. First, data about the device's z-axis rotation posture is compared against an origin to determine whether the device is tipped right—by subtracting an origin from the raw z-axisometer data to determine relative tip right (if any). Second, if the relative tip right is greater than a maximum tip right of 50° then the relative tip right is set to 50°. Third, the relative tip right is compared against a neutral zone threshold. If the relative tip right is greater than the threshold, then the threshold is subtracted from the relative tip right to determine active tip right. The active tip right value is multiplied by (−cos (((v-axisometer data)+90.0)/180.0*Pi)) to determine a basis of travel along one vector of the floor of the VE; and the active tip right value is multiplied by (−sin (((v-axisometer data)+90.0)/180.0*Pi)) to determine a basis of travel along the other vector of the floor of the VE. These bases of travel are normalized for consistency across devices with varying processor speeds, divided by a dampening factor of 120 and then added to each of the current location point variables. If the newly calculated location is outside the bounds of the VE, then the new location is set inside the bounds of the VE.
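The procedure of [0213] might be sketched as one per-frame step. The 50° maximum, the dampening factor of 120 and the −cos/−sin heading projection follow the text; the 5° neutral zone, the VE bounds, and the frame-time normalization via `dt` are illustrative assumptions.

```python
import math

def tip_right_step(z_deg, origin_deg, yaw_deg, pos, dt,
                   neutral=5.0, max_tip=50.0, dampening=120.0,
                   bounds=(0.0, 100.0)):
    """Sketch of the tip-right velocity procedure. pos is (x, z) on the
    VE floor; yaw_deg stands in for the v-axisometer reading.
    """
    # 1) relative tip right = raw z-axis posture minus the origin
    rel = z_deg - origin_deg
    # 2) clamp to the maximum tip right of 50 degrees
    rel = min(rel, max_tip)
    # 3) subtract the neutral-zone threshold to get the active tip right
    if rel <= neutral:
        return pos                      # inside the neutral zone: no movement
    active = rel - neutral
    # project the lateral motion onto the two floor vectors of the VE
    heading = (yaw_deg + 90.0) / 180.0 * math.pi
    dx = active * -math.cos(heading)
    dz = active * -math.sin(heading)
    # normalize for frame time, apply the dampening factor, then move
    x = pos[0] + dx * dt / dampening
    z = pos[1] + dz * dt / dampening
    # keep the newly calculated location inside the bounds of the VE
    x = min(max(x, bounds[0]), bounds[1])
    z = min(max(z, bounds[0]), bounds[1])
    return (x, z)
```

The tip-left procedure of [0220] is the mirror image of this step, with the sign of the relative tip reversed.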

    [0214] Thus, in this embodiment, the virtual camera moves right proportionally faster as the device is tipped right farther from the origin. FIG. 17B clarifies that such an equation could be applied across a range of tip right postures, such as those bounded by angle (i) and angle (j). In other preferred embodiments, the virtual camera moves right at a fixed rate regardless of degree of tip right. In other preferred embodiments, the rightward movement of the virtual camera is speed limited, as exemplified by tip right angle (j) of FIG. 17B, which correlates to a maximum rightward movement speed in the VE. A variety of equations may be used to translate tip right data into virtual camera rightward movement including but not limited to linear, exponential, geometric and other curved functions. FIG. 17B illustrates a practical clarifying framework for deploying curved and other such dynamic functions. In the example, speed of rightward movement in the VE increases from device tip right angle (i) to device tip right angle (j) and then decreases from device tip right angle (j) to device tip right angle (j₁). Conversely, speed increases from tip right angle (j₁) to tip right angle (j) and then decreases from tip right angle (j) to tip right angle (i).

    [0215] FIGS. 10D, 10E and 10F illustrate a tip left interaction sequence and related visual display states. FIG. 10D represents a device starting posture while FIGS. 10E and 10F represent interaction sequence transition states. FIGS. 11D, 11E and 11F illustrate virtual camera location 111 and orientation 112 states corresponding to interaction sequence states illustrated in FIGS. 10D, 10E and 10F respectively.

    [0216] FIG. 10D represents a device starting posture and FIG. 11D illustrates the virtual camera starting in the southeast region of the VE facing north. Videos mapped to video panes 113, 116 and 119 are playing. Video pane 116 and portions of video panes 113 and 119 are visible on visual display 101. Virtual speakers 114 and 115 are to the left, virtual speakers 117 and 118 are directly ahead, and virtual speakers 120 and 121 are to the right. Auditory devices 102 and/or 106 emphasize (e.g. display at a higher relative volume) sounds virtually emanating from virtual speakers 114, 115 and 117 while auditory devices 103 and/or 107 emphasize sounds virtually emanating from virtual speakers 118, 120 and 121.

    [0217] FIG. 10E illustrates a transitory tip left interaction state and FIG. 11E illustrates that the virtual camera has moved westward. Videos mapped to video panes 113, 116 and 119 continue to play. Video panes 113 and 116 are centered on visual display 101, while sounds from virtual speakers 120 and 121 are produced softer on the right than before. The virtual camera has rotated around its z-axis clockwise to bring the horizon in the VE closer to parallel with the real ground—counter-balancing the tip left of the device.

    [0218] FIG. 10F illustrates a second tip left interaction state and FIG. 11F illustrates that the virtual camera has moved further west. Videos mapped to video panes 113, 116 and 119 continue to play. Video pane 113 is now centered in the visual display 101, while sounds from virtual speakers 120 and 121 are produced even quieter on the right.

    [0219] In a preferred embodiment of the invention, tip left of a device will not result in movement of the virtual camera if the virtual camera is currently moving forward or backward in response to pivot down or pivot up interactions. It is generally easier for participants to do one thing at a time, and such separating of pivot axes reduces the chances of accidental actuation and simplifies the overall user experience. While leftward virtual camera movement is suppressed during forward and backward movement, clockwise rotation of the virtual camera to fluidly maintain the virtual horizon is not suppressed. This maintains the illusion of multidirectional control without evidencing the aforementioned suppression. For skilled 3D navigators, however, enabling movement along both axes simultaneously can provide more interaction control.

    [0220] In a preferred embodiment, velocity of virtual camera movement to the left is related to tip left using the following equations. First, data about the device's z-axis rotation posture is compared against an origin to determine whether the device is tipped left—by subtracting an origin from the raw z-axisometer data to determine relative tip left (if any). Second, if the relative tip left is greater than a maximum tip left of 50° then the relative tip left is set to 50°. Third, the relative tip left is compared against a neutral zone threshold. If the relative tip left is greater than the threshold, then the threshold is subtracted from the relative tip left to determine active tip left. The active tip left value is multiplied by (−cos (((v-axisometer data)+90.0)/180.0*Pi)) to determine a basis of travel along one vector of the floor of the VE; and the active tip left value is multiplied by (−sin (((v-axisometer data)+90.0)/180.0*Pi)) to determine a basis of travel along the other vector of the floor of the VE. These bases of travel are normalized for consistency across devices with varying processor speeds, divided by a dampening factor of 120 and then added to each of the current location point variables. If the newly calculated location is outside the bounds of the VE, then the new location is set inside the bounds of the VE.

    [0221] Thus, in such an embodiment, the virtual camera moves left proportionally faster as the device is tipped left farther from the origin. FIG. 17A clarifies that such an equation could be applied across a range of tip left postures, such as those bounded by angle (g) and angle (h). In other preferred embodiments, the virtual camera moves left at a fixed rate regardless of degree of tip left. In other preferred embodiments, the leftward movement of the virtual camera is speed limited, as exemplified by tip left angle (h) of FIG. 17A, which correlates to a maximum leftward movement speed in the VE. A variety of equations may be used to translate tip left data into virtual camera leftward movement including but not limited to linear, exponential, geometric and other curved functions. FIG. 17A illustrates a practical clarifying framework for deploying curved and other such dynamic functions. In the example, speed of leftward movement in the VE increases from device tip left angle (g) to device tip left angle (h) and then decreases from device tip left angle (h) to device tip left angle (h₁). Conversely, speed increases from tip left angle (h₁) to tip left angle (h) and then decreases from tip left angle (h) to tip left angle (g).

    [0222] In a preferred embodiment of the invention, the above described interaction mappings are combined to result in a coherent gestalt user experience. An example interaction sequence based on the VE model illustrated in FIG. 3 using device 100 might occur as follows. Start by standing in the northwest corner of the VE close to the soccer game playing in video pane 113, as illustrated in FIG. 6D. Pivot up to walk backward (south) through FIG. 6E to arrive at the state illustrated in FIG. 8A. Aim right to change the orientation of the virtual camera through FIGS. 8B and 8C to rest in the state illustrated in FIG. 8D, revealing the active videoconference stream in video pane 116 and the playing bird documentary in video pane 119 to the northeast and east, respectively. Now, as illustrated in FIG. 8D, game action emanating from virtual speakers 114 and 115 is only audible from auditory devices 102 and/or 106 on the left. To view said game action, aim left through FIGS. 8E and 8F to rest at the state illustrated by FIG. 10A. Finally, tip right to relocate the virtual camera eastward through FIGS. 10B and 10C to arrive at the state illustrated in FIG. 10D. The virtual camera is now located in the southeast region of the VE and the videoconference stream video pane is centered on the visual display. The soccer match is now audible to the left and the birds are now audible to the right.

    [0223] FIG. 12 illustrates a representative model of an architectural scale manifold vista VE 130 containing video pane 131 in the northwest region, video pane 133 in the southeast region, and video pane 132 centrally located. A virtual camera at location 134 is on the west side of the space with a lens orientation 135 facing towards the east. An alternate location 136 of the virtual camera is on the east side of the space with an alternate lens orientation 137 facing towards the west. Video pane 133 is obscured from virtual camera location 134 by video pane 132; and video pane 131 is obscured from virtual camera location 136 by video pane 132. The invention is particularly useful in architectural scale manifold vista spaces because access to each content element requires the participant to travel through the space (with benefit of peripatetic sense) and to change orientation (with benefit of proprioceptive sense).

    Video Pane Characteristics

    [0224] Video panes and virtual speakers may appear, disappear, change size or shape, or change location in space at temporal locations predetermined before a given participant experience or at temporal locations determined in part or in whole on the fly.

    [0225] When a video pane is visible from more than one side, the video pane's content may be automatically flipped around the video pane's y-axis when viewed from the backside of the video pane to maintain the content's original facing. This approach is critical when words, such as subtitles, are part of the video content. Alternately, video content may be produced in reverse from the backside of the video pane.
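The back-side test in [0225] reduces to the sign of a dot product between the pane's front-facing normal and the vector toward the camera. The function name and the convention of returning a horizontal texture scale (−1 meaning "mirrored") are illustrative assumptions.

```python
def pane_flip_x(camera_pos, pane_center, pane_normal):
    """Sketch of the back-side flip rule: if the virtual camera is behind
    the pane (negative dot product with the pane's front normal), mirror
    the texture about the pane's y-axis so text such as subtitles still
    reads correctly. Positions are 3-tuples; names are illustrative.
    """
    to_camera = (camera_pos[0] - pane_center[0],
                 camera_pos[1] - pane_center[1],
                 camera_pos[2] - pane_center[2])
    dot = sum(a * b for a, b in zip(to_camera, pane_normal))
    # mirror horizontally when the camera views the pane from behind
    return -1.0 if dot < 0.0 else 1.0
```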

    [0226] Video panes may be opaque to other video panes and objects in the VE, or may be transparent. In one preferred embodiment, video panes are produced at 75% opacity, hinting at detail necessary for navigating a manifold vista VE without compromising the experience of the video content.

    [0227] Whether transparent or not, participants may be permitted to walk directly through video panes or be blocked from such passage. If permitted to pass through video panes, audio feedback and/or video effects may assist in participant comprehension of such transaction. Video panes and other objects in the VE may optionally be used as portals that bridge non-neighboring regions of the VE—enabling a participant to travel, for example, directly from a pane located in the northwest region of a VE to a pane located in the southeast region of the VE. Portal interactions may also be used for traversing hyperlinks or for entry into and exit from a VE.
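The portal behavior in [0227] could be sketched as a proximity-triggered relocation. The portal table, the trigger radius, and the flat (x, z) floor coordinates are illustrative assumptions; an actual embodiment might also fire the audio or video feedback mentioned above.

```python
def maybe_teleport(pos, portals, radius=1.0):
    """Sketch of pane-as-portal traversal: when the participant's floor
    position enters a portal's trigger radius, they are relocated to the
    portal's linked destination elsewhere in the VE, bridging
    non-neighboring regions. Names are illustrative.
    """
    for entry, exit_ in portals:
        dx = pos[0] - entry[0]
        dz = pos[1] - entry[1]
        if dx * dx + dz * dz <= radius * radius:
            return exit_                # e.g. northwest pane to southeast pane
    return pos                          # no portal entered: position unchanged
```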

    [0228] It should be understood that characteristics of, transformations of, and interactions with video panes in a VE may be generalized to other content forms including but not limited to still images, text documents, web pages, maps, graphs and 3D objects.

    Exemplary Hardware Configuration

    [0229] FIG. 13 is a block diagram of an exemplary hardware configuration model 200 for a device implementing the participant experience described in reference to FIGS. 1-12 and 14-18. Exemplary hardware devices include Apple's iPad and iPhone devices, Samsung's Galaxy phones and tablets, mobile devices built on Google's Android platform, and Microsoft's Surface tablet computers. Alternate hardware devices include Google's Project Glass wearable computers and Microsoft's X-BOX 360 game consoles equipped with Kinect motion sensing input hardware.

    [0230] From a participant interaction perspective, the device can include one or more visual display(s) 201 coupled with one or more visual display controller(s) 202, one or more auditory display(s) 203 coupled with one or more auditory display controller(s) 204, and one or more tactile display(s) 205 coupled with one or more tactile display controller(s) 206. It can include one or more accelerometer(s) 207, one or more magnetometer(s) 208, one or more gyro sensor(s) 209, one or more touch sensor(s) 210, and one or more other input hardware 211 (such as hardware button(s), camera(s) and/or other proximity sensing and/or motion sensing technologies) each coupled to one or more input interface(s) 212.

    [0231] The device can include one or more processor(s) 213 and one or more memory bank(s) 214 connected to one another and connected to the various display controller(s) and input interface(s) via one or more bus(es) 218. It can also be coupled with one or more wireless communication subsystem(s) 215 that communicate through one or more wireless network(s) 216 to one or more remote computing device(s) 217.

    Applicability

    [0232] The claimed invention may be used for navigating in a variety of contexts including but not limited to productions of artistic expression, theatrical prototyping, architectural simulation, street-view mapping, gaming, remote control of vehicles, augmented reality, virtual reality, videoconferencing and other telepresence applications, and user interfaces for document and image searching, browsing and retrieval. Virtual environments containing polylinear video and audio have already been discussed at length. The peripatetic proprioceptive experience principles and solutions disclosed apply to a variety of other applications making use of virtual or virtual-like environments.

    [0233] Architects can prototype buildings and museum exhibit curators can prototype the design of exhibits, then test virtual experiences of the space and fine tune before physical construction.

    [0234] Augmented reality applications can be enhanced by enabling participants to travel in the modeled space without having to change their location in the physical world.

    [0235] Games situated in virtual environments, for example, can be improved by enabling participants to move around more naturally without overloading the visual display with buttons.

    [0236] Street-view maps can be transformed into a form of VE. Rather than mouse-clicking or finger-tapping on a visual display interface to move from virtual camera location to virtual camera location, the present invention enables participants to more easily navigate the map environment and experience the captured streets (or other spaces) with proprioceptive perspective.

    [0237] Extemporaneous control of remote objects can be made more natural using the invention, enabling a participant to pivot, tip and aim a handheld or head mounted device to control a remote-controlled toy or full-sized military tank, for example. If the vehicle is outfitted with a camera, then the participant may see the remote location from first-person proprioceptive perspective.

    [0238] A frequently expressed need in the domain of videoconferencing involves effective techniques for spatializing and navigating amongst attendee video panes and related document content. The present invention can overturn the rigid seating arrangements and unwieldy display limitations of current-day multi-party videoconferencing systems in favor of a portable experience that uses intuitive and comfortable interactions.

    [0239] Other social media, such as a navigable VE-based telepresence event, may be transformed by adding peripatetic proprioceptive interactions, complete with soundscape cocktail party effects. As a participant moves their avatar through the VE, conversations overheard amongst virtual attendees close by in the space are produced louder than conversations further away.

    [0240] The present invention may be used to improve the participant experience of searching, browsing and retrieving documents and images from large databases, whether locally or remotely stored. Contents of search results may be distributed in two or three dimensions akin to a distribution of video panes in a VE, thus enabling a participant to move through the plurality of results using peripatetic and/or proprioceptive interactions. Gestures such as push and pull may be used to tag and/or collect results of interest and/or initiate subsequent filtering of navigable results produced for browsing in the space.

    [0241] Having now set forth the preferred embodiments and certain modifications of the concepts underlying the present invention—which are meant to be exemplary and not limiting—various other embodiments and uses as well as certain variations and modifications thereto may obviously occur to those skilled in the art upon becoming familiar with the underlying concepts. It is to be understood, therefore, that the invention may be practiced otherwise than as specifically set forth herein, including using sensors, apparatus, programming languages, toolkits and algorithms (including adding steps, removing steps, reversing the interpretation of motions, and changing the order of procedures) other than those described to effectuate the user experiences disclosed herein.