Augmented virtuality self view
11659150 · 2023-05-23
Assignee
Inventors
CPC classification
G02B27/0093
PHYSICS
H04N13/111
ELECTRICITY
G02B2027/0187
PHYSICS
International classification
H04N13/111
ELECTRICITY
Abstract
A processor system processes image data for rendering a virtual environment for a user present in a real environment. The system receives head tracking data indicative of the orientation of the head of the user. An image processor generates image data for rendering a viewport of the virtual environment on a display system based on the head tracking data. A real-view area is defined in the virtual environment, having at least one boundary. The boundary corresponds to predetermined coordinates in the virtual environment. Thereby a corresponding part of the real environment is made visible in the real-view area, the part showing the real environment as perceived from the user head pose. Effectively the virtual environment is augmented by integrating part of the real environment via the real-view area.
Claims
1. Processor system for processing image data for rendering a virtual environment on a display system for a user, the user being present in a real environment, wherein the processor system comprises: an interface for receiving head tracking data from a head tracking system, wherein the head tracking data is indicative of at least the orientation including the pitch orientation of the head of the user in the real environment, and an image processor configured to: generate image data for rendering a viewport of the virtual environment on the display system, the viewport being generated based on the head tracking data, thereby moving the viewport corresponding to head movements, define at least one real-view area in the virtual environment, determine at least one boundary of the real-view area, the boundary corresponding to predetermined coordinates in the virtual environment for making visible a corresponding part of the real environment in the real-view area, the part showing the real environment as perceived from the user head pose; and modify the coordinates of the boundary in the virtual environment based on a change of the position of the user in the real environment, wherein the boundary comprises a horizontal boundary comprising a separating line horizontal with respect to the virtual environment, while the real-view area is below the separating line, and the processor system is configured to maintain at least a part of the real-view area in the viewport by moving the horizontal boundary in the virtual environment when the pitch orientation of the head of the user as indicated by the tracking data exceeds a predetermined limit.
2. The processor system as claimed in claim 1, wherein the at least one boundary comprises a left separating line vertical with respect to the virtual environment and a right separating line vertical with respect to the virtual environment, and the real-view area is between the left and right separating lines.
3. The processor system as claimed in claim 1, wherein the processor system comprises a camera interface for receiving camera data of the real environment as perceived from the user head pose from a camera mounted on the head of the user, and the image processor is arranged to show said part of the real environment in the real-view area based on the camera data.
4. The processor system as claimed in claim 3, wherein the image processor is configured to generate a data plane showing the camera data at least in the real-view area, and generate the image data of the virtual environment over the data plane outside of the real-view area, the data plane moving with respect to the real environment with head rotations and translations as indicated by the head tracking data.
5. The processor system as claimed in claim 3, wherein the image processor is configured to generate a local plane in the virtual environment on the position of the real-view area, and to show the camera data to the extent the camera data overlaps with the local plane.
6. The processor system as claimed in claim 3, wherein the image processor is configured to modify the camera data by adapting the scale or perspective for showing image data of the real environment as perceived from the user head pose.
7. The processor system as claimed in claim 1, wherein the image processor is configured to create a transition by showing, near the boundary, a combination of the virtual environment and the real environment.
8. Head mounted device comprising the processor system as claimed in claim 1, wherein the head mounted device comprises the display system.
9. The head mounted device as claimed in claim 8, wherein the device comprises a camera for providing camera data of the real environment, and the image processor is arranged to show said part of the real environment in the real-view area based on the camera data.
10. The head mounted device as claimed in claim 8, wherein the display system has a transparent part, and the image processor is configured to make visible said part of the real environment in the real-view area via the transparent part by not displaying image data in the real-view area.
11. The processor system as claimed in claim 1, wherein the moving the horizontal boundary in the virtual environment when the pitch of the head of the user as indicated by the tracking data exceeds a predetermined limit comprises putting the horizontal boundary higher in the virtual environment to maintain the at least part of the real-view area in the bottom part of the viewport when the head of the user looks up exceeding the predetermined limit.
12. The processor system as claimed in claim 1, wherein the moving the horizontal boundary in the virtual environment when the pitch of the head of the user as indicated by the tracking data exceeds a predetermined limit comprises putting the horizontal boundary lower in the virtual environment when the pitch of the head of the user exceeds the predetermined limit in downward direction, such that part of the virtual environment remains visible in the bottom part of the viewport.
13. Computer implemented processing method for processing image data for rendering a virtual environment on a display system for a user, the user being present in a real environment, wherein the method comprises: receiving head tracking data from a head tracking system, wherein the head tracking data is indicative of at least the orientation including the pitch orientation of the head of the user in the real environment, generating image data for rendering a viewport of the virtual environment on the display system, the viewport being generated based on the head tracking data, thereby moving the viewport corresponding to head movements, defining at least one real-view area in the virtual environment, and determining at least one boundary of the real-view area, the boundary corresponding to predetermined coordinates in the virtual environment, for making visible a corresponding part of the real environment in the real-view area, the part showing the real environment as perceived from the user head pose; modifying the coordinates of the boundary in the virtual environment based on a change of the position of the user in the real environment, wherein the boundary comprises a horizontal boundary comprising a separating line horizontal with respect to the virtual environment, while the real-view area is below the separating line, and maintaining at least a part of the real-view area in the viewport by moving the horizontal boundary in the virtual environment when the pitch orientation of the head of the user as indicated by the tracking data exceeds a predetermined limit.
14. A non-transitory computer-readable medium comprising a computer program, the computer program comprising instructions for causing a processor to perform the method according to claim 13.
15. The method as claimed in claim 13, wherein the moving the horizontal boundary in the virtual environment when the pitch of the head of the user as indicated by the tracking data exceeds a predetermined limit comprises putting the horizontal boundary higher in the virtual environment to maintain the at least part of the real-view area in the bottom part of the viewport when the pitch of the head of the user exceeds the predetermined limit in upward direction.
16. The method as claimed in claim 13, wherein the moving the horizontal boundary in the virtual environment when the pitch of the head of the user as indicated by the tracking data exceeds a predetermined limit comprises putting the horizontal boundary lower in the virtual environment when the pitch of the head of the user exceeds the predetermined limit in downward direction, such that part of the virtual environment remains visible in the bottom part of the viewport.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) These and other aspects of the invention are apparent from and will be elucidated with reference to the embodiments described hereinafter. In the drawings,
(17) It should be noted that similar items in different figures may have the same reference numbers and may have similar structural features, functions, or signals. Where the function and/or structure of such an item has been explained, there is no necessity for repeated explanation thereof in the detailed description.
LIST OF REFERENCE AND ABBREVIATIONS
(18) The following list of references and abbreviations is provided for facilitating the interpretation of the drawings and shall not be construed as limiting the claims.
(19) AR augmented reality
(20) AV augmented virtuality
(21) ERP equirectangular projection
(22) FoV field of view
(23) HMD head mounted device
(24) VR virtual reality
(25) 100 processor system
(26) 105 camera interface
(27) 110 virtual reality source
(28) 115 interface for receiving head tracking data
(29) 116a,b head tracking system
(30) 120 image processor
(31) 125 scene controller
(32) 150 camera
(33) 180 head mounted device (HMD)
(34) 300 display
(35) 305 real-view area
(36) 310 boundary
(37) 315 coordinate system
(38) 320 virtual environment
(39) 400 display
(40) 405 real-view area
(41) 410 left boundary
(42) 411 right boundary
(43) 415 coordinate system
(44) 420 virtual environment
(45) 500 head of a user
(46) 505 real environment object
(47) 510 sphere with projected virtual environment
(48) 520 image data plane for rendering real world
(49) 530 virtual environment
(50) 540 rectangular real-view area
(51) 620 local data plane
(52) 650 viewport
(53) 660 camera field of view (FoV)
(54) 670 small field of view
(55) 680 large field of view
(56) 700 real environment
(57) 710 real world object
(58) 720 virtual environment
(59) 730 real-view area
(60) 800 processing method
(61) 810 steps for making visible the real environment
(62) 900 computer readable medium
(63) 910 non-transitory data
(64) 1000 exemplary data processing system
(65) 1002 processor
(66) 1004 memory element
(67) 1006 system bus
(68) 1008 local memory
(69) 1010 bulk storage device
(70) 1012 input device
(71) 1014 output device
(72) 1016 network adapter
(73) 1018 application
DETAILED DESCRIPTION OF EMBODIMENTS
(75) The processor system may further have a camera interface 105 to receive camera data from a camera 150. The camera may also be positioned on, or integrated in, the head mounted device 180. The processor system may further have a virtual reality (VR) source 110, which may be an interface to receive external virtual reality data or a local VR generating system. The processor system may further have a scene controller 125 to provide configuration data and settings for controlling the images that are displayed for the user.
(76) The image processor is configured to generate image data for rendering a viewport of the virtual environment on the display system. The viewport is the part of the virtual environment that is displayed to the user on the display system, and is thus dependent on the user's head orientation. The viewport is generated based on the head tracking data. The image processor is further configured to define at least one real-view area in the virtual environment, and to determine at least one boundary of the real-view area. The boundary corresponds to predetermined coordinates in the virtual environment. The system is arranged for making visible a corresponding part of the real environment in the real-view area. The part shows the real environment as perceived from the user head pose, i.e. how the user would see the real environment in this part of his field of view, given his head pose.
(77) In a practical embodiment, the VR source provides a VR service representing the virtual environment, for example a 360-video showing a movie, a game environment, a windows or desktop environment, a social VR environment, a VR video conference environment etc. The VR service is consumed on a HMD or VR headset, i.e. displayed on a built-in display for the eyes of the user. The VR headset may have a camera mounted on the front, or a built-in camera. Optionally, the camera is a stereo camera, i.e. offering a view for each one of the eyes. To generate tracking data regarding the pose of the head of the user, the position and orientation of the HMD is tracked, either by an internal tracking system (i.e. inside-out tracking) or by one or more external sensors (i.e. outside-in tracking) or both. The output of the camera is captured and transferred to the processor system via the camera interface. The combination of the VR service and the camera images is to be rendered together for display on the HMD. The combination may be controlled by the scene controller, e.g. a WebVR based environment or a Unity project. The scene controller may be integrated within the VR service. The scene controller may provide a configuration that defines the real-view area. The configuration may define coordinates in a coordinate system of the virtual environment, e.g. horizontal and vertical angles, for which to show the camera data.
(78) Effectively, the above processor system defines one or more real-view areas in the virtual environment, which areas are either transparent or show camera data as perceived from the user head pose. The real-view area(s) may not show any virtual objects. The real-view areas are defined using a coordinate system of the virtual environment, e.g. in horizontal degrees and/or vertical degrees compared to the 0-axis of the virtual environment. For moving users, the coordinates may also include position, e.g. tracked based on the position of a HMD, i.e. taking into account head translation on the 3 axes of a 3D coordinate system.
(79) A camera may capture the view of the user on the real environment, i.e. as perceived from the head pose that defines the position of the user's eyes. For example, the head pose of the user wearing the HMD may be tracked, while a front-facing camera may be mounted on the HMD. Optionally a stereo camera or a color+depth camera is used and mounted on the HMD.
(80) In the image processing a plane may be defined in the virtual environment, e.g. projected behind the virtual environment and visible where there are said real-view areas, e.g. by ‘gaps’ in the virtual environment. The plane is straight in front of the user, i.e. it fills the user's view if no virtual environment is rendered, and moves with head rotations/translations. Alternatively, a local plane may be created only on the position of the real-view area where the camera capture should be shown. The camera image is rendered to the extent it overlaps with this plane. Optionally, the camera data may be scaled so that the size of the video matches the vision of the user, i.e. shows the real environment as perceived from the head pose. Subsequently, the camera image is displayed on the plane. The camera image is only rendered to the extent the real-view area in the virtual environment is present in the user's current viewport.
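The ‘gap’ mechanism just described amounts to a per-pixel choice between the two image sources. A minimal sketch, with illustrative names and a boolean mask standing in for the real-view area:

```python
def composite_frame(virtual, camera, mask):
    """Overlay the virtual environment on the camera plane: where the
    mask marks the real-view area, the camera pixel shows through the
    'gap'; elsewhere the virtual pixel is kept."""
    return [
        [c if m else v for v, c, m in zip(vrow, crow, mrow)]
        for vrow, crow, mrow in zip(virtual, camera, mask)
    ]
```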
(81) A VR environment may have a fixed axis system, in which (0, 0, 0) is the starting viewing position of the user, i.e. the ‘virtual camera’ position through which the user sees the environment. The north point is defined as the 0-degree forward point on the horizontal plane, as the starting viewing orientation. Together (position and orientation) they define the axis system for a VR service. The orientation can be either defined as the yaw, pitch and roll (illustrated in
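For illustration, a yaw/pitch orientation can be converted to a viewing direction as follows; the y-up, right-handed axis convention chosen here is an assumption, as conventions differ between VR frameworks:

```python
import math

def view_direction(yaw_deg, pitch_deg):
    """Unit forward vector for a yaw/pitch orientation, with (0, 0)
    looking at the 0-degree 'north' point along +z (y is up)."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    return (
        math.cos(pitch) * math.sin(yaw),  # x, to the right
        math.sin(pitch),                  # y, upwards
        math.cos(pitch) * math.cos(yaw),  # z, forward (north)
    )
```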
(85) In an embodiment, a plane may be created that shows the camera image needed for the real-view area. Such a plane may have certain properties: it may be fixed or not, i.e. moving or not moving along with the user's head movement regarding position and orientation, and these properties can be updated on events or continuously based on the tracking data. Defining a plane that is fixed in relation to the user's view, i.e. always in front, allows rendering camera data as provided by a camera mounted on a HMD. Furthermore, either ‘cutting a hole’ in the VR scene, i.e. leaving out parts of the VR scene, or defining transparency in a certain part of the VR environment, allows the plane that is rendering the camera image to become visible there. For example, such a plane is convenient for 360-degree experiences, e.g. using 360-degree photos or video. Cutting a hole in a picture or video, i.e. leaving out pixels in a certain area, effectively creates the real-view area.
(87) As the image plane showing the camera image is rendered behind the projection of the virtual environment on the sphere, the camera image would not be visible to the user. Creating the real-view area is executed by removing part of the virtual environment, so that the camera image will be visible to the user while the image plane shows that specific part of the real environment. Removing a part can be done in various ways, either by actually removing a part, or by making a part transparent, or by preventing the rendering of a part. The removed part of the virtual environment creates the real-view area, which in this embodiment is the area in the virtual environment where the image plane is visible to the user.
(89) The virtual environment 530 shows an equirectangular projection (ERP) of a 360-degree image, e.g. a photo of an office environment, or perhaps one frame of a video of an office environment. This projection can be projected on a sphere to give a proper virtual environment to a user. The virtual environment data may not cover the full sphere, e.g. a bottom part or top part may be missing effectively giving the virtual environment a lower limit and/or an upper limit, as shown in
(91) In the embodiment of
(92) This rendering may be done by determining the overlap, cropping the camera data, e.g. the current video frame, and rendering the remaining data on the part of the rendering plane that is overlapping the captured frame. Alternatively, this rendering may be done by determining the non-overlapping part in the captured frame, and making this part fully transparent. This rendering does not require the real-view plane to be a rendering frame: no actual data is rendered on this plane. Similar to the example in
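The overlap-and-crop step described above can be sketched as an axis-aligned rectangle intersection; the names and the (x0, y0, x1, y1) convention are illustrative assumptions:

```python
def crop_to_overlap(camera_rect, plane_rect):
    """Intersection of the captured camera frame and the real-view
    plane, both as (x0, y0, x1, y1) in the same viewport coordinates;
    only this part needs to be rendered. Returns None when the
    rectangles do not overlap."""
    x0 = max(camera_rect[0], plane_rect[0])
    y0 = max(camera_rect[1], plane_rect[1])
    x1 = min(camera_rect[2], plane_rect[2])
    y1 = min(camera_rect[3], plane_rect[3])
    if x0 >= x1 or y0 >= y1:
        return None
    return (x0, y0, x1, y1)
```

The alternative rendering route from the text corresponds to marking the non-overlapping part of the captured frame fully transparent instead of cropping it away.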
(94) A second camera FoV 670 is smaller than the viewport offered by the HMD, as shown in scenario B. This may limit the experience a user will have with the system. Only the part of the real-view area that is captured by the camera system can be displayed, as image data is only available for that part. Still, as the camera is normally centered around the center of the user's viewport, this may be quite usable, as only part of the periphery is lost. Optionally, to compensate for parts of the real-view area for which there is no camera data, the virtual environment may be shown in the empty parts of the real-view area, i.e. effectively shrinking the real-view area based on the camera FoV.
(95) A third camera FoV 680 is larger than the viewport offered by the HMD, as shown in scenario C. This may result in a good experience in the sense of the real-view area that can be shown: all parts of the real-view area can be covered, similar to the situation shown in scenario A. However, this leads to some inefficiency, as some areas are captured by the camera but not used for display. The camera image data as captured may be cropped to limit it to the current viewport, to increase efficiency.
(96) In an embodiment, the camera may be shut down when the real-view area is not in the viewport of the user. Thus, the required processing power and supply power may be further reduced.
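A sketch of such a power-saving check on the yaw interval of the real-view area; the parameter names are illustrative, and wrap-around at 360 degrees is ignored for brevity:

```python
def camera_should_run(viewport_yaw_deg, viewport_fov_deg,
                      area_left_deg, area_right_deg):
    """True while the real-view area overlaps the user's current
    viewport; when False the camera may be shut down to reduce
    processing power and supply power."""
    half = viewport_fov_deg / 2.0
    view_left = viewport_yaw_deg - half
    view_right = viewport_yaw_deg + half
    # Standard interval-overlap test on yaw ranges.
    return area_left_deg < view_right and area_right_deg > view_left
```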
(98) If a person is sitting, the front of the user may extend further than the back, as shown in scenario B. So, while a back boundary may remain e.g. at −70 degrees, the front boundary should be higher, i.e. at a smaller angle below the horizon, such as −60 degrees, depicted as β.
(99) If a person is sitting at a desk, with their arms in front to e.g. handle a mouse and keyboard, the horizontal boundary may be even higher. This is depicted in scenario C. In such a case, the angle may e.g. be −40 degrees, depicted by γ.
(100) If different horizontal boundaries for front and back are defined, there may be a combination of vertical and horizontal boundaries at the sides of the user. And, various boundaries may change if the person changes his/her position. For example, if the user stands up, the boundary coordinates, e.g. in degrees, may change with the movement, taking into account the user's position; e.g. a relevant part of a desk may be further down in degrees compared to sitting.
(101) For vertical boundaries, degrees may be configured from a North or straight-in-front position. Such a position may be calibrated for a headset, e.g. for seated VR this may be the direction straight in front of the PC, e.g. sitting straight at one's desk. The degrees can be expressed either in positive degrees only, or in both positive and negative degrees. E.g. a boundary at 45 degrees on each side of 0 degrees (North) can be referenced as 45 and 315 degrees or as +45 and −45 degrees.
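The two notations (45/315 degrees versus +45/−45 degrees) are interchangeable under a simple normalisation; the posture-dependent boundary angles below echo the example angles α, β, γ from the preceding paragraphs, while the key names are illustrative:

```python
def signed_degrees(deg):
    """Map a 0..360-degree boundary (e.g. 315) to its signed
    equivalent (-45), so both notations denote the same direction."""
    return (deg + 180.0) % 360.0 - 180.0

# Horizontal-boundary angles per posture, after the examples in the text:
# about -70 degrees behind the user, -60 degrees in front when sitting,
# and -40 degrees when sitting at a desk with the arms in front.
HORIZONTAL_BOUNDARY_DEG = {
    "back": -70.0,
    "sitting_front": -60.0,
    "desk_front": -40.0,
}
```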
(103) The virtual environment may be a 360-degree photo or video. In this case, normally a static sphere is defined around the user, at a certain distance. As a 360 video is recorded from a single point, head translations do not have any influence on what a user sees. The virtual environment may also be a 3D environment, i.e. a graphical ‘game-like’ environment. In this case, the user is inside a virtual 3D world, and head translations will have effect. E.g. the user would be able to look behind a wall. The virtual environment has a coordinate system. Normally, when the user enters VR, he will be defined as the center, i.e. the (0, 0, 0) point, looking around in the virtual environment through a so-called virtual camera. This camera can either move around based on head tracking, e.g. in the 3D environment, or be statically placed, i.e. in case of a 360 photo or video. In both cases, the orientation, i.e. the direction the virtual camera is aimed at, will be updated based on the head rotations of the user, as detected by the HMD tracker. Also, the real environment has a real-world coordinate system, i.e. the usual physical world.
(104) There are at least three different modes of operation for controlling the real-view area, as elucidated with examples B, C and D. The examples show what may happen when the user, wearing the HMD, stands up.
(105) In example (B) of a standing user, the real-view area stays at the same position in the virtual environment. This may happen if the user is in a 3D environment and sees the real world through a specific part of the 3D environment, e.g. a dedicated area placed inside the virtual environment. The user will see a different part of the real world, depending on the position of the user's viewpoint. This is similar to looking out a physical window: the user may change his head position to look at different parts outside. The 3D virtual environment may behave similarly to a physical environment, i.e. rotating a user's head 360 degrees brings the user back to the same point, and moving his head 1 meter upwards takes the user 1 meter upwards in the virtual environment as well. If this is the case, example (B) is similar to the real-view area having a fixed position in the real-world coordinate system. There are still some further design choices: e.g. if there is a virtual wall, moving around the virtual wall may lead to the real-view area no longer being visible, or the real-view area could always remain visible regardless. This scenario may also be used with 360 photo or video content as the virtual environment.
(106) In example (C) of a standing user, the real-view area remains at the same directional coordinates in the VE coordinate system, relative to the user's HMD position. This may be used when the virtual environment is a 360 photo or video, where the user is not expected to move much. The virtual environment may have e.g. a virtual desk position, which is situated at the real-view area. If the user slightly moves his or her head, he or she would see a slightly different part of the real world as well. As shown, when standing up, the user would see a slightly different part of the table, and partly look over it.
(107) In example (D) of a standing user, the real-view area is adjusted based on the position of the user's head, so that the user will have the same part of the real world in view through the real-view area, even during head translations such as standing up. This may be done if the distance to the real-world objects can be estimated or measured, as the adjustment for objects at a large distance differs from the adjustment for objects close by. For example, the user's HMD may contain a stereo camera looking outwards, and the stereo images may be used to estimate this distance.
(108) The above embodiments may be arranged to perform similarly for head position movement in the horizontal direction, or for diagonal (i.e. horizontal and vertical) head position movement.
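The adjustment of example (D) reduces to elementary trigonometry once a distance estimate is available; the function below is a sketch under the assumption that the boundary should keep pointing at the same real-world edge (e.g. a desk edge), with illustrative names:

```python
import math

def adjusted_boundary_pitch_deg(edge_height_m, head_height_m, distance_m):
    """Pitch angle (degrees, negative is downwards) from the user's head
    to a real-world edge at the given height and horizontal distance.
    Recomputing this after a head translation, e.g. standing up, keeps
    the same part of the real world in view through the real-view area."""
    return math.degrees(math.atan2(edge_height_m - head_height_m, distance_m))
```

For instance, a desk edge at 0.7 m seen from a sitting head height of 1.2 m at 0.5 m distance gives −45 degrees; after standing up to 1.7 m, the boundary must move to roughly −63 degrees to keep the same edge in view.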
(110) A first embodiment of a camera system is shown in
(112) In an embodiment the display system has a partly transparent display. For example, see-through augmented reality (AR) headsets are headsets that use some kind of projection method to allow the user to see the real world through a transparent part of the headset, while allowing overlays to be shown within this transparent part. Typical examples are the Microsoft HoloLens, the Magic Leap 1 and the Meta 2. Current AR glasses are still limited in their field of view, typically being somewhere around 45 degrees horizontally (HoloLens 2, Magic Leap), while the Meta 2 offers a higher field of view of around 90 degrees horizontally. The expectation is that this will increase in the future, which would allow using an AR headset also as a VR headset, if the images can fill the user's entire view.
(113) In an embodiment the above processor system that provides a real-view area is applied to AR headsets. The processor system is similar to that for VR headsets but adapted as follows. While the virtual environment is similarly projected, no images are needed for display in the real-view area. As long as nothing virtual is projected in the real-view area, this area will automatically offer a view of the physical world. A big advantage of this embodiment is that the latency of the real-world view is zero, as there is a direct view through a transparent part of the display.
(114) Optionally an AR headset may also be used in exactly the same way as a VR headset by using a camera for capturing the real-world environment and using the projection/display capabilities of the AR headset to display this part in a real-view area of the headset.
(116) Section (B) shows the legend for the rest of the figure, showing a horizontal boundary HB and a vertical boundary VB, using two different styles of dashes. In the Figure, parts of each projection may be marked as follows: F=front, T=top, B=bottom, L=left, R=right, M=middle, BK=back.
(118) Section (D) shows a different packing projection, which may be called ERP with region-wise packing. Here boundaries may be drawn straight, but as less space (i.e. pixels) is used for top and bottom, they require separate boundaries, and they shift compared to scenario (C) as their width is only half of the width of the middle part.
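For the plain ERP case (before any region-wise packing), boundary coordinates map linearly onto the projection; a minimal sketch, with the top-left pixel origin as an assumed convention:

```python
def erp_pixel(yaw_deg, pitch_deg, width_px, height_px):
    """Map a boundary coordinate (yaw in [-180, 180], pitch in
    [-90, 90]) onto a plain equirectangular projection, with pixel
    (0, 0) at the top-left (yaw -180, pitch +90)."""
    u = (yaw_deg / 360.0 + 0.5) * width_px
    v = (0.5 - pitch_deg / 180.0) * height_px
    return (u, v)
```

Under region-wise packing, an additional per-region offset and scale would apply on top of this mapping.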
(119) Section (E) shows cube map projection. For cube map projection, the boundaries are more complex. For the basic cube map projection, as long as the horizontal boundary goes through the front face, it can be drawn easily. If it were in the top or bottom, it would become a circle inside that cube face. For the vertical boundaries, the poles of the 360 sphere correspond to the center points of the top and bottom faces. So the vertical boundaries in the top and bottom parts go towards these center points, and there they end, as shown in section (E).
(120) Section (F) shows a region-wise packing projection to organize faces. Packing the faces as shown here will not change the boundaries, but it will look a bit different due to the different organization.
(121) Finally, section (G) shows a region-wise packing projection to organize faces having a higher quality front. Here the top and bottom part receive less vertical resolution, thus the center point shifts and thus the boundaries also change.
(122) Defining the real-view areas in such a projection can be used to cause part of the projection to be transparent, either by not supplying any image data for these parts, by using the alpha channel to define transparency for these parts, or by using a chroma color and later applying chroma keying to remove the part during rendering, etc.
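The chroma-keying variant can be sketched as a per-pixel colour match that yields a transparency mask; the tolerance parameter and names are illustrative assumptions:

```python
def chroma_mask(frame, key_color, tolerance):
    """Mark pixels whose colour is within `tolerance` of the chroma key
    as transparent, so that the real view can show through during
    rendering. `frame` is rows of (r, g, b) tuples."""
    def matches(px):
        return all(abs(c - k) <= tolerance for c, k in zip(px, key_color))
    return [[matches(px) for px in row] for row in frame]
```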
(123) The cut-off between the virtual environment and the real-view area may be strict on the boundary itself. In an embodiment, for a good immersive experience, additional measures may be taken to cause a smooth transition between the virtual environment and the real-view area. The measures may consist of blending parts of the virtual environment and the view of the real environment, e.g. by defining an area where the pixels are a mix of pixels from the virtual environment and the real environment, for example using a (small) area for the transition from 100% virtual environment to 100% real view. This can also be seen as a transition in transparency of either the virtual environment or the real view, i.e. from 0% transparency to 100% transparency. Other methods may comprise image transformations to smooth the edges, e.g. by blurring, feathering, or smoothing the boundary, or by introducing an additional element that ‘covers’ the edge, e.g. adding a kind of mist or cloud or discoloring at the boundary between the view of the virtual and the real environment.
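The blending measure can be sketched as a linear alpha ramp across a small band around the boundary; the signed-distance convention and names are illustrative assumptions:

```python
def blend_at_boundary(virtual_px, real_px, dist_deg, band_deg):
    """Mix virtual and real pixels in a transition band around the
    boundary. `dist_deg` is the signed distance to the boundary:
    positive on the virtual side, negative inside the real-view area.
    Outside the band the result is 100% virtual or 100% real view."""
    alpha = max(0.0, min(1.0, dist_deg / band_deg + 0.5))  # virtual fraction
    return tuple(alpha * v + (1.0 - alpha) * r
                 for v, r in zip(virtual_px, real_px))
```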
(124) A number of examples explain the projection of a virtual environment on a sphere, as is normally done for 360-degree photos or videos. If the virtual environment is a graphical 3D environment or a point-cloud or mesh-based 3D environment, the step of projection on a sphere is normally not performed. Instead, during rendering, the 3D environment is rendered based on the user's head pose directly. The step of cutting a hole or making a part transparent may then be performed by not rendering the environment in a certain direction, e.g. within certain defined vector directions. This may align well with ray-tracing methods for rendering to determine what to render and what not (as parts of the 3D world may occlude each other). Another method may be to actually modify the virtual environment by “cutting away” the parts in the real-view area, e.g. if the 3D environment is built up out of meshes, simply ‘deleting’ the meshes in this area. Any other method as known in the art may also be applied here, as various ways besides those mentioned here exist for doing so.
(128) Memory elements 1004 may include one or more physical memory devices such as, for example, local memory 1008 and one or more bulk storage devices 1010. Local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code. A bulk storage device may be implemented as a hard drive, solid state disk or other persistent data storage device. The processing system 1000 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from bulk storage device 1010 during execution.
(129) Input/output (I/O) devices depicted as input device 1012 and output device 1014 may optionally be coupled to the data processing system. Examples of input devices may include, but are not limited to, for example, a microphone, a keyboard, a pointing device such as a mouse, a touchscreen or the like. Examples of output devices may include, but are not limited to, for example, a monitor or display, speakers, or the like. Input device and/or output device may be coupled to the data processing system either directly or through intervening I/O controllers. A network interface 1016 may also be coupled to, or be part of, the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks. The network interface may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system and a data transmitter for transmitting data to said systems, devices and/or networks. Modems, cable modems, and Ethernet cards are examples of different types of network interface that may be used with data processing system 1000.
(131) In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb “comprise” and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. The article “a” or “an” preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
(132) The teachings of all patents, published applications and references cited herein are incorporated by reference in their entirety.
(133) While this invention has been particularly shown and described with references to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.