G06F3/011

DYNAMIC EXPANSION AND CONTRACTION OF EXTENDED REALITY ENVIRONMENTS
20230052418 · 2023-02-16 ·

In one example, a method performed by a processing system including at least one processor includes rendering an extended reality environment, monitoring social interactions of a plurality of users within the extended reality environment, adjusting the extended reality environment in response to the social interactions of the plurality of users, and adjusting a rule associated with the extended reality environment in response to the adjustment of the extended reality environment.

SYSTEMS AND METHODS FOR CONVEYING VIBROTACTILE AND THERMAL SENSATIONS THROUGH A WEARABLE VIBROTHERMAL DISPLAY FOR SOCIO-EMOTIONAL COMMUNICATION

Various embodiments of a wearable haptic and thermal feedback display system are disclosed herein. In particular, the system includes an array of vibrotactile actuators and thermal units affixed to a flexible casing that can be worn around the forearm. The collocated vibrotactile and thermal stimulations could enable richer haptic communication due to better control over the generated patterns. In addition, the device can be wirelessly controlled using a smartphone, supporting its applicability to long-distance haptic communication.

DYNAMIC WIDGET PLACEMENT WITHIN AN ARTIFICIAL REALITY DISPLAY
20230046155 · 2023-02-16 ·

The disclosed computer-implemented method may include (1) identifying a trigger element within a field of view presented by a display element of an artificial reality device, (2) determining a position of the trigger element within the field of view, (3) selecting a position within the field of view for a virtual widget based on the position of the trigger element, and (4) presenting the virtual widget at the selected position via the display element. Various other methods, systems, and computer-readable media are also disclosed.
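Steps (1)-(4) above amount to a simple placement policy. A minimal sketch of one such policy follows; all names (`FOV_W`, `place_widget`, the margin and fallback rules) are illustrative assumptions, not details from the patent.

```python
# Hypothetical widget-placement sketch: choose a position for a virtual
# widget based on the position of a detected trigger element, keeping the
# widget inside the field of view. All constants are illustrative.

FOV_W, FOV_H = 1920, 1080       # field-of-view extent in display pixels
WIDGET_W, WIDGET_H = 300, 200   # widget size
MARGIN = 20                     # gap between trigger and widget

def place_widget(trigger_x, trigger_y):
    """Pick a widget position beside the trigger element (step 3)."""
    # Prefer placing the widget to the right of the trigger; if that
    # would overflow the field of view, fall back to the left side.
    x = trigger_x + MARGIN
    if x + WIDGET_W > FOV_W:
        x = trigger_x - MARGIN - WIDGET_W
    # Vertically centre on the trigger, clamped to the display bounds.
    y = min(max(trigger_y - WIDGET_H // 2, 0), FOV_H - WIDGET_H)
    return x, y

print(place_widget(100, 50))    # trigger near the top-left corner
print(place_widget(1900, 900))  # trigger near the right edge
```

A production system would also avoid occluding other interface elements and could animate the widget toward the selected position (step 4).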

Multimode Haptic Patch and Multimodal Haptic Feedback Interface
20230053027 · 2023-02-16 ·

A multimodal haptic feedback interface installed with a multimode haptic patch stimulates a skin area of a user to provide a haptic feedback including first and second haptic-feedback components to be sensed under static tactile sensing and dynamic tactile sensing, respectively. The patch is mounted with mechanical actuators, electrostimulation electrodes and thermoelectric pellets. The actuators generate a two-dimensional pattern of pressure on the skin area for generating the first haptic-feedback component. The electrostimulation electrodes electrically stimulate the skin area, causing the user to feel a vibration or pressure for generating the second haptic-feedback component. Because the actuators and electrostimulation electrodes are each optimized only for static tactile sensing and dynamic tactile sensing, respectively, implementation cost is reduced while accuracy in haptic feedback generation is preserved. The thermoelectric pellets, realized as Peltier-effect heat pumps, generate a two-dimensional pattern of temperature change on the skin area for providing a thermal feedback to the user.

Prism based light redirection system for eye tracking systems

A head-mounted device (HMD) contains a display, an optics block, a redirection structure, and an eye tracking system. The display is configured to emit image light and provide it to an eye of a user. The optics block is configured to direct the emitted light in order to allow it to reach the eye. The eye tracking system contains a camera, an illumination source, and a controller. The camera is configured to capture image data using infrared light reflected from the eye. The controller is configured to use this image data to determine eye tracking information. The illumination source is configured to illuminate the eye with infrared light for the purpose of taking eye tracking measurements. The redirection structure is configured to direct infrared light reflected from the eye to the eye tracking system. In multiple embodiments, redirection structures may comprise prism arrays, lenses, liquid crystal layers, or grating structures.

Virtual and augmented reality signatures

A method implemented on a visual computing device to authenticate one or more users includes receiving a first three-dimensional pattern from a user. The first three-dimensional pattern is sent to a server computer. At a time of user authentication, a second three-dimensional pattern is received from the user. The second three-dimensional pattern is sent to the server computer. An indication is received from the server computer as to whether the first three-dimensional pattern matches the second three-dimensional pattern within a margin of error. When the first three-dimensional pattern matches the second three-dimensional pattern within the margin of error, the user is authenticated at the server computer. When the first three-dimensional pattern does not match the second three-dimensional pattern within the margin of error, the user is prevented from being authenticated at the server computer.
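The abstract leaves the matching criterion unspecified. One plausible interpretation is a distance test between corresponding points of the two traces; the sketch below uses root-mean-square distance as the error measure. The function names and the RMS criterion are assumptions for illustration, not the patent's method.

```python
# Illustrative matching step: compare an enrolled 3D pattern with a freshly
# captured one, authenticating only when they agree within a margin of error.
import math

def patterns_match(enrolled, candidate, margin=0.05):
    """Return True when two 3D point sequences match within `margin`.

    Uses root-mean-square distance between corresponding points; a real
    system would first align, resample and normalise the traces.
    """
    if len(enrolled) != len(candidate):
        return False
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(enrolled, candidate))
    rms = math.sqrt(sq / len(enrolled))
    return rms <= margin

signature = [(0.0, 0.0, 0.0), (0.1, 0.2, 0.0), (0.3, 0.2, 0.1)]
retry     = [(0.0, 0.01, 0.0), (0.1, 0.21, 0.0), (0.29, 0.2, 0.1)]
print(patterns_match(signature, retry))         # small deviation: True
print(patterns_match(signature, retry, 0.001))  # stricter margin: False
```

The margin of error trades false rejections against false acceptances, which is why the abstract leaves it as a tunable parameter at the server.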

System and method for an augmented reality goal assistant

A method for an augmented reality goal assistant is described. The method includes detecting an object associated with a behavioral goal of a user. The method also includes altering an appearance of the object based on the behavioral goal of the user. The method further includes displaying the altered appearance of the detected object on an augmented reality headset, such that the altered appearance of the detected object is modified based on the behavioral goal of the user.

Adaptive model updates for dynamic and static scenes

In one embodiment, a computing system may update a first 3D model of a region of an environment based on comparisons between the first 3D model and first depth measurements of the region generated during a first time period. The computing system may determine that the region is static by comparing the first 3D model to second depth measurements of the region generated during a second time period. In response to determining that the region is static, the computing system may detect whether the region changed after the second time period based on comparisons between a second 3D model of the region and third depth measurements of the region generated after the second time period, the second 3D model having a lower resolution than the first 3D model. In response to detecting a change in the region, the computing system may update the first 3D model of the region.
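The policy described above can be caricatured in a few lines: refresh the full-resolution model while the region is dynamic, then switch to comparing a cheaper low-resolution model against incoming depths to watch for change. The threshold, downsampling scheme and helper names below are illustrative assumptions.

```python
# Toy sketch of the adaptive update policy for static vs dynamic regions.

STATIC_EPS = 0.02  # max mean depth deviation (metres) to call a region static

def mean_abs_diff(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def downsample(depths, factor=2):
    """Low-resolution proxy model: keep every `factor`-th sample."""
    return depths[::factor]

def is_static(model, measurements, eps=STATIC_EPS):
    return mean_abs_diff(model, measurements) <= eps

# First period: build/refresh the full-resolution model from depth data.
model = [1.00, 1.50, 2.00, 2.50]
second = [1.01, 1.49, 2.00, 2.51]            # second-period depths
print(is_static(model, second))              # True: region judged static

# Afterwards, compare only the low-resolution model to new depths.
lowres = downsample(model)                   # cheaper change detector
third = [1.40, 2.60, 2.00, 2.50]             # something moved
print(is_static(lowres, downsample(third)))  # False: change detected
```

The saving comes from the last comparison: while a region stays static, only the low-resolution model is consulted, and the expensive full-resolution update runs again only once a change is detected.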

System and method for an interactive digitally rendered avatar of a subject person
11582424 · 2023-02-14 ·

A system and method for an interactive digitally rendered avatar of a subject person to participate in a web meeting is described. In one embodiment, the method includes receiving an invite to a web meeting on a video conferencing platform, wherein the invite identifies a subject person and the video conferencing platform. The method also includes generating an interactive avatar of the subject person based on a data collection associated with the subject person stored in a database. The method further includes instantiating a platform integrator associated with the video conferencing platform identified in the invite and joining, by the interactive avatar of the subject person, the web meeting on the video conferencing platform. The platform integrator transforms outputs and inputs between the video conferencing platform and an interactive digitally rendered avatar system so that the interactive avatar of the subject person participates in the web meeting.

3D user interface depth forgiveness
11579747 · 2023-02-14 ·

A head-worn device system includes one or more cameras, one or more display devices and one or more processors. The system also includes a memory storing instructions that, when executed by the one or more processors, configure the system to generate a virtual object, generate a virtual object collider for the virtual object, determine a conic collider for the virtual object, provide the virtual object to a user, detect a landmark on the user's hand in the real world, generate a landmark collider for the landmark, and determine a selection of the virtual object by the user based on detecting a collision between the landmark collider and both the conic collider and the virtual object collider.
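The "depth forgiveness" idea is that the fingertip need not touch the object's exact depth: selection fires when the landmark collider overlaps both the object's collider and a cone opening from the viewpoint toward the object. The sketch below models the landmark and object colliders as spheres and the conic collider as an infinite cone; all shapes, names and thresholds are illustrative assumptions, not the patent's geometry.

```python
# Geometric sketch of the two-collider selection test.
import math

def spheres_collide(c1, r1, c2, r2):
    """True when two spheres (centre, radius) overlap."""
    d2 = sum((a - b) ** 2 for a, b in zip(c1, c2))
    return d2 <= (r1 + r2) ** 2

def point_in_cone(p, apex, axis, half_angle_deg):
    """True when point p lies inside the cone from `apex` along `axis`."""
    v = [a - b for a, b in zip(p, apex)]
    vlen = math.sqrt(sum(x * x for x in v))
    alen = math.sqrt(sum(x * x for x in axis))
    if vlen == 0:
        return True
    cos_angle = sum(x * y for x, y in zip(v, axis)) / (vlen * alen)
    return cos_angle >= math.cos(math.radians(half_angle_deg))

def is_selected(landmark, obj_center, obj_radius, eye,
                half_angle_deg=10.0, landmark_radius=0.02):
    """Selection requires hitting BOTH the object collider and the cone."""
    axis = [a - b for a, b in zip(obj_center, eye)]
    return (spheres_collide(landmark, landmark_radius, obj_center, obj_radius)
            and point_in_cone(landmark, eye, axis, half_angle_deg))

eye = (0.0, 0.0, 0.0)
obj = (0.0, 0.0, 1.0)   # virtual object 1 m in front of the viewer
print(is_selected((0.0, 0.01, 0.95), obj, 0.1, eye))  # fingertip near object
print(is_selected((0.5, 0.0, 0.95), obj, 0.1, eye))   # off to the side
```

Making the object collider generous in depth while the cone stays narrow laterally is one way such a design could forgive depth-estimation error without making neighbouring objects easy to select by accident.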