Providing a first person view in a virtual world using a lens

An interactive virtual world having avatars. Scenes in the virtual world as seen by the eyes of the avatars are presented on user devices controlling the avatars. In one approach, a method includes identifying a location of an avatar in a virtual world, and a point of gaze of the avatar; adjusting, based on the point of gaze, a lens that directs available light received by the lens so that the lens can focus on objects at all distances; collecting, using the adjusted lens, image data; and generating a scene of the virtual world as seen by the avatar, the scene based on the collected image data, the location of the avatar, and the point of gaze of the avatar.
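The claimed steps (identify location and gaze, adjust a lens, collect image data, generate the scene) could be sketched as follows; the `Avatar` class, the pinhole-style lens model, and all field names are illustrative assumptions, not details from the abstract:

```python
from dataclasses import dataclass

@dataclass
class Avatar:
    location: tuple  # (x, y, z) position of the avatar in the virtual world
    gaze: tuple      # (x, y, z) point of gaze of the avatar

def adjust_lens(gaze, location):
    """Adjust a lens, based on the point of gaze, so it focuses at all distances.

    A pinhole-style lens (zero aperture) keeps objects at every distance in
    focus, so the adjustment reduces to orienting it along the gaze vector.
    """
    direction = tuple(g - l for g, l in zip(gaze, location))
    return {"aperture": 0.0, "direction": direction}

def render_scene(avatar):
    """Generate the scene as seen by the avatar from the collected image data."""
    lens = adjust_lens(avatar.gaze, avatar.location)
    image_data = {"lens": lens}  # stand-in for samples collected via the lens
    return {
        "image": image_data,
        "location": avatar.location,
        "gaze": avatar.gaze,
    }
```
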

GAME PROCESSING SYSTEM, METHOD OF PROCESSING GAME, AND STORAGE MEDIUM STORING PROGRAM FOR PROCESSING GAME
20230233930 · 2023-07-27 ·

A game processing system for processing of a game that provides interaction with a virtual character according to one embodiment includes a storage that stores action data for specifying one or more actions of the virtual character, and one or more computer processors. The game includes a VR mode in which the game progresses in accordance with detection information obtained by a head mounted display. The one or more processors determine an action of the player performed toward the virtual character based on the detection information obtained by the head mounted display attached to the head of the player; cause the virtual character to interact with the player based on the action of the player; and suspend execution of the VR mode if a suspension condition is satisfied.
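One frame of such a loop might look like the sketch below; the action vocabulary, the 15-degree nod threshold, and HMD removal as the suspension condition are all assumptions for illustration:

```python
def process_vr_frame(detection_info, state):
    """Determine the player's action from HMD detection info, have the
    virtual character react, and suspend the VR mode when a suspension
    condition holds. Thresholds and action names are assumed values."""
    # Determine the action performed toward the character (e.g. a nod).
    action = "nod" if detection_info.get("head_pitch", 0) > 15 else "idle"
    # Character interaction keyed off the player's action.
    reaction = {"nod": "smile", "idle": "wait"}[action]
    # Suspension condition: here, the HMD being removed from the head.
    if detection_info.get("hmd_removed", False):
        state["vr_mode"] = "suspended"
    return action, reaction, state
```
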

SYSTEM AND METHOD FOR PROVIDING A COMPUTER-GENERATED ENVIRONMENT
20230001304 · 2023-01-05 ·

A method and system for transition between three dimensional computer-generated environments comprises: generating a first three dimensional computer-generated environment including a player avatar controllable by a user within that environment; displaying, within the first environment, a portion of a second three dimensional computer-generated environment at a location visible to the player avatar; accepting interaction by the player avatar with the displayed portion of the second environment, indicating that the player avatar wishes to join the second environment; and causing the player avatar to transition from the first environment to the second environment in response to the interaction.
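Under assumed data structures, the portal-style transition could be sketched as:

```python
class Environment:
    """Toy stand-in for a 3D computer-generated environment."""
    def __init__(self, name):
        self.name = name
        self.avatars = set()
        self.portals = {}  # portal id -> destination Environment

def add_portal(src, dst, portal_id):
    """Display a 'portion' of dst inside src at a location the avatar can see."""
    src.portals[portal_id] = dst

def interact_with_portal(avatar, src, portal_id):
    """Transition the avatar to the other environment when it interacts
    with the displayed portion."""
    dst = src.portals[portal_id]
    src.avatars.discard(avatar)
    dst.avatars.add(avatar)
    return dst
```
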

VIRTUAL EXPO BOOTH PREVIEWS

One example method for virtual expo booth previews includes joining a virtual expo hosted by a video conference provider, the virtual expo including a plurality of virtual expo booths; presenting a graphical representation of the virtual expo and one or more virtual expo booths of the plurality of virtual expo booths; receiving an input indicating a first virtual expo booth of the plurality of virtual expo booths; receiving, from the video conference provider, one or more multimedia streams associated with the first virtual expo booth; and presenting the one or more multimedia streams.
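From the client side, the flow could be sketched as below; the `provider` dictionary is a stand-in for the video conference provider's API, an assumption for illustration:

```python
def join_virtual_expo(provider):
    """Join the expo hosted by the provider and get its booth listing
    for the graphical representation."""
    return {"booths": provider["booths"]}

def request_booth_streams(expo, booth_id, provider):
    """Indicate a booth; the provider returns that booth's multimedia
    streams for presentation."""
    if booth_id not in expo["booths"]:
        raise KeyError(booth_id)
    return provider["streams"][booth_id]
```
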

CREATING VIDEO CONFERENCE EXPOS
20230239330 · 2023-07-27 ·

One example method includes receiving configuration information for a virtual expo, the configuration information including information associated with one or more virtual expo booths; generating a virtual expo floor based on the one or more virtual expo booths; establishing a virtual meeting comprising the virtual expo; receiving requests to join the virtual expo from a plurality of client devices; joining each client device of the plurality of client devices to the virtual meeting; providing, to each joined client device, information describing the virtual expo floor and locations of each of the one or more virtual expo booths; providing a location of a respective participant avatar on the virtual expo floor; receiving, from a first client device, a selection of a first virtual expo booth; and joining the first client device to a second virtual meeting associated with the first virtual expo booth.
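A server-side sketch of these steps, with the configuration layout and meeting identifiers assumed for illustration:

```python
def create_expo(config):
    """Build the expo floor from the booth configuration information."""
    booths = {b["id"]: b for b in config["booths"]}
    return {"floor": list(booths), "booths": booths, "clients": {}}

def join_expo(expo, client_id):
    """Join a client to the meeting and send it the floor layout and the
    location of each booth."""
    expo["clients"][client_id] = {"avatar_pos": (0, 0)}
    return {
        "floor": expo["floor"],
        "locations": {bid: b.get("location") for bid, b in expo["booths"].items()},
    }

def select_booth(expo, client_id, booth_id):
    """Move the selecting client into the booth's own (second) meeting."""
    expo["clients"][client_id]["meeting"] = expo["booths"][booth_id]["meeting"]
    return expo["clients"][client_id]["meeting"]
```
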

HMD transitions for focusing on specific content in virtual-reality environments

Methods and systems for presenting an object on a screen of a head mounted display (HMD) include receiving an image of a real-world environment in proximity of a user wearing the HMD. The image is received from one or more forward facing cameras of the HMD and processed for rendering on a screen of the HMD by a processor within the HMD. A gaze direction of the user wearing the HMD is detected using one or more gaze detecting cameras of the HMD that are directed toward one or both eyes of the user. Images captured by the forward facing cameras are analyzed to identify an object captured in the real-world environment that is in line with the gaze direction of the user, wherein the image of the object is rendered at a first virtual distance that causes the object to appear out-of-focus when presented to the user. A signal is generated to adjust a zoom factor for a lens of the one or more forward facing cameras so as to bring the object into focus. The adjustment of the zoom factor causes the image of the object to be presented on the screen of the HMD at a second virtual distance that allows the object to be discernible by the user.
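The gaze-alignment and zoom computation might be sketched as follows; the dot-product alignment test and the apparent-size threshold are simplified assumptions, not the patent's method:

```python
def zoom_for_focus(gaze_direction, objects, min_apparent_size=0.05):
    """Pick the object in line with the gaze direction and compute a zoom
    factor that brings it to a discernible apparent size. Geometry is
    simplified to 2D; the size threshold is an assumed value."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Object whose direction best matches the gaze (largest dot product).
    target = max(objects, key=lambda o: dot(o["direction"], gaze_direction))
    # Apparent (angular) size shrinks with distance; zoom in only if too small.
    apparent = target["size"] / target["distance"]
    zoom = max(1.0, min_apparent_size / apparent)
    return target, zoom
```
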

SERVER SYSTEM AND SYSTEM

A computer system controls the start of a gathering event that gathers the user characters of participating users at a given gathering place in a virtual space upon acceptance of a gathering request operation from the terminal of a directing user. The computer system displays, on the terminals of the directing user and the participating users, a map image showing the gathering place in the virtual space in an identifiable display mode. The computer system determines the success or failure of the gathering event using the positions of the user characters operated by the participating users, and gives a given reward to the directing user and/or the participating users when the gathering event is determined to be a success.
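One way the position-based success determination could work is sketched below; the proximity-radius rule and the reward amounts are assumptions (the abstract leaves the concrete criterion open):

```python
import math

def gathering_success(gathering_place, participant_positions, radius=5.0):
    """Success when every participating user's character is within `radius`
    of the gathering place (an assumed rule for illustration)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return all(dist(p, gathering_place) <= radius for p in participant_positions)

def settle_event(gathering_place, participant_positions, reward=100):
    """Give the reward to the directing and participating users on success."""
    if gathering_success(gathering_place, participant_positions):
        return {"director": reward, "participants": reward}
    return {"director": 0, "participants": 0}
```
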

AUGMENTED REALITY PLACEMENT FOR USER FEEDBACK
20230021433 · 2023-01-26 ·

Methods and systems are provided for generating augmented reality (AR) scenes where the AR scenes include one or more artificial intelligence elements (AIEs) that are rendered as visual objects in the AR scenes. The method includes generating an AR scene for rendering on a display; the AR scene includes a real-world space and virtual objects projected in the real-world space. The method includes analyzing a field of view into the AR scene; the analyzing is configured to detect an action by a hand of the user when reaching into the AR scene. The method includes generating one or more AIEs rendered as virtual objects in the AR scene, each AIE is configured to provide a dynamic interface that is selectable by a gesture of the hand of the user. In one embodiment, each of the AIEs is rendered proximate to a real-world object present in the real-world space; the real-world object is located in a direction of where the hand of the user is detected to be reaching when the user makes the action by the hand.
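The placement step, anchoring AIEs near the real-world object the hand is reaching toward, could be sketched as below; the 2D vector math and the alignment threshold are simplifications assumed for illustration:

```python
def place_aies(hand_position, reach_direction, real_objects, min_cos=0.9):
    """Render an AIE proximate to each real-world object that lies in the
    direction the user's hand is detected to be reaching."""
    def norm(v):
        mag = (v[0] ** 2 + v[1] ** 2) ** 0.5
        return (v[0] / mag, v[1] / mag)

    placed = []
    d = norm(reach_direction)
    for obj in real_objects:
        to_obj = norm((obj["pos"][0] - hand_position[0],
                       obj["pos"][1] - hand_position[1]))
        cos = to_obj[0] * d[0] + to_obj[1] * d[1]
        if cos >= min_cos:  # object lies along the reach direction
            placed.append({"anchor": obj["name"], "pos": obj["pos"]})
    return placed
```
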

AUGMENTED REALITY ARTIFICIAL INTELLIGENCE ENHANCE WAYS USER PERCEIVE THEMSELVES
20230025585 · 2023-01-26 ·

Methods and systems are provided for generating augmented reality (AR) scenes where the AR scenes can be adjusted to modify at least part of an image of the physical features of a user to produce a virtual mesh of the physical features. The method includes generating an augmented reality (AR) scene for rendering on a display for a user wearing AR glasses; the AR scene includes a real-world space and virtual objects overlaid in the real-world space. The method includes analyzing a field of view into the AR scene from the AR glasses; the analyzing is configured to detect images of physical features of the user when the field of view is directed toward at least part of said physical features. The method includes adjusting the AR scene, in substantially real time, to modify at least part of the images of the physical features of the user when those features are detected to be in the AR scene as viewed from the field of view of the AR glasses, wherein said modifying includes detecting depth data and original texture data from said physical features to produce a virtual mesh of said physical features; the virtual mesh is changed in size and shape and rendered using modified texture data that blends with said original texture data. In one embodiment, the modified physical features of the user appear to the user, when viewed via the AR glasses, as existing in the real-world space. In this way, when the physical features of a user are detected to be in the AR scene, they are augmented in the AR scene, which can improve the user's self-perception and, in turn, give the user confidence to overcome challenging tasks or obstacles during gameplay.
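The mesh-and-texture pipeline could be sketched as below; the toy one-vertex-per-depth-sample mesh, the scale factor, and the blending weight are all illustrative assumptions:

```python
def modify_features(depth_map, texture, scale=1.1, blend=0.5):
    """Build a virtual mesh from depth data, change its size/shape, and
    render it with texture that blends modified and original texture data.
    All numeric parameters are assumed values."""
    # Virtual mesh: one vertex per depth sample (toy representation),
    # changed in size and shape by a uniform scale.
    mesh = [d * scale for d in depth_map]
    # Modified texture (here, simply brightened), clamped to 8-bit range.
    modified_texture = [min(255, int(t * 1.2)) for t in texture]
    # Blend modified texture with the original so the edit looks seamless.
    blended = [int((1 - blend) * t + blend * m)
               for t, m in zip(texture, modified_texture)]
    return mesh, blended
```
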

HUMAN COMPUTER INTERACTION DEVICES
20230229237 · 2023-07-20 ·

Various examples are provided related to devices for human-computer interactions. In one example, a human computer interaction device includes a sensing platform with at least one inner zone and an outer zone. The sensing platform can provide control inputs to a computing device in response to detecting movement of a foot of a user on the sensing platform and can provide haptic feedback to the user in response to the detected movement. The haptic feedback can be provided via the at least one inner zone, the outer zone or a combination thereof.
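The zone-based mapping from foot movement to control input and haptic feedback might look like this sketch; the zone radii and input names are assumptions, not values from the abstract:

```python
def handle_footstep(x, y, inner_radius=0.3, outer_radius=1.0):
    """Map a detected foot position on the sensing platform to a control
    input for the computing device plus a haptic feedback zone."""
    r = (x * x + y * y) ** 0.5
    if r <= inner_radius:
        # Inner zone: fine movement, haptics delivered via the inner zone.
        return {"input": "fine_move", "haptics": "inner"}
    elif r <= outer_radius:
        # Outer zone: coarse movement, haptics delivered via the outer zone.
        return {"input": "coarse_move", "haptics": "outer"}
    return {"input": None, "haptics": None}  # foot is off the platform
```
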