Patent classifications
A63F13/53
Management of streaming video data
User action data characterizing action by a player in a game environment executing at a user client is received at a server. The game environment is created by the user client, separate from the server. Data characterizing a selected viewing position is received. The selected viewing position is different from the player viewing position and characterizes a viewing location within the game environment. A recreated game environment is generated from the user action data at the server. A video stream of the recreated game environment is generated, including video from the perspective of the selected viewing position. The video stream is transmitted to a viewing client. Related apparatus, systems, articles, and techniques are also described.
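The abstract describes a flow in which the server never receives video from the player: it receives only action data, deterministically recreates the game environment, and renders a stream from a spectator-chosen camera. A minimal sketch of that idea, with all names and structures hypothetical (a real engine would replay physics, AI, and scripted events from the same inputs):

```python
from dataclasses import dataclass, field

@dataclass
class GameState:
    # Minimal stand-in for a full game environment: player position only.
    player_pos: tuple = (0.0, 0.0)
    events: list = field(default_factory=list)

def apply_action(state: GameState, action: dict) -> GameState:
    # Deterministically replay one user action.
    if action["type"] == "move":
        x, y = state.player_pos
        dx, dy = action["delta"]
        state.player_pos = (x + dx, y + dy)
    state.events.append(action)
    return state

def recreate_environment(actions: list) -> GameState:
    # Server-side recreation: rebuild the environment purely from the
    # user action data, without receiving any video from the client.
    state = GameState()
    for action in actions:
        apply_action(state, action)
    return state

def render_frame(state: GameState, camera_pos: tuple) -> dict:
    # Placeholder "render": a real implementation would rasterize the scene
    # from camera_pos and encode the frame into the outgoing video stream.
    return {"camera": camera_pos, "player": state.player_pos}

actions = [{"type": "move", "delta": (1.0, 0.0)},
           {"type": "move", "delta": (0.0, 2.0)}]
state = recreate_environment(actions)
frame = render_frame(state, camera_pos=(5.0, 5.0))  # spectator view, not the player's
```

The key property is that `camera_pos` is independent of the player's viewpoint, so any number of viewing clients can each be served a stream from their own selected position.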
Method and apparatus for enabling multiple timeline support for omnidirectional content playback
A method, apparatus and computer program product enable multiple timeline support in playback of omnidirectional media content with an overlay. The method, apparatus and computer program product receive a visual overlay configured to be rendered as multi-layer visual content with an omnidirectional media content file (30). The omnidirectional media content file is associated with a first presentation timeline, and the visual overlay is associated with a second presentation timeline. The method, apparatus and computer program product construct an overlay behavior definition file associated with the visual overlay (32). The overlay behavior definition file indicates the behavior of the second presentation timeline with respect to the first presentation timeline in an instance in which a pre-defined user interaction switch occurs during playback of the omnidirectional media content file.
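The core idea, two presentation timelines where the overlay's timeline follows a declared behavior when a user-interaction switch occurs, can be sketched as follows. The field names and behavior values are illustrative, not the actual overlay definition syntax:

```python
# Hypothetical overlay behavior definition; the names and values are
# illustrative, not an actual file format.
overlay_behavior = {
    "overlay_id": "ad-banner-1",
    "on_user_interaction_switch": "pause_overlay_timeline",  # or "reset"
}

def resolve_timelines(main_t, overlay_t, definition, switch_occurred):
    """Return the (main, overlay) presentation times after one tick.

    The main (omnidirectional) timeline always advances; the overlay
    timeline follows the behavior declared in the definition file when
    a pre-defined user interaction switch occurs."""
    behavior = definition["on_user_interaction_switch"]
    if switch_occurred:
        if behavior == "pause_overlay_timeline":
            return main_t + 1, overlay_t          # overlay freezes
        if behavior == "reset":
            return main_t + 1, 0                  # overlay restarts
    return main_t + 1, overlay_t + 1              # both advance together
```

Because the behavior lives in a definition file rather than in player code, the same player can honor different overlay behaviors per content item.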
Systems and methods for virtual reality based driver training
Systems and methods for providing driver training in a virtual reality environment are disclosed. According to some aspects, an appropriate virtual reality driving simulation may be determined based on one or more input parameters provided by a user. The virtual reality driving simulation may include: (i) an instructional lesson, to be rendered in virtual reality, for teaching driving-related rules and/or skills to a user, and (ii) a driving scenario, to be rendered in virtual reality, for the user to practice the driving-related rules and/or skills taught by the instructional lesson. While the virtual reality driving simulation is rendered, user performance data may be recorded. Based on an analysis of the user performance data, a driving competency score and/or user feedback may be determined.
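The scoring step, analyzing recorded performance data to produce a competency score and feedback, might look like the following sketch. The event names and penalty weights are invented for illustration; a real trainer would calibrate them per lesson:

```python
def competency_score(performance_events, max_score=100):
    # Illustrative penalty weights, not values from the described system.
    penalties = {"speeding": 10, "missed_signal": 15, "lane_departure": 5}
    score = max_score
    feedback = []
    for event in performance_events:
        if event in penalties:
            score -= penalties[event]
            feedback.append(f"Avoid {event.replace('_', ' ')}.")
    return max(score, 0), feedback

# Events recorded while the VR driving scenario was rendered:
score, feedback = competency_score(["speeding", "lane_departure", "clean_merge"])
```

Events outside the penalty table (here, `clean_merge`) are ignored, and the score is floored at zero so repeated violations cannot produce a negative result.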
Guidance information relating to a target image
In some examples, an electronic device receives selection of a target image relating to an augmented reality presentation, displays, in a display screen of the electronic device, captured visual data of an environment acquired by the electronic device, and displays, in the display screen, guidance information relating to the target image to assist a user in finding a physical target, corresponding to the target image, in the captured visual data of the environment.
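One common form of such guidance is a directional hint computed from the device's current camera heading and the bearing of the physical target. A minimal sketch under that assumption (the function and thresholds are hypothetical):

```python
def guidance_hint(camera_heading_deg, target_bearing_deg):
    # Signed relative angle in (-180, 180]: positive means the target
    # lies to the right of the current view direction.
    diff = (target_bearing_deg - camera_heading_deg + 180) % 360 - 180
    if abs(diff) < 10:           # within a 10-degree "found it" cone
        return "target ahead"
    return "turn right" if diff > 0 else "turn left"
```

The modular arithmetic handles wraparound, so a camera heading of 350 degrees and a target bearing of 5 degrees correctly yields a small rightward angle rather than a large leftward one.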
RELEVANCY-BASED VIDEO HELP IN A VIDEO GAME
Techniques for improving a user's video game experience are described. In an example, a computer system accesses videos showing separate completions of an activity by a plurality of video game players. From a definition of the activity, it is determined that the activity is a parent of sub-activities. Links to video portions of the videos are generated, where each video portion corresponds to a sub-activity. A score is generated for each video portion based on its relevance to a user. The links are presented to the user in a user interface based on the score for each video portion; upon selection of a first link, the user interface displays a first video to the user starting at a first video portion showing a completion of a sub-activity by a video game player.
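The link-generation and scoring steps can be sketched as below. The data shapes and the relevance heuristic (rank portions for uncompleted sub-activities first) are assumptions for illustration, not the scoring method of the described system:

```python
def score_video_portions(activity, portions, user_progress):
    """Rank links to video portions by relevance to the user.

    activity:      {"name": ..., "sub_activities": [...]} (parent definition)
    portions:      {sub_activity: start_seconds} within a walkthrough video
    user_progress: set of sub-activities the user has already completed
    """
    links = []
    for sub in activity["sub_activities"]:
        if sub not in portions:
            continue
        # Simple heuristic: help the user with what they have not done yet.
        score = 1.0 if sub not in user_progress else 0.1
        links.append({"sub_activity": sub,
                      "start": portions[sub],
                      "score": score})
    return sorted(links, key=lambda l: l["score"], reverse=True)

activity = {"name": "raid", "sub_activities": ["boss1", "boss2"]}
links = score_video_portions(activity, {"boss1": 0, "boss2": 120}, {"boss1"})
```

Selecting a link would then seek the video to the stored `start` offset, so playback begins at the portion showing the relevant sub-activity rather than at the start of the full video.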
Reflective lens headset configuration detection
A system and method for detecting a condition of an augmented reality system and/or controlling an aspect of the augmented reality system.
Using HMD camera touch button to render images of a user captured during game play
Methods and systems for presenting an image of a user interacting with a video game include providing images of a virtual reality (VR) scene of the video game for rendering on a display screen of a head-mounted display (HMD). The images of the VR scene are generated as part of game play of the video game. An input received during game play at a user interface on the HMD initiates a signal to pause the video game and generates an activation signal to activate an image capturing device. The activation signal causes the image capturing device to capture an image of the user interacting in a physical space. The captured image is associated with the portion of the video game that corresponds to the time when the image was captured. The association causes the image of the user to be transmitted to the HMD for rendering on its display screen.
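The pause-capture-associate sequence can be sketched as a small event handler. All names here are hypothetical stand-ins for the HMD touch input, the external camera, and the game-play clock:

```python
def on_hmd_touch(game, camera, gameplay_time):
    """Handle a touch input on the HMD during game play."""
    # Pause first so the captured image lines up with one game moment.
    game["paused"] = True
    # Activation signal: trigger the image capturing device.
    image = camera()
    # Associate the capture with the portion of the game being played,
    # so it can later be rendered on the HMD at the matching scene.
    return {"image": image, "game_time": gameplay_time}

game = {"paused": False}
snapshot = on_hmd_touch(game,
                        camera=lambda: "jpeg-bytes",  # stand-in capture device
                        gameplay_time=42.5)
```

Storing the game-play timestamp alongside the image is what lets the system later show the user's real-world reaction next to the exact in-game moment that provoked it.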