Patent classifications
G06F3/0304
System and Method for Authoring Freehand Interactive Augmented Reality Applications
An augmented reality (AR) application authoring system is disclosed. The AR application authoring system enables the real-time creation of AR applications driven by freehand inputs. The system enables intuitive authoring of customized freehand gesture inputs through embodied demonstration, using the surrounding environment as a contextual reference. A visual programming interface is provided with which users can define freehand interactions by matching freehand gestures with reactions of virtual AR assets. Thus, users can create personalized freehand interactions through simple trigger-action programming logic. Further, with the support of a real-time hand gesture detection algorithm, users can seamlessly test and iterate on the authored AR experience.
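The trigger-action programming logic described in this abstract can be pictured as a mapping from recognized gesture labels to reactions of virtual assets. The following minimal Python sketch is illustrative only; all class and method names are hypothetical and do not come from the patent:

```python
# Hypothetical sketch of trigger-action authoring: a recognized
# freehand gesture (trigger) fires bound reactions of AR assets.

class ARAsset:
    """A stand-in for a virtual AR asset with a toggleable state."""
    def __init__(self, name):
        self.name = name
        self.visible = True

    def toggle_visibility(self):
        self.visible = not self.visible

class TriggerActionAuthoring:
    """Maps gesture labels to lists of reaction callbacks."""
    def __init__(self):
        self.rules = {}

    def bind(self, gesture_label, reaction):
        # Authoring step: match a gesture with an asset reaction.
        self.rules.setdefault(gesture_label, []).append(reaction)

    def on_gesture_detected(self, gesture_label):
        # Runtime step: a detected gesture triggers its bound reactions.
        for reaction in self.rules.get(gesture_label, []):
            reaction()

lamp = ARAsset("virtual_lamp")
author = TriggerActionAuthoring()
author.bind("pinch", lamp.toggle_visibility)
author.on_gesture_detected("pinch")  # lamp visibility is toggled off
```

The same structure supports iterative testing: rebinding a gesture label changes the authored behavior without restarting the detection loop.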
Protection of and access to data on computing devices
Techniques for changing the presentation of information on a user interface based on presence are described. In an example, a computer system determines, based on an image sensor associated with the system, a first presence of a first user relative to a computing device. The computer system also determines an identifier of the first user. The identifier is associated with operating the computing device. The operating comprises a presentation of the user interface by the computing device. The computer system also determines, based on the image sensor, a second presence of a second person relative to the computing device. The computer system causes an update to the user interface based on the second presence.
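A simple way to picture the described update policy is a function from the two detected presences to a UI state change. This sketch is a hypothetical policy, not the patent's actual logic; the field names are invented for illustration:

```python
def update_user_interface(owner_present, second_person_present, ui_state):
    """Hypothetical presence-based UI policy: hide sensitive content
    when a second person is detected while the identified user is
    operating the device; lock when the owner is absent."""
    if owner_present and second_person_present:
        ui_state["sensitive_visible"] = False
    elif owner_present:
        ui_state["sensitive_visible"] = True
    else:
        ui_state["locked"] = True
    return ui_state
```

In practice the presence determinations would come from an image-sensor pipeline; here they are reduced to booleans to show only the update rule.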
MONITORING
A method comprising: automatically processing recorded first sensor data from a scene to automatically recognise a first user input from user action in the scene; and, in response to recognition of the first user input, automatically entering a learning state to enable: automatic processing of the first sensor data from the scene to capture an ad-hoc sequence of spatial events in the scene subsequent to the first user input; and automatic processing of subsequently recorded second sensor data from the scene, different from the first sensor data, to automatically recognise a sequence of spatial events in the second sensor data corresponding to the captured sequence of spatial events.
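The claim describes a small state machine: idle until a trigger input is recognised, then learning (capturing a sequence of events), then watching for that sequence in later data. A minimal sketch under those assumptions, with events reduced to labels:

```python
class MonitoringSystem:
    """Hypothetical sketch of the learning-state flow: recognise a
    trigger input, capture the subsequent ad-hoc event sequence, then
    match that sequence against later sensor data."""
    IDLE, LEARNING, WATCHING = range(3)

    def __init__(self, trigger_input):
        self.trigger = trigger_input
        self.state = self.IDLE
        self.learned_sequence = []

    def process_first_data(self, events):
        for event in events:
            if self.state == self.IDLE and event == self.trigger:
                self.state = self.LEARNING   # first user input recognised
            elif self.state == self.LEARNING:
                self.learned_sequence.append(event)  # capture sequence
        self.state = self.WATCHING

    def matches(self, second_data_events):
        # Recognise the captured sequence in subsequently recorded data.
        n = len(self.learned_sequence)
        return any(second_data_events[i:i + n] == self.learned_sequence
                   for i in range(len(second_data_events) - n + 1))
```

Real sensor data would require tolerant matching (timing, spatial variance) rather than exact list equality; the sketch only shows the control flow.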
Method for Adjusting Screen Displaying Direction and Terminal
Provided are a method for adjusting a screen display direction and a terminal. The method includes the following. At least one authority fingerprint is acquired, where an authority fingerprint is a fingerprint authorized to change the screen display direction. A first fingerprint is acquired. When the first fingerprint is an authority fingerprint, an input direction of the first fingerprint is acquired, and the screen display direction is adjusted according to the input direction of the first fingerprint. If the first fingerprint is acquired again, the duration for which the first fingerprint is acquired is recorded, and the size of the screen display area is adjusted according to that duration.
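The core authorization-plus-direction check can be sketched as a small function. The direction-to-orientation mapping below is an assumption for illustration; the patent does not specify concrete angles:

```python
def adjust_display(fingerprint, authority_fingerprints, input_direction,
                   current_orientation):
    """Hypothetical sketch: only a fingerprint with rotation authority
    may change the screen display direction; the new orientation is
    taken from the swipe direction of the fingerprint input."""
    if fingerprint not in authority_fingerprints:
        return current_orientation  # no authority: orientation unchanged
    direction_to_orientation = {"up": 0, "right": 90,
                                "down": 180, "left": 270}
    return direction_to_orientation.get(input_direction,
                                        current_orientation)
```

The repeated-acquisition step (resizing the display area by touch duration) would follow the same gate: check authority first, then map the recorded duration to a scale factor.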
DEVICES AND METHODS FOR GENERATING INPUT
Devices and methods are disclosed for generating input. In one implementation, a stylus is provided for generating writing input. The stylus includes an elongated body having a distal end, and a light source configured to project coherent light on an opposing surface adjacent the distal end. The stylus further includes at least one sensor configured to measure first reflections of the coherent light from the opposing surface while the distal end moves in contact with the opposing surface, and to measure second reflections of the coherent light from the opposing surface while the distal end moves above the opposing surface and out of contact with it. The stylus also includes at least one processor configured to receive input from the at least one sensor and to enable determining three-dimensional positions of the distal end based on the first reflections and the second reflections.
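One plausible way such reflection measurements become positions is dead reckoning: each measurement yields a small displacement of the tip, and summing displacements gives the 3-D trajectory. This is an assumption about the processing, not the patent's disclosed algorithm:

```python
def integrate_positions(displacements, start=(0.0, 0.0, 0.0)):
    """Dead-reckoning sketch (hypothetical): each reflection-derived
    measurement is a small (dx, dy, dz) displacement of the stylus tip;
    accumulating them yields the tip's three-dimensional trajectory."""
    x, y, z = start
    trajectory = []
    for dx, dy, dz in displacements:
        x, y, z = x + dx, y + dy, z + dz
        trajectory.append((x, y, z))
    return trajectory
```

In-contact motion would contribute mostly in-plane displacements (dz near zero), while above-surface motion adds the height component.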
ROBOT CONTROL USING GESTURES
A method and a device for operating a robot are provided. According to an example of the method, information of a first gesture is acquired from a group of gestures of an operator, each gesture from the group of gestures corresponding to an operation instruction from a group of operation instructions. A first operation instruction from the group of operation instructions is obtained based on the acquired information of the first gesture, the first operation instruction corresponding to the first gesture. The first operation instruction is executed.
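The gesture-to-instruction correspondence described here is naturally a lookup table. The gesture labels and instruction names below are invented for illustration:

```python
# Hypothetical group of gestures, each corresponding to an
# operation instruction from a group of operation instructions.
GESTURE_TO_INSTRUCTION = {
    "open_palm": "stop",
    "fist": "grip",
    "point_left": "move_left",
}

def handle_gesture(gesture, execute):
    """Obtain the operation instruction corresponding to the acquired
    gesture, then execute it via the supplied callback."""
    instruction = GESTURE_TO_INSTRUCTION.get(gesture)
    if instruction is not None:
        execute(instruction)
    return instruction
```

Unrecognized gestures simply produce no instruction, which keeps the robot in its current state rather than executing a default action.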
INTERACTION WITH VIRTUAL OBJECTS BASED ON DETERMINED RESTRICTIONS
Motion and/or rotation of an input mechanism can be tracked and/or analyzed to determine limits on a user's range of motion and/or range of rotation in three-dimensional space. The user's range of motion and/or range of rotation in three-dimensional space may be limited by a personal restriction for the user (e.g., a broken arm). The user's range of motion and/or range of rotation may additionally or alternatively be limited by an environmental restriction (e.g., a physical object in a room). Accordingly, the techniques described herein can take steps to accommodate the personal restriction and/or the environmental restriction, thereby optimizing user interactions involving the input mechanism and a virtual object.
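One simple accommodation is to clamp a requested interaction point to the determined range of motion. The axis-aligned box below is a simplifying assumption; a real system would model the reachable volume more precisely:

```python
def clamp_to_range(target, lower, upper):
    """Hypothetical accommodation step: clamp a requested 3-D
    interaction point to the user's determined range of motion,
    modeled here as an axis-aligned box (lower/upper corners)."""
    return tuple(min(max(t, lo), hi)
                 for t, lo, hi in zip(target, lower, upper))
```

A virtual object could then be repositioned to the clamped point so it stays reachable despite a personal or environmental restriction.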
Using HMD Camera Touch Button to Render Images of a User Captured During Game Play
Methods and systems for presenting an image of a user interacting with a video game include providing images of a virtual reality (VR) scene of the video game for rendering on a display screen of a head mounted display (HMD). The images of the VR scene are generated as part of game play of the video game. An input received at a user interface on the HMD during game play is used to initiate a signal to pause the video game and to generate an activation signal that activates an image capturing device. The activation signal causes the image capturing device to capture an image of the user interacting in a physical space. The image of the user captured during game play is associated with the portion of the video game that corresponds with the time when the image was captured. The association causes the image of the user to be transmitted to the HMD for rendering on its display screen.
Augmenting Virtual Reality Content With Real World Content
Methods, devices, and computer programs for augmenting a virtual reality scene with real world content are provided. One example method includes an operation for obtaining sensor data from an HMD of a user to determine that criteria are met to overlay one or more real world objects into the virtual reality scene, providing an augmented virtual reality scene. In certain examples, the criteria correspond to predetermined indicators suggestive of disorientation of a user wearing the HMD while being presented a virtual reality scene. In certain other examples, the one or more real world objects are selected based on their effectiveness at reorienting a disoriented user.
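The criteria check can be pictured as a predicate over sensor data. The specific indicators and thresholds below (head motion, heart rate) are assumptions chosen as plausible disorientation signals, not indicators named by the patent:

```python
def should_overlay_real_world(sensor_sample, thresholds):
    """Hypothetical disorientation criteria: abrupt head motion or an
    elevated heart rate while a VR scene is being presented triggers
    the overlay of real world objects into the scene."""
    return (sensor_sample["head_angular_speed"] > thresholds["angular"]
            or sensor_sample["heart_rate"] > thresholds["heart_rate"])
```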
Dynamic Entering and Leaving of Virtual-Reality Environments Navigated by Different HMD Users
Systems and methods for processing operations for head mounted display (HMD) users to join virtual reality (VR) scenes are provided. A computer-implemented method includes providing a first perspective of a VR scene to a first HMD of a first user and receiving an indication that a second user is requesting to join the VR scene provided to the first HMD. The method further includes obtaining real-world position and orientation data of the second HMD relative to the first HMD and then providing, based on said data, a second perspective of the VR scene. The method also provides that the first and second perspectives are each controlled by respective position and orientation changes while viewing the VR scene.