G06F3/04815

Augmented reality dynamic authentication

A system for performing authorization of a user in an augmented reality environment comprises an augmented reality user device, an automatic teller machine, and an authentication server. The automatic teller machine has a keypad with unmarked buttons. The augmented reality user device includes a display configured to overlay virtual objects onto a field of view of a user. The augmented reality user device receives a virtual keypad overlay, which assigns values to the unmarked buttons of the keypad. Using the overlay, the augmented reality user device displays the assigned values on the buttons of the keypad. The automatic teller machine detects an input sequence entered on the keypad and sends the input sequence to the authentication server. The authentication server determines an authentication code by combining the input sequence with the virtual keypad overlay, and compares the determined authentication code with an authentication code stored in a database.
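The overlay scheme described above can be sketched as follows. The function names, the representation of the overlay as a shuffled digit list, and the example PIN are illustrative assumptions, not details taken from the patent:

```python
import hmac
import secrets

def generate_overlay():
    """Randomly assign the digit values 0-9 to the ten unmarked buttons."""
    digits = list("0123456789")
    # SystemRandom().shuffle gives a cryptographically strong permutation
    secrets.SystemRandom().shuffle(digits)
    return digits  # overlay[button_index] -> digit displayed on that button

def recover_code(input_sequence, overlay):
    """Combine the raw button presses with the overlay to recover the code."""
    return "".join(overlay[i] for i in input_sequence)

def authenticate(input_sequence, overlay, stored_code):
    """Compare the recovered code with the stored authentication code."""
    candidate = recover_code(input_sequence, overlay)
    return hmac.compare_digest(candidate, stored_code)

# Example: the AR device shows "4" on button 0, "7" on button 1, "1" on
# button 2, so a user whose PIN is "4711" presses buttons 0, 1, 2, 2.
overlay = ["4", "7", "1", "0", "9", "2", "5", "8", "3", "6"]
presses = [0, 1, 2, 2]  # raw button indices reported by the ATM
print(recover_code(presses, overlay))          # -> "4711"
print(authenticate(presses, overlay, "4711"))  # -> True
```

Because the ATM only ever sees the raw button indices, an observer shoulder-surfing the keypad learns nothing about the PIN without also seeing the overlay rendered in the user's headset.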

Viewpoint Navigation Control for Three-Dimensional Visualization Using Two-Dimensional Layouts
20180011620 · 2018-01-11

Systems and methods for supplying an open interface (e.g., web pages) for viewpoint navigation control of a three-dimensional (3-D) visualization of an object that is simple to create and fast and easy to use. This viewpoint navigation control application allows users to control the viewpoint in a 3-D environment by interacting with (e.g., clicking on) a 2-D hyperlink layout within a web browser (or other 2-D viewer with hyperlink capability). Position and orientation data for a selected viewpoint are transferred as part of a command message sent to the 3-D visualization application through an application programming interface when users select a hyperlink from a web page displayed by the 2-D layout application. The 3-D visualization application then retrieves data and displays a view of at least a portion of the 3-D model of the object with the predefined viewpoint specified in the command message.
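One way to read a viewpoint out of such a hyperlink and package it as a command message for the 3-D visualization application might look like the sketch below. The URL scheme and the parameter names (x, y, z for position; yaw, pitch, roll for orientation) are assumptions for illustration, not names from the patent:

```python
from urllib.parse import urlparse, parse_qs

def viewpoint_command(url):
    """Turn a hyperlink from the 2-D layout into a viewpoint command message.

    Assumes the hyperlink carries the selected viewpoint's position and
    orientation as query parameters (illustrative parameter names).
    """
    query = parse_qs(urlparse(url).query)
    get = lambda key: float(query[key][0])
    return {
        "command": "set_viewpoint",
        "position": (get("x"), get("y"), get("z")),
        "orientation": (get("yaw"), get("pitch"), get("roll")),
    }

# Clicking a hyperlink in the web page produces a command message:
msg = viewpoint_command("app://view?x=1.5&y=0&z=-3&yaw=90&pitch=0&roll=0")
print(msg["position"])     # -> (1.5, 0.0, -3.0)
print(msg["orientation"])  # -> (90.0, 0.0, 0.0)
```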

Roll turning and tap turning for virtual reality environments

Technologies are described for providing turning in virtual reality environments. For example, some implementations use roll turning that involves rotating around an outer edge of a control input, some implementations use tap turning to move directly to a location indicated by a control movement, and some implementations involve combinations of roll turning and tap turning.
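A minimal sketch of the two turning styles, assuming a circular control whose touch positions are reported as (x, y) coordinates; the angle conventions are illustrative assumptions:

```python
import math

def roll_turn(prev_xy, cur_xy, yaw):
    """Roll turning: dragging around the outer edge of the control
    changes the user's yaw by the swept angle.
    (Wrap-around across the +/-180 degree boundary is not handled here.)"""
    a0 = math.atan2(prev_xy[1], prev_xy[0])
    a1 = math.atan2(cur_xy[1], cur_xy[0])
    return (yaw + math.degrees(a1 - a0)) % 360

def tap_turn(tap_xy):
    """Tap turning: snap the user's facing directly to the tapped
    direction (0 degrees = forward, 90 degrees = right)."""
    return math.degrees(math.atan2(tap_xy[0], tap_xy[1])) % 360

# Rolling a quarter turn around the control's edge:
print(roll_turn((1, 0), (0, 1), yaw=0))  # -> 90.0
# Tapping at the control's right edge:
print(tap_turn((1, 0)))                  # -> 90.0
```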

Providing a first person view in a virtual world using a lens

An interactive virtual world having avatars. Scenes in the virtual world as seen by the eyes of the avatars are presented on user devices controlling the avatars. In one approach, a method includes identifying a location of an avatar in a virtual world, and a point of gaze of the avatar; adjusting, based on the point of gaze, a lens that directs available light received by the lens so that the lens can focus on objects at all distances; collecting, using the adjusted lens, image data; and generating a scene of the virtual world as seen by the avatar, the scene based on the collected image data, the location of the avatar, and the point of gaze of the avatar.
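The gaze-driven steps might be sketched as below. The lens parameters, and the idea of achieving focus at all distances via a small aperture (large depth of field), are illustrative assumptions rather than the patent's mechanism:

```python
import math

def view_direction(avatar_pos, gaze_point):
    """Unit vector from the avatar's location toward its point of gaze."""
    delta = [g - a for g, a in zip(gaze_point, avatar_pos)]
    norm = math.sqrt(sum(c * c for c in delta))
    return tuple(c / norm for c in delta)

def adjust_lens(gaze_distance):
    """Adjust the lens based on the point of gaze: focus at the gaze
    distance with a small aperture so objects at all distances stay
    sharp.  The numbers are placeholders."""
    return {"focus_distance": gaze_distance, "aperture": 0.5}

avatar = (0.0, 0.0, 0.0)
gaze = (3.0, 0.0, 4.0)
print(view_direction(avatar, gaze))  # -> (0.6, 0.0, 0.8)
print(adjust_lens(gaze_distance=5.0))
```

The scene presented to the controlling user device would then be rendered from `avatar`, along the computed view direction, using image data collected through the adjusted lens.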

Visual tracking system and method
11711880 · 2023-07-25

The present invention is directed to a user-operated spotlight system and method for lighting a performer on a stage or performance space. The system comprises a screen that displays an image of the stage and a cursor, a cursor positioner operated to move the cursor on the screen, a processor connected to the screen, and a plurality of controllable spotlights connected to the processor that can be moved by the user moving the cursor on the screen. The advantage of such a system is that a single user can operate a plurality of spotlights.
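Mapping a single cursor position to aiming angles for every spotlight could be sketched as follows, assuming a flat stage plane and lights hung at known positions; the geometry and angle conventions are illustrative, not the patented method:

```python
import math

def aim_spotlight(light_pos, target_xy):
    """Compute (pan, tilt) in degrees that point one spotlight at a
    stage position.  The stage is the z = 0 plane; light_pos is
    (x, y, h) with the light hung at height h; tilt is the down-angle."""
    dx = target_xy[0] - light_pos[0]
    dy = target_xy[1] - light_pos[1]
    pan = math.degrees(math.atan2(dy, dx))
    tilt = math.degrees(math.atan2(light_pos[2], math.hypot(dx, dy)))
    return pan, tilt

def aim_all(lights, cursor_stage_xy):
    """One cursor move steers every controllable spotlight at once."""
    return [aim_spotlight(pos, cursor_stage_xy) for pos in lights]

# Two lights hung 5 m above the stage, tracking one cursor position:
lights = [(0.0, 0.0, 5.0), (10.0, 0.0, 5.0)]
print(aim_all(lights, (5.0, 5.0)))
```

In a full system, the cursor's pixel position on the stage image would first be projected into stage coordinates (e.g., via a calibrated homography) before being handed to `aim_all`.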

Presentation of an enriched view of a physical setting

In one implementation, a non-transitory computer-readable storage medium stores program instructions computer-executable on a computer to perform operations. The operations include presenting, on a display of an electronic device, first content representing a standard view of a physical setting depicted in image data generated by an image sensor of the electronic device. While presenting the first content, an interaction with an input device of the electronic device is detected that is indicative of a request to present an enriched view of the physical setting. In accordance with detecting the interaction, second content is formed representing the enriched view of the physical setting by applying an enrichment effect that alters or supplements the image data generated by the image sensor. The second content representing the enriched view of the physical setting is presented on the display.
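A toy version of forming the second content by applying an enrichment effect to the image data; the brightening effect and the luminance-grid image representation are placeholder assumptions:

```python
def enrich(image, gain=2.0):
    """An example enrichment effect: brighten the image data (here a
    grid of 0-255 luminance values), clamping to the valid range."""
    return [[min(255, int(px * gain)) for px in row] for row in image]

def present(image, enriched_requested):
    """Present the standard view, or the enriched view when the
    detected interaction requested it."""
    return enrich(image) if enriched_requested else image

frame = [[10, 200], [128, 255]]
print(present(frame, enriched_requested=False))  # -> [[10, 200], [128, 255]]
print(present(frame, enriched_requested=True))   # -> [[20, 255], [255, 255]]
```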

Generating in-app guided edits including concise instructions and coachmarks
11709690 · 2023-07-25

The present disclosure relates to systems, methods, and non-transitory computer readable media for generating coachmarks and concise instructions based on operation descriptions for performing application operations. For example, the disclosed systems can utilize a multi-task summarization neural network to analyze an operation description and generate a coachmark and a concise instruction corresponding to the operation description. In addition, the disclosed systems can provide a coachmark and a concise instruction for display within a user interface to, directly within a client application, guide a user to perform an operation by interacting with a particular user interface element.
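A heavily simplified stand-in for this pipeline: where the real system uses a multi-task summarization neural network, the sketch below substitutes a word-truncation heuristic purely to show the two outputs (a short coachmark label and a longer concise instruction). All names and the word limits are hypothetical:

```python
def summarize(operation_description, max_words):
    """Stand-in for the multi-task summarization model: keep the first
    max_words words.  Only the two output lengths are the point here."""
    return " ".join(operation_description.split()[:max_words])

def guided_edit(operation_description):
    """Produce the two artifacts shown in the client UI: a coachmark
    label anchored to a UI element, and a concise instruction."""
    return {
        "coachmark": summarize(operation_description, max_words=3),
        "instruction": summarize(operation_description, max_words=8),
    }

desc = ("Open the Layers panel, select the background layer, "
        "and click the lock icon to unlock it for editing.")
print(guided_edit(desc)["coachmark"])  # -> "Open the Layers"
```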