G06F3/0426

Image projection device
11513637 · 2022-11-29

Provided is an image projection device that can correctly discern the content of a touch operation when a user performs various kinds of touch operations on an image projected on a projection screen. An imaging unit is adjusted to come into focus on the projection screen. An image data extracting unit extracts, from the image data obtained by the imaging unit, image data in which a finger or the like exists and is in focus. An operation determining unit determines the content of the operation performed with the finger or the like on the basis of the image data extracted by the image data extracting unit. An input control unit recognizes the content of an input instruction corresponding to the operation performed with the finger or the like, on the basis of data relating to the content of the operation, position data of the finger or the like, and reference data specifying the position and size of the image projected on the projection screen, and controls a projection unit in accordance with the recognized content of the input instruction.
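The focus-based extraction step can be sketched with a simple sharpness test: an in-focus fingertip patch has high local contrast (e.g., high Laplacian variance), while a fingertip hovering away from the focal plane of the screen is blurred and nearly flat. The function names and the threshold below are illustrative assumptions, not taken from the patent:

```python
def laplacian_variance(patch):
    """Approximate sharpness of a grayscale patch (list of lists of
    intensities) as the variance of a 4-neighbour Laplacian response."""
    h, w = len(patch), len(patch[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (patch[y - 1][x] + patch[y + 1][x] +
                   patch[y][x - 1] + patch[y][x + 1] - 4 * patch[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def finger_in_focus(patch, threshold=50.0):
    """A sharp (in-focus) patch has high Laplacian variance; an
    out-of-focus one is smooth. The threshold is a tuning knob."""
    return laplacian_variance(patch) >= threshold
```

Because the camera is focused on the projection screen itself, only a finger actually touching (or nearly touching) the screen passes this test, which is what lets the device distinguish touches from hovering.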

Display systems and methods for aligning different tracking means

A display system including a display apparatus, display-apparatus-tracking means, an input device, and a processor. The processor is configured to: detect an input event and identify an actionable area of the input device; process display-apparatus-tracking data to determine a pose of the display apparatus in a global coordinate space; process a first image to identify the input device and determine its relative pose with respect to the display apparatus; determine poses of the input device and the actionable area in the global coordinate space; process a second image to identify a user's hand and determine its relative pose with respect to the display apparatus; determine a pose of the hand in the global coordinate space; adjust the poses of the input device, the actionable area, and the hand so that the adjusted poses align with each other; process the first image to generate an extended-reality image in which a virtual representation of the hand is superimposed over a virtual representation of the actionable area; and render the extended-reality image.
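The core geometric step above is composing a pose measured relative to the display apparatus with the apparatus's own global pose to obtain a global pose. A minimal planar sketch (x, y, heading), assuming a real system would use full 6-DoF rigid transforms:

```python
import math

def compose(global_pose, relative_pose):
    """Transform a pose expressed in the device frame into the global
    coordinate space, given the device's global pose. Poses are
    (x, y, heading) tuples; a planar simplification of 6-DoF tracking."""
    gx, gy, gth = global_pose
    rx, ry, rth = relative_pose
    c, s = math.cos(gth), math.sin(gth)
    return (gx + c * rx - s * ry,
            gy + s * rx + c * ry,
            gth + rth)
```

Applying the same composition to both the camera-derived input-device pose and the camera-derived hand pose puts them in one coordinate space, after which small residual offsets between the two tracking chains can be corrected so the virtual hand lands on the virtual actionable area.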

Virtual Keyboard Interaction Method and System
20220365655 · 2022-11-17

The present disclosure provides a virtual keyboard interaction method and system. The method includes pre-training a fingertip detection model; acquiring, by using the fingertip detection model, three-dimensional spatial position coordinates, relative to a preset reference position, of all fingertips in image data to be detected; determining, based on the three-dimensional spatial position coordinates, touch control regions corresponding to the fingertips; in a case where a touch control region overlaps a sensing region of a preset virtual keyboard, acquiring volume information of the touch control region submerged in the sensing region; and determining, based on the volume information and a preset rule, whether the virtual keyboard where the sensing region corresponding to the touch control region is located is triggered.
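The volume-based trigger rule can be illustrated with a deliberately simple model: treat the fingertip's touch control region as a sphere and the key's sensing region as the half-space below its top plane, so the submerged volume is a spherical cap. The geometry, names, and 50% threshold below are assumptions for illustration, not the patent's preset rule:

```python
import math

def cap_volume(radius, h):
    """Volume of a spherical cap of height h cut from a sphere of the
    given radius (h is clamped to [0, 2*radius])."""
    h = max(0.0, min(h, 2 * radius))
    return math.pi * h * h * (3 * radius - h) / 3.0

def key_triggered(tip_center_z, tip_radius, key_top_z, volume_ratio=0.5):
    """Trigger the key when the fraction of the fingertip sphere
    submerged below the sensing region's top plane (z-up coordinates)
    reaches a preset ratio."""
    penetration = key_top_z - (tip_center_z - tip_radius)
    submerged = cap_volume(tip_radius, penetration)
    total = 4.0 / 3.0 * math.pi * tip_radius ** 3
    return submerged / total >= volume_ratio
```

Thresholding on submerged volume rather than on a single contact point makes the trigger robust to depth noise in the fingertip coordinates: a grazing overlap produces a small cap and is ignored.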

Virtual content generation

Systems, apparatuses (or devices), methods, and computer-readable media are provided for generating virtual content. For example, a device (e.g., an extended reality device) can obtain an image of a scene of a real-world environment, wherein the real-world environment is viewable through a display of the extended reality device as virtual content is displayed by the display. The device can detect at least a part of a physical hand of a user in the image. The device can generate a virtual keyboard based on detecting at least the part of the physical hand. The device can determine a position for the virtual keyboard on the display of the extended reality device relative to at least the part of the physical hand. The device can display the virtual keyboard at the position on the display.
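Positioning the keyboard relative to the detected hand can be sketched as anchoring and scaling a rectangle from the palm position. Every name, unit, and scale factor below is an illustrative assumption, not the device's actual layout logic:

```python
def place_keyboard(palm_xy, hand_width, offset=0.05):
    """Return (x, y, width, height) of a virtual keyboard anchored just
    below the detected palm centre and scaled to the hand. Coordinates
    are display-plane units with y growing downward; all constants are
    illustrative."""
    px, py = palm_xy
    kb_width = 3.0 * hand_width      # assumed keyboard-to-hand scale
    kb_height = kb_width / 3.0       # assumed aspect ratio
    # Top-left corner: centred horizontally on the palm, offset below it.
    return (px - kb_width / 2.0, py + offset, kb_width, kb_height)
```

Recomputing this placement as the hand moves keeps the keyboard within comfortable reach, which is the point of making the position relative to the hand rather than fixed in the display.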

Private control interfaces for extended reality

Systems, methods, and non-transitory media are provided for generating private control interfaces for extended reality (XR) experiences. An example method can include determining a pose of an XR device within a mapped scene of a physical environment associated with the XR device; detecting a private region in the physical environment and a location of the private region relative to the pose of the XR device, the private region including an area estimated to be within a field of view (FOV) of a user of the XR device and out of a FOV of a person in the physical environment, a recording device in the physical environment, and/or an object in the physical environment; based on the pose of the XR device and the location of the private region, mapping a virtual private control interface to the private region; and rendering the virtual private control interface within the private region.

Electronic devices with a deployable flexible display

Examples disclosed herein provide electronic devices with a flexible display. An example electronic device includes an enclosure and a flexible display deployable from the enclosure, where the viewing angle of the flexible display with respect to the enclosure is adjustable. The electronic device includes a mechanism to autonomously deploy and retract the flexible display within the enclosure, and a supporting structure to reinforce the flexible display when deployed from the enclosure.

Method and system for ranking candidates in input method

A method and a system for ranking candidates in an input method are provided. The method comprises: receiving an initial key code string inputted by a user using an input method; for each character in the initial key code string, obtaining a weight of the character and weights of characters surrounding the character, and establishing a key code string weight list with a corresponding hierarchy according to a character input order. The method further comprises: when character combinations are obtained from a dictionary, according to a correspondence relationship between a hierarchy in the input method dictionary and the hierarchy in the key code string weight list, determining weights of the character combinations using the key code string weight list; and based on the weights of the character combinations, ranking candidates corresponding to the character combinations.
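The hierarchical weight list can be pictured as one level per typed character, where the typed key carries full weight and its surrounding keys a smaller weight, and a candidate combination is scored by multiplying its per-level weights. The weights and function names below are illustrative assumptions, not the patent's actual values:

```python
def build_weight_list(key_codes, neighbor_map, hit_w=1.0, near_w=0.4):
    """One level (dict of char -> weight) per typed character: the typed
    key gets full weight, its physical neighbours a smaller one."""
    levels = []
    for ch in key_codes:
        level = {ch: hit_w}
        for n in neighbor_map.get(ch, ()):
            level[n] = near_w
        levels.append(level)
    return levels

def rank_candidates(candidates, levels):
    """Score each dictionary combination by multiplying the per-level
    weights of its characters (0 for characters outside the level),
    then sort best first."""
    def score(word):
        s = 1.0
        for ch, level in zip(word, levels):
            s *= level.get(ch, 0.0)
        return s
    return sorted(candidates, key=score, reverse=True)
```

Giving neighbouring keys nonzero weight is what lets mistyped strings still surface the intended word, while the exact match keeps the top rank.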

Precision tracking of user interaction with a virtual input device

An augmented reality or virtual reality (AR/VR) system can include a virtual input device rendered by an HMD and a wearable impact detection device, such as a ring, smart watch, or wristband, with an inertial measurement unit (IMU). The wearable is used in conjunction with the HMD to track the location of the user's hands relative to the perceived location of the rendered virtual keyboard (e.g., using vision-based tracking via the HMD) and to determine when an intended input (e.g., a button press) is entered by detecting an impact of the user's finger(s) on a physical surface. The AR/VR system can then determine which key is pressed based on the physical location of the user's hands and, more precisely, of the finger(s) causing the detected impact, selecting the key of the virtual input device closest to the detected point of impact.
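The final resolution step, once the IMU reports an impact, is a nearest-neighbour lookup from the vision-tracked fingertip position to the rendered key centres. A minimal sketch (the layout representation is an assumption for illustration):

```python
def closest_key(impact_xy, key_layout):
    """On an IMU-detected impact, return the virtual key whose rendered
    centre is nearest the vision-tracked fingertip position.
    key_layout maps key labels to (x, y) centres in the keyboard plane."""
    def dist2(key):
        kx, ky = key_layout[key]
        return (kx - impact_xy[0]) ** 2 + (ky - impact_xy[1]) ** 2
    return min(key_layout, key=dist2)
```

Splitting the problem this way plays to each sensor's strength: the IMU gives a precise, low-latency *when*, and the vision tracking gives the *where* that disambiguates which key was meant.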

Systems and methods for configuring a hub-centric virtual/augmented reality environment

In certain embodiments, a sensing and tracking system detects objects, such as user input devices or peripherals, and user interactions with them. A representation of the objects and user interactions is then injected into the virtual reality environment. The representation can be actual reality, augmented reality, a virtual representation, or any combination. For example, an actual keyboard can be injected, but with the pressed keys enlarged and lighted.

Keyboards for virtual, augmented, and mixed reality display systems

User interfaces for virtual reality, augmented reality, and mixed reality display systems are disclosed. The user interfaces may be virtual or physical keyboards. Techniques are described for displaying, configuring, and/or interacting with the user interfaces.