Patent classifications
G06F3/0304
Electronic apparatus for providing a virtual keyboard and controlling method thereof
Disclosed are an electronic apparatus and a method of controlling the same. The electronic apparatus includes a camera, a display, a memory, and a processor configured to execute at least one instruction to: detect a plurality of fingers in a plurality of first image frames obtained through the camera and, in response to identifying that a pose of the plurality of detected fingers corresponds to a trigger pose, enter a character input mode; detect a first motion of a finger among the plurality of fingers in a plurality of second image frames obtained through the camera in the character input mode; identify a key corresponding to the first motion, from among a plurality of keys mapped to the finger, based on a position of the finger resulting from the first motion and a reference point set for the finger; and control the display to display information corresponding to the identified key.
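The patent discloses no code, but the key-identification step can be sketched roughly as follows. The function name, coordinate convention, key layout, and row spacing below are all illustrative assumptions: the fingertip position after the motion is compared with the reference point set for that finger, and the displacement selects among the keys mapped to the finger.

```python
# Hypothetical sketch of selecting a key from the keys mapped to one finger,
# based on the fingertip's displacement from the finger's reference point.
# Coordinates, key layout, and row spacing are illustrative assumptions.

def identify_key(finger_keys, reference_point, fingertip, row_height=20.0):
    """Pick a key for one finger.

    finger_keys     -- keys mapped to this finger, ordered from the row at
                       the reference point to the row farthest from it
    reference_point -- (x, y) resting position set for the finger
    fingertip       -- (x, y) position of the fingertip after the motion
    row_height      -- assumed vertical spacing between key rows, in pixels
    """
    dy = reference_point[1] - fingertip[1]   # displacement toward upper rows
    row = min(max(int(dy // row_height), 0), len(finger_keys) - 1)
    return finger_keys[row]

# Example: right index finger mapped to "j" (home row), "u", and "7".
print(identify_key(["j", "u", "7"], (100.0, 200.0), (102.0, 165.0)))  # "u"
```

A small motion keeps the finger on its home-row key, while larger displacements map to rows farther from the reference point, clamped to the finger's own key set.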
Visual tracking system and method
The present invention is directed to a user-operated spotlight system and method for lighting a performer on a stage or performance space. The user-operated spotlight system comprises a screen which displays an image of the stage and a cursor, a screen cursor positioner adapted to be operated to move the cursor on the screen, a processor connected to the screen, and a plurality of controllable spotlights which are connected to the processor and which can be moved by a user moving the cursor on the screen. The advantage of providing such a user-operated spotlight system is that a single user can operate a plurality of spotlights.
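One plausible way such a processor could drive several spotlights from a single cursor is to map the cursor position to a point on the stage and then compute a pan/tilt angle for each fixture toward that point. The names, units, and geometry below are assumptions for illustration, not details from the patent:

```python
import math

def aim_spotlights(cursor, screen_size, stage_size, fixtures):
    """Aim every fixture at the stage point under the cursor.

    cursor      -- (x, y) cursor position in screen pixels
    screen_size -- (width, height) of the screen in pixels
    stage_size  -- (width, depth) of the stage in metres
    fixtures    -- list of (x, y, height) fixture mount positions in metres

    Returns a (pan, tilt) pair in degrees for each fixture.
    """
    # Scale the cursor position into stage coordinates.
    sx = cursor[0] / screen_size[0] * stage_size[0]
    sy = cursor[1] / screen_size[1] * stage_size[1]
    angles = []
    for fx, fy, fz in fixtures:
        dx, dy = sx - fx, sy - fy
        pan = math.degrees(math.atan2(dy, dx))
        tilt = math.degrees(math.atan2(math.hypot(dx, dy), fz))
        angles.append((pan, tilt))
    return angles
```

Because every fixture is re-aimed from the same stage point, moving the cursor once retargets the whole plurality of spotlights, which matches the stated advantage of single-user operation.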
GESTURE INPUT WITH MULTIPLE VIEWS, DISPLAYS AND PHYSICS
Gesture input with multiple displays, views, and physics is described. In one example, a method includes generating a three-dimensional space having a plurality of objects in different positions relative to a user and a virtual object to be manipulated by the user; presenting, on a display, a displayed area having at least a portion of the plurality of objects; detecting an air gesture of the user against the virtual object, the virtual object being outside the displayed area; generating a trajectory of the virtual object in the three-dimensional space based on the air gesture, the trajectory including interactions with objects of the plurality of objects in the three-dimensional space; and presenting a portion of the generated trajectory on the displayed area.
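The trajectory-then-clip idea can be sketched as two steps: integrate the virtual object's path from an initial velocity derived from the air gesture, then keep only the points falling inside the displayed area. This simplified sketch uses gravity-only physics and omits the interactions with other objects that the abstract describes; all names and parameters are assumptions:

```python
def trajectory(pos, vel, dt=0.1, steps=60, gravity=(0.0, -9.8, 0.0)):
    """Integrate a thrown virtual object's path with simple Euler steps.

    pos/vel -- initial 3D position and velocity (e.g. taken from the gesture)
    Returns the list of positions after each step.
    """
    points = []
    p, v = list(pos), list(vel)
    for _ in range(steps):
        for i in range(3):
            v[i] += gravity[i] * dt
            p[i] += v[i] * dt
        points.append(tuple(p))
    return points

def visible_portion(points, area_min, area_max):
    """Keep only the trajectory points inside the displayed area (a box)."""
    return [p for p in points
            if all(area_min[i] <= p[i] <= area_max[i] for i in range(3))]
```

The full trajectory exists in the three-dimensional space, but only the clipped portion is handed to the display, mirroring the claim that the object starts outside the displayed area.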
OBJECT CREATION USING BODY GESTURES
An intuitive interface may allow users of a computing device (e.g., children, etc.) to create imaginary three-dimensional (3D) objects of any shape using body gestures performed by the users as a primary or only input. A user may make motions while in front of an imaging device that senses movement of the user. The interface may allow first-person and/or third-person interaction during creation of objects, which may map a body of a user to a body of an object presented by a display. In an example process, the user may start by scanning an arbitrary body gesture into an initial shape of an object. Next, the user may perform various gestures using his or her body, which may result in various edits to the object. After the object is completed, the object may be animated, possibly based on movements of the user.
CONTACTLESS MOTION GAME DEVICE OR KIOSK
Provided is a contactless motion device, such as a contactless motion game device or a contactless kiosk. The contactless motion device according to an exemplary embodiment includes a substrate including at least one hole, a body unit including a sensor unit, which is disposed around the hole in a plan view, and a control unit connected to the sensor unit. The sensor unit includes a first sensor, which transmits a first signal, and a second sensor, which receives the first signal; the control unit is configured to determine whether a target object is inserted in the hole, based on the first signal received by the second sensor; and the hole completely penetrates the substrate in a thickness direction.
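The detection principle amounts to a beam-break check: an object inserted in the hole attenuates the first sensor's signal before it reaches the second sensor. A minimal sketch, with an assumed threshold fraction not taken from the patent:

```python
def target_inserted(received, baseline, blocked_ratio=0.5):
    """Decide whether a target object is inserted in the hole.

    received      -- signal level currently measured at the second sensor
    baseline      -- level measured with the hole unobstructed
    blocked_ratio -- assumed fraction of baseline below which the beam
                     counts as blocked (illustrative value)
    """
    return received < baseline * blocked_ratio

print(target_inserted(received=0.1, baseline=1.0))  # True: beam is blocked
```

In a multi-hole device, the control unit would simply run this check per hole against that hole's own baseline.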
REDUNDANT TRACKING SYSTEM
A redundant tracking system comprising multiple redundant tracking sub-systems, enabling seamless transitions between such tracking sub-systems, addresses the loss of tracking by merging multiple tracking approaches into a single tracking system. This system is able to combine tracking objects with six degrees of freedom (6DoF) and three degrees of freedom (3DoF) by combining and transitioning between multiple tracking systems based on the availability of the tracking indicia tracked by those systems. Thus, as the indicia tracked by any one tracking system become unavailable, the redundant tracking system seamlessly switches between tracking in 6DoF and 3DoF, thereby providing the user with an uninterrupted experience.
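The switching logic reduces to a preference order: use the full 6DoF pose while its indicia are visible, otherwise fall back to an orientation-only (3DoF) estimate so output never stops. A minimal sketch with assumed names:

```python
def fused_pose(indicia_visible, pose_6dof, orientation_3dof):
    """Select between redundant tracking sub-systems.

    indicia_visible  -- whether the 6DoF sub-system's tracking indicia
                        are currently available
    pose_6dof        -- latest full position+orientation estimate (or None)
    orientation_3dof -- orientation-only estimate, e.g. from an IMU

    Returns (mode, estimate); illustrative structure, not from the patent.
    """
    if indicia_visible and pose_6dof is not None:
        return ("6DoF", pose_6dof)
    return ("3DoF", orientation_3dof)
```

Because the fallback path is always populated, the caller sees an uninterrupted stream of estimates even as the sub-system providing them changes.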
MULTIPURPOSE CONTROLLERS AND METHODS
A method and apparatus are disclosed for a user to communicate with an electronic device. A processor receives user intention actions comprising facial expression (FE) information indicative of facial expressions and body information indicative of motion or position of one or more body parts of the user. When the FE or body information crosses a first level, the processor starts generating first signals based on the FE or body information to communicate with the electronic device. When the FE or body information crosses a second level, the processor can end generation of the first signals or modify the first signals. An image processing or eye gaze tracking system can provide some of the FE or body information. The signals can modify attributes of an object of interest. Use of thresholds that are independent of sensor position or orientation with respect to the user's body is also disclosed.
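The two-level scheme reads like hysteresis: signal generation starts when the FE or body measure crosses one level and ends when it crosses another, which prevents chattering near a single threshold. A sketch under that reading, with names and levels as assumptions:

```python
def generate_signals(samples, start_level, end_level):
    """Turn a stream of FE/body measurements into on/off signal states.

    Generation starts when a sample reaches start_level and ends when a
    sample falls back to end_level or below (start_level > end_level).
    Returns the active/inactive state after each sample.
    """
    active = False
    states = []
    for s in samples:
        if not active and s >= start_level:
            active = True          # first level crossed: start signals
        elif active and s <= end_level:
            active = False         # second level crossed: end signals
        states.append(active)
    return states
```

With `start_level=2` and `end_level=1`, a reading of 1.5 keeps whatever state is current, so small fluctuations around either level do not toggle the output.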
COMPUTER SYSTEM, APPARATUS, AND METHOD FOR AN AUGMENTED REALITY HAND GUIDANCE APPLICATION FOR PEOPLE WITH VISUAL IMPAIRMENTS
A system, device, application stored on non-transitory memory, and method can be configured to help a user of a device locate and pick up objects around them. Embodiments can be configured to help vision-impaired users find, locate, and pick up objects near them. Embodiments can be configured so that such functionality is provided locally via a single device, so the device is able to provide assistance and hand guidance without a connection to the internet, a network, or another device (e.g., a remote server, a cloud-based server, a server connectable to the device via an application programming interface (API), etc.).
USING 6DOF POSE INFORMATION TO ALIGN IMAGES FROM SEPARATED CAMERAS
Techniques for aligning images generated by an integrated camera physically mounted to an HMD with images generated by a detached camera physically unmounted from the HMD are disclosed. A 3D feature map is generated and shared with the detached camera. Both the integrated camera and the detached camera use the 3D feature map to relocalize themselves and to determine their respective 6DOF poses. The HMD receives the detached camera's image of the environment and the 6DOF pose of the detached camera. A depth map of the environment is accessed. An overlaid image is generated by reprojecting a perspective of the detached camera's image to align with a perspective of the integrated camera and by overlaying the reprojected detached camera's image onto the integrated camera's image.
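The reprojection step can be sketched per pixel: unproject the detached camera's pixel using its depth and intrinsics, move the resulting 3D point through the two 6DOF poses recovered from the shared feature map, and project it into the integrated camera. The matrix conventions and names below are assumptions for illustration:

```python
import numpy as np

def reproject_point(uv, depth, K_det, pose_det, K_int, pose_int):
    """Map one pixel from the detached camera into the integrated camera.

    uv        -- (u, v) pixel in the detached camera's image
    depth     -- depth at that pixel, from the accessed depth map
    K_det/K_int       -- 3x3 camera intrinsics (assumed pinhole model)
    pose_det/pose_int -- 4x4 camera-to-world poses from relocalization
                         against the shared 3D feature map
    """
    u, v = uv
    ray = np.linalg.inv(K_det) @ np.array([u, v, 1.0])
    p_cam = ray * depth                            # 3D point, detached frame
    p_world = pose_det @ np.append(p_cam, 1.0)     # into world coordinates
    p_int = np.linalg.inv(pose_int) @ p_world      # into integrated frame
    uvw = K_int @ p_int[:3]
    return uvw[:2] / uvw[2]                        # perspective divide
```

Applying this to every pixel (or to a mesh warped from the depth map) yields the reprojected image that is then overlaid onto the integrated camera's image.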
MEDICAL DEVICE CONFIGURATION PROCEDURE GUIDANCE RESPONSIVE TO DETECTED GESTURES
Techniques disclosed herein relate to providing guidance for configuring a medical device to a user in response to detected gestures of the user. In some embodiments, the techniques involve obtaining sensor data indicative of one or more gestures of a user of a medical device, and detecting a configuration procedure being performed on the medical device by the user based on the sensor data, where the configuration procedure includes a sequence of tasks to be performed by the user to configure the medical device. The techniques also involve determining one or more tasks of the configuration procedure that have been performed by the user based on the sensor data, identifying a subsequent task of the configuration procedure to be performed by the user based on the one or more tasks, and generating guidance information for performing the subsequent task of the configuration procedure.
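Once gesture detection has marked tasks as performed, identifying the subsequent task is a matter of walking the procedure's ordered sequence. A minimal sketch with assumed names and task labels:

```python
def next_task(procedure, completed):
    """Identify the subsequent task of a configuration procedure.

    procedure -- ordered sequence of tasks for configuring the device
    completed -- set of tasks already detected (from sensor data) as done

    Returns the first unperformed task, or None if the procedure is done;
    guidance information would then be generated for the returned task.
    """
    for task in procedure:
        if task not in completed:
            return task
    return None

setup = ["unpack device", "attach reservoir", "prime tubing", "confirm settings"]
print(next_task(setup, {"unpack device", "attach reservoir"}))  # "prime tubing"
```

A real implementation would also handle tasks performed out of order or incorrectly; this sketch only covers the happy path described in the abstract.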