Patent classifications
G06F2203/04801
GESTURE BASED USER INTERFACES, APPARATUSES AND SYSTEMS USING EYE TRACKING, HEAD TRACKING, HAND TRACKING, FACIAL EXPRESSIONS AND OTHER USER ACTIONS
User interaction concepts, principles and algorithms for gestures involving facial expressions, motion or orientation of body parts, eye gaze, tightening muscles, mental activity, and other user actions are disclosed. User interaction concepts, principles and algorithms for enabling hands-free and voice-free interaction with electronic devices are disclosed. Apparatuses, systems, computer implementable methods, and non-transient computer storage media storing instructions, implementing the disclosed concepts, principles and algorithms are disclosed. Gestures for systems using eye gaze and head tracking that can be used with augmented, mixed or virtual reality, mobile or desktop computing are disclosed. Use of periods of limited activity and consecutive user actions in orthogonal axes is disclosed. Generation of command signals based on start and end triggers is disclosed. Methods for coarse as well as fine modification of objects are disclosed.
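The abstract above mentions using periods of limited activity and consecutive user actions in orthogonal axes as gesture triggers. A minimal sketch of that idea follows; the state names, thresholds, and the two-axis sequence are illustrative assumptions, not the patent's actual parameters.

```python
# Hypothetical sketch: recognize a gesture as a period of limited activity
# (a dwell) followed by consecutive motions along orthogonal axes.
# Thresholds and axis order are illustrative assumptions.

def detect_gesture(samples, dwell_thresh=0.5, move_thresh=2.0):
    """samples: list of (dx, dy) motion deltas per frame (e.g., head motion).

    Returns "command" if a dwell (both axes below dwell_thresh) is followed
    first by a large X motion and then a large Y motion (orthogonal axes);
    otherwise returns None.
    """
    state = "idle"
    for dx, dy in samples:
        if state == "idle":
            if abs(dx) < dwell_thresh and abs(dy) < dwell_thresh:
                state = "dwelled"      # start trigger: limited activity
        elif state == "dwelled":
            if abs(dx) > move_thresh:
                state = "moved_x"      # first action: X axis
        elif state == "moved_x":
            if abs(dy) > move_thresh:
                return "command"       # end trigger: orthogonal Y action
    return None
```

A dwell before the motion distinguishes a deliberate gesture from incidental movement; without it, the same two motions produce no command.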
VIDEO CONFERENCE DEVICE AND OPERATION METHOD THEREOF
A video conference device includes a camera, a communication circuit and a processor. The camera is configured to capture a real-time video of a first location. The communication circuit is communicatively connected to a remote server and a first electronic device located at the first location. The processor executes an online conference application, and processes the real-time video and real-time visual signals received from the remote server via the communication circuit. The processor executes the online conference application to establish or join an online video conference on the remote server. The processor obtains first authentication information from the first electronic device via the communication circuit, sends the first authentication information for identity authentication, and receives first operation authorizations granted to the first electronic device by the remote server based on the first authentication information. The first operation authorizations enable the first electronic device to control a first representative cursor.
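The authorization flow described above (obtain authentication info from a local device, forward it to the server, store the granted authorizations) can be sketched as follows. The class and method names (`ConferenceServer`, `authenticate`, and so on) are hypothetical stand-ins; the patent does not specify an API.

```python
# Illustrative sketch of the cursor-authorization flow. Names and the
# token-based authentication are assumptions for illustration only.

class ConferenceServer:
    def __init__(self, valid_tokens):
        self._valid = set(valid_tokens)

    def authenticate(self, token):
        # Grant cursor-control authorization only for recognized tokens.
        return {"cursor_control": token in self._valid}

class VideoConferenceDevice:
    def __init__(self, server):
        self.server = server
        self.authorizations = {}

    def register_device(self, device_id, token):
        # Obtain authentication info from the local device, forward it to
        # the remote server, and store the granted operation authorizations.
        self.authorizations[device_id] = self.server.authenticate(token)

    def can_move_cursor(self, device_id):
        return self.authorizations.get(device_id, {}).get("cursor_control", False)
```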
CURSOR INTEGRATION WITH A TOUCH SCREEN USER INTERFACE
In some embodiments, a cursor interacts with user interface objects on an electronic device. In some embodiments, an electronic device selectively displays a cursor in a user interface. In some embodiments, an electronic device displays a cursor while manipulating objects in the user interface. In some embodiments, an electronic device dismisses or switches applications using a cursor. In some embodiments, an electronic device displays user interface elements in response to requests to move a cursor beyond an edge of the display.
Systems and methods for moving content between virtual and physical displays
Systems, methods, and non-transitory computer readable media for transferring virtual content to a physical display device are disclosed. An extended reality environment may be presented in a room via a wearable extended reality appliance configured to be paired with multiple display devices located in the room. Each display device may be associated with a unique network identifier. Input to cause presentation of a specific virtual object in the extended reality environment on a target display device and image data depicting the target display device may be received. The image data may be analyzed to identify the target display device. A network identifier of the target display device may be determined. A communications link with the target display device may be established. Data representing the specific virtual object may be transmitted to the target display device, to enable the target display device to present the specific virtual object.
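The transfer pipeline above (identify the target display from image data, resolve its network identifier, establish a link, transmit the object) can be sketched as below. The registry, the `identify_display` stub, and the injected `send` callable are hypothetical stand-ins for the computer-vision and networking steps.

```python
# Minimal sketch of the virtual-to-physical transfer flow. The registry
# mapping and display labels are illustrative assumptions.

DISPLAY_REGISTRY = {           # display label -> unique network identifier
    "wall_monitor": "192.168.1.20",
    "desk_screen": "192.168.1.21",
}

def identify_display(image_data):
    # Stand-in for image analysis (e.g., recognizing a marker on the bezel).
    return image_data.get("detected_label")

def transfer_virtual_object(obj, image_data, send):
    """Identify the target display in the image, then transmit obj to it."""
    label = identify_display(image_data)
    net_id = DISPLAY_REGISTRY.get(label)
    if net_id is None:
        return False               # no known display identified
    send(net_id, obj)              # establish link and transmit object data
    return True
```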
INPUT METHOD, DEVICE, AND ELECTRONIC APPARATUS
An input method, device, and electronic apparatus are provided. The input method includes acquiring text information at an input cursor position, where the text information includes above text information located before the input cursor and/or below text information located after the input cursor; extracting keywords from the text information; searching associative candidate lexicons for the keywords to obtain an enter-on-screen candidate word queue at the input cursor position; and outputting the enter-on-screen candidate word queue. By acquiring the text information at the input cursor position and determining the enter-on-screen candidate word queue from the keywords in that text, embodiments of the present disclosure address the problem in existing techniques that, after the input cursor changes its position, no enter-on-screen candidate word can be suggested by association because no reliable enter-on-screen entry is available.
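The lookup described above can be sketched in a few lines: extract keywords from the text surrounding the cursor, then query associative lexicons to build the candidate queue. The tiny lexicon and whitespace tokenizer are illustrative assumptions.

```python
# Sketch of keyword-driven candidate-word lookup. The lexicon contents
# and tokenization are assumptions for illustration.

LEXICON = {
    "weather": ["forecast", "sunny", "rain"],
    "meeting": ["agenda", "schedule", "minutes"],
}

def extract_keywords(text):
    # Keep only tokens that have an associative lexicon entry.
    return [w for w in text.lower().split() if w in LEXICON]

def candidate_queue(before_text, after_text=""):
    """Build the enter-on-screen candidate queue from text around the cursor."""
    queue = []
    for kw in extract_keywords(before_text + " " + after_text):
        for cand in LEXICON[kw]:
            if cand not in queue:
                queue.append(cand)
    return queue
```

Because the queue is rebuilt from whatever text surrounds the new cursor position, candidates remain available even after the cursor moves.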
USER INTERFACE THROUGH REAR SURFACE TOUCHPAD OF MOBILE DEVICE
According to an embodiment of the present disclosure, an electronic device, e.g., a mobile device, may comprise an input unit disposed on a first surface of the electronic device to receive a first signal, an output unit outputting a second signal and displaying a first user interface, a second input unit disposed on a second surface of the electronic device to receive a third signal, and a controller configured to perform a first operation according to the first signal, a second operation according to the second signal, and a third operation according to the third signal, wherein the third operation includes controlling the first user interface.
Head-mounted display and information display apparatus
Provided are a head-mounted display and an information display apparatus that match the user's intuition and offer excellent operability. A head-mounted display according to an embodiment of the present technology includes a reception unit, an image display element, and a display processing unit. The reception unit receives an operation signal, output from an input device, that includes information on the relative position of a detection target in contact with an input operation surface. The image display element forms an image V1 presented to a user. Based on the operation signal, the display processing unit causes the image display element to display an operation image V10 in which an auxiliary image P indicating the position of the detection target is overlapped on the image V1.
Devices, methods, and user interfaces for interacting with a position indicator within displayed text via proximity-based inputs
An electronic device with a touch-sensitive surface, a display, one or more first sensors to detect proximity of an input object above the touch-sensitive surface and one or more second sensors to detect intensity of contact displays a user interface object at a first location. While displaying the user interface object, the device detects a first portion of an input at the first location of the user interface object while the input object meets hover criteria. In response, the device dynamically changes an appearance of the user interface object in accordance with a current hover proximity parameter of the input object. Afterwards, the device detects a second portion of the input at an initial contact location that corresponds to the first location; and, in response, dynamically changes the appearance of the user interface object in accordance with a current intensity of a contact by the input object.
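The two-stage appearance change described above can be sketched as a single function: while the input object hovers, the appearance tracks the hover-proximity parameter; once contact is made, it tracks contact intensity. The 0-to-1 "highlight" value, the ranges, and the linear mappings are assumed for illustration.

```python
# Sketch of proximity/intensity-driven appearance. All units, ranges, and
# the linear response are illustrative assumptions.

def object_highlight(hover_distance=None, contact_intensity=None,
                     hover_range=3.0, max_intensity=4.0):
    """Return a 0..1 highlight strength for the user interface object."""
    if contact_intensity is not None:
        # Second portion of the input: scale with contact intensity.
        return min(contact_intensity / max_intensity, 1.0)
    if hover_distance is not None and hover_distance <= hover_range:
        # First portion: closer hover -> stronger highlight.
        return 1.0 - hover_distance / hover_range
    return 0.0  # outside hover range, no contact
```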
Systems and methods for extensions to alternative control of touch-based devices
Systems and methods of multi-modal control of a touch-based device include receiving multi-modal control inputs from one or more of voice commands, a game controller, a handheld remote, and physical gestures detected by a sensor; converting the multi-modal control inputs into corresponding translated inputs which correspond to physical inputs recognizable by the touch-based device; and providing the corresponding translated inputs to the touch-based device for control thereof, wherein the translated inputs are utilized by the touch-based device as corresponding physical inputs to control underlying applications executed on the touch-based device which expect the corresponding physical inputs.
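The translation step above can be sketched as a lookup from (modality, command) pairs to synthetic physical inputs the touch-based device already understands. The mapping table and coordinate values are illustrative assumptions.

```python
# Sketch of the multi-modal translation layer. The table entries are
# assumptions; a real system would derive them per application.

TRANSLATIONS = {
    ("voice", "select"): ("tap", (100, 200)),
    ("gamepad", "dpad_right"): ("swipe", "left_to_right"),
    ("gesture", "swipe_up"): ("swipe", "bottom_to_top"),
}

def translate_input(modality, command):
    """Convert a multi-modal command into an equivalent physical input,
    or None if no translation exists."""
    return TRANSLATIONS.get((modality, command))
```

The touch-based application never sees the voice command or gesture, only the translated tap or swipe it already expects.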
Volatility Based Cursor Tethering
Modifying a tether linked to a cursor based on depth volatility of the cursor is disclosed. Multiple displays show a three-dimensional image that appears to be at the same real-world location regardless of the display's location. One person operates a cursor in the three-dimensional image. The volatility of the cursor's depth, from the viewpoint of the cursor operator, is tracked. The appearance of the tether is changed in the other displays in response to the depth volatility. The tether may include a line from the cursor toward the cursor operator. The tether is not necessarily displayed all of the time, so as not to obscure the view of the three-dimensional image. When there is no depth volatility for some time, the tether is not displayed. In response to high depth volatility, the tether may be displayed as a long line from the cursor.
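One way to realize the show/hide behavior above is to track cursor depth in a sliding window and display the tether only while the depth variance exceeds a threshold. The window size, threshold, and use of variance as the volatility measure are assumptions for this sketch.

```python
# Sketch of volatility-based tether display: variance of recent depth
# samples decides whether the tether is shown. Parameters are assumed.

from collections import deque
from statistics import pvariance

class TetherController:
    def __init__(self, window=5, threshold=0.5):
        self.depths = deque(maxlen=window)   # sliding window of depths
        self.threshold = threshold

    def update(self, depth):
        """Record a depth sample; return True if the tether should be shown."""
        self.depths.append(depth)
        if len(self.depths) < 2:
            return False                     # not enough history yet
        return pvariance(self.depths) > self.threshold
```

With a stable depth the variance stays near zero and the tether remains hidden; a sudden depth jump raises the variance and the tether appears.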