G06V40/113

Methods and associated systems for communicating with/controlling moveable devices by gestures

Methods and associated systems and apparatus for controlling a moveable device are disclosed herein. The moveable device includes an image-collection component and a distance-measurement component. A representative method includes generating an image corresponding to an operator and generating a first set of distance information corresponding to the operator. The method identifies a portion of the generated image corresponding to the operator and then retrieves a second set of distance information from the first set of distance information based on the identified image portion. The method then identifies a gesture associated with the operator based on the second set of distance information, and further generates an instruction for controlling the moveable device based on the gesture.
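The abstract's pipeline (region-limited distance retrieval, then gesture classification, then an instruction) could be sketched as follows. This is an illustrative toy, not the patented implementation; the region format, threshold, gesture labels, and instruction names are all assumptions.

```python
def retrieve_region_distances(distance_map, region):
    """Second set of distance information: the per-pixel distances that
    fall inside the identified image portion.
    region = (row_start, row_end, col_start, col_end)."""
    r0, r1, c0, c1 = region
    return [row[c0:c1] for row in distance_map[r0:r1]]

def classify_gesture(region_distances, near_threshold=1.0):
    """Toy rule standing in for a real classifier: if the mean distance
    inside the region is below a threshold, call it a 'push' gesture."""
    values = [d for row in region_distances for d in row]
    mean = sum(values) / len(values)
    return "push" if mean < near_threshold else "idle"

def instruction_for(gesture):
    # Hypothetical mapping from a recognized gesture to a device command.
    return {"push": "MOVE_BACKWARD", "idle": "HOVER"}[gesture]
```

The key idea the sketch captures is that the distance data is filtered by the image-derived operator region before any gesture reasoning, so background depth readings never reach the classifier.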

Intent detection with a computing device

A method can include capturing an image, determining an environment in which a user is operating a computing device, detecting a hand gesture based on an object in the image, determining, using a machine-learned model, an intent of the user based on the hand gesture and the environment, and executing a task based at least on the determined intent.
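The gesture-plus-environment intent step could look roughly like this. The lookup table stands in for the machine-learned model, and every gesture, environment, and intent label here is hypothetical.

```python
def detect_gesture(image):
    # Stand-in for a real detector operating on pixels; here the "image"
    # is a dict carrying a precomputed feature.
    return "pinch" if image.get("fingers_touching") else "open_hand"

def infer_intent(gesture, environment):
    # Stand-in for a machine-learned model: a table keyed on
    # (gesture, environment). The same gesture can map to different
    # intents in different environments.
    table = {
        ("pinch", "desk"): "select_item",
        ("open_hand", "desk"): "dismiss",
        ("pinch", "car"): "answer_call",
    }
    return table.get((gesture, environment), "no_op")

def execute_task(intent):
    return f"executing:{intent}"
```

The point of conditioning on the environment is that an identical hand gesture can legitimately mean different things at a desk than in a car, which is why the abstract feeds both signals into the model.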

Mobility surrogates
11544906 · 2023-01-03

A mobility surrogate includes a humanoid form supporting at least one camera that captures image data from a first physical location in which the mobility surrogate is disposed to produce an image signal, and a mobility base. The mobility base includes a support mechanism, with the humanoid form affixed to the support on the mobility base, and a transport module that includes a mechanical drive mechanism and a transport control module. The transport control module includes a processor and memory configured to receive control messages from a network and to process those messages to control the transport module according to the control messages received from the network.
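The transport control module's receive-and-process loop could be sketched like this. The message schema (`type`, `linear`, `angular`) is an assumption for illustration, not taken from the patent.

```python
import json

class TransportControl:
    """Toy transport control module: parses network control messages
    and turns them into a commanded velocity for the drive mechanism."""

    def __init__(self):
        self.velocity = (0.0, 0.0)  # (linear m/s, angular rad/s)

    def handle_message(self, raw):
        """Process one JSON control message received from the network."""
        msg = json.loads(raw)
        if msg["type"] == "drive":
            self.velocity = (msg["linear"], msg["angular"])
        elif msg["type"] == "stop":
            self.velocity = (0.0, 0.0)
        return self.velocity
```

A real module would add authentication, sequencing, and a watchdog that stops the base when messages cease, but the core pattern is the same: network message in, drive command out.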

GESTURE-BASED SYSTEMS AND METHODS FOR AIRCRAFT CABIN LIGHT CONTROL

A method of touchless activation of an electrically activated device may comprise: receiving, via a processor and through a sensor in an aircraft cabin, gesture data; comparing, via the processor, the gesture data to a predetermined gesture, the predetermined gesture being created by transitioning a hand from a first position to a second position, one of the first position and the second position being a closed fist, a remainder of the first position and the second position being an open palm; determining, via the processor, whether the gesture data matches the predetermined gesture; and commanding, via the processor, the electrically activated device to change from a first state to a second state.
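The fist-to-palm (or palm-to-fist) transition match described in the claim could be checked as below. The pose labels and the sensor sampling model are illustrative assumptions.

```python
def matches_predetermined_gesture(samples):
    """samples: ordered hand-pose labels from the cabin sensor.
    Matches a transition between a closed fist and an open palm,
    in either order, per the claimed predetermined gesture."""
    poses = [s for s in samples if s in ("fist", "palm")]
    for first, second in zip(poses, poses[1:]):
        if {first, second} == {"fist", "palm"}:
            return True
    return False

def command_device(state, samples):
    """Toggle the electrically activated device from its first state
    to its second state when the gesture data matches."""
    if matches_predetermined_gesture(samples):
        return "off" if state == "on" else "on"
    return state
```

Filtering to the two recognized poses before pairing makes the matcher tolerant of intermediate or unclassified frames between the fist and the palm.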

Head mounted display device and operating method thereof

Provided are an HMD device and an operating method thereof. The operating method of the HMD device includes displaying at least one object in a display area of a transparent display, obtaining an image of a hand of a user interacting with the displayed object, determining a direction in which the hand is facing based on the obtained image, and performing a function for the object corresponding to the direction in which the hand is facing.

LOCAL PERSPECTIVE METHOD AND DEVICE OF VIRTUAL REALITY EQUIPMENT AND VIRTUAL REALITY EQUIPMENT
20220382380 · 2022-12-01

A local perspective method and device for virtual reality equipment, and the virtual reality equipment itself, are disclosed. The method comprises: identifying a user's hand action; triggering a local perspective function of the virtual reality equipment if the hand action satisfies a preset trigger action; and, under the local perspective function, determining a local perspective display area in a virtual scene according to the position of the hand action, so as to display the real scene in the local perspective display area. Because the method determines the area to be displayed in perspective from the user's hand action, it is applicable to more and richer application scenarios than a conventional global perspective solution and can greatly improve the user experience.
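Deriving a local passthrough window from the hand position could work roughly as follows, in normalised scene coordinates. The trigger gesture name, window radius, and coordinate convention are all assumptions for illustration.

```python
def passthrough_region(hand_pos, radius=0.25, scene_size=(1.0, 1.0)):
    """Square see-through window centred on the hand position,
    clamped to the virtual scene bounds.
    Returns (x0, y0, x1, y1) in normalised coordinates."""
    x, y = hand_pos
    width, height = scene_size
    x0, y0 = max(0.0, x - radius), max(0.0, y - radius)
    x1, y1 = min(width, x + radius), min(height, y + radius)
    return (x0, y0, x1, y1)

def maybe_trigger_passthrough(hand_action, hand_pos):
    # Hypothetical preset trigger action: a 'double_pinch' opens the
    # local passthrough window; other actions leave the scene virtual.
    if hand_action == "double_pinch":
        return passthrough_region(hand_pos)
    return None
```

Clamping to the scene bounds means a hand near the edge of the field of view still yields a valid, if smaller, passthrough area.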

GESTURE RECOGNITION METHOD AND DEVICE, GESTURE CONTROL METHOD AND DEVICE AND VIRTUAL REALITY APPARATUS
20220382386 · 2022-12-01

The disclosure provides a gesture recognition method and device, a gesture control method and device, and a virtual reality apparatus. The gesture recognition method includes: obtaining a hand image of a user acquired by each lens of a binocular camera; recognizing, through a pre-constructed recognition model, a first group of hand bone points from the obtained hand image, to obtain a hand bone point image in which the recognized first group of hand bone points is marked on the hand region of the hand image; obtaining, from the hand bone point image, two-dimensional positional relations and three-dimensional positional relations between the bone points in a second group of hand bone points as hand gesture data of the user; and recognizing a gesture of the user according to the hand gesture data.
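One common way a binocular camera yields three-dimensional bone-point positions from two 2D detections is stereo triangulation, sketched below under a rectified-camera assumption. The focal length and baseline values are placeholders; the patent does not specify its reconstruction method, so this is only a plausible illustration.

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Classic rectified-stereo relation: Z = f * B / disparity,
    where disparity is the horizontal shift between the two lenses."""
    disparity = x_left - x_right
    return focal_px * baseline_m / disparity

def bone_point_3d(pt_left, pt_right, focal_px=500.0, baseline_m=0.06):
    """Lift one bone point from its two 2D detections (pixels, with the
    optical axis at the origin) into camera-frame 3D coordinates."""
    x_l, y_l = pt_left
    x_r, _ = pt_right
    z = depth_from_disparity(x_l, x_r, focal_px, baseline_m)
    # Back-project through the pinhole model of the left lens.
    return (x_l * z / focal_px, y_l * z / focal_px, z)
```

With every bone point lifted this way, both the 2D relations (from either image) and the 3D relations (from the reconstructed points) are available as gesture data.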

AUGMENTED REALITY TRANSLATION OF SIGN LANGUAGE CLASSIFIER CONSTRUCTIONS

A method, computer system, and a computer program product for translating a classifier construction into a graphical representation is provided. The present invention may include observing a classifier handshape by an augmented reality device. The present invention may include analyzing the observed classifier handshape according to an object recognition algorithm to determine a contextual meaning of the classifier handshape. The present invention may include converting the contextual meaning of the observed classifier handshape into a graphical representation. The present invention may include displaying the graphical representation alongside the observed classifier handshape on the augmented reality device.

Method and system for human-to-computer gesture based simultaneous interactions using singular points of interest on a hand

Described herein is a method for enabling human-to-computer three-dimensional hand gesture-based natural interactions from depth images provided by a range finding imaging system. The method enables recognition of simultaneous gestures from detection, tracking and analysis of singular points of interest on a single hand of a user, and provides contextual feedback information to the user. The singular points of interest of the hand include hand tip(s), fingertip(s), the palm center and the center of mass of the hand, and are used for defining at least one representation of a pointer. The point(s) of interest are tracked over time and analyzed to enable the determination of sequential and/or simultaneous "pointing" and "activation" gestures performed by a single hand.
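The separation between a pointer derived from points of interest and an activation test on other points of the same hand could be sketched like this. The pinch threshold and the choice of pinch as the activation gesture are illustrative assumptions.

```python
def palm_center(points):
    """Centroid of a set of 2D hand points, used as the pointer origin."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def pointing_vector(palm, fingertip):
    """Pointer direction: from the palm center toward the fingertip."""
    return (fingertip[0] - palm[0], fingertip[1] - palm[1])

def is_activation(thumb_tip, index_tip, pinch_dist=0.05):
    """Toy 'activation' gesture: thumb and index tips pinched together."""
    dx = thumb_tip[0] - index_tip[0]
    dy = thumb_tip[1] - index_tip[1]
    return (dx * dx + dy * dy) ** 0.5 < pinch_dist
```

Because pointing uses the palm center and one fingertip while activation compares two other tips, both gestures can be evaluated on the same frame, which is what makes simultaneous single-hand "point and click" possible.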

Method and device for detecting hand gesture key points

A method for detecting gesture key points can include: acquiring a target image to be detected; determining a gesture category according to the target image, the gesture category being the category of a gesture contained in the target image; determining a target key point detection model corresponding to the gesture category from among a plurality of key point detection models; and performing key point detection on the target image using the target key point detection model.
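The classify-then-dispatch structure of this method could be sketched as follows. The category names, point counts, and the stand-in classifier are hypothetical; real models would be neural networks regressing coordinates.

```python
class KeyPointDetector:
    """Stand-in for one per-category key point detection model."""

    def __init__(self, category, num_points):
        self.category = category
        self.num_points = num_points

    def detect(self, image):
        # A trained model would regress actual coordinates; return
        # placeholder points of the category-specific count.
        return [(0.0, 0.0)] * self.num_points

# One specialised model per gesture category (counts are illustrative).
MODELS = {
    "fist": KeyPointDetector("fist", 11),
    "open_palm": KeyPointDetector("open_palm", 21),
}

def classify_gesture_category(image):
    # Stand-in classifier: here the "image" dict carries its own label.
    return image["label"]

def detect_key_points(image):
    """Pick the detector matching the gesture category, then run it."""
    category = classify_gesture_category(image)
    model = MODELS[category]
    return model.detect(image)
```

The design rationale implied by the abstract is that a model specialised to one gesture category can be simpler and more accurate than a single detector that must handle every hand configuration.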