Patent classifications
G06V40/11
GESTURE DETECTING APPARATUS AND GESTURE DETECTING METHOD
Provided is a gesture detecting apparatus that accurately evaluates the detection result for the hand of an occupant forming a gesture. A gesture detecting apparatus includes a detection information acquisition unit, a determination unit, and a rejection unit. The detection information acquisition unit acquires the detection result of the hand in the hand gesture of an occupant of a vehicle, together with information about the hand in the image in which it was detected. The hand is detected based on an image captured by an imaging device provided in the vehicle. Based on at least one predetermined condition regarding the information about the hand in the image, the determination unit determines whether or not the hand is a real hand. If it is not, the rejection unit rejects the detection result of the hand detected based on the image.
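The determine-then-reject flow can be sketched as follows. The specific conditions and thresholds (bounding-box size, brightness) are hypothetical stand-ins; the abstract only says the determination uses predetermined conditions on the hand information in the image.

```python
from dataclasses import dataclass

@dataclass
class HandDetection:
    # Hypothetical per-detection attributes extracted from the image.
    width_px: int        # bounding-box width of the detected hand
    height_px: int       # bounding-box height of the detected hand
    luminance: float     # mean brightness inside the bounding box (0..1)

def is_real_hand(det: HandDetection,
                 min_size_px: int = 40,
                 min_luminance: float = 0.05) -> bool:
    """Apply predetermined conditions to the hand information in the image.

    A detection that is implausibly small or nearly black is treated as
    not a real hand (illustrative conditions only).
    """
    if det.width_px < min_size_px or det.height_px < min_size_px:
        return False
    if det.luminance < min_luminance:
        return False
    return True

def filter_detections(dets):
    """Reject detection results that fail the real-hand determination."""
    return [d for d in dets if is_real_hand(d)]

dets = [HandDetection(120, 110, 0.4),   # plausible hand -> kept
        HandDetection(10, 12, 0.3)]     # too small -> rejected
print(len(filter_detections(dets)))     # 1
```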
ELECTRONIC DEVICE AND PROGRAM
An electronic device may include: an acquisition section configured to acquire captured image data of a hand of an operator; a presumption section configured to presume, in accordance with the captured image data, skeleton data corresponding to the hand; and a determination section configured to determine, in accordance with the skeleton data, a cursor position for operating the electronic device.
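The skeleton-to-cursor determination can be sketched as a simple coordinate mapping. The skeleton-data format and the choice of the index fingertip as the driving joint are assumptions; the abstract does not specify either.

```python
def cursor_from_skeleton(skeleton, screen_w, screen_h):
    """Map one landmark of the presumed hand skeleton (normalized image
    coordinates in [0, 1]) to a cursor position on the device screen.
    Using the index fingertip is an assumption -- the abstract does not
    name which joint drives the cursor."""
    x, y = skeleton["index_tip"]
    return (round(x * screen_w), round(y * screen_h))

skeleton = {"index_tip": (0.5, 0.25)}  # hypothetical skeleton-data format
print(cursor_from_skeleton(skeleton, 1920, 1080))  # (960, 270)
```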
BIMANUAL INTERACTIONS BETWEEN MAPPED HAND REGIONS FOR CONTROLLING VIRTUAL AND GRAPHICAL ELEMENTS
Example systems, devices, media, and methods are described for controlling the presentation of one or more virtual or graphical elements on a display in response to bimanual hand gestures detected by an eyewear device that is capturing frames of video data with its camera system. An image processing system detects a first hand and defines a mapped region relative to a surface of the detected first hand. The system presents a virtual target icon at a position within or near the mapped region. The image processing system detects a series of bimanual hand shapes, including the detected first hand and at least one fingertip of a second hand. In response to determining whether the detected series of bimanual hand shapes matches a predefined hand gesture, the system executes a selecting action that includes presenting one or more graphical elements on the display.
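The selecting action hinges on whether the second hand's fingertip lands on the virtual target icon inside the mapped region. A minimal hit-test sketch, with illustrative coordinates and radius (the abstract does not specify the geometry):

```python
def fingertip_selects(fingertip, icon_center, radius):
    """Hit-test: does the second hand's fingertip (in the coordinate frame
    of the region mapped onto the first hand) fall within the circular
    target icon? Circle geometry is an assumption for illustration."""
    dx = fingertip[0] - icon_center[0]
    dy = fingertip[1] - icon_center[1]
    return dx * dx + dy * dy <= radius * radius

print(fingertip_selects((0.52, 0.48), (0.5, 0.5), 0.05))  # True
```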
VIDEO-BASED HAND AND GROUND REACTION FORCE DETERMINATION
A method for determining a hand force and a ground reaction force for a musculoskeletal body of a subject includes obtaining video data for the musculoskeletal body during an action taken by the subject, generating, for each frame of the video data, three-dimensional pose data for the subject based on a three-dimensional skeletal model, and determining the hand force and the ground reaction force based on the three-dimensional pose data. Determining the hand force and the ground reaction force includes implementing a reconstruction of the hand force and the ground reaction force based on the three-dimensional pose data. The method additionally includes applying the three-dimensional pose data, the estimate of the ground reaction force, and the estimate of the hand force to a neural network or other model to optimize both estimates.
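The reconstruction step rests on whole-body dynamics: Newton's second law gives the total external force, which must be shared between the ground and the hands. A toy one-dimensional sketch, where the fixed split fraction is a hypothetical stand-in for the pose-based reconstruction in the abstract:

```python
def reconstruct_forces(mass_kg, com_accel_z, grf_fraction=0.8):
    """Toy vertical-axis reconstruction: total external force
    F = m * (a_com + g); the split between ground reaction force and
    hand force (grf_fraction) is illustrative only."""
    g = 9.81                                      # gravitational acceleration, m/s^2
    total_vertical = mass_kg * (com_accel_z + g)  # total external force, N
    grf = grf_fraction * total_vertical           # ground reaction force estimate
    hand = total_vertical - grf                   # hand force carries the remainder
    return hand, grf

hand, grf = reconstruct_forces(70.0, 0.0)  # 70 kg subject, static posture
print(round(grf, 1), round(hand, 1))       # 549.4 137.3
```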
Finger control of wearable devices
An object approaching a wearable device of a user is identified. In response to determining that the approaching object is a previously identified target of the user, the one or more applications displayed in the wearable device's user interface, and how those applications are organized on that interface, are modified based on the identified approaching object. The modified user interface is displayed to the user.
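The reorganization step can be sketched as promoting the application associated with the identified object. The finger-to-app association and the move-to-front policy are assumptions; the abstract only says the layout is modified based on the identified object.

```python
def reorganize_ui(apps, target_app):
    """Move the app associated with the identified approaching object
    (e.g. a finger previously registered to a specific app) to the
    front of the interface. Association policy is hypothetical."""
    if target_app in apps:
        return [target_app] + [a for a in apps if a != target_app]
    return apps  # unknown object: leave the layout unchanged

print(reorganize_ui(["clock", "music", "maps"], "music"))
# ['music', 'clock', 'maps']
```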
Posture display method, device and system of guiding channel, and readable storage medium
The present application relates to a posture display method, device, and system for a guiding channel, and a readable storage medium. The posture display method includes determining an original position relation between a three-dimensional affected-limb image and a virtual guiding channel displayed on a monitor according to an original position relation between the guiding channel and the affected limb, and obtaining posture change data of the guiding channel. The method further includes adjusting the posture of the virtual guiding channel based on the posture change data, so that the relative position relation between the virtual guiding channel and the three-dimensional affected-limb image matches the relative position relation between the guiding channel and the affected limb.
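Adjusting the virtual channel's posture amounts to applying the measured posture change as a rigid transform. A minimal 2-D sketch, assuming the change is given as a yaw rotation plus a translation (the abstract does not specify the parameterization):

```python
import math

def apply_posture_change(points, yaw_rad, translation):
    """Apply the guiding channel's measured posture change (illustrative:
    a planar rotation followed by a translation) to the virtual channel's
    points, so its pose relative to the 3-D limb image tracks the
    physical channel."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    tx, ty = translation
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

# Rotate the channel tip (1, 0) by 90 degrees about the origin.
moved = apply_posture_change([(1.0, 0.0)], math.pi / 2, (0.0, 0.0))
print(round(moved[0][0], 6), round(moved[0][1], 6))  # 0.0 1.0
```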
Apparatus, system, and method for detecting user input via hand gestures and arm movements
An artificial-reality system comprising (1) a wearable dimensioned to be donned on a body part of a user, wherein the wearable comprises (A) a set of electrodes that detect one or more neuromuscular signals via the body part of the user and (B) a transmitter that transmits an electromagnetic signal, (2) a head-mounted display communicatively coupled to the wearable, wherein the head-mounted display comprises a set of receivers that receive the electromagnetic signal, and (3) one or more processing devices that (i) determine, based at least in part on the neuromuscular signals, that the user has made a specific gesture and (ii) determine, based at least in part on the electromagnetic signal, a position of the body part of the user when the user made the specific gesture. Various other apparatuses, systems, and methods are also disclosed.
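The two determinations can be sketched separately: a gesture classifier over the neuromuscular signals, and a range measurement from the electromagnetic signal. Both implementations below are hypothetical stand-ins (a simple amplitude threshold and a time-of-flight range), since the abstract does not specify either technique.

```python
def detect_pinch(emg_rms, threshold=0.3):
    """Classify a specific gesture (here: a pinch) from the RMS amplitude
    of the neuromuscular signal -- a toy stand-in for a real classifier."""
    return emg_rms > threshold

def distance_from_tof(time_of_flight_s, c=3.0e8):
    """Range the wearable's transmitter from an HMD receiver using the
    electromagnetic signal's time of flight (illustrative model)."""
    return c * time_of_flight_s

print(detect_pinch(0.5), round(distance_from_tof(2.0e-9), 2))  # True 0.6
```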
HAND POSE ESTIMATION FROM STEREO CAMERAS
Systems and methods herein describe using a neural network to identify a first set of joint location coordinates and a second set of joint location coordinates and identifying a three-dimensional hand pose based on both the first and second sets of joint location coordinates.
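Once a joint is located in both views, its depth follows from standard rectified-stereo geometry, Z = f·B/d. A sketch of that geometric step (the abstract's neural-network keypoint stage is replaced here by given pixel coordinates):

```python
def triangulate_depth(x_left, x_right, focal_px, baseline_m):
    """Recover depth for one joint from its horizontal coordinates in a
    rectified stereo pair: Z = focal * baseline / disparity."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("joint must have positive disparity")
    return focal_px * baseline_m / disparity

# 20 px disparity, 800 px focal length, 6 cm baseline -> 2.4 m depth.
print(round(triangulate_depth(320.0, 300.0, 800.0, 0.06), 3))  # 2.4
```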
METHOD AND SYSTEM FOR HAND POSE RECOGNITION, DEVICE AND STORAGE MEDIUM
A method and a system for hand pose recognition, a device, and a storage medium are disclosed in embodiments of the disclosure. The method includes: capturing an RGB image of a hand from an RGB camera and capturing a depth image of the hand from an active depth camera, so as to obtain a hand pose data set from the RGB image and the depth image; processing the hand pose data set to obtain 3D joint positions, and taking the 3D joint positions as a data set for training a software model; extracting features from the RGB image with a feature extractor based on a deep neural network to obtain a feature map of the hand pose; and processing the feature map according to an attention mechanism to obtain a global feature map of the hand pose.
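The final step, pooling a feature map into a global descriptor via attention, can be sketched as softmax-weighted averaging over spatial locations. The scoring function here (mean activation per location) is a hypothetical stand-in for the learned attention mechanism:

```python
import math

def attention_pool(features):
    """Weight per-location feature vectors by softmax attention scores and
    sum them into a single global descriptor. `features` is a list of
    equal-length feature vectors, one per spatial location."""
    scores = [sum(f) / len(f) for f in features]     # stand-in attention scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]         # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(features[0])
    return [sum(w * f[i] for w, f in zip(weights, features)) for i in range(dim)]

pooled = attention_pool([[1.0, 0.0], [0.0, 1.0]])  # equal scores -> equal weights
print([round(v, 2) for v in pooled])  # [0.5, 0.5]
```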
METHOD OF VERIFYING TARGET PERSON, AND SERVER AND PROGRAM
Provided is a method of verifying a target person. The method comprises collecting non-identifying personal data for a target person, determining a classification type for each of a plurality of classification criteria from the collected non-identifying personal data, and performing verification based on a result of comparing the determined classification type for each of the plurality of classification criteria with a reference type for each of the plurality of classification criteria.
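The comparison step can be sketched as matching per-criterion classification types against enrolled reference types. The criterion names and the match-count threshold are hypothetical; the abstract only says verification is based on the comparison result.

```python
def verify_target(observed: dict, reference: dict, min_matches: int = 2) -> bool:
    """Compare the classification type determined for each criterion
    (e.g. height band, gait class) with the reference type for that
    criterion, and verify when enough criteria match."""
    matches = sum(1 for k, ref in reference.items() if observed.get(k) == ref)
    return matches >= min_matches

observed  = {"height": "tall", "gait": "A", "build": "slim"}
reference = {"height": "tall", "gait": "B", "build": "slim"}
print(verify_target(observed, reference))  # True  (2 of 3 criteria match)
```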