G06V40/113

Image processing apparatus, method, and program capable of recognizing hand gestures
09792494 · 2017-10-17

An image processing apparatus includes a facial image detection unit which detects a facial image from an input image; a posture estimation unit which estimates a posture of a person in the input image from a position of the facial image; a hand position detection unit which detects positions of hands of the person based on the posture; a hand image extraction unit which extracts a hand image of the person from the input image based on information regarding the positions of the hands of the person; a hand shape specifying unit which specifies shapes of the hands of the person based on the hand image; a hand shape time-series storage unit which stores the shapes of the hands in a time-series; and a hand gesture recognition unit which recognizes a hand gesture based on information regarding the shapes of the hands.
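The tail of the unit chain above — per-frame hand-shape classification feeding a time-series store that a recognition unit matches against gesture templates — can be sketched as follows. The shape labels and the "wave" template are hypothetical; the abstract does not name specific shapes or gestures:

```python
from collections import deque

# Hypothetical shape labels; the patent does not enumerate hand shapes.
OPEN, FIST = "open", "fist"

class HandGestureRecognizer:
    """Sketch of the last two units: a hand shape time-series storage
    unit (bounded deque) and a hand gesture recognition unit that
    matches the stored sequence against known gesture templates."""

    # A "wave" gesture, assumed here to be an open/fist/open sequence.
    GESTURES = {("open", "fist", "open"): "wave"}

    def __init__(self, history=3):
        self.shape_history = deque(maxlen=history)

    def observe(self, hand_shape):
        """Store one per-frame hand shape and try to recognize a gesture."""
        self.shape_history.append(hand_shape)
        return self.recognize()

    def recognize(self):
        """Match the stored time series; None if no template matches."""
        return self.GESTURES.get(tuple(self.shape_history))

rec = HandGestureRecognizer()
rec.observe(OPEN)
rec.observe(FIST)
print(rec.observe(OPEN))  # wave
```

In a full pipeline the `observe` input would come from the hand shape specifying unit, which in turn consumes hand images cropped using the face-derived posture estimate.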

Systems and methods of tracking moving hands and recognizing gestural interactions

The technology disclosed relates to providing command input to a machine under control. It further relates to gesturally interacting with the machine. The technology disclosed also relates to providing monitoring information about a process under control. The technology disclosed further relates to providing biometric information about an individual. The technology disclosed yet further relates to providing abstract feature information (pose, grab strength, pinch strength, confidence, and so forth) about an individual.

Detecting apparatus, detecting method and computer readable recording medium recording program for detecting state in predetermined area within images

An imaging apparatus according to an embodiment of the present invention includes a detecting unit 4d for detecting a state in a detection area T within an image displayed on a display panel 8a, an identifying unit 4b for identifying a subject from the image, an acquiring unit 4c for acquiring information relating to a predetermined subject in the case that the identifying unit 4b identifies the predetermined subject outside the detection area T, and a control unit (the detecting unit 4d) for controlling detection of the state in the detection area T based on the information relating to the predetermined subject acquired by the acquiring unit 4c.

Hand-gesture-based region of interest localization

A method, non-transitory computer readable medium, and apparatus for localizing a region of interest using a hand gesture are disclosed. For example, the method acquires an image containing the hand gesture from the ego-centric video, detects pixels that correspond to one or more hands in the image using a hand segmentation algorithm, identifies a hand enclosure in the pixels that are detected within the image, localizes a region of interest based on the hand enclosure and performs an action based on the object in the region of interest.
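The "hand enclosure" step — finding a closed region framed by segmented hand pixels and taking its bounding box as the region of interest — can be illustrated with a minimal flood-fill sketch over a binary hand mask. This is a stand-in for the segmentation algorithm, which the abstract does not specify:

```python
def localize_roi(mask):
    """Find a region enclosed by hand pixels (e.g. the hole formed by an
    'OK' or framing gesture) in a binary mask and return its bounding box
    as (min_row, min_col, max_row, max_col), or None if no enclosure."""
    h, w = len(mask), len(mask[0])
    outside = set()
    # Flood-fill background (0) pixels reachable from the image border.
    stack = [(r, c) for r in range(h) for c in range(w)
             if (r in (0, h - 1) or c in (0, w - 1)) and mask[r][c] == 0]
    while stack:
        r, c = stack.pop()
        if (r, c) in outside:
            continue
        outside.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and mask[nr][nc] == 0:
                stack.append((nr, nc))
    # Background pixels NOT reachable from the border are enclosed by the hand.
    enclosed = [(r, c) for r in range(h) for c in range(w)
                if mask[r][c] == 0 and (r, c) not in outside]
    if not enclosed:
        return None
    rows = [r for r, _ in enclosed]
    cols = [c for _, c in enclosed]
    return (min(rows), min(cols), max(rows), max(cols))

# 1 = hand pixel, 0 = background; the ring of 1s encloses one hole.
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(localize_roi(mask))  # (2, 2, 2, 2)
```

The returned box would then be cropped from the ego-centric frame so the downstream action (e.g. object recognition) runs only on the region the user framed.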

Smart tutorial for gesture control system

A method includes monitoring a plurality of system inputs and detecting a behavioral pattern performed by a user and associated with the plurality of system inputs. When the behavioral pattern is detected, the method includes associating, in a memory, a gesture with at least one action, the at least one action being determined by the plurality of system inputs, and, upon detecting the gesture, executing the action associated with the gesture.

Target hand tracking method and apparatus, electronic device, and storage medium

Disclosed are a target hand tracking method and apparatus, an electronic device, and a storage medium. The method includes: detecting a to-be-processed image to obtain a hand detection result; in response to the hand detection result including a bounding box of a hand, determining, as the target hand, the hand in the bounding box whose pose conforms to the hand pose of a target gesture; and tracking the target hand in a video stream according to the target hand in the to-be-processed image, where the images in the video stream and the to-be-processed image are obtained by capturing the same target area, and the images in the video stream are captured after the to-be-processed image.
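The selection step — picking, from the detection result, the hand whose estimated pose conforms to the target gesture, and handing its box to a tracker — might look like the following sketch. The field names and pose labels are hypothetical:

```python
def select_target_hand(detections, target_gesture):
    """From a hand detection result (a list of dicts with a bounding
    'box' and an estimated 'pose' label), return the box of the first
    hand whose pose conforms to the target gesture, else None.
    The tracker would then be initialized on this box and follow the
    hand through subsequent video-stream frames of the same area."""
    for det in detections:
        if det["pose"] == target_gesture:
            return det["box"]
    return None

# Hypothetical detections on the to-be-processed image.
detections = [
    {"box": (0, 0, 10, 10), "pose": "open"},
    {"box": (20, 20, 30, 30), "pose": "pinch"},
]
print(select_target_hand(detections, "pinch"))  # (20, 20, 30, 30)
```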

Operating environment with gestural control and multiple client devices, displays, and users

Embodiments described herein include a system comprising a processor coupled to display devices, sensors, remote client devices, and computer applications. The computer applications orchestrate content of the remote client devices simultaneously across the display devices and the remote client devices, and allow simultaneous control of the display devices. The simultaneous control includes automatically detecting a gesture of at least one object from gesture data received via the sensors. The detecting comprises identifying the gesture using only the gesture data. The computer applications translate the gesture to a gesture signal, and control the display devices in response to the gesture signal.

Image gestures for edge input
09740923 · 2017-08-22

An aspect provides a method, including: capturing, using an image sensor, an image of a user; detecting, using a processor, a user gesture forming an edge within the image; capturing, using the image sensor, at least one additional image of the user; detecting, using the processor, a user gesture relating to the edge of the image; and committing, using the processor, a predetermined action according to the user gesture relating to the edge of the image. Other aspects are described and claimed.

Feature-based pose detection
09740924 · 2017-08-22

In order to classify the presented pose of a human hand, a feature distribution map of the pose is compared to reference feature distribution maps. To generate a feature distribution map, each contour point of a hand is analyzed to determine a corresponding feature set. The feature set of a contour point includes a distance feature and an angle feature of the contour point in relation to one of its neighboring contour points. The feature sets generated from an observed pose are compared to feature sets of reference poses to determine which of the reference poses most closely matches the presented pose.
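The described features can be sketched minimally: each contour point yields a (distance, angle) pair relative to its next neighbor, and the observed feature sets are compared against reference poses. The L1 matching cost and the toy square "contours" are assumptions; the patent abstract does not fix a metric or contour representation:

```python
import math

def feature_sets(contour):
    """For each contour point, compute a distance feature and an angle
    feature relative to its next neighboring contour point (wrapping
    around the closed contour)."""
    feats = []
    n = len(contour)
    for i, (x, y) in enumerate(contour):
        nx, ny = contour[(i + 1) % n]
        feats.append((math.hypot(nx - x, ny - y), math.atan2(ny - y, nx - x)))
    return feats

def match_pose(observed, references):
    """Return the name of the reference pose whose feature distribution
    is closest to the observed one (assumed L1 cost over paired features)."""
    def cost(a, b):
        return sum(abs(d1 - d2) + abs(t1 - t2)
                   for (d1, t1), (d2, t2) in zip(a, b))
    obs = feature_sets(observed)
    return min(references, key=lambda name: cost(obs, feature_sets(references[name])))

# Toy reference "contours"; real contours would have many points per hand.
references = {
    "fist": [(0, 0), (2, 0), (2, 2), (0, 2)],  # compact square outline
    "open": [(0, 0), (6, 0), (6, 6), (0, 6)],  # larger square outline
}
print(match_pose([(0, 0), (5, 0), (5, 5), (0, 5)], references))  # open
```

A production version would normalize for scale and rotation and compare distributions rather than point-aligned pairs, but the pairwise (distance, angle) features are the core of the described approach.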

Utility Vehicle and Corresponding Apparatus, Method and Computer Program for a Utility Vehicle
20220309795 · 2022-09-29

Various examples relate to a utility vehicle, and to a corresponding apparatus, method and computer program for a utility vehicle. The apparatus comprises at least one interface for obtaining video data from one or more cameras of the utility vehicle. The apparatus further comprises one or more processors. The one or more processors are configured to process, using a machine-learning model, the video data to determine pose information of a person being shown in the video data. The machine-learning model is trained to generate pose-estimation data based on video data. The one or more processors are configured to detect at least one pre-defined pose based on the pose information of the person. The one or more processors are configured to control the utility vehicle based on the detected at least one pre-defined pose.
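The final step — detecting a pre-defined pose in the pose-estimation output and controlling the vehicle accordingly — might be sketched as follows. The pose labels, the commands, and the confidence-threshold reading of the model output are all assumptions; the publication does not enumerate them:

```python
# Hypothetical pose-to-command table; the application does not list poses.
POSE_COMMANDS = {
    "both_arms_raised": "stop",
    "arm_extended_left": "steer_left",
}

def detect_predefined_pose(pose_info, threshold=0.5):
    """Pick the highest-confidence pose from the machine-learning model's
    pose-estimation output (assumed here to be a {pose: confidence} map),
    requiring it to clear a confidence threshold."""
    pose, conf = max(pose_info.items(), key=lambda kv: kv[1])
    return pose if conf >= threshold else None

def control_vehicle(pose_info):
    """Map the detected pre-defined pose to a vehicle command; fall back
    to a no-op when no pose is detected confidently enough."""
    pose = detect_predefined_pose(pose_info)
    return POSE_COMMANDS.get(pose, "no_op")

print(control_vehicle({"both_arms_raised": 0.9, "arm_extended_left": 0.2}))  # stop
```

The confidence gate matters here: for a utility vehicle, acting on a low-confidence pose is worse than ignoring it, so uncertain detections default to no operation.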