G06V40/113

Hierarchical Context-Aware Extremity Detection
20170344838 · 2017-11-30 ·

In an example embodiment, a computer-implemented method receives image data from one or more sensors of a moving platform and detects one or more objects in the image data. The one or more objects potentially represent extremities of a user associated with the moving platform. The method processes the one or more objects using two or more context processors and context data retrieved from a context database, producing at least two confidence values for each of the one or more objects. The method then filters at least one of the one or more objects from consideration based on the confidence values of each of the one or more objects.
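
The multi-context confidence filtering described above can be sketched as follows. This is a hypothetical illustration, assuming simple averaging of per-processor confidences; the function and field names (filter_candidates, position_context, etc.) are invented, not taken from the patent.

```python
def filter_candidates(objects, processors, threshold=0.5):
    """Keep only candidate extremity objects whose combined confidence,
    aggregated across all context processors, meets the threshold."""
    kept = []
    for obj in objects:
        # Each processor scores the candidate against its own context data,
        # yielding one confidence value per processor.
        confidences = [p(obj) for p in processors]
        # Aggregate the per-processor confidences (here: simple average).
        combined = sum(confidences) / len(confidences)
        if combined >= threshold:
            kept.append(obj)
    return kept

# Example context processors (illustrative): one scores position
# plausibility, another scores apparent size.
def position_context(obj):
    return 1.0 if obj["y"] > 100 else 0.2   # hands unlikely near image top

def size_context(obj):
    return 1.0 if 20 <= obj["width"] <= 200 else 0.3

candidates = [
    {"id": "a", "y": 150, "width": 80},   # plausible hand
    {"id": "b", "y": 10,  "width": 500},  # implausible: high up and huge
]
kept = filter_candidates(candidates, [position_context, size_context])
```

Other aggregation rules (minimum, weighted sum) would slot into the same structure; the patent abstract does not commit to one.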

GESTURE CONTROL DEVICE AND METHOD

A device for recognizing control gestures and determining which device, out of a plurality of devices, is the target of the control acquires images of a gesture from each electronic device. A three-dimensional coordinate system is established for each image, and the coordinates of a central point of each electronic device are determined. The extent of the gesture to the left and to the right at different depths is determined, and a regression plane equation is calculated. The distance between the regression plane and the center point of each electronic device is determined, and the electronic device whose center point is closest to the plane (the shortest distance) is determined to be the target device of the control gesture. A gesture control method is also provided.
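
The regression-plane selection step can be sketched as follows: fit a least-squares plane z = a*x + b*y + c to 3D gesture points, then pick the device whose center lies closest to that plane. The function names and toy data are illustrative assumptions, not taken from the patent.

```python
import math

def solve3(m, v):
    """Solve a 3x3 linear system m @ x = v by Gauss-Jordan elimination."""
    a = [row[:] + [v[i]] for i, row in enumerate(m)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))  # pivot
        a[col], a[piv] = a[piv], a[col]
        for r in range(3):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[i][3] / a[i][i] for i in range(3)]

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c through 3D gesture points."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] ** 2 for p in points); syy = sum(p[1] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points); syz = sum(p[1] * p[2] for p in points)
    # Normal equations for the unknown coefficients (a, b, c).
    return solve3([[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]],
                  [sxz, syz, sz])

def plane_distance(plane, point):
    """Perpendicular distance from a point to the plane z = a*x + b*y + c."""
    a, b, c = plane
    x, y, z = point
    return abs(a * x + b * y + c - z) / math.sqrt(a * a + b * b + 1)

def target_device(gesture_points, device_centers):
    """Device whose center point is nearest the gesture's regression plane."""
    plane = fit_plane(gesture_points)
    return min(device_centers, key=lambda d: plane_distance(plane, d))

# Toy data: gesture extents at different depths lying on the plane z = x.
gesture = [(0, 0, 0), (1, 0, 1), (0, 1, 0), (1, 1, 1), (2, 1, 2)]
devices = [(3.0, 0.0, 0.0), (0.0, 0.0, 0.3)]
target = target_device(gesture, devices)
```

With the toy data, the second device center lies far closer to the fitted plane than the first, so it is selected as the target.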

Systems and methods for autonomous passenger transport
11676236 · 2023-06-13 ·

Vehicles, methods, and computer-readable storage media are provided for transporting a gesturing pedestrian by an autonomous vehicle. A first vehicle can be operated to identify that an individual is interested in receiving transportation. A second vehicle can then receive information about the individual that is interested in receiving the transportation, and the first vehicle is then informed regarding the second vehicle.

SMART GLASSES, AND SYSTEM AND METHOD FOR PROCESSING HAND GESTURE COMMAND THEREFOR

Smart glasses, and a system and method for processing a hand gesture command using the smart glasses, are provided. According to an exemplary embodiment, the system includes smart glasses that capture a series of images including a hand gesture of a user, and represent and transmit the hand image included in each of the series of images as hand representation data in a predetermined metadata format; and a gesture recognition apparatus that recognizes the hand gesture of the user using the hand representation data of the series of images received from the smart glasses, and generates and transmits a gesture command corresponding to the recognized hand gesture.
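
The "predetermined format of metadata" idea can be illustrated as follows: the glasses send compact hand-representation data (for example, normalized landmark coordinates) rather than raw frames, and the recognition apparatus decodes it. The JSON schema and field names below are invented for illustration; the patent does not specify them.

```python
import json

def encode_hand_frame(frame_index, landmarks):
    """Pack one frame's hand landmarks into a JSON metadata record."""
    return json.dumps({
        "frame": frame_index,
        "landmarks": [{"x": round(x, 4), "y": round(y, 4)}
                      for x, y in landmarks],
    })

def decode_hand_frame(payload):
    """Recover the frame index and landmark list on the
    gesture-recognition apparatus."""
    record = json.loads(payload)
    return record["frame"], [(p["x"], p["y"]) for p in record["landmarks"]]

payload = encode_hand_frame(0, [(0.1, 0.2), (0.35, 0.4)])
frame, pts = decode_hand_frame(payload)
```

Transmitting a few dozen coordinates per frame instead of full images is what makes offloading recognition from the glasses practical.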

System and apparatus for non-intrusive word and sentence level sign language translation

A sign language translation system may capture infrared images of the formation of a sign language sign or sequence of signs. The captured infrared images may be used to produce skeletal joints data that includes a temporal sequence of 3D coordinates of skeletal joints of the hands and forearms that produced the sign language sign(s). A hierarchical bidirectional recurrent neural network may be used to translate the skeletal joints data into a word or sentence of a spoken language. End-to-end sentence translation may be performed using a probabilistic connectionist temporal classification-based approach that may not require pre-segmentation of the sequence of signs or post-processing of the translated sentence.
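
The connectionist temporal classification (CTC) step can be illustrated with its standard greedy decoding rule: collapse repeated per-frame labels, then drop blanks, which is why no pre-segmentation of the sign sequence is needed. This is a generic CTC sketch, not the patent's specific network.

```python
BLANK = "_"  # the CTC blank symbol emitted between and within signs

def ctc_greedy_decode(framewise_labels):
    """Collapse per-frame best labels into an output word sequence:
    drop consecutive repeats, then drop blanks."""
    out = []
    prev = None
    for label in framewise_labels:
        if label != prev and label != BLANK:
            out.append(label)
        prev = label
    return out

# Hypothetical per-frame argmax labels for a short sequence of signs:
frames = ["_", "I", "I", "_", "want", "want", "_", "_", "water", "water"]
sentence = ctc_greedy_decode(frames)
```

Note that a blank between two identical labels keeps them distinct, so repeated signs survive decoding.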

SMART DEVICE
20170312614 · 2017-11-02 ·

An Internet of Things (IoT) device includes a body with a processor, a camera, and a wireless transceiver coupled to the processor.

SMART DEVICE
20170318360 · 2017-11-02 ·

An Internet of Things (IoT) device includes a body with a processor, a camera, and a wireless transceiver coupled to the processor.

Method for setting a tridimensional shape detection classifier and method for tridimensional shape detection using said shape detection classifier

A method for setting a tridimensional shape detection classifier for detecting tridimensional shapes from depth images, in which each pixel represents a depth distance from a source to a scene. The classifier comprises a forest of at least one binary tree (T) for obtaining the class probability (p) of a given shape, with nodes associated with a distance function (f) that, taking at least a pixel position in a patch, calculates a pixel distance. For each leaf (L) node of the binary tree, the method comprises the configuration steps of: creating candidate groups of parameters; obtaining positive patches (Ip) containing part of the shape to be detected; obtaining negative patches (In) not containing part of the shape to be detected; calculating in the leaf node the distance function of the obtained positive and negative patches, comparing the result of the distance function with its pixel distance threshold, and computing its statistics; and selecting for the leaf node the candidate group of parameters that best separates the positive and negative patches into two groups, for calculating the class probability of the shape in that leaf node using the distance function. Also disclosed are a method for shape detection from a depth image using the shape detection classifier; a data processing apparatus comprising means for carrying out the methods; and a computer program adapted to perform the methods.
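
The per-leaf configuration step can be sketched as follows: for candidate thresholds on the distance function f, pick the one that best separates positive patches (containing the shape) from negative patches, and derive the class probability from the resulting split. The names and the simple separation score are illustrative assumptions, not the patent's specific statistics.

```python
def best_threshold(pos_values, neg_values, candidates):
    """Return (threshold, class_probability_below) maximizing separation.

    pos_values / neg_values: distance-function results f(patch) for the
    positive and negative patches evaluated at this leaf node.
    """
    best = None
    for t in candidates:
        # Count how many patches of each class fall below the threshold.
        p_below = sum(v < t for v in pos_values)
        n_below = sum(v < t for v in neg_values)
        # Separation score: positives below plus negatives at-or-above.
        score = p_below + (len(neg_values) - n_below)
        if best is None or score > best[0]:
            below = p_below + n_below
            prob = p_below / below if below else 0.0
            best = (score, t, prob)
    _, threshold, prob = best
    return threshold, prob

# Toy data: positives cluster at small distances, negatives at large ones.
pos = [0.1, 0.2, 0.25, 0.3]
neg = [0.6, 0.7, 0.8]
threshold, p_shape = best_threshold(pos, neg, candidates=[0.2, 0.5, 0.75])
```

At detection time, a patch reaching this leaf would be scored with the stored class probability for its side of the selected threshold.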

METHOD AND SYSTEM FOR RENDERING DOCUMENTS WITH DEPTH CAMERA FOR TELEPRESENCE
20170310920 · 2017-10-26 ·

A method of sharing documents is provided. The method includes capturing first image data associated with a document; detecting content of the document based on the captured first image data; capturing second image data associated with an object that is controlled by a user and moved relative to the document; determining a relative position between the document and the object; combining a portion of the second image data with the first image data, based on the determined relative position, to generate a combined image signal that is displayed; and emphasizing a portion of the content in the displayed combined image signal based on the relative position.
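
The compositing step can be sketched minimally: paste the portion of the second image (the user-controlled object, e.g. a pointing hand) into the first image (the document) at the determined relative position. Images are toy 2D pixel grids here, and the names are illustrative, not from the patent.

```python
def combine(document, overlay, top, left, transparent=0):
    """Copy non-transparent overlay pixels onto the document at (top, left)."""
    out = [row[:] for row in document]            # keep the original intact
    for r, row in enumerate(overlay):
        for c, px in enumerate(row):
            dr, dc = top + r, left + c
            # Skip transparent pixels and anything outside the document.
            if px != transparent and 0 <= dr < len(out) and 0 <= dc < len(out[0]):
                out[dr][dc] = px
    return out

doc = [[1] * 4 for _ in range(3)]    # 3x4 "document" image, all pixels 1
hand = [[0, 9], [9, 9]]              # 2x2 object cutout, 0 = transparent
combined = combine(doc, hand, top=1, left=2)
```

The same relative position that drives the paste location could also select which detected content region to emphasize in the displayed result.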

USER INTERFACE, MEANS OF MOVEMENT, AND METHODS FOR RECOGNIZING A USER'S HAND
20170300120 · 2017-10-19 ·

A hand of a user may be detected in free space, where a plurality of surface points is determined, including a center-area surface point and at least two surface points located on the periphery of the surface of the hand. A curve extending through the plurality of surface points may be determined based on a position of a curvature. The surface points are processed to determine whether they are arranged in a substantially concave area relative to the sensor and/or a substantially convex area relative to the sensor. The detected hand may be identified as a palm or a back of the hand based on this processing of the surface points.
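
A rough sketch of the palm/back decision described above: compare the depth of the hand's central surface point with the depths of the peripheral surface points. If the center lies farther from the sensor than the rim, the surface is concave toward the sensor (suggesting a palm); if nearer, convex (back of hand). The tolerance value and labels are illustrative assumptions, not the patent's.

```python
def classify_hand(center_depth, periphery_depths, tolerance=0.002):
    """Return 'palm', 'back', or 'flat' from sensor-relative depths (meters)."""
    rim = sum(periphery_depths) / len(periphery_depths)
    if center_depth > rim + tolerance:
        return "palm"   # center recessed: concave toward the sensor
    if center_depth < rim - tolerance:
        return "back"   # center bulging: convex toward the sensor
    return "flat"

# Center point 16 mm deeper than the average rim point: concave, i.e. palm.
side = classify_hand(0.52, [0.50, 0.505, 0.50, 0.51])
```

A real implementation would evaluate curvature along the fitted curve rather than a single center-vs-rim comparison, but the sign of the comparison carries the same concave/convex information.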