H04N13/271

System for hand pose detection

A method for hand pose identification in an automated system provides depth map data of a user's hand to a first neural network, trained to classify features corresponding to a joint angle of the wrist, to generate a first plurality of activation features. A first search is then performed over a predetermined plurality of activation features stored in a database in memory to identify a first plurality of hand pose parameters for the wrist, associated with the predetermined activation features that are nearest neighbors to the first plurality of activation features. The method further includes generating a hand pose model corresponding to the user's hand based on the first plurality of hand pose parameters, and performing an operation in the automated system in response to user input based on the hand pose model.
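The nearest-neighbor lookup step could be sketched as below. This is a minimal illustration, not the patented implementation: the database entries, feature vectors, and parameter names (`wrist_pitch`, `wrist_yaw`) are hypothetical stand-ins for the activation features a trained network would produce and the pose parameters stored with them.

```python
import math

# Hypothetical database: each entry pairs precomputed activation features
# with the hand pose parameters that generated them.
POSE_DATABASE = [
    {"features": [0.1, 0.9, 0.2], "pose_params": {"wrist_pitch": 10.0, "wrist_yaw": -5.0}},
    {"features": [0.8, 0.1, 0.4], "pose_params": {"wrist_pitch": 45.0, "wrist_yaw": 20.0}},
    {"features": [0.5, 0.5, 0.5], "pose_params": {"wrist_pitch": 0.0, "wrist_yaw": 0.0}},
]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_pose_params(activation_features, database, k=1):
    """Return the pose parameters of the k stored activation vectors
    nearest to the query activation features."""
    ranked = sorted(database, key=lambda e: euclidean(e["features"], activation_features))
    return [e["pose_params"] for e in ranked[:k]]

# A query activation vector close to the first database entry
# resolves to that entry's wrist parameters:
params = nearest_pose_params([0.12, 0.85, 0.25], POSE_DATABASE, k=1)
```

A production system would replace the linear scan with an approximate nearest-neighbor index, since the database of predetermined activation features can be large.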

SYSTEMS, METHODS AND APPARATUSES FOR STEREO VISION AND TRACKING

A system, method, and apparatus for stereo vision and tracking using a plurality of coupled cameras and optional sensors.

ADAPTIVE RESOLUTION OF POINT CLOUD AND VIEWPOINT PREDICTION FOR VIDEO STREAMING IN COMPUTING ENVIRONMENTS

A mechanism is described for facilitating adaptive resolution and viewpoint prediction for immersive media in computing environments. An apparatus of embodiments, as described herein, includes one or more processors to receive viewing positions associated with a user with respect to a display, and analyze relevance of media contents based on the viewing positions, where the media contents include immersive videos of scenes captured by one or more cameras. The one or more processors further predict portions of the media contents as relevant portions based on the viewing positions and transmit the relevant portions to be rendered and displayed.
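The predict-then-transmit idea can be sketched as follows. This is a toy illustration under stated assumptions, not the patented mechanism: the linear extrapolation of viewing angle and the fixed field-of-view tile filter are hypothetical stand-ins for whatever predictor and relevance analysis an embodiment would use.

```python
def predict_viewpoint(history):
    """Linearly extrapolate the next viewing angle (degrees) from the
    last two (time, angle) samples -- a simple stand-in predictor."""
    (t0, a0), (t1, a1) = history[-2], history[-1]
    rate = (a1 - a0) / (t1 - t0)
    return a1 + rate  # one time-step ahead

def relevant_tiles(predicted_angle, tile_centers, fov=90.0):
    """Tiles whose center falls inside the predicted field of view are
    treated as relevant; the rest can be downsampled or skipped."""
    half = fov / 2.0
    return [c for c in tile_centers if abs(c - predicted_angle) <= half]

history = [(0.0, 10.0), (1.0, 30.0)]           # (time, yaw angle) samples
pred = predict_viewpoint(history)               # head turning right -> 50.0
tiles = relevant_tiles(pred, [0, 45, 90, 135, 180])
```

Transmitting only `tiles` at full resolution is what lets the sender adapt resolution to the viewer rather than streaming the whole immersive scene uniformly.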

Damage detection from multi-view visual data

Reference images of an object may be mapped to an object model to create a reference object model representation. Evaluation images of the object may also be mapped to the object model via the processor to create an evaluation object model representation. Object condition information may be determined by comparing the reference object model representation with the evaluation object model representation. The object condition information may indicate one or more differences between the reference object model representation and the evaluation object model representation. A graphical representation of the object model that includes the object condition information may be displayed on a display screen.
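The comparison step could be sketched as below. This is a deliberately simplified illustration: mapping images onto an object model is reduced to assigning one scalar appearance value per named model face (`door`, `hood`, `roof` are hypothetical), and the condition information is just the per-face difference that exceeds a tolerance.

```python
def map_to_model(images):
    """Hypothetical stand-in for mapping images onto an object model:
    each named 'face' of the model receives one appearance value."""
    return {face: value for face, value in images}

def condition_info(reference_rep, evaluation_rep, tolerance=0.1):
    """Compare the two model representations face by face and report
    faces whose appearance differs by more than the tolerance."""
    diffs = {}
    for face, ref_val in reference_rep.items():
        eval_val = evaluation_rep.get(face, ref_val)
        if abs(eval_val - ref_val) > tolerance:
            diffs[face] = eval_val - ref_val
    return diffs

reference = map_to_model([("door", 0.80), ("hood", 0.75), ("roof", 0.90)])
evaluation = map_to_model([("door", 0.40), ("hood", 0.74), ("roof", 0.90)])
damage = condition_info(reference, evaluation)  # only the door differs
```

Anchoring both image sets to the same object model is what makes the comparison meaningful: differences are attributed to locations on the object rather than to viewpoint changes between the two capture sessions.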

IMAGE SIGNAL REPRESENTING A SCENE

Generating an image signal comprises a receiver (401) receiving source images representing a scene. A combined image generator (403) generates combined images from the source images. Each combined image is derived from only parts of at least two images of the source images. An evaluator (405) determines prediction quality measures for elements of the source images where the prediction quality measure for an element of a first source image is indicative of a difference between pixel values in the first source image and predicted pixel values for pixels in the element. The predicted pixel values are pixel values resulting from prediction of pixels from the combined images. A determiner (407) determines segments of the source images comprising elements for which the prediction quality measure is indicative of a difference above a threshold. An image signal generator (409) generates an image signal comprising image data representing the combined images and the segments of the source images.
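The prediction quality measure and segment selection can be illustrated on a toy 1-D "image". This sketch assumes per-pixel absolute error as the quality measure and contiguous runs above threshold as segments; the actual measure and segmentation in the embodiments may differ.

```python
def prediction_quality(source_row, predicted_row):
    """Per-pixel absolute difference between the source image and its
    prediction from the combined images (a toy 1-D row here)."""
    return [abs(s - p) for s, p in zip(source_row, predicted_row)]

def segments_above_threshold(quality, threshold):
    """Contiguous runs of pixels whose prediction error exceeds the
    threshold: the segments the image signal must carry explicitly."""
    segments, start = [], None
    for i, q in enumerate(quality):
        if q > threshold and start is None:
            start = i
        elif q <= threshold and start is not None:
            segments.append((start, i - 1))
            start = None
    if start is not None:
        segments.append((start, len(quality) - 1))
    return segments

source    = [10, 10, 50, 52, 10, 10]
predicted = [10, 11, 20, 20, 10,  9]
quality = prediction_quality(source, predicted)    # [0, 1, 30, 32, 0, 1]
segments = segments_above_threshold(quality, 5)    # [(2, 3)]
```

Only the poorly predicted segment is transmitted alongside the combined images; the receiver reconstructs everything else by prediction, which is the bandwidth saving the signal format is after.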

METHOD FOR INFRARED SMALL TARGET DETECTION BASED ON DEPTH MAP IN COMPLEX SCENE
20220174256 · 2022-06-02 ·

The present invention discloses a method for infrared small target detection based on a depth map in a complex scene, belonging to the field of target detection. An infrared image is collected and binarized via a pixel-value method using a priori knowledge of the target to be detected; the binary image is further constrained using depth prior knowledge; static and dynamic scoring strategies are then formulated to score candidate connected components in the morphologically processed image; and the infrared small target in the complex scene is finally detected. The method can screen out targets within a specific range, is highly reliable and robust, is simple to program and easy to implement, can be used at sea, on land, and in the air, and offers a significant advantage against complex jungle backgrounds.
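The binarize → depth-constrain → score-components pipeline could be sketched as below. The thresholds, depth range, and the "small area" static score are illustrative assumptions; the patent's actual static and dynamic scoring strategies are not specified in the abstract.

```python
from collections import deque

def detect_small_targets(ir, depth, ir_thresh=200, depth_range=(5.0, 50.0), max_area=3):
    """Toy pipeline: binarize the IR image by pixel value, keep only
    pixels whose depth lies in the expected target range, then score
    connected components -- small bright components count as targets."""
    h, w = len(ir), len(ir[0])
    mask = [[ir[y][x] >= ir_thresh and depth_range[0] <= depth[y][x] <= depth_range[1]
             for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    targets = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                comp, queue = [], deque([(y, x)])   # flood-fill one component
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) <= max_area:           # static score: small area
                    targets.append(comp)
    return targets

ir = [
    [10, 10,  10, 10],
    [10, 250, 10, 10],
    [10, 10,  10, 10],
]
depth = [[20.0] * 4 for _ in range(3)]
targets = detect_small_targets(ir, depth)   # one single-pixel bright target
```

The depth constraint is what distinguishes this from plain intensity thresholding: bright clutter outside the plausible target distance is rejected before components are ever scored.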

HYBRID THREE-DIMENSIONAL SENSING SYSTEM
20220171067 · 2022-06-02 ·

A hybrid three-dimensional (3D) sensing system includes a structured-light (SL) projector that generates emitted light with a predetermined light pattern; a time-of-flight (ToF) sensor that generates a ToF signal according to light reflected from a surface of an object illuminated by the emitted light; a ToF depth processing device that generates a ToF depth image according to the ToF signal; a lookup table (LUT) that gives a gray level for each index value of a sum of accumulated charges of the ToF signal; an SL depth processing device that generates an SL depth image according to the ToF signal; and a depth weighting device that generates a weighted depth image according to the ToF depth image and the SL depth image.
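The depth weighting device's role can be sketched as a per-pixel blend of the two depth images. The weight map here is a hypothetical confidence input (the abstract does not say how weights are derived, though the LUT-derived gray levels are a plausible source):

```python
def fuse_depth(tof_depth, sl_depth, tof_weight):
    """Per-pixel weighted blend of the ToF and structured-light depth
    images. tof_weight gives, per pixel, how much to trust the ToF
    measurement (1.0 = pure ToF, 0.0 = pure SL)."""
    return [
        [w * t + (1.0 - w) * s
         for t, s, w in zip(tof_row, sl_row, w_row)]
        for tof_row, sl_row, w_row in zip(tof_depth, sl_depth, tof_weight)
    ]

tof = [[1.0, 2.0], [3.0, 4.0]]     # meters, from the ToF pipeline
sl  = [[1.2, 2.0], [2.8, 4.4]]     # meters, from the SL pipeline
w   = [[1.0, 0.5], [0.5, 0.0]]     # per-pixel ToF confidence
fused = fuse_depth(tof, sl, w)
```

Blending per pixel lets the system lean on ToF where the pattern decoding is weak (e.g., low-texture or distant surfaces) and on SL where ToF is noisy, which is the point of a hybrid sensor.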

Electronic device for controlling camera on basis of external light, and control method therefor

When a three-dimensional image of a specific subject is acquired by means of an infrared camera and external light of relatively large intensity is present (for example, sunlight during outdoor photography), it is difficult to acquire the image. To address this, the present invention proposes an electronic device that reduces a current peak by adaptively changing the optical power and exposure time of the infrared camera according to the intensity of the external light.
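A control rule of this shape could look like the sketch below. The lux breakpoints, power fractions, and exposure divisors are entirely hypothetical calibration values, not from the patent; the point is only the direction of adaptation: stronger external light leads to shorter exposure and higher emitter power.

```python
def adapt_ir_camera(ambient_lux, max_power=1.0, base_exposure_ms=33.0):
    """Toy control rule: as measured external light grows, shorten the
    exposure and raise emitter power (capped at max_power) so the IR
    pattern still stands out over the ambient illumination.
    Breakpoints are hypothetical; a real device would calibrate them."""
    if ambient_lux < 1_000:            # indoor lighting
        return {"power": 0.4 * max_power, "exposure_ms": base_exposure_ms}
    if ambient_lux < 20_000:           # shade / overcast outdoors
        return {"power": 0.7 * max_power, "exposure_ms": base_exposure_ms / 2}
    return {"power": max_power, "exposure_ms": base_exposure_ms / 8}  # direct sun

indoor = adapt_ir_camera(500)       # long exposure, low power
sunny  = adapt_ir_camera(80_000)    # short exposure, full power
```

Shortening the exposure while raising power concentrates the emitter's energy into a brief burst, which is consistent with the stated goal of limiting the sustained current drawn under strong external light.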