G06V20/653

Method and System for Hand Pose Detection
20230044664 · 2023-02-09

A method for hand pose identification in an automated system includes providing map data of a hand of a user to a first neural network trained to classify features corresponding to a joint angle of a wrist in the hand, to generate a first plurality of activation features, and performing a first search in a predetermined plurality of activation features stored in a database in a memory to identify a first plurality of hand pose parameters for the wrist associated with predetermined activation features in the database that are nearest neighbors to the first plurality of activation features. The method further includes generating a hand pose model corresponding to the hand of the user based on the first plurality of hand pose parameters, and performing an operation in the automated system in response to input from the user based on the hand pose model.
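The nearest-neighbor lookup at the core of this method can be sketched as follows. The feature dimensionality, database size, brute-force Euclidean search, and the value of k are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

# Hypothetical database of precomputed activation features and their
# associated hand pose (wrist joint-angle) parameters.
rng = np.random.default_rng(0)
db_features = rng.normal(size=(1000, 64))    # stored activation features
db_pose_params = rng.normal(size=(1000, 4))  # pose parameters per entry

def nearest_neighbor_pose(query_features, k=5):
    """Return pose parameters of the k stored entries whose activation
    features are nearest (Euclidean) to the query features."""
    dists = np.linalg.norm(db_features - query_features, axis=1)
    idx = np.argsort(dists)[:k]
    return db_pose_params[idx]

# A query vector standing in for the network's activation features.
query = rng.normal(size=(64,))
params = nearest_neighbor_pose(query)
```

A production system would typically replace the brute-force scan with an approximate nearest-neighbor index once the database grows large.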

Method and system for remote virtual visualization of physical locations

This application discloses methods, systems, and computer-implemented virtualization software applications and graphical user interface tools for remote virtual visualization of structures. Images of a structure are captured by an imaging vehicle and transmitted to a remote server via a communication network. Using virtual 3D digital modeling software, the server generates a virtual 3D digital model of the structure from the images received from the imaging vehicle and stores it in a database. Remote users can access this virtual 3D digital model through virtualization software applications and use it to view images of the structure. The user can manipulate the images, view them from various perspectives, and compare before-damage images with images taken after damage has occurred. On this basis, the user can remotely communicate with an insurance agent and/or file an insurance claim.

FACE IMAGE PROCESSING METHOD, FACE IMAGE PROCESSING MODEL TRAINING METHOD, APPARATUS, DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT

This application discloses a face image processing method performed by an electronic device. The method includes: acquiring a face image of a source face and a face template image of a template face; performing three-dimensional face modeling on the face image and the face template image to obtain a three-dimensional face image feature of the face image and a three-dimensional face template image feature of the face template image; fusing the three-dimensional face image feature and the three-dimensional face template image feature to obtain a three-dimensional fusion feature; performing face replacement feature extraction on the face image based on the face template image to obtain an initial face replacement feature; transforming the initial face replacement feature based on the three-dimensional fusion feature to obtain a target face replacement feature; and replacing the template face with the source face based on the target face replacement feature to obtain a target face image.
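The fusion and transformation steps above can be illustrated with a minimal numerical sketch. The feature dimensions, the concatenate-and-project fusion, and the scale-and-shift modulation are all assumptions chosen for illustration; the abstract does not specify these operations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 3D face features (dimensions are assumed).
feat_source_3d = rng.normal(size=(128,))    # from the source face image
feat_template_3d = rng.normal(size=(128,))  # from the face template image

# One simple fusion: concatenate the two 3D features and project them
# back to the feature dimension with a (random stand-in) weight matrix.
w = rng.normal(size=(128, 256)) / 16.0
fused = w @ np.concatenate([feat_source_3d, feat_template_3d])

# Transform the initial face replacement feature using the fused feature,
# modeled here as a scale-and-shift (AdaIN-style) modulation.
init_replacement = rng.normal(size=(128,))
scale, shift = fused[:64].mean(), fused[64:].mean()
target_replacement = init_replacement * (1.0 + scale) + shift
```

In a trained system both the projection and the modulation parameters would be learned rather than derived from random weights.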

GENERATING OPTICAL FLOW LABELS FROM POINT CLOUDS

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an optical flow label from a lidar point cloud. One of the methods includes obtaining data specifying a training example, including a first image of a scene in an environment captured at a first time point and a second image of the scene in the environment captured at a second time point. For each of a plurality of lidar points, a respective second corresponding pixel in the second image is obtained and a respective velocity estimate for the lidar point at the second time point is obtained. A respective first corresponding pixel in the first image is determined using the velocity estimate for the lidar point. A proxy optical flow ground truth for the training example is generated based on an estimate of optical flow of the pixel between the first and second images.

Apparatus, systems and methods for providing three-dimensional instruction manuals in a simplified manner
11487828 · 2022-11-01

Interactive, electronic guides for an object may include one or more 3D models, and one or more associated tasks, such as how to assemble, operate, or repair an aspect of the object. A user electronic device may scan an encoded tag on the object, and transmit the scan data to an electronic guide distribution server. The server may receive an electronic guide generated by an electronic guide generator having a 3D model repository and a task repository, the guide associated with the encoded tag. Guide managers may add or modify 3D models and/or tasks to broaden the available guides, and tag producers may generate encoded tags using new and/or modified 3D models and tasks and apply tags to objects.
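The server-side tag-to-guide resolution can be sketched as a simple keyed lookup. The tag format, guide schema, and model file name here are hypothetical stand-ins for the repositories the abstract describes.

```python
# Hypothetical guide store keyed by encoded tag; a real distribution server
# would populate this from the 3D model repository and task repository.
guides = {
    "TAG-001": {"model": "chair_v2.glb", "tasks": ["assemble", "repair"]},
}

def fetch_guide(scan_data):
    """Resolve scanned tag data to its electronic guide, or None."""
    return guides.get(scan_data)

guide = fetch_guide("TAG-001")
```

Guide managers adding or modifying models and tasks would amount to updating the two repositories backing this mapping.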

Object recognition method and device, and storage medium

An object recognition method is performed at an electronic device. The method includes: pre-processing a target image to obtain a pre-processed image, the pre-processed image including three-dimensional image information of a target region of a to-be-detected object; processing the pre-processed image by using a target data model to obtain a target probability, the target probability representing a probability that an abnormality appears in a target object in the target region of the to-be-detected object; and determining a recognition result of the target region of the to-be-detected object according to the target probability, the recognition result indicating the probability that the abnormality appears in the target region of the to-be-detected object. The object recognition method can effectively improve the accuracy of object recognition and avoid incorrect recognition.
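The preprocess-score-threshold pipeline can be sketched as follows. The normalization step, the stand-in model, and the 0.5 decision threshold are illustrative assumptions in place of a trained target data model.

```python
import numpy as np

def preprocess(volume):
    """Normalize a 3D volume of the target region (illustrative step)."""
    v = volume.astype(np.float32)
    return (v - v.mean()) / (v.std() + 1e-8)

def abnormality_probability(volume, model):
    """model: any callable mapping a pre-processed volume to a score in [0, 1]."""
    return float(model(preprocess(volume)))

def recognition_result(prob, threshold=0.5):
    """Turn the target probability into a recognition result."""
    return "abnormal" if prob >= threshold else "normal"

# Stand-in model: logistic of the mean intensity. A real system would use
# a trained 3D network here.
toy_model = lambda v: 1.0 / (1.0 + np.exp(-v.mean()))

vol = np.zeros((4, 4, 4))
prob = abnormality_probability(vol, toy_model)
```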

Method, System, and Device of Generating a Reduced-Size Volumetric Dataset
20230093102 · 2023-03-23

Device, system, and method of generating a reduced-size volumetric dataset. A method includes receiving a plurality of three-dimensional volumetric datasets that correspond to a particular object; and generating, from that plurality of three-dimensional volumetric datasets, a single uniform mesh dataset that corresponds to that particular object. The size of that single uniform mesh dataset is less than ¼ of the aggregate size of the plurality of three-dimensional volumetric datasets. The resulting uniform mesh is temporally coherent, and can be used for animating that object, as well as for introducing modifications to that object or to clothing or garments worn by that object.

TRAINING USING RENDERED IMAGES

Examples of methods for training using rendered images are described herein. In some examples, a method may include, for a set of iterations, randomly positioning a three-dimensional (3D) object model in a virtual space with random textures. In some examples, the method may include, for the set of iterations, rendering a two-dimensional (2D) image of the 3D object model in the virtual space and a corresponding annotation image. In some examples, the method may include training a machine learning model using the rendered 2D images and corresponding annotation images.
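The randomization loop described above can be sketched with a stand-in renderer. The image size, the scalar texture model, and the thresholded annotation are illustrative assumptions in place of a real 3D rendering pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

def render(pose, texture):
    """Stand-in renderer: a real pipeline would rasterize the 3D object
    model in the virtual space. Returns a 2D image and a corresponding
    per-pixel annotation (segmentation) image."""
    image = rng.random((64, 64, 3)) * texture
    annotation = (image.mean(axis=-1) > 0.25).astype(np.uint8)
    return image, annotation

dataset = []
for _ in range(8):                              # the set of iterations
    pose = rng.uniform(-1.0, 1.0, size=6)       # random position/orientation
    texture = rng.uniform(0.2, 1.0)             # random texture intensity
    dataset.append(render(pose, texture))

images = np.stack([im for im, _ in dataset])
labels = np.stack([an for _, an in dataset])
# `images` and `labels` would then be used to train the machine learning model.
```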

HEAD POSTURE ESTIMATION DEVICE AND HEAD POSTURE ESTIMATION METHOD

A head posture estimation device includes: a face feature point extraction unit to extract multiple face feature points from a face image; a model posture change unit to change the position and posture of a first three-dimensional model so that the errors between the positions of the multiple face feature points and the positions of multiple model feature points on the face image become less than or equal to respective thresholds for error determination; a degree of match acquisition unit to acquire the degrees of match on the face image between the positions of the multiple model feature points on the first three-dimensional model having the changed position and posture and the positions of the multiple face feature points; and a calibration unit to calibrate the first three-dimensional model by shifting the position of a model feature point on the basis of the corresponding one of the degrees of match.
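The model posture change step can be illustrated by a minimal fitting loop that handles only 2D translation. The threshold, step size, and translation-only motion model are simplifying assumptions; the device described above also changes 3D posture and separately calibrates individual feature points by their degrees of match.

```python
import numpy as np

def fit_translation(model_pts, face_pts, threshold=0.5, max_iter=100, step=0.5):
    """Shift the projected model feature points toward the detected face
    feature points until every per-point error is within the threshold."""
    offset = np.zeros(2)
    for _ in range(max_iter):
        errors = face_pts - (model_pts + offset)
        if np.all(np.linalg.norm(errors, axis=1) <= threshold):
            break
        offset += step * errors.mean(axis=0)   # move toward the mean residual
    return offset

# Three illustrative feature points; the face points are the model points
# displaced by a known translation, so the fit should recover it.
model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
face = model + np.array([3.0, -2.0])
off = fit_translation(model, face)
```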

METHOD AND DEVICE FOR DETECTING DEFECTS IN AIRCRAFT TEMPLATE
20230090846 · 2023-03-23

A method for detecting defects of an aircraft template, including: scanning the template; establishing a local coordinate system of a template point cloud; fitting plane parameters of a target local point cloud; acquiring an average of the normal vectors of all points; calculating the heights of all points; calculating an angle between a normal vector of the sinking point and a normal vector of the template point cloud plane; binarizing a point cloud image of the template; obtaining a 3D digital model of the template; aligning the 3D digital model with a resulting point cloud; and determining whether an actual distance exceeds a preset distance threshold: if not, the template is qualified; otherwise, determining whether the number of points whose actual distance exceeds the preset distance threshold exceeds a preset number threshold: if not, the template is qualified; otherwise, the template is not qualified. A detection device is also provided.
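Several of the steps above (plane fitting, point heights, threshold counting) can be sketched as follows. The SVD-based plane fit and both threshold values are illustrative assumptions; the full method also involves normal-vector angles, binarization, and alignment against a 3D digital model.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an Nx3 point cloud via SVD.
    Returns (unit normal, centroid)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return vt[-1], centroid   # smallest singular vector is the plane normal

def qualify(points, dist_threshold=0.1, count_threshold=5):
    """Template is qualified unless more than count_threshold points
    deviate from the fitted plane by more than dist_threshold."""
    normal, centroid = fit_plane(points)
    heights = np.abs((points - centroid) @ normal)   # point-to-plane distance
    exceeding = int((heights > dist_threshold).sum())
    return exceeding <= count_threshold

# A perfectly flat 5x5 grid should always qualify.
pts = np.array([[x, y, 0.0] for x in range(5) for y in range(5)], dtype=float)
ok_flat = qualify(pts)
```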