Patent classifications
G06V10/26
Face super-resolution realization method and apparatus, electronic device and storage medium
The present application discloses a face super-resolution realization method and apparatus, an electronic device and a storage medium, and relates to the fields of face image processing and deep learning. The specific implementation solution is as follows: a face part in a first image is extracted; the face part is input into a pre-trained face super-resolution model to obtain a super-sharp face image; a semantic segmentation image corresponding to the super-sharp face image is acquired; and the face part in the first image is replaced with the super-sharp face image, by utilizing the semantic segmentation image, to obtain a face super-resolution image.
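The four steps of this abstract (extract, super-resolve, segment, blend back) can be sketched as below. All function names are hypothetical; nearest-neighbour upsampling via `np.kron` stands in for the pre-trained super-resolution model, and a thresholding stub stands in for the semantic segmentation network:

```python
import numpy as np

def extract_face(image, box):
    """Step 1: crop the face part (x, y, w, h) from the first image."""
    x, y, w, h = box
    return image[y:y + h, x:x + w]

def super_resolve(face, scale=2):
    """Step 2: stand-in for the pre-trained face super-resolution model.
    Nearest-neighbour upsampling; a real model would be a trained CNN."""
    return np.kron(face, np.ones((scale, scale)))

def segment_face(sr_face):
    """Step 3: stand-in semantic segmentation -- mark non-zero pixels
    as belonging to the face."""
    return (sr_face > 0).astype(float)

def replace_face(image, box, sr_face, mask, scale=2):
    """Step 4: blend the super-sharp face back into an upscaled copy of
    the first image, weighted by the segmentation mask."""
    up = np.kron(image, np.ones((scale, scale)))
    x, y, w, h = (v * scale for v in box)
    region = up[y:y + h, x:x + w]
    up[y:y + h, x:x + w] = mask * sr_face + (1 - mask) * region
    return up
```

The mask-weighted blend is what keeps the replacement seamless: only pixels the segmentation marks as face are overwritten, while background pixels keep the plainly-upscaled original.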
Human body attribute recognition method and apparatus, electronic device, and storage medium
The present disclosure describes human body attribute recognition methods and apparatus, electronic devices, and a storage medium. The method includes acquiring a sample image containing a plurality of to-be-detected areas that are labeled with true values of human body attributes; generating, through a recognition model, a heat map of the sample image and heat maps of the to-be-detected areas to obtain a global heat map and local heat maps; fusing the global and local heat maps to obtain a fused image, and performing human body attribute recognition on the fused image to obtain predicted values; determining a focus area of each type of human body attribute according to the global and local heat maps; correcting the recognition model by using the focus area, the true values, and the predicted values; and performing, based on the corrected recognition model, human body attribute recognition on a to-be-recognized image.
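The fusion and focus-area steps of this abstract can be sketched as follows. The abstract does not fix the fusion operator or the focus-area criterion, so both choices below (element-wise averaging, joint-saliency thresholding) are illustrative assumptions:

```python
import numpy as np

def fuse_heat_maps(global_map, local_maps):
    """Fuse the global heat map with the local heat maps by element-wise
    averaging (one plausible fusion; the abstract leaves the operator open)."""
    stacked = np.stack([global_map] + list(local_maps))
    return stacked.mean(axis=0)

def focus_area(global_map, local_map, threshold=0.25):
    """Focus area for one attribute type: pixels that are salient in both
    the global heat map and that attribute's local heat map."""
    return (global_map * local_map) > threshold
```

In the described method, the focus areas then feed back into model correction together with the true and predicted attribute values; that training step is omitted here.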
SYSTEM AND METHOD FOR DATA PROCESSING AND COMPUTATION
A data processing device and a computer-implemented method are configured to execute in parallel a data hub process (6) and a plurality of processes in the form of computation modules (7). The data hub process (6) comprises at least a segmentation sub-process (61), which segments input data into data segments, and at least one keying sub-process (62), which provides keys to the data segments to create keyed data segments; the data hub process (6) stores the keyed data segments in a shared memory device (4) as shared keyed data segments. Each computation module (7) is configured to access the at least one shared memory device (4) to look for module-specific data segments, which are shared keyed data segments keyed with at least one key that is specific to at least one of the computation modules (7), to execute a machine learning method on the module-specific data segments, said machine learning method comprising data interpretation and classification methods using at least one pre-trained neural network (71), and to output the result of the executed machine learning method to the shared memory device (4) or another computation module.
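The hub-and-modules architecture can be sketched in a few lines. A plain dict stands in for the shared memory device (4), and an arbitrary callable stands in for the pre-trained neural network (71); in the actual system these would be shared memory accessed by parallel processes:

```python
shared_memory = {}  # stand-in for the shared memory device (4)

def segmentation_sub_process(data, size):
    """Sub-process (61): segment the input data into fixed-size segments."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def keying_sub_process(segments, key_fn):
    """Sub-process (62): attach a key to each data segment."""
    return [(key_fn(seg), seg) for seg in segments]

def data_hub_process(data, size, key_fn):
    """Data hub process (6): store keyed segments as shared keyed segments."""
    keyed = keying_sub_process(segmentation_sub_process(data, size), key_fn)
    for i, item in enumerate(keyed):
        shared_memory[i] = item

class ComputationModule:
    """Computation module (7): picks up module-specific segments (those
    whose key matches its own) and runs its 'machine learning method' on
    them -- here just a stand-in callable in place of the network (71)."""
    def __init__(self, key, model):
        self.key = key
        self.model = model
    def run(self):
        return {i: self.model(seg)
                for i, (key, seg) in shared_memory.items()
                if key == self.key}
```

The keying step is what decouples the hub from the modules: each module polls the shared store for its own key rather than being handed work directly.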
3D MULTI-OBJECT SIMULATION
An occlusion metric is computed for a target object in a 3D multi-object simulation. The target object is represented in 3D space by a collision surface and a 3D bounding box. In a reference surface defined in 3D space, a bounding box projection is determined for the target object with respect to an ego location. The bounding box projection is used to determine a set of reference points in 3D space. For each reference point of the set of reference points, a corresponding ray is cast based on the ego location, and it is determined whether the ray is an object ray that intersects the collision surface of the target object. For each such object ray, it is determined whether the object ray is occluded. The occlusion metric conveys an extent to which the object rays are occluded.
SYSTEMS AND METHODS FOR IMAGE SEGMENTATION
Systems and methods for image segmentation are provided. The systems may obtain a target image and a template image relating to the target image. The template image may correspond to an initial mask reflecting initial segmentations of the template image. The systems may determine a first transformation and an intermediate template image by preliminarily registering the template image to the target image and generate an intermediate mask based on the initial mask and the first transformation. The systems may determine, based on the intermediate mask, one or more first regions from the target image and one or more second regions from the intermediate template image. The systems may determine a second transformation by registering each of the one or more second regions to a corresponding first region. The systems may determine a target mask according to which the target image can be segmented based on one or more second transformations.
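The two-stage registration of this abstract (coarse first transformation, then per-region second transformations) can be sketched with integer-translation registration as a minimal stand-in for the real deformable or rigid registration; all names are illustrative:

```python
import numpy as np

def register_shift(moving, fixed, max_shift=3):
    """Minimal stand-in for registration: brute-force search for the
    integer translation minimising the squared difference."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            err = ((shifted - fixed) ** 2).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def segment_with_template(target, template, init_mask):
    """Segment `target` using `template` and its initial mask."""
    # First transformation: preliminary registration of template to target.
    t1 = register_shift(template, target)
    intermediate_template = np.roll(template, t1, axis=(0, 1))
    intermediate_mask = np.roll(init_mask, t1, axis=(0, 1))
    # Second transformation(s): register each masked second region of the
    # intermediate template to its corresponding first region of the target.
    target_mask = np.zeros_like(init_mask)
    for label in np.unique(intermediate_mask):
        if label == 0:
            continue
        region = intermediate_mask == label
        t2 = register_shift(intermediate_template * region, target * region)
        target_mask[np.roll(region, t2, axis=(0, 1))] = label
    return target_mask
```

The point of the second stage is that each labeled region gets its own local transformation, so the final mask can correct residual misalignment the global registration leaves behind.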
Occlusion Detection
An occlusion detection model training method is provided. The training method includes the following steps: constructing a plurality of pieces of training sample data, where the training sample data includes a first face image added with an occlusion object, coordinate values of a first key point in the first face image, and occlusion information of the first key point; and using the first face image as input data, and using the coordinate values of the first key point and the occlusion information of the first key point as output data, to train an occlusion detection model, so that the occlusion detection model outputs, based on any input second face image, coordinate values of a second key point included in the second face image and an occlusion probability of the second key point.
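The sample-construction step of this training method can be sketched as follows: paste a synthetic occlusion object onto a face image and record, for each key point, its coordinates and whether the occluder covers it. The actual model training on these (input, output) pairs is omitted, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_occluder(face, key_points, size=4):
    """Build one training sample: paste a random black square (the
    occlusion object) onto the face image and flag which key points it
    covers (1 = occluded, 0 = visible)."""
    h, w = face.shape
    y = int(rng.integers(0, h - size))
    x = int(rng.integers(0, w - size))
    occluded_face = face.copy()
    occluded_face[y:y + size, x:x + size] = 0.0
    occlusion = np.array([
        1 if (y <= ky < y + size and x <= kx < x + size) else 0
        for ky, kx in key_points
    ])
    return occluded_face, key_points, occlusion

def build_training_set(faces, key_points, n_samples_per_face=2):
    """First face images with occlusion objects, key-point coordinates,
    and occlusion labels -- the model's input and output data."""
    samples = []
    for face in faces:
        for _ in range(n_samples_per_face):
            samples.append(add_occluder(face, key_points))
    return samples
```

Because the occluder is pasted synthetically, the occlusion labels come for free, which is what makes this a practical way to supervise the per-key-point occlusion probabilities the trained model outputs.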