Patent classifications
G06V10/809
Method and apparatus for detecting liveness based on phase difference
A method and apparatus for detecting liveness based on a phase difference are provided. The method includes generating a first phase image based on first visual information of a first phase, generating a second phase image based on second visual information of a second phase, generating a minimum map based on a disparity between the first phase image and the second phase image, and detecting liveness based on the minimum map.
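The disparity-based "minimum map" in the abstract can be sketched as a per-pixel minimum matching cost over candidate disparities. This is a hedged illustration, not the patent's implementation: the function names, the sum-of-absolute-differences cost, and the variance-based liveness rule are all assumptions.

```python
def minimum_map(left_phase, right_phase, max_disparity=3):
    """Per-pixel minimum matching cost over candidate disparities.
    Images are 2-D lists of brightness values; rows are matched 1-D."""
    height, width = len(left_phase), len(left_phase[0])
    result = []
    for y in range(height):
        row = []
        for x in range(width):
            # Cost of matching this pixel at each feasible disparity d.
            costs = [abs(left_phase[y][x] - right_phase[y][x - d])
                     for d in range(max_disparity + 1) if x - d >= 0]
            row.append(min(costs))
        result.append(row)
    return result

def is_live(min_map, threshold=5.0):
    """Illustrative decision rule: a flat spoof (printed photo, screen)
    tends to give near-uniform costs, while a real face produces depth
    variation across the map."""
    flat = [v for row in min_map for v in row]
    mean = sum(flat) / len(flat)
    variance = sum((v - mean) ** 2 for v in flat) / len(flat)
    return variance > threshold
```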
METHOD AND DEVICE FOR PREDICTING BEAUTY BASED ON MIGRATION AND WEAK SUPERVISION, AND STORAGE MEDIUM
Disclosed are a method and device for predicting face beauty based on migration and weak supervision, and a storage medium. The method includes: preprocessing an inputted face image; training a source domain network by using the preprocessed image, and migrating parameters of the source domain network to a target domain network; inputting a noise image marked with a noise label and a truth-value image marked with a truth-value label into the target domain network to obtain an image feature; and inputting the image feature into a classification network to obtain a final face beauty prediction result.
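The parameter-migration step can be sketched as copying the shared feature-extraction layers from the trained source-domain network into the target-domain network. Representing networks as layer-name-to-weights dicts and the layer names themselves are illustrative assumptions.

```python
def migrate_parameters(source_params, target_params, shared_layers):
    """Copy the shared (feature-extraction) layers of the source-domain
    network into the target-domain network; task-specific layers keep
    their own initialization and are trained on the target domain."""
    migrated = dict(target_params)
    for layer in shared_layers:
        migrated[layer] = list(source_params[layer])  # copy, don't alias
    return migrated
```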
HUMAN BODY ATTRIBUTE RECOGNITION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
The present disclosure describes human body attribute recognition methods and apparatus, electronic devices, and a storage medium. The method includes acquiring a sample image containing a plurality of to-be-detected areas that are labeled with true values of human body attributes; generating, through a recognition model, a heat map of the sample image and heat maps of the to-be-detected areas to obtain a global heat map and local heat maps; fusing the global and the local heat maps to obtain a fused image, and performing human body attribute recognition on the fused image to obtain predicted values; determining a focus area of each type of human body attribute according to the global and the local heat maps; correcting the recognition model by using the focus area, the true values, and the predicted values; and performing, based on the corrected recognition model, human body attribute recognition on a to-be-recognized image.
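The fusion of the global heat map with the local heat maps can be sketched as an element-wise combination. The abstract does not specify the fusion operator, so the weighted sum below, and the weight value, are assumptions.

```python
def fuse_heat_maps(global_map, local_maps, local_weight=0.5):
    """Element-wise weighted sum of a global heat map (2-D list) and any
    number of equally sized local heat maps."""
    fused = [row[:] for row in global_map]  # start from a copy of the global map
    for local in local_maps:
        for y, row in enumerate(local):
            for x, value in enumerate(row):
                fused[y][x] += local_weight * value
    return fused
```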
IMAGE RECONSTRUCTION METHOD AND DEVICE, APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
An image reconstruction method, device, apparatus, and non-transitory computer-readable storage medium are disclosed. The method may include: determining the norms of the convolution kernels of each convolutional layer of a deep neural network model; determining the convolution kernels with norms greater than or equal to a preset threshold in each convolutional layer to obtain a target convolution kernel set of each convolutional layer; processing an input image of each convolutional layer by using the convolution kernels in the target convolution kernel set of each convolutional layer respectively, to obtain a first image processing result; obtaining a second image processing result by performing interpolation on an initial image; and determining a fusion result according to the first image processing result and the second image processing result and reconstructing the initial image according to the fusion result.
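The kernel-selection step (keep only kernels whose norm meets the preset threshold) can be sketched directly. The L2 (Frobenius) norm is an assumption; the abstract only says "norms". Kernel values and the threshold below are illustrative.

```python
import math

def select_kernels(kernels, threshold):
    """kernels: list of 2-D lists of weights. Returns the 'target
    convolution kernel set': kernels whose L2 norm is >= threshold."""
    kept = []
    for kernel in kernels:
        norm = math.sqrt(sum(v * v for row in kernel for v in row))
        if norm >= threshold:
            kept.append(kernel)
    return kept
```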
Systems and methods for quantitative phenotyping of fibrosis
Systems and methods are provided for computer-aided phenotyping of fibrosis-related conditions. A digital image indicates the presence of collagens in a biological tissue sample. The image is processed to quantify a plurality of parameters, each parameter describing a feature of the collagens that is expected to differ between phenotypes of fibrosis. At least some of the features are tissue-level features that describe macroscopic characteristics of the collagens, morphometric-level features that describe morphometric characteristics of the collagens, and texture-level features that describe the organization of the collagens. At least some of the plurality of parameters are statistics associated with histograms corresponding to distributions of the associated parameters across at least some of the digital image. At least some of the plurality of parameters are combined to obtain one or more composite scores that quantify a phenotype of fibrosis for the biological tissue sample.
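The histogram-statistics and composite-score steps can be sketched as follows. Which statistics are taken and how parameters are weighted into a composite score are not specified in the abstract, so the choices below (mean, population standard deviation, median; weighted sum) and the parameter names are assumptions.

```python
import statistics

def histogram_stats(values):
    """Summary statistics of one parameter's distribution across the image."""
    return {
        "mean": statistics.mean(values),
        "stdev": statistics.pstdev(values),
        "median": statistics.median(values),
    }

def composite_score(params, weights):
    """Weighted combination of quantified parameters into a single
    phenotype score. params and weights are dicts keyed by parameter name."""
    return sum(weights[name] * value for name, value in params.items())
```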
OBJECT TRACKING IN LOCAL AND GLOBAL MAPS SYSTEMS AND METHODS
A detection device, such as an unmanned vehicle, is adapted to traverse a search area and generate sensor data associated with objects that may be present in the search area. The generated sensor data is used by a system including object detection inference models configured to receive the sensor data and output object data, a local object tracker configured to track detected objects in a local map, and a global object tracker configured to track detected objects on a global map. The local object tracker is configured to fuse object detections from the object detection inference models to identify locally tracked objects, and a Kalman filter processes frames of fused object data to resolve duplicate and/or invalid object detections. The global object tracker includes a pose manager configured to track global objects in the global map and update the pose based on a map optimization process. User-in-the-loop processing includes a user interface for displaying and manually editing detected object data.
DETECTING ROAD EDGES BY FUSING AERIAL IMAGE AND TELEMETRY EVIDENCES
A method to detect a roadway edge includes calculating a first likelihood of the roadway edge from an aerial image of the roadway by shifting a centerline of the roadway perpendicular to the centerline and overlapping it with image gradients. A second likelihood of the roadway edge is determined from vehicle telemetry by fitting a probability distribution to telemetry points along the roadway. The first likelihood of the roadway edge and the second likelihood of the roadway edge are fused to identify a final likelihood of the roadway edge.
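The fusion step can be sketched per candidate edge offset. The abstract says only that the two likelihoods are fused, so the independence assumption (product rule with normalization) and the argmax selection below are illustrative choices, not the patent's stated method.

```python
def fuse_likelihoods(image_likelihood, telemetry_likelihood):
    """Fuse two likelihood curves (one value per candidate offset) by the
    product rule, then normalize so the result sums to 1."""
    fused = [a * b for a, b in zip(image_likelihood, telemetry_likelihood)]
    total = sum(fused)
    return [f / total for f in fused] if total > 0 else fused

def best_edge_offset(offsets, image_likelihood, telemetry_likelihood):
    """Pick the candidate offset with the highest fused likelihood."""
    fused = fuse_likelihoods(image_likelihood, telemetry_likelihood)
    return offsets[max(range(len(fused)), key=fused.__getitem__)]
```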
VEHICLE CONTROL DEVICE, VEHICLE CONTROL METHOD, AND STORAGE MEDIUM
According to an embodiment, a vehicle control device includes a first recognizer configured to recognize a first road marking for partitioning a traveling lane of a vehicle on the basis of an output of a detection device that has detected a surrounding situation of the vehicle, a second recognizer configured to recognize a second road marking for partitioning the traveling lane by a means different from that of the first recognizer, a comparator configured to compare the first road marking with the second road marking, and a determiner configured to perform, when there is a difference between the first road marking and the second road marking, any one of a plurality of misrecognition determination processes including a process of determining that there is misrecognition in the first recognizer and a process of determining that there is misrecognition in one or both of the first recognizer and the second recognizer.
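The comparison and determination steps can be sketched by summarizing each recognized road marking as a lateral offset from the vehicle. The metre-valued offsets, the tolerance, and the plausibility rule that picks which determination process runs are all illustrative assumptions.

```python
def determine_misrecognition(first_offset, second_offset, tolerance=0.3):
    """Compare the two recognizers' road-marking lateral offsets (metres)
    and return which recognizer(s) are suspected of misrecognition."""
    difference = abs(first_offset - second_offset)
    if difference <= tolerance:
        return "consistent"
    # Example rule: a physically implausible first result implicates the
    # first recognizer alone; otherwise either recognizer may be wrong.
    if abs(first_offset) > 5.0:
        return "first_misrecognized"
    return "first_or_second_misrecognized"
```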
HYBRID DEEP LEARNING METHOD FOR RECOGNIZING FACIAL EXPRESSIONS
A computer-implemented method for recognizing facial expressions by applying feature learning and feature engineering to face images. The method includes conducting feature learning on a face image, comprising feeding the face image into a first convolutional neural network to obtain a first decision; conducting feature engineering on the face image, comprising the steps of automatically detecting facial landmarks in the face image, transforming the facial landmarks into a two-dimensional matrix, and feeding the two-dimensional matrix into a second convolutional neural network to obtain a second decision; computing a hybrid decision based on the first decision and the second decision; and recognizing a facial expression in the face image in accordance with the hybrid decision.
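The feature-engineering branch and the hybrid decision can be sketched as below. The landmark normalization scheme, the matrix layout, and the weighted-average combination rule are illustrative assumptions; the abstract does not specify how the two decisions are combined.

```python
def landmarks_to_matrix(landmarks):
    """Pack (x, y) facial landmarks into a 2-D matrix, normalized to [0, 1]
    by the landmark bounding box, ready to feed a second CNN."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    width = (max(xs) - min(xs)) or 1.0
    height = (max(ys) - min(ys)) or 1.0
    return [[(x - min(xs)) / width, (y - min(ys)) / height]
            for x, y in landmarks]

def hybrid_decision(first_scores, second_scores, alpha=0.5):
    """Weighted average of the two networks' per-class scores; the class
    with the highest combined score is the recognized expression."""
    combined = [alpha * a + (1 - alpha) * b
                for a, b in zip(first_scores, second_scores)]
    return combined.index(max(combined))
```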
Ophthalmologic apparatus, and method of controlling the same
An ophthalmologic apparatus of an embodiment example includes a front image acquiring device, a first search processor, and a second search processor. The front image acquiring device is configured to acquire a front image of a fundus of a subject's eye. The first search processor is configured to search for an interested region corresponding to an interested site of the fundus based on a brightness variation in the front image. The second search processor is configured to search for the interested region by template matching between the front image and a template image in the event that the interested region has not been detected by the first search processor.
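The two-stage search (brightness-variation search first, template matching only if it fails) can be sketched with a simplified 1-D brightness profile. The contrast threshold, the darkest-point heuristic, and the sum-of-absolute-differences matching score are simplifying assumptions standing in for the apparatus's actual processors.

```python
def search_by_brightness(profile, min_contrast=50):
    """First search processor: return the index of the darkest point if
    the brightness variation is large enough, else None (not detected)."""
    if max(profile) - min(profile) < min_contrast:
        return None
    return profile.index(min(profile))

def search_by_template(profile, template):
    """Second search processor: position of the best template match,
    scored by sum of absolute differences (lower is better)."""
    best_index, best_cost = 0, float("inf")
    for i in range(len(profile) - len(template) + 1):
        cost = sum(abs(profile[i + j] - template[j])
                   for j in range(len(template)))
        if cost < best_cost:
            best_index, best_cost = i, cost
    return best_index

def find_interested_region(profile, template):
    """Run the brightness search; fall back to template matching only
    in the event that the first search finds nothing."""
    result = search_by_brightness(profile)
    return result if result is not None else search_by_template(profile, template)
```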