Patent classifications
G06V10/48
Perceptual importance maps for image processing
The present disclosure is directed to techniques for determining a perceptual importance map. The perceptual importance map indicates the relative importance to the human visual system of different portions of an image. The techniques include obtaining cost values for the blocks of an image, where the cost values are the values used in determining motion vectors. For each block, a confidence value is derived from the cost values; the confidence value indicates the confidence with which the block's motion vector is believed to be correct. A perceptual importance value is then determined from the confidence value via one or more modifications that make it better reflect importance to the human visual system. The resulting perceptual importance values can be used for various purposes, such as allocating bits for encoding, identifying regions of interest, or selectively rendering portions of an image with greater or lesser detail based on relative perceptual importance.
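The abstract above leaves the exact formulas open. As an illustrative sketch only (the margin-based confidence heuristic and the gamma modification below are assumptions, not the patented method), per-block confidence and importance might be computed like this:

```python
import numpy as np

def block_confidence(candidate_costs):
    """Confidence that the best-cost motion vector for a block is correct.

    Heuristic: a large margin between the best and second-best candidate
    cost suggests an unambiguous match. This exact formula is an
    illustrative assumption, not taken from the disclosure.
    """
    c = np.sort(np.asarray(candidate_costs, dtype=float))
    best, second = c[0], c[1]
    if second == 0.0:
        return 0.0
    return (second - best) / second  # value in [0, 1]

def perceptual_importance(conf_map, gamma=0.5):
    """Map per-block confidence values to perceptual importance values.

    Low-confidence blocks (ambiguous motion, often detailed or occluded
    areas) are treated as more important; the gamma curve is one example
    of a "modification" applied to better reflect visual importance.
    """
    imp = 1.0 - np.asarray(conf_map, dtype=float)
    return imp ** gamma
```

A downstream encoder could then allocate more bits to blocks with higher importance values.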
Face feature point detection method and device, equipment and storage medium
Provided is a face feature point detection method applied to an image processing device, where the image processing device stores a feature area detection model and a feature point detection model. The method includes: preprocessing a face image to be detected to obtain a preprocessed target face image; performing feature point extraction on the target face image according to the feature area detection model and the feature point detection model to obtain target feature point coordinates located within a face feature area of the target face image; and performing coordinate transformation on the target feature point coordinates to obtain face feature point coordinates corresponding to the face image to be detected. Further provided are a face feature point detection device, equipment, and a storage medium.
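The final coordinate-transformation step maps points predicted in the model's input space back to the original face image. A minimal sketch of that mapping, assuming the feature area was cropped at a known top-left corner and resized to the model's input size (all names and the affine form are illustrative assumptions):

```python
import numpy as np

def to_original_coords(points, crop_xy, crop_size, model_input_size):
    """Map feature points from model-input coordinates back to the
    original face image (the 'coordinate transformation' step).

    points: (N, 2) array of (x, y) in model-input pixel coordinates.
    crop_xy: top-left (x, y) of the detected feature area in the original image.
    crop_size: (w, h) of that area; model_input_size: (w, h) fed to the model.
    """
    pts = np.asarray(points, dtype=float)
    scale = np.asarray(crop_size, dtype=float) / np.asarray(model_input_size, dtype=float)
    # Undo the resize, then undo the crop offset.
    return pts * scale + np.asarray(crop_xy, dtype=float)
```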
Method of recognizing median strip and predicting risk of collision through analysis of image
A method of recognizing a median strip and predicting the risk of a collision through analysis of an image includes: acquiring an image of the road ahead, including a median strip and a road bottom surface, through a camera of a moving vehicle (S110); generating a Hough space by detecting edges in the image (S120); recognizing the upper straight line of the median strip from the Hough space (S130); generating a region of interest (ROI) of the median strip using information on the upper straight line of the median strip and a lane (S140); detecting objects inside the ROI of the median strip through a labeling scheme (S150); and determining a tracking-point set of the objects that satisfy a specific condition (S160).
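Steps S120–S130 amount to a standard Hough transform: edge pixels vote in a (rho, theta) accumulator, and the strongest bin gives the line. A minimal pure-numpy sketch (a real detector would also restrict theta to the near-horizontal range typical of a median strip's upper edge; that restriction is omitted here):

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180):
    """Accumulate a (rho, theta) Hough space from a binary edge mask
    (step S120) and return the strongest line (step S130).

    Lines use the normal form x*cos(theta) + y*sin(theta) = rho.
    """
    ys, xs = np.nonzero(edge_mask)
    h, w = edge_mask.shape
    diag = int(np.ceil(np.hypot(h, w)))       # max possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in zip(xs, ys):
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1    # one vote per theta per edge pixel
    rho_idx, theta_idx = np.unravel_index(np.argmax(acc), acc.shape)
    return rho_idx - diag, thetas[theta_idx]
```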
Data extraction from form images
An image processing system accesses an image of a completed form document. The image of the form document includes one or more features, such as form text, at particular locations within the image. The image processing system accesses a template of the form document and computes a rotation and zoom of the image of the form document relative to the template, based on the locations of the features within the image relative to the locations of the corresponding features within the template. The image processing system performs a rotation operation and a zoom operation on the image of the form document, and extracts data entered into the fields of the modified image. The extracted data can then be accessed or stored for subsequent use.
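The rotation and zoom can be estimated from matched feature locations with a least-squares similarity fit. One way to do this (an illustrative estimator, not necessarily the one claimed) treats centered 2-D points as complex numbers, so a single complex ratio encodes both angle and scale:

```python
import numpy as np

def rotation_and_zoom(img_pts, tmpl_pts):
    """Estimate the rotation (radians) and zoom that map feature points
    in the scanned form image onto the corresponding template points.

    Solves q ~= s * e^{i*angle} * p in the least-squares sense over
    centered coordinates, so translation drops out.
    """
    p = np.asarray(img_pts, dtype=float) - np.mean(img_pts, axis=0)
    q = np.asarray(tmpl_pts, dtype=float) - np.mean(tmpl_pts, axis=0)
    pc = p[:, 0] + 1j * p[:, 1]
    qc = q[:, 0] + 1j * q[:, 1]
    z = np.sum(np.conj(pc) * qc) / np.sum(np.abs(pc) ** 2)
    return np.angle(z), np.abs(z)
```

Applying the returned rotation and zoom to the scanned image aligns it with the template so that field locations line up for extraction.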
Systems and methods for generating clinically relevant images that preserve physical attributes of humans while protecting personal identity
A computer implemented method of generating at least one anonymous image, comprises: extracting and preserving at least one real facial region from at least one real image of a real human face, and generating at least one anonymous image comprising a synthetic human face and the preserved at least one real facial region, wherein an identity of the real human face is non-determinable from the at least one anonymous image.
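At its core, generating the anonymous image composites the preserved real facial region onto a same-sized synthetic face. A hard-mask sketch of that compositing step (function names and the hard copy are illustrative; a production system would blend region edges and, crucially, verify non-determinability of identity):

```python
import numpy as np

def composite_region(synthetic_face, real_image, mask):
    """Paste the preserved real facial region (given by a boolean mask)
    onto a same-sized synthetic face, yielding the anonymous image.

    The synthetic face is left unmodified; a new array is returned.
    """
    out = synthetic_face.copy()
    out[mask] = real_image[mask]
    return out
```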
Systems and methods for generating clinically relevant images that preserve physical attributes of humans while protecting personal identity
A computer implemented method of generating at least one anonymous image, comprises: extracting and preserving at least one real facial region from at least one real image of a real human face, and generating at least one anonymous image comprising a synthetic human face and the preserved at least one real facial region, wherein an identity of the real human face is non-determinable from the at least one anonymous image.
Target object identification
A target object identification system includes a first camera, a second camera, and a processor. The first camera acquires an image of a first target region. The second camera synchronously acquires an image of a second target region. The second target region includes part or all of the first target region. The resolution of the first camera is higher than that of the second camera, and the field of view of the second camera is greater than that of the first camera. The processor identifies first target objects according to the image of the first target region, identifies second target objects according to the image of the second target region, and determines association relationships between the first target objects in the image of the first target region and the second target objects in the synchronously acquired image of the second target region.
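One way to determine the association relationships is to project detections from the narrow high-resolution camera into the wide camera's image via a calibrated mapping and match nearest centers. The greedy nearest-neighbour matching and the `to_second` mapping below are illustrative assumptions:

```python
import numpy as np

def associate(first_objs, second_objs, to_second, max_dist=10.0):
    """Associate first-camera detections with second-camera detections.

    first_objs / second_objs: object center coordinates in each image.
    to_second: calibrated mapping from first-camera to second-camera
    coordinates. Returns (i, j) index pairs; each second-camera object
    is used at most once.
    """
    pairs = []
    used = set()
    for i, c in enumerate(first_objs):
        proj = to_second(np.asarray(c, dtype=float))
        dists = [np.linalg.norm(proj - np.asarray(s, dtype=float)) for s in second_objs]
        for j in np.argsort(dists):           # nearest candidates first
            if j not in used and dists[j] <= max_dist:
                pairs.append((i, int(j)))
                used.add(int(j))
                break
    return pairs
```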
Edge detection method and device, electronic apparatus and storage medium
An edge detection method, an edge detection device, an electronic apparatus, and a storage medium are provided. The method includes: processing an input image to obtain a line drawing of grayscale contours, where the input image includes an object with edges and the line drawing includes lines; merging the lines to obtain reference boundary lines; processing the input image to obtain boundary regions corresponding to the object; for each of the reference boundary lines, comparing the reference boundary line with the boundary regions and counting the pixels on the reference boundary line that belong to the boundary regions, the count serving as the score of that reference boundary line, so that a plurality of scores corresponding one-to-one to the reference boundary lines is determined; determining target boundary lines according to the reference boundary lines, the scores, and the boundary regions; and determining edges of the object according to the target boundary lines.
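The scoring step is simple to sketch: rasterize each reference boundary line and count how many of its pixels fall inside the boundary-region mask. The dense line sampling and the `[y, x]` mask layout below are illustrative choices:

```python
import numpy as np

def line_pixels(p0, p1):
    """Integer pixel coordinates along the segment p0 -> p1
    (simple dense sampling; Bresenham would also work)."""
    n = int(max(abs(p1[0] - p0[0]), abs(p1[1] - p0[1]))) + 1
    xs = np.round(np.linspace(p0[0], p1[0], n)).astype(int)
    ys = np.round(np.linspace(p0[1], p1[1], n)).astype(int)
    return np.unique(np.stack([xs, ys], axis=1), axis=0)

def score_line(p0, p1, boundary_mask):
    """Score of a reference boundary line: the number of its pixels that
    belong to the boundary regions (boolean mask indexed [y, x])."""
    pts = line_pixels(p0, p1)
    h, w = boundary_mask.shape
    ok = (pts[:, 0] >= 0) & (pts[:, 0] < w) & (pts[:, 1] >= 0) & (pts[:, 1] < h)
    pts = pts[ok]
    return int(boundary_mask[pts[:, 1], pts[:, 0]].sum())
```

Lines that track the detected boundary regions closely score high and are kept as target boundary lines; stray lines score low and are discarded.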