Patent classifications
G06V10/806
METHOD FOR INFRARED SMALL TARGET DETECTION BASED ON DEPTH MAP IN COMPLEX SCENE
The present invention discloses a method for infrared small target detection based on a depth map in a complex scene, and belongs to the field of target detection. An infrared image is collected; the image is binarized using prior knowledge of the target to be detected and a pixel-value thresholding method; the binary image is further constrained based on depth prior knowledge; static and dynamic scoring strategies are then formulated to score candidate connected components in the morphologically processed image; and the infrared small target in the complex scene is finally detected. The method can screen out targets within a specific range, is highly reliable and robust, is simple to program and easy to implement, can be used at sea, on land, and in the air, and offers a significant advantage against a complex jungle background.
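A minimal sketch of the pipeline this abstract describes, in Python with NumPy. The pixel threshold, the size range standing in for the "prior knowledge" constraint, the 4-connectivity, and the area-based static score are all illustrative assumptions, not values from the patent:

```python
import numpy as np

def detect_small_targets(img, pix_thresh=200, min_area=1, max_area=25):
    """Binarize by pixel value, label connected components with a simple
    flood fill, and keep candidates whose area fits the expected target
    size (a stand-in for the static scoring strategy)."""
    binary = img >= pix_thresh
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(binary)):
        if labels[start]:
            continue
        current += 1
        stack = [start]
        while stack:  # iterative flood fill, 4-connected
            r, c = stack.pop()
            if (0 <= r < img.shape[0] and 0 <= c < img.shape[1]
                    and binary[r, c] and not labels[r, c]):
                labels[r, c] = current
                stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    targets = []
    for lab in range(1, current + 1):
        area = int((labels == lab).sum())
        if min_area <= area <= max_area:  # reject clutter outside size prior
            rs, cs = np.nonzero(labels == lab)
            targets.append((int(rs.mean()), int(cs.mean()), area))
    return targets
```

In a fuller implementation the dynamic score would compare candidate positions across frames; here only the static size check is sketched.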
VIDEO CONTENT RECOGNITION METHOD AND APPARATUS, STORAGE MEDIUM, AND COMPUTER DEVICE
A video content recognition method is performed by a computer device, the method including: obtaining an image feature corresponding to a video frame set extracted from a target video; dividing the image feature into a plurality of image sub-features according to a preset sequence, each image sub-feature having a corresponding channel; choosing, from the image sub-features based on the preset sequence, a current image sub-feature; fusing the current image sub-feature and a convolution processing result of a previous image sub-feature into a fused image sub-feature, and performing convolution processing on the fused image sub-feature to obtain a convolved image sub-feature corresponding to the current image sub-feature; splicing the plurality of convolved image sub-features corresponding to the plurality of channels to obtain a spliced image feature; and determining video content corresponding to the target video based on the spliced image feature.
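The split–fuse–convolve–splice loop in this abstract can be sketched as follows. The "convolution processing" is stood in for by a shared 1×1 channel-mixing matrix with fixed random weights, and the fusion is element-wise addition; both are assumptions for illustration:

```python
import numpy as np

def progressive_channel_fusion(feature, num_splits=4, rng=None):
    """Split a (C, H, W) feature along channels, fuse each split with the
    convolution result of the previous split, convolve, then splice
    (concatenate) all convolved splits back together."""
    rng = np.random.default_rng(0) if rng is None else rng
    subs = np.split(feature, num_splits, axis=0)
    c = subs[0].shape[0]
    w = rng.standard_normal((c, c)) / np.sqrt(c)   # stand-in 1x1 conv weights
    conv = lambda x: np.einsum('oc,chw->ohw', w, x)
    outs, prev = [], None
    for sub in subs:
        fused = sub if prev is None else sub + prev  # fuse with previous result
        prev = conv(fused)
        outs.append(prev)
    return np.concatenate(outs, axis=0)  # spliced image feature
```

Because each split receives the convolved output of the previous one, later channels aggregate an increasingly wide temporal/receptive context, which is the point of the progressive design.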
IMAGE PROCESSING METHOD, APPARATUS, AND DEVICE, AND STORAGE MEDIUM
An image processing method is provided. The image processing method includes: acquiring a first input image and a second input image; extracting a content feature of the first input image; extracting an attribute feature of the second input image; performing feature fusion and mapping processing on the content feature of the first input image and the attribute feature of the second input image by using a feature transformation network to obtain a target image feature, the target image feature having the content feature of the first input image and the attribute feature of the second input image; and generating an output image based on the target image feature.
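A toy version of the "feature transformation network" step: the two features are concatenated and mapped through a small two-layer network into a target feature. The layer sizes and random weights are placeholders, not the patent's architecture:

```python
import numpy as np

def transform_features(content, attribute, rng=None):
    """Fuse a content feature and an attribute feature into a target image
    feature via a stand-in two-layer mapping network."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.concatenate([content, attribute])       # feature fusion
    w1 = rng.standard_normal((16, x.size)) / np.sqrt(x.size)
    w2 = rng.standard_normal((content.size, 16)) / 4.0
    hidden = np.maximum(w1 @ x, 0.0)               # ReLU mapping layer
    return w2 @ hidden                             # target image feature
```

In the full method a decoder would then generate the output image from this target feature; that stage is omitted here.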
IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND NON-TRANSITORY COMPUTER READABLE MEDIUM
An object is to provide an image processing apparatus capable of appropriately detecting changes of a target object. An image processing apparatus may include: object-driven feature extractor means to extract relevant features of target object from input images; a feature merger means to merge the features extracted from the input images into a merged feature; a change classifier means to predict a probability of each change class based on the merged feature; an object classifier means to predict a probability of each object class based on the extracted features of each image; a multi-loss calculator means to calculate a combined loss from a change classification loss and an object classification loss; and a parameter updater means to update the parameters of the object-driven feature extractor means.
DETECTING OBJECTS NON-VISIBLE IN COLOR IMAGES
A computer-implemented method of detecting one or more objects in a driving environment located externally to a vehicle, and a vehicle imaging system configured to detect one or more objects. The computer-implemented method includes training a first neural network to detect objects in a color video stream, the first neural network having a plurality of mid-level color features at a plurality of scales, and training a second neural network, operatively coupled to the first neural network and an infrared video stream, to match, at the plurality of scales, mid-level infrared features of the second neural network to mid-level color features of the first neural network. A pixel-level invisibility map is then generated from the color video stream and the infrared video stream by determining differences, at each of the plurality of scales, between mid-level color features of the first neural network and mid-level infrared features of the second neural network, and coupling the result to a fusing function.
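The invisibility-map construction can be sketched as a per-scale dissimilarity between matched color and infrared features, upsampled and fused. Cosine dissimilarity, nearest-neighbour upsampling, and mean fusion are illustrative choices; the patent leaves the fusing function abstract:

```python
import numpy as np

def invisibility_map(color_feats, ir_feats, out_shape):
    """Per-scale cosine dissimilarity between matched mid-level color and
    infrared feature maps (each of shape (C, h, w)), upsampled to the
    output resolution and averaged into a pixel-level map."""
    maps = []
    for cf, irf in zip(color_feats, ir_feats):
        num = (cf * irf).sum(axis=0)
        den = np.linalg.norm(cf, axis=0) * np.linalg.norm(irf, axis=0) + 1e-8
        diff = 1.0 - num / den                      # 0 where features agree
        ry = out_shape[0] // diff.shape[0]
        rx = out_shape[1] // diff.shape[1]
        maps.append(np.kron(diff, np.ones((ry, rx))))  # nearest upsample
    return np.mean(maps, axis=0)                    # stand-in fusing function
```

Pixels where the infrared features cannot be matched to the color features receive high values, flagging objects that are effectively invisible in the color stream.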
SIMULATED DEEP LEARNING METHOD BASED ON SDL MODEL
A method for simulating a deep learning model of function mapping uses algorithms that can be calculated numerically. In the functional mapping model of simulated deep learning (SDL), the SDL model enables fusion with a Gaussian distribution model. By combining the Gaussian distribution model with function mapping, the features of both can be exhibited, and a powerful artificial intelligence model can be constructed. The SDL clustering algorithm is the fusion of the function mapping model and the Gaussian distribution model: optimal clustering of feature vectors is achieved through probability-scale self-organization and probability-space distances. The simulation method does not need a combination method, as conventional deep learning does, to obtain the training data to be identified. Thus, large-scale hardware such as the GPUs used in deep learning is not needed, black-box problems do not occur, and enormous data-annotation work is unnecessary. A small amount of training data can achieve the results of large-dataset training at lower cost.
Lateral and longitudinal feature based image object recognition method, computer device, and non-transitory computer readable storage medium
An image object recognition method, apparatus, and computer device are provided. The image object recognition method includes: performing feature extraction in the direction of the horizontal angle of view and in the direction of the vertical angle of view of an image, respectively, to extract a lateral feature sequence and a longitudinal feature sequence of the image; fusing the lateral feature sequence and the longitudinal feature sequence to obtain a fused feature; activating the fused feature by using a preset activation function to obtain an image feature; and recognizing an object in the image by decoding the image feature. This solution can improve the efficiency of object recognition.
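The extract–fuse–activate steps of this abstract can be sketched as below. Mean pooling for the two directional sequences, outer-sum fusion, and a sigmoid as the "preset activation function" are all assumptions chosen for brevity:

```python
import numpy as np

def lateral_longitudinal_features(image):
    """Pool along the vertical axis for a lateral (per-column) sequence and
    along the horizontal axis for a longitudinal (per-row) sequence, fuse
    them by outer-sum, and apply a sigmoid activation."""
    lateral = image.mean(axis=0)          # one value per column
    longitudinal = image.mean(axis=1)     # one value per row
    fused = longitudinal[:, None] + lateral[None, :]   # fused feature
    return 1.0 / (1.0 + np.exp(-fused))   # image feature after activation
```

A decoder (not shown) would then turn this image feature into the recognized object; only the feature-construction stages are sketched here.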
Advanced Gaming and Virtual Reality Control Using Radar
Techniques are described herein that enable advanced gaming and virtual reality control using radar. These techniques enable small motions and displacements to be tracked, even in the millimeter or submillimeter scale, for user control actions even when those actions are optically occluded or obscured.
FEATURE POINT RECOGNITION SYSTEM AND RECOGNITION METHOD
According to this feature point recognition system (1), data of a feature point of a first group, obtained by a first algorithm calculation unit (12) without mask processing, is compared with data of a feature point of a second group, detected by a third algorithm calculation unit (16) after mask processing performed by a second algorithm calculation unit (14), and whether the data is abnormal is determined; thereby, feature points of a subject P can be recognized more accurately and stably than in the related art.
FACE SEARCH METHOD AND APPARATUS
A face search method and apparatus are provided. The method includes obtaining a to-be-searched face image, and inputting the face image into a first feature extraction model to obtain a first face feature. The method further includes inputting the face image and the first face feature into a first feature mapping model for feature mapping, to output a standard feature corresponding to the first face feature, and performing face search for the face image based on the standard feature. Features extracted by using a plurality of feature extraction models are concatenated, and a concatenated feature is used as a basis for constructing a standard feature.