Patent classifications
G06V20/52
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
An information processing apparatus according to the present invention includes: a display control unit that displays, on a screen, a map of a search area, a camera icon indicating a location of a surveillance camera in the map, and a person image of a search target person; an operation receiving unit that receives an operation of superimposing, on the screen, one of the person image or the camera icon on the other; and a processing request unit that requests a matching process between the person image and a surveillance video captured by the surveillance camera corresponding to the camera icon based on the operation.
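The superimposing operation this abstract describes can be sketched as a simple drop hit-test that maps a drop location on the map to the camera whose video should be matched. This is a minimal illustrative sketch; all class and function names below are assumptions, not the patent's implementation.

```python
# Hypothetical sketch of the drag-and-drop flow: dropping a person image onto
# a camera icon triggers a matching request against that camera's video.
from dataclasses import dataclass

@dataclass
class CameraIcon:
    camera_id: str
    x: float          # icon position on the map, in screen coordinates
    y: float
    radius: float = 16.0

    def contains(self, px: float, py: float) -> bool:
        # Hit-test: is the drop point inside the icon's circular area?
        return (px - self.x) ** 2 + (py - self.y) ** 2 <= self.radius ** 2

def on_drop(person_image_id: str, drop_x: float, drop_y: float,
            icons: list[CameraIcon]) -> list[str]:
    """Return the camera IDs whose surveillance video should be matched
    against the person image, based on where the image was dropped."""
    return [icon.camera_id for icon in icons if icon.contains(drop_x, drop_y)]

icons = [CameraIcon("cam-01", 100, 100), CameraIcon("cam-02", 300, 250)]
print(on_drop("suspect-1", 104, 97, icons))  # ['cam-01']
```

The same hit-test works in the reverse direction (dragging the camera icon onto the person image), which is why the abstract phrases the operation as superimposing "one of the person image or the camera icon on the other".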
QUEUE ANALYSIS APPARATUS, CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
A queue analysis apparatus (2000) estimates the position and orientation of each object (20) included in a target image (10), the target image (10) being generated by a camera (50) that captures the objects (20). Based on the position and orientation estimated for each object (20) in a queue region (30), i.e., the region of the target image (10) that represents a queue, the queue analysis apparatus (2000) generates a queue line (40) that expresses that queue as a linear shape.
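Turning per-person (position, orientation) estimates into a queue line amounts to ordering the people into a polyline. The chaining heuristic below (find the tail, then repeatedly step to the nearest person inside the current person's forward cone) is an illustrative assumption, not the patent's actual algorithm.

```python
# Toy sketch: build a queue line (polyline) from (x, y, heading) estimates.
import math

def in_front(p, q, fov=math.pi / 2):
    """Is q inside p's forward field-of-view cone?"""
    x, y, heading = p
    ang = math.atan2(q[1] - y, q[0] - x)
    diff = (ang - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= fov / 2

def queue_line(people):
    """people: list of (x, y, heading_rad). Returns an ordered polyline."""
    # Tail of the queue: a person standing in no one else's forward cone.
    tail = next(p for p in people
                if not any(in_front(q, p) for q in people if q is not p))
    line, remaining, cur = [tail], [p for p in people if p is not tail], tail
    while remaining:
        ahead = [p for p in remaining if in_front(cur, p)]
        if not ahead:
            break  # queue ends, or bends more sharply than the cone allows
        # Step to the nearest person ahead of the current one.
        cur = min(ahead, key=lambda p: (p[0] - line[-1][0]) ** 2
                                     + (p[1] - line[-1][1]) ** 2)
        line.append(cur)
        remaining.remove(cur)
    return [(x, y) for x, y, _ in line]

# Three people in a straight queue, each facing +x (heading 0).
people = [(0, 0, 0.0), (2, 0, 0.0), (4, 0, 0.0)]
print(queue_line(people))  # [(0, 0), (2, 0), (4, 0)]
```

Using the orientation, not just positions, is what lets this handle curved queues: the next person is searched for in the direction the current person is facing.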
METHOD AND APPARATUS FOR DETECTION AND TRACKING, AND STORAGE MEDIUM
In the field of video processing, a detection and tracking method, an apparatus, and a storage medium are provided. The method includes: performing feature point analysis on a video frame sequence to obtain feature points on each of its video frames; performing target detection on an extracted frame through a first thread based on the feature points, to obtain a target box in the extracted frame; performing target box tracking in a current frame through a second thread based on the feature points and the target box in the extracted frame, to obtain a result target box in the current frame; and outputting the result target box. Because target detection and target tracking are divided between two threads, the tracking frame rate is unaffected by the detection algorithm, and the target box of each video frame can be output in real time, improving real-time performance and stability.
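The two-thread split described above can be sketched schematically: a slow "detection" thread refreshes the target box on extracted (key) frames, while a fast "tracking" thread updates the box on every frame from feature-point motion. This is an illustrative sketch under assumptions of my own (boxes as `(x, y, w, h)` tuples, precomputed per-frame shifts standing in for feature-point displacements), not the patent's implementation.

```python
# Schematic two-thread detection/tracking split: the tracker never waits for
# the detector beyond the first detection, so its frame rate is decoupled
# from detection latency.
import threading

class SharedBox:
    """Target box shared between the detection and tracking threads."""
    def __init__(self):
        self._box = None
        self._lock = threading.Lock()
        self.ready = threading.Event()

    def set(self, box):
        # Detection thread: replace the box with a fresh detection.
        with self._lock:
            self._box = box
        self.ready.set()

    def update(self, dx, dy):
        # Tracking thread: propagate the box by the frame-to-frame motion.
        with self._lock:
            x, y, w, h = self._box
            self._box = (x + dx, y + dy, w, h)
            return self._box

shared = SharedBox()
results = []

def detector():
    # Slow path: full detection runs only on the extracted (key) frame.
    shared.set((10, 10, 5, 5))        # pretend the detector found this box

def tracker(shifts):
    # Fast path: one cheap update per video frame, driven by feature points
    # (here replaced by precomputed (dx, dy) shifts for determinism).
    shared.ready.wait()               # need at least one detection to track
    for dx, dy in shifts:
        results.append(shared.update(dx, dy))

t1 = threading.Thread(target=detector)
t2 = threading.Thread(target=tracker, args=([(1, 0), (1, 0), (0, 2)],))
t1.start(); t2.start(); t1.join(); t2.join()
print(results[-1])  # (12, 12, 5, 5): box after three tracked frames
```

The lock matters because both threads touch the same box: the detector overwrites it whenever a keyframe detection completes, and the tracker keeps propagating whichever box is current.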
HUMAN-OBJECT INTERACTION DETECTION
A human-object interaction detection method, a neural network, and a training method therefor are provided. The human-object interaction detection method includes: performing first target feature extraction on an image feature of an image; performing first interaction feature extraction on the image feature; processing a plurality of first target features to obtain target information of a plurality of detected targets; processing one or more first interaction features to obtain motion information of a motion, human information of a human target corresponding to each motion, and object information of an object target corresponding to each motion; matching the plurality of detected targets with one or more motions; and updating the human information of a corresponding human target based on target information of a detected target matching that human target, and updating the object information of a corresponding object target based on target information of a detected target matching that object target.
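The matching-and-update step at the end of this abstract can be sketched with plain data: each detected motion carries coarse human and object boxes, which are refined by snapping to the closest detected target of the right class. The matching criterion (center distance) and the data layout are illustrative assumptions, not the patent's formulation.

```python
# Toy sketch of matching motions to detected targets and updating the
# motion's human/object information from the matched detections.

def center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def closest_target(box, targets, cls):
    """Detected target of class `cls` whose box center is nearest to `box`."""
    cx, cy = center(box)
    candidates = [t for t in targets if t["class"] == cls]
    return min(candidates,
               key=lambda t: (center(t["box"])[0] - cx) ** 2 +
                             (center(t["box"])[1] - cy) ** 2)

def refine(motions, targets):
    for m in motions:
        # Replace the motion's coarse boxes with the matched targets' boxes.
        m["human_box"] = closest_target(m["human_box"], targets, "person")["box"]
        m["object_box"] = closest_target(m["object_box"], targets,
                                         m["object_class"])["box"]
    return motions

targets = [
    {"class": "person", "box": (0, 0, 10, 20)},
    {"class": "cup",    "box": (12, 5, 16, 9)},
]
motions = [{"action": "hold", "object_class": "cup",
            "human_box": (1, 1, 9, 19), "object_box": (11, 4, 17, 10)}]
print(refine(motions, targets)[0]["human_box"])  # (0, 0, 10, 20)
```

The point of the update is that the dedicated detection branch localizes targets more precisely than the interaction branch, so the interaction branch's coarse boxes are replaced by the matched detections.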
HUMAN-OBJECT INTERACTION DETECTION
A human-object interaction detection method, a neural network, and a training method therefor are provided. The human-object interaction detection method includes: performing first target feature extraction on image features of an image to obtain first target features; performing first interaction feature extraction on the image features to obtain first interaction features and their scores; selecting, from the first interaction features, at least some first interaction features based on the score of each first interaction feature; determining first motion features based on the selected first interaction features and the image features; processing the first target features to obtain target information of targets in the image; processing the first motion features to obtain motion information of one or more motions in the image; and matching the targets with the motions to obtain a human-object interaction detection result.
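The score-based selection step here is essentially a top-k filter: only the highest-scoring first interaction features are kept before forming motion features. In the sketch below, plain Python lists stand in for tensors, and the value of k is an illustrative assumption.

```python
# Minimal sketch of score-based selection of interaction features.

def select_top_k(features, scores, k):
    """Return the k features with the highest scores, best first."""
    order = sorted(range(len(features)), key=lambda i: scores[i], reverse=True)
    return [features[i] for i in order[:k]]

features = ["f_a", "f_b", "f_c", "f_d"]
scores   = [0.20,  0.90,  0.55,  0.10]
print(select_top_k(features, scores, 2))  # ['f_b', 'f_c']
```

Pruning low-scoring interaction features before the motion-feature stage keeps the later, more expensive processing focused on the image regions most likely to contain an actual human-object interaction.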
METHOD AND SYSTEM FOR IN-PROCESS MONITORING OF A COMPACTION ROLLER OF A COMPOSITE LAYUP MACHINE
There is provided a method that includes directing one or more infrared cameras at a compaction roller of a composite laying head of a composite layup machine. The one or more infrared cameras are mounted aft of the compaction roller. The method includes applying heat to a substrate by a heater mounted forward of the compaction roller. The method further includes using the one or more infrared cameras to obtain one or more infrared images of the compaction roller during laying down of one or more composite tows of a composite layup onto the substrate by the compaction roller. The method further includes identifying, based on the one or more infrared images, one or more temperature profiles of the compaction roller, and analyzing the identified temperature profiles to determine one or more of a layup quality of the composite layup and a heat history of the composite layup.
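The analysis step reduces each infrared image of the roller to a temperature profile and checks it against a process window. The sketch below is a toy illustration: the profile definition (per-column mean over the roller region) and the window limits are assumptions of my own, not values from the patent.

```python
# Toy sketch: extract a temperature profile across the compaction roller from
# an infrared image and flag positions outside an acceptable process window.

def temperature_profile(ir_image):
    """ir_image: rows x cols grid of temperatures (deg C) over the roller.
    Returns the per-column mean, i.e. the profile across the roller width."""
    rows, cols = len(ir_image), len(ir_image[0])
    return [sum(ir_image[r][c] for r in range(rows)) / rows
            for c in range(cols)]

def check_profile(profile, low=180.0, high=220.0):
    """Indices where the profile leaves the process window (hypothetical
    limits): candidates for a layup-quality defect at that tow position."""
    return [i for i, t in enumerate(profile) if not (low <= t <= high)]

ir_image = [
    [200.0, 205.0, 240.0, 198.0],
    [202.0, 207.0, 236.0, 200.0],
]
profile = temperature_profile(ir_image)
print(check_profile(profile))  # [2]: column 2 runs hot
```

Accumulating these profiles frame by frame during the layup is one way to build the "heat history" the abstract mentions: each tow position ends up with a time series of roller temperatures.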