Patent classifications
G06V2201/07
COMPUTER-IMPLEMENTED METHOD FOR PROVIDING AN OUTLINE OF A LESION IN DIGITAL BREAST TOMOSYNTHESIS
One or more example embodiments of the present invention relate to a computer-implemented method for providing an outline of a lesion in digital breast tomosynthesis. The method includes receiving input data, wherein the input data comprises a reconstructed tomosynthesis volume dataset based on projection recordings and a virtual target marker within a lesion in the tomosynthesis volume dataset; applying a trained function to at least a part of the tomosynthesis volume dataset to establish an outline enclosing the lesion, the part of the tomosynthesis volume dataset corresponding to a region surrounding the virtual target marker; and providing output data, wherein the output data is an outline of a two-dimensional area or a three-dimensional volume surrounding the target marker.
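The steps of the abstract above can be sketched as follows. The region size, the marker format, and the thresholding stand-in for the trained function are illustrative assumptions; the patent does not specify the network.

```python
import numpy as np

def outline_lesion(volume, marker, roi_half=2, trained_fn=None):
    """Crop a region around the virtual target marker and apply a
    trained function to it, returning the cropped region and an
    outline mask enclosing the lesion."""
    z, y, x = marker
    roi = volume[
        max(z - roi_half, 0): z + roi_half + 1,
        max(y - roi_half, 0): y + roi_half + 1,
        max(x - roi_half, 0): x + roi_half + 1,
    ]
    if trained_fn is None:
        # Stand-in for the trained segmentation function: a simple
        # mean threshold marks voxels belonging to the lesion.
        trained_fn = lambda r: (r > r.mean()).astype(np.uint8)
    mask = trained_fn(roi)
    return roi, mask

# Synthetic reconstructed tomosynthesis volume with a bright "lesion".
volume = np.zeros((10, 10, 10))
volume[4:7, 4:7, 4:7] = 1.0
roi, mask = outline_lesion(volume, marker=(5, 5, 5))
```

In practice `trained_fn` would be the trained segmentation model and the returned mask would be rendered as a 2-D or 3-D outline around the marker.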
HUMAN-OBJECT INTERACTION DETECTION
A human-object interaction detection method, a neural network, and a training method therefor are provided. The human-object interaction detection method includes: extracting a plurality of first target features and one or more first motion features from an image feature of an image to be detected; fusing each first target feature and some of the first motion features to obtain enhanced first target features; fusing each first motion feature and some of the first target features to obtain enhanced first motion features; processing the enhanced first target features to obtain target information of a plurality of targets including human targets and object targets; processing the enhanced first motion features to obtain motion information of one or more motions, where each motion is associated with one human target and one object target; and matching the plurality of targets with the one or more motions to obtain a human-object interaction detection result.
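The mutual fusion of target and motion features can be sketched as a generic cross-attention step. The attention operator and feature sizes are assumptions; the patent leaves the fusion operator open.

```python
import numpy as np

def fuse(queries, keys):
    """Enhance each query feature with an attention-weighted sum of the
    key features (a cross-attention stand-in for the fusion step),
    added back residually."""
    scores = queries @ keys.T                                  # similarity
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return queries + weights @ keys                            # residual fusion

rng = np.random.default_rng(0)
target_feats = rng.normal(size=(4, 8))   # first target features
motion_feats = rng.normal(size=(2, 8))   # first motion features

# Each set is enhanced with information from the other, symmetrically.
enhanced_targets = fuse(target_feats, motion_feats)
enhanced_motions = fuse(motion_feats, target_feats)
```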
INFORMATION PROCESSING APPARATUS, SENSING APPARATUS, MOBILE OBJECT, METHOD FOR PROCESSING INFORMATION, AND INFORMATION PROCESSING SYSTEM
An information processing apparatus includes an input interface, a processor, and an output interface. The input interface obtains observation data obtained from an observation space. The processor detects a subject image of a detection target from the observation data, calculates a plurality of individual indices indicating degrees of reliability, each of which relates to at least one of identification information or measurement information regarding the detection target, and also calculates an integrated index, which is obtained by integrating the plurality of calculated individual indices. The output interface outputs the integrated index.
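The integration step can be sketched as follows. A weighted average is an assumed integration rule; the abstract does not fix the operator.

```python
def integrated_index(individual, weights=None):
    """Combine per-aspect reliability indices (e.g. one for the
    identification information, one for the measurement information)
    into a single integrated index via a weighted average."""
    if weights is None:
        weights = [1.0] * len(individual)
    total = sum(w * v for w, v in zip(weights, individual))
    return total / sum(weights)

# identification reliability 0.9, measurement reliability 0.7
idx = integrated_index([0.9, 0.7])
# emphasizing identification reliability over measurement reliability
idx_weighted = integrated_index([1.0, 0.0], weights=[3.0, 1.0])
```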
SYSTEMS AND METHODS FOR CONTEXTUAL IMAGE ANALYSIS
In one implementation, a computer-implemented system is provided for real-time video processing. The system includes at least one memory configured to store instructions and at least one processor configured to execute the instructions to perform operations. The at least one processor is configured to receive real-time video generated by a medical image system, the real-time video including a plurality of image frames, and obtain context information indicating an interaction of a user with the medical image system. The at least one processor is also configured to perform an object detection to detect at least one object in the plurality of image frames and perform a classification to generate classification information for at least one object in the plurality of image frames. Further, the at least one processor is configured to perform a video manipulation to modify the received real-time video based on at least one of the object detection and the classification. Moreover, the processor is configured to invoke at least one of the object detection, the classification, and the video manipulation based on the context information.
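The context-driven invocation can be sketched as a dispatcher. The context names and the toy detector, classifier, and overlay are illustrative assumptions, not the system's actual operations.

```python
def detect(frame):
    # Stand-in object detection: indices of bright pixels become "boxes".
    return [(i, v) for i, v in enumerate(frame) if v > 0.5]

def classify(dets):
    # Stand-in classification of each detection.
    return ["polyp" if v > 0.8 else "artifact" for _, v in dets]

def overlay(frame, dets):
    # Stand-in video manipulation: highlight detected positions.
    out = list(frame)
    for i, _ in dets:
        out[i] = 1.0
    return out

def process_frame(frame, context):
    """Invoke object detection, classification, and video manipulation
    depending on the user-interaction context."""
    result = {"frame": frame}
    if context in ("examining", "navigating"):
        result["detections"] = detect(frame)
    if context == "examining":
        result["classes"] = classify(result["detections"])
        result["frame"] = overlay(frame, result["detections"])
    return result

r = process_frame([0.2, 0.9, 0.6], context="examining")
```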
SURVEILLANCE SYSTEM, SURVEILLANCE APPARATUS, SURVEILLANCE METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
A surveillance apparatus (100) includes a feature value storage apparatus (200) that stores, in association with one another, feature values of persons belonging to the same group, a detection unit (102) that detects, by processing a captured image using the feature values, an approach of a person not belonging to the group to within a reference distance of a person belonging to the group, and an output unit (104) that performs a predetermined output by using a detection result of the detection unit (102).
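The detection condition can be sketched as follows. A Euclidean check on 2-D positions stands in for the image-based detection; the identifiers and position format are assumptions.

```python
def detect_approach(people, group_ids, reference_distance):
    """Flag every person outside the group who is within the reference
    distance of a group member. `people` maps a person id to a 2-D
    position recovered from the captured image."""
    alerts = []
    for pid, pos in people.items():
        if pid in group_ids:
            continue
        for gid in group_ids:
            gx, gy = people[gid]
            dist = ((pos[0] - gx) ** 2 + (pos[1] - gy) ** 2) ** 0.5
            if dist <= reference_distance:
                alerts.append((pid, gid))
    return alerts

people = {"A": (0, 0), "B": (1, 0), "stranger": (1.5, 0)}
alerts = detect_approach(people, group_ids={"A", "B"}, reference_distance=1.0)
```

Here only member "B" is within the reference distance of the stranger, so a single approach is detected and could be passed to the output unit.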
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
An information processing apparatus according to the present invention includes: a display control unit that displays, on a screen, a map of a search area, a camera icon indicating a location of a surveillance camera in the map, and a person image of a search target person; an operation receiving unit that receives an operation of superimposing, on the screen, one of the person image or the camera icon on the other; and a processing request unit that requests a matching process between the person image and a surveillance video captured by the surveillance camera corresponding to the camera icon based on the operation.
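The superimposing operation can be sketched as a drop handler. The event dictionaries, `kind` values, and callback are hypothetical names for illustration only.

```python
def on_drop(dragged, drop_target, request_matching):
    """When the person image is superimposed on a camera icon (or the
    camera icon on the person image), request a matching process
    between that person image and the corresponding camera's video."""
    kinds = {dragged["kind"], drop_target["kind"]}
    if kinds == {"person_image", "camera_icon"}:
        person = dragged if dragged["kind"] == "person_image" else drop_target
        camera = drop_target if drop_target["kind"] == "camera_icon" else dragged
        return request_matching(person["id"], camera["id"])
    return None  # any other drop combination is ignored

requests = []
on_drop({"kind": "person_image", "id": "p1"},
        {"kind": "camera_icon", "id": "cam7"},
        lambda p, c: requests.append((p, c)))
```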
LEARNING APPARATUS, LEARNING METHOD, AND RECORDING MEDIUM
In a learning apparatus, an acquisition unit acquires image data and label data corresponding to the image data. An object candidate extraction unit extracts each object candidate rectangle from the image data. A correct answer data generation unit generates a background object label corresponding to each background object included in each object candidate rectangle as correct answer data corresponding to the object candidate rectangle by using the label data. A prediction unit predicts a classification using each object candidate rectangle and outputs a prediction result. An optimization unit optimizes the object candidate extraction unit and the prediction unit using the prediction result and the correct answer data.
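The correct-answer-data generation can be sketched as follows. The box format and the simple overlap test are assumptions standing in for the actual label-matching rule.

```python
def background_labels(rect, objects):
    """For one object-candidate rectangle, collect the label of every
    background object that overlaps it, forming the correct-answer
    data for that rectangle. Boxes are (x1, y1, x2, y2)."""
    def overlaps(a, b):
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]
    return sorted(lbl for box, lbl in objects if overlaps(rect, box))

# Background objects extracted from the label data.
objects = [((0, 0, 4, 4), "tree"), ((10, 10, 12, 12), "car")]
# One candidate rectangle extracted from the image data.
labels = background_labels((2, 2, 6, 6), objects)
```

The prediction unit would then be optimized against these per-rectangle labels.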
IMAGE REGISTRATION METHOD AND ELECTRONIC DEVICE
An image registration method includes: acquiring a target image comprising a target object; inputting the target image to a preset network model, and outputting position information and rotation angle information of the target object; obtaining a reference image comprising the target object by querying a preset image database according to the position information and the rotation angle information; and performing image registration on the target image and the reference image to obtain a corresponding position of the target object of the target image in the reference image.
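The database query can be sketched as a nearest-match lookup over stored pose information. The entry fields, cost function, and tolerance are illustrative assumptions.

```python
def query_reference(db, position, angle, tol=5.0):
    """Return the reference image entry whose stored position and
    rotation angle are closest to those predicted for the target
    object, or None if nothing is close enough."""
    def cost(entry):
        (px, py), a = entry["position"], entry["angle"]
        return abs(px - position[0]) + abs(py - position[1]) + abs(a - angle)
    best = min(db, key=cost)
    return best if cost(best) <= tol else None

# Preset image database keyed by object position and rotation angle.
db = [
    {"id": "ref-01", "position": (10, 20), "angle": 0.0},
    {"id": "ref-02", "position": (40, 5), "angle": 90.0},
]
# Pose predicted by the network model for the target image.
match = query_reference(db, position=(11, 19), angle=2.0)
```

The matched reference image would then be registered against the target image to locate the target object.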
METHOD AND APPARATUS FOR DETECTION AND TRACKING, AND STORAGE MEDIUM
In the field of video processing, a detection and tracking method and apparatus, and a storage medium, are provided. The method includes: performing feature point analysis on a video frame sequence, to obtain feature points on each video frame thereof; performing target detection on an extracted frame through a first thread based on the feature points, to obtain a target box in the extracted frame; performing target box tracking in a current frame through a second thread based on the feature points and the target box in the extracted frame, to obtain a result target box in the current frame; and outputting the result target box. As the target detection and the target tracking are divided into two threads, a tracking frame rate is unaffected by a detection algorithm, and the target box of the video frame can be outputted in real time, improving real-time performance and stability.
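The two-thread split can be sketched with a detection thread fed through queues, so the tracker never waits on the detector. The toy detector and tracker are stand-ins for the actual algorithms.

```python
import queue
import threading

def detector(frames_in, boxes_out):
    # First thread: slow target detection on extracted frames.
    for frame in iter(frames_in.get, None):
        boxes_out.put(("box-for", frame))

def tracker(latest_box, frame):
    # Second thread's work: propagate the most recently detected
    # target box into the current frame to obtain the result box.
    return (latest_box, frame)

frames_in, boxes_out = queue.Queue(), queue.Queue()
t = threading.Thread(target=detector, args=(frames_in, boxes_out))
t.start()

frames_in.put("frame-0")   # extracted frame sent for detection
frames_in.put(None)        # shut down the detection thread
t.join()

latest = boxes_out.get()                 # target box from the extracted frame
result = tracker(latest, "frame-1")      # result target box in the current frame
```

Because tracking only reads the latest available box, its frame rate is decoupled from the detector's latency, as the abstract claims.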
HUMAN-OBJECT INTERACTION DETECTION
A human-object interaction detection method, a neural network, and a training method therefor are provided. The human-object interaction detection method includes: performing first target feature extraction on an image feature of an image; performing first interaction feature extraction on the image feature; processing a plurality of first target features to obtain target information of a plurality of detected targets; processing one or more first interaction features to obtain motion information of a motion, human information of a human target corresponding to each motion, and object information of an object target corresponding to each motion; matching the plurality of detected targets with one or more motions; and updating human information of a corresponding human target based on target information of a detected target matching the corresponding human target, and updating object information of a corresponding object target based on target information of a detected target matching the corresponding object target.
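The matching-and-updating step can be sketched as follows. Greedy nearest matching on a 1-D position stands in for the learned matcher; the field names are assumptions.

```python
def score(target, query):
    # Stand-in similarity between a detected target and a motion's
    # human or object query: closer positions score higher.
    return -abs(target["pos"] - query["pos"])

def match_and_update(targets, motions):
    """Match each motion's human and object queries to the best-scoring
    detected targets, then overwrite the motion's human and object
    information with the matched targets' information."""
    for motion in motions:
        human = max(targets, key=lambda t: score(t, motion["human"]))
        obj = max(targets, key=lambda t: score(t, motion["object"]))
        motion["human"], motion["object"] = human["info"], obj["info"]
    return motions

targets = [
    {"pos": 1.0, "info": "person@1"},
    {"pos": 5.0, "info": "cup@5"},
]
motions = [{"action": "hold", "human": {"pos": 1.2}, "object": {"pos": 4.8}}]
out = match_and_update(targets, motions)
```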