G06V20/52

HEALTH TESTING AND DIAGNOSTICS PLATFORM

Systems and methods for providing a universal platform for at-home health testing and diagnostics are provided herein. In particular, a health testing and diagnostic platform is provided to connect medical providers with patients and to generate a unique, private testing environment. In some embodiments, the testing environment may facilitate administration of a medical test to a patient with the guidance of a proctor. In some embodiments, the patient may be provided with step-by-step instructions for test administration by the proctor within a testing environment. The platform may display unique, dynamic testing interfaces to the patient and proctor to ensure proper testing protocols and accurate test result verification.

HUMAN-OBJECT INTERACTION DETECTION

A human-object interaction detection method, a neural network and a training method therefor are provided. The human-object interaction detection method includes: extracting a plurality of first target features and one or more first motion features from an image feature of an image to be detected; fusing each first target feature and some of the first motion features to obtain enhanced first target features; fusing each first motion feature and some of the first target features to obtain enhanced first motion features; processing the enhanced first target features to obtain target information of a plurality of targets including human targets and object targets; processing the enhanced first motion features to obtain motion information of one or more motions, where each motion is associated with one human target and one object target; and matching the plurality of targets with the one or more motions to obtain a human-object interaction detection result.
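The fuse-then-match flow in this abstract can be sketched as follows. All names, the hand-rolled similarity scores, and the top-k fusion rule are illustrative assumptions; the patented method uses a trained neural network for these steps.

```python
# Illustrative sketch of the two-stream fusion and the final
# target-to-motion matching described in the abstract.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def fuse(base_feats, other_feats, k=2):
    """Enhance each base feature by averaging in its k most similar
    features from the other stream (targets <- motions or vice versa)."""
    enhanced = []
    for f in base_feats:
        top = sorted(other_feats, key=lambda o: dot(f, o), reverse=True)[:k]
        mixed = [(x + sum(t[i] for t in top)) / (1 + len(top))
                 for i, x in enumerate(f)]
        enhanced.append(mixed)
    return enhanced

def match(targets, motions):
    """Pair every motion with its most similar human and object target."""
    result = []
    for m in motions:
        humans = [t for t in targets if t["kind"] == "human"]
        objects = [t for t in targets if t["kind"] == "object"]
        h = max(humans, key=lambda t: dot(m["feat"], t["feat"]))
        o = max(objects, key=lambda t: dot(m["feat"], t["feat"]))
        result.append((h["id"], m["label"], o["id"]))
    return result

targets = [
    {"id": "person-1", "kind": "human",  "feat": [1.0, 0.1]},
    {"id": "cup-1",    "kind": "object", "feat": [0.1, 1.0]},
]
motions = [{"label": "hold", "feat": [0.8, 0.9]}]

# Enhance target features with the motion stream before matching.
for t, f in zip(targets, fuse([t["feat"] for t in targets],
                              [m["feat"] for m in motions], k=1)):
    t["feat"] = f

print(match(targets, motions))  # [('person-1', 'hold', 'cup-1')]
```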

EXTRACTING INFORMATION ABOUT PEOPLE FROM SENSOR SIGNALS

There is provided a computer implemented method of extracting information about a person. Incoming sensor signals for monitoring people within a field of view of a sensor system are received and processed. In response to detecting a person located within a notification region, an output device outputs a notification to the detected person. Processing of the incoming sensor signals continues in order to monitor behaviour patterns of the person and determine from his behaviour patterns whether he is currently in a consenting or non-consenting state. An extraction function attempts to extract information about the person irrespective of his determined state. A sharing function determines whether or not to share an extracted piece of information about the person with a receiving entity in accordance with his determined state, the information not being shared unless and until it is subsequently determined that the person is in the consenting state.
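The extract-always, share-only-on-consent gating described above can be sketched as a small pipeline. The class name, the boolean consent state, and the buffering scheme are assumptions for illustration, not the patented implementation.

```python
# Sketch: extraction runs irrespective of the person's state, but an
# extracted piece of information is held back unless and until the
# person is determined to be in the consenting state.

class ConsentGatedPipeline:
    def __init__(self):
        self.pending = []    # extracted but not yet shared
        self.shared = []     # released to the receiving entity
        self.consenting = False

    def update_state(self, consenting):
        """Called by the behaviour-pattern monitor."""
        self.consenting = consenting
        if consenting:
            self._flush()

    def extract(self, info):
        """The extraction function runs regardless of consent state."""
        self.pending.append(info)
        if self.consenting:
            self._flush()

    def _flush(self):
        self.shared.extend(self.pending)
        self.pending.clear()

pipe = ConsentGatedPipeline()
pipe.extract("gait-signature")        # held back: state not yet consenting
pipe.update_state(consenting=False)
pipe.extract("face-embedding")        # still held back
pipe.update_state(consenting=True)    # both pieces are now released
print(pipe.shared)                    # ['gait-signature', 'face-embedding']
```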

VIDEO PROCESSING METHOD, APPARATUS AND SYSTEM

The present disclosure provides video processing methods, apparatuses and systems. The method includes: obtaining a to-be-processed video, where the to-be-processed video is obtained by performing feature removal processing for one or more objects in an original video; obtaining a feature restoration processing request for one or more to-be-processed objects; according to the feature restoration processing request for the one or more to-be-processed objects, obtaining feature image information corresponding to the one or more to-be-processed objects, where the feature image information for one of the one or more to-be-processed objects includes pixel position information of all or part of features for the one of the one or more to-be-processed objects in the original video; according to the feature image information for the one or more to-be-processed objects, performing feature restoration processing for the one or more to-be-processed objects in the to-be-processed video.
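The removal-then-restoration cycle keyed by pixel positions can be sketched as below. The dictionary frame representation and function names are assumptions for illustration; the disclosure operates on real video frames.

```python
# Sketch: feature removal records the pixel-position information of the
# removed features, and restoration re-applies it on request.

def remove_features(frame, positions, fill=0):
    """Redact an object's feature pixels, returning the redacted frame
    plus the feature image information needed to restore them later."""
    feature_info = {pos: frame[pos] for pos in positions}
    redacted = dict(frame)
    for pos in positions:
        redacted[pos] = fill
    return redacted, feature_info

def restore_features(frame, feature_info):
    """Perform feature restoration using the stored pixel-position info."""
    restored = dict(frame)
    restored.update(feature_info)
    return restored

# A toy 2x2 "frame" as {(row, col): pixel_value}.
original = {(0, 0): 7, (0, 1): 9, (1, 0): 3, (1, 1): 5}
redacted, info = remove_features(original, positions=[(0, 1), (1, 1)])
print(redacted[(0, 1)])                              # 0 (feature removed)
print(restore_features(redacted, info) == original)  # True
```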

SYSTEM AND METHOD FOR FAST CHECKOUT USING A DETACHABLE COMPUTERIZED DEVICE
20230048635 · 2023-02-16

The presently disclosed subject matter includes a system and method for fast checkout from a retail store. The system includes a portable computerized device that is configured to track items which are inserted or removed from a shopping container.
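The insert/remove tracking that enables fast checkout can be sketched as an event-driven running tally. The class, event names, and price lookup are assumptions for illustration only.

```python
# Toy sketch: the device tracks items inserted into or removed from the
# shopping container, so checkout is just reading the running tally.

from collections import Counter

class CartTracker:
    def __init__(self, price_lookup):
        self.prices = price_lookup
        self.items = Counter()

    def on_insert(self, sku):
        self.items[sku] += 1

    def on_remove(self, sku):
        if self.items[sku] > 0:
            self.items[sku] -= 1

    def checkout_total(self):
        """Fast checkout: the running tally already is the receipt."""
        return sum(self.prices[sku] * n for sku, n in self.items.items())

cart = CartTracker({"milk": 2.50, "bread": 3.00})
cart.on_insert("milk")
cart.on_insert("bread")
cart.on_insert("milk")
cart.on_remove("milk")
print(cart.checkout_total())  # 5.5
```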

SURVEILLANCE SYSTEM, SURVEILLANCE APPARATUS, SURVEILLANCE METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
20230050235 · 2023-02-16

A surveillance apparatus (100) includes a feature value storage apparatus (200) that stores, in association with one another, feature values of persons belonging to the same group; a detection unit (102) that, by processing a captured image using the stored feature values, detects that a person not belonging to the group has approached a person belonging to the group within a reference distance; and an output unit (104) that performs a predetermined output based on a detection result of the detection unit (102).
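The group-aware approach check in this abstract can be sketched as below: stored feature values define group membership, and a non-member detected closer than the reference distance triggers an output. The detection format and names are illustrative assumptions; the apparatus matches learned feature values, not string labels.

```python
# Sketch: flag any person whose feature value is not in the group and
# who comes within the reference distance of a group member.

import math

def detect_approach(detections, group_features, reference_distance):
    """Return (outsider, member) pairs closer than reference_distance.

    detections: list of {"feature": ..., "pos": (x, y)} obtained by
    processing a captured image; group_features: stored feature values
    of persons belonging to the same group.
    """
    members = [d for d in detections if d["feature"] in group_features]
    outsiders = [d for d in detections if d["feature"] not in group_features]
    alerts = []
    for o in outsiders:
        for m in members:
            if math.dist(o["pos"], m["pos"]) <= reference_distance:
                alerts.append((o["feature"], m["feature"]))
    return alerts

detections = [
    {"feature": "child-A",  "pos": (0.0, 0.0)},
    {"feature": "stranger", "pos": (1.0, 0.0)},
    {"feature": "far-away", "pos": (9.0, 9.0)},
]
alerts = detect_approach(detections, group_features={"child-A"},
                         reference_distance=2.0)
print(alerts)  # [('stranger', 'child-A')]
```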
