G06T7/251

Object tracking in real-time applications

An object tracking method, adapted in particular for real-time augmented reality applications, involves determining a location of an object (20) in a current frame (10) of a video stream (15), at a point in time following output of a preceding frame (11) of the video stream (15) but preceding output of the current frame (10), by starting from a location of the object (20) determined by an object-detection server (5) for a previous frame (12) of the video stream (15), recursively tracking the location of the object (20) in frames (11) of the video stream (15) following the previous frame (12) up to the current frame (10), and recursively updating a model of the object (20). Accurate object detection from an object-detection server (5) can thereby be used even if the object was detected in a past frame (12) of the video stream (15) that has already been visualized.
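
The abstract's core idea is to catch up from a stale server detection by re-tracking through the frames that have elapsed since it. A minimal sketch, with a placeholder single-frame tracker and illustrative per-frame motion vectors (none of these names come from the patent):

```python
# Hypothetical sketch: the object-detection server returns a location for a
# past frame; the client then re-tracks frame by frame up to the current one.

def track_step(location, frame):
    # Placeholder single-frame tracker: shift the location by the frame's
    # (assumed) motion vector. A real tracker would match appearance models.
    dx, dy = frame["motion"]
    return (location[0] + dx, location[1] + dy)

def track_to_current(server_location, frames):
    """Recursively track from the server-detected location through every
    frame that followed the detected past frame, up to the current frame."""
    location = server_location
    for frame in frames:
        location = track_step(location, frame)
    return location

# Example: detection at (100, 50) in a past frame, three frames elapsed since.
frames = [{"motion": (2, 1)}, {"motion": (3, 0)}, {"motion": (1, -1)}]
print(track_to_current((100, 50), frames))  # (106, 50)
```

The point of the recursion is that the expensive, accurate detection can lag several frames behind without the displayed location ever being stale.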

Moving object tracking using object and scene trackers

A method of using both object features and scene features to track an object in a scene is provided. In one embodiment, the scene motion is compared with the object motion; if the motions differ by more than a threshold, the pose from the object tracker is used, and otherwise the pose from the scene tracker is used. In another embodiment, the pose of an object is tracked by both the scene tracker and the object tracker, and the two poses are compared. If the comparison yields a difference greater than a threshold, the pose from the object tracker is used; otherwise, the pose from the scene tracker is used.
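
The selection rule in both embodiments reduces to a thresholded comparison between two pose estimates. A minimal sketch, assuming 2D translation-only poses and a Euclidean difference measure (the patent does not specify either):

```python
def select_pose(object_pose, scene_pose, threshold=0.5):
    """Choose between the two trackers' estimates: if they diverge by more
    than the threshold, the object is likely moving relative to the scene,
    so trust the object tracker; otherwise use the scene tracker."""
    diff = sum((a - b) ** 2 for a, b in zip(object_pose, scene_pose)) ** 0.5
    return object_pose if diff > threshold else scene_pose

# Trackers agree closely -> the (usually smoother) scene-tracker pose wins.
print(select_pose((1.0, 0.0), (1.1, 0.0)))  # (1.1, 0.0)
# Trackers disagree strongly -> the object-tracker pose wins.
print(select_pose((2.0, 0.0), (0.0, 0.0)))  # (2.0, 0.0)
```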

Operating light sources to project patterns for disorienting visual detection systems
11543502 · 2023-01-03

Methods and systems for operating one or more light sources to project adversarial patterns generated to disorient a machine-learning-based detection system, comprising generating one or more adversarial patterns configured to disorient the machine-learning-based detection system, and operating one or more light sources configured to project one or more of the adversarial pattern(s) in association with the targeted object in order to disorient the machine-learning-based detection system.

Camera calibration method using human joint points

A novel multiple-camera calibration algorithm uses human joint points as matched key points. A recent machine-learning-based human joint detector provides joint positions with labels (e.g. left wrist, right knee, and others). In a single-person situation, this directly provides matched key points between multiple cameras. The algorithm therefore does not suffer from the key-point matching problem, even in a very sparse camera configuration, which is challenging for traditional image-feature-based methods. This algorithm allows easy setup of a multiple-camera configuration for marker-less pose estimation.
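
The reason matching becomes trivial is that each detection already carries a joint label, so cross-camera correspondence is a label join rather than descriptor matching. A sketch of that step only, with made-up joint positions (the full calibration from these correspondences is not shown):

```python
# Per-camera joint detections: label -> 2D image position (illustrative data).
cam_a = {"left_wrist": (120, 340), "right_knee": (210, 500)}
cam_b = {"left_wrist": (640, 300), "right_knee": (580, 480)}

def match_by_label(det_a, det_b):
    """Build matched key-point pairs between two cameras by joining on the
    joint label; no appearance-based matching is needed."""
    common = sorted(det_a.keys() & det_b.keys())
    return [(det_a[k], det_b[k]) for k in common]

print(match_by_label(cam_a, cam_b))
# [((120, 340), (640, 300)), ((210, 500), (580, 480))]
```

The resulting pairs would then feed a standard multi-view calibration (e.g. fundamental-matrix estimation or bundle adjustment), which is outside this sketch.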

Method for measuring motion response of dummy in crash test, device and storage medium

A method for measuring the motion response of a dummy in a crash test comprises: acquiring images of a measurement mark by a camera during the crash test, wherein the measurement mark is fixed on a part to be measured of the dummy, and the dummy is set in association with a preset platform; determining first coordinate positions of the measurement mark in the images; determining corresponding second coordinate positions of the first coordinate positions in a static coordinate system according to a preset conversion relationship, wherein the X-axis of the static coordinate system is parallel to the motion direction of the preset platform and the Y-axis of the static coordinate system is perpendicular to the motion direction of the preset platform; and determining a motion response trajectory of the part to be measured according to an initial position of the part to be measured and the second coordinate positions.
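
The conversion step maps image coordinates of the mark into the sled-aligned static frame, and the trajectory is then taken relative to the initial position. A minimal sketch, assuming the "preset conversion relationship" is a similarity transform (scale, rotation, offset), which the abstract does not specify:

```python
import math

def to_static(pixel_xy, scale, theta, origin):
    """Map an image-plane mark position into the static coordinate system
    whose X-axis is parallel to the platform's motion direction.
    Illustrative similarity transform standing in for the preset
    conversion relationship."""
    x, y = pixel_xy
    c, s = math.cos(theta), math.sin(theta)
    return (scale * (c * x - s * y) + origin[0],
            scale * (s * x + c * y) + origin[1])

def trajectory(initial, static_positions):
    # Motion response of the measured part relative to its initial position.
    return [(px - initial[0], py - initial[1]) for px, py in static_positions]

# Example: no rotation, 2x scale, origin offset (5, 5).
print(to_static((10, 0), 2.0, 0.0, (5.0, 5.0)))  # (25.0, 5.0)
```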

METHOD AND SYSTEM FOR ESTIMATING GESTURE OF USER FROM TWO-DIMENSIONAL IMAGE, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM
20220415094 · 2022-12-29

There is provided a method of estimating a gesture of a user from a two-dimensional image. The method includes the steps of: acquiring a two-dimensional image of the user's body from a two-dimensional camera; specifying two-dimensional relative coordinate points corresponding to first and second body parts of the user in a relative coordinate system dynamically defined in the two-dimensional image, and comparing a first positional relationship between the two-dimensional relative coordinate points of the first and second body parts at a first time point with a second positional relationship between those points at a second time point; and estimating the gesture made by the user between the first and second time points based on the result of the comparison and on context information acquired from the two-dimensional image.
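
The key move is expressing both body parts in a coordinate system defined by the user in the image (not the image itself), so the comparison survives the user moving toward or away from the camera. A sketch under assumed details: the relative frame is the user's bounding box, the compared relationship is vertical offset, and the gesture labels are invented:

```python
def relative_point(point, box):
    """Express an image point in a coordinate system dynamically defined by
    the user's bounding box (x, y, width, height), making the comparison
    scale- and position-invariant."""
    (x, y), (bx, by, bw, bh) = point, box
    return ((x - bx) / bw, (y - by) / bh)

def estimate_gesture(p1_t1, p2_t1, p1_t2, p2_t2, box_t1, box_t2):
    # Compare the vertical relationship of two body parts (e.g. wrist vs.
    # shoulder) at two time points; image y grows downward.
    d1 = relative_point(p1_t1, box_t1)[1] - relative_point(p2_t1, box_t1)[1]
    d2 = relative_point(p1_t2, box_t2)[1] - relative_point(p2_t2, box_t2)[1]
    return "raise" if d2 < d1 else "lower"

# Wrist below the shoulder at t1, above it at t2 -> a "raise" gesture.
box = (0, 0, 100, 200)
print(estimate_gesture((50, 150), (50, 80), (50, 60), (50, 80), box, box))
```

The context-information step of the claim (e.g. what object the user is near) is omitted here.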

FALL PREVENTION SYSTEM
20220414898 · 2022-12-29

A fall prevention system includes an image capturing unit that captures an image of a space to be monitored; a skeleton model generating unit that generates a skeleton model representing a person in the image captured by the image capturing unit; a determination unit that determines a state of a person corresponding to the skeleton model generated by the skeleton model generating unit, by distinguishing between the person standing up and the person sitting down, based on the skeleton model; and an operation processing unit capable of executing a fall prevention process that is a process according to a determination result by the determination unit, and that prevents the person from falling over, based on the determination result.
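
The determination unit's stand/sit distinction can be made from joint geometry alone. A deliberately crude sketch: the joint names, the ratio measure, and the 0.6 threshold are all illustrative assumptions, not the patent's rule:

```python
def classify_posture(skeleton):
    """Distinguish standing from sitting on a skeleton model: with extended
    legs the hip sits roughly a third of the way down the shoulder-to-ankle
    span; a hip near ankle height suggests sitting. Image y grows downward."""
    shoulder_y = skeleton["shoulder"][1]
    hip_y = skeleton["hip"][1]
    ankle_y = skeleton["ankle"][1]
    # Fraction of the shoulder-to-ankle span occupied by the torso.
    torso_ratio = (hip_y - shoulder_y) / (ankle_y - shoulder_y)
    return "standing" if torso_ratio < 0.6 else "sitting"

print(classify_posture({"shoulder": (0, 100), "hip": (0, 200), "ankle": (0, 400)}))
# standing
```

In the claimed system this label would then gate the fall-prevention process (e.g. alerting when an at-risk person stands up).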

GUIDANCE SYSTEM FOR THE CREATION OF SPATIAL ANCHORS FOR ALL USERS, INCLUDING THOSE WHO ARE BLIND OR LOW VISION

A method comprising: receiving, from a user device at a first location, first image information of a surrounding environment of the user device; determining further image information of the surrounding environment of the user device at the first location, wherein the further image information is required for generating a spatial anchor point in the model, wherein the spatial anchor point links the first real-world location of the user device to a corresponding location in the model; providing guidance information to the user device for capturing the further image information; receiving the further image information from the user device; and generating a spatial anchor point in the model based on the first image information and the further image information; wherein, following generation of the spatial anchor point, the spatial anchor point is discoverable by one or more users of the model to provide information about the first location.

Behavior recognition method and information processing apparatus
11538174 · 2022-12-27

A behavior recognition method includes extracting, by a computer, skeleton information including a plurality of joint positions for each frame of an image, calculating a first set of motion feature amounts from the skeleton information, calculating a plot position by plotting the first set of motion feature amounts on a feature amount space defined by a second set of motion feature amounts, the plot position being a position where the first set of motion feature amounts is plotted on the feature amount space, the feature amount space having a plurality of mapping areas in which respective ranges corresponding to predetermined higher-level behaviors to be recognized are mapped, and expanding, when a degree of divergence from a minimum distance to other distances among distances between the plot position and each of the plurality of mapping areas satisfies a predetermined criterion, a mapping area at the minimum distance from the plot position.
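
The expansion rule hinges on the "degree of divergence" between the nearest mapping area and all the others: only when the plot position is unambiguously closest to one behavior's area does that area grow. A sketch under strong assumptions: mapping areas are circles (center plus radius), the divergence criterion is a simple ratio, and the growth step is fixed, none of which the abstract specifies:

```python
def area_distances(plot, areas):
    """Sorted distances from the plot position to each mapping area in the
    feature amount space; areas are name -> (cx, cy, radius) circles."""
    return sorted(
        (max(0.0, ((plot[0] - cx) ** 2 + (plot[1] - cy) ** 2) ** 0.5 - r), name)
        for name, (cx, cy, r) in areas.items()
    )

def maybe_expand(plot, areas, divergence=2.0, grow=0.1):
    """Expand the nearest mapping area only when the minimum distance
    diverges sufficiently from the next-nearest one, i.e. the plotted
    motion features clearly belong to a single higher-level behavior."""
    (d_min, name_min), (d_next, _) = area_distances(plot, areas)[:2]
    if d_next >= divergence * max(d_min, 1e-9):
        cx, cy, r = areas[name_min]
        areas[name_min] = (cx, cy, r + grow)
    return areas

# Clearly nearest to "walk" -> the walk area grows to cover the new sample.
areas = {"walk": (0.0, 0.0, 1.0), "run": (10.0, 0.0, 1.0)}
maybe_expand((1.5, 0.0), areas)
print(areas["walk"])
```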

Fall Risk Assessment System
20220406159 · 2022-12-22

The purpose of the present invention is to provide a fall risk evaluation system whereby risk of falling of an elderly person or other person to be managed can be easily evaluated on the basis of a captured image of daily life, instead of by a physical therapist, etc. To achieve this purpose, the present invention is a fall risk evaluation system comprising a stereo camera and a fall risk evaluation device, the fall risk evaluation device being provided with: a person authentication unit for authenticating a person to be managed who has been imaged by the stereo camera; a person tracking unit for tracking the person to be managed who is authenticated by the person authentication unit; an action extraction unit for extracting walking by the person to be managed; a feature value calculation unit for calculating a feature value of the walking extracted by the action extraction unit; an integration unit for generating integrated data obtained by integrating the outputs of the person authentication unit, the person tracking unit, the action extraction unit, and the feature value calculation unit; a fall index calculation unit for calculating a fall index value of the person to be managed, on the basis of a plurality of integrated data generated by the integration unit; and a fall risk evaluation unit for comparing the fall index value calculated by the fall index calculation unit and a threshold value to evaluate the risk of falling of the person to be managed.
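
The pipeline ends by collapsing per-walk feature values into a single fall index that is compared against a threshold. A toy sketch of that last stage only; the two features (speed, sway), the weights, and the threshold are invented for illustration, since the abstract names none of them:

```python
def fall_index(walk_features):
    """Aggregate feature values from extracted walking episodes (here:
    average speed and average body sway, both hypothetical) into a single
    fall index; slower, more unsteady gait raises the index."""
    n = len(walk_features)
    speed = sum(f["speed"] for f in walk_features) / n
    sway = sum(f["sway"] for f in walk_features) / n
    return 0.5 * (1.0 / max(speed, 0.1)) + 0.5 * sway

def evaluate_fall_risk(walk_features, threshold=1.0):
    # The evaluation unit's comparison of index value against a threshold.
    return "high" if fall_index(walk_features) > threshold else "low"

print(evaluate_fall_risk([{"speed": 1.0, "sway": 0.2}]))  # low
print(evaluate_fall_risk([{"speed": 0.4, "sway": 1.0}]))  # high
```

Accumulating many integrated data records per person, as the abstract describes, is what lets the index reflect day-to-day gait trends rather than a single observation.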