G06T7/292

Homography error correction

An object tracking system that includes a sensor that is configured to capture frames of at least a portion of a global plane for a space. The system is configured to receive a first frame from the sensor, to identify a pixel location within the first frame, and to determine an estimated sensor location for the sensor by applying a homography to the pixel location. The homography includes coefficients that translate between pixel locations in a frame from the sensor and (x,y) coordinates in the global plane. The system is further configured to determine an actual sensor location for the sensor and to determine a location difference between the estimated sensor location and the actual sensor location. The system is further configured to compare the location difference to a difference threshold level and to recompute the homography in response to determining that the location difference exceeds the difference threshold level.
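The homography check described in this abstract can be sketched as follows. This is an illustrative reading only: the 3×3 homography `H`, the pixel coordinates, and the 0.5-unit threshold are all hypothetical values, not taken from the patent.

```python
import numpy as np

# Hypothetical 3x3 homography mapping pixel coords -> global-plane (x, y).
H = np.array([[0.01, 0.0, -3.2],
              [0.0, 0.01, -2.4],
              [0.0,  0.0,  1.0]])

def apply_homography(H, pixel):
    """Project a (u, v) pixel location onto the global plane."""
    u, v = pixel
    x, y, w = H @ np.array([u, v, 1.0])
    return np.array([x / w, y / w])

def needs_recompute(H, pixel, actual_xy, threshold=0.5):
    """Compare the estimated sensor location with the actual one and
    flag the homography for recomputation if the drift is too large."""
    estimated = apply_homography(H, pixel)
    return np.linalg.norm(estimated - np.asarray(actual_xy)) > threshold
```

Dividing by the homogeneous coordinate `w` keeps the sketch valid for general projective homographies, not just affine ones.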

Motion based pre-processing of two-dimensional image data prior to three-dimensional object tracking with virtual time synchronization
11557044 · 2023-01-17

Methods, systems, and apparatus, including medium-encoded computer program products, for pre-processing image data before 3D object tracking includes, in at least one aspect, a method including: receiving, at a first computer, image frames from a camera; identifying, by the first computer, locations of interest in the image frames; finding sequences of the locations, wherein each of the sequences satisfies a motion criterion for locations identified in at least three image frames from the camera; and sending output data for the sequences of the locations to a second computer for processing the sequences in the output data by interpolating between specified 2D positions in specific image frames for the sequences, using timestamps of the specific image frames, to produce a virtual 2D position at a predetermined point in time, which is usable for constructing a 3D track of a ball in motion.
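One simple reading of the virtual time synchronization step above is linear interpolation between two timestamped 2D detections from the same camera. The function below is a hypothetical sketch under that assumption, not the patented method:

```python
def virtual_position(t_target, frame_a, frame_b):
    """Linearly interpolate a virtual 2D position at t_target between two
    timestamped detections, each given as (timestamp, (x, y))."""
    t_a, (xa, ya) = frame_a
    t_b, (xb, yb) = frame_b
    alpha = (t_target - t_a) / (t_b - t_a)  # fractional position in time
    return (xa + alpha * (xb - xa), ya + alpha * (yb - ya))
```

Producing per-camera positions at a common predetermined point in time is what makes later triangulation into a 3D track possible without hardware-synchronized shutters.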

REAL-TIME SYSTEM FOR GENERATING 4D SPATIO-TEMPORAL MODEL OF A REAL WORLD ENVIRONMENT
20230008567 · 2023-01-12

The present invention relates to a method for deriving 3D data from image data, comprising: receiving, from at least one camera, image data representing an environment; detecting, from the image data, at least one object within the environment; and classifying the at least one detected object, wherein the method comprises, for each classified object: determining a 2D skeleton of the classified object by implementing a neural network to identify features of the classified object in the image data corresponding to the classified object; and constructing a 3D skeleton for the classified object, comprising mapping the determined 2D skeleton to 3D.
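The abstract does not specify how the 2D-to-3D mapping works; one common building block is back-projecting each 2D joint along its viewing ray given the camera intrinsics and a depth estimate. The intrinsics `K` and depths in the sketch below are hypothetical:

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def lift_keypoint(uv, depth, K):
    """Back-project one 2D skeleton joint to a 3D camera-frame point,
    given an assumed or estimated depth along the viewing ray."""
    u, v = uv
    x = (u - K[0, 2]) / K[0, 0] * depth
    y = (v - K[1, 2]) / K[1, 1] * depth
    return np.array([x, y, depth])

def lift_skeleton(joints_uv, depths, K):
    """Lift a whole 2D skeleton (list of (u, v) joints) to 3D."""
    return [lift_keypoint(uv, d, K) for uv, d in zip(joints_uv, depths)]
```

In practice depths might come from multi-view triangulation or a learned prior on human proportions; this sketch only shows the geometric lifting step.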

FOOD AND/OR BEVERAGE ITEM COUNTING DEVICE

The present invention provides a food and/or beverage item counting device to be provided in a food and/or beverage item provision system including a transport path that passes along a customer table to transport carriers each configured to allow a food and/or beverage item to be placed thereon, the food and/or beverage item counting device including: a first information acquiring unit disposed upstream of the table and configured to acquire information relating to each of the carriers on the transport path; a second information acquiring unit disposed downstream of the table and configured to acquire information relating to each of the carriers on the transport path; and a first calculating unit configured to calculate the number of the food and/or beverage items taken out from the transport path to the table, wherein each of the carriers is provided with identification information for identifying the carrier.
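The counting logic above can be illustrated as a comparison of the carrier information acquired upstream and downstream of the table. The dict-based representation (carrier ID mapped to whether an item is present) is an assumption for illustration, not the patent's data model:

```python
def count_items_taken(upstream, downstream):
    """upstream/downstream: dicts mapping carrier ID -> item present (bool),
    as observed by the first and second information acquiring units.
    An item counts as taken if it was present upstream and is absent
    (or its carrier is unseen) downstream."""
    return sum(1 for cid, present in upstream.items()
               if present and not downstream.get(cid, False))
```

For example, a carrier whose plate is seen by the upstream unit but not by the downstream unit contributes one to the count for that table.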

Monitoring device, and method for monitoring a man overboard situation
11594035 · 2023-02-28

The invention relates to a monitoring device 1 for monitoring a man-overboard situation in a ship section 5, wherein the ship section 5 is monitored by video using at least one camera 2, and the camera 2 is designed to provide surveillance in the form of video data. The monitoring device comprises an analysis device 9 having an interface 10 for transferring the video data, and the analysis device 9 is designed to detect a moving object in the ship section 5 on the basis of the video data and to determine a kinematic variable of the moving object. The analysis device 9 is also designed to determine a scale on the basis of the video data and the kinematic variable in order to determine the extent 8 of the moving object and to evaluate the moving object as a man-overboard event on the basis of that extent 8.
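One hedged interpretation of deriving a scale from a kinematic variable: an object falling overboard accelerates at roughly g, so comparing that known acceleration with the measured pixel-space acceleration yields a metres-per-pixel scale, from which the object's extent can be sized and compared against a plausible person size. All function names and thresholds below are illustrative, not from the patent:

```python
G = 9.81  # m/s^2, acceleration of a free-falling object

def metres_per_pixel(pixel_accel):
    """Infer an image scale from the measured pixel-space acceleration
    of a falling object (assumes fall roughly parallel to the sensor)."""
    return G / pixel_accel

def is_person_sized(extent_px, pixel_accel, min_m=0.5, max_m=2.5):
    """Convert the object's pixel extent to metres via the inferred scale
    and check it against a plausible human size range."""
    size_m = extent_px * metres_per_pixel(pixel_accel)
    return min_m <= size_m <= max_m
```

Under this reading, a bird or a thrown bottle falling past the camera would be rejected because its scaled extent falls outside the person-size range.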

INTELLIGENT PROCESSING METHOD AND SYSTEM FOR VIDEO DATA
20180007429 · 2018-01-04

The present application discloses an intelligent processing method and system for video data, wherein a warning rule is set in an intelligent camera. The method comprises: the intelligent camera collecting video data, analyzing the collected video data in real time, and generating intelligent data if the warning rule is met, the intelligent data containing an encoder identifier and motion trajectory information; the intelligent camera packaging the video data and the intelligent data into a program stream and sending it to a frame analyzing component in a cloud storage system; the frame analyzing component unpacking the received program stream to obtain the video data and the intelligent data, and storing the video data and the intelligent data in respective storage components; and the storage components sending the storage address information of the video data and the intelligent data to an index server for recording. The solutions of the present application can perform intelligent processing of the collected video data in real time.
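The packaging and unpacking of video data together with intelligent data can be illustrated with a minimal length-prefixed container. This is a stand-in for the program stream described above; the field names (`encoder_id`, `trajectory`) are assumptions for illustration:

```python
import json

def pack(video_bytes, intelligent_data):
    """Bundle a video chunk with its analysis metadata into one record:
    a 4-byte length prefix, the JSON-encoded metadata, then the video."""
    meta = json.dumps(intelligent_data).encode()
    return len(meta).to_bytes(4, "big") + meta + video_bytes

def unpack(record):
    """Split a packed record back into (video_bytes, intelligent_data),
    as the frame analyzing component would before storing each part."""
    n = int.from_bytes(record[:4], "big")
    meta = json.loads(record[4:4 + n])
    return record[4 + n:], meta
```

Keeping the metadata alongside the video in one stream means the two arrive together and can then be routed to separate storage components, mirroring the split described in the abstract.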