Patent classifications
G06V20/44
IMAGING SYSTEM FOR DETECTING HUMAN-OBJECT INTERACTION AND A METHOD FOR DETECTING HUMAN-OBJECT INTERACTION
The present application discloses an imaging system for detecting human-object interaction and a method for detecting human-object interaction. The imaging system includes an event sensor, an image sensor, and a controller. The event sensor is configured to obtain an event data set of a targeted scene according to variations of light intensity sensed by pixels of the event sensor when an event occurs in the targeted scene. The image sensor is configured to capture a visual image of the targeted scene. The controller is configured to detect a human according to the event data set, trigger the image sensor to capture the visual image when the human is detected, and detect the human-object interaction in the targeted scene according to the visual image and a series of event data sets obtained by the event sensor during the event.
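The trigger logic above can be sketched minimally: threshold the event sensor's intensity-change data to decide whether someone is present, and capture a full visual frame only then. All function names and thresholds below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: event-driven capture trigger. The per-pixel event data
# is a list of intensity-change magnitudes; enough active pixels is treated as
# a crude sign of human presence.

def detect_human(event_data, min_active_pixels=50):
    """Crude presence test: enough pixels reported an intensity change."""
    active = sum(1 for dx in event_data if abs(dx) > 0)
    return active >= min_active_pixels

def process_event(event_data, capture_frame):
    """If the event data suggests a human, trigger the image sensor."""
    if detect_human(event_data):
        return capture_frame()   # visual image used for interaction analysis
    return None

# usage: a burst of 60 changed pixels crosses the threshold and captures a frame
frame = process_event([1] * 60, capture_frame=lambda: "frame-0")
```

A real controller would go on to fuse the captured frame with the event-data series accumulated during the event; this sketch only shows the gating step.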
Regaining frictionless status of shoppers
A method for addressing a shopper's eligibility for frictionless checkout may include identifying at least one shopper in a retail store designated as not eligible for frictionless checkout; in response to the identification of the at least one shopper designated as not eligible for frictionless checkout, automatically identifying an ineligibility condition associated with the at least one shopper's designation as not eligible for frictionless checkout; determining one or more actions for resolving the ineligibility condition; causing implementation of the one or more actions for resolving the ineligibility condition; receiving an indication of successful completion of the one or more actions; and in response to receipt of the indication of successful completion of the one or more actions, generating a status indicator indicating that the at least one shopper is eligible for frictionless checkout and storing the generated status indicator in a memory.
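The claimed flow maps an ineligibility condition to resolving actions and flips the stored status only after those actions complete. The condition names and action table below are invented examples to make the control flow concrete.

```python
# Hedged sketch of the eligibility-restoration flow. ACTIONS, the condition
# names, and the status strings are all hypothetical stand-ins.

ACTIONS = {
    "unverified_payment": ["request_payment_method"],
    "unidentified_entry": ["prompt_app_checkin"],
}

def restore_eligibility(shopper, condition, completed, store):
    """Resolve the condition's actions; on success, store the new status."""
    actions = ACTIONS.get(condition, [])
    if actions and all(completed(a) for a in actions):
        store[shopper] = "eligible"   # generated status indicator in memory
        return True
    return False

status = {}
ok = restore_eligibility("s1", "unverified_payment", lambda a: True, status)
```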
Homography error correction
An object tracking system includes a sensor configured to capture frames of at least a portion of a global plane for a space. The system is configured to receive a first frame from the sensor, to identify a pixel location within the first frame, and to determine an estimated sensor location for the sensor by applying a homography to the pixel location. The homography includes coefficients that translate between pixel locations in a frame from the sensor and (x,y) coordinates in the global plane. The system is further configured to determine an actual sensor location for the sensor and to determine a location difference between the estimated sensor location and the actual sensor location. The system is further configured to compare the location difference to a difference threshold level and to recompute the homography in response to determining that the location difference exceeds the difference threshold level.
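The drift test above reduces to: project a pixel through a 3×3 homography into global (x, y) coordinates, measure the distance to the known sensor location, and recompute when the error exceeds a threshold. The matrix and threshold below are made-up values for demonstration.

```python
# Illustrative homography-drift check. H, the pixel, and the threshold are toy
# assumptions; a deployed system would estimate H from calibration markers.
import math

def apply_homography(H, px, py):
    """Map a pixel (px, py) to global-plane coordinates via H (row-major 3x3)."""
    x = H[0][0] * px + H[0][1] * py + H[0][2]
    y = H[1][0] * px + H[1][1] * py + H[1][2]
    w = H[2][0] * px + H[2][1] * py + H[2][2]
    return x / w, y / w

def needs_recompute(H, pixel, actual_xy, threshold):
    """True if the estimated location drifts too far from the actual one."""
    est = apply_homography(H, *pixel)
    diff = math.dist(est, actual_xy)   # Euclidean location difference
    return diff > threshold

# identity-like homography with a small translation error of 0.5 units
H = [[1.0, 0.0, 0.5], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
flag = needs_recompute(H, (10.0, 20.0), (10.0, 20.0), threshold=0.1)
```

Here the 0.5-unit error exceeds the 0.1 threshold, so the check signals that the homography should be recomputed.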
Generation of business process model
One embodiment provides a method, including: obtaining at least one video capturing images of a writing capture device used during a business process design session, wherein the images comprise portions of the process flow; obtaining at least one audio recording corresponding to the business process design session; identifying intended business process model shapes; determining at least one business process model shape missing from the process flow provided on the writing capture device; identifying task dependencies for pairs of business process model shapes; and generating a business process model from (i) the intended business process model shapes, (ii) the at least one business process model shape missing from the process flow, and (iii) the identified task dependencies.
Learning highlights using event detection
A highlight learning technique is provided to detect and identify highlights in sports videos. A set of event models is calculated from low-level frame information of the sports videos to identify recurring events within the videos. The event models are used to characterize videos by detecting events within the videos and using the detected events to generate an event vector. The event vector is used to train a classifier to identify the videos as highlight or non-highlight.
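The classification stage can be sketched with event-count vectors: each video is summarized by how often each detected event type occurs, and a classifier separates highlight from non-highlight. A nearest-centroid rule stands in here for whatever classifier the patent actually trains, and the event counts are fabricated for illustration.

```python
# Hypothetical sketch: event vectors (counts of e.g. crowd-cheer, replay,
# whistle events per video) classified by distance to class centroids.

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(event_vector, highlight_c, normal_c):
    """Label by which class centroid is closer (squared Euclidean distance)."""
    d = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return "highlight" if d(event_vector, highlight_c) < d(event_vector, normal_c) else "non-highlight"

hi_c = centroid([[5, 3, 1], [6, 4, 0]])   # event vectors of known highlights
no_c = centroid([[0, 1, 0], [1, 0, 1]])   # event vectors of ordinary clips
label = classify([5, 3, 0], hi_c, no_c)
```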
Video event recognition method, electronic device and storage medium
Technical solutions for video event recognition relate to the fields of knowledge graphs, deep learning and computer vision. A video event graph is constructed, and each event in the video event graph includes: M argument roles of the event and respective arguments of the argument roles, with M being a positive integer greater than one. For a to-be-recognized video, respective arguments of the M argument roles of a to-be-recognized event corresponding to the video are acquired. According to the arguments acquired, an event is selected from the video event graph as a recognized event corresponding to the video.
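The selection step can be sketched as matching: each candidate event in the video event graph carries M argument roles with expected arguments, and the recognized event is the one whose (role, argument) pairs best overlap those extracted from the video. The graph contents below are invented examples.

```python
# Hedged sketch of event selection from a video event graph. The events,
# roles, and arguments are hypothetical placeholders.

EVENT_GRAPH = {
    "goal_scored": {"agent": "player", "object": "ball", "location": "net"},
    "car_crash":   {"agent": "driver", "object": "car",  "location": "road"},
}

def recognize(extracted):
    """Pick the graph event sharing the most (role, argument) pairs."""
    score = lambda roles: sum(extracted.get(r) == a for r, a in roles.items())
    return max(EVENT_GRAPH, key=lambda e: score(EVENT_GRAPH[e]))

# two of three roles match "goal_scored", none match "car_crash"
event = recognize({"agent": "player", "object": "ball", "location": "field"})
```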
Investigation system for finding lost objects
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for finding lost objects. In some implementations, a request for a location of an item is obtained. Current video data from one or more cameras is obtained. It is determined that the item is not shown in the current video data. Sensor data corresponding to historical video data is obtained. Events that likely occurred with the item and corresponding likelihoods for each of the events are determined. A likely location for the item is determined based on the likelihoods determined for the events. An indication of the likely location of the item is provided.
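The inference step above amounts to scoring candidate events that may have moved the item, each carrying a likelihood and an implied location, and reporting the location of the most likely event. The events and probabilities below are fabricated for illustration.

```python
# Minimal sketch of choosing the likely location of a lost item. The event
# list and likelihood values are hypothetical.

def likely_location(events):
    """events: list of (description, location, likelihood); return best location."""
    best = max(events, key=lambda e: e[2])
    return best[1]

loc = likely_location([
    ("left at checkout", "checkout counter", 0.2),
    ("placed in cart",   "cart return area", 0.7),
    ("dropped in aisle", "aisle 4",          0.1),
])
```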
SYSTEM AND METHOD FOR DETECTING ERRORS IN A TASK WORKFLOW FROM A VIDEO STREAM
A system for detecting errors in task workflows records a real-time video feed that shows a plurality of steps being performed to accomplish a plurality of tasks through an automation process system. The system splits the video feed, at valid breakpoints determined by a cognitive machine learning engine, into a plurality of video recordings, where each video recording shows a single task. For each task from among the plurality of tasks, the system determines whether the task fails and the exact point of failure for that task. If the system determines that the task fails, the system determines the particular step where the task fails, flags the particular step as a failed step, and reports the flagged step for troubleshooting.
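The per-task check can be sketched as a walk over each recording's steps in order, flagging the first step that fails. The step list and pass/fail oracle are mocked below; the real system derives them from the video with a machine learning engine.

```python
# Hypothetical sketch of per-task failure detection. Step names and the
# step_ok predicate are illustrative stand-ins for video-derived results.

def find_failed_step(steps, step_ok):
    """Return the first step where the task fails, or None if all steps pass."""
    for step in steps:
        if not step_ok(step):
            return step   # flagged step reported for troubleshooting
    return None

task = ["open form", "fill fields", "submit"]
flagged = find_failed_step(task, step_ok=lambda s: s != "submit")
```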
Systems and methods for improved operations of ski lifts
Systems and methods for improved operations of ski lifts increase skier safety at on-boarding and off-boarding locations by providing an always-on, always-alert system that “watches” these locations, identifies developing problem situations, and initiates mitigation actions. One or more video cameras feed live video to a video processing module. The video processing module feeds resulting sequences of images to an artificial intelligence (AI) engine. The AI engine makes an inference regarding existence of a potential problem situation based on the sequence of images. This inference is fed to an inference processing module, which determines whether to send an alert or to interact with the lift motor controller to slow or stop the lift.
Time-series based analytics using video streams
Methods and systems for detecting and predicting anomalies include processing frames of a video stream to determine values of a feature corresponding to each frame. A feature time series is generated that corresponds to values of the identified feature over time. A matrix profile is generated that identifies similarities of sub-sequences of the time series to other sub-sequences of the feature time series. An anomaly is detected by determining that a value of the matrix profile exceeds a threshold value. An automatic action is performed responsive to the detected anomaly.
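A matrix profile records, for each subsequence of the feature time series, the distance to its nearest non-trivial match elsewhere in the series; subsequences unlike everything else get large profile values and mark anomalies. The brute-force sketch below illustrates this on a toy series; the window size, threshold, and data are assumptions, and production systems use far faster algorithms.

```python
# Illustrative brute-force matrix profile over a feature time series.
import math

def matrix_profile(series, m):
    """For each length-m subsequence, distance to its nearest other subsequence."""
    subs = [series[i:i + m] for i in range(len(series) - m + 1)]
    d = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    profile = []
    for i, a in enumerate(subs):
        # nearest neighbor, excluding the trivial self-match
        profile.append(min(d(a, b) for j, b in enumerate(subs) if j != i))
    return profile

series = [1, 2, 1, 2, 1, 2, 9, 2, 1, 2, 1, 2]   # one spike at index 6
mp = matrix_profile(series, m=3)
anomalies = [i for i, v in enumerate(mp) if v > 3.0]   # threshold test
```

The repeating pattern yields profile values of zero, while the windows touching the spike have no close match, so only their indices cross the threshold and would trigger the automatic action.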