Patent classifications
G08B13/19615
CUSTOM EVENT DETECTION FOR SURVEILLANCE CAMERAS
A system trains and uses event recognition models for recognizing custom event types defined by a user within the camera feed of a surveillance camera. The camera can be fixed-view, with a relatively constant position and angle, so the background of the video images can likewise be relatively constant. A user interface receives, from a user, positive and negative samples of the event in question, such as a designation of live or pre-recorded portions of a camera feed as positive or negative examples of the event. Based on the samples, the system trains an event recognition model (e.g., using few-shot learning techniques) to detect occurrences of the custom event types in the camera feed. A response is performed based on detected occurrences of the event. The user can flag mistakes (false positives or false negatives), which can be incorporated into the model to improve its accuracy.
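The train-from-samples / flag-mistakes loop described above can be sketched as follows. This is a minimal illustration, not the patent's method: feature vectors stand in for embeddings of camera-feed segments, and a nearest-centroid rule stands in for the few-shot learner; all names and values are assumptions.

```python
# Minimal sketch: train an event recognizer from user-labeled positive and
# negative clip features, then fold a user-flagged mistake back into the model.

def centroid(samples):
    """Element-wise mean of a list of feature vectors."""
    n = len(samples)
    return [sum(v[i] for v in samples) / n for i in range(len(samples[0]))]

def train(positives, negatives):
    return {"pos": centroid(positives), "neg": centroid(negatives)}

def detect(model, feature):
    """True when the segment is closer to the positive centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return dist(feature, model["pos"]) < dist(feature, model["neg"])

def flag_mistake(positives, negatives, feature, was_false_positive):
    """Incorporate a user-flagged error as a new sample and retrain."""
    (negatives if was_false_positive else positives).append(feature)
    return train(positives, negatives)

pos = [[1.0, 0.9], [0.8, 1.1]]   # features of user-designated positive clips
neg = [[0.0, 0.1], [0.2, 0.0]]   # features of user-designated negative clips
model = train(pos, neg)
print(detect(model, [0.9, 1.0]))  # segment resembling the positives
```

The feedback step mirrors the abstract's error-flagging: a false positive becomes a new negative sample (and vice versa), and the model is retrained.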
Systems and methods for categorizing motion events
The various embodiments described herein include methods, devices, and systems for categorizing motion events. In one aspect, a method is performed at a camera device. The method includes: (1) capturing a plurality of video frames via an image sensor of the camera device, the plurality of video frames corresponding to a scene in a field of view of the camera; (2) sending the video frames to a remote server system in real time; (3) while sending the video frames to the remote server system in real time: (a) determining that motion has occurred within the scene; (b) in response to determining that motion has occurred within the scene, characterizing the motion as a motion event; and (c) generating motion event metadata for the motion event; and (4) sending the generated motion event metadata to the remote server system concurrently with the video frames.
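The on-camera pipeline above can be sketched as a stream that emits metadata alongside the frames it sends. The frame values and the pixel-difference motion test are illustrative stand-ins, not the patent's detection method.

```python
# Sketch: stream frames while detecting motion on-device and emitting
# motion-event metadata concurrently with the frames.

def frame_diff(prev, cur):
    """Mean absolute pixel difference between two flattened frames."""
    return sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)

def stream_with_metadata(frames, motion_threshold=10.0):
    """Yield (frame, metadata) pairs; metadata is None when no motion."""
    prev = None
    for index, frame in enumerate(frames):
        metadata = None
        if prev is not None and frame_diff(prev, frame) > motion_threshold:
            metadata = {"event": "motion", "frame_index": index}
        prev = frame
        yield frame, metadata

frames = [[0, 0, 0], [0, 0, 0], [50, 60, 70], [50, 60, 70]]
for frame, meta in stream_with_metadata(frames):
    print(meta)
```

Yielding the pair at each step models step (4): the metadata travels to the server together with, not after, the video frames.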
Auto-configuration for a motion detector of a security device
A method and system for configuring motion detection by a security device. The security device includes a camera configured to capture images of an environment in front of the security device and a motion sensor for detecting motion within the environment. The system also includes a server and a client device that are interconnected by a network. An AI learning module interacts with a user of the client device to capture an image of the user within the environment in front of the security device, and determines parameters for the motion sensor based upon analysis of the image to determine a location of an exempt area relative to the security device. The motion sensor is then configured based upon the determined parameters.
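The parameter-derivation step can be sketched as mapping the user's detected position in the captured image onto motion-sensor zones to disable. The bounding box and the horizontal zone grid are illustrative assumptions, not the patent's parameterization.

```python
# Sketch: derive motion-sensor parameters from the location of a user
# standing in an exempt area (e.g., a public sidewalk) in a captured image.

def exempt_zone_from_user(user_box, image_width, zone_count=4):
    """Map a user's bounding box (x_min, x_max in pixels) onto the set of
    horizontal motion-sensor zones that overlap it."""
    zone_width = image_width / zone_count
    x_min, x_max = user_box
    return {z for z in range(zone_count)
            if x_min < (z + 1) * zone_width and x_max > z * zone_width}

def configure_motion_sensor(exempt_zones, zone_count=4):
    """Enable every zone except the exempt ones."""
    return {z: (z not in exempt_zones) for z in range(zone_count)}

zones = exempt_zone_from_user((150, 330), image_width=640)
print(configure_motion_sensor(zones))
```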
Multilayer information dynamics for activity and behavior detection
Described is a system for activity and behavior detection in a target system. Raw data extracted from various heterogeneous sources of the target system is fused across spatial and temporal scales into a multi-graph representation. Information flows of the multi-graph representation are analyzed using a set of multi-layer information dynamic measures. Based on the set of multi-layer information dynamic measures, at least one economic or social indicator of emerging activity of interest in the target system is derived. The indicator is then used for prediction of future activity of interest in the target system.
Hazard detection through computer vision
Systems and methods for detecting a hazard in a facility include the use of one or more cameras coupled with a hazard detection server. The hazard detection server is adapted to analyze images from the cameras, determine probabilities of hazards being present in the images, and provide an alert to a manager or workers when the probabilities exceed a hazard threshold.
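The thresholding step on the server side reduces to a simple comparison. A minimal sketch, with made-up camera names and probabilities standing in for the image model's output:

```python
# Sketch: per-camera hazard probabilities (as produced by some image model)
# are compared against a hazard threshold; cameras above it raise an alert.

def hazards_to_alert(probabilities, hazard_threshold=0.8):
    """Return the camera ids whose hazard probability exceeds the threshold."""
    return [cam for cam, p in probabilities.items() if p > hazard_threshold]

probs = {"dock_cam": 0.92, "aisle_cam": 0.35, "exit_cam": 0.81}
print(hazards_to_alert(probs))  # → ['dock_cam', 'exit_cam']
```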
Predictive alarm analytics
Systems and techniques are provided for identifying patterns of activity at a facility that appear to precede alarm events. In some implementations, a monitoring system is configured to monitor a property and includes a sensor configured to generate sensor data. A monitor control unit is configured to receive the sensor data and, based on the sensor data, determine an alarm status of the monitoring system. A monitoring application server is configured to receive the sensor data and data indicating the alarm status of the monitoring system. The server applies the sensor data to a model that determines an estimated alarm status of the monitoring system and determines that the estimated alarm status differs from the alarm status. Based on determining that the estimated alarm status of the monitoring system differs from the alarm status of the monitoring system, the server overrides the alarm status.
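The override logic can be sketched in a few lines. The model here is a toy majority vote over sensor readings, an assumption standing in for whatever model the system trains:

```python
# Sketch: a model estimates the alarm status from sensor data; when the
# estimate disagrees with the reported status, the server overrides it.

def estimate_alarm_status(sensor_data):
    """Toy model: alarm if most sensors report activity."""
    active = sum(1 for reading in sensor_data if reading)
    return active > len(sensor_data) / 2

def resolve_alarm(reported_status, sensor_data):
    estimated = estimate_alarm_status(sensor_data)
    # Override the reported status whenever the model disagrees with it.
    return estimated if estimated != reported_status else reported_status

print(resolve_alarm(True, [False, False, False, True]))  # likely false alarm
```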
SECURITY ECOSYSTEM
A system, method, and apparatus for implementing workflows across multiple differing systems and devices is provided herein. During operation, a workflow for a first camera is automatically suggested, or a new workflow is generated for the first camera, based upon a workflow being created for a second camera having a similar field of view to the first camera. In particular, a workstation (or server) will receive an indication that a workflow was created for a camera. The workstation (or server) then determines whether any other cameras have similar fields of view. New workflows will then be suggested (or implemented) for the cameras having similar fields of view. The suggested/implemented workflows will have a similar trigger and a similar action.
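The suggestion step can be sketched as a similarity check over camera fields of view. FOV is modeled here as a (pan, tilt) pair and similarity as closeness within a tolerance; both, like the camera and workflow names, are simplifying assumptions.

```python
# Sketch: when a workflow is created for one camera, suggest it for every
# other camera whose field of view is similar.

def similar_fov(fov_a, fov_b, tolerance=10.0):
    """True when each FOV component differs by at most the tolerance."""
    return all(abs(a - b) <= tolerance for a, b in zip(fov_a, fov_b))

def suggest_workflows(new_workflow, source_camera, cameras):
    """Return {camera: workflow} suggestions for similar-FOV cameras."""
    source_fov = cameras[source_camera]
    return {name: new_workflow
            for name, fov in cameras.items()
            if name != source_camera and similar_fov(source_fov, fov)}

cameras = {"lobby_a": (90, 10), "lobby_b": (95, 12), "garage": (200, 45)}
workflow = {"trigger": "person_detected", "action": "notify_guard"}
print(suggest_workflows(workflow, "lobby_a", cameras))
```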
Multi-camera system to perform movement pattern anomaly detection
A method of performing movement pattern anomaly detection with targeted alerts can include receiving input from each camera of a multi-camera system and for each input: performing video content analysis; generating a critical analysis matrix associated with the input from that camera; assigning a fusion value for each vector of the critical analysis matrix using a fusion map that indicates particular fusion values associated with possible elements of the critical analysis matrix; and triggering an alert according to whether the fusion value exceeds a threshold associated with that camera. The critical analysis matrix includes output from at least two different computer vision algorithms of the video content analysis applied to the input from a camera.
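The per-camera fusion step can be sketched as follows. The matrix element names, their fusion-map values, and the summation rule are illustrative assumptions; the patent only specifies that the matrix combines outputs of at least two computer vision algorithms and that a fusion map assigns values to its elements.

```python
# Sketch: assign a fusion value to each vector of the critical analysis
# matrix via a fusion map, and trigger an alert when a vector's value
# exceeds the camera's threshold.

FUSION_MAP = {  # possible matrix elements -> fusion values (assumed)
    "loitering": 0.4, "wrong_direction": 0.5,
    "running": 0.3, "normal_walk": 0.0,
}

def fuse(vector):
    """Fusion value of one matrix vector: sum of its elements' mapped values."""
    return sum(FUSION_MAP[element] for element in vector)

def alerts(matrix, threshold):
    """Indices of vectors whose fusion value exceeds the camera's threshold."""
    return [i for i, vector in enumerate(matrix) if fuse(vector) > threshold]

matrix = [["loitering", "wrong_direction"],  # outputs of two vision algorithms
          ["normal_walk", "running"]]
print(alerts(matrix, threshold=0.6))  # → [0]
```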
VIDEO ANALYTICS SYSTEM
A security system can use video analytics and/or other input parameters to identify a theft event. Optionally, the security system can take remedial action in response. For example, the security system can use video analytics to determine that a person has reached into a shelf multiple times at a rate above a threshold, which can indicate that a thief is quickly removing items from the shelf. The security system can also use video analytics to determine that a person has reached into a shelf via a sweeping action, which can indicate that a thief is gathering and removing a large quantity of items from the shelf in one motion. In response, the security system can alert security personnel, cause a speaker to output an audible message in the target area, flag portions of the video relating to the theft event, activate or ready other sensors or systems, and/or the like.
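The rate-above-threshold check can be sketched on the timestamps of detected shelf reaches (which the system would obtain from video analytics). The window length and reach limit are illustrative assumptions:

```python
# Sketch: count shelf-reach events in a sliding time window; exceeding
# the rate threshold flags a possible theft event.

def theft_rate_exceeded(reach_times, window_seconds=10.0, max_reaches=3):
    """True if more than max_reaches reaches fall within any single window."""
    reach_times = sorted(reach_times)
    for i, start in enumerate(reach_times):
        in_window = [t for t in reach_times[i:] if t - start <= window_seconds]
        if len(in_window) > max_reaches:
            return True
    return False

print(theft_rate_exceeded([0.0, 2.0, 4.0, 5.5, 7.0]))  # 5 reaches in 7 s
print(theft_rate_exceeded([0.0, 30.0, 60.0]))          # reaches spread out
```

A positive result would then drive the responses the abstract lists: alerting security personnel, playing an audible message, flagging the relevant video, or readying other sensors.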