Patent classifications
G08B13/19641
Target object identification
A target object identification system includes a first camera, a second camera, and a processor. The first camera acquires an image of a first target region. The second camera synchronously acquires an image of a second target region, where the second target region includes part or all of the first target region. The resolution of the first camera is higher than that of the second camera, and the field of view of the second camera is greater than that of the first camera. The processor identifies first target objects according to the image of the first target region and second target objects according to the image of the second target region, and determines association relationships between the first target objects in the image of the first target region and the second target objects in the synchronously acquired image of the second target region.
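The cross-camera association described above can be sketched as mapping each detection from the narrow, high-resolution frame into the wide frame's coordinates and pairing it with the nearest wide-frame detection. This is a minimal illustration; the function names, the simple scale-and-offset mapping (in place of a full homography), and the distance threshold are all assumptions, not taken from the abstract.

```python
# Hypothetical sketch: associate detections from a high-resolution narrow-view
# camera with detections from a lower-resolution wide-view camera.

def map_to_wide(pt, scale=0.5, offset=(100.0, 80.0)):
    """Map a first-camera pixel coordinate into the second camera's frame.

    Assumes the narrow field of view appears scaled and offset inside the
    wide frame (a simplification of a full homography).
    """
    x, y = pt
    return (x * scale + offset[0], y * scale + offset[1])

def associate(first_detections, second_detections, max_dist=20.0):
    """Greedily pair each first-camera detection with the nearest
    second-camera detection within max_dist pixels."""
    pairs = {}
    for fid, fpt in first_detections.items():
        mx, my = map_to_wide(fpt)
        best, best_d = None, max_dist
        for sid, (sx, sy) in second_detections.items():
            d = ((mx - sx) ** 2 + (my - sy) ** 2) ** 0.5
            if d < best_d:
                best, best_d = sid, d
        if best is not None:
            pairs[fid] = best
    return pairs

first = {"f1": (200.0, 40.0)}   # detection in the narrow, high-res image
second = {"s1": (201.0, 99.0), "s2": (400.0, 300.0)}  # detections in the wide image
print(associate(first, second))  # → {'f1': 's1'}
```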
ACCIDENT SIGN DETECTION SYSTEM AND ACCIDENT SIGN DETECTION METHOD
Provided is a system, usable in various facilities, that detects every specific event serving as a sign of an accident, without omission, so that an alert message is transmitted at the proper time, thereby preventing accidents from occurring. The system includes cameras for capturing images of a monitoring area, and a monitoring server for controlling transmission of an alert message based on the images, wherein the monitoring server is configured to: set a sensing area around an entrance to a risky point (e.g., an escalator entrance) and a notifying area closer to the risky point than the sensing area; sense a person in the sensing area and detect a specific event associated with the person based on images captured by each camera; and, when the sensed person enters the notifying area, transmit an alert message corresponding to the specific event associated with the person.
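The two-zone logic above can be sketched as: record a specific event while the person is in the outer sensing area, and transmit the alert only once that person crosses into the inner notifying area. The zone geometry, function names, and event label are illustrative assumptions.

```python
# Illustrative sketch of zone-based alerting, assuming rectangular zones.

def in_zone(pos, zone):
    (x0, y0), (x1, y1) = zone
    x, y = pos
    return x0 <= x <= x1 and y0 <= y <= y1

SENSING_AREA = ((0, 0), (100, 100))    # outer zone around the risky point
NOTIFYING_AREA = ((40, 40), (60, 60))  # inner zone, closer to the risky point

def monitor(track):
    """track: list of (position, event_or_None) samples for one person.
    Returns the alert message to transmit, or None."""
    pending_event = None
    for pos, event in track:
        if event is not None and in_zone(pos, SENSING_AREA):
            pending_event = event             # remember the specific event
        if pending_event and in_zone(pos, NOTIFYING_AREA):
            return f"alert: {pending_event}"  # person entered notifying area
    return None

track = [((10, 10), "carrying stroller"), ((30, 30), None), ((50, 50), None)]
print(monitor(track))  # → alert: carrying stroller
```

Deferring transmission until the person enters the notifying area is what lets the system alert "at the proper time" rather than on first detection.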
Surveillance and monitoring system
A method and system provide centralized redundant monitoring suitable for effectively recording and tracking video monitoring systems at a plurality of remote surveillance locations. The method and system are configured to track, monitor, capture, and record video originating from transportation vehicles using a novel technological configuration that minimizes overall and sub-system downtime relative to conventional technologies. The remote surveillance locations are capable of utilizing self-healing and recovery mechanisms and reporting status information to the centralized monitoring system. The centralized monitoring system can use information received from the remote surveillance locations to remotely monitor the status of the remote surveillance systems, to remotely initiate self-healing and recovery mechanisms, and to request previously recorded surveillance data and live surveillance data in real time.
BEACON-AUGMENTED SURVEILLANCE SYSTEMS AND METHODS
Systems and methods are disclosed for operating a surveillance system and managing beacon-augmented surveillance data. A surveillance system may include a camera, a controller, and a transceiver. The camera may be configured to generate image data. The controller may be configured to generate image metadata that indicates a surveillance system identifier, upload the image data and the image metadata to a server, and generate a beacon that indicates the surveillance system identifier. The transceiver may be configured to transmit the beacon.
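The pairing above can be sketched as follows: the same surveillance system identifier is placed both in the image metadata uploaded to the server and in the transmitted beacon, so a receiver that hears the beacon can later be matched to the stored footage. The field names and JSON payload format are assumptions for illustration.

```python
# Hedged sketch: shared identifier between uploaded image metadata and a beacon.

import json

def make_image_metadata(system_id, frame_no, ts):
    """Metadata uploaded alongside the image data."""
    return {"system_id": system_id, "frame": frame_no, "timestamp": ts}

def make_beacon(system_id, ts):
    """Beacon payload carrying the same surveillance system identifier."""
    return json.dumps({"system_id": system_id, "timestamp": ts})

def beacon_matches_metadata(beacon, metadata):
    """True if a received beacon refers to the same system as stored metadata."""
    return json.loads(beacon)["system_id"] == metadata["system_id"]

ts = 1_700_000_000
meta = make_image_metadata("cam-42", 7, ts)
beacon = make_beacon("cam-42", ts)
print(beacon_matches_metadata(beacon, meta))  # → True
```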
Method and device for monitoring a monitoring region
A method and a device for monitoring a monitoring region with at least two image sensors. A sub-region of the monitoring region is monitored by each of the image sensors, wherein each image sensor detects objects to be monitored that are located within the sub-region monitored by that image sensor and outputs data relating to the detected objects. The image sensors are disposed and oriented in such a way that the monitored sub-regions overlap and that each object to be monitored that is located in the monitoring region is always detected by at least one image sensor.
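The overlap requirement above can be checked with a simple coverage test: the union of the sensors' sub-regions must span the monitoring region with no gap. The sketch below models sub-regions as one-dimensional intervals for clarity; a real deployment would use two-dimensional fields of view. The function name is an assumption.

```python
# Sketch: verify that overlapping sensor sub-regions leave no gap.

def fully_covered(region, sub_regions):
    """region and each sub_region are (start, end) intervals.
    Returns True if the union of sub_regions covers region without a gap."""
    start, end = region
    covered_to = start
    for s, e in sorted(sub_regions):
        if s > covered_to:      # gap before this sensor's sub-region begins
            return False
        covered_to = max(covered_to, e)
    return covered_to >= end

print(fully_covered((0, 10), [(0, 4), (3, 7), (6, 10)]))  # overlapping → True
print(fully_covered((0, 10), [(0, 4), (5, 10)]))          # gap at (4, 5) → False
```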
NAVIGABLE 3D VIEW OF A PREMISES ALARM EVENT
A control device for a premises security system is provided. The control device includes processing circuitry configured to receive a plurality of video streams associated with a plurality of image capture devices, stitch together at least a portion of the plurality of video streams to generate a three-dimensional (3D) view, determine an alarm event associated with the premises security system, and overlay at least one virtual object onto the 3D view, where the at least one virtual object indicates the alarm event and data associated with the alarm event.
Mobile terminal security systems
Systems are provided that can track an object as it moves across the field of view of a security device, integrate transaction data with the object information, and identify events that trigger an alert, together with methods for using the systems.
Real-time consumer goods monitoring
In accordance with aspects of the present disclosure, a system for monitoring one or more consumer goods is provided. The system includes at least one monitoring device configured to capture and transmit data about the one or more consumer goods, and at least one computing device configured to receive data from the at least one monitoring device, perform analysis on the captured data, and generate one or more notifications or triggers based on the analysis.
PERSON TRACKING SYSTEM AND PERSON TRACKING METHOD
In surveillance camera system 10, face detection is performed with Cam-A or Cam-F, and in a case where collation of face images finds a match with the face image of a specific person, appearance feature information is transmitted from the tracking client (30) to the other cameras Cam-B to Cam-E grouped in association with Cam-A or Cam-F. Upon detecting the appearance feature information, the other cameras Cam-B to Cam-E transmit person discovery information to the tracking client (30).
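The handoff above can be sketched as: once a face match occurs, the person's appearance features are pushed to the other cameras in the group, and any camera that later observes those features reports a discovery back to the tracking client. The class names, the string-valued appearance feature, and the in-memory message passing are illustrative assumptions standing in for real feature vectors and network transport.

```python
# Hedged sketch of grouped-camera tracking handoff.

class TrackingClient:
    def __init__(self):
        self.discoveries = []

    def report(self, camera_id, feature):
        """Receive person discovery information from a camera."""
        self.discoveries.append((camera_id, feature))

class Camera:
    def __init__(self, cam_id, client):
        self.cam_id = cam_id
        self.client = client
        self.watch_list = set()  # appearance features to look for

    def receive_feature(self, feature):
        self.watch_list.add(feature)

    def observe(self, feature):
        """If an observed feature is on the watch list, report a discovery."""
        if feature in self.watch_list:
            self.client.report(self.cam_id, feature)

client = TrackingClient()
group = [Camera(c, client) for c in ("Cam-B", "Cam-C", "Cam-D", "Cam-E")]

# Cam-A matches a specific person's face; the tracking client broadcasts the
# appearance feature to the grouped cameras.
feature = "red-jacket"
for cam in group:
    cam.receive_feature(feature)

group[2].observe("red-jacket")  # Cam-D spots the person
print(client.discoveries)       # → [('Cam-D', 'red-jacket')]
```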
SYSTEM AND METHOD FOR DISTRIBUTED INTELLIGENT PATTERN RECOGNITION
Embodiments include a system, method, and computer program product for distributed intelligent pattern recognition. Embodiments include a cooperative multi-agent detection system that enables an array of disjunctive devices (e.g., cameras, sensors) to selectively cooperate to identify objects of interest over time and space, and to contribute an object of interest to a shared deep learning pattern recognition system based on a bidirectional feedback mechanism. Embodiments provide updated information and/or algorithms to one or more agencies for local system learning and updating of pattern recognition models. Each of the multiple agencies may in turn update devices (e.g., cameras, sensors) coupled to the local machine learning and pattern recognition models.