G08B13/19643

Camera apparatus for generating machine vision data and related methods
11115604 · 2021-09-07

Example camera apparatus for generating machine vision data and related methods are disclosed herein. An example apparatus disclosed herein includes a first camera coupled to a movable turret and a second camera coupled to the movable turret. The first camera and the second camera are co-bore sighted. The first camera and the second camera are to generate image data of an environment. The example apparatus includes a processor in communication with at least one of the first camera or the second camera. The processor is to generate a first image data feed and a second image data feed based on the image data. The first image data feed includes a first image data feature and the second image data feed includes a second image data feature different than the first image data feature. The processor is to transmit the second image data feed for analysis by a machine vision analyzer.
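The dual-feed split described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the abstract does not specify which image data features distinguish the feeds, so resolution and bit depth are assumed here, and all names are invented.

```python
# Hypothetical sketch: one shared frame from the co-bore-sighted turret
# cameras becomes two feeds with different image data features.
# The chosen features (resolution vs. bit depth) are assumptions.

def make_feeds(frame):
    """frame: 2D list of 16-bit pixel values from the turret cameras."""
    # Feed 1: half-resolution data (e.g., for a human operator's display).
    feed1 = [row[::2] for row in frame[::2]]
    # Feed 2: full-resolution, 8-bit data transmitted for analysis
    # by the machine vision analyzer.
    feed2 = [[p >> 8 for p in row] for row in frame]
    return feed1, feed2

frame = [[i * 256 for i in range(4)] for _ in range(4)]
preview, mv_feed = make_feeds(frame)
```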

Method and device for capturing target object and video monitoring device

A method and an apparatus for capturing a target object and a video monitoring device are disclosed. The method includes: detecting target objects in a current panoramic video frame acquired by a panoramic camera; determining detail camera position information corresponding to each of the target objects, and determining a magnification corresponding to the target objects; performing block processing on the target objects to obtain at least one target block; for each target block, identifying first target objects, at edge positions, among target objects included in the target block, and determining, based on the first target objects, the detail camera position information and the magnification corresponding to the target block; for each target block, controlling the detail camera to adjust its position and magnification based on the detail camera position information and the magnification corresponding to the target block, and controlling the adjusted detail camera to capture the target block.
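The block-processing step above can be sketched roughly as follows: the edge-most target objects in a block fix the block's extent, from which a detail-camera position and magnification follow. The grouping into a single block, the 1-D coordinates, and the zoom formula are all assumptions for illustration; the patent does not give concrete formulas.

```python
# Illustrative sketch: targets detected in the panoramic frame are
# reduced to one block; the first target objects at the edge positions
# determine the detail camera position information and magnification.
# The linear zoom model is an assumption.

def block_params(targets, fov_width=1000):
    """targets: list of (x_min, x_max) extents in panoramic pixels."""
    left = min(t[0] for t in targets)    # edge-position target at left
    right = max(t[1] for t in targets)   # edge-position target at right
    center = (left + right) / 2          # detail camera position
    magnification = fov_width / max(right - left, 1)
    return center, magnification

center, mag = block_params([(100, 200), (150, 400)])
```

After these parameters are computed, the detail camera would be adjusted to `center` and `magnification` before capturing the block.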

Methods and systems of multi-camera with multi-mode monitoring

The present disclosure relates to multi-camera systems and methods. The method may include determining a level of a camera in a multi-camera system based on the status of an object, and determining a mode of the camera based on the level. The mode may relate to one of a bit rate, an I frame interval, or a coding algorithm. The method may further include generating a monitor file relating to the object based on the level of the camera in the multi-camera system.
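A minimal sketch of the status-to-level-to-mode mapping reads as follows. The concrete levels, bit rates, I frame intervals, and codecs are invented for illustration; the disclosure only states that the mode relates to one of those encoding parameters.

```python
# Assumed mapping tables; the patent does not specify actual values.
MODES = {
    # level: (bit_rate_kbps, i_frame_interval, coding_algorithm)
    0: (512,  100, "h264"),   # object absent: low-detail mode
    1: (2048, 50,  "h264"),   # object approaching
    2: (8192, 25,  "h265"),   # object present: full-detail mode
}

def camera_mode(object_status):
    """Determine the camera's level from the object's status,
    then the camera's mode from the level."""
    level = {"absent": 0, "approaching": 1, "present": 2}[object_status]
    return MODES[level]

bit_rate, i_interval, codec = camera_mode("present")
```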

VIRTUAL ENHANCEMENT OF SECURITY MONITORING
20210174654 · 2021-06-10

Methods, systems, and apparatus, including computer programs encoded on storage devices, for monitoring, security, and surveillance of a property. In one aspect, a system includes a virtual reality headset, a plurality of cameras, a plurality of sensors that includes a first sensor, a control unit, wherein the control unit includes a network interface, a processor, a storage device that includes instructions to perform operations that comprise receiving data from the first sensor that is indicative of an alarm event, determining a location of the first sensor, identifying a set of one or more cameras from the plurality of cameras that are associated with the first sensor, selecting a particular camera from the identified set of one or more cameras; and transmitting one or more instructions to the particular camera that command the particular camera to stream a live video feed to a user interface of the virtual reality headset.
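The selection logic in this abstract can be sketched as a simple lookup-and-pick flow. The sensor-to-camera association table and the "first camera in the set" selection rule are assumptions; the patent leaves the selection criterion open.

```python
# Hypothetical association of sensors with their nearby cameras.
SENSOR_CAMERAS = {
    "front_door": ["cam_porch", "cam_driveway"],
    "back_window": ["cam_yard"],
}

def select_camera(sensor_id, sensor_cameras=SENSOR_CAMERAS):
    """Identify the set of cameras associated with the sensor and
    select a particular camera from that set (here, the first)."""
    cameras = sensor_cameras.get(sensor_id, [])
    return cameras[0] if cameras else None

def handle_alarm(sensor_id):
    """On data indicative of an alarm event, pick a camera and
    command it to stream to the VR headset's user interface."""
    cam = select_camera(sensor_id)
    return {"camera": cam, "action": "stream_to_headset"} if cam else None

cmd = handle_alarm("front_door")
```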

IMAGING SYSTEMS AND METHODS FOR TRACKING OBJECTS
20210182570 · 2021-06-17

A first imager has a relatively high resolution and a relatively narrow first field-of-view. Information about objects in an environment is detected or captured, and used to steer the first field-of-view of the first imager. The sensor(s) may take the form of a second imager with a relatively lower resolution and relatively wider second field-of-view. Alternatively, other types of sensors, for instance presence/absence sensors, may be employed. The first field-of-view may be directed toward an object that satisfies one or more conditions, for instance matching a particular SKU. The first field-of-view may track a moving object, for instance via a tracking mirror and actuator. This approach may be employed in retail locations, for example in grocery or convenience stores, for instance to reduce various forms of theft, or in industrial environments.
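The steering step can be sketched as a pixel-to-angle conversion: a detection in the wide, low-resolution imager is mapped to a pan angle for the tracking mirror. The linear projection model, the 60-degree wide field-of-view, and the 640-pixel width are assumptions for illustration.

```python
WIDE_FOV_DEG = 60.0    # assumed horizontal FOV of the wide second imager
WIDE_WIDTH_PX = 640    # assumed wide-imager horizontal resolution

def mirror_pan_angle(detection_x):
    """Map a detection's x pixel in the wide image to a mirror pan
    angle in degrees from boresight, assuming a linear projection."""
    return (detection_x / WIDE_WIDTH_PX - 0.5) * WIDE_FOV_DEG

def track(detections, wanted_sku):
    """Direct the narrow first field-of-view at the first object
    satisfying the condition (here, matching a particular SKU)."""
    for x_px, sku in detections:
        if sku == wanted_sku:
            return mirror_pan_angle(x_px)
    return None

angle = track([(160, "sku_123"), (480, "sku_456")], "sku_456")
```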

METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR EMULATING DEPTH DATA OF A THREE-DIMENSIONAL CAMERA DEVICE

A method, system and computer program product for emulating depth data of a three-dimensional camera device is disclosed. The method includes concurrently operating a radar device and the 3D camera device to generate training radar data and training depth data respectively. Each of the radar device and the 3D camera device has a respective field of view. The field of view of the radar device overlaps the field of view of the 3D camera device. The method also includes inputting the training radar and depth data to a neural network. The method also includes employing the training radar and depth data to train the neural network. Once trained, the neural network is configured to receive real radar data as input and to output substitute depth data.
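The training procedure above can be illustrated with a toy stand-in: concurrently captured radar/depth pairs are used to fit a model that later outputs substitute depth from radar alone. A one-parameter linear model trained by gradient descent replaces the neural network purely for illustration; the patent describes a neural network, not this model.

```python
# Toy illustration only: fit depth = w * radar from paired training
# data, then emulate depth from radar alone. A real system would use
# a neural network over the overlapping fields of view.

def train(radar, depth, lr=0.01, epochs=200):
    """Fit w by gradient descent on mean squared error."""
    w = 0.0
    for _ in range(epochs):
        grad = sum((w * r - d) * r for r, d in zip(radar, depth)) / len(radar)
        w -= lr * grad
    return w

# Concurrently captured training pairs (radar return -> true depth).
radar_samples = [1.0, 2.0, 3.0]
depth_samples = [2.0, 4.0, 6.0]   # underlying relation: depth = 2 * radar
w = train(radar_samples, depth_samples)
substitute_depth = w * 1.5        # depth emulated from radar data alone
```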

SYSTEM FOR MONITORING EVENT RELATED DATA
20210118283 · 2021-04-22

A system for monitoring event related data including a sensor data analyzer, an event analyzer and an actuator is disclosed. The sensor data analyzer detects events based on sensor data, the event analyzer couples to the sensor data analyzer and estimates the entire size of the detected events based on event related data of the detected events from the sensor data analyzer, and the actuator couples to the sensor data analyzer and the event analyzer and actuates a predetermined device based on the estimated entire size of the detected events.
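The analyzer-analyzer-actuator pipeline can be sketched in three small stages. The threshold values and the interpretation of "size" as a sum of event magnitudes are assumptions for illustration.

```python
# Hedged sketch of the three components; thresholds are invented.

def detect_events(sensor_data, detection_threshold=50):
    """Sensor data analyzer: flag readings above a detection threshold."""
    return [r for r in sensor_data if r > detection_threshold]

def estimate_size(events):
    """Event analyzer: estimate the entire size of the detected events
    from their event related data (here, summed magnitudes)."""
    return sum(events)

def actuate(size, actuation_threshold=200):
    """Actuator: actuate a predetermined device for large events."""
    return "alarm_on" if size >= actuation_threshold else "idle"

readings = [10, 80, 90, 60, 5]
events = detect_events(readings)
action = actuate(estimate_size(events))
```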

System for monitoring event related data
10970995 · 2021-04-06

A system for monitoring event related data including a sensor data analyzer, an event analyzer and an actuator is disclosed. The sensor data analyzer detects events based on sensor data, the event analyzer couples to the sensor data analyzer and estimates the entire size of the detected events based on event related data of the detected events from the sensor data analyzer, and the actuator couples to the sensor data analyzer and the event analyzer and actuates a predetermined device based on the estimated entire size of the detected events.

Audio/video recording and communication devices in network communication with additional cameras

Audio/video (A/V) recording and communication devices in network communication with additional cameras in accordance with various embodiments of the present disclosure are provided. In one embodiment, an audio/video (A/V) recording and communication device is provided comprising: a first camera configured to capture image data at a first resolution; a communication module; and a processing module operatively connected to the first camera and the communication module, wherein the processing module is in network communication with a backend server, the processing module comprising: a processor; and a camera application that configures the processor to: maintain the first camera in a low-power state; receive a power-up command signal from the backend server based on an output signal from a second camera; power up the first camera in response to the power-up command signal; and capture image data using the first camera in response to the power-up command signal.
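The power-state flow in this embodiment can be sketched as a small state machine: the first camera idles in a low-power state until the backend server, acting on an output signal from the second camera, sends a power-up command. Class and method names here are illustrative.

```python
# Hedged sketch of the camera application's power-up behavior.

class RecordingDevice:
    def __init__(self):
        # Maintain the first camera in a low-power state by default.
        self.first_camera_state = "low_power"
        self.captured = []

    def on_backend_signal(self, signal):
        """Handle a command signal from the backend server, which
        issued it based on an output signal from a second camera."""
        if signal == "power_up":
            self.first_camera_state = "on"
            # Capture image data at the first resolution in response
            # to the power-up command signal.
            self.captured.append("image_data_first_resolution")

device = RecordingDevice()
device.on_backend_signal("power_up")
```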

Generating motion extracted images

Described are systems, methods, and apparatus for generating motion extracted images having a high dynamic range ("HDR") based on image data obtained from one or more image sensors at different times. The implementations described herein may be used with a single image sensor or camera that obtains images at different exposures sequentially in time. The images may be processed to detect an object moving within the field of view, and pixel information corresponding to that moving object is extracted. The non-extracted image data may then be combined to produce a motion extracted HDR image that is substantially devoid of the moving object.
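The extraction-and-combine step can be sketched per pixel across two sequential exposures: pixels that changed between the frames are treated as the moving object and extracted; the remaining static pixels are combined into an HDR value. The per-pixel difference test, the fixed 4x exposure ratio, and the averaging combine rule are assumptions for illustration.

```python
# Illustrative sketch over two sequential exposures of the same scene.
# Frames are 1-D pixel lists; real images would be 2-D.

def motion_extracted_hdr(frame_low, frame_high, motion_threshold=100):
    """Combine two exposures, extracting pixels that moved between them."""
    out = []
    for lo, hi in zip(frame_low, frame_high):
        # Assumed 4x exposure ratio: scale the short exposure to compare.
        if abs(hi - lo * 4) > motion_threshold:
            out.append(None)                 # extracted (moving) pixel
        else:
            out.append((lo * 4 + hi) // 2)   # static pixel: combine to HDR
    return out

low  = [10, 20, 30]
high = [40, 300, 120]   # middle pixel changed: the moving object
hdr = motion_extracted_hdr(low, high)
```

The resulting image keeps combined values for static pixels and is substantially devoid of the moving object, whose pixels would be filled from other frames in a fuller implementation.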