H04N5/14

Vehicle vision system with object detection failsafe

A method for determining a safe state for a vehicle includes disposing a camera at a vehicle and disposing an electronic control unit (ECU) at the vehicle. Image data is captured via the camera and provided to the ECU, where an image processor processes the captured image data. A condition is determined via processing of the captured image data at the image processor of the ECU. The condition comprises a shadow present in the field of view of the camera within ten frames of captured image data, or a damaged condition of the imager within two minutes of operation of the camera, and indicates that processing of captured image data degrades in performance. The ECU determines a safe state for the vehicle responsive to determining the condition.
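The two failsafe triggers in this abstract reduce to a simple predicate. A minimal sketch in Python, where the function name, argument names, and the 10-frame/120-second encodings are illustrative assumptions rather than anything specified in the patent:

```python
def detect_degraded_condition(shadow_frame, damage_time_s):
    """Return True if either failsafe condition from the abstract holds.

    shadow_frame: frame index (0-based) at which a shadow was first seen
                  in the camera's field of view, or None if none was seen.
    damage_time_s: seconds of camera operation at which imager damage was
                   detected, or None if no damage was detected.
    """
    # Shadow within the first ten frames of captured image data.
    shadow_condition = shadow_frame is not None and shadow_frame < 10
    # Imager damage within the first two minutes of operation.
    damage_condition = damage_time_s is not None and damage_time_s < 120.0
    return shadow_condition or damage_condition
```

When the predicate is true, the ECU would transition the vehicle to its safe state.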

Content Validation Using Scene Modification
20220337761 · 2022-10-20

Methods and systems are described for managing content. A content stream may be generated based on source content. Scenes identified in the content stream may be compared with scenes in the source content. An iterative matching process may be used to modify scene boundaries that may be compared to the scene boundaries of the content stream.
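The iterative scene-boundary comparison could start from a single matching pass like the following sketch; the tolerance-based matching rule and all names are assumptions, since the abstract does not specify how boundaries are compared:

```python
def match_boundaries(stream_bounds, source_bounds, tol):
    """One matching pass: for each scene boundary (in seconds) of the
    content stream, report whether some source-content boundary lies
    within tol seconds of it. An iterative process could adjust the
    unmatched boundaries and re-run this comparison.
    """
    return [any(abs(b - s) <= tol for s in source_bounds)
            for b in stream_bounds]
```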

Analysis of objects of interest in sensor data using deep neural networks

Sensor data captured by one or more sensors may be received at an analysis system. A neural network may be used to detect an object in the sensor data. A plurality of polygons surrounding the object may be generated in one or more subsets of the sensor data. A prediction of a future position of the object may be generated based at least in part on the polygons. One or more commands may be provided to a control system based on the prediction of the future position.
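A minimal constant-velocity sketch of the prediction step, using polygon centroids from the last two frames; the constant-velocity assumption and the helper's name are illustrative, as the abstract leaves the prediction model unspecified:

```python
def predict_future_center(centroids, steps_ahead):
    """Extrapolate an object's future position from the centroids of its
    bounding polygons in recent frames, assuming constant velocity.

    centroids: list of (x, y) polygon centroids, one per frame, oldest first.
    steps_ahead: how many frames into the future to predict.
    """
    (x0, y0), (x1, y1) = centroids[-2], centroids[-1]
    vx, vy = x1 - x0, y1 - y0  # per-frame displacement
    return (x1 + vx * steps_ahead, y1 + vy * steps_ahead)
```

A control system could then be commanded based on the returned position.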

Video signal processing device, video freeze detection circuit and video freeze detection method
11652953 · 2023-05-16

A video signal processing device includes: a video signal dividing unit configured to divide a video signal into first to k-th (k is an integer of 2 or greater) partial video signals for each frame; a video change detection unit configured to determine, for each of the first to k-th partial video signals, whether or not a video based on the partial video signals has changed between respective frames, and generate first to k-th video change detection signals representing the respective detection results; and a video sameness determination unit configured to generate a video sameness signal indicating that the video signal has not changed, if the number of video change detection signals that indicate the video has not changed, among the first to k-th video change detection signals, is greater than a prescribed number.
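The sameness determination is essentially a vote over the k partial-signal detection results, and can be sketched directly; names are illustrative:

```python
def video_frozen(change_flags, prescribed_number):
    """Freeze (sameness) vote over k partial video signals.

    change_flags: list of k booleans, one per partial video signal;
                  True means that region of the video changed between frames.
    Returns True (video sameness signal) when the number of partial
    signals reporting no change exceeds the prescribed number.
    """
    unchanged = sum(1 for changed in change_flags if not changed)
    return unchanged > prescribed_number
```

Dividing the frame into regions makes the detector robust to small changes (e.g. a timestamp overlay) that would defeat a whole-frame comparison.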

REMOTE MONITORING SYSTEM, APPARATUS, AND METHOD

A video reception unit obtains a first video and a second video that are different in an imaging position from each other. An important video determination unit determines a video having a higher degree of importance on the basis of the first video and the second video. A video adjustment report unit transmits a transmission video adjustment report that is used to adjust qualities of the first video and the second video in accordance with a result of determining a degree of importance. A first video adjustment unit adjusts the first video on the basis of the transmission video adjustment report. A second video adjustment unit adjusts the second video on the basis of the transmission video adjustment report.
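One way the transmission video adjustment report could translate into concrete quality settings is a bitrate split favoring the more important video; the 70/30 ratio below is purely an illustrative assumption:

```python
def allocate_bitrate(total_kbps, first_video_more_important):
    """Split a total bitrate budget between two videos according to the
    importance determination, giving the more important video the larger
    share. Returns (first_video_kbps, second_video_kbps).
    """
    high = round(total_kbps * 0.7)  # assumed share for the important video
    low = total_kbps - high
    if first_video_more_important:
        return (high, low)
    return (low, high)
```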

Systems and methods for stabilizing videos
11647289 · 2023-05-09

Visual content is captured by an image capture device during a capture duration. The image capture devices experiences motion during the capture duration. The intentionality of the motion of the image capture device is determined based on angular acceleration of the image capture device during the capture duration. A punchout of the visual content is determined based on the intentionality of the motion of the image capture device. The punchout of the visual content is used to generate stabilized visual content.
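A rough sketch of classifying motion intentionality from angular acceleration: smooth, low-acceleration motion is treated as an intentional pan, while high-acceleration motion is treated as shake to be stabilized out. The finite-difference estimate and the mean-threshold rule are assumptions, not the patented method:

```python
def motion_intentional(angular_velocities, dt, accel_threshold):
    """Classify capture-device motion as intentional (True) or not.

    angular_velocities: gyro samples (rad/s) over the capture duration.
    dt: sampling interval in seconds.
    accel_threshold: mean angular-acceleration magnitude (rad/s^2) above
                     which the motion is treated as unintentional shake.
    """
    accels = [abs(angular_velocities[i + 1] - angular_velocities[i]) / dt
              for i in range(len(angular_velocities) - 1)]
    mean_accel = sum(accels) / len(accels)
    return mean_accel < accel_threshold
```

The punchout (a cropped viewing window) would then follow intentional motion but counteract unintentional motion.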

METHODS AND APPARATUS FOR MOTION-BASED VIDEO TONAL STABILIZATION
20170374334 · 2017-12-28 ·

One general aspect for motion-based video tonal stabilization uses a keyframe and motion estimation techniques to determine the level of spatial correspondence between input images and the keyframe. When the level of spatial correspondence is high, tonal stabilization is performed through regression and power law tonal transformation to minimize the color differences between images caused by automatic camera parameters without a priori knowledge of the camera model. Tonal error accumulation is reduced by using long-term tonal propagation.
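The regression and power-law tonal transformation can be sketched as a log-log least-squares fit of ref ≈ a · src^g between corresponding intensities of an input image and the keyframe; this particular fit is an assumption standing in for the unspecified regression:

```python
import math

def fit_power_law(src, ref):
    """Fit ref ≈ a * src**g by linear least squares in log-log space.

    src, ref: matched lists of positive, normalized pixel intensities
    from spatially corresponding points in the input image and keyframe.
    Returns (a, g); applying v -> a * v**g to src tonally aligns it
    with ref without knowledge of the camera model.
    """
    xs = [math.log(v) for v in src]
    ys = [math.log(v) for v in ref]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope of the log-log regression line is the power-law exponent g.
    g = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - g * mx)
    return a, g
```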

SURVEILLANCE SYSTEM AND OPERATING METHOD THEREOF
20170372573 · 2017-12-28 ·

A surveillance system includes a battery camera and a gateway. The battery camera includes a battery configured to supply power to the battery camera, a camera module configured to capture images of a surveillance area, a connector configured to directly connect to the gateway, a network module configured to communicate with the gateway, and a processor configured to charge the battery through the connector. The processor transmits high-quality images through the connector when the connector is connected to the gateway and transmits low-quality images through the network module when the connector is disconnected from the gateway.
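The transmit-path selection described above is a simple branch on connector state; a short sketch with illustrative labels:

```python
def select_transmission(connector_connected):
    """Choose the transmit path and image quality for the battery camera.

    While docked (connector connected), power is available and bandwidth
    is wired, so high-quality images go over the connector; undocked,
    low-quality images go over the battery-powered network module.
    """
    if connector_connected:
        return ("connector", "high-quality")
    return ("network", "low-quality")
```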

Systems and methods for producing a flipbook

A system for producing a flipbook includes a processor that receives a video comprising a plurality of frames and selects a start frame, an end frame, and a plurality of frames therebetween as a segment. The processor can analyze the frames of the segment to determine an average rate of change of the plurality of frames and a threshold of relative image difference based on the average rate of change and a baseline frame rate. Based on the results of its analysis, the processor can select a plurality of selected frames, each separated from two other selected frames by a sub-segment of the video, wherein each pair of adjacent selected frames comprises a relative image difference above the threshold and wherein each selected frame meets quality criteria not met by one or more local frames. The processor arranges the selected frames in temporal order, adds a protruding edge to each of the selected frames, and transmits data representing each of the selected frames to a printer for printing and binding a flipbook.
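The frame-selection step can be sketched as a greedy walk over frame-to-frame difference scores, keeping a frame whenever the accumulated difference since the last kept frame exceeds the threshold; the greedy rule is an assumption, and the quality-criteria filter from the abstract is omitted:

```python
def select_flipbook_frames(diffs, threshold):
    """Greedily pick flipbook frames from a segment.

    diffs: diffs[i] is the relative image difference between frame i and
           frame i+1 of the segment.
    threshold: relative image difference that adjacent selected frames
               must exceed (derived elsewhere from the segment's average
               rate of change and a baseline frame rate).
    Returns indices of the selected frames, starting with frame 0.
    """
    selected = [0]
    accumulated = 0.0
    for i, d in enumerate(diffs, start=1):
        accumulated += d
        if accumulated > threshold:
            selected.append(i)
            accumulated = 0.0  # restart accumulation at the kept frame
    return selected
```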