Patent classifications
G06T7/269
HOLISTIC CAMERA CALIBRATION SYSTEM FROM SPARSE OPTICAL FLOW
Holistic systems and methods are used for calibrating image capture devices. An image capture device includes a lens, an image sensor, an inertial measurement unit (IMU), and an image signal processor (ISP). The image sensor detects images as frames and the IMU captures motion data. The ISP detects one or more key points on the frames and matches the one or more key points between the frames. The ISP computes one or more calibration parameters. The one or more calibration parameters are based on the matched key points and a model. The model includes an optical component, an IMU component, and a sensor component. The ISP performs a calibration using the calibration parameters.
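As an illustrative sketch only (not the patented method), the idea of recovering a calibration parameter from sparse optical flow plus IMU data can be shown for the simplest case: under a small pure rotation about the vertical axis, each matched key point shifts horizontally by roughly f times the yaw angle, so the focal length can be estimated from the median flow. All names, the rotation model, and the use of the median are assumptions for illustration.

```python
import random
import statistics

def estimate_focal_length(pts_prev, pts_curr, gyro_yaw_rad):
    """Estimate focal length (in pixels) from sparse flow and an IMU yaw reading.

    Assumes a pure small rotation about the vertical axis, so each matched
    key point shifts horizontally by roughly f * yaw. The median makes the
    estimate robust to a few bad matches. Illustrative sketch only.
    """
    flow_x = [c[0] - p[0] for p, c in zip(pts_prev, pts_curr)]
    return statistics.median(flow_x) / gyro_yaw_rad

# Synthetic check: f = 800 px and yaw = 0.01 rad give an 8 px horizontal flow.
random.seed(0)
pts_prev = [(random.uniform(-100, 100), random.uniform(-100, 100)) for _ in range(50)]
pts_curr = [(x + 800 * 0.01, y) for x, y in pts_prev]
f_hat = estimate_focal_length(pts_prev, pts_curr, 0.01)
```

A real system would jointly optimize the optical, IMU, and sensor components of the model rather than solve one parameter in isolation.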
LOOP CLOSURE DETECTION METHOD AND SYSTEM, MULTI-SENSOR FUSION SLAM SYSTEM, ROBOT, AND MEDIUM
The present invention provides a loop closure detection method and system, a multi-sensor fusion SLAM system, a robot, and a medium. The system runs on a mobile robot and comprises a similarity detection unit, a visual pose solving unit, and a laser pose solving unit. The loop closure detection system, the multi-sensor fusion SLAM system, and the robot provided in the present invention can significantly improve the speed and accuracy of loop closure detection in challenging cases such as a change in the robot's viewing angle, a change in environmental brightness, or weak texture.
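As a hedged sketch of the similarity-detection stage only (the patented pipeline then solves visual and laser poses, which is not reproduced here), loop-closure candidates are commonly found by comparing a query keyframe descriptor against older keyframes and keeping the best match above a similarity threshold. The descriptor form, threshold, and frame-gap rule below are illustrative assumptions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def detect_loop(query_desc, keyframes, min_similarity=0.9, min_gap=5):
    """Return the index of the best loop-closure candidate, or None.

    keyframes: list of (index, descriptor) pairs. Keyframes within
    min_gap of the query are skipped so that trivially similar recent
    frames do not count as loop closures. Illustrative sketch only.
    """
    query_idx = len(keyframes)
    best = None
    for idx, desc in keyframes:
        if query_idx - idx < min_gap:
            continue
        s = cosine(query_desc, desc)
        if s >= min_similarity and (best is None or s > best[1]):
            best = (idx, s)
    return best[0] if best else None
```

In the described system, a candidate found this way would be verified by the visual pose solving unit and refined by the laser pose solving unit.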
METHODS AND SYSTEMS FOR GENERATING END-TO-END DE-SMOKING MODEL
The disclosure herein relates to methods and systems for generating an end-to-end de-smoking model for removing smoke present in a video. Conventional data-driven de-smoking approaches are limited mainly by a lack of suitable training data. Further, the conventional data-driven de-smoking approaches are not end-to-end for removing the smoke present in the video. The de-smoking model of the present disclosure is trained end-to-end on synthesized smoky video frames obtained by a source-aware smoke synthesis approach. The end-to-end de-smoking model localizes and removes the smoke present in the video using dynamic properties of the smoke. Hence the end-to-end de-smoking model simultaneously identifies the regions affected by the smoke and performs the de-smoking with minimal artifacts, enabling localized smoke removal and color restoration of a real-time video.
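The core of synthesizing smoky training frames can be illustrated, under the assumption of simple alpha compositing, as blending a rendered smoke layer onto a clean frame to produce a (smoky input, clean target) training pair. This is a minimal sketch, not the source-aware synthesis approach of the disclosure, whose details are not given in the abstract.

```python
def composite_smoke(frame, smoke, alpha):
    """Alpha-composite a smoke layer onto a clean frame to synthesize a
    training pair (smoky input, clean target).

    frame, smoke: 2-D nested lists of pixel intensities in [0, 1];
    alpha: per-pixel smoke opacity in [0, 1]. Illustrative sketch only.
    """
    return [
        [a * s + (1.0 - a) * f for f, s, a in zip(fr, sr, ar)]
        for fr, sr, ar in zip(frame, smoke, alpha)
    ]
```

A de-smoking model trained on such pairs learns to invert the composite, using the clean frame as supervision.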
VEHICULAR ACCESS CONTROL BASED ON VIRTUAL INDUCTIVE LOOP
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for monitoring events using a Virtual Inductive Loop system. In some implementations, image data is obtained from cameras. A region depicted in the obtained image data is identified, the region comprising lines spaced by a distance that satisfies a distance threshold. For each line included in the region, it is determined whether an object depicted crossing the line satisfies a height criterion indicating that the line is activated. In response to determining that an object depicted crossing a line satisfies the height criterion, an event is determined to have likely occurred using data indicating (i) which of the lines were activated and (ii) the order in which the lines were activated. In response to determining that an event likely occurred, actions are performed using at least some of the data.
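The event logic described above can be sketched for the simplest two-line loop: a line counts as activated only when the crossing object meets the height criterion, and the activation order distinguishes the crossing direction. Line names, the entry/exit labels, and the threshold are illustrative assumptions, not taken from the patent.

```python
def classify_crossing(activations, min_height=1.0):
    """Classify a crossing event from virtual-line activations.

    activations: list of (timestamp, line_id, object_height) tuples.
    A line counts as activated only when the object's height satisfies
    the height criterion (filtering out shadows or small debris).
    With two lines 'A' and 'B', order A -> B is 'entry' and B -> A is
    'exit'. Illustrative sketch only.
    """
    fired = [(t, line) for t, line, h in activations if h >= min_height]
    fired.sort()
    order = []
    for _, line in fired:
        if not order or order[-1] != line:
            order.append(line)
    if order[:2] == ['A', 'B']:
        return 'entry'
    if order[:2] == ['B', 'A']:
        return 'exit'
    return 'unknown'
```

Requiring both the height criterion and a consistent activation order mirrors how a physical inductive loop pair ignores pedestrians while sensing vehicle direction.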
BEHAVIOR RECOGNITION METHOD AND SYSTEM, ELECTRONIC DEVICE AND COMPUTER-READABLE STORAGE MEDIUM
A behavior recognition method and system, including: dividing video data into a plurality of video clips, performing frame extraction processing on each video clip to obtain frame images, and performing optical flow extraction on the frame images to obtain optical flow images; performing feature extraction on the frame images and the optical flow images to obtain feature maps of the frame images and the optical flow images; performing spatio-temporal convolution processing on the feature maps of the frame images and the optical flow images, and determining a spatial prediction result and a temporal prediction result; fusing the spatial prediction results of all the video clips to obtain a spatial fusion result, and fusing the temporal prediction results of all the video clips to obtain a temporal fusion result; and performing two-stream fusion on the spatial fusion result and the temporal fusion result to obtain a behavior recognition result.
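The final fusion steps of the method above can be sketched as averaging each stream's per-clip predictions and then taking a weighted two-stream combination. The equal-weight choice and the plain averaging are assumptions for illustration; the abstract does not specify the fusion operators.

```python
def two_stream_fuse(spatial_preds, temporal_preds, w_spatial=0.5):
    """Fuse per-clip predictions from the RGB (spatial) stream and the
    optical-flow (temporal) stream.

    Each prediction is a list of class scores. The per-stream clip
    predictions are averaged, then the two stream averages are combined
    with a weighted sum. Weights are illustrative assumptions.
    """
    def mean_over_clips(preds):
        n = len(preds)
        return [sum(p[c] for p in preds) / n for c in range(len(preds[0]))]

    s = mean_over_clips(spatial_preds)   # spatial fusion result
    t = mean_over_clips(temporal_preds)  # temporal fusion result
    return [w_spatial * sc + (1.0 - w_spatial) * tc for sc, tc in zip(s, t)]
```

The argmax over the fused scores would then give the behavior recognition result.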
Real-time marine snow noise removal from underwater video
Optical flow refers to the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and a scene. Optical flow algorithms can be used to detect and delineate independently moving objects, even in the presence of camera motion. The present invention uses optical-flow algorithms to detect and remove marine snow particles from live video. Portions of an image scene which are identified as marine snow are reconstructed in a manner intended to reveal underwater scenery which had been occluded by the marine snow. Pixel locations within the regions of marine snow are replaced with new pixel values that are determined based on either historical data for each pixel or a mathematical operation, such as one which uses data from neighboring pixels.
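The "historical data" reconstruction path described above can be sketched as replacing each pixel flagged as marine snow with the per-pixel temporal median of recent frames; the static seabed value dominates the history, while the transient bright particle does not. The mask representation and history length are illustrative assumptions.

```python
import statistics

def remove_marine_snow(frame, snow_mask, history):
    """Replace pixels flagged as marine snow with the per-pixel temporal
    median of recent frames.

    frame: 2-D list of pixel intensities; snow_mask: 2-D list of booleans
    marking detected snow particles; history: list of past frames with the
    same shape. Illustrative sketch of the historical-data path only.
    """
    out = [row[:] for row in frame]
    for y, row in enumerate(snow_mask):
        for x, is_snow in enumerate(row):
            if is_snow:
                out[y][x] = statistics.median(h[y][x] for h in history)
    return out
```

The alternative path mentioned in the abstract, filling from neighboring pixels, would swap the temporal median for a spatial operation such as a neighborhood median.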
SYSTEMS, PROCESSES AND DEVICES FOR OCCLUSION DETECTION FOR VIDEO-BASED OBJECT TRACKING
Processes, systems, and devices for occlusion detection for video-based object tracking (VBOT) are described herein. Embodiments process video frames to compute histogram data and depth level data for the object to detect a subset of video frames for occlusion events and generate output data that identifies each video frame of the subset of video frames for the occlusion events. Threshold measurement values are used to attempt to reduce or eliminate false positives to increase processing efficiency.
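The gating described above, combining histogram data, depth level data, and threshold measurement values to suppress false positives, can be sketched as requiring two conditions at once before flagging a frame. The specific threshold and the boolean depth signal are illustrative assumptions, not details from the patent.

```python
def detect_occlusions(hist_scores, depth_levels, hist_threshold=0.6):
    """Flag frames where an occlusion event likely occurs.

    hist_scores: per-frame similarity between the tracked object's current
    and reference appearance histograms; depth_levels: per-frame booleans
    indicating another object sits in front of the tracked object. A frame
    is flagged only when the histogram similarity drops below the threshold
    AND the depth signal agrees, which is one way to reduce false positives.
    Illustrative sketch only.
    """
    occluded = []
    for i, (score, occluder_in_front) in enumerate(zip(hist_scores, depth_levels)):
        if score < hist_threshold and occluder_in_front:
            occluded.append(i)
    return occluded
```

The returned indices correspond to the subset of video frames identified in the output data for the occlusion events.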