Patent classifications
G06T2207/30236
METHOD AND APPARATUS FOR DETECTING TRAFFIC ANOMALY, DEVICE, STORAGE MEDIUM AND PROGRAM PRODUCT
The present disclosure provides a method and apparatus for detecting a traffic anomaly, a device, a storage medium and a computer program product, relates to the field of artificial intelligence, specifically to computer vision and deep learning technologies, and can be applied to intelligent transportation scenarios. A specific implementation of the method comprises: acquiring a traffic video stream; performing vehicle detection and tracking on the traffic video stream to determine whether there is an abnormally stopped vehicle, wherein a stop whose duration exceeds a preset time length is regarded as an abnormal stop; and, if there is an abnormally stopped vehicle, performing traffic anomaly classification on a video frame corresponding to the abnormal stop using a decision tree to obtain a traffic anomaly type, wherein the decision tree is generated based on features for traffic anomaly detection.
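The abnormal-stop test described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the per-vehicle track format, the displacement threshold, and both constants are assumptions.

```python
# Hypothetical sketch: a tracked vehicle whose centroid barely moves for
# longer than a preset duration is flagged as abnormally stopped.
STOP_DIST_PX = 2.0       # assumed max per-frame displacement still counted as "stopped"
PRESET_STOP_SECS = 30.0  # assumed preset time length beyond which a stop is abnormal

def is_abnormal_stop(track, fps):
    """track: list of (x, y) centroids, one per frame, for a single vehicle."""
    stopped_frames = 0
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        if ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 <= STOP_DIST_PX:
            stopped_frames += 1
            if stopped_frames / fps > PRESET_STOP_SECS:
                return True  # stop duration exceeded the preset time length
        else:
            stopped_frames = 0  # vehicle moved; reset the stop timer
    return False
```

A frame flagged this way would then be handed to the decision-tree classifier to obtain the anomaly type.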
OBJECT POSE ESTIMATION
A depth image of an object can be input to a deep neural network to determine a first four-degree-of-freedom pose of the object. The first four-degree-of-freedom pose and a three-dimensional model of the object can be input to a silhouette rendering program to determine a first two-dimensional silhouette of the object. A second two-dimensional silhouette of the object can be determined based on thresholding the depth image. A loss function can be determined based on comparing the first two-dimensional silhouette of the object to the second two-dimensional silhouette of the object. Deep neural network parameters can be optimized based on the loss function and the deep neural network can be output.
Systems and Methods for Adaptive Beam Steering for Throughways
Systems and methods for monitoring a throughway using a radio frequency identification (RFID) detection system. The RFID detection system includes (i) an image sensor configured to have a field of view directed towards a lane of the throughway; (ii) an RFID transceiver arrangement configured to interrogate RFID tags disposed on vehicles within the lane of the throughway; and (iii) a controller operatively connected to the image sensor and the RFID transceiver arrangement. The controller is configured to (1) cause the image sensor to capture a frame of image data representative of the lane of the throughway; (2) analyze the frame of image data to detect a presence of a vehicle in the lane of the throughway; (3) based on the analysis, determine a position of the vehicle relative to the RFID transceiver arrangement; and (4) configure an antenna array to generate a beam directed at the position of the vehicle.
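Step (3) to step (4) can be illustrated with a small geometric sketch. The mapping below assumes a camera whose boresight is aligned with the antenna array and a pinhole model; the function name and parameters are hypothetical, as the patent does not specify the conversion.

```python
import math

def beam_angle_deg(bbox_center_x, image_width, fov_deg):
    """Map the detected vehicle's horizontal image position to a beam steering
    angle, assuming camera and antenna boresights coincide (an assumption)."""
    offset = bbox_center_x / image_width - 0.5      # -0.5 .. +0.5 across the frame
    half = math.radians(fov_deg / 2.0)
    # Pinhole model: image offset is proportional to tan(angle).
    return math.degrees(math.atan(2.0 * offset * math.tan(half)))
```

A vehicle centered in a 60-degree field of view yields a 0-degree beam; one at the right edge yields 30 degrees.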
Unmanned aerial vehicle (UAV) data collection and claim pre-generation for insured approval
Systems and methods are described for using data collected by unmanned aerial vehicles (UAVs) to generate insurance claim estimates that an insured individual may quickly review, approve, or modify. When an insurance-related event occurs, such as a vehicle collision, crash, or disaster, one or more UAVs are dispatched to the scene of the event to collect various data, including data related to vehicle or real property (insured asset) damage. With the insured's permission or consent, the data collected by the UAVs may then be analyzed to generate an estimated insurance claim for the insured. The estimated insurance claim may be sent to the insured individual, such as to their mobile device via wireless communication or data transmission, for subsequent review and approval. As a result, insurance claim handling and/or the online customer experience may be enhanced.
RESAMPLED IMAGE CROSS-CORRELATION
A computer-implemented system and method of image cross-correlation improves the sub-pixel accuracy of the correlation surface and subsequent processing thereof. One or both of the template or search windows are resampled using the fractional portions of the correlation offsets X and Y produced by the initial image cross-correlation. The resampled window is then correlated with the other original window to produce a resampled cross-correlation surface. Removing the fractional or sub-pixel offsets between the template and search windows improves the “sameness” of the represented imagery, thereby improving the quality and accuracy of the correlation surface, which in turn improves the quality and accuracy of the figure of merit (FOM) or other processing of the correlation surface. The process may be iterated to improve accuracy, or modified to generate resampled cross-correlation surfaces for multiple possible offsets and accept the one with the greatest certainty.
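The core idea reduces, in one dimension, to two steps: estimate the fractional part of the correlation peak, then resample one window by that fraction before correlating again. The 3-point parabolic peak fit and linear-interpolation resampling below are common choices but assumptions here, since the abstract names neither.

```python
import numpy as np

def peak_with_fraction(corr):
    """Integer peak index plus a sub-pixel fraction from a 3-point parabola fit
    (an assumed estimator; the patent does not name one)."""
    i = int(np.argmax(corr))
    if 0 < i < len(corr) - 1:
        a, b, c = corr[i - 1], corr[i], corr[i + 1]
        denom = a - 2 * b + c
        frac = 0.5 * (a - c) / denom if denom else 0.0
    else:
        frac = 0.0
    return i, frac

def resample(window, frac):
    """Shift the window by a fractional sample using linear interpolation."""
    x = np.arange(len(window)) + frac
    return np.interp(x, np.arange(len(window)), window)
```

Correlating the resampled window against the other original window then yields the "resampled cross-correlation surface" of the abstract, with the sub-pixel misalignment removed.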
VEHICLE SPEED ESTIMATION SYSTEMS AND METHODS
A speed estimation system includes: a detection module configured to: detect an object on a surface in an image captured using a camera; and generate a bounding box around the object; a Jacobian module configured to generate a Jacobian for the object based on the bounding box; and a speed module configured to determine a speed that the object is traveling on the surface based on the Jacobian.
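The abstract does not define its Jacobian, so the following is a heavily hedged sketch of one plausible reading: a ground-plane homography maps road coordinates to pixels, its numerical Jacobian is evaluated at the bounding box's foot point, and the inverse Jacobian converts the box's pixel velocity into a ground speed. Every name and the projection model are assumptions.

```python
import numpy as np

def project(H, pt):
    """Homography H (3x3): ground (X, Y) in meters -> pixel (u, v)."""
    X, Y = pt
    w = H @ np.array([X, Y, 1.0])
    return w[:2] / w[2]

def ground_speed(H, ground_pt, pixel_velocity, eps=1e-5):
    """Speed in m/s from a pixel-velocity via the projection's Jacobian."""
    # Numerical Jacobian d(pixel)/d(ground) at ground_pt, by central differences.
    J = np.empty((2, 2))
    for k in range(2):
        d = np.zeros(2)
        d[k] = eps
        J[:, k] = (project(H, ground_pt + d) - project(H, ground_pt - d)) / (2 * eps)
    v_ground = np.linalg.solve(J, pixel_velocity)  # pixels/s -> m/s on the surface
    return float(np.linalg.norm(v_ground))
```

With a homography that simply scales meters to pixels by 10, a pixel velocity of 10 px/s maps back to 1 m/s.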
POSITIONING SYSTEM AND CALIBRATION METHOD OF OBJECT LOCATION
A positioning system and a calibration method of an object location are provided. The calibration method includes the following. Roadside location information of a roadside unit (RSU) is obtained. Object location information of one or more objects is obtained. The object location information is based on a satellite positioning system. An image identification result of the object or the RSU is determined according to images of one or more image capturing devices. The object location information of the object is calibrated according to the roadside location information and the image identification result. Accordingly, the accuracy of the location estimation may be improved.
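One plausible, deliberately simplified form of such a calibration: positions derived from the images (anchored to the known RSU location) are compared with the satellite fixes, the mean discrepancy is treated as a shared GNSS bias, and that bias is removed from every reported object location. The patent does not specify the correction model; this sketch is an assumption throughout.

```python
def calibrate(gnss_fixes, vision_fixes):
    """gnss_fixes, vision_fixes: index-aligned lists of (x, y) in a common map
    frame; vision_fixes are assumed to be anchored to the surveyed RSU location.
    Returns the GNSS fixes with the mean bias removed (an assumed model)."""
    n = len(gnss_fixes)
    bx = sum(g[0] - v[0] for g, v in zip(gnss_fixes, vision_fixes)) / n
    by = sum(g[1] - v[1] for g, v in zip(gnss_fixes, vision_fixes)) / n
    return [(x - bx, y - by) for x, y in gnss_fixes]
```

If every satellite fix is offset by the same (1, 1) meters relative to the image-derived positions, the calibrated output recovers the image-derived locations.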
Driving scenario machine learning network and driving environment simulation
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating a driving scenario machine learning network and providing a simulated driving environment. One of the operations includes receiving video data that includes multiple video frames depicting an aerial view of vehicles moving about an area. The video data is processed to generate driving scenario data, which includes information about the dynamic objects identified in the video. A machine learning network is trained using the generated driving scenario data. A 3-dimensional simulated environment is provided which is configured to allow an autonomous vehicle to interact with one or more of the dynamic objects.
System and method for determining a viewpoint of a traffic camera
A system and method for determining a viewpoint of a traffic camera includes obtaining images of a real road captured by the traffic camera, segmenting a road surface from the captured images to generate a mask of the real road, generating a 3D model of a simulated road corresponding to the real road from geographical data of the real road, adding a simulated camera corresponding to the traffic camera at a location in the 3D model corresponding to the location of the traffic camera on the real road, generating a plurality of simulated images of the simulated road using the 3D model, each corresponding to a set of viewpoint parameters of the simulated camera, selecting the simulated image that best fits the mask, and generating a mapping between pixel locations in the captured images and locations on the real road.
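The selection step lends itself to a short sketch: each candidate viewpoint yields a simulated road mask, and the candidate whose mask overlaps the segmented real-road mask best is chosen. IoU is used here as an assumed fit score, since the abstract does not name one.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def best_viewpoint(real_mask, simulated):
    """simulated: list of (viewpoint_params, simulated_mask) pairs.
    Returns the viewpoint parameters whose mask best fits the real mask."""
    return max(simulated, key=lambda pv: iou(real_mask, pv[1]))[0]
```

The winning viewpoint parameters then anchor the pixel-to-road mapping described at the end of the abstract.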
IMAGE FUSION METHOD AND APPARATUS AND TRAINING METHOD AND APPARATUS FOR IMAGE FUSION MODEL
An image fusion method and apparatus and a training method and apparatus for an image fusion model are provided, which relate to the field of artificial intelligence, and specifically, to the field of computer vision. The image fusion method includes: obtaining a to-be-processed color image, an infrared image, and a background reference image, where the infrared image and the to-be-processed color image are captured of the same scene; and inputting the to-be-processed color image, the infrared image, and the background reference image into an image fusion model for feature extraction, and performing image fusion based on the extracted features to obtain a fused image. This method can improve the image quality of the fused image, and also ensure accurate and natural color of the fused image.