Patent classifications
G06T2207/30248
Systems and Methods for Utilizing Machine-Assisted Vehicle Inspection to Identify Insurance Buildup or Fraud
A remotely-controlled (RC) and/or autonomously operated inspection device, such as a ground vehicle or drone, may capture one or more sets of imaging data indicative of at least a portion of an automotive vehicle, such as all or a portion of the undercarriage. The one or more sets of imaging data may be analyzed based upon data indicative of at least one of vehicle damage or a vehicle defect being shown in the one or more sets of imaging data. Based upon the analyzing of the one or more sets of imaging data, damage to the vehicle or a defect of the vehicle may be identified. The identified damage or defect may be compared to a claimed damage or defect to determine whether the claimed damage or defect occurred.
Laser speed measuring method, control device and laser velocimeter
The present disclosure provides a laser speed measuring method, a control device and a laser velocimeter, and relates to the technical field of security inspection. The laser speed measuring method comprises the steps of: acquiring detection data within a predetermined detection angle range in a plurality of parallel horizontal planes at different heights, from a plurality of laser rays directed towards the road extending direction in each horizontal plane; acquiring three-dimensional point cloud data from the detection data; determining a position of a measured object along the road extending direction from the three-dimensional point cloud data; and determining a speed of the measured object from the change in its position along the road extending direction at different times.
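The final step, recovering speed from positions sampled at different times, can be sketched as a least-squares slope fit. This is a minimal illustration, not the patented method; the sample positions and timestamps are hypothetical.

```python
def estimate_speed(positions, timestamps):
    """Estimate speed from positions of a measured object along the road
    extending direction, sampled at different times.

    Fits position = v * t + x0 by least squares; the slope v is the speed.
    """
    n = len(positions)
    mean_t = sum(timestamps) / n
    mean_x = sum(positions) / n
    num = sum((t - mean_t) * (x - mean_x)
              for t, x in zip(timestamps, positions))
    den = sum((t - mean_t) ** 2 for t in timestamps)
    return num / den

# Hypothetical samples: an object advancing ~20 m/s along the road.
speed = estimate_speed([0.0, 2.0, 4.01, 6.0], [0.0, 0.1, 0.2, 0.3])
```

Fitting a slope over several samples, rather than differencing two positions, damps measurement noise in the point-cloud-derived positions.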
AR BASED PERFORMANCE MODULATION OF A PERSONAL MOBILITY SYSTEM
A method of controlling a personal mobility system includes displaying a virtual object on an augmented reality wearable device, the virtual object being located in a position in the field of view of the augmented reality device corresponding to a position in the real world. Proximity of the personal mobility system or a user of the personal mobility system with the position in the real world is detected. In response to the detection of proximity, a performance characteristic of the personal mobility system is modified.
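The proximity check and the resulting performance modification can be sketched as follows. The zone structure, field names, and speed-limit semantics here are illustrative assumptions; the abstract does not specify which performance characteristic is modified.

```python
from dataclasses import dataclass
import math

@dataclass
class VirtualZone:
    """A virtual object anchored at a real-world position (hypothetical)."""
    x: float            # real-world anchor position, metres
    y: float
    radius: float       # proximity threshold, metres
    speed_limit: float  # performance cap applied within proximity, m/s

def apply_zone_limits(px, py, base_limit, zones):
    """Return the speed limit for a personal mobility system (or its user)
    at (px, py): the base limit, reduced when within proximity of a zone."""
    limit = base_limit
    for z in zones:
        if math.hypot(px - z.x, py - z.y) <= z.radius:
            limit = min(limit, z.speed_limit)
    return limit

zones = [VirtualZone(x=10.0, y=0.0, radius=5.0, speed_limit=2.0)]
outside = apply_zone_limits(0.0, 0.0, 8.0, zones)   # full performance
inside = apply_zone_limits(12.0, 0.0, 8.0, zones)   # capped near the object
```

In the claimed system the same position that renders the virtual object in the wearable's field of view drives the proximity test, so the rider sees the object marking the zone that modulates performance.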
System and Method for Providing an Interactive Vehicle Diagnostic Display
A client computing system (CCS) receives a download including (i) an image representative of a vehicle component, (ii) symbol data associated with a first symbol, (iii) a set of one or more selectable identifiers, and (iv) supplemental information associated with the vehicle component. Each selectable identifier can indicate a respective portion of the supplemental information. After receiving the download, the CCS displays the image and the first symbol without displaying the set and the supplemental information. While the image and the first symbol are displayed without the set, the CCS receives a first input corresponding to selection of the first symbol. The CCS then responsively displays the set. While the set is displayed, the CCS receives a second input corresponding to selection of a first selectable identifier from the set. The CCS then responsively displays the respective portion of the supplemental information indicated by the first selectable identifier.
Method and apparatus for positioning autonomous vehicle
Embodiments of the present disclosure disclose a method and apparatus for positioning an autonomous vehicle. The method includes: matching a current point cloud projected image of a first resolution with a map of the first resolution to generate a first histogram filter based on the matching result; determining at least two first response areas in the first histogram filter based on a probability value of an element in the first histogram filter; generating a second histogram filter based on a result of matching a current point cloud projected image of a second resolution with a map of the second resolution and the at least two first response areas, the first resolution being less than the second resolution; and calculating a weighted average of probability values of target elements in the second histogram filter to determine a positioning result of the autonomous vehicle in the map of the second resolution.
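The last step, a weighted average of probability values over target elements of the histogram filter, can be sketched as follows. The grid representation, origin, and cell size are hypothetical; the disclosure does not fix these details.

```python
def weighted_average_position(histogram, origin, cell_size):
    """Probability-weighted average position over histogram-filter elements.

    histogram: dict mapping (i, j) grid cell -> probability value.
    origin, cell_size: map the grid into map coordinates (assumed layout).
    """
    total = sum(histogram.values())
    x = sum(p * (origin[0] + i * cell_size)
            for (i, j), p in histogram.items()) / total
    y = sum(p * (origin[1] + j * cell_size)
            for (i, j), p in histogram.items()) / total
    return x, y

# Hypothetical target elements clustered in the second histogram filter.
hist = {(5, 5): 0.6, (5, 6): 0.2, (6, 5): 0.2}
pos = weighted_average_position(hist, origin=(100.0, 200.0), cell_size=0.1)
```

Averaging over neighbouring high-probability elements yields a sub-cell positioning result at the second (finer) resolution, rather than snapping to the single best-matching cell.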
METHOD AND SYSTEM FOR DETECTING UNMANNED AERIAL VEHICLE USING PLURALITY OF IMAGE SENSORS
Provided are a method and system for detecting a UAV using a plurality of image sensors. A method of detecting a UAV includes detecting, by each of a plurality of detection image sensors, a UAV in a UAV detection area, transmitting, when the UAV is detected, position information of the detection image sensor detecting the UAV and distance information of the UAV to a classification image sensor, acquiring, by the classification image sensor, a magnified image of the UAV by setting a parameter of a camera of the classification image sensor according to the position information and the distance information, and classifying a type of the UAV by analyzing the magnified image.
MONOCULAR 2D SEMANTIC KEYPOINT DETECTION AND TRACKING
A method for 2D semantic keypoint detection and tracking is described. The method includes learning embedded descriptors of salient object keypoints detected in previous images according to a descriptor embedding space model. The method also includes predicting, using a shared image encoder backbone, salient object keypoints within a current image of a video stream. The method further includes inferring an object represented by the predicted, salient object keypoints within the current image of the video stream. The method also includes tracking the inferred object by matching embedded descriptors of the predicted, salient object keypoints representing the inferred object within the previous images of the video stream based on the descriptor embedding space model.
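The tracking step, matching embedded descriptors of current keypoints against those learned from previous images, can be sketched as greedy nearest-neighbour matching in the embedding space. The distance metric, threshold, and identifiers are illustrative assumptions.

```python
import math

def match_keypoints(prev_descriptors, curr_descriptors, max_distance=0.5):
    """Greedy nearest-neighbour matching of embedded keypoint descriptors.

    prev/curr_descriptors: dicts mapping keypoint id -> descriptor vector.
    Returns {current_id: previous_id} for matches within max_distance.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    matches, used = {}, set()
    for cid, cdesc in curr_descriptors.items():
        best, best_d = None, max_distance
        for pid, pdesc in prev_descriptors.items():
            if pid in used:
                continue
            d = dist(cdesc, pdesc)
            if d < best_d:
                best, best_d = pid, d
        if best is not None:
            matches[cid] = best
            used.add(best)
    return matches

# Hypothetical 2D embeddings; real descriptor embeddings are much larger.
prev = {"kp0": [0.0, 1.0], "kp1": [1.0, 0.0]}
curr = {"a": [0.95, 0.05], "b": [0.1, 0.98]}
matches = match_keypoints(prev, curr)
```

Keypoints matched across frames inherit the previous frame's object identity, which is how descriptor matching turns per-frame detections into a track.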
Intensity data visualization
Techniques for coloring a point cloud based on colors derived from LIDAR (light detection and ranging) intensity data are disclosed. In some embodiments, the coloring of the point cloud may employ an activation function that controls the colors assigned to different intensity values. Further, the activation function may be parameterized based on statistics computed for a distribution of intensities associated with a 3D scene and a user-selected sensitivity. Alternatively, a Fourier transform of the distribution of intensities or a clustering of the intensities may be used to estimate individual distributions associated with different materials, based on which the point cloud coloring may be determined from intensity data.
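One plausible form of such an activation function is a sigmoid centred on the scene's intensity statistics, with its slope scaled by the user-selected sensitivity. This is a sketch under those assumptions, mapping to a grayscale value rather than a full color ramp.

```python
import math

def intensity_to_gray(intensity, mean, std, sensitivity=1.0):
    """Map a LIDAR intensity to a gray level in [0, 255].

    A sigmoid activation is centred on the scene's mean intensity; its
    slope is set by the intensity spread (std) scaled by a user-selected
    sensitivity, so higher sensitivity stretches contrast near the mean.
    """
    slope = sensitivity / max(std, 1e-9)
    value = 1.0 / (1.0 + math.exp(-slope * (intensity - mean)))
    return round(255 * value)

# Hypothetical scene statistics: mean 120, std 40.
mid = intensity_to_gray(120.0, mean=120.0, std=40.0, sensitivity=2.0)
hot = intensity_to_gray(300.0, mean=120.0, std=40.0, sensitivity=2.0)
```

Parameterising the activation by the distribution's statistics keeps the useful dynamic range centred on where most returns actually fall, instead of wasting it on sparse extreme intensities.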
Auto calibrating a single camera from detectable objects
Techniques for improved camera calibration are disclosed. An image is analyzed to identify a first set of key points for an object. A virtual object is generated. The virtual object has a second set of key points. A reprojected version of the second set is fitted to the first set in 2D space until a fitting threshold is satisfied. To do so, a 3D alignment of the second set is generated in an attempt to fit (e.g., in 2D space) the second set to the first set. Another operation includes reprojecting the second set into 2D space. In response to comparing the reprojected second set to the first set, another operation includes determining whether a fitting error between those sets satisfies the fitting threshold. A specific 3D alignment of the second set is selected. The camera is calibrated based on resulting reprojection parameters.
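The fitting-error comparison at the heart of this loop can be sketched with a pinhole reprojection: project the 3D-aligned virtual key points into 2D and measure their mean pixel distance from the detected key points. The camera intrinsics and point values below are hypothetical.

```python
import math

def project(point3d, focal, cx, cy):
    """Pinhole reprojection of a camera-frame 3D point to 2D pixels."""
    x, y, z = point3d
    return (focal * x / z + cx, focal * y / z + cy)

def fitting_error(detected_2d, virtual_3d, focal, cx, cy):
    """Mean pixel distance between the first set (detected key points)
    and the reprojected second set (aligned virtual-object key points)."""
    total = 0.0
    for (u, v), p in zip(detected_2d, virtual_3d):
        ru, rv = project(p, focal, cx, cy)
        total += math.hypot(u - ru, v - rv)
    return total / len(detected_2d)

# A perfectly aligned virtual object reprojects exactly onto detections.
detected = [(1100.0, 540.0), (960.0, 640.0)]
virtual = [(1.0, 0.0, 10.0), (0.0, 1.0, 14.0)]
err = fitting_error(detected, virtual, focal=1400.0, cx=960.0, cy=540.0)
```

In the claimed technique, the 3D alignment of the virtual key points is adjusted until this error drops below the fitting threshold, and the reprojection parameters of the winning alignment calibrate the camera.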
Systems and Methods for Image-Based Location Determination and Parking Monitoring
Embodiments relate to systems, methods and computer-readable media for parking monitoring in an urban area using image processing operations. Embodiments perform parking monitoring by capturing images of an urban area and comparing the captured images with reference images to determine location and parking conditions. Embodiments process captured images to detect licence plates, vehicles or parking signs and thereby determine the compliance of vehicles with parking conditions.