Patent classifications
G06T2207/30256
IMAGE PROCESSING DEVICE, MOBILE OBJECT CONTROL DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
An image processing device of the embodiment includes: an acquirer configured to acquire a first image captured in time series by an imager mounted on a mobile object; a setter configured to set one or more positions of interest in the first image based on a position of the mobile object; a converter configured to convert a partial image, set on the basis of the position of interest, into a second image; and a target detector configured to detect a target near the mobile object on the basis of the second image obtained by the converter. The setter changes the position of interest on the basis of at least one of a result of detection by the target detector and a situation of the mobile object.
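The acquire → set-position-of-interest → convert pipeline above can be sketched as a crop around the position of interest followed by a fixed-size resampling. This is a minimal illustration, not the patent's method: the window size, nearest-neighbor resampling, and row-major 2D-list image representation are all assumptions.

```python
def crop_partial_image(image, cx, cy, half=2):
    """Extract the partial image around a position of interest (cx, cy).

    `image` is a row-major 2D list of pixel values; the `half` window
    radius is an assumed parameter, not specified in the abstract.
    """
    h, w = len(image), len(image[0])
    x0, x1 = max(0, cx - half), min(w, cx + half + 1)
    y0, y1 = max(0, cy - half), min(h, cy + half + 1)
    return [row[x0:x1] for row in image[y0:y1]]


def resize_nearest(patch, out_h, out_w):
    """Convert the partial image into a fixed-size second image by
    nearest-neighbor sampling (one plausible 'conversion')."""
    h, w = len(patch), len(patch[0])
    return [[patch[i * h // out_h][j * w // out_w] for j in range(out_w)]
            for i in range(out_h)]
```

A downstream detector would then run on the fixed-size second image regardless of where the position of interest sat in the first image.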
ARTIFICIAL INTELLIGENCE USING CONVOLUTIONAL NEURAL NETWORK WITH HOUGH TRANSFORM
Artificial intelligence using convolutional neural network with Hough Transform. In an embodiment, a convolutional neural network (CNN) comprises convolution layers, a Hough Transform (HT) layer, and a Transposed Hough Transform (THT) layer, arranged such that at least one convolution layer precedes the HT layer, at least one convolution layer is between the HT and THT layers, and at least one convolution layer follows the THT layer. The HT layer converts its input from a first space into a second space, and the THT layer converts its input from the second space into the first space. The CNN may be applied to an input image to perform semantic image segmentation, so as to produce an output image representing a result of the semantic image segmentation.
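The HT layer's space conversion can be illustrated with a classical Hough transform that maps points from image (x, y) space into a (theta, rho) accumulator. The discretization below is illustrative, and the patented layer operates on CNN feature maps rather than point lists; this sketch only shows the first-space-to-second-space mapping.

```python
import math

def hough_transform(points, width, height, n_theta=180, n_rho=100):
    """Map points from image (x, y) space into a discretized (theta, rho)
    parameter space; collinear points accumulate in one (theta, rho) bin."""
    diag = math.hypot(width, height)          # |rho| never exceeds the diagonal
    acc = [[0] * n_rho for _ in range(n_theta)]
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int((rho + diag) * (n_rho - 1) / (2 * diag))  # shift to [0, n_rho)
            acc[t][r] += 1
    return acc
```

The transposed Hough transform of the abstract would invert this mapping, spreading each (theta, rho) cell back along its line in (x, y) space.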
DRIVING ASSISTANCE APPARATUS, VEHICLE, DRIVING ASSISTANCE METHOD, AND STORAGE MEDIUM
The present invention provides a driving assistance apparatus that assists driving of a vehicle, comprising: an image capturing unit configured to capture an image of an area in front of the vehicle; an identification unit configured to identify a traffic light in the captured image; a detection unit configured to detect, from the image, an installation height of the identified traffic light; and a determination unit configured to determine, based on the detected installation height, whether or not the identified traffic light is a target traffic light that indicates whether the vehicle may proceed.
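One plausible way to recover an installation height from the image, assuming a forward-facing pinhole camera and a known distance to the light, is shown below. The pinhole model, the height band, and every parameter value are illustrative assumptions, not taken from the patent.

```python
def installation_height(v_px, distance_m, f_px, cy_px, camera_height_m):
    """Estimate a traffic light's installation height from its image row.

    Assumes a level, forward-facing pinhole camera: rows above the
    principal point (v_px < cy_px) map to points above the optical axis.
    """
    return camera_height_m + (cy_px - v_px) * distance_m / f_px


def is_target_traffic_light(height_m, min_h=4.5, max_h=6.5):
    """Hypothetical height band for vehicle-facing signals; lamps mounted
    lower (e.g. pedestrian signals) are rejected."""
    return min_h <= height_m <= max_h
```

The determination unit of the abstract would apply a check like `is_target_traffic_light` to the detected height before treating the light as governing the vehicle's travel.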
SYSTEMS AND METHODS FOR DETECTING OBJECTS IN AN IMAGE OF AN ENVIRONMENT
In some implementations, a device may receive an image that depicts an environment associated with a vehicle. The device may partition the image into a plurality of subsections. The device may analyze the plurality of subsections to determine respective subsection information, wherein subsection information, for an individual subsection, indicates: a probability score that the subsection includes a line segment associated with an object class, a position of a representative point of the line segment, and a direction of the line segment. The device may identify, based on the respective subsection information of the plurality of subsections, a line associated with the object class that is associated with a set of subsections of the plurality of subsections. The device may perform one or more actions based on identifying the line associated with the object class.
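A minimal sketch of the per-subsection information described above, assuming a known line segment and a fixed grid of 32-pixel subsections (both illustrative): points sampled along the segment are binned into subsections, and each crossed subsection records a score, a representative point, and the segment direction.

```python
import math

def subsection_line_info(p0, p1, cell=32, samples=200):
    """For each grid subsection the segment p0 -> p1 crosses, record a
    score, a representative point, and the segment direction."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    direction = math.atan2(dy, dx)
    hits = {}
    for i in range(samples + 1):
        t = i / samples
        x, y = p0[0] + t * dx, p0[1] + t * dy
        hits.setdefault((int(x // cell), int(y // cell)), []).append((x, y))
    return {
        key: {
            "score": len(pts) / (samples + 1),            # fraction of samples in this cell
            "point": (sum(x for x, _ in pts) / len(pts),  # mean sampled point
                      sum(y for _, y in pts) / len(pts)),
            "direction": direction,
        }
        for key, pts in hits.items()
    }
```

A line spanning the whole image would then be identified by grouping subsections whose representative points and directions agree, which is the aggregation step the abstract describes.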
TARGET OBJECT DETECTION METHOD AND APPARATUS, AND READABLE STORAGE MEDIUM
A target object detection method includes: obtaining images collected by more than one camera installed on a target vehicle; determining, for each camera, a high-dimensional parameter feature in a high-dimensional space corresponding to that camera's parameter information; fusing features of the images via a target object detection model according to the high-dimensional parameter features; and determining position information of a target object based on the fused features. The order of the cameras corresponding to the images is kept the same as the order of the cameras corresponding to the high-dimensional parameter features.
METHOD FOR DRIVABLE AREA DETECTION AND AUTONOMOUS OBSTACLE AVOIDANCE OF UNMANNED HAULAGE EQUIPMENT IN DEEP CONFINED SPACES
A method for drivable area detection and autonomous obstacle avoidance of unmanned haulage equipment in deep confined spaces is disclosed, which includes the following steps: acquiring 3D point cloud data of a roadway; computing a 2D image drivable area of the coal mine roadway; acquiring a 3D point cloud drivable area of the coal mine roadway; establishing a 2D grid map and a risk map, and performing autonomous obstacle avoidance path planning with an improved particle swarm path planning method designed for deep confined roadways; and selecting an optimal candidate end point of the driving path with a greedy strategy, so that an unmanned auxiliary haulage vehicle drives according to the optimal end point and an optimal path. According to the present disclosure, images of the coal mine roadway are acquired actively by a single-camera sensor device, so the 3D spatial drivable area of an auxiliary haulage vehicle in a deep underground space can be computed stably, accurately, and rapidly, and autonomous obstacle-avoidance driving of the unmanned auxiliary haulage vehicle in a deep confined roadway is completed according to the drivable area detection and safety assessment information. The method is therefore of great significance to the implementation of automatic driving technology for auxiliary haulage vehicles in coal mines.
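The greedy end-point selection in the final step can be sketched as picking, from a set of candidate end points, the one that minimizes a cost combining distance to the goal and the risk-map value at that cell. The cost form, the weights, and the dict-based risk map are assumptions for illustration; the patent does not specify them.

```python
import math

def greedy_end_point(candidates, goal, risk, w_dist=1.0, w_risk=5.0):
    """Pick the candidate end point minimizing a weighted sum of distance
    to the goal and the risk-map value at that cell (assumed cost form)."""
    def cost(p):
        d = math.hypot(p[0] - goal[0], p[1] - goal[1])
        return w_dist * d + w_risk * risk.get(p, 0.0)
    return min(candidates, key=cost)
```

In the disclosed pipeline, the path planner would then drive the vehicle along the planned path toward the end point this selection returns.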
Systems and methods for creating and/or analyzing three-dimensional models of infrastructure assets
Systems and methods for detecting, geolocating, assessing, and/or inventorying infrastructure assets. In some embodiments, a plurality of images captured by a moving camera may be used to generate a point cloud. A plurality of points corresponding to a pavement surface may be identified from the point cloud. The plurality of points may be used to generate at least one synthetic image of the pavement surface, the at least one synthetic image having at least one selected camera pose. The at least one synthetic image may be used to assess at least one condition of the pavement surface.
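Rendering a synthetic image at a selected camera pose reduces to projecting the pavement points through a pinhole camera placed at that pose. The sketch below assumes an axis-aligned pose looking down +z and illustrative intrinsics; the patent's synthetic-image generation is not limited to this simplification.

```python
def project_points(points, cam_pos, f=500.0, cx=320.0, cy=240.0):
    """Project 3D points onto a synthetic image plane for a camera at
    cam_pos looking along +z (axis-aligned pose, for simplicity).

    f, cx, cy are assumed pinhole intrinsics in pixels.
    """
    pixels = []
    for x, y, z in points:
        zc = z - cam_pos[2]
        if zc <= 0:          # point is behind the synthetic camera
            continue
        u = cx + f * (x - cam_pos[0]) / zc
        v = cy + f * (y - cam_pos[1]) / zc
        pixels.append((u, v))
    return pixels
```

By varying `cam_pos` (and, in a full implementation, rotation), the same point cloud yields views of the pavement that the original moving camera never captured, which is what makes pose selection useful for condition assessment.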
Image collection system and image collection method
An image collection system includes: a captured-image analysis unit that determines whether a captured image, captured by a camera mounted on a mobile object, of surroundings of the mobile object is a first captured image including an image portion of a specified monitoring target object; an image-capturing-condition recognition unit that recognizes a first image-capturing condition that is an image-capturing condition at the time when the camera captures the first captured image; and a monitoring-target-object-information providing unit that transmits monitoring-target-object image information in which the first captured image and the first image-capturing condition are associated with each other, to a specified provision destination.
AUTOMATED REAL-TIME CALIBRATION
Provided are systems and methods for detecting that a vehicle's sensors are not calibrated properly and calibrating such sensors in real time. In one example, a method may include iteratively capturing sensor data of a road while the vehicle is travelling on the road; monitoring a calibration of the sensors of the vehicle based on the sensor data; determining, based on the monitoring, that the sensors are not calibrated properly; generating a calibration target from an object on the road based on the sensor data; and adjusting a calibration parameter of one or more sensors of the vehicle based on the generated calibration target.
CONTROL METHOD, VEHICLE, AND STORAGE MEDIUM
The present disclosure provides a control method, a vehicle, and a storage medium. The control method comprises: determining lane line information according to image information or map information; determining a parking trajectory according to the lane line information; and controlling the vehicle according to the parking trajectory. The method solves the problem that a vehicle cannot be safely parked when its autonomous driving system fails: the image information or map information serves as auxiliary information for safe parking, lane line information of the road where the vehicle is located is determined from it, and assisted parking is performed using that lane line information. The parking trajectory is determined from the lane line information, the vehicle is controlled according to the trajectory, and safe parking of the vehicle is achieved.
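One simple way to turn lane line information into a parking trajectory, sketched under stated assumptions: the lane line is a list of sample points ordered along the road, and the trajectory ramps a constant lateral offset toward the shoulder over the maneuver. The polyline representation, the offset, and the linear ramp are all illustrative; the patent does not fix a trajectory shape.

```python
def parking_trajectory(lane_points, shoulder_offset=1.5):
    """Given lane-line sample points [(x, y), ...] ordered along the road,
    generate a trajectory that ramps laterally from the lane line to the
    shoulder (assumed constant final offset, linear ramp)."""
    n = len(lane_points)
    traj = []
    for i, (x, y) in enumerate(lane_points):
        ramp = min(1.0, i / max(1, n - 1))  # 0 at the start, 1 at the stop point
        traj.append((x, y + shoulder_offset * ramp))
    return traj
```

A fallback controller would then track this trajectory with decreasing speed so the vehicle comes to rest at the final, fully offset point.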