Patent classifications
G06T2207/30261
Vehicle using spatial information acquired using sensor, sensing device using spatial information acquired using sensor, and server
Provided is a method of sensing a three-dimensional (3D) space using at least one sensor. The method can include acquiring spatial information over time for the sensed 3D space, and applying a neural-network-based object classification model to the acquired spatial information to identify at least one object in the sensed 3D space. The method can also include tracking the sensed 3D space including the identified at least one object, and using information related to the tracked 3D space.
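The classify-then-track loop described above can be sketched with a toy nearest-centroid tracker. The function below is purely illustrative (no neural network involved; all names are hypothetical) and stands in for the tracking step applied to the classifier's per-frame detections:

```python
import numpy as np

def track_centroids(frames, max_dist=2.0):
    """Assign persistent IDs to detected object centroids across frames.

    frames: list of (N_i, 3) arrays of object centroids per time step.
    Returns a list of dicts mapping object ID -> centroid for each frame.
    This is a stand-in for 'tracking the sensed 3D space'; a real system
    would feed in the neural-network detections.
    """
    next_id = 0
    prev = {}          # id -> last known centroid
    history = []
    for centroids in frames:
        current = {}
        for c in centroids:
            # Match to the closest previously tracked object, if close enough
            best_id, best_d = None, max_dist
            for oid, p in prev.items():
                d = np.linalg.norm(c - p)
                if d < best_d and oid not in current:
                    best_id, best_d = oid, d
            if best_id is None:
                # No match within range: start a new track
                best_id = next_id
                next_id += 1
            current[best_id] = c
        history.append(current)
        prev = current
    return history
```

Objects that move less than `max_dist` between frames keep their IDs; anything else spawns a new track.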
Multiple resolution deep neural networks for vehicle autonomous driving systems
Techniques for training multiple resolution deep neural networks (DNNs) for vehicle autonomous driving comprise obtaining a training dataset for training a plurality of DNNs for an autonomous driving feature of the vehicle, sub-sampling the training dataset to obtain a plurality of training datasets comprising the training dataset and one or more sub-sampled datasets each having a different resolution than a remainder of the plurality of training datasets, training the plurality of DNNs using the plurality of training datasets, respectively, receiving input data for the autonomous driving feature captured by a sensor device, determining a plurality of outputs for the autonomous driving feature using the plurality of trained DNNs and the input data, and determining a best output for the autonomous driving feature using the plurality of outputs.
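The sub-sampling step that produces the multi-resolution family of training datasets, and the selection of a best output, might look roughly like this NumPy sketch. Strided decimation and an argmax over per-DNN confidences are assumed stand-ins; the patent specifies neither:

```python
import numpy as np

def subsample_pyramid(images, factors=(1, 2, 4)):
    """Build a multi-resolution family of datasets by strided decimation.

    images: (N, H, W) array of training images. Returns one dataset per
    factor; factor 1 keeps the original resolution. In the described
    scheme, each dataset would train its own DNN.
    """
    return {f: images[:, ::f, ::f] for f in factors}

def best_output(outputs, confidences):
    """Pick the output whose DNN reported the highest confidence.

    A simple stand-in for 'determining a best output'; the selection
    criterion here is an assumption.
    """
    return outputs[int(np.argmax(confidences))]
```

Each entry of the pyramid trains one DNN; at inference, all trained DNNs run on the same input and `best_output` arbitrates.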
DETERMINING OBJECT MOBILITY PARAMETERS USING AN OBJECT SEQUENCE
A system can use semantic images, lidar images, and/or 3D bounding boxes to determine mobility parameters for objects in the semantic image. In some cases, the system can generate virtual points for an object in a semantic image and associate the virtual points with lidar points to form denser point clouds for the object. The denser point clouds can be used to estimate the mobility parameters for the object. In certain cases, the system can use semantic images, lidar images, and/or 3D bounding boxes to determine an object sequence for an object. The object sequence can indicate a location of the particular object at different times. The system can use the object sequence to estimate the mobility parameters for the object.
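The association of virtual points with lidar points to form a denser cloud can be illustrated, under a nearest-neighbour assumption the abstract does not commit to, as a depth transfer in image space:

```python
import numpy as np

def densify(virtual_uv, lidar_uv, lidar_depth):
    """Give each virtual image point the depth of its nearest lidar point.

    virtual_uv: (M, 2) pixel coordinates sampled inside an object's
                semantic mask (the 'virtual points').
    lidar_uv:   (K, 2) pixel coordinates of projected lidar returns.
    lidar_depth:(K,)   depth of each lidar return.
    Returns (M,) depths for the virtual points, yielding a denser point
    cloud for the object. Nearest-neighbour transfer is a simplification
    of the association step described in the abstract.
    """
    # Pairwise pixel distances between virtual points and lidar points
    d = np.linalg.norm(virtual_uv[:, None, :] - lidar_uv[None, :, :], axis=2)
    return lidar_depth[np.argmin(d, axis=1)]
```

The densified cloud then feeds the downstream estimate of the object's mobility parameters.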
METHOD FOR CONTROLLING VEHICLE, VEHICLE AND ELECTRONIC DEVICE
A method for controlling a vehicle is provided. The vehicle includes an image capturing device. The method includes: controlling the image capturing device to collect an image of a scene where the vehicle is located; acquiring projection information of a projection pattern in the image; determining image-altering information corresponding to the projection pattern according to the projection information; and controlling movement of the vehicle according to the image-altering information.
BARRIER DATA COLLECTION DEVICE, BARRIER DATA COLLECTION METHOD, AND BARRIER DATA COLLECTION PROGRAM
A barrier data collection device includes a unit estimation unit, a shape estimation unit, and a barrier estimation unit. The unit estimation unit applies an estimator to sensor data collected in advance for each geographical range as a mobile body (including a flying mobile body moving in the air) moves, the sensor data carrying position information including height. For each set of geographical range and height included in the sensor data, and for a predetermined time unit, the unit estimation unit estimates a barrier state indicating which barrier type the set belongs to. The shape estimation unit estimates, for each set satisfying a condition, a barrier shape based on the sensor data and the barrier states estimated for the sets in the time unit. The barrier estimation unit estimates, based on the estimated barrier states, the estimated barrier shape, and a correct-answer ratio of the estimator calculated in advance, a probability for each barrier type corresponding to each set, and determines the barrier type of the set from the estimated probabilities.
OBJECT RECOGNITION DEVICE
Provided is an object recognition device capable of accurately estimating the distance from an own vehicle to an object such as a pedestrian. A part specification unit 104 determines a specific part region in an object by analyzing an image of a region corresponding to the object detected by an overlapping region object detection unit 103. An image information storage unit 105 stores image information on the part region determined by the part specification unit 104. A three-dimensional information acquisition unit 106 acquires three-dimensional information of the part region specified by the part specification unit 104 on the basis of distance information acquired by a distance information acquisition unit 102. A non-overlapping region object detection unit 107 detects, in the non-overlapping region, the part region specified by the part specification unit 104 with reference to the image information stored in the image information storage unit 105. A distance calculation unit 108 calculates the distance to the object including the part region on the basis of the detection region information on the image of the part region detected by the non-overlapping region object detection unit 107 and the three-dimensional information acquired by the three-dimensional information acquisition unit 106.
ROBOT AND METHOD FOR CONTROLLING THEREOF
A robot may include a LiDAR sensor, and a processor configured to acquire, based on a sensing value of the LiDAR sensor, a first map that covers a space where the robot is located, detect one or more obstacles existing in the space based on the sensing value of the LiDAR sensor, acquire the number of times that each of a plurality of areas in the first map is occupied by the one or more obstacles, based on location information of the one or more obstacles, determine an obstacle area based on the number of times that each of the plurality of areas is occupied by the one or more obstacles, and acquire a second map indicating the obstacle area on the first map to determine a driving route of the robot based on the second map.
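The occupancy-count logic — tallying how often each map cell is occupied and thresholding the counts to mark the obstacle area — reduces to a small sketch; the cell indices and threshold are assumed inputs from a prior world-to-grid conversion:

```python
import numpy as np

def obstacle_map(hits, shape, threshold):
    """Count per-cell occupancy across scans and threshold the counts.

    hits: iterable of (row, col) grid cells observed as occupied,
          accumulated over many LiDAR scans.
    shape: (rows, cols) of the first map's grid.
    Returns (counts, obstacle_area), where obstacle_area marks cells
    occupied at least `threshold` times — the 'second map' overlaid
    on the first, from which a driving route can avoid marked cells.
    """
    counts = np.zeros(shape, dtype=int)
    for r, c in hits:
        counts[r, c] += 1
    return counts, counts >= threshold
```

Cells that are only transiently occupied (a passing person, sensor noise) stay below the threshold and are not marked, which is the point of counting rather than marking on first detection.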
SAFETY MONITORING SYSTEM
An object is to make it easier for an operator to notice objects in the surroundings of a construction machine. A safety monitoring system includes a detection device, a display unit, and a control unit. The detection device detects an object in the surroundings of the work vehicle. The display unit displays a captured image of the surroundings of the work vehicle. The control unit controls the detection device and the display unit. When the detection device detects an object, the control unit controls the display unit to display an image with defined edges, obtained by applying an edge-defining process to the image of the surroundings.
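The abstract does not name the edge-defining process; as one plausible illustration, a Sobel gradient-magnitude filter highlights object outlines in the displayed image:

```python
import numpy as np

def sobel_edges(img):
    """Return the gradient magnitude of a grayscale image (border cropped).

    img: (H, W) float array. Output is (H-2, W-2). An illustrative
    stand-in for an edge-defining process; the actual filter used by
    the system is not specified in the abstract.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            out[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out
```

The edge map would be overlaid on (or blended with) the captured surroundings image before display.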
Collision detection method and apparatus based on an autonomous vehicle, device and storage medium
Embodiments of the present application provide a collision detection method and apparatus based on an autonomous vehicle, a device and a storage medium. The method includes: acquiring first point cloud data of each obstacle in each region around the autonomous vehicle, where the first point cloud data represents coordinate information of the obstacle and is based on a world coordinate system; converting the first point cloud data of each obstacle into second point cloud data based on a relative coordinate system, where the origin of the relative coordinate system is a point on the autonomous vehicle; and determining, according to the second point cloud data of each obstacle in all regions, a possibility of collision of the autonomous vehicle. A collision detection manner that does not depend on absolute positioning is thus provided, improving the reliability and stability of collision detection.
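The conversion from the world coordinate system to the vehicle-relative frame is a rigid transform; the minimal 2D sketch below assumes a yaw-only rotation for brevity (the patent's transform is not limited to 2D):

```python
import numpy as np

def world_to_vehicle(points, vehicle_xy, heading):
    """Convert 2D world-frame obstacle points into the vehicle's frame.

    points:     (N, 2) obstacle points in world coordinates
                (the 'first point cloud data').
    vehicle_xy: (2,) vehicle position in the world frame; this becomes
                the origin of the relative frame.
    heading:    vehicle yaw in radians.
    Translate, then rotate by -heading so +x points along the vehicle's
    forward direction — yielding the 'second point cloud data'.
    """
    c, s = np.cos(-heading), np.sin(-heading)
    rot = np.array([[c, -s], [s, c]])
    return (np.asarray(points) - vehicle_xy) @ rot.T
```

With obstacles expressed this way, the collision check depends only on geometry relative to the vehicle, not on the quality of the absolute localization.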
System and method for occluding contour detection
A system and method for occluding contour detection using a fully convolutional neural network are disclosed. A particular embodiment includes: receiving an input image; producing a feature map from the input image by semantic segmentation; learning an array of upscaling filters to upscale the feature map into a final dense feature map of a desired size; applying the array of upscaling filters to the feature map to produce contour information of objects and object instances detected in the input image; and applying the contour information onto the input image.
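Learned upscaling filters of this kind are typically applied as a transposed (fractionally strided) convolution. A single-channel, stride-2 sketch, with the learned kernel supplied by the caller:

```python
import numpy as np

def upscale2x(feat, kernel):
    """Stride-2 transposed convolution of a single-channel feature map.

    feat:   (H, W) coarse feature map from the segmentation backbone.
    kernel: (2, 2) upscaling filter; in the described system such
            filters are learned, here it is simply passed in.
    Returns a (2H, 2W) dense map — the kind of learned upsampling an
    'array of upscaling filters' performs, reduced to one channel and
    one filter for clarity.
    """
    h, w = feat.shape
    out = np.zeros((2 * h, 2 * w))
    for i in range(h):
        for j in range(w):
            # Each coarse activation scatters a scaled copy of the kernel
            out[2 * i:2 * i + 2, 2 * j:2 * j + 2] += feat[i, j] * kernel
    return out
```

Stacking several such steps (one filter per channel, learned end to end) takes the coarse feature map up to the input resolution, where contour information can be read off per pixel.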