Patent classifications
G06T2207/30261
Method and apparatus for detecting obstacle, electronic device and storage medium
A method and apparatus for detecting an obstacle, an electronic device, and a storage medium. A specific implementation of the method includes: detecting, by a millimeter-wave radar, position points of candidate obstacles in front of a vehicle; detecting, by a camera, a left road boundary line and a right road boundary line of a road on which the vehicle is located; separating the position points of the candidate obstacles according to the left road boundary line and the right road boundary line of the road on which the vehicle is located, and extracting position points between the left road boundary line and the right road boundary line; projecting the position points between the left road boundary line and the right road boundary line onto an image; and detecting, based on projection points of the position points on the image, a target obstacle in front of the vehicle.
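The gating-and-projection step described above can be sketched roughly as follows. The straight-line boundary representation, the coordinate convention (x lateral, z forward in the camera frame), the intrinsic matrix, and the function name are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def filter_and_project(points_xyz, left_x, right_x, K):
    """Keep radar points whose lateral offset lies between the road
    boundary lines, then project the survivors into the image with a
    pinhole model.
    points_xyz: (N, 3) points in camera coordinates (x right, y down, z forward).
    left_x, right_x: assumed constant lateral positions of the boundary lines
    (a real system would fit per-range polylines from the camera detections).
    K: 3x3 camera intrinsic matrix.
    """
    lateral = points_xyz[:, 0]
    # Keep only points between the left and right boundary lines.
    mask = (lateral > left_x) & (lateral < right_x)
    kept = points_xyz[mask]
    # Pinhole projection: (u, v) = (K @ p) / z
    uvw = (K @ kept.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]
    return kept, uv
```

A downstream detector would then examine image patches around the returned projection points to confirm the target obstacle.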
NAVIGATION BASED ON PARTIALLY OCCLUDED PEDESTRIANS
Systems and methods are provided for navigating a host vehicle. In an embodiment, a processing device may be configured to receive a captured image acquired by a camera onboard the host vehicle; provide the captured image to an analysis module configured to generate an output including an indicator of a contact position of an occluded pedestrian with the ground surface, the analysis module including a trained model trained based on a plurality of training images having been modified to occlude a region where a training pedestrian contacts a training ground surface; receive from the analysis module the generated output, including the indicator of the contact position of the occluded pedestrian with the ground surface; and cause at least one navigational action by the host vehicle based on the indicator of the contact position of the occluded pedestrian with the ground surface.
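The training-image modification described above (occluding the region where the pedestrian contacts the ground) might look like the following sketch. The square patch shape, its size, and the fill value are assumptions for illustration only:

```python
import numpy as np

def occlude_contact_region(image, contact_xy, half_size=8, fill=0):
    """Blank out a square patch around the pixel where a training pedestrian
    contacts the ground surface, so a model trained on the result must infer
    the contact position from the visible (non-occluded) parts of the body.
    image: (H, W) or (H, W, C) array; contact_xy: (x, y) pixel coordinates.
    """
    out = image.copy()
    x, y = contact_xy
    # Clamp the patch to the image bounds.
    y0, y1 = max(0, y - half_size), min(out.shape[0], y + half_size)
    x0, x1 = max(0, x - half_size), min(out.shape[1], x + half_size)
    out[y0:y1, x0:x1] = fill
    return out
```

The original contact coordinates would then serve as the regression target for the trained model.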
Vehicle vision system with object detection failsafe
A method for determining a safe state for a vehicle includes disposing a camera at a vehicle and disposing an electronic control unit (ECU) at the vehicle. Image data is captured via the camera and provided to the ECU. An image processor of the ECU processes the captured image data. A condition is determined via processing of the captured image data at the image processor of the ECU. The condition comprises a shadow present in the field of view of the camera within ten frames of captured image data or a damaged condition of the imager within two minutes of operation of the camera. The condition is indicative of a situation in which processing of captured image data degrades in performance. The ECU determines a safe state for the vehicle responsive to determining the condition.
Colorization To Show Contribution of Different Camera Modalities
Techniques for generating an enhanced image. A first image is generated using a camera of a first modality, and a second image is generated using a camera of a second modality. Pixels that are common between the two images are identified. An alpha map is generated. The alpha map reflects edge detection weights that are computed for the common pixels based on saliency values. A determination is made as to how much texture from the images to use to generate an enhanced image. This determination is based on the edge detection weights included within the alpha map. Based on the edge detection weights, textures are merged from the common pixels to generate the enhanced image. Color is also added to the enhanced image, where the color reflects an additional property (e.g., the texture source for the pixel) that is associated with one or both of the images.
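The alpha-map merging step described above can be sketched as follows for co-registered single-channel images. The particular weighting scheme (a normalized saliency ratio) is an illustrative choice; the abstract only states that edge-detection weights computed from saliency values drive the merge:

```python
import numpy as np

def merge_textures(img_a, img_b, saliency_a, saliency_b):
    """Merge textures from two co-registered images (e.g. from cameras of
    different modalities) using an alpha map derived from per-pixel
    edge/saliency weights: wherever modality A is more salient, more of its
    texture is used, and vice versa.
    All inputs are same-shape float arrays over the common pixels.
    """
    eps = 1e-6  # avoid division by zero where both saliencies vanish
    alpha = saliency_a / (saliency_a + saliency_b + eps)  # alpha map in [0, 1]
    return alpha * img_a + (1.0 - alpha) * img_b
```

Colorization, as described, would be a separate pass that tints each output pixel according to which modality contributed its texture.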
NEURAL NETWORK OBJECT DETECTION
A first six degree-of-freedom (DoF) pose of an object from a perspective of a first image sensor is determined with a neural network. A second six DoF pose of the object from a perspective of a second image sensor is determined with the neural network. A pose offset between the first and second six DoF poses is determined. A first projection offset is determined for a first two-dimensional (2D) bounding box generated from the first six DoF pose. A second projection offset is determined for a second 2D bounding box generated from the second six DoF pose. A total offset is determined by combining the pose offset, the first projection offset, and the second projection offset. Parameters of a loss function are updated based on the total offset. The updated parameters are provided to the neural network to obtain an updated total offset.
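The combination of the pose offset and the two projection offsets into a total offset might be sketched as below. Using an L2 norm for the pose offset and a plain sum for the combination are assumptions; the abstract says only that the three terms are combined (and the subsequent loss-parameter update is not shown here):

```python
import numpy as np

def total_offset(pose1, pose2, proj_offset1, proj_offset2):
    """Combine the offset between two six-DoF pose estimates of the same
    object (seen from two image sensors) with the projection offsets of the
    two 2D bounding boxes generated from those poses.
    pose1, pose2: length-6 arrays (translation + rotation parameters).
    proj_offset1, proj_offset2: scalar offsets for the first and second
    2D bounding boxes.
    """
    pose_offset = np.linalg.norm(np.asarray(pose1) - np.asarray(pose2))
    return pose_offset + proj_offset1 + proj_offset2
```

In the described training loop, this scalar would drive the update of the loss-function parameters fed back to the network.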
SYSTEM AND METHODS FOR UPDATING HIGH DEFINITION MAPS
Systems and methods for vehicle-based determination of HD map update information. Sensor-equipped vehicles may determine locations of various detected objects relative to the vehicles. Vehicles may also determine the location of reference objects relative to the vehicles, where the location of the reference objects in an absolute coordinate system is also known. The absolute coordinates of various detected objects may then be determined from the absolute position of the reference objects and the locations of other objects relative to the reference objects. Newly-determined absolute locations of detected objects may then be transmitted to HD map services for updating.
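The coordinate transfer described above (absolute position of a detection from a reference object's known absolute position) reduces to a vector offset once the frames are aligned. This sketch assumes the vehicle's heading is already aligned with the absolute coordinate system; a real system would also estimate and apply a rotation:

```python
import numpy as np

def to_absolute(detection_rel, reference_rel, reference_abs):
    """Recover a detected object's absolute coordinates from:
    detection_rel  -- its position relative to the vehicle,
    reference_rel  -- a reference object's position relative to the vehicle,
    reference_abs  -- the reference object's known absolute position.
    The detection's offset from the reference is frame-independent here
    because rotation between the frames is assumed to be identity.
    """
    offset = np.asarray(detection_rel) - np.asarray(reference_rel)
    return np.asarray(reference_abs) + offset
```

The resulting absolute locations are what would be transmitted to the HD map service for updating.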
OBSTACLE DETECTION DEVICE AND OBSTACLE DETECTION METHOD
An obstacle detection device includes: a sensor unit including a camera, a laser distance measuring device, and an optical element, the optical element aligning the optical axes of the camera and the laser distance measuring device in the same direction; a monitoring area control unit calculating a monitoring area to be monitored by the sensor unit using a position of a train, map information indicating a position of a track of the train, and an attitude angle of the train, and performing control, using a drive mirror, to cause the direction of the optical axis of the sensor unit to be on a course of the train; and an obstacle determination unit detecting an obstacle on the course of the train based on monitoring results of the camera and the laser distance measuring device, and determining whether a collision avoidance action is necessary.
Around view synthesis system and method
The present invention discloses an around view synthesis system, including: a plurality of cameras each mounted on a vehicle to capture respective different areas around the vehicle; a boundary setting unit setting a synthesis boundary for images captured in an overlapping region where images captured by the plurality of cameras overlap; and an image synthesizer receiving the images captured by the plurality of cameras and synthesizing the received images according to the synthesis boundary set by the boundary setting unit.
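Once a synthesis boundary has been set inside the overlap region, the synthesizer's composition step can be sketched minimally as a cut along that boundary. A vertical boundary column and hard (non-blended) stitching are simplifying assumptions; the boundary-setting logic itself is not shown:

```python
import numpy as np

def synthesize_at_boundary(img_left, img_right, boundary_col):
    """Compose two co-registered overlapping camera views into one strip by
    cutting at a synthesis boundary column: pixels left of the boundary come
    from the left camera's image, the rest from the right camera's image.
    img_left, img_right: (H, W) arrays covering the same strip.
    """
    out = img_right.copy()
    out[:, :boundary_col] = img_left[:, :boundary_col]
    return out
```

A production around-view system would typically feather or blend across the boundary rather than cut hard, and would place the boundary to avoid crossing moving objects.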
Bird's eye view based velocity estimation via self-supervised learning
Systems and methods for determining the velocity of an object associated with a three-dimensional (3D) scene may include: a LIDAR system generating two sets of 3D point cloud data of the scene from two consecutive point cloud sweeps; a pillar feature network encoding the point cloud data to extract two-dimensional (2D) bird's-eye-view embeddings for each of the point cloud data sets in the form of pseudo images, wherein the 2D bird's-eye-view embeddings for a first of the two point cloud data sets comprise pillar features for the first point cloud data set and the 2D bird's-eye-view embeddings for a second of the two point cloud data sets comprise pillar features for the second point cloud data set; and a feature pyramid network encoding the pillar features and performing a 2D optical flow estimation to estimate the velocity of the object.
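The pillar encoding underlying the bird's-eye-view pseudo images can be illustrated with a toy scatter step. Here the single channel is simply a per-pillar point count, a stand-in for the learned pillar features the abstract describes; grid size and cell size are arbitrary assumptions:

```python
import numpy as np

def pillar_pseudo_image(points, grid=(4, 4), cell=1.0):
    """Scatter LIDAR points into vertical pillars on a bird's-eye-view grid,
    producing a 2D pseudo image. Each pillar collapses the z axis; this toy
    version records only the point count per pillar.
    points: (N, 3) array of x, y, z coordinates.
    """
    img = np.zeros(grid)
    ix = (points[:, 0] // cell).astype(int)
    iy = (points[:, 1] // cell).astype(int)
    # Discard points that fall outside the grid.
    inside = (ix >= 0) & (ix < grid[0]) & (iy >= 0) & (iy < grid[1])
    # Unbuffered scatter-add so repeated indices accumulate correctly.
    np.add.at(img, (ix[inside], iy[inside]), 1)
    return img
```

Running this on two consecutive sweeps yields the pair of pseudo images on which an optical-flow-style network could estimate per-cell motion, and hence object velocity.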
SURROUNDINGS SENSING DEVICE FOR WORK MACHINE
A specifying part sets: a first end point which is on a straight line and corresponds to a first apex of a plurality of apexes of a frame that is on a first side in the front orthogonal direction and closer to the to-machine side; a second end point which is on the straight line and corresponds to a second apex of the plurality of apexes of the frame that is on a second side in the front orthogonal direction and closer to the to-machine side; and a midpoint between the first end point and the second end point. The specifying part determines one of the first end point, the second end point, and the midpoint as a coordinate indicative of a position of an object according to an operation pattern of an operating part.
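The selection logic of the specifying part (pick one of the two end points or their midpoint according to the operation pattern) reduces to a small dispatch. The pattern names here are invented placeholders; the abstract does not name the operation patterns:

```python
def object_coordinate(first_end, second_end, pattern):
    """Return the coordinate representing an object's position: the first
    end point, the second end point, or the midpoint between them, selected
    by the operation pattern of the operating part.
    first_end, second_end: (x, y) tuples on the straight line.
    pattern: hypothetical pattern label ("swing_first", "swing_second",
    or anything else for the default midpoint).
    """
    if pattern == "swing_first":
        return first_end
    if pattern == "swing_second":
        return second_end
    # Default: the midpoint between the two end points.
    return ((first_end[0] + second_end[0]) / 2,
            (first_end[1] + second_end[1]) / 2)
```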