Patent classifications
G06T2207/30261
OBJECT DETECTION SYSTEM
A vehicular object detection system includes a camera and a lidar. With the camera mounted at a windshield of a vehicle, and with the lidar mounted at an exterior portion of the vehicle, and based at least in part on processing of image data captured by the camera and lidar data captured by the lidar, a plurality of individual objects present exterior of the vehicle are detected. Based at least in part on processing of captured image data and captured lidar data, (i) respective proximity relative to the vehicle of individual objects is determined, (ii) respective speed relative to the vehicle of individual objects is determined and (iii) respective location relative to the vehicle of individual objects is determined. Based at least in part on processing of captured image data and/or processing of captured lidar data, the system determines collision potential between the vehicle and an individual object.
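The collision-potential determination described above can be sketched as a time-to-collision check on a fused track. This is a minimal illustration, not the patented method: the `Track` type and the 2-second threshold are assumptions, and the proximity and closing speed are taken as already extracted from the camera and lidar data.

```python
# Illustrative sketch: collision potential from a fused camera/lidar track.
# `Track`, `time_to_collision`, and the threshold are assumed names/values.
from dataclasses import dataclass

@dataclass
class Track:
    distance_m: float          # proximity of the object relative to the vehicle
    closing_speed_mps: float   # positive when the object approaches the vehicle

def time_to_collision(track: Track) -> float:
    """Return seconds until collision, or infinity if the object recedes."""
    if track.closing_speed_mps <= 0.0:
        return float("inf")
    return track.distance_m / track.closing_speed_mps

def has_collision_potential(track: Track, ttc_threshold_s: float = 2.0) -> bool:
    """Flag an individual object whose time to collision is below threshold."""
    return time_to_collision(track) < ttc_threshold_s
```

In practice the threshold would depend on vehicle speed and braking capability; a fixed value is used here only to keep the sketch self-contained.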
DRIVER ASSISTANCE SYSTEM AND OPERATION METHOD THEREOF
Disclosed is a driver assistance system including a first processor that receives a radar signal from a radar and detects one or more first objects based on the radar signal, a second processor that receives a camera signal from a camera and detects one or more second objects based on the camera signal, a laser controller that controls a beam generator to radiate a laser beam, a segmentation unit that extracts one or more first segments corresponding to the one or more first objects and one or more second segments corresponding to the one or more second objects, a mapping unit that maps at least one of the first segments and at least one of the second segments, a classification unit that classifies a target object based on an image signal and determines a possibility of collision with the target object, and a vehicle controller that controls a vehicle drive device based on the possibility of collision.
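The mapping step above, which pairs radar-derived segments with camera-derived segments, can be sketched as a greedy overlap match. The segment representation (azimuth intervals in degrees) and the function names are assumptions for illustration only:

```python
# Illustrative sketch: map radar segments to camera segments by greatest
# overlap. Segments are modeled as (start, end) azimuth intervals.
def overlap(a, b):
    """Length of the intersection of two (start, end) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def map_segments(radar_segments, camera_segments, min_overlap=0.0):
    """Return (radar_index, camera_index) pairs, one best match per radar segment."""
    pairs = []
    for i, r in enumerate(radar_segments):
        best_j, best_ov = None, min_overlap
        for j, c in enumerate(camera_segments):
            ov = overlap(r, c)
            if ov > best_ov:
                best_j, best_ov = j, ov
        if best_j is not None:
            pairs.append((i, best_j))
    return pairs
```

A mapped pair would then be passed to the classification unit as a single candidate target object.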
Three-dimensional object detection device and foreign matter detection device
A three-dimensional object detection device has an image capturing device, an image conversion unit, a three-dimensional object detection unit, a three-dimensional object assessment unit, first and second foreign matter detection units and a controller. The image capturing device captures images rearward of a vehicle. The three-dimensional object detection unit detects three-dimensional objects based on image information. The three-dimensional object assessment unit assesses whether or not a detected three-dimensional object is another vehicle. The foreign matter detection units detect whether or not foreign matter has adhered to a lens based on the change over time in luminance values for each predetermined pixel of the image capturing element and the change over time in the difference between an evaluation value and a reference value. The controller outputs control commands to the other units to suppress the assessment of foreign matter as another vehicle when foreign matter has been detected.
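The temporal-luminance cue above can be sketched simply: a pixel occluded by matter adhered to the lens barely changes over time even while the scene moves. The thresholds below are illustrative assumptions, not values from the patent:

```python
# Illustrative sketch: flag lens contamination from per-pixel luminance
# histories. Thresholds are assumed values for the example.
def pixel_is_stuck(luminance_history, max_range=3.0):
    """True if a pixel's luminance stayed nearly constant across frames."""
    return (max(luminance_history) - min(luminance_history)) <= max_range

def detect_foreign_matter(histories, stuck_fraction=0.05):
    """Flag the lens when an unusual share of pixels never changes."""
    stuck = sum(1 for h in histories if pixel_is_stuck(h))
    return stuck / len(histories) >= stuck_fraction
```

On a rearward camera of a moving vehicle nearly every pixel should vary frame to frame, so a cluster of "stuck" pixels is a reasonable contamination cue; the patent additionally tracks the evaluation-value/reference-value difference, which is omitted here.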
IMAGE PROCESSING APPARATUS
An image processing apparatus is provided. The apparatus comprises an image acquisition unit and a processing unit configured to process an image acquired by the image acquisition unit, wherein the processing unit recognizes a moving object from the image acquired by the image acquisition unit, performs masking processing on the moving object recognized in the image, and estimates spatial motion of the image processing apparatus based on the image subjected to the masking processing.
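The idea of masking moving objects before estimating ego-motion can be sketched with feature correspondences: drop matches that fall inside moving-object boxes, then estimate motion from the static remainder. Feature tracking and recognition are assumed to happen elsewhere; a 2D translation stands in for the full spatial motion:

```python
# Illustrative sketch: ego-motion as a 2D translation estimated only from
# feature matches outside moving-object masks (axis-aligned boxes).
def inside_any(pt, boxes):
    """True if point (x, y) lies inside any (x0, y0, x1, y1) box."""
    x, y = pt
    return any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in boxes)

def estimate_translation(matches, moving_boxes):
    """matches: list of ((x, y) in frame t, (x, y) in frame t+1) pairs."""
    static = [(p, q) for p, q in matches
              if not inside_any(p, moving_boxes) and not inside_any(q, moving_boxes)]
    if not static:
        return None  # everything was masked; no motion estimate possible
    dx = sum(q[0] - p[0] for p, q in static) / len(static)
    dy = sum(q[1] - p[1] for p, q in static) / len(static)
    return dx, dy
```

Without the masking step, features on the moving object would bias the estimate toward that object's motion rather than the apparatus's own.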
Adversarial Approach to Usage of Lidar Supervision to Image Depth Estimation
Techniques are disclosed for improving upon the usage of Light Detection and Ranging (LIDAR) supervision to perform image depth estimation. The techniques use a generator and adversary network to generate respective models that “compete” against one another to enable the generator model to output a desired output image that compensates for a LIDAR image having a structured or lined data pattern. The techniques described herein may be suitable for use by vehicles and/or other agents operating in a particular environment as part of machine vision algorithms that are implemented to perform autonomous and/or semi-autonomous functions.
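The training objective implied above can be sketched in miniature: supervise the generator's dense depth only at pixels where sparse LIDAR returns exist, plus an adversarial term from the discriminator's score. The networks themselves are out of scope here; `disc_score` is a stand-in for the adversary's output and all weights are illustrative assumptions:

```python
# Illustrative sketch of a generator objective under sparse LIDAR
# supervision: masked L1 term + adversarial term. Not the patented loss.
def generator_loss(pred_depth, lidar_depth, lidar_mask, disc_score, adv_weight=0.1):
    """L1 on LIDAR-valid pixels plus a penalty for looking 'fake'.

    pred_depth, lidar_depth: flat lists of per-pixel depths.
    lidar_mask: 1 where a LIDAR return exists (the lined/structured pattern).
    disc_score: adversary's realness score in [0, 1] for the prediction.
    """
    valid = [abs(p, t) if False else abs(p - t)
             for p, t, m in zip(pred_depth, lidar_depth, lidar_mask) if m]
    l1 = sum(valid) / len(valid) if valid else 0.0
    adv = 1.0 - disc_score  # push the adversary's score toward "real"
    return l1 + adv_weight * adv
```

The masked term leaves the generator free at unobserved pixels, and the adversarial term discourages it from reproducing the LIDAR's lined scan pattern there, which is the "competition" the abstract describes.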
LEARNING METHOD, LEARNING DEVICE, AND RECORDING MEDIUM
A learning method includes acquiring a learning image including an object, and correct information including a correct class and a correct box; calculating an evaluation value for a learning model in accordance with a difference between the correct information and an object detection result that includes a detected class and a detected box and that is obtained by inputting the learning image to the learning model; and adjusting parameters of the learning model in accordance with the evaluation value. The calculating of the evaluation value includes performing at least one of processing for varying a weight that is assigned to each of differences of two or more positions or lengths between the correct box and the detected box, or processing for varying a weight that is assigned to a difference between the correct class and the detected class.
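The per-term weighting described above can be sketched as a weighted sum over box-coordinate differences plus a weighted class term. The patent only requires that the weights be variable; the particular values below (size errors weighted twice as heavily as position errors) are an illustrative assumption:

```python
# Illustrative sketch: evaluation value with per-coordinate box weights
# and a separate class weight. Boxes are (x, y, w, h).
def evaluation_value(correct_box, detected_box, class_match,
                     box_weights=(1.0, 1.0, 2.0, 2.0), class_weight=1.0):
    """Weighted box error plus weighted class error for one detection."""
    box_term = sum(w * abs(c - d)
                   for w, c, d in zip(box_weights, correct_box, detected_box))
    class_term = 0.0 if class_match else class_weight
    return box_term + class_term
```

Varying `box_weights` lets training emphasize, say, height accuracy over horizontal position, which is the kind of per-difference weighting the claim covers.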
METHOD OF CONTROLLING A ROBOTIC HARVESTING DEVICE
A method of controlling a robotic harvesting device to pick a target piece of fruit from a plant includes determining a position of the target piece of fruit, determining a position of one or more obstacles relative to the target piece of fruit, and plotting a path for the robotic harvesting device to be moved along from a starting position to a picking position. The path is plotted such that the movement of the robotic harvesting device along the path serves to move at least some of the obstacles relative to the target piece of fruit so that the obstacles do not impede the picking of the target piece of fruit.
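One simplified way to choose such a path is to evaluate candidate straight-line approach directions and prefer the corridor containing the fewest obstacles, the remaining few being the ones the device nudges aside on the way in. The 2D geometry, corridor dimensions, and candidate count below are all simplifying assumptions for illustration:

```python
# Illustrative sketch: pick the approach direction toward the target fruit
# whose corridor contains the fewest obstacles. 2D only; units in meters.
import math

def obstacles_in_corridor(target, obstacles, angle, width=0.05, reach=1.0):
    """Count obstacles inside a straight corridor approaching `target` from `angle`."""
    ux, uy = math.cos(angle), math.sin(angle)
    count = 0
    for ox, oy in obstacles:
        rx, ry = ox - target[0], oy - target[1]
        along = rx * ux + ry * uy           # distance along the approach axis
        across = abs(-rx * uy + ry * ux)    # lateral offset from the axis
        if 0.0 <= along <= reach and across <= width:
            count += 1
    return count

def best_approach_angle(target, obstacles, candidates=36):
    """Evaluate evenly spaced candidate angles and return the clearest one."""
    angles = [2 * math.pi * k / candidates for k in range(candidates)]
    return min(angles, key=lambda a: obstacles_in_corridor(target, obstacles, a))
```

A full planner would also model how leaves and stems deflect as the device moves, which this static count deliberately omits.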
Method for determining a current distance and/or a current speed of a target object based on a reference point in a camera image, camera system and motor vehicle
A method for determining a current distance and/or a current speed of a target object relative to a motor vehicle based on an image of the target object, in which the image is provided by a camera of the motor vehicle. Characteristic features of the target object are extracted from the image, and a reference point associated with the target object is determined based on the characteristic features for determining the distance and/or the speed, wherein the distance and/or the speed are determined based on the reference point. A baseline, lying in the transition area from the depicted target object to the ground surface depicted in the image, is determined based on the characteristic features, and a point located on the baseline is determined as the reference point.
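Once the reference point sits on the baseline (the object's ground contact), the distance follows from standard pinhole geometry under a flat-road assumption. The calibration values in the example are illustrative, not from the patent:

```python
# Illustrative sketch: range to a ground-contact reference point from its
# image row, assuming a level road and a calibrated forward camera.
def distance_from_ground_point(v_ref, v_horizon, focal_px, camera_height_m):
    """Distance (m) to the ground point imaged at row v_ref (pixels).

    v_horizon: image row of the horizon; v_ref must lie below it.
    focal_px: focal length in pixels; camera_height_m: mounting height.
    """
    if v_ref <= v_horizon:
        raise ValueError("reference point must lie below the horizon")
    return focal_px * camera_height_m / (v_ref - v_horizon)
```

This is why the baseline matters: a reference point above the ground contact (e.g. on the vehicle body) would make the same formula overestimate the distance.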
VEHICLE-MOUNTED STEREO CAMERA DEVICE AND METHOD FOR CORRECTING THE SAME
A vehicle-mounted stereo camera device that achieves high-precision distance detection is provided. The provided vehicle-mounted stereo camera device includes a left camera and right camera disposed on a vehicle via a holder to cause visual fields to overlap each other, a stereo processor that calculates a distance to a body outside the vehicle based on images captured by the left camera and right camera and on positions on the vehicle, first and second geomagnetic sensors respectively disposed near the left camera and right camera, and a third geomagnetic sensor disposed on the holder. The stereo processor compares a geomagnetic value detected by the first or second geomagnetic sensor with a geomagnetic value detected by the third geomagnetic sensor, detects a displacement amount of the left camera or right camera, and changes a cutout position in the image captured by the left camera or right camera based on the displacement amount.
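The correction step above can be sketched as follows: a heading mismatch between a camera-mounted geomagnetic sensor and the holder-mounted reference sensor indicates the camera has rotated, and the image cutout is shifted to compensate. The pixels-per-degree factor is an illustrative calibration value:

```python
# Illustrative sketch: detect camera yaw displacement from geomagnetic
# headings and shift the image cutout accordingly.
def camera_displacement_deg(camera_heading_deg, holder_heading_deg):
    """Signed smallest angular difference between camera and holder sensors."""
    return (camera_heading_deg - holder_heading_deg + 180.0) % 360.0 - 180.0

def corrected_cutout_x(nominal_x, displacement_deg, px_per_deg=20.0):
    """Shift the horizontal cutout position to cancel the yaw displacement."""
    return nominal_x - int(round(displacement_deg * px_per_deg))
```

Keeping the cutouts consistent with the true camera orientations preserves the stereo disparity baseline, which is what makes the distance calculation high-precision.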
DEVICE AND METHOD FOR DETECTING A CURBSTONE IN AN ENVIRONMENT OF A VEHICLE AND SYSTEM FOR CURBSTONE CONTROL FOR A VEHICLE
A method for detecting a curbstone in an environment of a vehicle. The method includes recognizing at least one line segment that belongs to the curbstone with the aid of image data that are read in by an interface to a camera device of the vehicle. The line segment is projected onto a ground plane of the environment in order to generate a projected line segment. A subset of a plurality of three-dimensionally triangulated points in the environment of the vehicle is assigned to the line segment as a function of a position of the points relative to a position of the camera device, relative to a starting point of the projected line segment, and relative to an end point of the projected line segment. A flank plane of the curbstone is ascertained with the aid of the assigned points.
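The final plane-fitting step can be sketched under a simplifying assumption that the curb's flank is vertical, reducing the fit to a least-squares line x = a·y + b over the horizontal coordinates of the assigned triangulated points. The vertical-flank assumption and the coordinate convention are the sketch's own, not the patent's:

```python
# Illustrative sketch: fit a vertical flank plane (x = a*y + b) to the
# 3D points assigned to the curbstone's line segment by least squares.
def fit_vertical_flank(points):
    """points: iterable of (x, y, z) triangulated points; returns (a, b)."""
    pts = list(points)
    n = len(pts)
    sx = sum(p[0] for p in pts)
    sy = sum(p[1] for p in pts)
    syy = sum(p[1] * p[1] for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    denom = n * syy - sy * sy  # zero only if all y values coincide
    a = (n * sxy - sx * sy) / denom
    b = (sx - a * sy) / n
    return a, b
```

A full implementation would fit an arbitrarily tilted plane in 3D (e.g. via a 3x3 normal-equation solve), since real curb flanks are only approximately vertical.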