Patent classifications
G06T2207/30261
Disparity value deriving device, equipment control system, movable apparatus, and robot
A disparity value deriving device includes an acquisition unit configured to acquire a degree of matching between a reference region in a reference image captured from a first imaging position and each of a plurality of regions in a designated range including a corresponding region to the reference region in a comparison image captured from a second imaging position; a synthesizer configured to synthesize the degree of matching of a reference region in a neighborhood of a predetermined reference region in the reference image and the degree of matching of the predetermined reference region in the reference image; and a deriving unit configured to derive a disparity value of an object whose image is being captured in the predetermined reference region and a corresponding region to the predetermined reference region, based on a synthesized degree of matching obtained by the synthesizer.
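The core flow of the abstract (per-region matching cost, synthesis with the costs of neighboring regions, disparity from the best synthesized cost) can be sketched in a few lines. The sum-of-absolute-differences cost and the box-filter synthesis below are illustrative stand-ins for the patented units, assuming a rectified grayscale stereo pair; this is a minimal sketch, not the claimed implementation.

```python
import numpy as np

def convolve2d_same(img, k):
    """Tiny 'same'-size 2-D convolution so the sketch needs only NumPy."""
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def disparity_map(ref_img, cmp_img, max_disp=16, win=5):
    """Matching cost -> synthesis over the neighbourhood -> disparity."""
    ref = ref_img.astype(np.float32)
    cmp_ = cmp_img.astype(np.float32)
    h, w = ref.shape
    cost = np.full((max_disp, h, w), 255.0, dtype=np.float32)

    # degree of matching: absolute difference between the reference region
    # and each candidate region in the designated (disparity) range
    for d in range(max_disp):
        cost[d, :, d:] = np.abs(ref[:, d:] - cmp_[:, :w - d])

    # synthesis: combine each region's cost with the costs of neighbouring
    # reference regions (a box filter stands in for the synthesizer)
    k = np.ones((win, win), dtype=np.float32) / (win * win)
    synth = np.stack([convolve2d_same(c, k) for c in cost])

    # derive the disparity as the candidate with the best synthesized score
    return synth.argmin(axis=0)
```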
Image processing apparatus and image processing method
An object recognizer of an image processing apparatus separately extracts first and second feature points of first and second line segments whose variation is equal to or greater than a variation threshold value, and first and second feature points of first and second line segments whose variation is smaller than the variation threshold value. For a first line segment whose variation is equal to or greater than the variation threshold value, the recognizer determines the corresponding point in a second captured image for its first feature point as the second feature point of the second line segment corresponding to that first line segment, and thereby recognizes an object.
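To make the split-and-match logic concrete, the sketch below separates line-segment feature points by a variation threshold and pairs strong-variation segments across two images. The `Segment` container and the nearest-neighbour pairing rule are assumptions made for illustration, not the patent's matching criterion.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Segment:
    feature_pt: tuple   # (x, y) feature point of the line segment
    variation: float    # e.g. intensity change along the segment

def split_by_variation(segments, threshold):
    """Separate segments into groups with variation >= / < the threshold."""
    strong = [s for s in segments if s.variation >= threshold]
    weak = [s for s in segments if s.variation < threshold]
    return strong, weak

def correspond(strong_first, strong_second):
    """Pair each strong-variation segment of the first image with the
    second-image segment whose feature point lies nearest to it."""
    pairs = []
    for s1 in strong_first:
        p1 = np.asarray(s1.feature_pt, dtype=float)
        best = min(strong_second,
                   key=lambda s2: np.linalg.norm(p1 - np.asarray(s2.feature_pt)))
        pairs.append((s1.feature_pt, best.feature_pt))
    return pairs
```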
METHOD FOR PROVIDING OBSTACLE AVOIDANCE USING DEPTH INFORMATION OF IMAGE AND UNMANNED AERIAL VEHICLE
A method for providing obstacle avoidance using depth information of an image is provided. The method includes the following steps. Shoot a scene to obtain a depth image of the scene. Determine a flight direction and a flight distance according to the depth image. Then, fly according to the flight direction and the flight distance.
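Read as an algorithm, the three steps map naturally onto a depth-image planner. The sector split, the 90-degree field-of-view assumption, and the safety factor in the sketch below are illustrative choices, not the method claimed.

```python
import numpy as np

def plan_motion(depth, n_sectors=5, safety=0.7):
    """Given a depth image of the scene, choose the horizontal sector whose
    nearest obstacle is farthest away as the flight direction, and use a
    fraction of that clearance as the flight distance."""
    h, w = depth.shape
    bounds = np.linspace(0, w, n_sectors + 1, dtype=int)
    # clearance of a sector = distance to the closest obstacle inside it
    clearance = np.array([depth[:, bounds[i]:bounds[i + 1]].min()
                          for i in range(n_sectors)])
    best = int(clearance.argmax())
    # map the sector index to a heading angle relative to the camera axis,
    # assuming a 90-degree horizontal field of view
    heading_deg = (best + 0.5) / n_sectors * 90.0 - 45.0
    distance = safety * float(clearance[best])
    return heading_deg, distance
```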
METHOD AND DEVICE FOR INTERPRETING A SURROUNDING ENVIRONMENT OF A VEHICLE, AND VEHICLE
A method is described for interpreting a surrounding environment of a vehicle. The method includes a step of providing an image of the surrounding environment of the vehicle, a step of forming an item of relational information using a first position of a first object in the image and a second position of a second object in the image, the item of relational information representing a spatial relation between the first object and the second object in the surrounding environment of the vehicle, and a step of using the item of relational information to interpret the surrounding environment of the vehicle.
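As one concrete (and deliberately simple) reading of "item of relational information", the sketch below derives an offset vector and a coarse spatial label from two bounding boxes in the image. The box format and the label set are assumptions, not the patent's representation.

```python
def relational_info(box_a, box_b):
    """Form an 'item of relational information' from two image positions.
    Boxes are (x, y, w, h) in pixels; the relation describes box_b
    relative to box_a as an offset plus a coarse label."""
    ax, ay = box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2
    bx, by = box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2
    dx, dy = bx - ax, by - ay
    if abs(dx) >= abs(dy):
        label = "right-of" if dx > 0 else "left-of"
    else:
        label = "below" if dy > 0 else "above"
    return {"offset_px": (dx, dy), "relation": label}

# Example: the second box lies to the left of the first one
print(relational_info((400, 300, 120, 80), (200, 310, 40, 90)))
```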
DRIVING SUPPORT APPARATUS AND DRIVING SUPPORT METHOD
The disclosure relates to a driving support apparatus including at least: a detection unit that detects a driving lane on which a user's vehicle and a front vehicle located in front of the user's vehicle are driving, based on image data output from a camera; a first calculation unit that calculates a second front vehicle width for the front vehicle based on a first front vehicle width for the front vehicle measured on the image data, a first driving lane width for the driving lane measured on the image data, and a second driving lane width predetermined according to a characteristic of the driving lane; and a second calculation unit that calculates a distance from the front vehicle based on a focal length of the camera, the first front vehicle width, and the second front vehicle width, thereby precisely measuring the distance from the front vehicle based on camera image data.
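The two calculation steps reduce to a lane-width scaling followed by the pinhole relation distance = focal_length x real_width / pixel_width. The sketch below assumes the focal length is expressed in pixels and a standard 3.5 m lane; the variable names are illustrative.

```python
def estimate_distance(f_px, veh_width_px, lane_width_px, lane_width_m):
    """Two-step calculation sketched from the abstract:
    1) scale the measured vehicle width (pixels) by the known lane width
       to obtain a metric vehicle width, and
    2) apply the pinhole relation distance = f * real_width / pixel_width.
    f_px is the focal length in pixels (an assumption of this sketch)."""
    veh_width_m = veh_width_px * lane_width_m / lane_width_px   # step 1
    return f_px * veh_width_m / veh_width_px                    # step 2

# Example: 1200 px focal length, vehicle 150 px wide, lane 300 px wide,
# 3.5 m lane -> roughly 14 m to the front vehicle.
print(estimate_distance(1200, 150, 300, 3.5))
```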
Obstacle detection and mapping
An autonomous driving system for a vehicle comprising: an I/O module operative to communicate with an obstacle avoidance server; at least one sensor operative to provide at least one indication of an obstacle in a path of the vehicle; processing circuitry; and an autonomous driving manager to be executed by the processing circuitry and operative to: detect the at least one indication of an obstacle based on data provided by the at least one sensor, drive the vehicle in accordance with a driving policy associated with the obstacle, and send an obstacle report with obstacle information associated with the at least one indication of an obstacle to the obstacle avoidance server.
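A minimal shape for the manager's loop body and the obstacle report it sends upstream might look like the sketch below. The report fields, the policy mapping, and the JSON encoding are assumptions rather than the patent's interfaces.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class ObstacleReport:
    """Illustrative payload for the report sent to the obstacle avoidance
    server; these fields are assumptions, not the patent's schema."""
    timestamp: float
    position_m: tuple     # obstacle position relative to the vehicle
    obstacle_type: str
    confidence: float

def policy_for(obstacle_type):
    # placeholder mapping from obstacle class to a driving policy name
    return {"pedestrian": "stop", "vehicle": "follow"}.get(obstacle_type, "slow_down")

def handle_sensor_frame(detections, drive_fn, send_fn):
    """Sketch of the manager's loop body: detect an obstacle indication,
    drive according to a policy tied to it, and report it upstream."""
    for det in detections:
        drive_fn(policy_for(det["obstacle_type"]))
        report = ObstacleReport(time.time(), det["position_m"],
                                det["obstacle_type"], det["confidence"])
        send_fn(json.dumps(asdict(report)))
```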
TEMPORAL DATA ASSOCIATIONS FOR OPERATING AUTONOMOUS VEHICLES
Systems and methods are provided for controlling a vehicle. In one embodiment, a method includes: selecting, by a controller onboard the vehicle, first data for a region from a first device onboard the vehicle based on a relationship between a time associated with the first data and a frequency associated with a second device; obtaining, by the controller, second data from the second device, the second data corresponding to the region; correlating, by the controller, the first data and the second data; and determining, by the controller, a command for operating one or more actuators onboard the vehicle in a manner that is influenced by the correlation between the first data and the second data.
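The selection step, where first-device data is chosen by how its timestamp relates to the second device's sampling frequency, can be sketched as a time-window filter. The half-period tolerance below is an assumption.

```python
def select_aligned_sample(samples, t_second, freq_second_hz):
    """Among timestamped samples from the first device, pick the one whose
    time falls closest to the second device's sampling instant, using that
    device's period as the acceptance window.
    'samples' is a list of (timestamp, data) tuples."""
    period = 1.0 / freq_second_hz
    in_window = [s for s in samples if abs(s[0] - t_second) <= period / 2]
    candidates = in_window or samples      # fall back to all samples if empty
    return min(candidates, key=lambda s: abs(s[0] - t_second))

# Example: lidar sweep at t = 0.105 s, camera frames every 1/30 s
frames = [(0.066, "frame_2"), (0.100, "frame_3"), (0.133, "frame_4")]
print(select_aligned_sample(frames, 0.105, 30.0))   # -> (0.1, 'frame_3')
```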
Three-dimensional object detection device
A three-dimensional object detection device can enhance the accuracy in detecting a three-dimensional object regardless of the brightness in the detection environment. The device has an image capture means that captures an image of a predetermined area and an image conversion means that converts the image through a viewpoint conversion into a bird's-eye view image. A first three-dimensional object detection means aligns positions of bird's-eye view images at different times obtained by the image conversion means, counts the number of pixels that exhibit a predetermined difference on a differential image of the aligned bird's-eye view images to generate a frequency distribution, thereby creating differential waveform information, and detects a three-dimensional object on the basis of the differential waveform information. A second three-dimensional object detection means detects edge information from the bird's-eye view image obtained by the image conversion means and detects a three-dimensional object on the basis of the edge information.
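The first detection path (align bird's-eye images taken at different times, difference them, and count differing pixels into a frequency distribution) is easy to sketch. The simple row-shift alignment and both thresholds below are illustrative assumptions, not the claimed procedure.

```python
import numpy as np

def differential_waveform(bev_prev, bev_curr, ego_shift_px, diff_thresh=20):
    """Align the earlier bird's-eye image by the ego-motion shift (rows here),
    take the difference image, and count differing pixels per column to form
    the 'differential waveform' (a frequency distribution over columns)."""
    aligned_prev = np.roll(bev_prev.astype(np.int16), ego_shift_px, axis=0)
    diff = np.abs(bev_curr.astype(np.int16) - aligned_prev) > diff_thresh
    return diff.sum(axis=0)

def has_object(waveform, count_thresh=30):
    """Flag a three-dimensional object where the waveform exceeds a count
    threshold (the threshold value is an illustrative assumption)."""
    return bool((waveform > count_thresh).any())
```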
VEHICULAR TRAILER HITCHING ASSIST SYSTEM
A vehicular trailer hitching assist system includes a rear backup camera disposed at a rear portion of a vehicle and a control disposed in the vehicle. A display device includes a video display screen operable to display video images for viewing by a driver of the vehicle. During a hitching maneuver event undertaken by the driver to hitch a trailer hitch of the vehicle to a trailer tongue of a trailer, and while the driver of the vehicle is maneuvering the vehicle to hitch the vehicle to the trailer, the control outputs video images derived from image data captured by the rear backup camera for display by the video display screen. At least one overlay overlays the displayed video images to guide the driver during the hitching maneuver. The at least one overlay aids in guiding connection of the trailer hitch of the vehicle to the trailer tongue of the trailer.
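One way such an overlay can be composited onto the rear-camera video is sketched below with OpenCV. The hitch and tongue pixel coordinates are assumed to come from the system's own projection and detection; the colors and marker sizes are arbitrary.

```python
import cv2
import numpy as np

def draw_hitch_overlay(frame, hitch_px, tongue_px):
    """Overlay a guidance line between the projected hitch-ball position and
    the detected trailer-tongue position, with markers at both ends."""
    out = frame.copy()
    cv2.line(out, hitch_px, tongue_px, (0, 255, 0), 2)   # guidance path
    cv2.circle(out, hitch_px, 6, (255, 0, 0), -1)        # vehicle hitch ball
    cv2.circle(out, tongue_px, 6, (0, 0, 255), -1)       # trailer tongue
    return out

# Example on a blank frame standing in for a rear-camera image
frame = np.zeros((480, 640, 3), dtype=np.uint8)
overlaid = draw_hitch_overlay(frame, (320, 460), (350, 250))
```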
Vision system and method for a motor vehicle
A vision system (10) for a motor vehicle comprises a stereo imaging apparatus (11) with imaging devices (12) adapted to capture images from a surrounding of the motor vehicle, and a processing device (14) adapted to process images captured by the imaging devices (12), to detect objects, and to track detected objects over several time frames in the captured images. The processing device (14) is adapted to obtain an estimated value for the intrinsic yaw error of the imaging devices (12) by solving a set of equations belonging to one particular detected object (30), using a non-linear equation solver method, where each equation corresponds to one time frame and relates a frame time, a disparity value of the particular detected object, an intrinsic yaw error, and a kinematic variable of the vehicle.
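A nonlinear least-squares sketch of the estimation idea is given below. The measurement model (a stationary tracked object approached at known speed, with the intrinsic yaw error acting as a constant disparity offset in pixels) is one plausible reading of the per-frame equations described, not necessarily the patent's formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_yaw_error(t, d_meas, v, f_px, baseline_m):
    """Per frame k, assume the true disparity of a stationary object is
    f*B / (Z0 - v*t_k) and the measured disparity adds a constant offset e
    caused by the intrinsic yaw error. Solving all frame equations jointly
    in a least-squares sense recovers e (and the initial distance Z0)."""
    t = np.asarray(t, dtype=float)
    d_meas = np.asarray(d_meas, dtype=float)

    def residuals(params):
        z0, e = params
        return f_px * baseline_m / (z0 - v * t) + e - d_meas

    z0_guess = f_px * baseline_m / max(d_meas[0], 1e-6)
    sol = least_squares(residuals, x0=[z0_guess, 0.0])
    z0_hat, yaw_offset_px = sol.x
    return yaw_offset_px, z0_hat
```

With per-frame times, measured disparities of the tracked object, and the vehicle speed from odometry as inputs, the returned offset is the value that best explains all frames simultaneously, which mirrors the multi-frame equation-solving idea of the abstract.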