Patent classifications: B60R2300/607
ELECTRIC DRIVE VEHICLE
An electric drive vehicle is provided with first and second slider plates contacting a trolley line on the high-voltage side and on the ground side, respectively. A camera has a photographing range of the whole of the second slider plate held in contact with the trolley line on the ground side and a part of the trolley line residing around the second slider plate. A controller controls a display of a monitor, wherein the controller includes a relative distance calculation section that calculates a relative distance between the trolley line on the ground side and a reference position of the second slider plate, and a bird's eye view image data generation section that generates bird's eye view image data reflecting the relative distance calculated by the relative distance calculation section and that outputs the bird's eye view image data to the monitor.
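The relative-distance calculation and its reflection into the bird's eye view might be sketched as follows. This is an illustrative guess, not the patent's implementation: the function names, the simple pixel-to-metre scale, and the marker placement are all assumptions.

```python
# Hypothetical sketch of the relative-distance step; a real system would
# detect the trolley line and slider plate in the camera image first.

def relative_distance_m(trolley_line_px: float,
                        slider_ref_px: float,
                        metres_per_pixel: float) -> float:
    """Signed lateral offset of the ground-side trolley line from the
    second slider plate's reference position, in metres."""
    return (trolley_line_px - slider_ref_px) * metres_per_pixel

def offset_marker_px(distance_m: float,
                     metres_per_pixel: float,
                     image_centre_px: int) -> int:
    """Column at which a bird's eye view marker reflecting the relative
    distance would be drawn for output to the monitor."""
    return image_centre_px + round(distance_m / metres_per_pixel)
```

A positive distance would indicate the trolley line drifting toward one side of the slider plate, which the monitor display can then show the operator.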
VEHICLE VISION SYSTEM WITH ENHANCED LANE TRACKING
A driver assistance system for a vehicle includes a camera disposed at a vehicle and having a field of view forward of the vehicle. A control includes an image processor that is operable to process image data captured by the camera. Responsive to processing by the image processor of image data captured by the camera, the image processor is operable to determine lane markings demarcating the lane in which the vehicle is traveling. Responsive to processing by the image processor of captured image data and responsive to at least one of (i) a map input and (ii) a location input, the control estimates a path of travel for the vehicle to maintain the vehicle in the lane in which the vehicle is traveling in situations where the lane markings demarcating the lane in which the vehicle is traveling are not readily determinable.
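The fallback behaviour described above can be sketched as a small selection routine. The helper names and the centring rule are assumptions for illustration only; the patent does not disclose this logic.

```python
# Illustrative sketch: prefer detected lane markings, fall back to the
# map/location input when markings are not readily determinable.
from typing import Optional, Sequence, List

def estimate_path(marking_offsets: Optional[Sequence[float]],
                  map_path: Sequence[float]) -> List[float]:
    """Return a lateral path estimate for lane keeping."""
    if marking_offsets is not None and len(marking_offsets) >= 2:
        # Centre the path between the left and right marking offsets.
        left, right = marking_offsets[0], marking_offsets[-1]
        return [(left + right) / 2.0]
    # Markings faded, occluded, or otherwise not determinable: use the map.
    return list(map_path)
```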
Apparatus, method and system for adjusting predefined calibration data for generating a perspective view
The present application relates to a system for generating a surround view and a method of operating the system. A synthesizer module synthesizes an output frame from input frames in accordance with predefined calibration data. The input frames have an overlapping region imaging an overlapping field of view captured by two adjacent cameras. An adjustment module receives height level information representative of a height level in the overlapping region; selects a data record out of a set of predefined calibration data records in accordance with the height level information; and updates a part of the predefined calibration data with the selected data record.
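The record-selection step might be sketched as a nearest-key lookup. The keying of records by a discrete height level, and both function names, are assumptions made for illustration.

```python
# Minimal sketch: pick the predefined calibration record whose height key
# is nearest the reported height level, then patch only the part of the
# calibration covering the overlapping region.

def select_calibration_record(records: dict, height_level_m: float):
    """Nearest-neighbour selection over the predefined record set."""
    nearest = min(records, key=lambda h: abs(h - height_level_m))
    return records[nearest]

def update_calibration(calibration: dict, region: str, record) -> dict:
    """Replace the calibration entry for one overlap region only."""
    updated = dict(calibration)
    updated[region] = record
    return updated
```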
Vehicle camera calibration apparatus and method
A vehicle camera calibration apparatus and method are provided. The vehicle camera calibration apparatus includes a camera module configured to acquire an image representing a road from a plurality of cameras installed in a vehicle, an input/output module configured to receive, as an input, the acquired image from the camera module, or output a corrected image, a lane detection module configured to detect a lane and extract a feature point of the lane from an image received from the input/output module, and a camera correction module configured to estimate a new external parameter using a lane equation and a lane width based on initial camera information and external parameter information in the image received from the input/output module, and to correct the image.
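One piece of the external-parameter estimation can be illustrated with a deliberately simplified sketch: because a ground-plane projection scales linearly with camera height, a known lane width constrains the height parameter. This is not the patent's lane-equation method, only an assumed toy version of the idea.

```python
# Simplified sketch: rescale the assumed camera height so that the lane
# width measured in the projected image matches the known lane width.

def corrected_camera_height(current_height_m: float,
                            measured_width_m: float,
                            true_width_m: float) -> float:
    """An over-measured lane width implies the assumed height is too
    large; scale it down proportionally (and vice versa)."""
    return current_height_m * (true_width_m / measured_width_m)
```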
METHOD AND DEVICE FOR DISPLAYING A TOP VIEW IMAGE OF A VEHICLE
A method of displaying a top view image of a vehicle includes: extracting, by a controller, a location of maximum resolution of a top view image of a vehicle generated by cameras; setting, by the controller, a replacement image area that is larger than an image area of the vehicle included in the top view image of the vehicle by a reference value at the location of the maximum resolution of the top view image of the vehicle; and replacing, by the controller, a non-obtainable image area of the top view image of the vehicle with the replacement image area, in which the reference value is a value larger than the non-obtainable image area.
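The sizing rule for the replacement area can be sketched directly from the claim language. The parameter names and the fixed margin are assumptions; the claim only requires the reference value to be larger than the non-obtainable area.

```python
# Sketch of the replacement-area rule: the replacement area exceeds the
# vehicle's image area by a reference value that is itself strictly
# larger than the non-obtainable image area.

def replacement_area(vehicle_area_px: int,
                     non_obtainable_area_px: int,
                     margin_px: int = 1) -> int:
    reference_value = non_obtainable_area_px + margin_px  # strictly larger
    return vehicle_area_px + reference_value
```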
SYSTEM AND METHOD FOR WORK MACHINE
A system includes a processor and a display. A plurality of cameras capture surroundings images of a work machine. The processor acquires image data indicative of the surroundings images. The processor synthesizes the surroundings images and generates a panorama moving image from viewpoints that move around the work machine. The display displays, based on a signal from the processor, the panorama moving image from the viewpoints that move around the work machine. The system may include the cameras and the work machine.
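The moving viewpoints can be pictured as positions orbiting the work machine, one per frame of the panorama moving image. The circular path and the function name are illustrative assumptions only.

```python
# Illustrative sketch: viewpoint positions on a circle around the work
# machine origin, one per frame of the panorama moving image.
import math

def orbit_viewpoints(n_frames: int, radius_m: float):
    """(x, y) camera positions for successive frames of the orbit."""
    points = []
    for i in range(n_frames):
        theta = 2.0 * math.pi * i / n_frames
        points.append((radius_m * math.cos(theta),
                       radius_m * math.sin(theta)))
    return points
```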
TARGETLESS VEHICULAR CAMERA CALIBRATION SYSTEM
A vehicular camera calibration system includes a camera disposed at a vehicle, and an electronic control unit (ECU). The camera calibration system utilizes an intrinsic parameter of the camera and uses a kinematic model of motion of the vehicle that is determined at least in part via processing of multiple frames of captured image data. The camera calibration system, responsive to processing of multiple frames of image data captured by the camera as the vehicle moves along a path of travel, and based at least in part on (i) an intrinsic parameter of the camera and (ii) the kinematic model of motion of the vehicle, determines misalignment of the camera. The camera calibration system determines camera misalignment without use of a fiducial marker in the field of view of the camera as the vehicle moves along the path of travel and without use of reference points on the vehicle.
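One ingredient of targetless misalignment estimation can be sketched under strong assumptions (straight driving on flat ground): the camera's yaw misalignment is the angle between the tracked-feature motion observed across frames and the motion the kinematic model predicts. The names and the two-vector formulation are hypothetical.

```python
# Sketch only: compare observed ground-plane feature motion with the
# motion direction predicted by the vehicle's kinematic model.
import math

def yaw_misalignment_deg(observed_flow: tuple,
                         predicted_flow: tuple) -> float:
    """Angle (degrees) from the predicted motion vector to the observed
    one; nonzero suggests the camera is rotated about its yaw axis."""
    obs = math.atan2(observed_flow[1], observed_flow[0])
    pred = math.atan2(predicted_flow[1], predicted_flow[0])
    return math.degrees(obs - pred)
```

Note that this needs no fiducial marker or vehicle reference point, in keeping with the "targetless" framing: only image-derived motion and the kinematic model are compared.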
Method and apparatus for calibrating a plurality of cameras
A camera calibration method includes obtaining a plurality of images of surroundings of a vehicle captured by a plurality of cameras, setting a region of interest (ROI) in each of the images, detecting one or more feature points of the set ROIs, matching a first feature point of a first ROI and a second feature point of a second ROI based on the detected feature points, calculating a first bird-view coordinate of the first feature point and a second bird-view coordinate of the second feature point, and calibrating the cameras by adjusting an extrinsic parameter of each of the cameras based on an error between the first bird-view coordinate and the second bird-view coordinate.
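The error term driving the extrinsic adjustment can be sketched as the distance between matched feature points once both are projected into the common bird-view frame. The grid-search adjustment shown is an assumed stand-in for whatever optimisation the method actually uses.

```python
# Minimal sketch: the calibration loop would adjust extrinsics so that
# matched feature points coincide in the bird-view (top-down) frame.
import math

def bird_view_error(p1: tuple, p2: tuple) -> float:
    """Euclidean distance between the two bird-view coordinates."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def best_offset(p1: tuple, p2: tuple, candidates):
    """Coarse search: pick the x-offset for the second camera's points
    that minimises the bird-view error (toy one-parameter adjustment)."""
    return min(candidates,
               key=lambda dx: bird_view_error(p1, (p2[0] + dx, p2[1])))
```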
Method for stitching image data captured by multiple vehicular cameras
A method for stitching image data captured by multiple vehicular cameras includes equipping a vehicle with a vehicular vision system having a control and a plurality of cameras disposed at the vehicle so as to have respective fields of view exterior the vehicle. Image data captured by first and second cameras of the plurality of cameras is processed to detect and track an object present in and moving within an overlapping portion of the fields of view of the first and second cameras. Image data captured by the first and second cameras is stitched, via processing of the captured image data, to form stitched images. Stitching of captured image data is adjusted responsive to determination of a difference between a feature of a detected and tracked object as captured by the first camera and the feature of the detected and tracked object as captured by the second camera.
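The adjustment step might be sketched as nudging the stitch seam by the observed disparity of one tracked feature between the two cameras. The seam-position model, the gain, and the names are all assumptions for illustration.

```python
# Hypothetical sketch: shift the seam toward agreement between the two
# cameras' views of the same tracked object feature.

def adjusted_seam(seam_px: int,
                  feat_cam1_px: float,
                  feat_cam2_px: float,
                  gain: float = 0.5) -> int:
    """When the first and second cameras disagree on where a tracked
    feature is, move the seam a fraction of that difference."""
    difference = feat_cam1_px - feat_cam2_px
    return seam_px + round(gain * difference)
```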
MULTI-CAMERA VEHICULAR VISION SYSTEM
A vehicular vision system includes at least two cameras disposed at a vehicle. The at least two cameras includes a first camera and a second camera. The first camera is disposed at an in-cabin side of a windshield of the vehicle and views through the windshield forward of the vehicle. Each of the first camera and the second camera includes a first CMOS imaging array having at least one million photosensor elements arranged in rows and columns. The second camera includes an encryptor that encrypts image data that is captured by the second camera. The first camera includes a decryptor that decrypts image data encrypted by the second camera. Image data captured by the first camera and image data captured by the second camera may be processed at an electronic control unit (ECU) for a machine vision system of the vehicle.
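The encryptor/decryptor pairing can be illustrated with a toy symmetric cipher. This is purely a sketch: a hash-derived XOR keystream stands in for whatever cipher the second camera actually applies, and key management is out of scope.

```python
# Purely illustrative symmetric scheme: the same operation encrypts at
# the second camera and decrypts at the first camera, given a shared key.
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    """Deterministic keystream derived from the key via SHA-256 blocks."""
    out = bytearray()
    for block in count():
        out += hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        if len(out) >= length:
            return bytes(out[:length])

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR with the keystream; applying it twice restores the data."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
```

Because XOR is its own inverse, the first camera's decryptor is the same routine run with the shared key, after which the ECU can process both cameras' image data for the machine vision system.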