Patent classifications
B60R2300/30
Lane Detection Systems And Methods
Example lane detection systems and methods are described. In one implementation, a method receives an image from a front-facing vehicle camera and applies a geometric transformation to the image to create a bird's-eye view of the image. The method analyzes the bird's-eye view of the image using a neural network, which was previously trained using side-facing vehicle camera images, to determine a lane position associated with the bird's-eye view of the image.
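The geometric transformation described in this abstract is typically an inverse-perspective mapping realized as a planar homography. The sketch below estimates a 3x3 homography from four ground-plane point correspondences and reprojects pixel coordinates; the corner coordinates and the small linear solver are illustrative assumptions, not taken from the patent.

```python
def solve_linear(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def homography_from_points(src, dst):
    """Estimate H (with h33 fixed to 1) mapping each src corner to its dst corner."""
    a, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve_linear(a, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def warp_point(h, x, y):
    """Apply the projective mapping to one pixel coordinate."""
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / w,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / w)

# Lane trapezoid in the front-camera image -> rectangle in the top view
# (hypothetical pixel coordinates for a 1280x720 frame).
src = [(200, 720), (1080, 720), (700, 450), (580, 450)]
dst = [(300, 720), (980, 720), (980, 0), (300, 0)]
H = homography_from_points(src, dst)
```

In practice the same warp would be applied densely (e.g. via `cv2.warpPerspective`) before the bird's-eye frame is passed to the network.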
CAMERA MONITOR SYSTEM FOR COMMERCIAL VEHICLES INCLUDING WHEEL POSITION ESTIMATION
A method for estimating a trailer wheel position includes identifying a first set of wheel locations in a first image. Each of the wheel locations in the first set of wheel locations is associated with a corresponding trailer angle. The first set of wheel locations is clustered and a primary cluster in the first set of wheel locations is identified. A best fit curve is applied to the primary cluster. The best fit curve is a curve associating wheel position to trailer angle. An estimated wheel position is determined by applying a determined trailer angle to the best fit curve in response to the wheel being hidden in the first image. The estimated wheel position is output to at least one additional vehicle system.
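The pipeline in this abstract (cluster detections, keep the primary cluster, fit a curve from trailer angle to wheel position, evaluate the fit when the wheel is hidden) can be sketched as follows; the sample detections, the greedy 1-D clustering rule, and the linear best-fit model are illustrative assumptions, not the patent's.

```python
def cluster_1d(points, radius):
    """Greedy 1-D clustering of (angle, position) detections on position."""
    clusters = []
    for p in sorted(points, key=lambda q: q[1]):
        if clusters and p[1] - clusters[-1][-1][1] <= radius:
            clusters[-1].append(p)
        else:
            clusters.append([p])
    return clusters

def fit_line(samples):
    """Least-squares best-fit line: position = a * angle + b."""
    n = len(samples)
    sx = sum(a for a, _ in samples); sy = sum(p for _, p in samples)
    sxx = sum(a * a for a, _ in samples); sxy = sum(a * p for a, p in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

# Hypothetical (trailer_angle_deg, wheel_x_px) detections; one outlier at 400.
detections = [(0, 100), (5, 120), (10, 140), (15, 160), (20, 180), (7, 400)]
primary = max(cluster_1d(detections, radius=30), key=len)  # primary cluster
a, b = fit_line(primary)

def estimated_wheel_position(trailer_angle):
    """Fallback used when the wheel is hidden in the image."""
    return a * trailer_angle + b
```

Taking the largest cluster before fitting discards spurious detections (here the outlier at 400) so the angle-to-position curve is not skewed.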
Apparatus, systems and methods for classifying digital images
The present disclosure is directed to apparatuses, systems and methods for automatically classifying images of occupants inside a vehicle. More particularly, the present disclosure is directed to apparatuses, systems and methods for automatically classifying images of occupants inside a vehicle by comparing current image feature data to previously classified image features.
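"Comparing current image feature data to previously classified image features" is, at its simplest, a nearest-neighbour lookup. A minimal sketch, with made-up feature vectors and occupant labels:

```python
import math

def classify(features, labelled_examples):
    """Return the label of the closest previously classified feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(labelled_examples, key=lambda ex: dist(features, ex[0]))[1]

# Hypothetical stored (feature_vector, label) pairs for cabin occupants.
previously_classified = [
    ([0.9, 0.1, 0.2], "adult"),
    ([0.2, 0.8, 0.1], "child_seat"),
    ([0.1, 0.2, 0.9], "empty_seat"),
]
```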
METHOD FOR DISPLAYING A VIRTUAL VIEW OF THE AREA SURROUNDING A VEHICLE, COMPUTER PROGRAM, CONTROL UNIT, AND VEHICLE
A method for displaying a virtual view of the area surrounding a vehicle, in particular a surround view or panoramic view. The method comprises: capturing a camera image of a part of the surroundings using a camera having a wide-angle lens; ascertaining an item of image information dependent on the captured camera image, the captured camera image being geometrically corrected; and displaying the virtual view by projecting the ascertained item of image information onto a virtual projection plane. When ascertaining the item of image information, the resolution of the geometrically corrected camera image is increased in a first partial region.
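Increasing the resolution only in a first partial region can be sketched as upsampling one sub-rectangle of the corrected image; the region bounds, the integer scale factor, and nearest-neighbour interpolation are illustrative assumptions.

```python
def upsample_region(image, top, left, height, width, factor=2):
    """Nearest-neighbour upsampling of one rectangular sub-region of an image
    given as a list of pixel rows."""
    out = []
    for row in image[top:top + height]:
        # Repeat each pixel `factor` times horizontally...
        expanded = [px for px in row[left:left + width] for _ in range(factor)]
        # ...and each row `factor` times vertically.
        out.extend(list(expanded) for _ in range(factor))
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
detail = upsample_region(image, top=0, left=0, height=2, width=2)
```

A real system would use a proper interpolation or super-resolution step, but the partial-region selection logic is the same.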
Method and System for Providing Behavior of Vehicle Operator Using Virtuous Cycle
A method or system is capable of detecting operator behavior ("OB") utilizing a virtuous cycle containing sensors, a machine learning center ("MLC"), and a cloud-based network ("CBN"). In one aspect, the process monitors operator body language captured by interior sensors and captures surrounding information observed by exterior sensors onboard a vehicle while the vehicle is in motion. After selectively recording the captured data in accordance with an OB model generated by the MLC, an abnormal OB ("AOB") is detected in accordance with vehicular status signals received by the OB model. Upon rewinding the recorded operator body language and surrounding information leading up to the detection of the AOB, labeled data associated with the AOB is generated. The labeled data is subsequently uploaded to the CBN to facilitate OB model training at the MLC via the virtuous cycle.
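The "rewind" step implies a bounded buffer of recent frames that is snapshotted and labeled when an abnormal event fires. A minimal sketch using a ring buffer; the buffer depth, frame format, and event trigger are illustrative assumptions.

```python
from collections import deque

class BehaviorRecorder:
    """Keeps only the most recent sensor frames; on AOB detection, the
    frames leading up to the event are emitted as labeled data."""

    def __init__(self, history=3):
        self.buffer = deque(maxlen=history)  # old frames are dropped automatically

    def record(self, frame):
        self.buffer.append(frame)

    def label_event(self, label):
        """Rewind: snapshot the frames that led up to the detection."""
        return {"label": label, "frames": list(self.buffer)}

recorder = BehaviorRecorder(history=3)
for frame in ["f1", "f2", "f3", "f4", "f5"]:
    recorder.record(frame)
sample = recorder.label_event("AOB")  # only f3..f5 remain in the window
```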
Display control apparatus and method for stepwise deforming of presentation image radially by increasing display ratio
A display control apparatus includes a receiver that receives a recognition result of a change in the environment around a vehicle, and a controller that controls an image generation apparatus to generate an image corresponding to a presentation image to be displayed on the display medium. The controller generates and outputs a control signal to the image generation apparatus, based on the recognition result, so as to deform the presentation image radially on the display medium such that the deformed presentation image moves toward at least one of the sides of the display medium and disappears sequentially across the edges of the display medium.
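The stepwise radial deformation can be sketched by pushing the presentation image's sample points outward from the display centre by a growing ratio, dropping any point that crosses a display edge; the display size, the sample points, and the ratio schedule below are illustrative assumptions.

```python
def deform_step(points, center, ratio, width, height):
    """Scale points radially about the display centre; points pushed past the
    display edges are dropped (they 'disappear' off-screen)."""
    cx, cy = center
    moved = [(cx + (x - cx) * ratio, cy + (y - cy) * ratio) for x, y in points]
    return [(x, y) for x, y in moved if 0 <= x < width and 0 <= y < height]

# Hypothetical sample points of the presentation image on a 160x90 display.
original = [(30, 30), (60, 40), (90, 50)]
frames = [deform_step(original, center=(80, 45), ratio=1.5 ** k,
                      width=160, height=90)
          for k in range(1, 5)]  # display ratio increases stepwise per frame
```

With each frame the surviving point set shrinks, reproducing the sequential disappearance described in the abstract.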
Vehicle collision warning prevention method using optical flow analysis
A vehicle collision warning prevention method includes the steps of:
(a) extracting a forward video of a vehicle and video recognition information from a video recognition module mounted in the vehicle, and detecting a size change rate of a forward object included in the video recognition information at each frame of the forward video;
(b) calculating an average optical flow change rate ("OFCR") over a predetermined frame section;
(c) determining whether the value obtained by subtracting the average OFCR from the current OFCR is less than a predetermined threshold value;
(d) determining that a brake operation signal is generated when it is determined that the value is less than the threshold value;
(e) determining whether a collision warning signal is generated within a predetermined time after step (d); and
(f) preventing output of the collision warning signal when the collision warning signal is generated at step (e).
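Steps (b) through (f) amount to a windowed-average comparison followed by time-gated suppression. A sketch under assumed values; the window lengths and the negative threshold are illustrative, not from the patent.

```python
def brake_detected(ofcr_history, current_ofcr, threshold=-0.05):
    """Steps (b)-(d): is (current OFCR - windowed average) below the threshold?
    A sharp drop in the size change rate suggests the lead vehicle is braking."""
    average = sum(ofcr_history) / len(ofcr_history)
    return (current_ofcr - average) < threshold

def suppress_warning(brake_frame, warning_frame, window=10):
    """Steps (e)-(f): drop any collision warning raised within `window`
    frames after the inferred brake operation signal."""
    return brake_frame is not None and 0 <= warning_frame - brake_frame <= window
```

The suppression avoids nuisance alerts during a deliberate braking manoeuvre the driver is already reacting to.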
METHOD FOR ADAPTING A BRIGHTNESS OF A HIGH-CONTRAST IMAGE AND CAMERA SYSTEM
The invention relates to a method for adapting a brightness (28) of a high-contrast image (20, 22) of an environmental region (9) of a motor vehicle (1), including the following steps:
a) capturing a first image with a first camera parameter of a camera system (2) of the motor vehicle (1) and a second image with a second camera parameter of the camera system (2) by means of the camera system (2);
b) generating a first high-contrast image (20) of the environmental region (9) with the first image and the second image;
c) determining a high-contrast brightness value (23) of the first high-contrast image (20);
d) comparing the high-contrast brightness value (23) to a predetermined high-contrast target brightness value;
e) adapting the first high-contrast image (20) depending on the comparison according to step d);
f) determining a first brightness value of the first image and/or a second brightness value of the second image;
g) comparing the first brightness value to a first target brightness value (26) and/or the second brightness value to a second target brightness value (27);
h) adapting the first camera parameter and/or the second camera parameter depending on the comparison according to step g);
i) capturing a third image of the environmental region (9) with the adapted first camera parameter and a fourth image of the environmental region (9) with the adapted second camera parameter by means of the camera system (2);
j) generating a second high-contrast image (22) of the environmental region (9) with the third image and the fourth image; and
k) providing the second high-contrast image (22) as a high-contrast image (20, 22) adapted in brightness for representing the environmental region (9) of the motor vehicle (1).
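The core of steps c) through h) is a brightness feedback loop: measure the fused image, compare against a target, and nudge the exposure parameters for the next capture. A condensed sketch; the stand-in frame, the gain step, and the target values are illustrative assumptions, not the patent's.

```python
def mean_brightness(image):
    """Step c): mean pixel brightness of the (fused) high-contrast image."""
    return sum(sum(row) for row in image) / (len(image) * len(image[0]))

def adapt_exposure(exposure, measured, target, step=0.1):
    """Steps d)/g)/h): raise exposure when too dark, lower it when too bright."""
    if measured < target:
        return exposure * (1 + step)
    if measured > target:
        return exposure * (1 - step)
    return exposure

short_exp, long_exp = 1.0, 4.0          # hypothetical first/second camera parameters
fused = [[60, 80], [100, 120]]          # stand-in fused high-contrast frame
hdr_brightness = mean_brightness(fused)
short_exp = adapt_exposure(short_exp, hdr_brightness, target=110)
long_exp = adapt_exposure(long_exp, hdr_brightness, target=130)
# The adapted parameters would drive the next capture (steps i)-k)).
```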
Image system for a vehicle
A system comprises: image capture devices associated with a host vehicle and configured to capture image data indicative of an environment of the host vehicle; sensors associated with the host vehicle and configured to capture object data indicative of the presence of an object in a vicinity of the host vehicle; and a processor communicatively coupled to the image capture devices and the sensors to: receive the captured image data and captured object data; aggregate the object data captured by each sensor; determine, in dependence on the aggregated object data, a geometrical parameter of a virtual projection surface; generate a virtual projection surface in dependence on the geometrical parameter; determine, in dependence on the captured image data, an image texture; and map the image texture onto the generated virtual projection surface.
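Determining a geometrical parameter of the virtual projection surface from the aggregated object data can be sketched as pulling a bowl-shaped surface's radius in to the nearest detected object, so nearby obstacles are not flattened onto the ground plane; the sensor readings, default radius, and clamping rule are illustrative assumptions.

```python
def aggregate_objects(per_sensor_detections):
    """Merge object-distance lists reported by each sensor into one list."""
    return [d for detections in per_sensor_detections for d in detections]

def bowl_radius(object_distances, default_radius=10.0, min_radius=2.0):
    """Geometrical parameter: radius of the virtual projection bowl,
    clamped between a minimum and the obstacle-free default."""
    if not object_distances:
        return default_radius
    return max(min_radius, min(min(object_distances), default_radius))

readings = [[7.5, 12.0], [], [4.2]]   # hypothetical distances from three sensors
radius = bowl_radius(aggregate_objects(readings))
```

The camera image texture would then be mapped onto the surface generated with this radius.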
APPARATUS AND METHOD FOR PROCESSING AN IMAGE OF A VEHICLE
The present disclosure relates to a vehicle image processing device and a method therefor. A vehicle image processing apparatus may include a storage that stores optical property information of a first camera among a plurality of cameras for obtaining a vehicle periphery image, a processor that determines whether backlight is present in the vehicle periphery image and generates a display image based on whether the backlight is present, and a communication device controlled by the processor and communicating with a device in the vehicle. The processor may calculate location information of a light source for at least one of the first camera or the vehicle by using the coordinates of a shadow object of the vehicle, recognized from the vehicle periphery image, and the coordinates of the vehicle, and may determine whether the backlight is present by comparing the location information of the light source with the optical property information.
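The shadow-based check reduces to simple geometry: a shadow is cast away from its light source, so the vector from the shadow toward the vehicle points at the light, and backlight can be declared when that direction falls inside the camera's field of view. The coordinates, camera heading, and field of view below are illustrative assumptions.

```python
import math

def light_azimuth(vehicle_xy, shadow_xy):
    """Azimuth (degrees) of the light source: the shadow points away from the
    light, so the light lies along the shadow-to-vehicle vector."""
    dx = vehicle_xy[0] - shadow_xy[0]
    dy = vehicle_xy[1] - shadow_xy[1]
    return math.degrees(math.atan2(dy, dx))

def is_backlight(azimuth, camera_heading, fov=60.0):
    """Backlight if the light azimuth lies within the camera's field of view
    (difference wrapped into [-180, 180) degrees)."""
    diff = (azimuth - camera_heading + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov / 2
```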