Patent classifications
B60R2300/60
GENERATING VIRTUAL IMAGES BASED ON CAPTURED IMAGE DATA
Systems and methods for generating a virtual view of a virtual camera based on an input image are described. A system for generating such a virtual view can include a capturing device comprising a physical camera and a depth sensor. The system also includes a controller configured to: determine an actual pose of the capturing device; determine a desired pose of the virtual camera for showing the virtual view; define an epipolar geometry between the actual pose of the capturing device and the desired pose of the virtual camera; and generate, for the virtual camera, a virtual image depicting objects within the input image according to the desired pose, based on an epipolar relation between the actual pose of the capturing device, the input image, and the desired pose of the virtual camera.
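With a depth sensor alongside the physical camera, the epipolar relation between the two poses reduces to reprojecting each depth-tagged pixel into the virtual camera. The sketch below illustrates that idea only; the function name, intrinsics matrix `K`, and the relative pose `R_rel`/`t_rel` are illustrative assumptions, not terms from the patent.

```python
import numpy as np

def reproject_pixel(pixel, depth, K, R_rel, t_rel):
    """Back-project a pixel with known depth from the physical camera,
    then project the resulting 3D point into the virtual camera."""
    u, v = pixel
    # 3D point in the physical camera's frame (pinhole model)
    p_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    # transform into the virtual camera's frame via the relative pose
    p_virt = R_rel @ p_cam + t_rel
    # perspective projection back to pixel coordinates
    uvw = K @ p_virt
    return uvw[:2] / uvw[2]
```

When the virtual pose coincides with the actual pose (identity rotation, zero translation), the pixel maps back onto itself, which is a quick sanity check for the sign conventions.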
OBJECT DETECTION VISION SYSTEM
An object detection vision system and methods are disclosed. A method for detecting objects in a vision system of an industrial machine includes receiving image data from one or more vision cameras and receiving detection data, including one or more detected objects, from one or more detection devices. The method includes combining the detection data with the image data and transforming the detection data into the image data based on one or more objects in the image data. The method also includes displaying an indication of the one or more detected objects in the image data based on the transformed detection data.
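Before detection data can be combined with image data, detections typically have to be associated with the camera frame they describe. A minimal sketch of that association step, assuming timestamped detections and a tolerance window (all names and the 50 ms default are illustrative, not from the patent):

```python
def match_detections_to_frame(detections, frame_time, max_skew=0.05):
    """Keep only detections whose timestamp falls within max_skew seconds
    of the camera frame's timestamp, before fusing them with the image.

    detections: iterable of (timestamp_seconds, detected_object) tuples.
    """
    return [obj for ts, obj in detections if abs(ts - frame_time) <= max_skew]
```

The matched detections could then be transformed into image coordinates and drawn as indications over the frame.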
VEHICLE MIRROR IMAGE SIMULATION
A method of providing an image includes obtaining at least one first image of a surrounding area (52) from a first camera (26, 33, 38A, 38B, 40A, and 40B). At least one second image of the surrounding area (52) is obtained from a second camera (26, 33, 38A, 38B, 40A, and 40B). The at least one first image is fused with the at least one second image to generate a three-dimensional model (51) of the surrounding area (52). A first image (54A) of the three-dimensional model is provided to a display by determining a first position of an operator. A second image (54B) of the three-dimensional model is provided to the display by determining when the operator is in a second position, to simulate motion parallax.
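One simple way to simulate motion parallax between two rendered viewpoints is to blend them according to the operator's lateral head position, so the displayed view shifts as a real mirror's reflection would. This is a minimal sketch under that assumption; the function, the `span` of tracked head positions, and the linear blend are all illustrative choices, not the patent's method.

```python
def parallax_view(view_left, view_right, head_pos, span=(-0.3, 0.3)):
    """Blend two renderings of the 3D model by lateral head position (meters
    from mirror center) to approximate motion parallax between viewpoints."""
    lo, hi = span
    # normalized blend weight, clamped so the view saturates at the span edges
    w = min(max((head_pos - lo) / (hi - lo), 0.0), 1.0)
    return [(1.0 - w) * l + w * r for l, r in zip(view_left, view_right)]
```

In practice a renderer would re-project the full 3D model from a moved virtual viewpoint rather than blend pixels, but the weighting by operator position is the same.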
Method for displaying the surroundings of a vehicle on a display device, processing unit and vehicle
A method for displaying an environment of a vehicle on a display includes: recording the environment with at least two cameras, each having a different field of view, wherein the fields of view of adjacent cameras overlap; creating a panoramic image from at least two images taken by differing cameras, the images being projected into a reference plane for creating the panoramic image; ascertaining depth information pertaining to an object in the environment by triangulation from at least two differing individual images taken by the same camera; generating an overlay structure as a function of the ascertained depth information, the overlay structure being uniquely assigned to an imaged object; and representing the created panoramic image, containing the at least one object, and the at least one generated overlay structure on the display such that the overlay structure is displayed on, and/or adjacent to, the assigned object.
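Triangulating depth from two images taken by the same moving camera follows the classic stereo relation: depth is inversely proportional to the pixel disparity between the views, scaled by focal length and the baseline the camera traveled. A minimal sketch of that relation (the function name and example values are illustrative):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulation: depth = f * B / d, where f is the focal length in
    pixels, B the baseline between the two views in meters, and d the
    disparity (pixel shift of the object between the views)."""
    if disparity_px <= 0:
        raise ValueError("object must shift between the two views")
    return focal_px * baseline_m / disparity_px
```

For example, a 10-pixel shift seen with a 500-pixel focal length across a 0.5 m baseline places the object 25 m away; the computed depth then drives which overlay structure is generated.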
Apparatus and method for displaying rear image of vehicle
Disclosed are an apparatus and method for displaying a rear image of a vehicle. The apparatus includes a rear camera configured to capture a rear image of the vehicle, a steering angle sensor configured to sense a steering angle of a steering wheel, a display unit, and an image processing unit. When the vehicle moves backward, the image processing unit converts the rear image captured by the rear camera into an image oriented in the direction in which the vehicle is to move backward, according to the steering angle sensed by the steering angle sensor, and displays the converted image through the display unit.
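One way to orient a rear view toward the direction of reverse travel is to slide the displayed crop of a wide-angle rear image in proportion to the steering angle. The sketch below assumes that approach; the field-of-view and maximum-steer values, the sign convention, and the function name are all illustrative assumptions, not the patent's implementation.

```python
def rearview_crop_offset(image_width_px, steering_deg,
                         hfov_deg=120.0, max_steer_deg=35.0):
    """Horizontal pixel offset of the displayed crop, proportional to
    steering angle, so the view pans toward where the vehicle will travel."""
    px_per_deg = image_width_px / hfov_deg
    # reversing with the wheels turned swings the vehicle's rear toward the
    # opposite side, hence the sign flip (a simplifying assumption)
    view_deg = -steering_deg * (hfov_deg / 2.0) / max_steer_deg
    return int(round(view_deg * px_per_deg))
```

A production system would instead warp the image with the vehicle's kinematics, but the proportional pan conveys the idea.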
LOW-SPEED MANEUVER ASSISTING SYSTEM AND METHOD
A maneuver assisting system and method are provided. The system includes: at least one processor configured to process operations of the system; one or more memories for storing instructions; a communication unit for communicating between components of the system, and between the system and the vehicle and/or a user equipment; a Surrounding View Monitoring (SVM) unit comprising a plurality of sensors for providing a plurality of vehicle-related information; a Human-Machine Interface (HMI) configured for a driver of the vehicle to interact with the system; a display unit configured to display the HMI; a motion planning unit configured to generate a trajectory for the vehicle to follow; and a motion control unit configured to control automated maneuvers of the vehicle. In particular, the HMI is configured to be implemented in the user equipment so that the driver can remotely operate the system using the user equipment.
Intelligent vehicle systems and control logic for surround view augmentation with object model recognition
Presented are intelligent vehicle systems with networked on-body vehicle cameras having camera-view augmentation capabilities, methods for making/using such systems, and vehicles equipped with such systems. A method for operating a motor vehicle includes a system controller receiving, from a network of vehicle-mounted cameras, camera image data containing a target object from the perspective of one or more cameras. The controller analyzes the camera image data to identify characteristics of the target object and classifies these characteristics into a corresponding model collection set associated with the type of target object. The controller then identifies a 3D object model assigned to the model collection set associated with the target object type. A new “virtual” image is generated by replacing the target object with the 3D object model positioned in a new orientation. The controller commands a resident vehicle system to execute a control operation using the new image.
Transparent Trailer Articulation View
A method for providing a panoramic view (152) of an environment behind a trailer (106) of a vehicle-trailer system (100) is disclosed. The method includes receiving a first image (133, 133b) from a rear trailer camera (132, 132b), a second image (133, 133c) from a right-side trailer camera (132, 132c), and a third image (133, 133d) from a left-side trailer camera (132, 132d). The method includes determining a panoramic view (152) based on the first image (133, 133b), the second image (133, 133c), and the third image (133, 133d). Additionally, the method includes determining a trailer angle (α) based on sensor system data (131) received from a sensor system (130). The method includes determining a viewing area (154) within the panoramic view (152) based on the trailer angle (α) and sending instructions (156) to a display (122) to display the viewing area (154).
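Selecting a viewing area from a stitched panorama based on trailer angle can be pictured as sliding a fixed-width window so its center tracks the articulation angle. This sketch assumes a panorama with a uniform angular-to-pixel mapping; the field-of-view default, the clamping, and the names are illustrative assumptions.

```python
def viewing_area_left_edge(pano_width_px, trailer_angle_deg,
                           window_px, pano_fov_deg=180.0):
    """Left edge (in pixels) of the displayed window within the panorama,
    centered on the trailer's articulation angle and clamped to the image."""
    px_per_deg = pano_width_px / pano_fov_deg
    center = pano_width_px / 2.0 + trailer_angle_deg * px_per_deg
    left = int(round(center - window_px / 2.0))
    # keep the window fully inside the stitched panorama
    return max(0, min(left, pano_width_px - window_px))
```

At zero trailer angle the window sits in the middle of the panorama; as the trailer articulates, the window follows, which is what keeps the displayed region aligned with the space behind the trailer.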
Enhanced visibility system for work machines
An enhanced visibility system for a work machine includes an image capture device, a sensor, one or more control circuits, and a display. The image capture device is configured to obtain image data of an area surrounding the work machine. The sensor is configured to obtain data regarding physical properties of the area surrounding the work machine. The control circuits are configured to receive the image data and the data regarding the physical properties, and augment the image data with the data regarding the physical properties to generate augmented image data. The display is configured to display the augmented image data to provide an enhanced view of the area surrounding the work machine.
Parking assistance device and method
A parking assistance device includes: a camera configured to capture a rear-view image of a vehicle; a plurality of sensors configured to sense an obstacle located around the vehicle; a controller configured to generate a parking guide line to guide the vehicle into a target parking space, and to assist a driver of the vehicle in parking based on a separation distance between the parking guide line and a predicted entrance trajectory corresponding to a steering angle of the vehicle; and a display configured to match and display the rear-view image of the vehicle with the parking guide line.
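A predicted entrance trajectory for a given steering angle is commonly derived from the kinematic bicycle model, under which the vehicle follows a circular arc whose radius depends on wheelbase and steering angle; the separation to the guide line is then a lateral distance along that arc. The sketch below shows only those two relations; the function names, parameters, and the bicycle-model choice are illustrative assumptions rather than the patent's stated method.

```python
import math

def turning_radius_m(wheelbase_m, steering_rad):
    """Kinematic bicycle model: R = L / tan(delta). Straight wheels give an
    infinite radius (a straight predicted trajectory)."""
    if abs(steering_rad) < 1e-9:
        return math.inf
    return wheelbase_m / math.tan(steering_rad)

def lateral_separation_m(trajectory_y_m, guide_line_y_m):
    """Separation distance between the predicted entrance trajectory and the
    parking guide line, compared at a common longitudinal station."""
    return abs(trajectory_y_m - guide_line_y_m)
```

A small separation would indicate the current steering angle carries the vehicle onto the guide line; a growing one could prompt a steering correction cue on the display.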