Patent classifications
B60R1/31
Hitch assist system
A hitch assist system is provided herein that includes a sensing system configured to detect a trailer proximate a vehicle. The hitch assist system also includes a controller for determining an environmental visibility level; determining an offset from a first sensor in high visibility levels and from a second sensor in low visibility levels; and maneuvering the vehicle along a path to align a hitch ball with a coupler of the trailer.
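The visibility-based sensor selection described in this abstract can be sketched as follows. The threshold value, the function name, and the identification of the first sensor as a camera and the second as radar are illustrative assumptions, not details taken from the patent.

```python
# Illustrative sketch of visibility-based sensor selection for a hitch
# assist system. Threshold and sensor roles are hypothetical.

VISIBILITY_THRESHOLD = 0.5  # assumed boundary between low and high visibility


def select_offset(visibility_level, first_sensor_offset, second_sensor_offset):
    """Return the trailer-coupler offset from the preferred sensor.

    In high visibility the offset from the first sensor (e.g. a camera)
    is used; in low visibility the second sensor (e.g. radar) is used.
    """
    if visibility_level >= VISIBILITY_THRESHOLD:
        return first_sensor_offset   # high-visibility conditions
    return second_sensor_offset      # low-visibility conditions
```

The selected offset would then feed the path planner that aligns the hitch ball with the trailer coupler.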
IMAGE GENERATION DEVICE AND IMAGE GENERATION METHOD
An image generation device includes: an image acquisition unit which acquires images taken by cameras installed in a vehicle; a storage unit which stores self-vehicle data representing the vehicle with such gloss that a certain region is higher in brightness than the other regions; a bird's-eye image generation unit which generates a neighborhood image, i.e., an image of the vehicle and its neighborhood as viewed from a virtual point of view, based on the acquired images; a 3D image generation unit which generates a three-dimensional self-vehicle image, i.e., a three-dimensional image of the vehicle; and an image synthesis unit which generates a synthesized image in which the self-vehicle image is superimposed on the neighborhood image. The 3D image generation unit generates the three-dimensional self-vehicle image based on the self-vehicle data read out from the storage unit.
VEHICLE CONTROL DEVICE, VEHICLE CONTROL METHOD, AND STORAGE MEDIUM
A vehicle control device includes: a display configured to display information; an inputter to which a user's operation is input; a driving controller configured to control at least one of the speed and steering of a vehicle; a determiner configured to determine, as a reference lane, the lane in which the vehicle is present when a first operation that allows a lane change by the vehicle is input to the inputter; and a display controller configured to cause the display to display, superimposed on one another, a first image depicting the road on which the vehicle is present and a second image depicting the reference lane.
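The reference-lane latching behavior described above can be sketched in a few lines. The function name and the representation of lanes as integers are hypothetical simplifications for illustration only.

```python
# Illustrative sketch: the lane occupied at the moment the first
# operation (enabling lane changes) is input becomes the reference lane;
# otherwise any previously latched reference lane is kept.


def update_reference_lane(current_lane, reference_lane, first_operation_input):
    """Latch the current lane as the reference lane on the first operation."""
    if first_operation_input:
        return current_lane      # lane occupied when the operation arrives
    return reference_lane        # keep the previously determined reference
```

The display controller would then render the reference lane as a second image superimposed on the road image.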
Virtual window display
A virtual window display includes: a plurality of collimated projector modules having contiguous exit pupils. Each collimated projector module includes: a display device configured to display a spatially modulated image along a flat or curved geometric image surface, and one or more optical lenses sharing a common curvature symmetry axis and configured to optically collimate light from the display device in the collimated projector module. The curvature symmetry axes of the plurality of collimated projector modules point in a plurality of angular directions and are substantially convergent along at least one axis at a finite distance from the plurality of collimated projectors. The total display FOV of the virtual window display is larger than the module FOV of each of the plurality of collimated projector modules.
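As a rough numeric illustration of the stated FOV relationship, the sketch below computes the total display field of view of contiguously tiled modules whose optical axes point in different directions. The module count, angles, and the no-overlap tiling assumption are all illustrative, not taken from the patent.

```python
# Illustrative sketch: total FOV of tiled collimated projector modules.
# Assumes contiguous, non-overlapping tiling, so the total span equals
# the angular spread of the module axes plus one module FOV.


def total_fov(module_fov_deg, axis_directions_deg):
    """Approximate total display FOV (degrees) for tiled modules."""
    spread = max(axis_directions_deg) - min(axis_directions_deg)
    return spread + module_fov_deg


# Three 30-degree modules with axes at -30, 0, and +30 degrees give a
# 90-degree total display FOV, larger than any single module's FOV.
```

This matches the abstract's claim that the total display FOV exceeds the module FOV of each individual projector module.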
VEHICLE IMAGING SYSTEM AND METHOD FOR A PARKING SOLUTION
A vehicle imaging system and method for providing a user with an easy-to-use vehicle parking solution that displays an integrated and intuitive backup camera view, such as a first-person composite camera view. The first-person composite camera view may include composite image data from a plurality of cameras mounted around the vehicle that has been joined or stitched together, as well as augmented graphics with computer-generated simulations of parts of the vehicle that provide the user with intuitive information concerning the point-of-view being displayed. The point-of-view of the first-person composite camera view is that of an observer located within the vehicle, and is designed to emulate the point-of-view of a driver. It is also possible to provide a direction indicator that allows the user to engage a touch screen display and manually change the direction of the first-person composite camera view so that the user can intuitively explore the vehicle surroundings.
Hitch assist system
A hitch assist system is provided herein. The hitch assist system includes a sensing system configured to detect a hitch assembly and a coupler. A controller is configured to generate commands for maneuvering a vehicle along a first path or a second path. A user input device includes a display, the display configured to illustrate the first and second paths.
Estimating subsurface feature locations during excavation
In one embodiment, techniques are provided for capturing accurate information describing the location of subsurface features (e.g., subsurface utilities such as water pipes, sewer pipes, electrical conduits, etc.) usable in providing an augmented reality view. A set of images is captured with a camera rig coupled to a mobile portion (e.g., the boom) of a piece of heavy construction equipment (e.g., an excavator) being used by workers to conduct an excavation that exposes the subsurface features. The set of images is provided to a structure-from-motion (SfM) photogrammetry process that generates a 3D reality mesh. Relative and/or absolute locations of the subsurface features are calculated based on the 3D reality mesh and provided to an augmented reality application executing on an augmented reality device for use in providing an augmented reality view.
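The final step, converting feature locations from mesh coordinates to absolute site coordinates, can be sketched as a simple georegistration transform. The function, its translation-plus-scale form, and the omission of rotation are illustrative simplifications; a real georegistration would apply a full similarity transform recovered from surveyed control points.

```python
# Illustrative sketch: map a subsurface-feature location from
# 3D-reality-mesh coordinates to absolute site coordinates using a
# hypothetical translation-plus-scale georegistration. Rotation is
# omitted for brevity.


def to_absolute(mesh_point, mesh_origin, scale=1.0):
    """Return the absolute coordinates of a point expressed in mesh space."""
    x, y, z = mesh_point
    ox, oy, oz = mesh_origin
    return (ox + scale * x, oy + scale * y, oz + scale * z)
```

The resulting absolute locations would then be streamed to the augmented reality application for overlay rendering.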
System and method for trailer alignment
A vehicle maneuvering system comprises at least one imaging device configured to capture image data and a controller. The controller is configured to identify a coupler position of a trailer in the image data and control motion of the vehicle, navigating a hitch ball of the vehicle toward the coupler position. The controller is further configured to monitor a coupler distance extending from the coupler position to the hitch ball. In response to the coupler distance being less than or equal to a distance threshold, the controller is configured to classify a plurality of portions of the image data as trailer portions and non-trailer portions and identify the coupler position by processing the image data via a feature extraction operation.
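The distance-gated switch to fine-grained classification described above can be sketched as follows. The threshold value, function names, and planar-distance simplification are illustrative assumptions, not details from the patent.

```python
import math

DISTANCE_THRESHOLD = 2.0  # meters; assumed switch-over distance


def coupler_distance(hitch_xy, coupler_xy):
    """Planar distance from the vehicle's hitch ball to the trailer coupler."""
    return math.dist(hitch_xy, coupler_xy)


def should_run_feature_extraction(hitch_xy, coupler_xy):
    """Trigger trailer/non-trailer classification and feature extraction
    once the coupler is within the threshold distance."""
    return coupler_distance(hitch_xy, coupler_xy) <= DISTANCE_THRESHOLD
```

Gating the more expensive feature extraction on proximity reflects the abstract's two-stage approach: coarse tracking at range, precise localization near the coupler.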
DYNAMICALLY AUGMENTED BIRD'S-EYE VIEW
In accordance with an example embodiment, a vehicle includes a moveable member, a posture sensing system, a bird's-eye camera system, a display, and a controller. The posture sensing system indicates the moveable member's posture. The bird's-eye camera system provides images of its field of view, including the ground adjacent to the vehicle. The controller receives a posture signal from the posture sensing system, receives images from the camera system, creates a rendered vehicle representation, creates a rendered path projection, generates a dynamically augmented bird's-eye view, and then displays it on the display. The moveable member is positioned in the rendered vehicle representation using the posture signal. The rendered path projection includes an outer envelope line projecting the path of an outermost point of the vehicle, determined using the posture signal. The dynamically augmented bird's-eye view is generated using the images, the rendered vehicle representation, and the rendered path projection.
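The outer envelope line described above depends on the turning radius of the vehicle's outermost point. The sketch below uses a textbook bicycle-model approximation with hypothetical vehicle dimensions; it is not the patent's method, only an illustration of the geometry involved.

```python
import math


def outer_envelope_radius(wheelbase, steer_angle_rad, half_width, front_overhang):
    """Turning radius of the vehicle's outermost point (front outside
    corner), via a simple bicycle model.

    The rear-axle center traces a circle of radius wheelbase / tan(steer);
    the front outside corner sits half_width outboard of that circle and
    (wheelbase + front_overhang) ahead of the rear axle.
    """
    if steer_angle_rad == 0:
        return math.inf  # straight travel: no finite turning radius
    rear_axle_radius = wheelbase / math.tan(abs(steer_angle_rad))
    return math.hypot(rear_axle_radius + half_width,
                      wheelbase + front_overhang)
```

Projecting this radius onto the ground plane of the bird's-eye imagery would yield the outer envelope line overlaid in the augmented view.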