
CAMERA-ONLY-LOCALIZATION IN SPARSE 3D MAPPED ENVIRONMENTS

Techniques for localizing a vehicle include obtaining an image from a camera, identifying a set of image feature points in the image, and obtaining an approximate location of the vehicle. Based on the approximate location, a set of sub-volumes (SVs) of a map to access is determined, and map feature points and associated map feature descriptors for the set of SVs are obtained. A set of candidate matches between the image feature points and the obtained map feature points is determined. From these candidate matches, a set of potential camera poses is determined, with a reprojection error estimated for the remaining points, and a first pose having the lowest associated reprojection error is selected. After determining that the first pose is within a threshold value of an expected vehicle location, a vehicle location based on the first pose is output.
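The pose-selection step in this abstract, i.e., scoring candidate poses by reprojection error and accepting the winner only if it lies near the expected vehicle location, can be sketched as follows. This is an illustrative toy model, not the patented method: the pinhole projection, the candidate-pose list, and all function names are assumptions.

```python
import numpy as np

def project(points_3d, R, t, K):
    """Project 3D map points into the image with a pinhole model."""
    cam = (R @ points_3d.T).T + t            # world -> camera frame
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]            # perspective divide

def select_pose(candidate_poses, pts3d, pts2d, K, expected_xyz, thresh):
    """Pick the candidate pose with the lowest mean reprojection error,
    then accept it only if its position lies near the expected location."""
    best_pose, best_err = None, np.inf
    for R, t in candidate_poses:
        err = np.linalg.norm(project(pts3d, R, t, K) - pts2d, axis=1).mean()
        if err < best_err:
            best_pose, best_err = (R, t), err
    R, t = best_pose
    position = -R.T @ t                      # camera center in world coords
    ok = np.linalg.norm(position - expected_xyz) < thresh
    return best_pose, best_err, ok
```

In practice this role is typically filled by a RANSAC-style PnP solver operating on the 2D-3D candidate matches; the sketch only shows the "lowest reprojection error, then sanity-check against expected location" decision.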

DISPLAY CONTROL DEVICE AND DISPLAY CONTROL METHOD

A display control device includes a memory that stores a program and a processor connected to the memory; the processor performs the processing by executing the program. The processing includes: detecting the number of lanes, that is, the number of one or more lanes including the lane on which a host vehicle travels; determining, according to the detected number of lanes, a processing condition for the area outside an essential display area (a predefined display area) in a captured image obtained by imaging the surroundings of the host vehicle; and processing the captured image according to the determined processing condition and displaying the processed image on a display device.
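A minimal sketch of the "processing condition by lane count" idea above. The rule table and the dim/mask operations are invented for illustration; the abstract does not specify what the conditions are, only that they are chosen from the detected number of lanes and applied outside the essential display area.

```python
import numpy as np

def processing_condition(num_lanes):
    """Hypothetical rule: the more lanes detected, the more of the
    captured frame outside the essential area is suppressed."""
    if num_lanes <= 1:
        return "none"        # show the full frame
    elif num_lanes == 2:
        return "dim"         # darken non-essential pixels
    return "mask"            # black out non-essential pixels

def apply_condition(frame, essential, condition):
    """frame: HxW intensity array; essential: boolean HxW mask of the
    predefined display area that must stay untouched."""
    out = frame.astype(float).copy()
    if condition == "dim":
        out[~essential] *= 0.5
    elif condition == "mask":
        out[~essential] = 0.0
    return out
```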

System of vehicles equipped with imaging equipment for high-definition near real-time map generation

Described are street-level intelligence platforms, systems, and methods that can include a fleet of swarm vehicles having imaging devices. Images captured by the imaging devices can be used to produce, and/or be integrated into, maps of the area to produce high-definition maps in near real-time. Such maps may provide enhanced street-level intelligence useful for fleet management, navigation, traffic monitoring, and so forth.

Method and device for determining an area cut with a cutting roll by at least one construction machine or mining machine
11823131 · 2023-11-21

In a method for determining the area milled by at least one construction machine or mining machine using a milling drum (2), a predetermined area is worked in several milling trajectories by at least one machine (1). The length of each milling trajectory along which a milling operation has taken place is determined by evaluating the continuous machine positions, and the previously milled partial areas are added up, taking into account the length of the milling trajectory and the installed width of the milling drum (2). The partial area currently milled along a trajectory is checked, either continuously or subsequently, for overlap or multiple overlap with any previously milled partial areas, and any overlapping portions are deducted, as overlapping areas, from the added-up partial areas. The total of the added-up milled partial areas minus the total of the established overlapping areas gives the milled area.
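The bookkeeping above (sum of strip areas, counting overlapping passes only once) can be sketched with a simple rasterized model. The axis-aligned `Strip` geometry and the grid resolution are simplifying assumptions; a real implementation would work with the actual curved trajectories and machine positions.

```python
from dataclasses import dataclass

@dataclass
class Strip:
    """One milling pass, simplified to an axis-aligned rectangle:
    start x, length along the trajectory, lateral offset y, drum width."""
    x: float
    length: float
    y: float
    width: float

def milled_area(strips, cell=0.05):
    """Rasterize each strip onto a grid of `cell`-sized squares so that
    overlapping passes are counted once: the result equals the sum of
    the strip areas minus the overlapping areas."""
    covered = set()
    for s in strips:
        for i in range(int(round(s.length / cell))):
            for j in range(int(round(s.width / cell))):
                covered.add((round(s.x / cell) + i, round(s.y / cell) + j))
    return len(covered) * cell * cell
```

For example, two 10 m passes of a 2 m drum with 0.5 m of lateral overlap sum naively to 40 m², but the union, and hence the milled area, is 35 m².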

Vehicle periphery monitoring device

A vehicle periphery monitoring device stores images photographed with a front camera, a rear camera, a left-side camera, and a right-side camera as a past front image, a past rear image, a past left-side image, and a past right-side image, respectively. When the vehicle travels while making a turn, the device generates an underfloor image, indicating the condition of the area beneath the vehicle, using at least one of the past front and rear images together with at least one of the past left-side and right-side images. The device displays the generated underfloor image on a display.

Actively modifying a field of view of an autonomous vehicle in view of constraints
11829152 · 2023-11-28

Methods and devices for actively modifying a field of view of an autonomous vehicle in view of constraints are disclosed. In one embodiment, an example method is disclosed that includes causing a sensor in an autonomous vehicle to sense information about an environment in a first field of view, where a portion of the environment is obscured in the first field of view. The example method further includes determining a desired field of view in which the portion of the environment is not obscured and, based on the desired field of view and a set of constraints for the vehicle, determining a second field of view in which the portion of the environment is less obscured than in the first field of view. The example method further includes modifying a position of the vehicle, thereby causing the sensor to sense information in the second field of view.
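The selection step described above, choosing a second field of view that is less obscured than the first while respecting vehicle constraints, can be sketched as a search over candidate positions. The candidate scores and the constraint predicate are illustrative placeholders; in the disclosed method these would come from the perception stack and from the vehicle's actual operating constraints.

```python
def choose_position(candidates, first_position, constraints_ok):
    """candidates: dict mapping a candidate vehicle position to the
    estimated obscured fraction of the region of interest (values are
    illustrative). Returns a feasible position whose view is less
    obscured than the current one, preferring the least obscured;
    falls back to the current position if no improvement is feasible."""
    current = candidates[first_position]
    feasible = [(frac, pos) for pos, frac in candidates.items()
                if constraints_ok(pos) and frac < current]
    if not feasible:
        return first_position
    return min(feasible)[1]
```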

AUGMENTED MACHINE USER INTERFACE SYSTEM

A method and system for generating an augmented machine view. The augmented machine view may include a dynamic virtual machine model representing an actual working machine, overlaid with camera footage from around the working machine. The dynamic virtual machine model may be synced in time with the camera footage. The system for generating the augmented machine view may include cameras mounted on the actual machine and machine sensors monitoring the position and movements of individual components of the machine.

GLARE AND OCCLUDED VIEW COMPENSATION FOR AUTOMOTIVE AND OTHER APPLICATIONS

Often when there is glare on a display screen, the user can mitigate it by tilting or otherwise moving the screen or by changing their viewing position. When driving a car, however, there are limited options for overcoming glare on the dashboard, especially when driving for a long distance in the same direction. Embodiments are directed to eliminating such glare. Other embodiments relate to mixed reality (MR) and to filling in occluded areas.

Speed difference indicator on head up display

A head up display (HUD) system includes: a difference module configured to determine a speed difference based on a difference between (a) a present speed of a vehicle and (b) a target speed of the vehicle; a light source configured to, via a windshield of the vehicle, generate a virtual display that is visible within a passenger cabin of the vehicle; and a display control module configured to control the light source to include a visual indicator of the speed difference in the virtual display.
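The difference-module logic above is simple enough to show directly. The deadband and the indicator strings are illustrative choices, not taken from the patent; the abstract only requires that the virtual display include a visual indicator of the signed speed difference.

```python
def speed_difference(present_kph, target_kph):
    """Signed difference: positive means the vehicle is over target."""
    return present_kph - target_kph

def hud_indicator(present_kph, target_kph, deadband=2.0):
    """Choose a visual indicator for the virtual display; the symbols
    and deadband are illustrative assumptions."""
    diff = speed_difference(present_kph, target_kph)
    if abs(diff) <= deadband:
        return "ON TARGET"
    return f"▲ +{diff:.0f} km/h" if diff > 0 else f"▼ {diff:.0f} km/h"
```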
