B60R2300/20

AUTO PANNING CAMERA MIRROR SYSTEM INCLUDING IMAGE BASED TRAILER ANGLE DETECTION
20220314881 · 2022-10-06 ·

A method for automatically panning a view for a commercial vehicle includes analyzing a portion of a first view at a first time to determine a position of a vehicle feature within the first view, wherein the first view is a subset of a second view, estimating an expected position of the vehicle feature in the first view at a second time subsequent to the first time, defining a region of interest centered on the expected position of the vehicle feature in the second view and analyzing the region of interest to determine an exact position of the vehicle feature at the second time, and determining a current trailer angle based on a position of the vehicle feature within the second view.
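The tracking loop described above (predict the feature's next position, search a small region of interest, refine, then map position to angle) can be sketched as follows; all names, the constant-velocity prediction, and the pixels-per-degree mapping are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class FeatureTrack:
    x: float    # horizontal position of the feature in the wide view (px)
    vx: float   # estimated horizontal velocity (px per frame)

def expected_position(track: FeatureTrack, dt: float = 1.0) -> float:
    """Estimate where the feature should appear at the next time step."""
    return track.x + track.vx * dt

def update_track(track: FeatureTrack, measured_x: float, dt: float = 1.0) -> FeatureTrack:
    """Refine the track with the exact position found inside the ROI."""
    vx = (measured_x - track.x) / dt
    return FeatureTrack(x=measured_x, vx=vx)

def trailer_angle(feature_x: float, image_center_x: float, px_per_degree: float) -> float:
    """Map the feature's horizontal offset in the wide view to a trailer angle."""
    return (feature_x - image_center_x) / px_per_degree
```

Restricting the search to a region of interest around the predicted position keeps per-frame image analysis cheap compared with scanning the full wide view.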

Periphery monitoring device

A periphery monitoring device includes: an acquisition section acquiring a steering angle of a vehicle; an image acquisition section acquiring a captured image from an image capturing section that captures an image of a periphery of the vehicle; a detection section acquiring detection information of an object around the vehicle; and a control section causing a display section to display a synthesized image including a vehicle image showing the vehicle and a periphery image showing the periphery of the vehicle based on the captured image. When an object is detected on the course along which the vehicle would travel at the steering angle for a predetermined distance, the control section causes a virtual vehicle image to be displayed in the synthesized image, superimposed on the course to the object with the position of the vehicle as a reference.

VIDEO STREAMING ANOMALY DETECTION
20230156179 · 2023-05-18 ·

Monitoring of a vehicle is provided. A plurality of video feeds captured from cameras of the vehicle are received over a network from the vehicle, each of the plurality of video feeds including a plurality of frames, each of the frames of each of the video feeds being assigned a sequence number that increases for each successive frame. The sequence numbers are analyzed to identify missing frames, delayed frames, or stale frames. The plurality of video feeds is displayed to one or more monitors, along with the sequence numbers corresponding to the displayed frames and, for each of the plurality of video feeds, indications of whether any missing frames, delayed frames, or stale frames were identified.
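The sequence-number analysis can be illustrated with a minimal sketch; the function name and the rules used here (a repeated number marks a stale frame, a gap marks missing frames) are assumptions for illustration, and delayed-frame detection would additionally require arrival timestamps:

```python
def analyze_sequence(seq_numbers):
    """Scan consecutive sequence numbers for gaps (missing) and repeats (stale)."""
    missing, stale = [], []
    for prev, cur in zip(seq_numbers, seq_numbers[1:]):
        if cur == prev:
            stale.append(cur)                      # same frame delivered again
        elif cur > prev + 1:
            missing.extend(range(prev + 1, cur))   # gap in the sequence
    return missing, stale
```

Because each feed carries its own monotonically increasing counter, this check can run independently per feed without any cross-feed synchronization.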

Method for detecting an object via a vehicular vision system
11648877 · 2023-05-16 ·

A method for detecting an object via a vehicular vision system includes disposing first and second cameras at respective locations at a vehicle so as to have respective first and second fields of view rearward of the vehicle. The locations are vertically and horizontally spaced apart by respective vertical and horizontal separation distances. With the first and second cameras disposed at the respective locations, the first field of view at least partially overlaps the second field of view. An electronic control unit (ECU) is provided that includes an image processor. Image data is captured by the cameras and provided to the ECU and processed at the ECU. The ECU, responsive to processing of captured image data and based on the separation distances, determines presence of an object in the overlapping region of the first and second fields of view and determines the location of the object relative to the vehicle.
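One plausible reading of determining the object's location "based on the separation distances" is stereo triangulation over the baseline between the two cameras; the sketch below assumes that interpretation and an idealized pinhole camera model, neither of which is stated in the abstract:

```python
def object_distance(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic pinhole-stereo range: Z = f * B / d.

    focal_px     -- camera focal length in pixels (both cameras assumed equal)
    baseline_m   -- separation distance between the cameras in meters
    disparity_px -- pixel offset of the object between the two overlapping views
    """
    if disparity_px <= 0:
        raise ValueError("object must appear at distinct positions in both views")
    return focal_px * baseline_m / disparity_px
```

For example, with an 800 px focal length, a 0.5 m baseline, and a 40 px disparity, the object would lie about 10 m behind the vehicle.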

Display system and display method

A luminance determining unit 14 determines a luminance distribution of an exterior circumstantial image in a line-of-sight direction of a driver 300, and a luminance changing unit 15 determines a bright region 301 of this luminance distribution. The luminance changing unit 15 further determines a changed luminance for a peripheral region 302 of the bright region 301. A virtual-image creating unit 16A creates a virtual image based on the determined luminance, and a display processing unit 17 displays this virtual image on a display unit 200. Because the display system 1A displays a virtual image that increases the luminance in the periphery of the bright region 301, the perceived brightness for the driver 300 can be moderated, and visual recognition can thereby be improved. That is, the display system 1A can output a virtual image suited to the circumstances.

Driving assistance device, driving situation information acquisition system, driving assistance method, and program
11643012 · 2023-05-09 ·

A driving assistance device includes: a line-of-sight direction detection unit that detects a direction of a line of sight of a driver of a moving body; an obstacle detection unit that detects a position of an obstacle in environs of the moving body; an assessment criteria determination unit that determines assessment criteria of a look at the obstacle by the driver based on at least one of time, location, weather, a state of the moving body, and a state of the driver; and a warning processing unit that obtains an assessment result by applying, to the assessment criteria, a score computed based on the direction of the line of sight of the driver and the position of the obstacle, the warning processing unit determining at least one of whether or not a warning needs to be issued to the driver and a level of the warning, based on the assessment result.
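The scoring-and-threshold step can be sketched under the assumption that the driver's line-of-sight direction and the obstacle's position reduce to bearing angles; the linear fall-off and the threshold semantics are illustrative choices, not taken from the patent:

```python
def look_score(gaze_deg: float, obstacle_deg: float) -> float:
    """Return 1.0 when the driver looks straight at the obstacle,
    falling off linearly to 0.0 at a 90-degree angular difference."""
    diff = abs((gaze_deg - obstacle_deg + 180) % 360 - 180)  # wrapped angle diff
    return max(0.0, 1.0 - diff / 90.0)

def needs_warning(score: float, threshold: float) -> bool:
    """Apply the context-dependent assessment criterion (the threshold
    would vary with time, location, weather, and driver state)."""
    return score < threshold
```

A stricter threshold at night or in rain, for instance, would demand a more direct look at the obstacle before the warning is suppressed.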

Display system and display device for rendering of a virtual object of a departure point

When a theme park that is a facility configured based on a specific theme is set as a departure point, an AR display device transmits information of the theme park of the departure point to a server. The server includes a theme park-specific character storage unit and a transmission unit. The theme park-specific character storage unit stores information on a character set for the theme park of the departure point. The transmission unit transmits, to the AR display device, image data of the character set for the theme park of the departure point, with the character being set as a virtual object of the departure point.

VEHICLE ANALYSIS ENVIRONMENT WITH DISPLAYS FOR VEHICLE SENSOR CALIBRATION AND/OR EVENT SIMULATION
20230206499 · 2023-06-29 ·

A vehicle analysis environment includes one or more display screens, such as a display screen wall or an array of display screens. While a vehicle is in the vehicle analysis environment, a vehicle analysis system renders and displays one or more vehicle sensor calibration targets and/or one or more simulated events on the one or more display screens. Vehicle sensors of the vehicle capture sensor data while in the vehicle analysis environment. The sensor data depict the vehicle sensor calibration targets and/or the simulated events that are displayed on the one or more display screens. The vehicle can output actions based on the simulated event and/or can calibrate its vehicle sensors based on the vehicle sensor calibration targets.

CAMERA MONITORING SYSTEM FOR VEHICLES INCLUDING AUTOMATICALLY CALIBRATING CAMERA

A camera monitoring system includes a first mirror replacement camera extending outward from a vehicle. The first mirror replacement camera defines a rearward facing field of view including at least one image feature during at least a first set of operating conditions. The at least one image feature has a fixed position within the field of view during the at least the first set of operating conditions. A camera monitoring system controller is configured to automatically calibrate an orientation of the first mirror replacement camera relative to the vehicle by comparing an expected position of the at least one image feature to an actual position of the at least one image feature while the vehicle is operating under the first set of operating conditions and identifying a shift of camera orientation based on the difference.
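A minimal sketch of the comparison step follows, assuming the expected and detected positions of the fixed image feature are 2-D pixel coordinates; the tolerance value and the interpretation of the difference as an orientation shift are illustrative assumptions:

```python
def orientation_shift(expected_xy, actual_xy, tol_px: float = 2.0):
    """Compare the expected position of a fixed vehicle feature (e.g. a body
    edge) with its detected position; return the (dx, dy) shift attributable
    to a change in camera orientation, or None if within tolerance."""
    dx = actual_xy[0] - expected_xy[0]
    dy = actual_xy[1] - expected_xy[1]
    if abs(dx) <= tol_px and abs(dy) <= tol_px:
        return None  # camera still considered calibrated
    return (dx, dy)
```

Because the feature is fixed relative to the vehicle under the stated operating conditions, any persistent shift can be attributed to the camera rather than the scene.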

Speech Recognition System and Method for Providing Speech Recognition Service
20230206918 · 2023-06-29 ·

A vehicle may include: a display provided inside the vehicle; and a controller configured to control the display, based on a condition being satisfied, to display a vehicle image comprising a graphic object and a plurality of indicators, wherein the graphic object indicates a plurality of portions of the vehicle image, and wherein each of the plurality of indicators is respectively associated with one of the plurality of portions. Based on a user utterance associated with at least one indicator of the plurality of indicators, the controller may be configured to control a control target corresponding to the at least one indicator.