
Apparatus, systems and methods for classifying digital images

The present disclosure is directed to apparatuses, systems and methods for automatically classifying images of occupants inside a vehicle. More particularly, the present disclosure is directed to apparatuses, systems and methods for automatically classifying images of occupants inside a vehicle by comparing current image feature data to previously classified image features.

Apparatuses, systems and methods for classifying digital images

The present disclosure is directed to apparatuses, systems and methods for automatically classifying digital images of occupants inside a vehicle. More particularly, the present disclosure is directed to apparatuses, systems and methods for automatically classifying digital images of occupants inside a vehicle by comparing current image data to previously classified image data.
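The abstracts above do not specify the matching algorithm, but "comparing current image feature data to previously classified image features" can be sketched as a minimal nearest-neighbor classifier. The class names and feature vectors below are purely illustrative:

```python
import math

def classify_occupant_image(feature_vector, classified_features):
    """Classify an image feature vector by finding the closest
    previously classified feature vector (1-nearest-neighbor).

    `classified_features` maps a class label to the list of feature
    vectors previously assigned that label.
    """
    best_label, best_distance = None, math.inf
    for label, vectors in classified_features.items():
        for reference in vectors:
            # Euclidean distance between current and stored features.
            distance = math.dist(feature_vector, reference)
            if distance < best_distance:
                best_label, best_distance = label, distance
    return best_label

# Hypothetical reference features for two occupant classes.
known = {
    "driver_present": [(0.9, 0.1), (0.8, 0.2)],
    "seat_empty": [(0.1, 0.9), (0.2, 0.8)],
}
print(classify_occupant_image((0.85, 0.15), known))  # → driver_present
```

A production system would use learned embeddings and an indexed search rather than a linear scan, but the comparison step has the same shape.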

SYSTEM AND METHOD FOR ONLINE REAL-TIME MULTI-OBJECT TRACKING
20220215672 · 2022-07-07

A system and method for online real-time multi-object tracking is disclosed. A particular embodiment can be configured to: receive image frame data from at least one camera associated with an autonomous vehicle; generate similarity data corresponding to a similarity between object data in a previous image frame and object detection results from a current image frame; use the similarity data to generate data association results corresponding to a best matching between the object data in the previous image frame and the object detection results from the current image frame; cause state transitions in finite state machines for each object according to the data association results; and provide, as an output, object tracking data corresponding to the states of the finite state machines for each object.

System and method for online real-time multi-object tracking
11295146 · 2022-04-05

A system and method for online real-time multi-object tracking is disclosed. A particular embodiment can be configured to: receive image frame data from at least one camera associated with an autonomous vehicle; generate similarity data corresponding to a similarity between object data in a previous image frame and object detection results from a current image frame; use the similarity data to generate data association results corresponding to a best matching between the object data in the previous image frame and the object detection results from the current image frame; cause state transitions in finite state machines for each object according to the data association results; and provide, as an output, object tracking data corresponding to the states of the finite state machines for each object.
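The pipeline in this abstract (similarity, data association, per-object state machines) can be sketched compactly. The abstract does not name the similarity measure or matching strategy; the sketch below assumes intersection-over-union similarity, greedy best matching, and a three-state machine ("tentative", "tracked", "lost"), all of which are illustrative choices:

```python
def iou(a, b):
    """Intersection-over-union similarity of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate_and_step(tracks, detections, threshold=0.3):
    """Greedily match previous-frame tracks to current detections by IoU,
    then advance each track's finite state machine: a matched track becomes
    'tracked', an unmatched track becomes 'lost', and an unmatched
    detection starts a new 'tentative' track.
    """
    unmatched = list(range(len(detections)))
    for track in tracks:
        best_j, best_sim = None, threshold
        for j in unmatched:
            sim = iou(track["box"], detections[j])
            if sim > best_sim:
                best_j, best_sim = j, sim
        if best_j is not None:
            track["box"], track["state"] = detections[best_j], "tracked"
            unmatched.remove(best_j)
        else:
            track["state"] = "lost"
    for j in unmatched:
        tracks.append({"box": detections[j], "state": "tentative"})
    return tracks
```

A real tracker would use optimal assignment (e.g. the Hungarian algorithm) and appearance features alongside overlap, but the association-then-transition structure is the same.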

Lane change assist apparatus for vehicle
11142246 · 2021-10-12

A driving support Electronic Control Unit (ECU) initializes a target trajectory calculation parameter at the start of Lane Change Assist Control (LCA), calculates, based on that parameter, a target trajectory function representing a target lateral position as a function of the elapsed time from the start of LCA, and calculates a target control amount according to the target trajectory function. When it is determined that the own vehicle has crossed a boundary white line, the driving support ECU initializes the target trajectory calculation parameter again and recalculates the target trajectory function based on it.
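The abstract does not give the form of the target trajectory function. A common choice for a lateral position that varies smoothly with elapsed time is a quintic easing with zero lateral velocity and acceleration at both ends of the lane change; the sketch below uses that assumption, with the duration and offset as the "calculation parameters" that would be re-initialized when the boundary line is crossed:

```python
def target_lateral_position(t, total_time, lateral_offset):
    """Target lateral position at elapsed time t since the start of LCA.

    Uses the quintic easing 10s^3 - 15s^4 + 6s^5 (s = t / total_time),
    which starts and ends with zero lateral velocity and acceleration.
    This polynomial form is an assumption, not taken from the patent.
    """
    s = min(max(t / total_time, 0.0), 1.0)  # normalized progress in [0, 1]
    return lateral_offset * (10 * s**3 - 15 * s**4 + 6 * s**5)

# Example: an 8-second lane change of 3.5 m (one typical lane width).
for t in (0, 4, 8):
    print(t, round(target_lateral_position(t, 8.0, 3.5), 3))
```

By symmetry the vehicle is halfway across (1.75 m) at the midpoint, and re-initializing after crossing the line simply restarts the same function with a new offset and remaining time.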

Apparatuses, Systems and Methods for Compressing Image Data that is Representative of a Series of Digital Images

The present disclosure is directed to apparatuses, systems and methods for automatically compressing digital image data. More particularly, the present disclosure is directed to apparatuses, systems and methods for automatically compressing digital image data that is representative of a series of digital images.
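The abstract does not describe the compression scheme, but a standard way to exploit redundancy across a series of images is inter-frame delta coding: store one key frame, then only per-pixel differences for each subsequent frame. A minimal sketch on flattened pixel lists, purely for illustration:

```python
def delta_encode(frames):
    """Encode a series of frames (each a flat list of pixel values) as a
    key frame followed by per-pixel differences from the previous frame.
    Consecutive frames that change little yield deltas that are mostly
    zero, which a downstream entropy coder can store very compactly.
    """
    if not frames:
        return []
    encoded = [list(frames[0])]
    for prev, cur in zip(frames, frames[1:]):
        encoded.append([c - p for p, c in zip(prev, cur)])
    return encoded

def delta_decode(encoded):
    """Reconstruct the original frames from the key frame and deltas."""
    if not encoded:
        return []
    frames = [list(encoded[0])]
    for deltas in encoded[1:]:
        frames.append([p + d for p, d in zip(frames[-1], deltas)])
    return frames
```

Delta coding on its own is lossless and only reshapes the data; the space saving comes from pairing it with run-length or entropy coding of the near-zero deltas.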

Method for operating a display device for a motor vehicle and motor vehicle

A method of operating a display device for a motor vehicle having a plurality of display areas includes providing a plurality of non-contact input operations. A first non-contact input operation of a user is detected and verified by a second non-contact input operation of the user in order to select at least one display element on a first display area, and at least one third non-contact input operation of the user is detected.
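The select-then-verify flow described here is naturally a small state machine: the first input proposes an element, the second confirms the selection, and the third operates on it. The gesture names and the "move" action below are hypothetical stand-ins, since the abstract does not specify concrete inputs:

```python
class NonContactSelector:
    """Minimal sketch of the two-step select/verify flow for non-contact
    input. States: 'idle' -> 'proposed' (first input) -> 'selected'
    (second, verifying input) -> act and return to 'idle' (third input).
    """
    def __init__(self):
        self.state = "idle"
        self.selected = None

    def input(self, gesture, element=None):
        if self.state == "idle" and gesture == "point":
            # First non-contact input: propose a display element.
            self.state, self.selected = "proposed", element
        elif self.state == "proposed" and gesture == "confirm":
            # Second input verifies the first, completing the selection.
            self.state = "selected"
        elif self.state == "selected" and gesture == "move":
            # Third input operates on the selected element.
            target, self.selected = self.selected, None
            self.state = "idle"
            return f"moved {target}"
        return None
```

Requiring the second, verifying input is what makes accidental gestures (a hand passing through the sensing volume) harmless: without confirmation, the proposal simply never takes effect.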

Surroundings monitoring apparatus

A surroundings monitoring apparatus obtains the position of an object from a captured image of a region in the heading direction of a vehicle, and obtains a position obtainment accuracy. When the distance between the object and the vehicle becomes relatively short, the position obtainment accuracy increases. However, as the distance between the object and the vehicle becomes even shorter, the position obtainment accuracy may decrease. Therefore, if collision avoidance control is performed for an object selected on the basis of the position obtainment accuracy, there is a possibility that the collision avoidance control is not performed for the object which is most likely to collide with the vehicle. In view of this, the apparatus obtains, for each object, a required deceleration, which is the magnitude of the deceleration necessary for stopping at a position before the object, and performs the collision avoidance control for the object with the largest required deceleration.
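The required deceleration follows from the constant-deceleration stopping relation v² = 2ad, so a = v² / (2d): the closer the object for a given speed, the larger the deceleration needed to stop before it. A short sketch of computing it per object and selecting the most critical one (the stopping-margin parameter is an illustrative addition):

```python
def required_deceleration(speed, distance, margin=0.0):
    """Magnitude of the constant deceleration (m/s^2) needed to stop
    before an object: from v^2 = 2 a d, a = v^2 / (2 (d - margin)).
    `margin` is an optional safety gap kept before the object.
    """
    stopping_distance = distance - margin
    if stopping_distance <= 0:
        return float("inf")  # cannot stop before the object
    return speed**2 / (2 * stopping_distance)

def select_target(object_distances, speed):
    """Pick the object distance with the largest required deceleration,
    i.e. the object most likely to be collided with.
    """
    return max(object_distances, key=lambda d: required_deceleration(speed, d))

# At 20 m/s, an object 25 m ahead needs 8 m/s^2; one 40 m ahead needs 5 m/s^2.
print(select_target([40.0, 25.0], 20.0))  # → 25.0
```

Selecting by required deceleration rather than by position accuracy ensures the control targets the object that actually demands the hardest braking.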

SYSTEM AND METHOD FOR DETERMINING AGRICULTURAL VEHICLE GUIDANCE QUALITY BASED ON A CROP ROW BOUNDARY CONSISTENCY PARAMETER
20210298285 · 2021-09-30

A system for determining agricultural vehicle guidance quality includes an imaging device configured to capture image data depicting a plurality of crop rows present within a field as an agricultural vehicle travels across the field. Additionally, the system includes a controller communicatively coupled to the imaging device. The controller is configured to determine a guidance line for guiding the agricultural vehicle relative to the plurality of crop rows based on the captured image data. Furthermore, the controller is configured to determine a crop row boundary consistency parameter associated with one or more crop rows of the plurality of crop rows present within a region of interest of the captured image data. Moreover, the controller is configured to determine a quality metric for the guidance line based on the crop row boundary consistency parameter.
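The abstract does not define the consistency parameter or the quality metric. One plausible reading, used purely as an illustration, is to measure the spread of the detected crop-row boundary's lateral offsets from the guidance line within the region of interest, and map a smaller spread to a higher quality score:

```python
from statistics import pstdev

def boundary_consistency(boundary_offsets):
    """Hypothetical consistency parameter: the population standard
    deviation of the crop-row boundary's lateral offsets (in meters)
    from the guidance line within the region of interest. A smaller
    value means a straighter, more consistent boundary.
    """
    return pstdev(boundary_offsets)

def guidance_quality(boundary_offsets, scale=1.0):
    """Map consistency to a quality metric in (0, 1]: 1.0 for a
    perfectly straight boundary, falling toward 0 as the spread grows.
    `scale` sets how quickly quality degrades and is an assumption.
    """
    return scale / (scale + boundary_consistency(boundary_offsets))
```

A ragged or partially washed-out row would produce widely scattered offsets, a low quality score, and thus a signal that the guidance line should not be trusted.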

Image processing device adjusting pixel-luminance characteristics
11044415 · 2021-06-22

An image processing device is provided with an image acquiring unit that acquires an image from an imaging unit mounted on a vehicle, a light determining unit that determines a state of a light mounted on the vehicle, and an adjusting unit that adjusts a relationship between a luminance at an object to be captured by the imaging unit and a pixel value in the image. The adjusting unit is configured to set, when the light determining unit determines that the state of the light is the low-beam state, the luminance at which the pixel value is at the lower limit to be lower than in the case where the light determining unit determines that the state of the light is the high-beam state.
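The adjustment amounts to shifting the lower cutoff of the luminance-to-pixel-value mapping depending on the headlight state: with low beams the scene is dimmer, so the cutoff is set lower to preserve detail in dark regions. The sketch below assumes a linear mapping and illustrative cutoff values, neither of which comes from the patent:

```python
def pixel_value(luminance, light_state):
    """Map scene luminance to an 8-bit pixel value with a lower cutoff
    that depends on the headlight state. In the low-beam state the
    cutoff (the luminance at which pixels reach the lower limit of 0)
    is set lower than in the high-beam state. All numbers, including
    the cutoffs and the linear mapping itself, are illustrative.
    """
    lower = 0.5 if light_state == "low_beam" else 2.0  # cd/m^2, hypothetical
    upper = 100.0  # luminance at which pixels saturate, hypothetical
    if luminance <= lower:
        return 0
    if luminance >= upper:
        return 255
    # Linear interpolation between the two cutoffs.
    return round(255 * (luminance - lower) / (upper - lower))
```

The effect: a dimly lit object at 1.0 cd/m² maps to a nonzero pixel value in the low-beam state but is clipped to 0 in the high-beam state, where such faint detail would be drowned out anyway.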