G06T2207/30264

System and method for camera or sensor-based parking spot detection and identification
10720058 · 2020-07-21

The present invention provides an on-board vehicle system and method for camera or sensor-based parking spot detection and identification. Advantageously, this system and method utilizes a standard front (or side or rear) camera or sensor image to detect and identify one or more parking spots at a distance via vector or like representation using a deep neural network trained with data annotated using an annotation tool, without first transforming the standard camera or sensor image(s) to a bird's-eye-view (BEV) or the like. The system and method can be incorporated in a driver-assist (DA) or autonomous driving (AD) system.
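The abstract does not specify the vector format. One common choice, assumed here purely for illustration, is to regress each parking spot as the four corner points of a quadrilateral, normalized to [0, 1], directly in the raw camera image. A minimal sketch (all names hypothetical) of decoding such a network output back to pixel coordinates:

```python
# Hypothetical decoder: a detection network regresses each parking spot as a
# flat vector of four (x, y) corners normalized to [0, 1]; we map it back to
# pixel coordinates of the raw front-camera image (no BEV transform needed).

def decode_spot_vector(vec, img_w, img_h):
    """vec: [x1, y1, x2, y2, x3, y3, x4, y4], each in [0, 1]."""
    if len(vec) != 8:
        raise ValueError("expected an 8-element corner vector")
    return [(vec[i] * img_w, vec[i + 1] * img_h) for i in range(0, 8, 2)]

corners = decode_spot_vector([0.1, 0.5, 0.3, 0.5, 0.3, 0.9, 0.1, 0.9],
                             img_w=1280, img_h=720)
```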

HIGH PRECISION OBJECT LOCATION IN A PARKING LOT

A method of high precision object location in a parking lot includes capturing, via an entry stereo camera, a template image and an entry video of a vehicle entering a parking lot; measuring a distance versus time of the vehicle entering the parking lot based on the entry video; line scanning, via at least one LIDAR, an outline versus time of the vehicle entering the parking lot; constructing a scanned image based on the measured distance versus time and the outline versus time; and forming a three-dimensional construct of the vehicle based on the template image and the scanned image.
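The scanned-image construction can be pictured as pairing each LIDAR line-scan profile with the vehicle's measured travel distance at that moment, then resampling the profiles onto a uniform distance grid so the image is spatially, rather than temporally, even. A hedged sketch under that reading (the 1-D profile format and all names are assumptions):

```python
# Hedged sketch: build a spatially uniform "scanned image" by pairing each
# LIDAR line-scan profile (one column per time sample) with the vehicle's
# measured travel distance at that time, then nearest-neighbour resampling
# the columns onto a uniform distance grid.

def build_scanned_image(distances, profiles, step=0.1):
    """distances: monotonically increasing metres travelled per sample.
    profiles: one outline profile (list of heights) per sample."""
    assert len(distances) == len(profiles)
    n = int(round((distances[-1] - distances[0]) / step)) + 1
    columns = []
    for k in range(n):
        target = distances[0] + k * step
        # nearest sample in travelled distance, not in time
        j = min(range(len(distances)), key=lambda i: abs(distances[i] - target))
        columns.append(profiles[j])
    return columns  # columns[k] is the outline at distance k * step

img = build_scanned_image([0.0, 0.05, 0.2, 0.3],
                          [[1], [2], [3], [4]], step=0.1)
```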

Method of guiding a user to a suitable parking spot
10713945 · 2020-07-14

A method of guiding a user to a suitable parking spot aids the user in finding a suitable parking spot near a desired location. The method uses at least one remote server and a mobile computing device. The remote server manages information related to potential parking spots and performs calculations to determine whether a potential parking spot is a suitable parking spot. The mobile computing device monitors the location of the user and allows the user to interact with the method. A parking search request is sent from the mobile computing device to the remote server. Upon receiving the parking search request, a geospatial vehicle detection system is used to locate potential parking spots. At least one filtering process is used to identify suitable parking spots from the potential parking spots before the suitable parking spots are displayed to the user.
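The abstract leaves the filtering process open; one obvious server-side filter is a walking-distance cut around the requested destination. A sketch of that single filter (the haversine distance and all names are illustrative assumptions, not the patent's method):

```python
# Hedged sketch of one possible filtering pass: keep only candidate spots
# (found by the vehicle-detection system) within a walking-distance radius
# of the requested destination.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def suitable_spots(candidates, dest, max_walk_m=300.0):
    return [s for s in candidates
            if haversine_m(s["lat"], s["lon"], dest[0], dest[1]) <= max_walk_m]

spots = suitable_spots(
    [{"id": "A", "lat": 40.7128, "lon": -74.0060},
     {"id": "B", "lat": 40.7300, "lon": -74.0060}],
    dest=(40.7128, -74.0060))
```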

Trailer backup assist system with predictive hitch angle functionality

A trailer backup assist system is provided herein. The system includes a calibration feature for calibrating an imaging device used for hitch angle detection. The system also employs multiple hitch angle detection methods, a number of which may run in parallel to increase the intervals at which a hitch angle can be measured. The system additionally includes a predictive feature in which a hitch angle can be predicted in instances where a trailer sensing device fails. The system is further configured to estimate a trailer length and generate steering commands that are invariant to the estimated trailer length under certain conditions.
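The abstract does not give the prediction model. A common fallback when the trailer sensor drops out is to propagate the last measured hitch angle through a kinematic tractor-trailer model; the sketch below assumes an on-axle hitch and a bicycle model (all parameters and the model choice are assumptions, not the patent's method):

```python
# Hedged sketch: predict the hitch angle gamma by Euler-integrating the
# on-axle kinematic model  d(gamma)/dt = v*tan(delta)/L - v*sin(gamma)/D
# from the last sensor reading.
import math

def predict_hitch_angle(gamma0, v, delta, L, D, dt, steps):
    """gamma0: last measured hitch angle (rad); v: speed (m/s, negative in
    reverse); delta: steering angle (rad); L: tractor wheelbase (m);
    D: hitch-to-trailer-axle length (m)."""
    gamma = gamma0
    for _ in range(steps):
        gamma += (v * math.tan(delta) / L - v * math.sin(gamma) / D) * dt
    return gamma

# Reversing straight at zero hitch angle should stay at zero.
g = predict_hitch_angle(0.0, v=-1.0, delta=0.0, L=3.0, D=5.0, dt=0.05, steps=20)
```

Note the model's sin(gamma)/D term is what makes steering commands sensitive to trailer length D, which connects to the abstract's point about length-invariant commands under certain conditions.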

Wide area parking spot identification

Embodiments detailed herein can include performing an initial calibration that maps each optical character of a plurality of optical characters visible in the field of view of a digital camera of a wide-area parking space monitoring system to corresponding parking spaces. The digital camera of the system may capture an image facing downward toward the parking spaces. The system may identify one or more optical characters that are visible within the image. The system may determine one or more parking spaces of the plurality of parking spaces that are mapped to the identified one or more optical characters. The system may output an indication of the determined one or more parking spaces that indicates the one or more parking spaces are available.
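The calibration-then-lookup flow is simple to picture: a painted character is visible only when no vehicle covers it, so the characters recognized in a fresh image directly name the open spaces. A minimal sketch (space identifiers and names are illustrative assumptions):

```python
# Hedged sketch: a one-time calibration maps each painted character in the
# camera's field of view to its parking space; at runtime the characters
# recognized in a new image reveal which mapped spaces are available.

calibration = {"A1": "space-101", "A2": "space-102", "B7": "space-103"}

def available_spaces(visible_chars, calibration):
    # unknown characters (not calibrated) are simply ignored
    return sorted(calibration[c] for c in visible_chars if c in calibration)

open_spaces = available_spaces({"A2", "B7", "ZZ"}, calibration)
```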

IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

An image processing device includes: a setting unit configured to set an area, as a first area, in which at least one delimiting line for delimiting a parking space is detected in a first image of plural images continuously captured while moving; and a prediction unit configured to predict, based on the first area, a second area in which the at least one delimiting line is to be detected in at least one second image of the plural images, the at least one second image being captured later in time than the first image.
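The prediction step amounts to shifting the search window: given the region where a delimiting line was found in one frame and the apparent image motion between frames, the device can look only near the shifted region in the next frame instead of re-scanning the whole image. A hedged sketch (the box format and names are assumptions):

```python
# Hedged sketch of the area prediction: shift the first area by the
# apparent per-frame motion, clamped to the image bounds, to get the
# predicted search area for the later frame.

def predict_area(first_area, flow, img_w, img_h):
    """first_area: (x, y, w, h); flow: (dx, dy) apparent shift in pixels."""
    x, y, w, h = first_area
    nx = min(max(x + flow[0], 0), img_w - w)
    ny = min(max(y + flow[1], 0), img_h - h)
    return (nx, ny, w, h)

second_area = predict_area((100, 400, 200, 80), flow=(0, 30),
                           img_w=1280, img_h=720)
```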

IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD

An image processing device includes: a delimiting line detection unit configured to detect a delimiting line candidate based on image data obtained by capturing a surrounding of a vehicle, the delimiting line candidate being a candidate of a delimiting line that delimits a parking space; and an exclusion determination unit configured to determine whether or not to exclude the delimiting line candidate detected by the delimiting line detection unit from the candidate of the delimiting line. In a case where a plurality of the delimiting line candidates is detected within a predetermined range in the image data, the exclusion determination unit determines whether or not to exclude the delimiting line candidate from the candidate of the delimiting line by comparing edge strength of the plurality of delimiting line candidates.
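The exclusion logic can be sketched directly: when several candidates fall within the predetermined range of one another, compare their edge strengths and keep only the strongest. A minimal illustration (the candidate format and threshold are assumptions):

```python
# Hedged sketch of the exclusion step: among delimiting-line candidates
# that lie within a predetermined pixel range of each other, keep the one
# with the strongest edge and exclude the rest.

def exclude_weak_candidates(candidates, max_gap_px=10):
    """candidates: list of (x_position, edge_strength), pre-sorted by x."""
    kept = []
    for cand in candidates:
        if kept and abs(cand[0] - kept[-1][0]) <= max_gap_px:
            # overlapping group: keep whichever edge is stronger
            if cand[1] > kept[-1][1]:
                kept[-1] = cand
        else:
            kept.append(cand)
    return kept

lines = exclude_weak_candidates([(100, 0.4), (105, 0.9), (300, 0.7)])
```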

Camera parameter calculation method, recording medium, camera parameter calculation apparatus, and camera parameter calculation system

Three-dimensional point group data indicating three-dimensional coordinate sets of three-dimensional points included in a common imaging space of one or more cameras is received; one or more images captured by the one or more cameras are transmitted and received; initial camera parameters of each camera are decided based on one or more mounting locations and one or more directions of the cameras; corresponding points in the one or more images are calculated for each of the three-dimensional points based on the three-dimensional point group data and the initial camera parameters; and one or more camera parameters of the one or more cameras are calculated based on pixel values at the corresponding points in the one or more images.
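The correspondence calculation is a projection: each 3-D point is pushed through the initial camera model to get its predicted pixel, and the calibration then works with pixel values at those points. A hedged sketch of that projection under a pinhole model with a single yaw rotation (all intrinsics/extrinsics are illustrative assumptions):

```python
# Hedged sketch of the correspondence step: project a 3-D point through an
# initial pinhole camera model (yaw rotation about the y axis, then a
# translation, then perspective division) to its predicted pixel.
import math

def project_point(pt, fx, fy, cx, cy, yaw=0.0, t=(0.0, 0.0, 0.0)):
    """Rotate pt about the y axis by yaw, translate by t, then project."""
    x, z = (math.cos(yaw) * pt[0] + math.sin(yaw) * pt[2],
            -math.sin(yaw) * pt[0] + math.cos(yaw) * pt[2])
    X, Y, Z = x + t[0], pt[1] + t[1], z + t[2]
    return (fx * X / Z + cx, fy * Y / Z + cy)

u, v = project_point((1.0, 0.5, 10.0), fx=800, fy=800, cx=640, cy=360)
```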

Calibration methods for autonomous vehicle operations
10678260 · 2020-06-09

Systems and methods are provided for controlling a vehicle. The vehicle includes a first device onboard the vehicle providing first data, a second device onboard the vehicle providing second data, one or more sensors onboard the vehicle, one or more actuators onboard the vehicle, and a controller. The controller detects a stationary condition based on output of the one or more sensors, obtains a first set of the first data from the first device during the stationary condition, filters horizontal edge regions from the first set resulting in a filtered set of the first data, obtains a second set of the second data during the stationary condition, determines one or more transformation parameter values based on a relationship between the second set and the filtered set, and autonomously operates the one or more actuators onboard the vehicle in a manner that is influenced by the transformation parameter values.
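The horizontal-edge filtering step can be pictured with a vertical intensity gradient, which responds to horizontal edges; pixels where that gradient dominates are masked out before the transformation parameters are estimated. A hedged, pure-Python sketch on a tiny grid (the threshold and data format are assumptions):

```python
# Hedged sketch of the edge filtering: zero out pixels with a large
# row-to-row intensity gradient (i.e. horizontal-edge responses), leaving
# data better suited for estimating the device-to-device transformation.

def filter_horizontal_edges(img, thresh=50):
    """img: 2-D list of intensities; returns a copy with horizontal-edge
    pixels (large vertical gradient) zeroed out."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, h):
        for c in range(w):
            if abs(img[r][c] - img[r - 1][c]) > thresh:
                out[r][c] = 0
    return out

filtered = filter_horizontal_edges([[10, 10], [200, 12], [205, 13]])
```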

Detection apparatus, imaging apparatus, vehicle, and detection method
10668855 · 2020-06-02

A detection apparatus includes an image acquisition interface and a controller. The image acquisition interface acquires captured images from an imaging apparatus that is installed in a vehicle and captures images of an area surrounding the vehicle. The controller calculates, on the basis of the captured images, the distance from the imaging apparatus to a road edge in the width direction of the road on which the vehicle is traveling or to an obstacle on the road and determines, on the basis of the calculated distance and information on an outer dimension of the vehicle, whether travel on the road by other vehicles will be possible after the vehicle has parked on the road.
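The final determination reduces to a width budget: the measured lateral distance to the road edge or obstacle, minus the parked vehicle's own outer width, must leave enough of a gap for another vehicle to pass. A minimal sketch of that check (all widths and the required-gap threshold are illustrative assumptions):

```python
# Hedged sketch of the decision: after parking, the remaining road width is
# the measured distance to the far road edge (or obstacle) minus the
# vehicle's own outer width; travel by other vehicles is possible when that
# remainder meets a required gap.

def other_vehicles_can_pass(dist_to_edge_m, own_width_m,
                            required_gap_m=2.5):
    remaining = dist_to_edge_m - own_width_m
    return remaining >= required_gap_m

ok = other_vehicles_can_pass(dist_to_edge_m=5.0, own_width_m=1.8)
```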