
GEOGRAPHIC OBJECT DETECTION APPARATUS AND GEOGRAPHIC OBJECT DETECTION METHOD

A geographic object recognition unit (120) recognizes, using image data (192) obtained by photographing in a measurement region where a geographic object exists, a type of the geographic object from an image that the image data (192) represents. A position specification unit (130) specifies, using three-dimensional point cloud data (191) indicating a three-dimensional coordinate value of each of a plurality of points in the measurement region, a position of the geographic object.
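The split of responsibilities above (type from the image, position from the point cloud) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names `recognize_type` and `specify_position`, the score dictionary, and the centroid-based position estimate are all assumptions.

```python
# Hypothetical sketch: a classifier picks the object type from image-derived
# scores, and the position comes from the object's 3-D point cloud.

def recognize_type(label_scores):
    """Pick the geographic-object type with the highest classifier score."""
    return max(label_scores, key=label_scores.get)

def specify_position(points):
    """Estimate the object's position as the centroid of its 3-D points."""
    n = len(points)
    xs, ys, zs = zip(*points)
    return (sum(xs) / n, sum(ys) / n, sum(zs) / n)

scores = {"traffic_sign": 0.91, "streetlight": 0.07, "pole": 0.02}
cloud = [(10.0, 4.0, 2.0), (10.2, 4.1, 2.2), (9.8, 3.9, 1.8)]
print(recognize_type(scores))                                  # traffic_sign
print(tuple(round(v, 3) for v in specify_position(cloud)))     # (10.0, 4.0, 2.0)
```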

IN-VEHICLE CONTROL DEVICE

A method determines the region where a vehicle can travel without reducing the clarity of the position information of an object detected by a sensor. An in-vehicle control device includes: an object detection unit that detects the position of an object from image information captured by an image pickup device; an object information storage unit that stores a pre-processing grid map in which the position of a detected object is set as an object occupied region and positions where no object has been detected are set as an object unoccupied region; an information processing unit that generates a determination grid map in which part of the object unoccupied region of the pre-processing grid map is replaced with the object occupied region; and a road surface region determination unit that generates an automatic driving grid map in which a closed space surrounded by the object occupied region of the determination grid map is set as a road surface region.
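The key step above, extracting the closed space surrounded by occupied cells as the road surface, can be sketched with a flood fill: unoccupied cells reachable from the map border are "outside", and what remains enclosed is the road surface. The grid contents and function name are illustrative, not taken from the patent.

```python
# Hedged sketch of the closed-space idea: unoccupied cells (0) that cannot
# be reached from the map border without crossing occupied cells (1) form
# the enclosed road-surface region of the automatic driving grid map.

from collections import deque

def enclosed_region(grid):
    """Return the set of unoccupied cells fully surrounded by occupied cells."""
    h, w = len(grid), len(grid[0])
    queue = deque((r, c) for r in range(h) for c in range(w)
                  if (r in (0, h - 1) or c in (0, w - 1)) and grid[r][c] == 0)
    outside = set(queue)
    while queue:                                  # BFS flood fill from border
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] == 0 \
                    and (nr, nc) not in outside:
                outside.add((nr, nc))
                queue.append((nr, nc))
    return {(r, c) for r in range(h) for c in range(w)
            if grid[r][c] == 0 and (r, c) not in outside}

grid = [
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],   # the 0 at (1, 2) is enclosed by occupied cells
    [0, 1, 1, 1, 0],
]
print(sorted(enclosed_region(grid)))  # [(1, 2)]
```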

Evaluating and presenting pick-up and drop-off locations in a situational awareness view of an autonomous vehicle

In one embodiment, a method includes sending a set of instructions to present, on a computing device, one or more available locations for a vehicle to pick up or drop off a user in an area. The one or more available locations are based on sensor data of the area that is captured by the vehicle. The method includes receiving a user selection of a location associated with the area for the vehicle to pick up or drop off the user. The method includes adjusting a viability value of one or more locations to pick up or drop off the user. The viability value is adjusted based at least on the selected location. The method includes, based on the adjusted viability value of the one or more locations, determining a location from the one or more locations. The method includes instructing the vehicle to travel to the determined location.
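The adjust-then-determine flow can be sketched as below. The boost-by-proximity rule, the coordinate units, and the function names are assumptions made for illustration; the patent does not specify this particular scoring.

```python
# Hypothetical sketch: raise the viability of locations near the user's
# selection, then determine the pick-up/drop-off point as the location
# with the highest adjusted viability.

def adjust_viability(viability, selected, boost=0.2, radius=1.0):
    """Boost locations within `radius` (arbitrary units) of the selection."""
    sx, sy = selected
    return {
        loc: v + boost
        if ((loc[0] - sx) ** 2 + (loc[1] - sy) ** 2) ** 0.5 <= radius else v
        for loc, v in viability.items()
    }

def choose_location(viability):
    """Determine the location with the highest viability value."""
    return max(viability, key=viability.get)

locations = {(0.0, 0.0): 0.5, (0.5, 0.5): 0.45, (5.0, 5.0): 0.6}
adjusted = adjust_viability(locations, selected=(0.4, 0.4))
print(choose_location(adjusted))  # (0.0, 0.0)
```

Without the adjustment, the distant location at (5.0, 5.0) would win on raw viability; the user's selection shifts the decision toward nearby candidates.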

Camera calibration apparatus and operating method

A camera calibration apparatus includes: a camera configured to acquire a first forward image from a first viewpoint and a second forward image from a second viewpoint; an event trigger module configured to determine whether to perform camera calibration; a motion estimation module configured to acquire information related to motion of a host vehicle; a three-dimensional reconstruction module configured to acquire three-dimensional coordinate values based on the first forward image and the second forward image; and a parameter estimation module configured to estimate an external parameter of the camera based on the three-dimensional coordinate values.

Identifying objects for display in a situational-awareness view of an autonomous-vehicle environment

In one embodiment, a method includes receiving sensor data corresponding to an environment external to a vehicle. The sensor data include data points. The method includes determining one or more subsets of the data points. The method includes comparing the one or more subsets of the data points to one or more predetermined data patterns, each of which corresponds to an object classification. The method includes computing, based on the comparison, a confidence score for each subset of the one or more subsets of the data points as corresponding to each of the one or more predetermined data patterns. The method includes generating a classification for an object in the environment external to the vehicle based on the confidence score.
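The compare/score/classify loop can be sketched as follows. The similarity measure (inverse mean point-to-template distance), the 2-D templates, and the names are illustrative assumptions, not the patent's actual patterns or scoring.

```python
# Hedged sketch: score a subset of sensor points against predetermined
# shape templates and classify the object by the best confidence score.

def confidence(subset, pattern):
    """Crude similarity: 1 / (1 + mean point-to-nearest-template distance)."""
    total = 0.0
    for x, y in subset:
        total += min(((x - px) ** 2 + (y - py) ** 2) ** 0.5
                     for px, py in pattern)
    return 1.0 / (1.0 + total / len(subset))

def classify(subset, patterns):
    """Return the best-matching classification and its confidence score."""
    scores = {name: confidence(subset, pts) for name, pts in patterns.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

patterns = {
    "pedestrian": [(0.0, 0.0), (0.0, 1.0), (0.0, 2.0)],  # tall and narrow
    "car": [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0)],         # long and low
}
subset = [(0.1, 0.0), (0.0, 1.1), (0.1, 1.9)]
label, score = classify(subset, patterns)
print(label)  # pedestrian
```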

Classification of surfaces as hard/soft for combining data captured by autonomous vehicles for generating high definition maps
11162788 · 2021-11-02

A high-definition map system receives sensor data from vehicles traveling along routes and combines the data to generate a high-definition map for use in driving vehicles, for example, for guiding autonomous vehicles. A pose graph is built from the collected data, each pose representing the location and orientation of a vehicle. The pose graph is optimized to minimize constraints between poses. Points associated with a surface are assigned a confidence measure determined using a measure of the hardness or softness of the surface. A machine-learning-based result filter detects bad alignment results and prevents them from entering the subsequent global pose optimization. The alignment framework is parallelizable for execution using a parallel/distributed architecture. Alignment hot spots are detected for further verification and improvement. The system supports incremental updates, thereby allowing refinements of subgraphs for incrementally improving the high-definition map and keeping it up to date.
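The hardness-based confidence idea can be sketched with a simple per-class prior: points on hard surfaces (road, building) keep high alignment confidence, while points on soft, deformable surfaces (vegetation) are down-weighted. The hardness values and class labels here are assumptions, standing in for the system's learned hardness measures.

```python
# Illustrative sketch: attach a confidence to each labeled point based on
# an assumed hardness prior for its surface class. Downstream alignment
# would then weight points by this confidence.

HARDNESS = {"road": 0.95, "building": 0.9, "vegetation": 0.2, "unknown": 0.5}

def point_confidence(points):
    """Tag each (x, y, z, label) point with its surface-hardness confidence."""
    return [(x, y, z, HARDNESS.get(label, HARDNESS["unknown"]))
            for x, y, z, label in points]

cloud = [(1.0, 2.0, 0.0, "road"), (5.0, 2.0, 3.0, "vegetation")]
print(point_confidence(cloud))
# [(1.0, 2.0, 0.0, 0.95), (5.0, 2.0, 3.0, 0.2)]
```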

Lane recognition device and method thereof

A lane recognition device includes: a camera configured to capture an image in front of a vehicle; and a controller configured to detect a lane from the image in front of the vehicle; generate a plurality of lane equations based on a curved point of the lane; and recognize the lane based on the plurality of lane equations.
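The "plurality of lane equations based on a curved point" can be sketched by splitting the detected lane points at the curved point and fitting a separate equation to each side, rather than forcing a single equation onto the whole lane. The closed-form line fit, the split index, and the toy coordinates are illustrative choices, not the patent's method.

```python
# Hedged sketch: one lane equation before the curved point, one after it.

def fit_line(points):
    """Closed-form least-squares line y = m*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

def lane_equations(points, curve_index):
    """Fit one equation up to the curved point and one from it onward."""
    return fit_line(points[:curve_index + 1]), fit_line(points[curve_index:])

# Straight segment, then a bend at index 2 (x along road, y lateral offset).
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.5), (4.0, 1.0)]
(m1, b1), (m2, b2) = lane_equations(pts, curve_index=2)
print(round(m1, 2), round(m2, 2))  # 0.0 0.5
```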

END-TO-END LEARNED LANE BOUNDARY DETECTION BASED ON A TRANSFORMER

A method for an end-to-end lane boundary detection system is described. The method includes gridding a red-green-blue (RGB) image captured by a camera sensor mounted on an ego vehicle into a plurality of image patches. The method also includes generating different image patch embeddings to provide correlations between the plurality of image patches and the RGB image. The method further includes encoding the different image patch embeddings into predetermined categories, grid offsets, and instance identifications. The method also includes generating lane boundary keypoints of the RGB image based on the encoding of the different image patch embeddings.
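The first step, gridding the image into patches, is the same tokenization used by vision transformers and can be sketched directly. The patch size and the list-of-lists image format are illustrative; a real pipeline would operate on tensors and feed each patch to an embedding layer.

```python
# Minimal sketch of the gridding step: cut an H x W image into
# non-overlapping P x P patches, the per-patch tokens a transformer
# encoder would then embed.

def grid_image(image, patch):
    """Split a 2-D image (list of rows) into patch-size sub-grids,
    scanning left-to-right, top-to-bottom."""
    h = len(image)
    patches = []
    for r in range(0, h, patch):
        for c in range(0, len(image[0]), patch):
            patches.append([row[c:c + patch] for row in image[r:r + patch]])
    return patches

image = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy "image"
patches = grid_image(image, patch=2)
print(len(patches))  # 4
print(patches[0])    # [[0, 1], [4, 5]]
```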

LANE DETECTION AND TRACKING TECHNIQUES FOR IMAGING SYSTEMS

A system for detecting boundaries of lanes on a road is presented. The system includes an imaging system configured to produce a set of pixels associated with lane markings on a road. The system also includes one or more processors configured to detect boundaries of lanes on the road, including: receive, from the imaging system, the set of pixels associated with lane markings; partition the set of pixels into a plurality of groups, each of the plurality of groups associated with one or more control points; and generate a first spline that traverses the control points of the plurality of groups, the first spline describing a boundary of a lane on the road.
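The partition-and-control-point stage can be sketched by bucketing lane-marking pixels into horizontal bands and taking each band's centroid as a control point; a spline would then be threaded through those points. The band size and banding-by-row rule are illustrative assumptions, and the spline fit itself is omitted here.

```python
# Hedged sketch: partition lane-marking pixels into groups (horizontal
# bands of the image) and derive one control point per group as its
# centroid, ordered along the lane for a subsequent spline fit.

def control_points(pixels, band=10):
    """One control point (centroid) per band of pixel y-values."""
    bands = {}
    for x, y in pixels:
        bands.setdefault(y // band, []).append((x, y))
    pts = []
    for key in sorted(bands):               # order bands along the lane
        group = bands[key]
        pts.append((sum(x for x, _ in group) / len(group),
                    sum(y for _, y in group) / len(group)))
    return pts

pixels = [(5, 0), (7, 1), (6, 12), (8, 13), (9, 24)]
print(control_points(pixels))  # [(6.0, 0.5), (7.0, 12.5), (9.0, 24.0)]
```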

Geolocation system

A computer-implemented method and system for determining the geographical location of a user based on the characteristics of intersecting features, such as a road intersection. Specifically, the geometry of each intersection in a geographical area is used to derive a unique fingerprint for each individual intersection, the fingerprint comprising information relating to the geometry, the geographical location of the intersection, and other characteristics. These fingerprints can then be stored locally to a device, for example, a mobile phone, a tablet, a wearable computing device, an in-vehicle infotainment (IVI) system and the like. To determine the geographical location of the device, the geometry of a nearby intersection may be analysed by some means and compared to the stored set of unique fingerprints to identify the intersection and its associated location.
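One way to make the fingerprint idea concrete is to describe an intersection by the sorted angular gaps between its outgoing road bearings, which is rotation-invariant, and match an observed fingerprint against the stored set. The gap-based fingerprint, the matching tolerance, and the example coordinates are all assumptions for illustration; the patent's fingerprints also carry location and other characteristics.

```python
# Illustrative sketch: fingerprint = sorted gaps (degrees) between an
# intersection's road bearings; locate a device by matching an observed
# fingerprint against locally stored (fingerprint, location) pairs.

def fingerprint(bearings):
    """Sorted angular gaps between consecutive bearings (rotation-invariant)."""
    b = sorted(a % 360 for a in bearings)
    gaps = [(b[(i + 1) % len(b)] - b[i]) % 360 for i in range(len(b))]
    return tuple(sorted(gaps))

def locate(observed_bearings, database, tol=5.0):
    """Return the stored location whose fingerprint matches best within tol."""
    obs = fingerprint(observed_bearings)
    best, best_err = None, tol
    for fp, location in database:
        if len(fp) != len(obs):
            continue                      # different number of roads
        err = max(abs(a - o) for a, o in zip(fp, obs))
        if err <= best_err:
            best, best_err = location, err
    return best

db = [
    (fingerprint([0, 90, 180, 270]), (51.5007, -0.1246)),  # 4-way cross
    (fingerprint([0, 120, 240]), (51.5033, -0.1195)),      # 3-way Y
]
print(locate([2, 88, 181, 272], db))  # (51.5007, -0.1246)
```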