Methods and systems for filtering data points when merging LIDAR sensor datasets
11860281 · 2024-01-02
Assignee
Inventors
- Mikhail Vladimirovich KOROBKIN (Lipetsk, RU)
- Dmitry Andreevich KOVALENKO (Butovo, RU)
- Andrey Anatolevich MININ (Vidnoye, RU)
CPC classification
B60W2422/00
PERFORMING OPERATIONS; TRANSPORTING
B60W60/001
PERFORMING OPERATIONS; TRANSPORTING
G06F18/2415
PHYSICS
G06V20/56
PHYSICS
International classification
B60W60/00
PERFORMING OPERATIONS; TRANSPORTING
G06F18/2415
PHYSICS
G06V10/80
PHYSICS
Abstract
Method and device for processing LIDAR sensor data are disclosed. The method includes (i) receiving from the LIDAR sensor a first dataset having a plurality of first data points representative of respective coordinates and associated with respective normal vectors, (ii) determining an uncertainty parameter for a given first data point based on a normal covariance of the normal vector of the given first data point where the normal covariance takes into account a measurement error of the LIDAR sensor when determining the respective coordinates of the given first data point, (iii) in response to the uncertainty parameter being above a pre-determined threshold, excluding the given first data point from the plurality of first data points, (iv) using the filtered plurality of first data points, instead of the plurality of first data points, for merging the first dataset of the LIDAR sensor with a second dataset of the LIDAR sensor.
Claims
1. A computer-implemented method of processing LIDAR sensor data, the method executable by an electronic device operatively coupled to a LIDAR sensor, the method comprising: receiving, by the electronic device from the LIDAR sensor, a first 3D point cloud comprising a plurality of first data points, a given first data point from the plurality of first data points being (i) representative of respective coordinates in a 3D space and (ii) associated with a respective normal vector from a plurality of normal vectors; determining, by the electronic device, an uncertainty parameter for the given first data point based on: a normal covariance of the normal vector of the given first data point, wherein the normal covariance is calculated based on a measurement error of the LIDAR sensor when determining the respective coordinates of the given first data point, the measurement error of the LIDAR sensor affecting calculation of the normal vector; in response to the uncertainty parameter being above a pre-determined threshold, removing, by the electronic device, the given first data point from the first 3D point cloud, wherein the first 3D point cloud comprises a filtered plurality of first data points after removing the given first data point; after removing the given first data point from the first 3D point cloud, using, by the electronic device, the filtered plurality of first data points for merging the first 3D point cloud of the LIDAR sensor with a second 3D point cloud of the LIDAR sensor, thereby generating a merged 3D point cloud; and using, by the electronic device, the merged 3D point cloud for controlling operation of a Self-Driving Car (SDC).
2. The method of claim 1, wherein the method further comprises: receiving, by the electronic device from the LIDAR sensor, the second 3D point cloud having a plurality of second data points, a given second data point from the plurality of second data points being (i) representative of respective coordinates in a 3D space and (ii) associated with a respective normal vector from an other plurality of normal vectors; determining, by the electronic device, an other uncertainty parameter for the given second data point based on: a normal covariance of the normal vector of the given second data point, the normal covariance taking into account the measurement error of the LIDAR sensor when determining the respective coordinates of the given second data point, the measurement error of the LIDAR sensor affecting calculation of the normal vector of the given second data point; in response to the other uncertainty parameter being above a pre-determined threshold, removing, by the electronic device, the given second data point from the second 3D point cloud, thereby determining a filtered plurality of second data points; and using, by the electronic device, the filtered plurality of second data points, for merging the first 3D point cloud of the LIDAR sensor with the second 3D point cloud of the LIDAR sensor.
3. The method of claim 1, wherein the method further comprises: determining, by the electronic device, the normal covariance of the normal vector of the given first data point while taking into account an uncertainty in the respective coordinates of the given first data point.
4. The method of claim 3, wherein the uncertainty is approximated as a Gaussian random variable.
5. The method of claim 1, wherein the measurement error of the LIDAR sensor is approximated by a sphere with a standard deviation.
6. The method of claim 1, wherein the using the filtered plurality of first data points comprises using, by the electronic device, the filtered plurality of first data points instead of the plurality of first data points during a matching step of an ICP algorithm.
7. The method of claim 6, wherein the using the filtered plurality of first data points comprises: estimating, by the electronic device, a transformation rule between the first 3D point cloud and the second 3D point cloud.
8. The method of claim 7, wherein the transformation rule is an output of the ICP algorithm.
9. The method of claim 1, wherein the LIDAR sensor is mounted onto the Self-Driving Car (SDC).
10. An electronic device for processing LIDAR sensor data, the electronic device operatively coupled to a LIDAR sensor, the electronic device being configured to: receive, from the LIDAR sensor, a first 3D point cloud comprising a plurality of first data points, a given first data point from the plurality of first data points being (i) representative of respective coordinates in a 3D space and (ii) associated with a respective normal vector from a plurality of normal vectors; determine, by the electronic device, an uncertainty parameter for the given first data point based on: a normal covariance of the normal vector of the given first data point, wherein the normal covariance is calculated based on a measurement error of the LIDAR sensor when determining the respective coordinates of the given first data point, the measurement error of the LIDAR sensor affecting calculation of the normal vector; in response to the uncertainty parameter being above a pre-determined threshold, remove the given first data point from the first 3D point cloud, wherein the first 3D point cloud comprises a filtered plurality of first data points after removing the given first data point; after removing the given first data point from the first 3D point cloud, use the filtered plurality of first data points for merging the first 3D point cloud of the LIDAR sensor with a second 3D point cloud of the LIDAR sensor, thereby generating a merged 3D point cloud; and use the merged 3D point cloud for controlling operation of a Self-Driving Car (SDC).
11. The electronic device of claim 10, wherein the electronic device is further configured to: receive, from the LIDAR sensor, the second 3D point cloud having a plurality of second data points, a given second data point from the plurality of second data points being (i) representative of respective coordinates in a 3D space and (ii) associated with a respective normal vector from an other plurality of normal vectors, determine, by the electronic device, an other uncertainty parameter for the given second data point based on: a normal covariance of the normal vector of the given second data point, the normal covariance taking into account the measurement error of the LIDAR sensor when determining the respective coordinates of the given second data point, the measurement error of the LIDAR sensor affecting calculation of the normal vector of the given second data point; in response to the other uncertainty parameter being above a pre-determined threshold, remove the given second data point from the second 3D point cloud, thereby determining a filtered plurality of second data points; and use the filtered plurality of second data points for merging the first 3D point cloud of the LIDAR sensor with the second 3D point cloud of the LIDAR sensor.
12. The electronic device of claim 10, wherein the electronic device is further configured to: determine the normal covariance of the normal vector of the given first data point while taking into account an uncertainty in the respective coordinates of the given first data point.
13. The electronic device of claim 12, wherein the uncertainty is approximated as a Gaussian random variable.
14. The electronic device of claim 10, wherein the measurement error of the LIDAR sensor is approximated by a sphere with a standard deviation.
15. The electronic device of claim 10, wherein the electronic device is configured to use the filtered plurality of first data points instead of the plurality of first data points during a matching step of an ICP algorithm.
16. The electronic device of claim 15, wherein the electronic device is configured to estimate a transformation rule between the first 3D point cloud and the second 3D point cloud.
17. The electronic device of claim 16, wherein the transformation rule is an output of the ICP algorithm.
18. The electronic device of claim 10, wherein the LIDAR sensor is mounted onto the Self-Driving Car (SDC).
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) These and other features, aspects and advantages of the present technology will become better understood with regard to the following description, appended claims and accompanying drawings where:
(9) Furthermore, APPENDIX A has been enclosed following the Detailed Description. The APPENDIX A comprises an article providing information regarding at least some aspects of the present technology described herein and/or additional aspects of the present technology. The APPENDIX A and the information forming part thereof have been enclosed for reference purposes and are to be deleted from the application prior to the publication of the application as a patent.
(10) Furthermore, APPENDIX B has been enclosed following the Detailed Description. The APPENDIX B comprises a poster providing information regarding at least some aspects of the present technology described herein and/or additional aspects of the present technology. The APPENDIX B and the information forming part thereof have been enclosed for reference purposes and are to be deleted from the application prior to the publication of the application as a patent.
DETAILED DESCRIPTION
(11) The examples and conditional language recited herein are principally intended to aid the reader in understanding the principles of the present technology and not to limit its scope to such specifically recited examples and conditions. It will be appreciated that those skilled in the art may devise various arrangements which, although not explicitly described or shown herein, nonetheless embody the principles of the present technology and are included within its spirit and scope.
(12) Furthermore, as an aid to understanding, the following description may describe relatively simplified implementations of the present technology. As persons skilled in the art would understand, various implementations of the present technology may be of a greater complexity.
(13) In some cases, what are believed to be helpful examples of modifications to the present technology may also be set forth. This is done merely as an aid to understanding, and, again, not to define the scope or set forth the bounds of the present technology. These modifications are not an exhaustive list, and a person skilled in the art may make other modifications while nonetheless remaining within the scope of the present technology. Further, where no examples of modifications have been set forth, it should not be interpreted that no modifications are possible and/or that what is described is the sole manner of implementing that element of the present technology.
(14) Moreover, all statements herein reciting principles, aspects, and implementations of the technology, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof, whether they are currently known or developed in the future. Thus, for example, it will be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the present technology. Similarly, it will be appreciated that any flowcharts, flow diagrams, state transition diagrams, pseudo-code, and the like represent various processes which may be substantially represented in computer-readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
(15) The functions of the various elements shown in the figures, including any functional block labeled as a processor, may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term processor or controller should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read-only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included.
(16) Software modules, or simply modules which are implied to be software, may be represented herein as any combination of flowchart elements or other elements indicating performance of process steps and/or textual description. Such modules may be executed by hardware that is expressly or implicitly shown.
(17) With these fundamentals in place, we will now consider some non-limiting examples to illustrate various implementations of aspects of the present technology.
(18) Referring initially to
(20) The vehicle 220 to which the electronic device 210 is associated may comprise any leisure or transportation vehicle such as a private or commercial car, truck, motorbike or the like. The vehicle may be user-operated or a driverless vehicle. It should be noted that specific parameters of the vehicle 220 are not limiting, these specific parameters including: vehicle manufacturer, vehicle model, vehicle year of manufacture, vehicle weight, vehicle dimensions, vehicle weight distribution, vehicle surface area, vehicle height, drive train type (e.g. 2 or 4), tyre type, brake system, fuel system, mileage, vehicle identification number, and engine size.
(21) The implementation of the electronic device 210 is not particularly limited, but as an example, the electronic device 210 may be implemented as a vehicle engine control unit, a vehicle CPU, a vehicle navigation device (e.g. TomTom vehicle navigation device, Garmin vehicle navigation device), a tablet, a personal computer built into the vehicle 220 and the like. Thus, it should be noted that the electronic device 210 may or may not be permanently associated with the vehicle 220. Additionally or alternatively, the electronic device 210 can be implemented in a wireless communication device such as a mobile telephone (e.g. a smart-phone or a radio-phone). In certain embodiments, the electronic device 210 has a display 270.
(22) The electronic device 210 may comprise some or all of the components of the computer system 100 depicted in
(23) In accordance with the non-limiting embodiments of the present technology, the electronic device 210 further comprises or has access to a plurality of sensors 230. According to these embodiments, the plurality of sensors 230 may comprise sensors allowing for various implementations of the present technology. Examples of the plurality of sensors include but are not limited to: cameras, LIDAR systems, and RADAR systems, etc. Each of the plurality of sensors 230 is operatively coupled to the processor 110 for transmitting the so-captured information to the processor 110 for processing thereof, as will be described in greater detail herein below.
(24) Each or some of the plurality of sensors 230 can be mounted on an interior, upper portion of a windshield of the vehicle 220, but other locations are within the scope of the present disclosure, including on a back window, side windows, front hood, rooftop, front grill, or front bumper of the vehicle 220. In some non-limiting embodiments of the present technology, each or some of the plurality of sensors 230 can be mounted in a dedicated enclosure (not depicted) mounted on the top of the vehicle 220.
(25) Further, the spatial placement of each or some of the plurality of sensors 230 can be designed taking into account the specific technical configuration thereof, configuration of the enclosure, weather conditions of the area where the vehicle 220 is to be used (such as frequent rain, snow, and other elements) or the like.
(26) In some non-limiting embodiments of the present technology, the plurality of sensors 230 comprises at least a first sensor 240 and a second sensor 260. In these embodiments, both the first sensor 240 and the second sensor 260 can be configured to capture 3D point cloud data of the surrounding area 250 of the vehicle 220. In this regard, each of the first sensor 240 and the second sensor 260 may comprise a LIDAR instrument.
(27) LIDAR stands for LIght Detection and Ranging. It is expected that a person skilled in the art will understand the functionality of the LIDAR instrument, but briefly speaking, a transmitter (not depicted) of one of the first sensor 240 and the second sensor 260 implemented as the LIDAR sends out a laser pulse and the light particles (photons) are scattered back to a receiver (not depicted) of one of the first sensor 240 and the second sensor 260 implemented as the LIDAR instrument. The photons that come back to the receiver are collected with a telescope and counted as a function of time. Using the speed of light (3×10⁸ m/s), the processor 110 can then calculate how far the photons have traveled (in the round trip). Photons can be scattered back off of many different entities surrounding the vehicle 220, such as other particles (aerosols or molecules) in the atmosphere, other cars, stationary objects or potential obstructions in front of the vehicle 220.
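The round-trip calculation described above can be sketched as follows. This is a minimal illustration of the time-of-flight principle, not the patent's implementation; the function name is ours:

```python
# Speed of light in metres per second, as cited in the description.
SPEED_OF_LIGHT = 3e8

def tof_to_range(round_trip_seconds: float) -> float:
    """Convert a photon's round-trip travel time to a one-way range."""
    # The pulse travels to the target and back, so halve the total distance.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

For example, a photon counted 1 microsecond after emission corresponds to a target roughly 150 metres away.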
(28) In a specific non-limiting example, each one of the first sensor 240 and the second sensor 260 can be implemented as the LIDAR-based systems that may be of the type available from Velodyne LiDAR, Inc. of 5521 Hellyer Avenue, San Jose, CA 95138, the United States of America. It should be expressly understood that the first sensor 240 and the second sensor 260 can be implemented in any other suitable equipment.
(29) However, in the non-limiting embodiments of the present technology, the first sensor 240 and the second sensor 260 do not have to be implemented based on the same LIDAR-based sensor, as such, respective technical characteristics of the first sensor 240 may differ from those of the second sensor 260.
(30) In some embodiments of the present technology, the first sensor 240 and the second sensor 260 can be housed in the above-mentioned enclosure (not separately depicted) located on the roof of the vehicle 220. Further, in the non-limiting embodiments of the present technology, the plurality of sensors 230 may comprise more than two LIDAR-based sensors, such as three or any other suitable number. In these embodiments, all LIDAR-based sensors, along with the first sensor 240 and the second sensor 260, can be housed in the above-mentioned enclosure (not separately depicted).
(31) In the non-limiting embodiments of the present technology, the first sensor 240 and the second sensor 260 are calibrated such that for a first 3D point cloud captured by the first sensor 240 and a second 3D point cloud captured by the second sensor 260, the processor 110 is configured to identify overlapping regions by merging the first 3D point cloud and the second 3D point cloud. This calibration can be executed during the manufacturing and/or set-up of the vehicle 220, or at any suitable time thereafter; in other words, the calibration can be executed during retrofitting of the vehicle 220 with the first sensor 240 and the second sensor 260 in accordance with the non-limiting embodiments of the present technology contemplated herein. Alternatively, the calibration can be executed during equipping the vehicle 220 with the first sensor 240 and the second sensor 260 in accordance with the non-limiting embodiments of the present technology contemplated herein.
(32) In some non-limiting embodiments of the present technology, each of the first sensor 240 and the second sensor 260 may be respective rotational LIDAR systems, or sometimes referred to as spinning LIDAR systems, each operating with its pre-determined scanning frequency. Accordingly, in these non-limiting embodiments, the processor 110 may be configured to synchronize, for example, the first sensor 240 with the second sensor 260 by adjusting the associated scanning frequencies such that, at a given moment in time, the first sensor 240 and the second sensor 260 are at a same angular position relative to their respective vertical central axes. By so doing, the processor 110 is configured to cause the first sensor 240 and the second sensor 260 to capture 3D data indicative of one and the same scene of the surrounding area 250 of the vehicle 220.
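The synchronization condition described above (both sensors at the same angular position at a given moment in time) can be sketched as follows. The phase model and function names are illustrative assumptions, not the patent's implementation:

```python
import math

def angular_position(freq_hz: float, phase0_rad: float, t_sec: float) -> float:
    # Angular position (radians) of a spinning LIDAR about its vertical
    # central axis at time t, given its scanning frequency and start phase.
    return (phase0_rad + 2.0 * math.pi * freq_hz * t_sec) % (2.0 * math.pi)

def is_synchronized(freq1: float, phase1: float,
                    freq2: float, phase2: float,
                    t_sec: float, tol: float = 1e-6) -> bool:
    # Two sensors are synchronized at time t if their angular positions
    # coincide (within a tolerance, accounting for wrap-around at 2*pi).
    a1 = angular_position(freq1, phase1, t_sec)
    a2 = angular_position(freq2, phase2, t_sec)
    diff = abs(a1 - a2)
    return diff < tol or abs(diff - 2.0 * math.pi) < tol
```

Adjusting the scanning frequencies, as described above, amounts to driving the two sensors toward equal frequency and phase so that this condition holds at every moment in time.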
(33) In some non-limiting embodiments of the present technology, the synchronization of the first sensor 240 and the second sensor 260 may be initialized, by the processor 110, inter alia, during maintenance periods of the vehicle 220; at moments of starting the vehicle 220; or during the operation of the vehicle 220 with a certain periodicity.
(34) It is also contemplated that the plurality of sensors 230 may further comprise other sensors (not depicted), such as cameras, radars, Inertial Measurement Unit (IMU) sensors, and the like.
(35) In some non-limiting embodiments of the present technology, the communication network 245 is the Internet. In alternative non-limiting embodiments, the communication network 245 can be implemented as any suitable local area network (LAN), wide area network (WAN), a private communication network or the like. It should be expressly understood that the implementations of the communication network 245 are provided for illustration purposes only. How a communication link (not separately numbered) between the electronic device 210 and the communication network 245 is implemented will depend, inter alia, on how the electronic device 210 is implemented. Merely as an example and not as a limitation, in those embodiments of the present technology where the electronic device 210 is implemented as a wireless communication device such as a smartphone or a navigation device, the communication link can be implemented as a wireless communication link. Examples of wireless communication links include, but are not limited to, a 3G communication network link, a 4G communication network link, and the like. The communication network 245 may also use a wireless connection with the server 235.
(36) In some embodiments of the present technology, the server 235 is implemented as a conventional computer server and may comprise some or all of the components of the computer system 100 of
(37) In some non-limiting embodiments of the present technology, the processor 110 of the electronic device 210 can be in communication with the server 235 to receive one or more updates. The updates can be, but are not limited to, software updates, map updates, routes updates, weather updates, and the like. In some embodiments of the present technology, the processor 110 can also be configured to transmit to the server 235 certain operational data, such as routes travelled, traffic data, performance data, and the like. Some or all data transmitted between the vehicle 220 and the server 235 may be encrypted and/or anonymized.
(38) In the description provided herein, when certain processes and method steps are executed by the processor 110 of the electronic device 210, it should be expressly understood that such processes and method steps can be executed solely by the processor 110, in a shared manner (i.e. distributed) between the processor 110 and the server 235, or solely by the server 235. In other words, when the present description refers to the processor 110 or the electronic device 210 executing certain processes or method steps, it is to expressly cover processes or steps executed by the processor 110, by the server 235, or jointly executed by the processor 110 and the server 235.
(39) With reference to
(40) It should be noted that, as the vehicle 220 is travelling on a road 302, the electronic device 210 is configured to acquire, from the rotational LIDAR system, LIDAR data 310 which is representative of objects in the surrounding area 250 of the vehicle 220.
(41) Broadly speaking, the LIDAR data 310 is acquired by the electronic device 210 in the form of a plurality of captured 3D point clouds (not numbered) including: a first captured 3D point cloud 312, a second captured 3D point cloud 322, an n-th captured 3D point cloud 332, and so forth. It should be noted that a given captured 3D point cloud comprises a large number of data points registered by the rotational LIDAR system during a respective scan of the surrounding area 250.
(42) To better illustrate this, let's take the example of the first captured 3D point cloud 312 acquired by the electronic device 210 at a moment in time t₁. The first captured 3D point cloud 312 comprises a large number of data points captured by the rotational LIDAR system during a first scan thereby of the surrounding area 250. For example, the first captured 3D point cloud 312 may include 30 000 captured data points, 50 000 captured data points, 150 000 captured data points, or the like, which are representative of objects in the surrounding area 250.
(43) One of the captured data points of the first captured 3D point cloud 312 is a captured data point 314. Data indicative of the captured data point 314 may include spatial coordinates of the captured data point 314 in 3D space as determined/captured by the rotational LIDAR system. It is contemplated that additional data may be associated with the captured data point 314. For instance, one or more additional parameters such as, for example, distance, intensity, and/or angle, as well as other parameters, may be associated with the captured data point 314, as known in the art.
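The kind of per-point record described above might be represented as follows. The field names and default values are assumptions for illustration, not the patent's data format:

```python
from dataclasses import dataclass

@dataclass
class CapturedDataPoint:
    # Spatial coordinates of the data point in 3D space,
    # as determined by the rotational LIDAR system.
    x: float
    y: float
    z: float
    # Optional additional parameters that may be associated
    # with a captured data point, as mentioned in the text.
    distance: float = 0.0   # range reported by the sensor
    intensity: float = 0.0  # return-signal strength
    angle: float = 0.0      # azimuthal angle at capture time
```

A captured 3D point cloud would then simply be a large collection (tens of thousands) of such records.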
(44) It is contemplated that the rotational LIDAR system may capture a respective captured 3D point cloud at a respective time step while the vehicle 220 is travelling. Such time step may in some cases correspond to an interval of time that is necessary for the rotational LIDAR system to perform a scan of the surrounding area 250. Such time step may also correspond to an interval of time that is necessary for the rotational LIDAR system to perform a full rotation about an azimuthal axis thereof.
(45) For example, once the rotational LIDAR system performs the first scan of the surroundings, the rotational LIDAR system may provide to the electronic device 210, at the moment t₁, the first captured 3D point cloud 312. In the same example, once the rotational LIDAR system performs the second scan of the surroundings, the rotational LIDAR system may provide to the electronic device 210, at the moment t₂, the second captured 3D point cloud 322. In the same example, once the rotational LIDAR system performs the n-th scan of the surroundings, the rotational LIDAR system may provide to the electronic device 210, at the moment tₙ, the n-th captured 3D point cloud 332.
(46) Overall, it can be said that, as the vehicle 220 is travelling on the road 302, the electronic device 210 may be configured to acquire from the rotational LIDAR system of the vehicle 220 the LIDAR data 310. The LIDAR data 310 may comprise inter alia the plurality of 3D point clouds that has been captured by the rotational LIDAR system during respective scans of the surrounding area 250.
(47) In some embodiments of the present technology, the electronic device 210 may be configured to merge the plurality of captured 3D point clouds. For example, by merging the plurality of registered 3D point clouds, the electronic device 210 may be configured to generate a 3D map representation of the surrounding area 250 of the vehicle 220. Such 3D map representation of the surrounding area 250 may be employed by the electronic device 210 for controlling operation of the vehicle 220 when travelling on the road 302.
(48) To that end, the electronic device 210 may be configured to perform one or more 3D point cloud merging techniques. In at least some embodiments of the present technology, the electronic device 210 may be configured to merge a given pair of captured 3D point clouds from the LIDAR data 310 by executing an Iterative Closest Point (ICP) algorithm.
(49) Broadly speaking, the ICP algorithm is configured to minimize the distance between a pair of 3D point clouds. Usually, during execution of the ICP algorithm, one 3D point cloud is referred to as a target or reference 3D point cloud, while the other 3D point cloud is referred to as a source 3D point cloud. The goal of the ICP algorithm is to keep the target 3D point cloud fixed and to identify a transformation rule that, when applied onto the source 3D point cloud, would result in the pair of 3D point clouds being merged with each other.
(50) Typically, the ICP algorithm includes the following steps executed iteratively: initializing the merging process by performing an initial guess of how the pair of registered 3D point clouds are to be located in a same 3D space; selecting and/or filtering the registered 3D point clouds, for determining which data points are to be used for a next step of the ICP algorithm; matching the selected/filtered data points, thereby determining correspondences between data points from the target 3D point cloud and data points from the source 3D point cloud; assigning an error metric; and minimizing the error metric by applying one of the transformation rules.
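The steps above can be sketched as a minimal point-to-point ICP. The brute-force nearest-neighbour matching, the closed-form (SVD-based) error minimization, and the omission of the selection, weighting, and rejection stages are simplifying assumptions for illustration; this is not the patent's modified algorithm:

```python
import numpy as np

def nearest_neighbors(src: np.ndarray, tgt: np.ndarray) -> np.ndarray:
    # Matching step: index of the closest target point for each source
    # point (a KD-tree would normally replace this brute-force search).
    dists = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

def best_rigid_transform(src: np.ndarray, tgt: np.ndarray):
    # Minimization step: closed-form rigid transform (rotation R and
    # translation t) that best maps src onto tgt in the least-squares
    # sense, via the SVD (Kabsch algorithm).
    cs, ct = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - cs).T @ (tgt - ct)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ct - R @ cs
    return R, t

def icp(source: np.ndarray, target: np.ndarray, iterations: int = 30):
    # Iteratively match points and minimize the error metric, keeping
    # the target cloud fixed; returns the accumulated transformation
    # rule mapping the source cloud onto the target cloud.
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        idx = nearest_neighbors(src, target)
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

The returned pair (R, t), applied to the source cloud as `source @ R.T + t`, plays the role of the transformation rule output by the ICP algorithm.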
(51) It is contemplated that in some cases, the electronic device 210 may be configured to perform weighting of some matched pairs of data points and/or reject other matched pairs of data points, prior to minimizing the error metric.
(52) It should be noted that the electronic device 210 may be configured to estimate one or more of the transformation rules that may be used for evaluating an error metric at a respective iteration of the ICP algorithm. Hence, it can be said that, by iteratively applying estimated transformation rules to minimize the error metric between the corresponding pairs of data points from the pair of 3D point clouds, the electronic device 210 causes the source 3D point cloud to go through a plurality of intermediate positions to a final position at which the selected and matched data point pairs are substantially merged.
(53) Some examples of known methods of performing the ICP algorithm are described in the article entitled Efficient Variants of the ICP Algorithm, written by Szymon Rusinkiewicz and Marc Levoy, and published by Stanford University; the content of which is hereby incorporated by reference in its entirety.
(54) It should be noted that (i) selecting/filtering data points and (ii) matching corresponding data points have an important effect on the efficiency of the ICP algorithm performed by the electronic device 210 for merging the pair of 3D point clouds.
(55) Modified Selection Step
(56) In a first broad aspect of the present technology, the developers of the present technology have devised methods and devices for improving the selection/filtration step of data points during the ICP algorithm. Put another way, in at least some embodiments of the present technology, the electronic device 210 is configured to perform a modified ICP algorithm during which a particular type of data point filtration is used for ameliorating the efficiency of the merging procedure between a pair of 3D point clouds.
(57) First of all, data point filtration may allow the electronic device 210 to perform the merging of 3D point clouds faster. It should be noted that conventional LIDAR systems may provide a given captured 3D point cloud having a large number of captured data points, as mentioned above (e.g., between 30 000 and 150 000 registered data points). As such, taking into account each and every one of these captured data points during execution of the ICP algorithm can hinder the computation performance of the electronic device 210.
(58) Second of all, it should be noted that the electronic device 210 may be configured to estimate and use normal vectors, or simply normals, associated with respective captured data points for performing one or more steps of the ICP algorithm. However, normals for at least some captured data points may be miscalculated (erroneously estimated) for a variety of reasons. For example, a given normal may be miscalculated due to an improper selection of neighbouring captured data points; that is, a given normal for a given captured data point may be estimated based on neighbouring captured data points that are not located on the same surface as the given captured data point. Such miscalculation typically occurs when the given captured data point is located near a boundary of a surface on which it is located. As a result, captured data points having erroneously estimated normals ought to be filtered out from following steps of the ICP algorithm since such erroneously estimated normals may negatively affect the accuracy of the transformation estimation step of the ICP algorithm.
(59) However, developers of the present technology have realized that, in addition to an improper selection of neighbouring captured data points during estimation of a given normal of a given registered data point, other reasons may cause the electronic device 210 to erroneously estimate the given normal of the given captured data point. For example, the developers of the present technology have realized that the electronic device 210 may erroneously estimate the given normal of the given captured data point due to an erroneous capturing of the given captured data point by the LIDAR system. More specifically, the developers of the present technology have realized that the electronic device 210 may erroneously estimate the given normal of the given captured data point due to an erroneous measurement of the spatial coordinates of the given captured data point by the LIDAR system.
(60) Put another way, the developers of the present technology have realized that spatial coordinates of a given captured data point are subject to a measurement error attributed to the LIDAR system itself during capturing. Hence, if the given captured data point is associated with erroneous spatial coordinates, the electronic device 210 is likely to erroneously estimate the normal for that given captured data point.
(61) As a result, the developers of the present technology have devised a particular type of computer-implemented data point filter to be used during the filtration step of the ICP algorithm and which accounts for the uncertainty in spatial coordinates of captured data points by treating this uncertainty as Gaussian random variables. In the context of the present specification, this computer-implemented data point filter will be referred to as a Normal Covariance Filter (NCF).
(62) Developers of the present technology have realized that employing the NCF during the filtration step of the ICP algorithm allows (i) reducing the number of captured data points in some or each 3D point cloud to be merged, thereby accelerating execution of the ICP algorithm, and (ii) filtering out captured data points whose erroneously registered spatial coordinates cause miscalculated normals, thereby reducing negative effects on the precision of the transformation estimation step of the ICP algorithm.
(63) How the electronic device 210 is configured to implement the NCF and how the electronic device 210 is configured to filter a given captured 3D point cloud via the NCF will now be described in greater detail.
(64) Filtering a Registered 3D Point Cloud by Using the NCF
(65) How the electronic device 210 is configured to filter a given captured 3D point cloud for determining a respective filtered 3D point cloud will now be discussed in greater detail with reference to a single given captured data point from the given captured 3D point cloud. With reference to
(66) As depicted in
(67) In some embodiments of the present technology, the electronic device 210 may use data indicative of the set of captured data points 402 during execution of a Singular Value Decomposition (SVD) algorithm of the modified filtration step of the ICP algorithm. Broadly speaking, the electronic device 210 may be configured to input into the SVD algorithm data representative of the set of captured data points 402. For example, it is contemplated that the electronic device 210 may be configured to input into the SVD algorithm a matrix D where:
D ∈ ℝ^{k×3}   (1)
where k is a number of captured data points in the set of captured data points 402. As such, the matrix D inputted into the SVD algorithm represents the spatial coordinates of a k number of captured data points in the set of captured data points 402. For greater clarity, it should be noted that a notation such as D.sub.i,j will refer to an entry i,j from the matrix D, and a notation D.sub.i will refer to a column i of the matrix D when counting from zero.
(68) By inputting the matrix D into the SVD algorithm, the electronic device 210 may receive, in response thereto, an output from the SVD algorithm such as:
D = U S Vᵀ, where U ∈ ℝ^{k×3}, S ∈ ℝ^{3×3}, V ∈ ℝ^{3×3}   (2)
where U, S, and V are matrix components of the SVD algorithm, and where the matrices U and V are orthogonal matrices representing a collection of basis vectors, and where S represents, in a sense, influence of data points in the space represented by V.sub.i vectors from the matrix V. Additional details regarding matrices U, S, and V are provided in the APPENDICES A and B.
(69) It should be noted that in cases where entries of matrix D are random variables, V.sub.2 may also include random variables. Developers of the present technology contemplate that the entries of the matrix D may be Gaussian random variables. Further, it should be noted that V.sub.2 from the matrix V corresponds to a normal vector 406 of an estimated plane 404, as illustrated in
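As a concrete sketch, the normal of the estimated plane can be obtained as the right singular vector V.sub.2 of the neighbourhood matrix D. The centring of D and the numpy-based helper below are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def estimate_normal(neighbours):
    """Estimate the normal of the plane fitted to a (k, 3) neighbourhood.

    The rows of D are the centred neighbour coordinates; the right singular
    vector V_2 associated with the smallest singular value is the plane normal.
    """
    D = neighbours - neighbours.mean(axis=0)          # centre the k x 3 matrix
    _, _, Vt = np.linalg.svd(D, full_matrices=False)  # D = U S V^T
    return Vt[2]                                      # V_2, counting from zero
```

Because numpy returns singular values in descending order, row 2 of Vᵀ is the direction of least spread, i.e., the plane normal (up to sign).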
(70) It is contemplated that the electronic device 210 may be configured to estimate the covariance of the normal vector V.sub.2 (e.g., plane normal) by propagation of the uncertainty technique. Broadly speaking, propagation of uncertainty (or propagation of error) refers to the effect of uncertainty of variables on the uncertainty of a function that is based thereon. When the variables are the values of experimental measurements, for example as in this case the spatial coordinates of captured data points, they have uncertainties due to measurement limitations (e.g., instrument-related measurement error of the LIDAR system) which propagate due to the combination of variables in the respective function.
(71) Thus, it can be said that the electronic device 210 may be configured to estimate the covariance of the normal vector V.sub.2 (the normal vector 406) by propagation of the uncertainty technique via the following:
(72) Cov[V_2] = J Cov[D] Jᵀ   (3)
where J is the Jacobian of the function yielding V.sub.2 from the matrix D, and where Cov[D] is the covariance of the matrix D and is computed as follows:
Cov[D] = σ²·I, I ∈ ℝ^{3×3}   (4)
and where the function in the Equation (3) is a sequence of computations that the electronic device 210 may be configured to perform in order to yield V.sub.2 from the matrix D, where I is an identity matrix, and where the measurement error of the LIDAR system is approximated by a sphere with a standard deviation σ.
(73) It should be noted that the measurement error of the LIDAR system being approximated via a sphere with a standard deviation σ means that spatial coordinates of a given captured data point may be located within a respective sphere with a standard deviation σ. It is contemplated that the value of σ may be obtained from a manufacturer of the LIDAR system.
(74) It should be noted however that covariance of V.sub.2 (i.e., Cov[V.sub.2]), lies in the system of coordinates of the LIDAR system. Therefore, the electronic device 210 may be configured to perform an alignment step so that Cov[V.sub.2] is in alignment with V.sub.2, such that:
Cov[V_2] = Q C Q⁻¹, or C = Q⁻¹ Cov[V_2] Q   (5)
where matrix C is a covariance of V.sub.2 in a normal-aligned coordinate system (e.g., coordinate system defined by V.sub.0, V.sub.1, and V.sub.2), and where Q and Q⁻¹ are matrices used for performing the alignment, or in other words, matrices indicative of a transformation between an original coordinate system (e.g., the coordinate system of the LIDAR system) and the normal-aligned coordinate system.
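Equations (3) to (5) can be sketched numerically as follows. The analytic derivation is given in the appendices, so the finite-difference Jacobian, the centring of D, and the choice of Q as the matrix with columns V.sub.0, V.sub.1, V.sub.2 below are illustrative assumptions.

```python
import numpy as np

def normal_covariance(neighbours, sigma, eps=1e-6):
    """Propagate Cov[D] = sigma^2 * I through f(D) = V_2 via a numerical Jacobian,
    then align the result with the normal (Equation (5)): C = Q^{-1} Cov[V_2] Q."""
    def f(flat):
        D = flat.reshape(-1, 3)
        D = D - D.mean(axis=0)
        return np.linalg.svd(D, full_matrices=False)[2][2]  # the normal V_2

    x0 = neighbours.ravel().astype(float)
    n0 = f(x0)
    # central-difference Jacobian of the normal w.r.t. every point coordinate
    J = np.empty((3, x0.size))
    for i in range(x0.size):
        dx = np.zeros_like(x0)
        dx[i] = eps
        n_plus, n_minus = f(x0 + dx), f(x0 - dx)
        # resolve the +/- sign ambiguity of singular vectors
        n_plus = n_plus if n_plus @ n0 > 0 else -n_plus
        n_minus = n_minus if n_minus @ n0 > 0 else -n_minus
        J[:, i] = (n_plus - n_minus) / (2 * eps)
    cov_v2 = (sigma ** 2) * (J @ J.T)  # J Cov[D] J^T with Cov[D] = sigma^2 I
    _, _, Vt = np.linalg.svd(neighbours - neighbours.mean(axis=0), full_matrices=False)
    Q = Vt.T                           # columns V_0, V_1, V_2
    return Q.T @ cov_v2 @ Q            # Q^{-1} = Q^T for an orthogonal Q
```

The returned matrix plays the role of C, from which the element C.sub.2,2 may be read off as the uncertainty value of the point.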
(75) The electronic device 210 may be configured to use a given element of the matrix C for performing a decision-making process on whether the given captured data point 401 is to be filtered out/excluded from the filtered 3D point cloud. More specifically, the electronic device 210 may be configured to use the element C.sub.2,2 from the matrix C to determine whether the given captured data point 401 is to be filtered out. To that end, the electronic device 210 may be configured to compare C.sub.2,2 against a threshold value c.sub.T:
C_2,2 < c_T   (6)
where c.sub.T is a pre-determined threshold, and where C.sub.2,2 is the element of the matrix C.
(76) It should be noted that C.sub.2,2 represents a distribution of potential normal vectors, for the given captured data point 401, projected on an axis aligned with V.sub.2. It should be noted that a cone 408 illustrated in
(77) Hence, it is contemplated that if the value C.sub.2,2 for the given captured data point 401 is above the threshold value c.sub.T, the electronic device 210 may be configured to exclude the given captured data point 401 from further processing since the given captured data point 401 is associated with a normal uncertainty value that is too high (e.g., above the threshold value c.sub.T).
(78) As previously alluded to, the electronic device 210 may be configured to compute the matrix C for each captured data point from the given captured 3D point cloud, similarly to how the electronic device 210 computes the matrix C for the given captured data point 401. Hence, the electronic device 210 may be configured to compute the value C.sub.2,2 for each captured data point from the given captured 3D point cloud and may compare it to the threshold value c.sub.T, similarly to how the electronic device 210 is configured to compute the value C.sub.2,2 for the given captured data point 401 and compare it to the threshold value c.sub.T.
(79) Therefore, the electronic device 210 may be configured to filter out, from the given captured 3D point cloud, a set of excluded captured data points with the highest normal uncertainty values by comparing respective values of C.sub.2,2 against the threshold value c.sub.T, while taking into account the uncertainty of respective spatial coordinates caused by the measurement error of the LIDAR system.
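The point-by-point decision rule of Equation (6) can be sketched as follows; the `uncertainty` callable stands in for the per-point computation of C.sub.2,2 and is a hypothetical parameter of this sketch.

```python
import numpy as np

def normal_covariance_filter(points, uncertainty, c_t):
    """Return the filtered cloud: keep a point only if its normal-uncertainty
    value (e.g., C_2,2) is below the pre-determined threshold c_t."""
    values = np.array([uncertainty(p) for p in points])
    return points[values < c_t]
```

Points whose uncertainty value is at or above c_t are excluded from further processing, yielding the filtered plurality of data points used in subsequent ICP steps.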
(80) As a result, by applying the above filtration process 400 on respective captured data points, the electronic device 210 is configured to obtain a first filtered 3D point cloud, from the first captured 3D point cloud, which includes data points with precise normal vectors. The electronic device 210 may then be configured to employ the first filtered 3D point cloud, instead of the first captured 3D point cloud, during subsequent steps of the ICP algorithm. It should be noted that at least some aspects of the present technology related to the filtration process 400 are discussed in greater detail in APPENDICES A and B of the present specification. It is contemplated that the electronic device 210 may be configured to determine a second filtered 3D point cloud based on the second captured 3D point cloud in a similar manner to how the electronic device 210 is configured to determine the first filtered 3D point cloud based on the first captured 3D point cloud.
(81) Modified Matching Step
(82) In a second broad aspect of the present technology, the developers of the present technology have devised methods and devices for improving the matching step of the ICP algorithm. Put another way, in at least some embodiments of the present technology, the electronic device 210 is configured to perform a modified ICP algorithm during which geometric information regarding the LIDAR system, which captured the two 3D point clouds being merged, is used for rejecting at least some pairs of data points. So-rejecting at least some pairs of data points may allow the electronic device 210 to ameliorate the efficiency of the merging procedure between the two 3D point clouds.
(83) It should be noted that in some embodiments, the electronic device 210 may be configured to execute this modified matching step of the ICP algorithm onto the first filtered 3D point cloud and the second filtered 3D point cloud, as described above, instead of the first captured 3D point cloud and the second captured 3D point cloud. However, in alternative non-limiting embodiments of the present technology, only one of the modified steps (i.e. either modified filtering step or modified matching step) may be executed.
(84) Optionally, it is contemplated that the electronic device 210 may be configured to perform this modified matching step of the ICP algorithm onto filtered 3D point clouds that have been filtered by the electronic device 210 during a selection step of the ICP algorithm in a different manner to what is described abovethat is, the electronic device 210 may be configured to execute this modified matching step onto 3D point clouds having been determined in any suitable manner for a given application.
(85) Broadly speaking, the purpose of the modified matching step being executed by the electronic device 210 is to (i) determine matched pairs of data points (also referred to as correspondences), and (ii) reject some of these pairs so as to ameliorate the efficiency of the merging procedure.
(86) Therefore, it can be said that the electronic device 210 may be configured to employ the first filtered 3D point cloud and the second filtered 3D point cloud for matching data points into respective pairs, such that each pair includes (i) a given first data point from the first filtered 3D point cloud and (ii) a given second data point from the second filtered 3D point cloud. By so doing, the electronic device 210 may be configured to determine correspondences between the two filtered 3D point clouds.
(87) The electronic device 210 may determine initial correspondences between data points in a variety of ways. In one example, the electronic device 210 may be configured to determine the initial pairs of data points based on a shortest distance therebetween. In another example, the electronic device 210 may be configured to determine pairs of data points, not only based on the shortest distance therebetween, but also based on an analysis of respective normal vectors.
(88) Nevertheless, irrespective of how the electronic device 210 is configured to determine the initial pairs of data points between the first filtered 3D point cloud and the second filtered 3D point cloud (e.g., the initial correspondences), the electronic device 210 may be configured to employ the geometric correspondence rejector (GCR) for rejecting at least some initial pairs of data points, thereby determining a reduced plurality of pairs of data points to be used during a next step of the ICP algorithm.
(89) As mentioned above, the GCR may be met when a distance between data points of a given initial pair of data points is below a geometry-based threshold. The geometry-based threshold may be referred herein as a neighbour beam distance. As it will become apparent from the description herein further below, in at least some embodiments of the present technology, the geometry-based threshold may be a longest neighbour beam distance amongst a set of neighbour beam distances determined for a given data point. How the electronic device 210 may be configured to determine this geometry-based threshold will now be described.
(90) It is contemplated that the electronic device 210 may be configured to determine a plurality of initial pairs (initial correspondences) between the two 3D point clouds, and then, may be configured to use at least one neighbour beam distance for respective initial pairs to determine a reduced plurality of pairs between the two 3D point clouds. For example, the electronic device 210 may be configured to determine:
M = {m̃_k ∈ M̃ : IsInlier(p_k, p′_k; ∥m̃_k∥)}   (7)
where M is the reduced plurality of pairs, M̃ is the plurality of initial pairs (prior to rejection of at least some initial pairs), m̃_k being a given initial pair from the plurality of initial pairs, p_k being a given first data point from the first 3D point cloud and being in the given initial pair m̃_k, and p′_k being a given second data point from the second 3D point cloud and being in the given initial pair m̃_k.
(91) It should be noted that the IsInlier may be a computer-implemented procedure performed by the electronic device 210 and makes use of the geometry-based threshold for determining whether or not a given initial pair is to be considered as an inlier, and therefore should be included in the reduced plurality of pairs, or whether the given initial pair is to be considered as an outlier, and therefore should be excluded from the reduced plurality of pairs. Hence, it can be said that the IsInlier is, in a sense, a computer-implemented test that the electronic device 210 may employ for determining the reduced plurality of pairs. It should be noted that the IsInlier test is described in greater detail in the APPENDICES A and B.
(92) To better illustrate the IsInlier test, it should be noted that a given one of the plurality of initial pairs is characterized by:
∥m̃_k∥ = ∥p_k − p′_k∥   (8)
where ∥m̃_k∥ is the Euclidean distance between the data points in the given initial pair. It is contemplated that a point-to-point distance between the data points in the given initial pair may correspond to the Euclidean distance between these data points.
(93) As such, it can be said that the IsInlier test follows the following logic:
∥m̃_k∥ < max_l d(p_k, p_l)   (9)
where a given d(p_k, p_l), or shortly d_k,l, is a given neighbour beam distance for the first data point p_k from the given initial pair m̃_k. Put another way, the Equation (9) means that the purpose of the IsInlier test performed by the electronic device 210 is to compare the Euclidean (point-to-point) distance between the initial pair of data points against an upper bound of a number of neighbour beam distances for the first data point p_k, that is, against the longest one of those neighbour beam distances.
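A minimal sketch of the IsInlier test of Equation (9), assuming the neighbour beam distances d_k,l have already been computed for the first data point:

```python
import numpy as np

def is_inlier(p_k, p_k_prime, neighbour_beam_distances):
    """Keep the pair when the point-to-point (Euclidean) distance is shorter
    than the longest neighbour beam distance of the first data point p_k."""
    m_k = np.linalg.norm(np.asarray(p_k) - np.asarray(p_k_prime))
    return m_k < max(neighbour_beam_distances)
```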
(94) To better illustrate this, reference will now be made to
(95) As such, the electronic device 210 may be configured to determine a geometry-based threshold for the first data point 512 for executing the IsInlier test onto the initial pair of data points p.sub.k and p′.sub.k. To that end, the electronic device 210 may be configured to identify neighbouring data points (also referred herein as neighbour data points or neighbours) from the first 3D point cloud to the first data point 512.
(96) As depicted in
(97) It should be noted that these neighbour data points 522, 532, 542, and 552 are produced by lasers of the rotational LIDAR system that are vertically adjacent to the laser having produced the first given data point 512 (p.sub.k). For example, the neighbour data points 522 and 532 are registered via a first given laser that produces beams 520 and 530, the first data point 512 (p.sub.k) is registered via a second given laser that produces beam 510, and the neighbour data points 542 and 552 are registered via a third given laser that produces beams 550.
(98) Conventionally, it should be noted that for rotational LIDAR systems, a level of each laser of the rotational LIDAR system is called a ring, so if p.sub.k is part of the ring r.sub.j, the neighbour data points p.sub.l, where l ∈ {0 . . . 3}, are on adjacent rings r.sub.j−1 and r.sub.j+1. Put another way, the first given laser corresponds to a ring 580, the second given laser corresponds to a ring 581, and the third given laser corresponds to a ring 582.
(99) Also, it should be noted that (i) left neighbour data points 522 (p.sub.l=0) and 552 (p.sub.l=3) are captured at a previous increment of LIDAR azimuthal rotation, (ii) the first data point 512 (p.sub.k) is captured at a current increment of LIDAR azimuthal rotation, and (iii) right neighbour data points 532 (p.sub.l=1) and 542 (p.sub.l=2) are captured at a next increment of LIDAR azimuthal rotation.
(100) It should be noted that the electronic device 210 may be configured to acquire angular distances between the first data point 512 (p.sub.k) and each one of the neighbour data points 522, 532, 542, and 552. For example, for a calibrated rotational LIDAR system, the electronic device 210 may be configured to acquire calibration parameters such as: Δφ being an angular increment of the LIDAR system azimuthal rotation, and δ.sub.j−1 and δ.sub.j+1 being angular distances in pitch between the rings (from ring 581 to ring 580 and from ring 581 to ring 582, respectively).
(101) It can be said that the first data point 512 (p.sub.k) and the neighbours 522, 532, 542, and 552 (p.sub.l where l{0 . . . 3}) may be represented on a segment of the unit-sphere. Put another way, the first data point 512 (p.sub.k) and the neighbouring data points 522, 532, 542, and 552 define a segment of the unit-sphere. This segment could be approximated by a plane 502 in
(102) It should be noted that a diagonal vector p.sub.k,l between the first data point 512 (p.sub.k) and a given neighbour p.sub.l on the plane 502 is defined as follows:
p_k,l = [±Δφ·u + δ_j±1,j·v]·∥p_k∥   (10)
where u and v are unitary vectors, and where u is aligned on the plane 502 with the direction of azimuthal rotation of the LIDAR system, and where v is perpendicular to u and is aligned on the plane 502 with the vertical direction. In other words, u and v define the abscissa and ordinate axes of the p.sub.k,l coordinate frame. Additional information regarding how the electronic device 210 may be configured to determine the u and v vectors is described in greater detail in the APPENDICES A and B.
(103) For example, the electronic device 210 may be configured to determine, by using the Equation (10), (i) for the neighbour data point 522 (p.sub.l=0) a respective diagonal vector 540 (p.sub.k,l=0), (ii) for the neighbour data point 552 (p.sub.l=3) a respective diagonal vector 541 (p.sub.k,l=3), and so on. It should be noted that diagonal vectors (p.sub.k,l=1 and p.sub.k,l=2) for the neighbour data point 532 (p.sub.l=1) and for the neighbour data point 542 (p.sub.l=2), respectively, are not depicted in
(104) Once the diagonal vector p.sub.k,l for a given neighbour data point p.sub.l is determined by the electronic device 210, the electronic device 210 may determine a neighbour beam distance d.sub.k,l between (i) the first data point 512 p.sub.k and (ii) a respective neighbour data point p.sub.l via the following:
(105) d_k,l = ∥p_k,l − ((p_k,l·n_k)/(b_k·n_k))·b_k∥, where b_k = p_k/∥p_k∥   (12)
where d.sub.k,l is the projection of a respective diagonal vector p.sub.k,l onto a reflecting surface 504 that is orthogonal to the normal n.sub.k, the projection being taken along the laser beam flight direction b.sub.k. It should be noted that the electronic device 210 may be configured to use information regarding the normal vector associated with the first data point 512 p.sub.k and/or a given normal vector associated with any one of the neighbour data points as the normal n.sub.k in the Equation (12).
(106) Put another way, the electronic device 210 may be configured to determine, via the Equation (12), the number of neighbour beam distances for the first data point 512 p.sub.k, which include: (i) a neighbour beam distance 528 d.sub.k,l=0 for the neighbour data point 522 p.sub.l=0, (ii) a neighbour beam distance 558 d.sub.k,l=3 for the neighbour data point 552 p.sub.l=3, and so on. It should be noted that neighbour beam distances d.sub.k,l=1 and d.sub.k,l=2 are not depicted in
(107) The electronic device 210 may then be configured to identify the largest one amongst the four neighbour beam distances d.sub.k,l=0, d.sub.k,l=1, d.sub.k,l=2, and d.sub.k,l=3 as d(p.sub.k,p.sub.l) from the Equation (9). In other words, the electronic device 210 may be configured to identify the largest one of the four neighbour beam distances d.sub.k,l=0, d.sub.k,l=1, d.sub.k,l=2, and d.sub.k,l=3 as the geometry-based threshold for the first data point 512 p.sub.k.
(108) Then, the electronic device 210 may employ the Equation (9) for determining whether the point-to-point distance between the first data point 512 p.sub.k and the given second data point p′.sub.k is above or below the geometry-based threshold for the first data point 512 p.sub.k.
(109) It should be noted that at least some aspects of the present technology are discussed in greater detail in APPENDICES A and B of the present specification.
(110) Rejecting Pairs of Data Points by Using the GCR
(111) To better illustrate this, let it be assumed that ∥m̃_k∥ (the point-to-point distance between the given initial pair of data points) is below the geometry-based threshold for the first data point 512 p.sub.k (the largest one of the four neighbour beam distances d.sub.k,l=0, d.sub.k,l=1, d.sub.k,l=2, and d.sub.k,l=3). In such a case, the electronic device 210 may be configured to keep the given initial pair of data points for further processing.
(112) Now let it be assumed that ∥m̃_k∥ (the point-to-point distance between the given initial pair of data points) is above the geometry-based threshold for the first data point 512 p.sub.k (the largest one of the four neighbour beam distances d.sub.k,l=0, d.sub.k,l=1, d.sub.k,l=2, and d.sub.k,l=3). In such a case, the electronic device 210 may determine that the point-to-point distance between the given initial pair of data points is too long, and hence, the electronic device 210 may be configured to exclude the given initial pair of data points from further processing.
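Putting the pieces together, the rejection of Equation (7) over a plurality of initial pairs can be sketched as follows; `beam_distances[k]` is assumed to hold the neighbour beam distances d_k,l already determined for the k-th first data point.

```python
import numpy as np

def geometric_correspondence_rejector(pairs, beam_distances):
    """Return the reduced plurality of pairs M: an initial pair (p_k, p'_k)
    survives when its point-to-point distance is below the largest neighbour
    beam distance d_k,l of p_k (the geometry-based threshold)."""
    reduced = []
    for k, (p_k, p_k_prime) in enumerate(pairs):
        dist = np.linalg.norm(np.asarray(p_k) - np.asarray(p_k_prime))
        if dist < max(beam_distances[k]):
            reduced.append((p_k, p_k_prime))
    return reduced
```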
(113) In some embodiments of the present technology, the electronic device 210 may be configured to execute a method 600 of processing LIDAR sensor data, as depicted in
(114) STEP 602: Receiving, from the LIDAR Sensor, a First Dataset Having a Plurality of First Data Points
(115) The method 600 begins at step 602 with the electronic device 210 configured to receive the first dataset from the LIDAR sensor which has a plurality of first data points. For example, the first dataset may include the first captured 3D point cloud, as mentioned above. It should be noted that this first dataset is captured by the LIDAR sensor and may be biased by the measurement error of the LIDAR sensor itself.
(116) It should also be noted that a given first data point from the plurality of first data points is representative of respective spatial coordinates in a 3D space. The given first data point is also associated with a respective normal vector from a plurality of normal vectors. For example, a respective one of the plurality of first data points may be associated with a respective one of the plurality of normal vectors.
(117) It is contemplated that the LIDAR sensor may be mounted to a Self-Driving Car (SDC), such as for example, the vehicle 220 depicted in
(118) STEP 604: Determining, by the Electronic Device, an Uncertainty Parameter for the Given First Data Point Based on a Normal Covariance of the Normal Vector of the Given First Data Point which Takes into Account a Measurement Error of the LIDAR Sensor when Determining the Respective Coordinates of the Given First Data Point
(119) The method 600 continues to step 604 with the electronic device 210 being configured to determine an uncertainty parameter for the given first data point based on a normal covariance of the normal vector of the given first data point, which normal covariance takes into account a measurement error of the LIDAR sensor when determining the respective spatial coordinates of the given first data point.
(120) It is contemplated that the uncertainty parameter for the given first data point may correspond to the value C.sub.2,2 for the given first data point and may be obtained via the Equation (5) above.
(121) It should be noted that, in order to determine the value C.sub.2,2, the electronic device 210 may be configured to estimate Cov[V.sub.2] (e.g., covariance of the normal vector 406) by propagation of the uncertainty technique summarized in the Equations (3) and (4). It should be noted that the estimation of Cov[V.sub.2] depends on the measurement error of the LIDAR system during determination of spatial coordinates of captured data points (see the Equation (4)). Once the electronic device 210 estimates the Cov[V.sub.2], the electronic device 210 may be configured to align it with the normal vector V.sub.2; that is, the electronic device 210 is configured to perform the alignment step for Cov[V.sub.2] as defined in the Equation (5). The result of the alignment step performed by the electronic device 210 yields the matrix C which is the covariance of V.sub.2 in a normal-aligned coordinate system (e.g., coordinate system defined by V.sub.0, V.sub.1, and V.sub.2). The electronic device 210 may then be configured to identify the value C.sub.2,2 from the matrix C as the uncertainty parameter for the given first data point.
(122) In some embodiments, the electronic device 210 may be configured to determine the normal covariance of the normal vector of the given first data point while taking into account an uncertainty in the respective coordinates of the given first data point. It is contemplated that the uncertainty may be approximated as a Gaussian random variable. It is further contemplated that the measurement error of the LIDAR sensor may be approximated by a sphere with a standard deviation σ (see the Equation (4)).
(123) STEP 606: In Response to the Uncertainty Parameter being Above a Pre-Determined Threshold, Excluding the Given First Data Point from the Plurality of First Data Points
(124) The method 600 continues to step 606 with the electronic device 210 configured to, in response to the uncertainty parameter for the given first data point being above a pre-determined threshold, exclude the given first data point from the plurality of first data points.
(125) For example, the pre-determined threshold may correspond to the threshold value c.sub.T from the Equation (6). It should be noted that if the uncertainty parameter (e.g., C.sub.2,2) for the given first data point is above the threshold value c.sub.T, the electronic device 210 may be configured to exclude the given first data point from further processing during the ICP algorithm.
(126) Therefore, by performing the step 606, the electronic device 210 may be configured to determine a filtered plurality of first data points based on the first plurality of data points. It can also be said that the electronic device 210 is thus configured to determine the first filtered 3D point cloud based on the first captured 3D point cloud.
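For illustration only, the filtering of the step 606 amounts to a simple threshold test per point; the threshold value below is a made-up example standing in for c.sub.T of the Equation (7), and the names are illustrative:

```python
import numpy as np

def filter_points(points: np.ndarray, uncertainties: np.ndarray, c_t: float) -> np.ndarray:
    """Keep only points whose uncertainty parameter does not exceed c_t."""
    points = np.asarray(points, dtype=float)
    uncertainties = np.asarray(uncertainties, dtype=float)
    # Points whose uncertainty parameter (e.g., C[2, 2]) is above the
    # pre-determined threshold are excluded from further ICP processing.
    keep = uncertainties <= c_t
    return points[keep]

pts = np.array([[0., 0., 1.], [1., 0., 2.], [2., 1., 0.]])
unc = np.array([0.10, 0.90, 0.05])
print(filter_points(pts, unc, c_t=0.5))  # keeps the 1st and 3rd points
```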
(127) STEP 608: Using the Filtered Plurality of First Data Points, Instead of the Plurality of First Data Points, for Merging the First Dataset of the LIDAR Sensor with a Second Dataset of the LIDAR Sensor
(128) The method 600 continues to step 608 with the electronic device 210 being configured to use the filtered plurality of first data points (determined during the step 606), instead of the plurality of first data points (received during the step 602), for merging the first dataset of the LIDAR sensor with a second dataset of the LIDAR sensor.
(129) In some embodiments, it is contemplated that the electronic device 210 may be configured to use the filtered plurality of first data points, instead of the plurality of first data points, during the matching step of the ICP algorithm. For example, the electronic device may be configured to estimate a transformation rule between the first dataset and the second dataset during the ICP algorithm based on the filtered plurality of first data points. It is contemplated that the transformation rule may be an output of the ICP algorithm.
(130) In addition, in some embodiments of the present technology, the electronic device 210 may also use the merged first and second datasets for controlling operation of the SDC.
(131) As mentioned above, the electronic device 210 may be configured to execute the method 700 that will now be described in greater detail.
(132) STEP 702: Receiving, from the LIDAR Sensor, an Indication of the LIDAR Sensor Data Including a First Dataset Having a Plurality of First Data Points and a Second Dataset Having a Plurality of Second Data Points
(133) The method 700 begins at step 702 with the electronic device 210 configured to receive, from the LIDAR sensor, an indication of the LIDAR sensor data including a first dataset having a plurality of first data points and a second dataset having a plurality of second data points.
(134) It should be noted that each one from the plurality of first data points and each one from the plurality of second data points is representative of respective spatial coordinates in a 3D space and is associated with a respective normal vector from a plurality of normal vectors.
(135) It should be noted that the first dataset and the second dataset (e.g., including a first 3D point cloud and a second 3D point cloud) may have been captured during sequential scanning phases of the LIDAR sensor.
(136) STEP 704: Matching at Least Some of the Plurality of First Data Points with at Least Some of the Plurality of Second Data Points, Thereby Determining a Plurality of Pairs
(137) The method 700 continues to step 704 with the electronic device 210 configured to match at least some of the plurality of first data points with at least some of the plurality of second data points, thereby determining a plurality of pairs.
(138) For example, by performing this matching, the electronic device 210 may be configured to determine the plurality of initial pairs, as mentioned above. This means that the electronic device 210 may be configured to determine {tilde over (M)} as defined in the Equation (7), where {tilde over (M)} is the plurality of initial pairs (prior to rejection of at least some initial pairs).
(139) It should be noted that a given one of the plurality of these initial pairs includes (i) a given first data point and (ii) a given second data point, the given first data point and the given second data point being separated by a point-to-point actual distance. For example, a given one of the plurality of these initial pairs may be defined as {acute over (m)}.sub.k, which represents a given initial pair from the plurality of initial pairs. The given initial pair {acute over (m)}.sub.k includes (i) the given one of the first plurality of data points p.sub.k from the first 3D point cloud, and (ii) the corresponding one of the plurality of second data points from the second 3D point cloud. In this example, the point-to-point actual distance between the two data points may be defined as the Euclidean distance therebetween (see the Equation (8)). Hence, it is contemplated that the point-to-point actual distance may be a Euclidean distance between the respective initial pair of data points in the 3D space.
(140) It should be noted that the electronic device 210 may be configured to perform such matching in a variety of ways. For example, the electronic device 210 may be configured to perform the matching in a different manner for a given application.
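As one of the variety of possible matching strategies mentioned above (a sketch only; the present technology does not mandate this particular one), each first data point can be paired with its nearest second data point by Euclidean distance. Names are illustrative:

```python
import numpy as np

def match_pairs(first_cloud, second_cloud):
    """Pair each first-cloud point with its nearest second-cloud point.

    Returns a list of (first_index, second_index, euclidean_distance)
    tuples, playing the role of the plurality of initial pairs.
    """
    first_cloud = np.asarray(first_cloud, dtype=float)
    second_cloud = np.asarray(second_cloud, dtype=float)
    pairs = []
    for k, p in enumerate(first_cloud):
        # Point-to-point actual distances to every second-cloud point.
        dists = np.linalg.norm(second_cloud - p, axis=1)
        j = int(np.argmin(dists))
        pairs.append((k, j, float(dists[j])))
    return pairs
```

A brute-force nearest-neighbour search is quadratic in the cloud sizes; a spatial index (e.g., a k-d tree) is the usual choice at LIDAR scale.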
(141) STEP 706: For the Given One of the Plurality of Pairs, Determining a Pair-Specific Filtering Parameter
(142) The method 700 continues to step 706 with the electronic device 210 configured to determine a pair-specific filtering parameter for the given one of the plurality of (initial) pairs determined during the step 704.
(143) For example, the electronic device 210 may be configured to determine the pair-specific filtering parameter to be positive for a given initial pair of data points if the point-to-point distance (e.g., the Euclidean distance between the initial pair of data points) is above the geometry-based threshold for the given initial pair of data points.
(144) It is contemplated that the electronic device 210 may be configured to determine the pair-specific filtering parameter by defining/identifying, for the given second data point in the given initial pair, a set of neighbouring data points (e.g., the first data points 522, 532, 542, and 552).
(145) In at least some embodiments of the present technology, the set of neighbouring data points may comprise four neighbouring data points, such as the data points 522, 532, 542, and 552.
(146) In some embodiments, it is contemplated that the given first data point and the set of neighbouring points define a segment of a unit sphere. It is also contemplated that the approximation plane generated by the electronic device 210 may be based on an assumption that laser beams generated by the plurality of lasers are parallel.
(147) As explained above, it can be said that the first data point 512 (p.sub.k) and the neighbouring data points 522, 532, 542, and 552 (p.sub.l, where l∈{0 . . . 3}) may be represented on a segment of the unit-sphere. Put another way, the first data point 512 (p.sub.k) and the neighbouring data points 522, 532, 542, and 552 define a segment of the unit-sphere. This segment may be approximated by the plane 502.
(148) It is contemplated that the electronic device 210 may be configured to calculate the neighbour beam distances between the given first data point 512 and respective ones of the set of neighbouring points 522, 532, 542, and 552. A given neighbour beam distance d.sub.k,l is representative of a linear distance between the given first data point 512 and a respective one of the set of neighbouring points (p.sub.l).
(149) It is contemplated that, in order to calculate the neighbour beam distances, the electronic device 210 may be configured to (i) generate a given diagonal vector p.sub.k,l between the given first data point 512 p.sub.k and a respective one of the set of neighbouring points in a first coordinate system (unitary vectors u and v), and (ii) calculate the respective neighbour beam distance d.sub.k,l by projecting the given diagonal vector p.sub.k,l onto the reflecting surface 504 orthogonal to a laser path direction.
(150) For example, the electronic device 210 may be configured to calculate the neighbour beam distances d.sub.k,l=0, d.sub.k,l=1, d.sub.k,l=2, and d.sub.k,l=3 via the Equation (12).
(151) It is further contemplated that the electronic device 210 may calculate the neighbour beam distances between the given first data point 512 and the set of neighbouring points based on an angular increment of azimuthal rotation and angular distances in pitch between vertically spaced lasers.
(152) Then, it is contemplated that the electronic device 210 may be configured to identify a largest neighbour beam distance amongst all the neighbour beam distances (in this case, amongst d.sub.k,l=0, d.sub.k,l=1, d.sub.k,l=2, and d.sub.k,l=3). Then, it is contemplated that the electronic device 210 may be configured to determine, in response to the point-to-point actual distance being above the largest neighbour beam distance, that the pair-specific filtering parameter is to be positive.
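The geometry-based rejection of the steps above can be sketched as follows (an illustrative sketch, not the claimed implementation; all names are illustrative). The neighbour beam distance is obtained by removing from the diagonal vector its component along the laser path direction, which projects it onto the plane orthogonal to the beam; the pair is then rejected when the point-to-point actual distance exceeds the largest neighbour beam distance:

```python
import numpy as np

def beam_distance(p_k: np.ndarray, p_l: np.ndarray, beam_dir: np.ndarray) -> float:
    """Neighbour beam distance d_{k,l}: length of the diagonal vector
    projected onto the plane orthogonal to the laser path direction."""
    beam_dir = beam_dir / np.linalg.norm(beam_dir)
    diag = p_l - p_k
    # Subtracting the along-beam component leaves the part of the diagonal
    # vector lying in the reflecting plane orthogonal to the laser path.
    in_plane = diag - np.dot(diag, beam_dir) * beam_dir
    return float(np.linalg.norm(in_plane))

def reject_pair(p_k, q_k, neighbours, beam_dir) -> bool:
    """True (pair-specific filtering parameter positive) when the
    point-to-point distance exceeds the largest neighbour beam distance."""
    d_max = max(beam_distance(p_k, p_l, beam_dir) for p_l in neighbours)
    return float(np.linalg.norm(np.asarray(q_k) - np.asarray(p_k))) > d_max
```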
(153) STEP 708: In Response to the Pair-Specific Parameter being Positive, Excluding the Given One of the Plurality of Pairs from Further Processing
(154) The method 700 continues to step 708 with the electronic device 210 configured to discard/exclude the given one of the plurality of (initial) pairs (in the examples above, the initial pair {acute over (m)}.sub.k) from further processing. The electronic device 210 is thereby configured to define a reduced plurality of pairs (e.g., excluding at least the initial pair {acute over (m)}.sub.k).
(155) STEP 710: Processing the Reduced Subset of Pairs for Merging the First Dataset and the Second Dataset
(156) The method 700 continues to step 710 with the electronic device 210 configured to process the reduced plurality of pairs for merging the first dataset and the second dataset.
(157) In some embodiments, the electronic device 210 may be configured to (as part of the processing of the reduced plurality of pairs) estimate a transformation rule between the first dataset and the second dataset. For example, the transformation rule may be an output of an ICP algorithm performed by the electronic device.
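One standard way to estimate such a transformation rule from the reduced plurality of pairs is the SVD-based (Kabsch) least-squares solution used in many ICP variants; the present technology does not mandate this particular solver, and the names below are illustrative:

```python
import numpy as np

def estimate_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) with dst ~= src @ R.T + t.

    src, dst: (N, 3) arrays of matched point pairs (the reduced
    plurality of pairs), row i of src matched with row i of dst.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Centre both clouds on their centroids.
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred clouds.
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Within an ICP loop, this estimation alternates with the matching step until the transformation rule converges, after which the first dataset can be brought into the frame of the second dataset and the two merged.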
(158) In addition, the electronic device 210 may be configured to use the merged first and second datasets for controlling operation of the vehicle 220.
(159) Modifications and improvements to the above-described implementations of the present technology may become apparent to those skilled in the art. The foregoing description is intended to be exemplary rather than limiting. The scope of the present technology is therefore intended to be limited solely by the scope of the appended claims.