STEERING AUTOMATED VEHICLES BASED ON TRAJECTORIES DETERMINED FROM FUSED OCCUPANCY GRIDS
20250229790 · 2025-07-17
Assignee
Inventors
- Elias Lukas HAMPP (Zürich, CH)
- David MAUDERLI (Zürich, CH)
- Alexander HEILMEIER (Munich, DE)
- Alexander DOMAHIDI (Zürich, CH)
- Stefano LONGO (Zürich, CH)
CPC classification (section B: PERFORMING OPERATIONS; TRANSPORTING)
- B62D6/00
- B60W60/001
- B60W50/0097
- B60W50/06
International classification (section B: PERFORMING OPERATIONS; TRANSPORTING)
- B60W50/00
- B62D6/00
- B60W60/00
Abstract
The invention is notably directed to a method of steering an automated vehicle (2) in a designated area, thanks to a set (10) of offboard perception sensors (110-140). The method comprises repeatedly executing algorithmic iterations, where each iteration comprises the following steps. First, sensor data are dispatched to K processing systems (11, 12), whereby each processing system k of the K processing systems receives N.sub.k datasets of the sensor data as obtained from N.sub.k respective sensors of the set (10) of offboard perception sensors (110-140), where k=1 to K, K≥2, and N.sub.k≥2. The N.sub.k datasets are subsequently processed at each processing system k to obtain M.sub.k occupancy grids corresponding to perceptions from M.sub.k respective sensors of the offboard perception sensors, respectively, where N.sub.k≥M.sub.k≥1. The M.sub.k occupancy grids overlap at least partly. Data from the M.sub.k occupancy grids obtained are then fused, at each processing system k, to form a fused occupancy grid, whereby K fused occupancy grids are formed by the K processing systems (11, 12), respectively. The K fused occupancy grids are then forwarded to a further processing system (14), which merges the K fused occupancy grids to obtain a global occupancy grid for the designated area. Eventually, a trajectory is determined for the automated vehicle (2), based on the global occupancy grid. This trajectory is then forwarded to a drive-by-wire system (20) of the automated vehicle (2), to accordingly steer the latter. The invention is further directed to related systems and computer program products.
Claims
1. A computer-implemented method of steering an automated vehicle in a designated area using a set of offboard perception sensors, wherein the method comprises repeatedly executing algorithmic iterations and each iteration of the several algorithmic iterations comprises: dispatching sensor data to K processing systems, whereby each processing system k of the K processing systems receives N.sub.k datasets of the sensor data as obtained from N.sub.k respective sensors of the set of offboard perception sensors, where k=1 to K, K≥2, and N.sub.k≥2; processing, at said each processing system k, the N.sub.k datasets received to obtain M.sub.k occupancy grids corresponding to perceptions from M.sub.k respective sensors of the offboard perception sensors, respectively, where N.sub.k≥M.sub.k≥1 and wherein the M.sub.k occupancy grids overlap at least partly; fusing, at said each processing system k, data from the M.sub.k occupancy grids obtained to form a fused occupancy grid, whereby K fused occupancy grids are formed by the K processing systems, respectively; forwarding the K fused occupancy grids to a further processing system; merging, at the further processing system, the K fused occupancy grids to obtain a global occupancy grid for the designated area; and determining, based on the global occupancy grid, a trajectory for the automated vehicle and forwarding the determined trajectory to a drive-by-wire (DbW) system of the automated vehicle.
2. The method according to claim 1, wherein the N.sub.k datasets received at said each iteration by said each processing system k are respectively associated with N.sub.k first timestamps, and said each iteration further comprises: assigning K second timestamps to the K fused occupancy grids, where each of the K second timestamps is equal to an oldest of the N.sub.k first timestamps associated with the N.sub.k datasets as processed at said each processing system k; and assigning a global timestamp to the global occupancy grid, where the global timestamp is obtained as an oldest of the K second timestamps, and said trajectory is determined in accordance with the global timestamp.
3. The method according to claim 2, wherein processing the N.sub.k datasets at said each processing system k further comprises discarding any of the N.sub.k datasets that is older than a reference time for the N.sub.k datasets by more than a predefined time period, whereby M.sub.k is at most equal to N.sub.k, and the reference time is computed as an average of the N.sub.k timestamps.
4. The method according to claim 1, wherein each sensor of the offboard perception sensors is a 3D laser scanning Lidar, and each of the N.sub.k datasets received by said each processing system k captures a point cloud model of an environment of a respective one of the N.sub.k sensors.
5. The method according to claim 4, wherein, at processing the N.sub.k datasets, each of the N.sub.k datasets is processed at said each processing system k to determine a first 2D grid, defined in a polar coordinate system, and then convert the first 2D grid into a second 2D grid, defined in a cartesian coordinate system, whereby the M.sub.k occupancy grids as eventually obtained at said each processing system k are obtained as 2D grids having rectangular cells of given dimensions, and the K fused occupancy grids and the global occupancy grid are, each, formed as a 2D grid having rectangular cells of the same given dimensions, wherein cells of the global occupancy grid coincide with cells of the K fused occupancy grids, and cells of the K fused occupancy grids themselves coincide with cells of the M.sub.k occupancy grids as eventually obtained at each of the K processing systems.
6. The method according to claim 5, wherein the first 2D grid is determined by determining states of cells thereof, in accordance with hit points captured in the corresponding one of the N.sub.k datasets.
7. The method according to claim 5, wherein data from the M.sub.k occupancy grids obtained are fused by computing, for each cell of each of the K fused occupancy grids, a value based on a state of each of the rectangular cells of each grid of the M.sub.k occupancy grids obtained, and associating the computed value with said each cell.
8. The method according to claim 7, wherein said value is computed as a count, which is incremented if a corresponding cell of any of the M.sub.k occupancy grids is in a free state, decremented if a corresponding cell of any of the M.sub.k occupancy grids is in an occupied state, and left unchanged if a corresponding cell of any of the M.sub.k occupancy grids is in an unknown state.
9. The method according to claim 8, wherein said each iteration further comprises, after merging the K fused occupancy grids, identifying cells of the global occupancy grid that are in the unknown state and refining states of such cells based on corresponding cell memory values, each reflecting a history of a corresponding cell, and updating the cell memory values, whereby each of the cell memory values is increased, respectively decreased, if the corresponding cell is determined to be in the free state, respectively the occupied state, and is modified so that its absolute value is decreased if the corresponding cell is determined to be in the unknown state.
10. The method according to claim 1, wherein said each iteration further comprises updating a state of the automated vehicle by reconciling states of the automated vehicle as obtained from, on the one hand, the global occupancy grid and, on the other hand, odometry signals obtained from the automated vehicle, whereby said trajectory is subsequently determined in accordance with the updated state of the automated vehicle.
11. The method according to claim 10, wherein the method further comprises synchronizing the K processing systems and the further processing system according to a networking protocol for clock synchronization.
12. The method according to claim 1, wherein said several algorithmic iterations are executed at an average frequency that is between 5 and 20 hertz, preferably equal to 10 hertz.
13. A computer program product for steering an automated vehicle in a designated area, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by processing means of a computerized system, which comprises a set of offboard perception sensors, K processing systems, and a further processing system, to cause the computerized system to execute several algorithmic iterations, each comprising: dispatching sensor data to the K processing systems, whereby each processing system k of the K processing systems receives N.sub.k datasets of the sensor data as obtained from N.sub.k respective sensors of the set of offboard perception sensors, where k=1 to K, K≥2, and N.sub.k≥2; processing, at said each processing system k, the N.sub.k datasets received to obtain M.sub.k occupancy grids corresponding to perceptions from M.sub.k respective sensors of the offboard perception sensors, respectively, where N.sub.k≥M.sub.k≥1 and wherein the M.sub.k occupancy grids overlap at least partly; fusing, at said each processing system k, data from the M.sub.k occupancy grids obtained to form a fused occupancy grid, whereby K fused occupancy grids are formed by the K processing systems, respectively; forwarding the K fused occupancy grids to the further processing system; merging, at the further processing system, the K fused occupancy grids to obtain a global occupancy grid for the designated area; and determining, based on the global occupancy grid, a trajectory for the automated vehicle and forwarding the determined trajectory to a drive-by-wire (DbW) system of the automated vehicle.
14. A system for steering an automated vehicle in a designated area, wherein the system comprises a set of offboard perception sensors, K processing systems, and a further processing system, and the system is configured to execute several algorithmic iterations, wherein each iteration of the several algorithmic iterations comprises: dispatching sensor data to the K processing systems, whereby each processing system k of the K processing systems receives N.sub.k datasets of the sensor data as obtained from N.sub.k respective sensors of the set of offboard perception sensors, where k=1 to K, K≥2, and N.sub.k≥2; processing, at said each processing system k, the N.sub.k datasets received to obtain M.sub.k occupancy grids corresponding to perceptions from M.sub.k respective sensors of the offboard perception sensors, respectively, where N.sub.k≥M.sub.k≥1 and wherein the M.sub.k occupancy grids overlap at least partly; fusing, at said each processing system k, data from the M.sub.k occupancy grids obtained to form a fused occupancy grid, whereby K fused occupancy grids are formed by the K processing systems, respectively; forwarding the K fused occupancy grids to the further processing system; merging, at the further processing system, the K fused occupancy grids to obtain a global occupancy grid for the designated area; and determining, based on the global occupancy grid, a trajectory for the automated vehicle and forwarding the determined trajectory to a drive-by-wire (DbW) system of the automated vehicle.
15. The system according to claim 14, wherein the system comprises two redundant sets of processing systems, where each of the redundant sets comprises K processing systems, and the system is further configured to check whether the M.sub.k occupancy grids obtained by each of the redundant sets match.
16. The system according to claim 15, wherein K≥4.
17. The system according to claim 14, wherein each sensor of the offboard perception sensors is a 3D laser scanning Lidar.
18. The method according to claim 3, wherein said predefined time period is equal to 150 ms.
19. The method according to claim 6, wherein determining the first 2D grid by determining states of cells thereof causes the cells to be marked as being in one of: a free state, an occupied state, and an unknown state.
20. The method according to claim 8, wherein said count is incremented by a unit value if a corresponding cell of any of the M.sub.k occupancy grids is in a free state, and decremented by a unit value if a corresponding cell of any of the M.sub.k occupancy grids is in an occupied state.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] These and other objects, features and advantages of the present invention will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings. The illustrations are for clarity in facilitating one skilled in the art in understanding the invention in conjunction with the detailed description. In the drawings:
[0025] The accompanying drawings show simplified representations of devices or parts thereof, as involved in embodiments. Technical features depicted in the drawings are not necessarily to scale. Similar or functionally similar elements in the figures have been allocated the same numeral references, unless otherwise indicated.
[0026] Methods, computerized systems, and computer program products embodying the present invention will now be described, by way of non-limiting examples.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
[0027] The following description is structured as follows. General embodiments and high-level variants are described in section 1. Section 2 addresses particularly preferred embodiments. Section 3 concerns technical implementation details. Note, the present method and its variants are collectively referred to as the present methods. All references Sn refer to method steps of the flowcharts of the accompanying drawings.
1. General Embodiments and High-Level Variants
[0028] Referring to
[0029] The vehicle 2 is partly automated, i.e., it includes a drive-by-wire (DbW) system 20, but typically has no sensing capability. That is, the automated vehicle 2 does not necessarily include perception sensors. In typical application scenarios, the vehicle 2 actually does not include any perception sensor at all. In other cases, the vehicle may happen to include such perception sensors. However, the latter are preferably not active, i.e., not used to calculate the trajectories referred to in the present methods. In variants, such sensors may be involved to perform further redundancy checks, in addition to the method steps described below.
[0030] Note, the terminologies autonomous and automated are sometimes used as synonyms. In general, autonomous, semi-autonomous, and partly autonomous, refer to concepts that involve some self-governance of machines that are capable of sensing their environment to safely move therein, avoiding obstacles and collisions with other objects, whether static or in motion. In this document, the terminology automated is to be understood as meaning that the automated vehicle incorporates automation to move (e.g., drive); it can automatically drive from one location to another, based on trajectories that are computed offboard and then communicated to the vehicle 2.
[0031] That is, in the context of the present document, such trajectories are primarily obtained from offboard (external) sensors 21-24, while the vehicle typically does not have (or make use of) sensing capability to sense its environment. However, the automated vehicle 2 is equipped with a DbW system 20, as seen in
[0032] The automated vehicle 2 is a ground vehicle, typically an automated car. In principle, such vehicles can be of any type, e.g., cars, vans, transit buses, motorcoaches, trucks, lorries, or any other types of ground vehicles that may benefit from automation. In typical embodiments, though, the present automated vehicles are production cars, vans, electric vehicles, or the like, which benefit from automatic driving.
[0033] The vehicle 2 can be considered to form part of the system 1, or not. The set of offboard perception sensors 110-140 preferably includes Lidars (e.g., 3D laser scanning Lidars). Such perception sensors may advantageously be complemented by other types of sensors, such as cameras, radars, sonars, GPS, and/or inertial measurement units, if only to allow heterogeneous redundancy checks, as in preferred embodiments. Such sensors are arranged in a designated area 5 (e.g., a parking lot, as assumed in
[0034] Various processing systems 11, 12, 14, 15 may form part of a control unit, which is in communication with the perception sensors 110-140 and the DbW system 20 of the vehicle 2. I.e., the control unit can send data to, and receive data from, the vehicle 2. To that extent, the control unit occupies a central position and can therefore be regarded as a central control unit (CCU), notwithstanding the several processing components 11, 12, 14 it includes.
[0035] The proposed method revolves around the repeated execution of algorithmic iterations (or algorithmic cycles), as exemplified in the flow of
[0036] In detail, the sensor data are first dispatched S20 to K processing systems 11, 12, where K≥2. In preferred embodiments, the number of such processing systems is larger than or equal to 4. That is, each processing system k of the K processing systems (i.e., k=1 to K) receives, at each iteration, N.sub.k datasets of sensor data, where N.sub.k≥2 for each k. Such data are obtained (step S5) from N.sub.k respective sensors of the set 10 of offboard perception sensors 110-140.
[0037] The N.sub.k datasets are then processed S30 at each processing system k to obtain M.sub.k occupancy grids. The M.sub.k grids reflect perceptions from M.sub.k sensors of the offboard perception sensors 110-140, respectively. Ideally, N.sub.k occupancy grids are obtained upon completing step S30, i.e., M.sub.k=N.sub.k. However, in embodiments, some of the N.sub.k datasets may be discarded, for reasons discussed later. Thus, in general, N.sub.k≥M.sub.k≥1. In all cases, the M.sub.k occupancy grids obtained are grids that overlap at least partly in space, for reasons that will become apparent later.
[0038] As illustrated in
[0039] To that aim, the K fused occupancy grids are first forwarded S50 to a further processing system 14, which differs from the systems 11, 12. The processing system 14 merges S60 the K fused occupancy grids to obtain a global occupancy grid for the designated area 5. The merge operation S60 and the fusion operations S40 are similar operations, which can even be identical, conceptually speaking. Both steps S40, S60 rely on data fusion, and aim at reconciling data obtained from distinct sources, with a view to forming a more complete, consistent, and accurate representation of the designated area 5, or portions thereof.
[0040] Once a global occupancy grid has been obtained, the method can proceed to determine (or update) S90 a trajectory for the automated vehicle 2 based on the global occupancy grid, and forward S100 this trajectory to the DbW system 20 of the automated vehicle 2. Steps S90 and S100 can be performed by additional processing systems 15, i.e., systems that are distinct from the system 11, 12, and 14, as assumed in
[0041] Note, updating a trajectory amounts to determining a new trajectory, albeit one close to the previous trajectory. Trajectories sent to the DbW system 20 are translated into commands for the electromechanical actuators of the DbW system 20, to allow actuation of the vehicle. I.e., the automated system 20 takes control of the vehicle, which will accordingly start, accelerate, brake, steer, and stop, so as to move from one location to another. Practically speaking, a trajectory can be defined as a series of commands for respective actuators (acceleration, steering, and braking) and for successive time points. That is, such commands form a time series that embodies a trajectory, which is determined in accordance with a goal set in space, or preferably set in both space and time.
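A trajectory of this kind can be sketched as a simple data structure. The following is a minimal, illustrative reconstruction (the class names, fields, and units are assumptions, not taken from the patent):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ActuatorCommand:
    # Hypothetical fields: one time point plus one setpoint per actuator.
    t: float             # time point (s) relative to trajectory start
    acceleration: float  # longitudinal command (m/s^2; negative values brake)
    steering: float      # steering angle command (rad)

# A trajectory is the ordered time series of commands sent to the DbW system.
Trajectory = List[ActuatorCommand]

def make_straight_line(duration_s: float, dt: float, accel: float) -> Trajectory:
    """Toy example: constant acceleration, no steering, sampled every dt seconds."""
    n = int(duration_s / dt)
    return [ActuatorCommand(t=i * dt, acceleration=accel, steering=0.0)
            for i in range(n)]
```

In practice the goal set in space (and optionally time) would constrain the last commands of the series; the sketch above only illustrates the time-series shape of the data.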
[0042] The control unit may comprise distinct sets of processors, where each of the sets comprises one or more processors. In particular, the processing systems 11, 12, 14, 15 can advantageously be mapped onto respective ones of the distinct sets of processors. Even, the processing systems 11, 12, 14, 15 are preferably implemented as distinct computers of the control unit. The exact mapping, however, may depend on the security levels offered by the (sets of) processors. In variants, the control unit may be embodied as a single computer, provided that its sets of processors are sufficiently safe. An example of suitable functional safety standard is defined by the ISO26262 standard for the development of electrical and electronic systems in road vehicles.
[0043] In the present context, several sensor datasets need to be repeatedly processed, at a high frequency. This translates into high throughput and compute requirements, which are difficult to meet, particularly with a secure computing system. To address this problem, the present systems and methods rely on a scalable architecture, which makes it possible to meet the above requirements, irrespective of the redundancy level desired (the processing systems 11, 12 can be redundant, for safety reasons).
[0044] Namely, according to the proposed approach, several processing devices 11, 12 are provided to handle sensor datasets from respective, distinct subsets of the perception sensors (e.g., Lidars), so as to allow tractable computations. The processing systems 11, 12 produce occupancy grids which are pre-fused (locally, at the level of the systems 11, 12), before being merged by a distinct processing system, which relies on distinct (sets of) processors. Performing the pre-fusion at the level of the processing systems 11, 12 makes it possible to save bandwidth. The trajectories can then be computed (e.g., by the system 14 or one or more other processing systems 15) according to any known, suitable scheme.
[0045] All this is now described in detail, in reference to particular embodiments of the invention. To start with, the N.sub.k datasets can be subjected to a specific timestamping scheme, as in embodiments. In practice, the N.sub.k datasets received at each iteration by each processing system k are respectively associated with N.sub.k first timestamps, corresponding to times at which the sensor measurements were performed. Now, such times may slightly differ, giving rise to time differences that may have to be adequately handled, for security reasons. To that aim, each iteration may further comprise assigning K second timestamps to the K fused occupancy grids (step S40), where each of the K second timestamps is conservatively chosen to be equal to the oldest source timestamp. That is, each of the K second timestamps is set equal to the oldest of the N.sub.k first timestamps associated with the N.sub.k datasets as processed at each processing system k.
[0046] Similarly, a global timestamp may be assigned (at step S60) to the global occupancy grid eventually obtained at each iteration, where this global timestamp is set equal to the oldest of the K second timestamps. Eventually, the trajectory is determined or updated S90 in accordance with the global timestamp as set at step S60. The above timestamp assignment scheme makes it possible to check the temporality of incoming data and its validity for subsequent processing, something that is particularly advantageous in a distributed system such as shown in
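The conservative "oldest timestamp wins" rule of paragraphs [0045]-[0046] amounts to taking a minimum at each fusion stage. A minimal sketch (function names are illustrative; timestamps are modelled as seconds since a common epoch):

```python
def fused_timestamp(first_timestamps):
    """Second timestamp of a fused grid: the oldest (smallest) of the
    N_k first timestamps of the datasets fused at processing system k."""
    return min(first_timestamps)

def global_timestamp(second_timestamps):
    """Global timestamp: the oldest of the K second timestamps."""
    return min(second_timestamps)

# Example with K = 2 processing systems, each fusing its own datasets:
seconds = [
    fused_timestamp([10.02, 10.00, 10.05]),  # system k = 1
    fused_timestamp([10.01, 10.03]),         # system k = 2
]
assert global_timestamp(seconds) == 10.00  # the oldest source measurement
```

Choosing the oldest (rather than, say, the newest or average) timestamp guarantees that downstream validity checks never overestimate the freshness of the global grid.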
[0047] In particular, such a timestamp management makes it possible to discard any dataset that is too old from the occupancy grid calculation at step S30. I.e., the processing step S30 may discard any of the N.sub.k datasets (as processed by any processing system k) that is older than a reference time for the N.sub.k datasets by more than a predefined time period. This time period can for instance be set equal to 150 ms. As a result, M.sub.k is at most equal to N.sub.k. Note, the reference time can be computed as an average of the N.sub.k timestamps, e.g., using a geometric or arithmetic average. More generally, any suitable definition of the average can be used, e.g., as derived from the generalized mean formula, preferably using an exponent that is larger than or equal to zero. In all cases, any dataset that is older than the average time for the N.sub.k datasets by more than a predefined time period is preferably discarded to ensure safer trajectory calculations.
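As a sketch of this staleness filter, using the arithmetic average as the reference time and the 150 ms period mentioned above (the data representation is an assumption):

```python
def filter_stale(datasets, max_age=0.150):
    """Drop any dataset older than the reference time by more than max_age
    seconds (150 ms by default, as in the embodiment).

    Each dataset is modelled as a (timestamp_s, payload) pair; the reference
    time is the arithmetic average of the N_k timestamps.
    """
    ref = sum(ts for ts, _ in datasets) / len(datasets)
    return [(ts, p) for ts, p in datasets if ref - ts <= max_age]

# Three datasets; the 9.70 s one lags the ~9.90 s average by >150 ms:
kept = filter_stale([(10.00, "scan-a"), (10.01, "scan-b"), (9.70, "scan-c")])
# Only two datasets survive, i.e., M_k = 2 < N_k = 3 for this iteration.
```

A generalized mean with a larger exponent would bias the reference toward the newest datasets, making the filter stricter, which is consistent with the preference for non-negative exponents stated above.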
[0048] As noted earlier, each sensor of the offboard perception sensors 110-140 is preferably a 3D laser scanning Lidar. In that case, each of the N.sub.k datasets received by each processing system k captures a point cloud model of an environment of a respective one of the N.sub.k sensors. For example, each Lidar can be configured to scan its surroundings by scanning rays at flat angles. Such angles are preferably separated by at most one degree of angle in the transverse plane (called elevation plane), i.e., a plane corresponding to a given azimuth, which is transverse to a reference plane of the area 5, e.g., corresponding to the ground level of the area 5. The ground level corresponds to the horizontal plane on which the vehicle rests or drives in the designated area 5. The ground is typically flat or essentially flat. That is, small deformations may be present, e.g., ramps, steps, and/or bumps. The reference plane of the area may typically be an average plane of the ground level or the lowest plane of the ground level.
[0049] Preferably, the flat angles span a range of at least 30 degrees of angle in each elevation plane. Each Lidar may for instance allow up to 64,000 points per full rotation. I.e., the rays are scanned 360° around the zenith direction, at a flat elevation angle from the reference plane of the Lidar, parallel to the ground level. For example, each Lidar may scan 32 rays distributed between −16° and +15° with respect to a given reference angle. Several Lidars can have different reference angles, depending on how they are arranged across the area 5. That said, the present invention does not depend on a particular Lidar technology.
[0050] The Lidar data can be leveraged to populate 2D or 3D occupancy grids. In the present context, it is normally sufficient to rely on 2D grids, which minimizes the amount of data to be handled next. A Lidar implementation lends itself well to determining 2D grids in polar coordinate systems. However, such a coordinate system makes it complicated to reconcile overlapping 2D grids.
[0051] Therefore, in embodiments, each of the N.sub.k datasets is processed S30 (at each processing system k) so as to initially determine a first 2D grid, which is defined in a polar coordinate system. This first grid is then converted into a second 2D grid, which is defined in a cartesian coordinate system. I.e., the M.sub.k occupancy grids as eventually obtained at each processing system k are obtained as 2D grids having rectangular cells of given dimensions. Similarly, the resulting K fused occupancy grids and the global occupancy grid are, each, formed as a 2D grid having rectangular cells of the same given dimensions. Moreover, such grids are arranged in such a manner that cells of the global occupancy grid coincide with cells of the K fused occupancy grids, and cells of the K fused occupancy grids themselves coincide with cells of the M.sub.k occupancy grids as eventually obtained at each of the K processing systems 11, 12.
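The polar-to-cartesian conversion can be sketched as a re-binning of polar cell centres into square cells. This is a simplified illustration only (cell size, grid extent, and the overwrite rule for colliding cells are assumptions; a production implementation would rasterize whole polar cells, not just their centres):

```python
import math

def polar_to_cartesian_grid(polar_cells, cell_size=0.2, half_extent_m=20.0):
    """Re-bin a polar 2D grid into a cartesian 2D grid of square cells.

    polar_cells: iterable of (range_m, azimuth_rad, state) triples, one per
    polar cell centre. Returns a dict mapping (ix, iy) indices to states,
    with the sensor at the centre of the cartesian grid.
    """
    n = int(2 * half_extent_m / cell_size)  # cells per side
    grid = {}
    for r, theta, state in polar_cells:
        # Polar centre -> cartesian coordinates in the sensor frame.
        x = r * math.cos(theta)
        y = r * math.sin(theta)
        # Cartesian coordinates -> integer cell indices.
        ix = int((x + half_extent_m) / cell_size)
        iy = int((y + half_extent_m) / cell_size)
        if 0 <= ix < n and 0 <= iy < n:
            grid[(ix, iy)] = state  # sketch: later writes simply overwrite
    return grid
```

Because all M.sub.k grids, the K fused grids, and the global grid share the same cell dimensions and alignment, cell-by-cell fusion and merging reduce to index-wise operations, with no resampling needed downstream.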
[0052] This is illustrated in the simple example of
[0053] In embodiments, the grids 110g-140g are determined by determining states of the cells of the grid, in accordance with hit points captured in the datasets received from the Lidars. As illustrated in the more realistic example of grid shown in
[0054] Data from the M.sub.k occupancy grids can be fused S40 by computing, for each cell of the K fused occupancy grids, a value based on a state of each of the rectangular cells of each grid of the M.sub.k occupancy grids obtained earlier S30. The computed value is then associated with the respective cell. A similar or identical mechanism can be implemented to merge the K fused grids. For example, referring to
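Combined with the counting rule of the claims (increment per free cell, decrement per occupied cell, unknown cells leaving the count unchanged), this fusion can be sketched as follows. Interpreting the sign of the count as a majority vote is my reading of the example discussed later; treat it as an assumption:

```python
FREE, OCCUPIED, UNKNOWN = "free", "occupied", "unknown"

def fuse_cell(states):
    """Fuse the co-located cells of the M_k overlapping grids into one value.

    The count is incremented by one per free cell, decremented by one per
    occupied cell, and left unchanged by unknown cells.
    """
    count = 0
    for s in states:
        if s == FREE:
            count += 1
        elif s == OCCUPIED:
            count -= 1
    return count

def fused_state(count):
    """Interpret the fused value: a positive count means free on balance,
    a negative count means occupied, zero means unknown (majority-style)."""
    if count > 0:
        return FREE
    if count < 0:
        return OCCUPIED
    return UNKNOWN
```

The same cell-wise operation can serve at step S60 to merge the K fused grids, since all grids share coinciding cells.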
[0055] Now, after merging the K fused occupancy grids, the present methods may advantageously identify S70 cells of the global occupancy grid that are in the unknown state and infer correct states of such cells based on corresponding cell memory values. A cell memory value reflects a history of the corresponding cell. Such cell memory values are updated S80 during each iteration. For example, each of the cell memory values can be increased or decreased, if the corresponding cell is determined to be in the free state or the occupied state, respectively. For completeness, each cell memory value is modified so that its absolute value is decreased if the corresponding cell is determined to be in the unknown state, and this during each iteration.
[0056] Such a scheme is particularly advantageous where occlusion occurs. A grid map is indeed susceptible to brief occlusions, e.g., when the vehicle drives through a gate. Thus, it may be beneficial to introduce a value (the cell memory) that represents the current certainty of the cell state based on the cell history.
[0057] A concrete example is now discussed in reference to
[0058]
[0059] During the first iteration (Iteration #1), the state of each of the cells (cell #1, cell #2, and cell #3) is determined to be the occupied state. A majority vote obviously concludes to an occupied state for the fused cell (fourth column). Accordingly, the cell memory value of the fused cell is decreased (fifth column), so that the updated cell memory value is equal to −1. This value is initially set equal to 0; the operation performed reads 0 − 1 → −1, such that the new cell memory value is equal to −1. No inference (sixth column) is needed here as the majority vote unambiguously concludes to an occupied state. For the same reason, no additional update (seventh column) is required, as there is no need to forget the accumulated value. During the second iteration, the cell states remain the same for cell #2 and cell #3, while the first cell is now determined to be unknown. A majority vote again concludes to an occupied state for the fused cell. The cell memory value is thus decreased again (−1 − 1 → −2). The same observations are made during the third iteration (the cell states remain unchanged), such that the vote concludes to an occupied state. The cell memory value is accordingly decreased (−2 − 1 → −3). However, during each of the next four iterations (Iteration #4 to #7), all cells now happen to be in the unknown state, something that may typically result from a temporary occlusion or a signal alteration. In such cases, the actual state of the fused cell can be inferred to be occupied, based on the history of the cell, but only as long as the forgetting mechanism permits.
[0060] Practically speaking, use is made of the last known cell memory value (i.e., −3) during the 4th iteration. This value indicates that the state is probably still occupied (−3 ⇒ occupied). As no new information is available (the state determined last is the unknown state), the cell memory value is decreased in absolute value, toward zero (i.e., −3 + 1 → −2), as a result of the forgetting mechanism. The same repeats over the next two iterations (−2 + 1 → −1, −1 + 1 → 0), until the cell memory value reaches the value zero. From this point on, it can no longer be assumed that the fused cell is occupied, and its state now switches to unknown during the 7th iteration. If the next vote (8th iteration) concludes to a free state, however, then the count can be increased again (0 + 1 → 1), and so on. Note, various practical implementations can be contemplated for the above mechanism. In particular, the distinction between the 5th and 7th columns is made for the sake of clarity. In practice, however, the cell memory would likely be updated in a single step.
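The walkthrough above can be replayed with a few lines of code. This is an illustrative, single-step reconstruction of the memory-and-forgetting mechanism (as noted, actual implementations may differ):

```python
FREE, OCCUPIED, UNKNOWN = "free", "occupied", "unknown"

def update_cell(memory, voted_state):
    """One iteration of the cell-memory scheme for a single fused cell.

    memory is increased per free vote, decreased per occupied vote. On an
    unknown vote, the state is inferred from the sign of the memory, which
    is then moved one step toward zero (the forgetting mechanism).
    Returns (reported_state, new_memory).
    """
    if voted_state == FREE:
        return FREE, memory + 1
    if voted_state == OCCUPIED:
        return OCCUPIED, memory - 1
    # Unknown vote: infer from history, then forget one step.
    inferred = OCCUPIED if memory < 0 else FREE if memory > 0 else UNKNOWN
    if memory > 0:
        memory -= 1
    elif memory < 0:
        memory += 1
    return inferred, memory

# Iterations #1-#3: occupied votes; iterations #4-#7: occlusion (unknown votes).
mem, states = 0, []
for vote in [OCCUPIED, OCCUPIED, OCCUPIED, UNKNOWN, UNKNOWN, UNKNOWN, UNKNOWN]:
    state, mem = update_cell(mem, vote)
    states.append(state)
# The accumulated memory (0 -> -3) bridges three unknown iterations before
# the cell finally reports unknown during the 7th iteration.
```

Running the loop reproduces the table: the cell reports occupied for six iterations and only switches to unknown once the memory has been forgotten back to zero.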
[0061] Given the simplicity of the operations required (mere arithmetic operations), the above correction mechanism may be implemented for each grid, i.e., upon completing step S30, upon completing step S40, and after merging S60 the fused grids.
[0062] Referring back to
[0063] The trajectories are preferably computed by dedicated processing systems 15, which are preferably distinct from the K processing systems 11, 12 and the further processing system 14. The systems 15 may for instance implement a main perception system and an auxiliary perception system, as assumed in
[0064] This way, the vehicle can be remotely steered from the control unit, through the DbW system 20, based on the validated trajectories forwarded by the control unit to the DbW system. All the sensors of the set are used to form the main perception. However, instead of re-using all of the perception sensors to form a full redundancy, only a subset of the sensors are used to form the auxiliary perception that is then used to validate the trajectories. In other words, distinct perceptions are formed from overlapping sets of sensors, whereby one of the perceptions formed is used to validate trajectories obtained from the other. This approach requires less computational effort, inasmuch as fewer signals (and therefore less information) are required to form the auxiliary perception. Still, this approach is more likely to allow inconsistencies to be detected, thanks to the heterogeneity of sensor signals used to obtain the main and auxiliary perceptions.
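As a hypothetical sketch of this validation step (the grid representation, the function name, and the cell-wise "free" check are illustrative assumptions, not details from the text):

```python
def validate_trajectory(trajectory, auxiliary_grid):
    """Accept a candidate trajectory only if every grid cell it crosses
    is perceived as free by the auxiliary perception."""
    return all(auxiliary_grid[r][c] == "free" for r, c in trajectory)

# 3x3 auxiliary grid with one occupied cell in the middle:
aux = [
    ["free", "free",     "free"],
    ["free", "occupied", "free"],
    ["free", "free",     "free"],
]
ok = validate_trajectory([(0, 0), (1, 0), (2, 0)], aux)        # True
blocked = validate_trajectory([(0, 0), (1, 1), (2, 2)], aux)   # False
```

A real validator would also consider timing and kinematic feasibility; the point here is only that the auxiliary perception, built from a subset of the sensors, suffices to veto a trajectory obtained from the main perception.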
[0065] Referring back to
[0066] As noted earlier, the system preferably comprises redundant sets (e.g., two sets) of processing systems 11, 12, where each set comprises K processing systems (e.g., K=4). In that case, the system 1 is further configured to check whether occupancy grids obtained by each of the redundant sets match. Downstream computations carry on as long as the occupancy grids match, else an auxiliary procedure (e.g., an emergency stop) is triggered.
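One plausible, purely illustrative way to implement the match check between redundant grid sets and trigger the auxiliary procedure (the mismatch budget and function names are assumptions):

```python
def grids_match(grid_a, grid_b, max_mismatches=0):
    """Compare two occupancy grids cell by cell; the mismatch budget is
    an assumed tuning parameter, not taken from the text."""
    mismatches = sum(
        a != b
        for row_a, row_b in zip(grid_a, grid_b)
        for a, b in zip(row_a, row_b)
    )
    return mismatches <= max_mismatches

def check_redundancy(grid_a, grid_b):
    """Carry on with downstream computations while the redundant grids
    agree; otherwise trigger an auxiliary procedure (e.g., an emergency
    stop)."""
    return "continue" if grids_match(grid_a, grid_b) else "emergency_stop"

g1 = [["free", "occupied"], ["unknown", "free"]]
g2 = [["free", "occupied"], ["unknown", "free"]]
g3 = [["free", "free"],     ["unknown", "free"]]
```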
[0067] Another, but closely related, aspect of the invention concerns a computer program product. As indicated earlier, this product comprises a computer readable storage medium, which has program instructions embodied therewith. The program instructions can be executed by processing means of a computerized system 1 as described above, to cause the computerized system to execute several algorithmic iterations as described in reference to the present methods.
[0068] The above embodiments have been succinctly described in reference to the accompanying drawings and may accommodate a number of variants. Several combinations of the above features may be contemplated. Examples are given in the next section.
2. Particularly Preferred Embodiments
2.1 Preferred Architecture
[0069] As illustrated in
[0070] All components of the main system 1 must be suitably synchronized. To that aim, the vehicle 2 may communicate with a backend unit 16, which coordinates all subsystems and components. In particular, the K processing systems 11, 12 and the further processing system 14 can be synchronized according to a networking protocol for clock synchronization.
2.2 Preferred flow
[0071]
[0072] The processing system 14 merges S60 the K fused occupancy grids to form a global occupancy grid, which is then suitably timestamped at the system 14. Next, the further processing system 14 identifies S70 cells in the unknown state (typically occluded cells) and attempts to refine such states based on cell histories, using cell memory values, as explained in section 1. I.e., cell memory values are updated S80 in parallel, based on the cell states determined last, using a forgetting mechanism.
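A minimal sketch of the per-cell merge of the K fused grids follows; treating "unknown" votes as abstentions and resolving ties conservatively are illustrative choices, not requirements from the text.

```python
from collections import Counter

def merge_cell(states):
    """Majority vote across the K fused grids for one cell. Unknown
    votes abstain, so any known information wins; a tie between
    occupied and free conservatively yields unknown."""
    votes = Counter(s for s in states if s != "unknown")
    if not votes:
        return "unknown"
    ranked = votes.most_common()
    if len(ranked) > 1 and ranked[0][1] == ranked[1][1]:
        return "unknown"
    return ranked[0][0]

def merge_grids(grids):
    """Merge K overlapping fused grids of identical shape into one
    global grid, cell by cell."""
    rows, cols = len(grids[0]), len(grids[0][0])
    return [
        [merge_cell([g[r][c] for g in grids]) for c in range(cols)]
        for r in range(rows)
    ]
```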
[0073] The vehicle trajectory is determined (or updated) at step S90, e.g., using one or more downstream processing systems. The determined trajectory is then forwarded S100 to the DbW system 20 of the vehicle 2, to accordingly steer the latter. The process loops back to step S10, thereby starting a new iteration based on new sensor output data. Such algorithmic iterations are executed at an average frequency that is between 5 and 20 hertz, typically equal to 10 hertz. This requires efficient computations and data transmissions, hence the benefits of the approach of
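The iteration pacing could be sketched as a deadline-based loop; the helper names and the injection of the sleep and clock functions are illustrative choices made for testability, not details from the text.

```python
import time

TARGET_HZ = 10.0            # typical iteration frequency from the text
PERIOD_S = 1.0 / TARGET_HZ

def run_iterations(n_iterations, iteration_fn,
                   sleep=time.sleep, clock=time.monotonic):
    """Execute algorithmic iterations at roughly TARGET_HZ by sleeping
    until the next deadline; if an iteration overruns, the next one
    simply starts immediately."""
    next_deadline = clock()
    for _ in range(n_iterations):
        iteration_fn()      # one full pass through steps S10..S100
        next_deadline += PERIOD_S
        remaining = next_deadline - clock()
        if remaining > 0:
            sleep(remaining)
```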
[0074]
2.3. Temporality and Timestamp Management
[0075] A preferred implementation is one in which all sensor measurements are provided with timestamps, which correspond to the sensor measurement times. If several measurements from different sources (e.g., Lidars) are used, the oldest measurement time of all considered inputs is set as a measurement time in the outgoing data. This procedure makes it possible to keep track of the oldest time associated with the information considered at any point in the chain. This way, it is possible to determine the maximum time at which the system must be transferred to a safe state throughout the entire processing chain.
[0076] For example, the measurement time of any point cloud can be sent to each voting system. At the pre-fusion stage, the oldest measurement time is set as an effective measurement time, as described earlier. Since there is no clear definition of a reference time for a grid map, the average of all input measurement times is used to set the reference time for the grid map. Grid maps with a measurement age above 150 ms are discarded in order to avoid fusing obsolete information, something that would invalidate the entire grid map. In the global grid map generator, the same logic is applied as in the pre-fusion.
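The timestamp rules of this subsection can be sketched as follows (times in integer milliseconds; the function names are hypothetical):

```python
MAX_AGE_MS = 150  # staleness threshold from the text

def pre_fusion_time(input_times_ms):
    """Propagate the oldest measurement time of all considered inputs."""
    return min(input_times_ms)

def grid_reference_time(input_times_ms):
    """Grid maps lack a natural reference time; use the average of all
    input measurement times."""
    return sum(input_times_ms) / len(input_times_ms)

def is_fresh(measurement_time_ms, now_ms, max_age_ms=MAX_AGE_MS):
    """Keep a grid map only if its measurement age is within bounds."""
    return (now_ms - measurement_time_ms) <= max_age_ms

now = 10_000
oldest = pre_fusion_time([9_950, 9_970, 9_960])     # 9_950
reference = grid_reference_time([9_950, 9_970])     # 9_960.0
kept = is_fresh(9_900, now)                         # 100 ms old: fused
dropped = is_fresh(9_800, now)                      # 200 ms old: discarded
```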
3. Technical Implementation Details
[0077] Computerized devices can be suitably designed for implementing embodiments of the present invention as described herein. In that respect, it can be appreciated that the methods described herein are non-interactive, i.e., automated, although human input may be required in certain cases, e.g., should an anomaly or emergency be detected. Automated parts of such methods can be implemented in software, hardware, or a combination thereof. In exemplary embodiments, automated parts of the methods described herein are implemented in software, as a service or an executable program (e.g., an application), the latter executed by suitable digital processing devices.
[0078] In the present context, each processing system 11, 12, 14, 15 is preferably mapped onto one or more respective sets of processors or, even, one or more respective computers. In particular, the system 15 may typically involve several processors or computers.
[0079] A suitable computer will typically include at least one processor and a memory (possibly several memory units) coupled to one or more memory controllers. Each processor is a hardware device for executing software. The processor, which may in fact comprise one or more processing units (e.g., processor cores), can be any custom made or commercially available processor, likely subject to some certification.
[0080] The memory typically includes a combination of volatile memory elements (e.g., random access memory) and nonvolatile memory elements, e.g., a solid-state device. The software in memory may include one or more separate programs, each of which comprises an ordered listing of executable instructions for implementing logical functions. The software in the memory captures methods described herein in accordance with exemplary embodiments, as well as a suitable operating system (OS). The OS essentially controls the execution of other computer (application) programs and provides scheduling, input-output control, file and data management, memory management, and communication control and related services. It may further control the distribution of tasks to be performed by the processors.
[0081] The methods described herein shall typically be in the form of an executable program, a script, or, more generally, any form of executable instructions.
[0082] In exemplary embodiments, each computer further includes a network interface or a transceiver for coupling to a network (not shown). In addition, each computer will typically include one or more input and/or output devices (or peripherals) that are communicatively coupled via a local input/output controller. A system bus interfaces all components. Further, the local interface may include address, control, and/or data connections to enable appropriate communications among the aforementioned components. The I/O controller may have additional elements, which are omitted for simplicity, such as controllers, buffers (caches), drivers, repeaters, and receivers, to allow data communication.
[0083] When a computer is in operation, one or more processing units execute software stored within the memory of the computer, to communicate data to and from the memory and/or the storage unit (e.g., a hard drive and/or a solid-state memory), and to generally control operations pursuant to software instructions. The methods described herein and the OS, in whole or in part, are read by the processing elements, typically buffered therein, and then executed. When the methods described herein are implemented in software, the methods can be stored on any computer readable medium for use by or in connection with any computer related system or method.
[0084] Computer readable program instructions described herein can be downloaded to processing elements from a computer readable storage medium, via a network, for example, the Internet and/or a wireless network. A network adapter card or network interface may receive computer readable program instructions from the network and forward such instructions for storage in a computer readable storage medium interfaced with the processing means. All computers and processors involved can be synchronized using any suitable protocol (e.g., NTP) or thanks to timeout messages.
[0085] Aspects of the present invention are described herein notably with reference to a flowchart and a block diagram. It will be understood that each block, or combinations of blocks, of the flowchart and the block diagram can be implemented by computer readable program instructions.
[0086] These computer readable program instructions may be provided to one or more processing elements as described above, to produce a machine, such that the instructions, which execute via the one or more processing elements create means for implementing the functions or acts specified in the block or blocks of the flowchart and the block diagram. These computer readable program instructions may also be stored in a computer readable storage medium.
[0087] The flowchart and the block diagram in the accompanying drawings illustrate the architecture, functionality, and operation of possible implementations of the computerized systems, methods of operating them, and computer program products according to various embodiments of the present invention. Note that each computer-implemented block in the flowchart or the block diagram may represent a module, or a portion of instructions, which comprises executable instructions for implementing the functions or acts specified therein. In variants, the functions or acts mentioned in the blocks may occur out of the order specified in the figures. For example, two blocks shown in succession may actually be executed in parallel, concurrently, or even in reverse order, depending on the functions involved and the algorithm optimization retained. It is also noted that each block and combinations thereof can be adequately distributed among special purpose hardware components.
[0088] While the present invention has been described with reference to a limited number of embodiments, variants, and the accompanying drawings, it will be understood by those skilled in the art that various changes may be made, and equivalents may be substituted without departing from the scope of the present invention. In particular, a feature (device-like or method-like) recited in a given embodiment, variant, or shown in a drawing may be combined with or replace another feature in another embodiment, variant, or drawing, without departing from the scope of the present invention. Various combinations of the features described in respect of any of the above embodiments or variants may accordingly be contemplated, that remain within the scope of the appended claims. In addition, many minor modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. Therefore, it is intended that the present invention is not limited to the particular embodiments disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims. In addition, many other variants than those explicitly touched upon above can be contemplated. For example, several architecture variants may be contemplated for the processing system 15, which involves one or more distinct computers.