System and method for identifying parking spaces and parking occupancy based on satellite and/or aerial images

11170236 · 2021-11-09

Assignee

Inventors

CPC classification

International classification

Abstract

Methods and a system for identifying and evaluating on-street parking spots of an area are disclosed. The method comprises using at least one of satellite and aerial images and map data to identify vehicles, match them to street sections, compute on-street vehicle lanes, consolidate data from a plurality of images, and identify parking lanes as well as parking spots. The described system comprises a memory component, a processing component and an output component. Furthermore, a computer program executable by a computer and a non-transient computer-readable medium for identifying and evaluating on-street parking spots are described.

Claims

1. A method for identifying and evaluating on-street parking spots of an area based on at least one of satellite and aerial images of said area, the method comprising retrieving and processing a plurality of at least one of satellite and aerial images by at least quality benchmarking and georeferencing the images; and for each image, detecting objects of interest comprising at least detected vehicles, computing street sections based on map data corresponding to the images, and assigning the detected objects to at least street sections; and for each street section, identifying on-street vehicle lanes based on the detected objects of interest; and combining street sections based on the on-street vehicle lanes from the plurality of at least one of satellite and aerial images; and in the combined sections, identifying parking lanes and deriving individual parking spots comprised thereon, and computing availability of parking spaces in a given neighborhood at a given time.

2. The method according to claim 1 further comprising consolidating and interpreting data related to the identified individual parking spots.

3. The method according to claim 1 further comprising computing parking space occupancy of the identified parking lanes.

4. The method according to claim 1 further comprising using a plurality of at least one of satellite and aerial images of an area taken over a certain period of time to identify at least time-dependent parking space availability.

5. The method according to claim 1 further comprising identifying parking rules based on the interpreted identified individual parking spots.

6. A method for identifying on-street parking spots of an area based on at least one of satellite and aerial images of said area, the method comprising retrieving and processing a plurality of at least one of satellite and aerial images by at least quality benchmarking and georeferencing the images, and for each image, detecting objects of interest comprising at least detected vehicles, computing street sections based on map data corresponding to the images, and assigning the detected objects to at least street sections, and for each street section, identifying on-street vehicle lanes based on the detected objects of interest, and combining street sections based on the on-street vehicle lanes from the plurality of at least one of satellite and aerial images, and in the combined sections, identifying parking lanes and deriving individual parking spots comprised thereon, wherein identifying on-street vehicle lanes further comprises computing a closest street section for each detected vehicle and recursively identifying on-street vehicle lanes based on a plurality of vehicles present in each street section.

7. The method according to claim 6 wherein the recursively identified on-street vehicle lanes are quality controlled by at least comparing their slope with that of the respective street section.

8. The method according to claim 7 wherein the recursively identified on-street vehicle lanes not compliant with the quality control are further compared to similar lanes compliant with the quality control and adjusted to comply as well by recursively adapting their slope.

9. A method for identifying on-street parking spots of an area based on at least one of satellite and aerial images of said area, the method comprising retrieving and processing a plurality of at least one of satellite and aerial images by at least quality benchmarking and georeferencing the images, and for each image, detecting objects of interest comprising at least detected vehicles, computing street sections based on map data corresponding to the images, and assigning the detected objects to at least street sections, and for each street section, identifying on-street vehicle lanes based on the detected objects of interest, and combining street sections based on the on-street vehicle lanes from the plurality of at least one of satellite and aerial images, and in the combined sections, identifying parking lanes and deriving individual parking spots comprised thereon, wherein combining street sections comprises inputting a plurality of intersecting images, removing vehicles likely located off-street, and consolidating on-street vehicle lanes between the street sections and wherein identifying parking lanes further comprises computing the distance between the vehicles located in each on-street vehicle lane and wherein the method further comprises assigning an identification parameter to each on-street vehicle lane.

10. A method for identifying on-street parking spots of an area based on at least one of satellite and aerial images of said area, the method comprising retrieving and processing a plurality of at least one of satellite and aerial images by at least quality benchmarking and georeferencing the images, and for each image, detecting objects of interest comprising at least detected vehicles, computing street sections based on map data corresponding to the images, and assigning the detected objects to at least street sections, and for each street section, identifying on-street vehicle lanes based on the detected objects of interest, and combining street sections based on the on-street vehicle lanes from the plurality of at least one of satellite and aerial images, and in the combined sections, identifying parking lanes and deriving individual parking spots comprised thereon, wherein identifying individual parking spots comprises computing a mean and minimal distance between neighboring vehicles and determining orientation of parked vehicles with respect to their respective parking lane.

11. A method for identifying on-street parking spots of an area based on at least one of satellite and aerial images of said area, the method comprising retrieving and processing a plurality of at least one of satellite and aerial images by at least quality benchmarking and georeferencing the images; and detecting vehicles and street sections in the georeferenced images; and identifying on-street vehicle lanes based on the detected street sections and vehicles; and identifying parking lanes and deriving individual parking spots on them; and consolidating data from the plurality of at least one of satellite and aerial images of the area to compute an average parking occupancy in said area, wherein identifying on-street vehicle lanes comprises recursively assigning vehicles on a given street section to possible on-street vehicle lanes until an optimal solution yielding at least one lane is obtained.

12. A method for identifying on-street parking spots of an area based on at least one of satellite and aerial images of said area, the method comprising retrieving and processing a plurality of at least one of satellite and aerial images by at least quality benchmarking and georeferencing the images, and detecting vehicles and street sections in the georeferenced images, and identifying on-street vehicle lanes based on the detected street sections and vehicles, and identifying parking lanes and deriving individual parking spots on them; and consolidating data from the plurality of at least one of satellite and aerial images of the area to compute an average parking occupancy in said area, wherein the parking spot derivation comprises at least computing distance between nearest neighbor vehicles on each parking lane, determining types of vehicles and parking orientation and obtaining an average parking spot with a corresponding size based on the above.

13. A computer-implemented system for identifying on-street parking spots based on at least one of satellite and aerial images, the system comprising a storage component configured to store a plurality of at least one of satellite and aerial images and map data; and a processing component configured for retrieving and processing a plurality of at least one of satellite and aerial images from the storage component by at least quality controlling and georeferencing the images; and for each image, detecting objects of interest comprising at least detected vehicles, computing street sections based on map data corresponding to the images, and assigning each detected object to a street section for each street section, identifying on-street vehicle lanes based on the detected objects of interest; and combining street sections based on the on-street vehicle lanes from the plurality of at least one of satellite and aerial images; and in the combined sections, identifying parking lanes and individual parking spots comprised thereon; an output component configured to output the individual parking spots determined by the processing component, wherein the processing component is further configured for computing availability of parking spaces in a given neighborhood at a given time.

14. The method according to claim 11 further comprising deriving rules related to parking from the consolidated data.

15. The method according to claim 11 further comprising consolidating the detected street sections with the respective identified lanes and identifying the parking lanes as the outermost lanes in the resulting consolidated street sections.

16. The system according to claim 13 wherein the processing component is further configured for identifying parking rules based on the identified individual parking spots.

17. The system according to claim 13 wherein the output component comprises at least one of an application for a user's personal computing device that assists a user with parking spot finding; and an interface, for third parties to obtain access to known parking spots in a given area.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1a depicts an embodiment of a method for identifying parking spaces based on satellite images according to one aspect of the invention;

(2) FIG. 1b depicts another method according to one aspect of the invention;

(3) FIG. 2 schematically depicts an embodiment of a system configured to identify parking spaces according to one aspect of the invention;

(4) FIG. 3 depicts an overview of step S1 as depicted in FIG. 1a in more detail;

(5) FIGS. 4a and 4b depict embodiments of step S2 as depicted in FIG. 1a, detailing image segmentation and object detection in satellite images;

(6) FIGS. 4c and 4d similarly depict embodiments of step S2, exemplifying computing street sections based on map data;

(7) FIGS. 5a to 5f depict embodiments of step S3 as depicted in FIG. 1a, detailing the identification of on-street traffic lanes based on the detected objects;

(8) FIGS. 6a and 6b depict embodiments of step S4 as depicted in FIG. 1a, detailing combining of the on-street traffic lanes from different satellite images; and

(9) FIGS. 7a to 7c depict embodiments of step S5 as depicted in FIG. 1a, detailing identifying parking lanes and deriving individual parking spots.

DESCRIPTION OF EMBODIMENTS

(10) FIG. 1a depicts an embodiment of a method according to one aspect of the invention. The method comprises using satellite images to identify on-street parking spaces. The present method is particularly useful for mapping parking areas in a city, providing an overview of a parking situation to interested parties, or generating forecasts regarding parking occupancy.

(11) In step S1, satellite images are retrieved and processed. This can comprise retrieving images from a plurality of storage systems such as databases, harmonizing them, quality controlling them, geotagging them, and further preparing them to be used as part of the method.

(12) In step S2, objects of interest are detected in the images. Those typically comprise vehicles, but can also comprise landmarks, buildings, or other objects allowing for further data extraction and use. Furthermore, segmentation analysis is performed on each satellite image.

(13) That is, a plurality of image surfaces is identified, a plurality of street sections is identified in map data, and each identified object is assigned to a street section. This is further detailed below.

(14) In step S3, on-street traffic lanes are identified among the street sections. This can be performed by analyzing the detected vehicles and fitting them to a plurality of lines that represent the lanes.

(15) In step S4, on-street traffic lanes from different street sections are combined. This can be done both per image (provided multiple street sections belonging to the same street are present in the image, or there are multiple sections with similar properties, e.g., heading of the section) and for a plurality of images covering a certain area. Note that for this step, a reference map such as Open Street Map can be used to assist with the combining.

(16) In step S5, parking lanes are identified among the on-street traffic lanes. Furthermore, individual parking spots are derived.

(17) Step S6 comprises consolidating and interpreting data related to parking spots. For example, images depicting the same area at different times can be analyzed. The obtained data can be processed to obtain an average or time-based parking occupancy in a given area. Further, different areas can be combined to obtain an on-street parking map for a neighborhood, a town, a city, a country and/or the world.

(18) Given the number of parking spots found in step or submodule S5, the following information can be extracted:

(19) 1. Additional features about the street from images (number of lanes, information on which lanes are used for driving and which are not, type of on-street parking—parallel, orthogonal).

(20) 2. Map of all places within an area where people tend to park a car.

(21) 3. Usual rules that govern parking behaviour (which might differ from legal rules in some areas).

(22) 4. Number of usual parking spots on a given street.

(23) 5. Typical parking occupancy of a street.

(24) 6. Areas where people do not park their cars.

(25) Note that although the present method is geared towards satellite images, aerial images can also be used in the same manner to obtain on-street parking spots. Furthermore, a combination of aerial and satellite images is possible as well. For example, images obtained by drones can be used with the present method.

(26) FIG. 1b depicts an alternative method for on-street parking spot identification based on satellite images.

(27) As before, satellite images are retrieved and processed in step S1′. Following this, objects of interest (preferably at least vehicles) are detected in S2′. In S3′, images are matched with a reference map. That is, satellite image streets are matched with a known map of the area, such as, for example, Open Street Map. Then, traffic lines in images are identified and verified with the reference map in S4′. In S5′, the images are combined into image blocks which correspond to a certain area or patch of a map. In the image blocks, parking lines are identified and additional error correction is performed as part of step S6′. In step S7′, the number of parked cars and free spaces per image block is computed. Note that lines are used here interchangeably with lanes.

(28) FIG. 2 depicts a schematic embodiment of a system for identifying on-street parking spaces based on satellite images. The system can comprise at least a storage component 10, a processing component 20, and an output component 30. The system can be implemented on a server, or on a local computing device such as a personal computer. The system can be implemented as a standalone software-based tool for identifying on-street parking. Additionally or alternatively, the system can be implemented as part of an app which assists users with locating an on-street parking spot. The system can also be implemented as part of driver assistance systems and/or navigational hardware and software.

(29) The storage component 10 can comprise local or online databases comprising satellite images. The images can originate from a plurality of different sources (such as different satellite systems). Therefore, the storage component 10 can have a plurality of sub-components, each corresponding to a separate database or the like. Note that the storage component 10 can comprise a database located on an online server and/or a collection of servers such as a cloud. Additionally or alternatively, the storage component 10 can comprise a physical storage device such as a hard drive.

(30) The processing component 20 can comprise a processor in a computer and/or a server. The processing component 20 can be programmed to execute all of the steps of the algorithm for identifying on-street parking spots. Note that the processing component 20 can also comprise a local and/or a cloud-based processor.

(31) The output component 30 can comprise a user interface that is configured to display the results of the algorithm identifying parking spots and/or further information such as parking occupancy. The output component 30 can comprise a browser, an app, or a front end of a program designed to run on a computing device such as a smartphone, a laptop, a driver assistance system, a tablet, a GPS unit or the like. Additionally or alternatively, the output component 30 can also comprise a back end serving another application or program, that is, an API.

(32) FIG. 3 depicts a more detailed embodiment of step S1 shown in FIGS. 1a and 1b. Satellite images 12 are input into a submodule image retrieval S11. First, this submodule connects to all available databases that store satellite images 12, and requests all images that cover a predefined area. The databases can be locally stored, or they might be located at external partners, e.g., satellite providers, and connected via a defined API. The submodule also requests a number of features for each image, such as:

(33) 1. Exact date and time when the image was captured.

(34) 2. Average cloud coverage of the image.

(35) 3. Satellite provider, name, and GSD (ground sampling distance).

(36) 4. Satellite offNADIR angle of the image.

(37) 5. Sun elevation.

(38) 6. Sun azimuth.

(39) In case some of the above-mentioned features are not available for an image, the module attempts to fill this missing value from another database.

(40) From there, a submodule image quality check S12 takes over. In this submodule, a quality control is performed: satellite images, or parts of images, that do not meet the requirements for further analysis are excluded. This can also be referred to as quality benchmarking. Additional features of the images might be computed in this step, and additional information about the area of interest might be used to compute these features; e.g., the average building height in the area, together with the sun azimuth and sun elevation, might be used to estimate the probability that further analysis will be affected by shadows. Examples of potential image requirements comprise the following (a sketch of such a filter follows the list):

(41) 1. Cloud coverage below a threshold.

(42) 2. Satellite offNADIR angle in a defined range.

(43) 3. Sun azimuth and elevation in a predefined range.

(44) 4. Probability of a large portion of the area being covered by shadow below a threshold.
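By way of illustration, the quality control described above can be expressed as a simple filter over the requested image metadata. The following sketch is not part of the disclosure; the field names and threshold values are assumptions chosen only for readability.

```python
# Illustrative sketch of the quality benchmarking in submodule S12.
# Field names and thresholds are assumptions, not part of the disclosure.
from dataclasses import dataclass


@dataclass
class ImageMetadata:
    cloud_coverage: float      # fraction of the image covered by clouds (0..1)
    off_nadir_deg: float       # satellite offNADIR angle in degrees
    sun_azimuth_deg: float     # sun azimuth in degrees
    sun_elevation_deg: float   # sun elevation in degrees
    shadow_probability: float  # probability that a large portion is shadowed


def passes_quality_check(meta: ImageMetadata,
                         max_cloud: float = 0.2,
                         off_nadir_range: tuple = (0.0, 30.0),
                         sun_azimuth_range: tuple = (0.0, 360.0),
                         sun_elevation_range: tuple = (20.0, 70.0),
                         max_shadow_prob: float = 0.3) -> bool:
    """Return True if the image meets all example requirements listed above."""
    return (meta.cloud_coverage <= max_cloud
            and off_nadir_range[0] <= meta.off_nadir_deg <= off_nadir_range[1]
            and sun_azimuth_range[0] <= meta.sun_azimuth_deg <= sun_azimuth_range[1]
            and sun_elevation_range[0] <= meta.sun_elevation_deg <= sun_elevation_range[1]
            and meta.shadow_probability <= max_shadow_prob)
```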

(45) Following quality benchmarking, image georeferencing S13 is performed. Images that successfully pass the QC (quality control) phase are then georeferenced. Existing georeferencing algorithms can be used for this. These algorithms can be based on automatic or automated tie point search between satellite images and on estimating a model relating ground coordinates to image coordinates.

(46) FIGS. 4a to 4d depict embodiments of step S2 as shown in FIG. 1a. Note that FIGS. 4a and 4b show image processing, while FIGS. 4c and 4d show map data processing. In step S2, objects are identified and the satellite images are segmented into surfaces. Furthermore, map data is used to extract a plurality of street sections. Note that all of these outputs are independent and can be computed in parallel.

(47) FIG. 4a demonstrates an embodiment of the detected objects 100 in an exemplary satellite image 12. Here, the detected objects of interest 100 comprise vehicles. However, they can also comprise other objects such as landmarks, buildings or similar structures. For the purpose of the present disclosure, the objects of interest 100 generally refer to vehicles 100. Existing object detection algorithms can be used to detect vehicles 100 and other objects of relevance, as well as their features. Examples of additional features include the following (a possible data structure is sketched after the list):

(48) 1. Coordinates of bounding box and centroid of the vehicle.

(49) 2. Class of the vehicle (e.g., personal, commercial).

(50) 3. Orientation of the vehicle.

(51) 4. Size of the area covered by the vehicle.
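For illustration only, each detected object and the features listed above could be held in a structure such as the following; the field names and types are assumptions and are not mandated by the disclosure.

```python
# Hypothetical container for a detected object of interest 100 and its features.
from dataclasses import dataclass


@dataclass
class DetectedVehicle:
    bbox: tuple             # georeferenced bounding box (min_x, min_y, max_x, max_y)
    centroid: tuple         # centroid coordinates (x, y)
    vehicle_class: str      # e.g. "personal" or "commercial"
    orientation_deg: float  # orientation of the vehicle in degrees
    area_m2: float          # size of the area covered by the vehicle
```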

(52) FIG. 4b depicts an exemplary image 12 divided into a plurality of surfaces. Different colours indicate different types of terrain that are automatically recognized. Existing segmentation algorithms can be used to classify the types of surfaces captured in an image. These algorithms identify areas of an image with the same surface and classify them into predefined categories. These categories can include: urban, non-urban, clouds, water, roads, forest, building, etc.

(53) FIGS. 4c and 4d depict map data 16 of an exemplary street with a plurality of sections 110. In FIG. 4c, map data 16 of a typical street is shown. The street is represented by a linestring (blue solid line). The linestring is defined as an ordered sequence of points (red dots). For further processing, each linestring has to be separated such that each new linestring contains exactly two points. In FIG. 4d, the result of the separation algorithm applied to FIG. 4c is shown, with different sections indicated by different colour shades.
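A minimal sketch of this separation, assuming the linestrings are handled with the shapely library; the function name and the example coordinates are illustrative.

```python
# Split a street linestring into two-point sections, as described for FIGS. 4c/4d.
from shapely.geometry import LineString


def split_into_sections(street: LineString) -> list:
    """Return consecutive two-point segments of an ordered street linestring."""
    points = list(street.coords)
    return [LineString([points[i], points[i + 1]]) for i in range(len(points) - 1)]


# Example: a street with three vertices yields two street sections.
sections = split_into_sections(LineString([(0.0, 0.0), (50.0, 5.0), (120.0, 20.0)]))
```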

(54) Note that map data 16 can correspond to publicly available map data such as that provided by Open Street Map, and/or comprise proprietary map data.

(55) FIGS. 5a to 5c depict more detailed embodiments of step S3 as presented in FIG. 1a. The objective of this subroutine is to identify on-street vehicle lanes in street sections.

(56) FIG. 5a depicts a schematic flow of the present subroutine. Inputs comprise the identified objects 100 and map data comprising street sections 110. The inputs are fed into an object grouping subroutine S31. First, only street sections 110 where vehicles 100 are allowed to enter are extracted from the map data. This can be done based on the underlying map such as Open Street Map, or by other methods.

(57) Second, the recognized objects 100 and these street sections 110 are merged together. That is, for each object 100, the closest street section 110 is assigned to it. Special care must be taken when an object 100 is similarly close to two or more street sections 110, as libraries computing spatial distances have limited precision. In such cases, additional features of the objects, e.g., vehicle orientation, can be taken into account when assigning the closest street section 110.
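The nearest-section assignment could, for instance, be computed with shapely distances as sketched below; the orientation-based tie-breaking mentioned above is only indicated by a comment, and all names are assumptions.

```python
# Sketch of the object grouping S31: assign each vehicle centroid to the
# closest street section 110. Names are illustrative assumptions.
from shapely.geometry import LineString, Point


def assign_to_sections(vehicle_centroids: list, sections: list) -> list:
    """Return, for each vehicle centroid, the index of its closest street section."""
    assignment = []
    for centroid in vehicle_centroids:
        distances = [section.distance(centroid) for section in sections]
        best = min(range(len(sections)), key=lambda i: distances[i])
        # If two sections are nearly equidistant, additional features such as
        # vehicle orientation could be consulted here, as noted above.
        assignment.append(best)
    return assignment
```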

(58) Then, a lane identification subroutine S32 is performed. This is described in more detail in FIGS. 5b and 5c. First, empirical lane identification S321 is performed, followed by quality assurance S322 and lane adjustment S323. FIG. 5c presents step S321 in more detail. The empirical lane identification subroutine S321 comprises the following steps (as also illustrated by FIG. 5c; a sketch in code follows the list).

(59) 1. Set the number of on-street vehicle lanes to k=1.

(60) 2. Divide all vehicles 100 into k lanes using hierarchical clustering based on the distance between vehicles 100 and/or the distance between vehicles 100 and street sections 110 and/or the distance to other objects of interest identified in the image.

(61) 3. Using total least squares, estimate lines that pass through the centroids of vehicles 100, assuming that each line represents one lane, all lines have an identical slope, and only their intercepts (that is, the offsets between them) differ.

(62) 4. Compute the error of the model from the previous point. If the error is above a certain threshold and k is lower than the number of vehicles 100, increase k and go to point 2.

(63) 5. Record the assignment of each vehicle 100, and the slope and intercepts of the estimated on-street vehicle lanes.
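A condensed sketch of this loop is given below, under two simplifying assumptions: the hierarchical clustering acts on the vehicle centroid coordinates only, and the shared-slope fit is solved with ordinary least squares rather than the total least squares named above, purely to keep the example short. Thresholds and names are illustrative.

```python
# Sketch of empirical lane identification S321: cluster vehicles into k lanes
# and fit parallel lines (one shared slope, one intercept per lane).
# Ordinary least squares is used here as a simplification of total least squares.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage


def fit_shared_slope(xy: np.ndarray, labels: np.ndarray):
    """Fit y = slope * x + intercept[lane] with a single shared slope."""
    lanes = np.unique(labels)
    design = np.column_stack(
        [xy[:, 0]] + [(labels == lane).astype(float) for lane in lanes])
    coef, *_ = np.linalg.lstsq(design, xy[:, 1], rcond=None)
    error = float(np.mean((xy[:, 1] - design @ coef) ** 2))
    return float(coef[0]), dict(zip(lanes.tolist(), coef[1:].tolist())), error


def identify_lanes(xy: np.ndarray, error_threshold: float = 2.0):
    """Increase the number of lanes k until the parallel-line fit is good enough."""
    n = len(xy)
    if n == 1:  # a single vehicle trivially forms one lane (slope undefined)
        return np.array([1]), 0.0, {1: float(xy[0, 1])}, 0.0
    tree = linkage(xy, method="ward")  # hierarchical clustering of centroids
    for k in range(1, n + 1):
        labels = fcluster(tree, t=k, criterion="maxclust")
        slope, intercepts, error = fit_shared_slope(xy, labels)
        if error <= error_threshold or k == n:
            return labels, slope, intercepts, error
```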

(64) Returning to FIG. 5b, for each street section 110, its slope (bearing) is compared with the slope of the empirical lane identified in the previous step S321. If the difference is too large, the street section 110 is flagged. This happens, e.g., in situations where only a few vehicles 100 are identified on a street section 110. In this way, quality assurance S322 is performed.
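As an illustration of this check, the comparison might look as follows, assuming both the empirical lane and the street section are characterized by a slope (dy/dx); the angular threshold is an assumption.

```python
# Sketch of quality assurance S322: flag a street section whose empirical lane
# direction deviates too much from the section's own direction.
import math


def section_flagged(lane_slope: float, section_slope: float,
                    max_angle_diff_deg: float = 15.0) -> bool:
    """Return True if the lane and section directions differ too much."""
    lane_angle = math.degrees(math.atan(lane_slope))
    section_angle = math.degrees(math.atan(section_slope))
    diff = abs(lane_angle - section_angle) % 180.0
    return min(diff, 180.0 - diff) > max_angle_diff_deg
```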

(65) Finally, lanes flagged during quality assurance S322 are adjusted in step S323. If a street section is flagged, a similar non-flagged section or sections are identified (based on the bearing of the sections and other section features), and empirical lane identification S321 is performed again, but this time the stop criteria penalize the estimated slope using the slopes of the similar sections.

(66) FIGS. 5d to 5f depict typical outputs of step S3 as depicted in FIG. 1a.

(67) FIG. 5d depicts an exemplary output of matching vehicles 100 to street sections 110 as part of the object grouping step S31. Vehicles 100 matched to the same street section 110 are marked with the same colour.

(68) FIG. 5e depicts another part of the object grouping step S31. Vehicles matched to one street section 110 are identified as belonging to its one side or another. Left side vehicles 104 (shown on the left side of the figure) are shown with a red line, and right side vehicles 104′ (shown on the right side of the figure) are shown with a green line.

(69) FIG. 5f depicts a typical final output of the on-street vehicle lane identification module. A street section 110 is shown as a line through the satellite image. Vehicles 102 are grouped to a first lane, vehicles 102′ to a second lane, and vehicle 102″ to a third lane. Note that the third lane comprises only one vehicle.

(70) FIGS. 6a and 6b depict exemplary and more detailed embodiments of S4 as shown in FIG. 1a. That is, they depict consolidation of on-street vehicle lanes from different street sections.

(71) This module combines data coming from multiple images, and it creates a uniform notation for on-street vehicle lanes 120 across images, as the same lane might have different IDs in data from different locations.

(72) FIG. 6a shows two satellite images of the same area (with different timestamps) with estimated lanes 120, 120′ (the individual lanes being denoted 120a, 120b, 120c and 120a′, 120b′, 120c′ from left to right, respectively). In the left image, red vehicles are assigned to lane 120a and blue vehicles to lane 120c. In the right image, red vehicles are assigned to lane 120a′, green to 120b′, and blue to 120c′. The middle lane in the left image (lane 120b) is artificial and constitutes an output of the whole module. That is, even if one street section did not have a certain lane (based on the vehicles present in it), it can be added based on the different street sections (and/or different images).

(73) FIG. 6b depicts a more detailed breakdown of the lane consolidation submodule. A plurality of satellite images 14 serve as inputs. In S41, off-street vehicles are removed. If vehicles 100 located off-street are assigned to the street sections 110, they are assigned to one or more separate lanes. These lanes can be easily identified, as their intercepts (that is, offsets with respect to the other lanes) are much higher in absolute value than the intercepts of vehicle lanes 120. Then, the number of lanes 120 is determined in step S42. As the number of images increases, this number will simply approach the maximal number of lanes 120, since the probability that there is at least one image in which each lane 120 is occupied increases. If the number of images is low, additional information about the street sections 110 might be taken into account, and the number of lanes 120 is determined based on the offsets of the estimated lanes.
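The intercept-based removal of off-street lanes can be sketched as a simple filter, assuming each candidate lane is represented by its fitted intercept (offset from the street axis in metres); the cutoff value is an assumption.

```python
# Sketch of step S41: discard lanes whose offset from the street axis is
# implausibly large, i.e. lanes made up of off-street vehicles.
def remove_off_street_lanes(lane_intercepts: dict, max_offset_m: float = 12.0) -> dict:
    """Keep only lanes whose absolute offset from the street axis is plausible."""
    return {lane: offset for lane, offset in lane_intercepts.items()
            if abs(offset) <= max_offset_m}
```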

(74) In step S43, lane identification is established. Here, a uniform lane number is assigned to each lane, so that lane number 1 is the leftmost vehicle lane and the rightmost vehicle lane receives the highest number. E.g., in FIG. 6a, lane 1 comprises 120a and 120a′, lane 2 comprises 120b (artificial) and 120b′, etc.
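The uniform numbering itself can then be as simple as ordering the consolidated lanes by their offset, as in this illustrative sketch (assuming that smaller offsets correspond to lanes further to the left).

```python
# Sketch of step S43: lane 1 is the leftmost lane, the rightmost lane gets
# the highest number. Ordering by offset is an assumption.
def number_lanes(lane_intercepts: dict) -> dict:
    """Map each original lane ID to a uniform left-to-right lane number."""
    ordered = sorted(lane_intercepts, key=lane_intercepts.get)
    return {lane_id: number for number, lane_id in enumerate(ordered, start=1)}
```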

(75) FIGS. 7a to 7c depict more detailed and exemplary embodiments of step S5 as shown in FIG. 1a. That is, parking lanes 150 are identified and parking spaces 152 derived. This module can be run independently on each street section 110 (but use data from all satellite images 14 covering this section) and all vehicles 100 assigned to this street section 110.

(76) FIG. 7a shows a more detailed overview of step S5. The parking lane filtering subroutine S51 comprises determining which on-street vehicle lanes 120 are parking lanes 150. From the previous module (the one described in relation to FIGS. 6a and 6b), there is a uniform notation of on-street vehicle lanes 120. Obviously, only the leftmost and rightmost lanes might be used for parking. Hence, vehicles 100 from inner lanes 140 are removed from the data set.

(77) In the second step, these potential parking lanes are confirmed to contain parked cars. This is done based on: 1. the distribution of the distance between identified vehicles 100, as the mutual distance of parked vehicles differs from that of moving vehicles; 2. the distribution of the distances of these vehicles to other objects of interest; and 3. the results of segmentation algorithms around the lane.
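One hedged way to realize the first of these criteria is to test whether the gaps between consecutive vehicles along a lane are small and regular, as in the sketch below; the statistic and the threshold are assumptions.

```python
# Sketch of the parking-lane confirmation: parked vehicles tend to stand at
# small, regular gaps, while moving vehicles are spread further apart.
import numpy as np


def looks_like_parking(positions_along_lane: np.ndarray,
                       max_median_gap_m: float = 8.0) -> bool:
    """Classify a lane as a parking lane from the gaps between its vehicles."""
    ordered = np.sort(positions_along_lane)
    gaps = np.diff(ordered)
    return gaps.size > 0 and float(np.median(gaps)) <= max_median_gap_m
```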

(78) At the end of the process, a list of all parking lanes 150 that are used for parking in an area is obtained.

(79) The parking spot features extraction submodule S52 comprises extracting information about additional features for each parking lane 150. The additional information can comprise the following (a sketch in code follows the list):

(80) 1. The minimal and mean distance between two neighbouring vehicles; 2. The mean orientation of vehicles, and the distribution of orientation of all vehicles in a lane.
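An illustrative computation of these features is given below, assuming that vehicle positions are available as distances along the lane (in metres) and orientations in degrees; the dictionary keys are assumptions.

```python
# Sketch of parking spot features extraction S52.
import numpy as np


def parking_lane_features(positions_along_lane: np.ndarray,
                          orientations_deg: np.ndarray) -> dict:
    """Compute the gap and orientation statistics described above for one lane."""
    ordered = np.sort(positions_along_lane)
    gaps = np.diff(ordered)
    return {
        "min_gap_m": float(gaps.min()) if gaps.size else None,
        "mean_gap_m": float(gaps.mean()) if gaps.size else None,
        "mean_orientation_deg": float(np.mean(orientations_deg)),
        "orientation_std_deg": float(np.std(orientations_deg)),
    }
```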

(81) Spot identification submodule S53 comprises using the features extracted in the previous submodule to identify parking spots 152.

(82) FIG. 7b schematically depicts a typical street. Two parking lanes 150 are present on the extremities. In the middle, the street middle line 130 is shown. Between the parking lane 150 and the middle line 130 on each side, a moving traffic lane 140 (or traffic lane 140) is depicted.

(83) FIG. 7c depicts a typical result of step S5 as shown in FIG. 1a. Parking spots 152 and 152′ are identified in the depicted satellite image. Note that parking spots 152 are configured for diagonal or perpendicular parking, while parking spots 152′ are configured for parallel parking.

LIST OF REFERENCE NUMERALS

(84) 10—Storage component 12—Storage database 14—Image 20—Processing component 30—Output component 100—Detected object of interest/vehicle 102, 102′, 102″—Vehicle grouped to street section 104, 104′—Vehicle assigned to street section side 110—Street section 120, 120′—On-street vehicle lanes 120a, 120b, 120c, 120a′, 120b′, 120c′—Specific detected lanes 130—Street middle line 140—Moving traffic lane 150—Parking lane 152, 152′—Parking spot

(85) Whenever a relative term, such as “about”, “substantially” or “approximately” is used in this specification, such a term should also be construed to also include the exact term. That is, e.g., “substantially straight” should be construed to also include “(exactly) straight”. Whenever steps were recited in the above or also in the appended claims, it should be noted that the order in which the steps are recited in this text may be the preferred order, but it may not be mandatory to carry out the steps in the recited order. That is, unless otherwise specified or unless clear to the skilled person, the order in which steps are recited may not be mandatory. That is, when the present document states, e.g., that a method comprises steps (A) and (B), this does not necessarily mean that step (A) precedes step (B), but it is also possible that step (A) is performed (at least partly) simultaneously with step (B) or that step (B) precedes step (A). Furthermore, when a step (X) is said to precede another step (Z), this does not imply that there is no step between steps (X) and (Z). That is, step (X) preceding step (Z) encompasses the situation that step (X) is performed directly before step (Z), but also the situation that (X) is performed before one or more steps (Y1), . . . , followed by step (Z). Corresponding considerations apply when terms like “after” or “before” are used.