Object Location Information Provisioning for Autonomous Vehicle Maneuvering

20230093668 · 2023-03-23

    Inventors

    CPC classification

    International classification

    Abstract

    Embodiments of the present disclosure provide a method, a computer program product, and an arrangement (200) for object location information provisioning for autonomous vehicle (100) maneuvering. The method comprises receiving (S21) a request for object location information from at least one autonomous vehicle (100). The method comprises retrieving (S23) vulnerable road user, VRU, data, from a plurality of VRU data sources (104a-104n), wherein the VRU data comprises respective VRU locations in a pre-determined surrounding of the autonomous vehicle (100). Further, the method comprises determining (S25) the object location information based on the retrieved VRU data. Additionally, the method comprises periodically (S27) transmitting the determined object location information to the autonomous vehicle (100).

    Claims

    1-21. (canceled)

    22. A computer-implemented method, for object location information provisioning for autonomous vehicle maneuvering, the method comprising: receiving a request for object location information from at least one autonomous vehicle; retrieving vulnerable road user (VRU) data, from a plurality of VRU data sources, wherein the VRU data comprises respective VRU locations in a pre-determined surrounding of the autonomous vehicle; determining the object location information based on the retrieved VRU data; and periodically transmitting the determined object location information to the autonomous vehicle.

    23. The computer-implemented method according to claim 22, wherein retrieving VRU data comprises: authenticating the plurality of VRU data sources for data ingestion of VRU data; and disassociating the VRU data from VRU identifying information.

    24. The computer-implemented method according to claim 22, wherein determining object location information comprises: identifying the VRU locations comprised in the VRU data from the plurality of VRU data sources; and combining the VRU locations for each respective VRU.

    25. The computer-implemented method according to claim 22, wherein the request comprises an identifier of the at least one autonomous vehicle.

    26. The computer-implemented method according to claim 22, wherein the plurality of VRU data sources comprises one or more mobile network operators and wherein VRU data corresponds to wireless device data.

    27. The computer-implemented method according to claim 26, wherein the plurality of VRU data sources comprises user equipment, wireless devices, wireless cameras, or wireless sensors.

    28. The computer-implemented method according to claim 22, wherein the method comprises anonymizing user specific information from the VRU data retrieved from the one or more mobile network operators.

    29. The computer-implemented method according to claim 22, wherein the method comprises validating the object location information by: determining that the plurality of VRU data sources are detecting a same object; detecting data points belonging to the detected object identified by the plurality of VRU data sources; identifying redundant data points of the object detected by the plurality of VRU data sources; assigning confidence levels based on overlapping information from the plurality of VRU data sources; and validating the object location information using the assigned confidence levels.

    30. The computer-implemented method according to claim 22, wherein the method comprises generating a report in a pre-defined format with the determined object location information.

    31. The computer-implemented method according to claim 30, wherein the generated report with the determined object location information is periodically transmitted to the autonomous vehicle over a cooperative awareness message, CAM.

    32. A non-transitory computer readable medium storing a computer program comprising program instructions that, when executed by a computer processor of an arrangement, configures the arrangement to: receive a request for object location information from at least one autonomous vehicle; retrieve vulnerable road user (VRU) data from a plurality of VRU data sources, wherein the VRU data comprises respective VRU locations in a pre-determined surrounding of the autonomous vehicle; determine the object location information based on the retrieved VRU data; and periodically transmit the determined object location information to the autonomous vehicle.

    33. An arrangement for provisioning object location information for autonomous vehicle maneuvering, the arrangement comprising controlling circuitry configured to: receive a request for object location information from at least one autonomous vehicle; retrieve vulnerable road user (VRU) data, from a plurality of VRU data sources, wherein the VRU data comprises respective VRU locations in a pre-determined surrounding of the autonomous vehicle; determine the object location information based on the retrieved VRU data; and periodically transmit the determined object location information to the autonomous vehicle.

    34. The arrangement according to claim 33, wherein the controlling circuitry is configured to retrieve VRU data by: authenticating the plurality of VRU data sources for data ingestion of VRU data; and disassociating the VRU data from VRU identifying information.

    35. The arrangement according to claim 33, wherein the controlling circuitry is configured to determine object location information by: identifying the VRU locations comprised in the VRU data from the plurality of VRU data sources; and combining the VRU locations for each respective VRU.

    36. The arrangement according to claim 33, wherein the request comprises an identifier of the autonomous vehicle.

    37. The arrangement according to claim 33, wherein the plurality of VRU data sources comprises one or more mobile network operators and wherein VRU data corresponds to wireless device data.

    38. The arrangement according to claim 36, wherein the plurality of VRU data sources comprises user equipment, UEs, wireless devices, wireless cameras, or wireless sensors.

    39. The arrangement according to claim 33, wherein the controlling circuitry is configured to anonymize user specific information from the VRU data retrieved from the one or more mobile network operators.

    40. The arrangement according to claim 33, wherein the controlling circuitry is configured to validate the object location information by: determining the plurality of VRU data sources detecting a same object; detecting data points belonging to the detected object identified by the plurality of VRU data sources; identifying redundant data points of the object detected by the plurality of VRU data sources; assigning confidence levels based on overlapping information from the plurality of VRU data sources; and validating the object location information using the assigned confidence levels.

    41. The arrangement according to claim 33, wherein the controlling circuitry is configured to generate a report in a pre-defined format with the determined object location information.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0020] The foregoing will be apparent from the following more particular description of the example embodiments, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the example embodiments.

    [0021] FIG. 1 illustrates an autonomous vehicle in a multi-source scenario;

    [0022] FIG. 2 discloses a flowchart illustrating example method steps implemented in an object location information provisioning application;

    [0023] FIG. 3 is a signaling diagram illustrating an exchange of signals for the object location information provisioning application in a telecommunication network;

    [0024] FIG. 4a discloses an object location information provisioning application in a 4G telecommunication network, and FIG. 4b discloses an object location information provisioning application in a 5G telecommunication network;

    [0027] FIG. 5 is a schematic block diagram illustrating an example configuration of an object location information provisioning application and its interfaces.

    [0028] FIG. 6 illustrates a computing environment implementing the object location information provisioning application for autonomous vehicle maneuvering, according to an embodiment.

    DETAILED DESCRIPTION

    [0029] Aspects of the present disclosure will be described more fully hereinafter with reference to the accompanying drawings. The apparatus and method disclosed herein can, however, be realized in many different forms and should not be construed as being limited to the aspects set forth herein. Like numbers in the drawings refer to like elements throughout.

    [0030] The terminology used herein is for the purpose of describing particular aspects of the disclosure only, and is not intended to limit the invention. It should be emphasized that the term “comprises/comprising” when used in this specification is taken to specify the presence of stated features, integers, steps, or components, but does not preclude the presence or addition of one or more other features, integers, steps, components, or groups thereof. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise.

    [0031] Embodiments of the present disclosure will be described and exemplified more fully hereinafter with reference to the accompanying drawings. The solutions disclosed herein can, however, be realized in many different forms and should not be construed as being limited to the embodiments set forth herein.

    [0032] It will be appreciated that when the present disclosure is described in terms of a method, it may also be embodied in one or more processors and one or more memories coupled to the one or more processors, wherein the one or more memories store one or more programs that perform the steps, services and functions disclosed herein when executed by the one or more processors.

    [0033] In the following description of exemplary embodiments, the same reference numerals denote the same or similar components.

    [0034] FIG. 1 illustrates an autonomous vehicle 100 in a multi-source scenario in a surrounding comprising infrastructure components and vulnerable road users, VRUs, e.g., pedestrians and cyclists. In the context of the present disclosure, the term autonomous vehicle refers to a vehicle that can sense its surroundings and perform the functions necessary to maneuver the vehicle from a starting point to a destination point with minimum-to-no human intervention. Different levels of autonomous driving have been defined and, with each increasing level, the extent of the vehicle's independence regarding decision making and vehicle control increases. Vehicles with capabilities for autonomous maneuvering are expected to be seen in confined spaces like ports and logistics/distribution centers as well as on general public roads.

    [0035] The autonomous vehicle 100 may use different technologies to detect objects in its surrounding, e.g., image recognition (cameras), radar sensors and LIDAR sensors. For example, image processing algorithms are used to categorize detected objects such as lanes, traffic lights, vehicles and pedestrians. It is crucial to ensure the safety of all those involved, especially the Vulnerable Road Users (VRUs) like pedestrians and cyclists. The local processing resources within the autonomous vehicle 100 are used to build a 3D Local Dynamic Map (LDM) and locate/track objects.

    [0036] Such a self-reliant system is important so that the autonomous vehicle 100 can act based on only its own input data when the vehicle has no, poor or unreliable connectivity. On the other hand, it limits the potential of taking advantage of connectivity and using input from other data sources to identify objects and improve the vehicle's perception of the surroundings. Input from additional data sources would also enable the vehicle to make more informed decisions, especially considering the limitations of current camera and sensor technology in cases of bad weather, physical damage to the devices or obstacles in the path of the VRUs. The proposed invention solves the above-mentioned disadvantages by sharing anonymized object location information, retrieved from the VRU data sources 104a, 104b, e.g., by means of the telecommunication network 300, with the autonomous vehicle 100. Hence, the VRU data sources 104a, 104b act as additional sources for the autonomous vehicle 100 to identify VRUs in a pre-determined surrounding.

    [0037] An arrangement which implements an object location provisioning application provides additional processing capacity for performing various functions on the obtained VRU data and VRU data sources, such as authentication, ingestion, anonymization, data combining and validation. Therefore, the proposed arrangement allows determination of object location information using VRU data retrieved from additional VRU data sources 104a, 104b, e.g., user equipments, handheld devices, wireless devices or wireless sensors, retrievable using the means of communication that has been established, e.g., by means of the telecommunication network 300. Other examples of VRU data sources comprise traffic cameras and connected wireless transport units like scooters or rental bikes. Thus, the usage of a plurality of VRU data sources to determine the object location information, and the communication of the determined object location information to the autonomous vehicle, improve the reliability of the autonomous vehicle 100 in taking more informed decisions based on a better perception of the vehicle surroundings.

    [0038] FIG. 2 is a flowchart illustrating example method steps implemented in an object location information provisioning application. At step S21, the method comprises receiving a request for object location information from at least one autonomous vehicle 100. In an embodiment, the request includes an identifier of the autonomous vehicle 100. For example, the identifier of the autonomous vehicle 100 can be an International Mobile Subscriber Identity, IMSI, associated with the autonomous vehicle 100, which can be used to track or monitor the autonomous vehicle 100 and/or to perform Vehicle-to-Everything (V2X) communication between the autonomous vehicle and a wireless communication network.
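    As a minimal illustrative sketch of step S21 (not the claimed implementation), the request could be modeled as a record carrying the vehicle identifier; the class and field names below are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class LocationInfoRequest:
    """Request for object location information (step S21).

    The identifier may be, e.g., the IMSI associated with the
    autonomous vehicle, usable for tracking/monitoring and V2X
    communication as described in the text.
    """
    vehicle_id: str  # e.g., an IMSI such as "240011234567890"


def receive_request(raw: dict) -> LocationInfoRequest:
    """Validate an incoming request and extract the vehicle identifier."""
    vehicle_id = raw.get("vehicle_id")
    if not vehicle_id:
        raise ValueError("request must carry an identifier of the vehicle")
    return LocationInfoRequest(vehicle_id=vehicle_id)


req = receive_request({"vehicle_id": "240011234567890"})
```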

    [0039] At step S23, the method comprises retrieving VRU data from a plurality of VRU data sources 104a, 104b, wherein the VRU data comprises respective VRU locations in a pre-determined surrounding of the autonomous vehicle 100. For example, the VRU data corresponds to data obtained from various wireless devices such as user equipments, wireless cameras, cameras on light poles/traffic lights or the like.

    [0040] The plurality of VRU data sources 104a, 104b may include one or more wireless network operators. Further, the plurality of VRU data sources 104a, 104b may include various wireless devices such as but not limited to user equipments (UEs), wireless cameras or wireless sensors.

    [0041] In an embodiment, retrieving the VRU data may comprise authenticating and/or authorizing the plurality of VRU data sources 104a, 104b for data ingestion of VRU data at step S24a. For example, the VRU data sources 104a-104n, may be authenticated by verifying the credentials associated with the VRU data sources 104a-104n e.g. using passwords. The VRU data sources may be authenticated using advanced authentication methods like digital certificates, e.g., using specific authentication protocols like Secure Socket Layer, Transport Layer Security (SSL/TLS). After authentication of the VRU data sources 104a, 104b, the VRU data may be disassociated from VRU identifying information at step S24b.
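    Steps S24a and S24b above can be sketched as follows. In practice, digital certificates over SSL/TLS are preferred as the paragraph notes; the salted-password check and field names here are simplified assumptions for illustration only:

```python
import hashlib
import hmac

# Hypothetical credential store: source id -> salted password hash.
_SALTS = {"104a": b"salt-a", "104b": b"salt-b"}
_CREDENTIALS = {
    "104a": hashlib.sha256(b"salt-a" + b"secret-a").hexdigest(),
    "104b": hashlib.sha256(b"salt-b" + b"secret-b").hexdigest(),
}


def authenticate_source(source_id: str, password: str) -> bool:
    """Step S24a: verify the credentials of a VRU data source."""
    expected = _CREDENTIALS.get(source_id)
    if expected is None:
        return False
    candidate = hashlib.sha256(_SALTS[source_id] + password.encode()).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, expected)


def disassociate(record: dict) -> dict:
    """Step S24b: strip VRU-identifying fields from a data record."""
    return {k: v for k, v in record.items() if k not in ("imsi", "msisdn", "name")}
```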

    [0042] At step S25, the method further comprises determining the object location information based on the retrieved VRU data, e.g., using data fusion or the like.

    [0043] In an embodiment, the VRU locations are identified in the VRU data retrieved from the plurality of VRU data sources 104a, 104b. Further, the identified VRU locations for each VRU can be combined to determine the object location information. For example, an object (such as a pedestrian) is identified from the VRU data sources 104a and 104b. The VRU location of the object is identified using the VRU data retrieved from the VRU data sources 104a and 104b. The VRU location obtained from the VRU data source 104a is combined with the VRU location obtained from the VRU data source 104b to determine the accurate location of the object. It should be noted that one or more location determination techniques, or yet-to-be-known techniques, may be used to accurately determine the object location information based on the retrieved VRU data.
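    One possible combining rule for the per-source locations of a single VRU is an inverse-accuracy weighted average, so that more accurate fixes contribute more. This is only one example of a combining technique, not the one mandated by the method:

```python
def combine_locations(fixes: list[tuple[float, float, float]]) -> tuple[float, float]:
    """Combine per-source position fixes (lat, lon, accuracy_m) for one VRU.

    Weight each fix by the inverse of its reported accuracy so that
    more precise sources dominate the combined estimate.
    """
    weights = [1.0 / max(acc, 1e-6) for _, _, acc in fixes]
    total = sum(weights)
    lat = sum(w * f[0] for w, f in zip(weights, fixes)) / total
    lon = sum(w * f[1] for w, f in zip(weights, fixes)) / total
    return lat, lon


# Pedestrian seen by source 104a (10 m accuracy) and source 104b (50 m accuracy):
lat, lon = combine_locations([(57.7089, 11.9746, 10.0), (57.7091, 11.9750, 50.0)])
```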

    [0044] At step S27, the method comprises periodically transmitting the determined object location information to the autonomous vehicle 100, i.e., object location information of the detected object(s). For example, the determined object location information is transmitted to the autonomous vehicle 100 every one second. The transmission of the object location information to the autonomous vehicle 100 may be periodic or may be configurable depending on the requirements of the object location information at the autonomous vehicle 100.
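    The periodic (and configurable) transmission of step S27 can be sketched as a bounded loop; the callback names and the tick bound are assumptions for this illustration:

```python
import time


def transmit_periodically(get_info, send, period_s: float = 1.0, ticks: int = 3):
    """Step S27: transmit the determined object location information every
    `period_s` seconds (the text uses one second as an example; the period
    may be configured per the vehicle's requirements). `ticks` bounds the
    loop for illustration only; a deployment would run continuously.
    """
    for _ in range(ticks):
        send(get_info())
        time.sleep(period_s)


sent = []
transmit_periodically(lambda: {"objects": []}, sent.append, period_s=0.01, ticks=3)
```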

    [0045] In an embodiment, the determined object location information can be transmitted to the autonomous vehicle 100 by generating a report in a pre-defined format or a standard format which includes the determined object location information.

    [0046] Further, the generated report with the determined object location information may be periodically transmitted to the autonomous vehicle 100 (for example, every one second) over a cooperative awareness message (CAM).
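    A report in a pre-defined format could be sketched as below. Note that a real CAM follows the ETSI ASN.1 message structure; the JSON layout and field names here are a simplified, hypothetical stand-in:

```python
import json
import time


def generate_report(object_locations: list[dict]) -> str:
    """Build a report with the determined object location information in a
    pre-defined format. The keys below are illustrative, not the CAM schema.
    """
    report = {
        "messageType": "object-location-report",  # hypothetical type tag
        "generationTimeMs": int(time.time() * 1000),
        "objects": object_locations,
    }
    return json.dumps(report)


msg = generate_report([{"lat": 57.7089, "lon": 11.9746, "type": "pedestrian"}])
```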

    [0047] The above-mentioned steps can be realized or performed using an object provisioning application which can be configured to provide the object location information to the autonomous vehicle 100. The object provisioning application may reside in an arrangement 200 for edge computing, e.g., an edge node comprising one or more servers. The arrangement 200 may include the controlling circuitry required to perform the method steps as described above.

    [0048] In some embodiments, the object provisioning application may reside in a cloud computing environment or a remote server configured to execute the object provisioning application in order to transmit the object location information periodically, to the autonomous vehicle 100.

    [0049] The arrangement 200 can include various modules which can be realized using hardware and/or software or in combination of hardware and software to perform the method steps. The functions of the various modules of the arrangement 200 are explained in conjunction with FIG. 5 in the later parts of the description.

    [0050] FIG. 3 is a signaling diagram illustrating an exchange of signals for the object location information provisioning application in a telecommunication network 300. The object location provisioning application may be configured to interact with one or more network entities in the telecommunication network 300 to retrieve the VRU data. For example, the telecommunication network 300 includes a plurality of network elements such as base stations, i.e., the EUTRAN 302a in a 4G network and the NG-RAN 302b in a 5G network, a mobility management entity, MME, 304a/access and mobility management function, AMF, 304b, a gateway mobile location center, GMLC, 306 and an enhanced serving mobile location center, E-SMLC, 308a/location management function, LMF, 310. It should be noted that the telecommunication network may include network entities other than the entities shown in FIG. 3.

    [0051] As depicted in FIG. 3, the object location information provisioning application may be configured to transmit S302 a location service request to the GMLC 306 over a standard interface. The GMLC 306 transmits S304 the location service request to the MME 304a/AMF 304b. The MME 304a/AMF 304b, upon receiving the location service request, transmits S306 the location service request to the E-SMLC 308a/LMF 310 for processing. The E-SMLC 308a/LMF 310 processes S308 the location service request in coordination with the EUTRAN 302a/NG-RAN 302b.

    [0052] The E-SMLC 308a/LMF 310 supports multiple positioning techniques which provide different levels of position accuracy. The E-SMLC 308a/LMF 310 calculates S310 the position or location information of the object based on the retrieved VRU data. Among the available network-based positioning methods, the UE-assisted A-GNSS (Assisted-GNSS) positioning method over the control plane provides the best accuracy (~10 m to 50 m) and the least UE power consumption. It should be noted that more advanced positioning methods or positioning processes (e.g., GNSS-RTK, positioning over the user plane, or the like) that provide higher accuracy and better UE performance can be implemented at the E-SMLC 308a/LMF 310 for calculating the location information of the object.

    [0053] Further, the E-SMLC 308a/LMF 310 then transmits S312 a location service response back to the MME 304a/AMF 304b. The MME 304a/AMF 304b in turn sends S314 the location service response to the GMLC 306, and the GMLC 306 sends S316 the location service response to the object location provisioning application 200.

    [0054] FIG. 4a discloses an object location information provisioning application in a 4G telecommunication network. As depicted in FIG. 4a, the entities of the 4G telecommunication network include the EUTRAN 302a, the MME 304a, the GMLC 306a and the E-SMLC 308a. The object location information provisioning application hosted in an arrangement 200 (for example, a server in a network domain) interacts with the 4G telecommunication network for retrieving VRU data. For example, the arrangement 200 communicates with the GMLC 306a over an Open Mobile Alliance Mobile Location Protocol, OMA MLP, interface. The arrangement 200 can be configured to trigger a location service request to the GMLC 306a over the OMA MLP interface. The GMLC 306a and the E-SMLC 308a communicate with the MME 304a over the SLg and SLs interfaces, respectively. Further, the MME 304a and the E-UTRAN 302a interact with each other over the S1 interface. The E-UTRAN 302a transmits control signaling to the UE 104a through the LTE-Uu interface.
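    The location service request towards the GMLC over the OMA MLP interface is an XML message. The sketch below builds a payload following the general shape of an MLP Standard Location Immediate Request (SLIR); the exact schema, header fields and DTD declarations a given GMLC accepts may differ, so treat the element names as assumptions:

```python
import xml.etree.ElementTree as ET


def build_slir(imsi: str, client_id: str, client_pwd: str) -> str:
    """Build an MLP-style SLIR payload identifying the target by IMSI.

    Element names follow the general OMA MLP shape; a production client
    would also carry the MLP version, DTD reference and quality-of-
    position parameters required by the operator's GMLC.
    """
    svc = ET.Element("svc_init")
    hdr = ET.SubElement(svc, "hdr")
    client = ET.SubElement(hdr, "client")
    ET.SubElement(client, "id").text = client_id
    ET.SubElement(client, "pwd").text = client_pwd
    slir = ET.SubElement(svc, "slir")
    msids = ET.SubElement(slir, "msids")
    msid = ET.SubElement(msids, "msid", attrib={"type": "IMSI"})
    msid.text = imsi
    return ET.tostring(svc, encoding="unicode")


payload = build_slir("240011234567890", "edge-app", "secret")
```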

    [0055] The MME 304a monitors the mobility of the UE 104a and transmits mobility information of the UE to the GMLC 306a and the E-SMLC 308a. The E-SMLC 308a implements multiple positioning techniques to determine the location of the UE 104a. Further, the location information of the object can be determined based on the location of the UE 104a. The E-SMLC 308a communicates the determined location of the object to the GMLC 306a, and the GMLC 306a in turn communicates the location information of the object to the arrangement 200 over the OMA MLP interface as shown in FIG. 4a.

    [0056] In some embodiments, as defined in 3GPP TS 36.305 and TS 38.305, the request for a target UE location can be triggered by the MME 304a or by another entity in the 4G telecommunication network.

    [0057] In another embodiment, the location service request can be triggered by location information provisioning application implemented in the arrangement 200 via the GMLC over the OMA MLP interface.

    [0058] FIG. 4b discloses an object location information provisioning application in a 5G telecommunication network. As depicted in FIG. 4b, the entities of the 5G telecommunication network include the NG-RAN 302b, the AMF 304b, the GMLC 306a and the LMF 310. The object location information provisioning application hosted in the arrangement 200 (for example, a server in a network domain) interacts with the 5G telecommunication network for retrieving the VRU data. For example, the arrangement 200 communicates with the GMLC 306a over the OMA MLP interface. The arrangement 200 can be configured to trigger a location service request towards the LMF 310 via the GMLC 306a over the OMA MLP interface. The GMLC 306a and the LMF 310 communicate with the AMF 304b over the NLg and NLs interfaces, respectively. Further, the AMF 304b and the NG-RAN 302b interact with each other over the N2 interface. The NG-RAN 302b transmits control signaling to the UE 104a through the NR-Uu interface.

    [0059] The AMF 304b monitors the mobility of the UE 104a and transmits the mobility information of the UE 104a to the GMLC 306a and the LMF 310. The LMF 310 implements multiple positioning techniques to determine the location of the UE 104a. Further, the location information of the object can be determined based on the location of the UE 104a. The LMF 310 communicates the determined location of the object to the arrangement 200 over the OMA MLP interface as shown in FIG. 4b.

    [0060] In some embodiments, as defined in 3GPP TS 36.305 and TS 38.305, the request for a target UE location can be triggered by the AMF 304b or by another entity in the 5G telecommunication network.

    [0061] FIG. 5 is a schematic block diagram illustrating an example configuration of an object location information provisioning application and its interfaces. The object location provisioning application is implemented (for example, in an edge server) as various modules within an arrangement 200 for provisioning object location information for autonomous vehicle maneuvering, e.g., within an edge node. In the context of the present disclosure, 'edge' indicates a location where the object location provisioning application is running, e.g., an edge node comprising the arrangement 200. The location of the arrangement depends on network characteristics, e.g., telecom network characteristics, and the various modules may also be partly distributed between different entities. The application is run in a location chosen such that the data sharing from the network and other sources to the edge node, and from the edge node to the autonomous vehicles, satisfies the latency requirements of the use case, e.g., as useful 'real-time' data. Thus, in some examples, the edge server is located as close as possible to the VRU data sources and to where the autonomous vehicle operates, e.g., in the Mobile Network Operator, MNO, infrastructure close to the roads, to reduce latency and offload processing from the vehicle to the edge application. Moreover, introducing the edge application in the MNO's infrastructure would enable secure provisioning of object location information over the MNO's 4G or 5G network. Also, arranging the edge application in the MNO's infrastructure enables the application to use standardized APIs to capture some of the data required from the telecom network.

    [0062] The arrangement 200 for provisioning object location information for autonomous vehicle maneuvering comprises controlling circuitry e.g., as illustrated in FIG. 6.

    [0063] The controlling circuitry is configured to receive a request for object location information from at least one vehicle. The controlling circuitry is further configured to retrieve vulnerable road user, VRU, data from a plurality of VRU data sources 104a-104n, wherein the VRU data comprises respective VRU locations in a pre-determined surrounding of the autonomous vehicle.

    [0064] The controlling circuitry is also configured to determine the object location information based on the retrieved VRU data, and to periodically transmit the determined object location information to the autonomous vehicle.

    [0065] In an embodiment, the arrangement 200, e.g., the controlling circuitry of the arrangement, comprises a data ingestor 202, an authenticator 204, a data anonymizer 206, a data combiner 208, a data validation engine 210, a report generator, a storage 214 and an interface 216.

    [0066] In some embodiments, the VRU data sources 104a-104n may be authenticated by the authenticator 204 for data ingestion of VRU data through the data ingestor 202. The authentication of the VRU data sources 104a-104n may include verifying the credentials of the VRU data sources 104a-104n. The most basic authentication method would be using passwords.

    [0067] More advanced authentication methods like digital certificates are preferred using specific authentication protocols like SSL/TLS, as earlier mentioned.

    [0068] Thus, upon successful authentication of the VRU data sources 104a-104n by the authenticator 204, the controlling circuitry, e.g., the data ingestor 202, may be configured to retrieve VRU data from the plurality of VRU data sources 104a-104n, i.e., once authenticated, the VRU data sources can send the data to the data ingestion layer provided by the data ingestor. For some VRU data sources, a request needs to be sent (one-time or periodically) to trigger data collection. For example, the IMSIs (unique UE identifiers) of phones for which location data is to be collected by the telecom network are sent over the OMA MLP 3.2 interface (open and standardized) to the GMLC system in the telecom network (4G and 5G), as explained with reference to FIGS. 4a and 4b. Such request clients are implemented in the data ingestion layer. Thus, the data input to the data ingestor comes from multiple sources and comprises the VRU location, e.g., a location defined in a global standardized format such as World Geodetic System 1984, WGS84, a timestamp and other additional data such as direction, speed, object type, etc. The VRU data includes each VRU location in a pre-determined surrounding of the autonomous vehicle. For example, the pre-determined surrounding of the autonomous vehicle 100 may include a distance ranging from 50 meters to 100 meters or the like. The data ingestor 202 may be configured to disassociate the VRU data from VRU identifying information when retrieving the VRU data.
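    Restricting ingested VRU data to the pre-determined surrounding (e.g., 50-100 m) of the vehicle can be sketched with a great-circle distance check on the WGS84 coordinates. The record layout and the 100 m default are illustrative assumptions:

```python
from math import asin, cos, radians, sin, sqrt


def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two WGS84 points."""
    R = 6371000.0  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))


def vrus_in_surrounding(vehicle: dict, vrus: list, radius_m: float = 100.0) -> list:
    """Keep only VRU records within the pre-determined surrounding."""
    return [v for v in vrus
            if haversine_m(vehicle["lat"], vehicle["lon"], v["lat"], v["lon"]) <= radius_m]


near = vrus_in_surrounding(
    {"lat": 57.7089, "lon": 11.9746},
    [{"lat": 57.7090, "lon": 11.9747},   # roughly a dozen metres away
     {"lat": 57.7189, "lon": 11.9746}],  # over a kilometre away
)
```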

    [0069] In some embodiments, the controlling circuitry, e.g., the data ingestor 202, may be configured to determine the VRU locations comprised in the VRU data retrieved from the plurality of VRU data sources. Further, the data ingestor 202 may be configured to store the VRU data over time in a storage. The VRU data stored in the storage 214 may be used to understand and/or derive important characteristics of VRU movement patterns along the path of the autonomous vehicle. The VRU data, combined together with other data like road accident zones, school zones, etc., may be used to improve the knowledge of the surroundings of an autonomous vehicle 100.

    [0070] The controlling circuitry, e.g., by means of the data anonymizer 206, may be configured to anonymize user-specific information from the VRU data retrieved from the telecommunication network or the mobile network operators. Data anonymization is required for data from sources that contain sensitive user information. This step is either performed by the VRU data source itself (removing/masking sensitive information, assigning temporary identities, IDs, to send towards the edge application, etc.) or performed by the edge application, depending on the deployment model. For example, the data anonymizer 206 may be configured to anonymize the user-specific information by removing the International Mobile Subscriber Identity, IMSI, from the VRU data. Further, the data anonymizer 206 may maintain a mapping of network identifiers (i.e., user IDs) to application-assigned user IDs to differentiate the data for different users.
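    The anonymizer behaviour described above can be sketched as below: strip the IMSI and replace it with a stable, application-assigned ID so the same user's records remain distinguishable. Class and key names are hypothetical:

```python
import itertools


class DataAnonymizer:
    """Remove user-specific identifiers (e.g., the IMSI) from VRU data while
    maintaining a mapping from network identifier to an application-assigned
    ID, so that data for different users can still be told apart."""

    def __init__(self):
        self._mapping = {}                 # IMSI -> application-assigned ID
        self._counter = itertools.count(1)

    def anonymize(self, record: dict) -> dict:
        imsi = record.get("imsi")
        out = {k: v for k, v in record.items() if k != "imsi"}
        if imsi is not None:
            if imsi not in self._mapping:
                self._mapping[imsi] = f"vru-{next(self._counter)}"
            out["user_id"] = self._mapping[imsi]
        return out


anon = DataAnonymizer()
r1 = anon.anonymize({"imsi": "240011234567890", "lat": 57.7089, "lon": 11.9746})
r2 = anon.anonymize({"imsi": "240011234567890", "lat": 57.7090, "lon": 11.9747})
```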

    [0071] The controlling circuitry, e.g., by means of the data combiner 208, may further be configured to combine the VRU locations for each respective VRU. Data may be sent to the data combiner, i.e., a data fusion component, which converts the location input in the data to a single standard format, e.g., WGS84, and fuses data from multiple sources together for each time period of collection (e.g., every second). For example, the data combiner 208 can be configured to implement data fusion by combining the VRU locations retrieved from the plurality of VRU data sources 104a-104n, e.g., wireless devices such as user equipments 104a, wireless cameras, and wireless sensors for which data is retrievable by means of the wireless network. Other examples of data sources comprise traffic cameras and connected wireless transport units such as scooters or rental bikes. For example, the data combiner 208 can be configured to combine the data from the plurality of VRU data sources together for a time period of every second. Further, the data combiner 208 can be configured to perform one or more actions on the VRU data, including converting the VRU data into a standard format, compressing the VRU data, extracting the VRU data, or the like.
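The per-second fusion performed by the data combiner can be sketched as follows. Records sharing a user ID within the same one-second collection window are merged into a single WGS84 position; simple averaging is used here purely for illustration, as the disclosure does not prescribe a particular fusion algorithm.

```python
from collections import defaultdict

def combine_locations(records, period=1.0):
    """Fuse VRU records per user and per collection period.

    `records` is a list of dicts with `user_id`, `timestamp` (seconds),
    and WGS84 `lat`/`lon`. Records in the same window for the same VRU
    are averaged into one fused position (an illustrative strategy)."""
    buckets = defaultdict(list)
    for r in records:
        window = int(r["timestamp"] // period)   # one-second window index
        buckets[(r["user_id"], window)].append(r)

    fused = []
    for (user_id, window), group in buckets.items():
        fused.append({
            "user_id": user_id,
            "window": window,
            "lat": sum(r["lat"] for r in group) / len(group),
            "lon": sum(r["lon"] for r in group) / len(group),
        })
    return fused
```

Two observations of the same VRU within one second thus collapse into a single data point, matching the "every second" collection period named above.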

    [0072] In some embodiments, the data combiner 208 may be configured to perform data fusion of the VRU data retrieved from the plurality of VRU data sources 104a-104n in a data validation engine 210 to detect the VRUs with different levels of accuracy. For example, the data combiner 208 may be configured for data fusion of data points corresponding to a same object detected by the plurality of VRU data sources 104a-104n (when necessary and feasible) in order to improve the accuracy of the data.

    [0073] The controlling circuitry, e.g., by means of the data validation engine 210, may be configured to validate the object location information by analyzing the VRU locations in the VRU data. The data validation engine 210 can be configured to determine that the plurality of VRU data sources are detecting the same object. For example, the data validation engine 210 can be configured to perform data fusion of data points corresponding to a same object detected by the plurality of VRU data sources 104a-104n (when necessary and feasible) in order to improve the accuracy of the data. Additionally, any duplicate data points observed during data fusion and confirmed to belong to the same object can be filtered out to improve the determination of object location information.

    [0074] The data validation engine 210 may also be configured to detect data points belonging to an object detected by the plurality of VRU data sources. Further, the data validation engine 210 can be configured to identify redundant data points of the object detected by the plurality of VRU data sources. Furthermore, the data validation engine 210 can be configured to assign confidence levels based on overlapping information from the plurality of VRU data sources, and to validate the object location information using the assigned confidence levels.
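The confidence-level assignment above can be sketched as follows: data points from different sources that fall within a small separation distance are treated as overlapping observations of the same object, and the confidence level grows with the number of agreeing sources. The distance threshold, the level names, and the source-count cut-offs are all illustrative assumptions, not values from the disclosure.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    R = 6371000.0  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * R * asin(sqrt(a))

def validate(points, max_sep_m=2.0):
    """Collapse overlapping data points for one object into a single
    validated location with a confidence level based on how many
    distinct sources reported positions within `max_sep_m` metres."""
    if not points:
        return None
    anchor = points[0]
    agreeing = {
        p["source_id"] for p in points
        if haversine_m(anchor["lat"], anchor["lon"],
                       p["lat"], p["lon"]) <= max_sep_m
    }
    confidence = ("high" if len(agreeing) >= 3
                  else "medium" if len(agreeing) == 2
                  else "low")
    return {"lat": anchor["lat"], "lon": anchor["lon"],
            "confidence": confidence}
```

Redundant points from the agreeing sources are thereby filtered into one validated object location, as described for the data validation engine 210.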

    [0075] The controlling circuitry, e.g., by means of the report generator 212, may further be configured to generate a report with the determined object location information in a pre-defined format or a standard format. The generated report with the determined object location information is periodically transmitted to the autonomous vehicle 100 (for example, every second) using a cooperative awareness message (CAM) over the interface 216. The interface 216 can be a standard interface, e.g., a standardized 3GPP- or ETSI-defined interface. The report generator is responsible for generating messages as per the standardized format with basic information, i.e., location data points of VRUs, and possible additional information such as the speed of motion of the VRU, the VRU type (cyclist, pedestrian, etc.), the direction of motion, the predicted direction, etc. It then sends the standardized messages over a standardized interface to the connected autonomous vehicles. The report considered here is a generic report for the entire 'area' of interest (e.g., where autonomous vehicles can operate), and the same report may be sent to each vehicle. In the future, the solution can evolve towards sending more personalized messages to each connected vehicle based on the vehicle's speed, location, the circular area around the vehicle that is of immediate interest to it, etc. This information is to be collected via the standardized interface.
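The report structure described above can be sketched as follows. Note the caveat: the disclosure specifies standardized CAM messages over a 3GPP/ETSI interface, whereas this sketch uses JSON purely to illustrate the report contents (VRU location data points plus optional speed and type) for a generic per-area report; the field names are hypothetical.

```python
import json
import time

def generate_report(validated_objects, area_id):
    """Build one generic per-area report (JSON stands in here for the
    standardized CAM format named in the text). `validated_objects` is
    a list of dicts with WGS84 `lat`/`lon` and optional extras."""
    return json.dumps({
        "area": area_id,               # the whole area of interest
        "generated_at": time.time(),   # report timestamp
        "objects": [
            {
                "lat": o["lat"],
                "lon": o["lon"],
                "type": o.get("object_type", "unknown"),
                "speed": o.get("speed"),
            }
            for o in validated_objects
        ],
    })
```

In the scheme described above, this same report would be regenerated and transmitted to every connected vehicle each second.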

    [0076] FIG. 6 illustrates a computing environment 600 implementing the object location information provisioning application for autonomous vehicle 100 maneuvering, according to an embodiment. As depicted, the computing environment 600 comprises at least one data processing unit 604 that is equipped with a control unit 602 and an Arithmetic Logic Unit (ALU) 603, a memory 605, a storage unit 606, a plurality of networking devices 608, and a plurality of input/output (I/O) devices 607. The data processing unit 604 is responsible for processing the instructions of the algorithm. The data processing unit 604 receives commands from the control unit 602 in order to perform its processing. Further, any logical and arithmetic operations involved in the execution of the instructions are computed with the help of the ALU 603.

    [0077] The overall computing environment 600 can be composed of multiple homogeneous and/or heterogeneous cores, multiple CPUs of different kinds, special media, and other accelerators. Further, the plurality of data processing units 604 may be located on a single chip or over multiple chips.

    [0078] The algorithm, comprising the instructions and codes required for the implementation, is stored in either the memory 605 or the storage 606, or both. At the time of execution, the instructions may be fetched from the corresponding memory 605 and/or storage 606 and executed by the data processing unit 604.

    [0079] In case of any hardware implementation, various networking devices 608 or external I/O devices 607 may be connected to the computing environment 600 to support the implementation through the networking devices 608 and the I/O devices 607.

    [0080] The embodiments disclosed herein can be implemented through at least one software program running on at least one hardware device and performing network management functions to control the elements. The elements shown in FIG. 6 include blocks which can be at least one of a hardware device, or a combination of hardware device and software module.

    [0081] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the scope of the disclosure.