TRUST RELATED MANAGEMENT OF ARTIFICIAL INTELLIGENCE OR MACHINE LEARNING PIPELINES IN RELATION TO THE TRUSTWORTHINESS FACTOR EXPLAINABILITY
20230045754 · 2023-02-09
Abstract
There are provided measures for trust related management of artificial intelligence or machine learning pipelines in relation to the trustworthiness factor “explainability”. Such measures exemplarily comprise, at a first network entity managing artificial intelligence or machine learning trustworthiness in a network, transmitting a first artificial intelligence or machine learning trustworthiness related message towards a second network entity managing artificial intelligence or machine learning trustworthiness in an artificial intelligence or machine learning pipeline in said network, and receiving a second artificial intelligence or machine learning trustworthiness related message from said second network entity.
Claims
1. An apparatus of a first network entity managing artificial intelligence or machine learning trustworthiness in a network, the apparatus comprising transmitting circuitry configured to transmit a first artificial intelligence or machine learning trustworthiness related message towards a second network entity managing artificial intelligence or machine learning trustworthiness in an artificial intelligence or machine learning pipeline in said network, and receiving circuitry configured to receive a second artificial intelligence or machine learning trustworthiness related message from said second network entity, wherein said first artificial intelligence or machine learning trustworthiness related message is related to artificial intelligence or machine learning model explainability as a trustworthiness factor out of trustworthiness factors including at least artificial intelligence or machine learning model fairness, artificial intelligence or machine learning model explainability, and artificial intelligence or machine learning model robustness, said second artificial intelligence or machine learning trustworthiness related message is related to artificial intelligence or machine learning model explainability as said trustworthiness factor, and said first artificial intelligence or machine learning trustworthiness related message comprises a first information element including at least one first artificial intelligence or machine learning model explainability related parameter.
2. The apparatus according to claim 1, further comprising translating circuitry configured to translate an acquired artificial intelligence or machine learning quality of trustworthiness into requirements related to artificial intelligence or machine learning model explainability as said trustworthiness factor, and identifying circuitry configured to identify said second network entity based on said acquired artificial intelligence or machine learning quality of trustworthiness, wherein said first artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability capability information request, and said second artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability capability information response, and said second artificial intelligence or machine learning trustworthiness related message comprises a second information element including at least one second artificial intelligence or machine learning model explainability related parameter.
3. The apparatus according to claim 2, wherein said at least one first artificial intelligence or machine learning model explainability related parameter includes a list indicative of a cognitive network function scope, and said at least one second artificial intelligence or machine learning model explainability related parameter includes at least one of a list indicative of supported artificial intelligence or machine learning model explanation methods, a list indicative of supported artificial intelligence or machine learning model explainability metrics, and a list indicative of supported artificial intelligence or machine learning model explanation aggregation period lengths.
4. The apparatus according to claim 1, further comprising determining circuitry configured to determine, based on acquired capability information with respect to artificial intelligence or machine learning model explainability as said trustworthiness factor, whether requirements related to artificial intelligence or machine learning model explainability as said trustworthiness factor can be satisfied, wherein said first artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability configuration request, and said second artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability configuration response.
5. The apparatus according to claim 4, wherein said at least one first artificial intelligence or machine learning model explainability related parameter includes at least one of a list indicative of cognitive network function instances within a cognitive network function scope of an artificial intelligence or machine learning model explanation collection job, state information indicative of activation or inactivation of said artificial intelligence or machine learning model explanation collection job, start time information indicative of when said artificial intelligence or machine learning model explanation collection job is started, stop time information indicative of when said artificial intelligence or machine learning model explanation collection job is stopped, aggregation period information indicative of an artificial intelligence or machine learning model explanation aggregation period length of said artificial intelligence or machine learning model explanation collection job, keeping time information indicative of for how long artificial intelligence or machine learning model explanations resulting from said artificial intelligence or machine learning model explanation collection job are to be stored, method information indicative of an artificial intelligence or machine learning model explanation method to be used for said artificial intelligence or machine learning model explanation collection job, and filter information indicative of at least one type of artificial intelligence or machine learning model explanations to be collected by said artificial intelligence or machine learning model explanation collection job.
6. The apparatus according to claim 1, further comprising determining circuitry configured to determine said second network entity based on an acquired trustworthiness information demand with respect to artificial intelligence or machine learning model explainability as said trustworthiness factor, wherein said first artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability query request, and said second artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability query response, and said second artificial intelligence or machine learning trustworthiness related message comprises a second information element including at least one second artificial intelligence or machine learning model explainability related parameter.
7. The apparatus according to claim 6, wherein said at least one first artificial intelligence or machine learning model explainability related parameter includes at least one of a list indicative of cognitive network function instances within a cognitive network function scope of an artificial intelligence or machine learning model explanation query, start time information indicative of a beginning of a timeframe for which artificial intelligence or machine learning model explanations are queried with said artificial intelligence or machine learning model explanation query, and stop time information indicative of an end of said timeframe for which artificial intelligence or machine learning model explanations are queried with said artificial intelligence or machine learning model explanation query, and said at least one second artificial intelligence or machine learning model explainability related parameter includes at least one of time information indicative of when key performance indicators considered for an artificial intelligence or machine learning model explanation were reported, cognitive network function information indicative of at least one cognitive network function from which said key performance indicators considered for said artificial intelligence or machine learning model explanation were reported, and a list indicative of a plurality of decision classifications and a number of decisions per decision classification.
8. The apparatus according to claim 1, further comprising determining circuitry configured to determine said second network entity based on an acquired trustworthiness information demand with respect to artificial intelligence or machine learning model explainability as said trustworthiness factor, wherein said first artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability subscription, and said second artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability notification, and said second artificial intelligence or machine learning trustworthiness related message comprises a second information element including at least one second artificial intelligence or machine learning model explainability related parameter.
9. The apparatus according to claim 8, wherein said at least one first artificial intelligence or machine learning model explainability related parameter includes at least one of a list indicative of cognitive network function instances within a cognitive network function scope of an artificial intelligence or machine learning model explanation query, and filter information indicative of filter criteria for a subscription with respect to said artificial intelligence or machine learning model explanation query, and said at least one second artificial intelligence or machine learning model explainability related parameter includes at least one of time information indicative of when key performance indicators considered for an artificial intelligence or machine learning model explanation were reported, cognitive network function information indicative of at least one cognitive network function from which said key performance indicators considered for said artificial intelligence or machine learning model explanation were reported, and a list indicative of a plurality of decision classifications and a number of decisions per decision classification.
10. An apparatus of a second network entity managing artificial intelligence or machine learning trustworthiness in an artificial intelligence or machine learning pipeline in a network, the apparatus comprising receiving circuitry configured to receive a first artificial intelligence or machine learning trustworthiness related message from a first network entity managing artificial intelligence or machine learning trustworthiness in said network, and transmitting circuitry configured to transmit a second artificial intelligence or machine learning trustworthiness related message towards said first network entity, wherein said first artificial intelligence or machine learning trustworthiness related message is related to artificial intelligence or machine learning model explainability as a trustworthiness factor out of trustworthiness factors including at least artificial intelligence or machine learning model fairness, artificial intelligence or machine learning model explainability, and artificial intelligence or machine learning model robustness, said second artificial intelligence or machine learning trustworthiness related message is related to artificial intelligence or machine learning model explainability as said trustworthiness factor, and said first artificial intelligence or machine learning trustworthiness related message comprises a first information element including at least one first artificial intelligence or machine learning model explainability related parameter.
11. The apparatus according to claim 10, wherein said first artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability capability information request, and said second artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability capability information response, and said second artificial intelligence or machine learning trustworthiness related message comprises a second information element including at least one second artificial intelligence or machine learning model explainability related parameter.
12. The apparatus according to claim 11, wherein said at least one first artificial intelligence or machine learning model explainability related parameter includes a list indicative of a cognitive network function scope, and said at least one second artificial intelligence or machine learning model explainability related parameter includes at least one of a list indicative of supported artificial intelligence or machine learning model explanation methods, a list indicative of supported artificial intelligence or machine learning model explainability metrics, and a list indicative of supported artificial intelligence or machine learning model explanation aggregation period lengths.
13. The apparatus according to claim 10, wherein said first artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability configuration request, and said second artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability configuration response.
14. The apparatus according to claim 13, wherein said at least one first artificial intelligence or machine learning model explainability related parameter includes at least one of a list indicative of cognitive network function instances within a cognitive network function scope of an artificial intelligence or machine learning model explanation collection job, state information indicative of activation or inactivation of said artificial intelligence or machine learning model explanation collection job, start time information indicative of when said artificial intelligence or machine learning model explanation collection job is started, stop time information indicative of when said artificial intelligence or machine learning model explanation collection job is stopped, aggregation period information indicative of an artificial intelligence or machine learning model explanation aggregation period length of said artificial intelligence or machine learning model explanation collection job, keeping time information indicative of for how long artificial intelligence or machine learning model explanations resulting from said artificial intelligence or machine learning model explanation collection job are to be stored, method information indicative of an artificial intelligence or machine learning model explanation method to be used for said artificial intelligence or machine learning model explanation collection job, and filter information indicative of at least one type of artificial intelligence or machine learning model explanations to be collected by said artificial intelligence or machine learning model explanation collection job.
15. The apparatus according to claim 10, wherein said first artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability query request, and said second artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability query response, and said second artificial intelligence or machine learning trustworthiness related message comprises a second information element including at least one second artificial intelligence or machine learning model explainability related parameter.
16. The apparatus according to claim 15, wherein said at least one first artificial intelligence or machine learning model explainability related parameter includes at least one of a list indicative of cognitive network function instances within a cognitive network function scope of an artificial intelligence or machine learning model explanation query, start time information indicative of a beginning of a timeframe for which artificial intelligence or machine learning model explanations are queried with said artificial intelligence or machine learning model explanation query, and stop time information indicative of an end of said timeframe for which artificial intelligence or machine learning model explanations are queried with said artificial intelligence or machine learning model explanation query, and said at least one second artificial intelligence or machine learning model explainability related parameter includes at least one of time information indicative of when key performance indicators considered for an artificial intelligence or machine learning model explanation were reported, cognitive network function information indicative of at least one cognitive network function from which said key performance indicators considered for said artificial intelligence or machine learning model explanation were reported, and a list indicative of a plurality of decision classifications and a number of decisions per decision classification.
17. The apparatus according to claim 10, wherein said first artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability subscription, and said second artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability notification, and said second artificial intelligence or machine learning trustworthiness related message comprises a second information element including at least one second artificial intelligence or machine learning model explainability related parameter.
18. The apparatus according to claim 17, wherein said at least one first artificial intelligence or machine learning model explainability related parameter includes at least one of a list indicative of cognitive network function instances within a cognitive network function scope of an artificial intelligence or machine learning model explanation query, and filter information indicative of filter criteria for a subscription with respect to said artificial intelligence or machine learning model explanation query, and said at least one second artificial intelligence or machine learning model explainability related parameter includes at least one of time information indicative of when key performance indicators considered for an artificial intelligence or machine learning model explanation were reported, cognitive network function information indicative of at least one cognitive network function from which said key performance indicators considered for said artificial intelligence or machine learning model explanation were reported, and a list indicative of a plurality of decision classifications and a number of decisions per decision classification.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0054] In the following, the present disclosure will be described in greater detail by way of non-limiting examples with reference to the accompanying drawings, in which
DETAILED DESCRIPTION
[0068] The present disclosure is described herein with reference to particular non-limiting examples and to what are presently considered to be conceivable embodiments. A person skilled in the art will appreciate that the disclosure is by no means limited to these examples, and may be more broadly applied.
[0069] It is to be noted that the following description of the present disclosure and its embodiments mainly refers to 3GPP specifications, which are used as non-limiting examples of certain exemplary network configurations and deployments. As such, the description of example embodiments given herein specifically refers to terminology which is directly related thereto. Such terminology is only used in the context of the presented non-limiting examples and naturally does not limit the disclosure in any way. Rather, any other communication or communication related system deployment may also be utilized as long as it complies with the features described herein.
[0070] Hereinafter, various embodiments and implementations of the present disclosure and its aspects or embodiments are described using several variants and/or alternatives. It is generally noted that, according to certain needs and constraints, all of the described variants and/or alternatives may be provided alone or in any conceivable combination (also including combinations of individual features of the various variants and/or alternatives).
[0071] According to example embodiments, in general terms, there are provided measures and mechanisms for (enabling/realizing) trust related management of artificial intelligence or machine learning pipelines in relation to the trustworthiness factor explainability, and in particular measures and mechanisms for (enabling/realizing) explaining ML decisions in trustworthy AI frameworks.
[0072] A framework for trustworthy artificial intelligence (TAI) in cognitive autonomous networks (CANs) underlies the example embodiments.
[0073]
[0074] Such a TAI framework (TAIF) for CANs may be provided to facilitate the definition, configuration, monitoring and measuring of AI/ML model trustworthiness (i.e., fairness, explainability and robustness) for interoperable and multi-vendor environments. A service definition or the business/customer intent may include AI/ML trustworthiness requirements in addition to quality of service (QoS) requirements, and the TAIF is used to configure the requested AI/ML trustworthiness and to monitor and assure its fulfilment. The TAIF introduces two management functions, namely, a function entity named AI Trust Engine (one per management domain) and a function entity named AI Trust Manager (one per AI/ML pipeline). The TAIF further introduces six interfaces (named T1 to T6) that support interactions in the TAIF. According to the TAIF underlying example embodiments, the AI Trust Engine is the central entity for managing all AI trustworthiness related aspects in the network, whereas the AI Trust Managers are use case and often vendor specific, with knowledge of the AI use case and how it is implemented.
[0075] Furthermore, the TAIF underlying example embodiments introduces a concept of AI quality of trustworthiness (AI QoT) (as seen over the T1 interface in
[0076]
[0077] Once the Policy Manager receives an intent from a customer, it is translated into an AI QoT intent/class identifier and sent to the AI Trust Engine over the T1 interface. The AI Trust Engine translates the AI QoT intent/class identifier into AI trustworthiness (i.e., fairness, robustness, and explainability) requirements and sends them to the AI Trust Manager of the AI pipeline over the T2 interface. The AI Trust Manager may configure, monitor, and measure AI trustworthiness requirements (i.e., trust mechanisms and trust metrics) for an AI Data Source Manager, an AI Training Manager and an AI Inference Manager (of a respective AI pipeline) over T3, T4 and T5 interfaces, respectively. The measured or collected trustworthy metrics/artifacts/explanations from the AI Data Source Manager, AI Training Manager and AI Inference Manager regarding the AI pipeline may be pushed to the AI Trust Manager over T3, T4 and T5 interfaces, respectively. The AI Trust Manager may push, over the T2 interface, all trustworthy metrics/artifacts/explanations of the AI pipeline to the AI Trust Engine, which may store the information in a trust knowledge database. Finally, the network operator can request and receive the trustworthy metrics/explanations/artifacts of an AI pipeline from the AI Trust Engine over the T6 interface. Based on the information retrieved, the network operator may decide to update the policy via the Policy Manager.
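The translation step performed by the AI Trust Engine can be sketched as follows. This is a minimal illustration only: the AI QoT class identifiers and the per-factor requirement values below are assumptions made for the sketch, not values defined by the disclosure or any 3GPP specification.

```python
# Hypothetical sketch of the AI Trust Engine's T1-to-T2 translation step:
# an AI QoT intent/class identifier is mapped to per-factor trustworthiness
# requirements (fairness, explainability, robustness). All class names and
# requirement values are illustrative assumptions.

QOT_CLASSES = {
    "qot-high": {
        "fairness": "demographic-parity",
        "explainability": "per-decision",
        "robustness": "adversarially-tested",
    },
    "qot-basic": {
        "fairness": "none",
        "explainability": "aggregate-only",
        "robustness": "baseline",
    },
}

def translate_qot(class_id: str) -> dict:
    """Translate an AI QoT class identifier into trustworthiness
    requirements to be sent to an AI Trust Manager over T2."""
    if class_id not in QOT_CLASSES:
        raise ValueError(f"unknown AI QoT class: {class_id}")
    return QOT_CLASSES[class_id]
```

In a real deployment the mapping would be operator-defined policy rather than a hard-coded table; the table form merely makes the translation step concrete.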
[0078] Here, in the TAIF underlying example embodiments, the operator needs methods for configuring the explainability requirements of ML-based network automation functions and for collecting and querying explanations as needed. This can be done by providing the QoT definitions via policies (Interface T1) or directly (Interface T6) to the AI Trust Engine. The AI Trust Engine will translate the requirement and determine the affected network automation functions (NAF) and corresponding AI pipelines and their respective AI Trust Managers.
[0079] In the TAIF underlying example embodiments, the AI Trust Manager is the use case and vendor specific manager, which knows the AI explainability capabilities of the NAF and how to configure it and collect the explanations. Since the AI Trust Manager is a vendor-specific management function, a network may contain AI Trust Managers from several different vendors.
[0080] Therefore, the operations and notifications potentially required on the T2 interface to effect and/or facilitate and/or prepare such configuration and reporting need to be specified and provided. In particular, the AI Trust Managers need to provide an interface for the explainability functionality so that the AI Trust Engine is able to operate therewith.
[0081] As an example, in the case of predictive handover, it would be important to understand why handovers are executed or not executed by the algorithm, in order to understand the characteristics of the cell border in question and to improve the performance of the algorithm and the labelling.
[0082] An example of this is so-called desperate handovers in the case of coverage holes. The mobility failures in these situations are caused by coverage issues and cannot be corrected by mobility optimization. Therefore, it would be important to be able to detect and understand these situations and to exclude the failures from mobility failure counts.
[0083] Hence, in brief, according to example embodiments, AI Trust Manager (which may be considered as a second network entity managing AI/ML trustworthiness in an AI/ML pipeline in a network) application programming interfaces (APIs) for AI/ML explainability are provided that allow the AI Trust Engine (which may be considered as a first network entity managing AI/ML trustworthiness in the network), over the T2 interface, to discover the AI explainability capabilities of the use case-specific cognitive network function (CNF) or AI pipeline, and to configure the required AI explainability methods and/or the collection of AI/ML explanations.
[0084] In particular, according to example embodiments, the following AI Trust Manager APIs for AI/ML explainability are provided.
[0085] 1. TAI Explainability Capability Discovery API (Request/Response)—It allows the AI Trust Engine, via the T2 interface, to discover supported AI explainability methods.
[0086] 2. TAI Explainability Configuration API (Request/Response)—It allows the AI Trust Engine, via the T2 interface, to configure the appropriate AI explainability method(s) to be used and how the explanations are to be collected and stored.
[0087] 3. TAI Explainability Query API (Request/Response and Subscribe/Notify)—It allows the AI Trust Engine, via the T2 interface, to query/request AI decision explanations from the AI Trust Manager.
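The three API exchanges above can be sketched as simple request constructors. The message and field names used here are assumptions chosen for illustration; the disclosure does not fix a concrete encoding.

```python
# Illustrative constructors for the three T2 API requests listed above.
# API and field names are assumptions made for this sketch.

def capability_discovery_request(cnf_scope: list) -> dict:
    """TAI Explainability Capability Discovery request: ask an AI Trust
    Manager which explanation methods it supports for a CNF scope."""
    return {
        "api": "TAIExplainabilityCapabilityDiscovery",
        "cnfScope": list(cnf_scope),
    }

def configuration_request(cnf_instances: list, method: str,
                          aggregation_period_s: int) -> dict:
    """TAI Explainability Configuration request: select the explanation
    method to use and how explanations are collected and stored."""
    return {
        "api": "TAIExplainabilityConfiguration",
        "cnfInstances": list(cnf_instances),
        "explanationMethod": method,
        "aggregationPeriodSeconds": aggregation_period_s,
    }

def query_request(cnf_instances: list, start: str, stop: str) -> dict:
    """TAI Explainability Query request: fetch AI decision explanations
    for a timeframe from the AI Trust Manager."""
    return {
        "api": "TAIExplainabilityQuery",
        "cnfInstances": list(cnf_instances),
        "startTime": start,
        "stopTime": stop,
    }
```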
[0088] Example embodiments are specified below in more detail.
[0089]
[0090] As shown in
[0091]
[0092] In an embodiment at least some of the functionalities of the apparatus shown in
[0093] According to a variation of the procedure shown in
[0094] According to further example embodiments, said at least one first artificial intelligence or machine learning model explainability related parameter includes a list indicative of a cognitive network function scope, and said at least one second artificial intelligence or machine learning model explainability related parameter includes at least one of a list indicative of supported artificial intelligence or machine learning model explanation methods, a list indicative of supported artificial intelligence or machine learning model explainability metrics, and a list indicative of supported artificial intelligence or machine learning model explanation aggregation period lengths.
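The capability-response information element described above, and the engine-side determination of whether the translated explainability requirements can be satisfied, can be sketched as follows. The field names and example method names are illustrative assumptions.

```python
from dataclasses import dataclass

# Sketch of the capability-information response parameters listed above
# and a minimal satisfiability check on the AI Trust Engine side.
# Field names and method names are assumptions for this sketch.

@dataclass
class ExplainabilityCapabilities:
    supported_methods: list               # e.g. ["SHAP", "LIME"] (assumed)
    supported_metrics: list               # supported explainability metrics
    supported_aggregation_periods: list   # aggregation period lengths, seconds

def can_satisfy(caps: ExplainabilityCapabilities,
                required_method: str, required_period_s: int) -> bool:
    """Decide, from discovered capabilities, whether the explainability
    requirements derived from the AI QoT can be met by this pipeline."""
    return (required_method in caps.supported_methods
            and required_period_s in caps.supported_aggregation_periods)
```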
[0095] According to a variation of the procedure shown in
[0096] According to further example embodiments, said at least one first artificial intelligence or machine learning model explainability related parameter includes at least one of a list indicative of cognitive network function instances within a cognitive network function scope of an artificial intelligence or machine learning model explanation collection job, state information indicative of activation or inactivation of said artificial intelligence or machine learning model explanation collection job, start time information indicative of when said artificial intelligence or machine learning model explanation collection job is started, stop time information indicative of when said artificial intelligence or machine learning model explanation collection job is stopped, aggregation period information indicative of an artificial intelligence or machine learning model explanation aggregation period length of said artificial intelligence or machine learning model explanation collection job, keeping time information indicative of for how long artificial intelligence or machine learning model explanations resulting from said artificial intelligence or machine learning model explanation collection job are to be stored, method information indicative of an artificial intelligence or machine learning model explanation method to be used for said artificial intelligence or machine learning model explanation collection job, and filter information indicative of at least one type of artificial intelligence or machine learning model explanations to be collected by said artificial intelligence or machine learning model explanation collection job.
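The explanation-collection-job parameters enumerated above can be bundled into a single configuration-request information element, sketched below. The field names, time format, and the validation rule are assumptions made for the sketch.

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of a trustworthiness explainability configuration request body
# mirroring the collection-job parameters listed above. Field names and
# the ISO 8601 time convention are illustrative assumptions.

@dataclass
class ExplanationCollectionJob:
    cnf_instances: list        # CNF instances within the job's scope
    active: bool               # activation / inactivation state
    start_time: str            # when collection starts (ISO 8601 assumed)
    stop_time: str             # when collection stops
    aggregation_period_s: int  # explanation aggregation period length
    keeping_time_days: int     # how long resulting explanations are stored
    method: str                # explanation method to be used
    explanation_filter: Optional[str] = None  # type(s) of explanations to collect

def validate_job(job: ExplanationCollectionJob) -> bool:
    """Minimal sanity check an AI Trust Manager might apply before
    answering with a configuration response."""
    return (bool(job.cnf_instances)
            and job.aggregation_period_s > 0
            and job.keeping_time_days > 0)
```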
[0097] According to a variation of the procedure shown in
[0098] According to further example embodiments, said at least one first artificial intelligence or machine learning model explainability related parameter includes at least one of a list indicative of cognitive network function instances within a cognitive network function scope of an artificial intelligence or machine learning model explanation query, start time information indicative of a beginning of a timeframe for which artificial intelligence or machine learning model explanations are queried with said artificial intelligence or machine learning model explanation query, and stop time information indicative of an end of said timeframe for which artificial intelligence or machine learning model explanations are queried with said artificial intelligence or machine learning model explanation query, and said at least one second artificial intelligence or machine learning model explainability related parameter includes at least one of time information indicative of when key performance indicators considered for an artificial intelligence or machine learning model explanation were reported, cognitive network function information indicative of at least one cognitive network function from which said key performance indicators considered for said artificial intelligence or machine learning model explanation were reported, and a list indicative of a plurality of decision classifications and a number of decisions per decision classification.
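Serving such a query on the AI Trust Manager side can be sketched as a timeframe-and-scope filter over stored explanation records. The record layout and field names are assumptions; the sketch relies on ISO 8601 timestamps comparing lexicographically.

```python
# Sketch of answering a trustworthiness explainability query request:
# stored explanation records are filtered by CNF scope and by the
# queried timeframe. Record layout and field names are assumptions.

def serve_query(records: list, cnf_instances: list,
                start_time: str, stop_time: str) -> list:
    """Return explanation records whose KPI report time lies within
    [start_time, stop_time] and whose CNF is within the queried scope
    (ISO 8601 strings compare correctly as plain strings)."""
    return [
        r for r in records
        if r["cnf"] in cnf_instances
        and start_time <= r["reportedAt"] <= stop_time
    ]
```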
[0099] According to a variation of the procedure shown in
[0100] According to further example embodiments, said at least one first artificial intelligence or machine learning model explainability related parameter includes at least one of a list indicative of cognitive network function instances within a cognitive network function scope of an artificial intelligence or machine learning model explanation query, and filter information indicative of filter criteria for a subscription with respect to said artificial intelligence or machine learning model explanation query, and said at least one second artificial intelligence or machine learning model explainability related parameter includes at least one of time information indicative of when key performance indicators considered for an artificial intelligence or machine learning model explanation were reported, cognitive network function information indicative of at least one cognitive network function from which said key performance indicators considered for said artificial intelligence or machine learning model explanation were reported, and a list indicative of a plurality of decision classifications and a number of decisions per decision classification.
[0101]
[0102] As shown in
[0103] In an embodiment at least some of the functionalities of the apparatus shown in
[0104] According to further example embodiments, said first artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability capability information request, and said second artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability capability information response, and said second artificial intelligence or machine learning trustworthiness related message comprises a second information element including at least one second artificial intelligence or machine learning model explainability related parameter.
[0105] According to further example embodiments, said at least one first artificial intelligence or machine learning model explainability related parameter includes a list indicative of a cognitive network function scope, and said at least one second artificial intelligence or machine learning model explainability related parameter includes at least one of a list indicative of supported artificial intelligence or machine learning model explanation methods, a list indicative of supported artificial intelligence or machine learning model explainability metrics, and a list indicative of supported artificial intelligence or machine learning model explanation aggregation period lengths.
[0106] According to further example embodiments, said first artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability configuration request, and said second artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability configuration response.
[0107] According to further example embodiments, said at least one first artificial intelligence or machine learning model explainability related parameter includes at least one of a list indicative of cognitive network function instances within a cognitive network function scope of an artificial intelligence or machine learning model explanation collection job, state information indicative of activation or inactivation of said artificial intelligence or machine learning model explanation collection job, start time information indicative of when said artificial intelligence or machine learning model explanation collection job is started, stop time information indicative of when said artificial intelligence or machine learning model explanation collection job is stopped, aggregation period information indicative of an artificial intelligence or machine learning model explanation aggregation period length of said artificial intelligence or machine learning model explanation collection job, keeping time information indicative of for how long artificial intelligence or machine learning model explanations resulting from said artificial intelligence or machine learning model explanation collection job are to be stored, method information indicative of an artificial intelligence or machine learning model explanation method to be used for said artificial intelligence or machine learning model explanation collection job, and filter information indicative of at least one type of artificial intelligence or machine learning model explanations to be collected by said artificial intelligence or machine learning model explanation collection job.
[0108] According to further example embodiments, said first artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability query request, and said second artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability query response, and said second artificial intelligence or machine learning trustworthiness related message comprises a second information element including at least one second artificial intelligence or machine learning model explainability related parameter.
[0109] According to further example embodiments, said at least one first artificial intelligence or machine learning model explainability related parameter includes at least one of a list indicative of cognitive network function instances within a cognitive network function scope of an artificial intelligence or machine learning model explanation query, start time information indicative of a begin of a timeframe for which artificial intelligence or machine learning model explanations are queried with said artificial intelligence or machine learning model explanation query, and stop time information indicative of an end of said timeframe for which artificial intelligence or machine learning model explanations are queried with said artificial intelligence or machine learning model explanation query, and said at least one second artificial intelligence or machine learning model explainability related parameter includes at least one of time information indicative of when key performance indicators considered for an artificial intelligence or machine learning model explanation were reported, cognitive network function information indicative of at least one cognitive network function from which said key performance indicators considered for said artificial intelligence or machine learning model explanation were reported, and a list indicative of a plurality of decision classifications and a number of decisions per decision classification.
[0110] According to further example embodiments, said first artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability subscription, and said second artificial intelligence or machine learning trustworthiness related message is a trustworthiness explainability notification, and said second artificial intelligence or machine learning trustworthiness related message comprises a second information element including at least one second artificial intelligence or machine learning model explainability related parameter.
[0111] According to further example embodiments, said at least one first artificial intelligence or machine learning model explainability related parameter includes at least one of a list indicative of cognitive network function instances within a cognitive network function scope of an artificial intelligence or machine learning model explanation query, and filter information indicative of filter criteria for a subscription with respect to said artificial intelligence or machine learning model explanation query, and said at least one second artificial intelligence or machine learning model explainability related parameter includes at least one of time information indicative of when key performance indicators considered for an artificial intelligence or machine learning model explanation were reported, cognitive network function information indicative of at least one cognitive network function from which said key performance indicators considered for said artificial intelligence or machine learning model explanation were reported, and a list indicative of a plurality of decision classifications and a number of decisions per decision classification.
[0112] Example embodiments outlined and specified above are explained below in more specific terms.
[0113] In particular, example embodiments outlined and specified above are explained below in terms specifically related to, as an example, the TED method used to explain the decisions of an ML-based predictive handover function. However, it is noted that example embodiments are neither limited to ML-based predictive handover (being one example use case for AI/ML model application) nor to the TED method (being one example for an AI explainability method).
[0114]
[0115] Specifically,
[0116] According to example embodiments, the two IEs are implemented as shown in the tables below.
[0117] In particular, the AI Explainability Capability Information Request IE may be implemented as follows.
TABLE-US-00001
Parameter | Type | Description
CNF Scope | List | Which CNF instances the capability is requested for
[0118] On the other hand, the AI Explainability Capability Information Response IE may be implemented as follows.
TABLE-US-00002
Parameter | Type | Description
Supported Methods | List of Strings | Which AI explainability methods are supported
Supported Explanation Metrics | List | Which explanation metrics are supported, e.g. faithfulness for self-explaining neural networks or monotonicity for contrastive explanations
Aggregation Granularity | Timestamp | List of supported aggregation period lengths for global explanations
Additional Information | Freetext | Freetext description of the capabilities
[0119] A specific example of an Explainability Capability Information Response IE for predictive handover is shown in the table below. In this example, the predictive handover function supports only TED as a method for providing explanations. These are aggregated and made available at a granularity of one minute.
TABLE-US-00003
Parameter | Value
Supported Methods | [TED]
Supported Explanation Metrics |
Aggregation Granularity | 1 minute
Additional Information | "Trained explanations for reasons to handover or to stay in the current serving cell"
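Purely as an illustration of the two IEs above, they could be modeled as simple data structures. The following Python sketch uses invented class and field names (nothing here is a standardized API), instantiating the TED-only example response:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ExplainabilityCapabilityRequest:
    """AI Explainability Capability Information Request IE (illustrative)."""
    cnf_scope: List[int]  # CNF instances the capability is requested for

@dataclass
class ExplainabilityCapabilityResponse:
    """AI Explainability Capability Information Response IE (illustrative)."""
    supported_methods: List[str]              # supported AI explainability methods
    supported_explanation_metrics: List[str]  # e.g. faithfulness, monotonicity
    aggregation_granularity_s: List[int]      # supported aggregation period lengths (seconds)
    additional_information: str = ""          # freetext description of the capabilities

# The predictive handover example: only TED is supported,
# aggregated at a granularity of one minute.
response = ExplainabilityCapabilityResponse(
    supported_methods=["TED"],
    supported_explanation_metrics=[],
    aggregation_granularity_s=[60],
    additional_information=(
        "Trained explanations for reasons to handover "
        "or to stay in the current serving cell"
    ),
)
```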
[0120]
[0121] The TAI Explainability Configuration may utilize existing management interfaces for Creating, Reading, Updating and Deleting (CRUD) TAI Explanation Collection Job Information Elements (TAI-ECJ IEs) that configure the explanation collection, as shown in
[0122] According to example embodiments, the TAI-ECJ IE is implemented as shown in the table below.
TABLE-US-00004
Parameter | Type | Description
ID | Integer | ID of the TAI-ECJ
CNF Scope | List | List of CNF instances that are within the scope of the TAI-ECJ
State | [ACTIVE/INACTIVE] |
Start Time | Timestamp | Time when the collection is started. If omitted, the collection job is continuously active until End Time.
End Time | Timestamp | Time when the collection is finished. If omitted, the collection job is active indefinitely.
Aggregation Period | Timestamp | The selected global explanation aggregation period from the ones supported as indicated by the Explainability Capability Information Response attribute Global Explanation Aggregation Periods. The granularity should be a multiple of the granularity indicated in the capabilities.
Availability Time | Timestamp | How long the explanations need to be stored, especially if created during inference
Method | Enumeration | The used explanation method, or empty if none is explicitly configured. It should be one of the supported methods indicated in the Explainability Capability Information Response.
Method Specific Filter | Condition | A method specific filter for defining which explanations are collected
[0123] A specific example of a TAI-ECJ IE for configuring TED as explainability method for predictive handover is shown in the following table. In this example, two instances of predictive handover functions (with CNF IDs 1 and 3) are configured to be included in the explanation collection job. Since no start or end time is provided, the collection is active until configured otherwise. The aggregation period in this example is set to 5 minutes, which is a multiple of the minimum possible collection period, and explanations for 5 minutes of handover decisions are aggregated into one explanation report. The reports are configured to be stored in the AI Trust Manager for at least 48 hours before the AI Trust Manager may delete them. TED is configured as the requested method of AI decision explanations, as it has been indicated as supported in the Explainability Capability Information Response. Lastly, in this example, the TAI-ECJ is configured to collect everything except the default decision to "Stay in current serving cell" because "The Serving cell remains the strongest". This is the most common and also the least interesting decision by the model.
[0124] The TAI-ECJ is created by passing the TAI-ECJ IE in the following table in a TAI Explainability Configuration Create Request to the AI Trust Manager. The AI Trust Engine may at any point read, modify or delete the currently configured TAI-ECJs, including the newly created one, with the corresponding TAI Explainability Configuration Read/Update/Delete requests.
TABLE-US-00005
Parameter | Value
ID | 1
CNF Scope | [1,3]
State | ACTIVE
Start Time |
End Time |
Aggregation Period | 5 minutes
Availability Time | 48 hours
Method | TED
Method Specific Filter | Collect all except "Stay in current serving cell" because "The Serving cell remains the strongest"
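The TAI-ECJ IE and the example configuration above can likewise be sketched as a data structure, together with the consistency checks implied by the table (the aggregation period being a multiple of the advertised granularity, and the method being among the supported ones). All names below are illustrative assumptions, not a normative implementation:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TaiEcj:
    """TAI Explanation Collection Job IE (illustrative names only)."""
    ecj_id: int
    cnf_scope: List[int]              # CNF instances within the scope of the job
    state: str                        # "ACTIVE" or "INACTIVE"
    aggregation_period_s: int         # must be a multiple of the supported granularity
    availability_time_s: int          # how long explanations need to be stored
    method: Optional[str] = None      # explanation method, or None if not configured
    method_specific_filter: str = ""  # filter defining which explanations are collected
    start_time: Optional[str] = None  # omitted: collection active immediately
    end_time: Optional[str] = None    # omitted: collection active indefinitely

def validate(job: TaiEcj, supported_methods: List[str], granularity_s: int) -> None:
    # Checks derived from the Explainability Capability Information Response.
    if job.aggregation_period_s % granularity_s != 0:
        raise ValueError("aggregation period not a multiple of supported granularity")
    if job.method is not None and job.method not in supported_methods:
        raise ValueError(f"method {job.method!r} not supported")

# The example job of Table 5: CNFs 1 and 3, 5-minute aggregation,
# 48-hour storage, TED, and the default decision filtered out.
job = TaiEcj(
    ecj_id=1, cnf_scope=[1, 3], state="ACTIVE",
    aggregation_period_s=300, availability_time_s=48 * 3600, method="TED",
    method_specific_filter=('Collect all except "Stay in current serving cell" '
                            'because "The Serving cell remains the strongest"'),
)
validate(job, supported_methods=["TED"], granularity_s=60)
```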
[0125]
[0126] More specifically,
[0127] According to example embodiments, the two Request-Response related IEs are implemented as shown in the tables below.
[0128] In particular, the TAI Explanation Query Request IE may be implemented as follows.
TABLE-US-00006
Parameter | Type | Description
TAI-ECJ ID | Integer | ID of the TAI Explanation Collection Job that is being queried
CNF Scope | List | Which CNF instances the query is requested for
Start Time | Timestamp | Start of the timeframe for which the explanations are requested
End Time | Timestamp | End of the timeframe for which the explanations are requested
[0129] On the other hand, the TAI Explanation Query Response may be implemented as a list of TAI Explanation Query Response IEs as follows.
TABLE-US-00007
Parameter | Type | Description
Time | Timestamp | When the explanation KPIs were reported
CNF ID | Integer | Reporting CNF
Counters | List of ExplanationCounter | See table below
[0130] The parameter “ExplanationCounter” utilized in the table above may be implemented as illustrated in the table below.
TABLE-US-00008
Parameter | Type | Description
Decision ID | Integer | ID of a classification
Explanation ID | Integer | ID of the explanation for the classification
Count | Integer | Count of times the classification was given for the provided explanation
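The Request-Response IEs above lend themselves to a similar sketch. The `answer_query` helper below is an invented illustration of how a stored list of response elements might be filtered by CNF scope and timeframe; ISO-8601 timestamp strings are assumed so that timestamps compare lexicographically:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ExplanationCounter:
    decision_id: int     # ID of a classification
    explanation_id: int  # ID of the explanation for the classification
    count: int           # times the classification was given with that explanation

@dataclass
class TaiExplanationQueryRequest:
    ecj_id: int                       # TAI Explanation Collection Job being queried
    cnf_scope: List[int]              # CNF instances the query is requested for
    start_time: Optional[str] = None  # None: from the beginning of stored data
    end_time: Optional[str] = None    # None: until now

@dataclass
class TaiExplanationQueryResponseElement:
    time: str                           # when the explanation KPIs were reported
    cnf_id: int                         # reporting CNF
    counters: List[ExplanationCounter]  # see ExplanationCounter above

def answer_query(store: List[TaiExplanationQueryResponseElement],
                 req: TaiExplanationQueryRequest) -> List[TaiExplanationQueryResponseElement]:
    # ISO-8601 timestamps compare correctly as plain strings.
    return [e for e in store
            if e.cnf_id in req.cnf_scope
            and (req.start_time is None or e.time >= req.start_time)
            and (req.end_time is None or e.time <= req.end_time)]

store = [
    TaiExplanationQueryResponseElement("2021-05-27T15:20", 1, []),
    TaiExplanationQueryResponseElement("2021-05-27T15:25", 1, []),
    TaiExplanationQueryResponseElement("2021-05-27T15:25", 2, []),
]
# No start or end time given: all stored elements for CNF 1 are returned.
result = answer_query(store, TaiExplanationQueryRequest(ecj_id=1, cnf_scope=[1]))
```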
[0131] According to example embodiments, the two Subscribe-Notify related IEs are implemented as shown in the tables below.
[0132] In the Subscribe-Notify method, the AI Trust Engine can subscribe, with a TAI Explanation Subscription IE, to receive explanation notifications when they are created by the AI Trust Manager. The TAI Explanation Subscription IE may be implemented as follows. Here, a CNF Scope and an additional filter can be given in the subscription.
TABLE-US-00009
Parameter | Type | Description
TAI-ECJ ID | Integer | ID of the TAI Explanation Collection Job that is being subscribed to
CNF Scope | List | Which CNF instances the subscription is requested for
Filter | Condition | Additional filter criteria for subscribed explanations
[0133] On the other hand, the TAI Explanation Notification may be implemented as a list of TAI Explanation Query Response IEs as follows (i.e., similar to a TAI Explanation Query Response).
TABLE-US-00010
Parameter | Type | Description
Time | Timestamp | When the explanation KPIs were reported
CNF ID | Integer | Reporting CNF
Counters | List of ExplanationCounter | See table below
[0134] Here, again, the parameter “ExplanationCounter” utilized in the table above may be implemented as illustrated in the table below.
TABLE-US-00011
Parameter | Type | Description
Decision ID | Integer | ID of a classification
Explanation ID | Integer | ID of the explanation for the classification
Count | Integer | Count of times the classification was given for the provided explanation
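A minimal sketch of the Subscribe-Notify interaction, under the assumption that the Filter Condition can be modeled as a predicate over (decision ID, explanation ID) tuples; the function and variable names are the author's own, not part of any specification:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class TaiExplanationSubscription:
    """TAI Explanation Subscription IE (illustrative)."""
    ecj_id: int           # TAI Explanation Collection Job being subscribed to
    cnf_scope: List[int]  # CNF instances the subscription covers
    # The Filter "Condition" parameter, modeled as a predicate over
    # (decision_id, explanation_id); this modeling is an assumption.
    condition: Callable[[int, int], bool]

_subscriptions: List[tuple] = []  # (subscription, callback) pairs

def subscribe(sub: TaiExplanationSubscription, callback: Callable) -> None:
    """Register at the AI Trust Manager for explanation notifications."""
    _subscriptions.append((sub, callback))

def notify(cnf_id: int, time: str, counters: List[Tuple[int, int, int]]) -> None:
    """Invoked when new explanations are created by the AI Trust Manager.

    counters holds (decision_id, explanation_id, count) tuples.
    """
    for sub, callback in _subscriptions:
        if cnf_id not in sub.cnf_scope:
            continue
        matching = [c for c in counters if sub.condition(c[0], c[1])]
        if matching:
            callback(time, cnf_id, matching)

received = []
# Subscribe only to actual handover decisions (decision ID other than 1).
subscribe(TaiExplanationSubscription(ecj_id=1, cnf_scope=[1],
                                     condition=lambda d, e: d != 1),
          lambda t, c, m: received.append((t, c, m)))
notify(1, "2021-05-27T15:20", [(1, 1, 1080), (2, 3, 12)])
```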
[0135] A specific example of the TAI Global Explanation Query Request for the predictive handover is given in the table below. In this query, only the explanations of one of the predictive handover functions configured in the TAI-ECJ (CNF ID 1) are queried.
TABLE-US-00012
Parameter | Value
TAI-ECJ ID | 1
CNF Scope | 1
Start Time |
End Time |
[0136] Since no start or end time is given, all explanations stored in the AI Trust Manager for CNF ID 1 are returned. An example snippet of the returned list of TAI Explanation Query Response Elements is shown in the table below. Since the aggregation period was configured to 5 minutes in the TAI-ECJ, a value is returned for every 5 minutes. Only one CNF (with ID 1) was included in the query.
TABLE-US-00013
Time | CNF ID | Counter
27 May 2021 15:20 | 1 | ExplanationCounterList_1
27 May 2021 15:25 | 1 | ExplanationCounterList_2
27 May 2021 15:30 | 1 | ExplanationCounterList_3
[0137] For each row, a list of Explanation Counter IEs is returned. An example of such a list is shown in the table below. Here, for each tuple of a decision and the explanation for why it was taken, a count is given indicating how many times this tuple occurred in the aggregation period of 5 minutes. It is noted that the available decisions and explanations are an enumeration as introduced above, i.e., example decisions 1 to 4 and example explanations 1 to 6. It is further noted that the same explanation may apply to different decisions, and the same decision may be made for different reasons/explanations. For brevity, the complete list of Explanation Counters is not presented, and missing values may be assumed to have a count of zero.
TABLE-US-00014
Decision ID | Explanation ID | Count
1 (Stay in current serving cell) | 1 (Serving cell remains the best) | 1080
1 (Stay in current serving cell) | 2 (Serving cell best, otherwise pingpong) | 28
2 (Handover to neighbor 1) | 3 (Handover to strongest candidate, gentle pathloss) | 12
2 (Handover to neighbor 1) | 4 (Handover to strongest candidate, abrupt pathloss) | 0
2 (Handover to neighbor 1) | 5 (Handover to not strongest, otherwise short stay) | 0
3 (Handover to neighbor 2) | 3 (Handover to strongest candidate, gentle pathloss) | 38
3 (Handover to neighbor 2) | 4 (Handover to strongest candidate, abrupt pathloss) | 12
3 (Handover to neighbor 2) | 6 (Handover to the least worst) | 6
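The Explanation Counter values in a table such as the one above can be produced by a straightforward aggregation of the raw (decision, explanation) tuples logged during one aggregation period; a minimal sketch with invented sample data:

```python
from collections import Counter

# Raw (decision_id, explanation_id) tuples logged by the CNF during
# one 5-minute aggregation period (invented sample data, not from the table).
events = [(1, 1)] * 3 + [(2, 3)] * 2 + [(3, 6)]

counts = Counter(events)
explanation_counters = [
    {"decision_id": d, "explanation_id": e, "count": n}
    for (d, e), n in sorted(counts.items())
]
# Tuples that never occurred are simply absent and, as noted above,
# may be assumed to have a count of zero.
```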
[0138] Consequently, according to example embodiments, the TAI Framework is advantageously enabled to configure how explanations need to be provided for decisions made by AI pipelines and CNFs and to query those explanations.
[0139] In case of the specific predictive handover example, the operator or the vendor of the predictive handover function may use the provided explanations to understand how the predictive handover function behaves for a given cell boundary, and why, which can be used to further optimize the performance. For example, the operator may discover that a large part of Radio Link Failures (RLF) during a handover occur because of coverage issues where there was no good target candidate to handover to (a so-called desperate handover), corresponding to explanation ID 6 in the specific example above. Such a problem cannot be solved by optimizing the mobility behavior; it instead requires re-planning and optimization of the network coverage.
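As an illustration of such an analysis, the share of desperate handovers (explanation ID 6) among all handover decisions can be computed directly from the counter rows of the example above; this small calculation is the author's illustration, not part of the described framework:

```python
# (decision_id, explanation_id, count) rows from the example table above.
rows = [
    (1, 1, 1080), (1, 2, 28),
    (2, 3, 12), (2, 4, 0), (2, 5, 0),
    (3, 3, 38), (3, 4, 12), (3, 6, 6),
]

# Decisions other than 1 ("Stay in current serving cell") are handovers.
handovers = sum(n for d, _, n in rows if d != 1)
# Explanation 6 is "Handover to the least worst", i.e. a desperate handover.
desperate = sum(n for _, e, n in rows if e == 6)
share = desperate / handovers
# 6 of 68 handovers were desperate, hinting at a coverage
# rather than a mobility problem.
```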
[0140] The above-described procedures and functions may be implemented by respective functional elements, processors, or the like, as described below.
[0141] In the foregoing exemplary description of the network entity, only the units that are relevant for understanding the principles of the disclosure have been described using functional blocks. The network entity may comprise further units that are necessary for its respective operation. However, a description of these units is omitted in this specification. The arrangement of the functional blocks of the devices is not to be construed as limiting the disclosure, and the functions may be performed by one block or further split into sub-blocks.
[0142] When in the foregoing description it is stated that the apparatus, i.e. network entity (or some other means) is configured to perform some function, this is to be construed to be equivalent to a description stating that a (i.e. at least one) processor or corresponding circuitry, potentially in cooperation with computer program code stored in the memory of the respective apparatus, is configured to cause the apparatus to perform at least the thus mentioned function. Also, such function is to be construed to be equivalently implementable by specifically configured circuitry or means for performing the respective function (i.e. the expression “unit configured to” is construed to be equivalent to an expression such as “means for”).
[0143] In
[0144] The processor 131/135 and/or the interface 133/137 may also include a modem or the like to facilitate communication over a (hardwire or wireless) link, respectively. The interface 133/137 may include a suitable transceiver coupled to one or more antennas or communication means for (hardwire or wireless) communications with the linked or connected device(s), respectively. The interface 133/137 is generally configured to communicate with at least one other apparatus, i.e. the interface thereof.
[0145] The memory 132/136 may store respective programs assumed to include program instructions or computer program code that, when executed by the respective processor, enables the respective electronic device or apparatus to operate in accordance with the example embodiments.
[0146] In general terms, the respective devices/apparatuses (and/or parts thereof) may represent means for performing respective operations and/or exhibiting respective functionalities, and/or the respective devices (and/or parts thereof) may have functions for performing respective operations and/or exhibiting respective functionalities.
[0147] When in the subsequent description it is stated that the processor (or some other means) is configured to perform some function, this is to be construed to be equivalent to a description stating that at least one processor, potentially in cooperation with computer program code stored in the memory of the respective apparatus, is configured to cause the apparatus to perform at least the thus mentioned function. Also, such function is to be construed to be equivalently implementable by specifically configured means for performing the respective function (i.e. the expression “processor configured to [cause the apparatus to] perform xxx-ing” is construed to be equivalent to an expression such as “means for xxx-ing”).
[0148] According to example embodiments, an apparatus representing the first network entity 10 (e.g. managing artificial intelligence or machine learning trustworthiness in a network) comprises at least one processor 131, at least one memory 132 including computer program code, and at least one interface 133 configured for communication with at least another apparatus. The processor (i.e. the at least one processor 131, with the at least one memory 132 and the computer program code) is configured to perform transmitting a first artificial intelligence or machine learning trustworthiness related message towards a second network entity managing artificial intelligence or machine learning trustworthiness in an artificial intelligence or machine learning pipeline in said network (thus the apparatus comprising corresponding means for transmitting), and to perform receiving a second artificial intelligence or machine learning trustworthiness related message from said second network entity, wherein said first artificial intelligence or machine learning trustworthiness related message is related to artificial intelligence or machine learning model explainability as a trustworthiness factor out of trustworthiness factors including at least artificial intelligence or machine learning model fairness, artificial intelligence or machine learning model explainability, and artificial intelligence or machine learning model robustness, said second artificial intelligence or machine learning trustworthiness related message is related to artificial intelligence or machine learning model explainability as said trustworthiness factor, and said first artificial intelligence or machine learning trustworthiness related message comprises a first information element including at least one first artificial intelligence or machine learning model explainability related parameter (thus the apparatus comprising corresponding means for receiving).
[0149] According to example embodiments, an apparatus representing the second network entity 10 (e.g. managing artificial intelligence or machine learning trustworthiness in an artificial intelligence or machine learning pipeline in a network) comprises at least one processor 135, at least one memory 136 including computer program code, and at least one interface 137 configured for communication with at least another apparatus. The processor (i.e. the at least one processor 135, with the at least one memory 136 and the computer program code) is configured to perform receiving a first artificial intelligence or machine learning trustworthiness related message from a first network entity managing artificial intelligence or machine learning trustworthiness in said network (thus the apparatus comprising corresponding means for receiving), and to perform transmitting a second artificial intelligence or machine learning trustworthiness related message towards said first network entity, wherein said first artificial intelligence or machine learning trustworthiness related message is related to artificial intelligence or machine learning model explainability as a trustworthiness factor out of trustworthiness factors including at least artificial intelligence or machine learning model fairness, artificial intelligence or machine learning model explainability, and artificial intelligence or machine learning model robustness, said second artificial intelligence or machine learning trustworthiness related message is related to artificial intelligence or machine learning model explainability as said trustworthiness factor, and said first artificial intelligence or machine learning trustworthiness related message comprises a first information element including at least one first artificial intelligence or machine learning model explainability related parameter (thus the apparatus comprising corresponding means for transmitting).
[0150] For further details regarding the operability/functionality of the individual apparatuses, reference is made to the above description in connection with any one of
[0151] For the purpose of the present disclosure as described herein above, it should be noted that
[0152] method steps likely to be implemented as software code portions and being run using a processor at a network server or network entity (as examples of devices, apparatuses and/or modules thereof, or as examples of entities including apparatuses and/or modules therefor) are software code independent and can be specified using any known or future developed programming language as long as the functionality defined by the method steps is preserved;
[0153] generally, any method step is suitable to be implemented as software or by hardware without changing the idea of the embodiments and their modification in terms of the functionality implemented;
[0154] method steps and/or devices, units or means likely to be implemented as hardware components at the above-defined apparatuses, or any module(s) thereof (e.g., devices carrying out the functions of the apparatuses according to the embodiments as described above), are hardware independent and can be implemented using any known or future developed hardware technology or any hybrids of these, such as MOS (Metal Oxide Semiconductor), CMOS (Complementary MOS), BiMOS (Bipolar MOS), BiCMOS (Bipolar CMOS), ECL (Emitter Coupled Logic), TTL (Transistor-Transistor Logic), etc., using for example ASIC (Application Specific IC (Integrated Circuit)) components, FPGA (Field-programmable Gate Array) components, CPLD (Complex Programmable Logic Device) components or DSP (Digital Signal Processor) components;
[0155] devices, units or means (e.g. the above-defined network entity or network register, or any one of their respective units/means) can be implemented as individual devices, units or means, but this does not exclude that they are implemented in a distributed fashion throughout the system, as long as the functionality of the device, unit or means is preserved;
[0156] an apparatus like the user equipment and the network entity/network register may be represented by a semiconductor chip, a chipset, or a (hardware) module comprising such chip or chipset; this, however, does not exclude the possibility that a functionality of an apparatus or module, instead of being hardware implemented, be implemented as software in a (software) module such as a computer program or a computer program product comprising executable software code portions for execution on/being run on a processor;
[0157] a device may be regarded as an apparatus or as an assembly of more than one apparatus, whether functionally in cooperation with each other or functionally independent of each other but in the same device housing, for example.
[0158] In general, it is to be noted that respective functional blocks or elements according to above-described aspects can be implemented by any known means, either in hardware and/or software, respectively, if it is only adapted to perform the described functions of the respective parts. The mentioned method steps can be realized in individual functional blocks or by individual devices, or one or more of the method steps can be realized in a single functional block or by a single device.
[0159] Generally, any method step is suitable to be implemented as software or by hardware without changing the idea of the present disclosure. Devices and means can be implemented as individual devices, but this does not exclude that they are implemented in a distributed fashion throughout the system, as long as the functionality of the device is preserved. Such and similar principles are to be considered as known to a skilled person.
[0160] Software in the sense of the present description comprises software code as such comprising code means or portions or a computer program or a computer program product for performing the respective functions, as well as software (or a computer program or a computer program product) embodied on a tangible medium such as a computer-readable (storage) medium having stored thereon a respective data structure or code means/portions or embodied in a signal or in a chip, potentially during processing thereof.
[0161] The present disclosure also covers any conceivable combination of method steps and operations described above, and any conceivable combination of nodes, apparatuses, modules or elements described above, as long as the above-described concepts of methodology and structural arrangement are applicable.
[0162] In view of the above, there are provided measures for trust related management of artificial intelligence or machine learning pipelines in relation to the trustworthiness factor explainability. Such measures exemplarily comprise, at a first network entity managing artificial intelligence or machine learning trustworthiness in a network, transmitting a first artificial intelligence or machine learning trustworthiness related message towards a second network entity managing artificial intelligence or machine learning trustworthiness in an artificial intelligence or machine learning pipeline in said network, and receiving a second artificial intelligence or machine learning trustworthiness related message from said second network entity, wherein said first artificial intelligence or machine learning trustworthiness related message is related to artificial intelligence or machine learning model explainability as a trustworthiness factor out of trustworthiness factors including at least artificial intelligence or machine learning model fairness, artificial intelligence or machine learning model explainability, and artificial intelligence or machine learning model robustness, said second artificial intelligence or machine learning trustworthiness related message is related to artificial intelligence or machine learning model explainability as said trustworthiness factor, and said first artificial intelligence or machine learning trustworthiness related message comprises a first information element including at least one first artificial intelligence or machine learning model explainability related parameter.
[0163] Even though the disclosure is described above with reference to the examples according to the accompanying drawings, it is to be understood that the disclosure is not restricted thereto. Rather, it is apparent to those skilled in the art that the present disclosure can be modified in many ways without departing from the scope of the inventive idea as disclosed herein.
List of Acronyms and Abbreviations
[0164] 3GPP Third Generation Partnership Project
[0165] AI artificial intelligence
[0166] AI QoT AI quality of trustworthiness
[0167] API application programming interface
[0168] AV autonomous vehicle
[0169] CAN cognitive autonomous network
[0170] CNF cognitive network function
[0171] CRUD Creating, Reading, Updating and Deleting
[0172] HLEG High Level Expert Group
[0173] HO handover
[0174] IE information element
[0175] IEC International Electrotechnical Commission
[0176] ISO International Organization for Standardization
[0177] MANO management and orchestration
[0178] ML machine learning
[0179] NAF network automation function
[0180] QCI QoS Class Identifier
[0181] QoE quality of experience
[0182] QoS quality of service
[0183] QoT quality of trustworthiness
[0184] RLF Radio Link Failure
[0185] RNN Recurrent Neural Network
[0186] RSRP reference signal received power
[0187] TAI trustworthy artificial intelligence
[0188] TAIF trustworthy artificial intelligence framework
[0189] TAI-ECJ TAI Explanation Collection Job
[0190] TED Teaching Explainable Decisions,
[0191] Teaching Explanations for Decisions
[0192] VNF virtual network function