COMPUTER-READABLE RECORDING MEDIUM STORING RISK ANALYSIS PROGRAM, RISK ANALYSIS METHOD, AND INFORMATION PROCESSING DEVICE OF RISK ANALYSIS
20230237573 · 2023-07-27
Inventors
- Izumi Nitta (Kawasaki, JP)
- Kyoko Ohashi (Fuchu, JP)
- Satoko Iwakura (Kawasaki, JP)
- Sachiko Onodera (Kawasaki, JP)
CPC classification
G06Q30/0201
Abstract
A non-transitory computer-readable recording medium storing a risk analysis program for an artificial intelligence (AI) system, the analysis program being a program for causing a computer to execute processing, the processing including: acquiring a plurality of pieces of relational information that include at least two attributes among an attribute of a type of an object person, an attribute of a type of processing, and an attribute of a type of data, wherein the relational information is determined on a basis of a configuration of the AI system; determining a priority of the plurality of pieces of relational information on a basis of the attribute of the type of the object person; and outputting one or a plurality of check items selected on a basis of the determined priority from among a plurality of check items associated with each attribute as a checklist for the AI system.
Claims
1. A non-transitory computer-readable recording medium storing a risk analysis program for an artificial intelligence (AI) system, the risk analysis program being a program for causing a computer to execute processing comprising: acquiring a plurality of pieces of relational information that include at least two attributes among an attribute of a type of an object person, an attribute of a type of processing, and an attribute of a type of data, wherein the relational information is determined on a basis of a configuration of the AI system; determining a priority of the plurality of pieces of relational information on a basis of the attribute of the type of the object person; and outputting one or a plurality of check items selected on a basis of the determined priority from among a plurality of check items associated with each attribute as a checklist for the AI system.
2. The non-transitory computer-readable recording medium according to claim 1, wherein the determining a priority is executed by increasing the priority of a specific object person related to each of the plurality of pieces of relational information.
3. The non-transitory computer-readable recording medium according to claim 1, wherein the determining a priority is executed by increasing the priority of specific relational information among the plurality of pieces of relational information.
4. A computer-implemented method of risk analysis for an artificial intelligence (AI) system, the method comprising: acquiring a plurality of pieces of relational information that include at least two attributes among an attribute of a type of an object person, an attribute of a type of processing, and an attribute of a type of data, wherein the relational information is determined on a basis of a configuration of the AI system; determining a priority of the plurality of pieces of relational information on a basis of the attribute of the type of the object person; and outputting one or a plurality of check items selected on a basis of the determined priority from among a plurality of check items associated with each attribute as a checklist for the AI system.
5. An information processing apparatus of risk analysis for an artificial intelligence (AI) system, the information processing apparatus comprising: a memory; and a processor coupled to the memory, the processor being configured to perform processing including: acquiring a plurality of pieces of relational information that include at least two attributes among an attribute of a type of an object person, an attribute of a type of processing, and an attribute of a type of data, wherein the relational information is determined on a basis of a configuration of the AI system; determining a priority of the plurality of pieces of relational information on a basis of the attribute of the type of the object person; and outputting one or a plurality of check items selected on a basis of the determined priority from among a plurality of check items associated with each attribute as a checklist for the AI system.
Description
BRIEF DESCRIPTION OF DRAWINGS
DESCRIPTION OF EMBODIMENTS
[0033] However, the checklists presented by the principles and guidelines do not specifically indicate which part of the AI system each item applies to, and AI system developers or providers need to materialize the check items for their AI system. This materialization work is highly difficult and requires a large number of man-hours.
[0034] Furthermore, although the risk constituent elements are organized in the risk chain models, the AI system providers and developers need to put the risk constituent elements into items to be practiced for AI system components and individual stakeholders.
[0035] In one aspect, an object is to support AI system developers and providers in appropriately recognizing and dealing with ethical risks that may arise from operation of AI systems.
[A] Embodiment
[0036] Hereinafter, an embodiment will be described with reference to the drawings. Note that the embodiment to be described below is merely an example, and there is no intention to exclude application of various modifications and techniques not explicitly described in the embodiment. For example, the present embodiment can be variously modified and carried out without departing from the spirit thereof. Furthermore, each drawing is not intended to include only the constituent elements illustrated therein and may include other functions and the like.
[0037] According to the embodiment, ethical characteristics that an AI system 100 (to be described below with reference to the drawings) should satisfy are organized as an AI ethics model.
[0038] Furthermore, the relationship between the constituent elements of the AI system 100 and the stakeholders is formed into a graph structure, and a checklist that prioritizes check items on the basis of characteristics of the graph structure is automatically generated. Therefore, important check items are analyzed preferentially, which improves the efficiency of the analysis.
[0040] The AI ethics model organizes principles, guidelines, and the like related to AI ethics, and is configured as a checklist that the AI system 100 should satisfy. Part of the AI ethics model is illustrated in the drawings.
[0041] An AI ethics risk checklist (to be described below with reference to the drawings) is generated on the basis of the AI ethics model.
[0042] In the part of the AI ethics model illustrated in the drawings, check items are organized for each AI ethical characteristic.
[0044] The AI system 100 illustrated in the drawings is an AI system for loan screening, and includes a training unit 110 and a prediction unit 120.
[0045] The AI system 100 is used by an AI service provider 10 such as an AI service vendor, a data provider 20 such as a credit bureau, a data provider 30 such as a bank, and a user 40 such as a loan applicant.
[0046] The training unit 110 includes a machine learning unit 102 (loan screening model generation unit) that trains the AI model 103 (loan screening model) by machine learning on training data 101. The training data 101 may be generated from an input of a credit score from the data provider 20 and an input of transaction data from the data provider 30.
[0047] The prediction unit 120 includes an inference unit 105 that outputs an inference result 106 (screening result) by performing inference on inference data 104 using the AI model 103. The inference data 104 may be generated from an input and an output of a credit score from the data provider 20, an input and an output of application information and transaction data from the data provider 30, and an input of applicant information from the user 40.
[0049] The AI ethics checklist associates each check item of the AI ethics model illustrated in the drawings with the components of the AI system 100 and the stakeholders.
[0050] In the part of the AI ethics checklist illustrated in the drawings, each check item is associated with an AI ethical characteristic.
[0051] Here, AI ethics risk analysis processing according to the embodiment will be described.
[0052] For the AI system 100 to be analyzed, outline sheets (to be described below with reference to the drawings) are prepared as input data.
[0053] Then, the risk analysis is performed by the user according to the following procedures (1) to (4).
[0054] (1) The constituent elements of the AI system 100, data, and relationships among stakeholders are drawn as a system diagram (to be described below with reference to the drawings).
[0055] (2) A breakdown for each interaction is described in an analysis sheet (to be described below with reference to the drawings).
[0056] (3) For each item of the AI ethics checklist, a risk assumed from a state where the corresponding interaction does not satisfy the item is extracted and described in the analysis sheet.
[0057] (4) The risks in the analysis sheet are referred to, identical content is consolidated, and the relationship between each event and its factors is described. For visualization, an analysis diagram (to be described below with reference to the drawings) may be created.
[0058] For example, the system diagram, analysis sheet, and analysis diagram are output as output data.
[0059] In the above-described risk analysis procedure (3), the AI ethics checklist contains many items, so verifying the entire checklist requires a large number of man-hours. For each item of the AI ethics checklist, a task of assuming a risk (a human reasoning task) arises from the state where the corresponding interaction does not satisfy the item. Furthermore, not all items necessarily require attention, but it may be difficult to determine which items deserve attention, depending on the configuration of the AI system 100 and its stakeholders.
[0060] Therefore, for the above-described risk analysis procedure (3), generation processing of a checklist with priority is performed by an information processing device 1 (to be described below with reference to the drawings).
[0061] In the checklist generation processing, the relationships (interactions) between the AI system 100 to be analyzed and the stakeholders are expressed as a graph structure. Then, relationships (interactions) of high ethical importance are extracted on the basis of rules according to characteristics of the graph structure, and the check items for extracting the ethical risks associated with those relationships (interactions) are presented as the checklist with priority.
[0062] The information processing device 1 narrows down the checklist. In narrowing down the checklist, characteristics of the “relationship between the configuration of the AI system and the stakeholders” are expressed as the characteristics of the graph structure including a set of interactions.
[0063] Since the table data of the analysis sheet is in a data format of an "interaction set", the graph structure can be generated automatically. For example, the following characteristics can be automatically extracted as characteristics of the graph structure:
[0064] the number of stakeholder nodes;
[0065] the number of stakeholders having a plurality of roles;
[0066] the number of stakeholders who are not directly involved with the AI system.
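The characteristic extraction described above can be sketched as follows. This is a minimal illustration, not the patent's actual data format: the `Interaction` record, the node names, and the role sets are assumptions, and "not directly involved" is approximated as "not connected to the system output".

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    iid: str     # interaction ID, e.g. "S110"
    source: str  # start-point node
    target: str  # end-point node

# Stakeholder nodes and the roles each plays (one node may hold several roles).
stakeholder_roles = {
    "AI service provider": {"provider"},
    "credit bureau": {"data provider"},
    "bank": {"data provider", "AI service provider"},  # a plurality of roles
    "loan applicant": {"user"},
}

interactions = [
    Interaction("S110", "screening result", "loan applicant"),
    Interaction("S111", "bank", "training data"),
    Interaction("S112", "credit bureau", "training data"),
]

# Characteristic: the number of stakeholder nodes.
num_stakeholders = len(stakeholder_roles)

# Characteristic: the number of stakeholders having a plurality of roles.
num_multi_role = sum(1 for r in stakeholder_roles.values() if len(r) > 1)

# Characteristic: stakeholders not directly connected to the system output.
output_nodes = {"screening result"}
connected_to_output = {i.target for i in interactions if i.source in output_nodes}
not_directly_involved = sorted(s for s in stakeholder_roles
                               if s not in connected_to_output)
```

Because the analysis sheet already lists each interaction with its start and end points, these counts fall out of simple passes over the interaction set.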
[0067] Characteristics of the graph structure that are likely to cause ethical risks, and the corresponding items of the AI ethics checklist to be checked, are registered as rules in advance. For example, in a case where there are one or more stakeholders who are not directly involved with the AI system 100, the priority of the interactions involving those stakeholders is increased. This makes it possible to grasp indirect effects on stakeholders that tend to be overlooked in the design and development of the AI system 100.
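As a hedged sketch, the example rule above (raise the priority of interactions involving stakeholders with no direct connection to the AI system) might be applied like this. The data shapes and names here are illustrative assumptions only.

```python
def apply_indirect_stakeholder_rule(interactions, uninvolved, priorities):
    """interactions: {iid: set of node names touched by the interaction};
    uninvolved: stakeholders with no direct connection to the AI system;
    priorities: {iid: int}, updated in place and returned."""
    if uninvolved:  # the rule fires when one or more such stakeholders exist
        for iid, nodes in interactions.items():
            if nodes & uninvolved:  # interaction involves such a stakeholder
                priorities[iid] += 1
    return priorities

priorities = apply_indirect_stakeholder_rule(
    {"S120": {"credit bureau", "training data"}, "S110": {"loan applicant"}},
    uninvolved={"credit bureau"},
    priorities={"S120": 1, "S110": 1},
)
```

Here the interaction touching the indirectly involved credit bureau is boosted, while the interaction with the loan applicant keeps its base priority.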
[0068] The AI ethics check items to pay attention to are narrowed down on the basis of the rules registered according to the characteristics of the graph structure and are generated as the AI ethics checklist with priority.
[0070] The use cases illustrated in the drawings are described in the outline sheets as input data.
[0071] A use case is associated with a major item, an intermediate item, content, and an item description.
[0073] The system diagram is output as output data for the AI system 100, and the components of the AI system 100, the training data, the inference data, the stakeholders, and their relationships are described as interactions.
[0074] In the system diagram, each interaction is denoted by an interaction ID. [0075] In the example illustrated in the drawings, the interactions include the following.
[0076] S106 is the interaction of the input data from the inference data 104 to the loan screening inference unit 105. S107 is the interaction of the output of the screening result 106 from the loan screening inference unit 105.
[0077] S108 is the interaction of the input data from the training data 101 to the loan screening model training unit 102, and S109 is the interaction of the output of the loan screening model 103 from the loan screening model training unit 102.
[0078] S111 is the interaction of the transaction data input to the training data 101 from the data provider 30 (bank). S112 is the interaction of the credit score input to the training data 101 from the data provider 20 (credit bureau), and S113 is the interaction of the input data from the AI service provider 10 to the loan screening model training unit 102.
[0079] S114 is the interaction of the input data from the AI service provider 10 to the loan screening inference unit 105, and S110 is the interaction of the screening result 106 output to the user 40.
[0081] In the analysis sheet, stakeholder and data types, a risk, an AI ethics check item (AI ethical characteristic), a measure, and the like are associated with each interaction ID. For example, the interaction ID "S110" is associated with the stakeholder type "user", the name "loan applicant", and the start point/end point "1 (end point)". Furthermore, the interaction ID "S110" is associated with the data type "inference result", the data name "screening result", and the start point/end point "0 (start point)". Moreover, the interaction ID "S110" is associated with the risk (event) "women and Black applicants are less likely to pass screening", the AI ethics check item "group fairness", and the measure "improve the AI algorithm so that the difference in the ratio of approved applicants between gender or racial groups falls within an allowable range".
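One analysis-sheet row as described above can be pictured as a simple record; the field names below are assumptions based on the columns just listed, not the patent's actual schema.

```python
# Hypothetical record for interaction S110 of the analysis sheet.
row_s110 = {
    "interaction_id": "S110",
    # Stakeholder side of the interaction (endpoint 1 = end point).
    "stakeholder": {"type": "user", "name": "loan applicant", "endpoint": 1},
    # Data side of the interaction (endpoint 0 = start point).
    "data": {"type": "inference result", "name": "screening result", "endpoint": 0},
    "risk_event": "some groups are less likely to pass screening",
    "check_item": "group fairness",
    "measure": "keep the approval-ratio gap between groups within an allowable range",
}
```

A table of such records is exactly the "interaction set" format from which the graph structure can be generated automatically.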
[0083] In the analysis diagram, the AI ethics check items are displayed in association with each of the interaction IDs displayed in the system diagram illustrated in the drawings.
[0084] In the example illustrated in the drawings, the relationship between events and factors is visualized for each interaction.
[0086] The analysis sheet illustrated with code A1 has a table structure similar to the analysis sheet illustrated in the drawings described above.
[0087] In the graph structure illustrated with code A2, an arrow between nodes illustrated with circles represents an interaction, and Sxxx attached to each interaction represents an interaction ID.
[0088] In the example illustrated with code A2 in the drawings, the graph structure is generated from the interaction set in the analysis sheet illustrated with code A1.
[0089] Here, interactions to pay attention to are extracted in the following order of (1) to (3).
[0090] (1) The importance of all the interactions is set to 1 point.
[0091] (2) The importance of an interaction having a specific characteristic is incremented (1 point may be added per characteristic).
[0092] (3) The interactions are ranked by importance.
[0093] The specific characteristic in (2) above may include a characteristic of the nodes at both ends of the interaction (components of the AI system 100, data, or stakeholders) and a characteristic of the connection relationship. The characteristic of the nodes at both ends of the interaction may include a stakeholder having a plurality of roles (for example, both the AI system provider and the data provider), a stakeholder having a user role, and a stakeholder having a role of a training data provider. The characteristic of the connection relationship may include an interaction of a stakeholder not connected to the output of the AI system 100, and an interaction in which the training data or the inference data is connected to a plurality of data providers.
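The scoring in steps (1) to (3) above can be sketched as follows. The attribute flags and rule predicates are illustrative assumptions standing in for the characteristics just listed, not the patent's actual rule format.

```python
def rank_interactions(interactions, rules):
    """interactions: list of dicts with an 'id' key and boolean attribute flags.
    rules: list of predicates; each matching characteristic adds 1 point."""
    scores = {}
    for it in interactions:
        score = 1                 # (1) every interaction starts at 1 point
        for rule in rules:
            if rule(it):          # (2) +1 point per matching characteristic
                score += 1
        scores[it["id"]] = score
    # (3) rank the interactions by importance, highest score first
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical rules mirroring the characteristics listed above.
rules = [
    lambda it: it.get("multi_role_stakeholder", False),
    lambda it: it.get("user_role", False),
    lambda it: it.get("training_data_provider", False),
    lambda it: it.get("not_connected_to_output", False),
]

interactions = [
    {"id": "S110", "user_role": True},
    {"id": "S111", "training_data_provider": True,
     "not_connected_to_output": True},
    {"id": "S101"},
]
ranking = rank_interactions(interactions, rules)
```

With these inputs, S111 matches two characteristics (score 3), S110 matches one (score 2), and S101 matches none (score 1), so S111 is analyzed first.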
[0095] The corresponding AI ethics check items are listed in descending order of interaction importance score.
[0096] In the example illustrated in the drawings, the check items corresponding to the interactions with the highest importance scores are listed first.
[0098] As illustrated in the drawings, the information processing device 1 includes a central processing unit (CPU) 11, a memory unit 12, a display control unit 13, a storage device 14, an input interface (IF) 15, an external recording medium processing unit 16, and a communication IF 17.
[0099] The memory unit 12 is an example of a storage unit and is, illustratively, a read only memory (ROM), a random access memory (RAM), or the like. Programs such as a basic input/output system (BIOS) may be written in the ROM of the memory unit 12. A software program stored in the memory unit 12 may be appropriately read and executed by the CPU 11. Furthermore, the RAM of the memory unit 12 may be used as a temporary recording memory or a working memory.
[0100] The display control unit 13 is connected to a display device 131 and controls the display device 131. The display device 131 is a liquid crystal display, an organic light-emitting diode (OLED) display, a cathode ray tube (CRT), an electronic paper display, or the like and displays various kinds of information for an operator or the like. The display device 131 may be combined with an input device and may be, for example, a touch panel. The display device 131 displays various types of information for the user of the information processing device 1.
[0101] The storage device 14 is a storage device having high input/output (IO) performance, and for example, a dynamic random access memory (DRAM), a solid state drive (SSD), a storage class memory (SCM), or a hard disk drive (HDD) may be used.
[0102] The input IF 15 may be connected to an input device such as a mouse 151 and a keyboard 152 and may control the input device such as the mouse 151 and the keyboard 152. The mouse 151 and the keyboard 152 are examples of the input devices, and the operator performs various kinds of input operations through these input devices.
[0103] The external recording medium processing unit 16 is configured in such a manner that a recording medium 160 is attachable to the external recording medium processing unit 16. The external recording medium processing unit 16 is configured in such a manner that information recorded in the recording medium 160 is allowed to be read in a state in which the recording medium 160 is attached to the external recording medium processing unit 16. In the present example, the recording medium 160 is portable. For example, the recording medium 160 is a flexible disk, an optical disc, a magnetic disk, a magneto-optical disk, a semiconductor memory, or the like.
[0104] The communication IF 17 is an interface for enabling communication with an external device.
[0105] The CPU 11 is an example of a processor and is a processing device that performs various controls and calculations. The CPU 11 implements various functions by executing an operating system (OS) or a program loaded into the memory unit 12. Note that the CPU 11 may be a multi-processor including a plurality of CPUs, or a multi-core processor having a plurality of CPU cores, or may have a configuration having a plurality of multi-core processors.
[0106] A device for controlling operation of the entire information processing device 1 is not limited to the CPU 11 and may be, for example, any one of an MPU, a DSP, an ASIC, a PLD, or an FPGA. Furthermore, the device for controlling operation of the entire information processing device 1 may be a combination of two or more types of the CPU, MPU, DSP, ASIC, PLD, and FPGA. Note that the MPU is an abbreviation for a micro processing unit, the DSP is an abbreviation for a digital signal processor, and the ASIC is an abbreviation for an application specific integrated circuit. Furthermore, the PLD is an abbreviation for a programmable logic device, and the FPGA is an abbreviation for a field-programmable gate array.
[0108] The CPU 11 of the information processing device 1 illustrated in the drawings functions as a graph generation unit 111, a characteristic extraction unit 112, and a check item extraction unit 113.
[0109] The graph generation unit 111 acquires a plurality of pieces of relational information (for example, interactions) including at least two attributes among an attribute of a type of an object person, an attribute of a type of processing, and an attribute of a type of data, which is determined on the basis of the configuration of the AI system 100. The graph generation unit 111 may acquire relational information on the basis of an interaction set 141 to be analyzed. The graph generation unit 111 may generate the graph structure illustrated in
[0110] The characteristic extraction unit 112 determines a priority of the plurality of pieces of relational information on the basis of the attribute of the type of the object person. The characteristic extraction unit 112 may determine the priority on the basis of an important interaction extraction rule 142. The characteristic extraction unit 112 may increase the priority of a specific object person related to each of the plurality of pieces of relational information. The characteristic extraction unit 112 may increase the priority of specific relational information among the plurality of pieces of relational information.
[0111] The check item extraction unit 113 outputs one or a plurality of check items selected on the basis of the determined priority from among a plurality of check items each associated with each attribute as a narrowed AI ethics checklist 114 of the AI system 100. The check item extraction unit 113 may output a narrowed AI ethics checklist 114 on the basis of an AI ethics checklist 143.
[0113] The graph generation unit 111 receives the important interaction extraction rule 142, the AI ethics checklist 143, and the interaction set 141 to be analyzed as input data (steps C1 to C3).
[0114] The graph generation unit 111 generates the graph structure from the interaction set 141 (step C4).
[0115] The characteristic extraction unit 112 extracts characteristics from the graph structure (step C5). The extraction of characteristics may be executed on the basis of the number of stakeholder nodes, the number of stakeholders having a plurality of roles, and the number of stakeholders not directly involved with the AI system 100, for example.
[0116] The characteristic extraction unit 112 extracts an interaction to pay attention to from the extracted characteristics on the basis of the important interaction extraction rule 142 (step C6).
[0117] The check item extraction unit 113 extracts the check items of the AI ethics checklist 143 that correspond to the interactions to pay attention to (step C7).
[0118] The check item extraction unit 113 outputs the AI ethics checklist 114 narrowed down to important items (step C8). Then, the AI ethics checklist generation processing ends.
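The flow of steps C1 to C8 above can be wired together as a minimal end-to-end sketch. All function bodies and the checklist keyed by interaction ID are simplified assumptions, not the actual implementation of units 111 to 113.

```python
def build_graph(interaction_set):
    # C4: graph generation unit - here the graph is just the interaction set
    # viewed as edges keyed by interaction ID.
    return {iid: ends for iid, ends in interaction_set.items()}

def extract_important(graph, rule):
    # C5-C6: characteristic extraction unit - keep interactions matching the
    # important interaction extraction rule.
    return [iid for iid, ends in graph.items() if rule(ends)]

def extract_check_items(important, checklist):
    # C7: check item extraction unit - pick the check items associated with
    # the important interactions.
    return [(iid, checklist[iid]) for iid in important if iid in checklist]

# C1-C3: input data (illustrative values).
interaction_set = {"S110": ("screening result", "loan applicant"),
                   "S113": ("AI service provider", "training unit")}
rule = lambda ends: "loan applicant" in ends   # important interaction rule
checklist = {"S110": "group fairness", "S113": "transparency"}

# C8: the AI ethics checklist narrowed down to important items.
narrowed = extract_check_items(
    extract_important(build_graph(interaction_set), rule), checklist)
```

Only the interaction reaching the loan applicant matches the rule here, so the narrowed checklist contains only its "group fairness" item.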
[B] Effects
[0119] According to the risk analysis program, the risk analysis method, and the information processing device 1 in the above-described embodiment, the following effects can be obtained, for example.
[0120] The graph generation unit 111 acquires a plurality of pieces of relational information (for example, interactions) including at least two attributes among an attribute of a type of an object person, an attribute of a type of processing, and an attribute of a type of data, which is determined on the basis of the configuration of the AI system 100. The characteristic extraction unit 112 determines a priority of the plurality of pieces of relational information on the basis of the attribute of the type of the object person. The check item extraction unit 113 outputs one or a plurality of check items selected on the basis of the determined priority from among a plurality of check items each associated with each attribute as a narrowed AI ethics checklist 114 of the AI system 100.
[0121] Therefore, it is possible to support a developer and a provider of the AI system 100 in appropriately recognizing and dealing with ethical risks that may arise from the operation of the AI system 100. Furthermore, the need for the AI service provider 10 and developers to put the ethical characteristics into items to be practiced for components of the AI system 100 and individual stakeholders can be eliminated. Moreover, by prioritizing the checklist of approximately 300 items and preferentially analyzing the N most important items (for example, N=20), critical risks can be recognized early.
[C] Others
[0122] The disclosed technique is not limited to the embodiments described above, and various modifications may be made and carried out without departing from the spirit of the present embodiments. Each configuration and each process of the present embodiments may be selected or omitted as desired, or may be combined as appropriate.
[0123] All examples and conditional language provided herein are intended for the pedagogical purposes of aiding the reader in understanding the invention and the concepts contributed by the inventor to further the art, and are not to be construed as limitations to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although one or more embodiments of the present invention have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.