AUGMENTED REALITY TRAINING SYSTEM
20230237920 · 2023-07-27
Inventors
- Steven Patrick Wolf (Cincinnati, OH, US)
- John Gerald Hendricks (Cincinnati, OH, US)
- Jason Todd Van Cleave (Cincinnati, OH, US)
Abstract
An augmented reality training system provides immersive training scenarios and uses a scenario management engine to assess scenario results and recommend appropriate subsequent scenarios. Scenario results may be assessed based upon comparison to doctrinal methods, expert performance, peer performance, or the student's past performance. Based upon these assessments, a student may be presented with challenge-appropriate subsequent scenarios. The challenge or complexity of scenarios, for purposes of such recommendations, may be determined as an objective complexity or challenge metric that is based upon the results of scenario training across multiple students. One example of such a metric is a Shannon entropy metric, which quantifies the unpredictability of a scenario based upon the actions taken during the scenario, evaluated to a configured timeline depth.
Claims
1. An augmented reality training system providing immersive training scenarios to a user, comprising: (a) a wearable augmented reality viewing device; (b) a computing device comprising a display screen, the computing device being in communication with the wearable augmented reality viewing device; (c) a physical object, the physical object being in communication with the computing device; and (d) one or more markers positioned on a surface of the physical object; wherein the wearable augmented reality viewing device comprises in memory executable instructions for: (i) capturing information of a physical image; (ii) creating the training scenarios, wherein the training scenarios include an initial difficulty rating; (iii) displaying the physical image on the wearable augmented reality viewing device and on the display screen of the computing device, the physical image presenting one or more virtual critical cues; (iv) assessing the user’s results during the training scenarios by comparing the user’s results to an ideal doctrinal or expert-based approach to the training scenarios; and (v) modifying the training scenarios based on the user’s results of the training scenarios.
2. The system of claim 1, wherein the system further comprises determining a set of subsequent scenarios that correspond to the user’s results, wherein the subsequent scenarios may be more challenging or less challenging than the training scenarios.
3. The system of claim 1, wherein the system further comprises an instructor interface that is presented to an instructor when the user completes the training scenarios, wherein the instructor interface permits the instructor to select additional training scenarios without the user’s input or knowledge.
4. The system of claim 3, wherein the ideal doctrinal approach may include Advanced Trauma Life Support (ATLS) methods, wherein the ATLS methods may include Triage Considerations, Airway Assessment, Breathing and Ventilation, and Circulation and Hemorrhage Control.
5. The system of claim 1, wherein the ideal doctrinal approach determines a doctrine timeline from a set of doctrine rules, wherein the doctrine timeline is compared to the user’s results to check for the critical cues and accurate step performances, wherein the training scenarios’ complexity is increased if the doctrine timeline is within a configured threshold of similarity to the user’s results, and wherein the training scenarios’ complexity is decreased or maintained at a current level if the doctrine timeline is not within the configured threshold of similarity to the user’s results.
6. The system of claim 1, wherein the expert-based approach determines an expert timeline for the training scenarios, wherein the expert timeline is based upon one or more expert evaluations of the training scenarios, wherein the expert timeline is compared to the user’s results to check for the critical cues and accurate step performances, wherein the training scenarios’ complexity is increased if the expert timeline is within a configured threshold of similarity to the user’s results, and wherein the training scenarios’ complexity is decreased or maintained at a current level if the expert timeline is not within the configured threshold of similarity to the user’s results.
7. The system of claim 1, wherein a Shannon entropy score is determined from Shannon’s Entropy Equation to rank the complexity, unpredictability, or challenge of the training scenarios, wherein a low entropy score indicates fewer processing elements for completing the training scenarios, and a high entropy score indicates more processing elements for completing the training scenarios.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] The drawings and detailed description that follow are intended to be merely illustrative and are not intended to limit the scope of the invention as contemplated by the inventors.
DETAILED DESCRIPTION
[0030] The inventors have conceived of novel technology that, for the purpose of illustration, is disclosed herein as applied in the context of training simulations and learning management. While the disclosed applications of the inventors’ technology satisfy a long-felt but unmet need in the art of training simulations and learning management, it should be understood that the inventors’ technology is not limited to being implemented in the precise manners set forth herein, but could be implemented in other manners without undue experimentation by those of ordinary skill in the art in light of this disclosure. Accordingly, the examples set forth herein should be understood as being illustrative only, and should not be treated as limiting.
[0031] As compared to virtual reality (VR) or other types of virtual training simulation, augmented reality (AR) offers more flexibility that may be beneficially applied to SIT and other training approaches. By presenting realistic virtual cues superimposed on a physical manikin, the learner has an immersive experience, but is also able to practice medical skills (e.g., applying a tourniquet) using equipment from their own medical kit, while seeing real-world visual feedback such as their own arms and hands moving in the expected manner. This augmented reality feedback may be implemented within a training system that provides additional features, such as guided learning and automated creation and selection of appropriate training scenarios.
[0032] Features available with the system may include the presentation of virtual patients, or other objects related to the objective of the scenario, that display high-fidelity visual cues that mimic human physiology and respond to a user’s actions, and which may be superimposed over physical manikins to deliver training that addresses assessment and decision making and optimizes skill development and retention. Other training elements designed to improve the transferability of skills training to real-world application may include virtual patients that expose learners to photorealistic visual and auditory cues, olfactory cues that recreate important medical cues and psychological stressors present in the real world (e.g., blood, smoke, toxic gases), and the introduction of physiological stressors known to degrade psychomotor skills (e.g., increased heart rate and respiratory rate, elevated blood pressure, and increased sweating).
[0033] Virtual patients or other objects allow trainees to practice identifying perceptual cues that are difficult to recreate using physical objects alone (e.g., skin tone and mental status changes in a patient, selective spread of smoke or fire in a structure), and may allow trainees to see the outcomes of their decisions in a compressed timeline, which may reinforce mastery of related skills. The controlled introduction of sounds, visual cues, smells, time pressure, and uncertainty combined with biometric data of student performance may allow scenarios to be adapted to varying skill levels, prior to the start of a scenario or in real time, and may also allow the targeted presentation of scenarios that are suitable for a previously displayed skill level.
[0034] It should be understood that, while many of the examples described herein may be disclosed in the context of medical training scenarios for the sake of clarity, they are not limited to such applications. Instead, it should be apparent that the described methods, devices, and features may be broadly applied to a variety of training scenarios beyond medical scenarios.
[0035] Turning now to the figures, an exemplary training system includes a server (100), which may be in communication with devices such as a student device (104) and an instructor device (106).
[0036] The server (100) may also be in communication with one or more devices within a training environment (10), either directly or through an environment hub (12). Communication with the training environment (10) may be via a wired or wireless connection, such as by Wi-Fi, Bluetooth, Ethernet, or USB. In some examples, some or all of the devices of the training environment (10) may be in direct communication with the server (100) via a wireless data connection, while in other examples some or all of the devices may be in direct communication with the server (100) via a wired data connection, or a combination of wireless and wired data connections. The environment hub (12) may be, for example, a networking device such as a router, switch, or hub that is configured to communicate with devices in the training environment (10) and provide access (e.g., via a LAN or WAN connection) to the server (100). In some implementations, the environment hub (12) may itself be a computer that is located proximate to the training environment (10), and that is configured to communicate with the devices in the training environment (10) and perform some level of storage, processing, or manipulation of data separately from that provided by the server (100). As an example, the environment hub (12) may receive a scenario dataset that includes visual assets, audio assets, and programming instructions that may be locally executed to conduct a training scenario.
[0037] Devices within the training environment (10) will vary by implementation, but may include smart tools (14), smart mannequins (16), auditory immersion devices (18), olfactory immersion devices (20), haptic immersion devices (22), and head mounted displays (HMDs) (24). Smart tools (14) may include devices that are used to perform specific actions or provide specific treatments during a scenario, and may be configured to be rendered within the virtual environment in specific ways, or to provide certain types of feedback or response data depending upon their use during a scenario. As one example, a tourniquet smart tool (14) may include sensors that, when the tourniquet is applied to a mannequin, produce data indicating how tightly the tourniquet has been tied. Such data may be used to update the scenario (e.g., where pressure is appropriate, a virtual patient’s condition begins to stabilize) or determine results (e.g., where pressure is appropriate, scenario results may indicate a passing score). Smart tools (14) may include those used to accomplish tasks within the simulation (e.g., a tourniquet or other instrument used to provide a medical treatment), as well as those used to diagnose or gain information during the simulation (e.g., a stethoscope or other instrument used to determine information within the virtual environment).
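As a minimal, hypothetical sketch of how such smart tool feedback might be consumed, the following Python example shows a smart tourniquet reporting its applied pressure to scenario logic; the class names, field names, and the 250 mmHg threshold are illustrative assumptions rather than details from this disclosure:

```python
# Hypothetical sketch only: names, fields, and the pressure threshold are
# illustrative assumptions, not details taken from this disclosure.
from dataclasses import dataclass

@dataclass
class SmartToolEvent:
    tool_id: str          # e.g., "tourniquet-01"
    action: str           # e.g., "applied"
    pressure_mmhg: float  # sensor reading produced when the tool is used
    timestamp_s: float    # seconds since scenario start

def handle_tool_event(event: SmartToolEvent, scenario_state: dict) -> None:
    """Update scenario state based on feedback from a smart tool (14)."""
    if event.tool_id.startswith("tourniquet") and event.action == "applied":
        if event.pressure_mmhg >= 250.0:
            # Adequate pressure: the virtual patient begins to stabilize
            scenario_state["bleeding_rate"] = 0.0
            scenario_state["events"].append(("tourniquet_effective", event.timestamp_s))
        else:
            # Inadequate pressure: recorded for scoring the scenario results
            scenario_state["events"].append(("tourniquet_inadequate", event.timestamp_s))

# Usage sketch
state = {"bleeding_rate": 1.0, "events": []}
handle_tool_event(SmartToolEvent("tourniquet-01", "applied", 270.0, 42.5), state)
```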
[0038] The smart mannequin (16) may be a physical stand-in for a virtual patient or other object. During the simulation, a student may interact directly with the smart mannequin in an augmented reality view that includes physical touch as well as viewing of virtual overlays that are matched to the smart mannequin. As with smart tools (14), the smart mannequin (16) may include sensors, communication devices, and feedback devices that may be configured to provide data and feedback during scenarios. In varying implementations, smart mannequins (16) and their use within training scenarios may include any of the features described in U.S. Pat. No. 10,438,415, issued Oct. 8, 2019, and titled “Systems and Methods for Mixed Reality Medical Training,” the entire disclosure of which is hereby incorporated by reference herein.
[0039] The auditory (18), olfactory (20), and haptic immersion devices (22) may be configured to provide varying types of stimuli during a scenario, based upon signals or instructions provided by the environment hub (12), the server (100), the HMD (24), or another device. The auditory immersion device (18) may include one or more speakers positioned around an area, separately from the HMD (24), and operable to provide sounds related to the simulated scenario, such as a siren sound during a medical scenario, or the sound of burning wood during a fire response scenario. The olfactory immersion device (20) is operable to introduce a scent into the simulation area that relates to the scenario, such as a chemical that simulates the smell of blood during a medical scenario, or the smell of smoke in a fire response scenario. This may include operating a sprayer to spray an amount of a chemical into the air, and may also include operating a fan or other air circulating device to spread the chemical into an area. The haptic immersion device (22) may be integrated with another object, such as the smart mannequin (16) or smart tool (14), and may be operable to simulate a movement, vibration, or other physical response from those objects. In some implementations, the haptic immersion device (22) may be integrated with the training area itself as a platform or pad that the student and smart mannequin (16) or other object are positioned on, and that is operable to provide motion or vibration to simulate a collapsing structure, an earthquake, an explosion, or another condition related to the scenario.
[0040] The HMD (24) will vary by implementation, but will generally include a display positioned over the eyes of a wearer, a power source (e.g., a battery or power cable), a communication device (e.g., a wireless communication device or data cable), sensors (e.g., accelerometer, gyroscope), image capture devices usable to capture images of the physical environment for room tracking, hand tracking, controller tracking, object tracking, or augmentation, integrated controllers (e.g., one or more hand controllers used to interact with virtual objects in augmented reality), and other components. Some HMDs (24) may include processors, memories, software, and other configurations to allow them to operate independently of other devices, while some HMDs (24) may instead be primarily configured to operate displays that receive video output from another connected device (e.g., such as a computer or the environment hub (12)).
[0042] As an example, registering objects within the captured physical layer may include identifying a smart mannequin (16) within the field of view of the HMD (24), and then determining the corresponding virtual coordinate space in which that mannequin exists within the virtual layer. This allows the system to overlay (204) renderings from the virtual layer onto the physical layer via a display of the HMD (24) in order to provide an augmented reality view of the simulation area (e.g., either by displaying renderings on a translucent display that allows viewing of the physical layer, or by capturing an image of the physical layer which is modified and redisplayed). Continuing the above example, the overlaid (204) information could include a simple outline or arrow being rendered in the AR view to identify the location and orientation of the smart mannequin (16), or could include more complex graphical renderings to simulate the appearance or physical condition of a virtual patient that corresponds to the mannequin, including the rendering of injuries or other visible characteristics that may be relevant to the scenario, as has been described and referenced above. Once the virtual layer has been determined and combined with the physical layer, the augmented view may be displayed (206) via the HMD and/or other devices used during the training scenario.
[0043] The process of capturing, registering, creating, and displaying the augmented view may be performed multiple times per second during the simulation to provide a smooth frame rate and immersive viewing experience during the training scenario, and to account for changes detected (208) in the training environment, which may include movement of the student, movement of the HMD (24), movement of registered objects, or the occurrence of scenario-specific events based upon the passage of time, the student’s actions, or other occurrences. As changes occur (208), the augmented view must be updated (210) to reflect the change, which may include rendering and displaying a new virtual layer where the positions and orientations of overlays have changed, or where the overlay has changed due to a scenario-driven event (e.g., an improvement in a virtual patient’s condition as a result of treatment, or the passage of time).
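To make the repeated cycle described above concrete, the following is a minimal sketch of a capture/register/overlay/display loop; the hmd, tracker, renderer, and scenario objects and their method names are placeholders introduced for illustration, not an actual API from this disclosure:

```python
# A minimal sketch of the repeated capture/register/overlay/display cycle;
# every object and method name here is an assumed placeholder.
import time

def run_ar_loop(hmd, tracker, renderer, scenario, target_fps=60.0):
    frame_interval = 1.0 / target_fps
    while scenario.is_active():
        start = time.monotonic()
        frame = hmd.capture_frame()                 # capture the physical layer
        poses = tracker.register_objects(frame)     # e.g., locate the smart mannequin (16)
        layer = renderer.build_layer(scenario.state(), poses)  # render the virtual layer
        hmd.display(frame, layer)                   # overlay (204) and display (206)
        for change in scenario.detect_changes(frame, poses):   # detect changes (208)
            scenario.apply(change)                  # update (210) the augmented view
        # Pace the loop to run many times per second for a smooth frame rate
        time.sleep(max(0.0, frame_interval - (time.monotonic() - start)))
```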
[0045] Types of changes that may occur include an object interaction change (220), spatial or temporal change (222), immersion change (224), stress level change (226), and other types of changes. An object interaction change (220) may occur as a result of a student using or interacting with a smart tool (14), interacting with a smart mannequin (16), or interacting with another object that is part of the scenario. This could include, for example, using a smart tool (14) or other medical instrument to provide a treatment to a virtual patient such as applying a tourniquet, applying a bandage, injecting a medicine, or providing CPR. Any detected object interaction change may cause changes to the scenario (e.g., changing the state of the virtual patient, influencing the final results of the scenario) or augmented view (e.g., bleeding from a wound overlaid to the augmented view may slow or stop).
[0046] Spatial or temporal changes (222) may occur as a result of the passage of time, or the movement of the student or objects that are related to the scenario. Spatial changes may include the student walking around within the AR view, or reorienting their head to see the AR view from different perspectives. In each case, the AR view must be updated to ensure that overlays are positioned and oriented correctly over the physical layer. Temporal changes may include events occurring within the scenario as a result of the passage of time, such as a virtual patient’s condition worsening or improving as a result of treatment, or a virtual structure fire growing in size.
[0047] Immersion changes (224) may occur or be triggered by the occurrence of other events or changes within the scenario, and may include operating one or more of the immersion devices (18, 20, 22) to provide additional immersion to the scenario experience. This may include providing audio of a patient coughing or gasping for air in response to receiving a treatment (220) or the passage of time (222) without treatment. Immersion changes (224) may also be triggered by stress level changes (226), such as causing a surface where the scenario is occurring to vibrate to simulate a structural collapse or explosion where a student’s stress level is determined to be low.
[0048] Stress level changes (226) may occur in response to measured or determined stress levels for a student participating in the scenario. Stress measurements may be performed with heart rate tracking devices or other biofeedback devices, or stress may be estimated based upon other detectable characteristics of the student (e.g., a microphone within the HMD (24) may measure breathing rate, or eye tracking and/or hand tracking features of the HMD (24) may detect how steady the student’s hands or gaze are).
[0049] As has been described, changes that occur may influence the augmented view or immersion of the scenario in different ways, but may also trigger other changes (e.g., a lowering of the student’s stress level may cause an immersion change (224), or a rapid advancement of the scenario timeline resulting in a temporal change (222)). Generally, the results of changes may include determining (228) the impact of the change on the virtual environment (e.g., applying a bandage to a virtual patient might improve the patient’s condition, dousing a virtual flame might introduce additional smoke), determining (230) a new virtual layer (e.g., an improved patient condition might result in changes to the overlaid appearance of a virtual patient, dousing a fire might reduce the size of the virtual fire), modifying the physical environment (e.g., activating a device to provide audio, olfactory, or other feedback to match the changing virtual environment (228)), and overlaying (234) the new virtual layer to create an updated augmented view (e.g., rendering the virtual patient’s new appearance).
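The following sketch illustrates one way the change-handling flow above could be organized; the change types mirror (220)-(226) and the steps mirror the determinations and overlay described in this paragraph, but the dict-based state, thresholds, and handler logic are assumptions for illustration only:

```python
# Illustrative only: the dict-based state and the 0.3 stress threshold are
# assumptions, not values from this disclosure.
def process_change(change: dict, state: dict, immersion: dict, renderer):
    # Determine (228) the impact of the change on the virtual environment
    if change["type"] == "object_interaction":    # (220) e.g., bandage applied
        state["patient_condition"] += change.get("benefit", 0.0)
    elif change["type"] == "temporal":            # (222) e.g., condition worsens over time
        state["patient_condition"] -= change.get("decay", 0.0)
    elif change["type"] == "stress":              # (226) measured via biofeedback
        state["stress_level"] = change["value"]

    # Determine (230) a new virtual layer reflecting the updated state
    new_layer = renderer.build_layer(state)

    # Modify the physical environment to match the changing virtual environment,
    # e.g., trigger an immersion change (224) if the student's stress is low
    if state.get("stress_level", 1.0) < 0.3 and "haptic" in immersion:
        immersion["haptic"].vibrate()             # simulate a collapse or explosion

    # Overlay (234) the new virtual layer to create an updated augmented view
    return new_layer
```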
[0052] For example, one implementation of a diagnostic tool (40) might include a tablet device configured to provide interfaces via the display (44) that are responsive to the scenario and to use of the probe (46). In this implementation, when the probe (46) is positioned on a smart mannequin (16), the display (44) may simulate a body temperature reading, heart rate reading, blood oxygenation reading, or other feedback based upon scenario information received via the communication device (48).
[0053] In an implementation where the diagnostic tool (40) is a dummy device, the display (44) may be a non-functional surface that includes a visual pattern, fiducial marker, or other markers that allow the tool’s display to be readily identified during capture and recognition of the physical layer, so that the diagnostic interface may be overlaid onto the diagnostic tool (40) during application of the virtual layer. In this case, the probe (46) may be a simple push button that detects when it is placed against an object and transmits a signal via the communication device (48) to indicate that the device has been activated, which may result in a virtual diagnostic interface being overlaid upon the display (44) surface.
[0055] As has been described, maintaining an appropriate level of challenge and stress during training may be beneficial for student retention and mastery of skills, especially when translating such skills to real-world practice. While the challenge and stress of a scenario may be influenced by the AR experience itself, it may also be advantageously influenced by a scenario management engine (SME) (102) that is configured to recommend appropriate scenarios, ensuring that students are presented with scenarios that are neither trivial nor so difficult that they are discouraging. An example of the operation of the SME (102) is described in the paragraphs that follow.
[0057] Each scenario that is configured (310) may also be associated with an initial rating that is representative of its difficulty. In some implementations, the rating may be dynamically determined based upon the scenario results of students, instructors, or others that are participating in the scenarios. In some implementations of the system, scenario difficulty may be based at least in part upon a determined degree of surprise or complexity inherent in the scenario, which may be determined and/or expressed in the context of a Shannon entropy rating for the scenario. Shannon entropy is a measurement of uncertainty associated with a system or variables. In some implementations, the SME (102) may be configured to dynamically calculate Shannon entropy ratings for a plurality of scenarios, across the entire system or in relation to individual skills, based upon the results of scenarios for a plurality of students or other users participating in the scenarios, as will be described in more detail below. The Shannon entropy equation may be expressed as:

H(X) = −Σ_{i=1}^{n} p(x_i) log₂ p(x_i)

where p(x_i) is the probability of the i-th possible outcome and n is the number of possible outcomes.
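As a minimal sketch of this calculation (function and variable names are illustrative, not from this disclosure), the rating could be computed as:

```python
# Shannon entropy: H(X) = -sum over i of p(x_i) * log2(p(x_i)).
# Zero-probability outcomes contribute nothing and are skipped.
import math

def shannon_entropy(probabilities):
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Example using the first-treatment distribution of Scenario A from Table 2
print(round(shannon_entropy([0.75, 0.15, 0.0, 0.0, 0.10]), 2))  # prints 1.05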
[0058] Students may be added (314) as users of the system, which may include granting them unique credentials for accessing the system, and creating a user primary key or other identifier with which all other records for the user may be associated. While the SME (102) will track and determine a student’s skill level and growth over time, it may be beneficial for a student’s initial skill level to be set (316), which may include participating in a scenario that is configured to provide results indicative of placement, or may instead include providing details of past experience with the trained skills (e.g., years of professional or academic experience with the skill, or certifications related to the skill).
[0060] The system may also track (324) and generate a treatment timeline for each action taken by the student towards resolving the scenario. Tracked treatments may include actual treatments as well as diagnostic actions, and may include, for example, applying bandages, performing CPR, applying a tourniquet, injecting a medicine, or other actions. The performance of treatment actions may be determined based upon object tracking and identification via the HMD (24), based upon feedback from smart devices in use during the scenarios such as smart tools (14) or smart mannequins (16), or a combination of the above. In addition to allowing the scenario and AR view to update in response to treatment actions, tracking (324) the timeline of treatments will also indicate the order and time at which the student performed certain treatments, which may be useful for assessing the student’s performance in the scenario, as well as for determining the challenge or complexity of the scenario.
[0061] The system may also determine (326) the results or outcome of the scenario when the scenario is completed by the student, either by successfully spotting critical cues and providing treatments, or due to the passage of time. The determined (326) results may be expressed as tracked timelines of critical cues, or treatments, or both, or may be expressed as one or more outcomes determined by those timelines. For example, when faced with a virtual patient that has sustained a life threatening injury, a treatment timeline that shows appropriate treatments being rapidly applied is indicative of a successful result.
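A minimal sketch of such timeline tracking (324) and outcome determination (326) follows; the event vocabulary, the ScenarioRun structure, and the simple pass/fail rule are assumptions chosen for illustration:

```python
# Illustrative data structures only; field names and the success rule are assumed.
from dataclasses import dataclass, field

@dataclass
class TimelineEvent:
    kind: str      # "cue", "diagnostic", or "treatment"
    label: str     # e.g., "apply_tourniquet"
    time_s: float  # seconds since scenario start

@dataclass
class ScenarioRun:
    student_id: str
    events: list = field(default_factory=list)

    def record(self, kind: str, label: str, time_s: float) -> None:
        self.events.append(TimelineEvent(kind, label, time_s))

def outcome(run: ScenarioRun, required_treatments, deadline_s: float) -> str:
    """A simple determined (326) result: success if every required treatment
    was performed before the deadline, otherwise failure."""
    done = {e.label for e in run.events
            if e.kind == "treatment" and e.time_s <= deadline_s}
    return "pass" if set(required_treatments) <= done else "fail"

# Usage sketch: a rapid, appropriate treatment timeline indicates success
run = ScenarioRun("student-1")
run.record("cue", "patient_pale", 12.0)
run.record("treatment", "apply_tourniquet", 37.0)
print(outcome(run, ["apply_tourniquet"], deadline_s=60.0))  # "pass"
```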
[0062] The SME (102) may guide students through the learning process based upon their determined (326) results and the determined difficulty, challenge, or complexity ratings of scenarios available to the system. While treatment timelines and other results of the simulated scenario may indicate a simple success or failure in the scenario (e.g., patient survived, patient died), such a binary system may not be beneficial in terms of student retention and mastery. Rather, the SME (102) may be configured to perform one or more assessments of the scenario results to determine relative performance of the student, in order to provide a recommendation of one or more subsequent scenarios appropriate for their stage of skill development.
[0063] As an example, the SME (102) may assess a student’s tracked timelines and results against one or more baselines, such as a doctrinal approach, expert performance, peer performance, or the student’s own past performance, as described below.
[0064] After determining the assessment results, the system may then modify that student’s level of skill mastery for one or more skills based on those results (410). Refactoring the student’s skill level may be accomplished in varying ways, but as one example the system may determine, based upon the assessment results, that the student is either “crawling” (e.g., struggling, with much room for improvement, perhaps overwhelmed), “walking” (e.g., showing steady improvement), or “running” (e.g., at or near mastery) with respect to one or more skills. Based on this determination, the system may then decrease (412) that student’s skill level, maintain (414) that student’s skill level, or increase (416) that student’s skill level. The student’s skill level may be expressed by the system as a score, rating, level, or tier that relates to the plurality of scenarios, or may be expressed by a designation of a scenario challenge rating that they are currently mastering or have previously mastered, or in other ways. Table 1 below shows an example of scenarios ranked by difficulty or complexity, and categorized as appropriate for a student that is crawling, walking, or running with respect to a certain skill (e.g., Airway Scenario 9 is appropriate for a student who has been assessed as at or near a “running” level of skill mastery for airway emergency medical treatment scenarios).
TABLE 1
Example of scenario ranking system

      Massive Hemorrhage  Airway     Respiration  Circulation  Hypothermia
Rank  Scenarios           Scenarios  Scenarios    Scenarios    Scenarios
9     Run                 Run        Run          Run          Run
8     Run                 Run        Run          Run          Run
7     Run                 Walk       Run          Run          Walk
6     Walk                Walk       Run          Walk         Walk
5     Walk                Walk       Walk         Walk         Walk
4     Walk                Crawl      Walk         Walk         Crawl
3     Crawl               Crawl      Crawl        Walk         Crawl
2     Crawl               Crawl      Crawl        Crawl        Crawl
1     Crawl               Crawl      Crawl        Crawl        Crawl
[0065] After the student’s skill level has been modified and/or determined, the system may then determine (418) a set of subsequent scenarios that are appropriate for that skill level, which may use fuzzy logic to identify scenarios that are within a configured range of that user’s skill level, whether more challenging or less challenging, and may also identify scenarios that are focused on different or related skills (e.g., where a prior completed scenario focuses on Airway aspects of the ABC method with Breathing as a secondary aspect, a subsequent scenario may instead focus primarily on Breathing). The system may then provide (420) the recommended scenario set to the student via the HMD (24), student device (104), instructor device (106), or another interface so that the student or instructor may select a subsequent scenario immediately after completing a prior scenario, providing a seamless and efficient training experience.
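A hedged sketch of steps (410)-(418) follows; the 1-9 level scale mirrors Table 1, while the fixed window used in place of the fuzzy-logic matching and all names are assumptions for illustration:

```python
# Illustrative only: a fixed +/-1 window stands in for the fuzzy-logic
# matching described above, and the 1-9 level scale mirrors Table 1.
def refactor_skill_level(current_level: int, assessment: str) -> int:
    if assessment == "crawling":          # decrease (412)
        return max(1, current_level - 1)
    if assessment == "walking":           # maintain (414)
        return current_level
    return min(9, current_level + 1)      # "running": increase (416)

def recommend_scenarios(scenarios, skill_level: int, window: int = 1):
    """Determine (418) subsequent scenarios within a configured range of the
    student's skill level, whether more or less challenging."""
    return [s for s in scenarios if abs(s["challenge"] - skill_level) <= window]

# Usage sketch
level = refactor_skill_level(6, "running")      # level becomes 7
pool = [{"name": "Airway Scenario 6", "challenge": 6},
        {"name": "Airway Scenario 8", "challenge": 8},
        {"name": "Airway Scenario 3", "challenge": 3}]
print(recommend_scenarios(pool, level))  # scenarios rated 6-8 are recommended
```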
[0071] With reference to the student mannequin (612), the interface (600) shows that, for this exemplary scenario, the student performed treatment actions in a different order than the expert, and performed different types of treatments. A visual key (618) is included to aid in interpreting the interface (600). As can be seen, the student performed a chest bandage treatment (614) first, while the expert performed the same treatment third. Additionally, instead of performing tourniquet treatments, the student performed bandage treatments on the legs and arms (616), subsequent to the chest bandage treatment. Based on the prior assessment, the interface (600) indicates to the student that their timeline actions varied from “acceptable” to “failure.”
[0075] For example, as shown, the interface (660) visually indicates to the student that the first event occurring in their scenario was observation of a cue at around 12 seconds, and the sixth event occurring in their scenario was performing a treatment action (670) at around 37 seconds. The interface (660) may also present comparison values, such that the student might determine that while they performed the treatment action (670) at 37 seconds, other students or experts performed the treatment action (670) at 32 seconds, or performed the treatment action 8 seconds after the prior event (e.g., Cue 5), while the student performed the treatment action 11 seconds after the prior event. Such comparisons may be provided by a second mannequin (662), may be included in the event indicators (668, 670), or may appear as hover-over or pop-up information based upon user interactions with the interface (660).
[0079] The Shannon entropy score may be particularly useful as a metric for ranking the complexity, unpredictability, or challenge of different scenarios. This is because the Shannon entropy score can provide a concrete indication of the variability in the ways that other users have approached the scenario, with a lower score indicating fewer information processing elements, or more obvious elements, for completing the scenario, and a higher score indicating more information processing elements, or less obvious elements, for understanding and responding to a scenario.
[0080] The Shannon entropy score is also advantageous for application to training in real-world domains where satisficing decision-making strategies are preferred over optimizing decision-making strategies. The information processing strategies in such domains are often described with naturalistic models that focus on situation assessment and rapid recognition skills, versus Bayesian approaches or other prescriptive models that presume a higher degree of information certainty, fixed goals, and a priori assumptions about potential decision outcomes. The Recognition Primed Decision (RPD) model is an example of a relevant descriptive model for such domains and is illustrated in the figures.
[0082] As an example of how the SME (102) or another process might determine and manage the Shannon entropy scores for a plurality of scenarios, consider the following exemplary process.
[0083] As each scenario is presented (800) and results are received (802), the system may analyze the types of events occurring during the scenario, and the order of events occurring during the scenario. In varying implementations, the system may analyze timelines of cues (e.g., the student notices the virtual patient looks pale), timelines of diagnostic actions (e.g., the student uses a pulse oximeter on the virtual patient), timelines of treatment actions (e.g., the student provides oxygen to the virtual patient), or other timelines, or combinations of the above. In some implementations, the system may analyze the overall timeline more generally, without regard to the type of event. Generally, the analysis will include determining the total number of possible events that might occur at certain points along the timeline, and then determining which of the possible events actually did occur at certain points along the timeline.
[0084] As an example, one augmented reality scenario might provide smart tools (14) or other instruments that allow for five possible treatments to be provided to a virtual patient, and successful completion of the scenario might involve using three of those possible treatments. To determine the complexity metric, the system may gather scenario timelines and results for a plurality of instances of the scenario and determine, to a configured depth of the timeline, which treatment action was performed first, second, third, and so on. Table 2 below provides an exemplary dataset of inputs and results for a complexity calculation, such as the Shannon entropy metric, for four different scenarios, with five different treatment options, based on the percentage of scenario participants that selected each treatment option as their first treatment during the scenario (e.g., 75% of students testing Scenario A chose Treatment 1 as their first action). As can be seen from Table 2, the entropy score increases as the scenario results reflect a wider variability of first treatment actions chosen by participants. While Table 2 reflects entropy scoring based upon a timeline depth of N=1 (e.g., probability of first action), scoring could be based upon varying depths such as N=2 (e.g., first two actions), N=3, and so on.
TABLE 2
Exemplary complexity calculation input and results

            Treat 1  Treat 2  Treat 3  Treat 4  Treat 5  Entropy Score
Scenario A  75%      15%      0%       0%       10%      1.05
Scenario B  75%      10%      5%       5%       5%       1.29
Scenario C  52%      11%      11%      11%      15%      1.95
Scenario D  21%      20%      18%      19%      22%      2.32
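The following sketch shows how such a score might be derived from pooled scenario results at a configured timeline depth; the list-of-ordered-choices data layout is an assumption, and with a depth of N=1 the calculation reproduces the Table 2 values:

```python
# Sketch of deriving an entropy score from pooled scenario results at a
# configured timeline depth; the data layout (each run as an ordered list
# of treatment choices) is an assumption for illustration.
import math
from collections import Counter

def entropy_at_depth(runs, depth=1):
    """Treat each student's first `depth` actions as one outcome and compute
    Shannon entropy over the observed outcome distribution."""
    outcomes = Counter(tuple(run[:depth]) for run in runs)
    total = sum(outcomes.values())
    return -sum((n / total) * math.log2(n / total) for n in outcomes.values())

# Scenario D from Table 2: a near-uniform spread over five first treatments
runs = [["t1"]] * 21 + [["t2"]] * 20 + [["t3"]] * 18 + [["t4"]] * 19 + [["t5"]] * 22
print(round(entropy_at_depth(runs, depth=1), 2))  # prints 2.32
```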
[0086] In some implementations, the system may maintain separate entropy scores for each scenario that reflect the complexity of different aspects of the scenario. For example, separate entropy scores may be maintained for the timeline of cues, the timeline of diagnostic actions, and the timeline of treatment actions, such that the complexity of recognizing cues within a scenario may be rated separately from the complexity of selecting and performing treatments.
[0087] It should be understood that any one or more of the teachings, expressions, embodiments, examples, etc. described herein may be combined with any one or more of the other teachings, expressions, embodiments, examples, etc. that are described herein. The following-described teachings, expressions, embodiments, examples, etc. should therefore not be viewed in isolation relative to each other. Various suitable ways in which the teachings herein may be combined will be readily apparent to those of ordinary skill in the art in view of the teachings herein. Such modifications and variations are intended to be included within the scope of the claims.
[0088] Having shown and described various embodiments of the present invention, further adaptations of the methods and systems described herein may be accomplished by appropriate modifications by one of ordinary skill in the art without departing from the scope of the present invention. Several of such potential modifications have been mentioned, and others will be apparent to those skilled in the art. For instance, the examples, embodiments, geometries, materials, dimensions, ratios, steps, and the like discussed above are illustrative and are not required. Accordingly, the scope of the present invention should be considered in terms of the following claims and is understood not to be limited to the details of structure and operation shown and described in the specification and drawings.