AUTOMATED SYSTEM FOR MAPPING ORDINARY 3D MEDIA AS MULTIPLE EVENT SINKS TO SPAWN INTERACTIVE EDUCATIONAL MATERIAL

20200202737 · 2020-06-25

    Abstract

    A method of transforming 3D models or animations into 3D educational objects for providing educational experiences addressing personalised learning paths of students or trainees is provided. The method comprises converting (1500) the 3D models or animations into the 3D educational objects, wherein the 3D educational objects are treated as a primary object to which sets of interactable digital assets are provisioned by the Learning Phase Generator; transforming (1502) conventional 2D or 3D digital assets to the 3D educational objects by implementing real time learning object modification; and associating (1504) learning information that comprises the 3D educational objects with specific locations on a surface of the primary object.

    Claims

    1. A system for automatically transforming 3D models or animations into 3D educational objects for providing educational experiences as sessions addressing personalised learning paths of students or trainees, said system comprising: a Learning Phase Generator that converts said 3D models or animations into said 3D educational objects, wherein said 3D educational objects are treated as a primary object to which sets of interactable digital assets are provisioned by the Learning Phase Generator; and a central learning nodes design editor module that implements real time learning object modification that aids in the transformation of conventional 2D or 3D digital assets to said 3D educational objects, wherein said Learning Phase Generator associates learning information that comprises said 3D educational objects with specific locations on a surface of said primary object, wherein said Learning Phase Generator re-packages said 3D educational object with necessary associations of event triggers and media projection points to make up a contextual educational skin for said 3D educational object, and wherein said educational skin represents a virtual wrapping of said 3D educational object with a mesh of local points anywhere in proximity or contact with said 3D educational object, with which interaction by said students or trainees, including virtual touching, will cause pre-configured events to occur in the appropriate manner.

    2. The system as claimed in claim 1, wherein said virtual touching represents touching an augmented reality or a virtual reality projection at a specific point or user input that will produce a reaction for said event, and wherein said educational skin represents projecting context based information in any digital format that is contextually relevant to such situation, whereby said students or trainees interact with specific hot spots within the nominated regions of said 3D educational object to activate the functioning of configured components as related to a learning phase.

    3. The system as claimed in claim 1, wherein said session comprises augmented reality or virtual reality projection of said 3D educational objects, wherein said 3D educational objects comprise interactive educational materials that are used to strengthen a learning experience of said students or trainees.

    4. The system as claimed in claim 1, comprising a digital assets query engine that recurses through the text-based definition of a 3D object to capture defined individual vertex offsets; UV mapping for each texture coordinate vertex; faces that organise polygons into the object's list of vertices; texture vertices; vertex normals; and any other data as peripheral in sequential order, and automatically classifies names of such nodes to provide the means to instantiate said educational skin on said 3D Model; and a question and answers module that enables an administrator or a content manager to rely on their designated ranges of gradings to configure the run-time logic to statistically trigger said system-produced contextually valid evaluation statements at run-time in response to each answer supplied by said student or trainee involved in completing an assessment activity.

    5. The system as claimed in claim 1, comprising a learning process evaluator that supports requirements of autodidacticism by reinforcing learning through access to past learning decisions as personal progress reports or video interactives and fulfilling requirements of utilising discovery or exploration in both directed and undirected fashions.

    6. The system as claimed in claim 1, comprising: a natural language processing and hybrid recommender module that assists an educator to automatically generate classification metadata for complex text material across a complete set of course resources associated with said 3D educational objects, wherein said natural language processing and hybrid recommender module is configured to identify metadata and associated subjects and topics of interest, which is extracted from said text material to add as primary or peripheral learnings to enhance specific learning experiences of students or trainees; and a visual object processing and classification module that assists said educator to automatically generate the classification metadata for complex video footage for said complete set of course resources.

    7. The system as claimed in claim 1, comprising: a learning phase evaluator module that provides students or trainees with evaluations on their progress at any point of a learning cycle as generated by sub components of a question and answers module, wherein said learning phase evaluator module generates micro certification for said students or trainees based on their learning cycle, which is symbolised by digital trophies and awards that can be visible to other students or trainees.

    8. The system as claimed in claim 1, comprising: a learning monitoring and habit assessment module that generates said augmented reality or said virtual reality exploration or navigation maps that can be extrapolated to quantify relevant variables attached to measure the degree to which knowledge or skills have been transferred through said 3D objects, wherein said degree comprises a report for tracking a progress of said students or trainees and overall success of a course.

    9. The system as claimed in claim 1, wherein said 3D educational objects comprise educational materials, different classifications of said educational materials and related video footage, wherein said 3D educational objects can be selected by said students or trainees through various forms of said virtual touching, as configured by said administrator or content manager.

    10. A method of transforming 3D models or animations into 3D educational objects for providing educational experiences addressing personalised learning paths of students or trainees, said method comprising: converting (1500) said 3D models or animations into said 3D educational objects, wherein said 3D educational objects are treated as a primary object to which sets of interactable digital assets are provisioned by the Learning Phase Generator; transforming (1502) conventional 2D or 3D digital assets to said 3D educational objects by implementing real time learning object modification; and associating (1504) learning information that comprises said 3D educational objects with specific locations on a surface of said primary object.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0084] The embodiments herein will be better understood from the following detailed description with reference to the drawings.

    [0085] In order that the present disclosure may be readily understood and put into practical effect, reference will now be made to embodiments of the present disclosure with reference to the accompanying drawings, wherein like reference numbers refer to identical elements. The drawings are provided by way of example only, wherein:

    [0086] FIG. 1 illustrates an authentication, activation and initial configuration of a client application from a Central Processor fitted with components of a system for consumption of a user, or student, thus representing a typical implementation of the client application according to an embodiment herein.

    [0087] FIG. 2 illustrates how an authenticated user, through a client application module, triggers an AR/VR Educational Content according to an embodiment herein;

    [0088] FIG. 3 illustrates how an authenticated administrator or a content manager accesses a content management module (CMM) to which a central processing unit coordinates other modules in appropriate fashion to provide one or more functionalities according to an embodiment herein;

    [0089] FIG. 4 illustrates a Central Learning Nodes Design Editor when an administrator or a content manager loads primary and supporting Digital Media according to an embodiment herein;

    [0090] FIG. 5 illustrates the Central Learning Nodes Design Editor when the administrator or the content manager configures an educational skin according to an embodiment herein;

    [0091] FIG. 6 illustrates an authenticated administrator or the content manager accessing the Learning Phase Editor to Configure Educational Phases according to an embodiment herein;

    [0092] FIG. 7 illustrates a question and answers module admin interface available to an authenticated administrator or the content manager according to an embodiment herein;

    [0093] FIG. 8 illustrates utilisation of a Digital Assets Query Module by an authenticated administrator or the content manager according to an embodiment herein;

    [0094] FIG. 9 illustrates a Core Data Model for Asset Catalogue that shows a data model and respective data relationships for conducting the core serialisation and set of queries pertinent according to an embodiment herein.

    [0095] FIG. 10 illustrates a Learning Monitoring and Habit Assessment Application (LMHA) Components and respective interactions of different software layers within an application enabled by CASTMDM components system according to an embodiment herein;

    [0096] FIG. 11 illustrates a Questions and Answers Data Model (QADM) according to an embodiment herein;

    [0097] FIG. 12 illustrates a Learning Monitoring and Habit Assessment (LMHA) Data Model according to an embodiment herein;

    [0098] FIG. 13 illustrates a Topic Wizard Data Model that shows a record structure, data fields and external relationships with other core data models according to an embodiment herein;

    [0099] FIG. 14 illustrates an Educational Skin Data Model (ESDM) according to an embodiment herein; and

    [0100] FIG. 15 is a flow diagram illustrating a method of transforming 3D models or animations into 3D educational objects for providing educational experiences addressing personalised learning paths of students/trainees.

    [0101] Skilled addressees will appreciate that elements in the drawings are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the relative relation of some of the elements in the drawings may be simplified to help improve understanding of embodiments; or in other instances the possibility of system call or procedure failures is not illustrated, so as to present the sequence of events or system tasks without showing the system logging exceptions, presenting warning dialogs to the user, or gracefully terminating after a failure, although it may be safely assumed that such cases would be catered for in any implementation or version of the present disclosure.

    DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

    [0102] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.

    [0103] FIG. 1 illustrates an authentication, activation and initial configuration of a client application from a Central Processor or a content management module, fitted with components of a central system for consumption of a user, or student, thus representing a typical implementation of the client application according to an embodiment herein. The interaction of the various components residing in the client application, as well as in the central processing unit, utilises an HTTPS-secured connection to achieve activation through central validation of credentials and activation of critical on-line responses and services to resolve the initial configuration and activation of the client application. Upon activation, a user logs in to the client application, which engages the central processing on-line API through the various client application components provisioned for such purpose. The user accesses the central processor, which engages an authentication algorithm to authenticate the credentials of the user against credentials managed by any of a variety of Authentication Provider layer options, validating or rejecting the access claim depending on the validity of such credentials. Once the authentication succeeds, the central system responds with an asset catalogue provisioning the necessary download and configuration instructions to initiate the setup of the learning models in the client application. If the authentication fails, the client application provides an appropriate access failure message to the user.

    [0104] In one embodiment, users are students or trainees. The user attempts to log in to a content management module 5 by entering their credentials through the client application 10, which through the API layer transmits an authentication request to the Central Processor or the content management module 20. The user credentials are authenticated through a provider request to the assigned Authentication Service 22. If the content management module fails to authenticate a request 25, a login failure message is passed back to the client application 30. The login failure is displayed to the user through a warning dialog 35. If the authentication request succeeds 40, the client application requests the Asset Catalogue 45 from an online API, which is a configuration manifest describing the assets that require download and the default behaviour of each asset as prescribed by their learning phase and learning complexity context and their subscribed use as either AR or VR presentations. The Asset Catalogue is coupled to a Content Management tenant, or faculty, and a specific channel or course that has been set up for such purpose. The Get Assets Catalogue API call passes a tenant/channel identifier 50. The Central Processor layer creates the appropriate query using the call Fetch Assets Catalogue 55, which targets the database system staged by the Central Storage system; the system responds with an empty Asset Catalogue, in cases of any failure, or with an appropriately constructed Asset Catalogue 60. The client application call back procedure, on Assets received from the central processor 65, serializes the Asset Catalogue in local storage 70, whether the manifest is empty or contains the appropriate configuration. The app provides the appropriate failure message to the user in cases where the Asset Catalogue is empty at step 105.
If the Asset Catalogue contains items that require download from the content management module, IfRequiresAssetDownload 72, then the DoAssetCatalogue function 75 in turn deploys the Get Assets API call 80. The Central Processor prepares a FetchAssets Query 85, which asynchronously retrieves the required digital assets from Central Storage as binary files that are transmitted through the network to the client application through the Assets Fetch Response 90; once the call back on Assets Received 95 has a success response, the client application serializes the files in local storage 100. Once all downloads are completed, or no downloads are necessary, the client application's display app readiness function displays a message to the user indicating that all expected downloads have been completed, that no downloads were necessary because the digital resources had already been downloaded, or that a failure has occurred 105. The user then responds to the client application in whatever fashion is appropriate, that is, to continue or exit the app session respectively.
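    By way of a non-limiting illustration, the start-up sequence of FIG. 1 (authenticate, fetch the Asset Catalogue manifest, serialise it locally, then download any missing assets) can be sketched as follows. The `api` object, its method names and the manifest field names are illustrative assumptions, not the claimed implementation:

```python
import json
import os

# Hypothetical sketch of the client start-up flow of FIG. 1: authenticate
# (steps 20-40), fetch and serialise the Asset Catalogue manifest
# (steps 45-70), then download any assets not yet held locally
# (steps 72-100). All names here are assumptions for clarity.

def start_client(api, store_dir, username, password, tenant, channel):
    token = api.authenticate(username, password)
    if token is None:
        return "login failure"                 # warning dialog, steps 25-35
    catalogue = api.get_assets_catalogue(token, tenant, channel)
    manifest_path = os.path.join(store_dir, "asset_catalogue.json")
    with open(manifest_path, "w") as f:        # serialise manifest, step 70
        json.dump(catalogue, f)
    if not catalogue.get("items"):
        return "empty catalogue"               # failure message, step 105
    for item in catalogue["items"]:            # download only missing assets
        local = os.path.join(store_dir, item["name"])
        if not os.path.exists(local):
            with open(local, "wb") as f:
                f.write(api.get_asset(token, item["id"]))
    return "ready"                             # app readiness message, step 105
```

The same function covers the three terminal messages of step 105: login failure, empty catalogue, or readiness once every manifest item is present locally.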

    [0105] FIG. 2 illustrates how an authenticated user, through a client application module, triggers an AR/VR Educational Content according to an embodiment herein. The client application is made up of CASTMDM components that include a Learning Phase Generator (LPG), a question and answers module (QAM) and a Digital Assets Query Engine (DAQE), through which various processes culminate in storing data, or in constructing access to such storage, during various phases of the client application lifecycle to instigate the rendering engine to produce the expected presentations and interactions. Once the user has been previously authenticated 150, the user triggers an educational experience 155 in a form prescribed by the configuration of the client application through the Asset Catalogue configuration properties. The client application, through InitExperience 160, instructs the local Learning Phase Generator to prepare a query to fetch the educational payload 165. The query is passed through the DAQE to execute fetch asset catalogue 170. The DAQE waits for fetch asset catalogue to complete before applying the educational skin 175 to the primary model that was fetched. The educational skin represents the virtual wrapping of a 3D object with a mesh of local points (e.g. regions) anywhere in proximity or contact with the 3D object, at which events such as virtual touching will occur. The virtual touching represents touching an AR/VR projection at a specific point (whether through screen touch; articulated hand or eye tracking input retrieved through cameras; or pointer devices) or user input (through forms or other mechanisms) that produces the reaction. The DAQE then runs its initialise educational package function 180, which prepares a preliminary extract for the later construction of the asset catalogue; in this instance the extract contains only the vectors listing the properties of the educational skin.
The education payload ready notification 185 is captured by the Learning Phase Generator through the call back on educational payload 190. The Learning Phase Generator examines whether the current activation phase requires the configuration of a questions and answers module (QAM) 195. This examination references the client application session parameters, namely the phase options and difficulty level 200, which evince under which phase and complexity context the current session is being conducted. In one embodiment, the discovery phase does not contain the QAM, and as such, the configuration parameter controls are given to the administrator as configuration options. The system configures the learning phase navigation pathways in apply learning phase navigation 205, which pre-determines critical rendering pathways that prioritise the viewing of specific objects over others to increase clarity; hence, the discovery of specific objects may be pre-configured by the administrator to obey specific learning strategies. The function on educational package ready 210 determines when all asynchronous preparatory operations and functions have completed, to finally respond with a ready to render notification after the complete protocol of behaviours has been configured within the asset catalogue object 215. The ready to render call back 220 engages an event loop 230, sensitive to touch, object recognition and SLAM events, acting on the system to trigger the behaviour protocols configured by the implementation of the asset catalogue instructions to render the education package 225 from instant to instant, and event to event, until the user exits the client application through a unique event configured by the administrator to signify an exit request.
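    The event loop of steps 220-230 can be sketched, by way of example only, as a dispatch that matches each user event against the skin's configured trigger points. The data shapes (a node's position, proximity radius and trigger table) are illustrative assumptions, not the patented format:

```python
# Illustrative sketch of the event loop of steps 220-230: each user event
# (touch, recognised object, SLAM update) is tested against the mesh of
# local points in the educational skin, and the pre-configured behaviour
# fires when the event lands within a point's proximity radius. All field
# names are assumptions chosen for clarity.

def dispatch_event(asset_catalogue, event):
    ex, ey, ez = event["position"]
    for node in asset_catalogue["skin_nodes"]:
        nx, ny, nz = node["position"]
        # squared distance avoids a needless square root per node
        dist2 = (ex - nx) ** 2 + (ey - ny) ** 2 + (ez - nz) ** 2
        within = dist2 <= node["radius"] ** 2
        if within and event["type"] in node["triggers"]:
            return node["triggers"][event["type"]]   # behaviour to render
    return None                                      # no configured reaction
```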

    [0106] FIG. 3 illustrates how an authenticated administrator or a content manager accesses a content management module (CMM) to which a central processing unit coordinates other modules in appropriate fashion to provide one or more functionalities according to an embodiment herein. In one embodiment, the one or more functionalities include (a) storing and distributing the client application configurations and digital content, (b) creating new tenancies and channels, and (c) managing different roles, which include the administrator/content manager, the reviewer or report consumer, and the user or student. The CMM is constructed within the scope of a multitenant architecture in which each tenant shares an instance of the hardware and software components, but has discrete domain over its data, configuration, user management and the individual configuration aspects of its tenancy. The tenancy configuration is also provided with the ability to create and configure channels, the maximum number of such channels being limited by the instance's hardware/software parameters, which provide sub-divisions of the system. The match of faculty/course and user/student is provisioned by the tenancy configurations related to course profile and user (or student) profile junction management.
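    By way of a non-limiting sketch, the tenancy/channel sub-division described above can be modelled as follows. The class, its channel limit and the faculty/course-to-user junction shown here are illustrative assumptions only:

```python
# Hypothetical model of the CMM multitenancy: each tenant shares the same
# instance but holds discrete domain over its own channels and users, and
# may create channels only up to a configured maximum. Names are
# illustrative, not the claimed data model.

class Tenancy:
    def __init__(self, name, max_channels):
        self.name = name
        self.max_channels = max_channels
        self.channels = {}     # channel/course name -> enrolled users
        self.users = set()     # all users known to this tenancy

    def create_channel(self, channel):
        if len(self.channels) >= self.max_channels:
            raise RuntimeError("channel limit reached for this tenancy")
        self.channels[channel] = set()

    def enrol(self, channel, user):
        # faculty/course and user/student junction management
        self.users.add(user)
        self.channels[channel].add(user)
```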

    [0107] At step 250, the administrator or the content manager requests the online portal to display the CLNDE page. At step 255, the online portal activates the CLNDE at the API layer. At step 270, the online portal calls back the render CLNDE function. At step 275, the online portal displays the rendered CLNDE page to the administrator or the content manager. At step 280, the event loop is called, which includes keyboard/mouse events or screen touch events.

    [0108] FIG. 4 illustrates a central learning nodes design editor when the administrator or the content manager loads primary and supporting digital media according to an embodiment herein. The on-line portal is engaged to enact a visual wizard to upload primary and secondary media. In one embodiment, the primary media is the object of educational focus, whilst the secondary media is the support material that is used to strengthen the learning experience. The on-line portal requests pertinent services via the API layer, which in this instance require the central processing unit to execute the initialisation of the central learning nodes design editor, with the DAQE included to process the uploaded objects within their respective influence domain (primary or secondary).

    [0109] The authenticated administrator or content manager 300 selects the CLNDE through the on-line portal 305, which fires the start CLNDE command 310 through the API layer, causing the Central Processor to execute init CLNDE 315, which initialises the CLNDE. The CLNDE ready notification 320 is received by the on-line portal layer using display CLNDE 325. The online portal projects the wizard for the administrator's consumption 330. The administrator loads the primary model and support media, if required 335. The post raw media call 340 induces central processing to process the posts via the on receive raw media call back 345, which when complete uploads the primary object through the UPLOAD PRIMARY TO DAQE call 350. The CLNDE processes this payload via OnReceivePrimaryData 352, which turns the object stream into a text format (e.g. the .OBJ format), if the data is not already in that format. This primary object, in the .OBJ format, is sent to the DAQE Engine for further processing via the InjectPrimary call 355. The rest of the support material, if any, is also routed to the CLNDE via UploadSupportMediatoDAQE 365, and its payload is processed by OnReceiveSupportData 357, which optimises the original support components (for instance, video/audio/static pictures/text/other 3D objects) into more efficient data streams for the consumption of mobile devices, prior to packaging, and uses InjectSupportMedia 370 to route the reworked support data into the DAQE Engine for final processing 360. Once the DAQE Engine has completed the tasks related to this process, it responds with the event ProcessedObjectsOk 375 to the Central Learning Nodes Design Editor, indicating that the data model and relations illustrated in FIG. 9 have been serialised into internal memory, as per the interactions described in FIG. 8.
The Central Processing catches the ProcessedObjectsOk event in PrepareProcessedObjectstoRender 380, which acts primarily as a semaphore to achieve synchronisation of the various processes that require resolution prior to any attempt at rendering. The on-line portal layer waits for the ReadyToRender event 382 to be captured by DisplayEducationalSupportLibrary 385, which displays the appropriate support elements that were initially injected as support media and are now available as digital components that can be injected into the newly created Educational Skin. DisplayEducationalSkin 390 completes and displays the Primary Object, now with an Educational Skin ready to be assigned the workflow intended by the administrator 395.
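    The text-based .OBJ traversal that claim 4 attributes to the digital assets query engine (capturing vertex offsets, texture coordinates, normals and faces in sequential order, with named groups available for node classification) can be sketched as follows. The dictionary layout is an illustrative assumption:

```python
# Minimal sketch of the DAQE's recursion through a text-based .OBJ
# definition: vertex offsets (v), texture coordinate vertices (vt),
# vertex normals (vn) and faces (f) are captured in sequential order,
# and group/object names (g/o) are kept so the educational skin
# generator can classify nodes by name. Field names are assumptions.

def parse_obj(text):
    model = {"vertices": [], "uvs": [], "normals": [], "faces": [], "groups": []}
    current_group = "default"
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0].startswith("#"):
            continue                      # skip blanks and comments
        key, values = parts[0], parts[1:]
        if key == "v":
            model["vertices"].append(tuple(float(v) for v in values[:3]))
        elif key == "vt":
            model["uvs"].append(tuple(float(v) for v in values[:2]))
        elif key == "vn":
            model["normals"].append(tuple(float(v) for v in values[:3]))
        elif key == "f":
            # each face entry is v/vt/vn indices into the lists above
            model["faces"].append((current_group, [v.split("/") for v in values]))
        elif key in ("g", "o"):
            current_group = values[0] if values else "default"
            model["groups"].append(current_group)
    return model
```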

    [0110] FIG. 5 illustrates the Central Learning Nodes Design Editor when the administrator, or the content manager is tasked to configure the Educational Skin according to an embodiment herein. In accessing the system, an on-line portal is engaged, which fires requests, via the API layer, instructing the central processing unit to initialise the Central Learning Nodes Design Editor which is supported by the DAQE to process the originally injected object, to which an educational skin will be applied, along with any supporting material which is turned into an educational support library by the system.

    [0111] The Authenticated Admin/Content Manager is already working with the on-line Portal tools and the system is tracking all user events through an event loop 400. The administrator elects to modify the Educational Skin by reference to the appropriate selection presented by the on-line portal 405, whereupon the on-line portal fires PostNewNodeParams 410, which prompts Central Processing to execute ConfigureNodeOptions 415, aligning the Central Learning Nodes Design Editor to present Node_Category; Node_Category_Association; Node_Category_Antagonistics; Node_Name; and Node_Learning_Complexity_Levels as configuration options 420. The DAQE Engine prepares the educational skin data model and its counterpart, the interactive components of the Educational Skin, in OnConfigurePrimaryObjectNode 425 to provision the system with the read and write permissions that will modify the skin data model, including the trigger configuration. A copy of the current primary skin data model and its event trigger configuration is saved to disk by SaveToDisk 430. The on-line portal receives the JSON package, identified by an internal header notification with the text value PrimaryModel 435, through the DisplayPrimaryModel call back 440. When the Educational Skin is enabled, the administrator can begin to modify or integrate further digital material 445. Simultaneously, as new digital material is injected as an add-on to the educational skin, the trigger mechanism of the AssetNodeContainer can also be modified, and the changes are communicated up the system hierarchy by PostNodeTrigger 450, fired by the on-line portal layer, which induces central processing to execute ConfigureNodeTrigger 455, which, attentive to its parameters 460, can assign any of the trigger configuration options described by Experience Trigger Setup in section 034b.
The call back OnConfigurePrimaryObjectNodeTrigger 465 causes a save of the modifications to the object by executing SaveToDisk 470, immediately after which Central Processing is induced to PrepareRenderObjects 475, ensuring that the primary and support media modification processes have been synchronised before finally causing the on-line portal to RenderEducationalObjects 480. After all the Educational Skin modifications are completed, the administrator may elect to begin work with the Learning Phase Editor to configure educational phases 485.
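    A non-limiting sketch of the node configuration saved by SaveToDisk (steps 430 and 470) follows; it validates the configuration options listed at step 420 plus a trigger table, then serialises the package under the PrimaryModel header notification described at step 435. The JSON shape is an illustrative assumption:

```python
import json

# Hypothetical shape of the skin node configuration saved by SaveToDisk:
# the step-420 configuration options plus an event trigger table, wrapped
# with the internal header notification value "PrimaryModel". All field
# names are assumptions for clarity.

REQUIRED_OPTIONS = {"Node_Category", "Node_Category_Association",
                    "Node_Category_Antagonistics", "Node_Name",
                    "Node_Learning_Complexity_Levels", "triggers"}

def save_node(path, node):
    missing = REQUIRED_OPTIONS - node.keys()
    if missing:
        raise ValueError(f"incomplete node configuration: {sorted(missing)}")
    with open(path, "w") as f:
        json.dump({"header": "PrimaryModel", "node": node}, f)
```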

    [0112] FIG. 6 illustrates an authenticated administrator or the content manager, accessing the Learning Phase Editor to Configure Educational Phases according to an embodiment herein. In accessing the system, an On-line Portal is engaged to enact a visual wizard that sets up the learning phases required, which includes a discovery, learning and evaluation phase to which the on-line portal requests pertinent services via the API layer, that in this instance instructs the Central Processing unit to initialise the Learning Phase Editor and sequencing the Question and Answers Module to also engage internal memory processes and central storage facilities.

    [0113] The Authenticated Admin/Content Manager is at a stage ready to set up the Learning Phases 500. The administrator selects the option StartPhaseWizard, fired by the display in the on-line portal 505, which causes central processing to InitPhaseWizard 510, that is, to initialise the Phase Wizard, which loads default configurations to set up the initial phases that the administrator can utilise to begin their build by calling GetPhaseConfigs 515 from the Learning Phase Editor. Immediately after the default phase configuration is loaded from Central Storage, it is also copied to internal memory for later use 520. A JSON package, with an internal header notification bearing the text value WizardInitOk 525, is sent back as a positive response, which is processed by the on-line portal function DisplayPhaseWizard 530. The administrator works through the Phase Wizard options 535, and the wizard in turn responds to the administrator's input through an event processing poll 540. As the administrator works through the configuration options available, they are presented with a series of activities which include ConfigurePhaseHeader 550; ConfigurePhaseBodyPart 555; and ConfigurePhaseDataNodeRelationsWithLearningInformation 560 (a figurative term for the sake of clarity, rather than its proper functional name).

    [0114] The ConfigurePhaseHeader 550 relies on a template generator, which defaults to prompting the administrator to create a new, or accept the default, HeaderTitle, which denotes the learning phase title. The system includes suggested HeaderTitles such as the Discovery (AKA Exploration), Learning, and Assessment Phases, although the administrator is free to create new learning phase labels. The system prompts the administrator to provide a Description and Summary of Objectives for that specific phase. This is important because it reminds the administrator that each phase created is a container of properties and triggers that may be executed within the domain of that phase.
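    The phase container that ConfigurePhaseHeader builds can be sketched, by way of example only, as a record holding the title, description and summary of objectives, plus the properties and triggers executable within that phase's domain. The class and its defaults are assumptions, not the patent's schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a learning phase as a container of properties
# and triggers. The suggested HeaderTitles mirror the defaults described
# for ConfigurePhaseHeader; the class layout itself is an assumption.

SUGGESTED_TITLES = ("Discovery", "Learning", "Assessment")

@dataclass
class LearningPhase:
    header_title: str = "Discovery"           # default learning phase title
    description: str = ""
    summary_of_objectives: str = ""
    properties: dict = field(default_factory=dict)
    triggers: dict = field(default_factory=dict)   # events executable in this phase
```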

    [0115] During the ConfigurePhaseBodyPart stage 555, the administrator is prompted by the system to review the unedited educational skin, represented by the virtual wrapping of a 3D object with a mesh of local points anywhere in proximity or contact with the 3D object supplied as the primary object of focus. The LPE provides a visual editor with a 3D projection system that permits the administrator to peruse the objects displayed in the editor through the full range of viewing angles and scales. The unedited educational skin, wrapped around the primary object as initially generated by the DAQE, provides the means to focus on any point of interest, on any part of the primary 3D object/animation, which the administrator interacts with to substitute such points of interest with interactive anchors. These interactive anchors are the locus to which other visual objects, interactive elements, or even sound schemas, simulate their tethering to inject contextually valid educational elements. This network of interactive anchors also injects event notifiers for any number of user or system events (touch or mouse events, keyboard input, eye gazing or visually recognised gesture tracking, point location mapping, or entering or leaving a configured proximity fence and the like).
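    The substitution of a point of interest with an interactive anchor can be sketched as follows: the anchor records the skin point it replaces, the media tethered to it, and the event notifiers registered on it. The class and its method names are illustrative assumptions only:

```python
# Hypothetical sketch of an interactive anchor from paragraph [0115]:
# a locus on the educational skin to which media are tethered and on
# which event notifiers (touch, gaze, proximity-fence events and the
# like) are registered. Names are assumptions, not the claimed design.

class InteractiveAnchor:
    def __init__(self, skin_point):
        self.skin_point = skin_point      # (x, y, z) on the educational skin
        self.tethered_media = []          # visual objects, sounds, etc.
        self.notifiers = {}               # event type -> list of callbacks

    def tether(self, media):
        self.tethered_media.append(media)

    def on(self, event_type, callback):
        self.notifiers.setdefault(event_type, []).append(callback)

    def notify(self, event_type, payload=None):
        # fire every callback registered for this event type
        return [cb(payload) for cb in self.notifiers.get(event_type, [])]
```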

    [0116] During the Configure Phase Data Node Relations With Learning Information stage 560, the LPE provides the ability to mark any point in the educational skin with a score range set up by the administrator, from the lowest score signifying nil focus requirements to the maximum indicating crucial focus priority, in reference to one or any of the Learning Phases already configured. This action sets up the level of focus relevance by phase and item, e.g. LFRPHI scoring. The LFRPHI scoring offers a prioritisation referencing prompt that administrators rely on to later adjust the learning phases, types, frequency, and learning activities placed on such markers to increase the value of the learning experience once progress outcomes have been captured and analysed. Under the LFRPHI schema every point in the educational skin has varying degrees of focus priority, requiring different levels of educational planning and implementation from one phase to another, as each level of focus precedence may be lowered or heightened depending on the objectives and requirements of that learning phase. An ML hybrid (MLH) recommender system employs a multi-stage approach to select reference metadata as the system iterates through the educational documentation accessible to the system, automatically extracting document specific implicit and explicit metadata, such as, topic name (which for the purpose of this system is also known as the AssetNodeContainerLabel in the AssetNodeContainer structure) and subject information, and other identifying characteristics. The MLH also carries out a cross reference analysis using a theme relevance scoring system to cluster documents with common elements together. 
This information is used by the administrator to perform intelligent searches on topics, bringing more focussed and detailed information into the subjects sought, especially as LFRPHI scoring is applied, and assisting the system to sort and match through results bounded by the immediate context of the learning phase, which imparts relevance to the interactive anchors that implicitly enforce it as the centre piece for that phase.
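The cross reference analysis can be approximated with a term-overlap score and a greedy pass that clusters documents sharing common elements. A sketch, assuming extracted metadata terms per document; the MLH recommender's actual scoring function is not specified here:

```python
def theme_relevance(terms_a, terms_b):
    """Jaccard overlap of extracted metadata terms, standing in for the
    theme relevance scoring used to cluster related documents."""
    a, b = set(terms_a), set(terms_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_by_theme(docs, threshold=0.3):
    """Greedy single pass: a document joins the first cluster whose seed
    document it overlaps with above the threshold."""
    clusters = []
    for name, terms in docs.items():
        for cluster in clusters:
            if theme_relevance(terms, docs[cluster[0]]) >= threshold:
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

docs = {"d1": ["heart", "anatomy", "valve"],
        "d2": ["heart", "valve", "aorta"],
        "d3": ["algebra", "matrix"]}
clusters = cluster_by_theme(docs)
```

Documents with common elements (here d1 and d2) end up clustered together, while unrelated material (d3) forms its own cluster.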

    [0117] Where the system requires QAM support 565, such abilities are provided; a detailed description of the critical interactions is illustrated by FIG. 7, which expounds how the QAM is administered. Once the administrator completes all tasks prompted by the Learning Phase Editor, the administrator typically proceeds to save their work 570, using the portal layer function SaveWork 575, which induces central processing to GetActiveMemoryObjects 580 and prepares such in the call OnReadyToSave 585, which when ready executes SavePhaseConfigstoDisk 590.

    [0118] FIG. 7 illustrates a question and answers module (QAM) admin interface available to an authenticated administrator or the content manager according to an embodiment herein. In accessing the system, an On-line Portal is engaged to enact a visual wizard to set up the Questions and Answers delivery system. The on-line portal requests pertinent services via the API layer, which in this instance instructs the Central Processing unit to initialise the Question and Answers Module, the Central Learning Nodes Design Editor, and the Learning Phase Editor, for which internal memory processes and central storage facilities may also be used in support of injecting questions and answers into the system.

    [0119] Whenever the QAM is engaged, following subsequent actions detailed in FIG. 6, ExecuteQAM is launched by the LPE 600. The QAM responds through OnQAMInitOK 605, which initialises the QAM and fires GetPhaseConfigs 610 to access the ContainerLearningPhaseComplexityExternalMediaLinkage data model and its dependencies; an illustration of such data relationships is displayed in FIG. 9. GetExtracts 612 is the next call from the QAM, which gets the entirety of the digital reading material metadata from central storage that may relate to the topics discussed by the different phases of learning. This material is processed by ExtractRelevantAbstracts 615, which utilises a hybrid recommender system to filter through the most appropriate material for the requirements of the course, since ExtractRelevantAbstracts permits the administrator to configure the metadata input that would most likely obtain the best material for use, along with the system utilising the labels already serialised by the AssetNodeContainer hierarchy as asserted metadata instructions. The InitTopicWizard 620 QAM process translates the resulting digital reading material into queryable objects (JSON nested arrays) for dynamic consumption. The on-line portal layer waits for the JSON object identified by an internal header notification with the text value TopicWizardOk 630, which carries the appropriate payload, as detailed by FIG. 13, to instruct DisplayTopicWizard 635 to paint and activate the Topic Wizard. The Topic Wizard conducts searches for extant material that may be of relevance given a specific phase, complexity level or AssetNodeContainer context (that is, its AssetNodeContainerLabel and other supporting fields). Such search results can be viewed as recommendations that assist administrators in the design of questions and answers.

    [0120] During the question creation stage in FIG. 7, commenced when the administrator selects the complexity level instructing the session 640, the Topic Wizard provides a sub-component, PickUsage 645, which permits the joining of Phase and complexity levels utilising the function SelectPhaseandComplexityLevel 650, which prompts the LPE to RunTopicQuery 655. The QAM responds in kind by executing QueryTopics 660, which actively engages the Topic Wizard to run specific queries on its already processed data to provide a breakdown of any material that corresponds with the learning phase, complexity level and, if so desired, specific AssetNodeContainerLabels or synonyms of such, which for the purpose of the system act as topics of interest. The results are listed in pertinence order, from highest to lowest score, by SubjectThemeRelevanceScoring (as part of TopicWizardData, FIG. 13). The administrator creates questions 655, relying on the support provided by the Topic Wizard recommendations.
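Listing results in pertinence order reduces to a descending sort on the score field. A sketch, assuming each result is a record carrying SubjectThemeRelevanceScoring alongside its AssetNodeContainerLabel (the record layout is an assumption):

```python
def rank_topic_results(results):
    """Order topic query results from highest to lowest
    SubjectThemeRelevanceScoring."""
    return sorted(results,
                  key=lambda r: r["SubjectThemeRelevanceScoring"],
                  reverse=True)

hits = [
    {"AssetNodeContainerLabel": "Ventricle", "SubjectThemeRelevanceScoring": 0.42},
    {"AssetNodeContainerLabel": "Aorta", "SubjectThemeRelevanceScoring": 0.91},
    {"AssetNodeContainerLabel": "Atrium", "SubjectThemeRelevanceScoring": 0.67},
]
ranked = rank_topic_results(hits)
```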

    [0121] The CreateQuestion 670 function is submitted through the on-line portal, which is processed by the QAM in PrepareQ&AList 675. The context and functionality of the PrepareQ&AList output generates different types of assessments, which automatically produce assessment instances for the different contexts (i.e. the learning phase and complexity level required). This list of Q&A instance recommendations is packaged and displayed to the administrator by DisplayPotentialQ&AList 680.

    [0122] The administrator proceeds by accepting, editing, rejecting, or reformatting parts of the assessment instance for the Question and Answer set provided by the system 685. The edited listing is posted back by the administrator through the on-line portal PostAcceptance function 690. The re-edited list is once again processed by the QAM in PrepareQuestionSet 695, which returns the appropriate instance type set up and the question and answer set pertinent to that type of assessment. The revised question set is displayed by the function DisplayQuestionSet 710, which displays the set in both student and administrator view options. Finally, the administrator approves and saves the assessment instance 715 by opting for Save 720, which executes a SaveToDisk instruction 725, causing the QAM to serialise the assessment instance into Disk, SerializeToDisk 730.

    [0123] FIG. 8 and FIG. 14 represent the system interactions and data processing requirements of a typical Educational skin implementation. The Educational Skin contains both an event sink and response activities that can be configured by the administrator. In terms of the event sink, the schema depicted in FIG. 14, EventTriggerType, defines a number of events that instantiate the responses listed by ActionType. The defined sink event causes a response which spawns the activity linked through the various relationships in the schema as depicted in FIG. 14. There are various data concerns and relationships that the system resolves, critically supported by ContainerLearningPhaseComplexityExternalMediaLinkage which acts as the coordinating junction.

    [0124] An EventTriggerType can be any of the following: GPS, signifying that at specific GPS locations, or on the device detecting proximity to such a location, the linked activity will be rendered or activated; Image Recognition, signifying that whenever the system identifies a specific image the linked activity will be rendered or activated; Object Recognition, signifying that whenever the system classifies an object of a specific class the linked activity will be rendered or activated; Bluetooth Fencing, signifying that whenever the system detects specific Bluetooth identifiers the linked activity will be rendered or activated; and Site Recognition, with simultaneous localization and mapping (SLAM), signifying that whenever the system detects a specific site through image or object recognition, the linked activity will be rendered or activated following SLAM principles.

    [0125] An ActionType can be any of the following: Render Raw Node, signifying that only the primary object should be visible; Render Node with Extra Media, signifying that the connected but external media object should also be presented; Remain Invisible, signifying that the primary object should be made invisible; Render Sound Only, signifying that the primary object should be made invisible but the linked sound file should be played; and Use Transparency Rules, signifying that either the primary or secondary object should be made transparent by a certain percentage as configured.
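The two lists above map naturally onto enumerations, with the schema's relationships reduced to a lookup from sink event to response activity. The enum member names follow the paragraphs above; the linkage table itself is an illustrative assumption, since in the system it is configured per anchor:

```python
from enum import Enum

class EventTriggerType(Enum):
    GPS = "gps"
    IMAGE_RECOGNITION = "image_recognition"
    OBJECT_RECOGNITION = "object_recognition"
    BLUETOOTH_FENCING = "bluetooth_fencing"
    SITE_RECOGNITION = "site_recognition"  # SLAM-assisted

class ActionType(Enum):
    RENDER_RAW_NODE = "render_raw_node"
    RENDER_NODE_WITH_EXTRA_MEDIA = "render_node_with_extra_media"
    REMAIN_INVISIBLE = "remain_invisible"
    RENDER_SOUND_ONLY = "render_sound_only"
    USE_TRANSPARENCY_RULES = "use_transparency_rules"

# Example linkage: a sink event spawns its configured response activity.
linkage = {EventTriggerType.GPS: ActionType.RENDER_NODE_WITH_EXTRA_MEDIA,
           EventTriggerType.IMAGE_RECOGNITION: ActionType.RENDER_SOUND_ONLY}

def on_event(trigger, default=ActionType.REMAIN_INVISIBLE):
    """Resolve the response activity for a defined sink event."""
    return linkage.get(trigger, default)
```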

    [0126] FIG. 8 illustrates utilisation of a Digital Assets Query Engine (DAQE) by an authenticated administrator or the content manager according to an embodiment herein. In accessing the system, an On-line Portal is engaged to enact a visual wizard to set up the DAQE. The on-line portal requests pertinent services via the API layer, which in this instance instructs the central processing unit to initialise the Digital Assets Query Module, for which internal memory processes and central storage facilities are also engaged in support of discovering and serialising the various schemas required to generate the primary object with its extended Educational Skin and the secondary object or objects as supportive junctions for said educational skin.

    [0127] The authenticated administrator uploads the primary model, which causes central processing to post the data through InjectPrimaryObject 800 to the DAQE. This primary object is in the form of an OBJ text file. After the data transfer to the DAQE has occurred without error, OnPrimaryEducationalObjectReceived 805, the DAQE uses InterrogateModelsVertices 815, which recurses through every vertex offset, the UV mapping for each texture coordinate vertex, and the faces that organise polygons into the object's list of vertices, texture vertices and vertex normals, to generate a mapping of the object to support the ultimate creation of the Educational Skin. ListPotentialVertexCovers 815 identifies every point in this map that may be used by the system as part of a vertex cover, or node cover, that is, a set of vertices such that each edge of the graph is incident to at least one vertex of the set. As minimum vertex cover is an NP-hard optimisation problem, the system applies an approximation algorithm to resolve the maximum number of vertex covers used by the system as the Educational Skin. CreateDataMap 820 transforms the vertex covers into a location mesh that can be wrapped around the primary 3D object. The CreateDefaultTouchSurface data model 825 provides the means to configure how the user or student will interact with the different regions of the Educational Skin, in terms of EventTriggerType events. The Educational Skin is completed by the function CreateDefaultEducationalSkin 830, which adds ActionType to resolve the activities that will respond to the already configured events and packages the JSON object identified by an internal header notification with the text value PrimaryObjectsProcessedOK 845. The UpdateModel 835 function finally integrates the initial 3D model with its new Educational Skin. This conjoint model is saved by SaveModelToDisk 840. The JSON object is posted back to central processing 845, where the function PreparePrimaryObjectForRender 850 hooks up the configuration to the rendering system.
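Minimum vertex cover admits a classic 2-approximation: repeatedly take an uncovered edge and add both of its endpoints to the cover. A sketch of the kind of approximation ListPotentialVertexCovers could rely on, assuming the mesh is supplied as a plain edge list (the actual implementation is not specified here):

```python
def approx_vertex_cover(edges):
    """Greedy maximal-matching 2-approximation for minimum vertex cover:
    every edge ends up incident to at least one chosen vertex."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# A tiny quad face of the mesh, by way of example.
mesh_edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cover = approx_vertex_cover(mesh_edges)
```

The result is guaranteed to be at most twice the size of an optimal cover, which is an acceptable trade-off for generating the skin's point mesh in reasonable time.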

    [0128] FIG. 8 further involves uploading secondary support media, which is performed by InjectSupportMedia 855, which transfers the necessary data to the DAQE for OnSupportMediaReceived 860 to initiate upload processing. The succeeding call, TextToMetaData 865, is a Natural Language Processing (NLP) filter that automatically produces metadata for the documents it processes. The sequence of procedures within this function relate to conversion of the text to lower case; the removal of common stop words, punctuation, blank spaces and meaning-neutral tokens; text stemming; the creation of a document matrix containing the frequency of words within the document; performing association analysis across frequent terms found in separate sentences; and reducing the most relevant words to a metadata list. The VideoObjectClassifiertoMetaData 870 function performs live object classification on such video footage as has been recorded in an acceptable file format, for the purposes of adding it into the presentation as secondary virtual skin objects, including the processing of text classifiers that are logged for further processing. Once the video has completed its run, the classifications logged are taken through a similar NLP filter, as already discussed, to produce metadata that can provide textual context for video material. This metadata extraction process is completed by RenderMetaDataReport 875, which matches the support media provided to the metadata produced for more efficient identification and consumption. This information is loaded to internal memory through the UpdateModel 880 function, and also saved by SaveModelToDisk 885. A JSON object identified by an internal header notification with the text value MetaDataReportProcessedOk 890 is dispatched by the DAQE, which is further processed by PrepareEducationalSupportLibrary 895 to supply the rendering engine with the appropriate visual/audio elements that were initially injected as support media and that are now available as digital components ready to be injected into the Educational Skin. PrepareEducationalSkin 900 and PrepareProcessedObjectsForRender 905 complete the rendering preparation cycle by supplying the rendering engine with the necessary constructs to complete the rendering cycle on all visual objects.
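The TextToMetaData sequence, minus stemming and association analysis, can be sketched in a few lines; the stop-word list here is a tiny illustrative subset:

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "and", "to", "is", "in", "from"}

def text_to_metadata(text, top_n=5):
    """Simplified TextToMetaData pipeline: lower-case the text, strip
    punctuation and stop words, count term frequencies, and reduce the
    most relevant words to a metadata list (stemming omitted for brevity)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    terms = [t for t in tokens if t not in STOP_WORDS]
    return [term for term, _ in Counter(terms).most_common(top_n)]

meta = text_to_metadata("The aorta carries blood from the heart. "
                        "The heart pumps blood.")
```

The resulting term list is the textual context attached to the document for later matching by RenderMetaDataReport-style reporting.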

    [0129] FIG. 9 illustrates a Core Data Model for Asset Catalogue that shows a data model and respective data relationships for conducting the core serialisation and set of queries pertinent thereto according to an embodiment herein. This representation is by no means the absolute or ideal implementation, but rather a representation of a viable solution without recourse to more detailed specifications that would not further the claims herein.

    [0130] FIG. 10 illustrates a Learning Monitoring and Habit Assessment Application (LMHA) Components and respective interactions of different software layers within an application enabled by CASTMDM components system according to an embodiment herein. In this instance the user, or circumstances (i.e. a scheduled assessment event), cause navigation through the different learning phases. Once the client application triggers the experience, that is, initialises the experience as a response to the trigger events configured, the Learning Phase Generator traps a series of events, from rendering the initial experience to the completion of an assessment, depending on the learning phase encountered. At each terminating event the LMHA Logger serialises the session, the results, or, more likely, both sequences of events into local storage. A delta synchronisation is performed by an application service, which determines what needs to be uploaded to the server, and performs said tasks as a background operation of the app.

    [0131] The instant the application is started or reawakened, an APP RESUME EVENT is fired 4000, and a background thread or service initialises to accommodate the LMHA by executing OnInitAppSynching 4007. Meanwhile, the user elects to navigate either the discovery or learning phase 4008 by appropriately interacting with their device. Whenever an experience is triggered by the client application in response to its trigger configuration, the TriggerExperience call 4010 is launched. This causes OnRenderExperience 4015 to render or activate the configured experience and execute LogStartOfSession 4020, which begins a new logging session for the LMHA Logger. OnChangeAssetNode 4025, acting on any input by the user that causes focus on a new AssetNodeContainer, or vertex cover, executes LogAssetNodeInformation 4030 with all the pertinent information required to formulate a log record, which has at least the information relating to the NavigationLinkage record of FIG. 12. Similarly, an ExperiencePause 4035 request caught by the LPG OnPauseExperience 4040 callback causes LogEndOfSession 4045, serialised by the NavigationLinkage record.

    [0132] The session is saved to LocalStorage by SerializeSession 4050, which permits the background app synching service, BASS, to FetchLMHAData 4052 at the most appropriate times. At these junctions the BASS packages the LMHA data with the process PrepareLMHAData 4054 and, when appropriate, executes PostLMHAData 4056 to the Platform Upload Service or end point.
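The delta determination itself can be as simple as comparing record identifiers against a last-synced watermark. A sketch assuming monotonically increasing log ids; the actual LMHA record layout is given by FIG. 12, not here:

```python
def delta_to_upload(local_records, last_synced_id):
    """Select the LMHA records not yet uploaded: the minimal form of the
    delta synchronisation the BASS performs as a background operation."""
    return [r for r in local_records if r["id"] > last_synced_id]

local = [{"id": 1, "event": "LogStartOfSession"},
         {"id": 2, "event": "LogAssetNodeInformation"},
         {"id": 3, "event": "LogEndOfSession"}]
pending = delta_to_upload(local, last_synced_id=1)
```

Only the pending records are posted to the upload end point, after which the watermark advances.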

    [0133] During Assessment Phases, depicted in FIG. 10, the user may trigger the beginning of the assessment activity, TriggerAssessment 4060, which is processed by OnRenderAssessment 4065, executing LogStartOfSession 4070, with the necessary information for the system to serialise the session as an assessment activity, since an assessment instance is initialised by the same call, referencing such instance by its AssessmentInstanceID (FIG. 12, NavigationLinkage). Every time a new question is completed by the user, OnCompleteQuestion 4080 submits the supplied answer, RequestFeedBack from the QAM 4085, and on receiving the feedback, logs the entirety of the exchange with LogResults 4087, transmitting such information to the LMHA Logger. Feedback is continuously displayed to the user by the client application's DisplayFeedback 4090 function. If the system throws a request to pause, causing PauseAssessment 4095 to fire, the LPG catches such request in OnAttemptPauseIncompleteAssessment 4100.

    [0134] OnAttemptPauseIncompleteAssessment 4100 attempts to notify the user that the assessment may not be paused through NotifyWarning 4102. The LPG executes LogPauseAttempt 4105, causing the LMHA Logger to SerializeResult 4106, which immediately prompts the BASS to FetchLMHAData 4107 and save such data in local storage. If the interruption is caused by the client application or system failure, then all the activity carried out by the student is safely recorded in Local Storage, including the exception description that caused the failure. This not only assists in correcting any bugs, but also in testifying on behalf of the student that the assessment activity pause was not caused as a means to evade the evaluation exercise. Should the pause not be caused by a system or app failure, and the student still insist on pausing their assessment after a warning has been dispatched 4102, the LPG, through its callback OnPauseIncompleteAssessment 4110, will LogIncompleteAssessment 4115; the LMHA Logger will SerializeResults 4117, and the BASS will FetchLMHAData 4118, to save to Local Storage. If the assessment is completed, the LPG OnCompleteAssessment 4120 executes LogCompleteAssessment 4125, then the LMHA Logger SerializeResults 4127 and the BASS executes FetchLMHAData 4128 to save to Local Storage. There is a final LPG RequestFeedback cycle 4134, which displays final results to the student 4135, which may include a digital trophy or micro certificate as part of the feedback if gradings and feedback configuration allow it. The client application displays the feedback 4140, representing the completion of the feedback cycle.

    [0135] FIG. 11 illustrates a Questions and Answers Data Model (QADM) according to an embodiment herein. The Questions and Answers Data Model (QADM) shows a data model and its data relationships for the purpose of supporting the core serialisation and set of queries pertinent to the implementation of the QAM components.

    [0136] FIG. 12 illustrates a Learning Monitoring and Habit Assessment (LMHA) Data Model according to an embodiment herein. The Learning Monitoring and Habit Assessment (LMHA) Data Model shows data entities and their relationships for the purpose of supporting the core serialisation and set of queries pertinent to the implementation of the LMHA components.

    [0137] FIG. 13 illustrates a Topic Wizard Data Model that shows a record structure, data fields and external relationships with other core data models according to an embodiment herein. The usage of Topic Wizard Data Model becomes evident as a collector of processed data for data staging processes, as well as the output of a hybrid recommender system that filters through the text-based material based on subject, or topic, correlation significance.

    [0138] FIG. 14 illustrates an Educational Skin Data Model (ESDM) according to an embodiment herein. The Educational Skin Data Model (ESDM) shows a data model and its data relationships for the purpose of supporting the core serialisation and set of queries pertinent to the implementation of the concept of an Educational Skin.

    [0139] FIG. 15 is a flow diagram illustrating a method of transforming 3D models or animations into 3D educational objects for providing educational experiences addressing personalised learning paths of students/trainees. At step 1500, the Learning Phase Generator converts the 3D models or animations into the 3D educational objects. The 3D educational objects are treated as a primary object to which sets of interactable digital assets are provisioned by the Learning Phase Generator. At step 1502, the central learning nodes design editor module implements real time learning object modification to transform the conventional 2D or 3D digital assets to the 3D educational objects. At step 1504, the Learning Phase Generator associates learning information that includes the 3D educational objects with specific locations on a surface of the primary object.

    [0140] The Learning Phase Generator re-packages the 3D educational object with the necessary associations of event triggers and media projection points to make up a contextual educational skin for the 3D educational object. The educational skin represents a virtual wrapping of the 3D educational object with a mesh of local points anywhere in proximity or contact with the 3D educational object, at which the students' or trainees' interaction, including virtual touching, will cause pre-configured events to occur.

    [0141] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the claims.