AUTOMATED SYSTEM FOR MAPPING ORDINARY 3D MEDIA AS MULTIPLE EVENT SINKS TO SPAWN INTERACTIVE EDUCATIONAL MATERIAL
20200202737 · 2020-06-25
Inventors
CPC classification
G06T19/20 (PHYSICS)
G09B7/00 (PHYSICS)
G06F3/011 (PHYSICS)
G06V20/653 (PHYSICS)
G06V20/41 (PHYSICS)
G06T19/00 (PHYSICS)
G09B23/00 (PHYSICS)
International classification
G09B7/00 (PHYSICS)
Abstract
A method of transforming 3D models or animations into 3D educational objects for providing educational experiences addressing the personalised learning paths of students or trainees is provided. The method comprises converting (1500) the 3D models or animations into the 3D educational objects, wherein the 3D educational objects are treated as a primary object to which sets of interactable digital assets are provisioned by a Learning Phase Generator; transforming (1502) conventional 2D or 3D digital assets into the 3D educational objects by implementing real-time learning object modification; and associating (1504) learning information that comprises the 3D educational objects with specific locations on a surface of the primary object.
Claims
1. A system for automatically transforming 3D models or animations into 3D educational objects for providing educational experiences as sessions addressing personalised learning paths of students or trainees, said system comprising: a Learning Phase Generator that converts said 3D models or animations into said 3D educational objects, wherein said 3D educational objects are treated as a primary object to which sets of interactable digital assets are provisioned by the Learning Phase Generator; and a central learning nodes design editor module that implements real-time learning object modification that aids in the transformation of conventional 2D or 3D digital assets to said 3D educational objects, wherein said Learning Phase Generator associates learning information that comprises said 3D educational objects with specific locations on a surface of said primary object, wherein said Learning Phase Generator re-packages said 3D educational object with necessary associations of event triggers and media projection points to make up a contextual educational skin for said 3D educational object, and wherein said educational skin represents a virtual wrapping of said 3D educational object with a mesh of local points anywhere in proximity or contact with said 3D educational object, with which said students' or trainees' interaction, including virtual touching, will cause pre-configured events to occur in the appropriate manner.
2. The system as claimed in claim 1, wherein said virtual touching represents touching an augmented reality or a virtual reality projection at a specific point, or user input that will produce a reaction for said event, and wherein said educational skin represents projecting context-based information in any digital format that is contextually relevant to such a situation, whereby said students or trainees interact with specific hot spots within the nominated regions of said 3D educational object to activate the functioning of configured components as related to a learning phase.
3. The system as claimed in claim 1, wherein said session comprises an augmented reality or virtual reality projection of said 3D educational objects, and wherein said 3D educational objects comprise interactive educational materials that are used to strengthen a learning experience of said students or trainees.
4. The system as claimed in claim 1, comprising a digital assets query engine that recurses through the text-based definition of a 3D object to capture defined individual vertex offsets; UV mapping for each texture coordinate vertex; faces that organise polygons into the object's list of vertices; texture vertices; vertex normals; and any other peripheral data in sequential order, and automatically classifies the names of such nodes to provide the means to instantiate said educational skin on said 3D model; and a question and answers module that enables an administrator or a content manager to rely on their designated ranges of gradings to configure the run-time logic to statistically trigger system-produced, contextually valid evaluation statements at run-time in response to each answer supplied by said student or trainee involved in completing an assessment activity.
5. The system as claimed in claim 1, comprising a learning process evaluator that supports requirements of autodidacticism by reinforcing learning through access to past learning decisions as personal progress reports or video interactives and fulfilling requirements of utilising discovery or exploration in both directed and undirected fashions.
6. The system as claimed in claim 1, comprising: a natural language processing and hybrid recommender module that assists an educator to automatically generate classification metadata for complex text material across a complete set of course resources associated with said 3D educational objects, wherein said natural language processing and hybrid recommender module is configured to identify metadata and associated subjects and topics of interest, which is extracted from said text material to add as primary or peripheral learnings to enhance specific learning experiences of students or trainees; and a visual object processing and classification module that assists said educator to automatically generate the classification metadata for complex video footage for said complete set of course resources.
7. The system as claimed in claim 1, comprising: a learning phase evaluator module that provides students or trainees with evaluations on their progress at any point of a learning cycle as generated by sub components of a question and answers module, wherein said learning phase evaluator module generates micro certification for said students or trainees based on their learning cycle, which is symbolised by digital trophies and awards that can be visible to other students or trainees.
8. The system as claimed in claim 1, comprising: a learning monitoring and habit assessment module that generates said augmented reality or said virtual reality exploration or navigation maps that can be extrapolated to quantify relevant variables attached to measure the degree to which knowledge or skills have been transferred through said 3D objects, wherein said degree comprises a report for tracking a progress of said students or trainees and overall success of a course.
9. The system as claimed in claim 1, wherein said 3D educational objects comprise educational materials, different classifications of said educational materials, and related video footage, and wherein said 3D educational objects can be selected by said students or trainees through various forms of said virtual touching, as configured by said administrator or content manager.
10. A method of transforming 3D models or animations into 3D educational objects for providing educational experiences addressing personalised learning paths of students or trainees, said method comprising: converting (1500) said 3D models or animations into said 3D educational objects, wherein said 3D educational objects are treated as a primary object to which sets of interactable digital assets are provisioned by a Learning Phase Generator; transforming (1502) conventional 2D or 3D digital assets into said 3D educational objects by implementing real-time learning object modification; and associating (1504) learning information that comprises said 3D educational objects with specific locations on a surface of said primary object.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0084] The embodiments herein will be better understood from the following detailed description with reference to the drawings.
[0085] In order that the present disclosure may be readily understood and put into practical effect, reference will now be made to embodiments of the present disclosure with reference to the accompanying drawings, wherein like reference numbers refer to identical elements. The drawings are provided by way of example only, wherein:
[0101] Skilled addressees will appreciate that elements in the drawings are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the relative proportions of some of the elements in the drawings may be simplified to help improve understanding of embodiments. In other instances, the possibility of system calls or procedure failures is not illustrated; the sequences of events or system tasks that would log exceptions, present warning dialogs to the user, or show the system gracefully terminating after a failure are omitted, although it may be safely assumed that such cases would be catered for in any implementation or version of the present disclosure.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
[0102] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0104] In one embodiment, users are students or trainees. The user attempts to log in to a content management module 5 by entering their credentials through the client application 10, which through the API layer transmits an authentication request to the Central Processor or the content management module 20. The user credentials are authenticated through a provider request to the assigned Authentication Service 22. If the content management module fails to authenticate a request 25, a login failure message is passed back to the client application 30. The login failure is displayed to the user through a warning dialog 35. If the authentication request succeeds 40, the client application requests the Asset Catalogue 45 from an online API; the Asset Catalogue is a configuration manifest describing the assets that require download and the default behaviour of each asset as prescribed by their learning phase and learning complexity context and their subscribed use as either AR or VR presentations. The Asset Catalogue is coupled to a Content Management tenant, or faculty, and a specific channel or course that has been set up for such purpose. The Get Assets Catalogue API call supplies a tenant/channel identifier 50. The Central Processor layer creates the appropriate query using the call Fetch Assets Catalogue 55, which targets the database system staged by the Central Storage system; the database responds with an empty Asset Catalogue in case of any failure, or with an appropriately constructed Asset Catalogue 60. The client application's call-back procedure, on assets received from the central processor 65, serializes the Asset Catalogue in local storage 70, whether the manifest is empty or contains the appropriate configuration. The app provides the appropriate failure message to the user in cases where the Asset Catalogue is empty at step 105. If the Asset Catalogue contains items that require download from the content management module, IfRequiresAssetDownload 72, then the DoAssetCatalogue function 75 in turn deploys the Get Assets API call 80. The Central Processor prepares a FetchAssets Query 85, which asynchronously retrieves the required digital assets from Central Storage as binary files that are transmitted through the network to the client application through the Assets Fetch Response 90; once the call back on assets received 95 returns a success response, the client application serializes the files in local storage 100. Once all downloads are completed, or no downloads are necessary, the client application's display-app-readiness function displays a message to the user indicating that all expected downloads have been completed, that no downloads were necessary because the digital resources had already been downloaded, or that a failure has occurred 105. The user then responds to the client application in whatever fashion is appropriate, that is, to continue or exit the app session respectively.
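By way of illustration only, the following client-side sketch traces the catalogue fetch, caching, and conditional download steps 45-105 described above. The endpoint path, the shape of the catalogue items, and helper names such as syncAssets, notifyUser, and cacheBinaryAsset are assumptions, not identifiers used by the disclosure.

```typescript
// Hypothetical client-side sketch of the asset-catalogue flow (steps 45-105).
// Endpoint paths, field names, and helpers are illustrative assumptions.
interface AssetCatalogueItem {
  assetId: string;
  url: string;
  learningPhase: string;        // learning phase and complexity context
  presentation: "AR" | "VR";    // subscribed use as AR or VR presentation
  requiresDownload: boolean;
}

declare function notifyUser(msg: string): void;                            // UI hook, assumed
declare function cacheBinaryAsset(id: string, data: Blob): Promise<void>;  // local cache, assumed

async function syncAssets(tenantId: string, channelId: string): Promise<void> {
  // Get Assets Catalogue: the call supplies a tenant/channel identifier (step 50).
  const res = await fetch(`/api/catalogue?tenant=${tenantId}&channel=${channelId}`);
  const catalogue: AssetCatalogueItem[] = res.ok ? await res.json() : [];

  // Serialize the catalogue in local storage, empty or populated (step 70).
  localStorage.setItem("assetCatalogue", JSON.stringify(catalogue));

  if (catalogue.length === 0) {
    notifyUser("No course assets are available for this channel."); // failure path (step 105)
    return;
  }

  // IfRequiresAssetDownload (step 72): fetch only assets that still need downloading.
  const pending = catalogue.filter(a => a.requiresDownload);
  for (const asset of pending) {
    const bin = await fetch(asset.url);                      // Assets Fetch Response (step 90)
    await cacheBinaryAsset(asset.assetId, await bin.blob()); // serialize locally (step 100)
  }
  notifyUser(pending.length > 0
    ? "All expected downloads have been completed."
    : "No downloads were necessary.");                       // readiness message (step 105)
}
```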
[0107] At step 250, the administrator or the content manager requests the online portal to display the CLNDE page. At step 255, the online portal activates the CLNDE at the API layer. At step 270, the online portal calls back the render CLNDE function. At step 275, the online portal displays the rendered CLNDE page to the administrator or the content manager. At step 280, the event loop is called, which includes keyboard/mouse events or screen touch events.
[0109] The authenticated administrator or content manager 300 selects the CLNDE through the on-line portal 305, which fires the start CLNDE command 310 through the API layer, causing the Central Processor to execute init CLNDE 315, which initialises the CLNDE. The CLNDE ready notification 320 is received by the on-line portal layer using display CLNDE 325. The online portal projects the wizard for the administrator's consumption 330. The administrator loads the primary model and support media, if required 335. The post raw media call 340 induces central processing to process the posts via the on receive raw media call back 345, which when complete uploads the primary object through the UPLOAD PRIMARY TO DAQE call 350. The CLNDE processes this payload via OnReceivePrimaryData 352, which turns the object stream into a text-based format (e.g. the .OBJ format), if the data is not already in that format. This primary object, in .OBJ format, is sent to the DAQE Engine for further processing via the InjectPrimary call 355. The rest of the support material, if any, is also routed to the CLNDE via UploadSupportMediatoDAQE 365, whose payload is processed by OnReceiveSupportData 357, which optimises the original support components (for instance, video/audio/static pictures/text/other 3D objects) into more efficient data streams for the consumption of mobile devices, prior to packaging, and uses InjectSupportMedia 370 to route the reworked support data into the DAQE Engine for final processing 360. Once the DAQE Engine has completed the tasks related to this process, it responds with the event ProcessedObjectsOk 375 to the Central Learning Nodes Design Editor, indicating that the data model and relations illustrated in the accompanying drawings are in place.
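A possible shape for the media-routing step just described (calls 350-370) is sketched below. The DaqeEngine interface and the conversion helpers are illustrative assumptions, and the .OBJ check is merely one plausible heuristic.

```typescript
// Illustrative sketch of the CLNDE media-routing step (calls 350-370).
// The DaqeEngine interface and conversion helpers are assumptions.
interface DaqeEngine {
  injectPrimary(objText: string): void;           // InjectPrimary (355)
  injectSupportMedia(stream: Uint8Array): void;   // InjectSupportMedia (370)
}

declare function convertToObj(raw: Uint8Array): string;           // format conversion, assumed
declare function transcodeForMobile(raw: Uint8Array): Uint8Array; // media optimisation, assumed

// A cheap heuristic: .OBJ files are plain text whose records start with
// "#", "v", "vt", "vn", "f", "o", or "mtllib".
function looksLikeObjText(raw: Uint8Array): boolean {
  const head = new TextDecoder().decode(raw.slice(0, 256));
  return /^(#|v |vt |vn |f |o |mtllib )/m.test(head);
}

// OnReceivePrimaryData (352): convert to .OBJ text if needed, then inject.
function onReceivePrimaryData(raw: Uint8Array, daqe: DaqeEngine): void {
  const objText = looksLikeObjText(raw)
    ? new TextDecoder().decode(raw)
    : convertToObj(raw);
  daqe.injectPrimary(objText);
}

// OnReceiveSupportData (357): optimise video/audio/images/text/other 3D
// objects into streams efficient enough for mobile devices, then inject.
function onReceiveSupportData(media: Uint8Array, daqe: DaqeEngine): void {
  daqe.injectSupportMedia(transcodeForMobile(media));
}
```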
[0111] The Authenticated Admin/Content Manager is already working with the on-line portal tools and the system is tracking all user events through an event loop 400. The administrator elects to modify the Educational Skin by reference to the appropriate selection presented by the on-line portal 405, whereupon the on-line portal fires PostNewNodeParams 410, which prompts Central Processing to execute ConfigureNodeOptions 415, aligning the Central Learning Nodes Design Editor to present Node_Category; Node_Category_Association; Node_Category_Antagonistics; Node_Name; and Node_Learning_Complexity_Levels as configuration options 420. The DAQE Engine prepares the educational skin data model and its counterpart, the interactive components of the Educational Skin, through OnConfigurePrimaryObjectNode 425 to provision the system with the read and write permissions that will modify the skin data model, including the trigger configuration. A copy of the current primary skin data model and its event trigger configuration is saved to disk by SaveToDisk 430. The on-line portal receives the JSON package, identified by an internal header notification with the text value PrimaryModel 435, through the DisplayPrimaryModel callback 440. When the Educational Skin is enabled, the administrator can begin to modify or integrate further digital material 445. Simultaneously, as new digital material is injected as an add-on to the educational skin, the trigger mechanism of the AssetNodeContainer can also be modified; the changes are communicated up the system hierarchy by PostNodeTrigger 450, fired by the on-line portal layer, which induces central processing to execute ConfigureNodeTrigger 455, which, attentive to its parameters 460, can assign any of the trigger configuration options described by Experience Trigger Setup in section 034b. The call-back OnConfigurePrimaryObjectNodeTrigger 465 causes a save of the modifications of the object by executing SaveToDisk 470, immediately after which Central Processing is induced to PrepareRenderObjects 475, ensuring that the primary and support media modification processes have been synchronised before finally causing the on-line portal to RenderEducationalObjects 480. After all the Educational Skin modifications are completed, the administrator may elect to begin work with the Learning Phase Editor to configure educational phases 485.
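The configuration options listed at step 420 and the trigger assignment of steps 450-470 suggest a data shape along the following lines. The option names are taken from the description; the field types and the persistence callback are assumptions.

```typescript
// Sketch of the node configuration options (step 420) and trigger assignment
// (steps 450-470). Field types and the saveToDisk signature are assumptions.
type TriggerType = "GPS" | "ImageRecognition" | "ObjectRecognition"
                 | "BluetoothFencing" | "SiteRecognitionSLAM";

interface AssetNodeConfig {
  Node_Category: string;
  Node_Category_Association: string[];    // related categories (assumed shape)
  Node_Category_Antagonistics: string[];  // categories excluded from co-triggering (assumed)
  Node_Name: string;
  Node_Learning_Complexity_Levels: number[];
}

interface AssetNodeContainer {
  label: string;              // AssetNodeContainerLabel
  config: AssetNodeConfig;
  trigger?: TriggerType;      // assigned by ConfigureNodeTrigger (455)
}

// ConfigureNodeTrigger: assign a trigger option, then persist (SaveToDisk, 470).
function configureNodeTrigger(
  node: AssetNodeContainer,
  trigger: TriggerType,
  saveToDisk: (n: AssetNodeContainer) => void,
): void {
  node.trigger = trigger;
  saveToDisk(node);
}
```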
[0113] The Authenticated Admin/Content Manager is at a stage ready to set up the Learning Phases 500. The administrator selects the option to StartPhaseWizard, fired by the display in the on-line portal 505, which causes central processing to InitPhaseWizard 510, that is, to initialise the Phase Wizard, which loads default configurations to set up the initial phases that the administrator can utilise to begin their build by calling GetPhaseConfigs 515 from the Learning Phase Editor. Immediately after the default phase configuration is loaded from Central Storage, it is also copied to internal memory for later use 520. A JSON package, with an internal header notification with the text value WizardInitOk 525, is sent back as a positive response, which is processed by the on-line portal function DisplayPhaseWizard 530. The administrator works through the Phase Wizard options 535, and the wizard in turn responds to the administrator input through an event processing poll 540. As the administrator works through the configuration options available, they are presented with a series of activities which include ConfigurePhaseHeader 550; ConfigurePhaseBodyPart 555; and ConfigurePhaseDataNodeRelationsWithLearningInformation 560 (a figurative term for the sake of clarity, rather than its proper functional name).
[0114] ConfigurePhaseHeader 550 relies on a template generator, which defaults to prompting the administrator to create a new HeaderTitle, or accept the default, denoting the learning phase title. The system includes suggested HeaderTitles such as the Discovery (also known as Exploration), Learning, and Assessment Phases, although the administrator is free to create new learning phase labels. The system prompts the administrator to provide a Description and Summary of Objectives for that specific phase. This is important to remind the administrator that each phase created is a container of properties and triggers that may be executed within the domain of that phase.
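A minimal sketch of such a phase-header template, assuming the Description and Summary of Objectives are simple text fields:

```typescript
// Minimal sketch of a phase-header template per [0114]; field names beyond
// HeaderTitle paraphrase the prompts and are assumptions.
interface LearningPhaseHeader {
  HeaderTitle: string;          // e.g. "Discovery", "Learning", "Assessment"
  Description: string;
  SummaryOfObjectives: string;
}

// Suggested defaults the administrator may accept or replace with new labels.
const suggestedPhases: LearningPhaseHeader[] = [
  { HeaderTitle: "Discovery",  Description: "", SummaryOfObjectives: "" },
  { HeaderTitle: "Learning",   Description: "", SummaryOfObjectives: "" },
  { HeaderTitle: "Assessment", Description: "", SummaryOfObjectives: "" },
];
```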
[0115] During the ConfigurePhaseBodyPart stage 555, the administrator is prompted by the system to review the unedited educational skin, represented by the virtual wrapping of a 3D object with a mesh of local points anywhere in proximity or contact with the 3D object supplied as the primary object of focus. The LPE provides a visual editor with a 3D projection system that permits the administrator to peruse the objects displayed in the editor through the full range of viewing angles and scales. The unedited educational skin, wrapped around the primary object as initially generated by the DAQE, provides the means to focus on any point of interest on any part of the primary 3D object/animation, with which the administrator interacts to substitute such points of interest with interactive anchors. These interactive anchors are the loci to which other visual objects, interactive elements, or even sound schemas simulate their tethering, to inject contextually valid educational elements. This network of interactive anchors also injects event notifiers for any number of user or system events (touch or mouse events, keyboard input, eye gazing or visually recognised gesture tracking, point location mapping, or entering or leaving a configured proximity fence and such like).
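An interactive anchor as described here might be modelled as follows; the event list is compiled from the examples in this paragraph, while the listener mechanism is an assumed design.

```typescript
// Sketch of an interactive anchor: a point on the educational skin that
// tethers media and publishes user/system events. Names are assumptions.
type AnchorEvent = "touch" | "mouse" | "keyboard" | "eyeGaze" | "gesture"
                 | "enterProximityFence" | "leaveProximityFence";

interface InteractiveAnchor {
  position: [number, number, number];   // locus on the skin mesh
  tetheredMedia: string[];              // visual objects, elements, sound schemas
  listeners: Partial<Record<AnchorEvent, () => void>>;
}

// Fire the configured event notifier for an anchor, if one exists.
function notifyAnchor(anchor: InteractiveAnchor, event: AnchorEvent): void {
  anchor.listeners[event]?.();
}
```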
[0116] During the Configure Phase Data Node Relations With Learning Information stage 560, the LPE provides the ability to mark any point in the educational skin with a score range set up by the administrator, from the lowest score, signifying nil focus requirements, to the maximum, indicating crucial focus priority, in reference to one or any of the Learning Phases already configured. This action sets up the level of focus relevance by phase and item, i.e. LFRPHI scoring. The LFRPHI scoring offers a prioritisation referencing prompt that administrators rely on to later adjust the learning phases, types, frequency, and learning activities placed on such markers, so as to increase the value of the learning experience once progress outcomes have been captured and analysed. Under the LFRPHI schema, every point in the educational skin has a varying degree of focus priority, requiring different levels of educational planning and implementation from one phase to another, as each level of focus precedence may be lowered or heightened depending on the objectives and requirements of that learning phase. An ML hybrid (MLH) recommender system employs a multi-stage approach to select reference metadata as the system iterates through the educational documentation accessible to it, automatically extracting document-specific implicit and explicit metadata, such as topic name (which for the purpose of this system is also known as the AssetNodeContainerLabel in the AssetNodeContainer structure), subject information, and other identifying characteristics. The MLH also carries out a cross-reference analysis using a theme relevance scoring system to cluster documents with common elements together. This information is used by the administrator to perform intelligent searches on topics that will bring more focussed and detailed information into the subjects sought, especially as LFRPHI scoring is applied, assisting the system to sort and match through results bounded by the immediate context of the learning phase, which imparts relevance to the interactive anchors that implicitly enforce it as the centrepiece for that phase.
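A toy rendering of these two ideas appears below: an LFRPHI mark per skin point, and theme-relevance clustering via cosine similarity of term counts. The disclosure fixes neither the score range nor the similarity measure, so both are assumptions.

```typescript
// Toy sketch of LFRPHI marks and MLH theme-relevance clustering.
interface LfrphiMark {
  phaseTitle: string;    // one of the configured Learning Phases
  skinPointId: string;   // marked point in the educational skin
  score: number;         // 0 = nil focus requirement ... 10 = crucial focus priority (assumed range)
}

// Theme relevance between two documents as cosine similarity of term counts.
function themeRelevance(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0, na = 0, nb = 0;
  for (const [term, fa] of a) { dot += fa * (b.get(term) ?? 0); na += fa * fa; }
  for (const fb of b.values()) nb += fb * fb;
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

// Cluster documents whose theme relevance to a cluster's seed passes a threshold.
function clusterByTheme(docs: Map<string, number>[], threshold = 0.4): number[][] {
  const clusters: number[][] = [];
  docs.forEach((_, i) => {
    const home = clusters.find(c => themeRelevance(docs[c[0]], docs[i]) >= threshold);
    if (home) home.push(i); else clusters.push([i]);
  });
  return clusters;
}
```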
[0117] Where the system requires QAM support 565, the system provides such abilities, with reference to the detailed description of critical interactions illustrated in the accompanying drawings.
[0119] Whenever the QAM is engaged, the subsequent actions detailed in the accompanying drawings follow.
[0120] During the question creation stage, depicted in the accompanying drawings, the following interactions occur.
[0121] The CreateQuestion 670 function is submitted through the on-line portal and is processed by the QAM in PrepareQ&AList 675. The context and functionality of the PrepareQ&AList output generate different types of assessments, which automatically produce assessment instances for the different contexts (i.e. the learning phase and complexity level required). This list of Q&A instance recommendations is packaged and displayed to the administrator by DisplayPotentialQ&AList 680.
[0122] The administrator proceeds by accepting, editing, rejecting, or reformatting parts of the assessment instance for the Question and Answer set provided by the system 685. The edited listing is posted back by the administrator through the on-line portal PostAcceptance function 690. The re-edited list is once again processed by the QAM in PrepareQuestionSet 695, which returns the appropriate instance type set-up and the question and answer set pertinent to that type of assessment. The revised question set is displayed by the function DisplayQuestionSet 710, which displays the set in both student and administrator view options. Finally, the administrator approves and saves the assessment instance 715 by opting for Save 720, which executes a SaveToDisk instruction 725, causing the QAM to serialise the assessment instance to disk, SerializeToDisk 730.
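Claim 4 further describes run-time logic, keyed to designated grading ranges, that statistically triggers contextually valid evaluation statements in response to each answer. A toy sketch of that idea follows; the bands, messages, and random selection strategy are invented for illustration.

```typescript
// Toy sketch of claim 4's grading-range logic: evaluation statements chosen
// statistically by grade band. Bands and messages are assumptions.
interface GradeBand { min: number; max: number; statements: string[] }

const bands: GradeBand[] = [
  { min: 0,  max: 49,  statements: ["Review the phase material and retry."] },
  { min: 50, max: 79,  statements: ["Good progress; revisit the weaker topics."] },
  { min: 80, max: 100, statements: ["Excellent command of this learning phase."] },
];

// Pick a contextually valid statement at random from the matching band.
function evaluationStatement(score: number): string {
  const band = bands.find(b => score >= b.min && score <= b.max);
  const pool = band?.statements ?? ["No feedback configured."];
  return pool[Math.floor(Math.random() * pool.length)];
}
```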
[0124] An EventTriggerType can be any of the following: GPS, signifying that on the device detecting specific GPS locations, or proximity to such a location, the linked activity will be rendered or activated; Image Recognition, signifying that whenever the system identifies a specific image the linked activity will be rendered or activated; Object Recognition, signifying that whenever the system classifies an object of a specific class the linked activity will be rendered or activated; Bluetooth Fencing, signifying that whenever the system detects specific Bluetooth identifiers the linked activity will be rendered or activated; and Site Recognition, with simultaneous localization and mapping (SLAM), signifying that whenever the system detects a specific site through image or object recognition, the linked activity will be rendered or activated following SLAM principles.
[0125] An ActionType can be any of the following: Render Raw Node, signifying that only the primary object should be visible; Render Node with Extra Media, signifying that the connected but external media object should also be presented; Remain Invisible, signifying that the primary object should be made invisible; Render Sound Only, signifying that the primary object should be made invisible but the linked sound file should be played; and Use Transparency Rules, signifying that either the primary or secondary object should be made transparent by a certain percentage as configured.
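Transcribing these two vocabularies into code gives a compact dispatch table; the union types mirror [0124] and [0125], while the scene interface and dispatch routine are illustrative assumptions.

```typescript
// The trigger and action vocabularies of [0124]-[0125] as TypeScript unions.
type EventTriggerType = "GPS" | "ImageRecognition" | "ObjectRecognition"
                      | "BluetoothFencing" | "SiteRecognitionSLAM";

type ActionType = "RenderRawNode" | "RenderNodeWithExtraMedia" | "RemainInvisible"
                | "RenderSoundOnly" | "UseTransparencyRules";

interface ExperienceConfig {
  trigger: EventTriggerType;
  action: ActionType;
  transparencyPercent?: number;  // only meaningful for UseTransparencyRules
}

interface Scene {                // rendering hooks, assumed
  show(withExtraMedia: boolean): void;
  hide(): void;
  playSound(): void;
  setOpacity(percent: number): void;
}

// Resolve the configured ActionType once the trigger has fired.
function onTriggerFired(cfg: ExperienceConfig, scene: Scene): void {
  switch (cfg.action) {
    case "RenderRawNode":            scene.show(false); break;
    case "RenderNodeWithExtraMedia": scene.show(true);  break;
    case "RemainInvisible":          scene.hide();      break;
    case "RenderSoundOnly":          scene.hide(); scene.playSound(); break;
    case "UseTransparencyRules":     scene.setOpacity(cfg.transparencyPercent ?? 50); break;
  }
}
```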
[0127] The authenticated administrator uploads the primary model, which causes central processing to post the data through InjectPrimaryObject 800 to the DAQE. This primary object is in the form of an .OBJ text file. After the data transfer to the DAQE has occurred without error, OnPrimaryEducationalObjectReceived 805, the DAQE uses InterrogateModelsVertices 815, which recurses through every vertex offset, the UV mapping for each texture coordinate vertex, the faces that organise polygons into the object's list of vertices, the texture vertices, and the vertex normals to generate a mapping of the object to support the ultimate creation of the Educational Skin. ListPotentialVertexCovers 815 identifies every point in this map that may be used by the system as a vertex cover, or node cover, in which a set of vertices is such that each edge of the graph is incident to at least one vertex of the set. Because this is an NP-hard optimisation problem, the system applies an approximation algorithm to resolve the maximum set of vertex covers used by the system as the Educational Skin. CreateDataMap 820 transforms the vertex covers into a location mesh that can be wrapped around the primary 3D object. The CreateDefaultTouchSurface data model 825 provides the means to configure how the user or student will interact with the different regions of the Educational Skin, in terms of TriggerEventType events. The Educational Skin is completed by the function CreateDefaultEducationalSkin 830, which adds the ActionType to resolve the activities that will respond to the already configured events and packages the JSON object identified by an internal header notification with the text value PrimaryObjectsProcessedOK 845. The UpdateModel 835 function finally integrates the initial 3D model with its new Educational Skin. This conjoint model is saved by SaveModelToDisk 840. The JSON object is posted back to central processing 845, where the function PreparePrimaryObjectForRender 850 hooks up the configuration to the rendering system.
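The vertex interrogation and cover selection can be sketched as follows. The .OBJ parsing follows the standard format; the disclosure names vertex cover as an NP-hard problem solved by approximation without fixing the algorithm, so the classic matching-based 2-approximation below is an assumption.

```typescript
// Sketch of the DAQE step in [0127]: parse .OBJ text for vertices and faces,
// build the edge list, and approximate a vertex cover.
function parseObj(objText: string) {
  const vertices: [number, number, number][] = [];
  const edges: [number, number][] = [];
  for (const line of objText.split("\n")) {
    const parts = line.trim().split(/\s+/);
    if (parts[0] === "v") {
      vertices.push([+parts[1], +parts[2], +parts[3]]); // vertex offsets
    } else if (parts[0] === "f") {
      // Faces organise polygons into the object's list of vertices;
      // "7/1/3" means vertex 7, texture vertex 1, vertex normal 3 (1-based).
      const ids = parts.slice(1).map(p => parseInt(p.split("/")[0], 10) - 1);
      for (let i = 0; i < ids.length; i++) {
        edges.push([ids[i], ids[(i + 1) % ids.length]]); // polygon boundary edges
      }
    }
  }
  return { vertices, edges };
}

// Matching-based 2-approximation: take both endpoints of any uncovered edge.
function approxVertexCover(edges: [number, number][]): Set<number> {
  const cover = new Set<number>();
  for (const [u, v] of edges) {
    if (!cover.has(u) && !cover.has(v)) { cover.add(u); cover.add(v); }
  }
  return cover; // points eligible to host educational-skin nodes
}
```

The matching-based routine is guaranteed to return a cover at most twice the optimum size, which is why it is a common stand-in when computing an exact minimum vertex cover is impractical.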
[0131] The instant the application is started or reawakened, an APP RESUME EVENT is fired 4000, and a background thread or service initialises to accommodate the LMHA by executing OnInitAppSynching 4007. Meanwhile, the user elects to navigate either the discovery or learning phase 4008 by appropriately interacting with their device. Whenever an experience is triggered by the client application in response to its trigger configuration, the TriggerExperience call 4010 is launched. This causes OnRenderExperience 4015 to render or activate the configured experience and execute LogStartOfSession 4020, which begins a new logging session for the LMHA logger. OnChangeAssetNode 4025, acting on any input by the user that causes focus on a new AssetNodeContainer, or vertex cover, executes LogAssetNodeInformation 4030 with all the pertinent information required to formulate a log record, which has at least the information relating to the session and the asset node in focus.
[0132] The session is saved to LocalStorage by SerializeSession 4050, which permits the background app synching service, BASS, to FetchLMHAData 4052 at the most appropriate times. At these junctures the BASS packages the LMHA data with the process PrepareLMHAData 4054 and, when appropriate, executes PostLMHAData 4056 to the Platform Upload Service or end point.
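A condensed sketch of the logger and the BASS hand-off; the record fields and storage strategy are assumptions, while the call names echo the description.

```typescript
// Sketch of the LMHA logging session and background sync ([0131]-[0132]).
interface AssetNodeLogRecord {
  sessionId: string;
  assetNodeLabel: string;   // AssetNodeContainerLabel in focus
  enteredAt: number;        // epoch ms when focus moved to this node (assumed field)
  event: string;            // triggering user/system event
}

class LmhaLogger {
  private records: AssetNodeLogRecord[] = [];

  logStartOfSession(sessionId: string): void {     // LogStartOfSession (4020)
    this.records = [];
    console.info(`LMHA session ${sessionId} started`);
  }
  logAssetNodeInformation(rec: AssetNodeLogRecord): void { // LogAssetNodeInformation (4030)
    this.records.push(rec);  // one record per change of focused AssetNodeContainer
  }
  serializeSession(): string {                      // SerializeSession (4050)
    return JSON.stringify(this.records); // saved to LocalStorage for the BASS
  }
}

// Background app synching service (BASS): fetch, package, and post the data.
async function bassSync(logger: LmhaLogger, uploadUrl: string): Promise<void> {
  const payload = logger.serializeSession();                 // FetchLMHAData (4052)
  await fetch(uploadUrl, { method: "POST", body: payload }); // PostLMHAData (4056)
}
```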
[0133] During Assessment Phases, depicted in the accompanying drawings, an attempt to interrupt an assessment in progress invokes OnAttemptPauseIncompleteAssessment 4100, which notifies the user that the assessment may not be paused through the NotifyWarning 4102.
[0134] The LPG executes LogPauseAttempt 4105, causing the LMHA Logger to SerializeResult 4106, which immediately prompts the BASS to FetchLMHAData 4107 and save such data in local storage. If the interruption is caused by the client application or a system failure, then all the activity carried out by the student is safely recorded in Local Storage, including the exception description that caused the failure. This not only assists in correcting any bugs, but also in testifying on behalf of the student that the assessment activity pause was not caused as a means to evade the evaluation exercise. Should the pause not be caused by a system or app failure, and the student still insist on pausing their assessment after a warning has been dispatched 4102, the LPG, through its call back OnPauseIncompleteAssessment 4110, will LogIncompleteAssessment 4115; the LMHA logger will SerializeResults 4117, and the BASS will FetchLMHAData 4118, to save to Local Storage. If the assessment is completed, the LPG OnCompleteAssessment 4120 executes LogCompleteAssessment 4125, then the LMHA logger SerializeResults 4127 and the BASS executes FetchLMHAData 4128 to save to Local Storage. There is a final LPG RequestFeedback cycle 4134, which displays final results to the student 4135, which may include a digital trophy or micro certificate as part of the feedback if the gradings and feedback configuration allow it. The client application displays the feedback 4140, representing the completion of the feedback cycle.
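The interruption handling just described reduces to three logged outcomes; the sketch below assumes the outcome labels and the failure-capture shape, which the disclosure describes only in prose.

```typescript
// Sketch of the assessment interruption logic in [0133]-[0134].
type AssessmentOutcome = "pauseAttempt" | "incomplete" | "complete";

interface AssessmentLogEntry {
  outcome: AssessmentOutcome;
  exception?: string;   // failure description, recorded so the pause is not
                        // mistaken for an attempt to evade the evaluation
  timestamp: number;
}

function handleInterruption(
  byFailure: boolean,
  userInsists: boolean,
  error: string | undefined,
  log: (e: AssessmentLogEntry) => void,
): void {
  if (byFailure) {
    // App/system failure: record activity plus the exception description.
    log({ outcome: "incomplete", exception: error, timestamp: Date.now() });
  } else if (userInsists) {
    // Warned, but the student still pauses: LogIncompleteAssessment path (4115).
    log({ outcome: "incomplete", timestamp: Date.now() });
  } else {
    // Warned, and the student resumes: LogPauseAttempt path (4105).
    log({ outcome: "pauseAttempt", timestamp: Date.now() });
  }
}
```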
[0140] The Learning Phase Generator re-packages the 3D educational object with the necessary associations of event triggers and media projection points to make up a contextual educational skin for the 3D educational object. The educational skin represents a virtual wrapping of the 3D educational object with a mesh of local points anywhere in proximity or contact with the 3D educational object, at which pre-configured events will occur through virtual touching.
[0141] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the claims.