A METHOD AND SYSTEM FOR TRAINING A MACHINE LEARNING MODEL FOR CLASSIFICATION OF COMPONENTS IN A MATERIAL STREAM
20230169751 · 2023-06-01
Inventors
CPC classification
G06V10/774
PHYSICS
B07C5/342
PERFORMING OPERATIONS; TRANSPORTING
International classification
Abstract
A method and system for training a machine learning model configured to perform characterization of components in a material stream with a plurality of unknown components. A training reward associated with each unknown component within the plurality of unknown components in the material stream is determined, based on which at least one unknown component is physically isolated from the material stream by means of a separator unit, wherein the separator unit is configured to move the selected unknown component to a separate accessible compartment. The isolated at least one unknown component is analyzed for determining the ground truth label thereof, wherein the determined ground truth is used for training an incremental version of the machine learning model.
Claims
1. A method for training a machine learning model configured to perform characterization of components in a heterogeneous material stream with a plurality of unknown components, the method comprising: scanning the material stream by means of a sensory system configured to perform imaging of the material stream with the plurality of unknown components; predicting one or more prediction labels and associated label prediction probabilities for each of the unknown components in the material stream by means of a machine learning model which is configured to receive as input the imaging of the material stream and/or one or more features of the unknown components extracted from the imaging of the material stream; determining a training reward associated with each unknown component within the plurality of unknown components in the material stream; selecting at least one unknown component from the plurality of unknown components in the material stream based at least partially on the training reward associated with the unknown components, wherein determining a ground truth for said at least one unknown component requires analysis in physical isolation, wherein the selected at least one unknown component is physically isolated from the material stream by means of a separator unit, wherein the separator unit is configured to move the selected unknown component to a separate accessible compartment; analyzing the isolated at least one unknown component for determining the ground truth label thereof, wherein the determined ground truth label of the isolated at least one unknown component is added to a training database; and training an incremental version of the machine learning model using the determined ground truth label of the physically isolated at least one unknown component; and wherein the at least one unknown component which is isolated from the material stream is subjected to chemical analysis for determining the ground truth label at least partially based thereon.
2. The method according to claim 1, wherein the machine learning model is configured to receive as input one or more user-defined features of the unknown components extracted from the imaging of the material stream, and wherein user-generated selection criteria for the selection of components are employed.
3. (canceled)
4. The method according to claim 1, wherein the separation unit comprises multiple subunits employing different separation techniques, wherein the separation unit has at least a first subunit and a second subunit, wherein one of the first or second subunit is selected for physical isolation of the selected at least one unknown component based on the one or more features of the unknown components extracted from the imaging of the material stream.
5. (canceled)
6. The method according to claim 1, wherein the first subunit is used for physical isolation of smaller and/or lighter components in the material stream, and wherein the second subunit is used for physical isolation of larger and/or heavier components in the material stream.
7. The method according to claim 1, wherein the first subunit is configured to isolate components by directing a fluid jet towards the components in order to blow the components to the separate accessible compartment, and wherein the second subunit is configured to isolate components by means of a mechanical manipulation device.
8. (canceled)
9. The method according to claim 1, wherein, for each unknown component in the material stream, data indicative of a mass is calculated.
10. The method according to claim 9, wherein a resulting force induced by the fluid jet is adjusted based on the mass of the selected at least one unknown component.
11. The method according to claim 1, wherein a value indicative of a difficulty for performing physical isolation of the unknown component from the material stream by means of the separation unit is determined and associated to each unknown component, wherein the selection of the at least one unknown component from the plurality of unknown components in the material stream is additionally based on the value.
12. The method according to claim 11, wherein a top number of unknown components are selected from the plurality of unknown components in the material stream based on the training reward associated with the unknown components, wherein a subset of the top number of unknown components is selected for physical isolation based on the value indicative of the difficulty for performing physical isolation by means of the separation unit.
13-15. (canceled)
16. The method according to claim 1, wherein the separate accessible compartment enables a manual removal of the isolated unknown component, wherein an indication of an internal reference of the machine learning model is provided for the isolated unknown component within the separate accessible compartment, wherein the analysis of the at least one selected unknown component is performed at least partially by human annotation.
17. The method according to claim 1, wherein the isolated unknown component is analyzed by means of an analyzing unit, wherein the analyzing unit is arranged to automatically perform a characterization of the isolated unknown component within the separate accessible compartment for determining the ground truth label based on the characterization.
18-19. (canceled)
20. The method according to claim 1, wherein the analyzing unit is configured to perform destructive measurements on isolated components for determining the ground truth label at least partially based thereon.
21. The method according to claim 1, wherein the analyzing unit is configured to perform at least one of: an energy or wavelength dispersive X-ray fluorescence spectrometry, fire assay, inductively coupled plasma optical emission spectrometry, inductively coupled plasma atomic emission spectroscopy, inductively coupled plasma mass spectrometry, laser-induced breakdown spectroscopy, infrared spectroscopy, hyperspectral spectroscopy, x-ray diffraction analysis, scanning electron microscopy, nuclear magnetic resonance, Raman spectroscopy.
22. (canceled)
23. The method according to claim 1, wherein the one or more features relate to at least one of a volume, dimension, diameter, shape, texture, color, or eccentricity.
24-25. (canceled)
26. A system for training a machine learning model which is configured to perform characterization of components in a heterogeneous material stream with a plurality of unknown components, the system including a processor, a computer readable storage medium, a sensory system, and a separator unit, wherein the computer readable storage medium has instructions stored which, when executed by the processor, result in the processor performing operations comprising: operating the sensory system to scan the material stream such as to perform imaging of the material stream with the plurality of unknown components; predicting one or more labels and associated label probabilities for each of the unknown components in the material stream by means of a machine learning model which is configured to receive as input the imaging of the material stream and/or one or more features of the unknown components extracted from the imaging of the material stream; determining a training reward associated with each unknown component within the plurality of unknown components in the material stream; selecting at least one unknown component from the plurality of unknown components in the material stream based at least partially on the training reward associated with the unknown components, wherein determining a ground truth for said at least one unknown component requires analysis in physical isolation; operating the separator unit for physically isolating the selected at least one unknown component from the material stream, wherein the separator unit is configured to move the selected unknown component to a separate accessible compartment; receiving for the isolated at least one unknown component the ground truth label determined by performing an analysis, wherein the determined ground truth label of the isolated at least one unknown component is added to a training database; and training an incremental version of the machine learning model using the determined ground truth label of the 
physically isolated at least one unknown component; and wherein the system is configured to subject the at least one unknown component which is isolated from the material stream to chemical analysis for determining the ground truth label at least partially based thereon.
Description
BRIEF DESCRIPTION OF THE DRAWING
[0090] The invention will further be elucidated on the basis of exemplary embodiments which are represented in a drawing. The exemplary embodiments are given by way of non-limitative illustration. It is noted that the figures are only schematic representations of embodiments of the invention that are given by way of non-limiting example.
[0091] In the drawing:
DETAILED DESCRIPTION
[0100] In supervised machine learning, the model is trained on (large) material streams in which each object is accompanied by a label. The labels can denote respective material classes (e.g. metal, wood, glass, ceramics, . . . ) of the components/objects identified in the material stream, and can be used by the machine learning model to learn how components/objects in the material stream are to be classified correctly. Determination and/or preparation of this labeled data often turns out to be the bottleneck of a training process: meticulously selecting thousands of individual components/particles from a heterogeneous material stream can be a time-consuming and expensive endeavor. Hence, while unlabeled data from material streams can be abundantly available and easily acquired, labeled data can be scarce and difficult to obtain. Furthermore, the entire labeling process may have to be repeated from start to finish each time a new material stream is considered. The invention employs a data-driven characterization of components in the material stream in which the labeling cost is strongly reduced while substantially retaining an accuracy that is comparable with supervised models which use the entire training dataset. By employing active learning, the machine learning model itself can select a small optimal subset of components (cf. objects, particles) in the material stream that require labeling. Training the machine learning model exclusively on this small labeled subset then results in a model performance that can compete with the scenario in which the model would have been trained on the entire stream of components in the material stream.
[0102] operating the sensory system 5 to scan the material stream 3 such as to perform imaging of the material stream 3 with the plurality of unknown components 3i;
[0103] predicting one or more labels and associated label probabilities for each of the unknown components 3i in the material stream 3 by means of a machine learning model which is configured to receive as input the imaging of the material stream 3 and/or one or more features of the unknown components extracted from the imaging of the material stream 3;
[0104] determining a training reward associated with each unknown component 3i within the plurality of unknown components 3i in the material stream 3;
[0105] selecting at least one unknown component from the plurality of unknown components 3i in the material stream 3 based at least partially on the training reward associated with the unknown components 3i;
[0106] operating the separator unit 100 for physically isolating the selected at least one unknown component from the material stream 3, wherein the separator unit 100 is configured to move the selected unknown component to a separate accessible compartment 101;
[0107] receiving for the isolated at least one unknown component the ground truth label determined by performing an analysis, wherein the determined ground truth label of the isolated at least one unknown component is added to a training database; and
[0108] training an incremental version of the machine learning model using the determined ground truth label of the physically isolated at least one unknown component.
[0109] In this exemplary embodiment, the separator unit includes a robotic arm for automatically isolating the selected components in the compartment 101. It will be appreciated that other means may also be employed for selectively moving the selected components from the material stream 3 to the compartment 101 for further analysis with regard to ground truth determination. This can be performed in different ways, for instance involving robotic means for performing physical separation. Various other techniques may also be employed. For instance, ejection of a selected component from the material stream can be achieved by means of an air jet (e.g. using air nozzles). A combination of techniques may also be used (e.g. depending on the size of the component to be separated/isolated from the material stream). For example, larger components may be physically isolated using a robotic arm, while smaller components can be isolated by means of fluid jets using fluid nozzles.
[0110] Due to the large number of components in the material stream 3, it can be impractical for human beings to hand-label each component (large datasets). In order to optimize the labeling effort associated with training data classifiers, an active learning method is employed which selects only the most promising and exemplary components for manual labeling. The selected components in the material stream are automatically physically isolated by means of the separator unit 100. In this example, a robotic arm is arranged. However, as mentioned above, one or more other means may also be employed.
[0111] The machine learning model may be an active learner applying a selection function to physically isolate a component for labeling. Based on the selection, the component can be isolated from the material stream 3 in the separate accessible compartment 101 for manual and/or experimental labeling to determine the ground truth. The machine learning model (cf. classifier) can be retrained with the newly labeled data and the process can continue, for example until a pre-defined stopping criterion is satisfied. Since the components to be labeled for training the machine learning model are selected and isolated based on the training reward, a time-consuming process of retraining the classifier based on new data points can be avoided. Hence, the machine learning model can be trained more efficiently.
[0113] The most appropriate data points linked to the identified components in the material stream can be selected for isolation and manual and/or experimental labeling to determine the ground truth. The resulting ground truth can then be used for further training the machine learning model. Since the selection is performed based on the training reward, a maximum generalization capability can be ensured of the machine learning model requiring minimum human labeling effort.
[0115] A pool-based active learning cycle is illustrated in
[0116] Active learning or query learning can overcome the labeling bottleneck of a training process by asking queries in the form of unlabeled instances to be labeled by an oracle, e.g. a human annotator and/or automatic analyzer. In this way, the active learner aims to achieve high accuracy using as few labeled instances as possible, thereby minimizing the cost of obtaining labeled data. Many query strategies exist. For example, a so-called pool-based active learning may be employed wherein the training data is divided into a (small) labeled dataset on the one hand and a large pool of unlabeled instances on the other hand. The active learner may operate in a greedy fashion: samples to be queried to the annotator may be selected by evaluating all instances in the unlabeled pool simultaneously. The component (cf. sample) that maximizes a certain criterion is sent to the oracle for annotation and added to the labeled training set, after which the classification algorithm can be re-trained on this set. The updated results from the model then allow the active learner to make a new selection of queries for the human annotator.
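The pool-based cycle described above can be sketched as follows. This is a minimal illustration only: the function name, the use of scikit-learn's LogisticRegression as a stand-in classifier, and the `oracle` callback representing the human annotator and/or automatic analyzer are assumptions, not part of the invention.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pool_based_active_learning(X_labeled, y_labeled, X_pool, oracle, n_queries):
    """Greedy pool-based loop: query the most uncertain component in the
    unlabeled pool, obtain its ground truth from the oracle, retrain, repeat."""
    model = LogisticRegression(max_iter=1000)
    for _ in range(n_queries):
        model.fit(X_labeled, y_labeled)
        proba = model.predict_proba(X_pool)
        # least-confidence criterion: lowest maximum posterior probability
        idx = int(np.argmin(proba.max(axis=1)))
        label = oracle(X_pool[idx])          # ground-truth annotation
        X_labeled = np.vstack([X_labeled, X_pool[idx]])
        y_labeled = np.append(y_labeled, label)
        X_pool = np.delete(X_pool, idx, axis=0)
    model.fit(X_labeled, y_labeled)          # final re-training step
    return model
```

In a real system the `oracle` would correspond to the analysis of the physically isolated component, and the retrained model would replace the deployed classifier.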
[0117] The active learner can employ one or more criteria for selecting a new component to be isolated and analyzed for annotation. Different approaches exist. In some advantageous embodiments, the query strategy employed is based on uncertainty sampling. The active learner queries the instances of the unlabeled pool about which it is least certain how to label. Let x be the feature vector describing a certain component in the unlabeled pool of components in the material stream. Under model θ, one can predict its material class, i.e. the particle's label, as the class with the highest posterior probability of all classes y:

ŷ = argmax_y P(y | x; θ)  (1)
[0118] An exemplary query strategy would be to select the component whose prediction is the least confident, by computing the above equation (1) for all components in the unlabeled pool and choosing one according to

x* = argmin_x P(ŷ | x; θ)  (2)
[0119] This criterion is equivalent to selecting the sample that maximizes the machine learning model's belief it will mislabel x, i.e. the sample whose most likely labeling is the least likely among the unlabeled components available for querying. A drawback is that the machine learning model only considers information about the most probable label and therefore throws away information about the rest of the posterior distribution.
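In this notation, equations (1) and (2) can be illustrated as follows. This is a minimal sketch; `proba` is assumed to be the model's posterior probability matrix with one row per unlabeled component and one column per class.

```python
import numpy as np

def predicted_label(proba):
    """Eq. (1): for each component, the class with the highest posterior."""
    return np.argmax(proba, axis=1)

def least_confidence_query(proba):
    """Eq. (2): the component whose most likely label is the least likely,
    i.e. the one with the lowest maximum posterior probability."""
    return int(np.argmin(np.max(proba, axis=1)))
```

Only the maximum of each posterior row is inspected, which is exactly the information loss discussed above.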
[0120] An alternative sampling strategy that addresses the drawback described above is one that uses the Shannon entropy as an uncertainty measure:

x* = argmax_x H(x), where H(x) = −Σi P(yi | x; θ) log P(yi | x; θ)  (3)
[0121] Here y = (y1, . . . , y6)^T is the vector containing the labels of all 6 classes as shown in the example of
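The entropy criterion of equation (3) can likewise be sketched. This is illustrative only; `proba` is again assumed to be the posterior probability matrix over the classes.

```python
import numpy as np

def entropy_query(proba, eps=1e-12):
    """Eq. (3): query the component whose posterior distribution over the
    classes has the highest Shannon entropy (most spread-out beliefs).
    `eps` guards against log(0) for zero-probability classes."""
    H = -np.sum(proba * np.log(proba + eps), axis=1)
    return int(np.argmax(H))
```

Unlike the least-confidence criterion, the full posterior distribution of each component contributes to the score.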
[0123] It can be far too labor-intensive to separately determine the ground truth label of each of the components afterwards. Advantageously, now the model can be trained very well with much less data. The system can automatically select and isolate the components in the material stream for further analysis in order to determine the ground truth label. This is for example very useful for waste processing involving one or more waste material streams. For instance, the system can be configured to perform waste characterization, wherein the system allows for efficient further training of the employed machine learning model. Additionally, in some examples, the system may also be configured to perform sorting of materials based on the waste characterization. It will be appreciated that the invention may also be used in other applications for characterization of other material streams.
[0124] Determining the ground truth can be established in different ways, for instance partially involving manual labeling (e.g. at least partially analyzed by a human). However, it can also be determined automatically, for example involving chemical experimentation. A combination of techniques can also be employed, for instance when different properties are to be determined for deriving the ground truth label, e.g. requiring different techniques. Different characterization parameters may be determined for determining the ground truth (e.g. mass, chemistry, weight, geometrical properties, etc.).
[0125] The material stream may be a heterogeneous flow of materials or components. Various algorithms and techniques may be used for determining which particle contributes most to training the machine learning model. Different active learning methods can be applied for this purpose.
[0126] Different strategies can be employed for choosing a next point for ground truth labeling (e.g. by means of an analysis). In the example shown in
[0129] Selecting the sample that is located the furthest away from the majority of the samples/clusters (i.e. an isolated sample). This makes it possible to identify outliers/anomalies that potentially represent a new (sub)class.
[0130] A combination of the above techniques may also be used. It will be appreciated that other selection strategies can also be employed.
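The outlier-oriented strategy above (selecting the sample furthest from the majority of the samples/clusters) can be sketched as follows; the mean-pairwise-distance proxy and the function name are illustrative assumptions, not from the source.

```python
import numpy as np

def most_isolated_sample(X):
    """Select the sample furthest from all others (largest mean pairwise
    distance): a simple proxy for an outlier that may represent a new
    (sub)class, per the strategy described above."""
    # pairwise Euclidean distance matrix via broadcasting
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return int(np.argmax(d.mean(axis=1)))
```

In practice a clustering-based distance (e.g. distance to the nearest cluster centroid) could replace the mean pairwise distance.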
[0132] The diagonal graphs represent kernel density estimates for the distributions of 4 features from the dataset. The off-diagonal graphs represent scatterplots of the respective features: the mean atomic number <Z>, the logarithm of the mean density <ρ>, the logarithm of the standard deviation of the height σ_height, and the logarithm of the perimeter of the components.
[0133] Selection and isolation of the components of the material stream for ground truth label analysis can be based on a level of confidence of the current machine learning model (cf. classifier) on the unlabeled identified components in the material stream.
[0135] The uncertainty sampling based on the confidence criterion of equation (2) and on the entropy criterion of equation (3) is compared with random sampling. In the latter case, components of the material stream are not queried based on an uncertainty criterion but completely at random.
[0136] In general, the performance of any model is expected to improve with the sample size, as more labeled data means more information. However, this does not happen at the same pace for all models. The graph indicates that results for entropy- and confidence-based sampling techniques are comparable, but random sampling clearly underperforms for classification of the components in the material stream. In the limit of large sample sizes, all model performances converge to the “optimal” value of the model that makes use of the entire training dataset. This performance is the one the active learning models are to compete with and is shown as the baseline accuracy of 0.988 in
[0137] The lines show the mean results of 250 different random initial conditions, and the boundaries of the shaded regions are defined by the 10% and 90% quantiles. Furthermore, a cross section of the feature space spanned by the mean atomic number Z and density ρ at three different stages of the learning process is shown. The first column indicates which samples have been queried up until that point. The second and third columns show the behavior of the least confidence and entropy measures in this two-dimensional cross-section of the feature space. The remaining unlabeled samples are shown, and the one with highest uncertainty is indicated by a cross: this is the next component to be isolated and analyzed (e.g. by human annotator and/or experimentally).
[0138] Three locations have been indicated on the learning curves, which are further examined in the other graphs of
[0139] However, when more data becomes available, the active learner starts to recognize the boundary regions between the different material classes, and primarily queries samples in the immediate vicinity of these class boundaries, as these are typically the particles with the highest classification uncertainty for the model. This can also be observed from the second and third columns in
[0140] Generally, the optimal choice of uncertainty measure depends strongly on the dataset at hand. However, one could argue that the confidence criterion is possibly slightly more appropriate in the case where classification is simply performed by means of majority vote: a component is assigned to the class with the highest posterior probability. If however more complicated rules are used (e.g. in the case of imbalanced datasets), entropy is arguably the more obvious choice.
[0142] The system according to the invention can be faster and more autonomous in characterization of one or more materials, while requiring less (labor-intensive) input from humans. The system can provide important advantages in the application of waste characterization.
[0143] In order to develop a model that recognizes different (images of) waste particles and classifies them into different categories, a machine learning model can be trained by showing it a large number of images, each image accompanied by a label that describes what is in it. The conventional approach, in which all data is labeled in advance, is known as supervised learning. This labeled data represents the fuel of machine learning algorithms. For the waste characterization technology, labeled data can typically be generated by scanning physical “pure” mono-material streams, which are often manually prepared by meticulously selecting thousands of individual particles from a heterogeneous waste stream.
[0144] The characterization of waste has several important applications in the recycling industry. It can be used for value assessment: fast and reliable value assessment of complete material streams decreases the risk of exposure to volatility of commodity stock markets. Further, it can be used for quality control: in a circular economy, it is desired that the quality of recycled products is guaranteed, and the characterization technology helps to establish market trust. Further, it can be used for process engineering: the technical and economic feasibility of waste recycling processes, and the design of new processes, can be assessed by virtual experimentation. Further, it can be used for online process optimization: sorting processes can be measured, controlled and optimized on-the-fly.
[0145] In some examples, a direct, in-line characterization technology can be provided that assesses the materials both qualitatively (material type, chemistry, purity, . . . ) and quantitatively (mass balances, physical properties, . . . ). Such an in-line characterization system can be configured to assess heterogeneous and complex material streams completely, eliminating the need for subsampling. Moreover, mass balances can be produced on-the-fly. In fact, for each material object, a digital twin can be created which can be further assessed in a virtual way.
[0146] The invention provides for a data-driven material characterization using physical active learning that can strongly reduce the labeling effort when gathering training data. While conventional machine learning algorithms require a large and completely labeled dataset for training, it is observed that only a fraction of this data is required to make good predictions. Active learning makes it possible to train the model on a small subset, chosen by the algorithm, and obtain an accuracy that is comparable with the one found by training the model on the complete dataset. In some examples, active learning can reduce the labeling cost by 70% while retaining 99% of the accuracy that would be obtained by training on the fully labeled dataset.
[0147] It will be appreciated that the system and method according to the invention can be used for different material streams. In some examples, the material stream includes construction and demolition waste. However, other waste streams can also be used.
[0149] The invention provides for a more efficient training of the machine learning model used (e.g. a deep neural network). By means of active learning it is possible to reduce the number of training samples to be (manually) labeled by selectively sampling a subset of the unlabeled data (in the material stream). This may be done by inspecting the unlabeled samples and selecting the most informative ones with respect to a given cost function for human and/or experimental labeling. The active learning machine learning model can select samples which can result in the largest increase in performance, and thereby reduce the human and/or experimental labeling effort. Selectively sampling components of the plurality of components in the material stream assumes that there is a pool of candidate components to label. As there can be a constant stream of new and relatively unique components in the material stream, the stream provides a source for continuously and effectively improving the performance of the machine learning model. Advantageously, the selected components can be isolated automatically by the system by means of a separation unit. The active learning model can derive a smaller subset of all components collected from the material stream for human and/or experimental labeling.
[0150] An initial deep learning neural network can be trained on a set of classified data, for example obtained by human annotation. This set of data builds the first parameters for the neural network, and this would be the stage of supervised learning. During the stage of supervised learning, the neural network can be tested whether the desired behavior has been achieved. Once a desired neural network behavior has been achieved (e.g., a machine learning model has been trained to operate according to a specified threshold), the machine learning model can be deployed for use (e.g., testing the machine with “real” data). During operation, neural network classifications can be confirmed or denied (e.g., by an expert user, expert system, reference database, etc.) to continue to improve neural network behavior. The example neural network is then in a state of transfer learning, as parameters for classification that determine neural network behavior are updated based on ongoing interactions. In some examples, the neural network of the machine learning model can provide direct feedback to another process, e.g. changing control parameters of a waste recycling process. In some examples, the neural network outputs data that is buffered (e.g., via the cloud, etc.) and validated before it is provided to another process.
[0151] Data acquisition can be performed in different ways. The sensory system may include various sensors. In an example, data with respect to the material properties of the particles in the material stream (e.g. waste stream) is gathered by means of a multi-sensor characterization device. Firstly, dual-energy X-ray transmission (DE-XRT) may make it possible to see “through” the material and to determine certain material properties such as average atomic number and density. The advantage is that one can inspect the complete volume and not only the surface of the component (e.g. waste material is often dirty and surface properties are therefore not necessarily representative of the bulk of the material). Secondly, additionally or alternatively, a 3D laser triangulation unit can be utilized to measure the shape of the object at high resolution (e.g. sub-mm accuracy). This provides additional information to complement the one gathered from DE-XRT, such as 3D shape and volume. Thirdly, additionally or alternatively, an RGB detector may be used, which makes it possible to differentiate the components in the material stream regarding color and shape. In some examples, the above mentioned sensors are used together. Optionally, image processing can be used for segmenting the images into individual components. From these segmented images, various features describing the object's shape may be computed. Examples are the area, eccentricity and perimeter of a component. In some examples, this can be done for all images obtained from all sensors.
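The shape features named above (area, eccentricity, perimeter) can be computed from a segmented component, for example as in the following NumPy-only sketch; the function name and the pixel-count perimeter approximation are illustrative assumptions.

```python
import numpy as np

def shape_features(mask):
    """Compute area, eccentricity and perimeter for one segmented component,
    given as a boolean pixel mask (a minimal sketch of the features above)."""
    ys, xs = np.nonzero(mask)
    area = len(xs)
    # central second moments -> eccentricity of the equivalent ellipse
    x, y = xs - xs.mean(), ys - ys.mean()
    mxx, myy, mxy = (x * x).mean(), (y * y).mean(), (x * y).mean()
    common = np.sqrt((mxx - myy) ** 2 + 4 * mxy ** 2)
    l1 = (mxx + myy + common) / 2           # major-axis variance
    l2 = (mxx + myy - common) / 2           # minor-axis variance
    ecc = float(np.sqrt(1 - l2 / l1)) if l1 > 0 else 0.0
    # perimeter: pixels of the component with at least one background
    # 4-neighbour (a crude pixel-count approximation)
    p = np.pad(mask, 1)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    perimeter = int(mask.sum() - (mask & interior).sum())
    return {"area": area, "eccentricity": ecc, "perimeter": perimeter}
```

Library routines (e.g. connected-component labeling followed by region property extraction) would typically replace this in a production pipeline.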
[0152] Various neural network models and/or neural network architectures can be used. A neural network has the ability to process, e.g. classify, sensor data and/or pre-processed data, e.g. determined feature characteristics of the segmented objects. A neural network can be implemented in a computerized system. Neural networks can serve as a framework for various machine learning algorithms for processing complex data inputs. Such neural network systems may “learn” to perform tasks by considering examples, generally without being programmed with any task-specific rules. A neural network can be based on a collection of connected units or nodes called neurons. Each connection can transmit a signal from one neuron to another neuron in the neural network. A neuron that receives a signal can process it and then signal additional neurons connected to it (cf. activation). The output of each neuron is typically computed by some non-linear function of the sum of its inputs. The connections can have respective weights that adjust as learning proceeds. There may also be other parameters, such as biases. Typically, the neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs to form a deep neural network.
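The neuron computation described above, a non-linear function of the weighted sum of the inputs plus a bias, with neurons aggregated into layers, can be sketched as follows. The weights, biases, and input values are arbitrary illustrative numbers.

```python
import math

def neuron(inputs, weights, bias):
    """Output of one neuron: non-linearity applied to weighted sum plus bias."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return math.tanh(z)  # tanh is one common choice of non-linearity

def layer(inputs, weight_rows, biases):
    """One layer: every neuron sees the same inputs, with its own weights/bias."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two stacked layers form a small feed-forward network:
hidden = layer([0.5, -1.0], [[1.0, 0.2], [-0.3, 0.8]], [0.0, 0.1])
output = layer(hidden, [[0.7, -0.4]], [0.05])
```

Stacking further such layers, with weights adjusted during training, yields the deep neural network referred to above.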
[0153] A deep learning neural network can be seen as a representation-learning method with a plurality of levels of representation, which can be obtained by composing simple but non-linear modules that each transform the representation at one level, starting with the raw input, into a representation at a higher, slightly more abstract level. The neural network may identify patterns which are difficult to see using conventional or classical methods. Hence, instead of writing custom code specific to the problem of characterizing components under particular material stream conditions, the network can be trained to handle different and/or changing material stream conditions, e.g. using a classification algorithm. Training data may be fed to the neural network such that it can determine a classification logic for efficiently controlling the characterization and separation process.
[0154] It will be further understood that when a particular step of a method is referred to as subsequent to another step, it can directly follow said other step or one or more intermediate steps may be carried out before carrying out the particular step, unless specified otherwise. Likewise it will be understood that when a connection between components such as neurons of the neural network is described, this connection may be established directly or through intermediate components such as other neurons or logical operations, unless specified otherwise or excluded by the context.
[0155] It will be appreciated that the term “label” can be understood to cover both categorical variables (e.g. predicted using neural networks) and continuous variables (e.g. predicted using regression models). For example, the continuous variables may have associated uncertainties (e.g. a variable obtained from chemical analysis).
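The two notions of “label” distinguished above can be illustrated as follows; the material classes, probabilities, and measurement values are hypothetical examples.

```python
# Categorical label: class probabilities as output by a classifier.
categorical_label = {"aluminium": 0.7, "copper": 0.2, "zinc": 0.1}
predicted_class = max(categorical_label, key=categorical_label.get)

# Continuous label: a regressed or measured value with an associated
# uncertainty, e.g. a mass fraction obtained from chemical analysis.
continuous_label = {"value": 2.71, "uncertainty": 0.05}
```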
[0156] It will be appreciated that the method may include computer-implemented steps. All above-mentioned steps can be computer-implemented steps. Embodiments may comprise computer apparatus, wherein processes are performed in the computer apparatus. The invention also extends to computer programs, particularly computer programs on or in a carrier, adapted for putting the invention into practice. The program may be in the form of source or object code or in any other form suitable for use in the implementation of the processes according to the invention. The carrier may be any entity or device capable of carrying the program. For example, the carrier may comprise a storage medium, such as a ROM, for example a semiconductor ROM or hard disk. Further, the carrier may be a transmissible carrier such as an electrical or optical signal which may be conveyed via electrical or optical cable or by radio or other means, e.g. via the internet or cloud.
[0157] Some embodiments may be implemented, for example, using a machine or tangible computer-readable medium or article which may store an instruction or a set of instructions that, if executed by a machine, may cause the machine to perform a method and/or operations in accordance with the embodiments.
[0158] Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate array (FPGA), logic gates, registers, semiconductor device, microchips, chip sets, et cetera. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, mobile apps, middleware, firmware, software modules, routines, subroutines, functions, computer implemented methods, procedures, software interfaces, application program interfaces (API), methods, instruction sets, computing code, computer code, et cetera.
[0159] Herein, the invention is described with reference to specific examples of embodiments of the invention. It will, however, be evident that various modifications, variations, alternatives and changes may be made therein, without departing from the essence of the invention. For the purpose of clarity and a concise description features are described herein as part of the same or separate embodiments, however, alternative embodiments having combinations of all or some of the features described in these separate embodiments are also envisaged and understood to fall within the framework of the invention as outlined by the claims. The specifications, figures and examples are, accordingly, to be regarded in an illustrative sense rather than in a restrictive sense. The invention is intended to embrace all alternatives, modifications and variations which fall within the scope of the appended claims. Further, many of the elements that are described are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, in any suitable combination and location.
[0160] In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word ‘comprising’ does not exclude the presence of other features or steps than those listed in a claim. Furthermore, the words ‘a’ and ‘an’ shall not be construed as limited to ‘only one’, but instead are used to mean ‘at least one’, and do not exclude a plurality. The mere fact that certain measures are recited in mutually different claims does not indicate that a combination of these measures cannot be used to an advantage.