ENERGY-AWARE PROCESSING SYSTEM
20230114303 · 2023-04-13
Inventors
- Alessandro MONTANARI (Cambridge, Cambridgeshire, GB)
- Mohammed ALLOULAH (Cambridge, Cambridgeshire, GB)
CPC classification
H03M7/6047
ELECTRICITY
G06F1/3203
PHYSICS
H02J3/144
ELECTRICITY
H03M7/70
ELECTRICITY
H02J3/004
ELECTRICITY
Y02D10/00
GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
International classification
Abstract
An apparatus, method and computer program is described comprising: degrading an acquired data signal, using a source coding module, to generate a degraded signal having a fidelity dependent on a first measure of available energy, wherein the acquired data signal is degraded based on a scalar dependent on said first measure of available energy; and generating an output based on the degraded data signal, wherein the output is generated using an inference module that has parameters dependent on a second measure of available energy, wherein the inference module is configured to output degradable inferences dependent on the degraded signal received by the inference module from the source coding module.
Claims
1-15. (canceled)
16. An apparatus comprising: at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to: degrade an acquired data signal, using a source coding module, to generate a degraded signal having a fidelity dependent on a first measure of available energy, wherein the acquired data signal is degraded based on a scalar dependent on said first measure of available energy; and generate an output based on the degraded data signal, wherein the output is generated using an inference module that has parameters dependent on a second measure of available energy, wherein the inference module is configured to output degradable inferences dependent on the degraded signal received by the inference module from the source coding module.
17. An apparatus as claimed in claim 16, wherein the instructions, when executed by the at least one processor, further cause the apparatus at least to: select the inference module from a plurality of available inference modules dependent on the second measure of available energy.
18. An apparatus as claimed in claim 16, wherein the instructions, when executed by the at least one processor, further cause the apparatus at least to: determine the first measure of available energy, wherein the first measure of available energy is a measure of an instantaneous energy supply.
19. An apparatus as claimed in claim 16, wherein the instructions, when executed by the at least one processor, further cause the apparatus at least to: determine the second measure of available energy, wherein the second measure of available energy is a forecast of future available energy.
20. An apparatus as claimed in claim 16, wherein the parameters of the inference module are trained together with the source coding module at a particular measure of available energy.
21. An apparatus comprising: at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to: train parameters of an inference module that forms part of a system comprising a source coding module and the inference module, wherein the source coding module is configured to degrade an acquired data signal to generate a degraded data signal, based on a scalar selected dependent on an available energy; and train the inference module, for a particular available energy, together with the source coding module, such that the inference module is configured to output degradable inferences dependent on the degraded signal received by the inference module from the source coding module.
22. An apparatus as claimed in claim 21, wherein the instructions, when executed by the at least one processor, further cause the apparatus at least to: train a plurality of inference modules, each inference module configured to operate for a defined measure of available energy.
23. An apparatus as claimed in claim 21, wherein the inference module has a plurality of trainable parameters.
24. An apparatus as claimed in claim 21, wherein the acquired data signal comprises an audio data signal and wherein the scalar defines a coarseness of the degraded data signal.
25. An apparatus as claimed in claim 21, wherein the acquired data signal comprises an image data signal and wherein the scalar defines a quantization level of the degraded data signal.
26. A method comprising: degrading an acquired data signal, using a source coding module, to generate a degraded signal having a fidelity dependent on a first measure of available energy, wherein the acquired data signal is degraded based on a scalar dependent on said first measure of available energy; and generating an output based on the degraded data signal, wherein the output is generated using an inference module that has parameters dependent on a second measure of available energy, wherein the inference module is configured to output degradable inferences dependent on the degraded signal received by the inference module from the source coding module.
27. A method as claimed in claim 26, further comprising: determining the first measure of available energy, wherein the first measure of available energy is a measure of an instantaneous energy supply.
28. A method as claimed in claim 26, further comprising: selecting the inference module from a plurality of available inference modules dependent on the second measure of available energy.
29. A method as claimed in claim 26, further comprising: determining the second measure of available energy, wherein the second measure of available energy is a forecast of future available energy.
30. A method comprising: training parameters of an inference module that forms part of a system comprising a source coding module and the inference module, wherein the source coding module is configured to degrade an acquired data signal to generate a degraded data signal, based on a scalar selected dependent on an available energy; and training the inference module, for a particular available energy, together with the source coding module, such that the inference module is configured to output degradable inferences dependent on the degraded signal received by the inference module from the source coding module.
31. A method as claimed in claim 30, wherein the inference module is configured to operate for a defined measure of available energy.
32. A method as claimed in claim 30, wherein the acquired data signal comprises an audio data signal and wherein the scalar defines a coarseness of the degraded data signal.
33. A method as claimed in claim 30, wherein the acquired data signal comprises an image data signal and wherein the scalar defines a quantization level of the degraded data signal.
34. A non-transitory computer readable medium comprising program instructions that, when executed by an apparatus, cause the apparatus to perform at least the following: degrading an acquired data signal, using a source coding module, to generate a degraded signal having a fidelity dependent on a first measure of available energy, wherein the acquired data signal is degraded based on a scalar dependent on said first measure of available energy; and generating an output based on the degraded data signal, wherein the output is generated using an inference module that has parameters dependent on a second measure of available energy, wherein the inference module is configured to output degradable inferences dependent on the degraded signal received by the inference module from the source coding module.
35. A non-transitory computer readable medium comprising program instructions that, when executed by an apparatus, cause the apparatus to perform at least the following: training parameters of an inference module that forms part of a system comprising a source coding module and the inference module, wherein the source coding module is configured to degrade an acquired data signal to generate a degraded data signal, based on a scalar selected dependent on an available energy; and training the inference module, for a particular available energy, together with the source coding module, such that the inference module is configured to output degradable inferences dependent on the degraded signal received by the inference module from the source coding module.
36. A system comprising: at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the system at least to: train an inference module, for a particular available energy, together with a source coding module; degrade an acquired data signal, using the source coding module, to generate a degraded signal having a fidelity dependent on a first measure of available energy; and generate, using the inference module, an output based on the degraded data signal.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0034] Example embodiments will now be described, by way of example only, with reference to the accompanying schematic drawings.
DETAILED DESCRIPTION
[0053] The scope of protection sought for various embodiments of the invention is set out by the independent claims. The embodiments and features, if any, described in the specification that do not fall under the scope of the independent claims are to be interpreted as examples useful for understanding various embodiments of the invention.
[0054] In the description and drawings, like reference numerals refer to like elements throughout.
[0055] As noted above, the widespread use of machine learning (ML) algorithms (or similar algorithms) across many application domains has led to concerns over associated energy demands. By way of example, three application domains that are particularly hard to service using current ML techniques are as follows:
[0056] Low-end applications that run on ultra-low power devices which rely on harvested ambient energy. Real-world examples include wildlife monitoring and cognitive augmentation, which are characterised by a high uncertainty in power availability. The times at which such devices are awake and can sense the environment or process incoming data may be unpredictable. Under these conditions, energy supply may fluctuate significantly. The net effect can be an uptime of a few milliseconds only, with periods of power denial that could span hours.
[0057] Applications on consumer devices. ML workloads are increasingly common in consumer devices such as smartphones and wearables. However, battery capacity remains a significant limitation, especially given shrinking form factors and the availability of inexpensive computation (e.g. neural accelerators embedded in modern processors).
[0058] High-end applications with a significant impact on greenhouse emissions and energy supplies. Real-world examples include a smart grid peak curtailment mechanism whereby data centres are incentivised to compromise inference fidelity in favour of energy efficiency. If acted upon by data centres, large amounts of energy and greenhouse emissions can potentially be conserved.
[0059] Many state-of-the-art ML and similar models lack scalability. That is, ML models may require invasive retraining and/or architectural restructuring when their resource-performance operating point is to be adjusted. Although the severity of the required re-engineering varies, such processes may be untenable for the purpose of dynamic energy-aware performance adaptation. That is, many of these techniques are at heart static and unable to cater for widely fluctuating resource availability.
[0060] Furthermore, many ML and similar models exhibit an all-or-nothing behaviour at the designated resource-performance point. Thus, once retrained and/or restructured, such models may only output an inference if resource availability is above a requisite threshold. Such rigidity can be problematic for application domains in which harvested energy is wasted for all but those energy levels rising above a critical operational threshold. As such, when available energy falls below this threshold, sensor nodes are unable to convert data into inferences. In certain time-sensitive applications, this all-or-nothing behaviour may further result in data becoming stale owing to sensor nodes' inability to act on data in a timely fashion.
[0062] The system 10 comprises an energy source 12, an energy-aware adaptation module 14, a source coding module 16 and an inference module 18. The energy source 12 is a variable energy source, which dictates the instantaneous energy budget for computational workloads.
[0063] The energy-aware adaptation module 14 is a logic module that seeks to adapt computations to available energy. Energy-aware adaptations controlled by the module 14 include controlling the functionality or performance of the source coding module 16 and the inference module 18. As discussed further below, the inference module may be implemented using a neural network (e.g. a deep neural network (DNN)) which can tolerate inputs of reduced quality, while still being able to produce inferences whose accuracies are dependent on (e.g. proportional to) instantaneous energy levels. The combined operation is made possible by designing and/or training the source coding module 16 and the inference module 18 together.
[0064] In general terms, the inference module 18 provides for degradable inference, namely inference whose quality varies in proportion to the instantaneous energy supplied. As discussed below, the system 10 supports two common modalities in recognition tasks: vision and audio (and may, of course, be used for other tasks). In contrast to static model engineering techniques, energy-aware source encoding seeks to achieve smooth inference degradability over a wide range of energy availability levels through the use of a single control parameter at the encoder which, in turn, also controls the configurations of the inference module.
[0066] The algorithm 20 starts at operation 22, where data is acquired. The data may be audio data (acquired, for example, using a microphone) or visual data (acquired, for example, using a camera system), but the principles described herein are applicable to many data sources. The acquired data may be provided to the source coding module 16 of the system 10.
[0067] At operation 24, the acquired data signal is degraded, for example using the source coding module 16. As discussed below, degrading the acquired data signal may involve techniques such as “sparsifying” or “coarsifying” the data based on the available energy, such that the fidelity of the acquired data signal may be actively reduced when the available energy is low. The source coding module may generate a degraded signal having a fidelity dependent on a first measure of available energy, wherein the acquired data signal is degraded based on a scalar dependent on said first measure of available energy. Providing data having a lower fidelity (e.g. fewer data points) may result in reduced power consumption during processing (e.g. during processing by the inference module 18 and/or the source coding module 16).
[0068] At operation 26, an output is generated based on the degraded data signal, wherein the output is generated using an inference module (e.g. the inference module 18) that has parameters dependent on the fidelity of the energy-aware source encoding. The inference module may be configured to output degradable inferences based on (e.g. proportional to) the degraded signal received by the inference module from the source coding module.
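The data acquisition, degradation and inference operations 22 to 26 may be sketched as follows. This is an illustrative Python sketch only: the `degrade` and `infer` functions below are hypothetical stand-ins (a crude sparsifier and a trivial statistic), not the source coding module 16 or inference module 18 themselves.

```python
import numpy as np

def degrade(signal: np.ndarray, scalar: float) -> np.ndarray:
    """Hypothetical source coding step: keep a fraction of samples
    proportional to the energy-derived scalar, zeroing the rest."""
    keep = max(1, int(len(signal) * scalar))
    degraded = np.zeros_like(signal)
    degraded[:keep] = signal[:keep]  # crude sparsification for illustration
    return degraded

def infer(degraded: np.ndarray) -> float:
    """Stand-in inference module: any model trained to tolerate the
    degradation introduced above would be substituted here."""
    return float(np.mean(degraded))

def energy_aware_pipeline(signal: np.ndarray,
                          first_energy: float,
                          max_energy: float) -> float:
    # The scalar, and hence the fidelity of the degraded signal,
    # tracks the first measure of available energy.
    scalar = min(1.0, first_energy / max_energy)
    return infer(degrade(signal, scalar))
```

The key property illustrated is that a single scalar, set from the measure of available energy, controls how much of the acquired signal survives before inference.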
[0070] The system 30 includes the energy-aware adaptation module 14 of the system 10 described above and further comprises a plurality of modules 31 to 34 that are an example implementation of the source coding module 16 described above.
[0071] The modules 31 to 34 comprise a YCbCr module 31, a DCT module 32, a variable quantisation module 33 and a re-normalisation module 34.
[0072] The system 30 may receive RGB data as “acquired data”, which data is converted to YCbCr data by the module 31. In the system 30, the DCT module 32 generates a discrete cosine transform (DCT) based on the YCbCr data. The DCT data is quantised by the quantisation module 33 and normalised by the re-normalisation module 34.
[0073] The quantisation module 33 is variable (as indicated by the diagonal arrow in the corresponding figure). The quantisation may be expressed as an operation of the general form S^q = round(S_ch ⊘ Q_ch(q)), i.e. element-wise division of each DCT subblock by a quality-scaled quantisation table followed by rounding,
[0074] where:
[0075] S_ch is a discrete cosine transform (DCT) subblock of channel ch ∈ {Y, Cb, Cr};
[0076] Q_ch is the quantisation table for that channel;
[0077] q is a quality scaler that controls the extent of sparsification; and
[0078] S^q is the resultant quantised subblock.
[0079] We observe that q scales the dynamic range of the DCT coefficients. Thus, the re-normalisation module 34 is used to re-normalise the input to the DNN, for example by rescaling the quantised coefficients to compensate for the quality-scaled quantisation table, or by any equivalent mathematical operation that harmonises the numerical dynamic range of different quantisation levels.
[0080] Thus, the acquired data that provides the input to the source coding module 16 described above may comprise an image data signal, wherein a scalar (q) defines a quantization level of the degraded data signal output by the source coding module 16.
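A JPEG-style realisation of the variable quantisation module 33 and re-normalisation module 34 may be sketched as follows. The table-scaling formula in `scale_table` follows the conventional JPEG quality-factor convention and is an assumption; the patent does not fix the exact mapping from the quality scaler q to the quantisation table.

```python
import numpy as np

def scale_table(Q: np.ndarray, q: int) -> np.ndarray:
    """JPEG-style scaling of a quantisation table by quality scaler
    q in [1, 100] (assumed convention; clipped to avoid zeros)."""
    s = 5000 / q if q < 50 else 200 - 2 * q
    return np.clip(np.floor((Q * s + 50) / 100), 1, None)

def quantise_subblock(S: np.ndarray, Q: np.ndarray, q: int) -> np.ndarray:
    """Quantise an 8x8 DCT subblock S_ch: element-wise division by the
    quality-scaled table, then rounding. Low q collapses many
    coefficients to zero, sparsifying the representation."""
    return np.round(S / scale_table(Q, q))

def renormalise(Sq: np.ndarray, Q: np.ndarray, q: int) -> np.ndarray:
    """One possible re-normalisation (dequantisation-style rescaling) to
    harmonise dynamic range across quantisation levels; any equivalent
    operation would do, per the text."""
    return Sq * scale_table(Q, q)
```

In use, sweeping q from high to low progressively de-activates DCT coefficients while the re-normalisation keeps the DNN input on a comparable numerical scale.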
[0082] The system 40 includes the energy-aware adaptation module 14 of the system 10 described above and further comprises a plurality of modules 41 to 46 that are an example implementation of the source coding module 16 described above.
[0083] The modules 41 to 46 comprise a variable spectrogram module 41, a Mel filters module 42, an optional first interpolation module 43, a log module 44, a DCT module 45 and a second optional interpolation module 46.
[0084] The system 40 may receive audio data as “acquired data” and may output mel-frequency cepstral coefficients (MFCCs) or log mel-frequency spectral coefficients (MFSCs).
[0085] The spectrogram module 41 is variable (as indicated by the diagonal arrow in the corresponding figure).
[0086] The source coding module 16 may be used to degrade audio MFCCs by reducing their temporal granularity, i.e. by making transitions in their spectra over time coarser. Concretely, recall that the continuous-time spectrogram of signal x(t) is given by:
spectrogram{x}(τ, ω) = |∫_{−∞}^{+∞} x(t) w(t − τ) e^{−jωt} dt|²   Eq (3)
[0087] Specifically, the source coding module 16 may use a larger stride τ to “coarsify” the resultant spectrogram. That is, compared to the sparsification in the vision encoder, this audio “coarsification” trades off fine-grained temporal transitions in the spectrogram for significant computational gains. However, once trained, the distortion-tolerant inference module 18 may require fixed MFCC temporal granularity as input. Therefore, the source coding module 16 may linearly interpolate the coarse-grained spectrogram onto the original fine spectral grid in order to equalise for the expected inference module input (e.g. using the optional second interpolator 46 described above).
[0088] Thus the acquired data that provides the input to the source coding module 16 described above may comprise an audio data signal, wherein a scalar defines a coarseness of the degraded data signal output by the source coding module 16.
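The audio coarsification around Eq (3) may be sketched as follows: a magnitude-squared short-time Fourier transform is computed with a larger stride, then each frequency bin is linearly interpolated back onto the fine temporal grid expected by the inference module. The function names and the Hann window choice are illustrative assumptions.

```python
import numpy as np

def spectrogram(x: np.ndarray, win: int, hop: int) -> np.ndarray:
    """Discrete analogue of Eq (3): a Hann window w(t - tau) slides over
    x with stride `hop`; returns |STFT|^2 of shape (frames, win//2 + 1)."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=-1)) ** 2

def coarsify(x: np.ndarray, win: int, fine_hop: int,
             coarse_hop: int) -> np.ndarray:
    """Compute the spectrogram with a larger stride, then linearly
    interpolate each frequency bin back onto the fine temporal grid so
    the inference module sees its expected input size."""
    coarse = spectrogram(x, win, coarse_hop)
    n_fine = (len(x) - win) // fine_hop + 1  # frames of the fine grid
    t_fine = np.linspace(0.0, 1.0, n_fine)
    t_coarse = np.linspace(0.0, 1.0, coarse.shape[0])
    return np.stack([np.interp(t_fine, t_coarse, coarse[:, k])
                     for k in range(coarse.shape[1])], axis=1)
```

The computational saving comes from the smaller number of FFTs at the coarse stride; the cheap interpolation merely equalises the input shape.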
[0090] The plot 50 is a visualisation for 10 levels of JPEG-style DCT quantisation averaged across the CIFAR10 dataset for the three YCbCr colour space channels: (a) Y, (b) Cb, and (c) Cr. Increasing the quality scaler results in a progressive retention of more spatial frequencies. (d) CIFAR10 average spatial frequency decay (dB); dynamic range is in excess of 120 dB.
[0091] In the plot 50, 10 quantisation levels were swept in order to progressively sparsify the DCT representation of tiny 32×32 images. These sparsified DCT images were then averaged channel-wise in the YCbCr colour space. Beginning with the Y channel shown in plot (a), the 10th level aggressively compresses the image into a small blue cluster around the DC value at the upper left corner. As we relax compression (using q in Equation (2)), more DCT coefficients emanate outwards from DC until all spatial frequencies are retained, beginning at the 3rd quantisation level shown in grey. Comparatively, the chroma channels are quantised further, as per the JPEG standard. This is reflected in the average coefficients retained across quantisation levels for the Cb and Cr channels of plots (b) and (c), respectively.
[0093] As discussed above, the system 10 comprises an inference module 18. The inference module receives degradable encoded outputs from the source coding module 16.
[0095] The algorithm 70 starts at operation 72, where an inference module (such as the inference module 18) learns distortion tolerant decoder functions. The learning referred to in operation 72 is carried out during training of the inference module.
[0096] At operation 74 of the algorithm, the inference module outputs degradable inferences based on the degradable input received from the source encoder.
[0097] For example, for both the audio and vision examples discussed above, the relevant inference module (e.g. a suitably trained neural network) takes degradable domain-expert encodings (e.g. quantised JPEG data or coarsified audio data) as input and generates inferences from the received inputs in accordance with the training of the inference module.
[0098] The training of the inference module 18 may take many forms.
[0099] Consider the example of vision data (such as JPEG data). A data augmentation approach may be taken. Examining the plot 50 described above, it can be seen that increasing the quantisation progressively de-activates DCT coefficients; the training data may therefore be augmented to reflect such de-activation.
[0100] In one example implementation, state-of-the-art image augmentation was carried out in the RGB domain. That is, during training, standard augmentation techniques, such as rotation and blur, may be utilised, with the augmented data then transformed from new RGB images into the spatial frequency representation. We have found this approach to be effective at producing DCT networks with accuracies on par with their RGB counterparts.
[0101] Similar to vision, an inference module for audio applications may employ standard audio data augmentation techniques, such as background noise addition and time shifting. Both may be used in order to ensure the model generalises to real-world scenarios with expected variabilities in ambient noise (e.g. factory, retail, or home settings) and/or trigger instance (e.g. up to ±100 ms).
[0102] For both vision and audio applications, an ensemble of models may be trained with different source encoder fidelities, i.e. q for vision and τ for audio, as per Equations (2) and (3) respectively. As discussed further below, in some example implementations, three levels of inference module distortion-tolerance may be considered: high (HI), middle (MID), and low (LO), for both vision and audio, i.e. q_HI, q_MID, q_LO, and τ_HI, τ_MID, τ_LO, respectively.
[0103] In another example implementation, a plurality of q scalers may be used to generate image augmentations at different qualities in order to simultaneously enhance the DNN accuracy and tolerance to increased DCT coefficient de-activation (e.g. at q_HI = 100, q_MID = 60, and q_LO = 30).
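A sketch of this augmentation strategy, assuming the three quality scalers 100, 60 and 30 given above. The `toy_degrade` function is a hypothetical placeholder for the DCT quantisation encoder; in practice the encoder of the source coding module 16 would be substituted.

```python
import numpy as np

Q_LEVELS = {"HI": 100, "MID": 60, "LO": 30}  # example scalers from the text

def toy_degrade(img: np.ndarray, q: int) -> np.ndarray:
    """Placeholder for the DCT quantisation encoder: zero out a fraction
    of values that grows as q falls (illustration only)."""
    flat = img.flatten()
    keep = int(flat.size * q / 100)
    flat[np.argsort(np.abs(flat))[:flat.size - keep]] = 0.0
    return flat.reshape(img.shape)

def augment_batch(images, degrade, rng):
    """Per-sample random choice of quality scaler, so a single model
    learns tolerance to a range of coefficient de-activations
    (assumed strategy)."""
    qs = rng.choice(list(Q_LEVELS.values()), size=len(images))
    return np.stack([degrade(img, q) for img, q in zip(images, qs)]), qs
```

Each training batch thus mixes HI, MID and LO degradations, pushing the network toward distortion tolerance rather than a single operating point.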
[0105] The algorithm 80 starts at operation 82 where one of a plurality of models (e.g. HI inference modules) is selected for training. Next, at operation 84, the model is trained.
[0106] In the operation 84, the selected model is trained together with the relevant source encoding. Thus, the selected model is trained for use with a source encoder providing a particular level of degradation of an acquired data input. Specifically, the source coding module may be configured to degrade an acquired data signal based on a selected scalar.
[0107] As discussed further below, the parameters of the inference module may be trained together with the source coding module at a particular measure of available energy (e.g. based on a scalar selected dependent on an available energy).
[0114] The algorithm 110 starts at operation 112, where energy availability is determined.
[0115] Next, at operation 114, an inference module is selected from a plurality of available inference modules dependent on energy availability determined in operation 112. It should be noted that in an alternative embodiment, an inference module may be adapted based on energy availability (rather than selecting one of a plurality of modules). Thus, for example, a single inference module may be provided that is itself adaptable.
[0116] As discussed further below, the operation 112 may be implemented by determining a measure of an instantaneous energy supply and/or by generating a forecast of future available energy.
[0118] The system 120 comprises an energy source 121, an energy forecaster module 122, an instant energy monitor 123, a model loader 125, a model pool 126 and an execution engine 128.
[0119] As discussed further below, the energy source 121 is an example of the energy source 12 described above, the energy forecaster 122 and the instant energy monitor 123 may collectively implement the energy-aware adaptation module 14 and the execution engine 128 may include the source coding module 16 and the inference module 18 of the system 10.
[0120] In use of the system 120, the energy forecaster module 122 monitors the available energy on a relatively long-term scale and predicts how variable the available energy will be in the future. The energy forecaster module 122 then provides energy forecast information to the model loader 125 for the selection of the appropriate inference module/inference module parameters.
[0121] The instant energy monitor 123 tracks energy fluctuations of the energy source 121 with fine granularity and selects the appropriate parameters for the source coding module (e.g. the appropriate scaler (for vision) or stride (for audio)). Thus, the source coding module can be configured to adapt its computational requirements to an instantaneous energy availability.
[0122] The model loader 125 determines which of a plurality of models to use (e.g. one of the LO, MID or HI models discussed above). The models are stored in the model pool 126. (Of course, the model pool 126 could include more or fewer than the three models described herein.) For example, if the energy forecaster predicts the energy to fluctuate quickly, the LO model may be preferable since it incurs a smaller accuracy loss at high compression rates, allowing the system to cope with highly variable energy levels without degrading the accuracy excessively. Otherwise, if the energy is stable, there is less need to adapt the computation frequently, and the HI model may be selected to achieve an overall higher accuracy.
[0123] Once a model has been loaded into the execution engine 128 and the instant energy monitor 123 has provided parameters for the source coding module, the execution engine can be used to process acquired data in order to generate an inference output (as discussed above with reference to the system 10).
[0124] The combination of dynamic model loading and variable input encoding allows the system to adapt gracefully to widely fluctuating and unknown energy operational conditions. The dynamic model loading functionality (based on the output of the energy forecaster module 122) enables macro adaptations that respond to different classes of energy availability patterns while the variable encoding (based on the output of the instant energy monitor 123) accounts for fine-grained and instantaneous fluctuations.
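The macro/micro adaptation split of the system 120 may be sketched as follows. The thresholds on the coefficient of variation and the linear energy-to-scaler mapping are illustrative assumptions; the patent leaves the concrete policies of the energy forecaster module 122 and instant energy monitor 123 open.

```python
import statistics

def select_model(energy_history):
    """Macro adaptation (assumed policy, playing the role of the model
    loader 125): pick the LO model when forecast variability is high,
    the HI model when supply is stable."""
    mean = statistics.fmean(energy_history)
    if mean == 0:
        return "LO"
    cv = statistics.pstdev(energy_history) / mean  # coefficient of variation
    if cv > 0.5:
        return "LO"
    if cv > 0.2:
        return "MID"
    return "HI"

def select_scalar(instant_energy, max_energy, q_min=30, q_max=100):
    """Micro adaptation (instant energy monitor 123): map instantaneous
    energy linearly onto the encoder quality scaler."""
    frac = max(0.0, min(1.0, instant_energy / max_energy))
    return int(q_min + frac * (q_max - q_min))
```

The two functions operate at different timescales: `select_model` responds to classes of energy availability patterns, while `select_scalar` tracks fine-grained instantaneous fluctuations.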
[0126] In an example embodiment, the inference module may be a model (such as a machine learning model) having a plurality of trainable parameters. By way of example, the inference module may comprise a neural network having an input layer 141, one or more hidden layers, and an output layer 143.
[0127] The input layer 141 may receive one or more inputs from the source coding module 16. The output layer 143 may provide one or more outputs of the inference module.
[0128] A primary motivation behind using DCT image representation for learning is to capitalise on DCT's energy clustering property. For the LO, MID, and HI models (i.e. the three studied quantisation levels), the retained DCT coefficients cluster around the low spatial frequencies, such that much of the signal energy is preserved even under aggressive quantisation.
[0129] For completeness, an example processing system 300, which may be used to implement one or more of the systems described above, will now be described.
[0130] The processing system 300 may have a processor 302, a memory 304 closely coupled to the processor and comprised of a RAM 314 and a ROM 312, and, optionally, a user input 310 and a display 318. The processing system 300 may comprise one or more network/apparatus interfaces 308 for connection to a network/apparatus, e.g. a modem which may be wired or wireless. The network/apparatus interface 308 may also operate as a connection to other apparatus such as device/apparatus which is not network side apparatus. Thus, direct connection between devices/apparatus without network participation is possible.
[0131] The processor 302 is connected to each of the other components in order to control operation thereof.
[0132] The memory 304 may comprise a non-volatile memory, such as a hard disk drive (HDD) or a solid state drive (SSD). The ROM 312 of the memory 304 stores, amongst other things, an operating system 315 and may store software applications 316. The RAM 314 of the memory 304 is used by the processor 302 for the temporary storage of data. The operating system 315 may contain code which, when executed by the processor, implements aspects of the algorithms 20, 70, 80 and 110 described above. Note that in the case of a small device/apparatus, a memory suited to small-size usage may be used, i.e. a hard disk drive (HDD) or a solid state drive (SSD) is not always used.
[0133] The processor 302 may take any suitable form. For instance, it may be a microcontroller, a plurality of microcontrollers, a processor, or a plurality of processors.
[0134] The processing system 300 may be a standalone computer, a server, a console, or a network thereof. The processing system 300, together with any needed structural parts, may be entirely contained within a device/apparatus such as an IoT device/apparatus, i.e. embedded in a very small form factor.
[0135] In some example embodiments, the processing system 300 may also be associated with external software applications. These may be applications stored on a remote server device/apparatus and may run partly or exclusively on the remote server device/apparatus. These applications may be termed cloud-hosted applications. The processing system 300 may be in communication with the remote server device/apparatus in order to utilize the software application stored there.
[0137] Tangible media can be any device/apparatus capable of storing data/information which data/information can be exchanged between devices/apparatus/network.
[0138] Embodiments of the present invention may be implemented in software, hardware, application logic or a combination of software, hardware and application logic. The software, application logic and/or hardware may reside on memory, or any computer media. In an example embodiment, the application logic, software or an instruction set is maintained on any one of various conventional computer-readable media. In the context of this document, a “memory” or “computer-readable medium” may be any non-transitory media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer.
[0139] Reference to, where relevant, “computer-readable medium”, “computer program product”, “tangibly embodied computer program” etc., or a “processor” or “processing circuitry” etc. should be understood to encompass not only computers having differing architectures such as single/multi-processor architectures and sequencers/parallel architectures, but also specialised circuits such as field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), signal processing devices/apparatus and other devices/apparatus. References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor, or firmware such as the programmable content of a hardware device/apparatus, whether as instructions for a processor or as configured or configuration settings for a fixed-function device/apparatus, gate array, programmable logic device/apparatus, etc.
[0140] If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions may be optional or may be combined. Similarly, it will also be appreciated that the flow diagrams described above are examples only and that various operations depicted therein may be omitted, reordered and/or combined.
[0141] It will be appreciated that the above described example embodiments are purely illustrative and are not limiting on the scope of the invention. Other variations and modifications will be apparent to persons skilled in the art upon reading the present specification.
[0142] Moreover, the disclosure of the present application should be understood to include any novel features or any novel combination of features either explicitly or implicitly disclosed herein or any generalization thereof and during the prosecution of the present application or of any application derived therefrom, new claims may be formulated to cover any such features and/or combination of such features.
[0143] Although various aspects of the invention are set out in the independent claims, other aspects of the invention comprise other combinations of features from the described example embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.
[0144] It is also noted herein that while the above describes various examples, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications which may be made without departing from the scope of the present invention as defined in the appended claims.