Physiological signal prediction method

11227161 · 2022-01-18


Abstract

A physiological signal prediction method includes: collecting a video file, the video file containing long-term videos, and contents of the video file containing data for a face of a single person and true physiological signal data; segmenting a single long-term video into multiple short-term video clips; extracting, by using each frame of image in each of the short-term video clips, features of interested regions for identifying physiological signals so as to form features of interested regions of a single frame; splicing, for each of the short-term video clips, features of interested regions of all fixed frames corresponding to the short-term video clip into features of interested regions of a multi-frame video, and converting the features of the interested regions of the multi-frame video into a spatio-temporal graph; inputting the spatio-temporal graph into a deep learning model for training, and using the trained deep learning model to predict physiological signal parameters.

Claims

1. A physiological signal prediction method, comprising: S1, collecting a video file, the video file containing long-term videos, and content of the video file containing data for a face of a single person, wherein the face rotates with a certain amplitude at a certain speed, and the face has true physiological signal data; S2, segmenting a single long-term video in the video file into multiple short-term video clips, wherein each of the short-term video clips has a fixed number of frames, and each of the short-term video clips corresponds to a true physiological signal tag; S3, extracting, with each frame of image in each of the short-term video clips, features of interested regions for identifying physiological signals so as to form features of interested regions of a single frame; S4, splicing, for each of the short-term video clips, the features of the interested regions of all fixed frames corresponding to the short-term video clip so as to form features of interested regions of a multi-frame video, and converting the features of the interested regions of the multi-frame video from an RGB color space to a YUV color space so as to form a spatio-temporal graph containing temporal and spatial dimensions; wherein the step of splicing the features of the interested regions of all fixed frames corresponding to the short-term video clip further comprises: S41, dividing a feature of an interested region on one cheek of a single frame uniformly into multiple rectangular zones so as to construct a matrix of pixel values; S42, reorganizing, by taking RGB as a standard, the matrix of pixel values to construct a reorganized matrix of pixel values; S43, splicing the reorganized matrices of pixel values on both cheeks of the face by column so as to construct a matrix of features of interested regions of a single frame; and S44, splicing multiple matrices, each of which is a matrix of features of interested regions of a single frame, by column so as to form features of 
interested regions of a multi-frame video; and wherein the step of dividing a feature of an interested region on one cheek of a single frame uniformly into the multiple rectangular zones so as to construct a matrix of pixel values further comprises: dividing the feature of the interested region on one cheek of the single frame uniformly into m×n rectangular zones to construct a matrix of pixel values denoted as: A=[A.sub.11 … A.sub.1k … A.sub.1n; …; A.sub.i1 … A.sub.ik … A.sub.in; …; A.sub.m1 … A.sub.mk … A.sub.mn], where A.sub.ik represents a pixel matrix for a single rectangular zone, with a matrix dimension of [p, q, 3]; and readjusting the matrix dimension of A.sub.ik into [p×q, 3], where 3 columns correspond to RGB channels, respectively; and S5, inputting the spatio-temporal graph into a deep learning model for training, and using the trained deep learning model to predict physiological signal parameters.

2. The physiological signal prediction method according to claim 1, wherein the step of segmenting the single long-term video into the multiple short-term video clips comprises: segmenting the long-term video, with a time interval for the physiological signal tag as a length of a window for intercepting each of the short-term video clips, and with a time point of the respective physiological signal tag as an intermediate time point within the window.

3. The physiological signal prediction method according to claim 1, wherein the step of extracting the features of the interested regions for identifying physiological signals by using each frame of image in each of the short-term video clips comprises: determining four-point coordinates of respective rectangular boxes on both cheeks by means of a 68 marks method in a dlib library, and selecting the rectangular boxes on both cheeks as the interested regions for identifying physiological signals.

4. The physiological signal prediction method according to claim 3, wherein for a frame that is unidentifiable for any feature of any interested region, replacing a value of the frame that is unidentifiable for any feature of any interested region by a value of a previous frame that is identifiable for a feature of an interested region.

5. The physiological signal prediction method according to claim 3, wherein the step of extracting the features of the interested regions for identifying physiological signals by using each frame of image in each of the short-term video clips further comprises: using, for each frame of image in each of the short-term video clips, functions in the dlib library to perform face recognition, alignment, and mask face extraction.

6. The physiological signal prediction method according to claim 1, wherein the step of reorganizing the matrix of pixel values by taking RGB as the standard comprises: averaging the pixel values in the A.sub.ik by column for R, G, and B channels, respectively, to construct a matrix denoted as Ā.sub.ik=[R G B] with a matrix dimension of [1, 3]; and splicing Ā.sub.11 . . . Ā.sub.ik . . . Ā.sub.mn into a [mn, 3]-dimensional matrix by column, which matrix is denoted as:
B=[Ā.sub.11 . . . Ā.sub.1n . . . Ā.sub.i1 . . . Ā.sub.in . . . Ā.sub.m1 . . . Ā.sub.mn].sup.T.

7. The physiological signal prediction method according to claim 6, wherein the step of splicing the reorganized matrices of pixel values on both cheeks by column to construct a matrix of features of interested regions of a single frame comprises: splicing the reorganized matrices of pixel values on both cheeks into a [2mn, 3]-dimensional matrix by column, which is denoted as Bd[t], a matrix of features of interested regions of the t-th frame; and the step of splicing the multiple matrices, each of which is a matrix of features of interested regions of a single frame, by column so as to form features of interested regions of a multi-frame video comprises: splicing matrices of features of interested regions of T frames by column into a matrix denoted as:
C=[Bd[1] . . . Bd[t] . . . Bd[T]].sup.T.

8. The physiological signal prediction method according to claim 7, wherein the deep learning model is a three-dimensional convolutional neural network model or a two-dimensional convolutional neural network model with a residual network as the core; and inputting the spatio-temporal graph into the three-dimensional convolutional neural network model or the two-dimensional convolutional neural network model for training, and using the trained three-dimensional convolutional neural network model or the trained two-dimensional convolutional neural network model to predict the physiological signal parameters.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the present disclosure, and together with the description serve to explain the principles of the disclosure.

(2) In order to more clearly explain the embodiments of the present disclosure or the technical solutions in the existing technologies, drawings that need to be used in the description of the embodiments or the existing technologies will be briefly introduced below. Obviously, for those of ordinary skill in the art, other drawings can be obtained based on these drawings without any creative effort.

(3) FIG. 1 is a flowchart illustrating a physiological signal prediction method provided by an embodiment of the present disclosure.

(4) FIG. 2 is a flowchart illustrating the process of splicing features of interested regions of each frame of image in the short-term video clip according to the physiological signal prediction method provided by an embodiment of the present disclosure.

DETAILED DESCRIPTION OF THE EMBODIMENTS

(5) In order to make the objectives, technical solutions and advantages of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure will be further described in detail in conjunction with the accompanying figures showing exemplary embodiments of the disclosure. Obviously, the described embodiments are only part of the embodiments of the present disclosure, rather than all of the embodiments thereof. All other embodiments obtained based on embodiments in the present disclosure by those of ordinary skill in the art without any creative effort fall within the scope of the present disclosure.

Embodiment One

(6) As shown in FIG. 1, a physiological signal prediction method provided by an embodiment of the present application includes the following steps.

(7) At step S1, a video file is collected, the video file containing long-term videos, and content of the video file containing data for a face of a single person. The face may rotate with a certain amplitude at a certain speed, and the face has true physiological signal data.

(8) At step S2, a single long-term video in the video file is segmented into multiple short-term video clips. For example, the long-term video is segmented, with a time interval for the physiological signal tag as a length of a window for intercepting each of the short-term video clips, and with a time point of the respective physiological signal tag as an intermediate time point within the window. Each of the short-term video clips has a fixed number of frames, and each of the short-term video clips corresponds to a true physiological signal tag. In this embodiment, each short video file needs to be stored in an uncompressed manner, and the file format can be .avi.
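The windowing described above can be sketched as follows. This is an illustrative sketch only: the function name, frame rate, tag timestamps, and window length are our assumptions, not values fixed by the embodiment.

```python
# Hypothetical sketch of step S2: segment a long video into fixed-length
# clips, one per physiological-signal tag, with the tag's time point at the
# middle of each window.
def segment_clips(num_frames, fps, tag_times, window_s):
    """Return (start, end) frame ranges, one clip per tag timestamp (seconds)."""
    half = int(round(window_s * fps / 2))
    clips = []
    for t in tag_times:
        center = int(round(t * fps))
        start, end = center - half, center + half
        if start >= 0 and end <= num_frames:  # skip windows falling off the video
            clips.append((start, end))
    return clips

# e.g. a 30 fps video of 900 frames, tags at 2 s, 4 s, 6 s, 2 s windows
print(segment_clips(900, 30, [2, 4, 6], 2.0))  # → [(30, 90), (90, 150), (150, 210)]
```

Each returned range then yields one short-term clip with a fixed number of frames and exactly one associated tag.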

(9) At step S3, in this embodiment, for each frame of image in each of the short-term video clips, functions in the dlib library are used to perform face recognition, alignment, and mask face extraction. Four-point coordinates of respective rectangular boxes on both cheeks are determined by means of the 68 marks method in the dlib library, and the rectangular boxes on both cheeks are selected as the interested regions for identifying physiological signals, from which the features of the interested regions are extracted to form features of interested regions of a single frame.

(10) The reason for this is that, on the one hand, the positions of the cheeks are not easily obstructed, and on the other hand, blood flow at the cheeks is relatively high. Choosing such positions as feature extraction regions can achieve a good prediction effect.

(11) In some embodiments, for a frame in which no feature of any interested region can be identified, the value of that frame is replaced by the value of the previous frame in which a feature of an interested region was identifiable, so as to ensure continuity of the spatio-temporal graph in its temporal dimension. Usually, unidentifiable video frames are replaced by black pixels by default. Herein, the value of the previous identifiable frame is used instead, which means that an approximate video frame value is inserted. This avoids the negative effect that a large difference in pixel values would have on the model prediction.
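The fallback in paragraph (11) can be sketched as below; the function name and the per-frame list structure (an ROI value, or None when detection failed) are illustrative assumptions.

```python
# Sketch of the unidentifiable-frame fallback: reuse the most recent
# successfully detected ROI instead of leaving a black frame, preserving
# continuity of the spatio-temporal graph in the temporal dimension.
def fill_missing_rois(rois):
    """Replace None entries with the value of the previous identifiable frame."""
    filled, last = [], None
    for roi in rois:
        if roi is None and last is not None:
            roi = last  # insert an approximate value from the previous frame
        if roi is not None:
            last = roi
        filled.append(roi)
    return filled
```

A frame with no earlier identifiable frame is left as-is, since there is no previous value to copy.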

(12) At step S4, for each of the short-term video clips, the features of the interested regions of all fixed frames corresponding to the short-term video clip are spliced so as to form features of interested regions of a multi-frame video, and the features of the interested regions of the multi-frame video are converted from an RGB color space to a YUV color space so as to form a spatio-temporal graph containing temporal and spatial dimensions.

(13) In some embodiments, as shown in FIG. 2, the process of splicing features of interested regions of each frame of image in the short-term video clip may be specifically implemented in the following manners.

(14) At step S41, a feature of an interested region on one cheek of a single frame is divided uniformly into multiple rectangular zones so as to construct a matrix of pixel values.

(15) In some embodiments, the process of dividing a feature of an interested region on one cheek of a single frame uniformly into the multiple rectangular zones so as to construct a matrix of pixel values may be specifically implemented in the following manners.

(16) The feature of the interested region on one cheek of the single frame is divided uniformly into m×n rectangular zones to construct a matrix of pixel values denoted as:

(17)

A = [ A.sub.11 … A.sub.1k … A.sub.1n
          ⋮
      A.sub.i1 … A.sub.ik … A.sub.in
          ⋮
      A.sub.m1 … A.sub.mk … A.sub.mn ]

(18) where A.sub.ik represents a pixel matrix for a single rectangular zone, with a matrix dimension of [p, q, 3], where i=1 . . . m and k=1 . . . n. p and q correspond to the height and width of a single zone A.sub.ik, respectively, so that p×m is the height H of the matrix A and q×n is the width W of the matrix A.

(19) The matrix dimension of the A.sub.ik is readjusted into [p×q, 3], where 3 columns correspond to RGB channels, respectively.

(20) At step S42, by taking RGB as a standard, the matrix of pixel values is reorganized to construct a reorganized matrix of pixel values.

(21) The pixel values in the A.sub.ik are averaged by column for R, G, and B channels, respectively, to construct a matrix with a matrix dimension of [1, 3] denoted as:
Ā.sub.ik=[R G B].

(22) Ā.sub.11 . . . Ā.sub.ik . . . Ā.sub.mn may be spliced into a [mn, 3]-dimensional matrix by column, which matrix is denoted as:
B=[Ā.sub.11 . . . Ā.sub.1n . . . Ā.sub.i1 . . . Ā.sub.in . . . Ā.sub.m1 . . . Ā.sub.mn].sup.T.

(23) At step S43, the reorganized matrices of pixel values of interested regions on both cheeks are spliced into a [2mn, 3]-dimensional matrix by column to construct a matrix of features of interested regions of a single frame, which is denoted as Bd[t], namely a matrix of features of interested regions of the t-th frame, where t=1 . . . T, and T is the fixed number of frames in each short-term video clip.

(24) At step S44, multiple matrices, each of which is a matrix of features of interested regions of a single frame, are spliced by column so as to form features of interested regions of a multi-frame video.

(25) Specifically, matrices of features of interested regions of T frames are spliced by column into a matrix denoted as:
C=[Bd[1] . . . Bd[t] . . . Bd[T]].sup.T.

(26) The matrix C describes the features of interested regions of a multi-frame video.
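Steps S41 to S44 above can be sketched in numpy. The helper names and the zone counts are our choices; the shapes follow the description: each zone average Ā.sub.ik is a [1, 3] row, B is [mn, 3], Bd[t] is [2mn, 3], and C stacks T frames.

```python
import numpy as np

# Minimal sketch of steps S41-S44 (assumed helper names; shapes per the text).
def zone_means(roi, m, n):
    """Divide an (H, W, 3) ROI into an m x n grid and average each zone's RGB
    values -> matrix B of shape (m*n, 3)."""
    H, W, _ = roi.shape
    p, q = H // m, W // n
    rows = []
    for i in range(m):
        for k in range(n):
            zone = roi[i*p:(i+1)*p, k*q:(k+1)*q]           # A_ik, shape (p, q, 3)
            rows.append(zone.reshape(-1, 3).mean(axis=0))  # Ā_ik = [R G B]
    return np.stack(rows)                                  # B, shape (m*n, 3)

def frame_feature(left_roi, right_roi, m, n):
    """Bd[t]: splice both cheeks' B matrices by column -> (2mn, 3)."""
    return np.vstack([zone_means(left_roi, m, n), zone_means(right_roi, m, n)])

def clip_feature(frames, m, n):
    """C: stack Bd[t] for t = 1..T -> (T, 2mn, 3)."""
    return np.stack([frame_feature(l, r, m, n) for l, r in frames])
```

For example, with 8×8 cheek ROIs and m=n=2, a 5-frame clip yields a C of shape (5, 8, 3), which is then converted to YUV to form the spatio-temporal graph.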

(27) At step S5, the spatio-temporal graph is input into a deep learning model for training, and the trained deep learning model is used to predict physiological signal parameters.

(28) In fact, each short video clip corresponds to one physiological signal tag, one spatio-temporal graph, and multiple picture frames, and each picture frame corresponds to two interested regions (the left and right cheeks). Each physiological signal tag characterizes a set of physiological signal data, which reflects pulsatile information about facial physiological parameters in the short video clip.

(29) The deep learning model is a three-dimensional convolutional neural network model or a two-dimensional convolutional neural network model with a residual network as the core. The spatio-temporal graph is input into the three-dimensional convolutional neural network model or the two-dimensional convolutional neural network model for training, and the trained three-dimensional convolutional neural network model or the trained two-dimensional convolutional neural network model is used to predict the physiological signal parameters.

(30) In some embodiments, the three-dimensional convolutional neural network model or the two-dimensional convolutional neural network model is a convolutional neural network constructed with a residual network (ResNet) as its basis. The design idea of SENet is introduced in the spatial dimension, and Squeeze-and-Excitation (SE) blocks are incorporated. By introducing the design ideas of depthwise separable convolution and ShuffleNet, the complexity of the model is reduced while a given level of performance is maintained; that is, group convolution is performed on the channel dimension, which is suitable for designing a block when the number of channels is large. The convolution kernels adopt dilated convolution. Due to the influence of ambient noise, the extracted spatio-temporal graph may suffer from missing or inaccurate continuous information, and the pooling operation may also cause loss of pulsatile information about physiological signals. Therefore, a large convolution kernel, such as a 5×5 kernel, can be used, possibly mixed with a 3×3 kernel, or dilated convolution can be used to increase the receptive field while reducing the amount of calculation that a large kernel would entail. That is, each convolution output contains a wide range of information, which improves the effectiveness of the information extracted by convolution. Use of a neural network with a large convolution kernel, such as AlexNet, greatly alleviates the problem of missing continuous segments of feature information in a spatio-temporal graph caused by factors such as substantial and rapid head rotation or changes in illumination. The convolution kernel of the first layer of the AlexNet model has a size of 11, giving a large receptive field, which facilitates extraction of pulsatile information about physiological signals in the spatio-temporal graph. Compared with a small convolution kernel, this weakens the influence of missing spatio-temporal graph information.

(31) Finally, mean absolute error (MAE) and root mean square error (RMSE) are used to evaluate the measurement results for the physiological signals, and a scatter plot of tag value versus predicted value is drawn.

Embodiment Two

(32) OpenCV is used to read video frame pictures. The video frame pictures are converted from an RGB space to a grayscale space for face detection. Coordinates of picture pixels are converted to a numpy array. Four-point coordinates of a rectangle on a first side, i.e., shape[12][0], shape[54][0], shape[33][1], and shape[29][1], are determined, the enclosed rectangle representing a first interested region; and four-point coordinates of a rectangle on a second side, i.e., shape[48][0], shape[4][0], shape[33][1], and shape[29][1], are determined, the enclosed rectangle representing a second interested region. In the above shape[a][b], a represents a serial number of one of 68-point marks, b is 0 for an abscissa x, and b is 1 for an ordinate y.
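Given the 68 landmarks as a (68, 2) numpy array `shape` (e.g. obtained via dlib's shape predictor and converted to an array), the two cheek rectangles can be assembled as below. The landmark indices follow the embodiment; the helper name and the (x1, y1, x2, y2) tuple layout are our assumptions.

```python
import numpy as np

# Sketch of the ROI selection in paragraph (32): build the two cheek
# rectangles from the listed 68-point landmark indices.
def cheek_rois(shape):
    """Return (left, right) cheek rectangles as (x1, y1, x2, y2) tuples."""
    # first side: x between landmarks 54 and 12, y between landmarks 29 and 33
    right = (int(shape[54][0]), int(shape[29][1]),
             int(shape[12][0]), int(shape[33][1]))
    # second side: x between landmarks 4 and 48, y between landmarks 29 and 33
    left = (int(shape[4][0]), int(shape[29][1]),
            int(shape[48][0]), int(shape[33][1]))
    return left, right
```

Cropping each frame with these rectangles yields the two interested regions from which the pixel matrices A are constructed.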

(33) For a specific frame that is unidentifiable for any feature of an interested region caused by influence from ambient noise, a method for processing such a frame may be implemented in the following manner.

(34) Due to the extremely short interval between frames, a value of a physiological signal under normal conditions will not fluctuate sharply. The unidentifiable frame may be replaced by a value of a previous identifiable frame to improve robustness of a spatio-temporal graph and guarantee continuity of the spatio-temporal graph in its temporal dimension. Usually, unidentifiable video frames will be replaced by black pixels by default. However, in this implementation, the value of the previous identifiable frame can be used to replace the value of the unidentifiable frame, which means that an approximate video frame value is inserted here, thereby avoiding negative effects of large difference in pixel values on the model prediction.

(35) An interested region on one cheek of a single frame is divided uniformly into m×n rectangular zones, to construct a matrix of pixel values denoted as:

(36)

A = [ A.sub.11 … A.sub.1k … A.sub.1n
          ⋮
      A.sub.i1 … A.sub.ik … A.sub.in
          ⋮
      A.sub.m1 … A.sub.mk … A.sub.mn ]

(37) where A.sub.ik represents a pixel matrix for a single rectangular zone, with a matrix dimension of [p, q, 3].

(38) The matrix dimension of the A.sub.ik is readjusted into [p×q, 3], where 3 columns correspond to RGB channels, respectively.

(39) The pixel values in the A.sub.ik are averaged by column for R, G, and B channels, respectively, to construct a matrix with a matrix dimension of [1, 3] denoted as:
Ā.sub.ik=[R G B]

(40) Ā.sub.11 . . . Ā.sub.ik . . . Ā.sub.mn are spliced into a [mn, 3]-dimensional matrix by column, and the matrix is denoted as:
B=[Ā.sub.11 . . . Ā.sub.1n . . . Ā.sub.i1 . . . Ā.sub.in . . . Ā.sub.m1 . . . Ā.sub.mn].sup.T.

(41) The reorganized matrices of pixel values of interested regions on both cheeks are spliced into a [2mn, 3]-dimensional matrix by column so as to construct a matrix of features of interested regions of a single frame, which is denoted as Bd[t], namely a matrix of features of interested regions of the t-th frame, where t=1 . . . T, and T is the fixed number of frames in each short-term video clip.

(42) Matrices of features within T frames are spliced by column into a matrix denoted as:
C=[Bd[1] . . . Bd[t] . . . Bd[T]].sup.T.

(43) The matrix C describes the features of interested regions of a multi-frame video.

(44) The matrix C is converted from an RGB color space to a YUV color space to generate a spatio-temporal graph.
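The RGB-to-YUV conversion of the matrix C can be sketched as follows. With OpenCV one would typically call cv2.cvtColor with cv2.COLOR_RGB2YUV; here the standard BT.601 transform is written out directly so the sketch stays self-contained (the choice of BT.601 coefficients is our assumption).

```python
import numpy as np

# Sketch of paragraph (44): convert the stacked RGB feature matrix to YUV
# to form the spatio-temporal graph. BT.601 coefficients, values in [0, 1].
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],   # Y
                    [-0.147, -0.289,  0.436],   # U
                    [ 0.615, -0.515, -0.100]])  # V

def rgb_to_yuv(c):
    """Apply the RGB->YUV transform along the last axis of an (..., 3) array."""
    return np.asarray(c) @ RGB2YUV.T
```

Applied to C with shape (T, 2mn, 3), the result keeps the same shape, with the three channels now holding Y, U, and V.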

(45) For example, the spatio-temporal graph has a dimension of [128, 128, 3], where length and width are both 128, and the number of channels is 3.

(46) Input dimensions of a three-dimensional convolutional neural network model may be (batch, c, L, H, W), where c=3.

(47) Input dimensions of a two-dimensional convolutional neural network model may be (batch, c, H, W), where c=3.

(48) batch—the number of pieces of data processed in a batch of the model;

(49) 3—RGB channels in a color space, i.e., the number of the channels;

(50) L—a temporal dimension, that is, one video clip is input in each batch, the number of frames contained in this clip being L;

(51) H—a height of a spatial dimension, i.e., a height of a single spatio-temporal graph; and

(53) W—a width of a spatial dimension, i.e., a width of a single spatio-temporal graph.

(54) The number of prediction results output by the three-dimensional convolutional neural network model is consistent with the number of true physiological signal tags, and matches the number of spatio-temporal graphs in the input temporal dimension L. For example, if the input of the model is k spatio-temporal graphs, the output is k predicted values, corresponding to k physiological signal tags. It should be noted that the spatio-temporal graphs are continuous in time because the short videos are continuous.

(55) A single spatio-temporal graph is input into the two-dimensional convolutional neural network model, and the output predicted by the model corresponds to a true physiological signal tag of the spatio-temporal graph.

(56) The three-dimensional convolutional neural network model is a three-dimensional convolutional neural network model built with a residual network (ResNet) as the core. The design idea of SENet is introduced in the spatial dimension, and Squeeze-and-Excitation (SE) blocks are incorporated. The pulsatile information has different sensitivities to the three YUV channels; the channel weights learned by the model through data driving determine the degree to which each channel's information influences the physiological parameters. Therefore, the model should keep the dimensions "batch" and "L" unchanged during application of the Squeeze-and-Excitation blocks.

(57) The following methods can also be used to build a two-dimensional convolutional neural network model, for example, by removing information about the temporal dimension L.

(58) (1) Squeeze: the temporal dimension L is maintained unchanged, and for input feature matrices of a single spatio-temporal graph, u.sub.c, global average pooling is taken for a feature matrix corresponding to each channel according to:

(59) z.sub.c = (1/(H×W)) Σ.sub.i=1.sup.H Σ.sub.j=1.sup.W u.sub.c(i, j)

(60) where z.sub.c is the average value of the channel, the subscript c represents the channel, H and W represent height and width respectively, and u.sub.c(i, j) represents the matrix value at pixel (i, j).

(61) (2) Excitation: the value of the channel is recalibrated self-adaptively, which is equivalent to calculating the weight of the channel according to:
s=σ(W.sub.2δ(W.sub.1z))

(62) where δ is a ReLu activation function, W.sub.1 and W.sub.2 are weights of a fully connected layer, σ is a softmax function, applied to the spatial dimensions H and W, with the temporal dimension L unchanged; s represents the weighted weight of all channels, a one-dimensional tensor, the tensor magnitude being the number of channels; and z represents the global average pooled value of all input channels, a one-dimensional tensor, the tensor magnitude being the number of channels.

(63) (3) The feature matrix of each channel is weighted according to:
U.sub.c=u.sub.c×s.sub.c

(64) where u.sub.c represents a feature matrix value of a single channel c for the original input, U.sub.c represents a feature matrix value of the channel c after weighting, and s.sub.c represents a weighted value corresponding to the channel c.
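Steps (58) to (64) can be sketched in numpy for a single input of shape (C, H, W). The random weights and the reduction size are illustrative; note that the text specifies σ as a softmax, which we follow here (the original SENet design uses a sigmoid gate instead).

```python
import numpy as np

# Sketch of the Squeeze-and-Excitation steps (58)-(64) for one (C, H, W) input.
# W1 and W2 are the fully connected layer weights from equation (61).
def se_block(u, W1, W2):
    C = u.shape[0]
    z = u.reshape(C, -1).mean(axis=1)      # (58) squeeze: global average pool -> z_c
    h = np.maximum(W1 @ z, 0.0)            # (61) delta = ReLU
    a = W2 @ h
    s = np.exp(a - a.max()); s /= s.sum()  # (61) sigma = softmax -> channel weights s
    return u * s[:, None, None]            # (63) U_c = u_c x s_c
```

The output has the same shape as the input; each channel's feature map is simply scaled by its learned weight s.sub.c.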

(65) The three-dimensional convolutional neural network model introduces a design idea of Depthwise Separable Convolution and ShuffleNet. Under a certain performance of the model, the complexity of the model is reduced. That is, group convolution is performed on the channel dimension, which is suitable for designing a block in the case of a large channel value.

(66) The following methods can also be used to build a two-dimensional convolutional neural network model, for example, by removing information about the temporal dimension L.

(67) The input is split along the channel dimension into two halves (channel split), which serve as the inputs of a first branch and a second branch, respectively.

(68) 1. The first branch is built through the following processes sequentially:

(69) (1) group convolution according to 1×1×1 GConv, where the channels may be selectively divided into 3, 4, or 8 groups;

(70) (2) batch normalization followed by a ReLU or H-Swish activation function (BN ReLU or BN H-Swish);

(71) (3) depthwise separable convolution, with each channel as a group and a convolution stride of 2, according to 3×3×3 DWConv (stride=2);

(72) (4) batch normalization (BN);

(73) (5) group convolution according to 1×1×1 GConv; and

(74) (6) batch normalization (BN).

(75) 2. The second branch is built through the following processes sequentially:

(76) global average pooling according to 3×3×3 AVG Pool (stride=2).

(77) After the outputs of the first and second branches are concatenated (Concat), channel shuffle is performed; a shuffle block is constructed through all of the above processes.
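The channel-shuffle step in paragraph (77) regroups channels so information mixes across the groups used by the group convolutions. A numpy sketch, assuming a channels-first layout (C, ...) and an illustrative group count:

```python
import numpy as np

# Sketch of channel shuffle: reshape channels into (groups, C // groups),
# transpose the two group axes, and flatten back, interleaving the groups.
def channel_shuffle(x, groups):
    c = x.shape[0]
    assert c % groups == 0, "channel count must divide evenly into groups"
    return (x.reshape(groups, c // groups, *x.shape[1:])
             .transpose(1, 0, *range(2, x.ndim + 1))
             .reshape(c, *x.shape[1:]))
```

For 6 channels in 2 groups, channel order 0..5 becomes 0, 3, 1, 4, 2, 5, so each output group mixes channels from both input groups.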

(78) Convolution kernels adopt a method of dilated convolution. Due to influence of ambient noise, the extracted spatio-temporal graph may suffer from continuous information missing or information inaccuracy, and the pooling operation may also cause loss of pulsatile information about physiological signals. Use of dilated convolution may increase the receptive field, which means that each convolution output contains a wide range of information, thereby improving effectiveness of information extracted by convolution.

(79) Use of a neural network with a large convolution kernel, such as Alexnet, will greatly improve the problem of missing continuous segments of feature information for a spatio-temporal graph caused by factors such as substantial and rapid head rotation or changes in illumination. In some embodiments, the size of convolution kernel of the first layer in Alexnet model is 11, and the receptive field is large, which can facilitate extraction of pulsatile information about physiological signals in the spatio-temporal graph. Compared with a small convolution kernel, this can weaken influence of lack of spatio-temporal graph information.

(80) Physiological signal prediction can be performed over multiple channels at the same time. The extracted spatio-temporal graphs are selected as input, the two-dimensional convolutional neural network model or the three-dimensional convolutional neural network model is used for training, and the predicted values are output.

(81) Mean absolute error (MAE) and root mean square error (RMSE), defined as follows, are used to evaluate the measurement results for the physiological signals, and a scatter plot is drawn with the tag value as abscissa and the predicted value as ordinate:

(82)

MAE(ŷ, y) = (1/m) Σ.sub.i=1.sup.m |ŷ−y|

RMSE(ŷ, y) = √[(1/m) Σ.sub.i=1.sup.m (ŷ−y).sup.2]

(83) where ŷ represents a prediction result of the model, y represents a true physiological signal tag corresponding to a video clip, and m represents a total number of physiological signal tags.
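The two metrics in (82) are straightforward to compute; a minimal numpy sketch with illustrative heart-rate-like values:

```python
import numpy as np

# Sketch of the evaluation metrics in (82): MAE and RMSE between the model's
# predictions y_hat and the true physiological signal tags y.
def mae(y_hat, y):
    return np.mean(np.abs(np.asarray(y_hat) - np.asarray(y)))

def rmse(y_hat, y):
    return np.sqrt(np.mean((np.asarray(y_hat) - np.asarray(y)) ** 2))

# e.g. two predicted pulse values vs. two tag values
print(mae([72, 75], [70, 75]), rmse([72, 75], [70, 75]))
```

RMSE penalizes large individual errors more heavily than MAE, which is why both are reported alongside the tag-versus-prediction scatter plot.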

(84) It should be noted that relational terms such as “first” and “second” herein are used solely to distinguish one from another entity or operation, without necessarily requiring or implying any such actual relationship or order between such entities or operations. The terms “comprising”, “including”, or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, a method, an article, or an apparatus that includes a series of elements not only includes those elements but also may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “including a/an . . . ” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that includes the element.

(85) In the above description, only specific embodiments of the present disclosure have been provided, so that those skilled in the art can understand or implement the present disclosure. Various modifications to those embodiments will be obvious to those skilled in the art, and general principles defined herein can be implemented in other embodiments without departing from the spirit or scope of the present disclosure. Therefore, the present disclosure will not be limited to the embodiments described herein, but shall accord with the widest scope consistent with the principles and novel characteristics described and claimed herein.