APPARATUS AND METHOD FOR DEVELOPING SPACE ANALYSIS MODEL BASED ON DATA AUGMENTATION
20220358752 · 2022-11-10
Assignee
Inventors
CPC classification
G06V20/70
PHYSICS
G06V10/774
PHYSICS
G06T19/20
PHYSICS
International classification
G06V10/774
PHYSICS
Abstract
Disclosed is a data augmentation-based space analysis model learning apparatus including one or more processors, wherein the operation performed by the processor includes acquiring a plurality of space images and labeling a class specifying space information corresponding to each of the plurality of space images or acquiring the plurality of space images to which the class is labeled and generating learning data, generating a second space image by changing some or all of pixel information included in a first space image among the plurality of space images and augmenting the learning data, labeling a class labeled to the first space image, to the second space image, and learning a weight of a model designed based on a predetermined image classification algorithm, for deriving a correlation between a space image included in the learning data and a class labeled to each of the space images.
Claims
1. A data augmentation-based space analysis model learning apparatus comprising: one or more memories configured to store instructions for performing a predetermined operation; and one or more processors operatively connected to the one or more memories and configured to execute the instructions, wherein the operation performed by the processor includes: acquiring a plurality of space images and labeling a class specifying space information corresponding to each of the plurality of space images or acquiring the plurality of space images to which the class is labeled and generating learning data; generating a second space image by changing some or all of pixel information included in a first space image among the plurality of space images and augmenting the learning data; labeling a class labeled to the first space image, to the second space image; and learning a weight of a model designed based on a predetermined image classification algorithm, for deriving a correlation between a space image included in the learning data and a class labeled to each of the space images, by inputting the augmented learning data to the model to generate a model for determining a space image based on the correlation.
2. The data augmentation-based space analysis model learning apparatus of claim 1, wherein the generating the second space image includes generating the second space image by changing an element value that is greater than a predetermined reference value to a greater element value and changing an element value smaller than the reference value to a smaller element value with respect to an element value (x, y, z) configuring RGB information of the pixel information included in the first space image.
3. The data augmentation-based space analysis model learning apparatus of claim 2, wherein the generating the second space image includes generating the second space image from the first space image based on Equation 1 below:
dst(I)=round(max(0, min(α*src(I)−β,255))) [Equation 1] (src(I): element value (x, y, z) before pixel information is changed, α: constant, β: constant, and dst(I): element value (x′, y′, z′) after pixel information is changed).
4. The data augmentation-based space analysis model learning apparatus of claim 1, wherein the generating the second space image includes generating the second space image from the first space image based on Equation 2 below:
Y=0.1667*R+0.5*G+0.3334*B [Equation 2] (R: x of RGB information (x, y, z) of pixel information, G: y of RGB information (x, y, z) of pixel information, B: z of RGB information (x, y, z) of pixel information, and Y: element value (x′, y′, z′) after pixel information is changed).
5. The data augmentation-based space analysis model learning apparatus of claim 1, wherein the generating the second space image includes generating the second space image from the first space image based on Equations 3 and 4 below:
dst(I)=round(max(0, min(α*src(I)−β,255))) [Equation 3] (src(I): element value (x, y, z) before pixel information is changed, α: constant, β: constant, dst(I): element value (x′, y′, z′) after pixel information is changed)
Y=0.1667*R+0.5*G+0.3334*B [Equation 4] (R: x′ of (x′, y′, z′) of dst(I) acquired from Equation 3, G: y′ of (x′, y′, z′) of dst(I) acquired from Equation 3, B: z′ of (x′, y′, z′) of dst(I) acquired from Equation 3, and Y: element value (x″, y″, z″) after pixel information is changed).
6. The data augmentation-based space analysis model learning apparatus of claim 1, wherein the generating the second space image includes generating the second space image by adding noise information to some of pixel information included in the first space image.
7. The data augmentation-based space analysis model learning apparatus of claim 6, wherein the generating the second space image includes generating the second space image by adding noise information to pixel information of the first space image based on Equation 5 below:
dst(I)=round(max(0, min(src(I)±N,255))) [Equation 5] (src(I): element value (x, y, z) before pixel information is changed, N: random number, dst(I): element value (x′, y′, z′) after pixel information is changed).
8. The data augmentation-based space analysis model learning apparatus of claim 1, wherein the generating the second space image includes generating the second space image by calculating a value (R_max−R_AVG, G_max−G_AVG, B_max−B_AVG) by subtracting an element average value (R_AVG, G_AVG, B_AVG) of each of R, G, and B of a plurality of pixels from a maximum element value (R_max, G_max, B_max) among element values of each of R, G, and B of the plurality of pixels included in a size of an N×N matrix (N being a natural number equal to or greater than 3) including a first pixel at a center among pixels included in the first space image and, when any one of element values of the (R_max−R_AVG, G_max−G_AVG, B_max−B_AVG) is smaller than a preset value, performing an operation of blurring the first pixel.
9. The data augmentation-based space analysis model learning apparatus of claim 1, wherein the generating the second space image includes generating random number information based on standard Gaussian normal distribution with an average value of 0 and a standard deviation of 100 as much as a number of all pixels included in the first space image and generating the second space image into which noise is inserted by adding the random number information to each of the all pixels.
10. The data augmentation-based space analysis model learning apparatus of claim 1, wherein the generating the model includes setting the space image included in the learning data to be input to an input layer of a neural network designed based on a Deep Residual Learning for Image Recognition (ResNet) algorithm, setting a class, labeled to each of the space images, to be input to an output layer, and learning a weight of a neural network for deriving a correlation between the space image included in the learning data and the class labeled to each of the space images.
11. The data augmentation-based space analysis model learning apparatus of claim 10, wherein a number of network layers among hyper parameters of the neural network designed based on the ResNet algorithm has one value of [18, 34, 50, 101, 152, and 200], a number of classes includes 4 classes classified into a living room/room/kitchen/bathroom, a size of mini batch has one value of [32, 64, 128, and 256], a learning number of times has one value of 10 to 15, a learning rate is set to 0.005 or 0.01, and a loss function is set to SGD or Adam.
12. An apparatus including a data augmentation-based space analysis model generated by the apparatus of claim 1.
13. A method performed by a data augmentation-based space analysis model learning apparatus, the method comprising: acquiring a plurality of space images and labeling a class specifying space information corresponding to each of the plurality of space images or acquiring the plurality of space images to which the class is labeled and generating learning data; generating a second space image by changing some or all of pixel information included in a first space image among the plurality of space images and augmenting the learning data; labeling a class labeled to the first space image, to the second space image; and learning a weight of a model designed based on a predetermined image classification algorithm, for deriving a correlation between a space image included in learning data and a class labeled to each of the space images, by inputting the augmented learning data to the model to generate a model for determining a space image based on the correlation.
14. A computer program recorded in a computer-readable recording medium for performing the method of claim 13 by a processor.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0028] The accompanying drawings, which are included to provide a further understanding of the present disclosure and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the present disclosure and together with the description serve to explain the principle of the present disclosure. In the drawings:
DETAILED DESCRIPTION OF THE INVENTION
[0039] The attached drawings for illustrating exemplary embodiments of the present disclosure are referred to in order to gain a sufficient understanding of the present disclosure, the merits thereof, and the objectives accomplished by the implementation of the present disclosure. The present disclosure may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the present disclosure to one of ordinary skill in the art. Meanwhile, the terminology used herein is for the purpose of describing particular embodiments and is not intended to limit the present disclosure.
[0040] In the following description of the present disclosure, a detailed description of known functions and configurations incorporated herein will be omitted when it may make the subject matter of the present disclosure unclear. The terms used in the specification are defined in consideration of functions used in the present disclosure, and can be changed according to the intent or conventionally used methods of clients, operators, and users. Accordingly, definitions of the terms should be understood on the basis of the entire description of the present specification.
[0041] The functional blocks shown in the drawings and described below are merely examples of possible implementations. Other functional blocks may be used in other implementations without departing from the spirit and scope of the detailed description. In addition, although one or more functional blocks of the present disclosure are represented as separate blocks, one or more of the functional blocks of the present disclosure may be combinations of various hardware and software configurations that perform the same function.
[0042] The expression that includes certain components is an open-type expression and merely refers to existence of the corresponding components, and should not be understood as excluding additional components.
[0043] It will be understood that when an element is referred to as being “on”, “connected to” or “coupled to” another element, it may be directly on, connected or coupled to the other element or intervening elements may be present.
[0044] Expressions such as ‘first, second’, etc. are used only for distinguishing a plurality of components, and do not limit the order or other characteristics between the components.
[0045] Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings.
[0050] The memory 110 may include a learning data database (DB) 111, a neural network model 113, and an instruction DB 115.
[0051] The learning data DB 111 may include space image files formed by photographing a specific space such as an indoor space or an outdoor space. A space image may be acquired through an external server or an external DB or may be acquired on the Internet. In this case, the space image may include a plurality of pixels (e.g., M×N pixels arranged in a matrix of M horizontal and N vertical positions), and each pixel may include pixel information configured with RGB element values (x, y, z) representing a unique color as a mixture of red (R), green (G), and blue (B).
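As a minimal sketch (not part of the disclosure), the pixel structure described above can be represented as a NumPy array; the row/column/channel layout and the sample values are assumptions made for illustration.

```python
import numpy as np

# Hypothetical space image with M=4 horizontal and N=3 vertical pixels;
# the (rows, columns, channels) array layout is an assumption of this sketch.
M, N = 4, 3
space_image = np.zeros((N, M, 3), dtype=np.uint8)

# Each pixel holds RGB element values (x, y, z) between 0 and 255;
# here the pixel at row 1, column 2 is set to pure red.
space_image[1, 2] = (255, 0, 0)
x, y, z = space_image[1, 2]
```

Any image library that decodes a file into such an array (e.g. a JPEG loader) would yield the same per-pixel (x, y, z) access pattern.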
[0052] The neural network model 113 may be an artificial intelligence model learned based on an image classification artificial intelligence algorithm that analyzes an input space image to determine a class specifying the name, use, and characteristics of the space shown in the image. The artificial intelligence model may be generated by an operation of the processor 120 to be described below and may be stored in the memory 110.
[0053] The instruction DB 115 may store instructions for performing an operation of the processor 120. For example, the instruction DB 115 may store a computer code for performing operations corresponding to operations of the processor 120, which will be described below.
[0054] The processor 120 may control the overall operation of the components of the data augmentation-based space analysis model learning apparatus 100, that is, the memory 110, the input interface 130, the display 140, and the communication interface 150. The processor 120 may include a labeling module 121, an augmentation module 123, a learning module 125, and a control module 127. The processor 120 may execute the instructions stored in the memory 110 to drive the labeling module 121, the augmentation module 123, the learning module 125, and the control module 127, and operations performed by the labeling module 121, the augmentation module 123, the learning module 125, and the control module 127 may be understood to be operations performed by the processor 120.
[0055] The labeling module 121 may generate learning data used in learning of an artificial intelligence model by labeling (mapping) a class specifying space information (e.g., a space name, space use, and the characteristic of the use) represented by each of a plurality of space images and may store the learning data in the learning data DB 111. The labeling module 121 may acquire a space image through an external server or an external DB or may acquire a space image on the Internet. A class (e.g., a room, a bathroom, a kitchen, or a living room) specifying space information of a corresponding image may be pre-labeled to the space image.
[0056] The augmentation module 123 may generate a space image (a space image that is transformed by the augmentation module will be referred to as a “second space image”) formed by changing some or all of pixel information contained in the space image (a space image that is not transformed by the augmentation module will be referred to as a “first space image”) stored in the learning data DB 111 to augment the learning data and may add and store the second space image in the learning data DB 111.
[0057] A model learned by the data augmentation-based space analysis model learning apparatus 100 according to an embodiment of the present disclosure may have a function of classifying the class of a space image. Even when space images are captured by photographing the same space, the information contained in the image files may vary with the environment or situation in which each image is generated, such as the characteristics of the camera used, the time at which the image is captured, or the habits of the person taking the picture. Accordingly, both the quantity and the quality of the data used for learning are important for improving the performance of the artificial intelligence model. In particular, in order to learn the variables arising from camera characteristics, capture time, or photographer habits, the augmentation module 123 may increase the quantity of the learning data through a data augmentation algorithm.
[0058] The learning module 125 may learn a weight of a model designed based on a predetermined image classification algorithm, for deriving a correlation between a space image included in learning data and a class labeled to each of the space images, by inputting augmented learning data to the model, and thus may generate an artificial intelligence model for determining a class for a space image that is newly input based on the correlation of the weight. For example, the learning module 125 may generate a neural network by setting the space image included in the learning data to be input to an input layer of a neural network designed based on a Deep Residual Learning for Image Recognition (ResNet) algorithm among image classification algorithms, setting a class to which a space image is labeled to be output to an output layer, and learning a weight of a neural network for deriving a correlation between the space image included in the learning data and the class labeled to each space image.
[0059] The control module 127 may input a space image to the completely learned artificial intelligence model, derive the class determined by the artificial intelligence model, and use the name of the class as a keyword. Thus, the control module 127 may store the keyword in a product DB of an online shopping mall server so that the keyword information can be used on a product page that includes the space image.
[0060] The input interface 130 may receive user input. For example, when a class for learning data is labeled, the input interface 130 may receive user input.
[0061] The display 140 may include a hardware component that includes a display panel to output an image.
[0062] The communication interface 150 may communicate with an external device (e.g., an online shopping mall server or a user equipment) to transmit and receive information. To this end, the communication interface 150 may include a wireless communication module or a wired communication module.
[0063] Hereinafter, various embodiments of the components of the data augmentation-based space analysis model learning apparatus 100 will be described with reference to the accompanying drawings.
[0065] The augmentation module 123 may perform transformation to increase contrast by making a bright part of pixels of the first space image brighter and making a dark part darker or to reduce contrast by making the bright part less bright and making the dark part less dark, and thus may generate a second space image for learning a variable for generating different images of one space depending on the performance or model of a camera.
[0066] To this end, the augmentation module 123 may generate the second space image by changing an element value that is greater than a predetermined reference value to a greater element value and changing an element value smaller than the reference value to a smaller element value with respect to the element value (x, y, z) configuring RGB information of the pixel information included in the first space image.
[0067] For example, the augmentation module 123 may generate the second space image, pixel information of which is changed by applying Equation 1 below, with respect to pixel information of all pixels of the first space image.
dst(I)=round(max(0, min(α*src(I)−β,255))) [Equation 1]
[0068] (src(I): element value (x, y, z) before pixel information is changed, α: constant, β: constant, and dst(I): element value (x′, y′, z′) after pixel information is changed)
[0069] According to Equation 1 above, when α is set to a value greater than 1, contrast may be increased by making the bright pixels of the first space image brighter and the dark pixels darker, and when α is set to a value greater than 0 and smaller than 1, contrast may be reduced by making the bright pixels less bright and the dark pixels less dark.
[0070] Since an element value of R, G, and B generally has a value between 0 and 255, β may be set to prevent the element value scaled by α from greatly exceeding 255, and a min function may be used to cap the maximum output value at 255.
[0071] Likewise, a max function may be used to prevent the element value reduced by β from falling below 0.
[0072] When α is set to a value having a decimal point, a round function may be used in such a way that the element value of the changed pixel information becomes an integer.
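As a minimal sketch (not part of the disclosure), Equation 1 can be applied to every pixel of an image with NumPy; the α and β values below are illustrative assumptions.

```python
import numpy as np

def adjust_contrast(src: np.ndarray, alpha: float, beta: float) -> np.ndarray:
    """Apply Equation 1, dst(I) = round(max(0, min(alpha*src(I) - beta, 255))),
    to every RGB element value of a space image."""
    dst = np.rint(np.clip(alpha * src.astype(np.float64) - beta, 0, 255))
    return dst.astype(np.uint8)

# alpha > 1 pushes bright values brighter and dark values darker (more
# contrast); 0 < alpha < 1 compresses the range (less contrast). beta keeps
# the output from overshooting when alpha > 1.
img = np.array([[[10, 128, 240]]], dtype=np.uint8)
high_contrast = adjust_contrast(img, alpha=1.5, beta=30)  # -> [[[0, 162, 255]]]
```

The round, min, and max steps of the code map one-to-one onto the round, min, and max functions of Equation 1.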
[0077] Since determination of the class of a space image is largely affected by the arrangement and patterns of the objects in the space, the augmentation module 123 may convert the colors of the space image to a monotone and then generate learning data in which a variable is applied, so that the arrangement and the patterns of the objects are learned appropriately.
[0078] To this end, as shown in a left image of
Y=0.1667*R+0.5*G+0.3334*B [Equation 2]
[0079] (R: x of RGB information (x, y, z) of pixel information, G: y of RGB information (x, y, z) of pixel information, B: z of RGB information (x, y, z) of pixel information, and Y: element value (x′, y′, z′) after pixel information is changed)
[0080] In addition, as shown in a right image of
dst(I)=round(max(0, min(α*src(I)−β,255))) [Equation 3]
[0081] (src(I): element value (x, y, z) before pixel information is changed, α: constant, β: constant, dst(I): element value (x′, y′, z′) after pixel information is changed)
Y=0.1667*R+0.5*G+0.3334*B [Equation 4]
[0082] (R: x′ of (x′, y′, z′) of dst(I) acquired from Equation 3, G: y′ of (x′, y′, z′) of dst(I) acquired from Equation 3, B: z′ of (x′, y′, z′) of dst(I) acquired from Equation 3, and Y: element value (x″, y″, z″) after pixel information is changed)
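As a minimal sketch (not part of the disclosure), the monotone conversion of Equation 2 can be written as follows; the coefficients come from the equation, while writing Y back to three identical channels so the image keeps its RGB shape is an assumption made for illustration.

```python
import numpy as np

def to_monotone(src: np.ndarray) -> np.ndarray:
    """Apply Equation 2, Y = 0.1667*R + 0.5*G + 0.3334*B, replacing each
    pixel's (x, y, z) with the single monotone value Y."""
    r, g, b = src[..., 0], src[..., 1], src[..., 2]
    y = 0.1667 * r + 0.5 * g + 0.3334 * b
    # Broadcast Y to three identical channels so the image stays RGB-shaped.
    return np.repeat(np.rint(y).astype(np.uint8)[..., None], 3, axis=-1)

img = np.array([[[60, 120, 180]]], dtype=np.uint8)
gray = to_monotone(img)  # -> [[[130, 130, 130]]]
```

The combined embodiment of Equations 3 and 4 would simply apply the contrast step first and pass its output (x′, y′, z′) into this function.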
[0084] The augmentation module 123 may generate learning data for learning the case in which noise is generated in an image, for example when an image is captured with the camera zoomed in. To this end, the augmentation module 123 may add noise information to some of the pixel information included in the first space image to generate the second space image. For example, the augmentation module 123 may generate arbitrary coordinate information through a random-number generation algorithm to select some coordinates of pixels included in the first space image, and may then add a random number, calculated using the same algorithm, to the element value of the pixel at each selected coordinate based on Equation 5 below, thereby generating a second space image to which noise information is added.
dst(I)=round(max(0, min(src(I)±N,255))) [Equation 5]
[0085] (src(I): element value (x, y, z) before pixel information is changed, N: random number, dst(I): element value (x′, y′, z′) after pixel information is changed)
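As a minimal sketch (not part of the disclosure), the point-noise step of Equation 5 can be written as follows; the number of noisy pixels, the ±255 noise range, and the choice of random-number generator are all assumptions made for illustration.

```python
import numpy as np

def add_point_noise(src: np.ndarray, num_points: int, seed: int = 0) -> np.ndarray:
    """Sketch of Equation 5: choose arbitrary pixel coordinates with a
    random-number generator, then apply
    dst(I) = round(max(0, min(src(I) +/- N, 255))) at those pixels."""
    rng = np.random.default_rng(seed)
    dst = src.astype(np.int64)                    # widen so +/- N cannot wrap
    rows = rng.integers(0, src.shape[0], size=num_points)
    cols = rng.integers(0, src.shape[1], size=num_points)
    noise = rng.integers(-255, 256, size=(num_points, 1))  # signed random number N
    dst[rows, cols] = np.clip(dst[rows, cols] + noise, 0, 255)
    return dst.astype(np.uint8)

noisy_img = add_point_noise(np.full((8, 8, 3), 128, dtype=np.uint8), num_points=5)
```

The clip to [0, 255] plays the role of the min and max functions of Equation 5.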
[0088] The augmentation module 123 may generate the second space image in which the edge of the object seems to be blurred to learn an image captured when a camera is not in focus according to the following embodiment.
[0091] As such, the augmentation module 123 may perform the above operation on each of the pixels included in the first space image. For each such pixel, a plurality of pixels included in an N×N matrix (N being an odd number of 3 or more) having the corresponding pixel at its center is selected as the kernel region, a value (R_max−R_AVG, G_max−G_AVG, B_max−B_AVG) is calculated by subtracting the element average value (R_AVG, G_AVG, B_AVG) of each of R, G, and B of the pixels included in the kernel region from the maximum element value (R_max, G_max, B_max) among the element values of each of R, G, and B of those pixels, and the Gaussian blur algorithm is applied to the corresponding pixel when at least one element value of (R_max−R_AVG, G_max−G_AVG, B_max−B_AVG) is smaller than a preset value n, thereby generating the second space image.
[0092] When the operation is performed on all pixels included in the first space image, only the pixels of edge regions with a large color difference retain their pixel information without change, while the pixels in regions without color difference are blurred, and thus a second space image simulating an image captured while the camera is out of focus may be generated. In this case, the Gaussian blur algorithm may be applied for the blur processing, but the present disclosure is not limited thereto, and various blur filters may be used.
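As a minimal sketch (not part of the disclosure), the selective blur can be written as below. A plain kernel-mean blur stands in for the Gaussian blur named in the text, and the kernel size `n` and threshold are illustrative assumptions.

```python
import numpy as np

def blur_flat_regions(src: np.ndarray, n: int = 3, threshold: int = 30) -> np.ndarray:
    """For each pixel, examine the n x n kernel region centred on it; if
    max - average falls below `threshold` for any of the R, G, B channels
    (low colour difference, i.e. not an edge), replace the pixel with the
    kernel mean as a simple stand-in for Gaussian blur."""
    pad = n // 2
    padded = np.pad(src.astype(np.float64),
                    ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    dst = src.copy()
    for i in range(src.shape[0]):
        for j in range(src.shape[1]):
            region = padded[i:i + n, j:j + n]        # n x n kernel region
            diff = region.max(axis=(0, 1)) - region.mean(axis=(0, 1))
            if (diff < threshold).any():             # flat region -> blur
                dst[i, j] = np.rint(region.mean(axis=(0, 1))).astype(np.uint8)
    return dst
```

Edge pixels, where max − average is large in every channel, pass through unchanged, which is what preserves object outlines in the second space image.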
[0098] The augmentation module 123 may generate learning data for learning the case in which a specific part of an image is out of focus. To this end, the augmentation module 123 may generate as many pieces of random number information as the number of pixels included in the first space image, based on a standard Gaussian normal distribution with an average value of 0 and a standard deviation of 100, and may generate a second space image into which noise information is inserted by adding the random number information to each of the pixels.
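As a minimal sketch (not part of the disclosure), the per-pixel Gaussian noise can be written as below. Applying the same random number to a pixel's R, G, and B values and clamping the result to 0–255 are assumptions made for this illustration; the disclosure only specifies the distribution.

```python
import numpy as np

def add_gaussian_noise(src: np.ndarray, seed: int = 0) -> np.ndarray:
    """Draw one random number per pixel from a Gaussian distribution with
    mean 0 and standard deviation 100, add it to every pixel, and clamp the
    result to the valid 0-255 range."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(loc=0.0, scale=100.0, size=src.shape[:2])  # one per pixel
    dst = src.astype(np.float64) + noise[..., None]               # same N on R, G, B
    return np.clip(np.rint(dst), 0, 255).astype(np.uint8)

noisy_all = add_gaussian_noise(np.full((16, 16, 3), 128, dtype=np.uint8))
```

With a standard deviation of 100 on a 0–255 scale, the inserted noise is strong enough to be visible across the whole image.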
[0100] Then, the learning module 125 may input the original learning data (the first space images) and the augmented learning data (the second space images) generated in the embodiments described above to the model to perform learning.
[0101] The image classification algorithm may include a machine learning algorithm for defining and solving various problems in the artificial intelligence field. According to an embodiment of the present disclosure, learning may proceed through an artificial intelligence model designed using an algorithm such as ResNet, LeNet-5, AlexNet, VGG-F, VGG-M, VGG-S, VGG-16, VGG-19, GoogLeNet (Inception v1), or SENet.
[0102] The artificial intelligence model may refer to an overall model having problem-solving ability, composed of nodes that form a network through synaptic connections. The artificial intelligence model may be defined by a learning process that updates the model parameters, i.e., the weights between the layers configuring the model, and by an activation function that generates an output value.
[0103] The model parameter may refer to a parameter determined through learning and may include a weight of layer connection and bias of neurons. A hyper parameter may refer to a parameter to be set before learning in a machine learning algorithm and may include the number of network layers (num_layer), the number of learning data (num_training_samples), the number of classes (num_classes), a learning rate, a learning number of times (epochs), the size of mini batch (mini_batch_size), and a loss function (optimizer).
[0104] The hyper parameters of the artificial intelligence model according to an embodiment of the present disclosure may have the following setting values. For example, the number of network layers may be selected from [18, 34, 50, 101, 152, and 200] in the case of learning data with large images. In this case, the number of network layers may initially be set to 18 in consideration of the learning time and may be changed to 34 after a predetermined amount of learning data is learned, thereby improving accuracy. The number of learning data is a value obtained by subtracting the number of evaluation data from the total image data: 63,806 pieces of learning data may be used among a total of 79,756 pieces, and the remaining 15,950 pieces may be used as evaluation data. The number of classes may include 4 classes classified into living room/room/kitchen/bathroom. Since the convergence speed and the final loss value differ depending on the size of the mini batch, sizes of [32, 64, 128, 256] may be tried to select an appropriate value, and a size of 128 or 256 may be set. The learning number of times may be set to any one of 10 to 15. The learning rate may be set to 0.005 or 0.01. The loss function (optimizer) may be set to SGD as a default value or to Adam, which is appropriate for image classification. However, the aforementioned values are merely exemplary, and embodiments are not limited to the above numerals.
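The candidate settings above can be collected into a small configuration, sketched below; the dictionary layout and the `pick` helper are illustrative assumptions, with key names following the hyper parameters listed in paragraph [0103].

```python
# Candidate hyper-parameter settings for the ResNet-based model; values are
# taken from the description, the structure is an assumption of this sketch.
HYPERPARAMS = {
    "num_layer": [18, 34, 50, 101, 152, 200],   # start at 18, later switch to 34
    "num_training_samples": [63806],            # of a total of 79,756 images
    "num_classes": [4],                         # living room / room / kitchen / bathroom
    "mini_batch_size": [32, 64, 128, 256],      # 128 or 256 reported as suitable
    "epochs": list(range(10, 16)),              # learning number of times: 10 to 15
    "learning_rate": [0.005, 0.01],
    "optimizer": ["SGD", "Adam"],               # SGD default; Adam for image classification
}

def pick(name: str, value):
    """Return `value` if it is one of the listed candidate settings, else raise."""
    if value not in HYPERPARAMS[name]:
        raise ValueError(f"{value!r} is not a listed candidate for {name!r}")
    return value
```

A training script could validate its chosen configuration up front, e.g. `pick("num_layer", 18)` or `pick("mini_batch_size", 128)`, before constructing the network.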
[0105] The learning objective of the artificial intelligence model may be seen as determining the model parameters that minimize the loss function. The loss function may be used as an index to determine the optimal model parameters in the learning process of the artificial intelligence model.
[0107] First, the labeling module 121 may acquire a plurality of space images and label a class specifying the space information corresponding to each of them, or may acquire a plurality of space images to which classes are already labeled, to generate learning data (S710). Then, the augmentation module 123 may augment the learning data by generating a second space image obtained by changing some or all of the pixel information included in a first space image among the plurality of space images (S720). Then, the labeling module 121 may label the class labeled to the first space image to the second space image (S730). Finally, the learning module 125 may learn a weight of a model designed based on a predetermined image classification algorithm, for deriving a correlation between a space image included in the learning data and the class labeled to each space image, by inputting the augmented learning data to the model, and may thus generate a model for determining a space image based on the correlation (S740).
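Steps S710 to S730 can be sketched as a small data pipeline (not part of the disclosure); the list-of-pairs layout, the callable transforms, and the sample label are assumptions made for illustration.

```python
import numpy as np

def augment_dataset(labelled_images, transforms):
    """Sketch of steps S710-S730: start from (first space image, class) pairs,
    generate a second space image with each pixel-level transform, and copy
    the first image's class label onto every second image."""
    augmented = list(labelled_images)              # S710: original learning data
    for image, label in labelled_images:
        for transform in transforms:
            second = transform(image)              # S720: change pixel information
            augmented.append((second, label))      # S730: propagate the class label
    return augmented                               # S740 would train a model on this

data = [(np.zeros((2, 2, 3), dtype=np.uint8), "kitchen")]
invert = lambda img: 255 - img                     # stand-in pixel transform
out = augment_dataset(data, [invert])              # 1 original + 1 augmented pair
```

With several transforms (contrast, monotone, point noise, selective blur, Gaussian noise), each first space image yields that many labeled second space images, multiplying the learning data without manual labeling.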
[0108] Since the procedure for performing corresponding operations by components as subjects of the respective operations described above has been described above with reference to
[0109] An embodiment of the present disclosure may provide an image classification model that secures a large amount of high-quality learning data through a data augmentation technology: original learning data is transformed so as to learn the variables by which a generated image changes depending on the environment or situation, such as the characteristics of the photographing camera, the photographing time, and the habits of the photographer, even when the same space is photographed, and labeling of the augmented learning data is automated for easy learning and improved performance.
[0110] When the image classification model is used, an online shopping mall may effectively introduce traffic of consumers to a product page using a keyword related to a product only with an image of the product, and the consumers may also search for a keyword required therefor and may use the keyword in search using a wanted image.
[0111] Various effects that are directly or indirectly identified through the present disclosure may be provided.
[0112] The embodiments of the present disclosure may be achieved by various means, for example, hardware, firmware, software, or a combination thereof.
[0113] In a hardware configuration, an embodiment of the present disclosure may be achieved by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, etc.
[0114] In a firmware or software configuration, an embodiment of the present disclosure may be implemented in the form of a module, a procedure, a function, etc. Software code may be stored in a memory unit and executed by a processor. The memory unit is located at the interior or exterior of the processor and may transmit and receive data to and from the processor via various known means.
[0115] Combinations of blocks in the block diagram attached to the present disclosure and combinations of operations in the flowchart attached to the present disclosure may be performed by computer program instructions. These computer program instructions may be installed in an encoding processor of a general purpose computer, a special purpose computer, or other programmable data processing equipment, so that the instructions executed by the encoding processor of the computer or other programmable data processing equipment create means for performing the functions described in the blocks of the block diagram or the operations of the flowchart. These computer program instructions may also be stored in a computer-usable or computer-readable memory that may direct a computer or other programmable data processing equipment to implement a function in a particular manner, so that the instructions stored in the computer-usable or computer-readable memory produce an article of manufacture containing instruction means for performing the functions of the blocks of the block diagram or the operations of the flowchart. The computer program instructions may also be mounted on a computer or other programmable data processing equipment, so that a series of operations is performed on the computer or other programmable data processing equipment to create a computer-executed process, and the computer program instructions thereby provide operations for implementing the functions described in the blocks of the block diagram and the operations of the flowchart.
[0116] Each block or each operation may represent a module, a segment, or a portion of code that includes one or more executable instructions for executing a specified logical function. It should also be noted that the functions described in the blocks or the operations may occur out of order in some alternative embodiments. For example, two consecutively shown blocks or operations may be performed substantially simultaneously, or the blocks or the operations may sometimes be performed in the reverse order according to the corresponding function.
[0117] As such, those skilled in the art to which the present disclosure pertains will understand that the present disclosure may be embodied in other specific forms without changing the technical spirit or essential characteristics thereof. Therefore, it should be understood that the embodiments described above are illustrative in all respects and not restrictive. The scope of the present disclosure is defined by the following claims rather than the detailed description, and all changes or modifications derived from the meaning and scope of the claims and their equivalent concepts should be construed as being included in the scope of the present disclosure.