Brain-like memory-based environment perception and decision-making method and system for unmanned surface vehicle
12422859 · 2025-09-23
Assignee
Inventors
CPC classification
G06V20/70 (PHYSICS)
Y02T10/40 (GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS)
G05D1/644 (PHYSICS)
G05D1/243 (PHYSICS)
International classification
G05D1/243 (PHYSICS)
Abstract
The present disclosure relates to the technical field of decision-making of unmanned surface vehicles, and provides a brain-like memory-based environment perception and decision-making method and system for an unmanned surface vehicle. The method includes: obtaining an image of an environment in front of an unmanned surface vehicle; and inputting the image of the environment into an environment perception and decision-making model of the unmanned surface vehicle, and outputting an action instruction, where the environment perception and decision-making model of the unmanned surface vehicle includes an image feature extractor, a Bidirectional Encoder Representations from Transformers (BERT) model, a fully connected layer, a short-term scene memory module, and a long-term memory module that are connected in turn; the BERT model extracts an image feature representation containing a text feature from an image feature. The present disclosure improves accuracy of decision-making of an action.
Claims
1. A brain-like memory-based environment perception and decision-making method for an unmanned surface vehicle, comprising: obtaining an image of an environment in front of an unmanned surface vehicle; inputting the image of the environment into an environment perception and decision-making model of the unmanned surface vehicle, and outputting an action instruction, wherein the environment perception and decision-making model of the unmanned surface vehicle comprises an image feature extractor, a Bidirectional Encoder Representations from Transformers (BERT) model, a fully connected layer, a short-term scene memory module, and a long-term memory module that are connected in turn; and using the action instruction to control the unmanned surface vehicle to perform an action; wherein the image feature extractor is configured to extract an image feature from the image of the environment; the BERT model is configured to extract an image feature representation containing a text feature from the image feature; the fully connected layer is configured to map the image feature representation onto an image query suitable for recognition by a large language model; the short-term scene memory module is configured to preset a plurality of questions, and use a short-term scene memory of the large language model to answer the plurality of questions in a specified order to obtain a plurality of answers; the long-term memory module is configured to use a long-term memory and in-context learning of the large language model to output the action instruction based on the plurality of answers; and the large language model is a large language model obtained after fine tuning based on reinforcement learning.
2. The brain-like memory-based environment perception and decision-making method for an unmanned surface vehicle according to claim 1, wherein the BERT model is a trained BERT model, and a cross attention module is added between a self-attention module of each transformer block in the BERT model and a feedforward neural network; and a process of training the BERT model comprises: collecting an environmental dataset of the unmanned surface vehicle, wherein each piece of sample data in the environmental dataset of the unmanned surface vehicle comprises an environment image of the unmanned surface vehicle and text description information corresponding to the environment image of the unmanned surface vehicle; training each piece of sample data, which specifically comprises: inputting the environment image of the unmanned surface vehicle into a pre-trained image feature extractor, and outputting a sample image feature; inputting the text description information corresponding to the environment image of the unmanned surface vehicle into the BERT model, and inputting the sample image feature into the cross-attention module of each transformer block in the BERT model; inputting a feature output by the cross-attention module into the feedforward neural network to obtain a first sample image feature; determining an image-text matching loss based on the first sample image feature; inputting the text description information corresponding to the environment image of the unmanned surface vehicle into a pre-trained network to output a second sample image feature, wherein the pre-trained network comprises the self-attention module and the feedforward neural network that are connected in turn; determining an image-text contrastive loss based on the first sample image feature and the second sample image feature; adding a mask to the self-attention module of each transformer block in the BERT model; inputting the text description information corresponding to the environment image of 
the unmanned surface vehicle into a masked BERT model, inputting the sample image feature into the cross-attention module of each transformer block in the BERT model, and inputting a feature output by the cross-attention module into the feedforward neural network to obtain a third sample image feature; determining an image-text generation loss based on the third sample image feature and label data corresponding to the third sample image feature; and optimizing the BERT model based on the image-text matching loss, the image-text contrastive loss, and the image-text generation loss.
3. The brain-like memory-based environment perception and decision-making method for an unmanned surface vehicle according to claim 1, wherein the using a long-term memory and in-context learning of the large language model to output the action instruction based on the plurality of answers specifically comprises: based on the large language model, using the long-term memory and the in-context learning to output, based on the plurality of answers, an instruction set comprising a plurality of instructions, and outputting the action instruction based on the instruction set.
4. The brain-like memory-based environment perception and decision-making method for an unmanned surface vehicle according to claim 3, wherein the large language model is fine tuned by using a reinforcement learning model; and a process of fine tuning the large language model comprises: constructing an instruction training set, wherein sample data in the instruction training set comprises input data and label data, the input data is a sample instruction set, and the label data is a sorting of each instruction in the sample instruction set based on a score in descending order; training a reward model by taking the sample instruction set as an input and the sorting of the sample instruction set as an output to obtain a trained reward model; and inputting the instruction set output by the large language model into the trained reward model, and feeding back a first-ranked instruction to the large language model as the action instruction to fine tune the large language model.
5. The brain-like memory-based environment perception and decision-making method for an unmanned surface vehicle according to claim 4, wherein a loss function for training the reward model is expressed as follows: loss(θ) = −(1/C_K^2)·E_{(x, y_w, y_l)~D}[log(σ(r_θ(x, y_w) − r_θ(x, y_l)))], wherein r_θ represents the reward model, x represents a question and an image that are input, y_w and y_l respectively represent a higher-ranked instruction and a lower-ranked instruction in a sampled instruction pair, σ represents a sigmoid function, D represents the instruction training set, K represents a quantity of sorted instructions, and C_K^2 represents a quantity of instruction pairs formed from the K instructions.
6. The brain-like memory-based environment perception and decision-making method for an unmanned surface vehicle according to claim 4, wherein an objective function for fine tuning the large language model is expressed as follows: objective(φ) = E_{(x,y)~D_{π_φ^RL}}[r_θ(x, y) − β·log(π_φ^RL(y|x)/π^LLM(y|x))] + γ·E_{x~D_pretrain}[log(π_φ^RL(x))], wherein π_φ^RL represents a reinforcement learning model, r_θ represents the reward model, π^LLM represents an initial large language model that is not fine tuned, D_{π_φ^RL} represents a reinforcement learning training set, D_pretrain represents pre-training data, and β and γ represent weight coefficients.
7. The brain-like memory-based environment perception and decision-making method for an unmanned surface vehicle according to claim 1, wherein the image feature extractor is a trained vision transformer.
8. A brain-like memory-based environment perception and decision-making system for an unmanned surface vehicle, comprising: an environment image obtaining module configured to obtain an image of an environment in front of an unmanned surface vehicle; a decision-making module for an environment perception and decision-making model of the unmanned surface vehicle configured to input the image of the environment into the environment perception and decision-making model of the unmanned surface vehicle, and output an action instruction, wherein the environment perception and decision-making model of the unmanned surface vehicle comprises an image feature extractor, a BERT model, a fully connected layer, a short-term scene memory module, and a long-term memory module that are connected in turn; and a control module configured to use the action instruction to control the unmanned surface vehicle to perform an action; wherein the image feature extractor is configured to extract an image feature from the image of the environment; the BERT model is configured to extract an image feature representation containing a text feature from the image feature; the fully connected layer is configured to map the image feature representation onto an image query suitable for recognition by a large language model; the short-term scene memory module is configured to preset a plurality of questions, and use a short-term scene memory of the large language model to answer the plurality of questions in a specified order to obtain a plurality of answers; the long-term memory module is configured to use a long-term memory and in-context learning of the large language model to output the action instruction based on the plurality of answers; and the large language model is a large language model obtained after fine tuning based on reinforcement learning.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) To describe the technical solutions in embodiments of the present disclosure or in the prior art more clearly, the accompanying drawings required in the embodiments are briefly described below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present disclosure, and other accompanying drawings can be derived from these accompanying drawings by those of ordinary skill in the art without creative efforts.
DETAILED DESCRIPTION OF THE EMBODIMENTS
(5) The technical solutions of the embodiments of the present disclosure are clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure. Apparently, the described embodiments are merely a part rather than all of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative efforts shall fall within the protection scope of the present disclosure.
(6) The present disclosure is intended to provide a brain-like memory-based environment perception and decision-making method and system for an unmanned surface vehicle, to improve accuracy of decision-making of an action.
(7) To make the above objectives, features, and advantages of the present disclosure clearer and more comprehensible, the present disclosure will be further described in detail below with reference to the accompanying drawings and specific implementations.
Embodiment 1
(8) As shown in the accompanying drawings, the brain-like memory-based environment perception and decision-making method for an unmanned surface vehicle includes the following steps. Step 101: Obtain an image of an environment in front of an unmanned surface vehicle. Step 102: Input the image of the environment into an environment perception and decision-making model of the unmanned surface vehicle, and output an action instruction, where the environment perception and decision-making model of the unmanned surface vehicle includes an image feature extractor, a BERT model, a fully connected layer, a short-term scene memory module, and a long-term memory module that are connected in turn.
(9) The action instruction includes changing a turning angle of the unmanned surface vehicle, changing a speed of the unmanned surface vehicle, changing a submergence depth of the unmanned surface vehicle, or the like. Step 103: Use the action instruction to control the unmanned surface vehicle to perform an action.
(10) The image feature extractor is configured to extract an image feature from the image of the environment. The BERT model is configured to extract an image feature representation containing a text feature from the image feature. The fully connected layer is configured to map the image feature representation onto an image query suitable for recognition by a large language model. The short-term scene memory module is configured to preset a plurality of questions, and use a short-term scene memory of the large language model to answer the plurality of questions in a specified order to obtain a plurality of answers. The long-term memory module is configured to use a long-term memory and in-context learning of the large language model to output the action instruction based on the plurality of answers. The large language model is a large language model obtained after fine tuning based on reinforcement learning.
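The module chain described above can be sketched end to end. The following toy pipeline is illustrative only: the dimensions, the stand-in modules, and the `toy_llm` function are all hypothetical, and a real system would substitute the pre-trained vision transformer, the cross-attention BERT model, and an actual fine-tuned large language model.

```python
import numpy as np

np.random.seed(0)

# All dimensions below are illustrative, not taken from the disclosure.
IMG_DIM, FEAT_DIM, QUERY_DIM = 16, 8, 4

class ImageFeatureExtractor:
    """Stand-in for the pre-trained vision transformer."""
    def __init__(self):
        self.W = np.random.randn(IMG_DIM, FEAT_DIM) * 0.1
    def __call__(self, image):
        return image @ self.W  # image feature

class BertFusion:
    """Stand-in for the BERT model: fuses the image feature with a
    learnable query so the output carries a text feature."""
    def __init__(self):
        self.learnable_query = np.random.randn(FEAT_DIM) * 0.1
    def __call__(self, img_feat):
        return img_feat + self.learnable_query

class FullyConnected:
    """Maps the fused representation onto an image query for the LLM."""
    def __init__(self):
        self.W = np.random.randn(FEAT_DIM, QUERY_DIM) * 0.1
    def __call__(self, feat):
        return feat @ self.W

def short_term_scene_memory(llm, image_query, questions):
    """Answer preset questions in a specified order; each answer is kept
    as context (short-term scene memory) for the next question."""
    answers, context = [], []
    for q in questions:
        a = llm(image_query, q, context)
        answers.append(a)
        context.append(a)
    return answers

def long_term_memory_decision(llm, answers):
    """Map the collected answers to one action instruction."""
    return llm(None, "decide", answers)

def toy_llm(image_query, question, context):
    # Placeholder for a fine-tuned large language model.
    return f"{question}|ctx={len(context)}"

image = np.random.randn(IMG_DIM)
query = FullyConnected()(BertFusion()(ImageFeatureExtractor()(image)))
answers = short_term_scene_memory(toy_llm, query, ["obstacle?", "distance?", "heading?"])
action = long_term_memory_decision(toy_llm, answers)
print(answers, action)
```

The point of the sketch is the data flow: each module's output is the next module's input, and the answer list accumulated by the short-term memory loop is what the long-term memory stage consumes.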
(11) The large language model may be GPT-4 or the like. The BERT model is a trained BERT model.
(13) The vision encoder includes the image feature extractor and the BERT model. In the vision encoder, a pre-trained vision transformer is used as the image feature extractor to extract a semantic feature of the image, a pre-trained BERT model is used as a text feature extractor to extract a semantic feature of the text, and the pre-trained vision transformer and the pre-trained BERT model are frozen.
(14) A cross attention module is added between a self-attention module of each transformer block in the pre-trained BERT model and a feedforward neural network.
(15) A process of training the BERT model includes following operations:
(16) The environmental dataset of the unmanned surface vehicle is collected. Each piece of sample data in the environmental dataset of the unmanned surface vehicle includes an environment image of the unmanned surface vehicle and text description information corresponding to the environment image of the unmanned surface vehicle.
(17) The text and a learnable query are fused to obtain an initial input, and the cross-attention module is added between the self-attention module of the BERT model and the feedforward neural network. The cross-attention module plays a role of fusing the image feature and the text feature. The image feature extracted by the pre-trained vision transformer is input into the cross-attention module, and then a query containing both the text feature and the image feature is obtained through the feedforward neural network to calculate a subsequent image-text matching loss. In addition, the text is also input into the self-attention module pre-trained separately by using a BERT and into the feedforward neural network to obtain the text feature, and the text feature is combined with the image feature query obtained above to calculate an image-text contrastive loss. In addition, a mask is added to the text using the self-attention module, and the image query and masked text are used to predict masked content, to obtain an image-text generation loss. The learnable query can be obtained through training by using the above three losses. After that, an additional fully connected layer is trained at an output terminal of the model by using the image-text generation loss. The fully connected layer is used to achieve a mapping between the text feature extracted from the image and a text prompt that can be recognized by the large language model. This allows a pre-trained large language model to recognize the image feature almost without making any change, and based on this, text generation and reasoning can be carried out.
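The placement of the cross-attention module between the self-attention module and the feedforward network can be sketched as a single-head NumPy block. The dimensions and random weights are made up for illustration; this is not the patented model.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # hidden size; illustrative, not from the disclosure

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention, single head
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def block_with_cross_attn(text, image, params):
    """One transformer block in the order described: self-attention, then
    cross-attention (queries from the text stream, keys/values from the
    image feature), then the feedforward network, each with a residual."""
    Wq, Wk, Wv = params["self"]
    h = text + attention(text @ Wq, text @ Wk, text @ Wv)
    Cq, Ck, Cv = params["cross"]
    h = h + attention(h @ Cq, image @ Ck, image @ Cv)  # fuses the image feature
    W1, W2 = params["ffn"]
    return h + np.maximum(h @ W1, 0) @ W2

params = {
    "self":  [rng.normal(0, 0.1, (D, D)) for _ in range(3)],
    "cross": [rng.normal(0, 0.1, (D, D)) for _ in range(3)],
    "ffn":   [rng.normal(0, 0.1, (D, D)) for _ in range(2)],
}
text_tokens = rng.normal(size=(5, D))   # text / learnable-query tokens
image_tokens = rng.normal(size=(7, D))  # patch features from the vision transformer
out = block_with_cross_attn(text_tokens, image_tokens, params)
print(out.shape)
```

Note that the output keeps the text-stream shape: the cross-attention reads the image tokens but writes into the text/query positions, which is what lets the learnable query absorb both modalities.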
(18) Each piece of sample data is trained, which specifically includes the following operations:
(19) The environment image of the unmanned surface vehicle is input into a pre-trained image feature extractor, and a sample image feature is output.
(20) The text description information corresponding to the environment image of the unmanned surface vehicle is input into the BERT model, and the sample image feature is input into the cross-attention module of each transformer block in the BERT model.
(21) A feature output by the cross-attention module is input into the feedforward neural network to obtain a first sample image feature.
(22) The image-text matching loss is determined based on the first sample image feature.
(23) The text description information corresponding to the environment image of the unmanned surface vehicle is input into a pre-trained network to output a second sample image feature, where the pre-trained network includes the self-attention module and the feedforward neural network that are connected in turn.
(24) The image-text contrastive loss is determined based on the first sample image feature and the second sample image feature.
(25) The mask is added to the self-attention module of each transformer block in the BERT model.
(26) The text description information corresponding to the environment image of the unmanned surface vehicle is input into a masked BERT model, the sample image feature is input into the cross-attention module of each transformer block in the BERT model, and a feature output by the cross-attention module is input into the feedforward neural network to obtain a third sample image feature.
(27) The image-text generation loss is determined based on the third sample image feature and label data corresponding to the third sample image feature.
(28) A hybrid loss for training the BERT model is expressed as follows:
(29) L_loss = λ1·L_itc + λ2·L_itm + λ3·L_itg
(30) In the above formula, λ1 represents a weight hyper-parameter of the image-text contrastive loss L_itc, λ2 represents a weight hyper-parameter of the image-text matching loss L_itm, λ3 represents a weight hyper-parameter of the image-text generation loss L_itg, and L_loss represents a value of the hybrid loss.
(31) In a learning process of the BERT model, the hybrid loss function L_loss is used to train the feature extractor from image-text pairs, so as to extract the image feature and transmit the image feature to the large language model to complete multimodal information transmission.
(32) A specific calculation formula of the image-text contrastive loss L_itc is as follows:
(33) p_m^i2t(I) = exp(s(I, T_m)/τ) / Σ_{m=1}^{M} exp(s(I, T_m)/τ); p_m^t2i(T) = exp(s(T, I_m)/τ) / Σ_{m=1}^{M} exp(s(T, I_m)/τ); L_itc = (1/2)·E_{(I,T)~D}[H(y^i2t(I), p^i2t(I)) + H(y^t2i(T), p^t2i(T))]
(34) In the above formula, s represents a similarity calculation function, which is realized by a cosine similarity in the present disclosure; τ represents a learnable temperature parameter; H represents a cross-entropy loss function; p_m^i2t(I) represents an image-to-text similarity obtained through Softmax normalization for each image in a batch; p_m^t2i(T) represents a text-to-image similarity obtained through the Softmax normalization for each text sentence in the batch; y^i2t(I) represents a one-hot similarity calculated for the image by using label information (a label herein indicates whether the image and the text are an image-text pair in a same group); y^t2i(T) represents a one-hot similarity calculated for the text by using the label information, and the final image-text contrastive loss is defined as the cross entropy H between prediction data and the label data; I represents a single image; T represents a single segment of text; I_m represents an m-th image in a same batch; T_m represents an m-th segment of text in the same batch; M represents a batch size; (I,T)~D represents an image-text pair of one batch extracted from all data; and D represents all the data, namely, the environmental dataset of the unmanned surface vehicle. The image-text contrastive loss L_itc is used to make positive sample pairs close to each other in feature space and negative sample pairs far away from each other in the feature space through contrastive learning, so as to align the image feature and the text feature and maximize their mutual information.
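A minimal NumPy sketch of a contrastive loss of this form (batch-wise cosine similarity, temperature scaling, Softmax in both directions, cross entropy against one-hot pairing labels). The batch size, embedding size, and temperature value are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def itc_loss(img_emb, txt_emb, tau=0.07):
    """Batch image-text contrastive loss: cosine similarities scaled by a
    temperature tau, Softmax in both directions, and cross entropy against
    one-hot labels (the m-th image pairs with the m-th text)."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    sim = img @ txt.T / tau            # s(I, T) / tau for every pair in the batch
    p_i2t = softmax(sim, axis=1)       # image-to-text distribution
    p_t2i = softmax(sim.T, axis=1)     # text-to-image distribution
    idx = np.arange(sim.shape[0])
    h_i2t = -np.log(p_i2t[idx, idx]).mean()   # cross entropy vs one-hot y_i2t
    h_t2i = -np.log(p_t2i[idx, idx]).mean()   # cross entropy vs one-hot y_t2i
    return 0.5 * (h_i2t + h_t2i)

rng = np.random.default_rng(0)
M, D = 4, 8  # batch size and embedding size, illustrative
img = rng.normal(size=(M, D))
loss_aligned = itc_loss(img, img)                     # matched pairs: low loss
loss_random = itc_loss(img, rng.normal(size=(M, D)))  # mismatched: higher loss
print(loss_aligned < loss_random)
```

As expected for a contrastive objective, embeddings that agree pairwise produce a much smaller loss than unrelated embeddings.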
(35) A specific calculation formula of the image-text matching loss L_itm is as follows:
(36) L_itm = E_{(I,T)~D}[H(y^itm, p^itm(I, T))]
(37) In the above formula, p^itm represents a binary prediction, obtained through a Softmax function based on the multimodal image and text outputs, of whether the image and the text belong to positive samples or negative samples; y^itm represents a two-dimensional one-hot vector generated based on the label information; and H represents the cross-entropy loss function. The image-text matching loss L_itm is used to ensure that the model can correctly recognize positive and negative sample pairs by determining whether the image and the text match, thereby aligning the image feature and the text feature.
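The binary matching loss can be sketched directly. The logits below are hypothetical classifier outputs, not produced by the disclosed model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def itm_loss(logits, is_match):
    """Image-text matching loss: a binary Softmax prediction p_itm from the
    fused multimodal feature, scored by cross entropy against the
    two-dimensional one-hot label y_itm."""
    p = softmax(logits, axis=-1)           # columns: [P(negative), P(positive)]
    y = np.eye(2)[is_match.astype(int)]    # one-hot from the label information
    return float(-(y * np.log(p)).sum(axis=1).mean())

# Hypothetical classifier outputs for two image-text pairs.
logits = np.array([[0.2, 2.1],    # predicted to match
                   [1.8, -0.4]])  # predicted not to match
labels = np.array([1, 0])         # ground truth: match, no match
loss = itm_loss(logits, labels)
print(round(loss, 4))
```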
(38) A specific calculation formula of the image-text generation loss L_itg is as follows:
(39) L_itg = E_{(I,T̂)~D}[H(y^msk, p^msk(I, T̂))]
(40) In the above formula, T̂ represents the masked text, p^msk represents a prediction made for the masked content by using the image and the masked text, y^msk represents a one-hot embedding generated based on the label to represent the masked content, and H represents the cross-entropy loss function. The image-text generation loss L_itg is mainly used to enable the model to complete masked information based on the image and the masked text, to ensure that the model can obtain a correct image feature and generate a corresponding text representation.
(41) The BERT model is optimized based on the image-text matching loss, the image-text contrastive loss, and the image-text generation loss.
(42) A question that can characterize a status of the unmanned surface vehicle is collected as expert knowledge. A status image collected for the unmanned surface vehicle in real time is input into the trained BERT model to obtain the corresponding image feature representation.
(43) The using a long-term memory and in-context learning of the large language model to output the action instruction based on the plurality of answers specifically includes: based on the large language model, using the long-term memory and the in-context learning to output, based on the plurality of answers, an instruction set including a plurality of instructions, and outputting the action instruction based on the instruction set.
(44) The large language model is fine tuned by using a reinforcement learning model.
(45) A process of fine tuning the large language model includes: constructing an instruction training set, where sample data in the instruction training set includes input data and label data, the input data is a sample instruction set, and the label data is a sorting of each instruction in the sample instruction set based on a score in descending order; training a reward model by taking the sample instruction set as an input and the sorting of the sample instruction set as an output to obtain a trained reward model; and inputting the instruction set output by the large language model into the trained reward model, and feeding back a first-ranked instruction to the large language model as the action instruction to fine tune the large language model.
(46) An attention layer formula used by the large language model to realize the long-term memory and the in-context learning is as follows:
(47) F_ICL(q) = Attn(V, K, q) = W_V[X; X′]·Softmax(((W_K[X; X′])^T·q)/√d)
(48) In the above formula, W_V and W_K are transformation matrices with a dimension of d′×d, where both d′ and d are constants; X represents a token vector representation of an example part in the input; and X′ represents vector representations of all tokens after the example part in the input and before a last word. [X; X′] represents matrix splicing, V represents a value vector, K represents a key vector, q represents a query vector, and F_ICL(q) represents an attention layer that plays an in-context learning role. The formula described above provides a detailed description of operational steps of an attention mechanism in a forward propagation process. By comparing the formula with the following formula, it can be concluded that the attention mechanism plays the in-context learning role in the forward propagation process.
(49) A specific formula used by the large language model to realize the long-term memory and the in-context learning is deduced as follows:
(50) F_ICL(q) ≈ LinearAttn(V, K, q) = W_V[X; X′]·(W_K[X; X′])^T·q = W_V X′(W_K X′)^T·q + Σ_i W_V x_i(W_K x_i)^T·q = (W_ZSL + W_ICL)·q
(51) In the above deduction process, W_ZSL (ZSL stands for zero-shot learning) and W_ICL (ICL stands for in-context learning) are obtained by simplifying the forward propagation process of the large language model; W_ZSL represents a zero-shot learning weight contributed by the tokens outside the example part; W_ICL represents an in-context learning weight accumulated from the example tokens; LinearAttn represents a linear attention layer; x_i represents an input of the current attention module, namely, the i-th token vector of the example part; i represents an input order of the attention module; specific simplification steps are performed by using the fully connected layer and an attention conversion mechanism; W_V X is regarded as an output gradient corresponding to one calculation of a previous full connection; W_K X is regarded as an input corresponding to the one calculation of the previous full connection; and q represents a current input. This formula specifically describes how the attention layer implicitly completes the in-context learning in one forward propagation process of the large language model.
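The decomposition of linear attention over the spliced input into a zero-shot weight plus per-example updates can be checked numerically. The sketch below assigns X to the example part following the variable descriptions here; all sizes and matrices are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6  # token dimension, illustrative
W_V, W_K = rng.normal(size=(d, d)), rng.normal(size=(d, d))
X_examples = rng.normal(size=(d, 3))  # X: token vectors of the example part
X_rest = rng.normal(size=(d, 4))      # X': token vectors after the example part
q = rng.normal(size=(d,))             # current query vector

# Full linear attention over the spliced input [X; X'].
X_all = np.concatenate([X_examples, X_rest], axis=1)
full = W_V @ X_all @ (W_K @ X_all).T @ q

# Decomposition: a zero-shot weight from the non-example tokens plus an
# implicit in-context-learning update accumulated per example token.
W_zsl = W_V @ X_rest @ (W_K @ X_rest).T
W_icl = sum(np.outer(W_V @ x, W_K @ x) for x in X_examples.T)
decomposed = (W_zsl + W_icl) @ q

print(np.allclose(full, decomposed))
```

The equality holds exactly because X_all X_all^T = X X^T + X′ X′^T, which is the algebraic step behind the deduction: the example tokens act like a rank-wise weight update applied during forward propagation, without any gradient step.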
(52) The reward model is trained by using a manually-annotated sorting of instructions in different scenarios as training data, to simulate a human scoring each incoming instruction based on a current status of the unmanned surface vehicle, so as to provide as reasonable a score as possible for each instruction without changing the manually-annotated instruction order.
(53) A loss function for training the reward model is expressed as follows:
(54) loss(θ) = −(1/C_K^2)·E_{(x, y_w, y_l)~D}[log(σ(r_θ(x, y_w) − r_θ(x, y_l)))]
(55) In the above formula, r_θ(·) represents the reward model, x represents a question and an image that are input into the large language model, E_{(x, y_w, y_l)~D} represents an expectation over samples extracted from the instruction training set D, y_w represents the instruction ranked higher in a sampled instruction pair, y_l represents the instruction ranked lower in the pair, σ represents a sigmoid function, K represents a quantity of instructions sorted for one input, and C_K^2 represents a quantity of instruction pairs that can be formed from the K instructions.
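The reward-model training described above can be sketched as a pairwise ranking loss over all C(K, 2) instruction pairs; the scores below are hypothetical reward-model outputs, not produced by the disclosed model.

```python
import numpy as np
from itertools import combinations

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reward_ranking_loss(scores_in_rank_order):
    """Pairwise ranking loss over K scored instructions: the manual
    annotation says index 0 should outrank index 1, and so on; average
    -log(sigmoid(r_w - r_l)) over all C(K, 2) ordered pairs."""
    K = len(scores_in_rank_order)
    pair_losses = [
        -np.log(sigmoid(scores_in_rank_order[w] - scores_in_rank_order[l]))
        for w, l in combinations(range(K), 2)
    ]
    return float(np.mean(pair_losses))

# Hypothetical reward-model scores for three instructions,
# listed in the manually annotated (descending) order.
good_model = reward_ranking_loss([3.0, 1.5, 0.2])  # agrees with the ranking
bad_model = reward_ranking_loss([0.2, 1.5, 3.0])   # inverts the ranking
print(good_model < bad_model)
```

A reward model whose scores respect the annotated order incurs a small loss; inverted scores are penalized, which is exactly the property needed for the ranking to supervise the scores without fixing their absolute values.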
(57) An objective function for fine tuning the large language model is expressed as follows:
(58) objective(φ) = E_{(x,y)~D_{π_φ^RL}}[r_θ(x, y) − β·log(π_φ^RL(y|x)/π^LLM(y|x))] + γ·E_{x~D_pretrain}[log(π_φ^RL(x))]
(59) In the above formula, objective(φ) represents a value of the objective function, π_φ^RL represents the reinforcement learning model, r_θ(·) represents the reward model, π^LLM represents an initial large language model that is not fine tuned, E_{(x,y)} represents an expectation over an image and a question in a reinforcement learning training set and an action instruction output by the large language model for the image and the question, D_{π_φ^RL} represents the reinforcement learning training set sampled from the reinforcement learning model, D_pretrain represents the pre-training data, and β and γ represent weight coefficients of the regular term and the pre-training term, respectively.
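A one-sample sketch of an objective of this shape follows. The β and γ weights and all log-probabilities are hypothetical; the point is only the interplay of the three terms.

```python
def rlhf_objective(reward, logp_rl, logp_llm, logp_pretrain, beta=0.2, gamma=0.1):
    """One-sample estimate of the fine-tuning objective: the reward term,
    a KL-style penalty log(pi_RL / pi_LLM) against the initial model, and a
    pre-training log-likelihood term. beta and gamma are illustrative weights."""
    kl_penalty = beta * (logp_rl - logp_llm)
    return reward - kl_penalty + gamma * logp_pretrain

# Hypothetical per-sample log-probabilities (negative by definition).
obj_close = rlhf_objective(reward=1.2, logp_rl=-4.0, logp_llm=-4.1, logp_pretrain=-2.0)
obj_drift = rlhf_objective(reward=1.2, logp_rl=-1.0, logp_llm=-4.1, logp_pretrain=-2.0)
print(obj_close > obj_drift)  # drifting far from the initial model is penalized
```

With equal rewards, the sample whose policy stays close to the initial model scores higher, which is the regularizing effect the second term is meant to provide.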
(60) Each piece of sample data in the reinforcement learning training set, as well as in the pre-training data used during the pre-training, includes an image and a question, and an action instruction output by the large language model for the image and the question.
(61) The image feature extractor is a trained vision transformer.
Embodiment 2
(62) As shown in the accompanying drawings, the brain-like memory-based environment perception and decision-making system for an unmanned surface vehicle includes an environment image obtaining module, a decision-making module for an environment perception and decision-making model of the unmanned surface vehicle, and a control module. The environment image obtaining module is configured to obtain an image of an environment in front of an unmanned surface vehicle. The decision-making module is configured to input the image of the environment into the environment perception and decision-making model of the unmanned surface vehicle and output an action instruction, where the environment perception and decision-making model includes an image feature extractor, a BERT model, a fully connected layer, a short-term scene memory module, and a long-term memory module that are connected in turn. The control module is configured to use the action instruction to control the unmanned surface vehicle to perform an action.
(63) The image feature extractor is configured to extract an image feature from the image of the environment. The BERT model is configured to extract an image feature representation containing a text feature from the image feature. The fully connected layer is configured to map the image feature representation onto an image query suitable for recognition by a large language model. The short-term scene memory module is configured to preset a plurality of questions, and use a short-term scene memory of the large language model to answer the plurality of questions in a specified order to obtain a plurality of answers. The long-term memory module is configured to use a long-term memory and in-context learning of the large language model to output the action instruction based on the plurality of answers. The large language model is a large language model obtained after fine tuning based on reinforcement learning.
(64) A process of training the environment perception and decision-making model of the unmanned surface vehicle includes following steps:
(65) Step A: Firstly, collect a large number of images related to the ocean and the unmanned surface vehicle, together with corresponding descriptive text, to create an image-text pair dataset. Secondly, filter out low-quality images and manually review the text to correct errors, including removing duplicate words and disconnected sentences. Finally, use the image-text pairs obtained after manual filtering as training data of a vision encoder.
(66) Step B: In the vision encoder, use a pre-trained vision transformer as the image feature extractor to extract a semantic feature of the image, use a pre-trained BERT model as a text feature extractor to extract a semantic feature of the text, and freeze the pre-trained models.
(67) Step C: Insert a randomly initialized cross attention module into each transformer block of the BERT model, fuse the text and a learnable query to obtain an initial input, and add a cross attention module between a self-attention module of the BERT model and a feedforward neural network, where the cross attention module plays a role of fusing the image feature and the text feature, the image feature extracted by the pre-trained vision transformer is input into the cross attention module, and then a query containing both the text feature and the image feature is obtained through the feedforward neural network to calculate a subsequent image-text matching loss. In addition, the text is also input into the self-attention module pre-trained separately by using a BERT and into the feedforward neural network to obtain the text feature, and the text feature is combined with the image feature obtained above to calculate an image-text contrastive loss. In addition, a mask is added to the self-attention module, and the image feature and masked text are used to predict masked content, to obtain an image-text generation loss. The learnable query can be obtained through training by using the above three losses. After that, an additional fully connected layer is trained at an output terminal of the model by using the image-text generation loss. The fully connected layer is used to achieve a mapping between the text feature extracted from the image and a text embedding that can be recognized by the large language model. This allows a pre-trained large language model to recognize the image feature almost without making any change, and based on this, text generation and reasoning can be carried out.
(68) In the step C, the trained hybrid loss function is defined as follows:
(69) L_loss = λ1·L_itc + λ2·L_itm + λ3·L_itg
(70) In the above formula, λ1 represents a weight hyper-parameter of the image-text contrastive loss L_itc, λ2 represents a weight hyper-parameter of the image-text matching loss L_itm, and λ3 represents a weight hyper-parameter of the image-text generation loss L_itg.
(71) Step D: Collect questions that can characterize a status of the unmanned surface vehicle as expert knowledge. Input a status image collected for the unmanned surface vehicle in real time into the trained BERT model to obtain the corresponding image feature query.
(72) Step E: After sorting the questions collected in the step D, splice the questions and the image query gradually based on a difficulty level in ascending order, and input the spliced information into the language model. Inputting the questions in ascending order of difficulty utilizes a short-term scene memory of the large language model: under this progressive setting, the answer to each previous question is used as a short-term scene memory to assist in answering the next question. Next, an attention layer in the large language model can be used to implicitly optimize a parameter in a forward reasoning process, thereby achieving a long-term memory (corresponding to a long-term memory in the flowchart) and in-context learning for the input text, and further guiding the language model to fuse multimodal information and the long-term memory to obtain various possible instructions for a next action of the unmanned surface vehicle.
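The difficulty-ordered splicing in the step E can be sketched as prompt construction. The questions, difficulty ratings, and stubbed answers below are hypothetical; a real system would insert the actual image query tokens and the LLM's actual answers.

```python
def progressive_prompts(image_query_text, questions_by_difficulty):
    """Splice the image query and questions in ascending difficulty order;
    every prompt carries the earlier question-answer pairs as a
    short-term scene memory (answers are stubbed here)."""
    memory, prompts = [], []
    for _, q in sorted(questions_by_difficulty):
        prompt = " ".join([image_query_text, *memory, "Q: " + q])
        prompts.append(prompt)
        memory.append("Q: " + q + " A: <answer to '" + q + "'>")  # stub LLM answer
    return prompts

questions = [(2, "Is the obstacle moving?"),
             (1, "Is there an obstacle ahead?"),
             (3, "Which way should the vehicle turn?")]
prompts = progressive_prompts("[IMAGE QUERY]", questions)
for p in prompts:
    print(p)
```

Each successive prompt grows by one question-answer pair, so by the final, hardest question the model sees the full chain of easier answers as context.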
(73) Step F: Have a domain expert analyze the instruction set obtained in the step E and rank the rationality of each instruction in the instruction set. Finally, all kinds of images and questions and their corresponding instructions are sorted to obtain a small-scale dataset to simulate a working memory of a human brain. The sorting of the instruction set is regarded as label information.
(74) Step G: Train a reward model by using the small-scale dataset collected in the step F, where the reward model is trained by using a manually-annotated sorting of instructions in different scenarios as training data, to simulate a human scoring each incoming instruction based on a current status of the unmanned surface vehicle, so as to provide as reasonable a score as possible for each instruction without changing the manually-annotated instruction order.
(75) Further, in the step G, a loss function for training the reward model is defined as follows:
(76) loss(θ) = −(1/C_K^2)·E_{(x, y_w, y_l)~D}[log(σ(r_θ(x, y_w) − r_θ(x, y_l)))]
(77) In the above formula, r_θ represents the reward model, x represents a question and an image that are input into the model, E_{(x, y_w, y_l)~D} represents an expectation over samples extracted from the small-scale dataset D, y_w represents the instruction ranked higher in a sampled instruction pair, y_l represents the instruction ranked lower in the pair, σ represents a sigmoid function, and C_K^2 represents a quantity of instruction pairs that can be formed from the K sorted instructions.
(78) Step H: Use a trained reward model to train a reinforcement learner, and fine tune the large language model in the step E again, such that an output of the large language model can get a higher score in the reward function. A finally trained model is used to obtain a final instruction under current sea conditions, and then the unmanned surface vehicle completes autonomous decision-making according to the obtained instruction.
(79) Further, in the step H, an objective function for training the reinforcement learner is defined as follows:
(80) objective(φ) = E_{(x,y)~D_{π_φ^RL}}[r_θ(x, y) − β·log(π_φ^RL(y|x)/π^LLM(y|x))] + γ·E_{x~D_pretrain}[log(π_φ^RL(x))]
(81) In the above formula, π_φ^RL represents a reinforcement learning model, r_θ represents the reward model in the step G, and π^LLM represents an initial large language model that is not fine tuned. In the objective function, a first term r_θ(x, y) is intended to enable the instruction trained by the model to obtain a higher score. It is worth noting that data sampled in E_{(x,y)} can be regarded as a status in a classic reinforcement learning algorithm and changes with update of the model. A second term log(π_φ^RL(y|x)/π^LLM(y|x)) is a regular term, which constrains the reinforcement learning model by using Kullback-Leibler (KL) divergence between probability distributions of the new model learned through reinforcement learning and the initial model, such that the learned reinforcement learning model does not deviate from the initial model excessively. A third term E_{x~D_pretrain}[log(π_φ^RL(x))] introduces the pre-training data into the fine tuning, such that the fine-tuned model retains its performance on the pre-training data while being optimized toward a higher reward.
(82) Each embodiment of the present specification is described in a progressive manner, each embodiment focuses on the difference from other embodiments, and the same and similar parts between the embodiments may refer to each other. Since the system disclosed in the embodiments corresponds to the method disclosed in the embodiments, the description is relatively simple, and reference can be made to the method description.
(83) Specific examples are used herein to explain the principles and implementations of the present disclosure. The foregoing description of the embodiments is merely intended to help understand the method of the present disclosure and its core ideas; besides, various modifications may be made by a person of ordinary skill in the art to specific implementations and the scope of application in accordance with the ideas of the present disclosure. In conclusion, the content of the present specification shall not be construed as limitations to the present disclosure.