Method of data caching in delay tolerant network based on information centric network, computer readable medium and device for performing the method

10798013 · 2020-10-06

Abstract

Provided is a method of data caching in a delay tolerant network based on an information centric network, and a recording medium and a device for performing the same. The data caching method includes: checking a remaining buffer amount and a buffer usage amount of a node; caching data received from another node in the node according to a data caching policy when the remaining buffer amount of the node is greater than a preset remaining buffer amount threshold; deleting data cached in the node from the node according to a data deletion policy when the buffer usage amount of the node is less than a preset buffer usage amount threshold; and setting an initial Time-to-Live (TTL) value of the data received from another node, or updating a TTL of the data cached in the node, using information of the data received from another node or information of the node.

Claims

1. A method of data caching in a delay tolerant network based on an information centric network, the method comprising: checking whether data is received from a first node; checking a remaining buffer amount and a buffer usage amount of a second node; caching the data received from the first node in the second node according to a data caching policy, when the data is received from the first node and the remaining buffer amount of the second node is greater than a preset remaining buffer amount threshold; deleting data cached in the second node from the second node according to a data deletion policy, when the data is not received from the first node and the buffer usage amount of the second node is less than a preset buffer usage amount threshold; and setting an initial Time-to-Live (TTL) value of the data received from the first node or updating a TTL of the data cached in the second node using information of the data received from the first node or information of the second node, wherein the caching of the data comprises: comparing data information including the number of requester nodes of the data received from the first node or node information including delivery predictability of the data received from the first node to a requester node with a preset data caching threshold; and setting a remaining delivery frequency of the data received from the first node according to a result of the comparison between the data information or the node information and the preset data caching threshold, and caching the data received from the first node in the second node.

2. The method of claim 1, wherein the setting of the initial TTL value comprises: setting the initial TTL value of the data received from the first node using the data information including the number of requester nodes and a priority of the data received from the first node, or the node information including the delivery predictability of the data received from the first node to the requester node.

3. The method of claim 1, wherein the updating of the TTL comprises: calculating a variation of the data information including the number of requester nodes and a remaining delivery frequency of the data cached in the second node or delivery predictability of the data cached in the second node to the requester node for a unit time; calculating a TTL increase and decrease value of the data cached in the second node using the variation per the unit time; and updating the TTL of the data cached in the second node by adding the TTL increase and decrease value to a current TTL of the data cached in the second node.

4. The method of claim 1, wherein the updating of the TTL comprises: checking the TTL of the data cached in the second node; and when the TTL of the data cached in the second node is equal to or less than zero (0), deleting the data cached in the second node from the second node.

5. The method of claim 1, wherein the deleting of the data comprises: arranging at least one sub-data in the data cached in the second node based on delivery predictability of the data cached in the second node to the requester node, or calculating a node information arrangement value representing a relationship between delivery predictability of the data cached in the second node to a requester node and a priority of the data cached in the second node, and arranging at least one sub-data in the data cached in the second node based on the node information arrangement value; and deleting the data cached in the second node according to an order of the arrangement of the at least one sub-data in the data cached in the second node.

6. The method of claim 1, wherein the setting of the remaining delivery frequency of the data comprises: when the data information or the node information is less than the preset data caching threshold, setting the remaining delivery frequency of the data received from the first node to one (1), and caching the data received from the first node in the second node; and when the data information or the node information is equal to or greater than the preset data caching threshold, calculating the remaining delivery frequency of the data received from the first node using the data information or the node information.

7. The method of claim 1, wherein the setting of the remaining delivery frequency of the data comprises: checking the remaining delivery frequency of the data cached in the second node; and when the remaining delivery frequency of the data cached in the second node is zero (0), deleting the data cached in the second node.

8. The method of claim 1, wherein the deleting of the data comprises: arranging at least one sub-data in the data cached in the second node based on the data information including the number of requester nodes, the TTL, a remaining delivery frequency and a priority of the data cached in the second node; and deleting the data cached in the second node according to an order of the arrangement of the at least one sub-data in the data cached in the second node.

9. The method of claim 1, wherein the deleting of the data comprises: calculating a data information arrangement value representing a relationship between the data information including the number of requester nodes, the TTL, a remaining delivery frequency and a priority of the data cached in the second node; arranging at least one sub-data in the data cached in the second node based on the data information arrangement value; and deleting the data cached in the second node according to an order of the arrangement of the at least one sub-data in the data cached in the second node.

10. A device for data caching in a delay tolerant network based on an information centric network, the device comprising: a memory; and one or more processors configured to: check whether data is received from a first node; check a remaining buffer amount and a buffer usage amount of a second node; cache the data received from the first node in the second node according to a data caching policy, when the data is received from the first node and the remaining buffer amount of the second node is greater than a preset remaining buffer amount threshold; delete data cached in the second node from the second node according to a data deletion policy, when the data is not received from the first node and the buffer usage amount of the second node is less than a preset buffer usage amount threshold; and set an initial Time-to-Live (TTL) value of the data received from the first node or update a TTL of the data cached in the second node using information of the data received from the first node or information of the second node, wherein the caching of the data comprises: comparing data information including the number of requester nodes of the data received from the first node or node information including delivery predictability of the data received from the first node to a requester node with a preset data caching threshold; and setting a remaining delivery frequency of the data received from the first node according to a result of the comparison between the data information or the node information and the preset data caching threshold, and caching the data received from the first node in the second node.

11. A computer-readable non-transitory recording medium having recorded thereon a computer program for performing a method of data caching, the method comprising: checking whether data is received from a first node; checking a remaining buffer amount and a buffer usage amount of a second node; caching the data received from the first node in the second node according to a data caching policy, when the data is received from the first node and the remaining buffer amount of the second node is greater than a preset remaining buffer amount threshold; deleting data cached in the second node from the second node according to a data deletion policy, when the data is not received from the first node and the buffer usage amount of the second node is less than a preset buffer usage amount threshold; and setting an initial Time-to-Live (TTL) value of the data received from the first node or updating a TTL of the data cached in the second node using information of the data received from the first node or information of the second node, wherein the caching of the data comprises: comparing data information including the number of requester nodes of the data received from the first node or node information including delivery predictability of the data received from the first node to a requester node with a preset data caching threshold; and setting a remaining delivery frequency of the data received from the first node according to a result of the comparison between the data information or the node information and the preset data caching threshold, and caching the data received from the first node in the second node.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 is a block diagram of a device of data caching in a delay tolerant network based on information centric network according to an embodiment of the present disclosure.

(2) FIG. 2 is a flowchart of a congestion control method according to the embodiment of the present disclosure.

(3) FIG. 3 is a detailed flowchart when determining a data caching policy in the step of determining a data caching policy or a data deletion policy shown in FIG. 2.

(4) FIG. 4 is a detailed flowchart when determining a data deletion policy in the step of determining a data caching policy or a data deletion policy shown in FIG. 2.

(5) FIG. 5 is a detailed flowchart of the step of setting a Time-To-Live of data shown in FIG. 2.

DETAILED DESCRIPTION OF EMBODIMENTS

(6) The following detailed description of the present disclosure is made with reference to the accompanying drawings, in which particular embodiments for practicing the present disclosure are shown for illustration purposes. These embodiments are described in sufficient detail for those skilled in the art to practice the present disclosure. It should be understood that various embodiments of the present disclosure are different but need not be mutually exclusive. For example, particular shapes, structures and features described herein in connection with one embodiment can be embodied in other embodiments without departing from the spirit and scope of the present disclosure. It should be further understood that changes can be made to the locations or arrangements of individual elements in each disclosed embodiment without departing from the spirit and scope of the present disclosure. Accordingly, the following detailed description is not intended to be taken in a limiting sense, and the scope of the present disclosure, if appropriately described, is only defined by the appended claims along with the full scope of equivalents to which such claims are entitled. In the drawings, similar reference signs denote the same or similar functions in many aspects.

(7) Hereinafter, the embodiments of the present disclosure will be described in more detail with reference to the accompanying drawings.

(8) The term Unit is defined herein as having its broadest definition to one of ordinary skill in the art, referring to software including instructions executable in a non-transitory computer readable medium that perform the associated function when executed, a circuit designed to perform the associated function, hardware designed to perform the associated function, or a combination of software, a circuit, and/or hardware designed to perform the associated function.

(9) FIG. 1 is a block diagram of a device of data caching in a delay tolerant network based on the information centric network according to the embodiment of the present disclosure.

(10) The data caching device 1000 according to the embodiment of the present disclosure may be equipped in each node in a network environment to control caching of data received from another node or deletion of data cached in nodes.

(11) In the present disclosure, the network environment may be a Delay Tolerant Networking (DTN) environment based on Information Centric Networking (ICN).

(12) ICN is a networking approach based on data information such as the name of content or data rather than an IP address. In ICN, a message may be classified into Interest and Data. A data requester that needs Data can disseminate Interest through the network, and when receiving the Interest, a data provider that has the corresponding Data can deliver the Data to the data requester along the same path in reverse.

(13) DTN is an approach designed to deliver a message between neighboring nodes in a store-carry-forward way in an environment where connectivity between a source node and a destination node is not guaranteed. In DTN, each node stores a message to transmit, and can forward the message to another node according to a preset condition when it encounters another node.

(14) The data caching device 1000 according to the embodiment of the present disclosure may control data caching or deletion according to a data caching or deletion policy using data information or node information defined in ICN in consideration of node buffer capacity.

(15) Referring to FIG. 1, the data caching device 1000 according to the embodiment of the present disclosure may include a buffer check unit 10, a caching policy unit 20, a Time-To-Live (TTL) setting unit 30, a data caching unit 40 and a data deletion unit 50.

(16) The buffer check unit 10 may check a remaining buffer amount b.sub.rem.sup.N and a buffer usage amount b.sub.use.sup.N of node.

(17) The buffer check unit 10 may compare the remaining buffer amount b.sub.rem.sup.N of node with a preset remaining buffer amount threshold b.sub.rem.sup.thr. When data is received from another node, the buffer check unit 10 may check the remaining buffer amount b.sub.rem.sup.N of node, and compare the remaining buffer amount b.sub.rem.sup.N of node with the remaining buffer amount threshold b.sub.rem.sup.thr. When the remaining buffer amount b.sub.rem.sup.N of node is greater than the remaining buffer amount threshold b.sub.rem.sup.thr, the buffer check unit 10 may control to cache the data received from another node in the node.

(18) The buffer check unit 10 may compare the buffer usage amount b.sub.use.sup.N of the node with a preset buffer usage amount threshold b.sub.use.sup.thr. When the buffer usage amount b.sub.use.sup.N of node is less than the buffer usage amount threshold b.sub.use.sup.thr, the buffer check unit 10 may control to delete the data cached in the node.
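
The two threshold checks above can be sketched as follows (an illustrative Python sketch only; the function name, parameter names and string return values are assumptions, not from the patent):

```python
def buffer_action(data_received, b_rem, b_use, b_rem_thr, b_use_thr):
    """Decide whether to cache incoming data or delete cached data.

    Mirrors the buffer check unit: cache when data arrives and the
    remaining buffer amount exceeds its threshold; delete when no data
    arrives and the buffer usage amount is below its threshold.
    """
    if data_received and b_rem > b_rem_thr:
        return "cache"
    if not data_received and b_use < b_use_thr:
        return "delete"
    return "none"
```

The subsequent caching or deletion itself is then governed by the caching policy unit 20, described next.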

(19) As described above, the buffer check unit 10 may check the remaining buffer amount b.sub.rem.sup.N and the buffer usage amount b.sub.use.sup.N of the node to support data caching or deletion control in the caching policy unit 20 as described below, thereby allowing for efficient management of node buffer capacity.

(20) The caching policy unit 20 may control to cache the data received from another node according to the data caching policy. Additionally, the caching policy unit 20 may control to delete the data cached in the node according to the data deletion policy.

(21) Specifically, the caching policy unit 20 may control to cache the data received from another node in the node according to the data caching policy using data information or node information. When data is received from another node and the buffer check unit 10 identifies that the remaining buffer amount b.sub.rem.sup.N of the node is greater than the remaining buffer amount threshold b.sub.rem.sup.thr, the caching policy unit 20 may control to cache the data received from another node according to the data caching policy.

(22) In the following description, the data information may include the number of requester nodes N.sub.di, TTL T.sub.di, delivery frequency f.sub.di, the remaining delivery frequency F.sub.di and priority Y.sub.di of data.

(23) Additionally, the node information may include delivery predictability P.sub.di of data from the node to requester node. The delivery predictability P.sub.di may be set as a maximum value P.sub.di.sup.max or a minimum value P.sub.di.sup.min of delivery predictability of data from the node to at least one requester node. Alternatively, the delivery predictability P.sub.di may be set as a difference P.sub.di.sup.max−P.sub.di.sup.min, a sum P.sub.di.sup.sum or an average P.sub.di.sup.avg of the maximum value P.sub.di.sup.max and the minimum value P.sub.di.sup.min of delivery predictability of data from the node to at least one requester node.
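
The alternative ways of collapsing per-requester delivery predictabilities into a single P.sub.di can be sketched as follows (illustrative Python; the function name and mode strings are assumptions, not from the patent):

```python
def aggregate_delivery_predictability(p_values, mode="max"):
    """Collapse the delivery predictabilities to the requester nodes
    into one value P_di.

    Modes mirror the options in the text: max, min, diff (max - min),
    sum and avg.
    """
    if mode == "max":
        return max(p_values)
    if mode == "min":
        return min(p_values)
    if mode == "diff":
        return max(p_values) - min(p_values)
    if mode == "sum":
        return sum(p_values)
    if mode == "avg":
        return sum(p_values) / len(p_values)
    raise ValueError(f"unknown mode: {mode}")
```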

(24) According to the data caching policy, the caching policy unit 20 may compare a value I.sub.di obtained from the data information or node information with a preset data caching threshold I.sub.C.

(25) For example, the caching policy unit 20 may compare the number of requester nodes N.sub.di of data received from another node with a preset requester node number threshold N.sub.C. Alternatively, the caching policy unit 20 may compare the delivery predictability P.sub.di of data received from another node to requester node with a preset delivery predictability threshold P.sub.C.

(26) Additionally, the caching policy unit 20 may set the remaining delivery frequency F.sub.di of data received from another node according to the results of comparison between the value I.sub.di obtained from data information or node information and the data caching threshold I.sub.C, and control to cache the data in the node.

(27) When the value I.sub.di obtained from data information or node information is less than the data caching threshold I.sub.C, the caching policy unit 20 may set the remaining delivery frequency F.sub.di of data received from other node to one (1) and control to cache in the node. In this case, the data received from another node may be only cached in the node until it is delivered to another node.

(28) For example, when the number of requester nodes N.sub.di of data received from another node is less than the preset requester node number threshold N.sub.C, or the delivery predictability P.sub.di of data received from another node to requester node is less than the preset delivery predictability threshold P.sub.C, the caching policy unit 20 may set the remaining delivery frequency F.sub.di of data received from another node to one (1).

(29) When the value I.sub.di obtained from data information or node information is equal to or greater than the data caching threshold I.sub.C, the caching policy unit 20 may calculate the remaining delivery frequency F.sub.di of data received from another node using the data information or node information as shown in the following Equation 1, and control to cache in the node:
F.sub.d.sub.i=c.sub.F·I.sub.d.sub.i  [Equation 1]

(30) In Equation 1, F.sub.di denotes the remaining delivery frequency of data, c.sub.F denotes the delivery frequency coefficient, and I.sub.di denotes the value obtained from data information or node information.

(31) For example, when the number of requester nodes N.sub.di of data received from another node is equal to or greater than the preset requester node number threshold N.sub.C, the caching policy unit 20 may calculate the remaining delivery frequency F.sub.di by multiplication of the delivery frequency coefficient c.sub.F.sup.N of the number of requester nodes and the number of requester nodes N.sub.di of the data as shown in Equation 1.

(32) Alternatively, when the number of requester nodes N.sub.di of data received from another node is equal to or greater than the preset requester node number threshold N.sub.C, the caching policy unit 20 may calculate the remaining delivery frequency F.sub.di by multiplication of the delivery frequency coefficient c.sub.F.sup.Y of priority and the priority Y.sub.di of the data as shown in Equation 1. The priority Y.sub.di of data may be expressed as a natural number of [1, Y] according to the grade.

(33) Alternatively, when the delivery predictability P.sub.di of data received from another node to requester node is equal to or greater than the preset delivery predictability threshold P.sub.C, the caching policy unit 20 may calculate the remaining delivery frequency F.sub.di by multiplication of the delivery frequency coefficient c.sub.F.sup.P of delivery predictability and the delivery predictability P.sub.di as shown in Equation 1.

(34) Additionally, the caching policy unit 20 may check the remaining delivery frequency F.sub.di of data cached in the node, and when the remaining delivery frequency F.sub.di of data cached in the node is zero (0), may control to delete the corresponding data from the node.
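
Paragraphs (27) through (34) can be condensed into a short sketch of the caching decision (illustrative Python under the assumption that I.sub.di, I.sub.C and c.sub.F are plain numbers; names are not from the patent):

```python
def remaining_delivery_frequency(i_di, i_c, c_f):
    """Set F_di per the data caching policy.

    Below the caching threshold, the data is cached for a single
    delivery only (F = 1); otherwise Equation 1, F_di = c_F * I_di,
    applies.
    """
    if i_di < i_c:
        return 1
    return c_f * i_di
```

Data whose remaining delivery frequency reaches zero would then be removed from the buffer, as paragraph (34) describes.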

(35) The caching policy unit 20 may control to delete the data cached in the node according to the data deletion policy using data information or node information. When the buffer check unit 10 identifies that the buffer usage amount b.sub.use.sup.N of the node is less than the buffer usage amount threshold b.sub.use.sup.thr, the caching policy unit 20 may control to delete the data cached in the node from the node according to the data deletion policy.

(36) According to the data deletion policy, the caching policy unit 20 may arrange at least one data cached in the node based on data information or node information of each of the at least one data cached in the node.

(37) For example, the caching policy unit 20 may arrange at least one data cached in the node in ascending order of the number of requester nodes of data.

(38) Alternatively, the caching policy unit 20 may arrange at least one data cached in the node in ascending order of the TTL of data.

(39) Alternatively, the caching policy unit 20 may arrange at least one data cached in the node in descending order of the delivery frequency of data.

(40) Alternatively, the caching policy unit 20 may arrange at least one data cached in the node in ascending order of the priority of data.

(41) Alternatively, the caching policy unit 20 may arrange at least one data cached in the node in ascending order of the delivery predictability to requester node.

(42) Alternatively, the caching policy unit 20 may calculate a data information arrangement value I.sub.di.sup.pl using data information or node information of each of at least one data cached in the node and arrange the at least one data cached in the node based on the data information arrangement value I.sub.di.sup.pl. Here, the data information arrangement value I.sub.di.sup.pl may be a value denoting the relationship between multiple data information. Alternatively, the data information arrangement value I.sub.di.sup.pl may be a value denoting the relationship between the node information and the priority of data.

(43) For example, the caching policy unit 20 may calculate a delivery frequency ratio W.sub.di.sup.f as the data information arrangement value I.sub.di.sup.pl, calculated as the ratio of the delivery frequency f.sub.di of data to the number of requester nodes N.sub.di as shown in the following Equation 2, and arrange at least one data cached in the node in descending order of the delivery frequency ratio W.sub.di.sup.f:

(44) W.sub.d.sub.i.sup.f=f.sub.d.sub.i/N.sub.d.sub.i  [Equation 2]

(45) In Equation 2, W.sub.di.sup.f denotes the delivery frequency ratio, f.sub.di denotes the delivery frequency of data, and N.sub.di denotes the number of requester nodes of data.

(46) Alternatively, the caching policy unit 20 may calculate a priority multiple W.sub.di.sup.Y as the data information arrangement value I.sub.di.sup.pl, calculated by multiplication of the number of requester nodes N.sub.di and the priority Y.sub.di of data as shown in the following Equation 3, and arrange at least one data cached in the node in ascending order of the priority multiple W.sub.di.sup.Y:
W.sub.d.sub.i.sup.Y=N.sub.d.sub.i·Y.sub.d.sub.i  [Equation 3]

(47) In Equation 3, W.sub.di.sup.Y denotes the priority multiple, N.sub.di denotes the number of requester nodes of data, and Y.sub.di denotes the priority of data.

(48) Alternatively, the caching policy unit 20 may calculate a delivery predictability multiple P.sub.di.sup.pl as the data information arrangement value I.sub.di.sup.pl, calculated by multiplication of the delivery predictability P.sub.di to data requester node and the priority Y.sub.di as shown in the following Equation 4, and arrange at least one data cached in the node in ascending order of the delivery predictability multiple P.sub.di.sup.pl:
P.sub.d.sub.i.sup.pl=P.sub.d.sub.i·Y.sub.d.sub.i  [Equation 4]

(49) In Equation 4, P.sub.di.sup.pl denotes the delivery predictability multiple, P.sub.di denotes the delivery predictability of data to requester node, and Y.sub.di denotes the priority of data.

(50) As described above, the caching policy unit 20 may arrange at least one data cached in the node, and delete the data cached in the node in the order of arrangement.
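
As one illustration of Equations 2 and 3 feeding the deletion order (a Python sketch; the entry fields and example values are hypothetical, not from the patent):

```python
# Hypothetical cached entries: number of requester nodes N, delivery
# frequency f and priority Y per data item.
cached = [
    {"name": "d1", "N": 4, "f": 8, "Y": 2},
    {"name": "d2", "N": 2, "f": 2, "Y": 1},
    {"name": "d3", "N": 5, "f": 20, "Y": 3},
]

def delivery_frequency_ratio(d):
    return d["f"] / d["N"]  # Equation 2: W^f = f_di / N_di

def priority_multiple(d):
    return d["N"] * d["Y"]  # Equation 3: W^Y = N_di * Y_di

# Arrange in descending order of W^f; data is deleted from the front
# of the arranged list.
deletion_order = [d["name"]
                  for d in sorted(cached, key=delivery_frequency_ratio,
                                  reverse=True)]
```

With these example values, d3 (ratio 4.0) would be deleted first and d2 (ratio 1.0) last.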

(51) The TTL setting unit 30 may set an initial TTL value of data received from another node and control to cache the data in the node. Additionally, the TTL setting unit 30 may update the TTL of data cached in the node.

(52) Specifically, the TTL setting unit 30 may set the initial TTL value of data received from another node using data information or node information. When the caching policy of data received from another node is determined by the caching policy unit 20, the TTL setting unit 30 may set the initial TTL value of the corresponding data.

(53) The TTL setting unit 30 may set the initial TTL value T.sub.di.sup.init of data received from another node as the sum of the value I.sub.di obtained from data information or node information multiplied by the TTL coefficient c.sub.T and the TTL constant C.sub.T as shown in the following Equation 5, and control to cache the data in the node. Here, the TTL coefficient c.sub.T may be set as a difference T.sub.di.sup.max−T.sub.di.sup.min of the maximum TTL value T.sub.di.sup.max and the minimum TTL value T.sub.di.sup.min, and the TTL constant C.sub.T may be set as the minimum TTL value T.sub.di.sup.min.
T.sub.d.sub.i.sup.init=c.sub.T·I.sub.d.sub.i+C.sub.T  [Equation 5]

(54) In Equation 5, T.sub.di.sup.init denotes the initial TTL value, c.sub.T denotes the TTL coefficient, I.sub.di denotes the value obtained from data information or node information, and C.sub.T denotes the TTL constant.

(55) For example, the TTL setting unit 30 may set the initial TTL value T.sub.di.sup.init of data received from another node by applying the number of requester nodes N.sub.di of data received from another node to the value I.sub.di obtained from data information or node information in Equation 5.

(56) Alternatively, the TTL setting unit 30 may set the initial TTL value T.sub.di.sup.init of data received from another node by applying the priority Y.sub.di of data received from another node to the value I.sub.di obtained from data information or node information in Equation 5.

(57) Alternatively, the TTL setting unit 30 may set the initial TTL value T.sub.di.sup.init of data received from another node by applying the delivery predictability P.sub.di of data received from another node to requester node to the value I.sub.di obtained from data information or node information in Equation 5.
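
Equation 5 with the stated choices c.sub.T = T.sup.max−T.sup.min and C.sub.T = T.sup.min can be sketched as follows (illustrative Python; it assumes I.sub.di is normalized to [0, 1], e.g. a delivery predictability, so that the initial TTL lands between the minimum and maximum TTL values):

```python
def initial_ttl(i_di, t_min, t_max):
    """Equation 5: T_init = c_T * I_di + C_T, with c_T = t_max - t_min
    and C_T = t_min, so I_di in [0, 1] maps onto [t_min, t_max]."""
    c_t = t_max - t_min  # TTL coefficient
    big_c_t = t_min      # TTL constant
    return c_t * i_di + big_c_t
```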

(58) When a unit time has elapsed, the TTL setting unit 30 may update the TTL of data cached in the node using data information or node information.

(59) The TTL setting unit 30 may calculate a variation I.sub.di.sup.Δτ of the value obtained from data information or node information for a unit time τ. The TTL setting unit 30 may calculate a TTL increase and decrease value ΔT.sub.di of data cached in the node by multiplication of the variation I.sub.di.sup.Δτ of the value obtained from data information or node information for the unit time τ and the TTL increase/decrease coefficient c.sub.ΔT as shown in the following Equation 6:
ΔT.sub.d.sub.i=c.sub.ΔT·I.sub.d.sub.i.sup.Δτ  [Equation 6]

(60) In Equation 6, ΔT.sub.di denotes the TTL increase and decrease value, c.sub.ΔT denotes the TTL increase and decrease coefficient, and I.sub.di.sup.Δτ denotes the variation of the value obtained from data information or node information per the unit time τ.

(61) For example, the TTL setting unit 30 may calculate a requester node number variation N.sub.di.sup.Δτ of data cached in the node per the unit time τ, and calculate a TTL increase and decrease value ΔT.sub.di of data cached in the node by applying the requester node number variation N.sub.di.sup.Δτ to the variation I.sub.di.sup.Δτ of the value obtained from data information or node information per the unit time τ in Equation 6.

(62) Alternatively, the TTL setting unit 30 may calculate a remaining delivery frequency variation F.sub.di.sup.Δτ of data cached in the node per the unit time τ, and calculate a TTL increase and decrease value ΔT.sub.di of data cached in the node by applying the remaining delivery frequency variation F.sub.di.sup.Δτ to the variation I.sub.di.sup.Δτ of the value obtained from data information or node information per the unit time τ in Equation 6.

(63) Alternatively, the TTL setting unit 30 may calculate a delivery predictability variation P.sub.di.sup.Δτ of data cached in the node to requester node for the unit time τ, and calculate a TTL increase and decrease value ΔT.sub.di of data cached in the node by applying the delivery predictability variation P.sub.di.sup.Δτ to the variation I.sub.di.sup.Δτ of the value obtained from data information or node information per the unit time τ in Equation 6.

(64) The TTL setting unit 30 may update the TTL of data cached in the node by adding the TTL increase and decrease value ΔT.sub.di to the current TTL T.sub.di.sup.old of data cached in the node as shown in the following Equation 7:
T.sub.d.sub.i.sup.new=T.sub.d.sub.i.sup.old+ΔT.sub.d.sub.i  [Equation 7]

(65) In Equation 7, T.sub.di.sup.new denotes the updated TTL of data cached in the node, T.sub.di.sup.old denotes the current TTL of data cached in the node, and ΔT.sub.di denotes the TTL increase and decrease value.

(66) Meanwhile, the TTL setting unit 30 may check the TTL T.sub.di of data cached in the node, and when the TTL T.sub.di of data cached in the node is equal to or less than zero (0), may control to delete the data cached in the node from the node.
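
Equations 6 and 7 together with the zero-TTL check amount to the following update step (illustrative Python; names are not from the patent):

```python
def update_ttl(t_old, delta_i, c_delta_t):
    """Equation 6: dT = c_dT * dI; Equation 7: T_new = T_old + dT."""
    delta_t = c_delta_t * delta_i
    return t_old + delta_t

def keep_cached(ttl):
    # The TTL setting unit deletes data whose TTL is zero or less.
    return ttl > 0
```

A shrinking information value (e.g. fewer requester nodes per unit time) thus yields a negative ΔT and drives the data toward deletion.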

(67) The data caching unit 40 caches data, for which the caching policy is determined by the caching policy unit 20, in the buffer of the node.

(68) The data deletion unit 50 may delete data, for which the deletion policy is determined by the caching policy unit 20, from the buffer of the node.

(69) As described above, the data caching device 1000 according to the embodiment of the present disclosure may control data caching or deletion according to the data caching or deletion policy using data information or node information defined in ICN in consideration of node buffer capacity. Accordingly, it is possible to manage node buffer capacity efficiently in a DTN having limited node buffer capacity, and to apply a caching policy that considers the presence of multiple requester nodes for the same data in a DTN environment.

(70) Hereinafter, a method of data caching in a delay tolerant network based on an information centric network according to the embodiment of the present disclosure will be described.

(71) The data caching method according to the embodiment of the present disclosure may be applied to each node in a network environment to control caching of data received from another node or deletion of data cached in node.

(72) The data caching method according to the embodiment of the present disclosure may be performed in substantially the same configuration as the data caching device 1000 of FIG. 1. Accordingly, the same reference signs are given to the same elements as the data caching device 1000 of FIG. 1, and repeated descriptions are omitted herein.

(73) FIG. 2 is a flowchart of the data caching method according to the embodiment of the present disclosure.

(74) Referring to FIG. 2, the buffer check unit 10 may check a remaining buffer amount and a buffer usage amount of node (S100).

(75) The buffer check unit 10 may compare the remaining buffer amount and the buffer usage amount with a preset remaining buffer amount threshold and a preset buffer usage amount threshold respectively (S200).

(76) According to the results of comparison between the remaining buffer amount and the buffer usage amount and each threshold, the caching policy unit 20 may determine the caching policy of data received from another node or the deletion policy of data cached in node (S300). Its detailed description will be provided below with reference to FIGS. 3 and 4.

(77) The TTL setting unit 30 may set the TTL of data received from another node or data cached in node (S400). Its detailed description will be provided below with reference to FIG. 5.

(78) The data caching unit 40 may cache data received from another node in node according to the data caching policy, and the data deletion unit 50 may delete data cached in node according to the data deletion policy (S500).
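The top-level flow of FIG. 2 (S100 through S500) can be sketched as a single decision function. This is a hedged illustration under stated assumptions, not the patented implementation; the function name, example threshold values, and return strings are hypothetical.

```python
# Hypothetical sketch of the FIG. 2 flow. Per the description above, the
# caching branch is taken when data is received and the remaining buffer
# amount exceeds its threshold; the deletion branch is taken when no data is
# received and the buffer usage amount is below its threshold.

def choose_action(received: bool, b_rem: float, b_use: float,
                  b_rem_thr: float = 0.2, b_use_thr: float = 0.8) -> str:
    """S200/S300: decide between the caching and deletion branches."""
    if received and b_rem > b_rem_thr:
        # FIG. 3 caching policy, then TTL setting (S400) and caching (S500)
        return "cache"
    if not received and b_use < b_use_thr:
        # FIG. 4 deletion policy, then deletion (S500)
        return "delete"
    return "none"
```

For example, with data received and 50% of the buffer remaining the caching branch is taken, whereas with no data received and usage below the threshold the deletion branch is taken.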

(79) FIG. 3 is a detailed flowchart when determining a data caching policy in the step of determining a data caching policy or a data deletion policy shown in FIG. 2.

(80) Referring to FIG. 3, when data is received from another node (S210), and the remaining buffer amount b.sub.rem.sup.N of the node is greater than the remaining buffer amount threshold b.sub.rem.sup.thr (S230), the caching policy unit 20 may compare the value I.sub.di obtained from data information or node information with the preset data caching threshold I.sub.C (S310).

(81) For example, the caching policy unit 20 may compare the number of requester nodes N.sub.di of data received from another node with the preset requester node number threshold N.sub.C. Alternatively, the caching policy unit 20 may compare the delivery predictability P.sub.di of data received from another node to requester node with the preset delivery predictability threshold P.sub.C.

(82) When the value I.sub.di obtained from data information or node information is less than the preset data caching threshold I.sub.C (S310), the caching policy unit 20 may set the remaining delivery frequency F.sub.di of data received from another node to one (1) (S311), and control to cache in the node (S510).

(83) For example, when the number of requester nodes N.sub.di of data received from another node is less than the preset requester node number threshold N.sub.C, or the delivery predictability P.sub.di of data received from another node to requester node is less than the preset delivery predictability threshold P.sub.C, the caching policy unit 20 may set the remaining delivery frequency F.sub.di of data received from another node to one (1).

(84) When the value I.sub.di obtained from data information or node information is equal to or greater than the preset data caching threshold I.sub.C (S310), the caching policy unit 20 may calculate the remaining delivery frequency F.sub.di by multiplication of the value I.sub.di obtained from data information or node information and the delivery frequency coefficient c.sub.F as shown in the above Equation 1 (S312), and control to cache in the node (S510).

(85) For example, when the number of requester nodes N.sub.di of data received from another node is equal to or greater than the preset requester node number threshold N.sub.C, the caching policy unit 20 may calculate the remaining delivery frequency F.sub.di by multiplication of the delivery frequency coefficient c.sub.F.sup.N of the number of requester nodes and the number of requester nodes N.sub.di of the data as shown in Equation 1.

(86) Alternatively, when the number of requester nodes N.sub.di of data received from another node is equal to or greater than the preset requester node number threshold N.sub.C, the caching policy unit 20 may calculate the remaining delivery frequency F.sub.di by multiplication of the delivery frequency coefficient c.sub.F.sup.Y of priority and the priority Y.sub.di of data as shown in Equation 1. The priority Y.sub.di of data may be represented as a natural number of [1, Y] according to the grade.

(87) Alternatively, when the delivery predictability P.sub.di of data received from another node to a requester node is equal to or greater than the preset delivery predictability threshold P.sub.C, the caching policy unit 20 may calculate the remaining delivery frequency F.sub.di by multiplication of the delivery frequency coefficient c.sub.F.sup.P of delivery predictability and the delivery predictability P.sub.di as shown in Equation 1.

(88) Meanwhile, when data is not received from another node (S210) and the buffer usage amount b.sub.use.sup.N of the node is less than the buffer usage amount threshold b.sub.use.sup.thr (S250), the caching policy unit 20 may check the remaining delivery frequency F.sub.di of data cached in the node (S315).

(89) When the remaining delivery frequency F.sub.di of data cached in the node is zero (0) (S315), the caching policy unit 20 may control to delete the corresponding data from the buffer of the node (S520).
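The caching-policy branch of FIG. 3 (S310 through S315) can be sketched as follows, assuming Equation 1 has the form F.sub.di = c.sub.F·I.sub.di (the information value scaled by the matching delivery frequency coefficient); all names are hypothetical.

```python
# Hypothetical sketch of the FIG. 3 caching policy. I_di stands for the value
# taken from data or node information (requester node count N_di, priority
# Y_di, or delivery predictability P_di), and c_F for the matching delivery
# frequency coefficient.

def remaining_delivery_frequency(I_di: float, I_C: float, c_F: float) -> float:
    """S310-S312: set F_di to 1 below the caching threshold I_C, otherwise
    compute it as c_F * I_di (assumed form of Equation 1)."""
    if I_di < I_C:
        return 1.0
    return c_F * I_di

def should_delete(F_di: float) -> bool:
    """S315: data whose remaining delivery frequency has reached zero is
    deleted from the buffer."""
    return F_di == 0.0
```

For example, with three requester nodes against a threshold of five the frequency is set to one, while six requesters and a coefficient of two yield a remaining delivery frequency of twelve.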

(90) FIG. 4 is a detailed flowchart when determining a data deletion policy in the step of determining a data caching policy or a data deletion policy shown in FIG. 2.

(91) Referring to FIG. 4, when the buffer usage amount b.sub.use.sup.N of node is less than the buffer usage amount threshold b.sub.use.sup.thr (S250), the caching policy unit 20 may arrange at least one data cached in the node (S320).

(92) For example, the caching policy unit 20 may arrange at least one data cached in the node in ascending order of the number of requester nodes of data.

(93) Alternatively, the caching policy unit 20 may arrange at least one data cached in the node in ascending order of the TTL of data.

(94) Alternatively, the caching policy unit 20 may arrange at least one data cached in the node in descending order of the delivery frequency of data.

(95) Alternatively, the caching policy unit 20 may arrange at least one data cached in the node in ascending order of the priority of data.

(96) Alternatively, the caching policy unit 20 may arrange at least one data cached in the node in ascending order of the delivery predictability to requester node.

(97) Alternatively, the caching policy unit 20 may calculate the data information arrangement value I.sub.di.sup.pl using data information or node information of each of at least one data cached in the node, and arrange at least one data cached in the node based on the data information arrangement value I.sub.di.sup.pl. Here, the data information arrangement value I.sub.di.sup.pl may be a value denoting the relationship between multiple data information. Alternatively, the data information arrangement value I.sub.di.sup.pl may be a value denoting the relationship between the node information and the priority of data.

(98) For example, the caching policy unit 20 may calculate, as the data information arrangement value I.sub.di.sup.pl, the delivery frequency ratio W.sub.di.sup.f that is calculated as a ratio of the number of requester nodes N.sub.di to the delivery frequency f.sub.di of data as shown in Equation 2, and arrange at least one data cached in the node in descending order of the delivery frequency ratio W.sub.di.sup.f.

(99) Alternatively, the caching policy unit 20 may calculate, as the data information arrangement value I.sub.di.sup.pl, the priority multiple W.sub.di.sup.Y that is calculated by multiplication of the number of requester nodes N.sub.di and the priority Y.sub.di of data as shown in the above Equation 3, and arrange at least one data cached in the node in ascending order of the priority multiple W.sub.di.sup.Y.

(100) Alternatively, the caching policy unit 20 may calculate, as the data information arrangement value I.sub.di.sup.pl, the delivery predictability multiple P.sub.di.sup.pl that is calculated by multiplication of the delivery predictability P.sub.di to a data requester node and the priority Y.sub.di as shown in the above Equation 4, and arrange at least one data cached in the node in ascending order of the delivery predictability multiple P.sub.di.sup.pl.

(101) The caching policy unit 20 may select data to delete from the at least one data cached in the node according to the order of arrangement of the at least one data cached in the node (S321).

(102) The caching policy unit 20 may control to delete the selected data from the buffer of the node (S520).
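The deletion policy of FIG. 4 can be sketched with one of the arrangement values, here the delivery frequency ratio of Equation 2, assumed to be N.sub.di/f.sub.di. The sketch further assumes that deletion proceeds from the head of the arrangement; the function and field names are hypothetical.

```python
# Hypothetical sketch of the FIG. 4 deletion policy (S320-S321), using the
# delivery frequency ratio (assumed Equation 2: W_f = N_di / f_di) as the
# data information arrangement value.

def select_victims(cached: list, k: int = 1) -> list:
    """Arrange cached data in descending order of the delivery frequency
    ratio and pick the first k items for deletion."""
    def w_f(item):
        return item["requesters"] / item["delivery_frequency"]
    ordered = sorted(cached, key=w_f, reverse=True)
    return ordered[:k]
```

Any of the other arrangement values described above (TTL, priority, delivery predictability, or the multiples of Equations 3 and 4) could be substituted for the sort key in the same way.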

(103) FIG. 5 is a detailed flowchart of the step of setting the TTL of data shown in FIG. 2.

(104) Referring to FIG. 5, when data is received from another node (S210), the TTL setting unit 30 may set the initial TTL value T.sub.di.sup.init of the data received from another node as the sum of the value I.sub.di obtained from data information or node information of the data received from another node multiplied by the TTL coefficient c.sub.T and the TTL constant C.sub.T as shown in the above Equation 5 (S410), and control to cache in the node (S510).

(105) For example, the TTL setting unit 30 may set the initial TTL value T.sub.di.sup.init of the data received from another node by applying the number of requester nodes N.sub.di of the data received from other node to the value I.sub.di obtained from data information or node information in Equation 5.

(106) Alternatively, the TTL setting unit 30 may set the initial TTL value T.sub.di.sup.init of the data received from another node by applying the priority Y.sub.di of the data received from another node to the value I.sub.di obtained from data information or node information in Equation 5.

(107) Alternatively, the TTL setting unit 30 may set the initial TTL value T.sub.di.sup.init of the data received from another node by applying the delivery predictability P.sub.di of data received from another node to requester node to the value I.sub.di obtained from data information or node information in Equation 5.
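The initial TTL setting of Equation 5, as described in the paragraphs above, can be sketched directly; the assumed form is T.sub.di.sup.init = c.sub.T·I.sub.di + C.sub.T, and the function name is hypothetical.

```python
# Hypothetical sketch of Equation 5. I_di is the value obtained from data
# information or node information: the number of requester nodes N_di, the
# priority Y_di, or the delivery predictability P_di of the received data.

def initial_ttl(I_di: float, c_T: float, C_T: float) -> float:
    """S410: initial TTL = TTL coefficient * information value + TTL constant."""
    return c_T * I_di + C_T
```

For example, four requester nodes with a TTL coefficient of 2 and a TTL constant of 10 yield an initial TTL of 18 time units.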

(108) Meanwhile, when data is not received from another node (S210), the TTL setting unit 30 may check if the unit time has elapsed (S420).

(109) When the unit time has elapsed (S420), the TTL setting unit 30 may update the TTL T.sub.di.sup.new of data cached in the node by adding a value, obtained by multiplying the variation I.sub.di.sup.Δ of the value obtained from data information or node information per the unit time by the TTL increase and decrease coefficient c.sub.T, to the current TTL T.sub.di.sup.old of the data cached in the node as shown in the following Equation 8 (S421), and control to cache in the node (S510).
T.sub.d.sub.i.sup.new=T.sub.d.sub.i.sup.old+c.sub.T·I.sub.d.sub.i.sup.Δ [Equation 8]

(110) In Equation 8, T.sub.di.sup.new denotes the updated TTL of data cached in the node, T.sub.di.sup.old denotes the current TTL of data cached in the node, c.sub.T denotes the TTL increase and decrease coefficient, and I.sub.di.sup.Δ denotes the variation of the value obtained from data information or node information per the unit time.

(111) For example, the TTL setting unit 30 may calculate the requester node number variation N.sub.di.sup.Δ of data cached in the node per the unit time, and calculate the TTL increase and decrease value ΔT.sub.di of data cached in the node by applying the requester node number variation N.sub.di.sup.Δ to the variation I.sub.di.sup.Δ of the value obtained from data information or node information per the unit time in the above Equation 6.

(112) Alternatively, the TTL setting unit 30 may calculate the remaining delivery frequency variation F.sub.di.sup.Δ of data cached in the node per the unit time, and calculate the TTL increase and decrease value ΔT.sub.di of data cached in the node by applying the remaining delivery frequency variation F.sub.di.sup.Δ to the variation I.sub.di.sup.Δ of the value obtained from data information or node information per the unit time in the above Equation 6.

(113) Alternatively, the TTL setting unit 30 may calculate the delivery predictability variation P.sub.di.sup.Δ of data cached in the node to a requester node per the unit time, and calculate the TTL increase and decrease value ΔT.sub.di of the data cached in the node by applying the delivery predictability variation P.sub.di.sup.Δ to the variation I.sub.di.sup.Δ of the value obtained from data information or node information per the unit time in the above Equation 6.

(114) Additionally, the TTL setting unit 30 may update the TTL of data cached in the node by adding the TTL increase and decrease value ΔT.sub.di to the current TTL T.sub.di.sup.old of data cached in the node.

(115) Meanwhile, when the unit time has not elapsed (S420), the TTL setting unit 30 may check the TTL T.sub.di of data cached in the node (S423).

(116) When the TTL T.sub.di of data cached in the node is equal to or less than zero (0) (S423), the TTL setting unit 30 may control to delete the data cached in the node from the node (S520).
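The FIG. 5 update path (S420 through S520) can be sketched as one step function. Equation 8 folds the increase and decrease value into the TTL update, and the expiry check of S423 applies when the unit time has not elapsed; the function name and tuple return shape are hypothetical.

```python
# Hypothetical sketch of the FIG. 5 TTL update path. delta_I is the
# per-unit-time variation of the value obtained from data information or
# node information, and c_T the TTL increase and decrease coefficient.

def step_ttl(ttl_old: float, c_T: float, delta_I: float,
             unit_time_elapsed: bool) -> tuple:
    """Return (new_ttl, expired). When the unit time has elapsed, apply
    Equation 8 (S421); otherwise only check the TTL for expiry (S423)."""
    ttl = ttl_old + c_T * delta_I if unit_time_elapsed else ttl_old
    return ttl, ttl <= 0.0
```

A cached item keeps accruing (or losing) TTL each unit time according to the sign of the variation, and is deleted from the node as soon as its TTL reaches zero or below.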

(117) Accordingly, the data caching method according to the embodiment of the present disclosure can control data caching or deletion according to a data caching or deletion policy using data information or node information defined in an ICN in consideration of node buffer capacity. Accordingly, it is possible to allow for efficient node buffer capacity management in a DTN having limited node buffer capacity, and provide a caching service in consideration of the presence of multiple requester nodes for the same data in a DTN environment.

(118) The above-described method of data caching in DTN based on ICN may be implemented as an application or in the form of program commands that are executed through various computer components, and recorded in computer-readable recording media. The computer-readable recording media may include program commands, data files and data structures, alone or in combination.

(119) The program commands recorded in the computer-readable recording media may be specially designed and configured for the present disclosure, or may be known and available to those having ordinary skill in the field of computer software.

(120) Examples of the computer-readable recording media include hardware devices specially designed to store and execute program commands, such as magnetic media such as hard disk, floppy disk and magnetic tape, optical recording media such as CD-ROM and DVD, magneto-optical media such as floptical disk, and ROM, RAM and flash memory.

(121) Examples of the program commands include machine codes generated by a compiler as well as high-level language codes that can be executed by a computer using an interpreter. The hardware device may be configured to act as one or more software modules to perform processing according to the present disclosure, or vice versa.

(122) While the present disclosure has been hereinabove described with reference to the embodiments, it will be apparent to those skilled in the corresponding technical field that a variety of modifications and changes may be made thereto without departing from the spirit and scope of the present disclosure set forth in the appended claims.

DETAILED DESCRIPTION OF MAIN ELEMENTS

(123) 1000: Data caching device 10: Buffer check unit 20: Caching policy unit 30: TTL setting unit 40: Data caching unit 50: Data deletion unit