Systems and methods for approximate communication framework for network-on-chips
11483256 · 2022-10-25
CPC classification: Y02D10/00
International classification: G06F11/36
Abstract
Systems and methods are disclosed for reducing latency and power consumption of on-chip data movement through an approximate communication framework for network-on-chips (“NoCs”). The technology leverages the fact that big data applications (e.g., recognition, mining, and synthesis) can tolerate modest error and transfers data with the necessary accuracy, thereby improving the energy-efficiency and performance of multi-core processors.
Claims
1. An approximate communication system for network-on-chips comprising: a multi-core processor, wherein said multi-core processor comprises a code analyzer module that: identifies one or more error-resilient variables in a code of interest prior to code execution, wherein the error-resilient variables comprise a subset of the data generated when the code of interest is executed; generates a data dependency graph to calculate an error threshold for each of the one or more error-resilient variables; appends an approximation instruction to each of the one or more error-resilient variables; updates assembly code to incorporate the approximation instruction; and compresses a data packet comprising the approximation instruction prior to transmitting the data packet.
2. The system of claim 1, wherein the error threshold is dependent on an application's quality requirements.
3. The system of claim 1, wherein the approximation instruction loads and stores the one or more error-resilient variables in memory.
4. The system of claim 1, wherein the approximation instruction includes approximation information that is comprised of address data, variable data type, and error threshold data.
5. The system of claim 1, wherein the multi-core processor writes the data packet to a shared memory or shared cache.
6. The system of claim 1, wherein the multi-core processor compresses data in the write request using the approximation instruction in a network interface at a core and a private cache node.
7. The system of claim 6, wherein the multi-core processor further decompresses data in the write request in the network interface at a shared cache and a shared memory node.
8. The system of claim 7, wherein the multi-core processor further reads a data packet from the shared memory or the shared cache.
9. The system of claim 8, wherein the data packet read from the shared memory or the shared cache is received at a shared memory.
10. The system of claim 9, wherein a quality control table at the shared memory extracts approximation information comprising address data from the read request packet.
11. The system of claim 10, wherein a read reply data packet is approximated if the address data is matched to an entry in the quality control table.
12. The system of claim 11, wherein the matched entry in the quality control table is deleted.
13. The system of claim 11, wherein the read reply packet is received at the core and private cache.
14. The system of claim 13, wherein the data in the read reply packet is decompressed in a network interface at the core and private cache node.
15. A method for approximate communication for network-on-chips comprising: identifying one or more error-resilient variables in a code of interest prior to code execution, wherein the error-resilient variables comprise a subset of the data generated when the code of interest is executed; generating a data dependency graph to calculate an error threshold for each of the one or more error-resilient variables; appending an approximation instruction to each of the one or more error-resilient variables; updating assembly code to incorporate the approximation instruction; and compressing a data packet comprising the approximation instruction prior to transmitting the data packet.
16. The method of claim 15, wherein the error threshold is dependent on an application's quality requirements.
17. The method of claim 15, wherein the approximation instruction loads and stores the one or more error-resilient variables in memory.
18. The method of claim 15, wherein the approximation instruction includes approximation information that is comprised of address data, variable data type, and error threshold data.
19. The method of claim 15, further comprising writing the data packet to a shared memory or shared cache.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
(9) In describing a preferred embodiment of the disclosure illustrated in the drawings, specific terminology will be resorted to for the sake of clarity. However, the disclosure is not intended to be limited to the specific terms so selected, and it is to be understood that each specific term includes all technical equivalents that operate in a similar manner to accomplish a similar purpose. Several preferred embodiments of the disclosure are described for illustrative purposes, it being understood that the disclosure may be embodied in other forms not specifically shown in the drawings.
(10) Described herein are embodiments of an approximate communication system in which the data in a packet are compressed based on the error tolerance of the application to reduce power consumption and network latency. The approximate communication framework described herein includes a software-based quality control method and a hardware-based data approximation method implemented in the network interface (NI).
(11) The high-level workflow of the system is exemplarily shown in the accompanying drawings.
(12) Before the execution of the application, the quality control method uses the code analyzer 108 to identify error-resilient variables and calculate the error tolerance of each variable based on the application's quality requirements for results. The software of the code analyzer preferably runs prior to the execution of the approximation application on the multi-core processor, as exemplarily shown in the drawings.
(14) Second, the code analyzer identifies all variables and results in the approximable section 114 in the C code 102, CFG 104, and assembly code 106.
(15) Third, the analyzer builds a data dependency graph 116, which shows the calculation process for each variable, based on the CFG 104.
(16) Fourth, the code analyzer 108 traverses the data dependency graph and updates the error threshold for each variable 118. The calculated error thresholds 128 are highlighted in the box in the drawings.
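The traversal described above can be sketched in software. This is an illustrative model only, not the patented analyzer: the node format and function names are assumptions, and the per-operation rules follow the error analysis given below for addition/subtraction (operands inherit the result's tolerance) and multiplication (each operand receives √(1+tolerance)−1).

```python
import math

def propagate(node, tol, thresholds):
    """Walk a data dependency graph from the result toward its leaves,
    recording an error threshold for every variable encountered.
    A node is (name, op, children); leaves use op=None."""
    name, op, children = node
    # A variable keeps the tightest threshold seen on any path to it.
    thresholds[name] = min(tol, thresholds.get(name, float("inf")))
    if op in ("+", "-"):
        child_tol = tol                      # add/sub: tolerance passes through
    elif op == "*":
        child_tol = math.sqrt(1 + tol) - 1   # mul: tolerance must be tightened
    else:
        return thresholds                    # leaf variable: nothing below it
    for child in children:
        propagate(child, child_tol, thresholds)
    return thresholds

# Example: c = a + b, where the application tolerates 5 percent error on c.
graph = ("c", "+", [("a", None, []), ("b", None, [])])
print(propagate(graph, 0.05, {}))  # a and b each inherit the 5 percent threshold
```

A multiplication node would instead assign each operand roughly 2.5 percent when the result tolerates 5 percent, matching the worked example later in the text.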
(17) Equation 1 shows the definition of error tolerance, where ã is approximated a and E.sub.r is relative error:
ã=a+E.sub.r×a  (1)
(19) Equation 2 describes the addition and subtraction of ã and {tilde over (b)}, where ã and {tilde over (b)} are transmitted through approximate communication and {tilde over (c)} is the result
{tilde over (c)}=ã±{tilde over (b)}  (2)
(20) Using Equation 1, the following calculations are performed, as shown below:
{tilde over (c)}=c+E.sub.rc×c  (3)
ã=a+E.sub.ra×a  (4)
{tilde over (b)}=b+E.sub.rb×b  (5)
(22) When a and b have the same relative error (E.sub.ra=E.sub.rb=E.sub.rab), Equations 2, 4, and 5 can be combined to arrive at Equation 6 below:
{tilde over (c)}=(a+E.sub.rab×a)±(b+E.sub.rab×b) (6)
(23) By combining Equations 3 and 6 with c=a±b, it can be determined that the relative error (E.sub.rab) for a and b is equal to the relative error (E.sub.rc) for c. Therefore, E.sub.rc≤error tolerance is ensured when E.sub.rab≤error tolerance for the addition and subtraction operation. For example, ã and {tilde over (b)} can each contain less than 5 percent relative error when {tilde over (c)} can tolerate 5 percent relative error. Equation 7 describes the multiplication of ã and {tilde over (b)}, where ã and {tilde over (b)} are transmitted through approximate communication and {tilde over (c)} is the result:
{tilde over (c)}=ã×{tilde over (b)}  (7)
(24) By the same reasoning, Equation 8 can be calculated for the multiplication, where a and b are fully accurate variables and E.sub.rab is the relative error:
{tilde over (c)}=(a+E.sub.rab×a)×(b+E.sub.rab×b)  (8)
(25) By combining Equations 3 and 8, with c=a×b, it can be determined that for the multiplication operation, E.sub.rc=(1+E.sub.rab).sup.2−1. Therefore, E.sub.rc≤error tolerance is ensured when E.sub.rab≤√{square root over (1+error tolerance)}−1 for the multiplication operation. For example, ã and {tilde over (b)} can each contain less than 2.5 percent relative error when {tilde over (c)} can tolerate 5 percent relative error. Equation 9 then describes the division of ã and {tilde over (b)}, where ã and {tilde over (b)} are transmitted through approximate communication and {tilde over (c)} is the result:
{tilde over (c)}=ã/{tilde over (b)} (9)
(26) For the division operation, Equation 10 is calculated, where a and b are fully accurate variables and E.sub.ra, E.sub.rb are relative error:
{tilde over (c)}=(a+E.sub.ra×a)/(b+E.sub.rb×b)  (10)
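The error-propagation claims above can be checked numerically. The following is an illustrative sketch (the operand values are arbitrary, not from the disclosure): when both operands carry the same relative error E, the sum keeps relative error E, while the product's relative error grows to (1+E).sup.2−1.

```python
E = 0.05          # shared relative error on the approximated operands
a, b = 100.0, 40.0
a_t = a + E * a   # "a tilde": approximated a, per Equation 4
b_t = b + E * b   # "b tilde": approximated b, per Equation 5

# Relative error of the approximate sum and product versus the exact results.
sum_err  = abs((a_t + b_t) - (a + b)) / (a + b)
prod_err = abs((a_t * b_t) - (a * b)) / (a * b)

print(sum_err)    # equals E: 0.05
print(prod_err)   # equals (1 + E)**2 - 1: 0.1025
```

This matches the text: to keep 5 percent error on a product, each operand must stay near √(1.05)−1 ≈ 2.5 percent.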
(27) Finally, the code analyzer selects and replaces the mov instructions in the assembly code with approximate mov instructions 120. The selected mov instructions load and store the variables located at the leaves of the data dependency graph (e.g., a, b, c). A new type of approximate load and store instruction (amov, dst, src, error threshold) is introduced into the X86 instruction set so that the network can compress approximable packets. The error threshold in the amov instruction is multiplied by 10.sup.3 to eliminate the decimal point. The final result 122 is shown in the drawings.
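A minimal sketch of this rewriting pass, assuming a simplified textual assembly form ("mov dst, src") and a threshold map produced by the earlier traversal; the syntax and helper names are illustrative, not the patented tool. The fractional threshold is scaled by 10.sup.3 as described above.

```python
def rewrite(asm_lines, thresholds):
    """Replace mov instructions whose source is an error-resilient leaf
    variable with amov instructions carrying an integer error threshold."""
    out = []
    for line in asm_lines:
        parts = line.split(None, 1)
        if parts[0] == "mov":
            dst, src = [a.strip() for a in parts[1].split(",")]
            if src in thresholds:
                scaled = int(round(thresholds[src] * 1000))  # 0.05 -> 50
                out.append(f"amov {dst}, {src}, {scaled}")
                continue
        out.append(line)  # non-mov lines and exact movs pass through
    return out

code = ["mov eax, a", "mov ebx, b", "add eax, ebx"]
print(rewrite(code, {"a": 0.05, "b": 0.05}))
# ['amov eax, a, 50', 'amov ebx, b, 50', 'add eax, ebx']
```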
(29) The data compression process by the approximate data compression logic 406 is exemplarily outlined herein. The first step is to truncate the integer or floating point data based on the error tolerance. Herein, the error tolerance is defined as the maximum relative error that a data value can accept. As shown above, Equation 1 shows the definition of error tolerance, where ã is approximated a and E.sub.r is relative error.
(30) Equations 11 and 12 show the representation of single precision floating point value based in IEEE 754 standard (IEEE Standards Committee et al., 754-2008 IEEE Standard for Floating-Point Arithmetic, IEEE Computer Society Standard 2008:517, 2008).
value=(−1).sup.sign×2.sup.exponent−127×(1+mantissa)  (11)
mantissa=Σ.sub.k=1.sup.23b.sub.k×2.sup.−k (b.sub.k=0 or 1)  (12)
(32) Based on Equations 11 and 12, the mantissa always starts with one. According to the IEEE 754 standard, when a data point is represented in the floating point format, this first bit of the mantissa is omitted. We observe that when c bits (of the 23-bit mantissa) are protected, the maximum relative error on this floating point data value will be Σ.sub.k=c+1.sup.23 2.sup.−k, which is bounded by 2.sup.−c according to the sum of the geometric sequence (Σ.sub.k=1.sup.n a×r.sup.k−1=a(1−r.sup.n)/(1−r), where a is the first term, n is the number of terms, and r is the common ratio in the sequence). Therefore, using Equation 12, the following expression for the data error tolerance can be deduced, as shown in Equation 13:
error tolerance=2.sup.−n (1≤n≤23)  (13)
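The geometric-series bound used to reach Equation 13 can be written out explicitly (a worked step, not in the original): with first term 2.sup.−(c+1), ratio 1/2, and 23−c terms,

```latex
\sum_{k=c+1}^{23} 2^{-k}
  \;=\; 2^{-(c+1)} \cdot \frac{1 - (1/2)^{23-c}}{1 - 1/2}
  \;=\; 2^{-c}\!\left(1 - 2^{-(23-c)}\right)
  \;<\; 2^{-c}.
```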
(33) In Equation 13 above, the data error tolerance is a number between 0 and 1, and n is the number of most significant bits (MSBs) in the mantissa of this floating point value. In a floating point data value, the 1-bit sign and the 8-bit exponent (a total of 9 bits) are also critical bits, which must be transmitted. Thus, by truncating 23−n bits, we can ensure the value's relative error is less than 2.sup.−n. For example, to satisfy a data error tolerance of 10 percent for any floating point value, we can truncate 19 least significant bits (LSBs), resulting in a maximum relative error of 6.25 percent (2.sup.−4). Equation 14 shows the representation of a signed integer. In a signed integer, the MSB represents the sign, and the remaining 31 bits represent the value:
value=(−1).sup.b.sub.31×Σ.sub.k=0.sup.30b.sub.k×2.sup.k  (14)
(35) We observe that when n bits (of the 31 LSBs) are truncated, the maximum error caused by truncation will be Σ.sub.k=0.sup.n−1X.sub.k×2.sup.k (X.sub.k=0 or 1), which is at most 2.sup.n−1. Thus, Equation 15 below can be used to calculate the number of bits (n) to be truncated for a given error tolerance:
n=⌊log.sub.2(error tolerance×|value|+1)⌋  (15)
(37) With this data truncation method, an integer with a small absolute value requires a larger number of MSBs to be transmitted than an integer with a large absolute value requires to achieve the same error threshold. For example, for an integer value of 100, 29 MSBs need to be transmitted to ensure 5 percent error tolerance. On the other hand, for an integer value of 5,418, only 28 MSBs need to be transmitted to achieve the same data error tolerance (5 percent). To overcome this problem, we compress the data using the frequent data pattern compression method. In previous research, the frequent data pattern compression mechanism (Table 1) has been proposed and extended to NoCs with a low-overhead compression and decompression mechanism.
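The two truncation schemes described above can be modeled in software. This is an illustrative sketch, not the patented hardware: it keeps the sign, exponent, and n mantissa MSBs of an IEEE 754 single, and clears the largest count of integer LSBs whose worst-case error 2.sup.n−1 fits the tolerance. The exact kept-bit counts depend on the counting convention, so they may differ from the example figures in the text; negative integers are not handled in this sketch.

```python
import struct

def truncate_float(x, n):
    """Keep the sign, 8-bit exponent, and n most significant mantissa bits
    of an IEEE 754 single, zeroing the other 23 - n mantissa bits.
    The relative error stays below 2**-n."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    masked = bits & (0xFFFFFFFF << (23 - n))
    return struct.unpack("<f", struct.pack("<I", masked))[0]

def truncatable_bits(value, tolerance):
    """Largest n such that the worst-case truncation error (2**n - 1)
    stays within the value's error budget."""
    n = 0
    while 2 ** (n + 1) - 1 <= abs(value) * tolerance and n < 31:
        n += 1
    return n

def truncate_int(value, n):
    return value & ~((1 << n) - 1)  # clear the n least significant bits

x = 3.14159
assert abs(truncate_float(x, 8) - x) / x < 2 ** -8

# Small magnitudes permit fewer truncated bits than large ones (5 percent here):
print(truncatable_bits(100, 0.05), truncate_int(100, 2))    # 2 100
print(truncatable_bits(5418, 0.05), truncate_int(5418, 8))  # 8 5376
```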
(38) TABLE 1. Frequent Pattern Encoding

Code | Pattern Encoded                          | Data Size After Encoding
000  | Zero run                                 | 3 bits
001  | 4-bit sign-extended                      | 4 bits
010  | 1-byte sign-extended                     | 8 bits
011  | Halfword sign-extended                   | 16 bits
100  | Halfword padded with a zero halfword     | 16 bits
101  | Two halfwords, each 1-byte sign-extended | 16 bits
111  | Uncompressed word                        | 32 bits
(39) That mechanism is adopted in certain aspects of the present disclosure to compress approximated data. The essence of the frequent data pattern compression method is to eliminate zeros and ones in the MSBs and LSBs for both integer and floating point values without affecting the accuracy of the value. We develop a frequent pattern replacement table based on Table 1. Table 2 shows the static set of frequent patterns and the codes. In this table, the notation X represents a bit that can be either 0 or 1. 0xff and 0x00 are two hexadecimal numbers, which represent eight 1-bits and eight 0-bits, respectively. The data compressor checks every piece of data and attempts to match its pattern. If the pattern matches, the data compressor will replace the pattern with the corresponding code. The 0 or 1 represented by X will not be changed during the compression process.
(40) TABLE 2. Frequent Pattern Replacement

Code | Frequent Pattern
000  | 0x00 0x00 0x00 0x00
001  | 0x00 0x00 0x00 00000XXX or 0xff 0xff 0xff 11111XXX
010  | 0x00 0x00 0x00 0XXXXXXX or 0xff 0xff 0xff 1XXXXXXX
011  | 0x00 0x00 0XXXXXXX XXXXXXXX or 0xff 0xff 1XXXXXXX XXXXXXXX
100  | XXXXXXXX XXXXXXXX 0x00 0x00
101  | 0x00 0XXXXXXX 0x00 0XXXXXXX or 0xff 1XXXXXXX 0xff 1XXXXXXX
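A simplified matcher in the spirit of Tables 1 and 2 can be sketched as follows. The codes and encoded sizes follow Table 1, but the matching logic is an illustration, not the patented circuit: a 32-bit word matches a sign-extended pattern exactly when it round-trips through a k-bit two's-complement value.

```python
def sign_extends_from(word, k):
    """True if the 32-bit word equals its low k bits sign-extended to 32."""
    v = word & 0xFFFFFFFF
    low = v & ((1 << k) - 1)
    signed = low - (1 << k) if low & (1 << (k - 1)) else low
    return (signed & 0xFFFFFFFF) == v

def encode(word):
    """Return (code, encoded size in bits) for one 32-bit word."""
    if word == 0:
        return "000", 3        # zero word (a single step of a zero run)
    if sign_extends_from(word, 4):
        return "001", 4        # 4-bit sign-extended
    if sign_extends_from(word, 8):
        return "010", 8        # 1-byte sign-extended
    if sign_extends_from(word, 16):
        return "011", 16       # halfword sign-extended
    if word & 0xFFFF == 0:
        return "100", 16       # halfword padded with a zero halfword
    return "111", 32           # uncompressed word

print(encode(0x00000005))  # ('001', 4)
print(encode(0xFFFFFF80))  # ('010', 8)
print(encode(0x12340000))  # ('100', 16)
```

The "101" two-halfword pattern is omitted here for brevity; it would check each halfword independently for 1-byte sign extension.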
(41) When a cache miss is caused by approximate load operation, a read request is issued by the private cache 404 with the approximation information. Then, the read request is sent to the packet encoder 410 to generate a read request packet. The read request packet is injected into the network through the network interface 432, where it travels through multiple routers in the on-chip network. Upon reaching the router 418 of the shared cache/shared memory node 420, the read request packet passes through the network interface 434, a packet decoder 428, and a quality control table 424. When the shared cache or shared memory node 420 receives the read request, the approximation information is extracted from the packet and inserted into the quality control table 424. When the read reply is generated by the shared cache or shared memory 420, the approximate data compression logic 426 reads the approximation information in the quality control table and truncates the data in accordance with the approximation information. Then, the packet encoder 430 prepares the read reply packet and sends it to the router 418. The read reply packet is decoded at the packet decoder 412 and then arrives at the core 402 and private cache node 402, where the data decompression module 408 recovers the data.
(42) The baseline system is a multi-core processor, as exemplarily shown in the accompanying drawings.
(50) In the network interface of the shared cache or shared memory node, the data approximation logic module 426 receives a read packet (A) containing an address in the quality control table 424 from which to acquire the approximation information for the data. If the address matches an entry in the quality control table 424, then the table sends a read reply packet with the corresponding approximation information, which includes the address, data type, and error tolerance (B). Then, the corresponding entry is deleted from the table after the reply packet (B) is sent. Otherwise, a negative signal (C) is sent to the data approximation logic module 426 to indicate that these data require accurate transmission. When a read request packet (D) arrives at the quality control table, the table first checks whether it contains approximation information. If so, the table extracts and registers the approximation information. If the requested data contains multiple data types, multiple entries in the quality control table 424 are occupied to store approximation information. Then, the quality control table forwards the read request (E), which now contains only the address, to the shared cache or shared memory.
The foregoing description and drawings should be considered as illustrative only of the principles of the disclosure. The disclosure is not intended to be limited by the preferred embodiment and may be implemented in a variety of ways that will be clear to one of ordinary skill in the art. Numerous applications of the disclosure will readily occur to those skilled in the art. Therefore, it is not desired to limit the disclosure to the specific examples disclosed or the exact construction and operation shown and described. Rather, all suitable modifications and equivalents may be resorted to, falling within the scope of the disclosure. All references cited herein are incorporated by reference in their entireties.