FLOATING-POINT NUMBER MULTIPLICATION COMPUTATION METHOD AND APPARATUS, AND ARITHMETIC LOGIC UNIT

20220334798 · 2022-10-20

Abstract

This application discloses a floating-point number multiplication computation method, an apparatus, and an arithmetic logic unit. The method includes: obtaining a plurality of to-be-computed first-precision floating-point numbers; decomposing each to-be-computed first-precision floating-point number to obtain at least two second-precision floating-point numbers, a second precision of the second-precision floating-point number being lower than a first precision of the first-precision floating-point number; determining various combinations including two second-precision floating-point numbers obtained by decomposing different first-precision floating-point numbers; inputting the second-precision floating-point numbers in each combination into a second-precision multiplier to obtain an intermediate computation result corresponding to each combination; and determining a computation result for the plurality of to-be-computed first-precision floating-point numbers based on the intermediate computation result corresponding to each combination.

Claims

1. An arithmetic logic unit in a processor, the arithmetic logic unit comprising: a floating-point number decomposition circuit configured to: decompose each input to-be-computed first-precision floating-point number into at least two second-precision floating-point numbers, a second precision of the second-precision floating-point number being lower than a first precision of the first-precision floating-point number; a second-precision multiplier configured to: receive a combination comprising two second-precision floating-point numbers obtained by decomposing different first-precision floating-point numbers; perform a multiplication operation on the second-precision floating-point numbers in each combination; and output an intermediate computation result corresponding to each combination; and an accumulator configured to: perform an operation to obtain a computation result for the plurality of to-be-computed first-precision floating-point numbers based on the intermediate computation result corresponding to each combination.

2. The arithmetic logic unit according to claim 1, wherein the operation is a summation operation.

3. The arithmetic logic unit according to claim 1, wherein the arithmetic logic unit further comprises an exponent adjustment circuit; the floating-point number decomposition circuit is further configured to: output, to the exponent adjustment circuit, an exponent bias value corresponding to each second-precision floating-point number; the second-precision multiplier is further configured to: output, to the exponent adjustment circuit, the intermediate computation result corresponding to each combination; and the exponent adjustment circuit is configured to: adjust, based on the exponent bias value corresponding to the second-precision floating-point number in each input combination, an exponent of the intermediate computation result corresponding to each input combination; and output an adjusted intermediate computation result to the accumulator.

4. The arithmetic logic unit according to claim 3, wherein the exponent adjustment circuit is configured to: add the exponent bias value corresponding to the second-precision floating-point number in each input combination and the exponent of the intermediate computation result corresponding to each input combination; and output the adjusted intermediate computation result to the accumulator.

5. The arithmetic logic unit according to claim 3, wherein the intermediate computation result is a first-precision intermediate computation result, and the computation result is a first-precision computation result.

6. The arithmetic logic unit according to claim 5, wherein the first-precision floating-point number is a single-precision floating-point number, the second-precision floating-point number is a half-precision floating-point number, the first-precision intermediate computation result is a single-precision intermediate computation result, the first-precision computation result is a single-precision computation result, and the second-precision multiplier is a half-precision multiplier; or the first-precision floating-point number is a double-precision floating-point number, the second-precision floating-point number is a single-precision floating-point number, the first-precision intermediate computation result is a double-precision intermediate computation result, the first-precision computation result is a double-precision computation result, and the second-precision multiplier is a single-precision multiplier.

7. The arithmetic logic unit according to claim 3, wherein the arithmetic logic unit further comprises a format conversion circuit; the second-precision multiplier is configured to: perform a multiplication operation on the second-precision floating-point numbers in each combination; and output, to the format conversion circuit, a first-precision intermediate computation result corresponding to each combination; the format conversion circuit is configured to: perform format conversion on each input first-precision intermediate computation result; and output, to the exponent adjustment circuit, a third-precision intermediate computation result corresponding to each combination, wherein precision of the third-precision intermediate computation result is higher than precision of the first-precision intermediate computation result; the exponent adjustment circuit is configured to: adjust, based on the exponent bias value corresponding to the second-precision floating-point number in each input combination, an exponent of the third-precision intermediate computation result corresponding to each input combination; and output an adjusted third-precision intermediate computation result to the accumulator; and the accumulator is configured to: perform a summation operation on the adjusted third-precision intermediate computation results corresponding to all the input combinations; and output a third-precision computation result for the plurality of first-precision floating-point numbers.

8. The arithmetic logic unit according to claim 7, wherein the format conversion circuit is configured to: perform zero padding processing on an exponent and a mantissa of each input first-precision intermediate computation result; and output, to the exponent adjustment circuit, the third-precision intermediate computation result corresponding to each combination.

9. The arithmetic logic unit according to claim 7, wherein: the first-precision floating-point number is a single-precision floating-point number; the second-precision floating-point number is a half-precision floating-point number; the first-precision intermediate computation result is a single-precision intermediate computation result; the third-precision intermediate computation result is a double-precision intermediate computation result; the third-precision computation result is a double-precision computation result; and the second-precision multiplier is a half-precision multiplier.

10. The arithmetic logic unit according to claim 3, wherein the arithmetic logic unit further comprises a format conversion circuit; the second-precision multiplier is configured to: perform a multiplication operation on the second-precision floating-point numbers in each combination; and output, to the format conversion circuit, a third-precision intermediate computation result corresponding to each combination; the format conversion circuit is configured to: perform format conversion on each input third-precision intermediate computation result; and output, to the exponent adjustment circuit, a first-precision intermediate computation result corresponding to each combination; the exponent adjustment circuit is configured to: adjust, based on the exponent bias value corresponding to the second-precision floating-point number in each input combination, an exponent of the first-precision intermediate computation result corresponding to each input combination; and output an adjusted first-precision intermediate computation result to the accumulator; and the accumulator is configured to: perform a summation operation on the adjusted first-precision intermediate computation results corresponding to all the input combinations; and output a first-precision computation result for the plurality of first-precision floating-point numbers.

11. The arithmetic logic unit according to claim 10, wherein: the first-precision floating-point number is a double-precision floating-point number; the second-precision floating-point number is a half-precision floating-point number; the third-precision intermediate computation result is a single-precision intermediate computation result; the first-precision intermediate computation result is a double-precision intermediate computation result; the first-precision computation result is a double-precision computation result; and the second-precision multiplier is a half-precision multiplier.

12. The arithmetic logic unit according to claim 1, wherein the arithmetic logic unit further comprises a computation mode switching circuit; the computation mode switching circuit is configured to: when the computation mode switching circuit is set to a second-precision floating-point number computation mode, set the floating-point number decomposition circuit and the exponent adjustment circuit to be invalid; the second-precision multiplier is configured to: receive a plurality of groups of to-be-computed second-precision floating-point numbers that are input from the outside of the arithmetic logic unit; perform a multiplication operation on each group of second-precision floating-point numbers; and output an intermediate computation result corresponding to each group of to-be-computed second-precision floating-point numbers; and the accumulator is configured to: perform a summation operation on the intermediate computation results corresponding to all the input groups of to-be-computed second-precision floating-point numbers; and output a computation result for the plurality of groups of to-be-computed second-precision floating-point numbers.

13. A floating-point number multiplication computation method, the method comprising: obtaining a plurality of to-be-computed first-precision floating-point numbers; decomposing each to-be-computed first-precision floating-point number to obtain at least two second-precision floating-point numbers, a second precision of the second-precision floating-point number being lower than a first precision of the first-precision floating-point number; determining various combinations comprising two second-precision floating-point numbers obtained by decomposing different first-precision floating-point numbers; inputting the second-precision floating-point numbers in each combination into a second-precision multiplier to obtain an intermediate computation result corresponding to each combination; and determining a computation result for the plurality of to-be-computed first-precision floating-point numbers based on the intermediate computation result corresponding to each combination.

14. The method according to claim 13, after the decomposing each to-be-computed first-precision floating-point number to obtain at least two second-precision floating-point numbers, further comprising: determining an exponent bias value corresponding to each second-precision floating-point number; and the determining a computation result for the plurality of to-be-computed first-precision floating-point numbers based on the intermediate computation result corresponding to each combination comprises: adjusting, based on the exponent bias value corresponding to the second-precision floating-point number in each combination, an exponent of the intermediate computation result corresponding to each combination, to obtain an adjusted intermediate computation result; and performing a summation operation on the adjusted intermediate computation results corresponding to all the combinations to obtain the computation result for the plurality of first-precision floating-point numbers.

15. The method according to claim 14, wherein the adjusting, based on the exponent bias value corresponding to the second-precision floating-point number in each combination, an exponent of the intermediate computation result corresponding to each combination, to obtain an adjusted intermediate computation result comprises: adding the exponent of the intermediate computation result corresponding to each combination of second-precision floating-point numbers and the exponent bias value corresponding to the second-precision floating-point number in each combination, to obtain the adjusted intermediate computation result.

16. The method according to claim 13, wherein the intermediate computation result is a first-precision intermediate computation result, and the computation result is a first-precision computation result.

17. The method according to claim 14, wherein: the inputting the second-precision floating-point numbers in each combination into a second-precision multiplier to obtain an intermediate computation result corresponding to each combination comprises: inputting the second-precision floating-point numbers in each combination into the second-precision multiplier to obtain a first-precision intermediate computation result corresponding to each combination, and performing format conversion on each first-precision intermediate computation result to obtain a third-precision intermediate computation result corresponding to each combination, wherein precision of the third-precision intermediate computation result is higher than precision of the first-precision intermediate computation result; the adjusting, based on the exponent bias value corresponding to the second-precision floating-point number in each combination, an exponent of the intermediate computation result corresponding to each combination, to obtain an adjusted intermediate computation result comprises: adjusting, based on the exponent bias value corresponding to the second-precision floating-point number in each combination, an exponent of the third-precision intermediate computation result corresponding to each combination, to obtain an adjusted third-precision intermediate computation result; and the performing the summation operation on the adjusted intermediate computation results corresponding to all the combinations to obtain the computation result for the plurality of first-precision floating-point numbers comprises: performing the summation operation on the adjusted third-precision intermediate computation results corresponding to all the combinations to obtain a third-precision computation result for the plurality of first-precision floating-point numbers.

18. The method according to claim 17, wherein the performing the format conversion on each first-precision intermediate computation result to obtain the third-precision intermediate computation result corresponding to each combination comprises: performing zero padding processing on an exponent and a mantissa of each first-precision intermediate computation result to obtain the third-precision intermediate computation result corresponding to each combination of second-precision floating-point numbers.

19. An electronic device, the electronic device comprising: a memory storing instructions; and at least one processor in communication with the memory, the at least one processor configured, upon execution of the instructions, to perform the following steps: obtaining a plurality of to-be-computed first-precision floating-point numbers; decomposing each to-be-computed first-precision floating-point number to obtain at least two second-precision floating-point numbers, a second precision of the second-precision floating-point number being lower than a first precision of the first-precision floating-point number; determining various combinations comprising two second-precision floating-point numbers obtained by decomposing different first-precision floating-point numbers; inputting the second-precision floating-point numbers in each combination into a second-precision multiplier to obtain an intermediate computation result corresponding to each combination; and determining a computation result for the plurality of to-be-computed first-precision floating-point numbers based on the intermediate computation result corresponding to each combination.

20. The electronic device of claim 19, wherein the processor comprises an arithmetic logic unit.

Description

BRIEF DESCRIPTION OF DRAWINGS

[0115] FIG. 1 is a flowchart of a floating-point number multiplication computation method according to an embodiment of this application;

[0116] FIG. 2 is a diagram of composition of a floating-point number according to an embodiment of this application;

[0117] FIG. 3 is a diagram of composition of a floating-point number according to an embodiment of this application;

[0118] FIG. 4 is a diagram of composition of a floating-point number according to an embodiment of this application;

[0119] FIG. 5 is a diagram of inputting a second-precision floating-point number into a second-precision multiplier according to an embodiment of this application;

[0120] FIG. 6 is a diagram of inputting a second-precision floating-point number into a second-precision multiplier according to an embodiment of this application;

[0121] FIG. 7 is a diagram of a floating-point number multiplication computation apparatus according to an embodiment of this application;

[0122] FIG. 8 is a diagram of an electronic device according to an embodiment of this application;

[0123] FIG. 9 is a flowchart of a floating-point number multiplication computation method according to an embodiment of this application;

[0124] FIG. 10 is a flowchart of a floating-point number multiplication computation method according to an embodiment of this application;

[0125] FIG. 11 is a flowchart of a floating-point number multiplication computation method according to an embodiment of this application;

[0126] FIG. 12 is a diagram of an arithmetic logic unit according to an embodiment of this application;

[0127] FIG. 13 is a diagram of an arithmetic logic unit according to an embodiment of this application; and

[0128] FIG. 14 is a diagram of an arithmetic logic unit according to an embodiment of this application.

DESCRIPTION OF EMBODIMENTS

[0129] Embodiments of this application provide a floating-point number multiplication computation method. The method may be implemented by an electronic device, and the electronic device may be any device that needs to perform floating-point number computation. For example, the electronic device may be a mobile terminal such as a mobile phone or a tablet computer, a computer device such as a desktop computer or a notebook computer, or a server. Floating-point number computation arises in many fields such as graphics processing, astronomy, and medicine, and in all of these fields the method provided in the embodiments of this application can be used when the foregoing type of electronic device performs floating-point number computation. A high-precision floating-point number is decomposed into low-precision floating-point numbers, and a low-precision multiplier then computes on the obtained low-precision floating-point numbers, to finally obtain a high-precision computation result. Computation that can only be completed by a high-precision multiplier in the related technology can thus be completed by a low-precision multiplier without loss of precision.

[0130] Referring to FIG. 1, an embodiment of this application provides a floating-point number multiplication computation method. A processing procedure of the method may include the following steps.

[0131] Step 101: Obtain a plurality of to-be-computed first-precision floating-point numbers.

[0132] The plurality of to-be-computed first-precision floating-point numbers may be a group of first-precision floating-point numbers on which a multiplication operation needs to be performed. “A plurality” means two or more than two; in this embodiment of this application, the case in which the plurality is two is used for description.

[0133] In implementation, a processor in a computer device may obtain the plurality of to-be-computed first-precision floating-point numbers. The first-precision floating-point number may be a single-precision floating-point number, a double-precision floating-point number, or the like.

[0134] Step 102: Decompose each to-be-computed first-precision floating-point number to obtain at least two second-precision floating-point numbers, where precision of the second-precision floating-point number is lower than precision of the first-precision floating-point number.

[0135] In implementation, each to-be-computed first-precision floating-point number may be decomposed to obtain a plurality of second-precision floating-point numbers, and precision of the second-precision floating-point number is lower than precision of the first-precision floating-point number. There are a plurality of possible cases for the first-precision floating-point number and the second-precision floating-point number, and several cases are enumerated below: The first-precision floating-point number may be a single-precision floating-point number (FP32), and the second-precision floating-point number may be a half-precision floating-point number (FP16). Alternatively, the first-precision floating-point number may be a double-precision floating-point number (FP64), and the second-precision floating-point number may be an FP32 or may be an FP16. The foregoing several cases are separately described below.

[0136] Case 1: In a case in which the first-precision floating-point number is an FP32 and the second-precision floating-point number is an FP16, that the FP32 is decomposed to obtain a plurality of FP16s may have the following cases.

[0137] I. One FP32 is decomposed to obtain three FP16s.

[0138] Currently, composition of an FP32 in a standard format is shown in FIG. 2, and includes a 1-bit sign, an 8-bit exponent, and a 23-bit mantissa. In addition, there is an omitted 1-bit integer, and the omitted integer is 1. For an FP32 in a standard format, there are totally 24 bits when the integer and the mantissa are added. Composition of an FP16 in a standard format is shown in FIG. 3, and includes a 1-bit sign, a 5-bit exponent, and a 10-bit mantissa. In addition, there is an omitted 1-bit integer, and the omitted integer is 1. For an FP16 in a standard format, there are totally 11 bits when the integer and the mantissa are added. Therefore, if an FP32 in a standard format needs to be decomposed into FP16s in a standard format, three FP16s in a standard format are required.
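For illustration only (not part of the claimed apparatus), the FP32 field layout described above can be inspected in software. The following Python sketch unpacks the 1-bit sign, 8-bit biased exponent, and 23-bit mantissa fields of a single-precision value:

```python
import struct

def fp32_fields(x):
    """Unpack an IEEE-754 single-precision value into the sign,
    exponent, and mantissa fields described above."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31                 # 1-bit sign
    exponent = (bits >> 23) & 0xFF    # 8-bit biased exponent
    mantissa = bits & 0x7FFFFF        # 23-bit mantissa (implicit leading 1 omitted)
    return sign, exponent, mantissa

# 1.0 is stored as sign 0, biased exponent 127 (i.e. 2^0), mantissa 0;
# -0.5 as sign 1, biased exponent 126 (i.e. 2^-1), mantissa 0.
print(fp32_fields(1.0))   # (0, 127, 0)
print(fp32_fields(-0.5))  # (1, 126, 0)
```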

[0139] The integer and the mantissa of the FP32 in a standard format may be divided into three parts. A first part includes the integer and the first 10 bits of the mantissa, a second part includes the 11th bit to the 21st bit of the mantissa, and a third part includes the 22nd bit and the 23rd bit of the mantissa. The three parts each are represented by an FP16 in a standard format. It should be noted herein that when the 22nd bit and the 23rd bit of the mantissa in the third part are represented by an FP16 in a standard format, nine zeros may be first padded after the 23rd bit of the mantissa, that is, the 22nd bit and the 23rd bit of the mantissa and the padded zeros are represented by an FP16 in a standard format.

[0140] In addition, an exponent range of the FP16 is from −15 to 15, which indicates that the decimal point can move up to 15 bits to the left or 15 bits to the right. When the FP16 in a standard format is used to represent the first part of the FP32, a fixed exponent bias value is 0; when the FP16 in a standard format is used to represent the second part of the FP32, a fixed exponent bias value is −11; and when the FP16 in a standard format is used to represent the third part of the FP32, a fixed exponent bias value is −22. It may be learned that only when the third part is represented does the corresponding fixed exponent bias value exceed the exponent range of the FP16. Therefore, a corresponding fixed exponent bias value may be extracted for the exponent of each FP16 in a standard format.

[0141] Therefore, an FP32 in a standard format may be represented as follows:

[0142] A1 = 2^(EA1) × (a0 + 2^(−S1) × a1 + 2^(−2S1) × a2), where A1 is the FP32 in a standard format; EA1 is the exponent of A1; a0, a1, and a2 are the three FP16s in a standard format that are obtained through decomposition; and S1 is the smallest nonzero fixed exponent bias magnitude. For the FP16 in a standard format, S1 = 11.

[0143] In addition, a common exponent bias value may be extracted for exponents of all the FP16s in a standard format. Therefore, an FP32 in a standard format may alternatively be represented as follows:

[0144] A1 = 2^(EA1 − S1) × (a0′ + a1′ + a2′), where a0′, a1′, and a2′ are three FP16s in a standard format that are obtained through decomposition. In the foregoing two representation methods, the FP16s obtained through decomposition have the following relationships: a0 = 2^(−S1) × a0′, a1 = a1′, and a2 = 2^(S1) × a2′.
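As a numerical illustration of the three-part decomposition (a sketch only: it uses a rounding cascade rather than the exact bit partition above, and it assumes a well-scaled input whose components stay within FP16's exponent range, which is the situation the fixed exponent bias values are designed to guarantee in general):

```python
import numpy as np

def split_fp32_to_3xfp16(x):
    """Rounding-cascade sketch of the three-part decomposition: each
    FP16 captures roughly the next 11 significand bits of the
    remainder, so the three parts reconstruct the FP32 exactly."""
    x = np.float32(x)
    a0 = np.float16(x)                    # first part
    r1 = np.float32(x) - np.float32(a0)   # exact remainder in FP32
    a1 = np.float16(r1)                   # second part
    a2 = np.float16(r1 - np.float32(a1))  # third part
    return a0, a1, a2

x = np.float32(1.2345678)
a0, a1, a2 = split_fp32_to_3xfp16(x)
# The three FP16 components sum back to the original FP32 exactly.
print(float(a0) + float(a1) + float(a2) == float(x))  # True
```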

[0145] II. One FP32 is decomposed to obtain two FP16s.

[0146] To decrease a quantity of FP16s obtained through decomposition, a current FP16 in a standard format may be adjusted. A mantissa of the FP16 is adjusted to 13 bits, and bit quantities of a sign and an exponent remain unchanged. An adjusted FP16 may be referred to as an FP16 in a non-standard format. In this case, there are totally 14 bits when an integer and a mantissa of the FP16 in a non-standard format are added. Therefore, if a mantissa of an FP32 in a standard format needs to be represented by using an FP16 in a non-standard format, only two FP16s in a non-standard format are required.

[0147] An integer and a mantissa of an FP32 in a standard format are divided into two parts. A first part includes the integer and the first 13 bits of the mantissa, and a second part includes the 14th bit to the 23rd bit of the mantissa. The two parts each are represented by an FP16 in a non-standard format.

[0148] It should be further noted herein that when the second part is represented by an FP16 in a non-standard format, four zeros may be first padded after the 23rd bit of the mantissa, that is, the 14th bit to the 23rd bit of the mantissa and the padded zeros are represented by an FP16 in a non-standard format. The same as in Case 1, a corresponding fixed exponent bias value may also be extracted for each FP16 in a non-standard format.

[0149] Therefore, an FP32 in a standard format may alternatively be represented as follows:

[0150] A2 = 2^(EA2) × (a3 + 2^(−S2) × a4), where A2 is the FP32 in a standard format; EA2 is the exponent of A2; a3 and a4 are the two FP16s in a non-standard format that are obtained through decomposition; and S2 is the fixed exponent bias value. For the FP16 in a non-standard format, S2 = 14.

[0151] In addition, a common exponent bias value may be extracted for the exponents of all the FP16s in a non-standard format. Therefore, an FP32 in a standard format may alternatively be represented as follows:

[0152] A2 = 2^(EA2 − S2) × (a3′ + a4′), where a3′ and a4′ are two FP16s in a non-standard format that are obtained through decomposition. In the foregoing two representation methods, the FP16s obtained through decomposition have the following relationships: a3 = 2^(−S2) × a3′, and a4 = a4′.
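The 14-significand-bit split above can be emulated in software by masking off the low 10 mantissa bits of the FP32. This is a sketch only: the "non-standard FP16" container is not a type available in Python, so both parts are simply carried in ordinary floats here, with only their significand widths matching the description:

```python
import struct

def split_fp32_two_parts(x):
    """Two-way split of an FP32: the first part keeps the integer bit
    plus the first 13 mantissa bits (14 significand bits in total), and
    the second part is the remaining 10 mantissa bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    hi_bits = bits & ~0x3FF          # zero the low 10 mantissa bits
    hi = struct.unpack(">f", struct.pack(">I", hi_bits))[0]
    lo = x - hi                      # exact: at most 10 significand bits remain
    return hi, lo

x = struct.unpack(">f", struct.pack(">f", 3.3333333))[0]  # an exact FP32 value
hi, lo = split_fp32_two_parts(x)
print(hi + lo == x)  # True
```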

[0153] Case 2: In a case in which the first-precision floating-point number is an FP64 and the second-precision floating-point number is an FP32, that the FP64 is decomposed to obtain a plurality of FP32s may have the following cases.

[0154] I. One FP64 is decomposed to obtain three FP32s.

[0155] Currently, composition of an FP64 in a standard format is shown in FIG. 4, and includes a 1-bit sign, an 11-bit exponent, and a 52-bit mantissa. In addition, there is an omitted 1-bit integer, and the omitted integer is 1. For an FP64 in a standard format, there are totally 53 bits when the integer and the mantissa are added. For the FP32 in a standard format described above, there are totally 24 bits when the integer and the mantissa are added. Therefore, if an FP64 in a standard format needs to be decomposed into FP32s in a standard format, three FP32s in a standard format are required.

[0156] The integer and the mantissa of the FP64 in a standard format may be divided into three parts. A first part includes the integer and the first 23 bits of the mantissa, a second part includes the 24th bit to the 47th bit of the mantissa, and a third part includes the 48th bit to the 52nd bit of the mantissa. The three parts each are represented by an FP32 in a standard format.

[0157] It should be further noted herein that when the 48th bit to the 52nd bit of the mantissa in the third part are represented by an FP32 in a standard format, 19 zeros may be first padded after the 52nd bit of the mantissa, that is, the 48th bit to the 52nd bit of the mantissa and the padded zeros are represented by an FP32 in a standard format.

[0158] Therefore, an FP64 in a standard format may be represented as follows:

[0159] A3 = 2^(EA3) × (a5 + a6 + a7), where A3 is the FP64 in a standard format; EA3 is the exponent of A3; and a5, a6, and a7 are the three FP32s in a standard format that are obtained through decomposition.
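This decomposition can be reproduced numerically with a rounding cascade, each FP32 absorbing roughly the next 24 significand bits of the remainder (a sketch of the principle, not of the claimed circuit; numpy's float32 stands in for the decomposition output):

```python
import numpy as np

def split_fp64_to_3xfp32(x):
    """Rounding-cascade sketch of the three-part FP64 decomposition:
    since 3 x 24 significand bits cover the 53 significand bits of an
    FP64, the three FP32 parts reconstruct the value exactly."""
    a5 = np.float32(x)
    a6 = np.float32(x - float(a5))
    a7 = np.float32(x - float(a5) - float(a6))
    return a5, a6, a7

v = 1.0 / 3.0  # an FP64 value that no single FP32 can represent
a5, a6, a7 = split_fp64_to_3xfp32(v)
print(float(a5) + float(a6) + float(a7) == v)  # True
```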

[0160] II. One FP64 is decomposed to obtain two FP32s.

[0161] To decrease a quantity of FP32s obtained through decomposition, a current FP32 in a standard format may be adjusted. A mantissa of the FP32 is adjusted to 26 bits, and bit quantities of a sign and an exponent remain unchanged. An adjusted FP32 may be referred to as an FP32 in a non-standard format. In this case, there are totally 27 bits when an integer and a mantissa of the FP32 in a non-standard format are added. Therefore, if a mantissa of an FP64 in a standard format needs to be represented by using an FP32 in a non-standard format, only two FP32s in a non-standard format are required.

[0162] An integer and a mantissa of the FP64 in a standard format are divided into two parts. A first part includes the integer and the first 26 bits of the mantissa, and a second part includes the 27th bit to the 52nd bit of the mantissa. The two parts each are represented by an FP32 in a non-standard format.

[0163] Therefore, an FP64 in a standard format may alternatively be represented as follows:

[0164] A4 = 2^(EA4) × (a8 + a9), where A4 is the FP64 in a standard format; EA4 is the exponent of A4; and a8 and a9 are the two FP32s in a non-standard format that are obtained through decomposition.
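A well-known software analogue of this two-part split is Dekker's splitting algorithm (a standard technique named here for context, not taken from this application): multiplying a double by 2^27 + 1 and subtracting yields a high part with at most 26 significand bits, matching the 26-bit mantissa of the non-standard FP32 described above, while the two parts still sum to the original value exactly:

```python
def dekker_split(x):
    """Dekker's splitting of a binary64 value: 'hi' keeps the upper
    ~26 significand bits of x and 'lo' the remainder, with
    hi + lo == x exactly (assuming the scaled intermediate does not
    overflow)."""
    c = 134217729.0 * x   # 134217729 == 2**27 + 1
    hi = c - (c - x)
    lo = x - hi
    return hi, lo

x = 1.0 / 3.0
hi, lo = dekker_split(x)
print(hi + lo == x)  # True
```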

[0165] Case 3: In a case in which the first-precision floating-point number is an FP64 and the second-precision floating-point number is an FP16, that the FP64 is decomposed to obtain a plurality of FP16s may have the following cases.

[0166] I. One FP64 is decomposed to obtain five FP16s.

[0167] For an FP64 in a standard format, there are totally 53 bits when the integer and the mantissa are added. For the FP16 in a standard format described above, there are totally 11 bits when the integer and the mantissa are added. Therefore, if an FP64 in a standard format needs to be decomposed into FP16s in a standard format, five FP16s in a standard format are required.

[0168] The integer and the mantissa of the FP64 in a standard format may be divided into five parts. A first part includes the integer and the first 10 bits of the mantissa, a second part includes the 11.sup.th bit to the 21.sup.st bit of the mantissa, a third part includes the 22.sup.nd bit to the 32.sup.nd bit of the mantissa, a fourth part includes the 33.sup.rd bit to the 43.sup.rd bit of the mantissa, and a fifth part includes the 44.sup.th bit to the 52.sup.nd bit of the mantissa. The five parts each are represented by an FP16 in a standard format. It should be further noted that when the 44.sup.th bit to the 52.sup.nd bit of the mantissa in the fifth part are represented by an FP16 in a standard format, two zeros may first be padded after the 52.sup.nd bit of the mantissa, that is, the 44.sup.th bit to the 52.sup.nd bit of the mantissa together with the padded zeros are represented by an FP16 in a standard format.

[0169] In addition, the exponent range of the FP16 is from −15 to 15, which indicates that the decimal point can move at most 15 bits to the left or 15 bits to the right. When the FP16 in a standard format is used to represent the first part of the FP64, a fixed exponent bias value is 0; for the second part, the fixed exponent bias value is −11; for the third part, −22; for the fourth part, −33; and for the fifth part, −44. It may be learned that when the third part, the fourth part, and the fifth part are represented, the corresponding fixed exponent bias values exceed the exponent range of the FP16. Therefore, a corresponding fixed exponent bias value may be extracted from the exponent of each FP16 in a standard format.

[0170] Therefore, an FP64 in a standard format may be represented as follows: A.sub.5=2.sup.EA.sup.5 (a.sub.10+2.sup.−S.sup.1a.sub.11+2.sup.−2S.sup.1a.sub.12+2.sup.−3S.sup.1a.sub.13+2.sup.−4S.sup.1a.sub.14), where A.sub.5 is the FP64 in a standard format; EA.sub.5 is an exponent of A.sub.5; a.sub.10, a.sub.11, a.sub.12, a.sub.13, and a.sub.14 are five FP16s in a standard format that are obtained through decomposition; and S.sub.1 is the smallest fixed exponent bias value. For the FP16 in a standard format, S.sub.1=11.
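The five-way split can be checked numerically. The Python sketch below is a hypothetical illustration (integer bit manipulation stands in for the decomposition circuit): the 53-bit significand is padded with two zeros and cut into five 11-bit groups, and each group is scaled back by its fixed exponent offset of 11 bits per step, recovering the original value exactly.

```python
import math

def fp16_style_chunks(x: float):
    # Split the 53-bit significand into five 11-bit groups; the fifth group
    # is the remaining 9 bits padded with two zeros, as described above.
    m, e = math.frexp(x)                 # x = m * 2**e with 0.5 <= |m| < 1
    sig = int(abs(m) * 2 ** 53)          # 53-bit integer significand
    sig <<= 2                            # pad two zero bits for the fifth part
    chunks = [(sig >> (55 - 11 * (k + 1))) & 0x7FF for k in range(5)]
    sign = -1.0 if x < 0 else 1.0
    return sign, e, chunks

def recombine(sign, e, chunks):
    # Each chunk carries the fixed exponent offset 11*(k+1), mirroring
    # A5 = 2**EA5 * (a10 + 2**-11*a11 + ... + 2**-44*a14).
    return sign * sum(c * 2.0 ** (e - 11 * (k + 1)) for k, c in enumerate(chunks))

x = 2.718281828459045
reconstructed = recombine(*fp16_style_chunks(x))
```

Every chunk times a power of two is exact in binary64, and the partial sums never need more than 53 significant bits, so the reconstruction is exact.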

[0171] II. One FP64 is decomposed to obtain four FP16s.

[0172] Similarly, the FP64 may be decomposed to obtain the foregoing FP16 in a non-standard format. If a mantissa of an FP64 in a standard format is represented by using an FP16 in a non-standard format, only four FP16s in a non-standard format are required.

[0173] An integer and the mantissa of the FP64 in a standard format are divided into four parts. A first part includes the integer and the first 13 bits of the mantissa, a second part includes the 14.sup.th bit to the 27.sup.th bit, a third part includes the 28.sup.th bit to the 41.sup.st bit, and a fourth part includes the 42.sup.nd bit to the 52.sup.nd bit.

[0174] It should be further noted that when the 42.sup.nd bit to the 52.sup.nd bit of the mantissa in the fourth part are represented by an FP16 in a non-standard format, three zeros may first be padded after the 52.sup.nd bit of the mantissa, that is, the 42.sup.nd bit to the 52.sup.nd bit of the mantissa together with the padded zeros are represented by an FP16 in a non-standard format. In addition, the exponent range of the FP16 is from −15 to 15, which indicates that the decimal point can move at most 15 bits to the left or 15 bits to the right. When the FP16 in a non-standard format is used to represent the first part of the FP64, a fixed exponent bias value is 0; for the second part, the fixed exponent bias value is −14; for the third part, −28; and for the fourth part, −42. It may be learned that when the third part and the fourth part are represented, the corresponding fixed exponent bias values exceed the exponent range of the FP16. Therefore, a corresponding fixed exponent bias value may be extracted from the exponent of each FP16 in a non-standard format.

[0175] Therefore, an FP64 in a standard format may alternatively be represented as follows:

[0176] A.sub.6=2.sup.EA.sup.6 (a.sub.15+2.sup.−S.sup.2a.sub.16+2.sup.−2S.sup.2a.sub.17+2.sup.−3S.sup.2a.sub.18), where A.sub.6 is the FP64 in a standard format; EA.sub.6 is an exponent of A.sub.6; a.sub.15, a.sub.16, a.sub.17, and a.sub.18 are four FP16s in a non-standard format that are obtained through decomposition; and S.sub.2 is the smallest fixed exponent bias value. For the FP16 in a non-standard format, S.sub.2=14.

[0177] Step 103: Determine various combinations including two second-precision floating-point numbers obtained by decomposing different first-precision floating-point numbers.

[0178] In implementation, every two of second-precision floating-point numbers obtained by decomposing different first-precision floating-point numbers are combined. An example in which two FP32s each are decomposed to obtain a plurality of FP16s, two FP64s each are decomposed to obtain a plurality of FP32s, and two FP64s each are decomposed to obtain a plurality of FP16s is used below for description.

[0179] Case 1: Two FP32s each are decomposed to obtain a plurality of FP16s.

[0180] I. Two FP32s in a standard format each are decomposed to obtain three FP16s in a standard format. The two FP32s are respectively A.sub.1 and B.sub.1, where A.sub.1 may be decomposed to obtain a.sub.0, a.sub.1, and a.sub.2; and B.sub.1 may be decomposed to obtain b.sub.0, b.sub.1, and b.sub.2. Therefore, there may be the following combinations between a.sub.0, a.sub.1, a.sub.2, b.sub.0, b.sub.1, and b.sub.2: a.sub.0b.sub.0, a.sub.0b.sub.1, a.sub.1b.sub.0, a.sub.0b.sub.2, a.sub.1b.sub.1, a.sub.2b.sub.0, a.sub.1b.sub.2, a.sub.2b.sub.1, and a.sub.2b.sub.2.

[0181] II. Two FP32s in a standard format each are decomposed to obtain two FP16s in a non-standard format. The two FP32s are respectively A.sub.2 and B.sub.2, where A.sub.2 may be decomposed to obtain a.sub.3 and a.sub.4; and B.sub.2 may be decomposed to obtain b.sub.3 and b.sub.4. Therefore, there may be the following combinations between a.sub.3, a.sub.4, b.sub.3, and b.sub.4: a.sub.3b.sub.3, a.sub.3b.sub.4, a.sub.4b.sub.3, and a.sub.4b.sub.4.

[0182] Case 2: Two FP64s each are decomposed to obtain a plurality of FP32s.

[0183] I. Two FP64s in a standard format each are decomposed to obtain three FP32s in a standard format. The two FP64s are respectively A.sub.3 and B.sub.3, where A.sub.3 may be decomposed to obtain a.sub.5, a.sub.6, and a.sub.7; and B.sub.3 may be decomposed to obtain b.sub.5, b.sub.6, and b.sub.7. Therefore, there may be the following combinations between a.sub.5, a.sub.6, a.sub.7, b.sub.5, b.sub.6, and b.sub.7: a.sub.5b.sub.5, a.sub.5b.sub.6, a.sub.6b.sub.5, a.sub.5b.sub.7, a.sub.6b.sub.6, a.sub.7b.sub.5, a.sub.6b.sub.7, a.sub.7b.sub.6, and a.sub.7b.sub.7.

[0184] II. Two FP64s in a standard format each are decomposed to obtain two FP32s in a non-standard format. The two FP64s are respectively A.sub.4 and B.sub.4, where A.sub.4 may be decomposed to obtain a.sub.8 and a.sub.9; and B.sub.4 may be decomposed to obtain b.sub.8 and b.sub.9. Therefore, there may be the following combinations between a.sub.8, a.sub.9, b.sub.8, and b.sub.9: a.sub.8b.sub.8, a.sub.8b.sub.9, a.sub.9b.sub.8, and a.sub.9b.sub.9.

[0185] Case 3: Two FP64s each are decomposed to obtain a plurality of FP16s.

[0186] I. Two FP64s in a standard format each are decomposed to obtain five FP16s in a standard format. The two FP64s are respectively A.sub.5 and B.sub.5, where A.sub.5 may be decomposed to obtain a.sub.10, a.sub.11, a.sub.12, a.sub.13, and a.sub.14; and B.sub.5 may be decomposed to obtain b.sub.10, b.sub.11, b.sub.12, b.sub.13, and b.sub.14. Therefore, there may be 25 combinations between a.sub.10, a.sub.11, a.sub.12, a.sub.13, a.sub.14, b.sub.10, b.sub.11, b.sub.12, b.sub.13, and b.sub.14, such as a.sub.10b.sub.10, a.sub.10b.sub.11, a.sub.11b.sub.10, a.sub.10b.sub.12, a.sub.11b.sub.11, a.sub.12b.sub.10, . . . , and a.sub.14b.sub.14. Combination manners herein are the same as those in the foregoing, and are not enumerated one by one herein.

[0187] II. Two FP64s in a standard format each are decomposed to obtain four FP16s in a non-standard format. The two FP64s are respectively A.sub.6 and B.sub.6, where A.sub.6 may be decomposed to obtain a.sub.15, a.sub.16, a.sub.17, and a.sub.18; and B.sub.6 may be decomposed to obtain b.sub.15, b.sub.16, b.sub.17, and b.sub.18. Therefore, there may be 16 combinations between a.sub.15, a.sub.16, a.sub.17, a.sub.18, b.sub.15, b.sub.16, b.sub.17, and b.sub.18, such as a.sub.15b.sub.15, a.sub.15b.sub.16, a.sub.16b.sub.15, . . . , and a.sub.18b.sub.18. Combination manners herein are the same as those in the foregoing, and are not enumerated one by one herein.
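The combination step amounts to a Cartesian product of the parts of the two inputs. A short Python sketch (itertools.product standing in for the combination logic; the part names are placeholders):

```python
from itertools import product

def part_combinations(a_parts, b_parts):
    # Step 103: pair every second-precision part of A with every
    # second-precision part of B, giving len(a_parts) * len(b_parts) pairs.
    return list(product(a_parts, b_parts))

combos = part_combinations(["a0", "a1", "a2"], ["b0", "b1", "b2"])
```

With three parts per input this yields the nine combinations of Case 1; five parts per input yield 25, and four parts yield 16, matching the counts above.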

[0188] Step 104: Input the second-precision floating-point numbers in each combination into a second-precision multiplier to obtain an intermediate computation result corresponding to each combination.

[0189] In implementation, each obtained combination is input into the second-precision multiplier for computation to obtain the intermediate computation result corresponding to the combination. For different second-precision floating-point numbers, the output intermediate computation results also have different precision. For example, if the second-precision floating-point number is an FP32, the intermediate computation result is an FP64; or if the second-precision floating-point number is an FP16, the intermediate computation result is an FP32. A quantity of second-precision multipliers may be the same as or may be different from a quantity of combinations of second-precision floating-point numbers.

[0190] When the quantity of second-precision multipliers is the same as the quantity of combinations of second-precision floating-point numbers, as shown in FIG. 5, two first-precision floating-point numbers A and B each are decomposed to obtain two second-precision floating-point numbers: A1 and A0, and B1 and B0. Four combinations may be obtained for A1, A0, B1, and B0, and there are four second-precision multipliers. Each combination of second-precision floating-point numbers is input into one second-precision multiplier, that is, each combination corresponds to one second-precision multiplier.

[0191] When the quantity of second-precision multipliers is different from the quantity of combinations of second-precision floating-point numbers, as shown in FIG. 6, two first-precision floating-point numbers A and B each are decomposed to obtain two second-precision floating-point numbers: A1 and A0, and B1 and B0. Four combinations may be obtained for A1, A0, B1, and B0, and there is only one second-precision multiplier. In this case, the four combinations of second-precision floating-point numbers are sequentially input into the second-precision multiplier.
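In software, the same idea of exact partial products underlies the classic Dekker two-product algorithm: when each factor is split into short-significand parts, every pairwise product is exact, so the four partial products determine the full product exactly. A minimal Python sketch (an analogue for illustration, not the patent's multiplier array):

```python
from fractions import Fraction

def split(x: float):
    # Veltkamp splitting with the 2**27 + 1 constant: hi and lo have
    # significands short enough that their pairwise products are exact.
    c = (2.0 ** 27 + 1.0) * x
    hi = c - (c - x)
    return hi, x - hi

def two_product(a: float, b: float):
    # Dekker's two-product: p is the rounded product and err is its exact
    # rounding error, recovered from the four exact partial products.
    p = a * b
    a1, a0 = split(a)
    b1, b0 = split(b)
    err = a0 * b0 - (((p - a1 * b1) - a0 * b1) - a1 * b0)
    return p, err

p, err = two_product(1.1, 2.3)
```

Absent overflow, p + err equals the mathematically exact product a×b, which can be verified with exact rational arithmetic.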

[0192] Step 105: Determine a computation result for the plurality of to-be-computed first-precision floating-point numbers based on the intermediate computation result corresponding to each combination.

[0193] In implementation, when the first-precision floating-point number is decomposed to obtain the second-precision floating-point number, an exponent bias value corresponding to each second-precision floating-point number may be further obtained. In several cases in which the to-be-computed first-precision floating-point number is decomposed in step 102, exponent bias values corresponding to the second-precision floating-point number are separately described below.

[0194] Case 1: The first-precision floating-point number is an FP32, and the second-precision floating-point number is an FP16.

[0195] I. One FP32 is decomposed to obtain three FP16s.

[0196] In this case, the FP32 may be represented as follows: A.sub.1=2.sup.EA.sup.1 (a.sub.0+2.sup.−S.sup.1a.sub.1+2.sup.−2S.sup.1a.sub.2). Therefore, an exponent bias value corresponding to a.sub.0 is EA.sub.1, an exponent bias value corresponding to a.sub.1 is EA.sub.1−S.sub.1, and an exponent bias value corresponding to a.sub.2 is EA.sub.1−2S.sub.1. Alternatively, the FP32 may be represented as follows: A.sub.1=2.sup.EA.sup.1.sup.−S.sup.1 (a.sub.0′+a.sub.1′+a.sub.2′). Therefore, exponent bias values corresponding to a.sub.0′, a.sub.1′, and a.sub.2′ each are EA.sub.1−S.sub.1.
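The per-part bias bookkeeping can be checked numerically. In the Python sketch below the values EA1, S1, and the parts are hypothetical stand-ins, and math.ldexp stands in for the exponent-adjustment circuit: applying the bias EA.sub.1−kS.sub.1 to each part reproduces the value given by the factored form.

```python
import math

# A1 = 2**EA1 * (a0 + 2**-S1 * a1 + 2**-2*S1 * a2): part a_k carries the
# exponent bias EA1 - k*S1, so scaling each part by its own bias and
# summing gives the same value as factoring out 2**EA1.
EA1, S1 = 5, 11
parts = [1.5, 1.25, 1.0]                       # hypothetical part values
biases = [EA1, EA1 - S1, EA1 - 2 * S1]
via_biases = sum(math.ldexp(p, b) for p, b in zip(parts, biases))
direct = math.ldexp(parts[0] + math.ldexp(parts[1], -S1) + math.ldexp(parts[2], -2 * S1), EA1)
```

All terms here are exact in binary64, so the two evaluation orders agree exactly.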

[0197] II. One FP32 is decomposed to obtain two FP16s.

[0198] In this case, the FP32 may be represented as follows: A.sub.2=2.sup.EA.sup.2 (a.sub.3+2.sup.−S.sup.2a.sub.4). Therefore, an exponent bias value corresponding to a.sub.3 is EA.sub.2, and an exponent bias value corresponding to a.sub.4 is EA.sub.2−S.sub.2. Alternatively, the FP32 may be represented as follows: A.sub.2=2.sup.EA.sup.2.sup.−S.sup.2 (a.sub.3′+a.sub.4′). Therefore, exponent bias values corresponding to a.sub.3′ and a.sub.4′ each are EA.sub.2−S.sub.2.

[0199] Case 2: The first-precision floating-point number is an FP64, and the second-precision floating-point number is an FP32.

[0200] I. One FP64 is decomposed to obtain three FP32s.

[0201] In this case, the FP64 may be represented as follows: A.sub.3=2.sup.EA.sup.3 (a.sub.5+a.sub.6+a.sub.7). Therefore, exponent bias values corresponding to a.sub.5, a.sub.6, and a.sub.7 each are EA.sub.3.

[0202] II. One FP64 is decomposed to obtain two FP32s.

[0203] In this case, the FP64 may be represented as follows: A.sub.4=2.sup.EA.sup.4 (a.sub.8+a.sub.9). Therefore, exponent bias values corresponding to a.sub.8 and a.sub.9 each are EA.sub.4.

[0204] Case 3: The first-precision floating-point number is an FP64, and the second-precision floating-point number is an FP16.

[0205] I. One FP64 is decomposed to obtain five FP16s.

[0206] In this case, the FP64 may be represented as follows: A.sub.5=2.sup.EA.sup.5 (a.sub.10+2.sup.−S.sup.1a.sub.11+2.sup.−2S.sup.1a.sub.12+2.sup.−3S.sup.1a.sub.13+2.sup.−4S.sup.1a.sub.14). Therefore, an exponent bias value corresponding to a.sub.10 is EA.sub.5, an exponent bias value corresponding to a.sub.11 is EA.sub.5−S.sub.1, an exponent bias value corresponding to a.sub.12 is EA.sub.5−2S.sub.1, an exponent bias value corresponding to a.sub.13 is EA.sub.5−3S.sub.1, and an exponent bias value corresponding to a.sub.14 is EA.sub.5−4S.sub.1.

[0207] II. One FP64 is decomposed to obtain four FP16s.

[0208] In this case, the FP64 may be represented as follows: A.sub.6=2.sup.EA.sup.6 (a.sub.15+2.sup.−S.sup.2a.sub.16+2.sup.−2S.sup.2a.sub.17+2.sup.−3S.sup.2a.sub.18). Therefore, an exponent bias value corresponding to a.sub.15 is EA.sub.6, an exponent bias value corresponding to a.sub.16 is EA.sub.6−S.sub.2, an exponent bias value corresponding to a.sub.17 is EA.sub.6−2S.sub.2, and an exponent bias value corresponding to a.sub.18 is EA.sub.6−3S.sub.2.

[0209] Correspondingly, for the intermediate computation result corresponding to each combination, an exponent of the intermediate computation result corresponding to each combination may be adjusted based on the exponent bias value corresponding to the second-precision floating-point number in each combination, to obtain an adjusted intermediate computation result. Then, the adjusted intermediate computation results are accumulated to obtain a computation result. During accumulation herein, the adjusted intermediate computation results may be input into an accumulator to obtain the computation result.

[0210] When the exponent of the intermediate computation result is adjusted, the exponent of the intermediate computation result corresponding to each combination of second-precision floating-point numbers and the exponent bias value corresponding to the second-precision floating-point number in each combination may be added to obtain the adjusted intermediate computation result.

[0211] In an embodiment, a format of a second-precision intermediate computation result that is output by the second-precision multiplier may be adjusted to finally obtain a computation result with higher precision. Corresponding processing may be as follows: The second-precision floating-point numbers in each combination are input into the second-precision multiplier to obtain a first-precision intermediate computation result corresponding to each combination of second-precision floating-point numbers, and format conversion is performed on each first-precision intermediate computation result to obtain a third-precision intermediate computation result corresponding to each combination of second-precision floating-point numbers, where precision of the third-precision intermediate computation result is higher than precision of the first-precision intermediate computation result. An exponent of the third-precision intermediate computation result corresponding to each combination of second-precision floating-point numbers is adjusted based on the exponent bias value corresponding to the second-precision floating-point number in each combination, to obtain an adjusted third-precision intermediate computation result. A summation operation is performed on the adjusted third-precision intermediate computation results corresponding to all the groups of second-precision floating-point numbers, to obtain a third-precision computation result for the plurality of first-precision floating-point numbers.

[0212] When format conversion is performed on the first-precision intermediate computation result, zero padding processing may be performed on an exponent and a mantissa of each first-precision intermediate result to obtain the third-precision intermediate computation result corresponding to each combination of second-precision floating-point numbers.

[0213] For example, if the first-precision floating-point number is an FP32 and the second-precision floating-point number is an FP16, the first-precision intermediate computation result that is output by the second-precision multiplier is also an FP32. The format of the first-precision intermediate computation result may be converted into that of the third-precision intermediate computation result, and the third-precision intermediate computation result may be an FP64. Three zeros are padded after the end bit of the exponent of the first-precision intermediate computation result to extend the quantity of exponent bits from 8 bits to 11 bits, which is the same as the quantity of exponent bits of the FP64. For the mantissa of the first-precision intermediate computation result, 29 zeros are padded after the end bit to extend the quantity of mantissa bits from 23 bits to 52 bits, which is the same as the quantity of mantissa bits of the FP64.
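The mantissa zero-padding can be observed in software. The Python sketch below is illustrative only: it rounds a value to FP32, widens it back to FP64 (Python floats are binary64), and checks that the low 29 mantissa bits of the widened encoding are zero. Note that in the IEEE-754 encoding the exponent field is additionally re-biased (127 to 1023) during widening, which the ordinary conversion performs automatically.

```python
import struct

def widen_fp32_to_fp64(x: float) -> float:
    # Round x to binary32, then return it as a binary64 value; the
    # widening is value-preserving and extends the 23-bit mantissa to
    # 52 bits by appending 29 zero bits.
    (f,) = struct.unpack("<f", struct.pack("<f", x))
    return f

v = widen_fp32_to_fp64(1.1)
bits = struct.unpack("<Q", struct.pack("<d", v))[0]
low29 = bits & ((1 << 29) - 1)
```

Here low29 is zero for any value that originated as an FP32, confirming the 23-bit to 52-bit mantissa extension.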

[0214] Then, after the exponent of the third-precision intermediate computation result is adjusted, the adjusted intermediate computation results are accumulated to obtain the third-precision computation result. Similarly, during accumulation herein, the adjusted intermediate computation results may be input into the accumulator to obtain the computation result.

[0215] Herein, to better reflect an overall procedure of the solution in the embodiments of this application, multiplication computation on first-precision floating-point numbers A and B is used as an example for description. FIG. 9 is a flowchart of a floating-point number multiplication computation method according to an embodiment of this application.

[0216] A and B are separately input into first-precision floating-point number decomposition logic to perform first-precision floating-point number decomposition, to obtain second-precision floating-point numbers A1 and A0 that correspond to A and exponent bias values respectively corresponding to A1 and A0, and to obtain second-precision floating-point numbers B1 and B0 that correspond to B and exponent bias values respectively corresponding to B1 and B0. The decomposition logic may be implemented by using a hardware logic circuit. For a specific decomposition method, refer to step 102.

[0217] Then, the second-precision floating-point numbers obtained by decomposing different first-precision floating-point numbers are combined, and each obtained combination is input into a second-precision multiplier to obtain an intermediate computation result corresponding to the combination. For a specific combination method, refer to step 103. For a specific method for computing the intermediate computation result, refer to step 104.

[0218] Further, exponent adjustment logic is executed for the intermediate computation result corresponding to each combination, and an exponent of the intermediate computation result is adjusted by using the exponent bias value corresponding to the second-precision floating-point number in the combination, to obtain an adjusted intermediate computation result. For a specific step, refer to the adjustment method in step 105. The foregoing exponent adjustment may be performed by an exponent adjustment logic circuit.

[0219] Finally, the adjusted intermediate computation results corresponding to all the combinations may be input into an accumulator for accumulation to obtain a final computation result. For a specific step, refer to the method description in step 105. The accumulator is a hardware accumulator circuit.

[0220] Similarly, to better reflect an overall procedure of the solution in the embodiments of this application, multiplication computation on first-precision floating-point numbers A and B is used as an example for description. FIG. 10 is a flowchart of another floating-point number multiplication computation method according to an embodiment of this application.

[0221] A and B are separately input into first-precision floating-point number decomposition logic to perform first-precision floating-point number decomposition, to obtain a plurality of second-precision floating-point numbers A3, A2, A1, and A0 that correspond to A and exponent bias values respectively corresponding to A3, A2, A1, and A0, and to obtain a plurality of second-precision floating-point numbers B3, B2, B1, and B0 that correspond to B and exponent bias values respectively corresponding to B3, B2, B1, and B0. The decomposition logic may be implemented by using a hardware logic circuit. For a specific decomposition method, refer to step 102.

[0222] Then, the second-precision floating-point numbers obtained by decomposing different first-precision floating-point numbers are combined, and each obtained combination is input into a second-precision multiplier to obtain a third-precision intermediate computation result corresponding to the combination. For a specific combination method, refer to step 103. For a specific method for computing the intermediate computation result, refer to step 104.

[0223] Further, format conversion logic is executed for the third-precision intermediate computation result corresponding to each combination, to convert a format of the third-precision intermediate computation result corresponding to each combination into a first-precision intermediate computation result. For a specific step, refer to the format conversion method in step 105. The foregoing format conversion may be performed by a format conversion logic circuit.

[0224] Further, exponent adjustment logic is executed for the first-precision intermediate computation result corresponding to each combination, and an exponent of the first-precision intermediate computation result is adjusted by using the exponent bias value corresponding to the second-precision floating-point number in the combination, to obtain an adjusted first-precision intermediate computation result. For a specific step, refer to the adjustment method in step 105. The foregoing exponent adjustment may be performed by an exponent adjustment logic circuit.

[0225] Finally, the adjusted first-precision intermediate computation results corresponding to all the combinations may be input into an accumulator for accumulation to obtain a final first-precision computation result. For a specific step, refer to the method description in step 105. The accumulator is a hardware accumulator circuit.

[0226] Similarly, to better reflect an overall procedure of the solution in the embodiments of this application, multiplication computation on first-precision floating-point numbers A and B is used as an example for description. FIG. 11 is a flowchart of another floating-point number multiplication computation method according to an embodiment of this application.

[0227] A and B are separately input into first-precision floating-point number decomposition logic to perform first-precision floating-point number decomposition, to obtain second-precision floating-point numbers A1 and A0 that correspond to A and exponent bias values respectively corresponding to A1 and A0, and to obtain second-precision floating-point numbers B1 and B0 that correspond to B and exponent bias values respectively corresponding to B1 and B0. The decomposition logic may be implemented by using a hardware logic circuit. For a specific decomposition method, refer to step 102.

[0228] Then, the second-precision floating-point numbers obtained by decomposing different first-precision floating-point numbers are combined, and each obtained combination is input into a second-precision multiplier to obtain a first-precision intermediate computation result corresponding to the combination. For a specific combination method, refer to step 103. For a specific method for computing the intermediate computation result, refer to step 104.

[0229] Further, format conversion logic is executed for the first-precision intermediate computation result corresponding to each combination, to convert a format of the first-precision intermediate computation result corresponding to each combination into a third-precision intermediate computation result. For a specific step, refer to the format conversion method in step 105. The foregoing format conversion may be performed by a format conversion logic circuit.

[0230] Further, exponent adjustment logic is executed for the third-precision intermediate computation result corresponding to each combination, and an exponent of the third-precision intermediate computation result is adjusted by using the exponent bias value corresponding to the second-precision floating-point number in the combination, to obtain an adjusted third-precision intermediate computation result. For a specific step, refer to the adjustment method in step 105. The foregoing exponent adjustment may be performed by an exponent adjustment logic circuit.

[0231] Finally, the adjusted third-precision intermediate computation results corresponding to all the combinations may be input into an accumulator for accumulation to obtain a final third-precision computation result. For a specific step, refer to the method description in step 105. The accumulator is a hardware accumulator circuit.
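The end-to-end flow of the flowcharts above (decompose, combine, multiply, accumulate) can be sketched in a few lines of Python. This is a software analogue under stated assumptions: Veltkamp splitting stands in for the hardware decomposition logic, and exact Fraction arithmetic stands in for a sufficiently wide accumulator.

```python
from fractions import Fraction
from itertools import product

def split(x: float):
    # Veltkamp splitting: hi and lo have short significands and hi + lo == x.
    c = (2.0 ** 27 + 1.0) * x
    hi = c - (c - x)
    return hi, x - hi

def multiply_via_parts(a: float, b: float) -> Fraction:
    # Step 102: decompose each input into lower-precision parts.
    a_parts, b_parts = split(a), split(b)
    # Steps 103-104: form all cross combinations and multiply each pair;
    # every partial product is exact because the parts are short.
    partials = [pa * pb for pa, pb in product(a_parts, b_parts)]
    # Step 105: accumulate (Fraction models an exact, wide accumulator).
    return sum(map(Fraction, partials))

result = multiply_via_parts(1.1, 2.3)
```

Because every partial product is exact and the accumulation is exact, the result equals the mathematically exact product of the two inputs.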

[0232] In addition, it should be further noted that the floating-point number computation method provided in the embodiments of this application may be used to compute a floating-point number whose precision is higher than or equal to second precision. Herein, the second precision refers to precision of a floating-point number whose computation is supported by the second-precision multiplier.

[0233] For example, the second-precision multiplier is a half-precision multiplier, that is, precision of a floating-point number whose computation is supported is half precision. Therefore, in the embodiments of this application, computation of a half-precision floating-point number, a single-precision floating-point number, a double-precision floating-point number, and a floating-point number with higher precision can be implemented. It may be understood that for computation of a half-precision floating-point number, the half-precision floating-point number does not need to be decomposed, and it is only required to input a to-be-computed half-precision floating-point number into a half-precision multiplier. However, computation of a single-precision floating-point number and a floating-point number with higher precision may be implemented by using the foregoing floating-point number multiplication computation method.

[0234] In the embodiments of this application, a plurality of first-precision floating-point numbers with relatively high precision may be computed by a second-precision multiplier with relatively low precision, so that a first-precision multiplier is no longer needed. Therefore, first-precision floating-point numbers with relatively high precision may be computed in a device having only a second-precision multiplier with relatively low precision, without additionally designing a first-precision multiplier, thereby effectively saving computing resources.

[0235] Based on the same technical concept, an embodiment of this application further provides a floating-point number multiplication computation apparatus. As shown in FIG. 7, the apparatus includes an obtaining module 710, a decomposition module 720, a combination module 730, an input module 740, and a determining module 750.

[0236] The obtaining module 710 is configured to obtain a plurality of to-be-computed first-precision floating-point numbers, and may implement the obtaining function in step 201 and another implicit step.

[0237] The decomposition module 720 is configured to decompose each to-be-computed first-precision floating-point number to obtain at least two second-precision floating-point numbers, and may implement the decomposition function in step 202 and another implicit step, where precision of the second-precision floating-point number is lower than precision of the first-precision floating-point number.

[0238] The combination module 730 is configured to determine various combinations including two second-precision floating-point numbers obtained by decomposing different first-precision floating-point numbers, and may implement the combination function in step 203 and another implicit step.

[0239] The input module 740 is configured to input the second-precision floating-point numbers in each combination into a second-precision multiplier to obtain an intermediate computation result corresponding to each combination, and may implement the input function in step 204 and another implicit step.

[0240] The determining module 750 is configured to determine a computation result for the plurality of to-be-computed first-precision floating-point numbers based on the intermediate computation result corresponding to each combination, and may implement the determining function in step 205 and another implicit step.

[0241] In an embodiment, the decomposition module 720 is further configured to:

[0242] determine an exponent bias value corresponding to each second-precision floating-point number.

[0243] The determining module 750 is configured to:

[0244] adjust, based on the exponent bias value corresponding to the second-precision floating-point number in each combination, an exponent of the intermediate computation result corresponding to each combination, to obtain an adjusted intermediate computation result; and

[0245] perform a summation operation on the adjusted intermediate computation results corresponding to all the combinations, to obtain the computation result for the plurality of first-precision floating-point numbers.

[0246] In an embodiment, the determining module 750 is configured to:

[0247] add the exponent of the intermediate computation result corresponding to each combination of second-precision floating-point numbers and the exponent bias value corresponding to the second-precision floating-point number in each combination, to obtain the adjusted intermediate computation result.
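Paragraphs [0242] through [0247] can be made concrete with a sketch (an illustration under stated assumptions; `math.frexp`/`math.ldexp` stand in for the hardware exponent handling, and the function names are hypothetical). Before decomposition, each operand is scaled into half-precision range and the removed power of two is recorded as its exponent bias value; after multiplication, the two bias values are added onto the exponent of each intermediate result:

```python
import math
import numpy as np

def decompose_with_bias(x):
    """Split x into two half-precision parts plus an exponent bias value.
    The operand is first scaled to [0.5, 1), so its parts always fit the
    narrow half-precision exponent range; the removed power of two is
    recorded as the bias value."""
    m, bias = math.frexp(float(x))                    # x = m * 2**bias
    hi = np.float16(m)
    lo = np.float16(np.float32(m) - np.float32(hi))
    return (hi, lo), bias

def product_with_bias(a, b):
    """Multiply the parts pairwise, then adjust each intermediate result's
    exponent by adding both operands' bias values (i.e. scale by 2**(ba+bb))."""
    (a_parts, ba) = decompose_with_bias(a)
    (b_parts, bb) = decompose_with_bias(b)
    total = 0.0
    for p in a_parts:
        for q in b_parts:
            inter = float(np.float32(p) * np.float32(q))
            total += math.ldexp(inter, ba + bb)       # exponent adjustment
    return total
```

The pre-scaling is what lets operands such as `1.0e8` or `3.0e-5`, which lie outside the representable half-precision range, still be processed by a half-precision multiplier.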

[0248] In an embodiment, the intermediate computation result is a first-precision intermediate computation result, and the computation result is a first-precision computation result.

[0249] In an embodiment, the first-precision floating-point number is a single-precision floating-point number, the second-precision floating-point number is a half-precision floating-point number, the first-precision intermediate computation result is a single-precision intermediate computation result, the first-precision computation result is a single-precision computation result, and the second-precision multiplier is a half-precision multiplier; or

[0250] the first-precision floating-point number is a double-precision floating-point number, the second-precision floating-point number is a single-precision floating-point number, the first-precision intermediate computation result is a double-precision intermediate computation result, the first-precision computation result is a double-precision computation result, and the second-precision multiplier is a single-precision multiplier.

[0251] In an embodiment, the input module 740 is configured to:

[0252] input the second-precision floating-point numbers in each combination into the second-precision multiplier to obtain a first-precision intermediate computation result corresponding to each combination, and perform format conversion on each first-precision intermediate computation result to obtain a third-precision intermediate computation result corresponding to each combination, where precision of the third-precision intermediate computation result is higher than precision of the first-precision intermediate computation result.

[0253] The determining module 750 is configured to:

[0254] adjust, based on the exponent bias value corresponding to the second-precision floating-point number in each combination, an exponent of the third-precision intermediate computation result corresponding to each combination, to obtain an adjusted third-precision intermediate computation result; and

[0255] perform a summation operation on the adjusted third-precision intermediate computation results corresponding to all the combinations, to obtain a third-precision computation result for the plurality of first-precision floating-point numbers.

[0256] In an embodiment, the input module 740 is configured to:

[0257] perform zero padding processing on an exponent and a mantissa of each first-precision intermediate computation result to obtain the third-precision intermediate computation result corresponding to each combination of second-precision floating-point numbers.
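The zero-padding conversion of paragraph [0257] can be sketched at the bit level (an illustrative assumption-laden sketch: normal numbers only; zero, subnormal, infinity, and NaN encodings are ignored, and the function name is hypothetical). Widening binary32 to binary64 re-biases the exponent (bias 127 to bias 1023) and pads the 23-bit mantissa to 52 bits with trailing zeros, so the encoded numeric value is unchanged:

```python
import struct

def widen_f32_to_f64(bits32):
    """Widen the fields of an IEEE-754 binary32 word into binary64 layout:
    the mantissa is zero-padded on the right, and the exponent is carried
    over after re-biasing (normal numbers only)."""
    sign = (bits32 >> 31) & 0x1
    exp  = (bits32 >> 23) & 0xFF
    frac = bits32 & 0x7FFFFF
    exp64  = exp - 127 + 1023          # re-bias the exponent field
    frac64 = frac << (52 - 23)         # zero-pad the mantissa to 52 bits
    return (sign << 63) | (exp64 << 52) | frac64
```

For instance, the binary32 encoding of 3.25 widens to exactly the binary64 encoding of 3.25, which can be checked by round-tripping through `struct`.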

[0258] In an embodiment, the first-precision floating-point number is a single-precision floating-point number, the second-precision floating-point number is a half-precision floating-point number, the first-precision intermediate computation result is a single-precision intermediate computation result, the third-precision intermediate computation result is a double-precision intermediate computation result, the third-precision computation result is a double-precision computation result, and the second-precision multiplier is a half-precision multiplier.

[0259] In an embodiment, the input module 740 is configured to:

[0260] input the second-precision floating-point numbers in each combination into the second-precision multiplier to obtain a third-precision intermediate computation result corresponding to each combination, and perform format conversion on each third-precision intermediate computation result to obtain a first-precision intermediate computation result corresponding to each combination.

[0261] The determining module 750 is configured to:

[0262] adjust, based on the exponent bias value corresponding to the second-precision floating-point number in each combination, an exponent of the first-precision intermediate computation result corresponding to each combination, to obtain an adjusted first-precision intermediate computation result; and

[0263] perform a summation operation on the adjusted first-precision intermediate computation results corresponding to all the combinations, to obtain a first-precision computation result for the plurality of first-precision floating-point numbers.

[0264] In an embodiment, the first-precision floating-point number is a double-precision floating-point number, the second-precision floating-point number is a half-precision floating-point number, the third-precision intermediate computation result is a single-precision intermediate computation result, the first-precision intermediate computation result is a double-precision intermediate computation result, the first-precision computation result is a double-precision computation result, and the second-precision multiplier is a half-precision multiplier.

[0265] It should be noted that the foregoing modules may be implemented by a processor, or may be implemented by a processor together with a memory, or may be implemented by executing a program instruction in a memory by a processor.

[0266] It should be noted that division of the foregoing functional modules is only described as an example during floating-point number computation by the floating-point number multiplication computation apparatus provided in the foregoing embodiments. In actual application, the foregoing functions may be allocated, based on a requirement, to different functional modules for implementation; that is, an internal structure of the electronic device is divided into different functional modules to implement all or some of the functions described above. In addition, the floating-point number multiplication computation apparatus provided in the foregoing embodiment and the floating-point number multiplication computation method embodiment belong to a same concept. For a specific implementation process of the apparatus, refer to the method embodiment. Details are not described herein again.

[0267] Based on the same technical concept, an embodiment of this application further provides an arithmetic logic unit. The arithmetic logic unit is a hardware computation unit in a processor. As shown in FIG. 12, the arithmetic logic unit 1200 includes a floating-point number decomposition circuit 1202, a second-precision multiplier 1203, an exponent adjustment circuit 1207, and an accumulator 1209.

[0268] The floating-point number decomposition circuit 1202 is configured to: decompose each input to-be-computed first-precision floating-point number into at least two second-precision floating-point numbers, and output, to the exponent adjustment circuit 1207, an exponent bias value corresponding to each second-precision floating-point number, where precision of the second-precision floating-point number is lower than precision of the first-precision floating-point number. The plurality of first-precision floating-point numbers may be successively input into the floating-point number decomposition circuit 1202 for decomposition computation, or a plurality of floating-point number decomposition circuits may separately provide decomposition computation for one first-precision floating-point number.

[0269] The second-precision multiplier 1203 is configured to: receive a combination including two second-precision floating-point numbers obtained by decomposing different first-precision floating-point numbers, perform a multiplication operation on the second-precision floating-point numbers in each combination, and output, to the exponent adjustment circuit 1207, an intermediate computation result corresponding to each combination.

[0270] The exponent adjustment circuit 1207 is configured to: adjust, based on the exponent bias value corresponding to the second-precision floating-point number in each input combination, an exponent of the intermediate computation result corresponding to each input combination; and output an adjusted intermediate computation result to the accumulator 1209.

[0271] The accumulator 1209 is configured to: perform a summation operation on the adjusted intermediate computation results corresponding to all the input combinations, and output a computation result for the plurality of first-precision floating-point numbers.

[0272] In an embodiment, the exponent adjustment circuit 1207 is configured to: add the exponent bias value corresponding to the second-precision floating-point number in each input combination and the exponent of the intermediate computation result corresponding to each input combination, and output an adjusted intermediate computation result to the accumulator 1209.
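The exponent adjustment circuit of paragraph [0272] adds the bias value directly to the exponent field of the intermediate result. A bit-level sketch (illustrative only: normal numbers, no overflow handling, hypothetical function name); adding `bias` to the 8-bit exponent field multiplies the encoded binary32 value by 2 to the power `bias`:

```python
import struct

def adjust_exponent_bits(bits32, bias):
    """Add `bias` to the 8-bit exponent field of an IEEE-754 binary32 word,
    which scales the encoded value by 2**bias (special encodings and
    exponent overflow are not handled in this sketch)."""
    exp = (bits32 >> 23) & 0xFF
    return (bits32 & 0x807FFFFF) | (((exp + bias) & 0xFF) << 23)
```

For example, applying a bias of 3 to the encoding of 1.5 yields the encoding of 12.0, with the sign and mantissa fields untouched.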

[0273] In an embodiment, the intermediate computation result is a first-precision intermediate computation result, and the computation result is a first-precision computation result.

[0274] In an embodiment, the first-precision floating-point number is a single-precision floating-point number, the second-precision floating-point number is a half-precision floating-point number, the first-precision intermediate computation result is a single-precision intermediate computation result, the first-precision computation result is a single-precision computation result, and the second-precision multiplier is a half-precision multiplier; or

[0275] the first-precision floating-point number is a double-precision floating-point number, the second-precision floating-point number is a single-precision floating-point number, the first-precision intermediate computation result is a double-precision intermediate computation result, the first-precision computation result is a double-precision computation result, and the second-precision multiplier is a single-precision multiplier.

[0276] In an embodiment, the arithmetic logic unit 1300 further includes a format conversion circuit 1306 (see FIG. 13).

[0277] The second-precision multiplier 1203 is configured to: perform a multiplication operation on the second-precision floating-point numbers in each combination, and output, to the format conversion circuit 1306, a first-precision intermediate computation result corresponding to each combination.

[0278] The format conversion circuit 1306 is configured to: perform format conversion on each input first-precision intermediate computation result, and output, to the exponent adjustment circuit 1207, a third-precision intermediate computation result corresponding to each combination, where precision of the third-precision intermediate computation result is higher than precision of the first-precision intermediate computation result.

[0279] The exponent adjustment circuit 1207 is configured to: adjust, based on the exponent bias value corresponding to the second-precision floating-point number in each input combination, an exponent of the third-precision intermediate computation result corresponding to each input combination; and output an adjusted third-precision intermediate computation result to the accumulator 1209.

[0280] The accumulator 1209 is configured to: perform a summation operation on the adjusted third-precision intermediate computation results corresponding to all the input combinations, and output a third-precision computation result for the plurality of first-precision floating-point numbers.

[0281] In an embodiment, the format conversion circuit 1306 is configured to:

[0282] perform zero padding processing on an exponent and a mantissa of each input first-precision intermediate computation result, and output, to the exponent adjustment circuit, the third-precision intermediate computation result corresponding to each combination.

[0283] In an embodiment, the first-precision floating-point number is a single-precision floating-point number, the second-precision floating-point number is a half-precision floating-point number, the first-precision intermediate computation result is a single-precision intermediate computation result, the third-precision intermediate computation result is a double-precision intermediate computation result, the third-precision computation result is a double-precision computation result, and the second-precision multiplier is a half-precision multiplier.

[0284] In an embodiment, the arithmetic logic unit 1400 (see FIG. 14) further includes a format conversion circuit 1306.

[0285] The second-precision multiplier 1203 is configured to: perform a multiplication operation on the second-precision floating-point numbers in each combination, and output, to the format conversion circuit 1306, a third-precision intermediate computation result corresponding to each combination.

[0286] The format conversion circuit 1306 is configured to: perform format conversion on each input third-precision intermediate computation result, and output, to the exponent adjustment circuit 1207, a first-precision intermediate computation result corresponding to each combination.

[0287] The exponent adjustment circuit 1207 is configured to: adjust, based on the exponent bias value corresponding to the second-precision floating-point number in each input combination, an exponent of the first-precision intermediate computation result corresponding to each input combination; and output an adjusted first-precision intermediate computation result to the accumulator 1209.

[0288] The accumulator 1209 is configured to: perform a summation operation on the adjusted first-precision intermediate computation results corresponding to all the input combinations, and output a first-precision computation result for the plurality of first-precision floating-point numbers.

[0289] In an embodiment, the first-precision floating-point number is a double-precision floating-point number, the second-precision floating-point number is a half-precision floating-point number, the third-precision intermediate computation result is a single-precision intermediate computation result, the first-precision intermediate computation result is a double-precision intermediate computation result, the first-precision computation result is a double-precision computation result, and the second-precision multiplier is a half-precision multiplier.

[0290] In an embodiment, the arithmetic logic unit further includes a computation mode switching circuit.

[0291] The computation mode switching circuit is configured to: when the computation mode switching circuit is set to a second-precision floating-point number computation mode, set the floating-point number decomposition circuit 1202 and the exponent adjustment circuit 1207 to be invalid.

[0292] The second-precision multiplier 1203 is configured to: receive a plurality of groups of to-be-computed second-precision floating-point numbers that are input from the outside of the arithmetic logic unit 1400, perform a multiplication operation on each group of second-precision floating-point numbers, and output an intermediate computation result corresponding to each group of to-be-computed second-precision floating-point numbers.

[0293] The accumulator 1209 is configured to: perform a summation operation on the intermediate computation results corresponding to all the input groups of to-be-computed second-precision floating-point numbers, and output a computation result for the plurality of groups of to-be-computed second-precision floating-point numbers.

[0294] As shown in FIG. 14, the arithmetic logic unit 1400 may further support mode switching, that is, switching between a first-precision floating-point number operation mode and a second-precision floating-point number operation mode. In the first-precision floating-point number operation mode, a multiplication operation on the first-precision floating-point number may be implemented by using the floating-point number decomposition circuit 1202, the second-precision multiplier 1203, the format conversion circuit 1306, the exponent adjustment circuit 1207, and the accumulator 1209. In the second-precision floating-point number operation mode, the floating-point number decomposition circuit 1202, the format conversion circuit 1306, and the exponent adjustment circuit 1207 may be set to be invalid, and only the second-precision multiplier 1203 and the accumulator 1209 are used. A plurality of groups of to-be-computed second-precision floating-point numbers are directly input into the second-precision multiplier 1203, intermediate computation results corresponding to the plurality of groups of to-be-computed second-precision floating-point numbers are output, and then the intermediate computation results are input into the accumulator 1209 for an accumulation operation to obtain a computation result corresponding to the plurality of groups of to-be-computed second-precision floating-point numbers.
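The mode switching of paragraphs [0290] through [0294] can be emulated behaviorally as follows (a sketch only; `fused_dot`, `_split`, and the `mode` flag are illustrative names, not the claimed circuit). In the second-precision mode, the decomposition and exponent adjustment stages are bypassed and the operands go straight to the multiplier and accumulator:

```python
import numpy as np

def _split(x):
    """Decompose into high/low half-precision parts (first-precision mode)."""
    hi = np.float16(x)
    return hi, np.float16(np.float32(x) - np.float32(hi))

def fused_dot(pairs, mode="first"):
    """Accumulate products of operand pairs in one of two modes:
    'first'  - decompose each operand, multiply all part combinations;
    'second' - bypass decomposition and feed half-precision inputs
               directly to the multiplier and accumulator."""
    acc = np.float32(0)
    for a, b in pairs:
        if mode == "second":
            acc += np.float32(np.float16(a)) * np.float32(np.float16(b))
        else:
            for p in _split(a):
                for q in _split(b):
                    acc += np.float32(p) * np.float32(q)
    return acc
```

Both modes reuse the same multiplier and accumulator stages, which is the point of the switching design: the extra circuits cost nothing when pure second-precision throughput is wanted.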

[0295] It should be noted herein that the arithmetic logic unit provided in the foregoing embodiment and the floating-point number multiplication computation method embodiment belong to a same concept. For a specific implementation process of the arithmetic logic unit, refer to the method embodiment. Details are not described herein again.

[0296] Referring to FIG. 8, an embodiment of this application provides an electronic device. The electronic device 800 includes at least one processor 801, a bus system 802, and a memory 803.

[0297] The processor 801 may be a general-purpose central processing unit (CPU), a network processor (NP), a graphics processing unit, a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to control program execution of the solutions in this application.

[0298] The bus system 802 may include a path to transmit information between the foregoing components.

[0299] The memory 803 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, or a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, or may be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or another compact disc storage, an optical disc storage (including a compact disc, a laser disc, an optical disc, a digital versatile disc, a Blu-ray disc, and the like), a magnetic disk storage medium or another magnetic storage device, or any other medium capable of including or storing expected program code in a form of an instruction or a data structure and capable of being accessed by a computer. However, the memory 803 is not limited thereto. The memory may exist independently, and is connected to the processor by using the bus. Alternatively, the memory may be integrated with the processor.

[0300] The memory 803 is configured to store application program code for performing the solutions of this application, and execution is controlled by the processor 801. The processor 801 is configured to execute the application program code stored in the memory 803, to implement the floating-point number computation method provided in this application.

[0301] In an embodiment, the processor 801 may include one or more CPUs.

[0302] A person of ordinary skill in the art may understand that all or some of the steps of the embodiments may be implemented by hardware or a program instructing related hardware. The program may be stored in a computer-readable storage medium. The computer-readable storage medium may include a read-only memory, a magnetic disk, an optical disc, or the like.

[0303] The foregoing description is merely an embodiment of this application, but is not intended to limit this application. Any modification, equivalent replacement, or improvement made without departing from the spirit and principle of this application shall fall within the protection scope of this application.