Processing unit, method and computer program for multiplying at least two multiplicands
11537361 · 2022-12-27
Abstract
A processing unit and a method for multiplying at least two multiplicands. The multiplicands are present in an exponential notation, that is, each multiplicand is assigned an exponent and a base. The processing unit is configured to carry out a multiplication of the multiplicands and includes at least one bitshift unit, the bitshift unit shifting a binary number a specified number of places, in particular, to the left; an arithmetic unit, which carries out an addition of two input variables and a subtraction of two input variables; and a storage device. A computer program, which is configured to execute the method, and a machine-readable storage element, in which the computer program is stored, are also described.
Claims
1. A processing unit, which is configured to carry out a multiplication of at least two multiplicands, the multiplicands each having a first exponent and a first base, each of the first bases having a second base, a second exponent, and a third exponent, the processing unit comprising: at least one bitshift unit, the at least one bitshift unit configured to shift a binary number a specified number of places to the left; an arithmetic unit; and a storage device; wherein: the arithmetic unit is configured to carry out a subtraction of the third exponents, the at least one bitshift unit is configured to carry out a first shift of a binary number of the first exponent of one of the multiplicands by a number of places of a value of the subtracted third exponents, wherein the at least one bitshift unit carries out the first shift by filling in a plurality of zeros on the right side of a register of the at least one bitshift unit as a function of the value of the subtraction of the third exponents; the arithmetic unit is configured to carry out an addition of a result of the first shift to the first exponent of the one of the multiplicands, a result of the addition being split up into an integer part and a fractional part as a function of a value of a smaller exponent of the third exponents, a binary number of the fractional part being fetched out of the storage device, wherein the fractional part is used as an address for fetching the binary number of the fractional part out of the storage device, and the at least one bitshift unit is configured to carry out a second shift of the binary number of the fractional part by a number of places of a value of the integer part.
2. The processing unit as recited in claim 1, wherein the storage device has at least 2^n entries, where n is equal to a magnitude of the smaller exponent of the third exponents.
3. The processing unit as recited in claim 1, wherein the processing unit also includes an accumulation unit, which accumulates results of a plurality of the multiplications of, in each instance, at least two multiplicands.
4. The processing unit as recited in claim 3, wherein the accumulation unit is implemented by an adder tree.
5. The processing unit as recited in claim 1, further comprising: a conversion unit, the conversion unit being a priority encoder, the conversion unit configured to convert a result of the second shift to an exponential notation.
6. A method for multiplying at least two multiplicands in a processing unit, the processing unit including at least one bitshift unit, an arithmetic unit, and a storage device, the multiplicands each having a first exponent and a first base, each of the first bases having a second base, a second exponent, and a third exponent, the method comprising the following steps: providing the first exponents of the multiplicands and the third exponents, each of the provided first exponents and the third exponents being quantized; subtracting, by the arithmetic unit, the third exponents; first shifting of a binary number of the first exponents of one of the multiplicands by a number of places of a value of the subtracted third exponents, wherein the first shifting is carried out by the at least one bitshift unit filling in a plurality of zeros on the right side of a register of the at least one bitshift unit as a function of the value of the subtraction of the third exponents; adding, by the arithmetic unit, a result of the first shifting to the first exponent of the one of the multiplicands; splitting up a result of the addition into an integer part and a fractional part as a function of a smaller exponent of the third exponents; fetching a binary number of the fractional part out of the storage device, wherein the fractional part is used as an address for fetching the binary value of the fractional part out of the storage device; and second shifting of the binary number of the fractional part by a number of places of a value of the integer part by the at least one bitshift unit.
7. The method as recited in claim 6, wherein the storage device includes a lookup table.
8. The method as recited in claim 6, wherein a result of the second shift is broken down into an exponent and a specifiable base.
9. The method as recited in claim 6, wherein each of the second bases has a value of two, and each of the second exponents has a value of two.
10. The method as recited in claim 6, wherein the provided first exponents and third exponents are each represented by a maximum of 10 bits.
11. A method for operating a machine learning system, in each instance, a plurality of parameters of the machine learning system and intermediate variables of the machine learning system being stored as multiplicands in a storage device, using an exponential notation, each of the stored multiplicands having a first exponent and a first base, each of the first bases having a second base, a second exponent, and a third exponent, multiplications of at least two of the stored multiplicands being carried out by a processing unit, the processing unit including at least one bitshift unit, an arithmetic unit, and a storage device, performing the following steps: providing the first exponents of the at least two multiplicands and the third exponents of the first bases of the at least two of the multiplicands, each of the provided first exponents and the third exponents being quantized; subtracting, by the arithmetic unit, the third exponents of the first bases of the at least two of the multiplicands; first shifting of a binary number of the first exponents of one of the at least two multiplicands by a number of places of a value of the subtracted third exponents, wherein the first shifting is carried out by the at least one bitshift unit filling in a plurality of zeros on the right side of a register of the at least one bitshift unit as a function of the value of the subtraction of the third exponents; adding, by the arithmetic unit, a result of the first shifting to the first exponent of the one of the at least two multiplicands; splitting up a result of the addition into an integer part and a fractional part as a function of a smaller exponent of the third exponents of the first bases of the at least two of the multiplicands; fetching a binary number of the fractional part out of the storage device, wherein the fractional part is used as an address for fetching the binary value of the fractional part out of the storage device; and second shifting of the binary number of 
the fractional part by a number of places of a value of the integer part by the at least one bitshift unit.
12. The method as recited in claim 11, wherein during training of the machine learning system, at least the first and second bases of the exponential notation of the intermediate variables of the machine learning system and of the parameters of the machine learning system are ascertained.
13. The method as recited in claim 11, wherein after training of the machine learning system, at least the first and second bases for the exponential notation of the intermediate variables and of the parameters of the machine learning system are ascertained.
14. The method as recited in claim 11, wherein before or after training of the machine learning system, at least the first and second bases for the exponential notation of the intermediate variables and of the parameters of the machine learning system are ascertained, and wherein the first and second bases are ascertained as a function of a propagated quantization error, the propagated quantization error characterizing a difference of the result of the multiplication of two multiplicands, using quantized exponents, and a result of the multiplication of the two multiplicands, using real exponents.
15. The method as recited in claim 14, wherein the first, second and third exponents are ascertained as a function of the ascertained base of the exponential notation, and the ascertained exponents are quantized, and during the quantization of the exponents, beginning with a resolution of a quantization of the exponents, using 10 bits, the resolution is reduced step-by-step, in each instance, by one bit, when a variable characterizing a quantization error is less than a specifiable quantity.
16. The method as recited in claim 11, wherein an input variable of the machine learning system is a variable, which is measured using a sensor, and a controlled variable is ascertained as a function of an output variable of the machine learning system.
17. A non-transitory machine-readable storage element on which is stored a computer program for multiplying at least two multiplicands in a processing unit, the processing unit including at least one bitshift unit, an arithmetic unit, and a storage device, the multiplicands each having a first exponent and a first base, each of the first bases having a second base, a second exponent, and a third exponent, the computer program, when executed by a computer, causing the computer to perform the following steps: providing the first exponents of the multiplicands and the third exponents, each of the provided first exponents and the third exponents being quantized; subtracting, by the arithmetic unit, the third exponents; first shifting of a binary number of the first exponents of one of the multiplicands by a number of places of a value of the subtracted third exponents, wherein the first shifting is carried out by the at least one bitshift unit filling in a plurality of zeros on the right side of a register of the at least one bitshift unit as a function of the value of the subtraction of the third exponents; adding, by the arithmetic unit, a result of the first shifting to the first exponent of the one of the multiplicands; splitting up a result of the addition into an integer part and a fractional part as a function of a smaller exponent of the third exponents; fetching a binary number of the fractional part out of the storage device, wherein the fractional part is used as an address for fetching the binary value of the fractional part out of the storage device; and second shifting of the binary number of the fractional part by a number of places of a value of the integer part by the at least one bitshift unit.
18. A non-transitory machine-readable storage element on which is stored a computer program for operating a machine learning system, in each instance, a plurality of parameters of the machine learning system and intermediate variables of the machine learning system being stored as multiplicands in a storage device, using an exponential notation, each of the stored multiplicands having a first exponent and a first base, each of the first bases having a second base, a second exponent, and a third exponent, the computer program, when executed by a computer, the computer including at least one bitshift unit, an arithmetic unit, and a storage device, causing the computer to perform multiplications of at least two of the stored multiplicands by performing the following steps: providing the first exponents of the at least two multiplicands and the third exponents of the first bases of the at least two of the multiplicands, each of the provided first exponents and the third exponents being quantized; subtracting, by the arithmetic unit, the third exponents of the first bases of the at least two of the multiplicands; first shifting of a binary number of the first exponents of one of the at least two multiplicands by a number of places of a value of the subtracted third exponents, wherein the first shifting is carried out by the at least one bitshift unit filling in a plurality of zeros on the right side of a register of the at least one bitshift unit as a function of the value of the subtraction of the third exponents; adding, by the arithmetic unit, a result of the first shifting to the first exponent of the one of the at least two multiplicands; splitting up a result of the addition into an integer part and a fractional part as a function of a smaller exponent of the third exponents of the first bases of the at least two of the multiplicands; fetching a binary number of the fractional part out of the storage device, wherein the fractional part is used as an address for 
fetching the binary value of the fractional part out of the storage device; and second shifting of the binary number of the fractional part by a number of places of a value of the integer part by the at least one bitshift unit.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
(8) The multiplication may be ascertained using a bit-shift of the binary number of multiplicand a to the left by b̂ places:
a·b = a << b̂ (1)
(9) The operator << denotes a bitshift of multiplicand a to the left in the binary system by the number of places of value b̂.
(10) For the case in which multiplicand a may also be represented by a power of two, a = 2^â, then:
a·b = 1 << (â + b̂) (2)
(11) It is noted that the conversion of multiplicands a and b to the exponential notation has the result that, in order to store these values, only the exponents â, b̂ must be stored; the exponents may be stored using fewer bits than the original multiplicands a, b. Multiplicands a, b are typically stored using 32 bits, whereas exponents â, b̂ are preferably stored using 8 or even fewer bits. It should be noted that, in addition, the information regarding the base selected for the exponential notation must be known. This means that a coarse quantization of the exponent may be selected, through which storage space is reduced. Furthermore, it is noted that the multiplication according to one of the above-mentioned equations (1), (2) is independent of a hardware numeral representation format, e.g., fixed-point, since the multiplication is carried out in the binary system.
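The shift-based multiplications of equations (1) and (2) may be illustrated with a minimal sketch; the function names `mul_pow2` and `mul_two_pow2` are illustrative and not taken from the description:

```python
# Hedged sketch of equations (1) and (2): multiplying by a power of two
# reduces to a left shift of the binary number.

def mul_pow2(a: int, b_hat: int) -> int:
    """a · 2^b_hat via a bit-shift, per equation (1)."""
    return a << b_hat

def mul_two_pow2(a_hat: int, b_hat: int) -> int:
    """2^a_hat · 2^b_hat via a shift of 1, per equation (2)."""
    return 1 << (a_hat + b_hat)

assert mul_pow2(5, 3) == 5 * 8       # 40
assert mul_two_pow2(4, 3) == 16 * 8  # 128
```

Only the exponents â, b̂ need to be stored for such a multiplication, which motivates the storage savings noted above.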
(12) For general bases B having the condition B≠2, any number c may be determined in an exponential notation:
c = B^ĉ (3)
using an exponent ĉ, which is preferably quantized.
(13) In addition, in the following, the bases B are selected as follows, so that they satisfy the equation:
ld(B) = 2^z (4)
where z comes from the set of integers, z ∈ ℤ, and preferably has a negative value.
(14) For the given quantized exponent ĉ of base B and given exponent z of base 2, a reconstruction of value c with given ĉ, z is calculated as follows:
c = B^ĉ = 2^(ld(B)·ĉ) = 2^(2^z·ĉ) = 2^(ĉ << z) (5)
(15) For the case in which z < 0, bit-shifting to the right takes place, and a binary number having |z| radix places is formed in the exponent.
(16) In addition, equation (5) may be simplified:
c = 2^Fractional{ĉ<<z} << Integer{ĉ<<z} (6)
(18) It should be pointed out that the fractional part may be derived directly from the number ĉ, since the fractional part includes |z| places, as just mentioned.
(19) Equation (6) is preferably carried out exclusively by hardware. Then, it is possible for the value 2^Fractional{ĉ<<z} to be stored in a lookup table (LUT).
(20) The LUT contains 2^|z| entries, so that all of the necessary values for the expression 2^Fractional{ĉ<<z} are stored.
(21) Consequently, the number c may be reconstructed efficiently, using a bit-shift of the number fetched out of the LUT to the left by the number of places of the value of the expression Integer{ĉ<<z}. It is noted that the value fetched out of the LUT is also quantized, preferably, using a quantization resolution between 5 and 30 bits.
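The reconstruction of equations (5) and (6) together with the LUT of paragraphs (19)–(21) may be sketched as follows. This is a floating-point illustration under the assumption z < 0; the names `build_lut` and `reconstruct` are illustrative:

```python
import math

def build_lut(z: int):
    """LUT of 2^(f · 2^z) for all |z|-bit fractional parts (assumes z < 0)."""
    n = -z                               # number of fractional bits |z|
    return [2.0 ** (f / (1 << n)) for f in range(1 << n)]

def reconstruct(c_hat: int, z: int, lut):
    """c = B^c_hat with ld(B) = 2^z: split c_hat into integer and
    fractional parts per equation (6), then shift the LUT value."""
    n = -z
    integer = c_hat >> n                 # Integer{c_hat << z}
    frac = c_hat & ((1 << n) - 1)        # Fractional{c_hat << z}, the LUT address
    return lut[frac] * (1 << integer)    # bit-shift left by the integer part

z = -2                                   # B = 2^(2^-2), i.e. ld(B) = 0.25
lut = build_lut(z)
assert math.isclose(reconstruct(9, z, lut), 2.0 ** (9 * 2.0 ** z))
```

In hardware, the LUT values themselves would be quantized fixed-point numbers rather than floats, as noted in paragraph (21).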
(22) The method 10 begins with step 11. In this step, the quantized exponents ĉ, d̂ and z_c, z_d of the multiplicands are provided.
(23) In the exponential notation, the multiplication may take place as follows:
c·d = B_c^ĉ · B_d^d̂ = 2^(ld(B_c)·ĉ + ld(B_d)·d̂) = 2^(2^z_c·ĉ + 2^z_d·d̂) (7)
(24) Now, if B_m = min(B_c, B_d), then ld(B_m) = 2^z_b with z_b = min(z_c, z_d), and equation (7) may be rewritten (assuming, by way of example, z_b = z_c):
c·d = 2^(2^z_c·(ĉ + 2^(z_d−z_c)·d̂)) (8)
(26) Since, in this example, z_d − z_c > 0, the addition of the exponents by hardware may take place with the aid of a bit adjustment, that is, by filling in binary zeros on the right side with respect to a bit-shift, as a function of the difference z_d − z_c.
(27) If z_c < z_d, then the multiplication by 2^(z_d−z_c) may be carried out as a bit-shift of exponent d̂ to the left by (z_d − z_c) places.
(28) Equation (8) may be simplified as follows:
c·d = 2^(2^z_c·(ĉ + 2^(z_d−z_c)·d̂))
= 2^((ĉ + (d̂ << (z_d − z_c))) << z_c) =: 2^p̂ (9)
(30) After step 11 is completed, step 12 follows. In this, a subtraction of the exponents (z_d − z_c) is carried out, as shown in the second line of equation (9).
(31) Subsequently, in step 12, a first bit-shift of one of the exponents d̂ by the number of places of the value of the result of the subtraction (z_d − z_c) is carried out. Preferably, the first bit-shift may be carried out by hardware in the form of a bit adjustment, as mentioned with regard to equation (8). The result of the first bit-shift is then added to further exponent ĉ.
(32) In the following step 13, using a second shift, the result of the addition is shifted (in particular, to the right) by the number of places of the value of z_b. The result of this is now p̂. In this context, it should be pointed out that for the case of z_b < 0, the second shift results in |z_b| radix places in p̂.
(33) Step 14 follows step 13. In this, the ascertained result p̂ from step 13 is split up into a fractional and an integer part, as in equation (6). As an alternative, step 13 may be skipped, and in step 14, the result of the addition from step 12 is divided up directly into a fractional and an integer part as a function of the value z_b.
(34) The final result of the multiplication c·d = p is then given as:
c·d = 2^Fractional{p̂} << Integer{p̂} (10)
(35) This means that in step 14, the value of the fractional part is shifted by the number of places of the value of the integer part.
(36) The value of the term 2^Fractional{p̂} is preferably stored in an LUT, and the value is fetched out of the LUT in order to ascertain the result c·d. This LUT includes 2^|z_b| entries.
(37) It should be pointed out that the method may also be executed using more than two multiplicands (a, b, c, . . . ). For this, the LUT contains 2^|min(z_a, z_b, z_c, . . . )| entries.
(38) It is noted that method 10 may also be implemented using at least one negative multiplicand. If one or both of the multiplicands have a negative algebraic sign, then, in one further specific embodiment, method 10 may be executed up to and including step 14, while disregarding the algebraic sign of the multiplicands. In this specific embodiment, step 15 is then executed after step 14 has been finished. In step 15, the algebraic sign of the result of the multiplication p = c·d is adapted in accordance with the algebraic signs of the respective multiplicands c, d. If, for example, one multiplicand is negative, then the result of the multiplication becomes negative, as well. If both multiplicands are negative, then the result of the multiplication is positive.
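Steps 11 through 15 of method 10 may be summarized in a hedged software sketch, assuming z_c ≤ z_d < 0 and a nonnegative integer part; all names are illustrative, and the LUT holds floats where hardware would use quantized values:

```python
import math

def multiply(c_hat, z_c, d_hat, z_d, sign_c=1, sign_d=1):
    """Sketch of method 10: c·d with c = B_c^c_hat, d = B_d^d_hat,
    ld(B_c) = 2^z_c, ld(B_d) = 2^z_d (assumes z_c <= z_d < 0)."""
    assert z_c <= z_d < 0
    # step 12: subtract the third exponents, first shift d_hat, add c_hat
    p_raw = c_hat + (d_hat << (z_d - z_c))
    # steps 13/14: split at |z_c| fractional places (equation (9))
    n = -z_c
    integer = p_raw >> n                     # assumed nonnegative here
    frac = p_raw & ((1 << n) - 1)            # LUT address
    lut = [2.0 ** (f / (1 << n)) for f in range(1 << n)]
    magnitude = lut[frac] * (1 << integer)   # second shift, equation (10)
    # step 15: adapt the algebraic sign
    return sign_c * sign_d * magnitude

# c = 2^(5 · 2^-2), d = 2^(3 · 2^-1): expected product 2^2.75
expected = 2.0 ** (5 * 2.0 ** -2) * 2.0 ** (3 * 2.0 ** -1)
assert math.isclose(multiply(5, -2, 3, -1), expected)
```

The sign handling mirrors step 15: the magnitudes are multiplied sign-free, and the product of the signs is applied at the end.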
(39) With that, method 10 ends. It is possible for the method to be implemented by hardware or software or a mixture of software and hardware.
(41) The method 20 begins with step 21. In this, a trained machine learning system is provided. This means that a parameterization of the machine learning system was already determined during the training. The machine learning system may be trained with the aid of an optimization method, in particular, a gradient descent method, using supplied training data. Alternatively, the machine learning system may be trained in step 21.
(42) In subsequent step 22, the parameters and, additionally or alternatively, intermediate results of the machine learning system that are to be converted to the exponential notation are selected. Equations to be optimized (cf. equations (13) and (14) below) are then set up for these parameters and/or intermediate results. The result of the optimization then yields the bases that are suitable for an adequate representation of the parameters and/or intermediate results in the exponential notation.
(43) If the machine learning system includes, by way of example, a neural network, the parameters, in particular, intermediate results, may be converted, in layers, to the exponential notation. Preferably, the parameters and/or intermediate results of the respective layers may each be represented with the aid of the same base. It is preferable for the constraint, that the bases have a value less than 2, to be considered during the determination of the bases.
(44) In addition, the exponents of the parameters and of the intermediate results of the machine learning system may be quantized in the exponential notation.
(45) For the intermediate results y^(l) of layer (l) in the quantized exponential notation ŷ^(l), the following applies:
y^(l) ≅ B_y^(ŷ^(l)) =: ỹ^(l) (11)
(46) The relationship shown in equation (11) is also valid for the representation of the parameters of the machine learning system, in particular, for the parameters, which are multiplied by other values, such as the intermediate results.
(47) The determination of the bases B_y, B_w is carried out as a function of the quantization error q = y^(l) − ỹ^(l).
(48) Alternatively, a propagated quantization error may be used in order to ascertain the bases B_y, B_w. The propagated quantization error characterizes a difference between the result of the multiplication with and without quantization, or the corresponding difference in a further multiplication, in which this result is reused as a multiplicand.
(49) The propagated quantization error q_p is given by the following equation:
q_p = Σ w^(l+1)·x^(l+1) − Σ w^(l+1)·x̃^(l+1) (12)
(50) In this connection, output variables y^(l) of layer (l) of the machine learning system, in particular, of the neural network, are written as input variables x of layer (l+1). In addition, or as an alternative, the (l+n)th layer may also be used for ascertaining the quantization error. The training data used for training the machine learning system may be used as an input variable of the machine learning system.
(51) The optimal selection of B_y, B_w for the propagated quantization error is given by:
B_y = argmin_{B_y} ∥q_p∥ (13)
B_w = argmin_{B_w} ∥q_p∥ (14)
(53) After equations (13), (14) are set up, they are subsequently solved in step 22, in order to ascertain the specific bases.
(54) Equations (13) and (14) may be minimized using combinatorial testing of different values of the bases or, as an alternative, using a gradient descent method.
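A minimal sketch of such combinatorial testing follows, under the assumption that bases of the form 2^(2^z) are tried and the propagated error of equation (12) is evaluated directly; the names and the quantization helper are illustrative, not from the description:

```python
import math

def quantize_to_base(x, z):
    """Nearest representable value B^x_hat with ld(B) = 2^z (x > 0)."""
    step = 2.0 ** z                      # exponent grid spacing in log2 domain
    x_hat = round(math.log2(x) / step)   # integer (quantized) exponent
    return 2.0 ** (x_hat * step)

def propagated_error(w, x, z):
    """Equation (12): difference of exact and quantized dot products."""
    exact = sum(wi * xi for wi, xi in zip(w, x))
    quant = sum(wi * quantize_to_base(xi, z) for wi, xi in zip(w, x))
    return abs(exact - quant)

def best_base(w, x, candidates=(-1, -2, -3, -4)):
    """Combinatorial testing of bases 2^(2^z), per equations (13)/(14)."""
    return min(candidates, key=lambda z: propagated_error(w, x, z))

w = [0.5, -1.25, 2.0]   # illustrative weights
x = [1.7, 3.1, 0.9]     # illustrative positive intermediate results
z_best = best_base(w, x)
assert z_best in (-1, -2, -3, -4)
```

A gradient descent method would instead treat the base (or z) as a continuous variable and differentiate the error, as the paragraph above notes.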
(55) After the end of step 22, step 23 may be executed. In step 23, the ascertained bases are assigned to the respective parameters and/or intermediate results.
(56) In subsequent step 24, the parameters and/or intermediate results may be converted to the exponential notation as a function of the assigned bases.
(57) It is preferable for step 25 to be executed subsequently. In this, the quantization of exponents ĉ, d̂, z_c, z_d is optimized.
(58) The selection of the bit width of the quantization of the exponents may be carried out iteratively. Preferably, the exponent is quantized initially using 8 bits, maximally using 10 bits, and optionally using more than 10 bits, as well. Then, step-by-step, one bit fewer is used in each instance, as long as the machine learning system delivers results of sufficient quality, compared to, e.g., the forecast quality of the machine learning system using the initial quantization.
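The iterative bit-width selection may be sketched as follows; `evaluate_quality` stands in for running the machine learning system at a given resolution, and the toy quality model shown is purely illustrative:

```python
def reduce_bit_width(evaluate_quality, baseline, tolerance, start_bits=10):
    """Step-by-step reduction of the exponent quantization resolution:
    start at `start_bits` and drop one bit at a time while the quality
    stays within `tolerance` of the baseline."""
    bits = start_bits
    while bits > 1:
        quality = evaluate_quality(bits - 1)
        if baseline - quality > tolerance:   # quantization error too large
            break
        bits -= 1
    return bits

# toy quality model: accuracy degrades below 6 bits (illustrative only)
quality = lambda b: 0.95 if b >= 6 else 0.95 - 0.05 * (6 - b)
assert reduce_bit_width(quality, baseline=0.95, tolerance=0.01) == 6
```

The stopping criterion corresponds to the "variable characterizing a quantization error" of claim 15 being compared against a specifiable quantity.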
(59) In step 26, the parameters and/or intermediate results in the exponential notation are optionally stored in a storage device. The bases and the exponents are stored for this. Alternatively, the bases may be stored in the exponential notation, as well. The base two is preferably used in the exponential notation of these bases.
(60) It should be noted that the steps of ascertaining the bases for the intermediate results and parameters may also be carried out during the training of the machine learning system. This may be accomplished with the aid of so-called shadow weights. For this, see the paragraph "fine-tuning" on page 3 in P. Gysel et al., "Hardware-oriented Approximation of Convolutional Neural Networks," 2016, arxiv.org, [Online]: https://arxiv.org/pdf/1604.03168.pdf.
(62) The method 20 concludes at step 26.
(64) The method 30 begins with step 31. In this step, the machine learning system is trained. Optionally, step 31 may be executed several times, one after the other.
(65) After the machine learning system is trained, step 32 is executed. In this, a plurality of parameters and/or intermediate results of the machine learning system are converted to the exponential notation. For this, the bases may initially be ascertained, e.g., according to method 20.
(66) After step 32 is completed, step 33 follows. In step 33, the machine learning system ascertains intermediate results as a function of its parameters and a supplied input variable. The intermediate results, which are ascertained by multiplying at least two multiplicands, are ascertained according to the method 10.
(67) Optionally, one result of the subtraction of exponents z_d − z_c may be stored per layer of the machine learning system. This has the advantageous effect that the subtraction need not be recalculated; the respective stored result may be supplied rapidly instead.
(68) In subsequent step 34, a controlled variable for an actuator of a technical system may be ascertained as a function of an output variable of the machine learning system.
(70) A first and a second data line 401, 402 may each be connected to a register 403, 404 of the processing unit. Multiplicands c, d are loaded into these registers 403, 404. The multiplicands of this specific embodiment are the quantized exponents, preferably, binary exponents.
(71) The first multiplicand undergoes a bit-shift to the left. With this, the bit adjustment is carried out as a function of the subtraction z_d − z_c (see equations (8) and (9)), in a manner that is efficient with regard to hardware resources. Optionally, the width of the bit-shift or, more precisely, of the register, is adjustable, preferably, in each instance, for the layers of the machine learning system.
(72) Subsequently, exponent p̂ from equation (9) is calculated in a first logic unit 406. For this, first logic unit 406 includes at least one adder (advantageously, an ALU), which carries out the addition of the specific exponents according to equation (9), and at least one bit-shift unit, which shifts the result of the addition as a function of the smallest exponent z_b. It should be pointed out that exponents z_c, z_d may be supplied, e.g., with the aid of further data lines of logic unit 406. In this connection, the result of first logic unit 406 corresponds to the p̂ according to equation (9).
(73) The result of logic unit 406 is subsequently split up into an integer part 407 and into a fractional part 408. Fractional part 408 is preferably ascertained as a function of the smallest exponent z_b, which indicates the number of radix places.
(74) In a further exemplary embodiment, in particular, when the smallest exponent z_b has a negative value, first logic unit 406 only includes the adder. The result of the addition is subsequently split up into an integer part 407 and into a fractional part 408, using a fictitious shift of the radix point as a function of the smallest exponent z_b. The fictitious shifting of the radix point allows the shifting of the result of the addition by the bitshift unit to be omitted.
(75) Fractional part 408 is subsequently used as an address of LUT 409. A stored value of the LUT for the given address is subsequently transmitted to a second logic unit 410.
(76) Besides the value of the LUT, second logic unit 410 additionally obtains integer part 407 of the result of first logic unit 406. Second logic unit 410 carries out a bit-shift of the value of the LUT by the number of places of the value of integer part 407. For this, logic unit 410 advantageously includes a further bitshift unit or alternatively uses the bitshift unit of logic unit 406.
(77) Since the result from second logic unit 410 is not outputted in the exponential notation, a conversion unit 411 may be configured to convert its input variable to the quantized exponential notation. The output of conversion unit 411 may be connected to a third data line 412. The bit width of third data line 412 may be adjusted to the bit width of the quantized exponent, which means that the effective bandwidth is increased. Conversion unit 411 is preferably a priority encoder.
(78) Optionally, an accumulation unit 414 may be interconnected between second logic unit 410 and conversion unit 411 of the processing unit. In the case of repeated, serial execution of multiplications, the accumulation unit 414 is configured to accumulate the results of the multiplications. This result of the accumulation unit 414 may then be used to determine an activation, in particular, the activation of a neuron. It is noted that the ascertained activations may also be results of convolution operations. This means that, with the aid of the accumulation unit 414, in addition to matrix multiplications, the processing unit may also ascertain convolutions, as occur, e.g., in convolutional neural networks. In embodiments, the accumulation unit may be implemented by an adder tree.
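The pairwise accumulation of an adder tree (claim 4) may be sketched in software as follows; this is a sketch of the reduction order only, not of the hardware:

```python
def adder_tree(values):
    """Pairwise reduction, as an adder tree would accumulate partial
    products: each layer sums adjacent pairs until one value remains."""
    layer = list(values)
    while len(layer) > 1:
        if len(layer) % 2:               # odd element is padded with zero
            layer.append(0)
        layer = [layer[i] + layer[i + 1] for i in range(0, len(layer), 2)]
    return layer[0]

assert adder_tree([1, 2, 3, 4, 5]) == 15
```

The tree structure needs only ⌈log2 n⌉ adder stages for n partial products, which is why it suits the accumulation of many multiplication results.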
(79) In one further exemplary embodiment, the processing unit may be used for supporting the operation of a machine learning system. Now, this processing unit may be connected to a calculating machine, on which, e.g., the machine learning system is operated. Multiplications, which must be carried out on the calculating machine during the operation of the machine learning system, may then be transferred to the processing unit.
(82) In addition, vehicle 60 includes calculating machine 64 and a machine-readable storage element 65. A computer program, which includes commands that, upon execution of the commands on calculating machine 64, lead to calculating machine 64 carrying out one of the above-mentioned methods 10, 20, 30, may be stored in storage element 65. It is also possible for a download product or an artificially generated signal, which may each include the computer program, to cause calculating machine 64 to execute one of these methods after being received at a receiver of vehicle 60.
(83) In an alternative exemplary embodiment, machine learning system 62 may be used for a building control system. A user behavior is monitored with the aid of a sensor, such as a camera or a motion detector, and the actuator control unit controls, for example, a heat pump of a heating installation as a function of the output variable of machine learning system 62. Machine learning system 62 may be configured to ascertain, as a function of a measured sensor variable, an operating mode of the building control system, which is desired on the basis of this user behavior.
(84) In a further exemplary embodiment, actuator control unit 63 includes a release system. The release system decides if an object, such as a detected robot or a detected person, has access to a region, as a function of the output variable of machine learning system 62. The actuator, for example, a door opening mechanism, is preferably controlled with the aid of actuator control unit 63. In addition, the actuator control unit 63 of the previous exemplary embodiment of the building control system may include this release system.
(85) In one alternative exemplary embodiment, vehicle 60 may be a tool or a factory machine or a manufacturing robot. A material of a workpiece may be classified with the aid of machine learning system 62. In this connection, the actuator may be, e.g., a motor, which drives a grinding wheel.
(86) In one further specific embodiment of the present invention, machine learning system 62 is used in a measuring system, which is not shown in the figures. The measuring system differs from the vehicle 60 described above.
(87) In a further development of the measuring system, it is also possible for monitoring unit 61 to record an image of a human or animal body or a portion of it. For example, this may be accomplished with the aid of an optical signal, with the aid of an ultrasonic signal, or using an MRT/CT method. In this further development, the measuring system may include machine learning system 62, which is trained to output a classification as a function of the input; the classification being, e.g., which clinical picture is possibly present on the basis of this input variable.