PRIVATE DECISION TREE EVALUATION USING AN ARITHMETIC CIRCUIT
20230379135 · 2023-11-23
Inventors
CPC classification
H04L9/0618
ELECTRICITY
G06N5/01
PHYSICS
International classification
H04L9/00
ELECTRICITY
H04L9/06
ELECTRICITY
Abstract
A non-interactive protocol is provided for evaluating machine learning models such as decision trees. A client can delegate the evaluation of a machine learning model such as a decision tree to a server by sending an encrypted input and receiving only the encryption of the result. The inputs can be encoded as a vector of integers using their binary representation. The server can then evaluate the machine learning model using a homomorphic arithmetic circuit. The homomorphic arithmetic circuit provides an implementation that requires fewer multiplications than a Boolean comparison circuit. Efficient data representations are then combined with different algorithmic optimizations to keep the computational overhead and the communication cost low. Related apparatus, systems, techniques and articles are also described.
Claims
1. A computerized method comprising: homomorphically encrypting a plaintext data set; receiving a plurality of random strings from a randomizer; forming an encrypted input comprising the plurality of random strings and the homomorphically encrypted data set; transmitting the encrypted input to a server executing a decision tree for evaluation of the encrypted input by the server using a homomorphic arithmetic circuit; receiving, from the server, data characterizing the evaluation by the server; and decrypting the evaluation.
2. The computerized method of claim 1, wherein the evaluation includes a classification generated on the homomorphically encrypted data set using a decision tree.
3. The computerized method of claim 2, wherein the decision tree is a machine learning model that maps an n-dimensional attribute vector to a finite set of classification labels.
4. The computerized method of claim 3, wherein the decision tree comprises a plurality of internal nodes that each comprise a test condition, and a plurality of leaf nodes that each comprise a classification label.
5. The computerized method of claim 4, wherein the classification comprises the classification labels.
6. The computerized method of claim 1, wherein the forming an encrypted input includes encrypting a combination of the plurality of random strings and the homomorphically encrypted data set using a public key, a private key, and an evaluation key.
7. The computerized method of claim 6, further comprising: sending the public key and the evaluation key to the server.
8. A system comprising: at least one data processor; and memory storing instructions which, when executed by the at least one data processor, result in operations comprising: homomorphically encrypting a plaintext data set; receiving a plurality of random strings from a randomizer; forming an encrypted input comprising the plurality of random strings and the homomorphically encrypted data set; transmitting the encrypted input to a server executing a decision tree for evaluation using a homomorphic arithmetic circuit; receiving, from the server, data characterizing the evaluation by the server; and decrypting the evaluation.
9. The system of claim 8, wherein the evaluation includes a classification generated on the homomorphically encrypted data set using a decision tree.
10. The system of claim 9, wherein the decision tree is a machine learning model that maps an n-dimensional attribute vector to a finite set of classification labels.
11. The system of claim 10, wherein the decision tree comprises a plurality of internal nodes that each comprise a test condition, and a plurality of leaf nodes that each comprise a classification label.
12. The system of claim 11, wherein the classification comprises the classification labels.
13. The system of claim 8, wherein the forming an encrypted input includes encrypting a combination of the plurality of random strings and the homomorphically encrypted data set using a public key, a private key, and an evaluation key.
14. The system of claim 13, wherein the operations further comprise: sending the public key and the evaluation key to the server.
15. A non-transitory machine-readable medium storing instructions which, when executed by one or more processors, cause the one or more processors to perform operations comprising: homomorphically encrypting a plaintext data set; receiving a plurality of random strings from a randomizer; forming an encrypted input comprising the plurality of random strings and the homomorphically encrypted data set; transmitting the encrypted input to a server executing a decision tree for evaluation using a homomorphic arithmetic circuit; receiving, from the server, data characterizing the evaluation by the server; and decrypting the evaluation.
16. The non-transitory machine-readable medium of claim 15, wherein the evaluation includes a classification generated on the homomorphically encrypted data set using a decision tree.
17. The non-transitory machine-readable medium of claim 16, wherein the decision tree is a machine learning model that maps an n-dimensional attribute vector to a finite set of classification labels.
18. The non-transitory machine-readable medium of claim 17, wherein the decision tree comprises a plurality of internal nodes that each comprise a test condition, and a plurality of leaf nodes that each comprise a classification label.
19. The non-transitory machine-readable medium of claim 18, wherein the classification comprises the classification labels.
20. The non-transitory machine-readable medium of claim 15, wherein the forming an encrypted input includes encrypting a combination of the plurality of random strings and the homomorphically encrypted data set using a public key, a private key, and an evaluation key.
Description
DESCRIPTION OF DRAWINGS
DETAILED DESCRIPTION
[0019] The current subject matter addresses the problem of privately evaluating a machine learning model such as a decision tree on private data. As will be described in more detail below, a server executes a private decision tree model and a client seeks to classify a private attribute vector using the server's private model. The goal of the computation is to obtain the classification while preserving the privacy of both the decision tree and the client input. After the computation, the classification result is revealed only to the client, and beyond that, nothing further is revealed to either party.
[0020] The following describes a client-server protocol that delegates the complete tree evaluation to the server while preserving privacy and keeping the performance acceptable. The current subject matter can use fully (somewhat) homomorphic encryption and evaluate decision trees on ciphertexts encrypted under the client's public key. As a result, no intermediate or final computational result is revealed to the evaluating server. The core operation in a decision tree evaluation is the comparison of integers. This comparison can be done using a binary circuit, with the advantage of yielding a decision tree protocol with sublinear (in the size of the tree) to constant communication. However, comparing two μ-bit integers using a binary circuit requires a circuit with a depth of log(μ−1)+1 and O(μ log μ) homomorphic multiplications. As such, a modified version of the integer comparison is provided herein such that it can be evaluated using an arithmetic homomorphic circuit. The resulting circuit has a depth of log(μ−1) and requires exactly log(μ) multiplications. Moreover, when used for decision trees, the arithmetic comparison circuit does not increase the multiplicative depth of the decision tree evaluation circuit the way the binary comparison circuit does.
[0021] A homomorphic encryption (HE) scheme allows computations on ciphertexts by generating an encrypted result whose decryption matches the result of a function on the plaintexts. Homomorphic encryption schemes (particularly lattice-based ones) can be used that allow many chained additions and multiplications to be computed on plaintexts homomorphically. An HE scheme consists of the following algorithms: [0022] (pk, sk, ek)←KGen(λ): this probabilistic algorithm takes a security parameter λ and outputs a public key pk, a private key sk and an evaluation key ek. [0023] c←Enc(pk, m): this probabilistic algorithm takes pk and a message m and outputs a ciphertext c. [[m]] is used herein as a shorthand notation for Enc(pk, m). [0024] c←Eval(ek, f, c_1, . . . , c_n): this probabilistic algorithm takes ek, an n-ary function f and n ciphertexts c_1, . . . , c_n and outputs a ciphertext c. [0025] m′←Dec(sk, c): this deterministic algorithm takes sk and a ciphertext c and outputs a message m′.
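The four-algorithm interface above can be illustrated with a minimal, non-cryptographic mock. The class name MockHE and all internals are hypothetical; the "ciphertexts" here are plain wrappers with no security, shown only to make the KGen/Enc/Eval/Dec contract and its correctness condition concrete.

```python
import random

# Hypothetical, non-cryptographic mock of the four-algorithm HE interface.
# A real scheme (e.g., BGV) would produce actual ciphertexts; here a
# "ciphertext" is just a tagged wrapper, illustrating the API contract only.
class MockHE:
    def kgen(self, lam):
        # KGen(lambda): output public, private and evaluation keys.
        key = random.getrandbits(lam)
        return key, key, key                      # pk, sk, ek (placeholders)

    def enc(self, pk, m):
        return ("ct", m)                          # [[m]]

    def eval(self, ek, f, *cts):
        # Eval(ek, f, c_1, ..., c_n): apply f "inside" the encryption.
        return ("ct", f(*(m for _, m in cts)))

    def dec(self, sk, c):
        return c[1]                               # m'

he = MockHE()
pk, sk, ek = he.kgen(128)
c = he.eval(ek, lambda a, b: a * b, he.enc(pk, 3), he.enc(pk, 4))
# Correctness condition: Dec(sk, Eval(ek, f, [[3]], [[4]])) = f(3, 4)
assert he.dec(sk, c) == 12
```

A real somewhat-homomorphic instantiation would replace the wrapper with lattice ciphertexts, but the calling pattern stays the same.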
[0026] With the current subject matter, the homomorphic encryption should have the property of indistinguishability under chosen plaintext attack (IND-CPA) and the following correctness conditions for all m_1, . . . , m_n: [0027] Dec(sk, Enc(pk, m_i)) = Dec(sk, [[m_i]]) = m_i, [0028] Dec(sk, Eval(ek, f, [[m_1]], . . . , [[m_n]])) = Dec(sk, [[f(m_1, . . . , m_n)]]).
[0029] The encryption algorithm Enc adds "noise" to the ciphertext which increases during homomorphic evaluation. While addition of ciphertexts increases the noise linearly, multiplication increases it exponentially. If the noise becomes too large, then correct decryption is no longer possible. To prevent this from happening, one can either keep the circuit's multiplicative depth low enough or use a refresh algorithm. This algorithm consists of the bootstrapping procedure, which takes a ciphertext with large noise and outputs a ciphertext (of the same message) with a smaller amount of noise. With the current subject matter, the circuit's depth is kept low to ensure that the noise does not overwhelm the ciphertext and prevent correct decryption. This allows the use of somewhat homomorphic encryption (SHE) and avoids bootstrapping. Therefore, in the current subject matter, the homomorphic operations are prefixed with "She" for somewhat homomorphic encryption.
[0030] A Brakerski-Gentry-Vaikuntanathan (BGV) type homomorphic encryption scheme can be used. Plaintexts can be encrypted using an integer representation (an integer x_i is encrypted as [[x_i]]) or a binary representation (each bit of the bit representation x_i^b = x_iμ . . . x_i1 is encrypted). The encryption scheme as required herein can support Smart and Vercauteren's ciphertext packing (SVCP) technique to pack many plaintexts in one ciphertext. Using SVCP, a ciphertext consists of a fixed number s of slots, each capable of holding one plaintext, i.e., [[⋅|⋅| . . . |⋅]]. The encryption of a bit b replicates b to all slots, i.e., [[b]] = [[b|b| . . . |b]]. However, the bits of x_i^b can be packed in one ciphertext and denoted by [[x_iμ| . . . |x_i1|0| . . . |0]]. The computation relies on built-in routines that allow homomorphic operations on encrypted data. The relevant routines required of the homomorphic encryption scheme are: addition (SheAdd), multiplication (SheMult) and comparison (SheCmp).
[0031] The routine SheAdd takes two ciphertexts and performs a component-wise addition over the packed slots, i.e.:
SheAdd([[b_i1| . . . |b_is]], [[b_j1| . . . |b_js]]) = [[b_i1+b_j1| . . . |b_is+b_js]].
[0032] Similarly, SheMult performs a component-wise multiplication, i.e.:
SheMult([[b_i1| . . . |b_is]], [[b_j1| . . . |b_js]]) = [[b_i1·b_j1| . . . |b_is·b_js]].
[0033] Let x_i, x_j be two integers, b_ij = [x_i > x_j] and b_ji = [x_j > x_i]; the routine SheCmp takes the encryptions of the two integers and returns the encrypted comparison bits:
[[b_ij]], [[b_ji]] ← SheCmp([[x_i]], [[x_j]]).
[0034] Note that if the inputs to SheCmp are equal, then the routine outputs two ciphertexts encrypting zero, since b_ij = b_ji = 0 when x_i = x_j.
[0035] Beyond arithmetic and comparison operations, it is assumed that the encryption supports shift operations. Given a packed ciphertext [[b_1| . . . |b_s]], the shift left operation SheShiftL shifts all slots i−1 positions to the left:
SheShiftL([[b_1| . . . |b_s]], i) = [[b_i| . . . |b_s|0| . . . |0]].
[0036] The shift right operation is defined similarly for shifting to the right.
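The slot semantics of the shift routines can be modeled over plain lists. This is a plaintext sketch only; the function names mirror the routines above but operate on Python lists standing in for packed ciphertexts.

```python
# Plaintext model of packed-ciphertext shift operations: a packed ciphertext
# [[b_1|...|b_s]] is modeled as a list of s slot values.
def she_shift_l(slots, i):
    # SheShiftL([[b_1|...|b_s]], i) = [[b_i|...|b_s|0|...|0]]
    return slots[i - 1:] + [0] * (i - 1)

def she_shift_r(slots, i):
    # Shift right analogously: slot 1 moves to slot i, zeros fill the front.
    return [0] * (i - 1) + slots[: len(slots) - i + 1]

assert she_shift_l([7, 8, 9, 1], 3) == [9, 1, 0, 0]
assert she_shift_r([7, 8, 9, 1], 3) == [0, 0, 7, 8]
```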
[0037] With the current subject matter, a decision tree (DT) is a function 𝒯: Z^n → {c_0, . . . , c_(k-1)} that maps an n-dimensional attribute vector (x_0, . . . , x_(n-1)) to a finite set of classification labels. The tree consists of: [0038] internal nodes (decision nodes) containing a test condition, and [0039] leaf nodes containing a classification label.
[0040] A decision tree model consists of a decision tree and the following functions: [0041] a function thr that assigns to each decision node a threshold value, thr: [0, m−1] → Z, [0042] a function att that assigns to each decision node an attribute index, att: [0, m−1] → [0, n−1], and [0043] a labeling function lab that assigns to each leaf node a label, lab: [m, M−1] → {c_0, . . . , c_(k-1)}.
[0044] The decision at each decision node is a "greater-than" comparison between the assigned threshold and attribute values, i.e., the decision at node v is [x_att(v) ≥ thr(v)].
[0045] Given a decision tree, the index of a node is its order as computed by breadth-first search (BFS) traversal, starting at the root with index 0. If the tree is complete, then a node with index v has left child 2v+1 and right child 2v+2.
[0046] The node with index v is referred to as node v. W.l.o.g., [0, k−1] will be used as classification labels (i.e., c_j = j for 0 ≤ j ≤ k−1) and the first (second, third, . . . ) leaf in BFS traversal will be labeled with classification label 0 (1, 2, . . . ). For a complete decision tree with depth d, the leaves have indices ranging over 2^d−1, 2^d, . . . , 2^(d+1)−2 and classification labels ranging over 0, . . . , 2^d−1, respectively. Since the classification labeling is now independent of the tree, ℳ = (T, thr, att) can be used to denote a decision tree model consisting of a tree T and the labeling functions thr, att as defined above. It can be assumed that the tree parameters (such as d and m) can be derived from ℳ.
[0047] Given x = (x_0, . . . , x_(n-1)) and ℳ = (T, thr, att), then starting at the root, the decision tree evaluation (DTE) evaluates at each reached node v the decision b ← [x_att(v) ≥ thr(v)] and moves either to the left (if b = 0) or right (if b = 1) subsequent node. The evaluation returns the label of the reached leaf as the result of the computation. This result is denoted by 𝒯(x).
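The traversal just described can be sketched in a few lines over plaintext values, using the BFS array layout from above (node v has children 2v+1 and 2v+2). The example tree, its thresholds and attribute indices are hypothetical toy values.

```python
# Plaintext decision tree evaluation (DTE) over a complete tree of depth d
# stored in BFS order; att and thr model the functions att() and thr() above.
def dte(x, d, att, thr):
    v = 0
    for _ in range(d):
        b = 1 if x[att[v]] >= thr[v] else 0      # b = [x_att(v) >= thr(v)]
        v = 2 * v + 2 if b == 1 else 2 * v + 1   # right if b = 1, else left
    return v - (2 ** d - 1)                      # label of the reached leaf

# Depth-2 example: internal nodes 0, 1, 2; leaves 3..6 carry labels 0..3.
att = [0, 1, 1]
thr = [5, 3, 7]
assert dte([6, 8], 2, att, thr) == 3   # 6 >= 5 (right), 8 >= 7 (right)
assert dte([2, 1], 2, att, thr) == 0   # 2 < 5 (left), 1 < 3 (left)
```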
[0048] Let x = (x_0, . . . , x_(n-1)) be a client's private attribute vector and ℳ = (T, thr, att) be a server's private decision tree model. A private DTE (PDTE) functionality evaluates the model ℳ on input x, then reveals to the client the classification label 𝒯(x) and nothing else, while the server learns nothing, i.e., f_PDTE(ℳ, x) → (ϵ, 𝒯(x)).
[0049] Let x = (x_0, . . . , x_(n-1)) be a client's private attribute vector and ℳ = (T, thr, att) be a server's private decision tree model. A protocol Π correctly implements a PDTE functionality if, after the computation, the result c obtained by the client satisfies c = 𝒯(x).
[0050] Besides correctness, parties must learn only what they are allowed to. To formalize this, the following two definitions are needed. A function μ(·): N → R is negligible if for every positive polynomial p(·) there exists an ϵ such that for all n > ϵ: μ(n) < 1/p(n). Two distributions D_1 and D_2 are computationally indistinguishable (denoted D_1 ≈_c D_2) if no probabilistic polynomial time (PPT) algorithm can distinguish them except with negligible probability.
[0051] In SMC protocols, the view of a party consists of its input and the sequence of messages that it has received during the protocol execution. The protocol is said to be secure if, for each party, one can construct a simulator that, given only the input of that party and the output, can generate a distribution that is computationally indistinguishable from the party's view.
[0052] With regard to PDTE security, let x = (x_0, . . . , x_(n-1)) be a client's private attribute vector and ℳ = (T, thr, att) be a server's private decision tree model. A protocol Π_PDTE securely implements the PDTE functionality in the semi-honest model if the following conditions hold: [0053] there exists a PPT algorithm Sim_S^pdte that simulates the server's view View_S^Π_PDTE given only the private decision tree model ℳ such that: Sim_S^pdte(ℳ, ϵ) ≈_c View_S^Π_PDTE(ℳ, x); [0054] there exists a PPT algorithm Sim_C^pdte that simulates the client's view View_C^Π_PDTE given only the attribute vector x and the classification label 𝒯(x) ∈ {0, . . . , k−1} such that: Sim_C^pdte(x, 𝒯(x)) ≈_c View_C^Π_PDTE(ℳ, x).
[0055] A protocol Π_PDTE securely implements the PDTE functionality with one-sided simulation if the following conditions hold: [0056] for every pair x, x′ of different client input vectors it holds that View_S^Π_PDTE(ℳ, x) ≈_c View_S^Π_PDTE(ℳ, x′) (Equation 3), and [0057] the protocol is simulatable against any PPT adversary controlling the client.
[0058] Note that for the one-sided simulation, the requirement in Equation 3 is that the protocol should be indistinguishable against any PPT adversary that controls the server. This means the server should not be able to distinguish between the case where the client uses x and the case where it uses x′. Moreover, the protocol should be simulatable against any adversary controlling the client.
[0059] The following describes a comparison protocol using an integer encoding which is based on the Lin-Tzeng protocol. Let x_i and x_j be inputs of the client and the server, respectively, with the goal to compute [x_i > x_j]. With regard to input encoding, let INT(y_η . . . y_1) = y be a function that takes a bit string of length η and parses it into the η-bit integer y = Σ_(l=1)^η y_l·2^(l-1). The 1-encoding of x_i with bit representation x_iμ . . . x_i1 is the vector V_xi^1 = (u_iμ, . . . , u_i1) with u_il = INT(x_iμ . . . x_il) if x_il = 1 and u_il = r_il^(1) otherwise; the 0-encoding of x_j is the vector V_xj^0 = (v_jμ, . . . , v_j1) with v_jl = INT(x_jμ . . . x_jl′ 1) if x_jl = 0 and v_jl = r_jl^(0) otherwise, [0060] where l′ = l+1 and r_il^(0), r_il^(1) are random numbers of a fixed bitlength ν > μ (e.g., 2^μ ≤ r_il^(0), r_il^(1) < 2^(μ+1)) with LSB(r_il^(0)) = 0 and LSB(r_il^(1)) = 1 (LSB is the least significant bit). If the INT function is used to compute the element at position l, then it is called a proper encoded element; otherwise it is called a random encoded element. Note that a random encoded element r_il^(1) at position l in the 1-encoding of x_i is chosen such that it is guaranteed to be different from a proper or random encoded element at position l in the 0-encoding of x_j, and vice versa. Hence, it is enough if r_il^(1) and r_il^(0) are one or two bits longer than any possible proper encoded element at position l. Also note that the bit string x_iμ x_i(μ-1) . . . x_il is interpreted by the function INT as the bit string y_(μ-l+1) . . . y_1 with length μ−l+1, where y_1 = x_il, y_2 = x_i(l+1), . . . , y_(μ-l+1) = x_iμ. If V_xi^1 and V_xj^0 are viewed as sets, then x_i > x_j holds if and only if they have exactly one common element.
[0061] For example, if μ = 3, x_i = 6 = 110_2 and x_j = 2 = 010_2, then
V_xi^1 = (INT(1), INT(11), r_i1^(1)) = (1, 3, r_i1^(1)) and
V_xj^0 = (INT(1), r_j2^(0), INT(011)) = (1, r_j2^(0), 3),
[0062] and V_xi^1 − V_xj^0 = (0, r_2, r_1), where r_2 = 3 − r_j2^(0) and r_1 = r_i1^(1) − 3 are random numbers. On the other hand, if x_i = 2 = 010_2 and x_j = 6 = 110_2, then V_xi^1 = (r_i3^(1), 1, r_i1^(1)) and V_xj^0 = (r_j3^(0), r_j2^(0), 7), and every component of V_xi^1 − V_xj^0 is a nonzero random number.
[0063] Let x_i and x_j be two integers; then x_i > x_j if and only if V_xi^1 and V_xj^0 have exactly one common element.
[0064] A proof sketch follows, writing V_xi^1 = (u_iμ, . . . , u_i1) and V_xj^0 = (v_jμ, . . . , v_j1).
[0065] If x_i > x_j then there exists a position l such that for each h, μ ≥ h ≥ l+1, x_ih = x_jh, and x_il = 1 and x_jl = 0. This implies u_il = v_jl.
[0066] For h, μ ≥ h ≥ l+1, either u_ih is a proper encoded element (a prefix of x_i) while v_jh is random, or u_ih is random while v_jh is a proper encoded element (a prefix of x_j). From the choice of r_ih^(0), r_ih^(1), we have u_ih ≠ v_jh.
[0067] For h, l−1 ≥ h ≥ 1, there are three cases: u_ih and v_jh (as bit strings) are both proper encoded elements, only one of them is proper, or both are random. In the first case the difference of the bits at position l, and in the other cases the choice of the random encoded elements, imply that u_ih ≠ v_jh. This concludes the proof.
[0068] Protocol: let [[V_xi^1]] = ([[u_iμ]], . . . , [[u_i1]]) (respectively, [[V_xj^0]] = ([[v_jμ]], . . . , [[v_j1]])) denote the component-wise encryption of the 1-encoding of x_i (respectively, the 0-encoding of x_j). The evaluator computes the component-wise differences [[d_l]] ← [[u_il − v_jl]] for l = μ, . . . , 1 and returns the resulting ciphertexts; by the above lemma, exactly one d_l is zero if and only if x_i > x_j. This computation is illustrated in Algorithm 1.
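The encodings and the comparison criterion can be checked end-to-end over plaintext integers. This is a sketch assuming the parameter choices described above (random encoded elements in [2^μ, 2^(μ+1)) with the stated LSB); the function names are hypothetical, and in the protocol the subtraction would of course be homomorphic.

```python
import random

MU = 3  # bit length mu (small, for illustration)

def int_of(bits):
    # INT: parse a bit string (most significant bit first) into an integer.
    v = 0
    for b in bits:
        v = 2 * v + b
    return v

def encode(x, kind):
    # kind=1: 1-encoding V_x^1; kind=0: 0-encoding V_x^0. Position l holds a
    # proper encoded element when bit x_l == kind, otherwise a random encoded
    # element r with 2^mu <= r < 2^(mu+1) and LSB(r) = kind.
    bits = [(x >> (MU - 1 - k)) & 1 for k in range(MU)]   # x_mu ... x_1
    enc = []
    for l in range(MU, 0, -1):
        prefix = bits[: MU - l + 1]                       # x_mu ... x_l
        if prefix[-1] == kind:
            if kind == 0:
                prefix = prefix[:-1] + [1]                # x_mu ... x_{l+1} 1
            enc.append(int_of(prefix))
        else:
            r = random.randrange(2 ** MU, 2 ** (MU + 1))
            enc.append((r & ~1) | kind)                   # force LSB = kind
    return enc                                            # (v_mu, ..., v_1)

def lin_compare(xi, xj):
    # [x_i > x_j] holds iff exactly one component of V_xi^1 - V_xj^0 is zero.
    diff = [u - v for u, v in zip(encode(xi, 1), encode(xj, 0))]
    return diff.count(0) == 1

# Exhaustive check of the comparison criterion for all mu-bit inputs.
assert all(lin_compare(a, b) == (a > b)
           for a in range(2 ** MU) for b in range(2 ** MU))
```

The exhaustive check passes deterministically despite the random fillers, because the length and LSB constraints guarantee a random encoded element never collides with any element at the same position in the other encoding.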
[0069] In contrast to the original protocol, there can be differences such as:
[0070] Additive HE instead of multiplicative: as explained above, multiplication increases the noise exponentially while addition increases it only linearly.
[0071] The INT function: the encodings are parsed as integers, so that two encoded elements can be compared with a single homomorphic subtraction instead of a bitwise comparison.
[0072] The choice of random encoded elements r.sub.il.sup.(0), r.sub.il.sup.(1): We choose the random encoded elements as explained above and encrypt them, while the original protocol uses ciphertexts chosen randomly in the ciphertext space.
TABLE-US-00001 Algorithm 1: Modified Lin-Tzeng Comparison Protocol 1: function LinCompare([[V_xi^1]], [[V_xj^0]]) 2: for l = μ to 1 do 3: [[d_l]] ← [[u_il − v_jl]] 4: return [[d_μ]], . . . , [[d_1]]
[0073] Encrypting the encodings on both sides: in the original protocol, the evaluator has access to x_j in plaintext and does not need to choose random encoded elements. By encoding as explained in the modified version herein, both encodings can be encrypted and the evaluation delegated to a third party which is not allowed to have access to the inputs in plaintext.
[0074] Aggregation: the multiplication of the ciphertexts returned by Algorithm 1 yields a single ciphertext encrypting either 0 (if x_i > x_j) or a random number (otherwise).
[0075] Edges of the tree can be marked with the corresponding comparison result. So if the comparison result at node v is the bit b, then the right edge outgoing from v is marked with b and the left edge is marked with 1−b. This information can be stored at the child nodes of v and is referred to as cmp.
[0076] For a decision tree model ℳ = (T, thr, att), Node can be a data structure that for each node v defines the following: [0077] v.threshold stores the threshold thr(v) of the node v, [0078] v.aIndex stores the associated attribute index att(v), [0079] v.parent stores the pointer to the parent node, which is null for the root node, [0080] v.left stores the pointer to the left child node, which is null for each leaf node, [0081] v.right stores the pointer to the right child node, which is null for each leaf node, [0082] v.cmp stores the comparison result on the edge from the parent node as described above, and [0083] v.cLabel stores the classification label if v is a leaf node.
TABLE-US-00002 Algorithm 2: Modified Lin-Tzeng Protocol for PDTE 1: function LinCompareDT([[V_xi^1]], [[V_yj^0]]) 2: [[d_μ]], . . . , [[d_1]] ← LinCompare([[V_xi^1]], [[V_yj^0]]) 3: return the product [[d_μ · . . . · d_1]] computed with SheMult
[0084] 𝒟 is used to denote the set of all decision nodes and ℒ the set of all leaf nodes of T. As a result, the equivalent notation ℳ = (T, thr, att) = (𝒟, ℒ) can be used.
[0085] With the data structure defined above, the classification function can be defined as follows. Let x = (x_0, . . . , x_(n-1)) be an attribute vector and ℳ = (𝒟, ℒ) a decision tree model. The classification function can be defined as: f_c(x, ℳ) = tr(x, root), [0086] where root is the root node and tr is the traverse function defined as: tr(x, v) = v.cLabel if v is a leaf node, tr(x, v.right) if v is a decision node and [x_att(v) ≥ thr(v)] = 1, and tr(x, v.left) otherwise.
[0087] Let x = (x_0, . . . , x_(n-1)) be an attribute vector and ℳ = (T, thr, att) = (𝒟, ℒ) a decision tree model. Then it holds that: f_c(x, ℳ) = b·tr(x, root.right) + (1−b)·tr(x, root.left), [0088] where b = [x_att(root) ≥ thr(root)] is the comparison at the root node.
[0089] The proof follows by induction on the depth of the tree. In the base case, there is a tree of depth one (i.e., the root and two leaves). In the induction step, there are two trees of depth d which can be joined by adding a new root.
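The arithmetic form of the traverse function can be exercised over plaintext integers as a sanity model of the homomorphic computation: tr(x, v) = b·tr(x, v.right) + (1−b)·tr(x, v.left). The Node layout follows the data structure above; the example tree and its values are hypothetical.

```python
# Plaintext model of the arithmetic traverse function tr from above.
class Node:
    def __init__(self, threshold=None, aIndex=None,
                 left=None, right=None, cLabel=None):
        self.threshold, self.aIndex = threshold, aIndex
        self.left, self.right, self.cLabel = left, right, cLabel

def tr(x, v):
    if v.cLabel is not None:                       # leaf node: return label
        return v.cLabel
    b = 1 if x[v.aIndex] >= v.threshold else 0     # b = [x_att(v) >= thr(v)]
    return b * tr(x, v.right) + (1 - b) * tr(x, v.left)

# Depth-2 tree with labels 0..3 on the leaves in left-to-right order.
root = Node(threshold=5, aIndex=0,
            left=Node(threshold=3, aIndex=1,
                      left=Node(cLabel=0), right=Node(cLabel=1)),
            right=Node(threshold=7, aIndex=1,
                       left=Node(cLabel=2), right=Node(cLabel=3)))
assert tr([6, 8], root) == 3
assert tr([6, 2], root) == 2
```

Because only additions and multiplications appear, the same recursion can be unrolled into the arithmetic circuit that the server evaluates homomorphically.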
[0090] Initialization includes a one-time key generation in which the client generates an appropriate triple (pk, sk, ek) of public, private and evaluation keys for a homomorphic encryption scheme. Then the client sends (pk, ek) to the server. For each input classification, the client just encrypts its input and sends it to the server. To reduce the communication cost of sending the client's input, one can use a trusted randomizer that does not take part in the real protocol and is not allowed to collaborate with the server. The trusted randomizer generates a list of random strings r and sends the encrypted strings [[r]] to the server and the list of r's to the client. For an input x, the client then sends x+r to the server in the real protocol. This technique is similar to commodity-based cryptography, with the difference that the client can play the role of the randomizer itself and send the list of [[r]]'s (when the network is not too busy) before the protocol's start.
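The randomizer trick can be sketched with mock ciphertexts: offline, the randomizer hands r to the client and [[r]] to the server; online, the client sends only the short masked value x+r, from which the server recovers [[x]] with one plaintext-ciphertext subtraction. The modulus and all names are hypothetical, and the tuples stand in for real ciphertexts.

```python
import random

MOD = 2 ** 16  # hypothetical plaintext modulus

def offline_phase():
    # Trusted randomizer: r goes to the client, [[r]] (mock) to the server.
    r = random.randrange(MOD)
    return r, ("ct", r)

def client_online(x, r):
    # The client sends only the short masked plaintext x + r.
    return (x + r) % MOD

def server_online(masked, ct_r):
    # Plaintext-ciphertext subtraction: [[x]] = (x + r) - [[r]].
    return ("ct", (masked - ct_r[1]) % MOD)

r, ct_r = offline_phase()
ct_x = server_online(client_online(42, r), ct_r)
assert ct_x[1] == 42   # the recovered ciphertext "decrypts" to the input
```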
[0091] With regard to encoding and encrypting input, the protocol starts with the client encoding, encrypting and sending its input to the server. For each attribute value x_i, the client sends the encryptions [[V_xi^0]] and [[V_xi^1]] of its 0-encoding and 1-encoding. [0092] Let [[V_x]] = {([[V_xi^0]], [[V_xi^1]]) : 0 ≤ i ≤ n−1} denote the encoding and encryption of all attribute values.
[0093] The server does the same with the threshold values. For each threshold value y_j, the server computes the encryptions [[V_yj^0]] and [[V_yj^1]]. Let [[V_D]] = {([[V_yj^0]], [[V_yj^1]]) : v ∈ 𝒟 ∧ v.threshold = y_j} denote the encoding and encryption of all threshold values in 𝒟. This computation is illustrated by Encode(x) and Encode(𝒟) in Protocol 6. Note that this is still compatible with the trusted randomizer technique, where sequences of random integers are used.
TABLE-US-00003 Algorithm 3: Computing Decision Bits 1: function EvalDNode(𝒟, [[V_D]], [[V_x]]) 2: for each v ∈ 𝒟 do 3: let y_j ← v.threshold and i ← v.aIndex 4: [[v.right.cmp]] ← LinCompareDT([[V_xi^1]], [[V_yj^0]]) 5: [[v.left.cmp]] ← LinCompareDT([[V_yj^1]], [[V_xi^0]])
TABLE-US-00004 Algorithm 4: Aggregating Decision Bits 1: function EvalPaths(𝒟, ℒ) 2: for each v ∈ ℒ do 3: let P_v be the array of nodes on the path (root → v) 4: [[cost_v]] ← [[0]] 5: for each u ∈ P_v do 6: [[cost_v]] ← [[cost_v]] + [[u.cmp]]
[0094] With regard to evaluating decision nodes, the server evaluates the comparison at each decision node. For each decision node v ∈ 𝒟, let v.threshold = y_j and i = v.aIndex. It can be assumed that x_i ≠ y_j for all i, j. The parties can ensure this by having the client add a bit 0 to the bit representation of each x_i, and the server add a bit 1 to the bit representation of each y_j, before encoding the values. Then, from the definition of the tree evaluation, the evaluation moves to the right if [x_i ≥ y_j] and to the left otherwise. This is equivalent to testing [x_i > y_j] or [y_j > x_i], since x_i ≠ y_j is assumed. Therefore, for each decision node with threshold y_j and corresponding attribute x_i, the server uses LinCompareDT to compute the comparison results for the right and left edges, as illustrated in Algorithm 3.
[0095] With regard to aggregating decision results, for each leaf node v ∈ ℒ, the server aggregates the comparison results along the path from the root to v. As a result of Algorithm 3, one edge of each decision node is marked with a ciphertext of 0, while the other is marked with a ciphertext of a random plaintext. It follows that the sum of the marks along each path of the tree results in an encryption of 0 for the classification path and an encryption of a random plaintext for all other paths. This computation is illustrated in Algorithm 4.
[0096] To reveal the final result to the client, the following can be performed. For each ciphertext [[cost_v]] of Algorithm 4, the server chooses a random number r_v, computes [[result_v]] ← [[cost_v·r_v + v.cLabel]] and sends the resulting ciphertexts to the client in a random order. This is illustrated in Algorithm 5. Alternatively, the server can make a trade-off between communication and computation by using the shift operation to pack many results in a single ciphertext.
TABLE-US-00005 Algorithm 5: Finalizing 1: function Finalize(ℒ) 2: for each v ∈ ℒ do 3: choose a random number r_v 4: [[result_v]] ← [[cost_v·r_v + v.cLabel]]
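Algorithms 3 through 5 can be exercised together on plaintext stand-ins: marks are 0 on the taken edge and random nonzero values elsewhere, path costs are summed root-to-leaf, and finalizing blinds every non-classification label. This is a toy model; to keep the true label identifiable without a ciphertext, the random values here are deliberately chosen larger than any label, which is an assumption of this sketch rather than part of the protocol.

```python
import random

# Plaintext model of Algorithms 3-5 on a complete depth-d tree in BFS layout.
def classify(d, att, thr, x):
    n_nodes = 2 ** (d + 1) - 1
    cmp = [0] * n_nodes                          # edge marks stored at children
    for v in range(2 ** d - 1):                  # decision nodes (Algorithm 3)
        b = 1 if x[att[v]] >= thr[v] else 0
        rnd = random.randrange(2 ** d, 1000)     # nonzero, above any label
        cmp[2 * v + 2] = 0 if b == 1 else rnd    # right edge: 0 iff going right
        cmp[2 * v + 1] = rnd if b == 1 else 0    # left edge: 0 iff going left
    results = []
    for leaf in range(2 ** d - 1, n_nodes):      # leaves (Algorithms 4 and 5)
        cost, u = 0, leaf
        while u != 0:                            # sum the marks on root -> leaf
            cost += cmp[u]
            u = (u - 1) // 2
        label = leaf - (2 ** d - 1)              # leaf labels 0 .. 2^d - 1
        results.append(cost * random.randrange(1, 1000) + label)
    random.shuffle(results)                      # sent to client, random order
    # the client keeps the single value that lies in the label range
    return [r for r in results if r < 2 ** d][0]

assert classify(2, [0, 1, 1], [5, 3, 7], [6, 8]) == 3
assert classify(2, [0, 1, 1], [5, 3, 7], [2, 1]) == 0
```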
TABLE-US-00006 Protocol 6: The Basic Protocol. Client input: x; client output: 𝒯(x). Server input: ℳ = (𝒟, ℒ); server output: ϵ.
[0097] As illustrated in Protocol 6, the whole computation is performed by the server. The server sequentially computes the algorithms described above and sends the resulting ciphertexts to the client. The client decrypts and outputs the resulting classification label. The correctness is straightforward and follows from the above.
[0098] With the current subject matter, ciphertext packing can be used to improve computation and communication. Ciphertext packing means that each ciphertext encrypts s plaintexts, where s is the number of slots in the ciphertext. This property can then be used in three different ways. First, one could pack each encoding (0 or 1) of each value (attribute or threshold) in a single ciphertext, allowing the server to compute the difference of two encodings in Algorithm 1 with a single homomorphic subtraction. Second, one could pack several attribute values in one ciphertext, allowing the server to compare one threshold value against several attribute values at once. Third, one could pack several threshold values in one ciphertext, allowing the server to compare one attribute value against several threshold values at once.
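The SIMD effect of SVCP packing can be modeled with fixed-length lists: one slot-wise operation processes s plaintexts at once. The slot count s = 4 and the helper names are hypothetical.

```python
# Toy model of SVCP ciphertext packing: a ciphertext holds s slots and the
# homomorphic operations act slot-wise (SIMD).
S = 4

def pack(values):
    return (list(values) + [0] * S)[:S]          # [[v_1|...|v_k|0|...|0]]

def she_add(c1, c2):
    return [a + b for a, b in zip(c1, c2)]       # slot-wise addition

def she_mult(c1, c2):
    return [a * b for a, b in zip(c1, c2)]       # slot-wise multiplication

# One subtraction compares four packed encoding components at once.
diff = she_add(pack([1, 3, 9, 0]), [-x for x in pack([1, 8, 3, 0])])
assert diff == [0, -5, 6, 0]
```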
[0099] Packing 0-Encodings and 1-Encodings: the modified Lin-Tzeng comparison as provided herein requires only a component-wise subtraction and a multiplication of all components. Therefore, the client can pack the 0-encoding of each x_i in one ciphertext and send [[v_iμ| . . . |v_i1|0| . . . |0]] instead of μ separate ciphertexts ([[v_iμ]], . . . , [[v_i1]]), and similarly for the 1-encoding. The component-wise subtraction then requires a single homomorphic operation, and the product of the slots can be computed using the shift operations together with slot-wise multiplications.
[0100] Packing Attribute Values: let x^(1), . . . , x^(s) be s possible attribute vectors with x^(l) = [x_1^(l), . . . , x_n^(l)], 1 ≤ l ≤ s. For each attribute x_i, the client packs the corresponding encoding components of x_i^(1), . . . , x_i^(s) slot-wise, such that the l-th slot of each resulting ciphertext holds the encoding component of x_i^(l) (Equation 7); the 1-encoding is handled similarly.
[0101] To shorten the notation, let y_j denote the threshold of the j-th decision node (i.e., y_j = v_j.threshold) and let V_yj^0 and V_yj^1 denote its 0-encoding and 1-encoding, respectively.
[0102] The above described encoding allows s attribute values to be compared together with one threshold. This is possible because the routine LinCompareDT operates on the slots in parallel:
LinCompareDT([[V_xi^1]], [[V_yj^0]]) = [[b_ij^(1)| . . . |b_ij^(s)]],
where b_ij^(l) = 0 if x_i^(l) > y_j and b_ij^(l) is a random number otherwise. This results in a single ciphertext such that the l-th slot contains the comparison result between x_i^(l) and y_j.
[0103] Aggregating decision bits remains unchanged as described in Algorithm 4. This results in a packed ciphertext [[b_v]] = [[b_v^(1)| . . . |b_v^(s)]] for each leaf v ∈ ℒ, where b_v^(l) = 0 if x^(l) classifies to leaf v and b_u^(l) is a random number for any other leaf u ∈ ℒ − {v}. Algorithm 5 remains unchanged as well.
[0104] Packing Threshold Values: in this case, the client encrypts a single attribute value in one ciphertext, while the server packs multiple threshold values in a single ciphertext. Hence, for an attribute value x_i, the client generates the ciphertexts as in Equation 7 for the 0-encoding and handles the 1-encoding similarly.
[0105] Let m_i be the number of decision nodes that compare to the attribute x_i (i.e., m_i = |{v_j ∈ 𝒟 : v_j.aIndex = i}|). The server packs all corresponding threshold values in ⌈m_i/s⌉ ciphertext(s) as illustrated in Equation 8 for the 0-encoding and handles the 1-encoding similarly.
[0106] The packing of threshold values allows one attribute value to be compared against multiple threshold values together. Unfortunately, individual slots cannot be accessed while performing homomorphic operations. Hence, to aggregate the decision bits, m_i copies of the resulting packed decision results can be made, and each decision result can be shifted to the first slot. Then the aggregation of the decision results and the finalizing algorithm work as in the previous case, with the only difference that only the result in the first slot matters and the remaining slots can be set to 0.
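The copy-and-shift step just described can be modeled over slot lists: each of the m_i copies is shifted so that decision result j lands in slot 1, after which only slot 1 of each copy is used. The function name is hypothetical.

```python
# Sketch of the copy-and-shift step for packed threshold comparisons.
def split_to_first_slot(packed, m_i):
    copies = []
    for j in range(1, m_i + 1):
        shifted = packed[j - 1:] + [0] * (j - 1)   # SheShiftL(packed, j)
        copies.append(shifted)    # slot 1 of this copy is decision result j
    return copies

copies = split_to_first_slot([5, 0, 7, 0], 3)
assert [c[0] for c in copies] == [5, 0, 7]         # slot 1 of each copy
```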
[0109] In one example, a disk controller 248 can interface one or more optional disk drives to the system bus 204. These disk drives can be external or internal floppy disk drives such as 260, external or internal CD-ROM, CD-R, CD-RW or DVD drives, or solid state drives such as 252, or external or internal hard drives 256. As indicated previously, these various disk drives 252, 256, 260 and disk controllers are optional devices. The system bus 204 can also include at least one communication port 220 to allow for communication with external devices either physically connected to the computing system or available externally through a wired or wireless network. In some cases, the at least one communication port 220 includes or otherwise comprises a network interface.
[0110] To provide for interaction with a user, the subject matter described herein can be implemented on a computing device having a display device 240 (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information obtained from the bus 204 via a display interface 214 to the user and an input device 232 such as a keyboard and/or a pointing device (e.g., a mouse or a trackball) and/or a touchscreen by which the user can provide input to the computer. Other kinds of input devices 232 can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback by way of a microphone 236, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input. The input device 232 and the microphone 236 can be coupled to and convey information via the bus 204 by way of an input device interface 228. Other computing devices, such as dedicated servers, can omit one or more of the display 240 and display interface 214, the input device 232, the microphone 236, and input device interface 228.
[0111] One or more aspects or features of the subject matter described herein can be realized in digital electronic circuitry, integrated circuitry, specially designed application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), computer hardware, firmware, software, and/or combinations thereof. These various aspects or features can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. The programmable system or computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
[0112] These computer programs, which can also be referred to as programs, software, software applications, applications, components, or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural language, an object-oriented programming language, a functional programming language, a logical programming language, and/or in assembly/machine language. As used herein, the term “machine-readable medium” refers to any computer program product, apparatus and/or device, such as for example magnetic discs, optical disks, memory, and Programmable Logic Devices (PLDs), used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor. The machine-readable medium can store such machine instructions non-transitorily, such as for example as would a non-transient solid-state memory or a magnetic hard drive or any equivalent storage medium. The machine-readable medium can alternatively or additionally store such machine instructions in a transient manner, such as for example as would a processor cache or other random access memory associated with one or more physical processor cores.
[0114] In the descriptions above and in the claims, phrases such as “at least one of” or “one or more of” may occur followed by a conjunctive list of elements or features. The term “and/or” may also occur in a list of two or more elements or features. Unless otherwise implicitly or explicitly contradicted by the context in which it is used, such a phrase is intended to mean any of the listed elements or features individually or any of the recited elements or features in combination with any of the other recited elements or features. For example, the phrases “at least one of A and B;” “one or more of A and B;” and “A and/or B” are each intended to mean “A alone, B alone, or A and B together.” A similar interpretation is also intended for lists including three or more items. For example, the phrases “at least one of A, B, and C;” “one or more of A, B, and C;” and “A, B, and/or C” are each intended to mean “A alone, B alone, C alone, A and B together, A and C together, B and C together, or A and B and C together.” In addition, use of the term “based on,” above and in the claims, is intended to mean “based at least in part on,” such that an unrecited feature or element is also permissible.
[0115] The subject matter described herein can be embodied in systems, apparatus, methods, and/or articles depending on the desired configuration. The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. For example, the implementations described above can be directed to various combinations and subcombinations of the disclosed features and/or combinations and subcombinations of several further features disclosed above. In addition, the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. Other implementations may be within the scope of the following claims.