
gogo9th commented on August 20, 2024

Thanks, now I fully understand how silent OT works. I really appreciate your strong support. I will write down all the principles learned in this thread in my personal crypto textbook.

ladnir commented on August 20, 2024

Binary beaver triples? See here: https://github.com/Visa-Research/volepsi/tree/main/volePSI/GMW, specifically the SilentTripleGen file.

For arithmetic triples it's more complicated. You need to perform bit decomposition on one of the shares, and then use the OTs to select either r_i or r_i + x * 2^i. One party will have the share s0 = -sum_i r_i while the other will sum their selected strings. Here's roughly an example of this protocol: https://github.com/osu-crypto/libOTe/tree/master/libOTe/Vole/Noisy
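
To make the resulting correlation explicit, here is the arithmetic written out (a sketch; b_i denotes the i-th bit of the decomposed share y, and the r_i are the sender's random pads, as in the message above):

  \begin{aligned}
  y &= \sum_i b_i 2^i \\
  \text{OT}_i:&\ \text{the receiver with choice bit } b_i \text{ learns } m_i = r_i + b_i \cdot x \cdot 2^i \\
  s_1 &= \sum_i m_i = \sum_i r_i + x y, \qquad s_0 = -\sum_i r_i \\
  s_0 + s_1 &= x y
  \end{aligned}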

ladnir commented on August 20, 2024

Also note what I described is half of a beaver triple, where one party knows a, the other knows b, and they get a sharing of c = ab. You can combine two of these to get a normal beaver triple.
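
For completeness, the combining step written out (a sketch; a_0, b_0 and a_1, b_1 denote the two parties' additive shares of a and b):

  c = ab = (a_0 + a_1)(b_0 + b_1) = \underbrace{a_0 b_0}_{\text{local}} + \underbrace{a_0 b_1 + a_1 b_0}_{\text{two half triples}} + \underbrace{a_1 b_1}_{\text{local}}

Each cross term is one instance of the half triple above: one party knows its a-share, the other its b-share, and they obtain an additive sharing of the product.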

gogo9th commented on August 20, 2024

Thanks for your description. Could you check if my understanding of what you described is correct?

We create Beaver's triple between 2 parties as follows:
[images: Beaver triple construction between two parties]

where we do OLE (oblivious linear evaluation) as follows:

[image: OLE protocol steps]

And you are suggesting that we can silently generate Beaver's triples by using silent 1-out-of-2 OT at step 3 of OLE above. Is this correct?

ladnir commented on August 20, 2024

Yes, that's correct. So by your description you are interested in arithmetic beaver triples, not binary. An example of the OLE code is contained in the NoisyVole class. This protocol is slightly more general, but if you simply make the vector of size 1, then it's the same as OLE.

There's also a pretty-picture way of visualizing how the OLE derandomization works. You can view a, b as the side lengths of a rectangle with area c. You want to convert this area into the area of a new rectangle with side lengths x, y. You can do this without revealing x, y.

[image: rectangle area decomposition for the OLE derandomization]

I can never remember the details of the beaver triple stuff, but given this picture it's easy to derive. You are just adding up areas to compute xy given c, e, d and access to your local variables.
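
Concretely, the picture encodes the usual Beaver derandomization identity (a sketch, assuming e = x - a and d = y - b are the differences the parties open):

  xy = (a + e)(b + d) = \underbrace{ab}_{=\,c} + ad + eb + ed

Each party adds its shares of c, ad, and eb; the public term ed is added by one party only.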

gogo9th commented on August 20, 2024

Thanks for your clarification, and your figure example is intuitively very clear: we only need to remember this figure.

Today I tested OT examples with your library, and in particular I tried to understand the following two examples:

- Silent Random OT: void OtExt_Silent_random_Test(const CLP& cmd)

- Silent Correlation OT: void OtExt_Silent_correlated_Test(const CLP& cmd)

In these two examples, I realized that the sender doesn't get to manually set the message pairs (i.e., <msg1, msg2>); these message pairs are generated randomly. And one difference between silent random OT and silent correlated OT is that in silent random OT the receiver can manually set the choice bits, but in silent correlated OT the receiver's choice bits have to be randomly generated. Is my understanding correct?

I wonder if libOTe has any API for OT in which the sender can manually set the message pairs to send to the receiver.

ladnir commented on August 20, 2024

You can; there are the following APIs:

  • silent send: messages and choice bits are random.
  • send: messages are random but choice bits are chosen.
  • sendChosen: both messages and bits are chosen.
  • sendCorrelated: choice bits are chosen, the zero message is random, and the one message is correlated via the correlation function CorFn. In particular, message[i,0] = random, message[i,1] = CorFn(message[i,0], i).

https://github.com/osu-crypto/libOTe/blob/master/libOTe/TwoChooseOne/OTExtInterface.h#L75

FYI, if you want to optimize the OLE protocol, you don't want sendChosen. You can allow the zero message to be the b_i in your description (the r_i in mine). Therefore, you don't need to send it. You could use correlated OT, but it's basically as easy to write it by hand.
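
Here is a self-contained sketch of that OLE-from-OT idea in plain C++ (not the libOTe API; the OT is replaced by an in-process selection so the share arithmetic can be checked end to end, with everything mod 2^64):

#include <cstdint>
#include <iostream>
#include <random>

int main()
{
    std::mt19937_64 prng(42);
    std::uniform_int_distribution<std::uint64_t> dist;

    std::uint64_t x = dist(prng); // the sender's input
    std::uint64_t y = dist(prng); // the receiver's input

    std::uint64_t s0 = 0; // sender's output share, will be -sum_i r_i
    std::uint64_t s1 = 0; // receiver's output share

    for (int i = 0; i < 64; ++i)
    {
        std::uint64_t r = dist(prng);    // pad r_i; with random OT this is just
                                         // the zero message itself
        std::uint64_t m0 = r;            // message 0: the pad
        std::uint64_t m1 = r + (x << i); // message 1: pad + x * 2^i
        bool b = (y >> i) & 1;           // receiver's choice: the i-th bit of y
        s1 += b ? m1 : m0;               // receiver sums its selected messages
        s0 -= r;                         // sender accumulates -sum_i r_i
    }

    // The shares form an additive sharing of the product x * y (mod 2^64).
    std::cout << ((s0 + s1) == x * y ? "ok" : "bug") << "\n";
}

Since message 0 is nothing but the random pad r_i, a random OT already provides it for free, which is exactly the optimization described above.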

gogo9th commented on August 20, 2024

Thanks for the notes. Today I managed to run your library's silentOT protocol with AsioSockets between a client and server.

One issue I had was that the compilation of my example code complained about the -fPIC issue on the SimplestOT library. I managed to avoid this error by compiling your library as follows:

python3 build.py --boost --debug -DENABLE_SIMPLESTOT=OFF -DENABLE_SIMPLESTOT_ASM=OFF -DENABLE_SILENTOT=ON

Then I could compile my code with the following command:

g++ -msse4.1 -mpclmul -Wall -Wextra -std=c++20 -o beaver-server beaver-server.cpp -llibOTe -lbitpolymul -lcryptoTools -lcoproto

One question: in order to use these functions you mentioned:

  • sendChosen: both messages and bits are chosen.
  • sendCorrelated: choice bits are chosen, the zero message is random, and the one message is correlated via the correlation function CorFn. In particular, message[i,0] = random, message[i,1] = CorFn(message[i,0], i).

, which CMake compilation options should I enable? (I'd like to avoid the --all option, because doing so gives me the -fPIC-related error when compiling my own program.)

And I wonder if you happen to know any good study materials on how silent OT works. I saw the original paper https://eprint.iacr.org/2019/1159, but I wonder if there are any easier-to-read study materials by any chance.

ladnir commented on August 20, 2024

Here's some text I sent someone else and some slides https://1drv.ms/p/s!AmQ6D7DVFTx8gYZL5wC6GP8DFvmxzw

The subfield is G = {0,1} and the full field is F = {0,1}^128.

What's implemented is:

  • silent OT: G = {0,1}, F = {0,1}^128
  • silent VOLE: G = {0,1}^128, F = {0,1}^128

Or more generally, we have arbitrary subfields implemented, where F is an extension of G. Once you pick your subfield G, you will also need to pick what extension field F you want, i.e. F = G^m for some integer m.
First, how does silent OT work where G = {0,1}?

  • The sender samples a scalar d in the field F.
  • The receiver samples a sparse binary vector A' in G^{n'}.
  • The parties use a PPRF to get secret shares of d * A', which we will denote B', C'. Note that dA' = B' - C'.
  • The parties use an LPN-friendly matrix M to compute A = MA', B = MB', C = MC'.
  • We now have uniformly random A, B, C such that dA = B - C. In particular, A is no longer sparse.
  • The receiver outputs choice bits A and messages m_{i,A_i} = H(B_i).
  • The sender outputs messages m_{i,0} = H(C_i), m_{i,1} = H(C_i + d).
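
A one-line check that these outputs form a random OT (using dA = B - C with binary A):

  A_i = 0 \;\Rightarrow\; B_i = C_i \;\Rightarrow\; H(B_i) = m_{i,0}
  A_i = 1 \;\Rightarrow\; B_i = C_i + d \;\Rightarrow\; H(B_i) = m_{i,1}

So the receiver holds exactly m_{i,A_i}, while recovering the other message would require knowing d.
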
Where does this fail for subfield VOLE? Well, the definition of VOLE is to have d * A = B - C where A is over G and the rest are over the field F. However, the basic protocol for silent OT only works when A is binary.

Why? The functionality of the PPRF allows one party to choose an index i and another to choose a value d in F. They then get to learn two random vectors B', C' such that B' - C' is a unit vector with the value d at position i, e.g.

B'-C' = 0000000000000d0000
                     ^
                     i'th position.                            

You can repeat this several times, once for each non-zero of A' and add the results together.

Since A' is binary for silent OT where G = {0,1}, this is exactly what we want: A'_i * d = 1 * d = d. For VOLE, we need the A' vector to be a sparse vector over a larger G, not binary. For example, G = {0,1,...,11}. So now A'_i is some random non-zero element in G and we want B' - C' to have the value A'_i * d at position i. The idea for doing this is clever.

We first perform another VOLE (noisy VOLE), where the inputs are a vector A'' and the scalar d, where A'' are the t non-zero values of A'. This gives us a secret sharing of A'' * d. Instead of using d as the value to be used in the PPRF, this party (the sender) will use their jth share of A'' * d as the jth PPRF input scalar. The parties then get vectors B', C' where B' - C' is sparse and at the non-zero locations holds the sender's shares of A'' * d. We can translate these shares into what we want (i.e., dA' = B' - C') by having the receiver add their shares of A'' * d to B' at the positions where they know the non-zeros are. That is, they chose A', so they know where the non-zeros are.
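
Written out per noisy position (a sketch; p_j denotes the j-th non-zero position of A', and u_j, v_j the sender's and receiver's shares from the noisy VOLE, under one choice of sign convention):

  \begin{aligned}
  \text{noisy VOLE:}\quad & u_j + v_j = A''_j \cdot d \\
  \text{PPRF}_j:\quad & (B' - C')_{p_j} = u_j \\
  \text{receiver adds } v_j \text{ to } B'_{p_j}:\quad & (B' - C')_{p_j} = u_j + v_j = A'_{p_j} \cdot d
  \end{aligned}

and B' - C' is zero elsewhere, i.e. B' - C' = dA' as desired.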

gogo9th commented on August 20, 2024

Thanks for your detailed explanations. So today I learned a lot about how PPRF and PCF work.

I understand this part:

They then get to learn two random vectors B', C' such that B' - C' is a unit vector with the value d at position i, e.g.

B'-C' = 0000000000000d0000
                     ^
                     i'th position.
You can repeat this several times, once for each non-zero of A' and add the results together.

We do the above process as many times as the number of 1-bits in A', and aggregate each of the B', C', and A' matrices, and the result is this:
[image: aggregated B', C', and A']
However, I think A' is not a sparse vector, because A' has as many 1-bits as the number of 1 choice bits set by the receiver. The receiver's likelihood of choosing 1-bits is 50% (in a uniformly random scenario). This means that 50% of the vector A' will be 1-bits. If so, we need to do as many OTs as (the dimension of A) / 2. This is already a lot of OTs, which is inefficient. I feel I am missing something... Could you please correct me if I'm wrong?

ladnir commented on August 20, 2024

A' is sparse, maybe 100 nonzeros, independent of the number of OTs.

The choice bits the user will see are computed as

A = M * A'

M is some random-looking matrix that compresses A'. I think the slides use G instead of M. Regardless, A will look random according to the LPN / syndrome decoding assumption. A will have 50% ones. A' might have less than 1% ones.
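
A toy illustration of the compression step (this uses a uniformly random M and toy sizes, rather than the structured LPN-friendly codes the library actually uses):

#include <bitset>
#include <iostream>
#include <random>
#include <set>
#include <vector>

int main()
{
    constexpr std::size_t n = 2048, k = 1024, t = 20; // toy parameters
    std::mt19937_64 prng(1);

    // Random dense M over GF(2), stored column-wise: column j is a k-bit vector.
    std::vector<std::bitset<k>> M(n);
    for (auto& col : M)
        for (std::size_t i = 0; i < k; ++i)
            col[i] = prng() & 1;

    // Sparse A' with t distinct random non-zero positions (t/n is about 1%).
    std::set<std::size_t> support;
    std::uniform_int_distribution<std::size_t> pos(0, n - 1);
    while (support.size() < t)
        support.insert(pos(prng));

    // A = M * A' over GF(2) is just the XOR of the columns on A''s support.
    std::bitset<k> A;
    for (auto p : support)
        A ^= M[p];

    // A' is ~1% ones, but A comes out ~50% ones.
    std::cout << "weight(A)/k = " << double(A.count()) / k << "\n";
}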

gogo9th commented on August 20, 2024

I see, so M is a very important matrix which is responsible for uniformly randomizing A. I wonder about the properties of M.

  • How do we make M such that it uniformly randomizes A and also preserves the uniform randomness of B and C?
  • Is there any limit on the number of rows of M?
  • Is M a publicly well-known re-usable matrix, or should we generate a new M for every silent OT session?

Thanks, it's really exciting to grasp the internals of the silent OT protocol.

ladnir commented on August 20, 2024

Much work has gone into the design of M.

https://eprint.iacr.org/2019/1159
https://eprint.iacr.org/2021/1150
https://eprint.iacr.org/2022/1014
https://eprint.iacr.org/2023/882
Along with more unpublished work ;)

Preserving B, C is easy enough; you just need the rows of M to not have linear dependencies. For A, it's more tricky. But basically, if M has a high minimum distance when viewed as an error correcting code generator matrix (codeword xM for message x), then A should be uniform. This is an active area of research, with several papers at Crypto 2024 studying it.

A limit on the number of rows of M? Yes, M should have high minimum distance, which also limits the number of rows. Typically M compresses A' to half its size.

M is public, and is reused.

gogo9th commented on August 20, 2024

I am sorry, I was not very familiar with error correcting codes, so I've looked into this.

You said:

But basically if M has high minimum distance when viewed as a error correcting code generator matix (codeword xM for message x), then A should be uniform.

When you say "high minimum distance" above, do you mean that the rows of M should have a high minimum distance, or the listof codwords generated by M (i.e., A' * M) should have a high minimum distance?

Also, I wonder if having a high minimum distance is the only requirement, because if we look at the following generator matrix (G):
[image: an example generator matrix G whose first n rows form an identity matrix]
(In the above figure, G is the generator matrix M, A is the transpose of G, x is A', and Gx is A.)

Its minimum distance is 4 (both for the rows of G and for the list of codewords generated by G). But the generated codewords do not seem to be uniformly random. The reason is that if we look at G, the first n rows form an identity matrix, which means that the generated codeword's first n entries are always exactly the same as A'. But in our silent OT protocol, A' is a sparse vector whose elements are mostly zero. This means that the codeword generated by G will have mostly zeros in its first n entries. This means that the generated codeword is not necessarily uniformly random, but will have many zeros. So, I think there should be some extra requirements besides having a high minimum distance. Is this correct?

ladnir commented on August 20, 2024

OK, let's rename M as G. So for the protocol we compute GA' = A. G is a wide matrix with k rows and n columns. A' is a tall sparse vector of size n. Therefore A is a short vector of size k. So in the protocol, A is maybe length k = 2^20 and A' is n = 2^21.

[image: the wide matrix G times the tall sparse vector A' giving the short vector A]

So for a short "message" x in {0,1}^k, you can also generate a "codeword" c=xG.
[image: the short message x times G giving the codeword c = xG]

The minimum distance d can be defined as the smallest Hamming weight over all { c = xG | x in {0,1}^k, x != 0 }.

I'll add more to this later tonight...

ladnir commented on August 20, 2024

So why does A look random?

The problem stating that (G, A) is indistinguishable from (G, random) is known as syndrome decoding (SD) or dual LPN.

There are two versions of this problem; the other is called Learning Parity with Noise (LPN). LPN and SD are equivalent.

Let H be the "parity check matrix" for G, with m=n-k rows and n columns. This basically means that G*H = 0^m.

The LPN problem states that H * s + A' = r looks random. Here, s is chosen uniformly at random while A' is sparse.
[image: the LPN sample Hs + A' = r, with H tall and A' sparse]

However, if we left-multiply this all by G, we get

G(Hs) + GA' = Gr

But we know GH = 0, so we get

GA' = Gr

By the LPN assumption, we know r (and therefore Gr) looks random and therefore so must GA'.

OK, SD looks random if LPN looks random, but why does LPN look random?

Well, it looks random by assumption. However, we have some evidence. Almost all known attacks on LPN are what we call "linear tests". That is, the best way we have to distinguish r from something truly random is to study the matrix H and then come up with some positions of r to sum up. If these positions sum to zero, then we guess that r isn't actually random. For truly random r this happens with Pr = 1/2. So if this happens noticeably more often than 1/2, the attack is considered effective.

For you to have any hope that the positions of r you are summing up are biased towards zero, you need the contribution from Hs to cancel itself. This can be done if you choose a codeword c for your positions.

In particular, let c = xG be some codeword. By the properties of error correcting codes, for every codeword c it holds that cH = 0. Therefore, if you use c to choose your sum of r, i.e. sum_i r_i * c_i = <r, c>, then we get

<r, c> = <Hs, c> + <A', c> = <A', c>

However, if you don't choose a codeword, your result <r, c> is provably random due to Hs not canceling with itself. Let's continue to assume c is a codeword. The result is still random if there is overlap between A' and c (here we assume the noisy positions of A' are set uniformly as opposed to always being 1). However, if c is a codeword and misses the noisy positions of A', then the result will be zero! This is what the adversary needs to do to win (i.e., get the sum to be biased towards zero).

We can then ask how likely this is to happen. Well, A' can be chosen to have t noisy uniformly random positions. The probability that some codeword c with at least d non-zeros intersects the t noisy positions of A' can be computed. Recall that d is the minimum distance of the code with generator matrix G and parity check matrix H. I can't remember the formula for this probability, but it's in the links above. But suffice it to say, if d is decently big, say 0.1n, and t = 128 or something, then with probability 1 - 2^{-128}, A' and c will intersect and therefore the test <r, c> will look random.
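
For reference, the miss probability has a standard closed form (a textbook counting bound; here w >= d is the Hamming weight of the codeword c, and the t noise positions are uniform among the n coordinates):

  \Pr[\text{c misses all noisy positions}] = \binom{n-w}{t} \Big/ \binom{n}{t} \;\le\; \left(1 - \tfrac{w}{n}\right)^t \;\le\; \left(1 - \tfrac{d}{n}\right)^t

The linked papers then account for all codewords simultaneously.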

There are also some ways to attack this problem that aren't "linear tests". For example https://eprint.iacr.org/2023/176

Also, if you choose G to have special structure, e.g. G is the Reed-Solomon code, then you can break the security.

ladnir commented on August 20, 2024

Here's another explainer https://geoffroycouteau.github.io/posts/LPN-linear-tests/

ladnir commented on August 20, 2024

So I think we also had some confusion on whether A' is a matrix or a vector. For silent OT, A' is a sparse binary vector, but it just so happens that B, C are vectors of GF(2^128) elements. The question about LPN/SD security is still with respect to a binary vector A'.

However, for VOLE, A' is a sparse vector over some larger field F, for example GF(2^128). B', C' can also be over this field, or you can instantiate the protocol where A' is a vector over some subfield F' while B, C are over F. That is, the larger field F is an extension of F', i.e. F = F'^w.

gogo9th commented on August 20, 2024

I am sorry for the late response. It took some time for me to learn more about the fundamentals of error correcting codes, generator matrices, and parity-check matrices. Several webpages were helpful to me.

Given this background, your explanation above was very clear, like a textbook's. Thanks a lot, and I fully understand why increasing the minimum distance (i.e., the minimum Hamming weight of the codewords) of the generator matrix enhances security. But I wonder if this security enhancement is actually needed in our silent OT protocol. My understanding of silent OT is as follows:

We have the relation B' - C' = delta * A', where the Sender generates delta and C', and the Receiver generates A', and the Receiver learns B' = delta * A' + C'. But before the Sender and Receiver start communicating, based on the Receiver's A' (i.e., a sparse noise vector), the Receiver can create an LPN relation Hs + A' = x (where s is some secret vector, H is some public parity-check matrix, and x is the resulting vector). Then GHs + GA' = Gx (where G is a codeword generator matrix, the counterpart of H), and since GH = 0, GA' = Gx. Notice that Gx is a random vector, because x is a random vector. Therefore, GA' is also a random vector. So, the Sender and Receiver can pre-share some generator matrix G. Then the Sender computes C = GC', and the Receiver computes A = GA'. C and A are more compact vectors compared to C' and A', and A is now a purely random-looking choice vector. Based on C, delta, and A, the Sender and Receiver run the silent OT protocol, after which the Receiver learns B = delta * A + C.

Given this silent OT protocol, the Sender does not get access to the computation result of cA' (where c is the specially chosen codeword of minimum distance d as described in your attack), because A' is fully owned by the Receiver. The Sender doesn't even get access to GA' (i.e., the uniformly randomized choice vector), as a guarantee of the OT protocol. So, I think we don't need to worry about increasing the minimum distance of G. We can choose any valid generator matrix G, can't we?

And I'd like to verify that in the silent OT protocol, once a specific instance of G has been used, a new G has to be used the next time, right? This is because in the LPN problem we assume that (G, A) is indistinguishable from (G, random), which implies that G has to be newly chosen for each new tuple (G, A). Is this correct?

ladnir commented on August 20, 2024

If the sender never sees A = GA', then yeah, maybe you can do something else where A is less random looking. But this isn't how the security is defined. VOLE requires that the sender gets (C, delta) and the receiver gets (A, B), where each of these looks uniformly random to the other party.

For example, the protocol that uses the VOLE might reveal A+X to the sender. If A wasn't uniform, then some information about X would be leaked.

Also, Hs+A' is referred to as LPN while GA' is referred to as SD. The protocol only makes use of SD. The use of LPN in the description is purely to argue security (also, the LPN assumption is more common than SD).

You can sample G each time. However, this is not necessary. The implementation of G does take a seed, but this is just a constant. Some argue you might have better security if you sample G fresh, but there isn't strong evidence for this.

gogo9th commented on August 20, 2024

Yes, I do agree that ideally A = GA' should be uniformly random, because its distribution may need to be publicly known (like the distribution of AES keys should ideally be uniformly random, given that the distribution is publicly known). But in order to make A uniformly random, I think it's sufficient for the receiver to choose a uniformly random G and then compute A = GA' (i.e., there is no need to design G to generate codewords with a high minimum Hamming distance). This is because, in order to launch the attack you described:

Lets continue to assume c is a codeword. The result is still random if there is overlap between A' and c (here we assume the noisy positions of A' are set uniformly as opposed to always being 1). However, if c is a codeword and misses the noisy positions of A', then the result will be zero! This is what the adversary needs to do to win (i.e. get the sum to be biased towards being zero).

We can then ask how likely is this to happen? Well A' can be chosen to have t noisy uniformly random positions. The probability of some codeword c with at least d non-zeros intersects the t noisy positions of A' can be computed. Recall that d is the minimum distance of the code with generator matrix G and parity check matrix H. I can't remember the formula for this probability but its in the links above. But suffice it to say, if d is decently big, say 0.1n and t=128 or something, then with probability 1-2^{-128}, A' and c will intersect and therefore the test <r,c> will look random.

... the attacker needs access to the computation result of cA' (in order to observe the statistical probability of cA' being zero). But the attacker does not have access to cA', because they do not know A'. Therefore, I think we don't need to worry about such a statistical analysis attack, and therefore we don't need to design G to generate codewords with a high minimum Hamming distance. I think any G that generates valid codewords should suffice (i.e., multiplying the parity-check matrix with each codeword always gives 0). Does this reasoning make sense?

ladnir commented on August 20, 2024

The attacker doesn't need A'. I think a couple of things might be getting mixed up. Given A', there isn't a need for an attack; you have already lost. A' makes it trivial to see things are not random looking.

Given A=GA' you can perform the attack. As described, the attack requires the distribution

Hs+A'=r

But we currently just have A... That's OK, we can actually get there. The attacker samples an arbitrary solution A" to the system

GA"=A

Lots of these solutions exist; you can let A" be any of them. For example, maybe the second half is all zeros. Now, observe that for some s" it holds that

Hs"+A"=r"

You can randomize this by sampling a random s' and adding Hs' to both sides. What you get is the same distribution as

Hs+A'=r

That is, you can define r := r" + Hs' and this is the same distribution as sampling s and defining r := Hs+A'

You then perform the attack on r where you use the codewords etc.
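
The step that makes this work can be written out (a sketch; it uses that the kernel of G equals the column span of H, since GH = 0 and the dimensions match):

  \begin{aligned}
  GA'' = A = GA' \;&\Rightarrow\; G(A'' - A') = 0 \;\Rightarrow\; A'' = A' + Hs'' \text{ for some } s'' \\
  A'' + Hs' \;&=\; A' + H(s'' + s')
  \end{aligned}

Since s' is uniform, s'' + s' is uniform, so (H, A'' + Hs') is distributed exactly like an LPN sample (H, Hs + A').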

gogo9th commented on August 20, 2024

I'm really sorry, but I got a little lost... I understand this part:

The attacker samples an arbitrary solution A" to the system
GA"=A
Lots of these solutions exist; you can let A" be any of them. For example, maybe the second half is all zeros.

But I got lost here:

Now, observe that for some s" it holds that

Hs"+A"=r"

What is r" here? Is it that we randomly choose s" and the result of computing Hs" + A" is r"?

And regarding this:

You can randomize this by sampling a random s' and adding Hs' to both sides. What you get is the same distribution as
Hs+A'=r

If we add Hs' to both sides, then we get H(s" + s') + A" = r" + Hs'. How did you get Hs+A'=r?

And here:

That is, you can define r := r" + Hs' and this is the same distribution as sampling s and defining r := Hs+A'
You then perform the attack on r where you use the codewords etc.

I assume you mean that r" + Hs' = Hs + A', but why is that so?

ladnir commented on August 20, 2024

r" yeah, maybe I didn't say that quite right. Sample s" and define r". You can also just show for a uniform s", then r, r', r" all have the same distribution.

You don't necessarily get the same s. But the main thing is how s is distributed. It's uniform according to LPN. That is, LPN states:

Sample a uniform s and a sparse e, and for some H output (H, Hs + e).

You can view this as a distribution. If you can show something else has the same distribution, then by definition it's also an LPN sample.

We have a way of doing this. Given A, we constructed

(H, Hs+A')

for some unknown s.

Another way to think about it is that given such a sample, you can randomize it by adding Hs'. You get the same distribution back.

(H, H(s+s') +A')

The s's are uniform so it's all the same.

ladnir commented on August 20, 2024

Another way to look at LPN is in terms of error correcting codes. If G is our normal ECC generator matrix, then H is the parity check. But you can switch these and let Hs be the codeword for s.

If you think about all codewords { Hs | forall s }, they form a grid in high dimensional space.
[image: grid of codewords in high-dimensional space]
When you add the sparse error e = A', what you are doing is picking a point that's near a lattice/grid point. The attacker's job is to find the nearest lattice point.

When we add Hs, what we are doing is shifting the grid points. However, we are still just e away from the nearest grid point. So if our task is to round to the nearest point, the core problem hasn't really changed. All lattice points look similar; the core hardness is simply rounding away the error e to get the nearby grid point Hs.

gogo9th commented on August 20, 2024

Thanks for illustrating the LPN problem in terms of the lattice problem (which looks very close to the LWE problem).

So, our overall goal of multiplying A' by G is to convert the sparse random binary vector A' into a uniformly random (half-sized) binary vector A. And we want each row of G to have a high Hamming weight, because if a row has too many 0s, then the corresponding entry of GA' is more likely to be 0 than 1 (as the 1s in G are more likely to miss the positions of the 1s in A'). In that case, I wonder if it is sufficient to let G be just a uniformly random matrix with each row having a high Hamming weight. Or is it actually necessary for G to be an error-correcting-code generator (the counterpart of the parity-check matrix H)?

ladnir commented on August 20, 2024

Yes, LPN and LWE are essentially the same but just have a different notion of what "small error" means. LPN means e has small Hamming weight (mostly zeros) but is otherwise uniform. LWE means e has small Euclidean norm (nonzero basically everywhere, but each entry is close to zero).

The ECC generated by G having high minimum distance doesn't just mean the rows of G have high Hamming weight. It does imply this, but it also means that all linear combinations of the rows have high Hamming weight.

All attacks on syndrome decoding (distinguishing GA' from random) imply an attack on LPN (distinguishing Hs + A') and vice versa. Most attacks focus on LPN. Moreover, these attacks can be summarized in the linear test framework.

Given an LPN sample

(H, Hs+e=r)

The attacker studies H and comes up with some test vector c. They guess that r isn't actually random if <c, r>=0.

If the attack fits this framework, you can show (as discussed above) that if c isn't a codeword, then the result is uniformly random. If c is a codeword, i.e. c = xG for some x, then c has Hamming weight at least d (the minimum distance). <c, r> is uniform if c has nonzeros where e has a noisy value. Therefore, the higher the minimum distance, the less likely the attack works.

The protocol itself only computes Ge where e = A'. It's not so much that A = GA' will have too many zeros if G has low minimum distance, but that you can find linear correlations in A. That is, some positions of A will sum to zero more often than random chance would suggest. The attack above tells you how you might find these linear correlations, as the sketch below demonstrates.
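
As a toy demonstration of such a correlation, here is a sketch that assumes (hypothetically) G has one row of low Hamming weight w = 3; the linear test just reads off A_1 = <that row, A'> and checks how often it is zero:

#include <iostream>
#include <random>
#include <set>

int main()
{
    constexpr int n = 1024, t = 10, w = 3, trials = 100000;
    std::mt19937_64 prng(7);
    std::uniform_int_distribution<int> pos(0, n - 1);

    // Pretend row 1 of G has support {0, 1, 2} (Hamming weight w = 3).
    // Then A_1 = <row 1, A'> is the parity of A' restricted to {0, 1, 2}.
    int zeros = 0;
    for (int trial = 0; trial < trials; ++trial)
    {
        std::set<int> support; // the t random noisy positions of A'
        while ((int)support.size() < t)
            support.insert(pos(prng));

        int parity = 0;
        for (int j = 0; j < w; ++j) // positions where the low-weight row is 1
            parity ^= (int)support.count(j);
        zeros += (parity == 0);
    }

    // For a uniform A_1 this would print ~0.5; here it comes out ~0.97.
    std::cout << "Pr[A_1 = 0] ~= " << double(zeros) / trials << "\n";
}

A high minimum distance rules this out: every codeword, i.e. every such test vector, then has many nonzeros and almost surely hits the noise.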

G being uniformly random would be secure. This is the original definition of LPN & SD. A random G is a "good ECC". But it's very inefficient: G has O(n^2) nonzeros for n = 1000000 or so, so computing G * B would take forever.

gogo9th commented on August 20, 2024

Thanks for the clarification. So, according to what you explained:

Given an lpn sample
(H, Hs+e=r)
The attacker studies H and comes up with some test vector c. They guess that r isn't actually random if <c, r>=0.

The above topic is not only about properly compressing the sparse vector A' into a uniformly random vector A, but also fundamentally about ensuring the uniformly random distribution of the LPN relation Hs + A' = r, right? In other words, we can generalize our finding such that in the LPN relation Hs + A' = r, where H is a fixed matrix, s is a uniformly sampled vector, A' is a sparsely uniformly sampled vector, and r is the computed resulting vector, H must have a high minimum Hamming distance in order to ensure that r is a uniformly random vector, because otherwise the aggregation of certain positions of r can be biased towards 0, which makes r not a uniformly random vector. Is this correct?

And after we compute a uniformly random (1-out-of-2 OT choice) vector A = GA', is it okay for us to use the same G and choose a different A' in order to compute a new uniformly random choice vector A? I think the result will be another uniformly random vector even if we use the same G. This is because, suppose we have the following two instances of LPN relations:

Hs1 + A1' = r1
Hs2 + A2' = r2

, where we sample a different set of (s, A') in each relation, while using the same H. By the LPN assumption, r1 and r2 each look uniformly random. Now, we multiply both sides by G to get:

GHs1 + GA1' = Gr1
GHs2 + GA2' = Gr2

Since GH = 0, the above is equivalent to:

GA1' = Gr1
GA2' = Gr2

Since r1 and r2 both look uniformly random (by the LPN assumption), Gr1 and Gr2 also both look uniformly random. Therefore, I think there is no need to use a different G (or H) when we create a new A with a different set of A' and s. Is this correct?

ladnir commented on August 20, 2024

Yes, that's right. You can reuse G.
