February 07, 2017

Anonymous Traffic Filtering

Anonymous traffic filtering is the problem of restricting traffic flow to only that which is presumably generated by a human (as opposed to a botnet). As an example use case, this might be done by a CDN edge server providing content to clients behind Tor. Cloudflare is one company that makes use of this technique. Their edge servers tend to assign low “reputations” [1] to Tor exit nodes since those nodes produce an abnormal amount of traffic. By the nature of Tor, the server is unable to distinguish between a properly behaving exit node funneling traffic on behalf of many clients and one that is acting maliciously. In response, these nodes are served a CAPTCHA so that the human behind the scenes can prove their presence. However, one CAPTCHA solution is often insufficient to vouch for all of that traffic. As a result, Tor users tend to be served an excessive number of CAPTCHAs. This leads to a bad user experience that discourages people from using Tor. In this post, I’ll talk about a variety of mechanisms by which this so-called “CAPTCHA challenge” can be automatically bypassed to prevent user experience disruptions while simultaneously preventing malicious botnets from abusing the system. The idea is to solve a CAPTCHA once and then, for future CAPTCHAs, present a cryptographic token of authenticity that can vouch for the human’s presence. But before going into details, some background is needed.

Cryptographic Foundations

The schemes I’ll discuss depend on two closely related cryptographic protocols: (RSA) blind signatures and oblivious pseudorandom functions (OPRFs).

Blind Signatures

A blind signature scheme is one in which a client can obtain the signature of a message from a server without the server learning the message. The RSA blind signature scheme from Chaum [2] works roughly as follows. (Here, m is a message to be signed by the server and is assumed to be a member of the RSA group. Also, the RSA modulus is N and the public and private keys are e and d, respectively.)

  1. Generate a random blinding element r from the RSA group.
  2. Send m' = m · r^e mod N to the server.
  3. Sign m' by computing s' = (m')^d mod N.
  4. Send s' to the client.
  5. Remove the blinding factor to obtain the original signature as s = s' · r^(-1) mod N.

By the properties of RSA, s = s' · r^(-1) = (m · r^e)^d · r^(-1) = m^d · r · r^(-1) = m^d mod N, which is clearly a valid signature for m.
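To make the algebra concrete, here is a toy Sage sketch of the blinding, signing, and unblinding steps (the parameters are toy-sized and the variable names are mine; this is purely illustrative, not production RSA):

    e = 65537
    while True:
        p, q = random_prime(2^256), random_prime(2^256)
        phi = (p - 1) * (q - 1)
        if p != q and gcd(e, phi) == 1:
            break
    N = p * q
    d = inverse_mod(e, phi)

    m = randint(2, N - 1)                   # message, an RSA group element
    r = randint(2, N - 1)                   # client's random blinding element

    m_blind = (m * power_mod(r, e, N)) % N  # client -> server
    s_blind = power_mod(m_blind, d, N)      # server -> client
    s = (s_blind * inverse_mod(r, N)) % N   # client removes the blind

    assert s == power_mod(m, d, N)          # s = m^d mod N, a valid signature on m
    assert power_mod(s, e, N) == m          # and it verifies under the public key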

OPRFs in the Random Oracle Model

OPRFs are a special type of PRF in which two parties work together to compute f_k(x), where k is a key owned by the sender and x is an input owned by the receiver. The goal of an OPRF protocol is that the sender (server) learns nothing about the computation, whereas the receiver learns f_k(x). (Hence the name, oblivious.) In the Random Oracle Model (ROM), we may build an OPRF out of a blind signature scheme by simply hashing the resultant signature [3]. That is, we may build the PRF as f_k(x) = H(S_k(x)), where S_k is the blind signature scheme under key k and H is a cryptographic hash function modeled as a random oracle. According to [3], the protocol unveils the blind signature along with the actual OPRF output, thereby allowing the receiver to remove its blind and still end up with a valid OPRF value. The protocol for this particular OPRF functionality works as follows.

  1. Generate a random blinding element r from the RSA group.
  2. Compute x' = x · r^e mod N.
  3. Send x' to the server.
  4. Sign x' by computing s' = (x')^d mod N.
  5. Hash s' to get y' = H(s').
  6. Send s' and y' to the client.
  7. Remove the blinding factor on s' to obtain s = s' · r^(-1) = x^d mod N.

At the end, the client has a message x and accompanying OPRF value H(s) = H(x^d mod N).

As a matter of technical correctness, this is not precisely an OPRF, since the server learns the (blinded) input x' and the receiver obtains partial output other than the OPRF value, namely the signature x^d mod N itself. It is an open problem to extend blind signature schemes, such as the RSA one due to Chaum [2], to a secure computation of only the OPRF value.
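As a rough illustration of the hashed construction, here is a small Sage sketch in which the OPRF value is the hash of the RSA signature on the input. The blinding round is elided since it is identical to the sketch above, and the byte encoding of the signature fed to the hash is an arbitrary choice of mine:

    import hashlib

    e = 65537
    while True:
        p, q = random_prime(2^256), random_prime(2^256)
        phi = (p - 1) * (q - 1)
        if p != q and gcd(e, phi) == 1:
            break
    N, d = p * q, inverse_mod(e, phi)

    def oprf(x):
        s = power_mod(x, d, N)                               # S_k(x): the signature obtained blindly
        return hashlib.sha256(str(s).encode()).hexdigest()   # f_k(x) = H(S_k(x))

    x = randint(2, N - 1)   # client's input
    y = oprf(x)             # the OPRF value the client ends up with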

Another type of OPRF protocol was described recently by Hugo Krawczyk at RWC 2017 [4]. The PRF itself computes f_k(x) = H(x)^k, where H is a hash function that maps its input to an element of a proper Diffie-Hellman group and k is a random exponent held by the server. The protocol to compute this functionality runs as follows:

  1. The client computes a = H(x)^r for some random value r and sends the result to the server.
  2. The server computes b = a^k and returns it to the client.
  3. The client computes b^(1/r), yielding H(x)^k = f_k(x).

By running this protocol, the client gets a signature over a random element in the underlying group, namely H(x)^k. One very useful property of this value is that it is malleable, i.e., it can be re-randomized at will. (I’ll exploit this later on.) For example, assume the client has t = H(x) and y = H(x)^k. It can then pick a fresh random exponent z and compute t' = t^z and y' = y^z = (t^z)^k. If the client sends this tuple to the server, the server can remove the exponent k by computing (y')^(1/k) and check that this equals the element t' it was sent. If so, the client has just proven that it is in possession of a value that was previously signed by the server, but the server has no knowledge of which value it was since it was blinded when signed. We call this sort of value malleable.
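Here is a toy Sage sketch of this protocol and the re-randomization check, using the multiplicative group modulo a prime as a stand-in for a proper Diffie-Hellman group and a random group element as a stand-in for H(x):

    def rand_unit(n):
        # random exponent invertible mod n
        while True:
            a = randint(2, n - 1)
            if gcd(a, n) == 1:
                return a

    p = next_prime(2^256)
    g = GF(p).multiplicative_generator()
    n = p - 1                            # group order

    k = rand_unit(n)                     # server's PRF key
    hx = g^randint(2, n - 1)             # stand-in for H(x)
    r = rand_unit(n)                     # client's blinding exponent

    a = hx^r                             # client -> server
    b = a^k                              # server -> client
    y = b^inverse_mod(r, n)              # client unblinds: y = H(x)^k
    assert y == hx^k

    # Re-randomize the pair (H(x), y) with a fresh exponent z; the server
    # checks the new pair by removing its key k.
    z = rand_unit(n)
    hx2, y2 = hx^z, y^z
    assert y2^inverse_mod(k, n) == hx2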

Filtering with Blind Signatures

In this section we describe a series of approaches to the anonymous traffic filtering problem based on blind signatures.

One-Time Anonymous Tokens

In [1], the authors present a first approach towards solving this problem that uses “one-time anonymous tokens” to verify a human’s presence while only requiring them to solve one CAPTCHA. The scheme works as follows. When the client is first served a CAPTCHA, the solution is presented to the server along with N blinded tokens, where a token is just a random RSA group element blinded as in Chaum’s scheme. Upon verifying the CAPTCHA solution, the server will sign each of the blinded tokens and return the corresponding signatures to the client along with the desired resource. Later, when the client is presented with another CAPTCHA, it will encrypt one of its unblinded tokens and the corresponding signature under the server’s (assumed to be known) public key. The server can then verify the token and its signature and, if the pair is deemed valid, return the desired content.

The main benefit of this design is that each token is unlinkable to the rest (assuming each was originally generated at random). This prevents the server from linking same-origin requests based on these tokens. However, it gives a false sense of security, since it is within the client’s power to re-randomize (or blind) a message and signature to create a fresh, forged token and signature pair. Specifically, given a message m and its signature s = m^d mod N, one can pick a random r and compute m' = m · r^e mod N and s' = s · r mod N. To verify this pair, the server would compute (s')^e = m · r^e = m' mod N, which matches the provided input. This malleability could be easily abused by clients or other nefarious people.
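To see the attack concretely, here is a toy Sage sketch: starting from one valid (token, signature) pair, the client mints a second pair that still passes the server’s verification (toy parameters, names my own):

    e = 65537
    while True:
        p, q = random_prime(2^256), random_prime(2^256)
        phi = (p - 1) * (q - 1)
        if p != q and gcd(e, phi) == 1:
            break
    N, d = p * q, inverse_mod(e, phi)

    m = randint(2, N - 1)
    s = power_mod(m, d, N)                    # a legitimately issued token/signature pair

    r = randint(2, N - 1)                     # attacker's fresh blinding value
    m_forged = (m * power_mod(r, e, N)) % N
    s_forged = (s * r) % N

    assert power_mod(s_forged, e, N) == m_forged   # the server's check still passes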

As a final note, this scheme requires O(N) work for the server when generating token signatures and O(N) state on the client. The former may be prohibitive whereas the latter is hardly a problem.

Piggy-Backed Anonymous Tokens

An optimization on the anonymous tokens idea is to continually create them in a piggy-back fashion instead of in a single batch request. When the client submits its CAPTCHA response, it also submits a single blinded token to sign. If the CAPTCHA is valid, the server responds with the corresponding blinded signature. Later, when that token is redeemed, the client submits another blinded token to be signed in the encrypted envelope. The server then verifies the unblinded token’s signature and, if valid, signs the new blinded token and replies with the response. This variant incurs less overhead on the server since it signs at most one token per challenge. Moreover, this process can repeat indefinitely so long as it is bootstrapped with a valid CAPTCHA solution. As a result, it permits an unlimited number of tokens for every single CAPTCHA solution, which could potentially be abused if the CAPTCHA solution is stolen or somehow forged. In contrast, the original solution permits at most N tokens for every CAPTCHA.

Linkable Anonymous Tokens

Linkable anonymous tokens are a variant of the one-time anonymous token approach based on Lamport’s one-time password scheme [5]. It works as follows. Instead of the client generating N unique tokens to be signed in response to a CAPTCHA, the client generates a single token t (a bit string) and computes its N-th iterated hash H^N(t), i.e., H applied N times to t. It then blinds this digest just as before. After verifying the CAPTCHA response, the server signs the blinded digest and returns the signature to the client. Later, when the client is prompted with the first follow-up CAPTCHA, it provides the preimage H^(N-1)(t) and the unblinded signature, both of which are encrypted under the server’s public key. The server receives the preimage, computes its hash to produce H^N(t), and then verifies its signature. If it is valid, the server adds the preimage to a cache of verified values. Later, when the client receives the i-th follow-up CAPTCHA, it provides the i-th preimage H^(N-i)(t) alone, encrypted under the server’s public key. Upon receipt, the server hashes this preimage and checks whether the result was previously cached. If so, the server caches the new preimage and returns the content. Otherwise, the server signals to the client that its token is invalid. This process can be done at most N times, after which the client can no longer produce valid preimages.
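Here is a minimal sketch of the hash-chain bookkeeping, with SHA-256 standing in for the hash and an illustrative chain length; the blind signature on the chain head and the encryption of redemptions are elided:

    import hashlib, os

    def H(b):
        return hashlib.sha256(b).digest()

    n_links = 5                             # illustrative chain length (the client's N)
    t = os.urandom(32)                      # client's secret token

    # Client: build t, H(t), ..., H^N(t); only the head H^N(t) gets (blind-)signed.
    chain = [t]
    for _ in range(n_links):
        chain.append(H(chain[-1]))
    head = chain[-1]

    # Server: once the signature on the head checks out, seed the cache with it.
    cache = {head}

    # On the i-th follow-up CAPTCHA the client reveals the i-th preimage H^(N-i)(t);
    # the server hashes it, looks the result up in the cache, and caches the new value.
    for i in range(1, n_links + 1):
        preimage = chain[n_links - i]
        assert H(preimage) in cache
        cache.add(preimage)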

The main benefit of this approach is much faster verification at the server: it only requires one hash computation and a cache lookup. However, the cache requires state at the server. Moreover, the server can link successive preimages together and thereby associate these tokens with the same origin. (This effectively breaks the unlinkability of client traffic in Tor.) Also, the server no longer has control over the number of tokens that can be delegated per CAPTCHA, since N is chosen by the client. This is a very undesirable feature.

Malleable Tokens with OPRFs

Let’s now consider what we could do if we used OPRFs. As previously discussed, an OPRF lets two parties jointly compute f_k(x) while only one party learns the true OPRF value. The obvious application is to assume that the server owns the secret key and the client provides the input to be signed. Specifically, assume we use the one-round OPRF protocol from Hugo [4]. The server is assumed to have a secret key k it can use to compute the PRF, and the client’s secret input x is a randomly chosen value. The output of this protocol is, essentially, an encryption H(x)^k of the client’s input, but one with the special malleable property described earlier. We can exploit this when clients need to redeem their tokens. Specifically, they can simply blind H(x) and its encryption H(x)^k with a fresh random exponent and send both to the server. The server then verifies that removing the factor k (by decryption) from the second value reveals the first, blinded value.

One obvious concern is replay attacks due to the token malleability. If an attacker gets hold of one valid token and its encryption, it can distribute this pair to other machines (zombies), which can forge new, valid-looking encryptions and bypass CAPTCHAs. Since the malleability is controlled by the client, there is no way to prevent such an attack in this scenario. As of now, I don’t see a way around this with the protocols considered here.

Performance Considerations

The performance of these malleable anonymous filtering schemes is determined by that of the underlying primitive, i.e., an RSA blind signature or an OPRF. Since the latter is easily portable to elliptic curves, which benefit from less computation and memory overhead in comparison to RSA-based public key systems, I’ll focus on that going forward. To implement the OPRF scheme from [4], we need a suitable curve, a function to hash onto the curve, and the ability to do some basic arithmetic. (For simplicity, I’ll skip the hash-to-curve part. I’m not familiar with how this is typically done, so I’ll just point to [6] as a temporary reference, which has been in my queue for a long time now.) Luckily, Sage [7] provides all of this for us. I implemented the basic malleable encryption scheme previously discussed using Curve25519 as the base for the arithmetic; the code is below. It shows how to generate the initial token, blind it, sign (encrypt) it, unblind it, and then re-randomize it as needed later on.
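What follows is a sketch of that flow in Sage. Since the hash-to-curve step is skipped, the initial token is simply a random element of the prime-order subgroup, and the variable names are illustrative:

    p = 2^255 - 19
    E = EllipticCurve(GF(p), [0, 486662, 0, 1, 0])        # Curve25519: y^2 = x^3 + 486662*x^2 + x
    q = 2^252 + 27742317777372353535851937790883648493    # prime order of the base-point subgroup
    G = E.lift_x(9)                                       # standard base point (x = 9)
    assert q * G == E(0)

    k = randint(1, q - 1)              # server's signing (PRF) key

    # Client: generate a token T (standing in for H(x)) and blind it with r.
    t = randint(1, q - 1)
    T = t * G
    r = randint(1, q - 1)
    T_blind = r * T

    # Server: "sign" (encrypt) the blinded token.
    S_blind = k * T_blind

    # Client: unblind to obtain S = k*T.
    S = inverse_mod(r, q) * S_blind
    assert S == k * T

    # Later: re-randomize (T, S) into a fresh-looking pair with a new factor z.
    z = randint(1, q - 1)
    T2, S2 = z * T, z * S

    # Server-side redemption check: applying k to the presented token must
    # reproduce the presented signature.
    assert k * T2 == S2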

A Comment on Commitments

The malleability problems in both the RSA blind signature and OPRF schemes can be solved by transforming the signature scheme into a commitment scheme. For example, suppose that before the client and server embark on a protocol to compute a blind signature or OPRF, the client first commits to its input x by hashing it with a hash function H', e.g., c = H'(x). The output of the protocol would then be a signature over this hashed value. Later, to redeem the token, the client presents the hash preimage x and the signature to the server, who can (a) recompute the hash and then (b) verify that the signature is correct. By signing the hash of the input, instead of the input itself, the client is unable to re-randomize its input without invalidating the signature. This solves our malleability problem. In fact, this is essentially what’s done in one of Hugo’s other papers [7]. The OPRF protocol there is similar to what’s described above, but instead of the OPRF value being the unblinded output H(x)^k received by the client, it is computed as H'(x, H(x)^k). (Obtaining the PRF value is simple once the client has the blindly-signed value from the server.)
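Here is a toy Sage sketch of this fix applied to the RSA variant: the signature is issued over the hash of the input, so re-randomizing the input invalidates the signature. (The blinding round is elided, and the parameters and encoding choices are mine.)

    import hashlib

    e = 65537
    while True:
        p, q = random_prime(2^256), random_prime(2^256)
        phi = (p - 1) * (q - 1)
        if p != q and gcd(e, phi) == 1:
            break
    N, d = p * q, inverse_mod(e, phi)

    def commit(x):
        return Integer(hashlib.sha256(str(x).encode()).hexdigest(), 16) % N

    x = randint(2, 2^128)               # client's secret input
    c = commit(x)                       # what actually gets (blind-)signed
    s = power_mod(c, d, N)              # issued signature

    # Redemption: the client reveals x and s; the server recomputes the commitment
    # and verifies the signature against it. Re-randomizing x changes commit(x)
    # and breaks the check.
    assert power_mod(s, e, N) == commit(x)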

I feel unsatisfied with these solutions. I want something that is flexible (read: malleable), but only malleable in a controlled fashion. I’ll look into that going forward.

References