Closed Bug 1064636 (CVE-2014-1568) Opened 10 years ago Closed 10 years ago

RSA PKCS#1 signature verification forgery is possible due to too-permissive SignatureAlgorithm parameter parsing

Categories

(NSS :: Libraries, defect)

Priority: Not set
Severity: critical

Tracking

(firefox32+ verified, firefox33+ verified, firefox34+ verified, firefox35+ verified, firefox-esr31 32+ verified, b2g-v1.4 fixed, b2g-v2.0 fixed, b2g-v2.0M fixed, b2g-v2.1 fixed, b2g-v2.2 fixed)

RESOLVED FIXED

People

(Reporter: briansmith, Assigned: KaiE)

References

(Blocks 1 open bug)

Details

(Keywords: sec-critical, Whiteboard: [status-firefox-esr24:fixed][status-b2g-v1.3:fixed][status-b2g-v1.3t:fixed][adv-main32+][adv-esr31.1+])

Attachments

(11 files, 20 obsolete files)

20.20 KB, image/png
Details
5.21 KB, text/plain
Details
8.37 KB, text/plain
Details
2.99 KB, text/plain
Details
8.48 KB, text/plain
Details
21.92 KB, application/x-gzip
Details
10.45 KB, patch
KaiE
: review+
Details | Diff | Splinter Review
3.59 KB, patch
rrelyea
: review+
Details | Diff | Splinter Review
7.81 KB, patch
Details | Diff | Splinter Review
668 bytes, patch
Details | Diff | Splinter Review
13.46 KB, patch
rrelyea
: feedback-
Details | Diff | Splinter Review
In bug 1054659 comment 20, Antoine Delignat-Lavaud wrote:
> It is indeed possible to attack RSA-PKCS#1v1.5 in NSS.
> Earlier you pointed me to the file nss/lib/cryptohi/secvfy.c for signature
> verification. Near the top of the file there's a function with this
> interesting comment:
> 
>     /* make sure the "parameters" are not too bogus. */
>     if (di->digestAlgorithm.parameters.len > 2) {
> 	goto sigloser;
>     }
> 
> This allows PKCS#1 signatures to include up to two arbitrary parameters in
> the unpadded ASN.1 structure, instead of the required NULL. This is quite
> bad, because it opens up a Bleichenbacher-like attack: using the homomorphic
> signature property of RSA, it is possible to build from one or more valid
> signatures, more valid signatures for a different hash, or even for a
> different hash function. Thus, the attack I described above is in fact
> possible even against RSA-PKCS#1.
> 
> Even though we initially thought this was a new variant, it turns out that
> it was in fact already described by Oiwa et al. in the GNUTLS-SA-2006-4
> report. Instead of explaining how it works precisely, I urge you to read
> section 5 of
> https://www.cdc.informatik.tu-darmstadt.de/reports/reports/sigflaw.pdf which
> provides an excellent description of a practical exploit against e=3
> signatures. However, please note that we were able to create a proof of
> concept even for a e=65537 signature. Let me know if you'd like me to attach
> it for a test case - we confirmed that neither IE/schannel nor OpenSSL, nor
> GnuTLS are vulnerable.

Antoine, please attach it to this bug. Thanks!

When you say "up to two arbitrary parameters," you actually mean one parameter of at most two arbitrary bytes, right?
* The related NSS bugs mentioned in the "sigflaw.pdf" paper were fixed in bug 351079 and bug 350640 and maybe others.

* The buggy parameters check that Antoine quoted above is present in both cryptohi/secvfy.c and softoken/pkcs11c.c.

* The buggy check was added in bug 351079, r=rrelyea, sr=wtc.

* In bug 351079 comment 12, bug 351079 comment 15, and bug 351079 comment 29, Wan-Teh suggested that the check be strengthened to only allow the parameters to be the two-byte sequence { 0x05, 0x00 }, and maybe also allow empty. According to him, Microsoft's implementation allows/allowed both values, but { 0x05, 0x00 } is what is required by the PKCS#1.5 specification.

* The issue of the parameter validation being too weak is discussed extensively in bug 351079. We should consider this flaw to be well-known and likely to be exploited.
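For readers unfamiliar with the attack class, here is a self-contained sketch (illustrative Python, not NSS code; the helper names `icbrt` and `lenient_em_check` are invented for this example) of the Bleichenbacher-style low-exponent forgery described in the sigflaw paper, against a hypothetical verifier that stops checking after the hash bytes:

```python
# Sketch of the low-exponent (e = 3) forgery, against a HYPOTHETICAL
# verifier that ignores everything after the hash. No private key is
# involved: for e = 3 the attacker only needs s**3 to *start* with the
# expected padding + DigestInfo; the trailing bits of the cube are free
# "garbage" that a lenient parser never looks at.
import hashlib

# DER prefix of DigestInfo for SHA-1 (with the NULL parameter):
SHA1_PREFIX = bytes.fromhex("3021300906052b0e03021a05000414")

MOD_BITS = 3072  # pretend 3072-bit RSA modulus, public exponent 3

def icbrt(n):
    """Largest integer x with x**3 <= n (binary search)."""
    lo, hi = 0, 1 << -(-n.bit_length() // 3)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def lenient_em_check(em, digest):
    """Buggy check: stops looking after the hash bytes."""
    if em[0] != 0x00 or em[1] != 0x01:
        return False
    i = em.index(0x00, 2)              # end of the 0xFF run (not verified!)
    body = em[i + 1:]
    return (body.startswith(SHA1_PREFIX) and
            body[len(SHA1_PREFIX):len(SHA1_PREFIX) + 20] == digest)

digest = hashlib.sha1(b"forged message").digest()
prefix = b"\x00\x01\xff\x00" + SHA1_PREFIX + digest   # minimal padding
T = int.from_bytes(prefix, "big")
g = MOD_BITS - 8 * len(prefix)                        # bits of garbage room

# Forged "signature": any s with T*2^g <= s**3 < (T+1)*2^g works, and
# because the garbage room is huge for e = 3, such an s always exists.
s = icbrt(((T + 1) << g) - 1)
em = (s ** 3).to_bytes(MOD_BITS // 8, "big")

assert em[:len(prefix)] == prefix      # the cube starts with valid structure
assert lenient_em_check(em, digest)    # ...so the lenient verifier accepts it
assert em != b"\x00\x01" + b"\xff" * (MOD_BITS // 8 - len(SHA1_PREFIX) - 23) \
             + b"\x00" + SHA1_PREFIX + digest   # but it is not the real encoding
```

The forged value never touches a private key, and a strict byte-for-byte comparison of the full encoded block is exactly what defeats it.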
I'd suggest that, rather than parsing, the expected message is constructed and then compared (in constant-time) against the result of the decryption operation. The ASN.1 prefixes are constant for a given hash function (see https://boringssl.googlesource.com/boringssl/+/master/crypto/rsa/rsa.c#292 for them).
(In reply to Adam Langley from comment #2)
> I'd suggest that, rather than parsing, the expected message is constructed
> and then compared (in constant-time) against the result of the decryption
> operation. The ASN.1 prefixes are constant for a given hash function (see
> https://boringssl.googlesource.com/boringssl/+/master/crypto/rsa/rsa.c#292
> for them).

I agree. I am currently doing some testing of the QuickDER DER parser, because I think I've found multiple issues within QuickDER that could potentially be exploited to similar effect. I will file separate bugs about those things, but for now, I recommend not relying on QuickDER for this.
I am going to make this bug specifically about being too permissive about the parameter value. I will clone this bug for the QuickDER issues.
Summary: RSA PKCS#1 signature verification forgery is possible due to too-permissive SignatureAlgorithm parsing → RSA PKCS#1 signature verification forgery is possible due to too-permissive SignatureAlgorithm parameter parsing
[Tracking Requested - why for this release]: I'm assuming Antoine is correct when he says he has a working exploit for this. There is no way to patch this bug without making it obvious what the security problem is. That means as soon as the patch lands anywhere, it becomes zero-day-ish. I imagine that if this is exploitable, Chrome will be landing a fix for it pretty soon.

Adding NEEDINFO so that it is clear we're interested in getting Antoine's PoC for this when he has it ready.
Flags: needinfo?(antoine)
Keywords: sec-critical
For my current PoC I cheated by using the private key of my own CA on the malformed PKCS#1 block to demonstrate the buggy behavior, but considering the nice freedom enabled by Bug 1064670 for the length encoding (which is one of the annoying parts), I'll try to see if I can forge a real trusted malicious CA tomorrow. The paper I cited before explains precisely how to do it, but it's still a nice little crypto exercise. If you need a CA with public exponent 3 to give it a try you can search for "WeakIntermediateRSAExponent" on https://ca.ht.vc
Tracking for all channels including release. We are currently spinning a 32 chemspill. It would have been nice to have this fixed so that it could be included in 32.0.1. However, if we need to chemspill again, so be it.
I don't think we need to chemspill for a non-public critical issue. This isn't a zero day. We should check a fix into everywhere late in the cycle (before last beta or two), like we often do.

I assume we need this on ESR31.
Note: I believe the 2 bytes were intentionally allowed because some software encoded a 2-byte '0' parameter instead of a NULL parameter. 2 bytes should not be enough to allow a Bleichenbacher attack. (The cited test was added specifically to prevent such an attack.)

bob
Also note, a Bleichenbacher attack is only usable against RSA keys with an exponent of 3. I think we've flushed these from the public internet. If there is an exploit (which I doubt), it's still not a chemspill since it would only affect private CA systems with poorly constructed certs.
I wasn't able to forge a 2048 bit signature yet, but I was somewhat successful with 1024 bit.

I have a test server running on https://bad.ht.vc:666 with the malicious CA certificate. Some notes:

- there's something PKIX doesn't like in this chain. I think it's a syntactic issue, if Brian can tell me what it is exactly I should be able to get it accepted on FF32. For now, I'm turning off security.use_mozillapkix_verification.

- There aren't that many 1024 bit CAs with e=3 that are still accepted. I picked a Starfield CA which I think got removed in FF32 (that proves the effort to get rid of old CAs is indeed very useful). You may have to re-import that root or downgrade to FF31 depending on your platform.
Flags: needinfo?(antoine)
My coworker in Paris reported that on his OSX FF31 the certificate chain was accepted with no warning.
(In reply to Antoine Delignat-Lavaud from comment #11)
> I wasn't able to forge a 2048 bit signature yet, but I was somewhat
> successful with 1024 bit.
> 
> I have a test server running on https://bad.ht.vc:666 with the malicious CA
> certificate. Some notes:
> 
> - there's something PKIX doesn't like in this chain. I think it's a
> syntactic issue, if Brian can tell me what it is exactly I should be able to
> get it accepted on FF32. For now, I'm turning off
> security.use_mozillapkix_verification.

mozilla::pkix doesn't like the fact that the root CA certificate is missing basicConstraints.cA == true. It is probably being too strict there and we should probably change that.

> - There aren't that many 1024 bit CAs with e=3 that are still accepted. I
> picked a Starfield CA which I think got removed in FF32 (that proves the
> effort to get rid of old CAs is indeed very useful). You may have to
> re-import that root or downgrade to FF31 depending on your platform.

In bug 1054659, you mentioned that you also have a PoC for e=65537; is that correct?

Anyway, the test case is actually exploiting bug 1064670, not this bug (1064636). Still, it is much (!!!) appreciated!

(In reply to Robert Relyea from comment #9)
> Note: I believe the 2 bytes were intentionally allowed because some
> software encoded a 2-byte '0' parameter instead of a NULL parameter. 2 bytes
> should not be enough to allow a Bleichenbacher attack. (The cited test was
> added specifically to prevent such an attack.)

Antoine mentioned that he had a PoC that affects this bug, so if I understand his claim correctly, he was able to do it in two bytes (in fact, one byte, since the length byte has to be zero, I think).

(In reply to Robert Relyea from comment #10)
> Also note, a Bleichenbacher attack is only usable against RSA keys with an
> exponent of 3.

Antoine said he had a PoC for e=65537 in the other bug.

> I think we've flushed these from the public internet.

> If there is an
> exploit (which I doubt), it's still not a chemspill since it would only
> affect private CA systems with poorly constructed certs.

See https://www.imperialviolet.org/2012/03/16/rsae.html: "Three should be fine, 2^16+1 saved some buggy software a couple of times and any larger is probably a mistake." In any case, the Mozilla packaged app system has a public key with e=3, and who knows how many sub-CA certificate and/or end-entity certificates have e=3 public keys?

Regardless of all that, and hopefully stating the obvious, we accept signatures with keys where e=3 so our implementation must be secure when e=3.
(In reply to Robert Relyea from comment #9)
> Note: I believe there 2 bytes were intentionally allowed because some
> software encoded a 2 byte '0' parameter instead of a NULL parameter.

With BoringSSL (which will be in Chrome for Android with Chrome 38 and Chrome on OS X with Chrome 39) I ripped out the ASN.1 parser in RSA signature verification and am matching exactly against the expected bytes. There are no reports of problems in the beta channel so far, at least.

Nearly a decade later this has become a very tiresome bug. I'm happy to spend a little of the breakage budget to ensure that we don't have to process ASN.1 in RSA verification.
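A minimal sketch of the exact-match approach Adam describes (illustrative Python, not the BoringSSL code; the function names are invented, but the DigestInfo prefixes are the standard PKCS#1 constants):

```python
# Sketch of "no ASN.1 at verification time": compare the recovered
# EMSA-PKCS1-v1_5 block byte-for-byte against one we build ourselves.
import hashlib

PKCS1_PREFIXES = {
    "sha1":   bytes.fromhex("3021300906052b0e03021a05000414"),
    "sha256": bytes.fromhex("3031300d060960864801650304020105000420"),
}

def expected_em(alg, digest, em_len):
    """The one and only acceptable encoded message of length em_len."""
    t = PKCS1_PREFIXES[alg] + digest
    ps_len = em_len - len(t) - 3
    if ps_len < 8:
        raise ValueError("modulus too short for this digest")
    return b"\x00\x01" + b"\xff" * ps_len + b"\x00" + t

def strict_em_check(em, alg, digest):
    """Exact comparison: any nonstandard encoding is rejected."""
    return em == expected_em(alg, digest, len(em))

digest = hashlib.sha256(b"hello").digest()
good = expected_em("sha256", digest, 256)
assert strict_em_check(good, "sha256", digest)
# The kind of slack the lenient check tolerated now fails outright:
bad = bytearray(good)
bad[5] ^= 1
assert not strict_em_check(bytes(bad), "sha256", digest)
```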
> Antoine mentioned that he had a PoC that affects this bug, so if I understand his claim correctly,
> he was able to do it in two bytes (in fact, one byte, since the length byte has to be zero, I
> think).

That clearly changes how I would prioritize this bug.

> With BoringSSL (which will be in Chrome for Android with Chrome 38 and Chrome on OS X with Chrome
> 39) I ripped out the ASN.1 parser in RSA signature verification and am matching exactly
> against the expected bytes. There are no reports of problems in the beta channel so far, at least.

That would work, but it requires hacking your RSA parser every time you add a new hash algorithm, though it does seem to indicate that our phantom empty versus null parameter issue may be fixed. Does the BoringSSL code process any other kinds of signatures than just certs and SSL client auth? I'm thinking S/MIME in particular.

bob
(In reply to Robert Relyea from comment #17)
> That would work, but it requires hacking your RSA parser every time you add
> a new hash algorithm

That's probably true, but I think that's what PKCS#1 entails. (Alternatively you could think of the ASN.1 prefix as the "hash identifier" and pass it into the RSA functions as an opaque lump of bytes. That might be a nicer design, but in practice I imagine that the APIs are pretty much nailed down and thus that's not possible.)

> though it does seem to indicate that our phantom empty
> versus null parameter issue may be fixed. Does the BoringSSL code process
> any other kinds of signatures than just certs and SSL client auth? I'm
> thinking S/MIME in particular.

Now that it's not the weekend I've remembered that we don't use that path in BoringSSL for certificate validation in Chrome on Android (we use the system certificate libraries so that the system-wide Android certificate UI applies to Chrome). Thus it's evidence that ServerKeyExchange signatures are safe to process strictly, but doesn't speak for certificates. For certificates, Go uses the same design and we've never received a report of a problem, but Go's usage is much more limited than a browser. None the less, I'm still very keen on solving this strictly. Anything less appears to be an invitation for more such problems in the future.

I've no evidence about S/MIME.
As Adam said, bug 1064670 and bug 1064636 should be considered two instances of the same bug, which is that a general-purpose ASN.1 parser is used with the template seq{seq{alg oid; params ANY;}; sig OSTR;}

The exploit for this bug that I considered doing (against the ANY params) was to use a zero-padded INT value, because for some reason the DER decoder will treat it as having length 1. This is an existential forgery attack in the crypto sense, but exploiting it requires a lot of cert hashes (though not a preimage!).

On the other hand, bug 1064670 is much nicer to exploit because it allows arbitrary byte injection instead of only zeros, and a 1024 bit CA can be produced with almost no effort as shown above. I think a very simple combination of 1064670 and 1064636 can be used to forge a 2048 bit CA at very little cost (no more than 2^40 cubings, but possibly less). If I feel like it, I may try to make one, as it allows for some nicer things (such as Extended Validation impersonation on the GoDaddy root). Or someone else can try it as an exercise.

Regardless, what matters is that you should avoid relying on the perfect DER adherence of your ASN.1 parser in particular if the template contains an ANY field. Thus, I urge you to follow Adam's suggestion of using static concatenation for signature blocks, even though you may need to consider 3 possible cases for the parameters field.
> That's probably true, but I think that's what PKCS#1 entails. 
> (Alternatively you could think of the ASN.1 prefix as the "hash identifier" and pass it into the
> RSA functions as an opaque lump of bytes. That might be a nicer design, but in practice I imagine
> that the APIs are pretty much nailed down and thus that's not possible.)

Actually, NSS decodes the PKCS #1 value to determine what hash algorithm to use. There is an API to pass a value in. If no value is passed, NSS uses the hash value pulled from the signature. If a hash value is passed, it verifies that the hashes match, so we are unlikely to be able to blindly use the trick of not decoding the value.

> Regardless, what matters is that you should avoid relying on the perfect DER adherence of your
> ASN.1 parser in particular if the template contains an ANY field. Thus, I urge you to follow 
> Adam's suggestion of using static concatenation for signature blocks, even though you may need to
> consider 3 possible cases for the parameters field.

To make sure I understand what you are saying... It's the matching of the ANY field in the template for the parameter, not really the fact that we are using a DER decoder. I'm fine with checking that field specifically. I don't think we can just handle hard-coded bytes for the PKCS #1 code without making a real hash of things (and making it harder to support in the future).

The critical thing for me is that we can handle parsing the hash oid without rewriting the code when we need to add, say SHA-3.

Clearly we do have to completely tighten up what the parameters look like, and only accept the minimum possible to maintain compatibility.

bob
(In reply to Robert Relyea from comment #20)
> The critical thing for me is that we can handle parsing the hash oid without
> rewriting the code when we need to add, say SHA-3.

Given the constraints that you outline I think the following might be reasonable:

1) Parse the ASN.1 with anything, including the current code, to get the hash function used.

2) Reserialise the expected ASN.1 (not necessarily using a full ASN.1 stack - once you have the DER serialisation of the OID, the rest is very simple). Perhaps serialise two or three versions if you think that we need to cope with buggy signatures.

3) Compare the data against the serialised versions exactly.

Thankfully, since this is signature validation, constant-time considerations don't come into play.
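The three steps above can be sketched as follows (illustrative Python; `serialize_digest_info` and the OID table are invented for this example, and step 1 — obtaining the hash name from a first lenient parse — is assumed to have happened already):

```python
# Step 2's reserialisation is trivial once the DER of the hash OID is
# known: DigestInfo = SEQ { SEQ { OID, [NULL] }, OCTET STRING hash }.
HASH_OIDS = {
    "sha1":   bytes.fromhex("2b0e03021a"),
    "sha256": bytes.fromhex("608648016503040201"),
}

def serialize_digest_info(alg, digest, with_null):
    """Hand-rolled DER for one DigestInfo variant (all lengths < 128)."""
    oid = b"\x06" + bytes([len(HASH_OIDS[alg])]) + HASH_OIDS[alg]
    alg_id = oid + (b"\x05\x00" if with_null else b"")
    alg_id = b"\x30" + bytes([len(alg_id)]) + alg_id
    octet_string = b"\x04" + bytes([len(digest)]) + digest
    body = alg_id + octet_string
    return b"\x30" + bytes([len(body)]) + body

def check_digest_info(recovered, alg, digest):
    # Step 3: compare exactly against the (two) tolerated encodings,
    # covering both NULL parameters and absent parameters.
    return recovered in (serialize_digest_info(alg, digest, True),
                         serialize_digest_info(alg, digest, False))

# The NULL-parameter SHA-1 form matches the well-known 15-byte prefix:
assert serialize_digest_info("sha1", b"\x00" * 20, True) == \
    bytes.fromhex("3021300906052b0e03021a05000414") + b"\x00" * 20
```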
(In reply to Robert Relyea from comment #20)
> Actually, NSS decodes the PKCS #1 value to determine what hash algorithm to
> use. There is an API to pass a value in. If no value is passed, NSS uses the
> hash value pulled from the signature. If a hash value is passed, it verifies
> that the hashes match, so we are unlikely to be able to blindly use the
> trick of not decoding the value.

Last night, I spent some time working on a patch for this issue, and I also discovered that (mis-)feature in secvfy.c. Does the PKCS#1 specification also mandate that behavior when you don't pass in a mechanism that identifies the hash algorithm?

In any case, I intend to post a patch that fixes the issue for the case where the caller did supply the hash algorithm, because that is what the certificate verification and ServerKeyExchange processing code does. On top of that, we can build a patch that handles the unknown-hash-algorithm case.


> I don't think we can just handle hard-coded bytes for the pkcs #1 code without
> making a real hash of things (and making it harder to support in the future).
> 
> The critical thing for me is that we can handle parsing the hash oid without
> rewriting the code when we need to add, say SHA-3.

The way I am doing things requires the addition of 5 lines of code for each new algorithm, in one function:

    static const PKCS1DigestInfoInfo SHA1_PREFIX = {
        SHA1_LENGTH, 15, { 0x30, 0x21, 0x30, 0x09, 0x06, 0x05, 0x2b, 0x0e,
                           0x03, 0x02, 0x1a, 0x05, 0x00, 0x04, 0x14 }
    }; 

    [...]

    case SEC_OID_SHA1: dii = &SHA1_PREFIX; break;

I think this is a reasonable cost. Also, I don't think we should worry too much about future SHA-3 support now, given that this is quite a serious issue affecting us in the present.

> Clearly we do have to completely tighten up what the parameters look like,
> and only accept the minimum possible to maintain compatibility.

In the case of Firefox and Chrome, it seems like it will be difficult to get a good sense of the compatibility impact within the timeframe in which those two products should fix this bug. A conservative approach, in terms of compatibility risk, would be to allow the parameter field to be either exactly { 0x05, 0x00 } or empty, and then tighten that up later. Does anybody see that being problematic?
(In reply to Adam Langley from comment #21)
> 1) Parse the ASN.1 with anything, including the current code, to get the
> hash function used.
> 
> 2) Reserialise the expected ASN.1 (not necessarily using a full ASN.1 stack
> - once you have the DER serialisation of the OID, the rest is very simple).
> Perhaps serialise two or three versions if you think that we need to cope
> with buggy signatures.
> 
> 3) Compare the data against the serialised versions exactly.

Once you do #1 to get the SECOidTag, the function can simply call itself recursively with that SECOidTag, which avoids needing to reserialize anything while still doing the bytewise comparison.

In contrast to the BoringSSL code, in order to support the "no parameters" and "NULL parameters" cases, you need to maintain two prefixes for each algorithm, and check for both, instead of just one. This isn't a significant complication, though.
(In reply to Brian Smith (:briansmith, :bsmith, use NEEDINFO?) from comment #23)
> (In reply to Adam Langley from comment #21)
> > 1) Parse the ASN.1 with anything, including the current code, to get the
> > hash function used.
> > 
> > 2) Reserialise the expected ASN.1 (not necessarily using a full ASN.1 stack
> > - once you have the DER serialisation of the OID, the rest is very simple).
> > Perhaps serialise two or three versions if you think that we need to cope
> > with buggy signatures.
> > 
> > 3) Compare the data against the serialised versions exactly.
> 
> Once you do #1 to get the SECOidTag, the function can simply call itself
> recursively with that SECOidTag, which avoids needing to reserialize
> anything while still doing the bytewise comparison.

Actually, one does not need to use the ASN.1 decoder at all, with the current set of supported digest algorithms, because the encoded length byte [the second byte in the prefix] is a unique selector.
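A sketch of that observation (illustrative Python; the table covers the digests discussed in this bug, and the helper name is invented):

```python
# The second byte of the DER DigestInfo is the SEQUENCE length byte,
# and it is distinct for every digest considered here, so the hash can
# be identified without running an ASN.1 decoder at all.
DIGEST_BY_LENGTH_BYTE = {
    0x20: ("md5",    16),
    0x21: ("sha1",   20),
    0x31: ("sha256", 32),
    0x41: ("sha384", 48),
    0x51: ("sha512", 64),
}

def digest_alg_from_digest_info(di):
    """Return (name, hash_len) or None, using only the first two bytes."""
    if len(di) < 2 or di[0] != 0x30:
        return None
    return DIGEST_BY_LENGTH_BYTE.get(di[1])

sha1_di = bytes.fromhex("3021300906052b0e03021a05000414") + b"\x00" * 20
assert digest_alg_from_digest_info(sha1_di) == ("sha1", 20)
```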
In Firefox 33 and later, mozilla::pkix accepts a short whitelist of AlgorithmIdentifier OIDs for the signatureAlgorithm field of certificates and for the similar field of OCSP responses. (mozilla::pkix doesn't even bother with CRLs at all.) Because this version hasn't hit the release channel yet, the compatibility impact of Firefox 33's short whitelist isn't known yet.

In contrast, NSS's certificate verification logic accepts a much wider set of OIDs in these fields, including in particular OIDs that identify only public key algorithms without identifying the digest algorithm. In those cases, NSS's CERT_VerifySignedDataWithPublicKey function (which is what is used to verify the signatures of certificates and OCSP responses in NSS) will trust the algorithm of the DigestInfo within the PKCS#1 signature.

I recommend that people closely study the current code for CERT_VerifySignedDataWithPublicKey, NSS_GetAlgorithmPolicy and VFY_VerifyDataWithAlgorithmID (called by CERT_VerifySignedDataWithPublicKey), and sec_DecodeSigAlg (called by VFY_VerifyDataWithAlgorithmID), as all of that code affects non-mozilla::pkix certificate/OCSP/CRL signature verification. In particular, things that rely on NSS's built-in certificate stuff will be dependent on the SEC_OID_UNKNOWN logic mentioned in comments 21 through 24 above. I have written a patch (that I still need to test) that tries to minimize the risk there.

You can see the short whitelist of AlgorithmIdentifier values that Firefox 33's mozilla::pkix accepts at:
https://mxr.mozilla.org/mozilla-central/source/security/pkix/include/pkix/pkixtypes.h?rev=a4a8b3b58191#42. Also note that mozilla::pkix requires that the parameters field of the signatureAlgorithm field be exactly { 0x05, 0x00 } or empty, whereas sec_DecodeSigAlg does do quite a bit of parsing of other values of that field.

I recommend that people start evaluating the compatibility impact of adopting Firefox 33's mozilla::pkix's stricter approach by compiling statistics on the counts of the distinct values of signatureAlgorithm in certificates (e.g. using the various CT databases). If that shorter whitelist works, then NSS's certificate verification logic can (eventually) be changed to avoid the SEC_OID_UNKNOWN nastiness too.
$ certutil -d .. -N
$ certutil -d .. -A -n valicert -t "C,C,C" -i /c/p/fr/cert3.der

Before patch:
$ vfychain -d .. -u 1 -pp cert0.der cert1.der cert2.der
Chain is good!

After patch:
$ vfychain -d .. -u 1 -pp cert0.der cert1.der cert2.der
Chain is bad!
PROBLEM WITH THE CERT CHAIN:
CERT 3. valicert [Certificate Authority]:
  ERROR -8182: Peer's certificate has an invalid signature.

Note that using vfychain's "-t cert3.der" argument to temporarily trust the valicert root does not work due to bug 1068182.
I hacked together a code generator for the prefixes to make sure there would be no typos, based on the mozilla::pkix code generator for OIDs. You can run it like so:

$ python DottedOIDToCode.py --dii foo bar

(The "foo" and "bar" arguments are ignored; I just didn't feel like modifying the script's argument parsing.)

Note that I reformatted the code and removed the "tag" field after I generated it.
Here's the breakdown of the signature algorithm from the certificates in the CT log.

The first column is the hex encoded, DER of the signature algorithm. The second is what OpenSSL calls it (this is not unique - several DER strings may map to the same name) and the last column is the count.

There was no filtering for unexpired certificates, although I can do that if needed.

300d06092a864886f70d0101020500  :md2WithRSAEncryption           1
300b06092a864886f70d01010b      :sha256WithRSAEncryption        2
300b06092a864886f70d010105      :sha1WithRSAEncryption          1
300d06092a864886f70d01010c0500  :sha384WithRSAEncryption        20
300a06082a8648ce3d040303        :ecdsa-with-SHA384              20
300a06082a8648ce3d040302        :ecdsa-with-SHA256              97
300d06092a864886f70d0101040500  :md5WithRSAEncryption           6512
300d06092a864886f70d01010d0500  :sha512WithRSAEncryption        7715
300d06092a864886f70d01010b0500  :sha256WithRSAEncryption        483338
300d06092a864886f70d0101050500  :sha1WithRSAEncryption          4498605
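As a quick structural illustration of what distinguishes those rows (illustrative Python; the counts in the comments are the ones from the table above), the `300b…`/`300a…` encodings omit the parameters field entirely, while the `300d…` encodings end in `0500`, the encoded NULL:

```python
# Each row above is a DER AlgorithmIdentifier: SEQUENCE { OID,
# optional parameters }. This checks whether the bytes after the OID
# are exactly the encoded NULL.
def has_null_params(algid_hex):
    b = bytes.fromhex(algid_hex)
    assert b[0] == 0x30 and b[1] == len(b) - 2   # SEQUENCE of stated length
    oid_len = b[3]                               # length of the 0x06 OID body
    rest = b[4 + oid_len:]                       # whatever follows the OID
    return rest == b"\x05\x00"

assert has_null_params("300d06092a864886f70d0101050500")      # 4498605 certs
assert not has_null_params("300b06092a864886f70d01010b")      # 2 certs
assert not has_null_params("300a06082a8648ce3d040302")        # ECDSA, no params
```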
Note: I have not yet tested the unsafeAllowMissingParameters==true case, and in general I haven't tested this a lot. I'm posting it before I've written all the tests to expedite the feedback process.

This code lives in libnssutil3 instead of libnss3 because it needs to be shared by the VFY_* functions and softoken's PKCS#1 mechanism implementation.
Attachment #8490281 - Flags: review?(agl)
This modifies the VFY_* functions to use the new padding checking logic.

Note in particular that when the caller supplies the digest algorithm (which is always the case for the verification of ServerKeyExchange signatures, and is usually the case for certificate signatures), we don't invoke SGN_DecodeDigestInfo (the DER decoder for the DigestInfo) at all. Originally, I was planning to replace the use of SGN_DecodeDigestInfo with much simpler logic like I mentioned in earlier comments. I've already written that logic, but in the interests of reducing the scope of the change to the minimum necessary, I decided not to do that. Also, a better strategy for dealing with that problem is to stop calling the VFY_* functions with SEC_OID_UNKNOWN as the digest algorithm.
Attachment #8490283 - Flags: review?(agl)
Note that this patch is completely untested, AFAICT.

Note that certificate verification and ServerKeyExchange verification do not use this logic; instead, they use the VFY_* functions that were patched in part 2. So, for web browsers, part 3 is less critical than part 2.

It seems to be the case that all certificate-related signature verification goes through the function CERT_VerifySignedDataWithPublicKey, which calls VFY_VerifyDataWithAlgorithmID, which is affected by part 2. It seems all PKCS#1 ServerKeyExchange signature verification goes through ssl3_VerifySignedHashes, which uses VFY_VerifyDigestDirect. (Incidentally, the logic in ssl3_VerifySignedHashes is not as clear as I would like; I've also written some patches to clean it up, which I will attach to another bug. Those patches don't have any bearing on these security issues though.)
Attachment #8490288 - Flags: review?(agl)
I see that recently NSS integrated GTest so that we can write GTests within NSS's test suite. I will adapt my tests to work with NSS's test suite and post them as well.
Comment on attachment 8490281 [details] [diff] [review]
Part 1: Implement new PKCS#1 padding checking logic

Review of attachment 8490281 [details] [diff] [review]:
-----------------------------------------------------------------

::: lib/util/manifest.mn
@@ +58,5 @@
>  	nssb64e.c \
>  	nssrwlk.c \
>  	nssilock.c \
>  	oidstring.c \
> +  pkcs1sig.c \

(nit: indentation.)

::: lib/util/pkcs1sig.c
@@ +107,5 @@
> +  const PKCS1DigestInfoInfo* dii;
> +  const Prefix* prefix;
> +
> +  if (!digest || !digest->data ||
> +      !dataRecoveredFromSignature || !dataRecoveredFromSignature->data)

Something is wrong here - missing if body?

@@ +127,5 @@
> +    return SECFailure;
> +  }
> +
> +  /* We don't attempt to avoid timing attacks on these comparisons because
> +   * signature verification is a public key operation, not a private key=

remove = from end.

::: lib/util/pkcs1sig.h
@@ +38,5 @@
> +#define SGN_PKCS1_DIGESTINFO_LENGTH_MAX \
> +        (SGN_PKCS1_DIGESTINFO_PREFIX_LENGTH_MAX + HASH_LENGTH_MAX)
> +
> +/* dataRecoveredFromSignature must be the result of calling PK11_VerifyRecover
> + * or equivalent. SECUTIL_VerifyPKCS1Digest verifies that the length of the

The first sentence should probably not be first.

The comment mentions SECUTIL_VerifyPKCS1Digest, but the function is SGN_VerifyPKCS1DigestInfo.
Comment on attachment 8490283 [details] [diff] [review]
Part 2: Change NSS's VFY_* functions to use the new PKCS#1 padding checking logic

Review of attachment 8490283 [details] [diff] [review]:
-----------------------------------------------------------------

::: lib/cryptohi/secvfy.c
@@ +114,5 @@
> +static SECStatus
> +verifyPKCS1DigestInfo(const VFYContext* cx, const SECItem* digest)
> +{
> +  SECItem pkcs1DigestInfo;
> +  pkcs1DigestInfo.data = (PRUint8*) cx->u.buffer;

why reference it as |buffer|? Wouldn't |pkcs1DigestInfo| be clearer here?

@@ +385,5 @@
>      if (sig) {
>  	switch (type) {
>  	case rsaKey:
> +	    rv = recoverPKCS1DigestInfo(hashAlg, &cx->hashAlg,
> +					cx->u.buffer,

ditto here.

@@ +559,5 @@
>  	if (sig) {
> +	    PORT_Assert(cx->hashAlg != SEC_OID_UNKNOWN);
> +	    SECOidTag hashid;
> +	    rv = recoverPKCS1DigestInfo(cx->hashAlg, &hashid,
> +					cx->u.buffer,

ditto.
Comment on attachment 8490288 [details] [diff] [review]
Part 3: Change softoken to use the new PKCS#1 padding verification logic

Review of attachment 8490288 [details] [diff] [review]:
-----------------------------------------------------------------

LGTM modulo "XXX" comment, which is not clearly intended for landing.
Sorry; I didn't squash all the commits to that file together before exporting it.
Attachment #8490281 - Attachment is obsolete: true
Attachment #8490281 - Flags: review?(agl)
Attachment #8490369 - Flags: review?(agl)
Attachment #8490369 - Attachment is obsolete: true
Attachment #8490369 - Flags: review?(agl)
Attachment #8490376 - Flags: review?(rrelyea)
Attachment #8490376 - Flags: review?(agl)
Comment on attachment 8490283 [details] [diff] [review]
Part 2: Change NSS's VFY_* functions to use the new PKCS#1 padding checking logic

Review of attachment 8490283 [details] [diff] [review]:
-----------------------------------------------------------------

::: lib/cryptohi/secvfy.c
@@ +114,5 @@
> +static SECStatus
> +verifyPKCS1DigestInfo(const VFYContext* cx, const SECItem* digest)
> +{
> +  SECItem pkcs1DigestInfo;
> +  pkcs1DigestInfo.data = (PRUint8*) cx->u.buffer;

I agree that referencing it as |buffer| is worse, but then it would be inconsistent with the rest of the code, and also there is this comment in the code:

    /*
     * This buffer holds either the digest or the full signature
     * depending on the type of the signature (key->keyType).  It is
     * defined as a union to make sure it always has enough space.
     *
     * Use the "buffer" union member to reference the buffer.
     * Note: do not take the size of the "buffer" union member.  Take
     * the size of the union or some other union member instead.
     */

I am happy to change it to pkcs1DigestInfo. Let's see what Bob says.
Attachment #8490283 - Flags: review?(rrelyea)
Comment on attachment 8490288 [details] [diff] [review]
Part 3: Change softoken to use the new PKCS#1 padding verification logic

Review of attachment 8490288 [details] [diff] [review]:
-----------------------------------------------------------------

Adam, are you concerned that the /*XXX: unsafeAllowMissingParameters*/ comments will reveal too much about what we are fixing? Do you want me to strip them out? Note that the same comment appears in part 2 (VFY_*) and "unsafeAllowMissingParameters" is also the name of the parameter in part 1. I can remove the comments if people think they are too revealing to people watching us check stuff in, but otherwise I think the comments are helpful.
Attachment #8490288 - Flags: review?(rrelyea)
Comment on attachment 8490376 [details] [diff] [review]
Part 1: Implement new PKCS#1 padding checking logic [v3]

Review of attachment 8490376 [details] [diff] [review]:
-----------------------------------------------------------------

This is pretty much what I said we shouldn't do.

I'm OK with not using the decoder, but this code should work on new hash OIDs without us having to hack it when we add new hash algorithms. NSS is carefully constructed to minimize the places you need to change to add new hash OIDs.

So there are two choices: 1) use the quick decoder with a very tight template, or 2) use the encoder (or hand-encode if you want) to construct the expected header on the fly using the hash OID. (You may have to add hash_len to your verify function.)
Attachment #8490376 - Flags: review?(rrelyea) → review-
> Last night, I spent some time working on a patch for this issue, and I also discovered that 
> (mis-)feature in secvfy.c. It seems the PKCS#1 specification also mandates that behavior when you
> don't pass in a mechanism that identifies the hash algorithm?

Not only does it have this feature, much of the code and parts of Mozilla depend on it. I actually pointed this out in my comments above. That's one reason you can't just compare the blobs.

> 3) Compare the data against the serialised versions exactly.

I'm OK with this as long as the serialized versions are generated dynamically (that is the function doesn't have explicit knowledge of the various hash algorithms built in).
Comment on attachment 8490283 [details] [diff] [review]
Part 2: Change NSS's VFY_* functions to use the new PKCS#1 padding checking logic

Review of attachment 8490283 [details] [diff] [review]:
-----------------------------------------------------------------

::: lib/cryptohi/secvfy.c
@@ +24,5 @@
> +** Otherwise, parse the DigestInfo structure and store the decoded digest
> +** algorithm into digestAlgOut.
> +**
> +** Store the encoded DigestInfo into digestInfo.
> +** Store the DigestInfo length into digestInfoLen.

It seems to me a better semantic is to always parse the DigestInfo, and if givenDigestAlg isn't SEC_OID_UNKNOWN and doesn't match, we should fail immediately.

It's not a bug and I won't force this semantic, but it makes error-correctness analysis simpler.

@@ +48,3 @@
>  
> +    digestInfoItem.data = digestInfo;
> +    digestInfoItem.len = maxDigestInfoLen;

A comment here warning that maxDigestInfoLen is not the same as maxDigestLen would be helpful.

@@ +59,4 @@
>  
> +    if (givenDigestAlg != SEC_OID_UNKNOWN) {
> +        *digestAlgOut = givenDigestAlg;
> +        return SECSuccess;

This obviously is where you would implement my suggestion.

@@ +114,5 @@
> +static SECStatus
> +verifyPKCS1DigestInfo(const VFYContext* cx, const SECItem* digest)
> +{
> +  SECItem pkcs1DigestInfo;
> +  pkcs1DigestInfo.data = (PRUint8*) cx->u.buffer;

In this case, we know we are dealing with an RSA signature (PKCS#1 is only RSA), so referencing the RSA-specific value is probably correct, independent of the rest of the code. (I.e., you don't have to make a global change to the file's use of buffer.) Note that below we are using the pkcs1DigestInfoLen field, not some generic length, so using pkcs1DigestInfo would be consistent.

@@ +385,5 @@
>      if (sig) {
>  	switch (type) {
>  	case rsaKey:
> +	    rv = recoverPKCS1DigestInfo(hashAlg, &cx->hashAlg,
> +					cx->u.buffer,

This function isn't as clear-cut. The function is key-independent (though the code block is key-dependent). I could go either way on this one.
Comment on attachment 8490288 [details] [diff] [review]
Part 3: Change softoken to use the new PKCS#1 padding verification logic

Review of attachment 8490288 [details] [diff] [review]:
-----------------------------------------------------------------

Assuming changes to the SGN_VerifyPKCS1 API aren't necessary due to my other review.
Attachment #8490288 - Flags: review?(rrelyea) → review+
Comment on attachment 8490376 [details] [diff] [review]
Part 1: Implement new PKCS#1 padding checking logic [v3]

Review of attachment 8490376 [details] [diff] [review]:
-----------------------------------------------------------------

Hi Bob. Thanks for the quick review turnaround time!

(In reply to Robert Relyea from comment #40)
> I'm OK with not using the decoder, but this code should work on new hash
> oids without us having to hack it when we add new hash algorithms. NSS is
> carefully constructed to minimize the places you need to change to add new
> hash oids.....

NSS requires several files to be changed to add a new hash algorithm. See this bug, which added SHA-224 support, and in particular notice how many files had to be changed and how many patches (not just in that bug, but in other bugs) were required:
https://bugzilla.mozilla.org/show_bug.cgi?id=356713#c12.

You are right that so far in NSS, the OID bytes are only encoded in secoid.c. But, it seems like maintaining that as an invariant is optimizing for the wrong thing. This issue is exploitable precisely because the DER decoder has a serious bug in it (bug 1064670, and previously bug 351079 and bug 350640); I think it is clear by now that using the DER decoder for this is not prudent.

> 2) use the encoder (or hand encode if you want) to construct
> the expected header on the fly using the hash oid. (you may have to add
> hash_len to your verify function).

This would be more complicated and more error-prone, especially given the previously stated requirement that we have to encode the DigestInfo in two different ways, to account for implementations that omit the NULL parameter. Also, note that the data this patch is based on *is* the output of a DER encoder.

I respectfully maintain that the way I've coded this is better than the alternatives suggested so far, in that it is an incredibly safe and low-risk approach. I think it would be a mistake to change to a different approach. And, I definitely don't think the code I've written is so terrible that it deserves to be rewritten, especially considering the circumstances. In particular, this is a time-sensitive matter for a huge vulnerability in many, many products. Mozilla and Google and Red Hat are going to need to invest a substantial amount of effort doing compatibility testing in the next few days/weeks as the fix for this bug gets rushed out. The more the fixing of this bug is delayed, the more risk it adds to all your products.

This bug was reported over a week ago, and we've had the forged-certificate PoC for almost a week already. I think that, even if my approach is not perfect in terms of future maintenance, it is good to at least fix it in this simple, clearly correct, low-risk, and *already-written* manner while we debate cleaner approaches to improve future maintainability for SHA-3 and other things in parallel.

So, I respectfully request that you reconsider your review. Thanks!
Attachment #8490376 - Flags: review?(wtc)
Attachment #8490376 - Flags: review?(rrelyea)
> You are right that so far in NSS, the OID bytes are only encoded in secoid.c. But, it seems like
>  maintaining that as an invariant is optimizing for the wrong thing.

Nope, it's a basic requirement. I'm the one who has to deal with the fallout. I've been working to reduce the number of places we have to touch, not increase them. A patch that doesn't meet that requirement is a no-go for NSS. Period.

> This issue is exploitable precisely because the DER decoder has a serious bug in it 
> (bug 1064670, and previously bug 351079 and bug 350640); I think it is clear by now that 
> using the DER decoder for this is not prudent.

You can still tighten the code up by doing what I suggest in option 2. It is not complicated; I'm even OK if you want to code it by hand (it's not difficult DER). The code should be simple enough to easily verify that it's correct, and you'll know instantly if it's wrong. The multiple hardcoded values are a total hack, and not maintainable in the future.

I'm fine with the rest of the patch, please do not cut corners on this last piece.

bob
Comment on attachment 8490376 [details] [diff] [review]
Part 1: Implement new PKCS#1 padding checking logic [v3]

r- don't try to go around me on this either. What I've asked for is not difficult to code.
Attachment #8490376 - Flags: review?(wtc)
Attachment #8490376 - Flags: review?(rrelyea)
Attachment #8490376 - Flags: review-
I think it would be good for somebody else to work on this bug.
Attached file build_hash.c
Here's what I was thinking.
(Not tested; once you include it, you could probably optimize away a bunch of unnecessary structs.)
This addresses the review comments for part 2. I didn't add a comment about maxDigestInfoLen vs. maxDigestLen because there is nothing named maxDigestLen in the file.
Attachment #8490283 - Attachment is obsolete: true
Attachment #8490283 - Flags: review?(rrelyea)
Attachment #8490283 - Flags: review?(agl)
This patch adds some additional comments about why we don't check the encoded DigestInfo against the given digest OID at the time we recover the DigestInfo from the signature. Basically, it would be redundant and misleading to do that check in recoverPKCS1DigestInfo. I split this into a separate patch in case it is decided that the comments are too helpful for people trying to reverse-engineer the vulnerability from the patches. Part 4 can be checked in separately at a later date, if so desired.

As I mentioned in my previous comment, I don't have much more time to work on this, so I'm going to hand it off to others to finish.
Attachment #8490536 - Flags: review+
Comment on attachment 8490537 [details] [diff] [review]
Part 4: Add additional comments for the changes to VFY_*

Review of attachment 8490537 [details] [diff] [review]:
-----------------------------------------------------------------

Thanks for considering my suggestion. As I mentioned, I'm OK with what you did; the comments will help future people looking at this code understand its constraints.
Attachment #8490537 - Flags: review+
Thanks for jumping on this, Brian. Kai says he'll look at finishing it up, since it's not a whole lot of work. I should hold back since wtc is out until at least Friday, possibly even Monday, so we'll still need an NSS reviewer on this.

bob
We're very likely going to be releasing Firefox 32.0.2 this week. This bug can ride along in 32.0.2 if complete in time.
Can we get rid of the Apache license header in the new lib/util files?
(In reply to Lawrence Mandel [:lmandel] from comment #53)
> We're very likely going to be releasing Firefox 32.0.2 this week. This bug
> can ride along in 32.0.2 if complete in time.

Fixes in NSS, especially when introducing a new API as expected with this patch, require a new full NSS release. Afterwards, Mozilla needs to pick up the new NSS release.

Should we consider releasing an NSS 3.17.1 update, with this bugfix only, based on NSS 3.17, and renumbering the current NSS development trunk to 3.17.2?
(Unrelated to fixing this bug, but related to getting it landed on stable branches, another heads-up:
in order to land this on the 24.x and 31.x branches, which still use NSS 3.16.2 (which doesn't yet remove the 1024-bit legacy roots), and because of the plan to keep the set of root CA certs frozen on the 24/31 branches for now,
we'll have to produce a special release tag of 3.17.1 that still uses the root CA list from 3.16.2.
That special release tag could be created on an NSS mini-branch and named NSS_3_17_1_WITH_CKBI_1_98_RTM.)
I strongly suspect that Apple's implementation is completely vulnerable to this bug.

I had a quick look over their code and after jumping around I reached the conclusion that the function that checks certificate signatures is "SecCertificateIsSignedBy" in sec/Security/SecCertificate.c

This function starts with the following lines:

    SecAsn1AlgId algId;
    algId.algorithm.Length = certificate->_tbsSigAlg.oid.length;
    algId.algorithm.Data = certificate->_tbsSigAlg.oid.data;
    algId.parameters.Length = certificate->_tbsSigAlg.params.length;
    algId.parameters.Data = certificate->_tbsSigAlg.params.data;

This seems to indicate that the parameters inside the PKCS#1 block are taken directly from the "to be signed" part of the certificate which is 100% under attacker control.

I have yet to check that this hunch is correct, but considering that Apple takes ages to fix these things, it may be a good idea to wait a bit longer before rolling out this update.
Comment on attachment 8490498 [details]
build_hash.c

I'm trying to understand the inspiration that Bob intended to give with this patch, but am having trouble.

For example, this line:
    outerSequenceLengthValue = derOid->oid.len + hashOid->oid.len + 8

I understand taking the length of hashOid, but I don't understand what derOid refers to.
(In reply to Kai Engert (:kaie) from comment #58)
> I understand taking the length of hashOid, but I don't understand what
> derOid is referring to?

Ok, it's referring to the length of the following digest (e.g. 16 for md5).
(In reply to Antoine Delignat-Lavaud from comment #57)
> I strongly suspect that Apple implementation is completely vulnerable to
> this bug.

OK, I took a closer look at SecKey.c and, in fact, the DEREncodeDigestInfoPrefix function uses hard-coded NULL parameters (0500). There is still a bug that causes MD5 signatures to be accepted (by using the md5 OID instead of md5withRSA), but no signature forgery.
Bob:

In order to dynamically encode the prefix, we must dynamically look up the size (in bytes) of the hash, starting with SECOidTag digestAlg.

I think we shouldn't use the size that we can see in the signature we're trying to verify, right?

I searched NSS for tables that would allow me to perform the lookup. I found HASH_GetHashTypeByOidTag() and HASH_GetHashObject(), which contains the size. Unfortunately, they are within libnss3, and cannot be accessed from libnssutil3.

I also found something in libfreebl/rawhash.c but I think we cannot access libfreebl from nssutil either.

Unless you can point me to another place that's accessible from nssutil, we must add a mapping from hash type to hash size to nssutil.
The hash that's passed in is not the hash from the signature, but the hash calculated by the caller over the data being verified. It should be correct because it comes from the underlying hash implementation.

If we aren't happy with that, then we need to have the caller pass the hash size in. There are implementations of the hasht interface in both softoken and in nss proper (one points to freebl, the other points to PK11wrap).

We could also just require the caller to check the length if we want. We should not depend on the length inside the signature block, since that is something the attacker can fake. This code doesn't depend on that block because we don't actually decode it anywhere. Instead, the code takes the full DER-encoded signature and compares it to a hand-created DER-encoded signature.

bob
OK, so you're saying I can simply trust the value digest->len passed to SGN_VerifyPKCS1DigestInfo().
Attached patch part1 - v4 - work-in-progress (obsolete) — Splinter Review
This patch is on top of the other parts 2+3+4.

It builds, but fails during tests.

I will write test code that dumps the dynamically encoded prefixes to files, so I can compare them with the intended results...
Attachment #8490376 - Attachment is obsolete: true
Attachment #8490376 - Flags: review?(agl)
Attached patch part 1 - [v5] - with debug code (obsolete) — Splinter Review
This one passes the "cert" part of our suite.
It contains some debug code, which I used to dump the encoded bytes.
I did a bytewise comparison of about half of the structs that bsmith had in his patch, and they matched.

I'll attach a cleaner patch for review next...
Attachment #8491627 - Attachment is obsolete: true
Attached patch part 1 - [v6] (obsolete) — Splinter Review
Also, I removed the vim header, removed the Apache license, and changed indentation to the usual 4 spaces.
Attachment #8491709 - Flags: review?(rrelyea)
Comment on attachment 8491709 [details] [diff] [review]
part 1 - [v6]

Review of attachment 8491709 [details] [diff] [review]:
-----------------------------------------------------------------

r+ rrelyea.

Thanks Kai.
Attachment #8491709 - Flags: review?(rrelyea) → review+
See Also: → 1069405
Attached file Signature validation tests (obsolete) —
I've generated a number of test cases around this (attached) and tried it against OpenSSL and GnuTLS. Both accept the case where the NULL is missing but are otherwise ok.

I'll see if I can find an Apple device this afternoon to test that.
Adam, it seems like you already have a convenient way of generating certificates with malformed DigestInfos. It would be great if you could share it so that I can add some more test cases.

Here is the test I made for OpenSSL. On the OpenSSL trunk, there is a patch to ignore leading zeros in the length field of TLV constructs. OpenSSL trunk also accepts the indefinite-length encoding of lengths. Finally, OpenSSL trunk accepts the high-tag-number form even for low-numbered tags. Note that I only tested the OpenSSL trunk, not any previous releases, and not LibreSSL or BoringSSL.
Brian: I'm tweaking the Go crypto/rsa library and using the attached script to generate certificates. I'll try adding high-valued tags and indefinite lengths as a test case. I think that bernull.crt tests leading zeros already (at least in the params) but I didn't test with OpenSSL trunk.
(In reply to Adam Langley from comment #68)
> I'll see if I can find an Apple device this afternoon to test that.

I checked the source and they only accept NULL parameters and do not use any DER decoding (they use memcmp with hard-coded ASN.1)
The only dodgy thing is that they accept MD5 signatures, but it's not clear whether that's a bug or a feature.
OpenSSL and GnuTLS both accept indefinite lengths and high-valued tags. But in neither case do they have an overflow that allows lots of arbitrary data. Additionally, OpenSSL trunk accepts leading zeros on the length.

These are all things that should be fixed, but I don't see an exploit given that flexibility alone.

Updated test cases attached.
Attachment #8492326 - Attachment is obsolete: true
Attachment #8492355 - Attachment is obsolete: true
Whiteboard: [status-firefox-esr24:affected]
Whiteboard: [status-firefox-esr24:affected] → [status-firefox-esr24:affected][status-b2g-v1.3t:affected]
Comment on attachment 8491709 [details] [diff] [review]
part 1 - [v6]

Review of attachment 8491709 [details] [diff] [review]:
-----------------------------------------------------------------

Adding some f- for style/future compat issues, but I think the actual _code_ is correct.

::: lib/util/pkcs1sig.c
@@ +10,5 @@
> +#include "secoid.h"
> +
> +typedef struct
> +{
> +    size_t len;

This type is
1) Missing the necessary includes (stddef.h)
2) Fairly inconsistent with respect to how size types are handled (see line 20 for a prime example)

@@ +12,5 @@
> +typedef struct
> +{
> +    size_t len;
> +    PRUint8 data[SGN_PKCS1_DIGESTINFO_PREFIX_LENGTH_MAX];
> +} Prefix;

naming-wise, this seems to violate the internal consistency of NSS.

Internal structure names are leading-lowercase and camel-capped, with a prefix indicating the module they belong to (to avoid any ODR issues), a "Str" suffix for structs, and the brace on the same line as the typedef:

typedef struct pkcs1PrefixStr pkcs1Prefix;
struct pkcs1PrefixStr {
  size_t len;
  PRUint8 data[...];
};

@@ +19,5 @@
> +{
> +    unsigned int digestLen;
> +    Prefix prefixWithParams;
> +    Prefix prefixWithoutParams;
> +} PKCS1DigestInfoInfo;

No better naming than the "InfoInfo" ?

@@ +22,5 @@
> +    Prefix prefixWithoutParams;
> +} PKCS1DigestInfoInfo;
> +
> +static SECStatus
> +encodePrefix(const SECOidData *hashOid, unsigned int digestLen,

Naming-wise, this is also inconsistent with respect to NSS.

pkcs1_encodePrefix()?

@@ +29,5 @@
> +    /* with params coding is:
> +     *  Sequence (2 bytes) { 
> +     *      Sequence (2 bytes) { 
> +     *               OID (2 bytes)  {
> +     *                   Oid value (derOid->oid.len)

inconsistent capitalization

@@ +35,5 @@
> +     *               NULL (2 bytes)
> +     *      }
> +     *      OCTECT (2 bytes);
> +     *
> +     * without Params coding is:

inconsistent capitalization (Params vs line 29, params - prefer 'params')

@@ +39,5 @@
> +     * without Params coding is:
> +     *  Sequence (2 bytes) { 
> +     *      Sequence (2 bytes) { 
> +     *               OID (2 bytes)  {
> +     *                   Oid value (derOid->oid.len)

inconsistent capitalization (OID -> Oid)

@@ +56,5 @@
> +        extra = 2;
> +    }
> +
> +    if (outerSeqLen >= 128 ||
> +        (outerSeqLen +2 -digestLen) > SGN_PKCS1_DIGESTINFO_PREFIX_LENGTH_MAX) {

Inconsistent spacing here makes it difficult to read

(outerSeqLen + 2 - digestLen) = readable.

Note, however, the need for digestLen to be trusted, otherwise overflow. But I think that's a reasonable assumption here.

@@ +61,5 @@
> +        /* this is actually a library failure, It shouldn't happen */
> +        PORT_SetError(SEC_ERROR_INVALID_ARGS); 
> +        return SECFailure;
> +    }
> +    

There's trailing whitespace on lines 30, 31, 40, 41, 62, and 65 - can you please fix this before committing.

@@ +65,5 @@
> +    
> +    prefix->data[0] = SEC_ASN1_SEQUENCE|SEC_ASN1_CONSTRUCTED;
> +    prefix->data[1] = outerSeqLen;
> +    prefix->data[2] = SEC_ASN1_SEQUENCE|SEC_ASN1_CONSTRUCTED;
> +    prefix->data[3] = innerSeqLen;

You make no check on innerSeqLen being <= 127.

The only reason this is safe is because of the transitive properties of lines 59 (which checks outerSeqLen) and line 50 (which constructs outerSeqLen from innerSeqLen), and that hashOid->oid.len should be less than a boundary value (if it was untrusted data, one could cause mischief).

This is at least 'surprising', and might benefit from an unnecessary but self-documenting check on line 59 that innerSeqLen doesn't have the high bit set (therefore making it suitable for direct length encoding here).

@@ +76,5 @@
> +    }
> +    prefix->data[6+hashOid->oid.len+extra] = SEC_ASN1_OCTET_STRING;
> +    prefix->data[6+hashOid->oid.len+extra+1] = digestLen;
> +
> +    prefix->len = 6+hashOid->oid.len+extra+2;

Inconsistent spaces between operators here (and really, throughout the file)

Specific remediation: Add spaces on lines 74, 75, 77, 78, and 80 (in addition to the remarks on line 60), which bring it to consistency with lines 49/50/54/55/56/59 and, in general, the direction we've been trying to push NSS.

@@ +98,5 @@
> +    }
> +
> +    hashOid = SECOID_FindOIDByTag(digestAlg);
> +    if (hashOid == NULL) {
> +        PORT_SetError(SEC_ERROR_INVALID_ARGS); 

Trailing whitespace

@@ +111,5 @@
> +            != SECSuccess) {
> +        return SECFailure;
> +    }
> +
> +    {

Unnecessary block statement

@@ +116,5 @@
> +        /* We don't attempt to avoid timing attacks on these comparisons because
> +         * signature verification is a public key operation, not a private key
> +         * operation.
> +         */
> +        const Prefix* comparingTo;

Let's name this something more meaningful: expectedPrefix

::: lib/util/pkcs1sig.h
@@ +16,5 @@
> +#define SGN_PKCS1_DIGESTINFO_PREFIX_LENGTH_MAX 19
> +
> +/* The length of the longest valid DigestInfo. */
> +#define SGN_PKCS1_DIGESTINFO_LENGTH_MAX \
> +        (SGN_PKCS1_DIGESTINFO_PREFIX_LENGTH_MAX + HASH_LENGTH_MAX)

Are we sure we want to hardcode this in a public API?

I can see no reason to expose this via the header, and doing so makes it difficult to add new digest info support in future versions if the value expands (e.g. NSS 4 might expect this structure to be 23 bytes, but the caller compiled with the headers from NSS 3 and only got 19 bytes)

Since this is not used/needed as any part of the public API, I see no reason to expose it as such.
Attachment #8491709 - Flags: feedback-
Comment on attachment 8490536 [details] [diff] [review]
Part 2: Change NSS's VFY_* functions to use the new PKCS#1 padding checking logic [v2]

Review of attachment 8490536 [details] [diff] [review]:
-----------------------------------------------------------------

::: lib/cryptohi/secvfy.c
@@ +48,3 @@
>  
> +    digestInfoItem.data = digestInfo;
> +    digestInfoItem.len = maxDigestInfoLen;

Why the change from using SECKEY_PublicKeyStrength (which was forward-compatible in source and binary) to using a hardcoded value, especially one that may be less than the strength of the given key?

Have I missed something here?
Flags: sec-bounty?
Comment on attachment 8490536 [details] [diff] [review]
Part 2: Change NSS's VFY_* functions to use the new PKCS#1 padding checking logic [v2]

Review of attachment 8490536 [details] [diff] [review]:
-----------------------------------------------------------------

::: lib/cryptohi/secvfy.c
@@ +48,3 @@
>  
> +    digestInfoItem.data = digestInfo;
> +    digestInfoItem.len = maxDigestInfoLen;

If one were to change the code to allocate a buffer of SECKEY_PublicKeyStrength bytes, instead of using the statically-allocated buffer, that would be fine by me. The main advantage of that change is that nearly all malicious inputs would be caught by verifyPKCS1DigestInfo, instead of only a subset of them.

As far as we understand the possible attacks, nearly all of them require a DigestInfo structure that is larger than the largest one we accept, so PK11_VerifyRecover returning a "output buffer too small" error is actually what rejects basically all previously-working malicious inputs in this version of the patch.

However, dynamically allocating a buffer and then properly managing that buffer seemed more error-prone and unnecessarily complicated to me, so I did it this way.

Note that mixing and matching different versions of libnss and libnssutil is not supported anyway. Also, even if you compile against one version of libnssutil and then link against a different version of libnssutil, you will still be safe (assuming PK11_VerifyRecover does its bounds-checking correctly); it's just that you may too strictly reject some valid signatures for new hash algorithms that have outputs larger than SHA-512 and/or very long OIDs. But mixing and matching libnssutil and libnss is likely to give you much more severe problems than that anyway.
(In reply to Brian Smith (:briansmith, :bsmith, use NEEDINFO?) from comment #75)
> Comment on attachment 8490536 [details] [diff] [review]
> Part 2: Change NSS's VFY_* functions to use the new PKCS#1 padding checking
> logic [v2]
> 
> Review of attachment 8490536 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> ::: lib/cryptohi/secvfy.c
> @@ +48,3 @@
> >  
> > +    digestInfoItem.data = digestInfo;
> > +    digestInfoItem.len = maxDigestInfoLen;
> 
> If one were to change the code to allocate a buffer of
> SECKEY_PublicKeyStrength bytes, instead of using the statically-allocated
> buffer, that would be fine by me. The main advantage of that change is that
> nearly all malicious inputs would be caught by verifyPKCS1DigestInfo,
> instead of only a subset of them.

Alternatively, it's one less place to be 'surprised' when you add a new algorithm (which is why I was surprised Bob R+'d this, given his earlier objections)

> 
> As far as we understand the possible attacks, nearly all of them require a
> DigestInfo structure that is larger than the largest one we accept, so
> PK11_VerifyRecover returning a "output buffer too small" error is actually
> what rejects basically all previously-working malicious inputs in this
> version of the patch.
> 
> However, dynamically allocating a buffer and then properly managing that
> buffer seemed more error-prone and unnecessarily complicated to me, so I did
> it this way.
> 
> Note that mixing and matching different versions of libnss and libnssutil is
> not supported anyway.

Are you sure about that? I'd be surprised to hear this from Red Hat, given the FIPS implications of this.

> And, also, even if you compile against one version of
> libnssutil and then link against a different version of libnssutil, you will
> still be safe (assuming PK11_VerifyRecover does its bounds-checking
> correctly); 

Historically, that assumption - both for softoken AND for general tokens - has been wrong. softoken not doing proper bounds checking has been a particular source of bugs when implementing Web Crypto, so I no longer feel confident making this assumption.

> it's just that you may too strictly reject some valid signatures
> for new hash algorithms that have outputs that are either larger than
> SHA-512 and/or that have very long OIDs. But, mixing and matching libnssutil
> and libnss is likely to give you much more severe problems than that anyway.

Correct, that's the concern, and it's an echo of Bob's remarks in comment #45 that greatly surprised me. This is just a hardcoded constant by another name.
> > +#define SGN_PKCS1_DIGESTINFO_PREFIX_LENGTH_MAX 19
> > +
> > +/* The length of the longest valid DigestInfo. */
> > +#define SGN_PKCS1_DIGESTINFO_LENGTH_MAX \
> > +        (SGN_PKCS1_DIGESTINFO_PREFIX_LENGTH_MAX + HASH_LENGTH_MAX)
> 
> Are we sure we want to hardcode this in a public API?
> 
> I can see no reason to expose this via the header,

It looks like the reason (as in motivation) was to find a fix with minimal effort, allowing the value to be used in softoken and cryptohi without calculating it again.


> and doing so makes it
> difficult to add new digest info support in future versions if the value
> expands (e.g. NSS 4 might expect this structure to be 23 bytes, but the
> caller compiled with the headers from NSS 3 and only got 19 bytes)

You raise a valid point, thanks for bringing this up.
I agree it would be good to do it differently.

The value depends on the maximum length of the assigned OID for allowed hash algorithms.

In pkcs1sig.c, the code wants to check the maximum reasonable length.
We could use dynamic code that looks at the lengths of the OIDs of all allowed hash algorithms and figures out the maximum?

What should be done in pkcs11c.c and secvfy.c?
We need compile time constants.
If I understand correctly, we simply require safe maximum sizes for buffers, not necessarily the exact maximums?
What should we use? Should we figure out the maximum possible length of an OID and calculate a maximum possible encoding?
But this would involve HASH_LENGTH_MAX, which also seems to be hard-coded to the largest currently supported hash (currently SHA-512).
Are we at risk that other code in NSS might be using buffers designed for that size, and have issues like you described, when mixing future portions of NSS?
(In reply to Ryan Sleevi from comment #73)
> Adding some f- for style/future compat issues, but I think the actual _code_
> is correct.

That's good, thanks for reviewing, and this was the priority.

I tried to minimize the number of changes after Brian's work...
Also I personally don't worry too much about consistency and think the patch is readable as is, but I'll try to address your comments.

> @@ +10,5 @@
> > +#include "secoid.h"
> > +
> > +typedef struct
> > +{
> > +    size_t len;
> 
> This type is
> 1) Missing the necessary includes (stddef.h)
> 2) Fairly inconsistent with respect to how size-types are handled (see, for
> example, line 20 as a prime example of this)

For the sake of simplicity, I'll remove size_t. (FYI, there are other files in NSS that use it without explicitly including stddef.h, e.g. lib/util/secload.c.)


> typedef struct pkcs1PrefixStr pkcs1Prefix;
> struct pkcs1PrefixStr {
>   size_t len;
>   PRUint8 data[...];
> };

ok. Will use unsigned int.


> > +} PKCS1DigestInfoInfo;
> 
> No better naming than the "InfoInfo" ?

pkcs1Prefixes


> > +encodePrefix(const SECOidData *hashOid, unsigned int digestLen,
> 
> naming wise, this is also inconsistent with respect to nss
> pkcs1_encodePrefix()?

Is this really a problem? It's a static function, local to the file only.


> You make no check on innerSeqLen being <= 127.

will add


> Unnecessary block statement

It was necessary, because I added a local variable to the scope. But since you don't seem to like it, I moved the variable to the global function scope and removed the block brackets.


> Let's name this something more meaningful: expectedPrefix

ok
Attached patch part 1 - [v7] (obsolete) — Splinter Review
This addresses most of Ryan's small requests for naming of types and variables and consistency.

The only code change from v6 is: Added the length check for innerSeqLen, which is kind of redundant, because innerSeqLen < outerSeqLen, and outerSeqLen was already checked for being within limits.
(In reply to Ryan Sleevi from comment #76)
> > And, also, even if you compile against one version of
> > libnssutil and then link against a different version of libnssutil, you will
> > still be safe (assuming PK11_VerifyRecover does its bounds-checking
> > correctly); 
> 
> Historically, that assumption - both for softoken AND for general tokens -
> has been wrong. softoken not doing proper bounds checking has been a
> particular source of Web Crypto bugs when implementing, so I no longer feel
> confident making this assumption.

I think that is a fair concern. I agree with you that the person assigned to this bug should change that part of the patch.

> Correct, that's the concern, and it's an echo of Bob's remarks in comment
> #45 that greatly surprised me. This is just a hardcoded constant by another
> name.

I agree that none of the patches in this bug in their current form seem to meet Bob's goal for this reason.
Attached patch part 1 - [v8] (obsolete) — Splinter Review
Ryan, this patch attempts to address the issues from comment 77.
(I suggest to click "diff" on this patch, then diff it against "part 1 - [v7]")

Does this look OK to you?
Do you have better ideas than using an arbitrary "99" for the maximum possible length for an encoded Oid?

Should we go much higher, or can we derive the maximum?
Attachment #8493387 - Flags: feedback?(ryan.sleevi)
Assignee: nobody → kaie
Ryan:

I will work on dynamically allocating. One issue, however, is the patch "part 3", which contains a buffer for obtaining the digest, and which currently uses the constant.

It seems difficult to change this code to dynamic allocation!

Bob suggested, if we want to be safe, we can use 2048 bytes (RSA_MAX_MODULUS_BITS (16384) / 8).
(In reply to Kai Engert (:kaie) from comment #82)
> Ryan:
> 
> I will work on dynamically allocating. One issue, however, is the patch
> "part 3", which contains a buffer for obtaining the digest, and which
> currently uses the constant.

Sorry, I mean part 2.

    union {
	unsigned char buffer[1];

	/* the encoded DigestInfo from a RSA PKCS#1 signature */
	unsigned char pkcs1DigestInfo[SGN_PKCS1_DIGESTINFO_LENGTH_MAX];

At this place, I want to use 

	unsigned char pkcs1DigestInfo[RSA_MAX_MODULUS_BITS / 8];

Ok?
Attached patch part 2 - [v3] (obsolete) — Splinter Review
This changes part 2 to no longer use constant SGN_PKCS1_DIGESTINFO_LENGTH_MAX (because Ryan suggested to get rid of it), and replaces it with a safe maximum.
Attachment #8490536 - Attachment is obsolete: true
Comment on attachment 8493419 [details] [diff] [review]
part 2 - [v3]

this is obsolete, because more work is required to fix comment 74 and following.
Attachment #8493419 - Attachment is obsolete: true
Alias: CVE-2014-1568
OK, I apologize if it's simply due to lack of sleep or ignorance, but reviewing the patch sets again still seems to suggest there's a lot of unnecessary complexity that ALSO hides a lot of corners being cut and '64K should be enough for everyone' style assumptions, which seem unnecessary.

Working from first principles, the issue here is that we're decoding the entire message and trusting the outer wrapping to be well-formed. The fix, at least from this CL, is to also compare the encoding.

This 'seems' fairly simple to fix, re: RFC 3447 Section 8.2, and without any of the hardcoding complexities being demonstrated here.

I don't want to distract from this bug being fixed, and I think this patchset does so (which is the most pressing issue), but I also think we should clean up. To be clear: I don't want to delay the fix for cleanup.

I understand the need to handle verification where we don't know the hash algorithm (SEC_OID_UNKNOWN) and those when we do.

When verifying, and tag == SEC_OID_UNKNOWN
- Allocate a buffer = public modulus size
- RSA_CheckSignRecover <-- guaranteeing that PKCS#1 padding is wellformed (mostly)
- Decode the DigestInfo (QuickDER - bugs and all)
- Extract the algorithm tag

When verifying, and tag != SEC_OID_UNKNOWN (or after the above steps)
- Allocate a buffer = public modulus size
- RSA_CheckSignRecoverRaw (this recovers the full message, padding and all)
- Construct the two canonical DigestInfos
  - With NULL and (if allowing it to be omitted) without NULL params
- Construct the FULL PKCS#1 SSA-EM (RFC 3447 9.2)
- PORT_Memcmp


As far as where this code lives
- "Decode the DigestInfo" is part of util
- RSA_CheckSignRecover is part of freebl
- PKCS#1 encoding/decoding is part of freebl (moved down from softoken)
- Verification happens in cryptohi (in order to handle the recovery case)


In order to support the above, the _only_ thing we seem to need to change/update/handle is having http://mxr.mozilla.org/nss/source/lib/cryptohi/secvfy.c#19 JUST extract the digest (in the case of SEC_OID_UNKNOWN)

Then, update the stupid VFY functions to use the stupid PKCS#11 functions as they're intended - namely, CKM_RSA_PKCS, where the input buffer is the DigestInfo. This will stop VFY from doing the stupid hashing itself.

This matches precisely what RFC 3447 says, and avoids any issues with the ASN.1 decoder EXCEPT for when using the SEC_OID_UNKNOWN for RSA keys, in which case, you're dependent on it to 'at least' get the algorithm right.
(In reply to Ryan Sleevi from comment #86)
> In order to support the above, the _only_ thing we seem to need to
> change/update/handle is having
> http://mxr.mozilla.org/nss/source/lib/cryptohi/secvfy.c#19 JUST extract the
> digest (in the case of SEC_OID_UNKNOWN)

s/extract the digest/extract the digest ALGORITHM/
Actually, strike that. We're introducing a new public API that we'll have to support 'forever', when the existing VFY_ functions should be able to handle it just fine. 

vfy_VerifyDigest ( http://mxr.mozilla.org/nss/source/lib/cryptohi/secvfy.c#612 ) and VFY_EndWithSignature ( http://mxr.mozilla.org/nss/source/lib/cryptohi/secvfy.c#545 ) are _already_ doing the right thing for !RSA keys, by calling PK11_Verify.
RSA keys are _already_ correctly mapped to CKM_RSA_PKCS in PK11_Verify ( http://mxr.mozilla.org/nss/source/lib/pk11wrap/pk11mech.c#1842 )
CKM_RSA_PKCS is _already_ correctly mapped to sftk_RSACheckSign ( http://mxr.mozilla.org/nss/source/lib/softoken/pkcs11c.c#3023 ) which maps to RSA_CheckSign ( http://mxr.mozilla.org/nss/source/lib/softoken/pkcs11c.c#2932 )

RSA_CheckSign _already_ does the right thing of making sure ALL of the PKCS#1 padding is correct - AND that it's properly sized (minimum 8 octets of padding)

So the only thing that's 'missing' - which covers both cert verification and the ServerKeyExchange verification - is updating the vfy_ functions I mentioned above to _just_ grab the DigestInfo for the SEC_OID_UNKNOWN case (which bsmith@ asserts is never used for these two paths anyways, and I _think_ he's right)

That is, the _extent_ of changes necessary SHOULD be at:
http://mxr.mozilla.org/nss/source/lib/cryptohi/secvfy.c#382
http://mxr.mozilla.org/nss/source/lib/cryptohi/secvfy.c#550
http://mxr.mozilla.org/nss/source/lib/cryptohi/secvfy.c#597

Explicitly:
- http://mxr.mozilla.org/nss/source/lib/cryptohi/secvfy.c#87 - do away with rsadigest, and instead store the full signature (same as DSA or ECDSA)
- In CreateContext, if and only if hashAlg == SEC_OID_UNKNOWN do we decode the signature (and only if provided)
- in VFY_EndWithSignature, we KNOW hashAlg can not = SEC_OID_UNKNOWN (because VFY_Update needs the hash alg), so it's not even necessary to decode the signature
- in VFY_EndWithSignature, for RSA, we encode a fresh DigestInfo
  - matching either our NULL params or missing params, using SGN_CreateDigestInfo (or a possibly extra structure) and SGN_EncodeDigestInfo
- Pass this to PK11_Verify
- Profit
(In reply to Ryan Sleevi from comment #88)
> - In CreateContext, if and only iff hashAlg == SEC_OID_UNKNOWN do we decode
> the signature (and only if provided)

The problem with this, which is minor to me, but might matter to some people, is that you end up doing the RSA calculation *twice* in the SEC_OID_UNKNOWN case.

> - Pass this to PK11_Verify

The problem with this is similar to the problem you mentioned regarding my use of PK11_VerifyRecover with a buffer smaller than the RSA key strength: You don't know that the PKCS#11 module implemented PKCS#1 signature verification correctly. In fact we have good evidence that many (non-PKCS#11) implementations do not implement PKCS#1 signature verification correctly. Therefore, it is actually more conservative for libnss3 to use PK11_VerifyRecover and then do as much of the DigestInfo checking itself. My guess is that PKCS#11 tokens are probably less likely to have an output buffer overflow in the VerifyRecover than they are to get PKCS#1 wrong, based on the evidence we've gathered so far.

I've actually implemented exactly what you suggest here already, but I abandoned that work after running into the "duplicate API / duplicate code path" issue and the "we can't really trust the PKCS#11 implementation to get it right" issues. I'm happy to share one or two of those attempts, if it would be helpful.
(In reply to Ryan Sleevi from comment #88)
> vfy_VerifyDigest (
> http://mxr.mozilla.org/nss/source/lib/cryptohi/secvfy.c#612 ) and
> VFY_EndWithSignature (
> http://mxr.mozilla.org/nss/source/lib/cryptohi/secvfy.c#545 )are _already_
> doing the right thing for !RSA keys, by calling PK11_Verify.

Yes, they are. But, additionally to the two reasons I cited in my previous comment, I was also concerned that it could be the case that some PKCS#11 tokens support C_VerifyRecover for RSA keys but might not support C_Verify for RSA keys. It is unlikely, but because this is being rushed into release, I decided to be conservative with the compatibility risk here. (Again, I think I have some patches that do this.)
But this only applies to public keys, which we always use softoken for.

I think hypothetical malformed tokens are a poor excuse for long term API debt. Even if we accepted the argument, it is still something we can handle entirely without exposing additional API surface.
(In reply to Ryan Sleevi from comment #91)
> But this only applies to public keys, which we always use softoken for.

I did not know that that was guaranteed. It would be good to update the documentation for PK11_Verify to guarantee that it will never use anything other than softoken. My understanding is that, at least in theory, NSS can be configured to prefer another token even for public key operations, and that we should account for that possibility in the future.
PK11_Verify doesn't guarantee this, but the other functions that acquire the handle do (e.g. the SECKEY functions that setup the handles)

Either way, if your concern is that some PKCS#11 implementation only supports part of the CKM_RSA_PKCS mechanism's operations - something purely hypothetical and not affecting 99.99% of users - we can still keep this entirely constrained in the vfy_ functions, without having to export from util some new function that is nigh impossible to use right / useful.

Specifically, if we need to (and I suspect Bob will readily say its unnecessary), we can do several things:
1) Import the public key (since it only applies to cert verification, and we ALWAYS have the cert)
2) Do the full (raw) verify recover and compare the encoded results.

Either solution avoids the hard coding fragility, while still maintaining correctness.

For discussion, I'm going to implement this tonight, as it will be a smaller change that has greater long term maintainability and correctness.
(In reply to Ryan Sleevi from comment #93)
> Either way, if your concern is that some PKCS implementation only supports
> part of the mechanism CKM_RSA_PKCS operations - something purely
> hypothetical and not affecting the 99.99% of users - we can still keep this
> entirely constrained in the vfy_ functions, 

Like I said, I wrote my patch to minimize risk w.r.t. compatibility and safety. It isn't the patch I would write if it didn't need to be deployed into production with basically no testing. The whole point of pk11wrap is to make it possible to mix and match a bunch of different PKCS#11 modules in a wide variety of configurations, and I didn't (don't) want things to fail to work correctly in some some unusual configuration.

> Specifically, if we need to (and I suspect Bob will readily say its
> unnecessary), we can do several things:
> 1) Import the public key (since it only applies to cert verification, and we
> ALWAYS have the cert)

This function isn't specific to certificates, so you can't, in general, assume that there is some certificate to import the key from.

> 2) Do the full (raw) verify recover and compare the encoded results.

I don't understand what you mean by this one.

> Either solution avoids the hard coding fragility, while still maintaining
> correctness.

Keep in mind that even for SHA-3, HASH_LENGTH_MAX doesn't change, and the OIDs for SHA-3 hash algorithms are the same length as the SHA-2 OIDs according to http://csrc.nist.gov/groups/ST/crypto_apps_infra/csor/algorithms.html. And, even if the constants were out of sync as far as libnss and libnssutil perceive them, there would be no security failure. The only issue would be that some hypothetical non-SHA-1, non-SHA-2, non-SHA-3 hash algorithms wouldn't be accepted, which would be detected immediately during testing.

In contrast, relying on the PKCS#11 module to do the PKCS#1 verification correctly fails whenever the PKCS#11 module doesn't do PKCS#1 verification correctly. So, the flexibility of NSS works against us in this case. I don't see how we could confidently guarantee that VFY_* functions will only ever use softoken for RSA verification--not just now, but also in the future, due to that flexibility. I am happy to be proven wrong, though.
(In reply to Brian Smith (:briansmith, :bsmith, use NEEDINFO?) from comment #94)
> Like I said, I wrote my patch to minimize risk w.r.t. compatibility and
> safety. It isn't the patch I would write if it didn't need to be deployed
> into production with basically no testing. The whole point of pk11wrap is to
> make it possible to mix and match a bunch of different PKCS#11 modules in a
> wide variety of configurations, and I didn't (don't) want things to fail to
> work correctly in some some unusual configuration.

I agree, but at the same time, we shouldn't use the possibility of esoteria to prevent us from writing good patches. Again, I'm not trying to r- an emergency security fix, but I think we're unnecessarily cutting corners/making assumptions, and forcing a bad API to be maintained indefinitely.

If the answer is "I can't be sure about the paths to this code", then we should answer "What are the paths to this code"

> This function isn't specific to certificates, so you can't, in general,
> assume that there is some certificate to import the key from.

All of the VFY functions work on SECKEYPublicKey

A SECKEYPublicKey is _always_ stored in its data form. This is precisely the issue we were discussing two weeks ago when Richard Barnes was dealing with ECC import (Bug 1057161)

So we know, for all paths using a SECKEYPublicKey, we have the full public key AND that it's imported to NSS softoken on-demand for the operation.

> 
> > 2) Do the full (raw) verify recover and compare the encoded results.
> 
> I don't understand what you mean by this one.

VerifyRecoverRaw gives you the full PKCS#1 RSA-SSA-EM. That is, block type, padding string, etc. You can reconstruct this and compare. This guarantees, for example, that the padding is not hostile and is appropriate (e.g. 8 bytes)

> 
> > Either solution avoids the hard coding fragility, while still maintaining
> > correctness.
> 
> Keep in mind that even for SHA-3, HASH_LENGTH_MAX doesn't change, and the
> OIDs for SHA-3 hash algorithms are the same length as the SHA-2 OIDs
> according to
> http://csrc.nist.gov/groups/ST/crypto_apps_infra/csor/algorithms.html. And,
> even if the constants were out of sync as far as libnss and libnssutil
> perceive them, there would be no security failure. The only issue would be
> that some hypothetical non-SHA-1, non-SHA-2, non-SHA-3 hash algorithms
> wouldn't be accepted, which would be detected immediately during testing.

So what? It's a bad public dependency that is virtually unchangeable and creates source-compatibility problems if you compile with an old header, targeting a new API. It'd be a straight up buffer overflow, since they disagree on the size of the buffer they're writing into.

The solution, which is adding slop/padding, doesn't really strike as a clean solution.

> 
> In contrast, relying on the PKCS#11 module to do the PKCS#1 verification
> correctly fails whenever the PKCS#11 module doesn't do PKCS#1 verification
> correctly. So, the flexibility of NSS works against us in this case. I don't
> see how we could confidently guarantee that VFY_* functions will only ever
> use softoken for RSA verification--not just now, but also in the future, due
> to that flexibility. I am happy to be proven wrong, though.

Yes. Security fails when a security module fails to be secure. I don't see that as an inherent argument against a cleaner API.
(In reply to Ryan Sleevi from comment #95)
> If the answer is "I can't be sure about the paths to this code", then we
> should answer "What are the paths to this code"
> 
> > This function isn't specific to certificates, so you can't, in general,
> > assume that there is some certificate to import the key from.
> 
> All of the VFY functions work on SECKEYPublicKey
> 
> A SECKEYPublicKey is _always_ stored in it's data form. This is precisely
> the issue we were discussing two weeks ago when Richard Barnes was dealing
> with ECC import (Bug 1057161)
> 
> So we know, for all paths using a SECKEYPublicKey, we have the full public
> key AND that it's imported to NSS softoken on-demand for the operation.

How do you know that? SECKEYPublicKeyStr contains these members:

    PK11SlotInfo *pkcs11Slot;
    CK_OBJECT_HANDLE pkcs11ID;

How do you know that the caller didn't already set pkcs11ID and/or pkcs11Slot to something that NSS wouldn't normally set, that isn't softoken? Keep in mind that the code might be some strange Firefox extension calling VFY_* directly using SECKEYPublicKey objects it constructed or obtained in some unusual (to us) way. 

> > perceive them, there would be no security failure. The only issue would be
> > that some hypothetical non-SHA-1, non-SHA-2, non-SHA-3 hash algorithms
> > wouldn't be accepted, which would be detected immediately during testing.
> 
> So what? It's a bad public dependency that is virtually unchangable and
> creates source-compatibility if you compile with an old header, targeting a
> new API. It'd be a straight up buffer overflow, since they disagree on the
> size of the buffer they're writing into.

There is no possibility of buffer overflow in the patch I wrote, no matter how you screw up the headers and linking, AFAICT. If that was a possibility then of course I would agree with you. If there is the possibility of buffer overflow then please point out the buffer overflow.

> The solution, which is adding slop/padding, doesn't really strike as a clean
> solution.

I didn't suggest adding slop, and I don't think that is a good idea.

> Yes. Security fails when a security module fails to be secure.

Then I misunderstand your argument about the buffer passed to PK11_VerifyRecover. Why it is good to assume that PK11_VerifyRecover got a trivial thing wrong, but it is OK to assume that PK11_Verify got a complicated thing correct?

> I don't see that as an inherent argument against a cleaner API.

Nobody is arguing against a cleaner API.
(In reply to Brian Smith (:briansmith, :bsmith, use NEEDINFO?) from comment #96)
> How do you know that the caller didn't already set pkcs11ID and/or
> pkcs11Slot to something that NSS wouldn't normally set, that isn't softoken?
> Keep in mind that the code might be some strange Firefox extension calling
> VFY_* directly using SECKEYPublicKey objects it constructed or obtained in
> some unusual (to us) way. 

Because creating SECKEYPublicKey objects by hand is "not a supported API"?

> Then I misunderstand your argument about the buffer passed to
> PK11_VerifyRecover. Why it is good to assume that PK11_VerifyRecover got a
> trivial thing wrong, but it is OK to assume that PK11_Verify got a
> complicated thing correct?

PK11_Verify for CKM_RSA_PKCS works on encoded digest infos (or more aptly, raw data). Its sole responsibility - which hasn't changed at all with your patch (vis-a-vis VerifyRecover) - is to do the post-padding comparisons of whatever data that remains. For an RSA signature, this is either the encoded DigestInfo (byte for byte) OR the SHA1/MD5 concatenation (<TLS 1.2 SKX).

There's no "complicated" thing here. PK11_VerifyRecover returns you the bytes, and you're left to compare them. PK11_Verify just compares them for you.
(In reply to Ryan Sleevi from comment #97)
> (In reply to Brian Smith (:briansmith, :bsmith, use NEEDINFO?) from comment
> #96)
> > How do you know that the caller didn't already set pkcs11ID and/or
> > pkcs11Slot to something that NSS wouldn't normally set, that isn't softoken?
> > Keep in mind that the code might be some strange Firefox extension calling
> > VFY_* directly using SECKEYPublicKey objects it constructed or obtained in
> > some unusual (to us) way. 
> 
> Because creating SECKEYPublicKey objects by hand is "not a supported API"?

OK. It isn't clear that one cannot construct a SECKEYPublicKey, via supported APIs, that references a token other than softoken, though.

> > Then I misunderstand your argument about the buffer passed to
> > PK11_VerifyRecover. Why it is good to assume that PK11_VerifyRecover got a
> > trivial thing wrong, but it is OK to assume that PK11_Verify got a
> > complicated thing correct?
> 
> PK11_Verify for CKM_RSA_PKCS works on encoded digest infos (or more aptly,
> raw data). Its sole responsibility - which hasn't changed at all with your
> patch (vis-a-vis VerifyRecover) is to do the post-padding comparisons of
> whatever data that remains. For an RSA signature, this is either the encoded
> DigestInfo (byte for byte) OR the SHA1/MD5 concatenation (<TLS 1.2 SKX).
> 
> There's no "complicated" thing here. PK11_VerifyRecover returns you the
> bytes, and you're left to compare them. PK11_Verify just compares them for
> you.

OK, I now understand what you mean. I don't think it is great to assume, in the long term, that PK11_Verify gets the minimum padding length check right, because of the more theoretical "it might not be softoken" problem. But, also, nobody has fixed bug 1067214 yet, which is a concrete instance of getting the padding length check wrong for PK11_VerifyRecover, so even if it is softoken, it is wrong!

So, I agree with you that if libnss wants to guard against bad PKCS#11 modules, then it should use the VerifyRecoverRaw way you suggested above. But, doing that ultimately means you end up with something very much like pkcs1sig.c in libnssutil, because softoken and libnss both expose functions that need the same logic in that case. Like I mentioned above, I didn't switch to PK11_Verify or PK11_PubDecryptRaw like you suggested because I didn't want to change which PKCS#11 functionality the code depended on, in the name of minimizing compatibility risk. But, again, I think that is the best long-term approach. Note that even continuing to use PK11_VerifyRecover, you can add a check that the length of the returned DigestInfo is at least 11 bytes shorter than SECKEY_PublicKeyStrength(k), but you'd still be depending on the PKCS#11 module to check the padding bytes.

Speaking of which, I noticed this code in sslcon.c (the SSL 2.0 code), which looks very wrong for exactly that reason:

    rv = PK11_PubDecryptRaw(sc->SERVERKEY, kbuf, &ddLen, modulusLen, ek, ekLen);
    if (rv != SECSuccess) 
	goto hide_loser;

    /* Is the length of the decrypted data (ddLen) the expected value? */
    if (modulusLen != ddLen) 
	goto hide_loser;

    /* Cheaply verify that PKCS#1 was used to format the encryption block */
    if ((kbuf[0] != 0x00) || (kbuf[1] != 0x02) || (dk[-1] != 0x00)) {
	SSL_DBG(("%d: SSL[%d]: strange encryption block",
		 SSL_GETPID(), ss->fd));
	PORT_SetError(SSL_ERROR_BAD_CLIENT);
	goto hide_loser;
    }

Also, in case it wasn't clear, I also support removing the #defines in pkcs1sig.h. I don't think it is practically problematic if they stay (I don't think there is any buffer overflow to worry about, for example).
Ryan, would you mind helping us by rewriting the patch in the way you think is ideal?

It seems to me that would be much simpler than this discussion.

I honestly don't want to upset you, but it's complicated to follow this discussion and figure out how exactly you want this fixed, and you already seem to have a very clear understanding of how you want this done.

Given you're worried about introducing this new API, it seems you really prefer to get the cleanup done immediately, in the initial release of a fix.
See the end of comment #93 Kai.

Brian: w/r/t the buffer overflow, the concern is precisely the malformed modules that you're concerned about (re: a buffer of ModulusLen size vs the 'guesstimate based on header', even if it seems academic). Considering we've found and fixed several of those in softoken for Web Crypto and OAEP, I think it's reasonable to be paranoid.

I think you're right, having played more with it, that because we want to allow bad encodings in both the high and low layers, we likely end up either with a function in util to try both, or we duplicate the logic in both places. I'm still working through the deps there.

I really don't like the manual encoding - is there a reason you explicitly avoided SGN_EncodeDigestInfo? I'm concerned it may simply be a mozilla::pkix-ism of NIH/trust nothing, and while I understand why it is, I don't think it's good for NSS to not trust itself.

I've got two approaches to upload tomorrow - one that uses PK11_Verify twice (when necessary), using the valid and invalid digestinfo, and one that uses VerifyRecover approach that is (largely) functionally equivalent to this, but using SGN_EncodeDigestInfo to do the heavy lifting.
(In reply to Ryan Sleevi from comment #100)
> Brian: w/r/t the buffer overflow, the concern is precisely the malformed
> modules that you're concerned about (re: a buffer of ModulusLen size vs the
> 'guesstimate based on header', even if it seems academic). Considering we've
> found and fixed several of those in softoken for Web Crypto and OAEP, I
> think it's reasonable to be paraboid.

Agreed.

> I really don't like the manual encoding - is there a reason you explicitly
> avoided SGN_EncodeDigestInfo? I'm concerned it may simply be a
> mozilla::pkix-ism of NIH/trust nothing, and while I understand why it is, I
> don't think it's good for NSS to not trust itself.

My original patch was attachment 8490281 [details] [diff] [review], where the encoding was all done ahead of time, with a code generator that generates static const data. That wasn't due to any distrust of NSS, but rather a general distrust of executing code that is more complex than it must be, in favor of trusting static data when possible. Notice in particular that that patch was able to replace a lot of "goto" usage with clearer error handling logic, because it avoided malloc/free issues, because of its static approach. I personally think that is a very good approach, but I don't want to bikeshed about that any further. Almost any new approach is better than the current approach.
Group: crypto-core-security
Independently of the new discussion that Ryan and Brian have started, I have continued my task from yesterday, and have reworked the patch to rid of the fixed constants that Ryan disliked.

I removed the fixed size buffer in secvfy.c and replaced it with dynamic storage.

I changed pkcs1sig.c to more clearly document on which assumptions the constants are based. Since those constants are used locally, for verifying maximum expected lengths, not for static buffers, I think it should be OK.

(If you want, we can have a follow-up where we somehow dynamically iterate through the list of digest algorithms and figure out the length of the largest involved digest Oid, but given this would be based on a fixed list of allowed digests, this would still be a hardcoded list... Unless you can tell me a dynamic mechanism to produce the list of allowed digests.)

In pkcs11c.c function RSA_HashCheckSign(), we need a maximum buffer size. I don't know of a way to know the correct maximum buffer here. However, this code passes the size of available space into the called RSA_CheckSignRecover, so I assume it doesn't matter to know the exact size.

I believe I have addressed Ryan's request from comment 74, and comment 77 (from end of comment 73).

I will attach the updated patches, for you to consider whether they are sufficient for the initial round of the emergency patching.


These patches in no way attempt to address the new discussion that Ryan has started in comment 86.

I'll let Bob decide if we should wait for newer patches from Ryan/Brian, based on the arguments from comment 86 and later.
Attachment #8493387 - Flags: feedback?(ryan.sleevi)
Attachment #8491708 - Attachment is obsolete: true
Attachment #8493339 - Attachment is obsolete: true
Attachment #8493387 - Attachment is obsolete: true
Attachment #8493772 - Flags: review?(rrelyea)
Attached patch part2-v9 (obsolete) — Splinter Review
Attachment #8493773 - Flags: review?(rrelyea)
Attached patch part3-v9 (obsolete) — Splinter Review
Attachment #8493775 - Flags: review?(rrelyea)
Attachment #8490288 - Attachment is obsolete: true
Attachment #8490288 - Flags: review?(agl)
Comment on attachment 8490537 [details] [diff] [review]
Part 4: Add additional comments for the changes to VFY_*

These comments have been merged into the other patches.
Attachment #8490537 - Attachment is obsolete: true
Attachment #8491709 - Attachment is obsolete: true
Comment on attachment 8493772 [details] [diff] [review]
part1-v9 (will be replaced shortly)

Ok, I'll rewrite this part to get rid of the constants, and allocate dynamically, too.
Attachment #8493772 - Attachment description: part1-v9 → part1-v9 (will be replaced shortly)
Attachment #8493772 - Flags: review?(rrelyea)
Attached patch part1-v9c (obsolete) — Splinter Review
> Ok, I'll rewrite this part to get rid of the constants, and allocate
> dynamically, too.

done
Attachment #8493772 - Attachment is obsolete: true
Attachment #8493805 - Flags: review?(rrelyea)
There's a lot of thrashing here, so in the interest of getting something out, I'm going to review Kai's existing patch. There is a cost in that it introduces a new API. I think we can add an underbar to that API to limit exposure. The function is relatively small, and most of the issues Ryan is worried about aren't in this final helper function (which is only public because both softoken and NSS proper use it).

I'm OK if Ryan wants to create a more optimized version of the patch to update later.
Comment on attachment 8493773 [details] [diff] [review]
part2-v9

Review of attachment 8493773 [details] [diff] [review]:
-----------------------------------------------------------------

I have some style comments, no bugs or required changes.

I would like SGN_VerifyPKCS1DigestInfo changed to _SGN_VerifyPKCS1DigestInfo. That request affects one line of this patch.

::: lib/cryptohi/secvfy.c
@@ +82,2 @@
>  
> +    if (rv == SECSuccess) {

This if is redundant with the previous one.

@@ +99,1 @@
>      }

NSS style is to use a goto to blow out on error rather than checking for rv == SECSuccess everywhere.

@@ +157,5 @@
> +  pkcs1DigestInfo.data = cx->pkcs1RSADigestInfo;
> +  pkcs1DigestInfo.len = cx->pkcs1RSADigestInfoLen;
> +  return SGN_VerifyPKCS1DigestInfo(
> +           cx->hashAlg, digest, &pkcs1DigestInfo,
> +           PR_TRUE /*XXX: unsafeAllowMissingParameters*/);

Here is where my one global comment should change: SGN_VerifyPKCS1DigestInfo -> _SGN_VerifyPKCS1DigestInfo.

We'll warn that it's private.

@@ +419,4 @@
>      cx->hasSignature = (sig != NULL);
>      cx->encAlg = encAlg;
>      cx->hashAlg = hashAlg;
>      cx->key = SECKEY_CopyPublicKey(key);

for my own paranoia, I would have explicitly set cx->pkcs1RSADigestInfo to NULL, but cx is allocated with Zalloc, so it's not necessary.
Attachment #8493773 - Flags: review?(rrelyea) → review+
Right, of course I missed the obvious solution of calling it 'private'.

Bob, that gives us the flexibility to remove it, say in the next release, right?
Comment on attachment 8493775 [details] [diff] [review]
part3-v9

Review of attachment 8493775 [details] [diff] [review]:
-----------------------------------------------------------------

I would like SGN_VerifyPKCS1DigestInfo changed to _SGN_VerifyPKCS1DigestInfo. That request affects one line of this patch.

::: lib/softoken/pkcs11c.c
@@ +2865,5 @@
> +    SECItem pkcs1DigestInfo;
> +    SECItem digest;
> +
> +    pkcs1DigestInfo.data = pkcs1DigestInfoData;
> +    pkcs1DigestInfo.len = sizeof(pkcs1DigestInfoData);

We could make this allocated as well. pkcs1DigestInfo.len must be less than key->u.rsa.modulus.len.

@@ +2878,5 @@
> +    digest.data = (PRUint8*) digestData;
> +    digest.len = digestLen;
> +    return SGN_VerifyPKCS1DigestInfo(
> +               digestOid, &digest, &pkcs1DigestInfo,
> +               PR_TRUE /*XXX: unsafeAllowMissingParameters*/);

SGN_VerifyPKCS1DigestInfo -> _SGN_VerifyPKCS1DigestInfo.
Attachment #8493775 - Flags: review?(rrelyea) → review+
Comment on attachment 8493805 [details] [diff] [review]
part1-v9c

Review of attachment 8493805 [details] [diff] [review]:
-----------------------------------------------------------------

I have some style comments, no bugs or required changes.

I would like SGN_VerifyPKCS1DigestInfo changed to _SGN_VerifyPKCS1DigestInfo. That request affects three lines in three separate files of this patch.

::: lib/util/nssutil.def
@@ +272,5 @@
>  ;+       *;
>  ;+};
> +;+NSSUTIL_3.17.1 {         # NSS Utilities 3.17.1 release
> +;+    global:
> +SGN_VerifyPKCS1DigestInfo;

Here is one place in this patch where my one global comment should change: SGN_VerifyPKCS1DigestInfo -> _SGN_VerifyPKCS1DigestInfo.

We'll warn that it's private.

::: lib/util/pkcs1sig.c
@@ +94,5 @@
> +    return SECSuccess;
> +}
> +
> +SECStatus
> +SGN_VerifyPKCS1DigestInfo(SECOidTag digestAlg,

Here is one place in this patch where my one global comment should change: SGN_VerifyPKCS1DigestInfo -> _SGN_VerifyPKCS1DigestInfo.

We'll warn that it's private.

@@ +145,5 @@
> +        } else {
> +            PORT_SetError(SEC_ERROR_BAD_SIGNATURE);
> +            rv = SECFailure;
> +        }
> +    }

Same comment as patch 2 wrt if rv == SECSuccess blocks versus goto loser;

::: lib/util/pkcs1sig.h
@@ +21,5 @@
> + *
> + * If unsafeAllowMissingParameters is true (not recommended), then a DigestInfo
> + * without the mandatory ASN.1 NULL parameter will also be accepted.
> + */
> +SECStatus SGN_VerifyPKCS1DigestInfo(SECOidTag digestAlg,

Here is one place in this patch where my one global comment should change: SGN_VerifyPKCS1DigestInfo -> _SGN_VerifyPKCS1DigestInfo.

We'll warn that it's private.
Attachment #8493805 - Flags: review?(rrelyea) → review+
Attached patch part2-v10 (obsolete) — Splinter Review
- added paranoia NULL init for cx->pkcs1RSADigestInfo
- renamed function

carrying forward r+
Attachment #8493845 - Flags: review+
Attached patch part2-v10bSplinter Review
fix typo
Attachment #8493845 - Attachment is obsolete: true
Attachment #8493855 - Flags: review+
Attached patch part3-v10Splinter Review
(In reply to Robert Relyea from comment #112)
> > +    pkcs1DigestInfo.data = pkcs1DigestInfoData;
> > +    pkcs1DigestInfo.len = sizeof(pkcs1DigestInfoData);
> 
> We could make this allocated as well. pkcs1DigestInfoData.len must be less
> then key->u.rsa.modulus.len.

Bob, I changed it to allocate.
Did I do it right?
Attachment #8493775 - Attachment is obsolete: true
Attachment #8493856 - Flags: review?(rrelyea)
Attached patch part1-v10Splinter Review
renamed.

I didn't change to goto style, since we're under time pressure.
Attachment #8493773 - Attachment is obsolete: true
Attachment #8493805 - Attachment is obsolete: true
Comment on attachment 8493856 [details] [diff] [review]
part3-v10

r+ no more static buffers.
Attachment #8493856 - Flags: review?(rrelyea) → review+
Attached patch part-bustage1Splinter Review
windows build bustage, the usual "variable not declared at beginning of block", fix attached.

I've checked in the patches and the bustage fix to the NSS default branch.

https://hg.mozilla.org/projects/nss/rev/ad411fb64046
https://hg.mozilla.org/projects/nss/rev/4e90910ad2f9
https://hg.mozilla.org/projects/nss/rev/fb7208e91ae8
https://hg.mozilla.org/projects/nss/rev/8dd6c6ac977d
The attached patch is against a pre-patched trunk, and shows an alternative approach. It simplifies the code-path for RSA signatures in the VFY_ functions and forces the caller to be explicit about allowing degenerate signatures.

The key difference from the approach here is what Brian highlighted early on - C_VerifyRecover vs C_Verify as the PKCS#11 entry point.
Attachment #8493992 - Flags: feedback?(rrelyea)
Attachment #8493992 - Flags: feedback?(brian)
Bustage fix landed on Aurora and Beta. As best we can tell, the issue we hit only affects the branches bug 1025998 landed on.
https://hg.mozilla.org/releases/mozilla-aurora/rev/fab3c895bc89
https://hg.mozilla.org/releases/mozilla-beta/rev/2431af782661
Comment on attachment 8493992 [details] [diff] [review]
Alternative patch for discussion; reduces exposure of public APIs

Review of attachment 8493992 [details] [diff] [review]:
-----------------------------------------------------------------

::: lib/cryptohi/secvfy.c
@@ +19,3 @@
>  static SECStatus
> +GetAlgorithmForUnknownDigestInfo(SECKEYPublicKey* key, const SECItem* sig,
> +                                 SECOidTag* algorithm, void* wincx)

This is much clearer than before. There is a performance penalty in the case where the caller did not pass in a hash algorithm to use. However, that case should be rare enough (in mozilla::pkix and libssl, for example, it never happens) that we shouldn't have to care about the performance implications. In fact, I consider all such code to be wrong and in need of fixing.

@@ +78,5 @@
>      union {
>  	unsigned char buffer[1];
>  
> +	/* the full RSA signature */
> +	unsigned char rsasig[RSA_MAX_MODULUS_BITS / 8];

This is the same "64K is enough for anybody" issue you mentioned before, right? The PKCS#11 token (perhaps a newer version of Softoken) might have a larger maximum RSA modulus size, and then you run into the buffer overflow issue again if PK11_VerifyRecover is buffer-overflowy. (BTW, bug 921687 is the bug that you are worried about here, right?)

@@ +543,5 @@
> +                                                 digestInfo);
> +        /* Try to verify with a properly encoded NULL params. */
> +        if (encodedDigestInfo && PK11_Verify(cx->key, sig, encodedDigestInfo,
> +                                             cx->wincx) != SECSuccess) {
> +            /* Fall back to improperly encoded, omitted params */

It is better to avoid doing this fallback. Otherwise, you make every RSA signature verification failure twice as expensive in CPU time. Normally, the efficiency of failure is not so important, but in some unfortunate cases of PKIX path building (both libpkix and mozilla::pkix), you may end up later finding a valid path after signature verification failure, and you still want that to be efficient. For this reason, I don't think this new patch is a net improvement.

We can avoid doing this fallback after we verify that it is safe to reject signatures that don't have the parameters encoded as an explicit NULL. We should have data on this from BoringSSL soon, since it doesn't do the fallback.

@@ +550,5 @@
> +            encodedDigestInfo = SGN_EncodeDigestInfo(digestInfo->arena, NULL,
> +                                                     digestInfo);
> +        }
> +        if (encodedDigestInfo && PK11_Verify(cx->key, sig, encodedDigestInfo,
> +                                             cx->wincx) == SECSuccess) {

I think you intended this second PK11_Verify call to be conditional on the first one failing, right? But, instead, the code always calls PK11_Verify twice.

@@ +608,5 @@
>      rv = SECFailure;
>  
>      cx = vfy_CreateContext(key, sig, encAlg, hashAlg, NULL, wincx);
>      if (cx != NULL) {
> +        rv = vfy_VerifyDigestInternal(cx, digest, sig);

In a similar way to the way we cleaned this up, a similar cleanup can/should be done in ssl3_VerifySignedHashes.

::: lib/softoken/pkcs11c.c
@@ +2880,5 @@
> +        goto done; /* Success! */
> +    }
> +    /* Fall back to degenerate encoding where parameters was omitted, which
> +     * is a violation of RFC 3447. This is only included for backwards
> + * compatibility */

Again, this ends up hurting performance in the case where signature verification fails, which could still be important for some applications, such as applications that try to use PKCS#11 directly instead of using VFY_* for certificate signature verification. Ultimately, it doesn't seem like a net improvement.
Attachment #8493992 - Flags: feedback?(brian)
We are recommending to partners with 1.3 that they apply the fix from this bug. I don't see a 1.3 fix in this bug and I'm not sure if we are/can still land on this branch. 

However, IIUC, the patch for b2g28 should be the same as for b2g30 (1.4), since they share the same NSS version - is this correct? What should we be advising partners to do in terms of actually applying the fix to this issue?
(In reply to Paul Theriault [:pauljt] from comment #134)
> We are recommending to partners with 1.3 that they apply the fix from this
> bug. I don't see a 1.3 fix in this bug and I'm not sure if we are/can still
> land on this branch. 
> 
> However IIUC, the patch for b2g28 should be the same as b2g30 (1.4) since
> they share the same NSS version - is this correct? What should be advising
> partners to do in terms of actually applying the fix to this issue?

Hi Paul, if it is confirmed that the patch for 1.4 can be applied to 1.3/1.3T as well, then TAM can help ask partners to cherry-pick the patch into their future 1.3/1.3T releases (as I know, at least TCL/ZTE will have a 1.3 FOTA release).

Thanks
Flags: needinfo?(ptheriault)
Hi, 

Thanks Vance and all for the info. Just to highlight that we'd need 1.3/1.3T patches, as Vance stated, and I think also the patch for FxOS 1.1. 

Not all the partners that launched FxOS 1.1 devices are planning to do an OTA to 1.3 (unfortunately). I think OEMs should have the possibility to integrate the 1.1 patch without jumping to 1.3. 

Thanks!
David
(In reply to Vance Chen [:vchen][vchen@mozilla.com] from comment #135)
> (In reply to Paul Theriault [:pauljt] from comment #134)
> > We are recommending to partners with 1.3 that they apply the fix from this
> > bug. I don't see a 1.3 fix in this bug and I'm not sure if we are/can still
> > land on this branch. 
> > 
> > However IIUC, the patch for b2g28 should be the same as b2g30 (1.4) since
> > they share the same NSS version - is this correct? What should be advising
> > partners to do in terms of actually applying the fix to this issue?
> 
> Hi Paul, if it is confirmed that the patch for 1.4 can be applied to
> 1.3/1.3T as well, then TAM can help to ask partners to cherry-pick the patch
> into thier future 1.3/1.3T release(as I know, at least TCL/ZTE will have 1.3
> FOTA release)
> 
> Thanks

Thanks Vance, this was the motivation behind my request. Can anyone help out?



(In reply to David Palomino [:dpalomino] from comment #136)
> Hi, 
> 
> Thanks Vance and all for the info. Just to highlight that we'd need 1.3/1.3T
> patches, as Vance stated, and I think also the patch for FxOS 1.1. 
> 
> Not all the partners that launched FxOS 1.1 devices are planning to do an
> OTA to 1.3 (unfortunately). I think OEMs should have the possibility to
> integrate the 1.1 patch without jumping to 1.3. 
> 
> Thanks!
> David

I'm not sure there is a lot of value in taking just this one fix back to 1.1 - 1.1 only contains security fixes up to FF23, and as such contains a large number of security vulnerabilities (on the order of 40 high or critical security fixes at a rough guess); patching just one doesn't seem to me to make users any more safe. Even 1.3 is insecure at this point, but it's a lot better than 1.1.

That said, if partners want this then maybe we could invest the effort into either updating the NSS version of 1.1 or backporting the fix (I don't know which is easier or less risky, or how feasible this is).
Flags: needinfo?(ptheriault)
Whiteboard: [status-firefox-esr24:fixed][status-b2g-v1.3t:wontfix]
Whiteboard: [status-firefox-esr24:fixed][status-b2g-v1.3t:wontfix]
(In reply to Paul Theriault [:pauljt] from comment #137)
> (In reply to Vance Chen [:vchen][vchen@mozilla.com] from comment #135)
> > (In reply to Paul Theriault [:pauljt] from comment #134)
> > > We are recommending to partners with 1.3 that they apply the fix from this
> > > bug. I don't see a 1.3 fix in this bug and I'm not sure if we are/can still
> > > land on this branch. 
> > > 
> > > However IIUC, the patch for b2g28 should be the same as b2g30 (1.4) since
> > > they share the same NSS version - is this correct? What should be advising
> > > partners to do in terms of actually applying the fix to this issue?
> > 
> > Hi Paul, if it is confirmed that the patch for 1.4 can be applied to
> > 1.3/1.3T as well, then TAM can help to ask partners to cherry-pick the patch
> > into thier future 1.3/1.3T release(as I know, at least TCL/ZTE will have 1.3
> > FOTA release)
> > 
> > Thanks
> 
> Thanks Vance, this was the motivation behind my request. Can anyone help out?

Hey Vance/Paul,

Yep, b2g28 (1.3 and 1.3t) was updated to NSS 3.16.2 in https://bugzilla.mozilla.org/show_bug.cgi?id=1020695. So we basically need to point partners to this commit, https://hg.mozilla.org/releases/mozilla-b2g30_v1_4/rev/5f35498d4aa1, on top of latest 1.3. These should apply cleanly to 1.3 and 1.3T per Ryan.


Kaie/Ryan, 

Please comment if the above does not hold true.
> 
> 
> 
> (In reply to David Palomino [:dpalomino] from comment #136)
> > Hi, 
> > 
> > Thanks Vance and all for the info. Just to highlight that we'd need 1.3/1.3T
> > patches, as Vance stated, and I think also the patch for FxOS 1.1. 
> > 
> > Not all the partners that launched FxOS 1.1 devices are planning to do an
> > OTA to 1.3 (unfortunately). I think OEMs should have the possibility to
> > integrate the 1.1 patch without jumping to 1.3. 
> > 
> > Thanks!
> > David
> 
> I'm not sure there is a lot of value in taking just this one fix back to 1.1
> - 1.1 only contains security fixes up to FF23, and as such contains a large
> number of security vulnerabilities and patching just one doesn't seem to me
> to make users any more safe. (in the order of 40 high or critical security
> fixes at rough guess). Even 1.3 is insecure at this point, but its a lot
> better than 1.1.
> 
> That said, if partners want this then maybe we could invest the effort into
> either updating the NSS version of 1.1 or backporting the fix (I don't know
> what is easier or less risky or how feasible this is.

David,

I agree with Paul's comment, given how old 1.1 is and the unknown effort/risk involved in backporting these patches. I'd highly recommend we push partners to update to at least 1.3 at this point.
(In reply to bhavana bajaj [:bajaj] from comment #138)
> (In reply to Paul Theriault [:pauljt] from comment #137)
> > (In reply to Vance Chen [:vchen][vchen@mozilla.com] from comment #135)
> > > (In reply to Paul Theriault [:pauljt] from comment #134)
> > > > We are recommending to partners with 1.3 that they apply the fix from this
> > > > bug. I don't see a 1.3 fix in this bug and I'm not sure if we are/can still
> > > > land on this branch. 
> > > > 
> > > > However IIUC, the patch for b2g28 should be the same as b2g30 (1.4) since
> > > > they share the same NSS version - is this correct? What should be advising
> > > > partners to do in terms of actually applying the fix to this issue?
> > > 
> > > Hi Paul, if it is confirmed that the patch for 1.4 can be applied to
> > > 1.3/1.3T as well, then TAM can help to ask partners to cherry-pick the patch
> > > into thier future 1.3/1.3T release(as I know, at least TCL/ZTE will have 1.3
> > > FOTA release)
> > > 
> > > Thanks
> > 
> > Thanks Vance, this was the motivation behind my request. Can anyone help out?
> 
> Hey Vance/Paul,
> 
> Yep b2g28 (1.3 and 1.3t) were updated to nss 3.16.2 in
> https://bugzilla.mozilla.org/show_bug.cgi?id=1020695. So we basically need
> to point partners to this commit
> https://hg.mozilla.org/releases/mozilla-b2g30_v1_4/rev/5f35498d4aa1 on top
> on latest 1.3. These should apply cleanly to 1.3 and 1.3T per ryan.
> 
Thanks Bhavana, i will inform partners and ask them to apply the security patch on their 1.3/1.3T source tree
Flags: sec-bounty? → sec-bounty+
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
(In reply to Vance Chen [:vchen][vchen@mozilla.com] from comment #139)

> Thanks Bhavana, i will inform partners and ask them to apply the security
> patch on their 1.3/1.3T source tree

Hi, 

Just to double check, for 1.3 then partners should first apply the patch related to: 
https://bugzilla.mozilla.org/show_bug.cgi?id=1020695.

And after that, apply then this issue's patch. Is that right?

Then I guess we need to land this also in 1.3 (and probably in 1.3T). Please take into account that there are still some partners running IOTs and probably taking the code from our 1.3 branch. This will probably be safer than asking the OEMs to apply this patch separately (though they can do that anyway if they're working on their own branch).

Thanks, 
David
(In reply to David Palomino [:dpalomino] from comment #141)
> Just to double check, for 1.3 then partners should first apply the patch
> related to: 
> https://bugzilla.mozilla.org/show_bug.cgi?id=1020695.
> 
> And after that, apply then this issue's patch. Is that right?

No, 1.3/1.3t are already on NSS 3.16.2. The patch that landed on 1.4 (https://hg.mozilla.org/releases/mozilla-b2g30_v1_4/rev/5f35498d4aa1) should apply cleanly without any additional patches required.
(In reply to Ryan VanderMeulen [:RyanVM UTC-4] from comment #142)

> No, 1.3/1.3t are already on NSS 3.16.2. The patch that landed on 1.4
> (https://hg.mozilla.org/releases/mozilla-b2g30_v1_4/rev/5f35498d4aa1) should
> apply cleanly without any additional patches required.

Thanks Ryan!

Then I think we need to land the patch also in 1.3 and 1.3t. Can you confirm that this is going to be landed? Is it possible to have some STR to check with a 1.3 device that the patch has been applied and is working fine?

Please, take into account that 1.3 is not dead at all, and there are several 1.3 based devices being deployed right now. 

Cheers, 
David
(In reply to David Palomino [:dpalomino] from comment #143)
> Then I think we need to land the patch also in 1.3 and 1.3t. Can you confirm
> that this is going to be landed?

That's a decision for B2G Release Management to make, not me.
Bhavana, Could you please read comment #143 and approve the uplift of the patch to 1.3 and 1.3T branches as they are affected now and being released by some partners?
Thanks in advance.
Flags: needinfo?(bbajaj)
Antoine, is there any part of this bug report or the related ones that you don't want to be public at this time? I'd like to make it public now so people can learn from it.
Flags: needinfo?(antoine)
Hi Brian, feel free to make it public.
Flags: needinfo?(antoine)
(In reply to Beatriz Rodríguez [:brg] from comment #145)
> Bhavana, Could you please read comment #143 and approve the uplift of the
> patch to 1.3 and 1.3T branches as they are affected now and being released
> by some partners?
> Thanks in advance.

You should be able to apply this https://hg.mozilla.org/releases/mozilla-b2g30_v1_4/rev/5f35498d4aa1  on the 1.3 the 1.3T codebase. It should apply cleanly.
Flags: needinfo?(bbajaj)
(In reply to bhavana bajaj [:bajaj] from comment #148)
> (In reply to Beatriz Rodríguez [:brg] from comment #145)
> > Bhavana, Could you please read comment #143 and approve the uplift of the
> > patch to 1.3 and 1.3T branches as they are affected now and being released
> > by some partners?
> > Thanks in advance.
> 
> You should be able to apply this
> https://hg.mozilla.org/releases/mozilla-b2g30_v1_4/rev/5f35498d4aa1  on the
> 1.3 the 1.3T codebase. It should apply cleanly.

We've technically killed the branch on our side, hence you will have to cherry-pick and land this on your branches. Alternatively, I will check with release engineering if there is a way they can apply this in someway and spawn a build.
Whiteboard: [status-firefox-esr24:fixed][status-b2g-v1.3t:wontfix] → [status-firefox-esr24:fixed][status-b2g-v1.3t:wontfix][adv-main32+]
Whiteboard: [status-firefox-esr24:fixed][status-b2g-v1.3t:wontfix][adv-main32+] → [status-firefox-esr24:fixed][status-b2g-v1.3t:wontfix][adv-main32+][adv-esr31.1+]
(In reply to bhavana bajaj [:bajaj] from comment #149)
> (In reply to bhavana bajaj [:bajaj] from comment #148)
> > (In reply to Beatriz Rodríguez [:brg] from comment #145)
> > > Bhavana, Could you please read comment #143 and approve the uplift of the
> > > patch to 1.3 and 1.3T branches as they are affected now and being released
> > > by some partners?
> > > Thanks in advance.
> > 
> > You should be able to apply this
> > https://hg.mozilla.org/releases/mozilla-b2g30_v1_4/rev/5f35498d4aa1  on the
> > 1.3 the 1.3T codebase. It should apply cleanly.
> 
> We've technically killed the branch on our side, hence you will have to
> cherry-pick and land this on your branches. Alternatively, I will check with
> release engineering if there is a way they can apply this in someway and
> spawn a build.

Hi Bhavana -

Since some partners shipped 1.3 devices around May, they didn't get a chance to update to NSS 3.16.2. Hence I asked these partners to update to NSS 3.16.2 by applying https://bugzilla.mozilla.org/show_bug.cgi?id=1020695 first, and then pick https://hg.mozilla.org/releases/mozilla-b2g30_v1_4/rev/5f35498d4aa1. However, they got the following compile errors:

gecko/netwerk/base/src/BackgroundFileSaver.cpp:7:21: fatal error: pk11pub.h: No such file or directory
gecko/netwerk/protocol/http/nsHttpChannel.cpp:43:18: fatal error: sslt.h: No such file or directory
gecko/netwerk/protocol/http/nsHttpConnection.cpp:17:18: fatal error: sslt.h: No such file or directory
compilation terminated. 

Could you suggest is there anything I missed here?

Thanks
Flags: needinfo?(bbajaj)
(In reply to Vance Chen [:vchen][vchen@mozilla.com] from comment #150)
> Since some partners ship 1.3 devices around May, they didn't get a chance to
> update to the NSS 3.16.2. Hence I ask these partners to update to NSS 3.16.2
> by appplying https://bugzilla.mozilla.org/show_bug.cgi?id=1020695 first, and
> then pick
> https://hg.mozilla.org/releases/mozilla-b2g30_v1_4/rev/5f35498d4aa1.
> However, they got the following compile errors:
[...]

Hi, 

I really think we should facilitate the integration of this for our OEM partners if we want them to apply the patch. To do this, I think it'd be important to land this in the 1.3 and 1.3t branches. That will make things much easier for our partners.

What do you think?

David
Can we please take the B2G 1.3 backporting discussion out of the bug? It's adding a lot of noise unrelated to the actual fixing of this issue.
Flags: needinfo?(bbajaj)
Landed on v1.3/v1.3t at Bhavana's request.

https://hg.mozilla.org/releases/mozilla-b2g28_v1_3/rev/b606d95b4011
https://hg.mozilla.org/releases/mozilla-b2g28_v1_3t/rev/b606d95b4011
Whiteboard: [status-firefox-esr24:fixed][status-b2g-v1.3t:wontfix][adv-main32+][adv-esr31.1+] → [status-firefox-esr24:fixed][status-b2g-v1.3:fixed][status-b2g-v1.3t:fixed][adv-main32+][adv-esr31.1+]
Thanks a lot for the effort to include them in 1.3 branches.
Do you know an easy way to verify whether a user build includes this fix?
(In reply to Ryan VanderMeulen [:RyanVM UTC-4] from comment #153)
> Landed on v1.3/v1.3t at Bhavana's request.
> 
> https://hg.mozilla.org/releases/mozilla-b2g28_v1_3/rev/b606d95b4011
> https://hg.mozilla.org/releases/mozilla-b2g28_v1_3t/rev/b606d95b4011

Thanks a lot!!!
David
See Also: 1069405
Group: core-security
Comment on attachment 8493992 [details] [diff] [review]
Alternative patch for discussion; reduces exposure of public APIs

Review of attachment 8493992 [details] [diff] [review]:
-----------------------------------------------------------------

Sorry I'm months late to the party. 

I think Brian's feedback is pretty good here. I'm actually happy with what got checked in.

bob
Attachment #8493992 - Flags: feedback?(rrelyea) → feedback-
Attachment #8493860 - Attachment is patch: true
Attachment #8493860 - Attachment mime type: text/x-patch → text/plain
Blocks: 1296986