Practice Cryptography!

Even with all of the cryptologic and cryptographic technology that has existed in the world for the past 60 years, we still don't really know what encryption is good for or how to use it -- or, more importantly, why it's important. Maybe it's time for people and coders to actually start practicing how to use it, like any other skill.

Saturday, March 04, 2006

 
Hmmm... I should probably go back to ground zero and identify precisely all the individual cryptographic dogmas that I'm attacking, and more importantly which ones I'm willing (from my own experience) to accept as useful building blocks.

New encryption versus old encryption.
New random number generation versus old random number generation.
New network protocols versus old network protocols.
New encodings versus old encodings.
New application and protocol building blocks versus old application and protocol building blocks.

(And once again, I bemoan Netscape: Why, oh why, did you have to use X.509 instead of coming up with something that could be USEFUL?)

Friday, March 03, 2006

 
http://www.ciphersbyritter.com/

I'm honestly not sure what to make of this particular site. He seems to want to push people into using unproven ciphers (preferably ones that he's patented, for which he has granted no licenses for cryptanalysis or study), based on the idea that there really is no way to determine the 'lower bound' of cipher strength.

I can accept that particular argument (that there is no way to determine the 'lower bound' of cipher strength)... but on the flip side, I also know that elementary cryptanalysis courses offer what appear (to the untrained eye) to be completely random ciphertexts that, with a bit of probing, can have their plaintexts determined. If people who have studied cryptanalysis for years can't recover the plaintext from a given algorithm's ciphertext, I would much rather put my faith in that algorithm than in something that hasn't had any cryptanalysis at all.

In an email to me, Mr. Ritter stated that he finds patents more noble than making things available for free, since patenting things helps to make up for the cost of invention plus funding new invention. Okay, whatever... his viewpoint, not my place to try to change his mind. However, he goes on to say that "patents haven't worked out for him".

Gee, I wonder why. First, he forbids any cryptanalysis by failing to license the patent for that use -- so any cryptanalysis is a patent violation, and anyone who finds an unfavorable result is pushed to keep it secret, since publishing would invite a suit for using a patented invention without a license. Then he fearmongers, claiming that "the enemies of the users of cryptography operate in secret, without letting anyone know that they've found a crack in the algorithm," to push people away from long-tested and long-studied algorithms. This sounds more like a move I'd expect from Microsoft (reminiscent of its Fear, Uncertainty, and Doubt campaign against Linux) than a useful marketing strategy.

And, in fact, it makes me wonder if he himself knows of any breaks to his algorithms that he hasn't published.

 

Context and relationship identity

I was thinking today... and I realized something that should have been fairly obvious, but which isn't. A given identity is tied to the context it is used within... so the certified relationship is between the identity-holder and the context, not one identity-holder to another. This has some fairly important implications, and resolves a lot of open issues, notwithstanding the problems that I discussed in an earlier entry.

First, a person's legal identity is tied to the legal context. This means that it's the legal context that is responsible for changing any identifying information (such as name, address, national ID number, what have you)... and the legal context keeps track of those changes, so that one may change the information tied to their identity, but not the core identity itself.

This allows others in the legal context to 'trust' that the identity presented is verifiable and useful for creating relationships within the legal realm (such as contracts).

In the same way, the relationship between an identity and an organization is what can be certified -- thus, the organization itself controls the context. This certification is what allows one member of the organization to strongly identify another member of the organization... at least as far as the authentication can go. (Think of usernames and passwords used to log into a MUD that has a certifying authority signing public-key certificates: the usernames and passwords are far weaker than the cryptography, and security is only as strong as its weakest link.)

A problem with current certificate mechanisms, though, is 'information leakage' -- if I can cryptographically prove that I am the owner of a given credit card, and electronically sign a charge slip, does the merchant I'm doing business with /really/ need to know my address? Or bank account number? Or mortgage information? Identity theft is a lot easier when one has access to a lot more information, and since realistically nothing's going to change in the next 15 years, I want a solution that allows for that information to be hidden from people who don't need it, while simultaneously allowing for it to be selectively unhidden when circumstances warrant.
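(To make the idea concrete: here's a minimal sketch, in C with OpenSSL's SHA-256, of one way such hiding could work. Everything in it -- the field, the commit_field function, the protocol itself -- is my own hypothetical illustration, not any deployed standard. Each piece of identifying information is committed to as a salted hash; the merchant gets only the digests, and the holder reveals the (salt, value) pair for just the fields the transaction actually needs.)

    /* Hypothetical sketch: salted hash commitments for selective disclosure.
     * All names here are mine, for illustration. Build with: cc commit.c -lcrypto */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/sha.h>
    #include <openssl/rand.h>

    /* Commit to one field as SHA-256(salt || value). The verifier holds only
     * the digest; the holder reveals (salt, value) when disclosure is needed. */
    static void commit_field(const unsigned char *salt, size_t saltlen,
                             const char *value,
                             unsigned char digest[SHA256_DIGEST_LENGTH])
    {
        SHA256_CTX ctx;
        SHA256_Init(&ctx);
        SHA256_Update(&ctx, salt, saltlen);
        SHA256_Update(&ctx, value, strlen(value));
        SHA256_Final(digest, &ctx);
    }

    int main(void)
    {
        unsigned char salt[16];
        unsigned char digest[SHA256_DIGEST_LENGTH], check[SHA256_DIGEST_LENGTH];
        const char *address = "123 Any Street";  /* a field the merchant does NOT need */

        RAND_bytes(salt, sizeof(salt));  /* fresh salt stops guess-and-hash attacks */
        commit_field(salt, sizeof(salt), address, digest);
        /* ...the merchant stores 'digest' and learns nothing about the address... */

        /* Later, to selectively unhide the field, reveal (salt, value);
         * the verifier recomputes and compares. */
        commit_field(salt, sizeof(salt), address, check);
        printf("disclosure %s\n",
               memcmp(digest, check, sizeof(digest)) == 0 ? "verifies" : "FAILS");
        return 0;
    }

The salt matters: without it, anyone holding the digest could simply guess-and-hash likely addresses until one matched.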

Thursday, March 02, 2006

 

Standards... that aren't.

"pkcs12_parse will, in its current form, only parse well-formed PKCS#12 files which contain a private key, its corresponding certificate, and zero or more CA certificates." -- Dr. Stephen Henson, openssl_users mailing list, 2006Mar02.

Every standard in the cryptographic literature cross-references pretty much every other standard in the cryptographic literature. You can't read the TLS 1.1 specification without a cross-reference to an obsolete version of the PKCS#1 specification -- one it ACKNOWLEDGES as obsolete, but which was kept "to minimize terminology differences between TLS 1.0 and TLS 1.1".

It's almost impossible to find all the specifications you're looking for. Worse, such as in the case of the ITU, many of the specifications cost real money, as though only corporations with deep pockets are expected to use them. (Individuals can get 3 specifications per year free from the ITU, but that's not much help when THEIR specifications all cross-reference each other, so that you need some kind of dependency graph to figure out what you need and what you don't.)

So, I say, let them. Let the big corporations use the ITU specifications, which have been shown to be impossible to implement, impossible to understand, and rife with security problems. The only reason why they're the standard (and why the IETF bows to them) is because nobody has come up with a different, better standard... the only person who ever led an attempt to try something else was Phil Zimmermann, the guy who wrote PGP.

We need to practice cryptography. We need to understand what it is we do, and we need to find what's wrong in current practice and fix it. We need to reduce the complexity of using it, while learning how to implement it securely. (Developers, this also means that you need to learn how to build secure programs and operating systems, as well as secure cryptographic pieces.)

The only way we can maintain our privacy is by applying rigorous security to our information. The only way we can be sure we're talking to who we think we're talking to is by applying rigorous security to our interactions. The only way we can be sure that we're getting what the other person is saying is by applying rigorous security to our communications. These are axiomatic -- why should we let anyone else dictate the terms of how we do it?

Wednesday, March 01, 2006

 

Random Numbers... another source of amusement.

Another endless source of amusement for me is the lack of understanding that people have about what they need to do with random numbers... as in, how to generate them, how to use them, and how to prevent them from being found after they're used.

In order to be secure, a cryptographic system requires a truly random source of information -- the measure of which is called 'entropy'. The problem is that there are relatively few places where such information can be obtained from a normal, everyday, consumer-level computer.

A correspondent of mine, Philipp Gühring (of the CACert.org project), sent me a paper describing some possible sources of entropy available in most computer systems (one of them being the effect of the Brownian motion of air on the precise timing of a hard disk's read head obtaining its data), and possible ways to harvest truly random data from them (causing artificial thrashing of the swap file and measuring the read and write times, for example). There's just one bit of trouble with this approach: each operation yields very little random data, so the raw bits have to be stretched, which is normally the realm of hashing algorithms. And even those hash outputs have to be expired once the chaos that led to their creation is exhausted.
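(Here's a rough sketch of the timing-measurement idea -- my own illustration, not Philipp's actual code; the file path and sample count are arbitrary. Time a series of seek-then-read operations with a high-resolution clock, keep only the least-significant bit of each measurement, and remember that a real generator would pool far more samples and whiten them with a hash before use.)

    /* Illustrative only -- not Philipp's code. Harvest timing jitter from
     * disk reads; a real generator would pool many more samples and whiten
     * them with a hash before use. Build with: cc jitter.c -lrt */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <time.h>

    int main(void)
    {
        int fd = open("/var/tmp/bigfile.dat", O_RDONLY);  /* any large, uncached file */
        unsigned char buf[512], pool = 0;
        struct timespec t0, t1;
        int i;

        if (fd < 0) { perror("open"); return 1; }
        for (i = 0; i < 8; i++) {
            /* rand() only scatters the seeks; it is NOT the entropy source */
            lseek(fd, (off_t)(rand() % 4096) * 512, SEEK_SET);
            clock_gettime(CLOCK_MONOTONIC, &t0);
            read(fd, buf, sizeof(buf));
            clock_gettime(CLOCK_MONOTONIC, &t1);
            /* physical chaos shows up in the low-order bits of the elapsed time */
            pool = (pool << 1) | ((t1.tv_nsec - t0.tv_nsec) & 1);
        }
        printf("one raw (unwhitened!) byte of jitter: 0x%02x\n", pool);
        close(fd);
        return 0;
    }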

(As an aside, there's a government standard -- FIPS 140-2 -- that mandates the parameters of any encryption used by the US government. This standard actually doesn't certify 'nondeterministic random number generators' -- the technical name for 'truly random number generators' -- it only certifies 'deterministic random number generators' (pseudo-random number generators) of a specific algorithm. This could cause... issues.)

Linux, FreeBSD, Windows 2000/XP/2003/Vista, NetBSD, OpenBSD, Solaris 9 and 10... these all have built-in means of getting the entropy that the system generates on a fairly consistent basis into a random number generator device that is managed by the system. Alas, there's also usually a secondary pseudo-random number generator, which will happily continue giving pseudorandom bytes even if the chaos is exhausted. This isn't appropriate for cryptographic operations.

/dev/random is your cryptographic friend.
/dev/urandom is not.

(This oversimplifies a bit -- it assumes those names actually point at the proper device major/minor numbers.)
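(For illustration, a minimal sketch of pulling key material from /dev/random on a Unix-like system. The loop matters: /dev/random blocks while the kernel's entropy pool is empty and can return fewer bytes than requested, so a single read() isn't enough.)

    /* Minimal sketch: fill a buffer from /dev/random, handling short reads. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int get_random_bytes(unsigned char *buf, size_t want)
    {
        int fd = open("/dev/random", O_RDONLY);
        size_t got = 0;
        ssize_t n;

        if (fd < 0) return -1;
        while (got < want) {
            n = read(fd, buf + got, want - got);  /* may block, may be short */
            if (n <= 0) { close(fd); return -1; }
            got += (size_t)n;
        }
        close(fd);
        return 0;
    }

    int main(void)
    {
        unsigned char key[16];
        size_t i;

        if (get_random_bytes(key, sizeof(key)) != 0) { perror("/dev/random"); return 1; }
        for (i = 0; i < sizeof(key); i++) printf("%02x", key[i]);
        printf("\n");
        return 0;
    }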

On MS Windows, there's an API called the CryptoAPI. (I'm going to get into the problems with this in another post -- there's absolutely NO excuse for their negligence in documentation.) This API will, upon request, produce pseudorandom and (according to MS, at least) cryptographically-practical numbers by applying the RC4 stream-cipher algorithm to a value, with a key that the caller may or may not supply.
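(For comparison, the call sequence looks roughly like this -- a sketch of the documented CryptAcquireContext/CryptGenRandom entry points, with error handling trimmed to bare return codes:)

    /* Sketch: random bytes via the Windows CryptoAPI. Link with advapi32.lib. */
    #include <windows.h>
    #include <wincrypt.h>
    #include <stdio.h>

    int main(void)
    {
        HCRYPTPROV hProv;
        BYTE buf[16];
        DWORD i;

        /* CRYPT_VERIFYCONTEXT: we want the RNG only, no named key container */
        if (!CryptAcquireContext(&hProv, NULL, NULL, PROV_RSA_FULL,
                                 CRYPT_VERIFYCONTEXT))
            return 1;
        if (!CryptGenRandom(hProv, sizeof(buf), buf)) {
            CryptReleaseContext(hProv, 0);
            return 1;
        }
        for (i = 0; i < sizeof(buf); i++) printf("%02x", buf[i]);
        printf("\n");
        CryptReleaseContext(hProv, 0);
        return 0;
    }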

Either way, though, if the numbers aren't random, then the encryption is open to attack, since the "keyspace" (the number of possible keys, usually 2 to the power of however many bits the cipher uses) is greatly reduced. (A brute force attack against 2^127 possibilities takes half as long to search as 2^128, 2^126 takes half as long as that, and so on. If an attacker can guess at the pool you're getting your random numbers from, that reduces the power-of-two dramatically, sometimes halving or quartering it... and 2^40 and 2^56 have been shown to be easily brute-forceable by modern hardware.)
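(A back-of-envelope illustration, using an entirely made-up rate of a billion key trials per second, of how the expected search time scales with key size:)

    /* Back-of-envelope only: brute-force time at an assumed 10^9 keys/sec.
     * Build with: cc keyspace.c -lm */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double rate = 1e9;               /* made-up trials per second */
        const double year = 365.25 * 24 * 3600;
        int bits[] = { 40, 56, 64, 128 };
        int i;

        for (i = 0; i < 4; i++) {
            /* on average you find the key after searching half the keyspace */
            double secs = pow(2.0, bits[i] - 1) / rate;
            printf("%3d-bit key: %.3g seconds (%.3g years)\n",
                   bits[i], secs, secs / year);
        }
        return 0;
    }

At that rate a 40-bit key falls in minutes and a 56-bit key in about a year, while 128 bits stays far out of reach -- which is exactly why an attacker would rather shrink your effective keyspace by guessing your random-number pool than brute-force the cipher.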

So, how to make sure your source is random? I honestly don't know. For RSA, Philipp suggested chopping off the top bit and the bottom 8 bits, and testing the remaining bits for various statistical anomalies. I have not yet looked at his code [he's rather hoping that someone else will take over development of it], but it's available at RNGQA-light.tar.bz2 on his website.
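(I won't guess at what his tests actually are, but as an example of the genre, here's the monobit frequency test from FIPS 140-2: gather 20,000 bits of generator output and count the ones; the standard says the count must fall strictly between 9725 and 10275, or the generator fails.)

    /* The FIPS 140-2 monobit test: in 20,000 bits of generator output, the
     * number of one-bits must satisfy 9725 < ones < 10275. */
    #include <stdio.h>

    int monobit_test(const unsigned char stream[2500])  /* 2500 bytes = 20,000 bits */
    {
        int ones = 0, i, b;

        for (i = 0; i < 2500; i++)
            for (b = 0; b < 8; b++)
                ones += (stream[i] >> b) & 1;
        return ones > 9725 && ones < 10275;             /* 1 = pass, 0 = fail */
    }

    int main(void)
    {
        unsigned char stream[2500];
        FILE *f = fopen("/dev/urandom", "rb");  /* fine here: testing the test, not making keys */

        if (!f || fread(stream, 1, sizeof(stream), f) != sizeof(stream)) return 1;
        fclose(f);
        printf("monobit: %s\n", monobit_test(stream) ? "pass" : "FAIL");
        return 0;
    }

Note that a test like this can only catch gross failures; a perfectly deterministic generator with good statistics will pass it every time, which is why statistical testing is no substitute for a genuinely chaotic source.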

As well, there's another issue: Proper usage of random numbers. This is a topic I'll go into in another post.

Sunday, February 26, 2006

 

X.509 stupidity...

I was just made aware of yet another piece of stupidity by the ITU, in their X.509:

A Certifying Authority is known by its name, not by any other criteria. The purpose of this was to allow for the creation of a new CA certificate if necessary, one which would be able to issue CRLs covering certificates signed by the old key.

Uhhhh... riiiiight. NSS fell victim to an attack like this, but I'm not at all certain what the currently available toolkits do with it. Were I to create a new, self-signed CA with the same name as, a different serial number from, and a later "valid from" time than an existing certification authority, and then issue a CRL which explicitly revokes the prior CA key...

...would I have essentially yanked operation of the CA from the prior owner?
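(The mechanics hinge on toolkits matching a CRL to its issuing CA by distinguished name alone. A sketch of that comparison using OpenSSL's own primitives -- assuming ca1 and ca2 are certificates you've already loaded, say via PEM_read_X509:)

    /* Sketch: X.509 identifies a CA by name alone. Assumes ca1 and ca2 are
     * X509* certificates already loaded (e.g., via PEM_read_X509). */
    #include <openssl/x509.h>

    int same_ca_by_x509_rules(X509 *ca1, X509 *ca2)
    {
        /* X509_NAME_cmp returns 0 when the distinguished names are equal.
         * Nothing here examines the public key, so a freshly minted
         * self-signed certificate with a copied subject name compares
         * as the "same" CA as the genuine article. */
        return X509_NAME_cmp(X509_get_subject_name(ca1),
                             X509_get_subject_name(ca2)) == 0;
    }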

(Information from a post by Erwann ABALEA on the openssl-users mailing list, 26Feb2006):

The X.509 says it all.

From this standard, a CA is a name (not a key, really a name). That
allows you to renew the CA's key (and certificate), and this
key+certificate still belongs to the same CA. Whence, you can revoke
an issued certificate that was signed by an anterior CA key.

This (issuer name, serial number) uniqueness is clearly stated in
chapter 7 ("Public-keys and public-key certificates"):
"serialNumber is an integer assigned by the CA to each certificate. The
value of serialNumber must be unique for each certificate issued by a given
CA (i.e., the issuer name and serial number identify a unique certificate)."
