Practice Cryptography!

Even with all of the cryptologic and cryptographic technology that has existed in the world for the past 60 years, we still don't really know what encryption is good for or how to use it -- or, more importantly, why it matters. Maybe it's time for people and coders to actually start practicing how to use it, like any other skill.

Saturday, February 25, 2006

 

Who do you trust?

Who do you trust... some arbitrary third-party corporation whose overriding concern is its own bottom line? Or do you trust yourself to figure out who you want to trust to be honest, and who you want to distrust because you don't believe they'll be honest with you?

 

Assumptions made by the classic identity model

There's a set of assumptions that the classic identity model makes, but fails to disclose... and these assumptions are perhaps fatal.

1) If you have interacted with a name, you are interacting with the same person when you meet that name again.

This is problematic for several reasons, not the least of which is the lack of a completely unique identifier per person. (Let's try this example: C=US/ST=Arizona/CI=Tucson/CN=Rodriguez/CN=Jose ... or would that be CN=Jose Rodriguez? Or what syntax should be used here? I don't know... but I do know that given the ethnic population of Tucson, it's very likely that there is more than one person who would match that name.) Say you have a service where you provide information about goings-on in a specific part of town. One Jose Rodriguez logs in and sets up his profile to point to the northwest (his wife likes shopping at the mall up there, so he wants to get offers from them -- she spends an arm and a leg anyway, might as well give her some coupons to cut down on the financial outlay). Then another one logs in, wanting to know about the goings-on of the neighborhood revitalization committee in the southwest. He finds that he's already (somehow) set up in the system with completely bogus information.

How to solve this? Well, we could always use the E= qualifier -- email address. But that only goes to show that the problem can't be easily dealt with. Say the same Jose Rodriguez has five different email accounts, and he gets a separate certificate for each. He decides he's going to sign up for a freebies list five times -- and get five different free dinners at a local restaurant. There'd be no way to know that he's ripping the restaurant off.

It's actually impossible to come up with any specific means of verifying that one party hasn't interacted with another party when only digital signatures are involved. (The alternative would be mandatory unique identifiers -- driver's license numbers, serial numbers, or whathaveyou, embedded into the certificates -- but there are privacy concerns there as well. Several states have laws on the books preventing the information on ID cards or driver's licenses -- including their ID numbers -- from being collected and used.)

So, the thing is, digital certificates are a strong form of authentication -- but they're nominally two factors that effectively collapse into a single one (something you have). Another factor would be a username (something you know), or a hash of a thumbprint or retina scan (something you are). Of these, usernames (or user IDs) seem to be the best idea -- they identify which credentials you're trying to use for the resources you want to access.
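To make that concrete, here's a rough sketch of what a relying party could do -- key each account on the certificate's public-key fingerprint plus a user-chosen account ID, instead of on the subject name alone. This is Python using the 'cryptography' package, and every name in it is invented for the example; it's not a description of any system that actually exists.

# A sketch only: pair the certificate (something you have) with a
# user-chosen account ID (something you know), and key the account on the
# certificate's public-key fingerprint instead of its non-unique subject
# name. The 'cryptography' package is assumed; the names are invented.
import hashlib

from cryptography import x509
from cryptography.hazmat.primitives import serialization

# Hypothetical account table: (key fingerprint, account ID) -> profile data.
ACCOUNTS = {}

def key_fingerprint(cert: x509.Certificate) -> str:
    """SHA-256 over the DER-encoded public key, not over the subject name."""
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return hashlib.sha256(spki).hexdigest()

def find_account(client_cert_pem: bytes, account_id: str):
    """Two 'Jose Rodriguez' certificates share a CN, but not a key."""
    cert = x509.load_pem_x509_certificate(client_cert_pem)
    return ACCOUNTS.get((key_fingerprint(cert), account_id))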

But why can't these be embedded into the certificates, too?

The main thing here is to realize that there's no way to verify that you're dealing with someone new... and in the current "tightly-bound" system, there's no way to verify that you're dealing with someone old, either.

That is where "context" comes into play... but even then, that solves only half of the problem: you know that the person you're dealing with is the same one you dealt with before... but you can't know that a new identity doesn't belong to someone you've dealt with before. This is an example of something that cryptography CAN'T guarantee, and it's something that system designers have to be aware of.

Friday, February 24, 2006

 

tight versus loose identity binding...

That last post of mine, I wrote about a year ago (at least according to the date on the file). In that time, I've learned a couple of things, but they only reinforce the concepts I mention.

A company CA verifies that its employees' certificates are, in actuality, owned and controlled by employees of the company. This means, effectively, that the company CA maintains the context of the company.

Now, it would be nice (for companies) if they could tightly bind an employee's legal identity with his position within the company. However, there are some issues that make that difficult, including the fact that the certificate goes out to everyone, and even knowledge of the internal structure of a company can be useful to people trying to perform social engineering attacks. (Not everything is done electronically, after all -- most things are still done with voice calls.)

Another aspect is that a tight binding like that couldn't actually be created by the company; it would have to be created by the maintainers of the legal identity context. (We've already seen that there aren't any of those, really -- or rather, there are far too many of them.) And promotions, demotions, and firings would be difficult to deal with as well, since the credential must be revoked and a new one issued (as appropriate). So it's a loose binding -- cryptographically speaking. The top executives of corporations are on file with the state they are incorporated in, so there is a tighter binding there... and as we shall see, sometimes that's not such a good idea.

 

Questioning the necessity of single-identity cryptographic trust models

By Kyle Hamilton

In every current discussion of cryptography and cryptographic trust models, a single theme emerges: one key belongs to one person. One person may have many keys, but all keys owned by that person are bound to a single identity – his legal identity in his sociopolitical division.

In this paper, we challenge this model as being useless for anything except political and financial affairs, and explore some of the reasons why multiple-identity cryptography (and variant levels of trust) are not only necessary, but desirable.

X.500 and its continuing saga of non-implementability

The ISO (International Organization for Standardization), in conjunction with the ITU (International Telecommunication Union, whose standardization arm was then known as the CCITT, the International Telegraph and Telephone Consultative Committee), created the OSI (Open Systems Interconnection) effort: a thorough and horribly overengineered set of protocols that vendors and companies could use to ensure interoperability. From it came the OSI “7-layer model” for computer networking, a suite of protocols by which each layer could request the services of the layer below, and a means of tying all user information together. This last, the X.500 Directory, was originally intended to be a single, worldwide, distributed directory (akin to the DNS, or Domain Name System, of the Internet) that would allow anyone, anywhere, to query the whereabouts and contact information of any other individual in the world.

Unfortunately, the CCITT (as it was known back then), being an arm of the United Nations, admitted only governmental and large industrial participants to its meetings and standardization procedures. The result was a protocol and structure geared toward the needs of governments and large corporations, with nothing to offer those with less sophisticated needs. As well, large corporations generally balked at making their entire internal employee directories – as well as employee lists – public. For these and other reasons (not the least being the sheer impossibility of implementing the entire OSI suite of protocols), X.500 never was and never will be fully implemented.

Along came a challenger to the previously iron-handed grip the CCITT had held on the standardization process – the Internet Engineering Task Force. Here was a much smaller, much less politically-oriented, much more adaptable form of standardization, which stole all the wind from the OSI network model with its much-more-easily implemented TCP/IP suite of protocols. The market has decided who won that particular battle, and it wasn’t the United Nations.

However, not all was lost. Within the data formats and protocols so painstakingly assembled by the CCITT lay a couple of nuggets of goodness. First was X.500 itself – with its interconnection requirements severed, it makes a good internal database for large corporations. (From this realization came LDAP, the Lightweight Directory Access Protocol.) The second came from a standard called X.509 – a means of exchanging identity information in small blobs that a computer can fairly easily read and render human-readable.

In its original form, X.509 was a bare-bones format – it bound a public key to a Directory name, with no extensions and little else… but then two projects emerged: one, an attempted implementation of encrypted mail called PEM (Privacy Enhanced Mail); and two, a little company called Netscape, which set out to put businesses on the Internet and enable electronic order fulfillment. Both projects saw that the X.509 format had the potential to be much, much more than it then appeared.

PEM was doomed to failure for many of the same reasons that X.500 was – it required a single global root that delegated trust to countries which then delegated trust to Certifying Authorities (private-sector companies that assured the validity of information contained in a certificate) which then delegated trust to companies which then delegated trust to users. Since there never was such a global root, it failed.

But the other project, by Netscape, was attempting to find some way of combating the plain-text phenomenon that was the Internet of that era. Financial institutions and businesses were unwilling to trust plain-text, unencrypted transmissions with important things like credit cards and business-to-business orders. (Who could blame them? A message could be altered en route, or spied upon to perform fraudulent transactions, and there would be no way for anyone to prevent the huge losses that would undoubtedly occur.)

So, Netscape used the X.500 concept of the Directory, combined with X.509 certificates (and Certifying Authorities), to raise the bar. They used the X.500 Directory format to identify companies who wanted to offer encrypted pages, and X.509 certificates to bind public keys to the servers themselves. (These X.509 certificates were signed by the Certifying Authorities, who were, literally, put into business by this practice. Verisign was the first one, and for a long while the only one. And boy, were they expensive.)

Netscape also put into play “client certificates”. These were certificates, issued by a Certifying Authority, that were limited so that they could only identify people, not specifically DNS-named servers. They were to be used during an SSL exchange to identify the user to the server. (The server certificate was to be used during an SSL exchange to identify the server to the user.)

In any case, these certificates provided assurance to the recipient that the encryption key presented belonged to the entity (natural or legal person) it claimed to belong to. Of course, the limitation of liability usually associated with these certificates made those assurances of little value to the party that relied upon them in the first place.


The Aftermath

VERY few places actually use client certificates, even ten years after they were first conceived and implemented. There are several reasons for this; the reasons I list here are merely within my own experience. There are undoubtedly others.

1) There is no way to verify that the client certificate presented has been issued to the correct entity instance of the name. (My apologies for the strange wording.)

Basically, what this means is if there are two or more people named ‘Jose Ferdinand Rodriguez’, and both of them obtain credentials from a given Certifying Authority, it is impossible to determine which of them is presenting his credential at any given access attempt. There is no uniqueness in names within the sociopolitical foundations of our society – and for merchants and banks, with no way to differentiate between people with the same name, this is a disastrous flaw.

In an attempt to redress this, one CA (Thawte Consulting, Pty, based in South Africa) offered its institutional clients the ability to associate a unique identifier with a given individual, through an extension embedded into the certificates issued to that individual. The process by which this was to be set up was very cumbersome, rife with technical issues, and expensive to set up and maintain. The security thus provided would not have been cost-effective – and so most financial institutions have elected to keep passwords as their main authentication mechanism.

2) The existing mechanisms to implement such authentication are not general enough.

The software that exists (Apache with mod_ssl, Windows IIS, virtually any HTTP server software) does not make it easy to figure out what is really in the certificate that has been presented by the user. Unfortunately, some of that information (including the aforementioned extensions that Thawte Consulting would embed in the certificates issued to its individual end-users) is downright vital to properly reacting to the client’s authentication credentials.

Apache/mod_ssl does have the capability of handling this circumstance: the entire PEM-encoded client certificate is put into an environment variable for its CGI or PHP programs to decode, and the status of the client verification is put into another variable. However, in order to get an extension and its value, a PEM/DER decoding library must be linked into the CGI (it can’t simply reuse the library that’s already part of the webserver), so the decoding happens more than once per request. This is inefficient at best.
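For what it's worth, here is roughly what that second, CGI-side decode looks like in Python with the 'cryptography' package. I'm assuming the environment variables mod_ssl typically exports (SSL_CLIENT_CERT, which requires SSLOptions +ExportCertData, and SSL_CLIENT_VERIFY); the extension OID below is a made-up placeholder standing in for something like Thawte's identifier, not the real thing.

#!/usr/bin/env python3
# Roughly the CGI-side second decode described above (a sketch, not
# production code). Assumes mod_ssl's usual exports: SSL_CLIENT_CERT
# (requires SSLOptions +ExportCertData) and SSL_CLIENT_VERIFY. The
# extension OID is a made-up placeholder, not Thawte's real identifier.
import os

from cryptography import x509

PLACEHOLDER_UID_OID = x509.ObjectIdentifier("1.3.6.1.4.1.99999.1")

def main() -> None:
    if os.environ.get("SSL_CLIENT_VERIFY") != "SUCCESS":
        print("Status: 403 Forbidden\n")
        return

    pem = os.environ.get("SSL_CLIENT_CERT", "")
    if not pem:
        print("Status: 401 Unauthorized\n")
        return

    # The redundant second parse the text complains about: mod_ssl already
    # decoded this certificate once in order to verify it.
    cert = x509.load_pem_x509_certificate(pem.encode())
    try:
        ext = cert.extensions.get_extension_for_oid(PLACEHOLDER_UID_OID)
        unique_id = ext.value.value      # raw bytes of the unknown extension
    except x509.ExtensionNotFound:
        unique_id = None

    print("Content-Type: text/plain\n")
    print("subject:", cert.subject.rfc4514_string())
    print("unique-id extension:", unique_id)

if __name__ == "__main__":
    main()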

With Windows IIS, it’s even worse – I have yet to be able to figure out how to get the client certificate data from subprocesses. (ASP might be able to get it, or extension DLLs – but if it’s possible for any non-component to get it, I haven’t been able to figure it out at all.)

Others? I have not used them. I don’t know how easy it is to get the information that’s needed… but if Apache with mod_ssl doesn’t make it easy, chances are there hasn’t been anything that’s made it a selling point.

(As an aside: what I would like to see is a file descriptor opened that had the entire certificate decoded, which could then be read on a line-by-line basis until the extension(s) of interest came up.)

3) Very few things actually use SSL, and those that do only use it for its server authentication, encryption, and tamper-resistance.

This is a bigger problem than most people think. There is no security without proper authentication. Passwords are not good authentication. (We’ve been using passwords for more than two thousand years. We know what all the problems are with them, and there are a LOT of them. One would think that with the advances we have in technology, we would just be able to do away with them.)

Even more importantly, encryption is not useful unless you know, with certainty, who you are talking to. Someone could encrypt a conversation that he’s having about being frustrated with whatever government makes laws that he is subject to, and how he wishes that certain members of government were removed from office – but without any knowledge of who he’s talking to, he could just as easily be talking to an agent of that government (perhaps through a man-in-the-middle attack) as the person he thinks he’s talking to.
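One partial defense can be sketched very simply: derive a short fingerprint from the other party's public key and compare it over some separate channel (a phone call, a face-to-face meeting) before trusting the encrypted session. The Python below assumes a PEM-encoded public key and the 'cryptography' package; it's an illustration of the idea, not a prescription.

# An illustration of the idea, assuming a PEM-encoded public key and the
# 'cryptography' package: derive a short fingerprint to read out loud over
# a separate channel before trusting the encrypted session.
import hashlib

from cryptography.hazmat.primitives import serialization

def spoken_fingerprint(pem_public_key: bytes) -> str:
    key = serialization.load_pem_public_key(pem_public_key)
    der = key.public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    digest = hashlib.sha256(der).hexdigest()
    # Group the first half into four-character chunks for easy reading.
    return " ".join(digest[i:i + 4] for i in range(0, 32, 4))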

4) In order to use client authentication certificates, it is mandatory that the server have a certificate of its own.

This is simply because the SSL protocol mandates it. (Which makes sense for currency transactions, but which makes no sense for other kinds of interactions.) And, there are several reasons why this is a stumbling block:

a. The management overhead to get authenticated, and thus certified, is simply too onerous for any but the most demanding situations.

When applications such as SSH use unverified keypairs for those most necessary of functions – authentication of a user to the system that he’s using, and authentication of the system to the user who’s trying to use it – it is obvious that there’s a severe problem. Certification should be just a matter of common sense, and it is – but even if the management overhead were a surmountable hurdle, the next reason most certainly is not:

b. Identity certification is simply too expensive for any entity that isn’t either a financial institution or transacting commerce.

Certifying Authorities all charge slightly different amounts for verification of identity and issuance of certificates. This doesn’t change the fact that it ALWAYS costs at least $100, and often $250 or $300, for a single year of certification. (This is on top of the business costs, and the Dun & Bradstreet listing, and everything related – since no Certifying Authority that’s accepted into, say, Microsoft’s Certifying Authority program will issue a server certificate to an individual who isn’t also a sole proprietor. Microsoft’s program – as well as the other, similar programs for other browsers – relies upon audits of CA business practices performed by an association of Certified Public Accountants, each audit costing upward of $10,000.)

Because of this, in order to recoup the initial costs, the CAs have no choice but to charge exorbitant fees.

All of these reasons point to a simple, but hard-to-swallow fact: The current identity structure is priced far outside of the realm of what most people can afford, and doesn’t meet even the most cursory needs of the function it was supposed to support.

On top of this, though, there are other issues at play.

Reality Strikes

On the Internet, which is where all of this technology is supposed to be used, there is a very peculiar phenomenon: people are known more by what they wish to be known as than as their legal names.

As an example: The author of this piece has several identities, in various places around the ‘net. For the software that he helps write, he’s known as “winged”. For interactions on various IRC networks, he’s known as “Winged” or “Aerowolf”. On MUCKs and MUDs, he’s known as “Aero”, “Aryd”, “Elrick”, “Greybolt”, “Winged”, and “Jarek”. On various web forums, he’s “aerowolf”, “aerolupus”, “aeroloup”, or even “wolfoftheair”. And his email addresses start with “wingd_wolf”, “aerowolf”, and “kyle_a_hamilton”.

As well, when he meets up with people he knows from projects on the Internet, they tend to call him “Winged” or “Aero”.

Other people he knows from the Internet exhibit the same phenomenon.

Very few people intentionally obscure their legal identities. However, constraints within these systems pretty much demand that alternate names – “nicknames” – be chosen and adhered to. (Some systems are for role-playing, and thus all but demand different names. Some systems are limited to 8-character names. Some systems are for games, or for other not-serious purposes, and no transactions occur under the chosen nicknames.) These nicknames are perfectly, completely valid as unique identifiers within the systems in which they’re chosen.

However… the current identity structure (where everything devolves to the legal name) is almost useless within these contexts. (If I interact with someone as, say, “Dweezil” for a number of years, that person’s legal name is going to stick less for me than the nickname I’ve used all these years. Thus, if I suddenly receive a certificate that contains only the legal name, I’m not going to map it to the individual properly.)

As well, there’s a more economically and reputationally damaging issue at stake: If the CEO of a major corporation wanders around online, and comes across a discussion about something of interest that could damage her reputation, should she need to use an identity that can be traced back to her professional identity? This isn’t as far-fetched a question as it might seem – already, executives are placed under scrutiny by their peers, and if any hint of impropriety is found they are summarily dismissed. Impropriety often means ‘having an interest in romance with another person of the same gender’, or ‘having a family member who is autistic’, or other such personal affairs.

Since there are circumstances under which it makes less sense to use a legal name for authentication (remember, ‘authentication’ is basically ‘making sure you’re talking to who you think you’re talking with’), there should be a way to assure a name that is not the legal name (and a context within which the name is valid and unique).

(If you think about it, the current ‘single-identity’ model essentially attempts to assure a name within the context of ‘legal names’… and, as mentioned earlier, it fails horribly because individual names just aren’t unique. In the US, the Social Security number isn’t a reliable unique identifier. Other countries /may/ have something unique that can be applied to all of their citizens and/or residents, but privacy advocates are so outspoken against the idea of national identities that unique identifiers are very difficult to put into place.)

Now, normal Certifying Authorities can’t assure names in contexts that they aren’t equipped to deal with. This is partly because there is, literally, no way to specify a context other than “legal names” in the current X.509 model, which is what all current Certifying Authorities use… but it’s also partly because it would make no sense for them to do so, from a business standpoint. (The only way, other than name recognition, that CAs can increase their server subscriber base is by having more clients that can be authenticated through their assurance. This is a problem, though, for the reasons I outlined above.)

What is the solution to this? How can any of these names be assured if the commercial CAs can’t do it?

Well, what is needed is some entity that is able to assure each name. Fortunately, there are entities within each and every context who can provide such assurance, and models under which these assurances can be asserted.

Specifically: The creator/owner/maintainer of each context is completely authoritative for the assurance of names within that context. (Basically, the creator/owner/maintainer has absolute access to the database which holds the names, and can verify where each request for assurance comes from. If a request for assurance for a name comes from a connection authenticated as that name, then it’s a fair bet to state that the request can be assured.)

If the creator/owner/maintainer of the context is unwilling to assert its authority in assuring identities (they’re unwilling to code the assurance service, for example, or they don’t have time to add another layer to their maintenance work), then other people within the same context can make the assertions. (This “web-of-trust model” works well within the PGP/GPG community.) It’s not necessarily more or less secure, though many people are less likely to be in collusion than a few people.
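Here is a minimal sketch of what such an assurance could look like: the maintainer of a context signs a small statement that a given name is valid within that context, and anyone holding the maintainer's public key can check it. The context name, the nickname, and the key handling are all assumptions made up for the example (Python, 'cryptography' package).

# A minimal sketch: the maintainer of a context signs a statement that a
# nickname is valid within that context. The context name, nickname, and
# key handling are invented for the example.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# In practice this key would be held by the context's maintainer.
context_key = ed25519.Ed25519PrivateKey.generate()

def assure_name(context, nickname):
    """Return (assertion, signature) binding a nickname to a context."""
    assertion = json.dumps(
        {"context": context, "name": nickname, "claim": "name-in-context"},
        sort_keys=True,
    ).encode()
    return assertion, context_key.sign(assertion)

def check_assurance(assertion, signature, context_public_key):
    try:
        context_public_key.verify(signature, assertion)
        return True
    except InvalidSignature:
        return False

assertion, sig = assure_name("example-mud.net", "Dweezil")
print(check_assurance(assertion, sig, context_key.public_key()))  # True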

Multiple-Identity/Tightly Bound versus Multiple-Identity/Loosely Bound

There’s a subtext to the single-identity concept as it is currently used: “I have not seen this name before, therefore I have not interacted with this entity.” There’s also the opposite: “I have seen this name before, so I have previously interacted with this entity.”

This is (obviously) bunk, but a lot of importance seems to be placed on unique identification. The typical way this has been done in the past was to associate every identity with a main (usually legal) identity, and use that identity linkage to determine prior interaction.

I’ve explained all the reasons why this fails above… including its failure to provide the pseudonymity necessary when interacting in places, and with groups, where exposure could have undesirable repercussions.

The current terminology doesn’t really have a good description for identity binding. So I’m going to introduce a couple of new terms here: “tightly-bound” and “loosely-bound”. And, for the sake of argument, I’m even going to try to find decent, useful definitions for both:

Tightly-bound identities are associated with each other in a way that is impossible to ignore – the knowledge of the binding is encoded within the credential itself.

Loosely-bound identities stand on their own, and are not associated with each other unless the holder of the identities wishes them to be, in some manner.
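As a purely illustrative way of seeing the difference, here are two hypothetical credential shapes -- neither corresponds to any existing certificate profile, and every field name is invented:

# Illustrative only -- two hypothetical credential shapes matching the
# definitions above. Neither corresponds to an existing certificate
# profile; every field name is invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TightlyBoundIdentity:
    context: str             # e.g. "company payroll"
    name: str                # the name used within that context
    legal_name: str          # the link to the legal identity is baked in...
    legal_id_reference: str  # ...and travels with the credential everywhere

@dataclass
class LooselyBoundIdentity:
    context: str             # e.g. "some-irc-network"
    name: str                # stands on its own
    # Any link to the holder's other identities lives outside the
    # credential, and only if the holder chooses to keep one:
    holder_private_note: Optional[str] = None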

For purposes of pseudonymity, loosely-bound identities are much more desirable (it’s impossible to have a useful pseudonym if it’s inherently tied, in every instance, to one’s legal identity). Tightly-bound identities are desirable for any organization which desires absolute knowledge of every identity used by an individual… but they leak information that the identity holder may not wish revealed (sometimes, as in the case of the CEO with an autistic child, with very good reason).

Since the current state of identity management revolves around single identities, or secondary identities tightly bound to those single identities, most of the issues with those types of paradigms are already known. For the rest of this paper, then, I’m going to focus on the issues within a loosely-bound multiple identity system.

Inherent Attributes of Loosely-Bound Multiple Identity Systems

Since we’re looking at a completely different way of managing identity, we’re going to have to throw out everything we thought we knew about it. Starting from first principles, then:

An ‘identity’ is an addressable way of referring to a specific entity within a context.

An ‘entity’ is an addressable interactive structure with all the capabilities that any person has within the context. (i.e., a user account. Note that there is nothing prohibiting a single entity from being used by multiple persons.)

A ‘context’ is any kind of administrative division within which one or more entities may be created and used in manners that lead to the assertion of identity.

A ‘person’ is an individual who has the ability to interact within the context as an entity.

‘Authentication’ is the process by which one entity proves its identity, to an acceptable level of assurance, to another entity.

As an example, think of a Linux-based server that has many user accounts. This server is in a single administrative division (the division maintained by the owner and administrators of the server hardware), and every user account has a unique name and user ID. These accounts are only accessed by the respective persons that they are assigned to. However, there are a couple of user accounts that are used by several different persons, for various reasons. As well, there are two user accounts that are used by other computers in order to transfer data back and forth.

Within this example, the Linux server is the ‘context’. Every user account is an ‘entity’ – at the very least, the rights and privileges of the account are asserted to the operating system and its administrators. This makes every account name and user ID an ‘identity’. Each human being that accesses the system is a person, even those who only access the shared group accounts – and, the other computers that use those two other user accounts? Those are persons too. (They interact with the system the way people do, asserting their rights and privileges in a highly specialized manner in order to perform their functions.)
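For the sake of concreteness, the same example can be written down as a toy data model -- none of this is meant to describe real software, just to show how the vocabulary fits together:

# Purely illustrative: the Linux-server example, written in terms of the
# definitions above. Names and numbers are invented.
from dataclasses import dataclass, field

@dataclass
class Entity:
    identity: str                      # the addressable name in the context
    uid: int                           # the numeric identity in the context
    used_by: list = field(default_factory=list)   # 'persons', human or not

@dataclass
class Context:
    name: str                          # the administrative division
    entities: dict = field(default_factory=dict)

    def add(self, entity: Entity) -> None:
        self.entities[entity.identity] = entity

server = Context("linux-server.example")
server.add(Entity("alice", 1001, used_by=["Alice"]))
server.add(Entity("webteam", 1100, used_by=["Alice", "Bob", "Carol"]))
server.add(Entity("feed", 1200, used_by=["batch job on another computer"]))
# One entity has one account name and one uid (two identities for the same
# entity), and any number of persons -- human or machine -- behind it.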

Now, within this system (and even sometimes when one side of the exchange is not interacting from within the same system) there are ways of asserting identity to other entities within the context, or who trust the context. (Take the case of email – the system ensures that the identity of the user is well-known in the message header, even if the From: line says something else.)

But, what happens when entityA wants to verify that someone he’s interacting with, outside of the system, is actually entityB inside the system? Traditionally, there have been passwords or passphrases or codewords or other fairly-weak authenticators exchanged between entities within a system for offline authentication. The problem is, these are fairly weak, and it’s easy for them to fall into the wrong hands.
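A sketch of a stronger alternative: entityA issues a fresh random challenge, and the party claiming to be entityB signs it with a key that the context already associates with entityB. The registry and the names below are assumptions for illustration only, not an existing protocol.

# A sketch, not an existing protocol: prove control of an in-context
# account to someone outside the system via a signed challenge, instead
# of a shared codeword. The registry and names are invented.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Hypothetical registry kept by (or exported from) the context:
# account name -> public key registered for that account.
entity_b_key = ed25519.Ed25519PrivateKey.generate()
REGISTRY = {"entityB": entity_b_key.public_key()}

# entityA's side: a fresh random challenge, so old answers can't be replayed.
challenge = os.urandom(32)

# entityB's side, wherever the out-of-system conversation is happening.
answer = entity_b_key.sign(challenge)

# entityA's check: does the answer verify under the key the context vouches for?
try:
    REGISTRY["entityB"].verify(answer, challenge)
    print("whoever this is, they control entityB's key")
except InvalidSignature:
    print("no proof that this is entityB")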


Wednesday, February 22, 2006

 

RTFM


Here's a guy who actually seems to get the idea that there's something dreadfully wrong with the current state of affairs, but who doesn't seem to understand what the issues actually are -- he merely sees that things aren't working the way they're supposed to.

Sunday, February 19, 2006

 

So, what do I mean by "Practice Cryptography"?

We've all seen it, we've all done it -- we've all practiced the skills we have, to become better and more proficient at them.

We've started out with baby steps when just learning how to walk, and then we grew more confident and started to run... sometimes we tripped and fell, but we learned how not to make the same mistake the next time.

We've sat on a bicycle and haven't been able to figure out just how we're supposed to keep it going straight, much less how to balance it. It was absolutely foreign to the senses we'd developed to walk and run and skip and play hopscotch. Maybe we kept falling over for a while, but we all got the hang of it... and eventually, as we practiced, we learned how to ride without hands on the handlebars, or to do wheelies, or to do tricks.

Maybe you've held a yo-yo, and had no idea how to make it come back up, pulling your hand up slowly when it hit the bottom of the string. But, you learned how to make it come back to your hand quickly. Perhaps you even learned to do tricks -- by practicing them, and not giving up if you didn't do it quite right.

I think that we've all had these experiences... but we've all fallen for a trap that our governments and large corporations have devised for us. That trap is the idea that there is only One True Way to do cryptography. And I can't believe that. Messages have been sent encoded for military maneuvers since the days of Julius Caesar. In World War 2, the Germans had the Enigma rotor machine, and the Allies eventually broke it because of a few mess-ups. That sucked, but it was bound to happen.

Eventually, cryptography came to be used in the commercial sector. If you've used Yahoo or Gmail or MSN or Hotmail today, you've used cryptography (without even realizing it). There are many, many good reasons to have unbreakable codes... usernames and passwords, for example. Why would you want those to get out onto the 'net? Credit card numbers -- even (or especially!) if you use PayPal or some other payment processing service, you don't want them found out by just anyone who could be hanging around a router out on the 'net.

But in the translation of cryptography from the military to the civilian sector, military ideas took hold. "Everyone has an identity, and every message can be traced back to that identity." Well, yes. But honestly, there's something missing here. And to figure out what it is, we're going to have to go back to what the three main reasons for cryptography really are (I'll sketch all three in code after the list):

1) Authentication: Verifying that a message came from who it seems to have come from. If you write checks or sign credit card receipts, you know the concept of authentication very well -- your signature is authenticated to verify that you are the one who authorized the transfer of your funds to the person you're giving them to. Same thing with cryptography -- it can be used for that.

2) Integrity. This follows closely on the heels of authentication. For example, if you authenticate a check to a bank, the bank needs to make sure that what it received was what you wrote the check for, that it hasn't been tampered with or altered from the time it left your hand to the time it got to the bank.

3) Confidentiality. We take it for granted, here in the US, that our phones are wiretap-free. But the recent spate of illegal wiretapping by our government should show that no conversation is safe. YOUR privacy could be compromised. YOUR information could fall into the wrong hands. And if your information falls into the wrong hands... then, to the computer, someone else /is/ you, and can do all the things that you can do -- forging your identity, stealing it, and making life a hassle for you. So, keep it secret... keep it safe.
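Here's the promised sketch of those three jobs, in Python using the 'cryptography' package. Key distribution -- the hard part, and the subject of most of this blog -- is waved away entirely; this is a toy illustration, not a recipe.

# A toy illustration only: the three jobs above, shown with the Python
# 'cryptography' package. Real systems need key distribution and a lot
# more care; none of that is shown here.
from cryptography.exceptions import InvalidSignature
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import ed25519

message = b"Pay to the order of the phone company: $42.00"

# 1) Authentication: a signature ties the message to the signer's key.
signer = ed25519.Ed25519PrivateKey.generate()
signature = signer.sign(message)
signer.public_key().verify(signature, message)  # raises if it isn't his

# 2) Integrity: verification fails if even one byte has been altered.
try:
    signer.public_key().verify(signature, message + b"0")
except InvalidSignature:
    print("tampering detected")

# 3) Confidentiality: only holders of the shared key can read the message.
key = Fernet.generate_key()
box = Fernet(key)
ciphertext = box.encrypt(message)
assert box.decrypt(ciphertext) == message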

But there's another application for confidentiality, that's close to the "identity theft" idea above...

 

And more and more...

The "single ID" is being touted as the solution to all the Web's identity management problems.

http://www.projectliberty.org/ is based around the concept of a "federated identity system". Basically, a bunch of companies (including AOL, IBM, RSA, Intel, Sun, GM, American Express, Bank of America, Computer Associates, Nokia, NEC, and Verisign, just to name a few) are trying to build a system where your interactions with others within the federation act as "introducers" to other places that use the federated identity. This might work for protecting companies against consumer fraud, but how is it going to help protect consumers against corporate fraud? (I have a nice link somewhere about how an otherwise-reputable company issued a digital certificate to a phisher, who used the automatic SSL verification to trick a LOT of consumers into providing a LOT of their information.)

There are entities that do need single identities. These entities are called 'organizations'. (Organizations should be able to delegate the authority to send email and conduct business to actual people within the organization, but that's beside the point...for the moment.)

And there's also a massive repurposing going on for state Departments of Motor Vehicles. This is no small feat, in actuality -- even though the credentials that the DMVs issue are used as identity documents in any case. (It's rather ludicrous, if you think about it -- the state just wants to make sure that the person is legally allowed to drive. Everyone else wants the state to make sure that the identity credentials are accurate. Which makes sense, since the state (and political subdivisions thereof) are the ones who maintain birth records, and the US Department of State checks with the original issuer before issuing a passport.)

I can understand the need for a single legal identity... but there's a rather severe problem with this that I'll get to later on.
