Practice Cryptography!

Even with all of the cryptologic and cryptographic technology that has existed in the world for the past 60 years, we still don't really know what encryption is good for or how to use it -- or, more importantly, why it matters. Maybe it's time for people and coders to actually start practicing how to use it, like any other skill.

Thursday, May 15, 2008

 

I would love to see the design rationale document behind X.509.  Perhaps if I saw that, I'd be able to be less angry and hateful towards it.

As it stands, though, many of the design decisions just Don't Work.  Not in the Internet realm, at least -- X.509v3 was defined to make up for some of the annoyances, and it still didn't do a good job.

Friday, April 11, 2008

 

One of the largest issues that I have with the current spate of specifications is the need that specification writers seem to have to encode policy into what really should be a purely technical specification.

As an example, TLS requires that any server that wishes to ask for client authentication identify and provide its authentication credentials first.  For a peer-to-peer protocol, though, this is inappropriate -- if someone connects to me and asks me for services, I want to know who they are before I make any decision whether to even tell them who I am.

Now, the issues involved are complex, and I can understand why they didn't want to allow it -- but the fact remains that an otherwise perfectly useful protocol has been rendered perfectly not, simply because they wanted to make sure there was no way the protocol could be used to attack people who were trying to connect to bank sites with SSL/TLS, by having their credential information harvested by some rogue man-in-the-middle.
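To make the ordering concrete, here's a minimal sketch using Python's ssl module (the certificate file names and port are hypothetical).  There is simply no way to configure the server to learn the client's identity before presenting its own; the handshake sends the server's Certificate before the CertificateRequest:

    import socket
    import ssl

    # A server that wants client authentication still has to identify
    # itself first: there is no way to demand the client's certificate
    # without sending our own.  (File names below are hypothetical.)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server-cert.pem", "server-key.pem")  # our identity
    ctx.verify_mode = ssl.CERT_REQUIRED      # then ask the client for theirs
    ctx.load_verify_locations("trusted-client-cas.pem")

    with socket.create_server(("0.0.0.0", 8443)) as srv:
        conn, addr = srv.accept()
        with ctx.wrap_socket(conn, server_side=True) as tls:
            # By the time we can inspect the client's certificate, the
            # client has already seen ours -- the ordering complained
            # about above.
            print(tls.getpeercert())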

Grr.

Thursday, March 20, 2008

 

Why are trust errors automatically fatal?


I mean, you're moving along the web, trying to look at some svn repository that is using a self-signed certificate, and BAM! all of a sudden you're presented with a "can't verify the identity of this site" dialog.

There's no reason for this to be fatal.  There's no reason for this to do anything more than pop up a balloon tip, and maybe pop up the warning dialog on a form submission.  (And you might as well show balloon tips showing the verified subject information and what root it's verified through, too.)

Adium (and Pidgin) have a plug-in, OTR ('Off-the-Record'), that shows how to do this in a sane manner: "OTR session established, identity not verified."  This is the kind of thing that we should have for ALL of our applications, not just our instant messengers.
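Here's a rough sketch, in Python, of the fallback policy I'm describing -- what I wish browsers did, not what they do.  Attempt full verification; on failure, keep the encrypted session and surface the identity status instead of aborting (the host name is hypothetical):

    import socket
    import ssl

    def connect_otr_style(host, port=443):
        # Always encrypt; report verification as status, never a fatal error.
        try:
            ctx = ssl.create_default_context()  # full chain + hostname checks
            tls = ctx.wrap_socket(socket.create_connection((host, port)),
                                  server_hostname=host)
            return tls, "session established, identity verified"
        except ssl.SSLCertVerificationError:
            # Self-signed or untrusted chain: still encrypt, but say so --
            # the OTR-style "session established, identity not verified".
            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
            ctx.check_hostname = False          # must be cleared first...
            ctx.verify_mode = ssl.CERT_NONE     # ...before disabling checks
            tls = ctx.wrap_socket(socket.create_connection((host, port)),
                                  server_hostname=host)
            return tls, "session established, identity NOT verified"

    tls, status = connect_otr_style("svn.example.org")  # hypothetical host
    print(status)   # show a balloon tip here, not a modal dialog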

I mean, if "cannot verify the identity of this site" were supposed to be a fatal error, then browsing via http would be brought to a screeching halt.

Saturday, February 23, 2008

 

Assumptions in the Literature

Hmm... I've started reading through the Handbook of Applied Cryptography by A. Menezes, P. van Oorschot, and S. Vanstone (CRC Press, 1996), and I have to say that it seems to be very well-written and well thought-out. However, I'm looking at it from the viewpoint of someone who has recently realized that there are issues in how the field views itself and presents itself, and I figure it's probably appropriate to describe some of these issues.

At its heart, cryptography is a means of ensuring that a policy is communicated along with a communication. Whether that policy is "I want nobody else to see this", "I want you to know that it came from me and wasn't forged in the meantime", "I am making a good faith effort to participate in this protocol and I expect you to do the same", or anything else, the policy is communicated by the presence of mathematical transformations of the data (and possibly any secret information which isn't communicated). The fact of the matter, though, is that there is no means to guarantee that the recipient will follow the policy that's communicated. In my view, this is a fundamental flaw of the current state of the art and state of the science in cryptography.

The cryptographic literature appears to assume that any and all of its users expect the same level of security, the same guarantees, the same possibilities, and the same impossibilities in each and every communication they make. This is simply and categorically not the case -- and it is an assumption which has damaged the credibility of the field as it relates to the information security objectives of non-business consumers. It has also damaged the case for applying legislative frameworks within which cryptography has useful functionality, because the assumptions made often simply do not align with real-world information security goals.

(One of my friends made an observation: "I have never met any crypto[grapher] who was not at least a little off." 'off', in this sense, refers to 'eccentric', or 'possessing a skewed worldview' -- essentially, cryptography is an extremely interesting game, but it teaches paranoia and legalistic preciseness as a worldview.)

Bringing this back to the HAC, this viewpoint is brutally apparent in Chapter 1 and its listing of "some information security objectives". To be fair, most of these make sense in a specific worldview -- but only if you can rely on a court system to enforce that specific worldview, and recognize the precise guarantees that each mathematical algorithm can provide. Otherwise, it provides a lot of opportunity and potential for misuse. I'm not going to address all of the points they bring up here... but I will point out a few of the more interesting (to me) ones.

'privacy or confidentiality' -- well, we often want to know that what we're sending out over the network isn't going to be read by anyone who isn't supposed to read it. The problem with this is that it's damnably difficult to set up -- and it doesn't work at all when sending messages to a mailing list or to a Usenet group.

'data integrity' -- Of all the things the list includes, this is the one that almost everyone can agree on. Realistically, it's almost more conspicuous in our current communications protocols for its absence -- as recently as five years ago, much of our communication was easily damaged by line noise on our modems (and in some parts of the country, it still is). Now we have to deal with cellular phones, which (at least where I live) often drop packets, causing warbling and making it hard to understand the other end of the connection.
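For what it's worth, the cryptographic version of this objective is easy to show in code -- a minimal sketch using Python's hmac module, where the shared key is assumed to have been established out of band:

    import hashlib
    import hmac

    key = b"shared secret"            # assumed established out of band
    msg = b"meet at the usual place"

    tag = hmac.new(key, msg, hashlib.sha256).digest()

    # A single changed character -- line noise, a dropped packet, or
    # deliberate tampering -- makes verification fail.
    bad = b"meet at the usual plate"
    print(hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest()))  # True
    print(hmac.compare_digest(tag, hmac.new(key, bad, hashlib.sha256).digest()))  # False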

'entity authentication or identification' -- um... identification and authentication are two separate processes, guys. At least as far as the Information Technology/Information Services field goes, and even as far as X.509 and SET go. And since when is a credit card an 'entity'? Wouldn't that kind of destroy e-commerce as we know it? It may be a token providing evidence of authority (to use a specific account number in a manner that can create liability), but it's not an actual entity. :P

'non-repudiation' -- "preventing the denial of previous commitments or actions." Uhh.. this would be akin to stating that everything ever written must be signed by the person who wrote it, and whoever's signature was on it would be held completely accountable for it at any point in the future. I don't know about the authors, but I tend to regard my freedom of speech (and freedom of anonymous speech) as integral to my social identity.

In certain contexts, signatures and non-repudiation are generally considered to be 'bad things'. These contexts include OTR and other direct peer-to-peer non-formal interactions. Non-repudiation is necessary in formal interactions (where you need to prove that someone actually committed to or signed something like a contract), but is overkill and perhaps even damaging in non-formal situations.
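The distinction is easy to show in code. A MAC under a shared key is repudiable: either party could have computed the tag, so a transcript proves nothing to a third party. A signature, on the other hand, binds exactly one keyholder. A quick Python sketch (the signature half assumes the third-party 'cryptography' package):

    import hashlib
    import hmac
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    msg = b"I'll pay you back on Friday"

    # Repudiable (OTR-style): a MAC under a key both parties hold.  Either
    # side could have produced this tag, so it convinces no judge -- by design.
    shared = b"key known to both Alice and Bob"
    tag = hmac.new(shared, msg, hashlib.sha256).digest()

    # Non-repudiable: a signature only Alice's private key can make.  Anyone
    # with her public key can later prove she committed to the message.
    alice = Ed25519PrivateKey.generate()
    sig = alice.sign(msg)
    alice.public_key().verify(sig, msg)  # raises InvalidSignature if forged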

"Why wouldn't you want to put your name to everything you write? Do you have something to hide?" Does it matter? I don't want the drama that comes from it. I don't need to be involved in "splash damage" from situations where a friend or colleague's computer is improperly accessed for their logs.

Why would I want to put my name on something that I wrote that might be considered 'unprofessional' where I work, that might cause me to lose my job? (example: I write an erotic story and post it to the Internet, I put my name to it, and suddenly anyone who does a web search on my name finds it. What I do in my own time should be my business, even if I choose to share it with others who enjoy it elsewise -- but if I were to be working at a bank or somesuch, the 'distinct lack of professionalism' would probably get me fired.)

Until our society becomes more accepting of private lives becoming public, I'm not going to sign my name to the stuff I do in my own private life, unless I think it's appropriate.

Thursday, February 21, 2008

 

Social networks are, by their very nature, designed to promote interaction between their members.  Community sites (such as slashdot.org, blogger.com, furaffinity.net, and any other which provides username/password authentication and inter-user communication), as well as more traditional social networking sites (such as facebook, myspace, orkut, and others of their ilk), are designed to facilitate communications between users.

However, as the large number of these sites continues to grow, and the topics become ever more specialized and diverse, requiring users to come back to the site repeatedly is almost too much to ask.  This does 'wonders' for ad revenue (i.e., it drops it down to nigh-useless levels), and such sites quickly drop out of the mainstream if the bandwidth bills get too high.

(This looks almost like a contradiction: if the bandwidth bills are getting that high, the site clearly has visitors.  The cost of providing the site, though, almost can't be met by whatever ad revenue can be eked out when the traffic gets that high.)

One possible solution which hasn't been examined is the possibility of offloading communications costs onto other networks.  This would allow users to keep the systems and clients they already have -- such as AIM®, MSN® Messenger®, Yahoo!® Messenger®, or any Jabber client -- and would increase those messenger networks' traffic, thus leading to higher ad revenue for them.  The downside, of course, is that these systems don't, generally, want to rely on systems outside of their own for authentication, and generally refuse to support any third-party extension or use of their networks.

OTR (Off-The-Record, with protocol details and an implementation available at http://www.cypherpunks.ca/otr/) is such an extension.  However, most of the things that OTR does are based on the idea of not having a Trent (a trusted third party) involved at all -- not having a community manager or identity manager involved.

I think there's a place for identity/community management as well as non-managed identity.

Saturday, February 09, 2008

 

Message(s) I posted to dev-tech-crypto (mozilla)

I'm just going to point out something that a couple of friends
recently pointed out to me. The business models of commercial CAs
involves what is essentially "selling trust".

If you look at the fact that they have no real accountability, no
procedure in place in any of the browsers to revoke their trust as a
matter of policy if they violate their CSPs, and a need to maintain a
positive cash flow, you will quickly see that there are severe
conflicts of interest inside the individual organizations.

(If you don't believe my assertion that there is no means to remove
root certificate trust as a matter of policy, I am still waiting for
action on Thawte's issuing of SSL123 certificates by a root which had
a CSP which stated that no SSL server certificates would be issued
without at least "medium assurance" of identity. This issue was
brought up before I moved to my Mac as my primary machine, so over a
year and a half ago.)

Frankly, this entire discussion is utterly and disgustingly ludicrous
in light of this.

Add to this the fact that there is no legal recourse available for
"relying parties" if the CA somehow fails to live up to its CSP, and
the entire argument falls completely on its face.

You all seem to be frighteningly disconnected from the realities of
the situation if you're still arguing the minutiae of trust models
allowed by CSPs. I lost my faith in the process you're trying to
follow long ago.

-Kyle H

On Feb 9, 2008 8:50 AM, Frank Hecker <hecker@mozillafoundation.org> wrote:
> We also have the problem that the cure (removal of root certs) is often
> seen as worse than the disease (problems with particular CAs), in the
> sense that the actual security threat to users is perceived as not
> justifying provoking user annoyance at having a whole set of SSL sites
> suddenly stop working. So instead of going with the "nuclear option" of
> removing root certs, in practice we've fallen back on the alternative of
> nagging CAs to improve their practices (of which the issue at hand is
> yet another example).

See, that's the problem... there's also a conflict of interest in
Mozilla (and the other browser vendors). They have to maintain market
share, which means ensuring compatibility -- even when the
compatibility flies in the face of one of the reasons why the CA
program exists in the first place (basically, it was started by
Netscape to make it possible for people to have faith in the
identities of the entities they were giving their credit card numbers
to, in order to facilitate electronic commerce).

The end result is that anyone who chooses to spend a hundred thousand
bucks or so on a single audit can then go around selling the benefit
of their inclusion in the trust list to the highest bidder without
fear of repercussion. Which is what they've been doing. And nobody
has the balls to stand up and say "user security is more important
than user convenience". (In addition, roots have been sold to other
companies, which have not passed continuing conformance audits.)

With this kind of a view, it's more of a "you have to have money and
spend money to make money" game than any kind of attempt to adhere to
the principles that actually allow the system to be 'secure'.

Without fear of delisting and decertification, CAs are running
roughshod (not just 'are going to run roughshod', but 'ARE RUNNING
roughshod'), making a farce of the process and the 'trust' in place.
Without a clear view of user security held by a majority of the
Mozilla Foundation board, everything that happens on this list with
respect to CA inclusion requests is as effective as pseudointellectual
masturbation.

Not that my vote counts for anything since I'm not a member of MoFo,
but until these issues are resolved I must vote 'nay' to any
additional inclusion requests under the current guidelines.

-Kyle H

Thursday, February 07, 2008

 

UAC and what sucks about it

Okay, usually I try not to use this forum to rant about any particular technology that's associated with a specific corporation. However, UAC (User Account Control) is a technology that gets this treatment, if only because Microsoft came so close to getting it right -- but fell far too short of the mark to be useful.

First, a little overview of what I understand about it:

UAC creates two user tokens for every logon by an administrator. The user's shell (usually explorer.exe) and all processes created directly by it get the standard, unprivileged, normal user account token. (Even if a piece of malware exploits a hole in the software running under this token, only the user's own files are subject to tampering.) To get access to the administrative token while UAC is enabled, the user is prompted for permission to use it.

Next, UAC also allows for what are called "integrity levels", which are used to roughly classify applications by how much of an attack surface the application wants to expose. Access Control Lists can contain entries that specify the minimum integrity level that a process must have in order to write to a file or registry key, though reading values is always permitted. Integrity levels are implemented as additional attributes (like group memberships) set in the token that the application is run with, and the case of "no explicit integrity level" is treated as "medium". (Protected operating system files are given "high" integrity level requirements, and "high" is only assigned to the administrator tokens that UAC prompts the user for.) Realistically, this means that the only usually-available ILs are "medium" and "low".
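The write check itself is simple enough to model in a few lines. This is a toy sketch of the "no write-up" rule in Python, not Windows code; only the level names and the "unlabeled means medium" default come from the description above:

    # Toy model of the mandatory integrity "no write-up" rule.  Illustrative
    # only -- the real check happens inside the kernel's access checks.
    LEVELS = {"low": 1, "medium": 2, "high": 3, "system": 4}

    def can_write(process_il, object_il=None):
        # An object with no explicit integrity label is treated as "medium".
        return LEVELS[process_il] >= LEVELS[object_il or "medium"]

    def can_read(process_il, object_il=None):
        return True  # per the description above, reads are always permitted

    print(can_write("low", None))       # False: a Low process vs. a normal file
    print(can_write("medium", "high"))  # False: normal token vs. protected OS file
    print(can_write("high", "medium"))  # True: the elevated administrator token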

Internet Explorer running in Protected Mode runs with an Integrity Level of Low.

On 32-bit systems, there is an additional thing that Microsoft does when UAC is enabled, so that it can support badly-written software that could run on older systems: file and registry virtualization. What it does is provide every old program its own view of the registry and the filesystem, such that programs can put their own things where they expect them to be, but only for that software running under that user account. It is implemented as a shadow mount with copy-to-shadow-on-write semantics.
Notably, it is not supported for 64-bit apps or in situations where the UAC user interface is disabled.
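The copy-to-shadow-on-write idea is also easy to sketch. Here's a toy Python model; the layout and class are illustrative, not the actual mechanism (which lives below the filesystem APIs and redirects into a per-user shadow area):

    import os
    import shutil

    class VirtualizedStore:
        # Toy model of file virtualization: reads fall through to the real
        # location; the first write copies the file into a per-user shadow,
        # and everything after that sees the shadow copy.
        def __init__(self, real_root, shadow_root):
            self.real, self.shadow = real_root, shadow_root

        def open(self, rel, mode="r"):
            shadowed = os.path.join(self.shadow, rel)
            if any(c in mode for c in "wa+"):
                # Copy-to-shadow-on-write: materialize a private copy first.
                if not os.path.exists(shadowed):
                    os.makedirs(os.path.dirname(shadowed), exist_ok=True)
                    real = os.path.join(self.real, rel)
                    if os.path.exists(real):
                        shutil.copy2(real, shadowed)
                return open(shadowed, mode)
            # Reads prefer this user's shadow view, else the real file.
            if os.path.exists(shadowed):
                return open(shadowed, mode)
            return open(os.path.join(self.real, rel), mode)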

And now that I've explained what I know, it's time to state what's wrong with this setup, in my view.

First: virtualization. It should be supported in more places, but not for backward-compatibility reasons. As an example of a situation that virtualization could help, we can look at the case of Mozilla Thunderbird and its Eudora-inspired next beta version... When you run the beta, it directly manipulates the settings that are used by the prior version. Since it's a beta, it is prone to bugs in every part. This means that your working settings can be fubar'ed by the beta, causing a need to restore or recreate the settings that worked pre-beta.

Next, UAC relies on the digital signatures on software to ensure that it has not been tampered with. Since we can no longer rely on our software doing only what we think it should, why can't we assign every application running under a given token context its own security identifier? This would allow for explicit management of what things can access what -- in such a way that the OS itself can use its well-honed ACL system to perform the access checks.

For this to work, applications would need a means to request access to other applications' data stores, and possibly a "copy other app's settings", "virtualize access to other app's settings", or "allow direct access to other app's settings" choice that the user can respond to. This would function a lot like the integrity level concept, in that certain processes that manipulate data stores directly could be given an "allow user-integral access by this app to all data stores" SID. (As an additional note, this cannot be granted only to Windows components; it must be allowable for any piece of software at anything above Low integrity level. Since the IL is designed to reduce the attack surface, it should be honored.)

In addition, things running at Low IL should always have virtualization turned on. This would allow Internet Explorer to install ActiveX controls solely for its own use, for example, without requiring that Protected Mode be turned off.

Application datastores should be viewed as OS-level objects -- not as "files" or "keys", but rather as specific settings locations.

Users (yes, even home users) need to be able to positively identify any app running, even if the app itself does not have a digital signature from its developer. This is so that software like City of Heroes can be granted its own configuration space, and so that the OS can grant a security ID to it. This could be accomplished by creating detached signatures, perhaps stored in alternate NTFS data streams.

The base that UAC provides is mind-bogglingly useful. However, its current implementation is not well-described in ways that users (especially home users) can understand, and is so different from what users are used to that they end up just turning it off. As well, the technology has gotten enough of a bad rap that the name is never again going to inspire trust in the people who have turned it off. It poses a very difficult-to-understand change in how the OS works, and even most developers I talk with just disable it because they have work to do... and they don't even have the two weeks to play with it that I have had, just to get even this much of an idea of what it does. Thus, Microsoft should retire the name when they revisit the technology.

Now, let's hope my iPhone doesn't eat this when I post it...
