Speakers:
Steve Kent, BBN
Simon Lam, University of Texas
Dan Schutzer, Citibank
Scribes: Darren Hardy and Duane Wessels
The goal of this session was to assess issues related to the deployment of security technologies on the Internet and the role of security middleware in supporting that deployment. In particular, it was suggested that security middleware is especially important because it allows the infrastructure to be put in place independently of high-level policy issues, and because it allows security to be deployed more quickly than the alternative of adding functionality directly to individual software products.
Steve Kent started the discussion by posing several questions:
The first question was the basis for much discussion throughout the rest of the session, with many different answers posed by the workshop participants.
Simon Lam was the second speaker and discussed his observations on security services from his work on Authentication and Authorization Services at the University of Texas at Austin. His principal observation is that many core security services are repeated in each application, and that these services should be factored out and provided as middleware. Such services include:
By providing these services separately, as middleware, one can offload work from the end servers, avoid repeatedly re-implementing common functionality, separate authentication from authorization (allowing for anonymous authorization), and provide a uniform interface for accounting and auditing.
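The sketch below (Python, with every name and interface invented purely for illustration) shows one way such a factoring might look: authentication hands back attributes rather than an identity, authorization decides from those attributes alone, and accounting/auditing is a single shared interface.

    # Illustrative sketch only: the services and interfaces are hypothetical,
    # meant to show authentication, authorization, and auditing factored out
    # of individual applications into shared middleware.

    class AuthenticationService:
        """Checks a secret and returns a set of attributes (group names),
        deliberately without the principal's identity, so that downstream
        authorization can be anonymous."""

        def __init__(self, secrets, groups):
            self._secrets = secrets      # principal -> secret
            self._groups = groups        # principal -> set of group names

        def authenticate(self, principal, secret):
            if self._secrets.get(principal) != secret:
                raise PermissionError("authentication failed")
            return frozenset(self._groups.get(principal, ()))

    class AuthorizationService:
        """Grants or denies an operation based only on the attributes presented."""

        def __init__(self, acl):
            self._acl = acl              # (object, operation) -> groups allowed

        def authorize(self, attributes, obj, op):
            return bool(attributes & self._acl.get((obj, op), set()))

    class AuditService:
        """A single accounting/auditing interface shared by every application."""

        def record(self, attributes, obj, op, granted):
            print(f"AUDIT groups={sorted(attributes)} obj={obj} op={op} granted={granted}")

    # How an end server might call the factored-out services:
    authn = AuthenticationService({"alice": "s3cret"}, {"alice": {"staff"}})
    authz = AuthorizationService({("payroll.db", "read"): {"staff"}})
    audit = AuditService()

    attrs = authn.authenticate("alice", "s3cret")      # identity stays behind
    ok = authz.authorize(attrs, "payroll.db", "read")  # anonymous authorization
    audit.record(attrs, "payroll.db", "read", ok)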
Dan Schutzer concluded the presentation phase of the session by laying out some of the security concerns from the financial community, and discussing their needs for security middleware. Among the concerns were the need for:
Before opening the floor to discussion from the other participants, Cliff Neuman reiterated the need to understand precisely what security middleware is. Is it an API? Where does it fit in the protocol stack? Security cannot be provided at a single layer in the stack because an attacker can mount attacks at many layers. Instead, security is a separate stack, and links are needed from many points in the application/network protocol stack to the security stack. The application also needs some understanding of security in order to provide fine-grained access controls, since the definition of the object to be protected occurs at these higher layers.
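As a rough illustration of this point (all names are hypothetical and the rules are toy rules), the sketch below shows a single security stack that both a lower layer and the application layer link into; only the application can name the object it wants protected, so only its check can be fine-grained.

    # Hypothetical sketch: one shared security stack, linked from several
    # points in the protocol stack, each supplying whatever context it has.

    class SecurityStack:
        def check_transport(self, peer_address):
            # Coarse-grained link: the transport layer knows only endpoints.
            return not peer_address.startswith("10.0.0.")   # toy rule

        def check_application(self, user, obj, op):
            # Fine-grained link: only the application knows what the object is.
            acl = {("invoice-42", "read"): {"alice"}}        # toy ACL
            return user in acl.get((obj, op), set())

    security = SecurityStack()
    print(security.check_transport("192.0.2.7"))                     # from the transport layer
    print(security.check_application("alice", "invoice-42", "read")) # from the application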
Discussion from the participants followed. Dave Clark defined middleware as being above the network and as more than just moving bits: the network transports bits, and although middleware sits below the applications, it knows more about semantics than moving bits alone. Christian Huitema suggested that middleware might be defined as functionality that has already been deployed and does not need to be re-invented; for example, the Web will not do its own DNS, but it will do its own security. Joe Touch suggested that middleware should have state and process, and asked what about security is an ongoing process that must be maintained: if you are given a library, are you done? He was also concerned that middleware should not become too large a catch-all, as happened with AI. One participant asked whether middleware is simply standards.
Simon Spero asked whether SHTTP and MOSS constitute security middleware. Email and the Web have security support, but it is tied to a specific application rather than offered as a service that other applications can use. Steve Kent responded that it is better to have generic, reusable code and services; but is the Web so popular that it deserves special-purpose, very good security of its own? Storing certificates for different purposes is an example of a common service. The growth of the Web is a case for moving forward on security, but where do we draw the line between generic middleware and custom solutions (like those for the WWW)? Karen Sollins asked whether we can really build a security architecture that provides building blocks which will not need to be redone as applications evolve.
Barry Leiner mentioned that we have been talking about the need for common security services for some time, not independent solutions for things like the Web or mail. But where are they? What is taking so long? There seem to have been few results.
Karen Sollins suggested that the discussion so far was missing an important component: there is no good way of talking about policy domains. The dilemma is that organizations want to choose their own policies, yet interoperability is a real issue. Clifford Neuman agreed that scalability is a critical issue and that achieving it across administrative domains is very hard. In fact, it is this issue, in particular the lack of agreement across organizations on policies, that has hindered the deployment of common security services. If we had universal agreement on policies, we would have an easier time establishing connected certification hierarchies.
The policy issue was discussed at length. Dan Schutzer mentioned that in financial services there is great concern about the misuse of credentials, and about what it means precisely for a user's key to be certified by a bank or other financial service provider. Banks do not want to be liable when a certificate they issue is used for some other purpose. It was noted that while the USPS would like to issue public key certificates that are used widely, other organizations would prefer to issue certificates used only within a narrow scope.
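A small, purely hypothetical sketch of such a scope restriction: the issuing bank records the purposes it is willing to stand behind, and a relying party refuses any other use.

    # All fields and values below are invented for illustration.
    bank_issued_cert = {
        "subject": "alice@example.com",
        "issuer": "Example Bank",
        "permitted_uses": {"retail-payment"},   # the narrow scope the issuer certifies
    }

    def acceptable_for(cert, intended_use):
        """Reject any use the issuing organization did not agree to certify."""
        return intended_use in cert["permitted_uses"]

    print(acceptable_for(bank_issued_cert, "retail-payment"))  # True
    print(acceptable_for(bank_issued_cert, "sign-contract"))   # False: outside the issuer's scope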
Bob Kahn suggested that gaining agreement on policies is often not practical, since we will have different security regimes in different countries. We need to ask what we can do in middleware so that different security regimes can interoperate. Simon Spero noted that nothing on a technical level can change the law, though he suggested (jokingly) that perhaps the Internet could go on strike to influence policy.
In general, the solution to the policy problem has been known for some time and has become almost synonymous with the word "policy" when describing implementations: as Dave Clark pointed out, we must separate policy from mechanism. IP encryption is a mechanism; whether one uses it for a particular application is policy. Steve Kent noted that if we take this approach, we need to make sure that people understand what is happening on their systems. We need a "VCR-plus" for security, to allow one to set the policy; this is a very complicated issue.
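A minimal sketch of the split, assuming a toy cipher as the mechanism and a user-editable table as the policy (both invented for illustration; nothing here is a real protocol):

    def encrypt(payload, key=0x5A):                    # mechanism: the ability to encrypt
        # Toy stand-in for a real cipher; only the structure matters here.
        return bytes(b ^ key for b in payload)

    POLICY = {                                         # policy: set by the user/administrator
        "payroll-app": {"encrypt": True},
        "public-web": {"encrypt": False},
    }

    def send(application, payload):
        """The middleware consults the policy; the application never hard-codes it."""
        if POLICY.get(application, {}).get("encrypt", False):
            payload = encrypt(payload)
        return payload                                 # handed on to the network

    print(send("payroll-app", b"salary data"))   # encrypted, per policy
    print(send("public-web", b"press release"))  # sent in the clear, per policy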
There was a discussion of the security threats against which protection must be provided, and the threat of denial of service came up, including the question of whether middleware could help solve such problems. Steve Kent mentioned that denial of service is a great, unsolved security problem. In defining security middleware to prevent other kinds of attacks, possibly by breaking it into separate components (as Simon Lam suggested), we need to be careful not to create more opportunities to deny service.
The effect of caching on security, and of security on caching, was discussed. Larry Masinter noted that as soon as you add security for certain data, you lose the ability to share caches. Cliff Neuman suggested that one may not have control over caching; instead, the caching middleware needs to handle these cases, so we must get different pieces of middleware (security, caching) to work together. Simon Spero mentioned that one can cache the encrypted form of data, as is done in http-ng and pilot proxies. It was noted that data can be encrypted in advance and then handled by the server (and caches) as opaque tokens.
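A short sketch of the opaque-token idea (the names and the toy cipher are illustrative only): the origin encrypts an object once, and a shared cache stores and serves the ciphertext without ever holding a key.

    cache = {}   # url -> opaque ciphertext; the cache never sees plaintext or keys

    def origin_encrypt(data, key):
        # Toy stand-in for real encryption done in advance at the origin.
        return bytes(b ^ key for b in data)

    def fetch(url, origin_object, key):
        """Return ciphertext from the shared cache, filling it on a miss."""
        if url not in cache:
            cache[url] = origin_encrypt(origin_object, key)  # encrypted in advance
        return cache[url]                                    # opaque to the cache

    ciphertext = fetch("https://example.com/report", b"quarterly numbers", 0x21)
    print(origin_encrypt(ciphertext, 0x21))                  # a client holding the key decrypts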
Simon Lam raised the issue of how certificates would be carried: smart cards, files, PCs? Dan Schutzer responded (jokingly), everywhere else but smart cards... The general consensus was that certificates would be stored in directory services (a question separate from where the private keys are stored). Unfortunately, as Simon Spero pointed out, there is presently no global personal directory.
It was felt by many that such a directory, or more precisely some online authority, was needed to deal with issues such as revocation (CRLs). Dan Schutzer pointed out that by using short lifetimes on certificates, one can get around some of the issues pertaining to universal accessibility of an authority that provides the current certificate.
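A brief sketch of the short-lifetime approach, with invented fields: the relying party checks only the validity interval, so no always-reachable authority or CRL lookup is needed within the (deliberately short) window.

    import time

    SHORT_LIFETIME = 60 * 60          # one hour, chosen here only for illustration

    def issue(subject):
        now = time.time()
        return {"subject": subject, "not_before": now, "not_after": now + SHORT_LIFETIME}

    def still_valid(cert):
        """No revocation lookup: a compromised key simply stops being re-issued and ages out."""
        now = time.time()
        return cert["not_before"] <= now <= cert["not_after"]

    cert = issue("alice@example.com")
    print(still_valid(cert))          # True within the hour, False afterwards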
A final, inevitable, topic of discussion was key escrow. Larry Masinter asked to what extent any necessity for key escrow could be exploited: none of the networking work seems to use key escrow, so can key escrow be exploited positively, for example to make algorithms more efficient? No middleware work has addressed the mechanics of key escrow. Dan Schutzer noted the positive aspects of key escrow (usually meaning corporate key escrow, as opposed to government key escrow): if a key is lost, hardware breaks, or an employee leaves the company, it is often important to be able to recover lost keys.
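A purely illustrative sketch of the corporate case: a copy of each employee's key is sealed under a corporate recovery key so that it can be restored if the original is lost. The toy sealing function stands in for real encryption.

    escrow = {}                                   # employee -> sealed copy of key

    def seal(key_bytes, recovery_key):
        # Toy stand-in for encryption under the corporate recovery key.
        return bytes(b ^ recovery_key for b in key_bytes)

    def deposit(employee, key_bytes, recovery_key):
        escrow[employee] = seal(key_bytes, recovery_key)

    def recover(employee, recovery_key):
        """Used when hardware fails, a key is lost, or an employee departs."""
        return seal(escrow[employee], recovery_key)   # the toy seal is its own inverse

    deposit("alice", b"alice-signing-key", 0x3C)
    print(recover("alice", 0x3C))                 # b'alice-signing-key'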