Nov 28

I know, I’m a terrible blogger – normally the recent Cretaceous-boundary events in both my personal and professional life would have led to an outpouring of activity, but in this particular case they haven’t. Even so, I have to jot down some thoughts about ‘P.I.A.’…

Let’s start with the fact that I hate the term – it’s really not an “impact assessment” at all, at least not in the way we have “critical event analysis”, which occurs post facto, in the ‘let’s find out what went wrong’ sense; or even the slightly more proactive ‘now we’ve done this, how could we have done it better?’ type of analysis one might commission if things went well. Rather, it ought to be a component part of the process. But if you google that right now you’ll find little other than guidance and opinion – certainly no commonly adopted processes or standards. It’s hugely immature.

I’m struck by the similarity of ‘privacy’ with the evolution of ‘security’ and, more recently, ‘identity’ in that respect – we now talk about “assurance” in relation to those concepts, so why are we not using the equivalent nomenclature? Possibly because only a few academics and lawyers are truly interested? “Privacy assurance” (at the time of writing, don’t expect much) is surely the better term? It’s part of the process of developing systems that process personal data. Any personal data, whatsoever. At least it ought to be. We should be considering privacy at every step of our designs and implementations. An example…

Five or so years ago, on a project far, far away, we had a technical design discussion that went something like this:

Q: Yes, but do we assume the government owners of this system to be both good and competent?

A: No, therefore we must put in place mechanisms that make it as difficult as possible for a corruptible entity to abuse its potential power, whilst still protecting it from its innate inability to be effective…

I doubt very much that such design considerations were widespread. But they should have been. At the time we called this “security by design” or “security-led design”, depending on whom you talked to; now it’s “privacy by design”. That’s ‘A Good Thing’ in my view – at least we can discuss privacy issues in broad daylight in a way that means something to senior stakeholders. That said, security is still, in my mind, the overarching concept here – because security and privacy only really begin to trade off when identity (or identifiers, or identity data) is introduced.

A simple example: the mechanical lock and key that gain me entry to my house do not depend on my identity – there is no implicit or explicit semantic assumption that I – and only I (or my delegated identities) – can enter that building. Anyone with the appropriate physical key – whether genuine or forged – can. And that’s the point – it’s not dependent on my identity. Therefore this scenario does not require a P.I.A., as privacy, in terms of identity, is irrelevant. However, an entry system that does depend on identifiers or identity ought to require one.

So, ‘conducting’ a P.I.A. – which probably means ‘getting in consultants to review an implementation’ – is not what my definition of ‘privacy assurance’ is about. Privacy assurance ought to be a fundamental, integrated part of the process of designing and assuring solutions, not a methodology or a discrete task within that process…

Jul 12

SanDisk TrustedSignins

So, back in 2006 SanDisk, RSA and VeriSign together released something called TrustedSignins for U3. Instead of a dedicated token, a multipurpose U3 USB stick would do the same job – indeed, it could securely manage multiple tokens – and it’d also be a usefully encrypted USB drive. Ubiquitously and cheaply available at retail outlets everywhere, the advantages seemed obvious. So when VeriSign offered a free two-factor VIP for their OpenID PIP, I popped into town and bought a Cruzer, since any SanDisk U3 drive will do the trick, it seems:

VeriSign supports SanDisk U3

Activation is simple: just plug in the Cruzer, open the U3 LaunchPad and click on TrustedSignins:

Activate VeriSign VIP on U3

Except the U3 LaunchPad doesn’t have a TrustedSignins option. I checked the Cruzer had the latest software installed, and spent a good couple of hours searching and finally emailing VeriSign, SanDisk and U3 support. Now, the SanDisk doco says, “A benefit of TrustedSignins over dedicated tokens is that your company does not need to bear the expense of stocking and supplying them to your customers. After an employee or customer buys a standard SanDisk device at any of the 185,000 retail locations, it is registered with their account at your company. As an incentive, your company can even offer a rebate.”

But when I bought the Cruzer, I just picked one off the shelf; I was neither asked to register nor offered a rebate. And it doesn’t work. So what is going on here? It turns out there are two types of SanDisk U3 – retail and OEM – and only the OEM version can be programmed with the TrustedSignins utility. Also, the OEM version is not available from retail outlets. This is certainly not what either VeriSign or SanDisk is claiming, though, is it? Why has SanDisk not made TrustedSignins available on all its U3 devices? Why does VeriSign not make it clear that only a very select few SanDisk U3 drives are actually compatible with their VIP? Am I really the only person in the last two years to try and activate a VeriSign VIP on a SanDisk U3?

Jul 09

That time of year, then, for the little-known European data protection awards, otherwise known by the snappy ‘Prize to Data Protection Best Practices in European Public Services (fifth edition)‘. Scorchio! Last year the ICO supported the nomination of my current project, which was pretty well received. It’s awarded by Madrid’s regional data protection agency – not a European-level body by any means – yet the awards have become the de facto European honour in terms of data protection; partly, I suspect, because it has avoided becoming a nepotistic, bureaucratic back-slapping exercise.

Jul 21

I finally made the site P3P-compliant, and it was a bit of a hassle:

I created an HTML privacy page, handcrafted the required XML policy reference and policy files, and added the link element referencing them to all my pages. It would validate okay, but IE persisted in blocking some files and issuing a privacy report warning.
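
For context, the moving parts look roughly like this – a minimal sketch, where /w3c/p3p.xml is the spec’s ‘well-known location’ and the policy file name and policy ID are mine, purely illustrative:

    <!-- /w3c/p3p.xml - the policy reference file -->
    <META xmlns="http://www.w3.org/2002/01/P3Pv1">
      <POLICY-REFERENCES>
        <POLICY-REF about="/w3c/policy.xml#mypolicy">
          <INCLUDE>/*</INCLUDE>
        </POLICY-REF>
      </POLICY-REFERENCES>
    </META>

    <!-- and the link element added to the head of each page -->
    <link rel="P3Pv1" href="/w3c/p3p.xml">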

The only thing I hadn’t done was to implement the ‘compact’ HTTP header, which is optional in terms of P3P, so I supposed IE must be looking for that. But then, I thought, that couldn’t be right – hosted static sites, of which there are more than a few, couldn’t possibly generate that header (without access to the web server’s admin), and if IE was basing its checks on that, then…

Well, it wouldn’t be the first time a browser vendor ‘interpreted’ a standard for its own commercial ends; and the MSDN doco says, “Internet Explorer 6 uses these compact policies to filter cookies based on a user’s privacy preferences“.
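
For what it’s worth, the compact policy is nothing more than an extra response header. With Apache’s mod_headers, for example, it’d be something like this (the CP tokens here are illustrative, not a policy recommendation):

    # httpd.conf / .htaccess - emit a P3P compact policy on every response
    Header set P3P 'CP="CAO PSA OUR"'

which produces:

    P3P: CP="CAO PSA OUR"

– and therein lies the problem: on a hosted static site you typically can’t set it.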

Hmmm. Without either a header, a link element, or the ‘well-known location’ – /w3c/p3p.xml – (all of which are optional in the spec), there’s no way for a user agent to determine the presence or absence of a P3P policy. And that’s one of the problems I have with the spec – for it to be successfully implemented, specific requirements surely have to be placed on user agents, and here it falls down. For the most part the P3P spec is only five things: an XML locator file, the XML policy file, a ‘well-known location’ for the previous two, an HTTP header extension (the so-called ‘compact policy’), and an XHTML extension (the link element). The MSDN link above suggests that the first four are required to stop IE blocking and issuing a privacy report warning.

However, after some experimentation setting cookies both with and without the header, it turned out I hadn’t added the optional tag in my policy reference file, and that’s what IE was really looking for. If only the doco had been clearer, and right…
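
For illustration – assuming the element in question was the optional COOKIE-INCLUDE (my best guess, given it was cookies being blocked; treat it as an assumption) – the policy reference file ends up looking like this:

    <!-- /w3c/p3p.xml - with the optional cookie element added -->
    <META xmlns="http://www.w3.org/2002/01/P3Pv1">
      <POLICY-REFERENCES>
        <POLICY-REF about="/w3c/policy.xml#mypolicy">
          <INCLUDE>/*</INCLUDE>
          <COOKIE-INCLUDE name="*" value="*" domain="*" path="*"/>
        </POLICY-REF>
      </POLICY-REFERENCES>
    </META>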
