Nov 28

I know, I’m a terrible blogger – normally the recent Cretaceous boundary events in both my personal and professional life would have led to an outpouring of activity, but in this particular case it hasn’t. Even so, I have to jot down some thoughts about ‘P.I.A.’…

Let’s start with the fact that I hate the term – it’s really not an “impact assessment” at all – at least not in the way we have “critical event analysis”, which occurs post facto, in the ‘let’s find out what went wrong’ sense; or even the slightly more proactive ‘now that we’ve done this, how could we have done it better?’ type of analysis one might commission when things went well. Rather, it ought to be a component part of the process. But if you google that right now you’ll find little other than guidance and opinion – certainly no commonly adopted processes or standards. It’s hugely immature.

I’m struck by the similarity of ‘privacy’ with the evolution of ‘security’ and, more recently, ‘identity’ in that respect – we now talk about “assurance” in relation to those concepts, so why are we not using the equivalent nomenclature? Possibly because only a few academics and lawyers are truly interested. “Privacy assurance” (at the time of writing, don’t expect to find much) is surely the better term? It’s part of the process of developing systems that process personal data. Any personal data, whatsoever. At least it ought to be. We should be considering privacy at every step of our designs and implementations. An example…

Five or so years ago on a project far, far, away we had a technical design discussion that went something like this:

Q: Yes, but do we assume the government owners of this system to be both good and competent?

A: No, therefore we must put in place mechanisms that make it as difficult as possible for a corruptible entity to abuse its potential power, whilst also protecting the system from that entity’s innate inability to be effective…

I doubt very much that such design considerations were widespread. But they should have been. At the time we called this “security by design” or “security-led design”, depending on whom you talked to; now it’s “privacy by design”. That’s ‘A Good Thing’ in my view – at least we can discuss privacy issues in broad daylight in a way that means something to senior stakeholders. That said, security is still, in my mind, the overarching concept here – because security and privacy only really begin to trade off when identity (or identifiers, or identity data) is introduced.

Simple example: a mechanical lock and key granting entry to my house does not depend on my identity – there is no implicit or explicit semantic assumption that I, and only I (or my delegated identities), can enter that building. Anyone with the appropriate physical key – whether genuine or forged – can. And that’s the point – it’s not dependent on my identity. Therefore this scenario does not require a P.I.A., as privacy, in terms of identity, is irrelevant. However, an entry system that does depend on identifiers or identity ought to.
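To make that distinction concrete, here’s a rough sketch in Python (all the names are mine and purely illustrative, not any particular product or standard): the mechanical lock checks only possession of the key and holds no personal data, whereas the identity-bound system has to store identifiers and an access log, which is precisely where privacy assurance earns its keep.

```python
import hmac
from dataclasses import dataclass, field


@dataclass
class MechanicalLock:
    """Possession-based: anyone holding the right key gets in. No personal data involved."""
    key_cut: str  # the physical key's cut pattern, tied to no person

    def open(self, presented_key: str) -> bool:
        # The lock neither knows nor cares who is holding the key.
        return hmac.compare_digest(self.key_cut, presented_key)


@dataclass
class IdentityBoundEntry:
    """Identity-based: the system stores identifiers and an access log,
    so it processes personal data and warrants privacy assurance."""
    authorised_ids: set[str] = field(default_factory=set)
    access_log: list[str] = field(default_factory=list)  # personal data accrues here

    def open(self, claimed_identity: str) -> bool:
        granted = claimed_identity in self.authorised_ids
        # Logging ties an identifier to a time and place: exactly the kind of
        # design decision a privacy assurance process should review.
        self.access_log.append(f"{claimed_identity}: {'granted' if granted else 'denied'}")
        return granted


if __name__ == "__main__":
    lock = MechanicalLock(key_cut="3-5-2-7")
    print(lock.open("3-5-2-7"))   # True: possession alone is sufficient

    door = IdentityBoundEntry(authorised_ids={"alice"})
    print(door.open("alice"))     # True, but identity data has now been processed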

So, ‘conducting’ a P.I.A. – which probably means ‘getting in consultants to review an implementation’ – is not what my definition of ‘privacy assurance’ is about. Privacy assurance ought to be a fundamental, integrated part of the process of designing and assuring solutions, not a methodology or a discrete task in that process…
