Nov 28

I know, I’m a terrible blogger – normally the recent Cretaceous boundary events in both my personal and professional life would have led to an outpouring of activity, but in this particular case it hasn’t. Even so, I have to jot down some thoughts about ‘P.I.A.’…

Let’s start with the fact that I hate the term – it’s really not an “impact assessment” at all, at least not in the way a “critical event analysis” is: conducted post facto, in the ‘let’s find out what went wrong’ sense, or even the slightly more proactive ‘now that we’ve done this, how could we have done it better?’ type of analysis one might commission if things went well. Rather, it ought to be a component part of the process. But if you google that right now you’ll find little other than guidance and opinion – certainly no commonly adopted processes or standards. It’s hugely immature.

I’m struck by the similarity between ‘privacy’ and the evolution of ‘security’ and, more recently, ‘identity’ in that respect – we now talk about “assurance” in relation to those concepts, so why are we not using the equivalent nomenclature here? Possibly because only a few academics and lawyers are truly interested. “Privacy assurance” (at the time of writing, don’t expect much from a search) is surely the better term. It’s part of the process of developing systems that process personal data. Any personal data, whatsoever. At least it ought to be. We should be considering privacy at every step of our designs and implementations. An example…

Five or so years ago on a project far, far, away we had a technical design discussion that went something like this:

Q: Yes, but do we assume the government owners of this system to be both good and competent?

A: No, therefore we must put in place mechanisms that will make it as difficult as possible for a corruptible entity to abuse its potential power whilst still saving it from its own innate inability to be effective…

I doubt very much that such design considerations were widespread. But they should have been. At the time we called this “security by design” or “security-led design”, depending on whom you talked to; now it’s “privacy by design”. That’s ‘A Good Thing’ in my view – at least we can discuss privacy issues in broad daylight in a way that means something to senior stakeholders. That said, security is still, in my mind, the overarching concept here – because security and privacy only really begin to trade off when identity (or identifiers, or identity data) is introduced.

Simple example: the mechanical lock and key that gain me entry to my house do not depend on my identity – there is no implicit or explicit semantic assumption that I, and only I (or my delegated identities), can enter that building. Anyone with the appropriate physical key – whether genuine or forged – can. And that’s the point – it’s not dependent on my identity. Therefore this scenario does not require a P.I.A., as privacy, in terms of identity, is irrelevant. However, an entry system that does depend on identifiers or identity ought to require one.

So, ‘conducting’ a P.I.A. – which probably means ‘getting in consultants to review an implementation’ – is not what my definition of ‘privacy assurance’ is about. Privacy assurance ought to be a fundamental, integrated part of the process of designing and assuring solutions, not a methodology or a discrete task in that process…

Jul 13

Admittedly 5 hours behind the curve on this one, but I just randomly stumbled on this on Google Trends. Looks like someone’s got a new googlehack on the go, in the form of the clever upside-down ǝlƃooƃ noʎ ʞɔnɟ. What is interesting is that this term doesn’t seem to have existed prior to today, so to get it into the top search spot in just a few hours is significant. A deliberate mass Google search? That would need an impressive botnet to pull off the millions of hits required, though (or a viral network of lots of people with time on their hands). Perhaps a Google Trends vulnerability, then? Hmm. Either way, having hit the top spot, I’d now expect blogs to punt the search frequency back up after this initial spike. Which is perhaps the real hack…

Today’s googlehack

Jul 12

SanDisk TrustedSignins

So, back in 2006 SanDisk, RSA and Verisign together released something called TrustedSignins for U3. Instead of a dedicated token, a multipurpose U3 USB stick would do the same job – indeed it could securely manage multiple tokens – and it would also be a usefully encrypted USB drive. Ubiquitously and cheaply available at retail outlets everywhere, the advantages seem obvious. So when Verisign offered a free two-factor VIP for their OpenID PIP, I popped into town and bought a Cruzer, since any SanDisk U3 drive will do the trick, it seems:

Verisign supports SanDisk U3

Activation is simple: just plug in the Cruzer, open the U3 LaunchPad and click on TrustedSignins:

Activate Verisign VIP on U3

Except the U3 LaunchPad doesn’t have a TrustedSignins option. I check that the Cruzer has the latest software installed, and spend a good couple of hours searching and finally emailing Verisign, SanDisk and U3 support. Now, the SanDisk doco says, “A benefit of TrustedSignins over dedicated tokens is that your company does not need to bear the expense of stocking and supplying them to your customers. After an employee or customer buys a standard SanDisk device at any of the 185,000 retail locations, it is registered with their account at your company. As an incentive, your company can even offer a rebate.”

But when I bought the Cruzer I just picked one off the shelf; I was neither asked to register nor offered a rebate. And it doesn’t work. So what is going on here? It turns out there are two types of SanDisk U3 – retail and OEM – and only the OEM version can be programmed with the TrustedSignins utility. Also, the OEM version is not available from retail outlets. This is certainly not what either Verisign or SanDisk is claiming, though, is it? Why has SanDisk not made TrustedSignins available on all its U3 devices? Why does Verisign not make it clear that only a very select few SanDisk U3 drives are actually compatible with their VIP? Am I really the only person in the last two years to try and activate a Verisign VIP on a SanDisk U3?

Jul 09

That time of year, then, for the little-known European data protection awards, otherwise known by the snappy ‘Prize to Data Protection Best Practices in European Public Services (fifth edition)‘. Scorchio! Last year the ICO supported the nomination of my current project, which was pretty well received. It’s awarded by Madrid’s regional data protection agency – not a European-level body by any means – yet the awards have become the de facto European honour in terms of data protection. Partly, I suspect, because they have avoided becoming a nepotistic, bureaucratic back-slapping exercise.

Aug 30

Writing Secure Code, Second Edition

Some years ago I worked for a software house with over 30 developers, of whom only one other had read the first edition of this book. I don’t think that was uncommon. Few developers cared about application security in general terms, their encounters with security being an inconvenience that either ‘broke’ code or (often post-exploit) resulted in ‘extra work’ bug-fixing. I use the past tense, but I’ve really no evidence to suggest that things have changed all that much. Hopefully the wider distribution and publicity granted this second edition will help change that.

The book is organised into four major sections.

The first provides background material that outlines the need to secure systems and techniques for designing secure systems. It is carefully written, appropriately illustrated and has only two very small code examples (one of which is pseudo-code, the other a couple of lines of ASP), making it good for photocopying and distribution to project managers…

The second and third sections provide the bulk of the book – secure coding techniques. As you’d expect, buffer overruns, ACLs, least privilege, crypto, canonicalisation mistakes, SQL injection, cross-site scripting and DoS attacks, to name a few, are all covered, and there are chapters on internationalisation, sockets, RPC, and one – surprisingly small – on .NET. I say surprisingly because a good part of the marketing for this book was that it had been updated to cover .NET, which it has – but not to the extent you’d think. If you’re looking for an in-depth analysis of .NET security, this work doesn’t have it. But it doesn’t need it – if there is one single message in the second and third sections, it is that there is no replacement for responsible, informed programming, regardless of the syntax or technology used. The chapter entitled ‘All Input Is Evil’ makes that point well; like the others, it applies whether you use .NET or not.
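By way of illustration (my own sketch, not an example from the book), the ‘All Input Is Evil’ discipline in .NET terms: treat user input as data, never as part of the query text. The table and column names here are made up.

```csharp
using System.Data.SqlClient;

public static class OrderQueries
{
    // Illustrative only: a parameterised query keeps user input out of the
    // SQL text entirely, closing the classic injection hole.
    // Assumes 'connection' is already open; table/column names are hypothetical.
    public static int CountOrders(SqlConnection connection, string customerId)
    {
        // Vulnerable alternative (don't do this):
        //   "SELECT COUNT(*) FROM Orders WHERE CustomerId = '" + customerId + "'"
        using (SqlCommand command = new SqlCommand(
            "SELECT COUNT(*) FROM Orders WHERE CustomerId = @customerId", connection))
        {
            command.Parameters.AddWithValue("@customerId", customerId);
            return (int)command.ExecuteScalar();
        }
    }
}
```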

The final section covers ‘everything else’ – testing, code reviews, installation, error messages, a good – but brief – chapter on privacy and data security, and an excellent chapter on general good practices.

Part of what made the first edition a classic, to my mind, is that it addressed the security fundamentals *every* programmer on a Microsoft platform should be aware of. After reading it I was in no doubt about the importance of application security, the core principles, threats and coding countermeasures, and I went on to apply those in subsequent projects. This edition builds on, updates and expands the first and is, simply, required reading. Unlike many sequels, it does not disappoint.

Jul 25

Finally got round to updating all client-side ‘mailto’ links to a server-side C# SMTP implementation. Though pretty basic, it works well enough, despite my ISP‘s constant tinkering with the web server’s config.
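For the curious, the core of it is little more than System.Net.Mail – something along these lines (a minimal sketch; the recipient address and relay host are placeholders rather than my actual configuration):

```csharp
using System.Net.Mail;

// Minimal sketch of the server-side handler that replaced the mailto links.
// "localhost" and the recipient address are placeholders; in practice the
// relay host comes from the ISP's web server configuration.
public static class ContactMailer
{
    public static void Send(string visitorAddress, string subject, string body)
    {
        MailMessage message = new MailMessage(
            visitorAddress,           // from: the address the visitor supplied
            "webmaster@example.com",  // to: placeholder recipient
            subject,
            body);

        SmtpClient client = new SmtpClient("localhost");
        client.Send(message);
    }
}
```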

Jul 11

Improving Web Application Security – Threats and Countermeasures is from Microsoft’s ‘patterns & practices’ group. Though much of the guidance is general best practice, specific guidance is given for .NET web apps. This really is an excellent paper, complete with appendices of resource links, checklists and how-tos – appropriate for what is a very practically focused document, not the idealised theorising it could so easily have been, and which certain other papers in the series indulge in. Simply put, it is an essential read.

Jul 01

Hijacking .Net Vol 1: Role Based Security

I had to read this – touted as the first volume in a series that could be for .NET what Appleman’s books were for the Win32 API. A fair bit of the book is just a guided tour of Windows role-based security, though it is well written. The core of the ‘hijacking’ part could be boiled down to a couple of pages. Essentially it’s this:

Marking a class or method as private in .NET affects its visibility, but not its security boundary – i.e. it is possible to invoke private methods. And VS.NET provides all the means necessary to do so:

Step One – navigate to the library/class you want with ildasm and have a peek at the IL. From that it’s pretty straightforward to grok the private objects/methods you might be interested in.

Step Two – use the InvokeMember method of the Type class to call the private method.
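A minimal sketch of step two (my own toy class, not one from the book) – Type.InvokeMember with BindingFlags.NonPublic is all it takes, assuming the code runs with sufficient reflection permission (e.g. full trust):

```csharp
using System;
using System.Reflection;

// Stand-in for a class in someone else's assembly whose private members
// you've spotted with ildasm.
public class Widget
{
    private string Secret(string caller)
    {
        return "hello, " + caller;
    }
}

public class Program
{
    public static void Main()
    {
        Widget widget = new Widget();

        // BindingFlags.NonPublic reaches past the 'private' modifier;
        // visibility is not a security boundary.
        object result = typeof(Widget).InvokeMember(
            "Secret",
            BindingFlags.InvokeMethod | BindingFlags.Instance | BindingFlags.NonPublic,
            null,                             // default binder
            widget,                           // instance to invoke on
            new object[] { "reflection" });   // arguments to the private method

        Console.WriteLine(result); // prints: hello, reflection
    }
}
```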

That’s it. Classic Win32 API Appleman this is not, and how useful the technique is I’m not sure (not in commercial work), but it’s still worth a read.
