Tuesday, 24 March 2009

Database State report - FAIL

Let me start by stating that, in general, I'm a pretty even-tempered chap. It usually takes a lot to make me grumpy, excepting those days when I'm hungover or suffering from a lack of sleep. Today I am neither hungover nor tired; I am, however, more than grumpy. I'm positively angry. The reason? The Rowntree report entitled Database State – available from

http://www.cl.cam.ac.uk/~rja14/Papers/database-state.pdf

I've rarely seen such an unbalanced piece of FUD – and I've been working in IT security for over a decade! I don't doubt that the intentions of the authors are noble, but would it have been too outrageous to ask them to leave their personal agendas behind and take a more mature approach to the subject? (The subject, by the way, being the legality of and justifications underlying a number of UK Government databases. I'll stick up my hand and admit an interest: I've been working in HMG IT security since 2001 and have been employed by a major supplier to HMG since 2002. I must also stress that the opinions in this blog are my own – this is my blog, not theirs :-)).

Wherever a pejorative could be used in the report, it is. Wherever a picture could be painted grey, it's painted as the darkest shade of black. Examples of interpretive liberties include:

“In Scotland, where the SCR project has been completed, there has already been an abuse case in which celebrities had their records accessed by a doctor who is now facing charges.”

I'm sorry, but how is this a negative for the system? The guy got caught. That suggests to me that the system is working. What's the alternative – prevent all doctors from accessing data without explicit consent? It may just be me, but if I were taken to hospital unconscious I would much rather have my records available and accessed than have those providing my care debate whether my privacy was more important! The sensible compromise is to provide access to those who need it (subject to role-based access control) and to audit (and discipline) any violations of the acceptable use policies – there's a rough sketch of that model below. Shockingly enough, that's what's happening. Besides which, it's not as if privacy violations don't happen when the data is held locally – I could link to a number of stories where local health trusts have inappropriately accessed the records of celebrities held locally or displayed other poor practice, such as this story today:

http://www.theregister.co.uk/2009/03/24/hospital_data_breach_notice/
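
To make that point concrete, here's a minimal sketch of the "access now, audit later" model I have in mind. Everything in it – the role names, the in-memory record store, the audit list – is invented for illustration and bears no relation to how SCR or any NHS system is actually built.

```python
# Illustrative only: clinical roles get access when they need it,
# but every access is recorded for later review and discipline.
import datetime

RECORDS = {"patient-123": "allergies: penicillin"}   # stand-in data store
AUDIT_LOG = []                                        # stand-in audit service
CLINICAL_ROLES = {"doctor", "nurse", "paramedic"}

def read_record(user, role, patient_id, reason):
    """Allow clinical staff to read a record, but log every access."""
    if role not in CLINICAL_ROLES:
        raise PermissionError(f"role '{role}' may not read patient records")
    AUDIT_LOG.append({
        "when": datetime.datetime.utcnow().isoformat(),
        "who": user,
        "role": role,
        "record": patient_id,
        "reason": reason,
    })
    return RECORDS[patient_id]

def accesses_to(patient_id):
    """Later review: pull every access to a given record for investigation."""
    return [entry for entry in AUDIT_LOG if entry["record"] == patient_id]

read_record("dr_smith", "doctor", "patient-123", "A&E admission")
print(accesses_to("patient-123"))
```

The point isn't the code, it's the shape: nobody is locked out in an emergency, and the doctor in the Scottish case was only caught because exactly this kind of trail existed.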

Another example of biased picture painting - the following quote from the Deloitte report into the ContactPoint database is used as an indication of bad security:

“It should be noted that risk can only be managed, not eliminated, and therefore there will always be a risk of data security incidents occurring.”

That's more of a statement of the bleeding obvious than a criticism of data sharing. Given the calibre of the authors I'm sure they could have done better than this.

Another tendency of the report that I find objectionable is its use of baseless assertions such as:

“For these reasons, the use of SUS in research without an effective opt-out contravenes the European Convention on Human Rights and European data-protection law. It is also considered morally unacceptable by millions of UK citizens.”

Really? I'm surprised the report was ever finished if they've been off polling everyone in the country for their moral perceptions of government IT. Oh. They didn't? And then there's this statement referring to the Police National Database:

“Soft intelligence includes opinion, hearsay, tips from informants and even malicious accusations; letting such things leak from the world of intelligence into that of routine police operations is dangerous, and some intelligence officers think it a mistake.”

Hmmm... I wonder if that 'some' is 10% of intelligence officers? 20%? 90%? 3? That bloke down the pub next to New Scotland Yard? This kind of comment is fine in conversation but surely not in a report that's supposed to be taken seriously.

What is lacking in this report is any discussion of the background to the creation of the databases it criticises. For example, the ContactPoint database was initiated following the tragic death of Victoria Climbié. The Police National Database was initiated following the Bichard Inquiry into the deaths of the Soham schoolgirls. Lack of information sharing was a factor (not a cause!) in the deaths of these children. What price privacy versus personal safety? I don't have the answer, but it would be a good debate to have rather than the pantomime we currently see between HMG and privacy campaigners.

I find some of the recommendations to be naive. In particular, Recommendation 4:

“By default, sensitive personal information must be kept on local systems and shared only with the subject’s consent or for a specific lawful purpose. Central systems must be simple and minimal, and should hold sensitive data only when both proportionate and necessary.”

Have the authors actually seen the local systems in places like NHS surgeries and trusts, or within the police service? If so, are they really comfortable that our data is more secure in such systems than in centrally managed databases? The use of a distributed, federated information-sharing model is often suggested as an alternative, but this is the worst of both worlds – almost unfettered access to information in dribs and drabs, controlled by manual procedure, with no central ability to monitor misuse. (Apologies, I seem to have slipped into overgeneralisation and hyperbole – it must be contagious.) Sigh...

Now, please don't get the idea that I'm an avid supporter of all HMG databases and information sharing schemes. I'm not. There are two in particular that I'm really not convinced have any justifiable business case or overall positive effect for the citizen. What I do believe in is informed debate. Unfortunately, any debate on the security of HMG systems is never going to be fully informed – the security requirements for the most sensitive systems will be protectively marked and therefore (rightly) will not be made available to those who do not have a need to know. Commenting on the security of systems when you don't have access to the facts verges on foolish and leads to mistakes such as referring to a “SECRET” level of clearance in the recommendations when there is no such clearance level. Pedantic, I know, but it's a display of basic ignorance of HMG security mechanisms, and that is worrying.

What can we do? Have debate, but have sensible debate. Perhaps if we start by banning the use of overly emotive terms such as “database state” or “big brother” on one hand, and the overuse of “part of the fight against terrorism” as a justification for intrusion into the lives of citizens on the other, we might get to a common position where information can be shared where necessary to protect life and safety whilst maintaining an acceptable degree of privacy. But where's the fun and headlines in that?

Saturday, 14 March 2009

Just like buses...

No posts for a week and then two in one day...

Thought I'd post some more cloudy musings:

i) It's not all new – we've been doing computing on shared resources since forever. I remember working at one of the high street banks that was running its production and development environments on the same MVS mainframe.


ii) What is new can be new in subtle and interesting ways, examples:

  • the hypervisor; like it or not, the hypervisor is a single point of failure for security controls
  • network security – you'll find that some of your firewalls and IDS are a little useless when all of the comms take place within a single piece of hardware (see the sketch after this list; caveat: some software firewalls are supported in virtual environments, but I'm guessing there are still a few niggles to be ironed out, and you can get IDS that operate inside the hypervisor – simplification, but check out http://www.catbird.com/)
  • the potential hypervisor problems mean that your threats have just multiplied – you now need to worry about the threats facing all the systems processed within the same virtualised infrastructure, and how can you do that if you don't know who's sharing the kit?
  • incident management – what happens when a client has an incident on shared hardware? How do you limit the exposure to co-located services?
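
Here's the sketch promised above on the network-security point – a toy model (no real hypervisor or IDS API, every name is invented) of why a monitor sitting on the physical LAN never sees the traffic between two VMs on the same host:

```python
# Toy model of the virtual-switch blind spot: frames between co-resident
# VMs are forwarded in memory and never reach the wire where the IDS sits.
class PhysicalIDS:
    def __init__(self):
        self.inspected = []

    def inspect(self, frame):
        self.inspected.append(frame)

class VirtualSwitch:
    def __init__(self, ids):
        self.local_vms = set()
        self.ids = ids

    def attach(self, vm):
        self.local_vms.add(vm)

    def send(self, src, dst, payload):
        frame = {"src": src, "dst": dst, "payload": payload}
        if dst in self.local_vms:
            return "delivered in-host (IDS never saw it)"
        self.ids.inspect(frame)          # only off-host traffic hits the wire
        return "delivered via the physical network"

ids = PhysicalIDS()
vswitch = VirtualSwitch(ids)
vswitch.attach("vm1")
vswitch.attach("vm2")
print(vswitch.send("vm1", "vm2", "attack traffic"))    # invisible to the IDS
print(vswitch.send("vm1", "external-db", "query"))     # visible to the IDS
print(len(ids.inspected), "frame(s) inspected")
```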

iii) Private and closed community clouds are good – let's not just dismiss them as an edge case

iv) Cloud computing is going to drive Jericho-style deperimeterisation at an increased pace; move the protection closer to the data (a quick sketch of what I mean follows below)
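
One way to read "move the protection closer to the data": encrypt on the client before the bytes ever reach the provider, so the perimeter travels with the data itself. A minimal sketch using the Python cryptography library – the upload function is a made-up placeholder, not any real provider's API:

```python
# Protect the data itself rather than the network perimeter: the provider
# only ever holds ciphertext; the key never leaves the data owner.
from cryptography.fernet import Fernet

def upload_to_cloud(name, blob):
    # placeholder for an object-store PUT (S3, Azure Blob, etc.)
    print(f"uploading {len(blob)} opaque bytes as {name}")

key = Fernet.generate_key()          # kept by the data owner, never uploaded
cipher = Fernet(key)

record = b"customer 42: credit limit 10,000"
token = cipher.encrypt(record)       # cloud side sees only ciphertext
upload_to_cloud("records/42", token)

# Back on the client side, possession of the key is what grants access:
assert cipher.decrypt(token) == record
```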

v) Compliance is still going to be a pig. But then, what's new?

vi) Organisations need to be honest with themselves about their current physical and technical security controls before scoping out what they expect from a cloud provider – clouds should not necessarily have to be better than the existing controls, simply acceptable from a cost/risk perspective (rough arithmetic below)
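
By "cost/risk" I mean nothing fancier than the back-of-the-envelope comparison below – annualised loss expectancy plus running cost for each option. Every figure is invented purely to show the shape of the sum:

```python
# ALE = expected incidents per year x cost per incident. Compare the total
# (running cost + expected loss) of the current estate against a cloud move.
def ale(incidents_per_year, cost_per_incident):
    return incidents_per_year * cost_per_incident

options = {
    "in-house": {"run_cost": 250_000, "risk": ale(0.5, 400_000)},
    "cloud":    {"run_cost": 120_000, "risk": ale(0.7, 400_000)},
}

for name, o in options.items():
    total = o["run_cost"] + o["risk"]
    print(f"{name}: running {o['run_cost']:,} + expected loss {o['risk']:,.0f} = {total:,.0f}")
```

If the cloud option comes out acceptable against your own honest baseline, insisting that the provider must be better than controls you don't actually have today is just posturing.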

vii) Oldie but goodie – organisations need to decide what they want to do (with whom and with what data) before deciding that cloud is the answer

viii) It's probably the most interesting security problem out there at the moment from policy and technology perspectives.

So that's an unconference....

I attended the CloudCamp event in London last Thursday night. Here are my thoughts:

i) Between 600 and 700 attendees. I think those kinds of numbers show that it's not really correct to view cloud as fringe or up-and-coming – it's here and it's real. Not everyone was there just for the free beer and pizza ;-)

ii) It was not simply vendors pitching to vendors. The Enterprise Cloud discussion track after the lightning talks clearly included attendees from large organisations either already doing cloud or in the process of considering it. One example was an investment bank that runs its Monte Carlo simulations in the cloud (see the toy example below).
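
For anyone who hasn't met that workload: a Monte Carlo pricing run is about as cloud-friendly as computing gets, because every trial is independent – you can scatter slices across as many cheap instances as you like and just average the results. A toy (and entirely made-up) example of the shape of the job:

```python
# Toy Monte Carlo estimate of a European call option price. Each "instance"
# prices an independent slice of trials; the results are simply averaged.
import math
import random

def price_call(spot, strike, rate, vol, years, trials, seed):
    rng = random.Random(seed)
    payoff_sum = 0.0
    for _ in range(trials):
        z = rng.gauss(0.0, 1.0)
        terminal = spot * math.exp((rate - 0.5 * vol ** 2) * years
                                   + vol * math.sqrt(years) * z)
        payoff_sum += max(terminal - strike, 0.0)
    return math.exp(-rate * years) * payoff_sum / trials

# Pretend each seed is a separate cloud instance working in parallel.
partials = [price_call(100, 105, 0.03, 0.2, 1.0, 50_000, seed) for seed in range(4)]
print("estimated price:", sum(partials) / len(partials))
```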

iii) Nice thing about the event – vendor pitches are banned. Some of the lightning talks came perilously close, but the lack of blatant pitches in the discussion tracks made for a better quality of discussion.

iv) Some interesting topics covered in the cloud talks around federation, particularly regarding http://www.arjuna.com/agility and http://bitbucket.org/dotcloud/dotcloud/wiki/Home (the latter being academic and open sourcey at present but interesting nonetheless).

v) The fate of Coghead - http://www.coghead.com/ - vividly demonstrates the dangers of SaaS vendor lock-in. If you're going to do cloud, you're probably better off going lower down the stack to IaaS, where there is less lock-in. (It should be easier to migrate your Linux VM plus hosted apps between multiple clouds than to move your Force.com or GoogleApps proprietary assets!)

vi) It's not just vendor lock-in to worry about – you also need to consider data lock-in. What happens when you have so much data in the cloud that you can't get it back out again? For example, you may have insufficient local storage or insufficient bandwidth to extract the data in the required timeframe (some rough numbers below). Interesting problem – possibly an argument for distributing storage amongst different clouds so that you don't amass too much in one place, but this causes other issues. This is the kind of problem that makes this cloud stuff so much fun!
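
To show how quickly the bandwidth side of that bites, here's the rough arithmetic – the data volumes and the 100 Mb/s usable line rate are invented for illustration:

```python
# How long would it take to pull your data back out of the cloud over the wire?
def days_to_extract(data_tb, usable_mbps):
    bits = data_tb * 8 * 1e12               # terabytes -> bits
    seconds = bits / (usable_mbps * 1e6)    # at the usable line rate
    return seconds / 86_400

for tb in (10, 100, 500):
    print(f"{tb} TB over a 100 Mb/s link: ~{days_to_extract(tb, 100):.0f} days")
```

Ten terabytes is already the thick end of a fortnight; by the time you're into hundreds of terabytes you're in ship-the-disks territory, and that assumes the provider will cooperate.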

Friday, 6 March 2009

CloudCamp!

I came across this post

http://www.doxpara.com/?p=1274

over at Dan Kaminsky's blog earlier this week. It links to an excellent set of slides that Kaminsky gave at CloudCamp in Seattle. It's really encouraging to see guys like Kaminsky getting excited by Cloud Computing – it would be easy for the 'name' security researchers to give the Cloud concept a good kicking (it's an easy target), but Kaminsky (unsurprisingly) shows a good understanding of the pros and cons of Cloud and comes down firmly on the side of Cloud being a positive way ahead for IT service delivery. I'm hoping there will be some equally good presentations at the upcoming CloudCamp event here in London on the 12th March.

Feel free to get in touch if anyone out there wants to meet up for a beer or two at the event!