The third sector is getting serious about disaster recovery
Last month at The Langham, London, we hosted our second annual charity IT roundtable – and the discussions were certainly illuminating.
Compared to last year's event, the shift in attitudes towards cloud services was clear. Twelve months ago the conversation was a general one about how to make the most of cloud services, with a huge mix of experienced cloud users and those who barely had it on their radar. This year, attitudes had definitely matured. Cloud computing was no longer a mere possibility; it was a reality. Nearly every organisation in the room, large and small, was using at least one form of cloud service.
With a greater focus on disaster recovery, the roundtable highlighted that the issues faced by the sector today are much more practical than theoretical. Instead of questioning the "what ifs" of moving to a cloud environment, IT teams are getting down to the nitty-gritty of developing an effective DR plan, testing it, and defining responsibility throughout the organisation.
Security
Security remains a concern, but is far less of a prohibitive factor now than 12 months ago. When it comes to decision making, understanding has matured enough over the last year that organisations can more confidently define what they consider a secure enough environment for their data. High-profile data snooping cases have raised awareness of how important the location of both primary and secondary data centres is, and the room was unanimous in its decision to store data only in UK-based data centres.
Whilst protection of sensitive data remains important within the sector, a lack of demand from customers, as well as prohibitively high costs, means that accreditations like IL3 or ISO 27001 are requirements for suppliers but sit very low on most organisations' agendas to achieve internally.
Testing
Testing seems to be the biggest cause of sleepless nights for charity IT professionals today. While most have a DR plan in place, a worryingly low number have actually tested that it works. Consequently, current confidence in DR plans is questionable.
The lack of testing generally boils down to two factors – fear of the impact it could have, and a lack of cooperation from senior business executives.
There was a distinct worry that if you failed over your live data to your test environment at the weekend and something went wrong, you'd have a big issue, and an angry team, to face on Monday morning.
Senior business managers share these fears. We heard a lot of discussion about IT teams wanting to test, but finding that no agreement can be reached when they try to negotiate a suitable time with managers. "You can test, but just not on any day ending in 'y'" was a phrase that seemed to ring true with many.
Responsibility
Defining responsibility was another item on this year's to-do list for many. The realisation that an IT disaster recovery plan is a subset of a wider business continuity plan has meant that the role of the IT department has changed, and many are unsure where their responsibility begins and ends.
There was a feeling that, in terms of disaster recovery, the IT team's role is now a more advisory one. It is their job to understand the data, the processes, and the consequences – but then to pass these findings on to the directors to make the final decision.
Read about all the key issues, and the best practice used to overcome them, in our Disaster Recovery for Charities 2014: Key Findings.