Three steps to improving IT resiliency and security
By Martin Poirier
"Protecting data at one physical location is no longer possible. You need to protect it everywhere it resides."
IT and security professionals often equate IT resilience with disaster recovery. However, in today’s business environment, simply having a disaster recovery plan isn’t enough. People expect data and online services to be available at any time. Even a relatively small service disruption can damage a company’s reputation.
True IT resilience requires security teams and managers to take a more proactive approach to preventing incidents before they cause a disruption. Upgrading IT infrastructure and practicing effective patch management are great first steps. But the real key to resilience is planning and preparation. Below are three critical steps to creating a secure IT infrastructure and minimizing the possibility of downtime.
Assess the impact of each workload and system in your organization
A Business Impact Assessment is an ideal place to start. It identifies potential exposures across your IT assets, including hardware, software and services, and establishes where you need to take additional steps.
Next, identify your most critical business functions and set an acceptable recovery point objective (RPO), the maximum period of data loss you can tolerate from an outage, and recovery time objective (RTO), the time within which a function must be restored after an outage, for each one. For instance, losing a payroll system for a few days likely won’t have the same impact as a payment processing system going down for a few hours.
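The per-function targets described above can be sketched in code. This is a minimal illustration only; the function names and minute values are invented for the example, not taken from the article.

```python
# Hypothetical sketch: record RPO/RTO targets per business function and
# check whether an outage stayed within them. All names and numbers are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class BusinessFunction:
    name: str
    rpo_minutes: int  # max tolerable data loss, measured in time before the outage
    rto_minutes: int  # max tolerable time to restore the function

def meets_targets(fn: BusinessFunction,
                  observed_data_loss_min: int,
                  observed_recovery_min: int) -> bool:
    """True if an outage stayed within the function's RPO and RTO."""
    return (observed_data_loss_min <= fn.rpo_minutes
            and observed_recovery_min <= fn.rto_minutes)

# Echoing the article's example: payroll tolerates days, payments only hours.
payments = BusinessFunction("payment processing", rpo_minutes=5, rto_minutes=60)
payroll = BusinessFunction("payroll", rpo_minutes=24 * 60, rto_minutes=3 * 24 * 60)
```

The point of writing targets down this way is that they become testable: after any incident, you can verify per function whether the agreed objectives were actually met.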
Once you’ve identified your risks and acceptable downtime, you can set up service level agreements for each of your business processes and put solutions in place to make sure you can meet these requirements. You need to balance the cost of making a business function more IT resilient against the risk involved in having that function disrupted. Then you can match appropriate technology to your business and budget objectives.
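One common way to frame that cost-versus-risk balance is to compare the expected annual loss from downtime with the annual cost of mitigating it. The sketch below is a simplified illustration with invented figures, not a method prescribed by the article.

```python
# Hypothetical cost/risk balance: mitigation is worthwhile when the expected
# annual loss from downtime exceeds the annual cost of the mitigation.
# All figures used here are invented examples.

def expected_annual_loss(outages_per_year: float,
                         hours_per_outage: float,
                         cost_per_hour: float) -> float:
    """Expected yearly downtime cost for one business function."""
    return outages_per_year * hours_per_outage * cost_per_hour

def worth_mitigating(annual_mitigation_cost: float,
                     outages_per_year: float,
                     hours_per_outage: float,
                     cost_per_hour: float) -> bool:
    """Crude decision rule: mitigate only if expected loss exceeds cost."""
    loss = expected_annual_loss(outages_per_year, hours_per_outage, cost_per_hour)
    return loss > annual_mitigation_cost
```

A function expected to suffer two four-hour outages a year at $10,000 per hour carries an $80,000 expected annual loss, so a $50,000 resilience investment clears the bar while a $100,000 one does not.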
There may not necessarily be one solution that will meet all your needs, but there are several commonly used to boost IT resilience. One example is Managed Disaster Recovery as a Service (DRaaS), which replicates data in the cloud, so you can switch to your backup systems quickly if you experience an incident with your production environment.
Implement a holistic security approach
Security is key to minimizing downtime across all your workloads. If your data isn’t secure, you’re at risk of a distributed denial of service attack, a data breach or a ransomware attack, all of which will disrupt your operations. In the not-too-distant past, companies were able to protect sensitive information with a combination of firewalls, anti-malware software and intrusion detection solutions because the information resided almost exclusively in a central site. If you locked down your centralized servers, you could protect your data.
In today’s business world, though, data lives everywhere — in the public cloud, on private clouds and on the devices of remote employees working from home. Protecting data at one physical location is no longer possible. You need to protect it everywhere it resides. That’s why identity management has become key to IT resilience. You need to be able to identify every user and every device on your network and define what information each user is able to access in order to be truly secure. But you still also need centralized solutions that track where your data resides and moves to, especially if you need to meet compliance requirements.
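The identity-first model described above can be sketched as a deny-by-default access check: every device must be known, and every user is mapped to the data classes they may reach. The user names, device IDs and data classes below are illustrative assumptions, not part of any real product.

```python
# Minimal sketch of identity-based access control: identify every user and
# device, and define what each user may access. Deny by default.
# All identifiers here are invented for illustration.
KNOWN_DEVICES = {"laptop-042", "phone-108"}

USER_PERMISSIONS = {
    "alice": {"payroll", "hr"},
    "bob": {"public"},
}

def can_access(user: str, device_id: str, data_class: str) -> bool:
    """Allow access only from a known device, for a permitted data class."""
    if device_id not in KNOWN_DEVICES:
        return False  # unrecognized devices get nothing
    return data_class in USER_PERMISSIONS.get(user, set())
```

Because the check keys on the user and device rather than on a network location, it applies the same way whether the data sits in a public cloud, a private cloud or on a remote employee’s laptop.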
In some ways, the public cloud has made IT simpler. If you use a public cloud service, you can expand or reduce your IT services fairly easily, while leveraging built-in security. But the public cloud alone doesn’t make your IT environment truly resilient.
Put the right workloads in the right places
The public cloud has emerged as an ideal environment for many workloads. There are no up-front capital costs, you can add capacity easily, there’s built-in security and it’s relatively simple to manage. However, it’s not suited to all applications. For example, database applications, where there are large volumes of reads and writes taking place, are often expensive to run in the public cloud and might be better suited to a private cloud infrastructure where costs can be controlled.
Private clouds are also ideal for workloads where compliance or data residency is an issue. You control directly where your data is stored, where it moves to and how it’s secured, ensuring you can meet any regulatory requirements.
Older applications that can’t be easily updated to run in the cloud may need to remain on-premises. The advantage of on-premises infrastructure is that you own and control all the hardware, software and security. On the other hand, you’re also responsible for maintaining, managing and upgrading everything, which can be time-consuming.
Once you understand the requirements for each of your workloads and set your objectives in terms of RPO and RTO, you can place each workload where it belongs, whether that’s in a public cloud, a private cloud or on-premises.
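The placement criteria above reduce to a rough rule of thumb, sketched below. The three flags are a simplification I’ve introduced for illustration; a real assessment would weigh many more factors, including the RPO and RTO targets set earlier.

```python
# Hypothetical rule-of-thumb workload placement, following the criteria in
# the article: legacy apps stay on-premises; compliance-sensitive or
# write-heavy workloads favour a private cloud; the rest suit a public cloud.

def place_workload(legacy: bool,
                   residency_sensitive: bool,
                   write_heavy: bool) -> str:
    """Return a suggested environment for one workload."""
    if legacy:
        # Apps that can't easily be updated to run in the cloud
        return "on-premises"
    if residency_sensitive or write_heavy:
        # Direct control over data location, or cheaper heavy I/O
        return "private cloud"
    return "public cloud"
```

For example, an old ERP system that cannot be re-platformed lands on-premises, a database with heavy read/write volumes lands in a private cloud, and an elastic web front end lands in the public cloud.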
To ensure IT resiliency you need to employ a combination of careful planning and assessment, workload management and security that covers multiple IT environments down to the user level. Only then will you have an IT infrastructure that maximizes uptime and contributes to achieving your organization’s business objectives.
Martin Poirier is a Cloud Solutions Architect at Aptum, a global hybrid, multi-cloud managed service provider based in Toronto.