There is no way to sugarcoat it. The coronavirus pandemic is the most disruptive global crisis in decades. It will change every industry, every business, every employee and most of all, every human being on this planet.
So, what can I, a CEO of a technology company, say in a blog about this?
My natural leaning is, of course, to talk about technology. In many ways, it is gluing the world together right now. It is helping a lot of people continue their work, keeping the wheels of business turning.
But really, it’s about much, much more than that. So, I’d like to talk about the thing that is at the heart of all technology: People.
Helping teachers teach, and students learn
The most significant effect of the pandemic that we’ve seen on the people we serve relates to education. We help some 4,000 schools and education establishments with their technology – and right now, chief among all the other support that this community needs and deserves is technology that helps teachers to continue teaching.
It is no small undertaking to meet that most critical of challenges.
After all, our youngsters need their developing brains stimulated and nurtured. They need routine. And even with the best will in the world, their parents and families cannot do this alone.
At the same time, our teachers want desperately to teach. They want to give their students as much stability and continuity with their education as possible.
I know this, because not only am I a parent myself but because AdEPT has worked with the education community for a long time. And so, I’m proud and honoured to say we’re playing our part. It’s where our own people have come to the fore.
For example, working with organisations such as the London Grid for Learning (LGfL) and Virgin Media Business, we’ve been able to massively strengthen Freedom2Roam. This service allows school staff to remotely connect to school servers from their own devices and locations. From there, staff can access essential files and information – such as lesson planning documents, marking assessments and management reports.
In the wake of the pandemic and school closures, we’ve seen such a huge demand for the Freedom2Roam service that we have made it a top priority, putting the best brains from our back-end infrastructure and our front-end UX and UI teams onto this service.
We’ve expedited our normal, ongoing work; boosted its capacity to meet the 1,047 per cent increase in demand we’ve seen; and introduced a browser-based interface to make the service easier and quicker to use. Because – perhaps more than ever – no teacher or education professional wants to spend time downloading, installing and figuring out new software.
Of course, Freedom2Roam is only one tool to help – and it’s no substitute for face-to-face classroom time – but it is helping teachers get on with their job. One of them recently described it as a ‘godsend’. It is a real privilege to hear such praise.
I should also say a big thank you to our staff for working with the experts at LGfL to help develop guidance for schools around safeguarding. Through this work, we’ve contributed to official government guidance available here, under the ‘Children and online safety away from school and college’ heading.
Helping community healthcare communicate
Another area of work we’ve been doing in response to the pandemic pertains to public healthcare. I wish I could say here how we have somehow swapped our engineers’ day jobs for making testing kits, personal protective equipment for our fantastic NHS, or ventilators for those suffering from coronavirus.
I can’t say this. We are not specialists in any of those things. But we do specialise in helping public health organisations use technology to communicate. It is a less obvious and less pivotal aspect of the response to the pandemic, but still an important one.
One example of this is a recent project by our Wakefield team, who work with a local GP practice. Like all primary care organisations right now, the practice needed to tackle a seemingly impossible threefold challenge: respond to a surge in calls from concerned patients, maintain everyday community healthcare, and at the same time protect staff from exposure to coronavirus.
Among our considerations was the sense that if primary care organisations like this cannot continue working, then there would be even more pressure on our NHS. So, for this practice, our Wakefield team set up a cloud-hosted soft phone system meaning staff could use their own mobile phones to answer practice calls while working from home.
Through this phone system, patients still dial the same number and get the service they are familiar with – a reassuring kind of continuity that is especially important right now. From the practice’s viewpoint, calls are recorded in the usual way, the setup adheres to NHS technology and data protection rules – and most importantly, staff can protect their own health and in turn keep community healthcare running.
Again, I am immensely proud of our team for helping this practice, because they have played their part in protecting the welfare of health professionals and, ultimately, the public.
Adapting to increasing and changing demand
Away from public sector organisations, we’ve seen an enormous increase in demand from commercial businesses and some fundamental changes in the nature of those demands. One indicator of this is the 85 per cent increase in calls to our general helpdesk.
One way we’re responding is to use our own remote access and diagnostic technology to resolve queries. But such tools are the tip of the iceberg: in truth, the real difference to our clients is our people. They have genuinely shone – working longer hours and doing things that are over and above their day jobs.
For example, we’ve moved staff who would ordinarily be working in sales – or visiting sites to install equipment – into helpdesk roles. Not only does this reflect our culture of rolling up sleeves and getting stuck in, but it is also a real testament to having a workforce with breadth and depth of technical knowledge.
We’ve seen clients requesting products and services for temporary periods. Under normal circumstances, we’d work to long-term contracts, but this is not the time for red tape. For instance, a customer asked for extra phone lines for a short period and we’ve pulled together to solve this unique challenge.
Another sign of the times is the rise we’ve seen in orders of laptops. And here’s where I must thank our suppliers – it’s because of them that we’ve been able to honour every order. And I must thank our customers too – particularly the one who requested toilet paper, paracetamol and a few G&Ts with his laptop order. We very much value this humanity and humour during this difficult time.
There are other, additional steps we are taking in light of the pandemic.
At the risk of being pests, we’re overcommunicating with our clients. In many ways, because we help organisations in technology, we get to see those organisations’ inner workings. We’re seeing the challenges and the repercussions of the pandemic first hand, every day. So, that means when we reassure our clients and say ‘we understand, we’re in your corner’ and ‘we’re available to help’, we’re saying it because we genuinely empathise.
When it comes to our staff, we keep in mind that, as technology specialists, we’re classified by the government as key workers – rather like the fourth utility. So, we’re not going to do anything at all that compromises the health and safety of our workforce.
Of course, we’re doing all of this with the incredible help of our partners. These are businesses and organisations like the LGfL and Virgin Media Business, which are facing and meeting demands on them from left, right and centre. There’s Gamma, whose staff are doing a lot of fancy footwork to increase voice capacity for our clients. And there’s Avaya, which is doing brilliant work to support our clients in remote-access technology.
There is little I can say to mitigate the challenges we’re facing now and will continue to face. Right now, it’s all hands on deck and we’re busy – and in some ways, working from home is a novelty. But there may be a point where loneliness kicks in. I say that from experience as a regular home worker. So, among my responsibilities is keeping company morale buoyant.
There are a million articles out there about best practices for working from home. So, I’ll only offer a few tips.
Be flexible and adaptable. Be prepared to get involved in activities that are generally not part of your job role. Of course, those tasks should not be an unreasonable diversion from your usual work, but adopting a can-do attitude helps your own self-preservation and the spirit of your colleagues.
Overcommunicate. As mentioned above, we’re already doing this with clients, but it’s equally important to do that with colleagues. Calling or messaging a teammate to share a joke might not feel as spontaneous or natural as banter across office desks, but it matters. It’s ok to laugh among all of this.
Maintain the regular cadence of business. I’m still having my regular Monday review meeting. And my Friday sales meeting. And I’m still meeting investors. Even if all those meetings are virtual and I’m getting tired of seeing my head on the screen.
Thank your teams. You really can’t thank colleagues enough at this time. I hope I’ve highlighted the fantastic work of my colleagues in this blog, but in case it isn’t clear: thank you, from the bottom of my heart.
Most of all, take the government instructions seriously and follow them to the letter. At the heart of all of this is our collective responsibility to save people’s lives. There is no other responsibility to take more seriously. After all, it’s people that matter before everything else.
- Phil Race is the CEO of AdEPT Technology Group. You can connect with him on LinkedIn here.
The topic of remote working has never been as relevant as it is today – no thanks to the coronavirus.
Despite this frightening global phenomenon, remote working is ever more popular amongst businesses and their employees. This is partly due to the major shift towards businesses adopting Cloud services and other off-site infrastructure solutions, which move work beyond the office and encourage and support a distributed workforce.
This is reinforced by the vast improvements in web conferencing and collaboration technology, making it easier to communicate using voice, video and share content, from anywhere with a solid internet connection.
Many studies have shown that flexible remote working results in greater productivity and quality of work, stronger engagement and loyalty, and reduced absenteeism. It also helps employees manage their work/life balance.
However, businesses are at various stages of their remote working strategy; some don’t even have one yet, whilst others are fully committed to it and have already enabled their workforce with the necessary tools to implement remote working.
No matter where you are on your journey, here are a few pointers to consider.
They will help everyone from businesses and organisations at the early stages of a remote working strategy to those already benefitting from established remote working practices.
Connectivity
No remote working solution will work effectively if the connectivity foundations are not adequate and security measures are not fit for purpose.
Providing access to IT applications and resources remotely starts with connectivity to the Internet and the corporate network, mapping out how employees will securely interact to access what they need.
There is also the question of employees having reliable and fast internet access from their remote location. If staff are in areas that are not yet on the UK fibre network, you will find their experience of working remotely significantly diminished, having a direct impact on their productivity and morale.
It is wise to survey your staff in order to quantify how many are able to work remotely, as and when the need arises. For key staff or those in rural areas, you may wish to invest in new or upgraded Internet access from their remote location, or look into mobile Internet access, to ensure they are online.
The key to this is bandwidth. Quantifying how much bandwidth remote workers will need to replicate their in-office productivity is vital. This of course varies across sectors and industries depending on the nature of the data and how often it needs to be synced to the corporate network.
For media, design and production businesses, this requirement is high due to the volume of high resolution images and video that are pushed and pulled across the network. This can also have a significant impact on conferencing and voice services, if they are also delivered over the same connection.
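To make the bandwidth-planning point above concrete, here is a minimal sketch of how you might estimate a remote worker’s connection requirement. The per-activity figures are hypothetical planning numbers, not measurements – substitute your own application profiles.

```python
# Rough, illustrative bandwidth estimate for a remote worker's connection.
# The per-activity figures below are hypothetical planning numbers, not
# measurements -- substitute profiles measured for your own applications.

APP_PROFILE_MBPS = {
    "video_call": 2.5,   # HD conferencing, combined up/down
    "voip_call": 0.1,    # a single voice stream
    "file_sync": 1.0,    # background sync to the corporate network
    "web_email": 0.5,    # browsing, email, SaaS apps
}

def required_mbps(activities, headroom=1.3):
    """Sum the peak concurrent activities and add ~30% headroom."""
    base = sum(APP_PROFILE_MBPS[a] for a in activities)
    return round(base * headroom, 2)

# A designer on a video call while syncing large assets:
print(required_mbps(["video_call", "file_sync", "web_email"]))  # -> 5.2
```

Even a rough model like this makes it easy to compare a worker’s estimated need against the actual speed available at their remote location.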
Unified Communications and Collaboration Tools
Reliable real-time collaboration tools, like Microsoft Teams, go a long way towards alleviating the bandwidth issue by reducing the frequent uploading and downloading of large files across the network. Teamwork is an essential part of any successful business, so supporting it regardless of location is a key part of any remote working strategy. This is where today’s modern workforce collaboration tools play a significant part in keeping workers connected – whether that is Slack, Microsoft Teams, Workplace by Facebook, unified communications and instant messaging solutions from Avaya, or web conferencing and meeting tools like GoToMeeting and Zoom.
The collaboration tools you choose will have a direct impact on your remote workforce and their performance, so it is wise to minimise the number of tools you ask your staff to learn. Where possible, take advantage of tools that are already available from, or can be bolted on by, your existing providers – this encourages user adoption and makes tool management easier.
Security
Delivering remote access to the corporate network, data and applications in a secure manner is critical. This needs to be deployed both at the user end and on the network perimeter.
For users, this can be done by creating an encrypted network connection from their device to the corporate network, using Virtual Private Network (VPN) software. VPN technology is reliable and proven to work if installed, configured and maintained correctly. If not, it can have a detrimental effect on device performance and upload/download speeds.
There are a number of VPN solutions available that deliver an additional level of security and safety for your remote workers, so it is important to discuss this openly with your IT partner, to ensure you apply the right product for your business needs.
Two-factor authentication (2FA) should also be considered for users, adding a second check to the login procedure. In addition to a username/ID and password, 2FA is now commonly used to verify that only designated users are allowed access. Again, your IT partner can recommend a 2FA service that is fit for purpose – the likes of Microsoft now include this service within specific Office 365 licences.
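To give a flavour of what happens behind the scenes when an authenticator app produces a six-digit code, here is a minimal sketch of the standard TOTP algorithm (RFC 6238), which many 2FA services are built on. The secret below is the published RFC test value, not a real credential.

```python
# Minimal sketch of a time-based one-time password (TOTP, RFC 6238).
# Real 2FA services add provisioning, rate limiting and clock-drift
# handling on top of this core calculation.
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Derive a time-based one-time password (HMAC-SHA1 variant)."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                      # 30-second time window
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The RFC 6238 test secret; at T=59 seconds this yields the documented value.
secret = b"12345678901234567890"
print(totp(secret, for_time=59))  # -> 287082
```

Because the code is derived from a shared secret plus the current time window, both the server and the user’s device can compute it independently – which is why a stolen password alone is not enough to log in.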
For the network, a fit for purpose firewall solution is a must. Again, they vary in size, spend and complexity. A firewall system should be designed to prevent unauthorised access to or from the corporate (or private) network, whether in hardware or software form, or a combination of both.
It begins and ends with your people, the most prized asset of any business or organisation.
Remote working is more than choosing the right technology; it is a cultural shift for many. Some may be against the idea and, due to extreme circumstances, find themselves forced to work remotely. What seem like trivial aspects of office life, like banter and the quick chat whilst making the tea or coffee, can have a major impact when missed.
Therefore, it is essential that the transition from office-based to remote working is made as simple and straightforward as possible – for the user above all.
When choosing remote working tools and applications, it is critical to place as much focus on user adoption as on the tools being technically proven, cost effective and recommended by a trusted source such as your IT partner.
Having a detailed remote working policy in place makes a big difference, acting as a guideline for the business and staff when it needs to be implemented at short notice. It is highly likely that you already have remote workers in your business – these ‘power remote users’ can play a great part in helping those new to remote working settle in quickly.
Speak to a Trusted Partner
At AdEPT, we help thousands of businesses and organisations with their remote working needs, from designing networks and security solutions, to delivering Cloud services, hosted desktop and telephony platforms, to unified communications and collaboration solutions; all managed by our in-house IT support teams.
If you have any questions on how to tackle the current issues and get ready for remote working, get in touch today to learn more about our wide range of services.
As BT accelerates its plans to migrate the UK voice network from copper to fibre, the pressure to change solutions becomes ever more urgent. BT will withdraw the Wholesale Line Rental (WLR) service by 2025. It sounds an age, but it isn’t. Schools need to be thinking about their future communications solutions.
AdEPT Education provide specialist telephony services for schools, so we’ve outlined below a few ways you can prepare. If you want to discuss your options in more detail please don’t hesitate to book a review with one of our experts.
Have a look at one of your recent bills. Do you see any of these items listed?
- Analogue Line
- Business Line
- Alarm Line
- PSTN Line
If you do then you need to start thinking about your long term telephony arrangement. In short, anyone with an on-site PBX, telephone line, fax line, PDQ line or broadband line is affected and will need to make a plan.
BT & Openreach announced some time ago their intention to switch off ISDN services delivered over the Public Switched Telephone Network (PSTN) by 2025. They have also announced that they intend to switch off the whole PSTN by 2025, with no new supply after 2023.
Although they’ve been the most reliable solutions to date, PSTN and ISDN are rapidly becoming outdated technology, and are expensive to operate and maintain. Openreach plans to invest in fibre infrastructure rather than in a new version of the PSTN (which is essentially Victorian technology).
This means that any individual or organisation still using these traditional voice services will need to have moved to newer SIP and IP voice solutions by then or, simply put, they won’t be able to use their phones.
What are our options? SIP and VoIP.
The terms SIP and VoIP refer to telephony based services delivered using IP signalling. Historically, telephony based services have been delivered using technology and signalling which is now over 30 years old, such as ISDN30 and PSTN lines.
SIP services are generally used to connect lines to a telephone system and these are a direct replacement for the ISDN30 technology. VoIP is a general term used to describe routing voice calls over an IP network. The term is closely associated with hosted telephones, where a telephone system installed at a customer’s premises is replaced with a central system shared between many different locations.
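Because SIP is a plain-text protocol, it is easy to see what a call setup actually looks like on the wire. Below is a sketch of a minimal SIP INVITE request – the addresses, branch and tag values are hypothetical, purely for illustration.

```python
# A minimal, illustrative SIP INVITE request. The addresses and tags are
# hypothetical. SIP is plain text, so a call setup request can be built
# and inspected as ordinary strings.

CRLF = "\r\n"  # SIP lines are terminated with carriage return + line feed

invite = CRLF.join([
    "INVITE sip:reception@example-school.sch.uk SIP/2.0",
    "Via: SIP/2.0/UDP 192.0.2.10:5060;branch=z9hG4bK776asdhds",
    "From: <sip:teacher@example-school.sch.uk>;tag=1928301774",
    "To: <sip:reception@example-school.sch.uk>",
    "Call-ID: a84b4c76e66710@192.0.2.10",
    "CSeq: 1 INVITE",
    "Contact: <sip:teacher@192.0.2.10:5060>",
    "Content-Length: 0",
    "", "",  # a blank line ends the headers
])

def parse_start_line(message):
    """Split the request line into method, target URI and SIP version."""
    method, uri, version = message.split(CRLF, 1)[0].split(" ", 2)
    return method, uri, version

print(parse_start_line(invite))
# -> ('INVITE', 'sip:reception@example-school.sch.uk', 'SIP/2.0')
```

The same human-readable structure carries the voice call’s routing information whether the endpoint is an on-site telephone system or a hosted platform shared between many locations.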
What do I do now?
Essentially we all have 3 options.
- Ignore it all and do nothing
It should go without saying, but consider how important your phones are to your school. Though 2025 may seem like a long way off, 6 years can fly by.
Recent studies have concluded that a large proportion of UK organisations are unaware that the change is taking place. Don’t run the risk. Have a plan in place and be ready for the change. VoIP and SIP based solutions will almost certainly offer cost-savings if deployed correctly and they’ll offer more functionality for your school, and be future-proofed for years to come.
- Panic and rip it all out tomorrow
Though it is time to take action, that doesn’t mean now is the time to change – you may not be ready. It may not make economic sense, or you may not have the resource available to manage the transition. You could make the wrong decision and choose a solution that offers little or no additional benefit over your current service, or even worse, spend time and money implementing something that will only help you out for the next few years. Make an informed decision; you still have time and options to explore.
- Engage with an industry professional to better understand my options and make a self-paced evolution to the future.
AdEPT Education have years of experience providing communication solutions to schools, including SIP and VoIP, and we continually refine our portfolio to best match our customers’ requirements. We have already developed a number of IP and VoIP services that are available to replace the current PSTN and ISDN services, and would be happy to discuss their benefits over your current solution.
We’re offering both new and existing customers alike the opportunity for a free telephony audit. This audit will review all of the telephony services currently supplied to your business, providing a report on the services and a recommendation of the actions needed to prepare for the withdrawal of the PSTN and ISDN network in 2025.
If you’d like to discuss your telephony requirements in more detail, read more about our voice solutions, or book a free telephony audit, please get in touch.
AdEPT delivers on a promise
In 2018 AdEPT announced a significant government contract win with the NHS. However, winning a contract is only half the battle – it is crucial to deliver on the promise made in this substantial contract process.
AdEPT is therefore delighted to announce that, under the guidance of the NHS Trusts in Kent, AdEPT has delivered improved network and bandwidth capacity to more than 100 hospital and specialist care sites across the region.
This project facilitates greater collaboration in handling the health and welfare needs of Kent residents.
Following the success of this initial network programme, AdEPT are completing the roll-out of improved bandwidth services to the 300 GP surgeries in the region. This will complete the upgrade of the entire NHS network in Kent.
This ultimately means that 1.6 million people across Kent will receive better care through improved network and bandwidth capacity, financial savings and improved access to clinical systems.
The challenge to be addressed
In 2017, the NHS decided that the 12-year-old ‘N3 network’ needed to be retired.
But what was the ‘N3 network’? N3 was a national broadband network for the English NHS, connecting all NHS locations and 1.3 million employees across England – a solution formerly managed by BT.
As a single supplier service, N3 was principally designed to provide access to national applications, such as patient records, hospital appointments and prescription services for NHS organisations.
However, as with all single supplier markets, the network became outdated.
The Health and Social Care Network (HSCN) was devised as a multi-supplier marketplace adhering to a single set of standards. It is designed to provide an improved way for health and social care organisations across the country – from both inside and outside the NHS – to access and exchange electronic information.
This multi-supplier approach also encouraged competition for the provision of the network, leading to a substantial cost reduction for the NHS.
The digital transformation being felt in all walks of society is being experienced in equal measure across the NHS.
Front line care is increasingly digital. A recent Healthcare News report clearly highlighted a host of initiatives that demonstrate how this transformation is impacting the NHS. Examples of ICT initiatives across the NHS include information security, patient analytics, digitised patient engagement, population health, electronic health records, remote patient monitoring and revenue cycle management.
The healthcare world is clearly changing, with virtual surgeries, remote consultations and telehealth all improving the way health services are delivered.
However, all these transformations depend on a high speed, secure, cost effective network infrastructure.
Specifically Kent, and the benefit HSCN brings
The delivery of a new Health and Social Care Network (HSCN) to NHS hospitals and specialist trusts in Kent replaces an outdated N3 network, delivering improved access to information and technology and substantial cost savings, underpinning the transformation of health and social care services in the region.
This improvement was made possible by the competition between network suppliers driven by HSCN.
Kent chose AdEPT because it demonstrated that it would be a flexible and responsive partner to the NHS in the region.
How has this substantial programme been delivered?
The change programme has required strong collaboration between a number of critical partners:
• the NHS Trusts in Kent,
• NHS Digital, and
• AdEPT Technology Group
“The N3 community of interest network (COIN) within Kent was one of, if not, the largest and most complex in England. It’s a credit to the strong leadership and collaboration between the seven Trusts in Kent, that not only was a successful migration of services to HSCN completed, but we were the first to do so in the UK”
commented Tim Scott, Chief Commercial Officer and HSCN Programme Lead at AdEPT.
“Strong programme delivery is critical to complex technology projects. There are four key disciplines and attributes that allowed us to deliver this programme so well: leadership, structure, collaboration and flexibility.
In AdEPT, we found a partner – rather than a supplier – aligned to us in each of these disciplines”.
Michael Beckett, Director of IT, Maidstone and Tunbridge Wells NHS Trust.
“The migration of the Kent CoIN demonstrates everything HSCN was designed to achieve:
greater collaboration, both locally and with suppliers;
reduced costs for the NHS by virtue of the HSCN marketplace; and
using technology to provide enhanced capabilities that will deliver better care through health and social care integration.”
Mike Oldfield-Marsh, HSCN Migration Manager NHS Digital.
London, Easter 2015, and a crew of ageing criminals led by ringleader Brian Reader pull off an audacious heist from a vault in Hatton Garden. Diamonds, gold, jewellery and cash amongst a haul of over £20m according to Scotland Yard. A burglary that, according to the presiding Judge, Christopher Kinch, ‘…stands in a class of its own’.
What on earth does this have to do with Cyber Crime?
Well it’s great to have a physical parallel to the ethereal world of technology, and there are many lessons to learn that apply to both.
And here at AdEPT we think it’s a risk that deserves attention. It’s estimated that, on average, a cyber incident costs an organisation $369,000, with the loss of critical data, intellectual property and source files costing a company its reputation, on top of the financial loss. Research also suggests that 27.9% of organisations will have a data breach in the next two years, with 61% reporting a cyber-attack in the past year.
In any risk assessment there’s a simple equation: Risk = Likelihood x Impact. With cyber, high likelihood multiplied by high impact means high risk – and therefore high priority!
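That equation can be sketched in a few lines of code. The 1–5 scoring scale and the banding thresholds below are illustrative choices, not an industry standard – adjust them to your own risk framework.

```python
# Sketch of the simple risk equation from the text: Risk = Likelihood x Impact,
# with each axis scored 1-5. The banding thresholds are illustrative only.

def risk_score(likelihood, impact):
    """Return the numeric risk score and an illustrative priority band."""
    score = likelihood * impact
    if score >= 15:
        band = "High"
    elif score >= 8:
        band = "Medium"
    else:
        band = "Low"
    return score, band

# Cyber risk in the article's terms: high likelihood, high impact.
print(risk_score(5, 5))  # -> (25, 'High')
```

Even this crude scoring makes the article’s point: anything scored high on both axes lands firmly at the top of the priority list.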
Yet cyber readiness (as measured by the insurer Hiscox) remains low – despite intense regulation (GDPR et al) and a mass of education. In the Hiscox survey, only 10% of organisations reached the defined ‘expert’ threshold, with 74% classed as novices. This in-depth study looked at two dimensions of readiness – technology and process on the one hand, and oversight and resourcing on the other – and is well worth a read.
Back to Hatton Garden – during the heist, the alarm actually went off! A security guard was dispatched to the building to investigate. After wandering round on a quiet weekend evening, he reported that the building appeared secure and no alarm was sounding, and a false alarm was declared.
The heist continued...
Human ill-discipline, lack of attention and poor processes are incredibly common causes of cyber-crime. For example, the most common password in 2018 was ‘123456’, with ‘password’ a close second! It’s no wonder then that a business is attacked by ransomware every 14 seconds, with the frequency and type of attack rising every year. Criminals are targeting the weakest link – us humans!
So, the cheapest, but potentially the most difficult, defence against Cyber Crime is trained employees. Any Cyber defence strategy should look first at making people aware of the risks and the consequences. As data files grow exponentially, with thumb drives & memory sticks allowing information to be so easily downloaded and shared, the impact of complacency can be widespread and crippling.
It’s no wonder then that there’s been a rise in Identity and Access Management (IAM) tooling. AdEPT are increasingly delivering two-factor authentication solutions – demanding a fingerprint or other evidence of identity via a second device – to prove an individual’s identity before they are allowed to open the ‘digital door’.
The most common form of cyber protection helps here too, Endpoint Security / Antivirus. AdEPT are deploying a range of tools from market leaders such as Sophos, Symantec and McAfee that scan incoming threats and halt them before they get to that precious data.
Physical – the morphing boundary
Our Hatton Garden master criminal, Brian, and his crew spent two years planning the robbery. They visited the vault several times and obtained blueprints of it. They learnt that the building had been re-designed, leaving a weak point of entry: a lift shaft that gave easier access to the building, leading in turn to a metal doorway. The thieves abseiled down the lift shaft, prised open the metal door and entered an area covered by CCTV – more on that later – a hallway perfect to house a massive drill.
So, despite the security firm’s best endeavours, the ‘edge’ of the secure area in Hatton Garden had changed. This is not unlike businesses, which are constantly morphing in terms of technology, employees, buildings and working practices.
In the world of cyber, firewalls were deployed to create a clear technical ‘edge’ defence. An insurmountable barrier, digital barbed wire patrolled by cyber guard dogs. Firewalls remain a necessary defence, AdEPT deploy this technology across thousands of schools for example, but they’re no longer a solid barrier. The ‘edge’ now changes constantly with employees bringing their own devices, using their own applications, browsing the web from work devices, sharing data using memory sticks and working from home. The digital world has created a porous barrier.
Physical – the challenge of age
In Hatton Garden the vault security was old, with out-of-date CCTV, poor alarm systems and weak doors. The criminals had identified all the weaknesses in the ageing physical infrastructure.
This is no different to the systems embedded within businesses across the UK which can at times be unloved and un-maintained. There’s a great recent case study that demonstrates the risks of lack of maintenance.
The case study relates to a virus called WannaCry, where ageing Microsoft software created a technological open door for criminals.
In May 2017, IT directors and security professionals went white as a sheet as they learnt of the WannaCry ransomware attack, which infected unpatched systems running Microsoft Windows. Although the NHS was not the specific target of the attack, the impact in this world alone proved significant: 34 trusts were directly infected, 80 trusts experienced some indirect disruption, and 603 primary care organisations suffered.
6,912 patients had to cancel or rearrange appointments (including 139 patients with an urgent cancer appointment).
As a result, the NHS increased its cyber security spending by over £150m3. Truly a case of closing the stable door after the horse had bolted.
It’s clear that there is no silver bullet for this type of crime, but there are some basic actions that build defences – and removing risk by continuously updating the IT estate is a necessity, not an option. It’s like fixing a car following an MOT to ensure that it’s safe to drive.
Can Cloud help?
At the end of the Hatton heist the criminals grabbed the hard drive, which was stored locally near the vault, and destroyed it – along with all the CCTV footage from inside the building. Yet again a low-tech security solution was easily foiled by the criminals.
Yet the risk of losing those images could, potentially, be easily remedied by storing CCTV footage in the Cloud.
The Cloud is certainly a haven with expensive defences – AWS, Azure and all those other public cloud players invest massively in Cloud security. Microsoft alone fends off 7 trillion cyberthreats per day and allocates over $1 billion each year to cybersecurity4. It’s like a massive data vault – far bigger and more secure than a Hatton Garden hard drive for sure!
“Through 2022, 95% of cloud security failures will be the customer’s fault” Gartner
Are criminals becoming more intelligent?
You can lock and bolt the front door, electrify the fences and buy in guard dogs. But, if you leave the back door open or invite the criminal fraternity into your data ‘house’, then all that security goes to waste.
The battle is constant, evolving, and with the advent of Artificial Intelligence and Robotics cyber-attacks are increasing in frequency and sophistication.
Just like ‘Basil’ – supposedly the red-headed, bewigged brains of the team – the criminals are getting cleverer and cleverer.
OMG – what can be done?
Cyber security is about people, processes and technology. We can’t blame ignorance anymore – the search term Cyber Security reveals 548,000,000 Google hits. There’s a mass of information out there.
Prevention is certainly better than fixing the resultant mess.
If Hatton Garden had undergone a risk appraisal – a cyber MOT, if you will – I suspect they’d have spotted the out-of-date kit, the old-fashioned security and the flawed processes. And they’d probably have fixed it all for rather less than the £20m stolen. A range of tools exists to reduce that risk-and-probability equation. At AdEPT we’d recommend:
• Undertaking a risk assessment
• Continually educating employees
• Evaluating and deploying tools
• Proactively maintaining the entire IT estate
• Understanding the boundary of your organisation
• Remembering that it’s a continuous process, as the threats morph and change
According to the Telegraph in 2015 the Hatton Garden vault saw a floor “strewn with discarded safe deposit boxes and numerous power tools, including an angle grinder, concrete drills and crowbars.” Of the £20m stolen in the Hatton Garden robbery some £9m is apparently still unaccounted for.
Cyber-crime doesn’t leave such a physical mess, but it does leave a financial, psychological, and in many cases brand, mess. So well worth checking those people, processes and technology.
1 Hiscox Cyber Readiness Report 2019
2 SplashData annual list
3 For local services, from 2018/9 to 2020/21
4 Tech Republic article – Feb 14th 2018
If you work in IT you may have heard about Birmingham City Council ending a 13-year IT and HR contract with Capita. It’s significant news in technology circles – after all, Capita is a multi-billion-pound outsourcing giant and Birmingham City Council is the largest local authority in Europe.
Many have already questioned if the Council’s decision is a sign of things to come. Some argue that in the future, more and more organisations will abandon the long-running practice of outsourcing functions that are not core to the business. You may even be considering bringing your own services such as IT back in house – a process called ‘insourcing’. Or you may have written off the idea of ever outsourcing your IT.
Is it really that straightforward? Do IT outsourcing firms and managed service providers (MSPs) like us need to hang up our hats?
The short answer is no – but MSPs shouldn’t rest on their laurels either. So it’s worth exploring why a polarising ‘outsourcing versus insourcing’ debate isn’t useful for anyone, including you and your business.
Paving the way: the iPhone and Office 365
Before we consider how companies might work with MSPs in the future, we need to look back.
Let’s start with the first iPhone. Released in 2007, it is one of the few modern gadgets that I consider a true disruptor. It transformed many of the time-consuming, laborious functions of our desktop machines into an elegant, accessible mobile format. Overnight, our love-hate relationship with technology became a love affair with the smartphone. Thanks in large part to the arrival of the iPhone, we would no longer settle for unreliable internet connectivity, clunky productivity tools and fiddly email processes.
By the time smartphones had really got into full swing, including the arrival of Google’s Android, something else happened that’s crucial to the story of outsourced IT: Microsoft Office 365.
Prior to the arrival of this Cloud-based software in 2011, most businesses were wary of the Cloud and many still relied on physical software. This meant CD-ROMs or on-site servers; licensing and installation headaches; and painful upgrade processes.
When Office 365 arrived, it introduced businesses and their workforces to the Cloud. And most importantly, it did so on a huge scale.
Of course, we’d all been using the Cloud previously. The internet, emerging social media and iPods had been gently ushering us towards the concept of data being up there, somewhere. But because Microsoft Office was, and still is, so widespread, it meant there was simply no escaping the new era of the Cloud. It had arrived in the workplace to stay.
The turning point in our collective mindset
You may wonder why I’ve talked about the iPhone and Office 365. Ultimately, one is a handheld device that’s immensely popular around the world, and the other is a piece of productivity software that’s integral to the modern workplace. But it isn’t so much the literal function that matters here. It’s how they have transformed the mindsets of many.
In the case of the iPhone, it has spawned a world where people expect seamless user experience. They won’t tolerate inefficiencies in technology. They expect reliability, insist on simplicity and won’t tolerate speed that’s anything less than instant. And all of these expectations will only increase.
Meanwhile, Office 365 has shown businesses in droves that having their data in the Cloud needn’t be scary. It’s demonstrated that Cloud-based software and services are ideal for the modern workforce, where people no longer need to be chained to their desktop computer. It’s proved that Cloud software can easily grow with a business, in what’s now termed as ‘scalability’. And let’s not forget the cost savings. Like most Cloud-based services, Office 365 offers a way to reduce or remove certain aspects of capital expenditure. It can also do away with the expense and hassle of relying on, and maintaining, on-site servers.
Arguably, Office 365 has played a huge part in transforming business software – and by extension, business life. Companies large and small have now embraced the Cloud and want more of it.
What has this got to do with outsourcing and MSPs?
I’ve been working in technology for more than 20 years. I’ve seen the widespread adoption of mobile phones. I’ve seen the dot-com bubble burst. I’ve seen IT emerge from the dark depths of the hardware cupboard and into the boardroom.
And now, I see a common theme with so many businesses: the C-suite declaring ‘we need to move to the Cloud’.
Why is this happening?
Sometimes, senior management learns of another company, or competitor, migrating to the Cloud. And this brings out a sense of rivalry, or even prompts a reckless race to keep up.
In other cases, simply using Office 365 has given the business an irresistible urge to go all-out Cloud.
Other times, the idea to move to the Cloud comes from the iPhone and Office 365 mindset that I’ve explored earlier. In such cases, staff are desperate for their business technology to mirror their personal user experience of their smartphones.
And it can go further than this. Employees are now demanding the service they experience at home in their workspace. With fibre-to-the-premise providing fast internet access, Wi-Fi coverage in every corner of the home (and garden!) and devices powerful enough to stream high-quality content and video, the workspace has to, at least, be on par.
Whatever the cue, these Cloud migration aims are totally understandable – as I’ve discussed, there are numerous, huge benefits from making the move. But, the Cloud is not an overnight fix, or a matter of a few clicks. And it shouldn’t be a decision based on the Cloud migration of a peer or competitor – especially as every organisation is different in every possible way.
Additionally, moving to the Cloud is much, much more than a matter of technology. It affects every business area: from production to HR, logistics to customer service.
It is this final point that brings me to MSPs. While some technology matters are perfectly suited to, and should be, the domain of in-house IT, a full Cloud migration requires business expertise that goes far beyond the technology department. And it’s for this reason that many businesses consult an MSP.
The changing role of the MSP
Returning to my opening gambit – the decoupling of Capita and Birmingham City Council – I’m very aware that outsourced IT firms and MSPs have attracted their fair share of controversy over the years. Some of the complaints about them – such as exorbitant fees, millstone-like contracts, lack of transparency – are entirely justified.
The positive news is that MSPs are evolving, doing so to meet the changing needs and expectations of the businesses they serve.
One example of this is the impact of the GDPR, which means MSPs must now take much greater responsibility in supporting their clients’ information management and security. Another example of the changing MSP is the move away from only selling boxed hardware. This is because many companies are fully capable of handling the hardware aspect of technology – and quite rightly, will no longer accept the traditional ‘break-fix’ model of IT outsourcing of old.
As these business needs have evolved and diversified, the MSP market has been opened up. Naturally, this means there’s more choice than ever for a business looking for support with any kind of technology change. But with more choice, comes more confusion. And I see that confusion every day with businesses of all sizes and industries.
How does this all relate to your business?
I’ve focused on Cloud adoption here because it’s a dominant part of my work and one of AdEPT’s specialist areas. But if your business needs external help with any aspect of its technology, you might find yourself baffled by the task of choosing an IT supplier or indeed an MSP. This is partly because there are so many more providers to choose from – and partly because many providers do themselves no favours when it comes to explaining what they do and how they can help.
This is perfectly illustrated by the number of ‘as-a-service’ options now available from MSPs. There’s ‘DaaS’ or desktop-as-a-service; ‘ITaaS’ or IT-as-a-service; ‘CIaaS’ or Cloud-infrastructure-as-a-service; ‘PaaS’ or platform-as-a-service… the list goes on. My personal favourite is ‘BADaas’ which sounds like some kind of rebellious rockstar – it’s actually Biz-Application-Development-as-a-service…
It’s no wonder then, that businesses find navigating the world of MSPs intimidating before they’ve even found a provider. But it doesn’t stop there. Often, when an MSP is chosen, the negative experience can continue. And one reason for this is because too many MSPs fail to ask the right questions.
As I’ve described above, I’ve encountered many companies whose reason for getting in touch is ‘we want to move to the Cloud’. It’s at this point that the plan of action can go the right way or the wrong way. So, when I’m faced with such a statement, I’ll ask ‘Just what is it you want to achieve?’ or more simply, ‘Why?’
I never ask this to be obstructive. Instead, I’m playing my role in being a responsible MSP – one that goes beyond pushing technology for technology’s sake. It’s a question that sets out to unearth the real business needs and ensure, as an MSP, we’re going to make a genuine difference to your business.
Asking the right questions at the start is, of course, the tip of the iceberg and I could say much more on this, but that’s a blog in itself.
Instead, I’ll touch briefly on the other aspects of an MSP that should be a dealmaker for your business – now and in the future. A good MSP takes time to understand your business from the outset; a great MSP is ahead of technological evolution, not reacting to change when it’s too late; and an exceptional MSP invests in every phase of the relationship, from presales to support. These are the qualities that will make or break tomorrow’s MSPs and the businesses they serve.
By reading this blog, you’ve hopefully learned why the ‘outsourcing versus insourcing’ debate that’s spilling over from the public sector isn’t black and white. You’ve hopefully seen why and how MSPs are changing – and what businesses should now expect from those providers. And above all else, I hope you’ve had a taster of how an MSP should be helping your business.
Many people use the terms business continuity plan and disaster recovery plan interchangeably. Although the two do go hand in hand, they are not the same and in fact describe two different approaches to ensuring that a company bounces back after a disaster. A company can choose to focus on one over the other, although most apply both to be completely prepared for the unthinkable.
So what is the difference between a business continuity plan and a disaster recovery plan? There is a breadth of information available on these two topics and the answer might vary depending on who you ask. However, let’s start with a general definition of each term.
What is a Business Continuity Plan?
A business continuity plan (BCP for short) refers to a collection of protocols established to guarantee that a business can maintain a healthy level of operation in the face of a disruptive event. All the steps and processes listed in a BCP answer one question: “How can we continue to offer an acceptable level of service if a disaster strikes?”
What is a Disaster Recovery Plan?
A disaster recovery plan (DRP for short) refers to the specific technologies and steps that a company needs to implement to recover AFTER a disruptive event. This usually pertains to infrastructure failure, lost data or other technological components. This plan answers the question: “How do we recover from a disaster?”
Dell describes a business continuity plan as a strategy aimed at helping a business continue operating with minimal disruption in case of a disaster.
A disaster recovery plan, on the other hand, is more specific. It is a plan aimed at restoring the applications and data that an organisation uses in case its servers, data centre or other infrastructure is destroyed or damaged. Some even argue that a disaster recovery plan is a subset of business continuity planning.
Business Continuity vs. Disaster Recovery
Organisations face a wide range of threats that could disrupt their normal operations or even decimate them completely. This could be anything from natural disasters such as floods, hurricanes and viral outbreaks to man-made threats like workplace violence, cyber-attacks and industrial sabotage. According to a report published on Forbes, a significant majority – close to 90% – of all small businesses fail within one year of facing a disaster. This reinforces the importance of businesses having both continuity and recovery plans, even though not every business employs both. In reality, however, a comprehensive BCP will have a DRP built into it.
A BCP is a master document that stipulates all aspects of your organisation’s prevention, mitigation and response including recovery protocols. In essence, an effective business continuity plan also addresses how a business will recover from every kind of disaster.
Let’s take a closer look at each plan
Business Continuity Planning
Business continuity planning, in general, is a high-level process that focuses on the critical operations within an organisation that need to be running to maintain a healthy level of service. If the plan is implemented effectively, the organisation should be able to continue offering products and services to customers with minimal disruption during and immediately after a disaster. This also involves addressing the needs of other stakeholders, such as vendors and partners, as a disaster can affect their operations too.
For the above reasons, a BCP needs to cover all ends of disaster preparedness in an organisation, including prevention, mitigation and recovery. These broad categories of action need to be individually defined for each risk and disaster scenario. This can mean the difference between survival and a complete shutdown. BC planning achieves these objectives through relentless analysis and isolation of critical business processes and threats. This helps you create a priority list of key processes and resources – including employees and infrastructure, and not just IT.
Disaster Recovery Planning
A DRP can be viewed as a more specific part of a BCP. Although some people tend to narrow the focus of a DRP to information systems and business data, it can also refer to protocols outside the IT scope. In other words, even though most businesses are now heavily IT reliant, a DRP does not have to be exclusively about IT. It could include guidance on how to restore communication or finding a secondary business location to accommodate critical operations and systems.
Even with an extended scope, a DRP is essentially a response strategy – mostly a component of a BCP. It lists all the technologies, procedures and objectives required to perform a quick recovery after a disaster. The recovery could pertain to any point of failure across all operations, including data loss, hardware failure, network outages and application failure.
A business continuity plan is the first line of defence for an organisation against disaster. However, a disaster recovery plan is a critical plan particularly for organisations that cannot function without vital business data. In practice, it is best to implement both plans when possible. View our business continuity & disaster recovery plans to find out how we can help your business.
Recovery Time Objective (RTO) and Recovery Point Objective (RPO) are vital parameters in any disaster recovery plan. However, are the two terms different means towards the same end? RPO and RTO are two key metrics used by organisations when developing a disaster recovery plan that can guarantee business continuity in the face of a disruptive event. At first glance, the two terms seem to be quite similar, but they are different metrics with unique objectives in disaster recovery and continuity management. Let’s take a closer look at each metric.
Recovery Time Objective (RTO)
RTO is the time duration and a service level within which a business process must be restored after a disruptive event to avoid a break in business continuity. It dictates how quickly an organisational infrastructure needs to be back online after a disaster. RTO is sometimes used to define the maximum downtime that an organisation can tolerate and maintain business continuity. In practice, this is a target time set to have services back up and running after a disruption – say two hours.
In reality, such an RTO (two hours) is not always attainable. A disaster such as a storm could leave a business down for weeks or even months. At the other extreme, a small lawn care company could get by on paper orders for a week or more in the event of an outage. In essence, RTO differs between organisations and circumstances. In outsourced IT services, RTO is defined within a service level agreement; the implication is that you can choose a better RTO at a higher cost, depending on your business requirements.
Alternatively, businesses with an in-house IT department can handle RTO internally. In this case, there should be a goal for addressing technical problems. Of course, the ability to meet the RTO will depend on the severity of the disaster: the RTO for a server crash is not attainable for a natural disaster such as a flood. RTO is more than just the amount of time needed to recover from a disaster – it also includes the steps to mitigate and recover from different disasters.
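To make the idea concrete, here is a minimal sketch in Python – with an illustrative two-hour target, not a recommendation – of how an incident’s downtime might be checked against an agreed RTO:

```python
from datetime import datetime, timedelta

RTO = timedelta(hours=2)  # illustrative target, as might be agreed in an SLA

def met_rto(outage_start: datetime, service_restored: datetime) -> bool:
    """The RTO is met when total downtime does not exceed the target."""
    return service_restored - outage_start <= RTO

outage = datetime(2020, 3, 2, 9, 0)
print(met_rto(outage, outage + timedelta(hours=1, minutes=45)))  # True: restored in time
print(met_rto(outage, outage + timedelta(hours=3)))              # False: target missed
```

The same comparison scales from a single server crash to a site-wide outage; only the agreed target changes.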
Recovery Point Objective (RPO)
RPO describes the time interval that might pass during a disruption before the quantity of data lost exceeds the maximum allowable tolerance or threshold in the business continuity plan. For instance, after 16 hours, the effect of lost sales on a small business might become an excessive burden against costs and result in not meeting sales targets.
It is important to understand how much data is an acceptable loss in your organisation. In this regard, mirror copies and backups of data are a key part of RPO. Some organisations determine how often they need to create backups by calculating the recovery costs versus storage costs. Other businesses create a real-time clone of their data using cloud storage. In this case, a failover only takes a couple of seconds.
As with RTO and acceptable downtime, different businesses have different levels of tolerance for data loss. While a small lawn care services company can retrieve 24 hours of records without any effect on real-time operations, an online billing company can face major difficulties with just a few minutes of data loss.
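As a rough sketch (again in Python, with a hypothetical four-hour tolerance), the worst-case data loss after a failure is simply the gap since the last good backup – which is why backup frequency follows directly from the RPO:

```python
from datetime import datetime, timedelta

RPO = timedelta(hours=4)  # hypothetical tolerance for lost data

def data_loss_window(last_backup: datetime, failure: datetime) -> timedelta:
    """Everything written after the last good backup is lost in a failure."""
    return failure - last_backup

failure = datetime(2020, 3, 2, 14, 0)
print(data_loss_window(datetime(2020, 3, 2, 11, 0), failure) <= RPO)  # True: 3 hours lost
print(data_loss_window(datetime(2020, 3, 2, 8, 0), failure) <= RPO)   # False: 6 hours lost
```

In other words, a four-hour RPO demands a backup at least every four hours; a near-zero RPO demands continuous replication.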
RPO is often categorised by time and technology:
• Longer objectives make use of external storage backups of the operations. In this case, the restoration point is the last available backup.
• Objectives of up to four hours call for ongoing snapshots of the operations. This allows the organisation to get data back faster, with minimal disruption.
• Near-zero objectives require enterprise cloud backup and storage solutions to replicate and mirror data. In most cases, these services offer maximum redundancy by replicating data in multiple geographic locations. The net effect is that failback and failover are seamless.
Both RPO and RTO are measured in time. However, RTO focuses on bringing hardware and software back online, while RPO focuses on acceptable data loss. With so many forms of disaster to consider, it is important to define what they have in common: disruption of normal business operations.
Preparing your business for any disaster is critical to ensuring minimal downtime, continued operations and avoiding a negative impact on your reputation and revenue. This makes implementing business continuity and disaster recovery plans a crucial step for every business. That said, establishing RPO and RTO will greatly decrease the negative effects of downtime and also help you manage disasters more effectively when they happen. Partnering with a business continuity professional will help you manage RPO and RTO more effectively using the latest technologies and backed with an extended understanding of the industry. View our business continuity management and disaster recovery services for more information on how we can help.
This is one of our longer blogs. So here’s a summary of key points covered here:
It is impossible to overstate the professional challenges that IT staff have faced over the past six months.
Although IT work is less emotive than, say, that of healthcare professionals – and understandably so – technology is arguably the backbone of an organisation, helping it move forward in areas as diverse as finance and marketing. And so, when the pandemic took hold, IT professionals had to do all they could to keep that backbone standing upright.
Yet at the same time, they found themselves handling the biggest change to the way that we work in recent history. As a result of the crisis, they were tasked with what is the most profound kind of digital transformation: to remodel the modern workplace. And to do this literally overnight.
Now that the adrenalin has worn off and businesses look to return to some form of normality, IT professionals again face a new catalogue of demands. Some of those challenges are obvious – and there are already thousands of articles that address them.
A less obvious challenge facing IT professionals right now is the ‘where do I even start?’ question. It’s something that many of our clients ask when faced with an overwhelming to-do list and an equally staggering list of expectations from the business.
If you’re at that point – asking yourself where should you start with IT now your workplace has been changed forever – this blog is for you. We have identified three key areas that will be of great value to your organisation in the post-Covid world. But before we get onto these pointers, it’s worth framing them.
First, we’ve focused on Microsoft. And we’ve done this because while we know you rely on many other vendors – some of whom are likely to be our very own partners – we also know that Microsoft is one of the most prevalent IT suppliers. And so, we’re hoping there’s guidance here that will be useful to you, irrespective of the size or the nature of your business.
Second, this blog is the tip of the iceberg. It’s likely we’ll look at these three areas more closely in the future, through further blogs or webinars. If we do, we’ll share them on LinkedIn, so if you’re not already following us there, do so here.
Third, if you’re asking yourself ‘where do I even start?’, remember that you’re not starting from scratch. You are instead building on years of hard work that’s led your business to where it is now – the past few months have been an acceleration of that. What you do now is an evolution, not a revolution – and that’s not to be confused, Alan Partridge style…
Microsoft Teams: what about Skype for Business?
It will come as no surprise to you that from 17 February to 14 June 2020, usage of Microsoft Teams grew by 894 per cent. We suspect that existing business users of Office 365 – now referred to as Microsoft 365 – account for a large part of this growth. And given that Teams has been part of Microsoft 365 since March 2017, it’s likely that many of these businesses had access to Teams long before the pandemic.
And so, our hunch is that when it’s an all-hands-to-the-pump situation, companies can and will embrace new technology and features – and they can do so quickly. The Microsoft figures are a good testament to that – and to the agility of IT teams.
Nevertheless, the catalyst for this change has been most unwelcome, causing many a sleepless night for an IT professional. At the same time, the switch to video conferencing has been so significant that it has touched everything from office etiquette to popular culture to TV advertising. If you’ve not yet seen a schmaltzy commercial featuring a grid of talking heads, then lucky you.
One related topic that hasn’t attracted nearly as much interest concerns Skype for Business. Many organisations still use this service – and if yours is one of them, chances are you now have a raft of questions that have emerged in this new age of Microsoft Teams. Among them, one question appears to be the most pressing:
“I’ve heard Microsoft Teams is replacing Skype for Business. Do I really need to shift to Microsoft Teams?”
It’s a question that reflects the confusion about Skype we’re seeing in the marketplace. And it may stem from Microsoft’s announcement that Skype for Business Online is to be retired at the end of July next year (2021).
As with so much in the world of IT, this is another occasion where the devil really is in the detail. As Microsoft’s announcement makes clear, it refers specifically to the ‘Online’ version of Skype for Business.
Of course, in the world of technology – where even fridges are connected to the internet – it’s natural to regard all products as being ‘online’. And so, we’ve seen many clients get a whiff of this news and understandably confuse ‘Skype for Business Online’ with ‘Skype for Business Server’.
Semantics aside, where does this leave you? Let’s set the record straight.
If you use Skype for Business Online – where the system runs in the cloud – then there’s no way to sugarcoat it: you’ll need to make the shift to Teams. And as the Microsoft blog says, you’ll have to do so by 31 July 2021.
If you use Skype for Business Server – where the software runs from hardware in your own premises – there is nothing to do. But, as many of our clients have discovered, you may find Teams is more suitable for your business, it being a platform that unites many business communications tools. It’s also worth mentioning here that if you use even the most basic version of Microsoft 365, then you already have access to Teams.
We’re highlighting Teams because it appears that a lot of businesses will never return to past ways of working. As such, video calls and, more generally, unified communications platforms, are likely to be more critical than ever before.
Whether you have the Online version of Skype for Business, or use the Server version and want to switch to Teams, there’s no quick or one-size-fits-all answer to what you do next. So here’s the part where we ask you to get in touch, so we can objectively guide you towards the right setup for your business. This isn’t a sales pitch, but rather, it’s us being honest about the situation.
Microsoft 365: backup your backups
Those of you who already use Microsoft 365 will have no doubt encountered, or are already using, the cloud storage features of the service. It follows the same principles as Google’s equivalent – G Suite, with Google Drive – and is similar to other cloud storage services, such as Dropbox and Box.
With the emergence of these services over the past few years – and perhaps in the excitement about all things cloud – has come a mindset that cloud is an infallible way to store data, which isn’t strictly true. And it’s a misconception that’s particularly relevant now as remote working, often with personal devices, becomes par for the course.
The experience of IT leaders working through the pandemic is especially relevant here. According to Computer Weekly, 82 per cent of these professionals saw use of cloud ramp up in response to Covid-19. And 60 per cent said use of off-premise technologies continues to grow.
Consider here then, for example, a company leaver. As they wind up their employment, they may innocently decide to ‘tidy up’ their files. Or worse, they may be disgruntled and maliciously remove valuable data. In both instances, the standard cloud backup on Microsoft 365 may only be able to recover items deleted in the past three days. And so, if you discover a data loss that’s older than that, you might find retrieval is impossible.
In pre-pandemic times, these scenarios would be valid causes for concern. But in the post-pandemic world – with looser reins on staff and technology – this becomes an even more troubling possibility.
And so, this is why we are urging you and all of our clients to look into backing up your backups. In fact, Microsoft itself gives this same advice – because, after all, no IT professional wants loss of valuable data added to their already enormous plate – especially now.
To help clients with this concern, we work with a backup specialist called Veeam, which has been positioned ‘highest in execution’ in the 2020 Gartner Magic Quadrant for the fourth year in a row. Get in touch, and we can explain how it could work with your IT setup and benefit your business.
Microsoft Enterprise Mobility Security: go beyond two-factor authentication
As with data backup, the pandemic has exposed businesses to another IT-related risk and again, it relates to the new ways of working. Let’s take a step back to explain this.
As we all know, most of our online accounts and the valuable data within them are accessible through an email address and a password. This is true in both our personal and our professional lives.
There are two inherent problems with this setup.
First, email addresses are easy to harvest. Professional email addresses are often inadvertently leaked into the public domain. Many people have them listed on their LinkedIn profiles. And many companies have a standard format for their email addresses, so if one address is in the public domain, it doesn’t take much detective work to figure out the email addresses of specific employees.
Second, despite our best efforts as IT professionals to encourage our colleagues to use strong passwords, it’s much easier said than done. And so, armed with a genuine email address, a cyber criminal only has to crack a password to crack an IT system.
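To illustrate the first point, here is a hypothetical sketch of how a known email format turns public names into valid addresses. The format names, helper function, names and domain are all invented for illustration.

```python
# Illustration only: once one leaked address reveals a company's email
# format, other employees' addresses can be derived from public names
# (e.g. from LinkedIn). All names and the domain here are hypothetical.

KNOWN_FORMATS = {
    "first.last": lambda f, l: f"{f}.{l}",
    "flast": lambda f, l: f"{f[0]}{l}",
    "first": lambda f, l: f,
}

def guess_address(first, last, domain, fmt="first.last"):
    """Derive a likely email address from a name and a known format."""
    local = KNOWN_FORMATS[fmt](first.lower(), last.lower())
    return f"{local}@{domain}"

# One leaked address (jane.smith@example.com) implies the format, so a
# name alone is enough to guess a colleague's address:
print(guess_address("John", "Doe", "example.com"))  # john.doe@example.com
```

That is all the "detective work" required, which is why the password, and what sits behind it, carries so much weight.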
It’s for these reasons that in recent years, two-factor authentication (2FA) has grown in popularity, with options including one-time passwords, or authenticator apps. Indeed, these are very helpful tools that add a valuable layer of extra protection.
But there are a few catches. Some of them are covered by this very interesting article from Wired, which although dating from 2013 is still very relevant. One of the most notable flaws with 2FA is that it only verifies the user. It doesn’t authenticate the device being used – and again, the world we now work in means we need to think again about the devices accessing our IT systems and data.
The solution? Well, one of them is Microsoft’s Enterprise Mobility + Security (EMS). It’s a bit of a mouthful, granted, but it is worth talking and thinking about. It’s built around conditional access – that is, it can vet devices as well as users.
For example, say you wanted to restrict access to your systems by geography. With EMS, you could do this, since it can look at IP addresses and decide whether to grant access to a system. Likewise, if you wanted to ensure all the devices accessing your systems met certain security criteria, EMS would let you do this.
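Conceptually, conditional access works like the hypothetical check below: a request must pass user, location and device tests before access is granted. This is a toy model of the idea, not EMS’s actual API or policy syntax; the allowed countries and OS threshold are invented examples.

```python
# A toy model of conditional access: grant entry only when the user is
# verified AND the location and device pass policy checks. Policy values
# below are hypothetical examples, not real EMS configuration.

ALLOWED_COUNTRIES = {"GB", "IE"}   # hypothetical geography policy
REQUIRED_OS_MIN = (10, 0)          # hypothetical minimum OS version

def grant_access(user_verified, device_os_version, device_encrypted, country):
    if not user_verified:                 # classic 2FA stops here...
        return False
    if country not in ALLOWED_COUNTRIES:  # ...conditional access also vets
        return False                      # where the request comes from
    if device_os_version < REQUIRED_OS_MIN or not device_encrypted:
        return False                      # and the device itself
    return True

print(grant_access(True, (10, 19045), True, "GB"))  # True
print(grant_access(True, (10, 19045), True, "RU"))  # False: geography
print(grant_access(True, (6, 1), True, "GB"))       # False: outdated OS
```

The point is the extra dimensions: a stolen password plus a passed 2FA challenge still fails if the device or location does not meet policy.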
The interesting thing about this level of authentication is what it reveals about your systems, and the devices trying to access them. We’ve a sneaking suspicion that many companies would be surprised about who and what is trying to access their systems, and where those access attempts originate.
As you’ve guessed, we’re highlighting EMS because it is especially relevant in the post-Covid world. And if it’s something you would like to explore, then we can talk you through it.
You can get in touch here.
We know that right now, you’ll be swamped with advice and how-to guides related to technology in the post-Covid world. So we hope that our pointers here give you some practical tips, along with fresh thinking and useful background and context so you can make your business case internally.
We also hope that if you have any questions, you’ll get in touch – you’ve seen the links above, but you can call us on 0333 4002490 or email email@example.com. We live for problem solving, and we’re friendly, straight-talking and positive.
- Andy Boylan is a Pre-sales Solutions Consultant for AdEPT Technology Group. He has worked in IT and telecoms for some 35 years, and has an MBA with Distinction in Technology Management. He is interested in the intersection of business technology with people and culture, and ardently believes that the most successful technology projects look beyond technology to address wider business issues.
You can connect with Andy on LinkedIn here.
Cloud computing has created a wide range of new business solutions. The differences between similar-sounding solutions can be confusing at best, and remote desktops are particularly complex. This article will outline the key differences, pros and cons between hosted desktops (Desktop as a Service, or DaaS) and virtual desktops (Virtual Desktop Infrastructure, or VDI). Understanding these differences is crucial to making the best decisions for your business.
Hosted Desktops – DaaS
Hosted desktops allow for cloud-hosting of company computers. With DaaS, your company’s desktop computers are not connected to in-house servers. Your DaaS provider is responsible for providing network management, load balancing and resource provisioning.
A third-party data centre is responsible for hosting your company’s desktops. Your staff are thus able to access their desktops, applications and data from any location. With the correct setup, hosted desktops can stream to almost any other device.
Virtual Desktops – VDIs
Virtual desktops and hosted desktops offer similar functionalities. The fundamental difference is the solution’s setup. VDIs are managed in-house and not by a third-party (external) solution provider.
Virtual desktops are the same as hosted desktops in that your staff can access their data, applications and desktops from any location.
While virtual and hosted desktops may sound very similar, important distinctions make each solution suitable for different companies’ requirements. Deciding which one you need is a matter of comparing the pros and cons of each to your company’s requirements and available resources.
Pros and Cons
The greatest benefit of utilising a virtual desktop infrastructure is control. Since a VDI is set up and managed by an in-house department, businesses retain total control over their IT solution. This level of control is vital within industries like finance or law.
The drawback to an in-house solution is the increased workload on your company’s IT department. Additionally, third-party providers can provide extra benefits in terms of security.
The setup of virtual desktops is both expensive and time-consuming. A large initial investment is required when obtaining the hardware and software for a VDI.
Hosted desktops are the opposite in that they require very little in the way of an initial investment. These desktop solutions are usually sold as a subscription-based service, and businesses do not need to own a data centre to utilise DaaS. This way, businesses unable to build a VDI can still benefit from remote desktops.
Companies that offer hosted desktop services are run by experienced professionals with cutting-edge hardware. Businesses that pay for other companies to host their desktops benefit from reduced IT overheads.
Hosted desktops are usually more secure than virtual desktops. Since hosted desktops are provided by specialist companies, they utilise enterprise-grade data centres equipped with the best in physical and cybersecurity measures. Even though your business’s data is held off-site, it is often more secure, since most companies cannot afford top-level security systems of their own. Additionally, as a company’s remote desktop requirements grow, hosted desktop providers can scale the services they provide to match.
Hosted Desktops (DaaS)
- Pros: managed by professionals; low initial costs; fast implementation time; high levels of security.
- Cons: partial loss of control.
Virtual Desktops (VDI)
- Pros: complete control; system and staff familiarity.
- Cons: high initial costs; requires intensive management.
There are strong benefits regardless of whether your business adopts hosted or virtual desktops.
All remote desktop solutions allow your staff to work from any location. Salespeople travelling abroad, staff working from home and other employees operating outside of a normal schedule can all work just as efficiently as if they were in their usual offices. All positions within your company benefit from increased productivity.
More opportunities for remote work lead to happier workforces. Surveys suggest that at least 30 per cent of the average workforce wastes an hour or more every day because of issues experienced in office environments, such as irritating or noisy coworkers and unnecessary meetings.
Studies also suggest the average remote worker is around 13 per cent more productive: remote workers suffer fewer distractions, have more productive hours and take fewer sick days.
Finally, all remote desktop solutions are good for the environment. As businesses look for more ways to reduce their carbon footprint, remote desktop environments provide reductions in energy usage and carbon emissions.
Find out more about our Nebula Hosted Desktop Solutions.
As cloud solutions continue to be an essential part of key business operations, cloud transformation has become an issue that almost every organisation needs to address. Even among organisations currently using cloud solutions, most don’t yet understand what cloud transformation really is and how it can impact an organisation. Gartner estimates that cloud transformation will drive $1.3 trillion in IT spending by 2022 as organisations seek to benefit from cost savings, enhanced revenues, increased agility and innovation. In preparation, this article goes over the key aspects you need to know about cloud transformation: what is cloud transformation? What does it mean for an organisation? And how will it benefit an organisation?
What is Cloud Transformation?
Put simply, cloud transformation is the process of moving your work to the cloud, including migrating apps, data, software programs or the entire IT infrastructure in line with your business objectives. Understandably, cloud transformation seems like a complex process since it can mean different things to different organisations. However, the overall objective, in every organisation, is to have the ability to quickly adopt new technology while also responding to new competitive threats and immediate market needs.
What Does Cloud Transformation Mean for an Organisation?
Cloud transformation is not simple or straightforward. It involves developing new muscle memory and essentially a new organisational culture in every aspect of an organisation that encourages and supports new ways of operating and learning. It is not all about technology. Employees are a key part of this transformation where they need to be empowered to make rapid decisions inspired by common organisational principles and directions. All aspects of the people, processes and technology components need to mature to new operational models.
The cloud is just a platform that enables a business to be more agile and responsive. For cloud transformation, however, the change must start internally and touch every aspect of the organisation. Most organisations are under the impression that shifting their on-premise IT infrastructure to a cloud service provider like AWS or Azure equals cloud transformation. Not even close. Cloud transformation must start by transforming organisational processes, organisational culture and technological components to make them cloud suitable – otherwise referred to as attaining cloud transformation maturity. Cloud-first processes, for instance, are:
- Self-service friendly
- Embedded with learning and healing abilities
- Able to exchange data and services with external systems
- Highly robust and scalable
Until an organisation optimises its processes to be cloud-first, it cannot take full advantage of the benefits of the cloud.
How can Cloud Transformation Benefit an Organisation?
Many organisations approach cloud transformation intending to save costs on IT infrastructure and operations. While this is one of the benefits, we wouldn’t rank it as the most important reason to start your cloud transformation journey. Knowing what you know so far, you wouldn’t either. This is because the real value of cloud transformation is in helping organisations respond to the age-old dilemma: how do we respond to market changes, technological advancements and competitive forces? Cloud transformation gives organisations the ability to quickly consume the latest technology, rapidly adapt to changes, and provide better responses to market needs. It achieves this in several ways.
An organisation’s cloud transformation maturity can be measured by its ability to quickly respond to market conditions and competitive threats. Cloud transformation allows businesses to rapidly consume new technologies and resources and efficiently apply them to market conditions.
Failure is an essential part of growth. You cannot innovate without an element of failure. An organisation’s future can depend on the ideas its employees come up with and how quickly they are incubated into products or services. Cloud transformation inspires a learning culture that tolerates failure, allowing employees to be more expressive and creative – to try new things, test new products and fail forward. Encouraging risk-taking is key to rapid innovation. This is the culmination of failure-tolerant processes and a transformative, learning organisational culture, coupled with agile technological components.
Cost optimisation is also a key aspect of cloud transformation, but not really in IT infrastructure and operations as many organisations assume. Instead, cloud transformation allows organisations to accurately measure their investment against key value chains and thereby determine whether a particular investment is feasible and optimal in line with their P&L.
Recruiting Top Talent
Organisations can define their success by the talent they hire and mature internally. Cloud transformation pushes an organisation to look beyond the traditional pool of talent. In today’s fast-paced world, organisations must focus on finding adaptive, self-learning, highly dynamic and engaged employees. This ensures that employees can take on the transformative organisational culture and evolve with the business by demanding improvement in everything they do.
So, to answer the question; what is cloud transformation? The correct answer is it depends on your organisation and your objectives. However, if you want to create an agile, innovative, future-proof organisation, it should be your priority. Contact AdEPT today, the leading business solutions firm in the UK, to get started on your digital transformation journey or to learn more.
Business continuity is vital in every sector and industry, but perhaps none more than healthcare. Over the last couple of years, the need for business continuity management has become ever more apparent. Whether from past experiences such as the 2007 floods and the H1N1 pandemic, from the emergence of formal guidelines and standards, or from the Covid-19 pandemic – the importance of a robust NHS is there for all to see.
In this article, we will look at why the NHS needs business continuity, the aim of business continuity in the NHS and the steps you need to take.
Why Business Continuity in Healthcare is So Important
According to a 2016 report by SecurityScorecard, healthcare ranked ninth in overall security in comparison to other industries. Additionally, 1 in 4 healthcare organisations has been hit by a ransomware attack – the most prominent example being the WannaCry attack, which cost the NHS upwards of £92m in damages. With increased risks, both natural and manmade, the threat is clear. When disaster strikes, medical professionals and organisations need to be able to regain immediate access to critical patient data. If that data is corrupted, stolen, lost or unrecoverable over a prolonged period, the impact can be costly from both a business and a medical perspective.
We all appreciate the role of a good healthcare system. However, healthcare systems and organisations are incredibly complex. This leaves systems like the NHS open and vulnerable to countless risks. According to the SecurityScorecard report, the most common vulnerabilities in healthcare include:
- Lack of system patching due to lax protocols for updating operating systems and applications.
- Inadequate cybersecurity training. Healthcare is one of the leading industries preyed upon by malicious email attacks.
- Weak passwords. Most healthcare organisations have lax password management policies that make it easy for hackers to access their systems and applications.
- Unprotected devices. In a vastly interconnected world, some advanced medical devices connected to the internet, unfortunately, lack sufficient cybersecurity protection measures.
- Outdated data backup systems. Although a change in the healthcare industry does take some time, some organisations have taken too long to upgrade to more advanced data backup solutions that could negate the effects of data loss or corruption.
Above are some of the many reasons that have necessitated the introduction of the NHS England Business Continuity Management Framework.
The Aim of Business Continuity in the NHS
Compliance with the Law
Business continuity is now mandatory in all NHS organisations. Under the Health and Social Care Act 2012 and the Civil Contingencies Act 2004, all NHS organisations have a duty and obligation to implement continuity arrangements as set out in the NHS England Core Standards for Emergency Preparedness, Resilience and Response (EPRR). This framework gives organisations the ability to identify and manage risks that could disrupt normal service. It also obligates them to maintain services at set standards in case of any disruption, or to recover services to these standards in the least possible time.
Maintain Critical Care
The consequences of any disaster, be it a virus, flood, power surge, fire or cyber-attack, can be fatal. Consider a high dependency unit with patients who require constant monitoring to adjust what medication they need and in what dosage, keep track of the latest symptoms, note what has worked, and so on. If this data were lost, corrupted or inaccessible for a prolonged time, the consequences could be life-threatening. Having a dependable business continuity management plan ensures that critical medical data can be restored almost instantly to maintain vital care to patients.
Protect Sensitive Data
Electronic personal health information (e-PHI) is incredibly personal and sensitive. In most cases, this information needs to be accessed from multiple sources across vendors and locations, which increases the risk of it being compromised. A business continuity plan implements technology that regularly backs up data, checks it for integrity and encrypts it to reduce the risk of unauthorised access. In addition to protecting PHI, this maintains the security of the entire organisation.
Protect the Bottom Line
The well-being and proper care of patients are undoubtedly very important. However, the bottom-line impact on NHS organisations as business entities is also very important. A loss of data, regardless of duration or volume, can be extremely expensive – both directly and indirectly. There is also the issue of downtime, which can grind operations to a near halt, spike operating costs and damage goodwill. It doesn’t help that the cost of downtime rises with each passing minute. Business continuity ensures that healthcare organisations can maintain an acceptable level of service in the face of a disaster, or at least recover to an acceptable level almost instantly.
It is not hard to see why all NHS organisations are required to have business continuity management plans. The benefits, both to the organisations and the patients are evident. At AdEPT, we have seasoned IT consultants who have worked with and provided training for a wide range of NHS organisations for many years. Contact us today to learn more about Business Continuity planning, training and implementation from the leading business solutions firm in the UK.
Data is the single most valuable asset in an organisation – worth far more than the equipment you use or a warehouse full of products. This is largely why the thought of losing data is so sobering. With an increasing number of ways an organisation can lose data, it is only right that every organisation keeps its data safe and retrievable. From hardware failures, power outages, human error, increasingly sophisticated cyber threats and natural disasters to insider threats, the risks are endless.
Why Data Backup Is Important in Business
The average cost of each record lost or stolen in the UK is around £113. Businesses collect an increasing amount of data every day – and its value is ever increasing. As such, businesses should regularly back up their data not only to protect their most valuable asset, but also to guarantee its security and privacy, and their ability to survive a malicious or accidental event. Studies also show that the cost of data loss and breaches is increasing as companies collect more data.
As a small business, you might think that you are immune to such dangers – that they only happen to big corporations. Well, research has identified that theft of intellectual property is the prime motivation for data breaches that result in data loss. This means that organisations of all sizes, including start-ups and SMEs, are susceptible to loss of data through either malicious events or mistakes. To avoid becoming a victim of preventable human error or malicious data breaches, and to maintain continuous access to your data, you need to make data backup a regular part of your IT practices. The question now becomes: how often should you back up your business data?
How Often Should a Company Back-Up Data?
Although a bit of a cliché, the answer is: it depends.
Backing up data as part of industry regulation or an organisational insurance policy takes more discretion than just making duplicate copies of everything. Understandably, data backup can be exorbitantly expensive. Therefore, to design an effective and efficient data backup strategy, you need to evaluate the following:
- Who manages the data? Is it your in-house staff or do you work with a vendor?
- What data do you need to back up? Everything or the most vital?
- Where do you intend to store the backups? On-premise or off-site?
- Why are you backing up your data? In-house policy or by obligation?
- How do you intend to back up the data? Via cloud or traditional backup?
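One way to make those five answers explicit is to record them as a simple policy object that can be reviewed alongside your IT documentation. A minimal sketch, with hypothetical field names and values:

```python
# The five questions above, captured as a simple policy record so the
# decisions are explicit and reviewable. All values are hypothetical.
from dataclasses import dataclass

@dataclass
class BackupPolicy:
    managed_by: str   # who: "in-house" or a named vendor
    scope: str        # what: "everything" or "mission-critical only"
    location: str     # where: "on-premise", "off-site" or "both"
    driver: str       # why: "internal policy" or "regulatory obligation"
    method: str       # how: "cloud" or "traditional"

policy = BackupPolicy(
    managed_by="vendor",
    scope="mission-critical only",
    location="both",
    driver="regulatory obligation",
    method="cloud",
)
print(policy.scope)  # mission-critical only
```

Writing the answers down this way makes it much easier to spot gaps, for instance a regulatory driver with no off-site location.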
Having a clear understanding of, and answer to, the above issues will make deciding how often you should back up your data a lot easier, particularly when you are working with a vendor or have an obligation to maintain backup data.
That’s not all: there are additional factors to consider when deciding how often you should back up your data.
Factors to Consider When Deciding How Often a Business Should Back Up Its Data
- How important is the data on your systems?
- How often does the data change?
- What type of information is contained in the data?
- What type of equipment do you have for backup?
- How quickly do you need to recover the data?
- Who is responsible for your data backup and recovery?
- Do you have/need off-site backups?
- Does backup interrupt your business operation?
Although there are many things to consider, experts agree that how often your data changes and how important it is are the most influential determining factors. With almost every business creating new content and collecting more data, sometimes by the minute, organisations need to make sure that this data is safe and secure as soon as possible. And while you might assume that you are relatively safe from cybercrime or malicious attacks, hardware failure and software corruption account for more than 75% of all business data losses.
What Should You Do?
It is important to remember that every business has unique circumstances and therefore different requirements. In general, however, the average mid-size company will be fine with a full backup every 24 hours and incremental backups every 6 hours. Mid-size online retailers, on the other hand, might need more frequent incremental backups of around every 4 hours, as well as hourly transaction logs to keep track of new purchases.
If you have significantly high traffic or handle more sensitive data – as large banks or enterprise online retailers do – you may adopt a more aggressive approach, with a full backup every 24 hours, incremental backups every 3 hours and transaction logs every half hour. However, this requires sizeable storage capabilities and resources so as not to slow down normal operations.
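The cadences above can be sketched as a simple schedule function. The 24-hour full and 6-hour (or 4-hour) incremental figures come from the guidance above; the code itself is a hypothetical illustration, not a production scheduler.

```python
# Sketch of the suggested cadences: a full backup every 24 hours and
# incrementals in between. Cadence values come from the guidance above;
# the scheduling code is a hypothetical illustration.

def backup_action(hour, incremental_every=6):
    """Return the backup to run at a given hour of the day (0-23)."""
    if hour == 0:
        return "full"                # once every 24 hours
    if hour % incremental_every == 0:
        return "incremental"         # e.g. 06:00, 12:00, 18:00
    return "none"

# Average mid-size company: incrementals every 6 hours.
print([h for h in range(24) if backup_action(h) != "none"])     # [0, 6, 12, 18]
# A busier online retailer might tighten this to every 4 hours:
print([h for h in range(24) if backup_action(h, 4) != "none"])  # [0, 4, 8, 12, 16, 20]
```

Tightening the incremental interval shrinks the window of data you can lose, at the cost of more storage and more load on systems.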
It is also important to consider whether backups will be done automatically or manually. Automatic backups save your IT team time and free them to focus on other tasks, particularly where backups are frequent. Additionally, manual backups are prone to human error. For these and other reasons, cloud backup through a provider is the best solution for most businesses.
At AdEPT, we provide you with just the data backup solution you need with the ability to scale as your business grows or needs change. Contact us today to learn more about our services or to talk to our team about your needs.
For an industry that has historically lagged behind in adopting new technologies, healthcare is surprisingly leading in cloud adoption. With a huge number of legacy systems and large volumes of highly sensitive and personal data, it is understandable why the industry has traditionally been slow to embrace change. However, this approach has completely changed with the rise of cloud technology. According to the West Monroe Partners report, a remarkable 35% of healthcare organisations held the majority of their infrastructure and data in the cloud, making healthcare one of the leading industries to do so. What makes the cloud such a good match for the healthcare industry, and what is its impact?
This article details the reasons healthcare is leading in cloud adoption and how the cloud is transforming healthcare as we know it.
What Makes Cloud Computing So Ideal for the Healthcare Industry?
Governments and healthcare institutions all over the world have faced countless challenges in trying to digitise health services. With demand for healthcare rising due to an ageing population, personnel shortfalls, growing expectations of digitised services and the need for more efficient infection-detection technologies, current healthcare models are under immense strain. Additionally, there is a cultural push for efficiency across all industries, healthcare included. When you consider the version control required for a healthcare system or institution managing millions of electronic patient records, and the need to integrate healthcare and social information while connecting countless hospitals, clinics, trusts, insurers and practitioners, the challenge is clear for all to see.
However, it is precisely in these challenges that the cloud excels. In the past, healthcare systems had to be centralised. This meant that institutions had to acquire, manage and maintain all the necessary hardware, software and IT personnel, whether these resources were used at full capacity or not. You would think this would make for a fail-safe and secure system – apparently not, as evidenced by the WannaCry attack on the NHS.
Cloud computing is changing how doctors, nurses, clinics and hospitals deliver cost-effective and quality services to increasing numbers of patients. This is driven by the push to improve the quality of patient care and experience, and by the economic imperative to cut costs and run integrated, secure and efficient systems. Through the decentralised model supported by cloud computing, healthcare systems and institutions can efficiently process, deliver and analyse data in a collaborative fashion at a fraction of the previous cost.
How the Cloud is Transforming Healthcare
Here are the main ways in which the cloud is impacting healthcare.
Cloud computing allows for on-demand availability of computer resources such as computing power, data and storage, negating the need to purchase and maintain in-house hardware and servers. An institution pays only for what it needs and uses, resulting in massive cost savings. This also improves scalability, giving institutions the ability to perform capacity overhauls whenever they need to while keeping costs in check.
Cloud computing facilitates and supports data integration regardless of origin or storage, making patient data readily available throughout the healthcare system whenever needed. This diminishes the distance between specialists and allows them to review cases regardless of location, while also providing insight to improve healthcare delivery and planning. Interoperability across various healthcare sectors, such as pharmaceuticals, insurers and payment avenues, increases efficiency and improves patients’ experience.
High Powered Analytics
Data in any industry is a huge asset, both structured and unstructured. The application of artificial intelligence algorithms and Big Data analytics on patient data collected from various sources can power up medical research including formulating personalised care plans. It also means that entire medical histories of patients are readily available to physicians when prescribing treatments so that nothing is missed.
Cloud computing gives patients access to and ownership of their data, allowing them to participate in decisions that affect their health and enhancing patient engagement and education. While there are concerns about cloud storage of patient data, reliability is high, including the ability to recover data if need be.
Cloud storage of patient records allows for remote accessibility, which is one of the leading benefits of using the cloud in healthcare. This is particularly helpful for post-hospitalisation care plans, telemedicine and virtual medication follow-ups. Telemedicine apps improve access to healthcare services, convenience in service delivery and the overall patient experience.
Understandably, there are concerns regarding the security of patient data and of the system itself, compliance with health data security norms, and system downtime. However, cloud technology is rapidly evolving to increase security through encryption and to provide redundancy to guard against downtime.
Although cloud computing still has a long way to go in healthcare, its adoption has definitely had a powerful positive impact on the industry, including cost savings, increased interoperability and improved service delivery. If you would like to learn more about how cloud computing can improve healthcare institutions, contact AdEPT today – the leading business and IT solutions firm in the UK.
Every business faces challenges, both common and unique. Some of these challenges and potential risk factors could cause significant setbacks or, in some cases, the utter ruination of a business. Owning and running a successful organisation requires an astute understanding of how to maintain core operations in the face of these challenges. From bad publicity, internal wrangles, cybercrime, natural disasters and economic downturns to power outages, these and other risk factors are enough to keep you awake at night.
While some business owners and leaders tend to believe that they can quickly come up with a “Plan B” on the go, the leading global corporate leaders spend time and money creating a business continuity plan for events they hope will never come to pass. After all, preparedness is the key to mitigating risks, avoiding disaster, and coping and recovering when unavoidable setbacks occur. On top of that, there are many benefits to having a business continuity plan, not only for the business itself but also for its partners, stakeholders, employees and, of course, its customers.
After creating a business continuity plan, the question becomes, how do you implement a business continuity plan?
Formulating a Strategic Implementation Plan
The next step after creating a business continuity plan is to formulate a strategic implementation plan, starting with a risk-based analysis. The analysis will help you determine the level of risk relative to the capital investment your business can make to guarantee viability. The business continuity plan is generally a holistic and expansive plan covering different levels of critical assets. The analysis helps you determine where best to focus your expenditure and resources to build a reliable system.
To formulate the strategic implementation plan, you first need to analyse the following four areas.
On-site Versus Off-site
Determine whether you will invest in on-site infrastructure or partner with a hosted co-location facility. On-site infrastructure will include redundant systems for cooling, power, hardware and connectivity as the plan requires. The off-site option, on the other hand, has a lower total cost of ownership (TCO). Regardless of which investment you decide to focus on, it is important to remember that every plan must include an off-site option with manual storage or automatic replication in case of a physical disaster.
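One way to frame the on-site versus co-location decision is a simple multi-year TCO estimate. A minimal sketch in Python – every figure below is hypothetical and for illustration only, not a pricing claim:

```python
def tco(capex, annual_opex, years=5):
    """Total cost of ownership over a planning horizon (currency units)."""
    return capex + annual_opex * years

# Hypothetical figures: on-site build-out vs a co-location contract.
on_site = tco(capex=120_000, annual_opex=30_000)     # hardware, power, cooling, staff
co_location = tco(capex=10_000, annual_opex=48_000)  # setup fee + hosting charges
print(f"On-site: £{on_site:,}  Co-location: £{co_location:,}")
```

The crossover depends heavily on the horizon: with these example numbers, co-location is cheaper over five years, but the gap narrows as annual hosting charges accumulate.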
Downtime Tolerance
Determine how tolerant your business as a whole is to downtime. Then, set an internal tolerance threshold depending on the service provided, the type of data and customer demand. This will help you determine the levels of redundancy your business needs to prevent or minimise downtime to within tolerance.
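To make a tolerance threshold concrete, it helps to translate an availability target into allowed downtime per year. A quick sketch – the targets shown are illustrative, not recommendations:

```python
def allowed_downtime_minutes(availability_pct, period_hours=24 * 365):
    """Convert an availability target (e.g. 99.9) into allowed downtime
    in minutes over the given period (default: one year)."""
    return period_hours * 60 * (1 - availability_pct / 100)

# Example targets and their yearly downtime allowance:
for target in (99.0, 99.9, 99.99):
    print(f"{target}% availability -> {allowed_downtime_minutes(target):.0f} min/year")
```

A jump from 99% to 99.9% availability cuts the yearly allowance from roughly 88 hours to under 9, which is why each extra "nine" tends to demand a further layer of redundancy.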
Quantity of Data
Determine how much infrastructural investment is required in the on-site location depending on the amount of data you deem mission-critical and that you need for disaster recovery. The cost point of off-site remote access versus on-site redundancy will help you determine where you should focus your resources.
Communication Infrastructure
Determine the communication infrastructure you need to replicate large amounts of data to an off-site location, or the network services you require to maintain a reliable remote-access connection to off-site live data repositories. This analysis should be done in conjunction with the quantity-of-data analysis. As a general rule, the more data being replicated, the more bandwidth the connection needs. In most cases, the cost of replication is prohibitive, making remote access the more feasible option.
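As a rough sizing exercise, you can estimate how long a full off-site replication would take over a given link. A simple sketch – the data volume, link speed and the 80% efficiency factor are all assumptions:

```python
def replication_hours(data_gb, link_mbps, efficiency=0.8):
    """Estimate hours to replicate data_gb over a link_mbps connection.

    efficiency discounts protocol overhead and contention (assumed 80%).
    Uses decimal units: 1 GB = 8,000 megabits.
    """
    megabits = data_gb * 8 * 1000
    return megabits / (link_mbps * efficiency) / 3600

# e.g. a hypothetical 5 TB data set over a 100 Mbps line:
print(f"{replication_hours(5000, 100):.0f} hours")
```

Numbers like these make the trade-off in the paragraph above tangible: a multi-terabyte initial replication over an ordinary business line can take days, which is often what tips the decision towards remote access to a hosted repository.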
Execute the Implementation Plan
With a business continuity plan to respond to all natural and unnatural disruptions, and its strategic implementation plan in place, it is now time to execute. At this stage, ongoing system monitoring and testing are key, as they help you prepare for actual recovery. Additionally, your business will grow and change as time goes by. As such, you need to conduct an annual evaluation in which policies, plans and procedures might need to change to adapt to your growing and changing system.
Important Tips when Implementing a Business Continuity Plan
- Understand the business architecture of your enterprise
- Know the daily business routines and the people responsible for them
- Formulate and maintain a service catalogue and CMDB
- Periodically perform trial runs of your BCP for practice and to make sure that it actually works
- Plan disaster recovery teams as well
- Continually update your BCP as the business changes, including personnel, IT services and business context
- Use simple and clear “how-tos” rather than complicated flowcharts
- Formulate straightforward instructions directed at specific individuals to reduce panic and tension in the face of an emergency.
Just as with creating a business continuity plan, implementing one focuses on planning, analysis and evaluation. Depending on the size and context of your business, the solutions and options might differ. If you are looking to create and implement a business continuity plan for your business but do not know where to start, contact AdEPT today. We are a leading IT services firm providing organisations of all sizes in the UK with innovative and reliable business solutions. Talk to us today to learn more about our services.
If you are considering cloud migration as the next logical step forward for your business or organisation, you are on the right track. In the last few years, many people have come to appreciate the many incredible benefits of cloud migration, although some lingering questions remain: How does cloud migration work? Who does the work of cloud migration? When is the right time to migrate? These and other questions make cloud migration, though beneficial, appear time-consuming and complicated.
This article aims to dispel that belief and give you a better understanding of how cloud migration works so you can be better prepared.
What is Cloud Migration?
Cloud migration is the process of moving a company’s data, applications and services from on-site premises to the cloud. Cloud migration can also mean moving from one cloud to another. Essentially, it is putting information and services in a virtual space where they can be accessed immediately from any location on the globe.
The key objectives of cloud migration are to help companies reduce capital expenditures and operating costs and make it easier for them to function. However, most organisations seek cloud migration for the resource allocation and dynamic scaling capabilities it offers.
Basics of Cloud Migration
Cloud migration can mean any of the following things:
SaaS, Software as a Service, is the most comprehensive form of cloud migration and also the most common solution. In this type of integration, the client company uses software hosted and maintained by a third party that is ready to go immediately. Users log in to the software system and issue commands to perform specific actions.
IaaS, Infrastructure as a Service, is the most basic form of cloud migration. Client companies use the infrastructure and raw resources of the system to do whatever they want, including storing data and creating backups.
PaaS, Platform as a Service, is a higher-level iteration of IaaS. Users have access to the tools and resources offered in IaaS, with the added ability to build apps.
Cloud migration, therefore, can manifest in different ways to different organisations. It can be one of the ways listed above or a self-defined version unique to your business.
How Does an On-Premise to Cloud Migration Work?
Although every business has different needs and will therefore adopt a slightly different process, cloud migrations often include the following steps:
Formulate Migration Goals
Establish the performance gains you want to achieve. Having goals to measure against helps you determine whether the migration was a success.
Create a Security Strategy
Cloud cybersecurity is different from on-premises security and requires a different approach. Deploying a web application firewall or a cloud firewall might be necessary to protect corporate assets.
Copy over Data
Replicate your existing database to your cloud provider throughout the migration process to keep the cloud database up to date.
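The replication itself is usually handled by provider-specific tooling, but the underlying idea can be sketched as a generic copy-if-changed routine. This is purely illustrative – `sync_directory` is a hypothetical helper operating on local folders, standing in for a managed database replica or an object-storage sync command:

```python
import filecmp
import shutil
from pathlib import Path

def sync_directory(source: Path, replica: Path) -> list[str]:
    """Copy new or changed files from source to replica; return what was copied.

    Stands in for provider-specific replication tooling: each run brings
    the replica up to date with the source, so it can repeat throughout
    the migration window.
    """
    copied = []
    for src in source.rglob("*"):
        if src.is_dir():
            continue
        dest = replica / src.relative_to(source)
        # Only copy files that are missing or whose contents differ.
        if not dest.exists() or not filecmp.cmp(src, dest, shallow=False):
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)
            copied.append(str(src.relative_to(source)))
    return copied
```

Running such a routine repeatedly is the point: the first pass moves the bulk of the data, and later passes transfer only the changes, keeping the cloud copy current until cutover.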
Move Business Intelligence
This might involve rewriting code or refactoring, all at once or piecemeal – more on that later.
Switch Production from On-Premise to Cloud
Once the cloud goes live, the migration is complete. You can opt to turn off your on-premise infrastructure or keep it as a backup or as part of a hybrid cloud deployment.
Cloud Migration Strategies
Gartner describes five options for organisations looking to migrate to the cloud, commonly known as the “5 Rs”:
Rehost
Essentially, rehosting is doing the same thing but on cloud servers. It involves selecting an IaaS provider and recreating your application architecture on the provider’s infrastructure.
Refactor
Refactoring involves reusing pre-existing frameworks and code while running your applications on a platform provided by a PaaS provider.
Revise
Revising involves partially expanding or rewriting the code base, then deploying it using the refactoring or rehosting options.
Rebuild
Rebuilding involves re-architecting and rewriting the application from the ground up on a PaaS provider’s platform. While developers can take advantage of modern features offered by the provider, it can be labour-intensive.
Replace
Replacing involves discarding old applications and switching to ready-built SaaS applications from SaaS vendors.
Cloud Deployment Styles
In conjunction with a cloud migration strategy, you also need to decide how your cloud deployment will look after completion of cloud migration.
Hybrid Cloud
A hybrid cloud combines two or more types of environments – private clouds, public clouds or on-premise legacy data centres. This style requires tight integration across clouds and data centres.
Multi-cloud
A multi-cloud deployment combines two or more public clouds. This style generates cost savings and redundancy, and leverages features from different providers.
Single Cloud
Though not always feasible, deploying with a single cloud vendor is still an option.
Do You Need Help with Cloud Migration?
AdEPT offers a comprehensive and intuitive single control panel for the security and performance products required for successful cloud migration. We take time to understand your business needs, goals and long-term objectives to provide you with exactly the solutions you need within your budget. Talk to one of our experts today to learn more.