
When UK mobile operator O2's data network went down for a whole day in December, it brought home to many people just how interconnected the many services are that we take for granted. The inability to get email on the move or to use Google Maps to navigate to a meeting was an annoyance, but for thousands of Uber drivers it was more serious, leaving them with little choice but to buy Pay as You Go SIMs on other networks in order to continue making their living.

In fact Uber drivers were fortunate: the other networks were operating normally, and switching to another one took a matter of minutes. Things were not so easy for the tens of millions of people who rely on the availability of Microsoft's cloud-based Office 365 service when it experienced prolonged downtime of Multi-Factor Authentication in November. Not only were they unable to access all their Office 365 apps and data, but many also discovered that they were effectively locked out of other applications, such as Smartsheet, Xero, and Insightly, which can share Office 365 authentication. With a cellular network outage it's easy to switch SIMs, but when parts of Office 365 go down, those who rely on it have no option but to wait for Microsoft to fix it.

This highlights two potential problems for organizations which rely on the availability of applications and services in the cloud. Firstly, cloud services do go down, and it’s not easy to switch to another cloud provider when that happens.

But perhaps more importantly, a huge number of companies are relying on the availability of a very small number of public clouds. (Of course to access them they also rely on a very small number of telecoms networks, but that’s another story.)

According to the Cloud Security Alliance, about 42% of application workloads run on Amazon Web Services, and a further 29% run on Microsoft’s Azure. And the fact that AWS and Azure account for well over two thirds of cloud workloads has become a cause for concern to regulatory authorities in many industries. For example, last July the European Banking Authority issued a report warning of the systemic risk arising from the international banking system’s concentration into such a small number of public clouds.

So what can be done to mitigate the risks presented by this concentration of computing resources in a tiny number of huge public clouds, which will, from time to time, encounter availability problems?

A lot can certainly be done at the application level, by architecting applications for the cloud to meet specific resilience requirements, expressed as Recovery Point Objectives (RPOs) and Recovery Time Objectives (RTOs).
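As a minimal illustration of how an RPO constrains a design (the function name and figures are hypothetical, not from any particular product): in the worst case, a failure just before the next backup loses a full backup interval of data, so the interval must not exceed the RPO.

```python
def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """A failure just before the next backup loses up to one full
    interval of data, so the interval must not exceed the RPO."""
    return backup_interval_hours <= rpo_hours

# A nightly backup cannot satisfy a 4-hour RPO, but an hourly one can:
print(meets_rpo(24, 4))  # False
print(meets_rpo(1, 4))   # True
```

The RTO is tested the same way, but against measured restore times rather than backup frequency.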

This means that when moving to the cloud it's important to conduct application-centric migrations and transformations that capture this information, along with attributes such as business criticality.

Indeed, it was in part for this reason that we created our AppScore product to capture, assess and plan at the application level in order to ensure successful cloud adoptions.

It's also important to realize that putting an application or service in the cloud doesn't free you from the responsibility of keeping it running: standard, well-established principles of redundancy and resilience still need to be applied. That means you need a disaster recovery plan in place that's tested and proven.

The good news is that DR from one cloud location (region) to another can be far easier than switching from an on-premises data centre to an alternative site. It's important to use a multi-region strategy, and, where the cloud provider supports it (as AWS does), it's wise to reserve capacity in a specific availability zone: a disaster in one region could mean a large number of companies all trying to recover to another cloud data centre simultaneously, and you could find yourself locked out.
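As a sketch of what reserving capacity might look like on AWS via its On-Demand Capacity Reservations API (the instance type, zone, and count below are hypothetical examples, not recommendations):

```python
def reservation_request(instance_type: str, az: str, count: int) -> dict:
    """Build the parameters for an EC2 On-Demand Capacity Reservation
    (the ec2:CreateCapacityReservation API call)."""
    return {
        "InstanceType": instance_type,
        "InstancePlatform": "Linux/UNIX",
        "AvailabilityZone": az,
        "InstanceCount": count,
    }

params = reservation_request("m5.xlarge", "eu-west-2a", 10)

# With AWS credentials configured, the request could then be submitted:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="eu-west-2")
#   ec2.create_capacity_reservation(**params)
```

The reservation guarantees that the zone will have that capacity available when your DR plan is invoked, at the cost of paying for it whether or not instances are running.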

The bottom line is this: the cloud may be a different world, but tried and tested resilience and redundancy principles still apply. When used effectively, the cloud provides greater resilience options at a better price-point than on-premises or co-located datacenters ever can.

Just remember to factor this into your cloud migrations: understand each application's criticality to the business and its resilience requirements. That means running cloud adoption at the application level rather than at the server level.

FutureProof Takes Home Top Honors for EMEA Emerging Partner of the Year

London, December 10th, 2018.

FutureProof has been selected as one of ten CloudHealth Partner of the Year award winners, which were recognized at AWS re:Invent 2018.

Selected for technical ability, business performance, and growth, Partner of the Year winners were chosen from among hundreds of service providers worldwide. CloudHealth CEO Tom Axbey described them as “standouts, even among a group of high performers” and reflective of how fundamental partners are to the DNA of CloudHealth.

FutureProof was awarded top honors in the EMEA Emerging Partner of the Year category.

“This award recognizes companies that deliver significant value – both to their own customers as well as to the CloudHealth partner program,” said Bob Kilbride, Senior Director of Channel Sales at CloudHealth. “FutureProof has demonstrated a commitment to service excellence and cloud innovation that puts them in the top tier of service providers.”

Through its partnership with CloudHealth, FutureProof is able to deliver a highly effective cost management and governance service enabling enterprise cloud users to control costs and ensure they run optimised and well-purchased cloud environments.

“For enterprise organizations, managing cloud spend across multiple accounts and multiple clouds is challenging. Our highly valued partnership with CloudHealth enables us to support customers in managing their cloud spend and governing their cloud environments,” said Geoff Davies, Co-founder and Director of FutureProof.

About FutureProof

FutureProof are a technology consulting company that works with large organisations to deliver effective change programmes in complex environments. The company has a wide range of experience across many sectors and a track record of delivering strategic initiatives.

FutureProof executes highly successful cloud adoptions using its unique in-house developed “AppScore” platform for large scale migrations and datacenter exits.

Through its CloudControl offering the company enables cloud users to manage their cloud spend and govern their environments.

Mining cryptocurrencies such as Bitcoin, Ethereum and Ripple is one way to make money – quite literally. But to get rich you need a vast amount of powerful and costly computing resources at your disposal.

That explains the phenomenon of browser-based cryptojacking: hackers running mining software on victims’ computers without their knowledge or consent to generate Bitcoin or other currencies, while the victims pick up the tab.

The problem for hackers is that this type of cryptojacking is rarely lucrative: a recent research paper from a German university suggests that malicious websites which execute mining code on visitors’ systems generate an average of less than $6 a day.

To make serious money the bad guys need serious computing resources at their disposal, and there’s one obvious place to find them: public cloud providers.

And that explains a disturbing new trend: hackers (or the bots they control) hunt down vulnerable cloud admin accounts, spin up virtual machines or deploy containers (often via unsecured Kubernetes consoles), and put them to work mining cryptocurrencies without the account holders knowing. According to research by security firm RedLock, victims of these types of attacks include high-profile companies such as Tesla, Aviva, and Gemalto.

The first inkling the account owner may have that something is amiss comes at the end of the month, when they discover that their cloud bill has gone through the roof. Even then it may not be easy to work out exactly what has been going on. Anton Gurov, CloudHealth Technologies' Director of Technical Operations, recently provided a fascinating insight into these attacks.

Just this year we’ve seen multiple attacks happen with our clients. Typically, it manifests itself as a sharp and unexpected jump in spend and a number of large instance types running at high utilisation.

And it’s not small numbers – all the attacks we’ve seen have run up cloud bills in excess of $50,000.

The hacker could be a criminal or part of an organized crime gang, or they might be an agent of a nation state like North Korea looking to generate much-needed cash reserves. This raises the question: what else could they do? If they can compromise a cloud account and spin up servers, they could also snapshot any interesting virtual machines or containers and exfiltrate them to examine at their leisure, or simply extract any interesting data. Once they have what they want, they could then leave a mining operation running to make some money while they move on to the next victim.

Or it may not be a hacker at all who is responsible for the unauthorized usage: a disgruntled ex-employee could have set up a Bitcoin mining operation before leaving to earn some extra cash at the company’s expense, or an opportunist current employee may have set one up on the sly with the expectation that it is unlikely to be detected.

What’s the solution to this problem?

Understanding exactly what cloud resources are being consumed, and what they are doing, is key to detecting any unauthorized usage. But native cost reporting tools in cloud platforms are inadequate, which is where dedicated cost management solutions come in.

Platforms like CloudHealth bring all of your billing data into one location, surfacing it via an easy-to-use web interface so you can quickly see abnormal changes in spend in near real time. You can also apply policies, setting an alert if, for example, a monthly bill is forecast to rise by more than 10%. Policies can also be established to automatically shut down servers that do not conform to an organisation's tagging strategy, another fast-reacting way to minimise the cost impact of a hacked cloud account.
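The forecast-threshold check behind such a policy is conceptually simple. As a minimal sketch (the function name and 10% default mirror the example above; this is illustrative, not CloudHealth's actual policy engine):

```python
def spend_alert(forecast: float, baseline: float, threshold_pct: float = 10.0) -> bool:
    """Return True if forecast spend exceeds the baseline by more than
    the threshold percentage -- the trigger for a policy alert."""
    if baseline <= 0:
        # Any spend on a previously empty account is anomalous.
        return forecast > 0
    increase_pct = (forecast - baseline) / baseline * 100
    return increase_pct > threshold_pct

# A bill forecast to jump from $40,000 to $95,000 fires immediately;
# a 2.5% rise does not.
print(spend_alert(95_000, 40_000))  # True
print(spend_alert(41_000, 40_000))  # False
```

A cryptojacked account typically trips this kind of check within days rather than at month end, which is the whole point of near-real-time monitoring.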

This makes it simple to spot any anomalous usage almost immediately, and by drilling down you can identify the source of the extra cost and detect any unauthorized resource usage.

Whilst detecting breaches is a key part of a security strategy, prevention, obviously, is essential. The leading cloud management platforms provide security reviews against best practices enabling you to quickly spot weaknesses that could be exploited and receive recommendations to harden your public cloud accounts.

These include disabling API access to your root account and enabling multi-factor authentication (MFA) for it. All privileged users and operators should also be required to use MFA. (On AWS, this can be enforced through IAM policy.)
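A common pattern for enforcing MFA on AWS is an IAM policy that denies actions when the request was not authenticated with MFA. The sketch below is deliberately simplified (AWS's published example carves out the actions a user needs in order to set up their own MFA device; the `Sid` is arbitrary):

```python
import json

# Deny all actions for requests not authenticated with MFA.
# BoolIfExists also matches requests where the MFA key is absent
# (e.g. plain access-key API calls), so those are denied too.
deny_without_mfa = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllWithoutMFA",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        }
    ],
}

print(json.dumps(deny_without_mfa, indent=2))
```

Attached to a group of privileged users, a policy along these lines makes a stolen access key alone useless for spinning up mining instances.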

CloudHealth's Gurov provides detailed instructions for protection in his presentation. And it's not just your production accounts that are at risk: we've seen successful attacks against sandbox and dev accounts, where security controls are typically weaker.

By using a cloud management platform like CloudHealth and taking some relatively simple but effective steps, you can ensure that your cloud resources are working for you, not mining cryptocurrencies for somebody else.

If you’re not already backing up onto the cloud, why not? It’s easy, incredibly cost effective and offers a range of other potential benefits.

One of the most welcome features of cloud backup is that it takes away many day-to-day IT chores, leaving your people to get on with more productive work. No need to back up onto tape, store the files in a fire vault or use an off-premises provider such as Iron Mountain, with all the time-consuming processes that go with it.

It is likely to cost less than these old methods too, and there’s less risk of accidentally deleting the wrong information while carrying out a backup.

Cloud fundamentally changes the way backups get done, and entirely for the better.

With the effectively unlimited storage the cloud offers, you can stream all your backup and site data to the cloud. You don't need to plan the amount of space required, and you have none of the integrity problems that come with tapes. Cloud backups also happen faster than manual tape backups.

Backups in the cloud mean you can implement longer term storage solutions at lower costs, helping you stay on the right side of data compliance rules with greater ease.

Cloud backup options

While migrating applications to the cloud typically takes some planning, there are some amazingly simple options when it comes to backups. For example, we helped one of our smaller clients configure a network-attached storage device to perform automatic backups to the cloud.

Here are some other examples of the options available to businesses looking to adopt cloud-based backups:

  1. Carry on using your existing backup software and point it at cloud storage via a “virtual tape library”. Using the software licence you have already invested in entails minimal change and minimal cost.
  2. Use AWS and/or Microsoft tools to manage backup and restore of your on-premises servers using cloud storage. There is no charge for the tools themselves; you simply pay for the storage.
  3. For advanced backup requirements, third-party products such as Commvault can be deployed, providing a single consistent platform for protecting data and applications across on-premises and cloud.
  4. In addition to protecting data using cloud backup, you can provide DR capability by taking copies of your servers and streaming them into the cloud. Microsoft Azure Site Recovery is one such tool, replicating near-instantaneously into the cloud. This option essentially offers sophisticated disaster recovery into the bargain: if you have a failure of either data centre or servers, you can restore the service to operation in the cloud in a short space of time, potentially within minutes. In a worst-case scenario where your premises are wiped out, you will not have to worry about losing your data stores, as they are all off site. It also means there is far less for your people to do in the event of a DR invocation.
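As a minimal sketch of the second option, assuming an S3 bucket and AWS credentials already exist (the bucket name and the date-stamped prefix scheme below are hypothetical), copies of local files can be pushed to cloud storage under a per-day prefix:

```python
from datetime import date
from pathlib import PurePosixPath

def backup_key(filename: str, prefix: str = "backups") -> str:
    """Build a date-stamped object key, e.g. backups/2018-12-10/payroll.db,
    so each day's backup lands under its own prefix."""
    return str(PurePosixPath(prefix) / date.today().isoformat() / filename)

# With boto3 installed and credentials configured, each file could then
# be uploaded (bucket name is a placeholder):
#   import boto3
#   s3 = boto3.client("s3")
#   s3.upload_file("payroll.db", "example-backup-bucket",
#                  backup_key("payroll.db"))
```

Pairing a scheme like this with lifecycle rules that move older prefixes to cheaper storage tiers is what makes the long-term retention mentioned above so inexpensive.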

In summary

As we've said before here, cloud blurs the lines between DR and backup, and it now enables organisations that could not previously afford more than basic backups to build holistic data and application protection solutions, affording them far greater resilience.

One of the greatest emerging benefits of the cloud is the possibilities it offers in disaster recovery (DR). Some are even calling it cloud's killer app.