When we founded FutureProof our aim was to help large organisations adopt cloud computing, because we saw that the business case for cloud was immense then, as it still is now.
Cloud enabled large organisations to remove the technology barriers and constraints that had held them back and to imagine new ways to do things, achieving more, faster and with fewer resources.
The advantages were plain. With no more long provisioning times for new kit, resources available within minutes and near-infinite storage on demand, they could take a completely new approach to capacity planning. Simpler application development and delivery were two further big benefits.
That was five years ago and yet despite all the promise and potential, big enterprises have largely failed to implement cloud for their core systems. For sure, every company has done some cloud and many have a “cloud first” strategy but generally this is with green-field applications that deliver on new requirements.
With legacy platforms many have run proofs of concept and pilots but gone no further. As a result the hum of hundreds of thousands of always-on servers in on-premises or co-located datacentres is deafening.
So commonplace is this lack of action that AWS have even coined a term for it: “The Great Stall”. So why have organisations failed to embrace what is so obviously of benefit to them?
Another technological step change from the last century provides a good insight. Financial Times columnist and presenter of BBC’s Radio 4’s More or Less, economist Tim Harford, describes in his excellent piece on the adoption of electricity how it took nearly 50 years for electricity to replace steam in manufacturing.
Today that seems absurd but, as Harford describes, adopting electricity in factories was far from straightforward and took a shift in mindset before we harnessed its now obvious benefits.
“Some factory owners did replace steam engines with electric motors, drawing clean and modern power from a nearby generating station,” writes Harford. “But given the huge investment this involved, they were often disappointed with the savings. Why? Because to take advantage of electricity, factory owners had to think in a very different way.”
The problem was that most industrialists in the early days simply replaced the central steam powered unit driving the same drive shafts, making little or no impact on productivity. It was not until they completely retooled their factories with tiny electric motors at each work bench that they could transform what their workers did and reimagine the factory based around modern and flexible production methods.
The whole article has an uncanny ring to it, and the many ways it reads across to cloud adoption, or the lack of it, are striking.
Like their early 20th century forebears, large organisations need to approach things in a completely new way. The shift in thinking required to maximise cloud for legacy environments in essence boils down to three areas:
A lack of understanding. ‘Just what is in our datacentres and why are we moving this to cloud?’ is a common refrain. The uncomfortable truth of enterprise IT is that most people don’t know exactly what’s in their datacentres. Where CMDBs or asset catalogues do exist, the coverage is patchy or incomplete. Hence traditional DC-to-DC migrations involve significant discovery work up front. Discovery tools are good at gathering information at the server level, but typically that only facilitates a lift and shift to cloud, which is sub-optimal. Knowing this (or at least suspecting it), an organisation’s cloud migration stalls. Migrating legacy systems to cloud needs an entirely different, application-centric approach. Tooling in this area is distinctly lacking, which is what led us to create AppScore, our own application-centric tool. Using such tools enables you to truly understand how an application is made up: its servers, databases and environments. In turn this makes determining a future state and associated cost straightforward, and so allows you to unlock the benefits of cloud. This is very much like the shift in mindset that was required to go from steam to electricity.
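To illustrate the idea rather than any particular product, here is a minimal sketch of what "pivoting" server-level discovery data into an application-centric view means. The records and field names below are invented for the example and are not AppScore's actual data model:

```python
from collections import defaultdict

# Hypothetical server-level discovery records, as a CMDB export might provide.
discovered_servers = [
    {"host": "db-01",  "app": "payments", "role": "database", "env": "prod"},
    {"host": "web-01", "app": "payments", "role": "web",      "env": "prod"},
    {"host": "web-02", "app": "payments", "role": "web",      "env": "test"},
    {"host": "app-01", "app": "billing",  "role": "app",      "env": "prod"},
]

def application_view(servers):
    """Pivot server-level records into an application-centric view:
    one entry per application, listing its servers and environments."""
    apps = defaultdict(lambda: {"servers": [], "environments": set()})
    for s in servers:
        apps[s["app"]]["servers"].append(s["host"])
        apps[s["app"]]["environments"].add(s["env"])
    return dict(apps)

view = application_view(discovered_servers)
print(view["payments"]["servers"])                 # ['db-01', 'web-01', 'web-02']
print(sorted(view["payments"]["environments"]))    # ['prod', 'test']
```

Once discovery data is grouped by application rather than by server, questions like "what would moving payments to cloud involve, and what would it cost?" become answerable, which is the shift the paragraph above describes.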
People and culture. Resourcing cloud projects is hard, as Steve Webb, our Head of Talent, recently observed. Finding properly skilled staff to work on cloud programmes is tough. As with all new technology there’s a very apparent skills shortage. And there’s no point relying on traditional SIs and vendors, because everyone has the same resource problem. Upskilling existing teams is a priority, and new training approaches using Cloud Centres of Excellence combined with modular training from companies like A Cloud Guru are part of the new thinking on learning.
Tackling the cultural side of cloud adoption is also important.
Politics and vested interests are the single biggest cause of delays in application transformation programmes.
Application teams used to something being a certain way can be resistant to change. An approach we’ve found to be very effective involves the app team taking a low risk environment of their app across to cloud, allowing them to understand how it works and build “cloud confidence”. Once that confidence is built, the push to bring other environments across, including production, typically comes from the app team itself.
Cloud governance. One of the greatest fears of senior managers with cloud is: ‘This could all get out of control and we receive a huge bill’. And a well-justified fear this is. Traditional datacentres have an in-built physical brake: at some point you’ll run out of rack space or hypervisor capacity. Increasing it requires capital investment, causing organisations to pause and consider whether it’s necessary. On cloud it’s all too easy to spin up new machines, over-provision servers and grow storage. Cloud sprawl is a very real thing. Reporting capabilities on the major cloud platforms are good but provide an incomplete picture. Therefore additional tools such as CloudHealth, when used effectively with new operating models built for cloud, provide a comprehensive way to manage cloud platforms.
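To make the governance point concrete, here is a minimal sketch of the kind of sprawl check such tools and operating models automate: flagging resources that have no accountable owner tag or have been left running for a long time. The inventory records, tag names and threshold are assumptions for illustration, not output from any real cloud API:

```python
from datetime import datetime, timezone, timedelta

# Assumed inventory export; in practice this would come from a billing/asset API.
resources = [
    {"id": "i-001",  "tags": {"owner": "payments"}, "launched": "2024-01-10"},
    {"id": "i-002",  "tags": {},                    "launched": "2024-06-01"},
    {"id": "vol-9",  "tags": {"owner": ""},         "launched": "2023-03-15"},
]

def flag_sprawl(resources, max_age_days=365, now=None):
    """Return ids of resources with no meaningful 'owner' tag,
    or that have been running longer than max_age_days."""
    now = now or datetime.now(timezone.utc)
    flagged = []
    for r in resources:
        untagged = not r["tags"].get("owner")  # missing or empty owner
        launched = datetime.fromisoformat(r["launched"]).replace(tzinfo=timezone.utc)
        too_old = (now - launched) > timedelta(days=max_age_days)
        if untagged or too_old:
            flagged.append(r["id"])
    return flagged

print(flag_sprawl(resources, now=datetime(2024, 7, 1, tzinfo=timezone.utc)))
# ['i-002', 'vol-9']
```

Running a check like this on a schedule, and routing the flagged list to the teams that own the spend, is one small example of the operating-model discipline that replaces the physical brake a datacentre used to provide.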
Cloud represents an order of magnitude increase in the consumption of computing. The fundamental components might be the same – disks, virtual servers, databases, networking etc. – but what’s different is the way the provider packages, presents and charges for them and how users consume them.
It is this that enables organisations to be revolutionary about their approach to technology and delivery and so effect transformative change.
But it isn’t an easy process, just as Harford observed about the transition from steam to electric power: “The thing about a revolutionary technology is that it changes everything – that’s why we call it revolutionary. And changing everything takes time and imagination and courage – and sometimes just a lot of hard work.”