Archive | Utility Computing

The Business Case for Private Cloud

19 Apr
Private Cloud Posts Should Come in Threes

Over the last year I have returned to the subject of ‘private cloud’ on a number of occasions.  Basically I’m trying to share my confusion as I still don’t really ‘get it’.

First of all I discussed some of the common concerns related to cloud that are used to justify a pursuit of ‘private cloud’ models.  In particular I tried to explain why most of these issues distract us from the actual opportunities; for me cloud has always been a driver to rethink the purpose and scope of your business.  In this context I tried to explain why – as a result – public and private clouds are not even vaguely equivalent.

More recently I mused on whether the whole idea of private clouds could lead to the extinction of many businesses who invest heavily in them.  Again, my interest was in whether over-investment in large scale private infrastructures – and the resulting loss of the ability to cede most of your business capabilities to partners – could be harmful.  Perhaps ‘cloud-in-a-box’ needs a government health warning like tobacco.

In this third post I’d like to consider the business case of private cloud to see whether the concept is sufficiently compelling to overcome my other objections.

A Reiteration of My View of Cloud

Before I start I just wanted to reiterate the way I think about the opportunities of cloud, as I’m pretty fed up of conversations about infrastructure, virtualisation and ‘hybrid stuff’.  To be honest I think the increase in pointless dialogue at this level has depressed my blog muse and rendered me mute for a while; I don’t think hypervisors have anything to do with cloud, and I don’t believe there’s any long term value in so-called ‘cloud bursting’ of infrastructure (apparently a particularly exciting subject in my circle), but I’m currently over-run by weight of numbers.

Essentially it’s easy to disappear down these technology rat holes but for me they all miss the fundamental point.  Cloud isn’t a technology disruption (although it is certainly disrupting the business models of technology companies) but ultimately a powerful business disruption.  The cloud enables – and will eventually force – powerful new business models and business architectures.

As a result cloud isn’t about technology or computing per se for me but rather about the way in which technology is changing the economics of working with others.  Cloud is the latest in a line of related technologies that have been driving down the transaction costs of doing business with 3rd parties.  To me cloud represents the integration, commoditisation and consumerisation of these technologies and a fundamental change in the economics of IT and the businesses that depend on it.  I discussed these issues a few years ago using the picture below.

[image]

Essentially, as collaboration costs move closer and closer to zero, so the shape of businesses will change to take advantage of better capabilities and lower costs.  Many of the business capabilities that organisations currently execute will be ceded to others, given that doing so will significantly raise the quality and focus of their own capabilities.  At the same time the rest will be scaled massively as they take advantage of the ability to exist in a broader ecosystem.  Business model experimentation will become widespread as the costs of start-up (and failure) become tiny and tied to the value created.

Cloud is a key part of enabling these wider shifts by providing the business platforms required to specialise without losing scale and to serve many partners without sacrificing service standardisation.  While we are seeing the start of this process through offerings such as infrastructure-as-a-service and software-as-a-service, these are just the tip of the iceberg.  As a very prosaic example, many businesses are now working hard to think about how they can extend their reach using business APIs; combine this with improving business architecture practices and the inherent multi-tenancy of the cloud and it is not difficult to imagine a future in which businesses first become a set of internal service providers and then go on to take advantage of the disaggregation opportunity.

In future, businesses will become more specialised, more disaggregated and more connected components within complex value webs.  Essentially every discrete step in a value stream could be fulfilled by a different specialised service provider, with no ‘single organisation’ owning a large percentage of the capabilities being coordinated (as they do today).
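
As a purely illustrative aside, a ‘business API’ of the kind mentioned above might look something like the sketch below – a minimal sketch, assuming the Flask micro-framework and an invented quote-pricing capability with per-tenant commercial terms; none of the names, endpoints or rates are real:

```python
# A toy 'business API' exposing a single, hypothetical quote-pricing
# capability to partners.  The endpoint, tenant scheme and pricing rule are
# all invented for illustration; a real service would sit on a platform
# providing subscription, billing and service level management.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical per-tenant commercial terms; in a real multi-tenant service
# these would live in the platform's account-management layer.
TENANTS = {"acme": {"discount": 0.10}, "globex": {"discount": 0.00}}

@app.route("/v1/quotes", methods=["POST"])
def create_quote():
    tenant = TENANTS.get(request.headers.get("X-Tenant-Id", ""))
    if tenant is None:
        return jsonify(error="unknown tenant"), 403
    body = request.get_json()
    price = body["quantity"] * body["unit_price"] * (1 - tenant["discount"])
    return jsonify(price=round(price, 2)), 201

if __name__ == "__main__":
    app.run()
```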

As a result of all of these forces my first statement is therefore always that ‘private cloud’ does not really exist; sharing some of the point technologies of early stage cloud platform providers (but at lower scale and without the rapid learning opportunities they have) is not the same as aggressively looking to leverage the fall in transaction costs and the availability of new delivery models to radically optimise your business.  Owning your own IT is not really a lever for unlocking the value of a business service based ecosystem but rather represents wasteful expense when the economics of IT have shifted decisively from those based on ownership to those based on access.  IT platforms are now independent economy-of-scale based businesses and not something that needs to be built, managed and supported on a business-by-business basis with all of the waste, diversity, delay and cost that this entails.  Whilst I would never condemn those who have the opportunity to improve their existing estates to generate value, I would not accept that investing in internal enhancement will ever truly give you the benefits of cloud.  For this reason I have always disliked the term ‘private cloud’.

In the light of this view of the opportunities of cloud, I would posit that business cases for private cloud could be regarded as lacking some sense even before we look at their merit.  Putting aside the business issues for a moment, however, let’s look at the case from the perspective of technology and how likely it is that you will be able to replicate the above benefits by internal implementation.

What Is a “Cloud”?

One of the confusing issues related to cloud is that it is a broad shift in the value proposition of IT and IT enabled services and not a single thing.  It is a complete realignment of the IT industry and by extension the shape of all industries that use it.  I have a deeper model I don’t want to get into here but essentially we could view cloud as a collection of different kinds of independent businesses, each with their own maturity models:

  • Platforms: Along the platform dimension we see increasing complexity and maturity, going from infrastructure-as-a-service, through platform-as-a-service and process-platform-as-a-service, to the kind of holistic service delivery platform I blogged about some time ago.  These are all increasingly mature platform value propositions based on technology commoditisation and economies of scale;
  • Services: Along the services dimension we see increasing complexity and maturity, going from ASP (single-tenant applications on IaaS), through software-as-a-service and business-processes-as-a-service, to complete business capabilities offered as a service.  While different services may have different economic models, from a cloud perspective they share the trait that they are essentially about codifying, capturing and delivering specialised IP as a multi-tenant cloud service; and
  • Consulting: Along the consulting dimension we see increasing complexity and maturity, going from IT integration and management, through cloud application integration and management and business process integration and management, to complex business value web integration and management.  These all sit in the same dimension as they are essentially relationship-based services rather than asset-based ones.

All of these are independent cloud business types that need to be run and optimised differently.  From a private cloud perspective, however, most people only think about the ‘platform’ case (i.e. only about technology) and think no further than the lowest level of maturity (i.e. IaaS) – even though consulting and integration is actually the most likely business type for IT departments to transition to (something I alluded to here).  In fact it’s probably an exaggeration to say that people think about IaaS, as most people don’t get beyond virtualisation technology.

Looking at services – which, surprisingly enough, are what businesses are actually interested in – we find probably the biggest of the many elephants in the room with respect to private cloud; if the cloud is about being able to specialise and leverage shared business services from others (whether applications, business process definitions or actual business capabilities) then those services – by definition – execute somewhere beyond the walls of the existing organisation (i.e. at the service provider).  So how do these fit with private cloud?  Will you restrict your business to only ever running the old and traditional single-tenant applications you already have?  Will you build a private cloud that has a flavour of every single platform used or operated by specialised service providers?  Will you restrict your business to service providers who are “compatible” with your “platform” irrespective of the business suitability of the service?  Or do you expect every service provider to rewrite their services to run on your superior cloud but still charge you the same for a bespoke service as they charge for their public service?  Whichever one you pick it’s probably going to result in some pain, so you might want to think about it.

Again, for the sake of continuing the journey let’s ignore the issue of services – as it’s an aspect of the business ecosystem problem we’ve already decided we need to ignore to make progress – and concentrate where most people stop thinking.  Let’s have a look at cloud platforms.

Your New Cloud Platform

The first thing to realise is that public cloud platforms are large scale, integrated, automated, optimised and social offerings organised by value to wrap up complex hardware, networks, middleware, development tooling, software, security, provisioning, monetisation, reporting, catalogues, operations, staff, geographies etc etc and deliver them as an apparently simple service.  I’ll say it again – cloud is not just some virtualisation software.  I don’t know why but I just don’t seem able to say that enough.  For some reason people just underestimate all this stuff – they only seem to think about the hypervisor and forget the rest of the complexity that actually takes a hypervisor and a thousand other components and turns them into a well-oiled, automated, highly reliable and cross functional service business operated by trained and motivated staff.

Looking at the companies that have really built and operated such platforms on the internet we can see that there are very few of them, due to:

  1. The breadth of cross functional expertise required to package and operate a mass of technologies coherently as a cost-effective and integrated service;
  2. The scarcity of talent with the breadth of vision and understanding required to deliver such an holistic offering; and
  3. The prohibitive capital investment involved in doing so.

Equally importantly, these issues all become increasingly pressing as the scope of the value delivered progresses up the platform maturity scale, beyond infrastructure and up to the kind of platform required for the realisation and support of the complete multi-tenant business capabilities we described at the beginning.

Looking at the companies who are building public cloud platforms it’s unsurprising that they are not enthusiastically embracing the nightmare of scaling down, repackaging, delivering and then supporting many on-premise installations of their complex platforms across multiple underfunded IT organisations for no appreciable value.  Rather they are choosing to specialise in delivering these platforms as service offerings to fully optimise the economic model for both themselves and (ironically) their customers.

Wherefore Art Thou, Private Cloud?

Without the productised expertise of organisations who have delivered a cloud platform, however, who will build your ‘private cloud’?  Ask yourself how they can have the knowledge to do so if they haven’t actually implemented and operated all of the complex components as a unified service at high scale and low cost.  Without ‘productised platforms’ built from the ground up to operate with the levels of integration, automation and cost-effectiveness required by the public cloud, most ‘private cloud’ initiatives will just be harried, underfunded and incapable IT organisations trying to build bespoke virtualised infrastructures with old, disparate and disconnected products along with traditional consulting, systems integration and managed services support.  Despite enthusiastic ‘cloud washing’ by traditional providers in these spaces, such individual combinations of traditional products and practices are not cloud, will probably cost a lot of money to build and support and will likely never be finished before the IT department is marginalised by the business for still delivering uncompetitive services.

Trying to blindly build a ‘cloud’ from the ground up with traditional products, the small number of use cases visible internally and a lack of cross functional expertise and talent – probably with some consulting and systems integration thrown in for good measure to help you on your way – could be considered to sound a little like an expensive, open-ended and high-risk proposition with the potential to result in a white elephant.  And this is before you concede that it won’t be the only thing you’re doing at the time, given that you also have a legacy estate to run and enhance.

Furthermore, go into most IT shops and check out how current most of their hardware and software is and how quickly they are innovating their platforms, processes and roles.  Ask yourself how much time, money and commitment a business invests in enabling its _internal IT department_ to pursue thought leadership, standards efforts and open source projects.  Even once the white elephant lands what’s the likelihood that it will keep pace with specialised cloud platform providers who are constantly improving their shared service as part of their value proposition?

For Whom (does) Your Cloud (set its) Tolls?

Once you have your private cloud budget who will you build it for?  As we discussed at the outset, your business will be increasingly ceding business capabilities to specialised partners in order to concentrate on its own differentiating capabilities.  This disaggregation will likely occur along economic lines as I discussed in a previous post, as different business capabilities in your organisation will be looking for different things from their IT provision based on their underlying business model.  Some capabilities will need to be highly adaptable, some highly scalable, some highly secure and some highly cost effective.  While the diversity of the public cloud market will enable different business capabilities within an organisation to choose different platforms and services without sacrificing the benefits of scale, any private cloud will necessarily be conflicted by a wide diversity of needs and therefore probably not be optimal for any of them.  Most importantly, every part of the organisation will probably end up paying for the gold-plated infrastructure required by a subset of the business, which is then forced onto everyone as the ‘standard’ for internal efficiency reasons.

You therefore have to ask yourself:

  1. Is it _really_ true that all of your organisation’s business capabilities _really_ need private hosting given their business model and assets?  I suspect not;
  2. How will you support all of the many individual service levels and costs required to match the economics of your business’s divergent capabilities? I suspect you can’t and will deliver a mostly inappropriate ‘one size fits all’ platform geared to the most demanding use cases; and
  3. How will you make your private infrastructure cost-effective once the majority of capabilities have been outsourced to partners?  The answer is that you probably won’t need to worry about it – I suspect you’ll be out of a job by then after driving the business to bypass your expensive IT provision and go directly to the cloud.

Have We Got Sign-off Yet?

So let’s recap:

  1. Private cloud misses the point of the most important disruption related to cloud – that is the opportunity to specialise and participate more fully in valuable new economic ecosystems;
  2. Private cloud ignores the fundamental fact that cloud is a ‘service-oriented’ phenomenon – that is, the benefits are gained by consuming things, uh, as a service;
  3. Private cloud implementation represents a distraction from that part of the new IT value chain where IT departments have the most value to add – that is as business-savvy consultants, integrators and managers of services on behalf of their business.

To be fair, however, I will take all of that value destruction off the table given that most people don’t seem to have got there yet.

So let’s recap again just on the platform bit.  It’s certainly the case that internal initiatives targeted at building a ‘private cloud’ are embarking on a hugely complex and multi-disciplinary bespoke platform build wholly unrelated to the core business of the organisation.  Furthermore, given that it is an increasing imperative that any business platform supports the secure exposure of an organisation’s business capabilities to the internet, such platforms must do this in new ways that are highly secure, standards based, multi-tenant and elastic.  In the context of the above discussion, it could perhaps be suggested that many organisations are therefore attempting to build bespoke ‘clouds’:

  1. Without proven and packaged expertise;
  2. Without the budget focus that public cloud companies need merely to stay in business;
  3. Often lacking both the necessary skills and the capability to recruit them;
  4. Under the constant distraction of wider day to day development and operational demands;
  5. Without support from their business for the activities required to support ongoing innovation and development;
  6. Without a clear strategy for providing multiple levels of service and cost that are aligned to the different business models in play within the company.

In addition, whatever you build will be bespoke to you in many technological, operational and business ways as you pick best of breed ‘bits’, integrate them together using your organisation’s existing standards and create operational procedures that fit into the way your IT organisation works today (as you have to integrate the ‘new ops’ with the ‘old ops’ to be ‘efficient’).  As a result, good luck with ever upgrading the whole thing given its patchwork nature and the ‘technical differentiation’ you’ve proudly built in order to realise a worse service than you could have had from a specialised platform provider with no time or cost commitment.

Oh and the costs to operate whatever eventually comes out the other end of the adventure – according to Microsoft at least – could potentially be anywhere between 10 and 80 times higher than those you could get externally right now (and that’s on the tenuous assumption that you get it right first time over the next few years and realise the maximum achievable internal savings – as you usually do, no doubt).  To rephrase this, we could say that it’s a plan to delay already-available benefits for at least three years, possibly longer if you mess up the first attempt.

I may be in the minority but I’m _still_ not convinced by the business case.

So What Should I Get Sign-off For?

My recommendation would be to just stop already.

And then consider that you are probably not a platform company but rather a consultant and integrator of services that helps your business be better.

So, my advice would be to:

  1. Stop (please) thinking (or at least talking) about hypervisors, virtual machines, ‘hybrid clouds’ and ‘cloud bursting’ and realise that there is inherently no business value in infrastructure in and of itself.  Think of IaaS as a tax on delivering value outcomes and try not to let it distract you as people look to make it more complex for no reason (public/private/hybrid/cross hypervisor/VM management/cloud bursting/etc).  It generates so much mental effort for so little business value;
  2. Optimise what you already have in house with whatever traditional technologies you think will help – including virtualisation – if there is a solid _short return_ business case for it but do not brand this as ‘private cloud’ and use it to attempt to fend off the public cloud;
  3. Model all of your business capabilities and understand the information they manage and the apps that help manage it.  Classify these business capabilities by some appropriate criteria such as criticality, data sensitivity, connectedness etc.  Effectively use Business Architecture to study the structure and characteristics of your business and its capabilities (see the sketch after this list);
  4. Develop a staged roadmap to re-procure (via SaaS), redevelop (on PaaS) or redeploy (to IaaS) 80% of apps within the public cloud.  Do this based on the security and risk characteristics of each capability (or even better replace entire business capabilities with external services provided by specialised partners); and
  5. Pressure cloud providers to address any lingering issues during this period to pave the way for the remaining 20% (with more sensitive characteristics) in a few years.
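
As a concrete (and deliberately toy) illustration of steps 3 and 4, the sketch below classifies capabilities and proposes a disposition for each; the criteria, thresholds and capability names are my own inventions, not a prescribed method:

```python
# A toy sketch of classifying business capabilities and proposing a cloud
# disposition for each, along the lines of steps 3 and 4 above.  The
# criteria and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Capability:
    name: str
    differentiating: bool   # part of the business's core, differentiating IP?
    data_sensitivity: int   # 1 (public) .. 5 (highly sensitive)
    criticality: int        # 1 (low) .. 5 (business-critical)

def disposition(c: Capability) -> str:
    if c.data_sensitivity >= 5:
        return "defer for now (part of the ~20% with sensitive characteristics)"
    if not c.differentiating:
        return "re-procure as SaaS (or cede the capability to a specialised partner)"
    if c.criticality >= 4:
        return "redevelop on PaaS"
    return "redeploy to IaaS"

for cap in [Capability("payroll", False, 3, 2),
            Capability("pricing engine", True, 4, 5),
            Capability("customer records", False, 5, 4)]:
    print(f"{cap.name}: {disposition(cap)}")
```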

Once you’ve arrived at 5) it may even be that a viable ‘private cloud’ model has emerged based on small scale and local deployments of ‘shrink wrapped boxes’ managed remotely by the cloud provider at some more reasonable level above infrastructure.  Even if this turns out to be the case at least you won’t have spent a fortune creating an unsupportable white elephant scaled to support the 80% of IT and business that has already left the building.

Whatever you do, though, try to get people to stop telling me that cloud is about infrastructure (and in particular your choice of hypervisor).  I’d be genuinely grateful.

Why Amazon Dedicated Instances Is No Big Deal… and Why it Really Is

29 Mar
Why It’s No Big Deal

I was interested yesterday in the amount of excitement that Amazon’s announcement of dedicated instances caused.  To me this seems like a sensible move from a public cloud provider in order to counter the widespread belief in large enterprises that they need to be physically as well as logically separate.  This represents a maturation of public cloud offerings in much the way I’ve discussed in the past and demonstrates that public clouds can evolve to provide the kinds of additional security enterprises (currently) perceive that they require.  This can only be a good thing as bit by bit public cloud companies like Amazon are removing the FUD generated by IT departments and traditional IT service companies and commoditising this completely unimportant stuff so that we can all move on and talk about the real business opportunities of the cloud.

Beyond the satisfaction of seeing another ‘roadblock’ item ticked off the list, however, in technical terms this seems like no big deal to me.  Essentially Amazon are offering you the ability to have a compute instance that takes up all of the resources on a single physical machine (whether you need all of those resources or not) and so fits into their existing framework of differently sized compute instances.  From that perspective it doesn’t feel groundbreaking as it merely ‘tweaks’ their existing model to mark a physical machine as ‘full’ (I’m obviously over-simplifying but intentionally so).

For these reasons I don’t subscribe to the idea that this ‘breaks’ the cloud model or is in some way not multi-tenant since there are no upfront costs for hardware, you still pay for what you use and the whole platform infrastructure is still shared.  The only difference is that you can choose to have some of your compute instances marked as requiring the resources of a complete physical machine.  Interestingly this is also my understanding of how Azure works – their compute instances are broken down into subsets of a physical machine; if you take the biggest instance you’ve essentially got that machine to yourself (although I guess that’s a side-effect of design rather than a conscious offering as per Amazon).
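
To make the over-simplification explicit, here is a toy placement model – my own sketch, emphatically not Amazon’s or Azure’s actual scheduler – showing how ‘dedicated’ can amount to little more than a flag that marks an otherwise shared host as full:

```python
# A deliberately over-simplified placement model: a 'dedicated' instance is
# just a request that marks its physical host as full, whatever fraction of
# the host's resources it actually consumes.
class Host:
    def __init__(self, slots: int):
        self.slots = slots          # capacity in instance 'slots'
        self.used = 0
        self.dedicated_to = None    # tenant holding the whole machine, if any

    def can_place(self, slots: int, dedicated: bool) -> bool:
        if self.dedicated_to is not None:
            return False            # machine already reserved for one tenant
        if dedicated:
            return self.used == 0   # a dedicated instance needs an empty machine
        return self.used + slots <= self.slots

    def place(self, tenant: str, slots: int, dedicated: bool) -> None:
        assert self.can_place(slots, dedicated)
        self.used += self.slots if dedicated else slots
        if dedicated:
            self.dedicated_to = tenant

host = Host(slots=8)
host.place("acme", slots=1, dedicated=True)    # one small instance, whole host consumed
assert not host.can_place(slots=1, dedicated=False)
```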

Why It Really Is a Big Deal

So technically we can consider this move to be a relatively small deal even though it is perceived by many as a potentially game changing additional capability.

And frankly that’s the big deal.

Most ‘private cloud’ initiatives from enterprise IT or traditional IT service vendors start from the perspective of trying to protect existing models by merely adding virtualisation and management to existing estates.  They are trying to extend an old model of enterprise IT in the vague hope that it will give them the same benefits as public cloud.  The two things are not even vaguely equivalent, however, and they are hugely underestimating the differences.  There is no way that such efforts will result in something as sophisticated as Amazon’s public cloud, something that has been built from the bottom up as an optimised, integrated and low cost service that commoditises many complex products, processes and infrastructure into a single platform that caters for general usage.  There’s just too much distraction and baggage (systems wise and business model wise) for such efforts to ever succeed.  It’s not even putting lipstick on a pig but more like fluffing its muddy hair a little and half closing your eyes.

On the other hand Amazon have proven that not only are they able to build a platform that can operate at high scale and low cost for a non-enterprise market but that this new model can also be extended to cater for enterprise needs.  And they can do this competitively because they have done the spade work required to serve a lower cost market.  Adding some features to separate tenants using your ability to manage a commodity platform (for example) is much easier than trying to work out how to strip huge costs from traditional models of ownership.  This is the traditional pattern of disruptive innovation where a competitor seen as unfit for purpose by demanding users builds solutions that are far more capable and cost effective for the low end before leveraging these benefits upwards to oust incumbent suppliers at the upper end of the market.

In evolutionary terms cloud is a point of ‘punctuated equilibrium’ where the criteria for fitness to survive changes rapidly – whereas previously an ability to afford the ownership of complex, bespoke IT was a competitive advantage, it has now become a distinct disadvantage for everything except a small set of differentiating processes that represent core capabilities.  Furthermore in such a rapidly changing environment companies like Amazon who are focused around a key part of the emerging value web (i.e. an infrastructural business model focused on a commoditised IT platform) can rapidly evolve based on the selection criteria of the market, leaving traditional participants trapped and encumbered by outmoded business models.

Today this traditional end of the market is populated by enterprise IT departments, software providers, hardware providers and IT service providers – all of whom will increasingly see huge losses of business when judged for fitness against commoditised public cloud platforms.  Essentially many such participants will literally have no competitive offering as they have been prevented from making the shift by their own evolution of traits specialised for a previous environment.  As a result expect to see even more rabid ‘cloud washing’ and pushing of ‘private clouds’ by such vendors as the literal need for them ebbs away – effectively many of them will need to extend existing business for as long as possible while bringing new cloud platforms to market or deciding to cede this business completely.

Effectively companies who to date have been incapable of changing are being eaten from underneath and must decide whether they try to compete (in which case they need significant business, structural and asset realignment) or retreat into other business types that don’t compete with platforms (so consulting or implementation of vertical business services for instance – for IT departments see here for a suggestion).

Why It’s a Really Big Deal For You

For me this announcement is further proof of my long standing belief that you cannot build a successful cloud platform as an extension of old models and from within a business for whom it is not their core concern.  Rather successful providers must be ruthlessly focused on building a new cloud platform that integrates, optimises and simplifies complex technologies into a low cost service.  As a cloud platform consumer you should therefore think very carefully about the implications of this and consider whether buying that hypervisor software, hardware and services for a private cloud implementation will really get you an Amazon of your very own – or just further baggage for your business to drag along.

UPDATE:  Added link to Amazon announcement at the beginning of the post as… I forgot.

Reporting of “Cloud” Failures

12 Oct

I’ve been reading an article from Michael Krigsman today related to Virgin Blue’s “cloud” failure in Australia along with a response from Bob Warfield.  These articles raised the question in passing of whether such offerings can really be called cloud offerings and also brought back the whole issue of ‘private clouds’ and their potentially improper use as a source of FUD and protectionism.

Navitaire essentially seem to have been hosting an instance of their single-tenancy system in what appears to be positioned as a ‘private cloud’.  As other people have pointed out, if this was a true multi-tenant cloud offering then everyone would have been affected and not just a single customer.  Presumably then – as a private cloud offering – this is more secure, more reliable, has service levels you can bet the business on and won’t go down.  Although looking at these reports it seems like it does, sometimes.

Now I have no doubt that Navitaire are a competent, professional and committed organisation who are proud of the service they offer.  As a result I’m not really holding them up particularly as an example of bad operational practice but rather to highlight widespread current practices of repositioning ‘legacy’ offerings as ‘private cloud’ and the way in which this affects customers and the reporting of failures.

Many providers whose software or platform is not multi-tenant are aggressively positioning their offerings as ‘private cloud’, both as an attempt to maintain revenues for their legacy systems and as a slightly cynical way to play on companies’ worries about sharing.  Such providers are usually traditional software or managed service providers who have no multi-tenant expertise or assets; as a result they try to brand things cloud whilst really just delivering old software in an old hosted model.  Whilst there is still potentially a viable market in this space – i.e. moving single-tenant legacy applications from on-premise to off-premise as a way of reducing the costs of what you already have and increasing focus on core business – such offerings are really just managed services and not cloud offerings.  The ‘private’ positioning is a sweet spot for these people, however, as it simultaneously allows them to avoid the significant investment required to recreate their offerings as true cloud services, prolongs their existing business models and plays on customers’ uncertainty about security and other issues.  Whilst I understand the need to protect revenue at companies involved in such ‘cloud washing’ – and thus would stop short of calling these practices cynical – it illustrates that customers do need to be aware of the underlying architecture of offerings (as Phil Wainwright correctly argued).  In reality most current ‘private cloud’ offerings are not going to deliver the levels of reliability, configurability and scale that customers associate with the promise of the cloud.  And that’s before we even get to the more business transformational issues of connectivity and specialisation.

Looking at these kinds of offerings we can see why single-tenant software and private infrastructure provided separately for each customer (or indeed internally) is more likely to suffer a large scale failure of the kind experienced by Virgin Blue.  Essentially developing truly resilient and failure optimised solutions for the cloud needs to address every level of the offering stack and realistically requires a complete re-write of software, deep integration with the underlying infrastructure and expert operations who understand the whole service intimately.  This is obviously cost prohibitive without the ability to share a solution across multiple customers (remember that cloud != infrastructure and that you must design an integrated infrastructure, software and operations platform that inherently understands the structure of systems and deals with failures across all levels in an intelligent way).  Furthermore even if cost was not a consideration, without re-development the individual parts that make up such ‘private’ solutions (i.e. infrastructure, software and operations) were not optimised from the beginning to operate seamlessly together in a cloud environment and can be difficult to keep aligned and manage as a whole.  As a result it’s really just putting lipstick on a pig and making the best of an architecture that combines components that were never meant to be consumed in this way.

However much positioning companies try to do, it’s plain that you can’t get away from the fact that ultimately multi-tenancy at every level of a completely integrated technology stack will be a pre-requisite for operating reliable, scalable, configurable and cost effective cloud solutions.  As a result – and in defiance of the claims – the lack of multi-tenant architectures at the heart of most offerings currently positioned as ‘private cloud’ (both hardware and software related, internal and external) probably makes them less secure, less reliable, less cost effective and less configurable (i.e. less able to meet a business need) than their ‘public’ (i.e. new) counterparts.

In defiance of the current mass of positioning and marketing to the contrary, then, it could be suggested that companies like Virgin Blue would be less likely to suffer catastrophic failures in future if they sought out real, multi-tenant cloud services that share resources and thus have far greater resilience than those that have to accommodate the cost profiles of serving individual tenants using repainted legacy technologies.  This whole episode thus appears to be a failure of the notion that you can rebrand managed services as ‘private cloud’ rather than a failure of an actual cloud service.

Most ironically of all, the headlines incorrectly proclaiming such episodes as failures of cloud systems will fuel fear within many organisations and make them even more likely to fall victim to the FUD from disingenuous vendors and IT departments around ‘private cloud’.  In reality, failures such as the case discussed may just prove that ‘private cloud’ offerings create exposure to far greater risk than adopting real cloud services, due to the incompatibility of architecting for high scale and failure tolerance across a complete stack at the same time as architecting for the cost constraints of a single tenant.

Private Clouds “Surge” for Wrong Reasons?

14 Jul

I read a post by David Linthicum today on an apparent surge in demand for Private Clouds.  This was in turn spurred by thoughts from Steve Rosenbush on increasing demand for Private Cloud infrastructures.

To me this whole debate is slightly tragic, as I believe that most people are framing the wrong issues when considering the public vs private cloud debate.  (Frankly, for me it is a ridiculous debate: in my mind ‘the cloud’ can only exist ‘out there, somewhere’ and thus be shared, so a ‘private’ cloud can only be a logically separate area of a shared infrastructure and not an organisation specific infrastructure which merely shares some of the technologies and approaches – which, frankly, is business as usual and not a cloud.  For that reason, when I talk about public clouds I also include such logically private clouds running on shared infrastructures.)  As David points out there are a whole host of reasons that people push back against the use of cloud infrastructures, mostly to do with retaining control in one way or another.  In essence there is a list of IT issues that people raise as absolute blockers requiring private infrastructure to solve – particularly control, service levels and security – whilst ignoring the business benefits of specialisation, flexibility and choice.  Often “solving” the IT issues and propagating a model of ownership and mediocrity in IT delivery when it’s not really necessary merely denies the business the opportunity to solve its issues and transformationally improve its operations (and surely optimising the business is more important than undermining it in order to optimise the IT, right?).  That’s why for me the discussion should be about the business opportunities presented by the cloud and not simply a childish public vs private debate at the – pretty worthless – technology level.

Let’s have a look at a couple of issues:

  1. The degree of truth in the control, service and security concerns most often cited about public cloud adoption and whether they represent serious blockers to progress;
  2. Whether public and private clouds are logically equivalent or completely different.

IT Issues and the Major Fallacies

Control

Everyone wants to be in control.  I do.  I want to feel as if I’m moving towards my goals, doing a good job – on top of things.  In order to be able to be on top of things, however, there are certain things I need to take for granted.  I don’t grow my own food, I don’t run my own bank, I don’t make my own clothes.  In order for me to concentrate on my purpose in life and deliver the higher level services that I provide to my customers there are a whole bunch of things that I just need to be available to me at a cost that fits into my parameters.  And to avoid being overly facetious I’ll also extend this into the IT services that I use to do my job – I don’t build my own blogging software or create my own email application but rather consume all of these as services over the web from people like WordPress.com and Google. 

By not taking personal responsibility for the design, manufacture and delivery of these items, however (i.e. by not maintaining ‘control’ of how they are delivered to me), I gain the more useful ability to be in control of which services I consume to give me the greatest chance of delivering the things that are important to me (mostly, lol).  In essence I would have little chance of sitting here writing about cloud computing if I also had to cater to all my basic needs (from both a personal as well as IT perspective).  I don’t want to dive off into economics but simplistically I’m taking advantage of the transformational improvements that come from division of labour and specialisation – by relying on products and services from other people who can produce them better and at lower cost I can concentrate on the things that add value for me.

Now let’s come back to the issue of private infrastructure.  Let’s be harsh.  Businesses simply need IT that performs some useful service.  In an ideal world they would simply pay a small amount for the applications they need, as they need them.  For 80% of IT there is absolutely no purpose in owning it – it provides no differentiation and is merely an infrastructural capability that is required to get on with value-adding work (like my blog software).  In a totally optimised world businesses wouldn’t even use software for many of their activities but rather consume business services offered by partners that make IT irrelevant. 

So far then we can argue that for 80% of IT we don’t actually need to own it (i.e. we don’t need to physically control how it is delivered) as long as we have access to it.  For this category we could easily consume software as a service from the “public” cloud and doing so gives us far greater choice, flexibility and agility.

In order to deliver some of the applications and services that a business requires to deliver its own specialised and differentiated capabilities, however, they still need to create some bespoke software.  To do this they need a development platform.  We can therefore argue that the lowest level of computing required by a business in future is a Platform as a Service (PaaS) capability; businesses never need to be aware of the underlying hardware as it has – quite literally – no value.  Even in terms of the required PaaS capability the business doesn’t have any interest in the way in which it supports software development as long as it enables them to deliver the required solutions quickly, cheaply and with the right quality.  As a result the internals of the PaaS (in terms of development tooling, middleware and process support) have no intrinsic value to a business beyond the quality of outcome delivered by the whole.  In this context we also do not care about control since as long as we get the outcomes we require (i.e. rapid, cost effective and reliable applications delivery and operation) we do not care about the internals of the platform (i.e. we don’t need to have any control over how it is internally designed, the technology choices to realise the design or how it is operated).  More broadly a business can leverage the economies of scale provided by PaaS providers – plus interoperability standards – to use multiple platforms for different purposes, increasing the ‘fitness’ of their overall IT landscape without the traditional penalties of heterogeneity (since traditionally they would be ‘bound’ to one platform by the inability of their internal IT department to cost-effectively support more than one technology).

Thinking more deeply about control in the context of this discussion we can see that for the majority of IT required by an organisation concentrating on access gives greater control than ownership due to increased choice, flexibility and agility (and the ability to leverage economies of scale through sharing).  In this sense the appropriate meaning of ‘control’ is that businesses have flexibility in choosing the IT services that best optimise their individual business capabilities and not that the IT department has ‘control’ of the way in which these services are built and delivered.  I don’t need to control how my clothes manufacturer puts my t-shirt together but I do want to control which t-shirts I wear.  Control in the new economy is empowerment of businesses to choose the most appropriate services and not of the IT department to play with technology and specify how they should be built.  Allowing IT departments to maintain control – and meddle in the way in which services are delivered – actually destroys value by creating a burden of ownership for absolutely zero value to the business.  As a result giving ‘control’ to the IT department results in the destruction of an equal and opposite amount of ‘control’ in the business and is something to be feared rather than embraced.

So the need to maintain control – in the way in which many IT groups are positioning it – is the first major and dangerous fallacy. 

Service levels

It is currently pretty difficult to get a guaranteed service level with cloud service providers.  On the other hand, most providers consistently deliver uptime above 99% and so the actual service levels are pretty good.  The lack of a piece of paper with this actual, experienced service level written down as a guarantee, however, is currently perceived as a major blocker to adoption.  Essentially IT departments use it as a way of demonstrating the superiority of their services (“look, our service level says 5 nines – guaranteed!”) whilst the stock they put in these service levels creates FUD in the minds of business owners who want to avoid major risks.
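
As a sense check on what those guarantees would actually promise, a few lines of arithmetic convert availability percentages into downtime budgets (this is simple unit conversion on my part, not provider data):

```python
# Downtime allowed per year at common availability levels -- a quick way to
# sanity-check any 'guaranteed' service level against delivered reality.
MINUTES_PER_YEAR = 365 * 24 * 60

for nines in ["99%", "99.9%", "99.99%", "99.999%"]:
    availability = float(nines.strip("%")) / 100
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines:>8} uptime -> {downtime:8.1f} minutes of downtime a year")
```

Five nines allows barely five minutes of downtime a year; it is worth asking how many internal estates have ever actually delivered that.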

So let’s lay this out.  People compare the current lack of service level guarantees from cloud service providers with the ability to agree ‘cast-iron’ service levels with internal IT departments.  Every project I’ve ever been involved in has had a set of service levels but very few ever get delivered in practice.  Sometimes they end up being twisted into worthless measures for simplicity of delivery – like whether a machine is running irrespective of whether the business service it supports is available – and sometimes they are just unachievable given the level of investment and resources available to internal IT departments (whose function, after all, is merely that of a barely-tolerated but traditionally necessary drain on the core purpose of the business). 

So to find out whether I’m right or not and whether service level guarantees have any meaning, I will wait until every IT department in the world puts their actual achieved service levels up on the web like – for instance – Salesforce.  I’m keen to compare practice rather than promises.  Irrespective of guarantees, my suspicion is that most organisations’ actual service levels are woeful in comparison to the actual service levels delivered by cloud providers, but I’m willing to be convinced.  Despite the illusion of SLA guarantees and enforcement, the majority of internal IT departments (and managed service providers who take over all of those legacy systems for that matter) get nowhere near the actual service levels of cloud providers irrespective of what internal documents might say.  It is a false comfort.  Businesses therefore need to wise up and consider real data and actual risks – in conjunction with the transformational business benefits that can be gained by offloading capabilities and specialising – rather than let such meaningless nonsense take them down the old path to ownership; in doing so they are potentially sacrificing a move to cloud services and therefore their best chance of transforming their relationship with their IT and optimising their business.  This is essentially the ‘promise’ of buying into updated private infrastructures (aka ‘private cloud’).

A lot of it comes down to specialisation again and the incentives for delivering high service levels.  Think about it – a cloud provider (literally) lives and dies by whether the services they offer are up; without them they make no money, their stock falls and customers move to other providers.  That’s some incentive to maintain excellence.  Internally – well, what you gonna do?  You own the systems and all of the people so are you really going to penalise yourself?  Realistically you just grit your teeth and live with the mediocrity even though it is driving rampant sub-optimisation of your business.  Traditionally there has been no other option and IT has been a long process of trying to have less bad capability than your competitors, to be able to stagger forward slightly faster or spend a few pence less.  Even outsourcing your IT doesn’t address this since whilst you have the fleeting pleasure of kicking someone else at the end of the day it’s still your IT and you’ve got nowhere to go from there.  Cloud services provide you with another option, however, one which takes advantage of the fact that other people are specialising on providing the services and that they will live and die by their quality.  Whilst we might not get service levels – at this point in their evolution at least – we do get transparency of historical performance and actual excellence; stepping back it is critical to realise that deeds are more important than words, particularly in the new reputation-driven economy. 

So the perceived need for service levels as a justification for private infrastructures is the second major and dangerous fallacy.  Businesses may well get better service levels from cloud providers than they would internally and any suggestion to the contrary will need to be backed up by thorough historical analysis of the actual service levels experienced for the equivalent capability.  Simply stating that you get a guarantee is no longer acceptable. 

Security

It’s worth stating from the beginning that there is nothing inherently less secure about cloud infrastructures.  Let’s just get that out there to begin with.  Also, to get infrastructure as a service out of the way – given that we’re taking the position in this post that PaaS is the first level of actual value to a business – we can say that it’s just infrastructure; your data and applications will be no more or less secure than your own procedures make them, but the data centre is likely to be at least as secure as your own and probably much more so, due to the level of capability required by a true service provider.

So, starting from ground zero with things that actually deliver something (i.e. PaaS and SaaS), a cloud provider can build a service that uses any of the technologies that you use in your organisation to secure your applications and data – only they’ll have more use cases and hence will consider more threats than you will.  And that’s just the start.  From that point the cloud provider will also have to consider how they manage different tenants to ensure that their data remains secure, and they will also have to protect customers’ data from their own employees (i.e. the cloud service provider’s).  This is a level of security that is rarely considered by internal IT departments and results in more – and more deeply considered – data separation and encryption than would be possible within a single company.
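
For a flavour of what per-tenant data separation means in practice, here is a minimal sketch using per-tenant encryption keys.  It assumes the third-party Python ‘cryptography’ package; a real provider would hold keys in dedicated key-management infrastructure and layer many more controls on top:

```python
# A minimal illustration of per-tenant data separation via per-tenant
# encryption keys, so that even an operator with raw storage access cannot
# read another tenant's data without that tenant's key.  A real platform
# would keep keys in an HSM or key-management service, not in a dict.
from cryptography.fernet import Fernet

tenant_keys = {"acme": Fernet.generate_key(), "globex": Fernet.generate_key()}

def store(tenant: str, plaintext: bytes) -> bytes:
    return Fernet(tenant_keys[tenant]).encrypt(plaintext)

def load(tenant: str, ciphertext: bytes) -> bytes:
    return Fernet(tenant_keys[tenant]).decrypt(ciphertext)

blob = store("acme", b"commercially sensitive record")
assert load("acme", blob) == b"commercially sensitive record"
```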

Looking at the cloud service from the outside we can see that providers will be more obvious targets for security attacks than individual enterprises but counter-intuitively this will make them more secure.  They will need to be secured against a broader range of attacks, they will learn more rapidly and the capabilities they learn through this process could never be created within an internal IT organisation.  Frankly, however, the need to make security of IT a core competency is one of the things that will push us towards consolidation of computing platforms into large providers – it is a complex subject that will be more safely handled by specialised platforms rather than each cloud service provider or enterprise individually. 

All of these changes are part of the more general shift to new models of computing; to date the paradigm for security has largely been that we hide our applications and data from each other within firewalled islands.  Increasing collaboration across organisations and the cost, flexibility and scale benefits of sharing mean that we need to find a way of making our services available outside our organisational boundaries, however.  Again, in doing this we need to consider who is best placed to ensure the secure operation of applications that are supporting multiple clients – is it specialised cloud providers who have created a security model specifically to cope with secure open access and multi-tenancy for many customer organisations, or is it a group of keen “amateurs” with the limited experience that comes from the small number of use cases they have discovered within the bounds of a single organisation?  Furthermore, as more and more companies migrate onto cloud services – and such services become ever more secure – so the isolated islands will become prime targets for security attacks, since the likelihood that they can maintain top levels of security cut off from the rest of the industry – and with far less investment in security than can be made by specialised platform providers – becomes ever smaller.  Slowly isolationism becomes a threat rather than a protection.  We really are stronger together.

A final key issue that falls under the ‘security’ tag is that of data location (basically the perceived requirement to keep data in the country of the customer’s operating business).  Often this starts out as the major, major barrier to adoption, but slowly you often discover that people are willing to trade off where their data are stored when the costs of implementing such location policies can be huge for little value.  Again, in an increasingly global world businesses need to think more openly about the implications of storing data outside their country – for instance a UK company (perhaps even government) may have no practical issues in storing most data within the EU.  Again, however, in many cases businesses apply old rules or ways of thinking rather than challenging themselves in order to gain the benefits involved.  This is often tied into political processes – particularly between the business and IT – and leads to organisations not sufficiently examining the real legal issues and possible solutions in a truly open way.  This can often become an excuse to build a private infrastructure, fulfilling the IT department’s desire to maintain control over the assets but in doing so loading unnecessary costs and inflexibility onto the business itself – ironically as a direct result of the business’s unwillingness to challenge its own thinking.

Does this mean that I believe that people should immediately begin throwing applications into the cloud without due care and attention?  Of course not.  Any potential provider of applications or platforms will need to demonstrate appropriate certifications and undergo some kind of due diligence.  Where data resides is a real issue that needs to be considered but increasingly this is regional rather than country specific.   Overall, however, the reality is that credible providers will likely have better, more up to date and broader security measures than those in place within a single organisation. 

So finally – at least for me – weak cloud security is the third major and dangerous fallacy.

Comparing Public and Private

Private and Public are Not Equivalent

The real discussion here needs to be less about public vs private clouds – as if they are equivalent but just delivered differently – and more about how businesses can leverage the seismic change in model occurring in IT delivery and economics.  Concentrating on the small minded issues of whether technology should be deployed internally or externally as a result of often inconsequential concerns – as we have discussed – belittles the business opportunities presented by a shift to the cloud by dragging the discussion out of the business realm and back into the sphere of techno-babble.

The reality is that public and private clouds and services are not remotely equivalent; private clouds (i.e. internal infrastructure) are a vote to retain the current expensive, inflexible and one-size-fits-all model of IT that forces a business to sub-optimise a large proportion of its capabilities to make their IT costs even slightly tolerable.  It is a vote to restrict choice, reduce flexibility, suffer uncompetitive service levels and to continue to be distracted – and poorly served – by activities that have absolutely no differentiating value to the business. 

Public clouds and services on the other hand are about letting go of non-differentiating services and embracing specialisation in order to focus limited attention and money on the key mission of the business.  The key point in this whole debate is therefore specialisation; organisations need to treat IT as an enabler and not an asset, and they need to concentrate on delivering their services and not on how their clothes get made.

Summary

If there is currently a ‘surge’ in interest in private clouds it is deeply confusing (and disturbing) to me given that the basis for focusing attention on private infrastructures appears to be deeply flawed thinking around control, service and security.  As we have discussed not only are cloud services the best opportunity that businesses have ever had to improve these factors to their own gain but a misplaced desire to retain the IT models of today also undermines the huge business optimisations available through specialisation and condemns businesses to limited choice, high costs and poor service levels.  The very concerns that are expressed as reasons not to move to cloud models – due to a concentration on FUD around a small number of technical issues – are actually the things that businesses have most to gain from should they be bold and start a managed transition to new models.  Cloud models will give them control over their IT by allowing them to choose from different providers to optimise different areas of their business without sacrificing scale and management benefits; service levels of cloud providers – whilst not currently guaranteed – are often better than they’ve ever experienced and entrusting security to focused third parties is probably smarter than leaving it as one of many diverse concerns for stretched IT departments. 

Fundamentally, though, there is no equivalence between the concept of public (including logically private but shared) and truly private clouds; public services enable specialisation, focus and all of the benefits we’ve outlined whereas private clouds are just a vote to continue with the old way.  Yes, virtualisation might reduce some costs; yes, consolidation might help; but at the end of the day the choice is not the simple hosting decision it’s often made out to be but one of business strategy and outlook.  It boils down to a choice between being specialised, outward looking, networked and able to accelerate capability building by taking advantage of other people’s scale and expertise or rejecting these transformational benefits and living within the scale and capability constraints of your existing business – even as other companies transform and build new and powerful value networks without you.

Business Enablement as a Key Cloud Element

30 Apr

After finally posting my last update about ‘Industrialised Service Delivery’ yesterday I have been happily catching up with the intervening output of some of my favourite bloggers.

One post that caught my eye was a reference from Phil Wainewright – whilst he was talking about the VMforce announcement – to a post he had written earlier in the year about Microsoft’s partnership with Intuit.  Essentially one of his central statements related directly to the series of posts I completed yesterday (so part 1, part 2 and part 3):

“the breadth of infrastructure <required for SaaS> extends beyond the development functionality to embrace the entirely new element of service delivery capabilities. This is a platform’s support for all the components that go with the as-a-service business model, including provisioning, pay-as-you-go pricing and billing, service level monitoring and so on. Conventional software platforms have no conception of these types of capability but they’re absolutely fundamental to delivering cloud services and SaaS applications”.

This is one of the key points that I think is still – inexplicably – lost on many people (particularly people who believe that cloud computing is primarily about providing infrastructure as a service).  In reality the whole world is moving to service models because they are simpler to consume, deliver clearer value for more transparent costs and can be shared across organisations to generate economies of scale.  In fact ‘as a service’ models are not going to remain an IT phenomenon; they will increasingly extend to the way in which businesses deal with each other across organisational boundaries.  For the sale and consumption of such services to work, however, we need to be able to ‘deliver’ them; in this context we need to be able to market them, make them easy to subscribe to, manage billing and service levels transparently for both the supplier and consumer and enable rapid change and development over time to meet the evolving needs of service consumers.  As a result anyone who wants to deliver business capabilities in the future – whether these are applications or business process utilities – will need to ensure that their offering exhibits all of these characteristics. 

Interestingly these ‘business enablement’ functions are pretty generic across all kinds of software and services since they essentially cover account management, subscription, business model definition, rating and billing, security, marketplaces etc etc (i.e. all of the capabilities that I defined as being required in a ‘Service Delivery Platform’).  In this context the use of the term ‘Service Delivery Platform’ in place of cloud or PaaS was deliberate; what next generation infrastructures need to do is enable people to deliver business services as quickly and as robustly as possible, with the platforms themselves also helping to ensure trust by brokering between the interests of consumers and suppliers through transparent billing and service management mechanisms.
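To make this less abstract, here is a minimal sketch – in Python, with entirely invented names rather than any real platform’s API – of the business enablement fabric I have in mind.  Note that the service logic itself is nowhere in sight, which is exactly the point:

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Subscription:
    """A consumer's subscription to a published service offer."""
    account_id: str
    service_id: str
    price_per_call: float                      # pay-as-you-go pricing
    started: datetime = field(default_factory=datetime.utcnow)


class ServiceDeliveryPlatform:
    """The business enablement fabric around a service: marketplace,
    subscription, metering, rating and billing.  The service logic
    itself is deliberately absent."""

    def __init__(self):
        self.catalogue = {}       # service_id -> offer metadata
        self.subscriptions = []   # active subscriptions
        self.usage = {}           # (account_id, service_id) -> call count

    def publish(self, service_id, description, sla, price_per_call):
        """Marketplace: make an offer discoverable to consumers."""
        self.catalogue[service_id] = {
            "description": description, "sla": sla, "price": price_per_call}

    def subscribe(self, account_id, service_id):
        """Onboarding: subscribing should be trivial, not a procurement."""
        offer = self.catalogue[service_id]
        sub = Subscription(account_id, service_id, offer["price"])
        self.subscriptions.append(sub)
        return sub

    def record_usage(self, account_id, service_id, calls=1):
        """Metering: the raw input to billing and SLA reporting,
        visible to supplier and consumer alike."""
        key = (account_id, service_id)
        self.usage[key] = self.usage.get(key, 0) + calls

    def bill(self, account_id):
        """Rating and billing: usage multiplied by the subscribed price."""
        return sum(
            self.usage.get((s.account_id, s.service_id), 0) * s.price_per_call
            for s in self.subscriptions if s.account_id == account_id)
```

Everything a conventional software platform has no conception of – publish, subscribe, record_usage, bill – is precisely what the quote above calls fundamental to delivering cloud services.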

This belief in service delivery is one of the reasons I believe that the notion of ‘private clouds’ is an oxymoron – I found this hoary subject raised again on a Joe McKendrick post after a discussion on ebizQ – even setting aside the central point about the obvious loss of economies of scale.  Essentially the requirement to provide a whole business enablement fabric to facilitate cross-organisational service ecosystems – initially for SaaS but increasingly for organisational collaboration and specialisation – is just one more reason I believe that ‘private clouds’ are really just evolutions of on-premise architecture patterns – with all of the costs and complexity retained – and thus purely marketecture.  When decreasing transaction costs are enabling much greater cross-organisational value chains, the benefits of a public service delivery platform are immense, enabling organisations to both scale and evolve their operations more easily whilst also providing all of the business support they need to offer and consume business services in extended value chains.  Whilst some people may think that this is a pretty future-oriented reason not to like the notion of private clouds, for completeness I will also say that to me – in the sense of customer-owned infrastructures – they are an anachronism; this is just an extension of existing models (for good or ill) and nothing to do with ‘cloud’.  It is only the fact that most protagonists of such models are vendors with low-maturity offerings like packaged infrastructure and/or middleware solutions that makes it viable at all, since the complexity of delivering a true private SDP would be too great (not to mention ridiculously wasteful).  In my view a ‘private cloud’ in the sense of end-organisation deployment is just a new internal infrastructure (whether self-managed or via a service company) rather like the one you already have, but with a whole bunch of expensive new hardware and software (so 90% of the expense for only 10% of the benefits). 

To temper this stance I do believe that there is a more subtle, viable version of ‘privacy’ that will be supported by ‘real’ service delivery platforms over time – that of having a logically private area of a public SDP to support an organisational context (so a cohesive collection of branded services, information and partner integrations – or what I’ve always called ‘virtual private platforms’).  This differs greatly from the ‘literally’ private clouds that many organisations are positioning as a mechanism to extend the life of traditional hardware, middleware or managed service offerings – the ability of service delivery platforms to rapidly instantiate ‘virtual’ private platforms will be a core competency and will give the appearance and benefits of privacy whilst maintaining the transformational benefits of leveraging the cloud in the first place.  To me literally ‘private clouds’ on an organisation’s own infrastructure – with all of their capital expense, complexity of operation, high running costs and ongoing drag on agility – only exist in the minds of software and service companies looking to extend their traditional businesses for as long as possible. 

Industrialised Service Delivery Redux III

29 Apr

It’s a bit weird editing this more or less complete post 18 months later but this is a follow on to my previous posts here and here.  In those posts I discussed the need for much greater agility to cope with an increasingly unpredictable world and ran through the ways in which we can industrialise IT provision to focus on tangible business value and rapid realisation of business capability.  This story relied upon the core notion that technology is no longer a differentiator in and of itself and thus we just need workable patterns that meet our needs for particular classes of problem – which in turn reduces the design space we need to consider and allows increasing use of specialised platforms, templates and development tools.

In this final post I will discuss the notion that such standardisation calls into question the need to own such technology at all; essentially as platforms and tools become more standardised and available over the network so the importance of technology moves to access rather than ownership.

Future Consolidation

One of the interesting things from my perspective is that once you start to build out an asset-based business – like a service delivery platform – it quickly becomes subject to economies of scale.

It is rapidly becoming plain, therefore, that game changing trends such as:

  • Increasing middleware consolidation around traditional ‘mega platform’ providers;
  • Flexible infrastructure enabled by virtualisation technology;
  • Increasingly powerful abstractions such as service-orientation;
  • The growing influence of open source software and collaborating communities; and
  • The massively increased interconnectivity enabled by the web.

are all going to combine to change not just the shape of the IT industry itself but increasingly all industries; essentially as IT moves to service models so organisations will need to reshape themselves to align with these new realities, both in terms of their use of IT but also in terms of finding their distinctive place within their own disaggregating business ecosystems.

From a technology perspective it is therefore clear that these forces are combinatory and lead to accelerating commoditisation.  The implication of this acceleration is that decreasing differentiation should lead to increased consolidation as organisations no longer need to own and operate their own IT when such IT incurs cost and complexity penalties without delivering differentiation.

[Figure: accelerating commoditisation leading to platform consolidation]

In a related way such a shift by organisations to shared IT platforms is also likely to be an amplifying trend; as we see greater platform consolidation – and hence decreasing differentiation to organisations owning their own IT – so will laggard organisations become less competitive as a result of their expensive and high drag IT relative to their low cost, fleet of foot competitors.  Such organisations will then also seek to transition, eventually creating a tipping point at which ownership of IT becomes an anachronism.

From the supply perspective we can also see that as platforms become less differentiating and more commoditised they also become subject to increasing economies of scale – from an overall market perspective, therefore, offering platforms as a service becomes a far more effective use of capital than the creation and ownership of an island of IT, since scale technologies drift naturally towards consolidation.  There are some implications to this for the IT industry given the share of overall IT spend that goes on repeated individual installation and consulting for software and hardware but we shall leave that for another post.

As a result of these trends it is highly likely that we will see platform as a service propositions growing in influence fairly rapidly.  Initially these platforms are likely to be infrastructure-oriented and targeted at new SaaS providers or transitioning ISVs to lower the cost of entry but I believe that they will eventually expand to deliver the full business enablement support required by all organisations that need to exist in extended value webs (i.e. eventually everyone).  These latter platforms will need to have all of the capabilities I discussed in the previous post and will be far beyond the technology-centric platforms envisaged by the majority of emerging platform providers today.  Essentially as everybody becomes a service provider (or BPU in other terms) in their particular business ecosystem so they will need to rapidly realise, commercialise, manage and adapt the services they offer to their value webs.  In this latter scenario I believe that organisations will be caught in the jaws of a vice – the unbundling of capability to SaaS or other BPU providers to allow them to specialise and optimise the overall value stream will see their residual IT costs rocket as there are fewer capabilities to share them across; at the same time economies of scale produced by IT service companies will see the costs of platform as a service offerings plummet and make the transition a no-brainer.

So what would a global SDP look like?

[Figure: a global Service Delivery Platform]

Well remarkably like the one I showed in my previous posts given that I was leading up to this point, lol.  The first difference is that the main bulk of the platform is now explicitly deployed in the cloud – and it’ll obviously need to scale up and down smoothly and at low cost.  In addition all of the patterns that we discussed in my previous post will need to support multi-tenancy and such patterns will need to be built into the tools and factories that we will use to create systems optimised to run on our Service Delivery Platform.

At the same time the service factory becomes a way of enabling the broadest range of stakeholders to rapidly and reliably create services and applications that can be deployed to our platform – in fact it moves from being “just” an interesting set of tools to support industrialised capability realisation to being one of the main battlegrounds for PaaS providers trying to broaden their subscriber base by increasing the fidelity of realisation and reducing the barrier to entry to the lowest level possible.

Together the cloud platform and associated service factory will be the clear option of choice for most organisations, since it will yield the greatest economies of scale to the people using it.

One last element on this diagram that differentiates it from the earlier one is the on-premise ‘customer service platform’.  In this context there is still a belief in many quarters that organisations will not want to physically share space and hardware with other people – they may be less mature, they may not trust sufficiently or they may genuinely have reasons why their data and services are so important that they are willing to pay to host them separately.  In the long term I do not subscribe to this view and to me the notion of ‘private clouds’ – outside of perhaps government and military use cases – is oxymoronic and at best a transitional situation as people learn to trust public infrastructures.  On the other hand – whilst this may be playing with semantics – I can see the case for ‘virtual private clouds’ (i.e. logically ring-fenced areas of public clouds) that give the appearance and the majority of the benefits of being private through ‘soft’ partitioning (i.e. through logical security mechanisms) whilst retaining economies of scale through the avoidance of ‘hard’ partitioning (i.e. separate physical infrastructure).  Indeed I would state that such mechanisms for making platforms appear private (including whitelabelling capabilities) will be necessary to support the branding requirements of resellers, systems integrators and end organisations.  For the sake of completeness, however, I would position transitional ‘private clouds’ as reduced functionality versions of a Service Delivery Platform that simply package up some hardware but leave the majority of the operational and business support – along with things like backup and failover – back at the main data centres of the provider in order to create an acceptable trade-off in cost.
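As a purely illustrative sketch of what ‘soft’ partitioning might look like (Python, with invented names and data), the privacy lives in the security context and the branding, not in dedicated hardware:

```python
# A shared catalogue in which each offer declares the tenants entitled
# to see it.  All names and data are invented for illustration.
SHARED_CATALOGUE = {
    "credit-check": {"sla": "99.9%", "visible_to": ["bank-a", "bank-b"]},
    "billing":      {"sla": "99.5%", "visible_to": ["bank-a"]},
}


def virtual_private_view(tenant_id, brand):
    """'Soft' partitioning: a logically ring-fenced, whitelabelled view
    over one shared platform.  The tenant gets the appearance (and most
    of the benefits) of privacy while the underlying infrastructure,
    and its economies of scale, stays shared."""
    return {
        service: {**offer, "brand": brand}
        for service, offer in SHARED_CATALOGUE.items()
        if tenant_id in offer["visible_to"]
    }


# bank-b sees only what its security context entitles it to, under its
# own branding; no separate physical infrastructure is involved.
print(virtual_private_view("bank-b", "Bank B Service Hub"))
```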

Summary

So in this final post I have touched on some of the wider changes that are an implication of technology commoditisation and the industrialisation of service realisation.  For completeness I’ll recap the main messages from the three posts:

  • In post one I discussed how businesses are going to be forced to become much more aware of their business capabilities – and their value – by the increasingly networked and global nature of business ecosystems.  As a result they will be driven to concentrate very hard on realising their differentiating capabilities as quickly, flexibly and cost effectively as possible; in addition they will need to deliver these capabilities with stringent metrics.  This has some serious implications for the IT industry as we will need to shift away from a technology focus (where the client has to discover the value as a hit and miss emergent process) to one where we can demonstrate a much more mature, reliable and outcome based proposition. To do this we’ll need to build the platforms to realise capabilities effectively and in the broadest sense.
  • In post two I discussed how industrialisation is the creation and consistent application of known patterns, processes and infrastructures to increase repeatability and reliability. We might sacrifice some flexibility but increasing commoditisation of technology makes this far less important than cost effectiveness and reliability. When industrialising you need to understand your end to end process and then do the nasty bit – bottom up in excruciating detail.
  • Finally in post three I have discussed my belief that increasing standardisation of technology will lead to accelerating platform consolidation.  Essentially as technology becomes less differentiating and subject to economies of scale it’s likely that IT ownership and management will be less attractive. I believe, therefore, that we will see increasing and accelerating activity in the global Service Delivery Platform arena and that IT organisations and their customers need to have serious, robust and viable strategies to transition their business models.

Industrialised Service Delivery Redux II

22 Sep

In my previous post I discussed the way in which our increasingly sophisticated use of the Web is creating an unstoppable wave of change in the global business environment.  This resulting acceleration of change and expectation will require unprecedented organisational speed and adaptability whilst simultaneously driving globalisation and consumerisation of business.  I discussed my belief that companies will be forced to reform as a portfolio of systematically designed components with clear outcomes and how this kind of thinking changes the relationship between a business capability and its IT support.  In particular I discussed the need to create industrialised Service Delivery Platforms which vastly increase the speed, reliability and cost effectiveness of delivering service realisations. 

In this post I’ll move into the second part of the story where I’ll look more specifically at how we can realise the industrialisation of service delivery through the creation of an SDP.

Industrialisation 101

There has been a great deal written about industrialisation over the last few years and most of this literature has focused on IT infrastructure (i.e. hardware) where components and techniques are more commoditised.  As an example many of my Japanese colleagues have spent decades working with leaders in the automotive industry and experienced firsthand the techniques and processes used in zero defect manufacturing and the application of lean principles.  Sharing this same mindset around reliability, zero defects and technology commoditisation, they created a process for delivering reliable and guaranteed outcomes through pre-integration and testing of combinations of hardware and software.  This kind of infrastructure industrialisation enables much higher success rates whilst simultaneously reducing the costs and lead times of implementation. 

In order to explore this a little further and to set some context, let’s just think for a moment about the way in which IT has traditionally served its business customers.  

[Figure: non-industrialised vs industrialised IT delivery]

We can see that generally speaking we are set a problem to solve and we then take a list of products selected by the customer – or often by one of our architects applying personal preference – and we try to integrate them together on the customer’s site, at the customer’s risk and at the customer’s expense.  The problem is that we may never have used this particular combination of hardware, operating systems and middleware before – a problem that worsens exponentially as we increase the complexity of the solution, by the way – and so there are often glitches in their integration, it’s unclear how to manage them and there can’t be any guarantees about how they will perform when the whole thing is finally working.  As a result projects take longer than they should – because much has to be learned from scratch every time – they cost a lot more than they should – because there are longer lead times to get things integrated, to get them working and then to get them into management – and, most damningly, they are often unreliable as there can be no guarantees that the combination will continue to work and there is learning needed to understand how to keep them up and running.

The idea of infrastructure industrialisation, however, helps us to concentrate on the technical capability required – do you want a Java application server? Well here it is, pre-integrated on known combinations of hardware and software and with manageability built in but – most importantly – tested to destruction with reference applications so that we can place some guarantees around the way this combination will perform in production.  As an example, 60% of the time taken within Fujitsu’s industrialisation process is in testing.  The whole idea of industrialisation is to transfer the risk to the provider – whether an internal IT department or an external provider – so that we are able to produce consistent results with standardised form and function, leading to quicker, more cost effective and reliable solutions for our customers.

Now such industrialisation has slowly been maturing over the last few years but – as I stated at the beginning – has largely concentrated on infrastructure templating – hardware, operating systems and middleware combined and ready to receive applications.  Recent advances in virtualisation are also accelerating the commoditisation and industrialisation of IT infrastructure by making this templating process easier and more flexible than ever before.  Such industrialisation provides us with more reliable technology but does not address the ways in which we can realise higher level business value more rapidly and reliably.  The next (and more complex) challenge, therefore, is to take these same principles and apply them to the broader area of business service realisation and delivery.  The question is how we can do this?

Industrialisation From Top to Bottom

Well the first thing to do is understand how you are going to get from your expression of intent – i.e. the capability definitions I discussed in my previous post that abstract us away from implementation concerns – through to a running set of services that realise this capability on an industrialised Service Delivery Platform.  This is a critical concern since if you don’t understand your end to end process then you can’t industrialise it through templating, transformation and automation.

[Figure: end-to-end service realisation]

In this context we can look at our capability definitions and map concepts in the business architecture model down to classifications in the service model.  Capabilities map to concrete services, macro processes map to orchestrations, people tasks map to workflows, top level metrics become SLAs to be managed etc. The service model essentially bridges the gap between the expression of intent described by the target business architecture and the physical reality of assets needed to execute within the technology environment.
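To illustrate (in Python, with invented concept names), the mapping is mechanical enough to express as plain data – which is precisely what makes it automatable further down the pipeline:

```python
# Purely illustrative: the business-architecture-to-service-model
# mapping expressed as data.
MODEL_MAPPING = {
    "capability":       "concrete service",
    "macro process":    "orchestration",
    "people task":      "workflow",
    "top-level metric": "SLA",
}


def to_service_model(business_architecture):
    """Bridge each element of the expression of intent down to a
    service model asset, keeping the trace link for later use."""
    return [
        {"asset_type": MODEL_MAPPING[element["concept"]],
         "name": element["name"],
         "traces_to": element["name"]}    # linkage used for feedback later
        for element in business_architecture
    ]


intent = [{"concept": "capability", "name": "Check Credit History"},
          {"concept": "top-level metric", "name": "2s response, 95th pct"}]
print(to_service_model(intent))
```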

From here we broadly need to understand how each of our service types will be realised in the physical environment – so for instance we need a physical host to receive and execute each type of service, we need to understand how SLAs are provisioned so that we can monitor them etc. etc.

Basically the concern at this stage is to understand the end to end process through which we will transform the data that we capture at each stage of the process into ever more concrete terms – all the way from logical expressions of intent through greater information about the messages, service levels and type of implementation required, through to a whole set of assets that are physically deployed and executing on the physical service platform, thus realising the intent.

The core aim of this process must be to maximise both standardisation of approach and automation at each stage to ensure repeatability and reliability of outcome – essentially our aim in this process is to give business capability owners much greater reliability and rapidity of outcome as they look to realise business value.  We essentially want to give guarantees that we can not only realise functionality rapidly but also that these realisations will execute reliably and at low cost.  In addition we must also ensure that the linkage between each level of abstraction remains in place so that information about running physical services can be used to judge the performance of the capability that they realise, maximising the levers of change available to the organisation by putting them in control of the facts and allowing them to ‘know sooner’ what is actually happening.

Having an end to end view of this process essentially creates the rough outline of the production line that needs to be created to realise value – it gives us a feel for the overall requirements.  Unfortunately, however, that’s the nice bit, the kind of bit that I like to do. Whilst we need to understand broadly how we envisage an end to end capability realisation process working, the real work is in the nasty bit – when it comes to industrialisation work has to start at the bottom.

Industrialisation from Bottom to Top

If you imagine the creation of a production line for any kind of physical good, it obviously has to be designed to optimise the creation of the end product.  Every little conveyor belt or twisty robot arm has to be calibrated to nudge or weld the item in exactly the same spot to achieve repeatability of outcome.  In the same way any attempt to industrialise the process of capability realisation has to start at the bottom with a consideration of the environment within which the final physical assets will execute and of how to create assets optimised for this environment as efficiently as possible.  I use a simple ‘industrialisation pyramid’ to visualise this concept, since increasingly specialised and high value automation and industrialisation needs to be built on broader and more generic industrialised foundations.  In reality the process is highly iterative as you need to continually recalibrate both up and down the hierarchy to ensure that the process is both efficient and realises the expressed intent, but for the sake of simplicity you can assume that we just build this up from the bottom.

[Figure: the industrialisation pyramid]

So let’s start at the bottom with the core infrastructure technologies – what are the physical hosts that are required to support service execution? What physical assets will services need to create in order to execute on top of them? How does each host combine together to provide the necessary broad infrastructure and what quality of service guarantees can we put around each kind of host? Slightly more broadly, how will we manage each of the infrastructure assets? This stage requires a broad range of activity not just to standardise and templatise the hosts themselves but also to aggregate them into a platform and to create all of the information standards and process that deployed services will need to conform to so that we can find, provision, run and manage them successfully.

Moving up the pyramid we can now start to think in more conceptual terms about the reference architecture that we want to impose – the service classifications we want to use, the patterns and practices we want to impose on the realisation of each type, and more specifically the development practices.  Importantly we need to be clear about how these service classifications map seamlessly onto the infrastructure hosting templates and lower level management standards to ensure that our patterns and practices are optimised – it’s only in this way that we can guarantee outcomes by streamlining the realisation and asset creation process.  Gradually through this definition activity we begin to build up a metamodel of the types of assets that need to be created as we move from the conceptual to the physical and the links and transformations between them.  This is absolutely key as it enables us to move to the next level – which I call automating the “means of production”.
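A sketch of what such a metamodel might contain – invented asset types in Python; the value lies less in the nodes than in the explicit edges recording how each asset type is derived from another:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AssetType:
    """A node in the metamodel: a kind of asset produced on the way
    from conceptual intent to physical deployment."""
    name: str
    level: str      # 'conceptual', 'logical' or 'physical'


@dataclass(frozen=True)
class Transformation:
    """An edge in the metamodel: how one asset type is derived from
    another.  These edges are what the factories later automate."""
    source: AssetType
    target: AssetType


capability  = AssetType("capability definition", "conceptual")
service     = AssetType("abstract service", "logical")
realisation = AssetType("domain service implementation", "physical")
host        = AssetType("infrastructure host template", "physical")

METAMODEL = [
    Transformation(capability, service),
    Transformation(service, realisation),
    Transformation(realisation, host),   # realisations bind to hosts
]
```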

This level becomes the production line that pushes us reliably and repeatably from capability definition through to physical realisation.  The metamodel we built up in the previous tier helps us to define domain specific languages that simplify the process of generating the final output, allowing the capture of data about each asset and the background generation of code that conforms to our preferred classification structure, architectural patterns and development practices.  These DSLs can then be pulled together into “factories” specialised to the realisation of each type of asset, with each DSL representing a different viewpoint for the particular capability in hand.  Individual factories can then be aggregated into a ‘capability realisation factory’ that drives the end to end process.  As I stated in my previous post the whole factory and DSL space is mildly controversial at the moment, with Microsoft advocating explicit DSL and factory technologies and others continuing to work towards MDA or flexible open source alternatives.  Suffice to say in this context that the approaches I’m advocating are possible via either model – a subject I might actually return to with some examples of each (for an excellent consideration of this whole area consult Martin Fowler’s great coverage, btw).
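As a toy illustration of this ‘means of production’ idea – emphatically not any real factory technology, Microsoft’s or otherwise – a DSL instance can be as simple as declarative data, with the factory as a transformation that emits pattern-conformant code from it:

```python
# A toy 'service DSL' instance: declarative data captured in tooling.
service_spec = {
    "name": "CreditCheck",
    "kind": "domain",                        # chosen realisation strategy
    "operations": ["check_history", "score"],
    "sla": {"availability": "99.9%", "response_ms": 500},
}


def generate_service_stub(spec):
    """A stand-in for a factory's code generator: emit a skeleton that
    already conforms to the preferred classification and patterns,
    leaving only the business logic to the developer."""
    ops = "\n".join(
        f"    def {op}(self, request):\n"
        f"        raise NotImplementedError  # business logic goes here\n"
        for op in spec["operations"])
    return (f"class {spec['name']}Service:  # {spec['kind']} service\n"
            f"    SLA = {spec['sla']!r}\n\n{ops}")


print(generate_service_stub(service_spec))
```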

The final level of this pyramid is to actually start taking the capability realisation factories and tailoring them for the creation of industry specific offerings – perhaps a whole set of ‘factories’ around banking, retail or travel capabilities. From my perspective this is the furthest out and may actually not come to pass; despite Jack Greenfield’s compelling arguments I feel that the rise of SOA and SaaS will obviate the need to generate the same application many times by allowing the composing of solutions from shared utilities.  I feel that the idea of an application or service specific factory assumes a continuation of IT oversupply through many deployments; as a result I feel that the key issue at stake in the industrialisation arena is actually that of democratising access to the means of capability production by giving people the tools to create new value rapidly and reliably.  As a result I feel that improving the reliability and repeatability of capability realisation across the board is more critical than a focus on any particular industry. (This may change in future with demand, however, and one potential area of interest is industry specific composition factories rather than industry specific application generation factories). 

Delivering Industrialised Services

So we come at last to a picture that demonstrates how the various components of our approach come together from a high level process perspective.

[Figure: the service factory process]

Across the top we have our service factory.  We start on the left hand side with capability modelling, capturing the metadata that describes the capability and what it is meant to do.  In this context we can use a domain specific language that allows us to model capabilities explicitly within the tooling.  Our aim is then to use the metadata captured about a capability to realise it as one or more services.  In this context information from the metamodel is transformed into an initial version of the service before we use a service domain language to add further detail about contracts, messages and service levels.  It is important to note that at this point, however, the service is still abstract – we have not bound it to any particular realisation strategy.  Once we have designed the service in the abstract we can then choose an implementation strategy – example classifications could be interaction services for UIs, workflow services for people tasks, process services for service orchestrations, domain services for services that manage and manipulate data and integration services that allow adaptation and integration with legacy or external systems.

Once we have chosen a realisation strategy all of the metadata captured about the service is used to generate a partially populated realisation of the chosen type – in this context we anticipate having a factory for each kind of service that will control the patterns and practices used and provide guidance in context to the developer.

Once we have designed our services we now want to be able to design a virtual deployment environment for them based wholly on industrialised infrastructure templates. In this view we can configure and soft test the resources required to run our services before generating provisioning information that can be used to create the virtual environment needed to host the services.

In the service platform the provisioning information can be used to create a number of hosting engines, deploy the services into them, provision the infrastructure to run them and then set up the necessary monitoring before finally publishing them into a catalogue. The Service Platform therefore consists of a number of specialised infrastructure hosts supporting runtime execution, along with runtime services that provide – for example – provisioning and eventing support.
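Pulling the last two steps together, a hypothetical provisioning descriptor – every field name invented for illustration – and the automated steps a platform might drive from it could look like this:

```python
# A hypothetical provisioning descriptor: the output of the virtual
# environment design step, and the input to the service platform.
provisioning_request = {
    "hosting_engines": [
        {"template": "java-appserver-v3",    # pre-integrated and tested
         "instances": 2,
         "services": ["CreditCheckService"]},
    ],
    "monitoring": {"availability": "99.9%", "response_ms": 500},
    "catalogue_entry": {"name": "CreditCheck",
                        "realises": "Check Credit History"},
}


def provision(request):
    """Walk the descriptor and emit the steps the platform automates:
    create hosts, deploy, wire up monitoring, publish to the catalogue."""
    steps = []
    for engine in request["hosting_engines"]:
        steps.append(f"create {engine['instances']} x {engine['template']}")
        steps.extend(f"deploy {svc}" for svc in engine["services"])
    steps.append(f"monitor against {request['monitoring']}")
    steps.append(f"publish '{request['catalogue_entry']['name']}' to catalogue")
    return steps


for step in provision(provisioning_request):
    print(step)
```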

The final component of the platform is what I call a ‘service wrap’. This is an implementation of the ITSM disciplines tailored for our environment. In this context you will find the catalogue, service management, reporting and metering capabilities that are needed to manage the services at runtime (this is again a subset to make a point). In this space the service catalogue will bring together service metadata, reports about performance and usage plus subscription and onboarding processes.  Most importantly there is a strong link between the capabilities originally required and the services used to realise them, since both are linked in the catalogue to support business performance management. In this context we can see a feedback loop from the service wrap which enables capability owners to make decisions about effectiveness and rework their capabilities appropriately.
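The crucial detail is the linkage: because each catalogue entry records the capability it realises, runtime reports can be rolled back up to the business level.  A minimal sketch of that feedback loop, with invented structures:

```python
def capability_scorecard(catalogue, usage_reports):
    """The feedback loop in the service wrap: every catalogue entry is
    linked to the capability it realises, so runtime reports can be
    aggregated for the capability owner to judge effectiveness."""
    scorecard = {}
    for service, entry in catalogue.items():
        report = usage_reports.get(service, {"calls": 0, "sla_met": True})
        agg = scorecard.setdefault(entry["realises"],
                                   {"calls": 0, "sla_breaches": 0})
        agg["calls"] += report["calls"]
        agg["sla_breaches"] += 0 if report["sla_met"] else 1
    return scorecard


catalogue = {"CreditCheck": {"realises": "Check Credit History"}}
reports = {"CreditCheck": {"calls": 12000, "sla_met": False}}
print(capability_scorecard(catalogue, reports))
# -> {'Check Credit History': {'calls': 12000, 'sla_breaches': 1}}
```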

Summary

In this second post of three I have demonstrated how we can use the increasing power of abstraction delivered by service-orientation to drive the industrialisation of capability realisation.  Despite current initiatives broadly targeting the infrastructure space I have discussed my belief that full industrialisation across the infrastructure, applications, service and business domains requires the creation and consistent application of known patterns, processes, infrastructures and skills to increase repeatability and reliability. We might sacrifice some flexibility in technology choice or systems design but increasing commoditisation of technology makes this far less important than cost effectiveness and reliability. It’s particularly important to realise that when industrialising you need to understand your end to end process and then do the nasty bit – bottom up in excruciating detail.

So in the third and final post on this subject I’m going to look a little bit at futures and how the creation of standardised and commoditised service delivery platforms will affect the industry more broadly – essentially as technology becomes about access rather than ownership so we will see the rise of global service delivery platforms that support capability realisation and execution on behalf of many organisations.

Industrialised Service Delivery Redux I

23 Jul

I’ve been terribly lax with my posting of late due to pressures of work but thought I had best try and put something up just to keep my blog (barely) alive, lol.  Following on from my previous posts on Cloud Computing and Service Delivery Platforms I thought I would go the extra step and actually talk about my views on Industrialisation in the platform and service delivery spaces.  I made this grand decision since my last post included a reference to a presentation that I did in Redmond last year and so I thought it would be useful to actually tell the story as well as just punt up the slides (which can be pretty meaningless without a description).  In addition there’s been a huge amount of coverage of cloud computing, platform as a service and industrialisation lately and so it seemed like revisiting the content of that particular presentation would be a good idea.  If I’m honest I also have to admit that I can largely just rip the notes out of the slideset for a quick post, but I don’t feel too guilty given that it’s a hot topic, lol.  I’ll essentially split this story across three posts: part I will cover why I believe Industrialisation is critical to supporting agility and reliability in the new business environment, part II will cover my feelings on how we can approach the industrialisation of business service delivery and part III will look at the way in which industrialisation accelerates the shift to shared Service Delivery Platforms (or PaaS or utility computing or cloud computing – take your pick).

The Industrialisation Imperative

So why do I feel that IT industrialisation is so important?  Well essentially I believe that we’re on the verge of some huge changes in the IT industry and that we’re only just seeing the very earliest signs of these through the emergence of SOA, Web 2.0 and SaaS/PaaS.  I believe that organisations are going to be forced to reform and disaggregate and that technology will become increasingly commoditised.  Essentially I believe that we all need to recognise these trends and learn the lessons of industrialisation from other more mature industries – if we can’t begin to deliver IT that is rapid, reliable, cost effective and – most importantly – guaranteed to work then what hope is there?  IT has consistently failed to deliver expected value through an obsession with technology for its own sake, and the years of cost overruns, late delivery and unreliability are well documented; too often projects seem to ignore the lessons of history and start from ground zero.  This has got to change.  Service orientation is allowing us to express IT in ways that are closer to the business than ever before, reducing the conceptual gap that has allowed IT to hide from censure behind complexity.  Software as a Service is starting to prove that there are models that allow us to deliver the same function to many people with lower costs born of economies of scale and we’re all going to have to finally recognise that everyone is not special, that they don’t need that customisation or tailoring for 80% of what they do and that SOA’s assistance in refocusing on business value will draw out the lunacy of many IT investments.

In this three part post I therefore want to share some of my ideas around how we can industrialise IT.  Firstly, I’m going to talk about the forces that are acting on organisations that will drive increasing specialisation and disaggregation and go on to discuss business capabilities and how they accelerate the commoditisation of IT.  Secondly, I’m going to discuss approaches to the industrialisation of service delivery and look at the different levels of industrialisation that need to be considered.  Finally I’ll talk about how the increasing commoditisation and standardisation of IT will accelerate the process of platform consolidation and the resulting shift towards models that recognise the essentially scale based economics of IT platform provision.

The Componentisation of Business

Over the last 100 years we’ve seen a gradual shift towards concentration on smaller levels of business organisation due to the decreasing costs of executing transactions with 3rd parties. Continuing discontinuities around the web are sending these transaction costs into free fall, however, and we believe that this is going to trigger yet another reduction in business aggregation and cause us to focus on a smaller unit of business granularity – the capability (for an early post on this subject see here).

[Figure: business capabilities (Kearney)]

Essentially I believe that there are three major forces that will drive organisations to transform in this way:

1) Accelerating change;

2) Increasing commoditisation; and

3) Rapidly decreasing collaboration costs due to the emergence of the web as a viable global business network.

I’ll consider each in turn.

Accelerating Change

As the rate of change increases, so adaptability becomes a key requirement for survival. Most organisations are currently not well suited for this challenge, however, as they have structures carried over from a different age based on forward planning and command and control – they essentially focus inwards rather than outwards. The lack of systematic design in most organisations means that they rarely understand clearly how value is delivered and so cannot change effectively in response to external demand shifts. In order to become adaptable, however, organisations need to systematically understand what capabilities they need to satisfy demand and how these capabilities combine to deliver value – a systematic view enables us to understand the impact of change and to reconfigure our capabilities in response to shifts in external demand.

Increasing Commoditisation

This capability based view is also extremely important in addressing the shrinking commoditisation cycle. Essentially consumers are now able to instantly compare our goods and services with those from other companies – and switch just as quickly. A capability-based view enables us to ensure that we remove repetition and waste across organisational silos and replace these with shared capabilities to maximise our returns, both while the going is good but also when price sensitivity begins to bite.

Decreasing Transaction Costs

The final shift is to use our clearer view of the capabilities we need to begin thinking about those that are truly differentiating – the market will be putting such pressure on us to excel that we will be driven to take advantage of falling transaction costs and the global nature of the web to replace our non-differentiating capabilities with those of specialised partners, simultaneously increasing our focus, improving our overall proposition and reducing costs.

As a result of these drivers we view business capabilities as a key concept in the way in which we need to approach the industrialisation of services.

Componentisation Through Business Capabilities

So I’ve talked a lot about capabilities – how do they enable us to react to the discontinuities that I’ve discussed? Well to address the issues of adaptability and understand which things we want to do and which we want to unbundle we really need a way of understanding what the component parts of our organisation are and what they do.

Traditionally organisations use business processes, organisational structures or IT architectures as a way of expressing organisational design – perhaps all three if they use an enterprise architecture method.  The big problem with these views, however, is that they tell us very little about what the combined output actually is – what is the thing that is being done, the essential business component that is being realised?  Yes I understand that there are some people doing stuff using IT but what does it all amount to?  Even worse, these views of the business are all inherently unstable since they are expressions of how things get done at a point in time; as a result they change regularly and at different rates and therefore make trying to understand the organisation a bit like catching jelly – you might get lucky and hold it for a second but it’ll shift and slip out of your grasp.  This means that leaders within the organisation lack a consistent decision making framework and see instead a constantly shifting mass of incomplete and inconsistent detail that makes it impossible to make well reasoned strategic decisions.

Capabilities bring another level of abstraction to the table; they allow us to look at the stable, component parts of the organisation without worrying about how they work. This gives us the opportunity to concentrate systematically on what things the organisation needs to do – in terms of outputs and commitments – without concerning ourselves with the details of how these commitments will be realised. This enables enterprise leaders to concentrate on what is required whilst delegating implementation to managers or partners. Essentially they are an expression of intent and express strategy as structure. Capabilities are then realised by their owners using a combination of organisational structures, role design, business processes and technology – all of which come together to deliver to the necessary commitments.

[Figure: anatomy of a capability component]

In this particular example we see the capability from both the external and internal perspectives – from the perspective of the business designer and the consumer the capability is a discrete component that has a purpose – in this case enabling us to check credit histories – and a set of metrics – for simplicity we’ve included just service level, cost and channels. From the perspective of the capability owner, however, the capability consists of all of the different elements needed to realise the external commitments.
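Expressed as a (deliberately simplified and entirely invented) structure, the external view is just the contract; everything behind it stays the owner’s business:

```python
from dataclasses import dataclass


@dataclass
class Capability:
    """The external view only: purpose and commitments, nothing about
    how they are met.  All figures below are invented."""
    name: str
    purpose: str
    service_level: str
    cost_per_transaction: float
    channels: list


check_credit = Capability(
    name="Check Credit History",
    purpose="Return a credit decision for a named individual",
    service_level="95% of requests answered within 2 seconds",
    cost_per_transaction=0.15,
    channels=["web service", "batch file"],
)
# How the owner realises this commitment (people, process, technology)
# is hidden behind the contract, so it can be re-sourced or re-engineered
# without redesigning anything that consumes it.
```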

So how does a shift to capabilities affect the relationship between the organisation and its IT provision?

IT Follows Move from “How” to “What”

One of the big issues for us all is that a concentration on capabilities will begin to push technology to the bottom of the stack – essentially it becomes much more commoditised.

Capability owners will now have a much tighter scope in the form of a well defined purpose and set of metrics; this gives them greater clarity and leaves them able to look for rapid and cost effective realisation rather than a mishmash of hardware, software or packages that they then need to turn into something that might eventually approximate to their need.  Furthermore the codification of their services will expose them far more clearly to the harsh realities of having to deliver well defined value to the rest of the organisation and they will no longer be able to ‘lose’ the time and cost of messing about with IT in the general noise of a less focused organisation.

As a result capability owners will be looking for two different things:

1) Is there anyone who can provide this capability to me externally to the level of performance that I need – for instance a SaaS or BPU offering available on a usage or subscription basis; or

2) Failing that, who can help me to realise my capability as rapidly, reliably and cost effectively as possible?

The competition is therefore increasingly going to move away from point technologies – which become increasingly irrelevant – and move towards the delivery of outcomes using a broad range of disciplines tightly integrated into a rapid capability realisation platform.

[Figure: the shift from ‘how’ to ‘what’]

Such realisation platforms – which I have been calling Service Delivery Platforms to denote their holistic nature – require us to bring infrastructure, application, business and service management disciplines into an integrated, reliable and scalable platform for capability realisation, reflecting the fact that service delivery is actually an holistic discipline and not a technology issue. Most critically – at least from our perspective – this platform needs to be highly industrialised; built from repeatable, reliable and guaranteed components in the infrastructure, application, business and service dimensions to guarantee successful outcomes to our customers.

So what would a Service Delivery Platform actually look like?

A Service Delivery Platform

[Figure: a Service Delivery Platform]

In this picture I’ve surfaced a subset of the capabilities that I believe are required in the creation of a service delivery platform suitable for enterprise use – I’m not being secretive, I just ran out of room and so had to jettison some stuff.

If we start at the bottom we can see that we need to have highly scalable and templatised infrastructure that allows us to provide capacity on demand to ensure that we can meet the scaling needs of capability owners as they start to offer their services both inside and outside the organisation.

Above this we have a host of runtime capabilities that are needed to manage services running within the environment – identity management, provisioning, monitoring to ensure that delivered services meet their service levels, metering to support various monetisation strategies (both from our perspective and from the capability owner’s perspective), audit and non-repudiation and brokering to external services in order to keep tabs on their performance for contractual purposes.

Moving up we have a number of templatised hosting engines – essentially we need to break the service space down using a classification to ensure that we are able to address different kinds of services effectively. These templates are essentially virtual machines that have services deployed into them and which are then delivered on the virtualised hardware; essentially the infrastructure becomes part of the service to decouple services both from each other and physical space.
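A hypothetical template descriptor makes the idea concrete (all values invented):

```python
# One hypothetical hosting-engine template: a pre-integrated virtual
# machine definition that services are deployed *into*, decoupling them
# from physical infrastructure and from each other.
HOST_TEMPLATE = {
    "id": "java-appserver-v3",
    "base_image": "hardened-linux-vm",
    "middleware": ["jdk", "clustered-appserver"],
    "suitable_for": ["domain services", "process services"],
    "qos": {"availability": "99.9%", "max_concurrent_requests": 2000},
    "management": {"monitoring_agent": True, "patch_channel": "quarterly"},
}
```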

The top level in the centre area is what we call service enablement. In this tier we essentially have a whole host of services that make the environment workable – examples that we were able to fit in here are service catalogue, performance reporting, subscription management – the whole higher level structure that brings services into the wider environment in a consistent and consumable way.

Moving across to the left we can see that in order to deliver services developers will need to have standardised and templatised shared development support environments to support collaboration, process enablement and asset management.

Across on the right we have operational support – this is where we place our ITIL/ISO 20000 service management processes and personnel to ensure that all services are treated as assets – tracked, managed, reported upon, capacity managed etc etc.

On the far right we have a set of business support capabilities that handle customer queries about services and charges; this is also where we manage partners, perform billing and carry out any certification activity needed when we want to create new templates for inclusion in the overall platform.

Finally across the top we have what I call the ‘service factory’ – a highly templatised modelling and development environment that drives people from a conceptual view of the capabilities to be realised down through a process of service design, realisation and deployment against a set of architectural and development patterns represented in DSLs.  These DSLs could be combinations of UML profiles, little languages or full DSLs implemented specifically for the service domain.

Summary

In this post I have discussed my views on the way in which businesses will be forced to componentise and specialise and how this kind of thinking changes the relationship between a business capability and its IT support.  I’ve also briefly highlighted some of the key features that would need to be present within an holistic and industrialised Service Delivery Platform in order to increase the speed, reliability and cost effectiveness of delivering service realisations.  In the next post I’ll move into the second part of the story where I’ll look more specifically at realising the industrialisation of service delivery through the creation of an SDP.

Does the Cloud lead to homogenised enterprises?

17 Apr

Just a quick follow on from my last post about cloud computing/service delivery platforms/platform as a service/SaaS platforms etc etc.  I was intrigued by a Joe McKendrick post on the merits of the model.  Joe’s analysis was pretty balanced as usual but one of his disadvantages caught my eye as I disagreed with it strongly.  Essentially Joe suggests that there is a perception that “Enterprises leveraging Cloud computing may become homogenized — and lose the competitive advantage that may come from custom-built systems”.  As I’ve previously discussed the concept of cloud computing drives people to finally decide what business they are in; if they work in an end user organisation then they need to recognise that and start treating IT like a utility rather than a differentiator.  In this context, however, I would argue that rather than homogenising the enterprise it will have the opposite effect – if we get rid of our IT platform to the cloud (who cares if it is homogenised – it’s just a technology platform), consume our 80% of non-differentiating business capabilities as SaaS or BPU services from the cloud (who cares if these are homogenised – they don’t differentiate us) and then focus all of our attention on realising the remaining 20% of differentiating capabilities – as custom built systems – more rapidly and cheaply on our chosen cloud computing platform(s), then we will be maximising our advantage whilst losing none of the benefits.  It’s important to realise that just because we don’t own and operate the physical technology it doesn’t mean that we can’t own the services that run on it – we can still develop our custom-built systems to realise our differentiating capabilities.  And given that this is the only bit of the whole value chain that’s actually important for us to customise I would argue that we are – in reality – less likely to end up homogenised since we are more focused on where our value really lies and therefore more able to maximise our differentiation from our competitors. 

Service Delivery Platforms peek from behind the Cloud

17 Apr

Quick note having just read Dion Hinchcliffe’s article on Google App Engine and Amazon Web Services (their respective cloud computing infrastructures).  These platforms are early and very basic instances of the ‘service delivery platforms’ I’ve talked about for the past few years (for an example see a presentation I gave at the MS SOA & BPM conference last year).  As I discussed during this talk, I use the term ‘service delivery platform’ in place of ‘platform as a service’ or ‘cloud computing’ since in order to really support viable and fully rounded service delivery you need to provide far more than a ‘platform’ or ‘infrastructure’ – a topic I’m going to return to now that this area has been popularised by Google and I won’t seem like (such) a lunatic, lol.  More broadly, however, I’ve discussed a number of times why I feel that economics and technology commoditisation will drive people down this route eventually and I’m really excited now to see a number of competitors emerging in this space (with Amazon, Google and Salesforce now competing with a number of smaller startups, and more to follow).  I know I’ve said this before but people are really going to have to start deciding what business they are in – if you work in an end user organisation then you need to recognise that and start treating IT like a utility rather than a differentiator; if you’re an IT service company, however, you’d better work out whether you want to be focused on relationship brokering and consulting, utility computing platform provision or SaaS/BPU service offers, since the economics of each are very different (you may in fact want to play in all three but if so you’d best disaggregate your company and run them autonomously under your umbrella brand – or you’ll make a complete hash of them all).

One of the interesting things for me is whether a middle-ground model will emerge that enables enterprises to take advantage of the industrialisation and economies of scale available to service delivery platform vendors whilst also enabling the deployment of ‘edge’ infrastructures to customer specific environments.  In this context we may see locally deployed ‘chunks’ of the central service created for those organisations that are large enough or who have specific privacy or trust issues (still coordinated with and managed from the centre, however).  Whether such a model has a long term future is an interesting (but as yet undecided) question to my mind but in the short to medium term those SDP vendors who were able to deliver such a transitional solution would provide a compelling migration path to enterprises who are nervous about the implications of a wholesale shift to the cloud.
