Will Private Cloud Fail?

A recent discussion on ebizq about the success or failure of private clouds was sparked by Forrester analyst James Staten’s prediction late last year that ‘You will build a private cloud, and it will fail’.  In reality James himself was not suggesting that the concept of ‘private cloud’ would be a failure, only that an enterprise’s first attempt to build one would be – for various technical or operational reasons – and that learning from these failures would be a key milestone in preparing for eventual ‘success’.  Within the actual ebizq discussion there were a lot of comments about the open-ended nature of the prediction (i.e. what exactly will fail) and the differing aims of different organisations in pursuing private infrastructures (and therefore the myriad ways in which you could judge such implementations to be a ‘success’ or a ‘failure’ from detailed business or technology outcome perspectives).

I differ in the way I think about this issue, however, as I’m less interested in whether individual elements of a ‘private cloud’ implementation could be considered to be successful or to have failed, but rather more interested in the broader question of whether the whole concept will fail at a macro level for cultural and economic reasons.

First of all I would posit two main thoughts:

1) It feels to me as if any sensible notion of ‘private clouds’ cannot be a realistic proposition until we have mature, broad capability and large scale public clouds that the operating organisations are willing to  ‘productise’ for private deployment; and

2) By the time we get to this point I wonder whether anyone will want one any more.

To address the first point: without ‘productised platforms’ hardened through the practices of successful public providers, most ‘private cloud’ initiatives will just see harried, underfunded and incapable IT organisations trying to build bespoke virtualised infrastructures with old, disparate and disconnected products along with traditional consulting, systems integration and managed services support. Despite enthusiastic ‘cloud washing’ by traditional providers in these spaces, such individual combinations of traditional products and practices are not cloud, will probably cost a lot of money to build and support and will likely never be finished before the IT department is marginalised by the business for still delivering uncompetitive services.

To the second point: given that the economics (unsurprisingly) appear to overwhelmingly favour public clouds, that any lingering security issues will be solved as part of public cloud maturation and – most critically – that cloud ultimately provides opportunities for business specialisation rather than just technology improvements (i.e. letting go of 80% of non-differentiating business capabilities and sourcing them from specialised partners), I wonder whether there will be any call for literally ‘private clouds’ by the time the industry is really ready to deliver them. Furthermore public clouds need not be literally ‘public’ – as in anyone can see everything – but will likely allow the creation of ‘virtual private platforms’ which allow organisations to run their own differentiating services on a shared platform whilst maintaining complete logical separation (so I’m guessing what James calls ‘hosted private clouds’ – although that description has a slightly tainted feeling of traditional services to me).

More broadly I wonder whether we will see a lot of money wasted for negative return here. Many ‘private cloud’ initiatives will be scaled for a static business (i.e. as they operate now) rather than for a target business (i.e. one that takes account of the wider business disruptions and opportunities brought by the cloud).  In this latter context as organisations take the opportunities to specialise and integrate business capabilities from partners they will require substantially less IT given that it will be part of the service provided and thus ‘hidden’.  Imagining a ‘target’ business would therefore lead us to speculate that such businesses will no longer need systems that previously supported capabilities they have ceased to execute. One possible scenario could therefore be that ‘private clouds’ actually make businesses uncompetitive in the medium to long term by becoming an expensive millstone that provides none of the benefits of true cloud whilst weighing down the leaner business that cloud enables with costs it cannot bear. In extreme cases one could even imagine ‘private clouds’ as the ‘new legacy’, creating a cost base that drives companies out of business as their competitors or new entrants transform the competitive landscape. In that scenario it’s feasible that not only would ‘private clouds’ fail as a concept but they could also drag down the businesses that invest heavily in them (1).

Whilst going out of business completely may be an extreme – and unlikely – end of a spectrum of possible scenarios, the basic issues about cost, distraction and future competitiveness – set against a backdrop of a declining need for IT ownership – stand. I therefore believe that people need to think very, very carefully before deciding that the often short-medium term (and ultimately solvable) risks of ‘public’ cloud for a subset of their most critical systems are outweighed by the immense long term risks, costs and commitment of building their own private infrastructure. This is particularly the case given that not all systems within an enterprise are of equal sensitivity and we therefore do not need to make an inappropriately early and extreme decision that everything must be privately hosted.  Even more subtly, different business capabilities in your organisation will be looking for different things from their IT provision based on their underlying business model – some will need to be highly adaptable, some highly scalable, some highly secure and some highly cost effective.  Whilst the diversity of the public cloud market will enable different business capabilities to choose different platforms and services without sacrificing the traditional scale benefits of internal standardisation, any private cloud will necessarily have to serve this whole diversity of needs and will therefore probably be optimal for none.  In the light of these issues, there are more than enough – probably higher benefit and lower risk – initiatives available now: incrementally optimising your existing IT estate whilst codifying the business capabilities required by your organisation and the optimum systems support for their delivery, and then replacing or moving the 80% of non-critical applications to the public cloud in a staged manner (or better still directly sourcing a business capability from a partner that removes the need for IT). In parallel we have time to wait and see how the public environment matures – perhaps towards ‘virtual private clouds’ or ‘private cloud appliances’ – before making final decisions about future IT provision for the more sensitive assets we retain in house for now (using existing provision). Even if we end up never moving this 20% of critical assets to a mature and secure ‘public’ cloud they can either a) remain on existing platforms given the much reduced scope of our internal infrastructures and the spare capacity that results or b) be moved to a small scale, packaged and connected appliance from a cloud service provider.

Throwing everything behind building a ‘private cloud’ at this point therefore feels risky given the total lack of real, optimised and productised cloud platforms, uncertainty about how much IT a business will actually require in future and the distraction it would represent from harvesting less risky and surer public cloud benefits for less critical systems (in ways that also recognise the diversity of business models to be supported).

Whilst it’s easy, therefore, to feel that analysts often use broad brush judgments or seek publicity with sensationalist tag lines I feel in this instance a broad brush judgment of the likely success or failure of the ‘private cloud’ concept would actually be justified (despite the fact that I am using a different interpretation of failure to the ‘fail rapidly and try again’ one I understand James to have meant). Given the macro level impacts of cloud (i.e. a complete and disruptive redefinition of the value proposition of IT) and the fact that ‘private cloud’ initiatives fail to recognise this redefinition (by assuming a marginally improved propagation of the status quo), I agree with the idea that anyone who attempts to build their own ‘private cloud’ now will be judged to have ‘failed’ in any future retrospective. When we step away from detailed issues (where we may indeed see some comparatively marginal improvements over current provision) and look at the macro level picture, IT organisations who guide their business to ‘private cloud’ will simultaneously load it with expensive, uncompetitive and unnecessary assets that still need to be managed for no benefit whilst also failing to guide it towards the more transformational benefits of specialisation and flexible provision. As a result whilst we cannot provide ‘hard facts’ or ‘specific measures’ that strictly define which ‘elements’ of an individual ‘private cloud’ initiative will be judged to have ‘succeeded’ and which will have ‘failed’, looking for this justification is missing the broader point and failing to see the wood for the trees; the broader picture appears to suggest that when we look back on the overall holistic impacts of ‘private cloud’ efforts it will be apparent that they have failed to deliver the transformational benefits on offer by failing to appreciate the macro level trends towards IT and business service access in place of ownership. Such a failure to embrace the seismic change in IT value proposition – in order to concentrate instead on optimising a fading model of ‘ownership’ – may indeed be judged retrospectively as ‘failure’ by businesses, consumers and the market.

Whilst I agree with many of James’s messages about what it actually means to deliver a cloud – especially the fact that they are a complex, connected ‘how’ rather than a simple ‘thing’ and that budding ‘private cloud’ implementers fail to understand the true breadth of complexity and cross-functional concerns – I believe I may part company with James’s prediction in the detail.  If I understand correctly James specifically advocates ‘trying and failing’ merely as an enabler to have another go with more knowledge; given the complexity involved in trying to build your own ‘cloud’ (particularly beyond low value infrastructure), the number of failures you’d have to incur to build a complete platform as you chase more value up the stack and the ultimately pointless nature of the task (at least taking the scenarios outlined above) I would prefer to ask why we would bother with ‘private cloud’ at this point at all? It would seem a slightly insane gamble versus taking the concrete benefits available from the public cloud (in a way which is consistent with your risk profile) whilst allowing cloud companies to ‘fail and try again’ on your behalf until they have either created ‘private cloud appliances’ for you to deploy locally or obviated the need completely through the more economically attractive maturation of ‘virtual private platforms’.

For further reading, I went into more detail on why I’m not sure private clouds make sense at this point in time here:

http://www.ebizq.net/blogs/ebizq_forum/2010/11/does-the-private-cloud-lack-business-sense.php#comment-12851

and why I’m not sure they make sense in general here:

https://itblagger.wordpress.com/2010/07/14/private-clouds-surge-for-wrong-reasons/

(1) Of course other possible scenarios people mention are:

  1. That the business capabilities remaining in house expand to fill the capacity.  In this scenario these business capabilities would probably still pay a premium versus what they could get externally and thus will still be uncompetitive and – more importantly – saddled with a serious distraction for no benefit.  Furthermore this assumes that the remaining business capabilities share a common business model and that through serendipity the ‘private cloud’ was built to optimise this model in spite of the original muddled requirements to optimise other business models in parallel; and
  2. Companies who over-provision in order to build a ‘private cloud’ will be able to lease all of their now spare capacity to others in some kind of ‘electricity model’.  Whilst technically one would have some issues with this, more importantly such an operation seems a long way away from the core business of nearly every organisation and seems a slightly desperate ‘possible ancillary benefit’ to cling to as a justification to invest wildly now.  This is especially the case when such an ‘ancillary benefit’ may prevent greater direct benefits being gained without the hassle (through judicious use of the public cloud both now and in the future).

Cloud is not Virtualization

In the interests of keeping a better record of my online activity I’ve recently decided to cross-post opinions and thoughts I inflict on people via forums and other technology sites to my blog as well (at least where they are related to my subject and have any level of coherence, lol).  In this context I replied to an ebizq question yesterday that asked “Is it better to use virtualization for some business apps than the cloud?”.  This question was essentially prompted by a survey finding that some companies are more likely to use virtualisation technologies than move to the cloud.

Whilst I was only vaguely interested in the facts presented per se, I often find that talking about cloud and virtualisation together invites people to draw a false equivalence between two things that – at least in my mind – are entirely different in their impact and importance.

Virtualisation is a technology that can (possibly) increase efficiency in your existing data centre and which might be leveraged by some cloud providers as well.  That’s nice and it can reduce the costs of hosting all your old cack in the short term. Cloud on the other hand is a disruptive shift in the value proposition of IT and the start of a prolonged disruption in the nature and purpose of businesses.

In essence cloud will enable organisations to share multi-tenant business capabilities over the network in order to specialise on their core value. Whilst virtualisation can help you improve your legacy mess (or make it worse if done badly) it does nothing significant to help you take advantage of the larger disruption as it just reduces the costs of hosting applications that are going to increasingly be unfit for purpose due to their architecture rather than their infrastructure.

In this context I guess it’s up to people to decide what’s best to do with their legacy apps – it may indeed make sense in the short term to move them onto virtualised platforms for efficiency’s sake (should it cost out) in order to clean up their mess during the transition stage.

In the longer term, however, people are going to have to codify their business architecture, make decisions about their core purpose and then build new cloud services for key capabilities whilst integrating 3rd party cloud services for non-differentiating capabilities. In this scenario you need to throw away your legacy and develop cloud native and multi-tenant services on higher level PaaS platforms to survive – in which case VMs have no place as a unit of value and the single tenant legacy applications deployed within them will cease to be necessary. In that context the discussion becomes a strategic one – how aggressively will you adopt cloud platforms, what does this mean for the life span of your applications and how will it impact the case for building a virtualised infrastructure (I was assuming it was a question of internal virtualisation rather than IaaS due to the nature of the original question). If it doesn’t pay back or you’re left with fairly stable applications already covered by existing kit then don’t do it.

Either way – don’t build new systems using old architectures and think that running them in a virtualised environment ‘future proofs’ you; the future is addressing a set of higher level architectural issues related to delivering flexible, multi-tenant and mass customisable business capabilities to partners in specialised value webs. Such architectural issues will increasingly be addressed by higher level platform offerings that industrialise and consumerise IT to reduce the issues of managing the complex list of components required to deliver business systems (also mentioned as an increasing issue in the survey).  As a result your route to safety doesn’t lie in simply using less physical – but equally dumb – infrastructure.

Enterprise Architecture Top to Bottom

JP Morgenthal published an interesting post on his blog recently relating to the futility of trying to map out every facet of an enterprise architecture.  I wholeheartedly agree with his sentiments and have spoken on this issue in the past – albeit in a slightly different context (and also in discussing evolution and IT, actually).  I also feel strongly that EA practitioners should be focused far more on enabling a deeper understanding of the purpose and capabilities of the enterprises they work in – to facilitate greater clarity of reasoning about strategic options and appropriate action – rather than taking on an often obstructive and disconnected IT strategy and governance role (something that was covered nicely by Neil Ward-Dutton last week).  For all of these reasons I totally agreed with JP’s assertion that we should only pursue absolute detail in those areas that we are currently focused on.  This is certainly the route we took in my previous role in putting together an integration architecture for two financial services companies.

The one area where I think we can add to JP’s thoughtful consideration of the issues is that of developing useful abstractions of the business architecture as pivotal reasoning assets.  In pursuing the work I allude to we developed a business capability map of the enterprise that allowed us to divide it up into a portfolio of ‘business components’.  These capabilities allowed us to reason at a higher level and make an initial loose allocation of the underlying implementation assets and people to each (and given that both I and EA were new to the organisation when we started I even had to ‘crowdsource’ a view of the assets and their allocation to capabilities from across the organisation to kick start the process).  In this sense there was no need at the outset to understand the details of how everything linked together (either across the organisation or within individual capabilities) but rather just the purpose and broad outcomes of each capability.  This is an important consideration as it allowed us to focus clearly on understanding which capabilities needed to be addressed to respond to particular issues and also to reason about and action these changes at a more abstract level (i.e. without becoming distracted by – and lost in – the details of the required implementation).  In this sense we could concentrate not on understanding the detail of every ‘horizontal’ area as a discrete thing – so everything about every process, infrastructure, data or reward systems along with the connections across them all – but rather on building a single critical horizontal asset (i.e. the business capability view) that allowed us to reason about outcomes at an enterprise level whilst only loosely aligning implementation information to these capabilities until such a time as we wanted to make some changes.  At that stage specific programmes could work with the EA team to look much more specifically at actual relationships along with the implementation resources, roles and assets required to deliver the outcomes.  Furthermore the loosely bounded nature of the capabilities meant that we could gradually increase the degree of federation from a design and implementation perspective without losing overall context.

Overall this approach meant that we did not try to maintain a constant and consistent view of the entire enterprise within and across the traditional horizontal views – along with the way in which they all linked together from top to bottom – but only a loose view of the overall portfolio of each with specific contextualisation provided by an organising asset (i.e. the capability model).  In this context we needed to confirm the detailed as-is and to-be state of each capability whenever we wanted to action changes to its outcomes – as we expended little effort to create and maintain detailed central views – but this could be largely undertaken by the staff embedded within the capability with support and loose oversight from the central EA team.  In reality we kept an approximate portfolio view of the assets in the organisation (so for example processes, number of people, roles, applications, infrastructures and data) as horizontal assets along with the fact that there was some kind of relationship but these were only sufficient to allow reasoning about individual capabilities, broad systemic issues or the scale of impact of potential changes and were not particularly detailed (I even insisted on keeping them in spreadsheets and Sharepoint – eek – to limit sophistication rather than get sucked into a heavy EA tool with its voracious appetite for models, links and dependencies).
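
As a purely hypothetical sketch (the capability names, attributes and helper function below are invented for illustration – in practice we kept all of this in spreadsheets rather than code), the structure of such a loosely-allocated capability portfolio might look something like this:

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    """A business capability: purpose and outcomes are explicit, whilst
    implementation assets are only loosely allocated until change is planned."""
    name: str
    purpose: str
    outcomes: list[str]
    assets: list[str] = field(default_factory=list)  # approximate, not a detailed model
    headcount: int = 0

# A tiny, invented portfolio purely to show the shape of the model.
portfolio = [
    Capability("Customer Onboarding", "Bring new customers into the business",
               ["New customers active within 24 hours"],
               assets=["CRM", "Identity checking service"], headcount=12),
    Capability("Claims Handling", "Settle customer claims",
               ["Claims settled within agreed service levels"],
               assets=["Claims workflow application", "Document store"], headcount=40),
]

def touched_by(asset: str) -> list[str]:
    """Coarse reasoning aid: which capabilities would a change to this asset affect?"""
    return [c.name for c in portfolio if asset in c.assets]

print(touched_by("CRM"))  # -> ['Customer Onboarding']
```

The point is simply that the detail lives with the capability; the central view only needs enough structure to reason about purpose, outcomes and the rough scale of impact of a change.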

I guess the point I wanted to make is that my own epiphany a few years ago related to the fact that most people don’t need to know how most things work most of the time (if ever) and that trying to enable them to do so is a waste of time and a source of confusion and inaction.  It is essentially impossible to create and then manage a fixed and central model of how an entire enterprise works top to bottom, particularly by looking at horizontal implementation facets like processes, people or technology which change rapidly, independently and for different reasons in different capabilities.  In addition the business models of capabilities are going to be very diverse and ‘horizontal’ views often encourage overly simplistic policies and standards for the sake of ‘standardisation’ that negatively impact large areas of the business.  Throw in an increasing move towards the cloud and the consumption of specialised external services and this only becomes more of an issue.  In this context it is far more critical to have a set of business architecture assets at different levels of abstraction that allow reasoning about the purpose, direction and execution strategy of the business, its capabilities and their implementation assets (this latter only for those capabilities you retain yourself in future).  These assets need to be explicitly targeted at different levels of abstraction, produced in a contextually appropriate way and – importantly – facilitate far greater federation in decision making and implementation to improve outcomes.  Effectively a framework for understanding and actionable insight is far more valuable than a mass of – mostly out of date – data that causes information overload, confusion and inaction.  An old picture from a few years ago that I put together to illustrate some of these ideas is included below (although in reality I’m not sure that I see an “IT department” continuing to exist as a separate entity in the long term but rather a migration of appropriate staff into the enterprise and capability spaces with platforms and non-core business capabilities moving to the cloud).

[Image: guerilla]

In terms of relinquishing central control in this way it is possible that for transitional business architectures – where capabilities remain largely within the control of a single enterprise as today – greater federation coupled with a refined form of internal crowdsourcing could keep each independent model internally consistent and also consistent with the broader picture of enterprise value creation.  I decided to do something else before getting to the point of testing this as a long term proposition, however, lol (although perhaps my former (business) partner in crime @pcgoodie who’s just started blogging will talk more about this given that he has more staying power than me and continues the work we started together, lol).  Stepping back, however, part of the value in moving to this way of thinking is letting go and viewing things from a systems perspective and so the value of having access to all the detail from the centre will diminish over time.

In the broader sense, though, whilst I first had a low grade ‘business services as organisation’ epiphany whilst working at a financial services company in 2001 most of this thinking and these ways of working were inspired not by being inside an enterprise but rather subsequently spending time outside of one.  Spending time researching and reflecting on the architectures, patterns, technologies and – more importantly – business models related to the cloud made me think more seriously about the place of an enterprise in its wider ecosystem of value creation and the need to concentrate completely on those aspects of the ecosystem that really deliver its value.  In the longer term whilst there are many pressures forcing an internal realignment to become more customer-centric, valuable or cost effective, the real pressure is going to start building from the outside; once you realise that the enterprise works within a broader system you also start to see how the enterprise itself is a system, with most of its components being pretty poor or misaligned to the needs of the wider ecosystem and its consumers.  At this point you begin to realise that you have to separate the different capabilities in your organisation and use greater design thinking, abstraction and federation, giving up control of the detail outside of very specific (and different) contexts depending on your purview.  At that stage you can really question your need to be executing many capabilities yourself at all, since the real promise of the cloud is not merely to provide computing power externally but rather to enable businesses to realise their specialised capabilities in a way that is open, collaborative and net native and to connect these specialisations across boundaries to form new kinds of loose but powerful value webs.  Such an end game will be totally impossible for organisations who continue to run centralised, detail-oriented EA programmes and thus do not learn to let go, federate and use abstraction to reason, plan and execute at different levels simultaneously.

What Does it Mean to Think of Your Business as a Service?

Just read a really interesting post from Henry Chesbrough about what it means to think about your business as a service.  It touches on something that has always seemed obvious to me but which also seems to be not well understood.  It’s important as it’s both subtle but ultimately highly disruptive.

In order to set some context about how businesses typically think of services, Henry first points to an illustration of the value chain model and the place of ‘services’ within this illustration:

[Image: value chain model]

He points out that services are often thought of as a second-class citizen in this view of the world, merely being tacked onto the end of the process to assist customers in adopting the ‘real’ value – i.e. that which has been designed to be pushed at them through the tightly integrated value chain.

He then goes on to suggest that this isn’t the best view of what services should be in reality and that there is immense value in thinking about – and delivering the value of – a business as a service.

I have been arguing on my blog for a long time that the challenge facing most organisations is to reimagine themselves as a set of ‘business services’ (or business capabilities) that are organised around value rather than customer segments, functional disciplines or physical assets.  Such a move can make them more adaptable, help them to specialise by disaggregating non-core capabilities to partners and unleash innovation on a scale not possible in today’s internally focused and tightly coupled organisations.  Looking at different kinds of value can also help us to sustainably disaggregate and then re-aggregate the organisation based on cultural and economic differences (so based around relationship management business models, infrastructural business models, IP development business models or portfolio management business models).

90% of people I talk to still equate services with the value chain definition highlighted above, however, and miss the core point that a move to a ‘services based world’ isn’t that the small area of the traditional value chain called ‘services’ becomes the most economically attractive (i.e. consulting is better than product development and so we should concentrate more there) but rather that every participant in the traditional value chain has to realign themselves to take responsibility for ‘hiding’ the assets needed to deliver their outcome.  In doing this they simplify consumption for their customers and create an ability to work with far more value web participants outside the boundaries of a single organisation.  Equally importantly such a realignment sets the scene for them to participate in pull-oriented value webs rather than merely being a dumb participant in a pre-set and push-oriented value chain.  This does not mean that they are only specialising on the traditional ‘services’ part of the value chain and sourcing all the non-services parts from partners – rather it means that every organisation has to identify the correct business model for each component and then increase the scope of each to wrap up whatever physical, human or information assets are required to deliver that as a specialised service.  As an example manufacturers (an infrastructural business with heavy dependency on physical assets and hence far from the definition of services we started with) will still need manufacturing capability, but they will ‘expose’ the whole capability (i.e. people, processes and technologies) as a service to others (who follow different business models related to IP development or relationships).

Such a shift to greater specialisation around the delivered value whilst simultaneously extending the scope of expertise required to deliver that value as a service is an important point; more often than not such realignments will cut across settled business boundaries and drive ‘mini-vertical-integration*’ within the context of a particular business type and outcome.

We could therefore consider a reorganisation of businesses for a service economy as a move away from the value chain model we started with to one in which:

  • ‘services’ become core offerings rather than merely a value add and represent both the external boundary and a definition of the specialised outcome delivered.  Internally the service will be implemented by a ‘mini internal value chain’ tightly optimised to deliver its differentiating IP through the appropriate combination of physical, information and human resources; and
  • a ‘value web’ coordinates services into broader networks by aggregating value via the coordination of outcomes from many specialised service providers.

Effectively you could say that the ‘value chain’ (i.e. explicit, known implementation) becomes internal to the service provider whilst the ‘value web’ (i.e. external coordination of outcomes) becomes the external expression of how value is aggregated.
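
To make the distinction a little more concrete, here is a minimal, hypothetical sketch (the names and interfaces are mine, not Henry’s): the service boundary exposes only the outcome to the value web, whilst the ‘mini internal value chain’ that produces it stays hidden behind that boundary.

```python
from abc import ABC, abstractmethod

class ManufacturingService(ABC):
    """The external boundary: value web participants see only the outcome."""
    @abstractmethod
    def produce(self, design_id: str, quantity: int) -> str:
        """Request a production run and receive an order reference."""

class AcmeManufacturing(ManufacturingService):
    """An infrastructural business exposing its whole capability – people,
    processes and technologies – as a service to IP- or relationship-led partners."""
    def produce(self, design_id: str, quantity: int) -> str:
        self._schedule_line(design_id, quantity)    # internal process step, invisible externally
        self._order_materials(design_id, quantity)  # internal logistics step, invisible externally
        return f"ORDER-{design_id}-{quantity}"

    def _schedule_line(self, design_id: str, quantity: int) -> None: ...
    def _order_materials(self, design_id: str, quantity: int) -> None: ...

# A partner in the value web consumes the outcome without owning any of the assets.
print(AcmeManufacturing().produce("widget-v2", 500))  # -> ORDER-widget-v2-500
```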

Either way there is an important mind shift that needs to be made here – moving to a model in which you make your business available as a service has profound implications for what does or does not constitute a specialisation for your organisation and on how you organise.  You may find that many things you have traditionally done internally actually have no intrinsic value and can be ceded to specialised partners, whereas subsets of many of the things that have over-simplistically been considered ‘horizontal’ (and thus easily outsourced – so for example HR, Marketing or IT) come to represent significant value when you look to optimise against outcomes.  Only by re-orienting around value will we gain the insights necessary to understand the nature of the services we wish to offer, the optimum business model to adopt for each and the skills and assets required by the cross-functional teams who will deliver it.

P.S.  As an example – I briefly discussed how moves to specialise around value might affect IT departments last week.

*I should also state that when talking about ‘vertical integration’ in this context I mean within a particular business type (i.e. relationship management, infrastructure, IP development or portfolio management) rather than _across_ business types – such horrific ‘vertical integration’ across the whole value chain of different kinds of value (as beloved by traditional telecoms incumbents and, it seems, Apple) creates walled gardens that restrict consumer freedom, create asymmetrical power relationships and inhibit innovation.  As a result I believe that this is something to be strictly avoided if we want open and competitive markets (and increasingly enforced by regulation if necessary).

What’s the Future of SOA?

EbizQ asked last week for views on the improvements people believe are required to make SOA a greater success.  I think that if we step back we can see some hope – in fact increasing necessity – for SOA and the cloud is going to be the major factor in this.

If we think about the history of SOA to date it was easy to talk about the need for better integration across the organisation, clearer views of what was going on or the abstract notion of agility. Making it concrete and urgent was more of an issue, however. Whilst we can discuss the ‘failure’ of SOA by pointing to a lack of any application of service principles at a business level (i.e. organisationally through some kind of EA) this is really only a symptom and not the underlying cause. In reality the cause of SOA failure to date has been business inertia – organisations were already set up to do what they did, they did it well enough in a push economy and the (understandable) incentives for wholesale consideration of the way the business worked were few.

The cloud changes all of this, however. The increasing availability of cloud computing platforms and services acts as a key accelerator to specialisation and pull business models since it allows new entrants to join the market quickly, cheaply and scalably and to be more specialised than ever before. As a result many organisational capabilities that were economically unviable as market offerings are now becoming increasingly viable because of the global nature of cloud services. All of these new service providers need to make their capabilities easy to consume, however, and as a result are making good use of what people are now calling ‘APIs’ in a web 2.0 context but which are really just services; this is important as one of the direct consequences of specialisation is the need to be hooked into the maximum number of appropriate value web participants as easily as possible.

On the demand side, as more and more external options become available in the marketplace that offer the potential to replace those capabilities that enterprises have traditionally executed in house, so leaders will start to rethink the purpose of their organisations and leverage the capabilities of external service providers in place of their own.

As a result cloud and SOA are indivisible if we are to realise the potential of either; cloud enables a much broader and more specialised set of business service providers to enter a global market with cost and capability profiles far better than those which an enterprise can deliver internally. Equally importantly, however, they will be implicitly (but concretely) creating a ‘business SOA catalogue’ within the marketplace, removing the need for organisations to undertake a difficult internal slog to re-implement or re-configure outdated capabilities for reuse in service models. Organisations need to use this insight now to trigger the use of business architecture techniques to understand their future selves as service-based organisations – both by using external services as archetypes to help them understand the ways in which they need to change and offer their own specialised services, and by working with potential partners to co-develop and then disaggregate those services in which they don’t wish to specialise in future.

Having said all that to set the scene for my answer(!), I believe that SOA research needs to focus on raising the concepts of IT-mediated service provision to a business level: concrete modelling of business capabilities and value webs, along with complex service levels, contracts, pricing and composition; new cloud development platforms, tooling and management approaches that are linked more explicitly to business outcomes and give specialised support to different kinds of work; and the emergence of new third parties who will mediate, monitor and monetise such relationships on behalf of participants in order to provide the required trust.

All in all I guess there’s still plenty to do.

This is Not Your Daddy’s IT Department

Whilst noodling around the net looking at stuff for a longer post I’m writing I came across an excellent Peter Evans-Greenwood piece from a few months ago on a related theme – namely the future of the IT department.  I found it so interesting I decided to forgo my other post for now and jot down some thoughts.

After an interesting discussion about the way in which IT organisations have traditionally been managed and the ways in which outsourcing has evolved Peter turns to a discussion of the future shape of IT as a result of the need for businesses to focus more tightly, change more rapidly and deal with globalisation.  He posits that the ideal future shape of provision looks like that below (most strategic IT work at peak):

[Image: pyramid – the ideal future shape of IT provision]

Firstly I agree with the general shape of this graphic – it seems clear to me that much of what goes on in existing enterprises will be ceded to specialised third parties.  My only change would be to substitute ‘replace with software’ with ‘replace with external capability’ as I believe that businesses will outsource more than just software.  Given that this diagram was meant to look at the work of the IT department, however, its scope is understandable.

The second observation is that I believe that the IT “function” will disaggregate and be spread around both the residual business and the new external providers.  I believe that this split will happen based on cultural and economic factors.

Firstly all ‘platform’ technologies will be outsourced to the public cloud to gain economies of scale as the technology matures.  There may be a residual internal IT estate for quite some time but it is essentially something that gets run down rather than invested in for new capability.  It is probable that this legacy estate would go to one kind of outsourcer in the ‘waist’ of the triangle.

Secondly many business capabilities currently performed in house will be outsourced to specialised service providers – this is reflected in the triangle by the ‘replace with software’ bulge (although as I stated I would suggest ‘replace with external capability’ in this post to cover the fact that I’m also talking about business capabilities rather than just SaaS).

Thirdly – business capabilities that remain in house due to their differentiating or strategic nature will each absorb a subset of enterprise architects, managers and developers to enable a leaner process – essentially these people will be embedded with the rest of their business peers to support continual improvement based on aligned outcomes.  The developers producing these services will use cloud platforms to minimise infrastructural concerns and focus on software-based encoding of the specialised IP encapsulated by their business capability.  It is probable that enterprise architects, managers and developers in this context will also be supplemented by external resources from the ‘waist’ as need arises.

Finally a residual ‘portfolio and strategy’ group will sit with the executive and manage the enterprise as a collection of business capabilities sourced internally and externally against defined outcomes.  This is where the CIO and portfolio level EA people will sit and where traditional consulting suppliers would sell their services.

As a result my less elegant (i.e. pig ugly :)) diagram updated to reflect the disaggregation of the IT department and the different kinds of outsourcing capabilities they require would look something like:

[Image: future_it_department – updated diagram showing the disaggregated IT department]

In terms of whether the IT ‘department’ continues to exist as an identifiable capability after this disaggregation I suspect not – once the legacy platform has been replaced by a portfolio of public cloud platforms and the ‘IT staff’ merged with other cross-functional peers behind the delivery of outcomes I guess IT becomes part of the ‘fabric’ of the organisation rather than a separate capability.  I don’t believe that this means that IT becomes ‘only’ about procurement and vendor management, however, since those business capabilities that remain in house will still use IT literate staff to design and build new IT driven processes in partnership with their peers.

I did write a number of draft papers about all these issues a few years ago but they all got stuck down the gap between two jobs.  I should probably think about putting them up here one day and then updating them.

Cloud vs Mainframes

David Linthicum highlights some interesting research about mainframes and their continuation in a cloud era.

I think David is right that mainframes may be one of the last internal components to be switched off and that in 5 years most of them will still be around.  I also think, however, that the shift to cloud models may have a better chance of achieving the eventual decommissioning of mainframes than any previous technological advance.  Hear me out for a second.

All previous new generations of technology looking to supplant the mainframe have essentially been slightly better ways of doing the same thing.  Whilst we’ve had massive improvements in the cost and productivity of hardware, middleware and development languages, essentially we’ve continued to be stuck with purchase and ownership of costly and complex IT assets. As a result whilst most new development has moved to other platforms the case for shifting away from the mainframe has never seriously held water. Whilst redevelopment would generate huge expense and risk, it would result in no fundamental business shift. Essentially you still owned and paid for a load of technology ‘stuff’ and the people to support it even if you successfully navigated the huge organisational and technical challenges required to move ‘that stuff’ to ‘this stuff’. In addition the costs already sunk into the assets and the technology cost barriers to other people entering a market (due to the capital required for large scale IT ownership) also added to the general inertia.

At its heart cloud is not a shift to a new technology but – for once – genuinely a shift to a new paradigm. It means capabilities are packaged and ready to be accessed on demand.  You no longer need to make big investments in new hardware, software and skills before you can even get started. In addition suddenly everyone has access to the best IT and so your competitors (and new entrants) can immediately start building better capabilities than you without the traditional technology-based barriers to entry. This could lead to four important considerations that might eventually lead to the end of the mainframe:

  1. Should an organisation decide to develop its way off the mainframe they can start immediately without the traditional need to incur the huge expense and risk of buying hardware, software, development and systems integration capability before they can even start to redevelop code.  This removes a lot of the cost-based risks and allows a more incremental approach;
  2. Many of the applications implemented on mainframes will increasingly be in competition with external SaaS applications that offer broadly equivalent functionality.  In this context moving away from the mainframe is even less costly and risky (whilst still a serious undertaking) since we do not need to even redevelop the functionality required;
  3. The nature of the work that mainframe applications were set up to support (i.e. internal transaction processing across a tight internal value chain) is changing rapidly as we move towards much more collaborative and social working styles that extend across organisational boundaries.  The changing nature of work is likely to eat away further at the tightly integrated functionality at the heart of most legacy applications and leave fewer core transactional components running on the mainframe; and
  4. Most disruptive of all, as organisations increasingly take advantage of falling collaboration costs to outsource whole business capabilities to specialised partners, so much of the functionality on the mainframe (and other systems) becomes redundant since that work is no longer performed in house.

I think that the four threads outlined here have the potential to lead to a serious decline in mainframe usage over the next ten years.

But then again they are like terminators – perhaps they will simply be acquired gradually by managed service providers offering to squeeze the cost of maintenance, morph into something else and survive in a low grade capacity for some time.


Is Your Business Right For The Cloud?

I left a short comment on the ebizq website a couple of days ago in response to the question ‘is the cloud right for my business?’

I thought I’d also post an extended response here as I strongly believe that this is the wrong question.  Basically I see questions like this all the time and they are always framed and answered at the wrong level, generating a lot of heat – as people argue about the merits of public vs private infrastructures etc – but little insight.  Essentially there are a number of technology offerings available which may or may not meet the specific IT requirements of a business at a particular point in time.  Framed in the context of traditional business and IT models the issues raised often focus on the potentially limited benefits of a one-to-one replacement of internal with external capability in the context of a static business.  It’s usually just presented as a question of whether I provide equivalent IT from somewhere else (usually somewhere dark and scary) or continue to run it in house (warm, cuddly and with tea and biscuits thrown in).  The business is always represented as static and unaffected by the cloud other than in the degree to which its supporting IT is (marginally) better or (significantly) worse.

If the cloud was truly just about taking a traditional IT managed service (with some marginal cost benefit) vs running it in house – as is usually positioned – then I wouldn’t see the point either and would remain in front of the heater in my carpet slippers with everyone else in IT.  Unfortunately for people stuck in this way of thinking  – and the businesses that employ them – the cloud is a much, much bigger deal.

Essentially people are thinking too narrowly in terms of what the cloud represents.  It’s not about having IT infrastructure somewhere else or sourcing ‘commodity applications’ differently.  These may be the low hanging fruit visible to IT folks currently but they are a symptom of the impact of cloud and not the whole story.

The cloud is all about the falling transaction costs of collaboration and the current impact on IT business models is really just a continuation of the disruptions of the broader Internet.  As a result whilst we’re currently seeing this disruption playing out in the IT industry (through the commoditisation of technology and a move towards shared computing of all kinds) it is inevitable that other industry disruptions will follow as the costs of consuming services from world-class partners plummets and the enabling technology becomes cheaper, more configurable, more social and more scalable as a result of the reformation of the IT industry.

Essentially all businesses need to become more adaptive, more connected and more specialised to succeed in the next ten years and the cloud will both force this and support it.  Getting your business to understand and plan for these opportunities – and having a strong cloud strategy to support them – is probably the single most important thing a CIO can do at the moment.  Not building your own ‘private cloud’ with no expertise or prior practice to package or concentrating on trying to stop business colleagues with an inkling of the truth from sourcing cloud services more appropriate to their needs.  Making best use of new IT delivery models to deliver truly competitive and world-class business capabilities for the emerging market is the single biggest strategic issue facing CIOs and the long term health of the businesses they serve.  There is both huge untapped value and terrific waste languishing inside existing business structures and both can be tackled head on with the help of the cloud.  Optimising the limited number of business capabilities that remain in a business’s direct control – as opposed to those increasingly consumed from partners – will be a key part of making reformed organisations fit for the new business ecosystem.

As a result the question isn’t whether the cloud is or will be ‘right’ for your business but rather how ‘right’ your business will be for the cloud. Those organisations that fail to take a broader view and move their business and technical models to be ‘right’ for the cloud will face a tough struggle to survive in a marketplace that has evolved far beyond their capabilities.

Reporting of “Cloud” Failures

I’ve been reading an article from Michael Krigsman today related to Virgin Blue’s “cloud” failure in Australia along with a response from Bob Warfield.  These articles raised the question in passing of whether such offerings can really be called cloud offerings and also brought back the whole issue of ‘private clouds’ and their potentially improper use as a source of FUD and protectionism.

Navitaire essentially seem to have been hosting an instance of their single-tenancy system in what appears to be positioned as a ‘private cloud’.  As other people have pointed out, if this was a true multi-tenant cloud offering then everyone would have been affected and not just a single customer.  Presumably then – as a private cloud offering – this is more secure, more reliable, has service levels you can bet the business on and won’t go down.  Although looking at these reports it seems like it does, sometimes.

Now I have no doubt that Navitaire are a competent, professional and committed organisation who are proud of the service they offer.  As a result I’m not really holding them up particularly as an example of bad operational practice but rather to highlight widespread current practices of repositioning ‘legacy’ offerings as ‘private cloud’ and the way in which this affects customers and the reporting of failures.

Many providers whose software or platform is not multi-tenant are aggressively positioning their offering as ‘private cloud’ both as an attempt to maintain revenues for their legacy systems and as a slightly cynical way to press on companies’ worries about sharing.  Such providers are usually traditional software or managed service providers who have no multi-tenant expertise or assets; as a result they try to brand things cloud whilst really just delivering old software in an old hosted model.  Whilst there is still potentially a viable market in this space – i.e. moving single-tenant legacy applications from on-premise to off-premise as a way of reducing the costs of what you already have and increasing focus on core business – such offerings are really just managed services and not cloud offerings.  The ‘private’ positioning is a sweet spot for these people, however, as it simultaneously allows them to avoid the significant investment required to recreate their offerings as true cloud services, prolongs their existing business models and plays on customers’ uncertainty about security and other issues.  Whilst I understand the need to protect revenue at companies involved in such ‘cloud washing’ – and thus would stop short of calling these practices cynical – it illustrates that customers do need to be aware of the underlying architecture of offerings (as Phil Wainwright correctly argued).  In reality most current ‘private cloud’ offerings are not going to deliver the levels of reliability, configurability and scale that customers associate with the promise of the cloud.  And that’s before we even get to the more business transformational issues of connectivity and specialisation.

Looking at these kinds of offerings we can see why single-tenant software and private infrastructure provided separately for each customer (or indeed internally) is more likely to suffer a large scale failure of the kind experienced by Virgin Blue.  Essentially developing truly resilient and failure optimised solutions for the cloud needs to address every level of the offering stack and realistically requires a complete re-write of software, deep integration with the underlying infrastructure and expert operations who understand the whole service intimately.  This is obviously cost prohibitive without the ability to share a solution across multiple customers (remember that cloud != infrastructure and that you must design an integrated infrastructure, software and operations platform that inherently understands the structure of systems and deals with failures across all levels in an intelligent way).  Furthermore even if cost was not a consideration, without re-development the individual parts that make up such ‘private’ solutions (i.e. infrastructure, software and operations) were not optimised from the beginning to operate seamlessly together in a cloud environment and can be difficult to keep aligned and manage as a whole.  As a result it’s really just putting lipstick on a pig and making the best of an architecture that combines components that were never meant to be consumed in this way.

However much positioning companies try to do it’s plain that you can’t get away from the fact that ultimately multi-tenancy at every level of a completely integrated technology stack will be a pre-requisite for operating reliable, scalable, configurable and cost effective cloud solutions.  As a result – and in defiance of the claims – the lack of multi-tenant architectures at the heart of most offerings currently positioned as ‘private cloud’ (both hardware and software related, internal and external) probably makes them less secure, less reliable, less cost effective and less configurable (i.e. able to meet a business need) than their ‘public’ (i.e. new) counterparts.
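
As a rough illustration of what ‘multi-tenancy at every level’ buys you (a deliberately simplified, hypothetical sketch, not any particular vendor’s architecture), the shared platform scopes every operation to a tenant, so customers gain the resilience and economics of one well-engineered shared service whilst remaining logically invisible to one another:

```python
class MultiTenantStore:
    """One shared platform; tenants are separated logically by scoping
    every read and write to a tenant identifier."""
    def __init__(self) -> None:
        self._rows: list[dict] = []  # in a real service: shared, replicated storage

    def put(self, tenant: str, key: str, value: str) -> None:
        self._rows.append({"tenant": tenant, "key": key, "value": value})

    def get(self, tenant: str, key: str) -> list[str]:
        # A tenant only ever sees its own rows, however the underlying
        # (shared) infrastructure is scaled, patched or failed over.
        return [r["value"] for r in self._rows
                if r["tenant"] == tenant and r["key"] == key]

store = MultiTenantStore()
store.put("airline-a", "booking:123", "confirmed")
store.put("airline-b", "booking:123", "cancelled")
print(store.get("airline-a", "booking:123"))  # -> ['confirmed']
```

Contrast this with a single-tenant deployment per customer, which has to replicate the whole stack – and all of its failure modes – for each tenant at each tenant’s cost; that is exactly the economic and resilience point being made above.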

In defiance of the current mass of positioning and marketing to the contrary, then, it could be suggested that companies like Virgin Blue would be less likely to suffer catastrophic failures in future if they seek out real, multi-tenant cloud services that share resources and thus have far greater resilience than those that have to accommodate the cost profiles of serving individual tenants using repainted legacy technologies.  This whole episode thus appears to be a failure of the notion that you can rebrand managed services as ‘private cloud’ rather than a failure of an actual cloud service.

Most ironically of all the headlines incorrectly proclaiming such episodes as failures of cloud systems will fuel fear within many organisations and make them even more likely to fall victim to the FUD from disingenuous vendors and IT departments around ‘private cloud’.  In reality failures such as the case discussed may just prove that ‘private cloud’ offerings create exposure to far greater risk than adopting real cloud services due to the incompatibility of architecting for high scale and failure tolerance across a complete stack at the same time as architecting for the cost constraints of a single tenant.

Cynefin as a Means of Understanding

Don’t know how I missed it given the range of influences I have and their seeming respect for the framework but I have just discovered Dave Snowden’s (@snowded) Cynefin framework.  Need to think on it more deeply but it immediately appealed to me and looks like an extremely useful – and rather deep – tool for exploring the world.  Yr enw gymraeg yn helpu wneud e’n berffaith :-)  (and the Welsh name just makes it perfect).

I’ve been trying to argue for a long time that not all problems in a business can be solved by one approach and usually used the ‘type’ of business (e.g. relationships, infrastructure, innovation or portfolio) as a lens with which to optimise different business capabilities in different ways.  This process of ‘fractal’ and evolutionary EA – where we understand what capabilities we need overall but then allow each capability owner to define their own EA for their particular context based on the business type they run and external selection criteria (and so on) – would seem to be a potential beneficiary of the kind of clarity of thinking the model seems able to bring.  Understand what outcomes we need and then explore the culture and economics required to realise each optimally with a model like Cynefin.

I’ve always believed that the primary purpose of EA should be to create ‘sense making’ models for the enterprise at different levels of abstraction appropriate to the capability we’re trying to realise and so tools like this are invaluable in my view.

I’ve been thinking a lot lately about the future of EA and how to capture in one place all of my thoughts from the last few years; I guess my new challenge is to decide how (and whether) to fit Cynefin into that picture as I can rarely pass a good idea by, lol.  In struggling through this process, however, I will now find Dave’s rule of thumb immensely comforting – “I know more than I can say and can say more than I can write down”.

I just didn’t realise he knew me so well.