
Re-imagining Business Through Integration

14 Nov

(I’m cross-posting this from the Fujitsu RunMyProcess blog where I am now a regular contributor).

Just a commentary in response to a post I found by Peter Evans-Greenwood on the potential for business re-engineering based on presence-based technologies such as Apple’s iBeacon. While I don’t want to talk about this subject specifically, Peter uses a couple of very clear examples in terms of retail purchasing that illustrate the power of re-imagining desirable outcomes from the consumer’s perspective – as opposed to a technology perspective – and the resulting need to pursue consumer-focused integration of business capabilities to give consumers what they need.

These themes resonated with me this morning as they echoed a talk I gave at the Eurocloud congress recently in which I berated people for not “thinking big” about the potential of cloud in combination with other technologies. At the moment there is so much discussion and argument about whose VM is better or the benefits (or not) of making VMs more ‘enterprisey’ that everyone seems to be missing the ‘moonshot’ opportunity of integrating, simplifying and putting technology platforms into the hands of everyone. This problem only becomes more acute as you broaden your view to all of the other silo arguments raging across other areas of technology evolution. From this perspective Peter’s examples of design-led, consumer-oriented thinking were very similar to the challenge I tried to lay down to congress attendees.

Effectively I believe that the IT challenge of our generation is to package diverse technologies into much higher level platforms that humanise technology and empower less technical people to solve real problems – i.e. to enable them to use modelling and simplified domain languages to scalably and reliably address the huge opportunities that technology can deliver to science, business and society. It’s a shock to many IT people but more often than not it’s actually other people who have the domain knowledge required to change the world – which is why they don’t have the time to learn the technology. From their perspective everything related to traditional IT is a form of tax, a significant driver of risk and delay and at worst an insurmountable barrier to their activities. These problems become more acute as you scale down the size of organisation under consideration – to the point at which the vast majority of smart people are locked out of the ability to bring their expertise to bear in new digital business models.

Humanising technology to realise new digital value chains

If we take Peter’s examples of placing the consumer – rather than technology – at the heart of our endeavours then it feels to me as if many seemingly “hot” IT trends fail this basic test and are simply a reflection of technology-led thinking. Doing isolated things better because we can – like Peter’s NFC example – is really just a way of increasing the efficiency of something that brings no benefit to the customer, and is therefore pointless when you step back and reflect. In Peter’s example the ‘customer’ from the technology provider’s perspective may have been the cashiers, the people who support payment systems or even the CIO. When you shift to an outside-in perspective, however, the obvious question is why make payment at the cash desk more efficient when there is no need to queue to pay at all?

I know it’s a difficult discussion but in a similar sense businesses rather than IT staff are the true customers of IT and their intent is ultimately to deliver new and valuable outcomes as quickly as possible – they really couldn’t care less whether your infrastructure is virtualised, what middleware you use or whether the pointless technical activity required to undertake these tasks is managed by operations staff or developers. While they still have to ‘queue’ unnecessarily to get their outcomes it makes no material difference to their poor experience or the lack of empowerment offered by technology platforms. By stepping back we can see that most of the activity in cloud at the moment is not focused on re-imagining how we integrate and simplify IT to support the rapid achievement of new and customer-led business models but rather on how we provide tools and approaches to increase the efficiency of the people who have traditionally implemented IT. Again, this might make worthless tasks more efficient but effectively it’s like the payment example mentioned by Peter: in the same way that using NFC misses the opportunity for a wholesale rethink of the customer’s payment experience, I feel that most cloud activity (and certainly noise) is focused on achieving efficiency increases within the vast swathes of traditional IT activity which could be wholly eliminated using a design-led, outcome-centric approach.

In this context I believe that the major responsibility of cloud platform providers is to provide a simplified way of creating business solutions that span all of the different technologies, business capabilities and channels that are meaningful to the creation of business models. Essentially we need to enable businesses to ‘compose’ internal and external capabilities into new value webs supporting innovative new business models – all at a higher level of abstraction. I call this concept of rapid business model creation, integration and adaptation ‘composite business’. Essentially there should be no need for anyone other than cloud platform providers to understand the complexity of the different underlying technologies necessary to create, deliver and monetise systems that digitally encode business IP for such composite business models.
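As a purely illustrative aside, this is roughly what ‘composition’ means when written out in code rather than modelled on a platform – every URL, field name and threshold below is an invented assumption, and the point of a composite business platform is precisely that a domain expert would model this flow rather than program it:

```python
# A hypothetical sketch of a 'composite business' service that composes one
# internal and one external capability into a single higher-level outcome.
# All endpoints and field names are illustrative only.
import requests  # assumes the third-party 'requests' library is installed

INTERNAL_PRICING = "https://internal.example.com/api/quote"       # in-house capability
EXTERNAL_CREDIT = "https://partner.example.com/api/credit-check"  # partner capability

def create_order_offer(customer_id: str, product_id: str) -> dict:
    """Compose two lower-level capabilities into one business outcome:
    a credit-checked, priced offer for a customer."""
    # Consume the partner's credit-check capability...
    credit = requests.get(EXTERNAL_CREDIT, params={"customer": customer_id}, timeout=10)
    credit.raise_for_status()

    # ...and the internal pricing capability.
    quote = requests.get(INTERNAL_PRICING, params={"product": product_id}, timeout=10)
    quote.raise_for_status()

    # The 'business IP' lives in the composition rule, not in the plumbing.
    approved = credit.json().get("score", 0) >= 600
    return {
        "customer": customer_id,
        "product": product_id,
        "price": quote.json().get("price"),
        "offer_valid": approved,
    }
```

Even in this toy form the valuable part is clearly the composition rule – the threshold and the shape of the offer – rather than the technology underneath it, which is exactly what a composite business platform should hide.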

Realising a business platform for the support of composite business models requires the consideration of two different dimensions of integration and simplification:

  1. Firstly composite business platforms need to provide a cohesive experience to their users by integrating and simplifying all of the technologies, processes and tools required to deliver value outcomes via multi-layer business composition; such platforms cannot simply be a loose and low level collection of technologies and middleware that require ongoing integration, configuration and management by technical users.
  2. Secondly the platform itself needs to provide high leverage tools that a range of stakeholders can use to quickly capture, deliver, monetise and distribute their business IP as composite business and technology services.  In this context a composite business platform needs to facilitate the simplified creation of solutions that integrate distributed and heterogeneous assets into new value webs – while hiding the technical complexity required to enable it.

In stepping back we need to realise the essentially pointless nature of technology implementation and management as an end in itself and focus on the ways in which we can make it disappear behind tools that simplify the realisation of valuable business outcomes. Such a re-imagining has never been more feasible – we now have a foundation of open networks, open protocols and open technologies that enable the creation of new and higher order platforms for value creation. From my perspective this is the responsibility of platform companies in the emerging business ecosystem and we only have to step back to see the opportunities.

Aspects of Integration

In this context ‘cloud integration’ transforms from a technical issue into an enabler of the rapid linkage of business and technology assets into new, consumer-centric value webs that can span industry boundaries and deliver new personalised services.

Furthermore while I believe that this shift has the short term potential to improve services from companies and organisations operating within settled industry boundaries, the outstanding business opportunities of our age are to put high leverage cloud platforms into the hands of the maximum number of people to democratise technology and allow organisations to pursue wholesale specialisation and the aggressive re-drawing of existing industry and social boundaries around value. I believe that we truly are on the verge of not just a new information industrial revolution that impacts IT companies but rather a whole new business revolution that will leverage the shift to utility platforms to change the basis on which businesses compete.  As the technology platform coheres, enterprises will increasingly be able to specialise, integrate and then focus their joint efforts around value to the end consumer rather than on maximising the utilisation of their own capabilities in pursuit of scale and efficiency (something that represents a ‘punctuated equilibrium’ in evolutionary terms – as I’ll continue to explore in part II of my recent post on this subject). As value webs can be quickly created, evolved and realigned to ‘pull’ everything into the experience required by the consumer, the old model of ‘pushing’ industrially or functionally siloed products and services from large and tightly integrated companies becomes insupportable.

So I would encourage you to read Peter’s post – to see some simple and concrete examples of design thinking in action – and then think about the ‘moonshot’ opportunity of a wholesale re-imagining of technology. With all of the myriad technology advances that we are seeing it has never been easier to create a simplified and reliable platform for the modelling, execution and monetisation of new kinds of business.

Finally, also take the time to really reflect on all of these opportunities in the context of your role and the ways in which you can truly add value in this new environment. If you are working in an enterprise then think hard about whether you really need to control the technology in order to realise business value for your organisation (hint – uh, no). On the other hand if you’re working in an IT company then think about how to hide the technology and enable IT groups to focus purely on business IP capture, management and distribution.

The Business Case for Private Cloud

19 Apr

Private Cloud Posts Should Come in Threes

Over the last year I have returned to the subject of ‘private cloud’ on a number of occasions.  Basically I’m trying to share my confusion as I still don’t really ‘get it’.

First of all I discussed some of the common concerns related to cloud that are used to justify a pursuit of ‘private cloud’ models.  In particular I tried to explain why most of these issues distract us from the actual opportunities; for me cloud has always been a driver to rethink the purpose and scope of your business.  In this context I tried to explain why – as a result – public and private clouds are not even vaguely equivalent.

More recently I mused on whether the whole idea of private clouds could lead to the extinction of many businesses who invest heavily in them.  Again, my interest was on whether losing the ability to cede most of your business capabilities to partners due to over-investment in large scale private infrastructures could be harmful.  Perhaps ‘cloud-in-a-box’ needs a government health warning like tobacco.

In this third post I’d like to consider the business case of private cloud to see whether the concept is sufficiently compelling to overcome my other objections.

A Reiteration of My View of Cloud

Before I start I just wanted to reiterate the way I think about the opportunities of cloud as I’m pretty fed up with conversations about infrastructure, virtualisation and ‘hybrid stuff’.  To be honest I think the increase in pointless dialogue at this level has depressed my blog muse and rendered me mute for a while – while I don’t think hypervisors have anything to do with cloud and don’t believe there’s any long term value in so called ‘cloud bursting’ of infrastructure (apparently a particularly exciting subject in my circle) I’m currently over-run by weight of numbers.

Essentially it’s easy to disappear down these technology rat holes but for me they all miss the fundamental point.  Cloud isn’t a technology disruption (although it is certainly disrupting the business models of technology companies) but ultimately a powerful business disruption.  The cloud enables – and will eventually force – powerful new business models and business architectures.

As a result cloud isn’t about technology or computing per se for me but rather about the way in which technology is changing the economics of working with others.  Cloud is the latest in a line of related technologies that have been driving down the transaction costs of doing business with 3rd parties.  To me cloud represents the integration, commoditisation and consumerisation of these technologies and a fundamental change in the economics of IT and the businesses that depend on it.  I discussed these issues a few years ago using the picture below.

[image]

Essentially as collaboration costs move closer and closer to zero so the shape of businesses will change to take advantage of better capabilities and lower costs.  Many of the business capabilities that organisations currently execute will be ceded to others given that doing so will significantly raise the quality and focus of their own capabilities.  At the same time the rest will be scaled massively as they take advantage of the ability to exist in a broader ecosystem.  Business model experimentation will become widespread as the costs of start-up (and failure) become tiny and tied to the value created.  Cloud is a key part of enabling these wider shifts by providing the business platforms required to specialise without losing scale and to serve many partners without sacrificing service standardisation.  While we are seeing the start of this process through offerings such as infrastructure-as-a-service and software-as-a-service, these are just the tip of the iceberg.  As a very prosaic example many businesses are now working hard to think about how they can extend their reach using business APIs; combine this with improving business architecture practices and the inherent multi-tenancy of the cloud and it is not difficult to imagine a future in which businesses first become a set of internal service providers and then go on to take advantage of the disaggregation opportunity.  In future, businesses will become more specialised, more disaggregated and more connected components within complex value webs.  Essentially every discrete step in a value stream could be fulfilled by a different specialised service provider, with no ‘single organisation’ owning a large percentage of the capabilities being coordinated (as they do today).
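To make the ‘business API’ point a little more tangible, here is a minimal sketch – with invented names throughout, and using Flask purely for illustration – of what it looks like when a company exposes one of its capabilities for others to compose into their own value webs:

```python
# A hypothetical sketch of exposing a single business capability as an API.
# The route, fields and port are invented; Flask is used only for brevity.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Imagine a logistics firm exposing its core competence - shipment
# scheduling - so that partners can 'pull' it into their own value webs.
@app.route("/v1/shipments", methods=["POST"])
def schedule_shipment():
    order = request.get_json()
    # A real implementation would invoke the firm's scheduling capability;
    # we return a canned confirmation to keep the sketch self-contained.
    return jsonify({
        "shipment_id": "hypothetical-123",
        "destination": order.get("destination"),
        "status": "scheduled",
    }), 201

if __name__ == "__main__":
    app.run(port=8080)
```

The interesting thing is not the dozen lines of code but the business decision they represent: which capability you choose to expose, to whom, and on what commercial terms.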

As a result of all of these forces my first statement is therefore always that ‘private cloud’ does not really exist; sharing some of the point technologies of early stage cloud platform providers (but at lower scale and without the rapid learning opportunities they have) is not the same as aggressively looking to leverage the fall in transaction costs and availability of new delivery models to radically optimise your business.  Owning your own IT is not really a lever in unlocking the value of a business service based ecosystem but rather represents wasteful expense when the economics of IT have shifted decisively from those based on ownership to those based on access.  IT platforms are now independent economy-of-scale based businesses and not something that needs to be built, managed and supported on a business-by-business basis with all of the waste, diversity, delay and cost that this entails.  Whilst I would never condemn those who have the opportunity to improve their existing estates to generate value I would not accept that investing in internal enhancement would ever truly give you the benefits of cloud.  For this reason I have always disliked the term ‘private cloud’.

In the light of this view of the opportunities of cloud, I would posit that business cases for private cloud could be regarded as lacking some sense even before we look at their merit.  Putting aside the business issues for a moment, however, let’s look at the case from the perspective of technology and how likely it is that you will be able to replicate the above benefits by internal implementation.

What Is a “Cloud”?

One of the confusing issues related to cloud is that it is a broad shift in the value proposition of IT and IT enabled services and not a single thing.  It is a complete realignment of the IT industry and by extension the shape of all industries that use it.  I have a deeper model I don’t want to get into here but essentially we could view cloud as a collection of different kinds of independent businesses, each with their own maturity models:

  • Platforms: Along the platform dimension we see increasing complexity and maturity, running from infrastructure-as-a-service through platform-as-a-service and process-platform-as-a-service to the kind of holistic service delivery platform I blogged about some time ago.  These are all increasingly mature platform value propositions based on technology commoditisation and economies of scale;
  • Services: Along the services dimension we see increasing complexity and maturity, running from ASP (single tenant applications in IaaS) through software-as-a-service and business-processes-as-a-service to complete business capabilities offered as a service.  While different services may have different economic models, from a cloud perspective they share the trait that they are essentially about codifying, capturing and delivering specialised IP as a multi-tenant cloud service; and
  • Consulting: Along the consulting dimension we see increasing complexity and maturity, running from IT integration and management through cloud application integration and management and business process integration and management to complex business value web integration and management.  These all exist in the same dimension as they are essentially relationship based services rather than asset based ones.

All of these are independent cloud business types that need to be run and optimised differently.  From a private cloud perspective, however, most people only think about the ‘platform’ case (i.e. only about technology) and think no further than the lowest level of maturity (i.e. IaaS) – even though consulting and integration is actually the most likely business type available for IT departments to transition to (something I alluded to here).  In fact it’s probably an exaggeration to say that people think about IaaS as most people don’t get beyond virtualisation technology.

Looking at services – which, surprisingly enough, are what businesses are actually interested in – we find probably the biggest of the many elephants in the room with respect to private cloud; if the cloud is about being able to specialise and leverage shared business services from others (whether applications, business process definitions or actual business capabilities) then they – by definition – execute somewhere beyond the walls of the existing organisation (i.e. at the service provider).  So how do these fit with private cloud?  Will you restrict your business to only ever running the old and traditional single-tenant applications you already have?  Will you build a private cloud that has a flavour of every single platform used or operated by specialised service providers?  Will you restrict your business to service providers who are “compatible” with your “platform” irrespective of the business suitability of the service?  Or do you expect every service provider to rewrite their services to run on your superior cloud but still charge you the same for a bespoke service as they charge for their public service?  Whichever one you pick it’s probably going to result in some pain and so you might want to think about it.

Again, for the sake of continuing the journey let’s ignore the issue of services – as it’s an aspect of the business ecosystem problem we’ve already decided we need to ignore to make progress – and concentrate where most people stop thinking.  Let’s have a look at cloud platforms.

Your New Cloud Platform

The first thing to realise is that public cloud platforms are large scale, integrated, automated, optimised and social offerings organised by value to wrap up complex hardware, networks, middleware, development tooling, software, security, provisioning, monetisation, reporting, catalogues, operations, staff, geographies and so on – and deliver them all as an apparently simple service.  I’ll say it again – cloud is not just some virtualisation software.  I don’t know why but I just don’t seem able to say that enough.  For some reason people just underestimate all this stuff – they only seem to think about the hypervisor and forget the rest of the complexity that actually takes a hypervisor and a thousand other components and turns them into a well-oiled, automated, highly reliable and cross functional service business operated by trained and motivated staff.

Looking at the companies that have really built and operated such platforms on the internet we can see that there are not a large number due to:

  1. The breadth of cross functional expertise required to package and operate a mass of technologies coherently as a cost-effective and integrated service;
  2. The scarcity of talent with the breadth of vision and understanding required to deliver such an holistic offering; and
  3. The prohibitive capital investment involved in doing so.

Equally importantly these issues all become increasingly pressing as the scope of the value delivered progresses up the platform maturity scale, beyond infrastructure towards the kind of platform required for the realisation and support of the complete multi-tenant business capabilities we described at the beginning.

Looking at the companies who are building  public cloud platforms it’s unsurprising that they are not enthusiastically embracing the nightmare of scaling down, repackaging, delivering and then offering support for many on-premise installations of their complex platforms across multiple underfunded IT organisations for no appreciable value.  Rather they are choosing to specialise on delivering these platforms as service offerings to fully optimise the economic model for both themselves and (ironically) their customers.

Wherefore Art Thou, Private Cloud?

Without the productised expertise of organisations who have delivered a cloud platform, however, who will build your ‘private cloud’?  How can they have the knowledge to do so if they haven’t actually implemented and operated all of the complex components as a unified service at high scale and low cost?  Without ‘productised platforms’ built from the ground up to operate with the levels of integration, automation and cost-effectiveness required by the public cloud, most ‘private cloud’ initiatives will just be harried, underfunded and incapable IT organisations trying to build bespoke virtualised infrastructures with old, disparate and disconnected products along with traditional consulting, systems integration and managed services support. Despite enthusiastic ‘cloud washing’ by traditional providers in these spaces such individual combinations of traditional products and practices are not cloud, will probably cost a lot of money to build and support and will likely never be finished before the IT department is marginalised by the business for still delivering uncompetitive services.

Trying to blindly build a ‘cloud’ from the ground up with traditional products, the small number of use cases visible internally and a lack of cross functional expertise and talent – probably with some consulting and systems integration thrown in for good measure to help you on your way – could be considered to sound a little like an expensive, open-ended and high-risk proposition with the potential to result in a white elephant.  And this is before you concede that it won’t be the only thing you’re doing at the time given that you also have a legacy estate to run and enhance.

Furthermore, go into most IT shops and check out how current most of their hardware and software is and how quickly they are innovating their platforms, processes and roles.  Ask yourself how much time, money and commitment a business invests in enabling its _internal IT department_ to pursue thought leadership, standards efforts and open source projects.  Even once the white elephant lands what’s the likelihood that it will keep pace with specialised cloud platform providers who are constantly improving their shared service as part of their value proposition?

For Whom (does) Your Cloud (set its) Tolls?

Once you have your private cloud budget who will you build it for?  As we discussed at the outset your business will be increasingly ceding business capabilities to specialised partners in order to concentrate on its own differentiating capabilities.  This disaggregation will likely occur along economic lines as I discussed in a previous post, as different business capabilities in your organisation will be looking for different things from their IT provision based on their underlying business model.  Some capabilities will need to be highly adaptable, some highly scalable, some highly secure and some highly cost effective.  While the diversity of the public cloud market will enable different business capabilities within an organisation to choose different platforms and services without sacrificing the benefits of scale, any private cloud will necessarily be conflicted by a wide diversity of needs and therefore probably not be optimal for any.  Most importantly every part of the organisation will probably end up paying for the gold-plated infrastructure required by a subset of the business, which is then forced onto everyone as the ‘standard’ for internal efficiency reasons.

You therefore have to ask yourself:

  1. Is it _really_ true that all of your organisation’s business capabilities _really_ need private hosting given their business model and assets?  I suspect not;
  2. How will you support all of the many individual service levels and costs required to match the economics of your business’s divergent capabilities? I suspect you can’t and will deliver a mostly inappropriate ‘one size fits all’ platform geared to the most demanding use cases; and
  3. How will you make your private infrastructure cost-effective once the majority of capabilities have been outsourced to partners?  The answer is that you probably won’t need to worry about it – I suspect you’ll be out of a job by then after driving the business to bypass your expensive IT provision and go directly to the cloud.

Have We Got Sign-off Yet?

So let’s recap:

  1. Private cloud misses the point of the most important disruption related to cloud – that is the opportunity to specialise and participate more fully in valuable new economic ecosystems;
  2. Private cloud ignores the fundamental fact that cloud is a ‘service-oriented’ phenomenon – that is the benefits are gained by consuming things, uh, as a service;
  3. Private cloud implementation represents a distraction from that part of the new IT value chain where IT departments have the most value to add – that is as business-savvy consultants, integrators and managers of services on behalf of their business.

To be fair, however, I will take all of that value destruction off the table given that most people don’t seem to have got there yet.

So let’s recap again just on the platform bit.  It’s certainly the case that internal initiatives targeted at building a ‘private cloud’ are embarking on a hugely complex and multi-disciplinary bespoke platform build wholly unrelated to the core business of the organisation.  Furthermore, given that it is an increasing imperative that any business platform supports the secure exposure of an organisation’s business capabilities to the internet, they must do this in new ways that are highly secure, standards based, multi-tenant and elastic.  In the context of the above discussion, it could perhaps be suggested that many organisations are therefore attempting to build bespoke ‘clouds’:

  1. Without proven and packaged expertise;
  2. Without the budget focus that public cloud companies need merely to stay in business;
  3. Often lacking both the necessary skills and the capability to recruit them;
  4. Under the constant distraction of wider day to day development and operational demands;
  5. Without support from their business for the activities required to support ongoing innovation and development;
  6. Without a clear strategy for providing multiple levels of service and cost that are aligned to the different business models in play within the company.

In addition whatever you build will be bespoke to you in many technological, operational and business ways as you pick best of breed ‘bits’, integrate them together using your organisation’s existing standards and create operational procedures that fit into the way your IT organisation works today (as you have to integrate the ‘new ops’ with the ‘old ops’ to be ‘efficient’).  As a result good luck with ever upgrading the whole thing given its patchwork nature and the ‘technical differentiation’ you’ve proudly built in order to realise a worse service than you could have had from a specialised platform provider with no time or cost commitment.

Oh and the costs to operate whatever eventually comes out the other end of the adventure – according to Microsoft at least – could potentially be anywhere between 10 and 80 times higher than those you could get externally right now (and that’s on the tenuous assumption that you get it right first time over the next few years and realise the maximum achievable internal savings – as you usually do no doubt).  To rephrase this we could say that it’s a plan to delay already available benefits for at least three years, possibly for longer if you mess up the first attempt.
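Just to illustrate the scale of that claim – and these numbers are entirely invented for the purpose of the example, with only the 10–80x range taken from the analysis above – consider some back-of-an-envelope arithmetic:

```python
# Back-of-an-envelope arithmetic for the argument above. All figures are
# invented assumptions; only the 10-80x range comes from the cited analysis.
public_monthly_cost = 100_000  # assumed monthly cost of equivalent public provision
cost_multiplier = 10           # the *low* end of the quoted 10-80x range
build_years = 3                # assumed time before the private build delivers anything

private_monthly_cost = public_monthly_cost * cost_multiplier

# If your current estate already runs at roughly private-cloud cost levels,
# every month spent building is a month of foregone public cloud savings.
foregone_saving = (private_monthly_cost - public_monthly_cost) * 12 * build_years
print(f"Savings foregone during the build: ${foregone_saving:,}")  # $32,400,000
```

Even at the friendly end of the range the delay alone dwarfs most ‘private cloud’ business cases before a single server is racked.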

I may be in the minority but I’m _still_ not convinced by the business case.

So What Should I Get Sign-off For?

My recommendation would be to just stop already.

And then consider that you are probably not a platform company but rather a consultant and integrator of services that helps your business be better.

So, my advice would be to:

  1. Stop (please) thinking (or at least talking) about hypervisors, virtual machines, ‘hybrid clouds’ and ‘cloud bursting’ and realise that there is inherently no business value in infrastructure in and of itself.  Think of IaaS as a tax on delivering value outcomes and try not to let it distract you as people look to make it more complex for no reason (public/private/hybrid/cross hypervisor/VM management/cloud bursting/etc).  It generates so much mental effort for so little business value;
  2. Optimise what you already have in house with whatever traditional technologies you think will help – including virtualisation – if there is a solid _short return_ business case for it but do not brand this as ‘private cloud’ and use it to attempt to fend off the public cloud;
  3. Model all of your business capabilities and understand the information they manage and the apps that help manage it.  Classify these business capabilities by some appropriate criteria such as criticality, data sensitivity, connectedness etc.  Effectively use Business Architecture to study the structure and characteristics of your business and its capabilities;
  4. Develop a staged roadmap to re-procure (via SaaS), redevelop (on PaaS) or redeploy (to IaaS) 80% of apps within the public cloud.  Do this based on the security and risk characteristics of each capability (or even better replace entire business capabilities with external services provided by specialised partners) – a toy sketch of this kind of classification follows after this list; and
  5. Pressure cloud providers to address any lingering issues during this period to pave the way for the remaining 20% (with more sensitive characteristics) in a few years.
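For what it’s worth, here is that toy sketch of steps 3 and 4 – the capabilities, criteria and thresholds are all invented for illustration, and any real classification would come out of a proper Business Architecture exercise rather than a dozen lines of code:

```python
# An illustrative sketch of classifying business capabilities and mapping
# each to a staged cloud roadmap. Criteria and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Capability:
    name: str
    differentiating: bool  # is this real business IP, or commodity?
    data_sensitivity: int  # 1 (public) .. 5 (highly sensitive)
    criticality: int       # 1 (peripheral) .. 5 (business critical)

def migration_target(c: Capability) -> str:
    """Map one capability to a roadmap stage."""
    if c.data_sensitivity >= 4:
        return "retain in house for now; revisit as the public cloud matures (the ~20%)"
    if not c.differentiating:
        return "re-procure as SaaS (or cede the whole capability to a partner)"
    if c.criticality >= 4:
        return "redevelop on PaaS"
    return "redeploy to IaaS"

portfolio = [
    Capability("payroll", differentiating=False, data_sensitivity=3, criticality=2),
    Capability("pricing engine", differentiating=True, data_sensitivity=2, criticality=5),
    Capability("customer records", differentiating=False, data_sensitivity=5, criticality=4),
]

for cap in portfolio:
    print(f"{cap.name}: {migration_target(cap)}")
```

The output matters far less than the discipline: every capability gets an explicit, criteria-based destination rather than defaulting into a ‘one size fits all’ internal platform.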

Once you’ve arrived at 5) it may even be that a viable ‘private cloud’ model has emerged based on small scale and local deployments of ‘shrink wrapped boxes’ managed remotely by the cloud provider at some more reasonable level above infrastructure.  Even if this turns out to be the case at least you won’t have spent a fortune creating an unsupportable white elephant scaled to support the 80% of IT and business that has already left the building.

Whatever you do, though, try to get people to stop telling me that cloud is about infrastructure (and in particular your choice of hypervisor).  I’d be genuinely grateful.

Will Private Cloud Fail?

28 Jan

A recent discussion on ebizq about the success or failure of private clouds was sparked by Forrester analyst James Staten’s prediction late last year that ‘You will build a private cloud, and it will fail’.  In reality James himself was not suggesting that the concept of ‘private cloud’ would be a failure, only that an enterprise’s first attempt to build one would be – for various technical or operational reasons – and that learning from these failures would be a key milestone in preparing for eventual ‘success’.  Within the actual ebizq discussion there were a lot of comments about the open ended nature of the prediction (i.e. what exactly will fail) and the differing aims of different organisations in pursuing private infrastructures (and therefore the myriad ways in which you could judge such implementations to be a ‘success’ or a ‘failure’ from detailed business or technology outcome perspectives).

I differ in the way I think about this issue, however, as I’m less interested in whether individual elements of a ‘private cloud’ implementation could be considered to be successful or to have failed, but rather more interested in the broader question of whether the whole concept will fail at a macro level for cultural and economic reasons.

First of all I would posit two main thoughts:

1) It feels to me as if any sensible notion of ‘private clouds’ cannot be a realistic proposition until we have mature, broad capability and large scale public clouds that the operating organisations are willing to  ‘productise’ for private deployment; and

2) By the time we get to this point I wonder whether anyone will want one any more.

To address the first point: without ‘productised platforms’ hardened through the practices of successful public providers most ‘private cloud’ initiatives will just be harried, underfunded and incapable IT organisations trying to build bespoke virtualised infrastructures with old, disparate and disconnected products along with traditional consulting, systems integration and managed services support. Despite enthusiastic ‘cloud washing’ by traditional providers in these spaces such individual combinations of traditional products and practices are not cloud, will probably cost a lot of money to build and support and will likely never be finished before the IT department is marginalised by the business for still delivering uncompetitive services.

To the second point: given that the economics (unsurprisingly) appear to overwhelmingly favour public clouds, that any lingering security issues will be solved as part of public cloud maturation and – most critically – that cloud ultimately provides opportunities for business specialisation rather than just technology improvements (i.e. letting go of 80% of non differentiating business capabilities and sourcing them from specialised partners), I wonder whether there will be any call for literally ‘private clouds’ by the time the industry is really ready to deliver them. Furthermore public clouds need not be literally ‘public’ – as in anyone can see everything – but will likely allow the creation of ‘virtual private platforms’ which allow organisations to run their own differentiating services on a shared platform whilst maintaining complete logical separation (so I’m guessing this is what James calls ‘hosted private clouds’ – although that description has a slightly tainted feeling of traditional services to me).

More broadly I wonder whether we will see a lot of wasted money spent for negative return here. Many ‘private cloud’ initiatives will be scaled for a static business (i.e. as they operate now) rather than for a target business (i.e. one that takes account of the wider business disruptions and opportunities brought by the cloud).  In this latter context as organisations take the opportunities to specialise and integrate business capabilities from partners they will require substantially less IT given that it will be part of the service provided and thus ‘hidden’.  Imagining a ‘target’ business would therefore lead us to speculate that such businesses will no longer need systems that previously supported capabilities they have ceased to execute. One possible scenario could therefore be that ‘private clouds’ actually make businesses uncompetitive in the medium to long term by becoming an expensive millstone that provides none of the benefits of true cloud whilst weighing down the leaner business that cloud enables with costs it cannot bear. In extreme cases one could even imagine ‘private clouds’ as the ‘new legacy’, creating a cost base that drives companies out of business as their competitors or new entrants transform the competitive landscape. In that scenario it’s feasible that not only would ‘private clouds’ fail as a concept but they could also drag down the businesses that invest heavily in them(1).

Whilst going out of business completely may be an extreme – and unlikely – end of a spectrum of possible scenarios, the basic issues about cost, distraction and future competitiveness – set against a backdrop of a declining need for IT ownership – stand. I therefore believe that people need to think very, very carefully before deciding that the often short- to medium-term (and ultimately solvable) risks of ‘public’ cloud for a subset of their most critical systems are outweighed by the immense long term risks, costs and commitment of building your own private infrastructure. This is particularly the case given that not all systems within an enterprise are of equal sensitivity and we therefore do not need to make an inappropriately early and extreme decision that everything must be privately hosted.  Even more subtly, different business capabilities in your organisation will be looking for different things from their IT provision based on their underlying business model – some will need to be highly adaptable, some highly scalable, some highly secure and some highly cost effective.  Whilst the diversity of the public cloud market will enable different business capabilities to choose different platforms and services without sacrificing the traditional scale benefits of internal standardisation, any private cloud will necessarily operate with a wide diversity of needs and therefore probably not be optimal for any.

In the light of these issues, there are more than enough – probably higher benefit and lower risk – initiatives available now in incrementally optimising your existing IT estate whilst simultaneously codifying the business capabilities required by your organisation and the optimum systems support for their delivery, and then replacing or moving the 80% of non-critical applications to the public cloud in a staged manner (or better still directly sourcing a business capability from a partner that removes the need for IT). In parallel we have time to wait and see how the public environment matures – perhaps towards ‘virtual private clouds’ or ‘private cloud appliances’ – before making final decisions about future IT provision for the more sensitive assets we retain in house for now (using existing provision). Even if we end up never moving this 20% of critical assets to a mature and secure ‘public’ cloud they can either a) remain on existing platforms given the much reduced scope of our internal infrastructures and the spare capacity that results or b) be moved to a small scale, packaged and connected appliance from a cloud service provider.

Throwing everything behind building a ‘private cloud’ at this point therefore feels risky given the total lack of real, optimised and productised cloud platforms, uncertainty about how much IT a business will actually require in future and the distraction it would represent from harvesting less risky and surer public cloud benefits for less critical systems (in ways that also recognise the diversity of business models to be supported).

Whilst it’s easy, therefore, to feel that analysts often use broad brush judgments or seek publicity with sensationalist tag lines I feel in this instance a broad brush judgment of the likely success or failure of the ‘private cloud’ concept would actually be justified (despite the fact that I am using a different interpretation of failure to the ‘fail rapidly and try again’ one I understand James to have meant). Given the macro level impacts of cloud (i.e. a complete and disruptive redefinition of the value proposition of IT) and the fact that ‘private cloud’ initiatives fail to recognise this redefinition (by assuming a marginally improved propagation of the status quo), I agree with the idea that anyone who attempts to build their own ‘private cloud’ now will be judged to have ‘failed’ in any future retrospective.

When we step away from detailed issues (where we may indeed see some comparatively marginal improvements over current provision) and look at the macro level picture, IT organisations who guide their business to ‘private cloud’ will simultaneously load it with expensive, uncompetitive and unnecessary assets that still need to be managed for no benefit whilst also failing to guide it towards the more transformational benefits of specialisation and flexible provision. As a result whilst we cannot provide ‘hard facts’ or ‘specific measures’ that strictly define which ‘elements’ of an individual ‘private cloud’ initiative will be judged to have ‘succeeded’ and which will have ‘failed’, looking for this justification is missing the broader point and failing to see the wood for the trees; the broader picture appears to suggest that when we look back on the overall holistic impacts of ‘private cloud’ efforts it will be apparent that they have failed to deliver the transformational benefits on offer by failing to appreciate the macro level trends towards IT and business service access in place of ownership. Such a failure to embrace the seismic change in IT value proposition – in order to concentrate instead on optimising a fading model of ‘ownership’ – may indeed be judged retrospectively as ‘failure’ by businesses, consumers and the market.

Whilst I agree with many of James’s messages about what it actually means to deliver a cloud – especially the fact that they are a complex, connected ‘how’ rather than a simple ‘thing’ and that budding ‘private cloud’ implementers fail to understand the true breadth of complexity and cross functional concerns – I believe I may part company with James’s prediction in the detail.  If I understand correctly James specifically advocates ‘trying and failing’ merely as an enabler to have another go with more knowledge; given the complexity involved in trying to build your own ‘cloud’ (particularly beyond low value infrastructure), the number of failures you’d have to incur to build a complete platform as you chase more value up the stack and the ultimately pointless nature of the task (at least taking the scenarios outlined above) I would prefer to ask why we would bother with ‘private cloud’ at this point at all? It would seem a slightly insane gamble versus taking the concrete benefits available from the public cloud (in a way which is consistent with your risk profile) whilst allowing cloud companies to ‘fail and try again’ on your behalf until they have either created ‘private cloud appliances’ for you to deploy locally or obviated the need completely through the more economically attractive maturation of ‘virtual private platforms’.

For further reading, I went into more detail on why I’m not sure private clouds make sense at this point in time here:

http://www.ebizq.net/blogs/ebizq_forum/2010/11/does-the-private-cloud-lack-business-sense.php#comment-12851

and why I’m not sure they make sense in general here:

https://itblagger.wordpress.com/2010/07/14/private-clouds-surge-for-wrong-reasons/

(1) Of course other possible scenarios people mention are:

  1. That the business capabilities remaining in house expand to fill the capacity.  In this scenario these business capabilities would probably still pay a premium versus what they could get externally and thus will still be uncompetitive and – more importantly – saddled with a serious distraction for no benefit.  Furthermore this assumes that the remaining business capabilities share a common business model and that through serendipity the ‘private cloud’ was built to optimise this model in spite of the original muddled requirements to optimise other business models in parallel; and
  2. Companies who over provision in order to build a ‘private cloud’ will be able to lease all of their now spare capacity to others in some kind of ‘electricity model’.  Whilst technically one would have some issues with this, more importantly such an operation seems a long way away from the core business of nearly every organisation and seems a slightly desperate ‘possible ancillary benefit’ to cling to as a justification to invest wildly now.  This is especially the case when such an ‘ancillary benefit’ may prevent greater direct benefits being gained without the hassle (through judicious use of the public cloud both now and in the future).

Cloud vs Mainframes

19 Oct

David Linthicum highlights some interesting research about mainframes and their continuation in a cloud era.

I think David is right that mainframes may be one of the last internal components to be switched off and that in 5 years most of them will still be around.  I also think, however, that the shift to cloud models may have a better chance of achieving the eventual decommissioning of mainframes than any previous technological advance.  Hear me out for a second.

All previous new generations of technology looking to supplant the mainframe have essentially been slightly better ways of doing the same thing.  Whilst we’ve had massive improvements in the cost and productivity of hardware, middleware and development languages essentially we’ve continued to be stuck with purchase and ownership of costly and complex IT assets. As a result whilst most new development has moved to other platforms the case for shifting away from the mainframe has never seriously held water. Redevelopment would generate huge expense and risk yet result in no fundamental business shift. Essentially you still owned and paid for a load of technology ‘stuff’ and the people to support it even if you successfully navigated the huge organisational and technical challenges required to move ‘that stuff’ to ‘this stuff’. In addition the costs already sunk into the assets and the technology cost barriers to other people entering a market (due to the capital required for large scale IT ownership) also added to the general inertia.

At its heart cloud is not a shift to a new technology but – for once – genuinely a shift to a new paradigm. It means capabilities are packaged and ready to be accessed on demand.  You no longer need to make big investments in new hardware, software and skills before you can even get started. In addition suddenly everyone has access to the best IT and so your competitors (and new entrants) can immediately start building better capabilities than you without the traditional technology-based barriers of entry. This suggests four important considerations that might eventually lead to the end of the mainframe:

  1. Should an organisation decide to develop its way off the mainframe they can start immediately without the traditional need to incur the huge expense and risk of buying hardware, software, development and systems integration capability before they can even start to redevelop code.  This removes a lot of the cost-based risks and allows a more incremental approach;
  2. Many of the applications implemented on mainframes will increasingly be in competition with external SaaS applications that offer broadly equivalent functionality.  In this context moving away from the mainframe is even less costly and risky (whilst still a serious undertaking) since we do not need to even redevelop the functionality required;
  3. The nature of the work that mainframe applications were set up to support (i.e. internal transaction processing across a tight internal value chain) is changing rapidly as we move towards much more collaborative and social working styles that extend across organisational boundaries.  The changing nature of work is likely to eat away further at the tightly integrated functionality at the heart of most legacy applications and leave fewer core transactional components running on the mainframe; and
  4. Most disruptive of all, as organisations increasingly take advantage of falling collaboration costs to outsource whole business capabilities to specialised partners, so much of the functionality on the mainframe (and other systems) becomes redundant since that work is no longer performed in house.

I think that the four threads outlined here could lead to a serious decline in mainframe usage over the next ten years.

But then again they are like terminators – perhaps they will simply be acquired gradually by managed service providers offering to squeeze the cost of maintenance, morph into something else and survive in a low grade capacity for some time.


Reporting of “Cloud” Failures

12 Oct

I’ve been reading an article from Michael Krigsman today related to Virgin Blue’s “cloud” failure in Australia along with a response from Bob Warfield.  These articles raised the question in passing of whether such offerings can really be called cloud offerings and also brought back the whole issue of ‘private clouds’ and their potentially improper use as a source of FUD and protectionism.

Navitaire essentially seem to have been hosting an instance of their single-tenancy system in what appears to be positioned as a ‘private cloud’.  As other people have pointed out, if this was a true multi-tenant cloud offering then everyone would have been affected and not just a single customer.  Presumably then – as a private cloud offering – this is more secure, more reliable, has service levels you can bet the business on and won’t go down.  Although looking at these reports it seems like it does, sometimes.

Now I have no doubt that Navitaire are a competent, professional and committed organisation who are proud of the service they offer.  As a result I’m not really holding them up particularly as an example of bad operational practice but rather to highlight widespread current practices of repositioning ‘legacy’ offerings as ‘private cloud’ and the way in which this affects customers and the reporting of failures.

Many providers whose software or platform is not multi-tenant are aggressively positioning their offering as ‘private cloud’ both as an attempt to maintain revenues for their legacy systems and a slightly cynical way to press on companies’ worries about sharing.  Such providers are usually traditional software or managed service providers who have no multi-tenant expertise or assets; as a result they try to brand things cloud whilst really just delivering old software in an old hosted model.  Whilst there is still potentially a viable market in this space – i.e. moving single-tenant legacy applications from on-premise to off-premise as a way of reducing the costs of what you already have and increasing focus on core business – such offerings are really just managed services and not cloud offerings.  The ‘private’ positioning is a sweet spot for these people, however, as it simultaneously allows them to avoid the significant investment required to recreate their offerings as true cloud services, prolongs their existing business models and plays on customers’ uncertainty about security and other issues.  Whilst I understand the need to protect revenue at companies involved in such ‘cloud washing’ – and thus would stop short of calling these practices cynical – it illustrates that customers do need to be aware of the underlying architecture of offerings (as Phil Wainwright correctly argued).  In reality most current ‘private cloud’ offerings are not going to deliver the levels of reliability, configurability and scale that customers associate with the promise of the cloud.  And that’s before we even get to the more business transformational issues of connectivity and specialisation.

Looking at these kinds of offerings we can see why single-tenant software and private infrastructure provided separately for each customer (or indeed internally) is more likely to suffer a large scale failure of the kind experienced by Virgin Blue.  Essentially developing truly resilient and failure optimised solutions for the cloud needs to address every level of the offering stack and realistically requires a complete re-write of software, deep integration with the underlying infrastructure and expert operations who understand the whole service intimately.  This is obviously cost prohibitive without the ability to share a solution across multiple customers (remember that cloud != infrastructure and that you must design an integrated infrastructure, software and operations platform that inherently understands the structure of systems and deals with failures across all levels in an intelligent way).  Furthermore even if cost was not a consideration, without re-development the individual parts that make up such ‘private’ solutions (i.e. infrastructure, software and operations) were not optimised from the beginning to operate seamlessly together in a cloud environment and can be difficult to keep aligned and manage as a whole.  As a result it’s really just putting lipstick on a pig and making the best of an architecture that combines components that were never meant to be consumed in this way.

However much positioning companies try to do it’s plain that you can’t get away from the fact that ultimately multi-tenancy at every level of a completely integrated technology stack will be a pre-requisite for operating reliable, scalable, configurable and cost effective cloud solutions.  As a result – and in defiance of the claims – the lack of multi-tenant architectures at the heart of most offerings currently positioned as ‘private cloud’ (both hardware and software related, internal and external) probably makes them less secure, less reliable, less cost effective and less configurable (i.e. able to meet a business need) than their ‘public’ (i.e. new) counterparts.
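For anyone who hasn’t seen what multi-tenancy actually looks like, here is a deliberately tiny sketch of the principle at just one level of the stack – the data layer – using SQLite and invented names; a real cloud platform applies the same tenant-scoping discipline at every layer from infrastructure through software to operations:

```python
# A minimal sketch of multi-tenancy at the data layer only. Table and
# tenant names are invented; SQLite is used purely for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE bookings (
        tenant_id   TEXT NOT NULL,  -- every row belongs to exactly one tenant
        booking_ref TEXT NOT NULL,
        passenger   TEXT NOT NULL
    )
""")
conn.execute("INSERT INTO bookings VALUES ('airline_a', 'A001', 'Smith')")
conn.execute("INSERT INTO bookings VALUES ('airline_b', 'B001', 'Jones')")

def bookings_for(tenant_id: str):
    # Every query is scoped by tenant: all customers share one resilient,
    # constantly improved platform yet remain logically separated.
    return conn.execute(
        "SELECT booking_ref, passenger FROM bookings WHERE tenant_id = ?",
        (tenant_id,),
    ).fetchall()

print(bookings_for("airline_a"))  # -> [('A001', 'Smith')]
```

A single-tenant ‘private cloud’ offering has to recreate all of the resilience engineering around each individual deployment – exactly the cost profile that shared, tenant-scoped platforms were designed to escape.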

In defiance of the current mass of positioning and marketing to the contrary, then, it could be suggested that companies like Virgin Blue would be less likely to suffer catastrophic failures in future if they seek out real, multi-tenant cloud services that share resources and thus have far greater resilience than those that have to accommodate the cost profiles of serving individual tenants using repainted legacy technologies.  This whole episode thus appears to be a failure of the notion that you can rebrand managed services as ‘private cloud’ rather than a failure of an actual cloud service.

Most ironically of all, the headlines incorrectly proclaiming such episodes as failures of cloud systems will fuel fear within many organisations and make them even more likely to fall victim to the FUD from disingenuous vendors and IT departments around ‘private cloud’.  In reality failures such as the case discussed may just prove that ‘private cloud’ offerings create exposure to far greater risk than adopting real cloud services due to the incompatibility of architecting for high scale and failure tolerance across a complete stack at the same time as architecting for the cost constraints of a single tenant.

Private Clouds “Surge” for Wrong Reasons?

14 Jul

I read a post by David Linthicum today on an apparent surge in demand for Private Clouds.  This was in turn spurred by thoughts from Steve Rosenbush on increasing demand for Private Cloud infrastructures.

To me this whole debate is slightly tragic as I believe that most people are framing the wrong issues when considering the public vs private cloud debate (and frankly for me it is a ridiculous debate as in my mind ‘the cloud’ can only exist ‘out there, somewhere’ and thus be shared; to me a ‘private’ cloud can only be a logically separate area of a shared infrastructure and not an organisation specific infrastructure which merely shares some of the technologies and approaches – which is business as usual and not a cloud.  For that reason when I talk about public clouds I also include such logically private clouds running on shared infrastructures).  As David points out there are a whole host of reasons that people push back against the use of cloud infrastructures, mostly to do with retaining control in one way or another.  In essence there is a list of IT issues that people raise as absolute blockers requiring private infrastructure to solve – particularly control, service levels and security – whilst they ignore the business benefits of specialisation, flexibility and choice.  Often “solving” the IT issues and propagating a model of ownership and mediocrity in IT delivery when it’s not really necessary merely denies the business the opportunity to solve their issues and transformationally improve their operations (and surely optimising the business is more important than undermining it in order to optimise the IT, right?).  That’s why for me the discussion should be about the business opportunities presented by the cloud and not simply a childish public vs private debate at the – pretty worthless – technology level.

Let’s have a look at a couple of issues:

  1. The degree of truth in the control, service and security concerns most often cited about public cloud adoption and whether they represent serious blockers to progress;
  2. Whether public and private clouds are logically equivalent or completely different.

IT issues and the Major Fallacies

Control

Everyone wants to be in control.  I do.  I want to feel as if I’m moving towards my goals, doing a good job – on top of things.  In order to be on top of things, however, there are certain things I need to take for granted.  I don’t grow my own food, I don’t run my own bank, I don’t make my own clothes.  In order for me to concentrate on my purpose in life and deliver the higher level services that I provide to my customers there is a whole set of things that simply need to be available to me at a cost that fits my parameters.  And to avoid being overly facetious I’ll also extend this into the IT services that I use to do my job – I don’t build my own blogging software or create my own email application but rather consume all of these as services over the web from people like WordPress.com and Google.

By not taking personal responsibility for the design, manufacture and delivery of these items, however (i.e. by not maintaining ‘control’ of how they are delivered to me), I gain the more useful ability to be in control of which services I consume to give me the greatest chance of delivering the things that are important to me (mostly, lol).  In essence I would have little chance of sitting here writing about cloud computing if I also had to cater to all my basic needs (from both a personal as well as IT perspective).  I don’t want to dive off into economics but simplistically I’m taking advantage of the transformational improvements that come from division of labour and specialisation – by relying on products and services from other people who can produce them better and at lower cost I can concentrate on the things that add value for me.

Now let’s come back to the issue of private infrastructure.  Let’s be harsh.  Businesses simply need IT that performs some useful service.  In an ideal world they would simply pay a small amount for the applications they need, as they need them.  For 80% of IT there is absolutely no purpose in owning it – it provides no differentiation and is merely an infrastructural capability that is required to get on with value-adding work (like my blog software).  In a totally optimised world businesses wouldn’t even use software for many of their activities but rather consume business services offered by partners that make IT irrelevant. 

So far then we can argue that for 80% of IT we don’t actually need to own it (i.e. we don’t need to physically control how it is delivered) as long as we have access to it.  For this category we could easily consume software as a service from the “public” cloud and doing so gives us far greater choice, flexibility and agility.

In order to deliver some of the applications and services that a business requires to deliver its own specialised and differentiated capabilities, however, they still need to create some bespoke software.  To do this they need a development platform.  We can therefore argue that the lowest level of computing required by a business in future is a Platform as a Service (PaaS) capability; businesses never need to be aware of the underlying hardware as it has – quite literally – no value.  Even in terms of the required PaaS capability the business doesn’t have any interest in the way in which it supports software development as long as it enables them to deliver the required solutions quickly, cheaply and with the right quality.  As a result the internals of the PaaS (in terms of development tooling, middleware and process support) have no intrinsic value to a business beyond the quality of outcome delivered by the whole.  In this context we also do not care about control since as long as we get the outcomes we require (i.e. rapid, cost effective and reliable applications delivery and operation) we do not care about the internals of the platform (i.e. we don’t need to have any control over how it is internally designed, the technology choices to realise the design or how it is operated).  More broadly a business can leverage the economies of scale provided by PaaS providers – plus interoperability standards – to use multiple platforms for different purposes, increasing the ‘fitness’ of their overall IT landscape without the traditional penalties of heterogeneity (since traditionally they would be ‘bound’ to one platform by the inability of their internal IT department to cost-effectively support more than one technology).

Thinking more deeply about control in the context of this discussion we can see that for the majority of IT required by an organisation concentrating on access gives greater control than ownership due to increased choice, flexibility and agility (and the ability to leverage economies of scale through sharing).  In this sense the appropriate meaning of ‘control’ is that businesses have flexibility in choosing the IT services that best optimise their individual business capabilities and not that the IT department has ‘control’ of the way in which these services are built and delivered.  I don’t need to control how my clothes manufacturer puts my t-shirt together but I do want to control which t-shirts I wear.  Control in the new economy is empowerment of businesses to choose the most appropriate services and not of the IT department to play with technology and specify how they should be built.  Allowing IT departments to maintain control – and meddle in the way in which services are delivered – actually destroys value by creating a burden of ownership for absolutely zero value to the business.  As a result giving ‘control’ to the IT department results in the destruction of an equal and opposite amount of ‘control’ in the business and is something to be feared rather than embraced.

So the need to maintain control – in the way in which many IT groups are positioning it – is the first major and dangerous fallacy. 

Service levels

It is currently pretty difficult to get a guaranteed service level from cloud service providers.  On the other hand, most providers consistently deliver availability in the 99%-plus range, so the actual service levels are pretty good.  The lack of a piece of paper with this actual, experienced service level written down as a guarantee, however, is currently perceived as a major blocker to adoption.  Essentially IT departments use it as a way of demonstrating the superiority of their services (“look, our service level says five nines – guaranteed!”) whilst the stock they put in these guarantees creates FUD in the minds of business owners who want to avoid major risks.

So let’s lay this out.  People compare the current lack of service level guarantees from cloud service providers with the ability to agree ‘cast-iron’ service levels with internal IT departments.  Every project I’ve ever been involved in has had a set of service levels but very few ever get delivered in practice.  Sometimes they end up being twisted into worthless measures for simplicity of delivery – like whether a machine is running irrespective of whether the business service it supports is available – and sometimes they are just unachievable given the level of investment and resources available to internal IT departments (whose function, after all, is merely that of a barely-tolerated but traditionally necessary drain on the core purpose of the business). 

So to find out whether I’m right or not – and whether service level guarantees have any meaning – I will wait until every IT department in the world puts their actual achieved service levels up on the web like, for instance, Salesforce.  I’m keen to compare practice rather than promises.  Irrespective of guarantees my suspicion is that most organisations’ actual service levels are woeful in comparison to those delivered by cloud providers, but I’m willing to be convinced.  Despite the illusion of SLA guarantees and enforcement the majority of internal IT departments (and the managed service providers who take over all of those legacy systems, for that matter) get nowhere near the actual service levels of cloud providers irrespective of what internal documents might say.  It is a false comfort.  Businesses therefore need to wise up and consider real data and actual risks – in conjunction with the transformational business benefits that can be gained by offloading capabilities and specialising – rather than let such meaningless nonsense take them down the old path to ownership; in doing so they are potentially sacrificing a move to cloud services and therefore their best chance of transforming their relationship with their IT and optimising their business.  This is essentially the ‘promise’ of buying into updated private infrastructures (aka ‘private cloud’).
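To make the practice-versus-promises comparison concrete, here is a minimal sketch – all outage figures are invented for illustration – of the calculation I would like to see published: the availability actually achieved over a year, set against a promised ‘five nines’ guarantee.

```python
# Toy comparison of promised vs achieved service levels (illustrative numbers only).

PROMISED_SLA = 0.99999  # "five nines", as written into the internal guarantee

# Hypothetical outage log for the same system over one year: (cause, minutes down)
outages = [
    ("failed storage array",       240),
    ("botched middleware patch",   540),
    ("data centre power incident",  90),
]

minutes_per_year = 365 * 24 * 60
downtime = sum(minutes for _, minutes in outages)
achieved = 1 - downtime / minutes_per_year
allowed = (1 - PROMISED_SLA) * minutes_per_year

print(f"promised: {PROMISED_SLA:.5%} (~{allowed:.1f} minutes of downtime allowed per year)")
print(f"achieved: {achieved:.5%} ({downtime} minutes actually lost)")
# promised: 99.99900% (~5.3 minutes of downtime allowed per year)
# achieved: 99.83447% (870 minutes actually lost)
```

The gap between the two numbers – a guarantee allowing about five minutes of downtime against hundreds of minutes actually lost – is exactly the kind of evidence that a written SLA obscures.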

A lot of it comes down to specialisation again and the incentives for delivering high service levels.  Think about it – a cloud provider (literally) lives and dies by whether the services they offer are up; without them they make no money, their stock falls and customers move to other providers.  That’s some incentive to maintain excellence.  Internally – well, what you gonna do?  You own the systems and all of the people so are you really going to penalise yourself?  Realistically you just grit your teeth and live with the mediocrity even though it is driving rampant sub-optimisation of your business.  Traditionally there has been no other option and IT has been a long process of trying to have less bad capability than your competitors, to be able to stagger forward slightly faster or spend a few pence less.  Even outsourcing your IT doesn’t address this since whilst you have the fleeting pleasure of kicking someone else at the end of the day it’s still your IT and you’ve got nowhere to go from there.  Cloud services provide you with another option, however, one which takes advantage of the fact that other people are specialising on providing the services and that they will live and die by their quality.  Whilst we might not get service levels – at this point in their evolution at least – we do get transparency of historical performance and actual excellence; stepping back it is critical to realise that deeds are more important than words, particularly in the new reputation-driven economy. 

So the perceived need for service levels as a justification for private infrastructures is the second major and dangerous fallacy.  Businesses may well get better service levels from cloud providers than they would internally and any suggestion to the contrary will need to be backed up by thorough historical analysis of the actual service levels experienced for the equivalent capability.  Simply stating that you get a guarantee is no longer acceptable. 

Security

It’s worth stating from the beginning that there is nothing inherently less secure about cloud infrastructures.  Let’s just get that out there to begin with.  Also, to get infrastructure as a service out of the way – given that we’re taking the position in this post that PaaS is the first level of actual value to a business – we can say that it’s just infrastructure; your data and applications will be no more or less secure than your own procedures make them, but the data centre is likely to be at least as secure as your own and probably much more so due to the level of capability required by a true service provider.

So starting from ground zero with things that actually deliver something (i.e. PaaS and SaaS), a cloud provider can build a service that uses any of the technologies that you use in your organisation to secure your applications and data – only they’ll have more use cases and hence will consider more threats than you will.  And that’s just the start.  From that point the cloud provider will also have to consider how they manage different tenants to ensure that their data remains secure, and they will also have to protect customers’ data from their own employees (i.e. those of the cloud service provider).  This is a level of security that is rarely considered by internal IT departments and results in more – and more deeply considered – data separation and encryption than would be possible within a single company.
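As a rough illustration of this kind of tenant separation – a sketch only, using the open source `cryptography` library rather than any particular provider’s real design – consider a store in which every record is both scoped and encrypted per tenant, so that neither another tenant nor an employee with raw access to storage can read the data:

```python
from cryptography.fernet import Fernet

class TenantStore:
    """Toy multi-tenant store: per-tenant keys plus tenant-scoped queries."""

    def __init__(self):
        self._keys = {}   # tenant_id -> key (a real provider would use a KMS/HSM)
        self._rows = []   # (tenant_id, ciphertext) pairs in shared storage

    def register(self, tenant_id):
        self._keys[tenant_id] = Fernet.generate_key()

    def put(self, tenant_id, data: bytes):
        # Data is encrypted with the owning tenant's key before it touches storage.
        self._rows.append((tenant_id, Fernet(self._keys[tenant_id]).encrypt(data)))

    def get_all(self, tenant_id):
        # Reads are always scoped by tenant *and* require that tenant's key.
        f = Fernet(self._keys[tenant_id])
        return [f.decrypt(token) for tid, token in self._rows if tid == tenant_id]

store = TenantStore()
store.register("acme")
store.put("acme", b"confidential order data")
print(store.get_all("acme"))  # [b'confidential order data']
```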

Looking at the cloud service from the outside we can see that providers will be more obvious targets for security attacks than individual enterprises but counter-intuitively this will make them more secure.  They will need to be secured against a broader range of attacks, they will learn more rapidly and the capabilities they learn through this process could never be created within an internal IT organisation.  Frankly, however, the need to make security of IT a core competency is one of the things that will push us towards consolidation of computing platforms into large providers – it is a complex subject that will be more safely handled by specialised platforms rather than each cloud service provider or enterprise individually. 

All of these changes are part of the more general shift to new models of computing; to date the paradigm for security has largely been that we hide our applications and data from each other within firewalled islands.  Increasing collaboration across organisations and the cost, flexibility and scale benefits of sharing mean that we need to find a way of making our services available outside our organisational boundaries, however.  Again in doing this we need to consider who is best placed to ensure the secure operation of applications that are supporting multiple clients – is it specialised cloud providers who have created a security model specifically to cope with secure open access and multi-tenancy for many customer organisations, or is it a group of keen “amateurs” with the limited experience that comes from the small number of use cases they have discovered within the bounds of a single organisation?  Furthermore as more and more companies migrate onto cloud services – and such services become ever more secure – so the isolated islands will become prime targets for security attacks, since the likelihood that they can maintain top levels of security cut off from the rest of the industry – and with far less investment in security than can be made by specialised platform providers – becomes ever smaller.  Slowly isolationism becomes a threat rather than a protection.  We really are stronger together.

A final key issue that falls under the ‘security’ tag is that of data location (basically the perceived requirement to keep data in the country of the customer’s operating business).  Often this starts out as the major, major barrier to adoption, but slowly you often discover that people are willing to trade off where their data are stored when the costs of implementing such location policies can be huge for little value.  Again, in an increasingly global world businesses need to think more openly about the implications of storing data outside their country – for instance a UK company (perhaps even government) may have no practical issues in storing most data within the EU.  In many cases, however, businesses apply old rules or ways of thinking rather than challenging themselves in order to gain the benefits involved.  This is often tied into political processes – particularly between the business and IT – and leads to organisations not sufficiently examining the real legal issues and possible solutions in a truly open way.  It can often become an excuse to build a private infrastructure, fulfilling the IT department’s desire to maintain control over the assets but in doing so loading unnecessary costs and inflexibility onto the business itself – ironically as a direct result of the business’s unwillingness to challenge its own thinking.

Does this mean that I believe that people should immediately begin throwing applications into the cloud without due care and attention?  Of course not.  Any potential provider of applications or platforms will need to demonstrate appropriate certifications and undergo some kind of due diligence.  Where data resides is a real issue that needs to be considered but increasingly this is regional rather than country specific.   Overall, however, the reality is that credible providers will likely have better, more up to date and broader security measures than those in place within a single organisation. 

So finally – at least for me – weak cloud security is the third major and dangerous fallacy.

Comparing Public and Private

Private and Public are Not Equivalent

The real discussion here needs to be less about public vs private clouds – as if they are equivalent but just delivered differently – and more about how businesses can leverage the seismic change in model occurring in IT delivery and economics.  Concentrating on the small-minded issues of whether technology should be deployed internally or externally as a result of often inconsequential concerns – as we have discussed – belittles the business opportunities presented by a shift to the cloud by dragging the discussion out of the business realm and back into the sphere of techno-babble.

The reality is that public and private clouds and services are not remotely equivalent; private clouds (i.e. internal infrastructure) are a vote to retain the current expensive, inflexible and one-size-fits-all model of IT that forces a business to sub-optimise a large proportion of its capabilities to make their IT costs even slightly tolerable.  It is a vote to restrict choice, reduce flexibility, suffer uncompetitive service levels and to continue to be distracted – and poorly served – by activities that have absolutely no differentiating value to the business. 

Public clouds and services on the other hand are about letting go of non-differentiating services and embracing specialisation in order to focus limited attention and money on the key mission of the business.  The key point in this whole debate is therefore specialisation; organisations need to treat IT as an enabler and not an asset, they need to concentrate on delivering their services and not on how their clothes get made.

Summary

If there is currently a ‘surge’ in interest in private clouds it is deeply confusing (and disturbing) to me given that the basis for focusing attention on private infrastructures appears to be deeply flawed thinking around control, service and security.  As we have discussed not only are cloud services the best opportunity that businesses have ever had to improve these factors to their own gain but a misplaced desire to retain the IT models of today also undermines the huge business optimisations available through specialisation and condemns businesses to limited choice, high costs and poor service levels.  The very concerns that are expressed as reasons not to move to cloud models – due to a concentration on FUD around a small number of technical issues – are actually the things that businesses have most to gain from should they be bold and start a managed transition to new models.  Cloud models will give them control over their IT by allowing them to choose from different providers to optimise different areas of their business without sacrificing scale and management benefits; service levels of cloud providers – whilst not currently guaranteed – are often better than they’ve ever experienced and entrusting security to focused third parties is probably smarter than leaving it as one of many diverse concerns for stretched IT departments. 

Fundamentally, though, there is no equivalence between the concept of public (including logically private but shared) and truly private clouds; public services enable specialisation, focus and all of the benefits we’ve outlined whereas private clouds are just a vote to continue with the old way.  Yes virtualisation might reduce some costs, yes consolidation might help but at the end of the day the choice is not the simple hosting decision it’s often made out to be but one of business strategy and outlook.  It boils down to a choice between being specialised, outward looking, networked and able to accelerate capability building by taking advantage of other people’s scale and expertise or rejecting these transformational benefits and living within the scale and capability constraints of your existing business – even as other companies transform and build new and powerful value networks without you.

Differentiation vs Integration (Addenda)

22 Jun

After completing my post on different kinds of differentiation the other day I still had a number of points left over that didn’t really fit neatly into the flow of the arguments I presented.  I still think some of them are interesting, though, and so thought I’d add them as addenda to my previous post!

Addendum 1

The first point was a general feeling that ‘standardisation’ is a good thing from an IT perspective.  This stemmed from one of Richard’s explicit statements that:

“Many people in the IT world take for granted that standardization (reduction in variety) is a good thing”

Interestingly it is true to say that from an IT perspective standardisation is generally a good thing (since IT is an infrastructural capability).  Such standardisation, however, must allow for key variances that allow people to configure and consume the standardised applications and systems in a way that enables them to reach their goals (so they must support configuration for each ‘tenant’).  Echoing my other post on evolution – in order to consider this at both an organisational and a market level – we can see that a shift to cloud computing (and ultimately consumption of specialised business capabilities across organisational boundaries) opens up a wider vista than is traditionally available within a single company.
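As a minimal sketch of what ‘standardised but configurable’ might look like in practice – the capability names and settings here are invented purely for illustration – one shared codebase can serve every tenant, with each tenant’s genuine variances expressed as configuration rather than as a separate customised install:

```python
# One standardised application; per-tenant variance is pure configuration.

STANDARD_DEFAULTS = {
    "approval_steps": 1,
    "currency": "GBP",
    "data_retention_days": 365,
}

tenant_overrides = {
    "finance": {"approval_steps": 3, "data_retention_days": 2555},
    "sales":   {"currency": "USD"},
}

def config_for(tenant: str) -> dict:
    # Every tenant gets the standard behaviour plus only their declared variances.
    return {**STANDARD_DEFAULTS, **tenant_overrides.get(tenant, {})}

print(config_for("finance"))
# {'approval_steps': 3, 'currency': 'GBP', 'data_retention_days': 2555}
```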

In the traditional way of thinking about IT, people within a single organisation look to increase standardisation as a valid way of reducing costs and increasing reliability within the bounds of a single organisation’s IT estate.  The issue is that such IT standardisation often forces inappropriate standardisation – both in terms of technology support and change processes – on capabilities within the business (something I talked about a while ago).  Essentially the need to standardise for operational IT efficiency tries to override the often genuine cost and capability differences required by each business area.  In addition on-premise solutions have rarely been created with simple mass-configuration in mind, requiring expensive IT customisation and integration to create a single ‘standard’ solution that cannot be varied by tenant (tenant in this case being a business capability with different needs).  Such tensions result in a constant war between IT – and the single ‘standard’ solution they can afford to support – and individual business capabilities with their different cost and capability requirements (which often results in departmental or ‘shadow’ IT implemented by end users outside the control of the IT department).

The interesting point about this, however, is that cloud computing allows organisations to make use of many platforms and applications without a) the upfront expenditure usually required for hardware, training and operational setup and b) the ongoing operational management costs.  In this instance the valid reasons that IT departments try to drive towards standardisation – i.e. reducing the number of heterogeneous technologies they must deploy, manage and upgrade – largely disappear.  If we also accept that IT is essentially infrastructural in nature – and hence provides no differentiation – then we can easily rely on external technology platforms to provide standardisation and economies of scale on our behalf without having to mandate a single platform or application to gain these efficiencies.  At this point we can turn the traditional model on its head – we can choose different platforms and applications for each capability depending on its needs without sacrificing any of the benefits of standardisation (subject to the applications and platforms supporting interoperability standards to facilitate integration).  Significant and transformational improvements enabled by capability-specific optimisation of the business are therefore (almost tragically) dependent on freeing ourselves from the drag of internal IT.

Addendum 2

Richard also highlighted the fact that there is still a strong belief in many quarters that ‘business architecture’ should be an IT discipline (largely, I guess, from people who can’t read?).  I believe that ‘business’ architecture is fundamentally about propositions, structure and culture before anything else, and that IT is simply one element of a lower level set of implementation decisions.  Whilst IT people may have a leg up on the ‘structured thinking’ necessary to reason about a business’s architecture, any suggestion that business owners are too stupid to design their own organisations – especially using abstraction methods like capabilities – seems outrageous to me.  IT people have an increasingly strong role to play in ‘fusing’ with business colleagues to more rapidly implement differentiating capabilities but they don’t own the business.  Additionally, continued IT ownership of business architecture and EA causes two further issues: 1) IT architecture techniques are still a long way in advance of business architecture techniques, which means it is faster, easier and more natural for IT people to concentrate on them; and 2) the lack of business people working in the field – since they don’t know IT – limits the rate at which the harder questions about propositions and organisational fitness are being asked and tackled.  As a result – at least from my perspective – ‘business architecture’ owned by IT delivers a potential double whammy against progress; on the one hand it leads to a profusion of IT-centric EA efforts targeted at low interest areas like IT efficiency or cost reduction, whilst on the other it allows people to avoid studying, codifying and tackling the real business architecture issues that could be major strategic levers.

Addendum 3

As a final quick aside the model that I discussed for viewing an organisation as a set of business capabilities gives rise to the need for different ‘kinds’ of business architects with many levels of responsibility.  Essentially you can be a business architect helping the overall enterprise to understand what capabilities are needed to realise value streams (so having an enterprise and market view of ‘what’ is required) through to a business architect responsible for how a given capability is actually implemented in terms of process, people and technology (so having an implementation view of ‘how’ to realise a specific ‘what’).  In this latter case – for capabilities that are infrastructural in nature and thus require high standardisation – it may still be appropriate to use detailed, scientific management approaches.

Business Enablement as a Key Cloud Element

30 Apr

After finally posting my last update about ‘Industrialised Service Delivery’ yesterday I have been happily catching up with the intervening output of some of my favourite bloggers.

One post that caught my eye was a reference from Phil Wainwright – whilst he was talking about the VMForce announcement – to a post he had written earlier in the year about Microsoft’s partnership with Intuit.  Essentially one of his central statements was related directly to the series of posts I completed yesterday (so part 1, part 2 and part 3):

“the breadth of infrastructure <required for SaaS> extends beyond the development functionality to embrace the entirely new element of service delivery capabilities. This is a platform’s support for all the components that go with the as-a-service business model, including provisioning, pay-as-you-go pricing and billing, service level monitoring and so on. Conventional software platforms have no conception of these types of capability but they’re absolutely fundamental to delivering cloud services and SaaS applications”.

This is one of the key points that I think is still – inexplicably – lost on many people (particularly people who believe that cloud computing is primarily about providing infrastructure as a service).  In reality the whole world is moving to service models because they are simpler to consume, deliver clearer value for more transparent costs and can be shared across organisations to generate economies of scale.  In fact ‘as a service’ models are increasingly not going to be an IT phenomenon but also going to extend to the way in which businesses deal with each other across organisational boundaries.  For the sale and consumption of such services to work, however, we need to be able to ‘deliver’ them; in this context we need to be able to market them, make them easy to subscribe to, manage billing and service levels transparently for both the supplier and consumer and enable rapid change and development over time to meet the evolving needs of service consumers.  As a result anyone who wants to deliver business capabilities in the future – whether these are applications or business process utilities – will need to be able to ensure that their offering exhibits all of these characteristics. 

Interestingly these ‘business enablement’ functions are pretty generic across all kinds of software and services since they essentially cover account management, subscription, business model definition, rating and billing, security, marketplaces etc etc (i.e. all of the capabilities that I defined as being required in a ‘Service Delivery Platform’).  In this context the use of the term ‘Service Delivery Platform’ in place of cloud or PaaS was deliberate; what next generation infrastructures need to do is enable people to deliver business services as quickly and as robustly as possible, with the platforms themselves also helping to ensure trust by brokering between the interests of consumers and suppliers through transparent billing and service management mechanisms.
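To illustrate how generic these enablement functions really are, here is a toy sketch of subscription plus metered, pay-as-you-go rating; the plan names and rates are invented, and a real platform would obviously add contracts, proration, service level monitoring and the rest:

```python
from collections import defaultdict

PLANS = {"basic": 0.02, "premium": 0.05}  # illustrative price per service call

subscriptions = {}         # consumer -> plan name
usage = defaultdict(int)   # consumer -> metered calls this billing period

def subscribe(consumer, plan):
    subscriptions[consumer] = plan

def record_call(consumer):
    usage[consumer] += 1   # the platform meters every invocation

def invoice(consumer):
    # Rating and billing: transparent to both supplier and consumer.
    return usage[consumer] * PLANS[subscriptions[consumer]]

subscribe("acme", "premium")
for _ in range(1200):
    record_call("acme")
print(invoice("acme"))  # 60.0
```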

This belief in service delivery is one of the reasons I believe that the notion of ‘private clouds’ is an oxymoron – I found this hoary subject raised again on a Joe McKendrick post after a discussion on ebizQ – even without the central point about the obvious loss of economies of scale; essentially the requirement to provide a whole business enablement fabric to facilitate cross-organisational service ecosystems – initially for SaaS but increasingly for organisational collaboration and specialisation – is just one of the reasons I believe that ‘private clouds’ are really just evolutions of on-premise architecture patterns – with all of the costs and complexity retained – and thus purely marketecture.  When decreasing transaction costs are enabling much greater cross-organisational value chains the benefits of a public service delivery platform are immense, enabling organisations to both scale and evolve their operations more easily whilst also providing all of the business support they need to offer and consume business services in extended value chains.  Whilst some people may think that this is a pretty future-oriented reason to dislike the notion of private clouds, for completeness I will also say that to me – in the sense of customer owned infrastructures – they are an anachronism; again this is just an extension of existing models (for good or ill) and nothing to do with ‘cloud’.  It is only the fact that most protagonists of such models are vendors with very low maturity offerings like packaged infrastructure and/or middleware solutions that makes it viable, since the complexity of delivering a true private SDP offering would be too great (not to mention ridiculously wasteful).  In my view ‘private clouds’ in the sense of end organisation deployment are just a new internal infrastructure (whether self managed or via a service company) rather like the one you already have, but with a whole bunch of expensive new hardware and software (so 90% of the expense but only 10% of the benefits).

To temper this stance I do believe that there is a more subtle, viable version of ‘privacy’ that will be supported by ‘real’ service delivery platforms over time – that of having a logically private area of a public SDP to support an organisational context (so a cohesive collection of branded services, information and partner integrations – or what I’ve always called ‘virtual private platforms’).  This differs greatly from the ‘literally’ private clouds that many organisations are positioning as a mechanism to extend the life of traditional hardware, middleware or managed service offerings – the ability of service delivery platforms to rapidly instantiate ‘virtual’ private platforms will be a core competency, giving the appearance and benefits of privacy whilst maintaining the transformational benefits of leveraging the cloud in the first place.  To me literally ‘private clouds’ on an organisation’s own infrastructure – with all of their capital expense, complexity of operation, high running costs and ongoing drag on agility – only exist in the minds of software and service companies looking to extend their traditional businesses for as long as possible.

Industrialised Service Delivery Redux III

29 Apr

It’s a bit weird editing this more or less complete post 18 months later but this is a follow on to my previous posts here and here.  In those posts I discussed the need for much greater agility to cope with an increasingly unpredictable world and ran through the ways in which we can industrialise IT provision to focus on tangible business value and rapid realisation of business capability.  This story relied upon the core notion that technology is no longer a differentiator in and of itself and thus we just need workable patterns that meet our needs for particular classes of problem – which in turn reduces the design space we need to consider and allows increasing use of specialised platforms, templates and development tools.

In this final post I will discuss the notion that such standardisation calls into question the need to own such technology at all; essentially as platforms and tools become more standardised and available over the network so the importance of technology moves to access rather than ownership.

Future Consolidation

One of the interesting things from my perspective is that once you start to build out an asset-based business – like a service delivery platform – it quickly becomes subject to economies of scale.

It is rapidly becoming plain, therefore, that game changing trends such as:

  • Increasing middleware consolidation around traditional ‘mega platform’ providers;
  • Flexible infrastructure enabled by virtualisation technology;
  • Increasingly powerful abstractions such as service-orientation;
  • The growing influence of open source software and collaborating communities; and
  • The massively increased interconnectivity enabled by the web.

are all going to combine to change not just the shape of the IT industry itself but increasingly all industries; essentially as IT moves to service models so organisations will need to reshape themselves to align with these new realities, both in terms of their use of IT but also in terms of finding their distinctive place within their own disaggregating business ecosystems.

From a technology perspective it is therefore clear that these forces are combinatory and lead to accelerating commoditisation.  The implication of this acceleration is that decreasing differentiation should lead to increased consolidation as organisations no longer need to own and operate their own IT when such IT incurs cost and complexity penalties without delivering differentiation.

[Figure: accelerating commoditisation driving consolidation of IT platforms]

In a related way such a shift by organisations to shared IT platforms is also likely to be an amplifying trend; as we see greater platform consolidation – and hence decreasing differentiation to organisations owning their own IT – so will laggard organisations become less competitive as a result of their expensive and high drag IT relative to their low cost, fleet of foot competitors.  Such organisations will then also seek to transition, eventually creating a tipping point at which ownership of IT becomes an anachronism.

From the supply perspective we can also see that as platforms become less differentiating and more commoditised they also become subject to increasing economies of scale – from an overall market perspective, therefore, offering platforms as a service becomes a far more effective use of capital than the creation and ownership of an island of IT, since scale technologies drift naturally towards consolidation.  There are some implications to this for the IT industry given the share of overall IT spend that goes on repeated individual installation and consulting for software and hardware but we shall leave that for another post.

As a result of these trends it is highly likely that we will see platform as a service propositions growing in influence fairly rapidly.  Initially these platforms are likely to be infrastructure-oriented and targeted at new SaaS providers or transitioning ISVs to lower the cost of entry, but I believe that they will eventually expand to deliver the full business enablement support required by all organisations that need to exist in extended value webs (i.e. eventually everyone).  These latter platforms will need to have all of the capabilities I discussed in the previous post and will be far beyond the technology-centric platforms envisaged by the majority of emerging platform providers today.  Essentially as everybody becomes a service provider (or BPU in other terms) in their particular business ecosystem, so they will need to rapidly realise, commercialise, manage and adapt the services they offer to their value webs.  In this latter scenario I believe that organisations will be caught in the jaws of a vice – the unbundling of capability to SaaS or other BPU providers to allow them to specialise and optimise the overall value stream will see their residual IT costs rocket as there are fewer capabilities to share them across; at the same time economies of scale produced by IT service companies will see the costs of platform as a service offerings plummet and make the transition a no-brainer.

So what would a global SDP look like?

[Figure: a global Service Delivery Platform]

Well remarkably like the one I showed in my previous posts given that I was leading up to this point, lol.  The first difference is that the main bulk of the platform is now explicitly deployed in the cloud – and it’ll obviously need to scale up and down smoothly and at low cost.  In addition all of the patterns that we discussed in my previous post will need to support multi-tenancy and such patterns will need to be built into the tools and factories that we will use to create systems optimised to run on our Service Delivery Platform.

At the same time the service factory becomes a way of enabling the broadest range of stakeholders to rapidly and reliably create services and applications that can be deployed to our platform – in fact it moves from being “just” an interesting set of tools to support industrialised capability realisation to being one of the main battlegrounds for PaaS providers trying to broaden their subscriber base by increasing the fidelity of realisation and reducing the barrier of entry to the lowest level possible.

Together the cloud platform and associated service factory will be the clear option of choice for most organisations, since it will yield the greatest economies of scale to the people using it.

One last element on this diagram that differentiates it from the earlier one is the on-premise ‘customer service platform’.  In this context there is still a belief in many quarters that organisations will not want to physically share space and hardware with other people – they may be less mature, they may not trust sufficiently or they may genuinely have reasons why their data and services are so important that they are willing to pay to host them separately.  In the long term I do not subscribe to this view and to me the notion of ‘private clouds’ – outside of perhaps government and military use cases – is oxymoronic and at best a transitional situation as people learn to trust public infrastructures.  On the other hand, whilst this may be playing with semantics, I can see the case for ‘virtual private clouds’ (i.e. logically ring-fenced areas of public clouds) that give the appearance and the majority of the benefits of being private through ‘soft’ partitioning (i.e. through logical security mechanisms) whilst retaining economies of scale through the avoidance of ‘hard’ partitioning (i.e. separate physical infrastructure).  Indeed I would state that such mechanisms for making platforms appear private (including whitelabelling capabilities) will be necessary to support the branding requirements of resellers, systems integrators and end organisations.  For the sake of completeness, however, I would position transitional ‘private clouds’ as reduced functionality versions of a Service Delivery Platform that simply package up some hardware but leave the majority of the operational and business support – along with things like backup and failover – back at the main data centres of the provider in order to create an acceptable trade-off in cost.

Summary

So in this final post I have touched on some of the wider changes that are an implication of technology commoditisation and the industrialisation of service realisation.  For completeness I’ll recap the main messages from the three posts:

  • In post one I discussed how businesses are going to be forced to become much more aware of their business capabilities – and their value – by the increasingly networked and global nature of business ecosystems.  As a result they will be driven to concentrate very hard on realising their differentiating capabilities as quickly, flexibly and cost effectively as possible; in addition they will need to deliver these capabilities with stringent metrics.  This has some serious implications for the IT industry as we will need to shift away from a technology focus (where the client has to discover the value as a hit and miss emergent process) to one where we can demonstrate a much more mature, reliable and outcome based proposition. To do this we’ll need to build the platforms to realise capabilities effectively and in the broadest sense.
  • In post two I discussed how industrialisation is the creation and consistent application of known patterns, processes and infrastructures to increase repeatability and reliability. We might sacrifice some flexibility but increasing commoditisation of technology makes this far less important than cost effectiveness and reliability. When industrialising you need to understand your end to end process and then do the nasty bit – bottom up in excruciating detail.
  • Finally in post three I have discussed my belief that increasing standardisation of technology will lead to accelerating platform consolidation.  Essentially as technology becomes less differentiating and subject to economies of scale it’s likely that IT ownership and management will be less attractive. I believe, therefore, that we will see increasing and accelerating activity in the global Service Delivery Platform arena and that IT organisations and their customers need to have serious, robust and viable strategies to transition their business models.

Industrialised Service Delivery Redux II

22 Sep

In my previous post I discussed the way in which our increasingly sophisticated use of the Web is creating an unstoppable wave of change in the global business environment.  This resulting acceleration of change and expectation will require unprecedented organisational speed and adaptability whilst simultaneously driving globalisation and consumerisation of business.  I discussed my belief that companies will be forced to reform as a portfolio of systematically designed components with clear outcomes and how this kind of thinking changes the relationship between a business capability and its IT support.  In particular I discussed the need to create industrialised Service Delivery Platforms which vastly increase the speed, reliability and cost effectiveness of delivering service realisations. 

In this post I’ll move on to the second part of the story, where I’ll look more specifically at how we can realise the industrialisation of service delivery through the creation of an SDP.

Industrialisation 101

There has been a great deal written about industrialisation over the last few years and most of this literature has focused on IT infrastructure (i.e. hardware), where components and techniques are more commoditised.  As an example many of my Japanese colleagues have spent decades working with leaders in the automotive industry and experienced firsthand the techniques and processes used in zero defect manufacturing and the application of lean principles.  Sharing this same mindset around reliability, zero defect and technology commoditisation, they created a process for delivering reliable and guaranteed outcomes through pre-integration and testing of combinations of hardware and software.  This kind of infrastructure industrialisation enables much higher success rates whilst simultaneously reducing the costs and lead times of implementation.

In order to explore this a little further and to set some context, let’s just think for a moment about the way in which IT has traditionally served its business customers.

[Figure: non-industrialised vs industrialised delivery]

We can see that generally speaking we are set a problem to solve and we then take a list of products selected by the customer – or often by one of our architects applying personal preference – and we try to integrate them together on the customer’s site, at the customer’s risk and at the customer’s expense.  The problem is that we may never have used this particular combination of hardware, operating systems and middleware before – a problem that worsens exponentially as we increase the complexity of the solution, by the way – and so there are often glitches in their integration, it’s unclear how to manage them and there can’t be any guarantees about how they will perform when the whole thing is finally working.  As a result projects take longer than they should – because much has to be learned from scratch every time; they cost a lot more than they should – because there are longer lead times to get things integrated, working and into management; and, most damningly, they are often unreliable, as there can be no guarantees that the combination will continue to work and learning is needed to understand how to keep it up and running.

The idea of infrastructure industrialisation, however, helps us to concentrate on the technical capability required – do you want a Java application server? Well here it is, pre-integrated on known combinations of hardware and software and with manageability built in but – most importantly – tested to destruction with reference applications so that we can place some guarantees around the way this combination will perform in production.  As an example, 60% of the time taken within Fujitsu’s industrialisation process is in testing.  The whole idea of industrialisation is to transfer the risk to the provider – whether an internal IT department or an external provider – so that we are able to produce consistent results with standardised form and function, leading to quicker, more cost effective and reliable solutions for our customers.

Now such industrialisation has slowly been maturing over the last few years but – as I stated at the beginning – has largely concentrated on infrastructure templating: hardware, operating systems and middleware combined and ready to receive applications.  Recent advances in virtualisation are also accelerating the commoditisation and industrialisation of IT infrastructure by making this templating process easier and more flexible than ever before.  Such industrialisation provides us with more reliable technology but does not address the ways in which we can realise higher level business value more rapidly and reliably.  The next (and more complex) challenge, therefore, is to take these same principles and apply them to the broader area of business service realisation and delivery.  The question is how we can do this.

Industrialisation From Top to Bottom

Well the first thing to do is understand how you are going to get from your expression of intent – i.e. the capability definitions I discussed in my previous post that abstract us away from implementation concerns – through to a running set of services that realise this capability on an industrialised Service Delivery Platform.  This is a critical concern since if you don’t understand your end to end process then you can’t industrialise it through templating, transformation and automation.

[Figure: end-to-end service realisation]

In this context we can look at our capability definitions and map concepts in the business architecture model down to classifications in the service model.  Capabilities map to concrete services, macro processes map to orchestrations, people tasks map to workflows, top level metrics become SLAs to be managed etc. The service model essentially bridges the gap between the expression of intent described by the target business architecture and the physical reality of assets needed to execute within the technology environment.
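The mapping itself is mechanical enough to express directly.  The sketch below mirrors the concept names used above (capability → service, macro process → orchestration, people task → workflow, metric → SLA); the classes are illustrative rather than any real metamodel:

```python
from dataclasses import dataclass, field

@dataclass
class Capability:            # business architecture model
    name: str
    macro_processes: list = field(default_factory=list)
    people_tasks: list = field(default_factory=list)
    metrics: dict = field(default_factory=dict)  # e.g. {"response_time_ms": 200}

@dataclass
class ServiceModel:          # the bridge to the technology environment
    service: str
    orchestrations: list
    workflows: list
    slas: dict

def map_to_services(cap: Capability) -> ServiceModel:
    return ServiceModel(
        service=cap.name,                    # capability    -> concrete service
        orchestrations=cap.macro_processes,  # macro process -> orchestration
        workflows=cap.people_tasks,          # people task   -> workflow
        slas=cap.metrics,                    # top metric    -> managed SLA
    )
```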

From here we broadly need to understand how each of our service types will be realised in the physical environment – so for instance we need a physical host to receive and execute each type of service, we need to understand how SLAs are provisioned so that we can monitor them etc. etc.

Basically the concern at this stage is to understand the end to end process through which we will transform the data that we capture at each stage of the process into ever more concrete terms – all the way from logical expressions of intent through greater information about the messages, service levels and type of implementation required, through to a whole set of assets that are physically deployed and executing on the physical service platform, thus realising the intent.

The core aim of this process must be to maximise both standardisation of approach and automation at each stage to ensure repeatability and reliability of outcome – essentially our aim in this process is to give business capability owners much greater reliability and rapidity of outcome as they look to realise business value.  We essentially want to give guarantees that we can not only realise functionality rapidly but also that these realisations will execute reliably and at low cost.  In addition we must also ensure that the linkage between each level of abstraction remains in place so that information about running physical services can be used to judge the performance of the capability that they realise, maximising the levers of change available to the organisation by putting them in control of the facts and allowing them to ‘know sooner’ what is actually happening.

Having an end to end view of this process essentially creates the rough outline of the production line that needs to be created to realise value – it gives us a feel for the overall requirements.  Unfortunately, however, that’s the nice bit, the kind of bit that I like to do. Whilst we need to understand broadly how we envisage an end to end capability realisation process working, the real work is in the nasty bit – when it comes to industrialisation work has to start at the bottom.

Industrialisation from Bottom to Top

If you imagine the creation of a production line for any kind of physical good, it obviously has to be designed to optimise the creation of the end product.  Every little conveyor belt or twisty robot arm has to be calibrated to nudge or weld the item in exactly the same spot to achieve repeatability of outcome.  In the same way any attempt to industrialise the process of capability realisation has to start at the bottom, with a consideration of the environment within which the final physical assets will execute and of how to create assets optimised for this environment as efficiently as possible.  I use a simple ‘industrialisation pyramid’ to visualise this concept, since increasingly specialised and high value automation and industrialisation needs to be built on broader and more generic industrialised foundations.  In reality the process is highly iterative as you need to keep recalibrating both up and down the hierarchy to ensure that the process is efficient and realises the expressed intent, but for the sake of simplicity you can assume that we just build this up from the bottom.

[Figure: the industrialisation pyramid]

So let’s start at the bottom with the core infrastructure technologies – what are the physical hosts that are required to support service execution? What physical assets will services need to create in order to execute on top of them? How does each host combine together to provide the necessary broad infrastructure and what quality of service guarantees can we put around each kind of host? Slightly more broadly, how will we manage each of the infrastructure assets? This stage requires a broad range of activity not just to standardise and templatise the hosts themselves but also to aggregate them into a platform and to create all of the information standards and process that deployed services will need to conform to so that we can find, provision, run and manage them successfully.

Moving up the pyramid we can now start to think in more conceptual terms about the reference architecture that we want to impose – the service classifications we want to use, the patterns and practices we want to apply to the realisation of each type, and more specifically the development practices.  Importantly we need to be clear about how these service classifications map seamlessly onto the infrastructure hosting templates and lower level management standards to ensure that our patterns and practices are optimised – it’s only in this way that we can guarantee outcomes by streamlining the realisation and asset creation process.  Gradually through this definition activity we begin to build up a metamodel of the types of assets that need to be created as we move from the conceptual to the physical, and the links and transformations between them.  This is absolutely key as it enables us to move to the next level – which I call automating the “means of production”.

This level becomes the production line that pushes us reliably and repeatably from capability definition through to physical realisation.  The metamodel we built up in the previous tier helps us to define domain specific languages that simplify the process of generating the final output, allowing the capture of data about each asset and the background generation of code that conforms to our preferred classification structure, architectural patterns and development practices.  These DSLs can then be pulled together into “factories” specialised to the realisation of each type of asset, with each DSL representing a different viewpoint for the particular capability in hand.  Individual factories can then be aggregated into a ‘capability realisation factory’ that drives the end to end process.  As I stated in my previous post the whole factory and DSL space is mildly controversial at the moment, with Microsoft advocating explicit DSL and factory technologies and others continuing to work towards MDA or flexible open source alternatives.  Suffice it to say in this context that the approaches I’m advocating are possible via either model – a subject I might return to with some examples of each (for an excellent consideration of this whole area consult Martin Fowler’s great coverage, btw).
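As a deliberately tiny illustration of automating the ‘means of production’ – assuming a trivial dictionary-based model in place of real DSL tooling – the sketch below transforms a service model captured at the conceptual level into skeleton code conforming to a chosen classification and pattern:

```python
# A conceptual service model, as a stand-in for what DSL tooling would capture.
service_model = {
    "name": "OrderCapture",
    "kind": "domain",        # one of the service classifications
    "operations": ["create_order", "get_order"],
    "sla": {"availability": "99.9%"},
}

TEMPLATE = '''class {name}Service:  # generated {kind} service, SLA {sla}
{methods}'''

def generate(model: dict) -> str:
    # Background generation of code that conforms to the preferred patterns.
    methods = "\n".join(
        f"    def {op}(self, request):\n        raise NotImplementedError"
        for op in model["operations"]
    )
    return TEMPLATE.format(name=model["name"], kind=model["kind"],
                           sla=model["sla"]["availability"], methods=methods)

print(generate(service_model))
```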

The final level of this pyramid is to actually start taking the capability realisation factories and tailoring them for the creation of industry specific offerings – perhaps a whole set of ‘factories’ around banking, retail or travel capabilities. From my perspective this is the furthest out and may actually not come to pass; despite Jack Greenfield’s compelling arguments I feel that the rise of SOA and SaaS will obviate the need to generate the same application many times by allowing the composing of solutions from shared utilities.  I feel that the idea of an application or service specific factory assumes a continuation of IT oversupply through many deployments; as a result I feel that the key issue at stake in the industrialisation arena is actually that of democratising access to the means of capability production by giving people the tools to create new value rapidly and reliably.  As a result I feel that improving the reliability and repeatability of capability realisation across the board is more critical than a focus on any particular industry. (This may change in future with demand, however, and one potential area of interest is industry specific composition factories rather than industry specific application generation factories). 

Delivering Industrialised Services

So we come at last to a picture that shows how the various components of our approach fit together from a high-level process perspective.

[Figure: the service factory process]

Across the top we have our service factory. We start on the left hand side with capability modelling, capturing the metadata that describes the capability and what it is meant to do. In this context we can use a domain specific language that allows us to model capabilities explicitly within the tooling. Our aim is then to use the metadata captured about a capability to realise it as one or more services. Information from the metamodel is transformed into an initial version of the service before we use a service domain language to add further detail about contracts, messages and service levels. It is important to note that at this point the service is still abstract – we have not bound it to any particular realisation strategy. Once we have designed the service in the abstract we can then choose an implementation strategy – example classifications could be interaction services for UIs, workflow services for people tasks, process services for service orchestrations, domain services that manage and manipulate data, and integration services that allow adaptation of and integration with legacy or external systems.
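
As a hedged illustration of the abstract/bound distinction described above, the following sketch (with invented names throughout) captures a service’s contract and service levels first and defers the choice of realisation strategy:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RealisationStrategy(Enum):
    INTERACTION = "interaction"   # services backing UIs
    WORKFLOW = "workflow"         # people tasks
    PROCESS = "process"           # service orchestrations
    DOMAIN = "domain"             # managing and manipulating data
    INTEGRATION = "integration"   # legacy/external adaptation

@dataclass
class AbstractService:
    """A service designed in the abstract: its contract and service
    levels are fixed, but no realisation strategy is bound yet."""
    name: str
    contract: dict                 # operation name -> message type
    availability_target: float
    strategy: Optional[RealisationStrategy] = None

    def bind(self, strategy: RealisationStrategy) -> None:
        """Choose an implementation strategy after abstract design."""
        self.strategy = strategy

svc = AbstractService("OrderCapture", {"submit": "OrderMessage"}, 0.999)
svc.bind(RealisationStrategy.PROCESS)
print(svc.strategy)  # RealisationStrategy.PROCESS
```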

Once we have chosen a realisation strategy, all of the metadata captured about the service is used to generate a partially populated realisation of the chosen type – in this context we anticipate having a factory for each kind of service that will control the patterns and practices used and provide in-context guidance to the developer.
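
Continuing the toy example, a per-classification factory might look like the following – the Template-based generators are stand-ins for real factory tooling, not a description of any specific product:

```python
from string import Template

# Hypothetical per-classification factories: each emits a partially
# populated implementation conforming to that type's patterns.
FACTORIES = {
    "domain": Template('''\
class ${name}DomainService:
    def ${operation}(self, message):
        raise NotImplementedError  # data access pattern goes here
'''),
    "process": Template('''\
class ${name}ProcessService:
    def ${operation}(self, message):
        raise NotImplementedError  # orchestration steps go here
'''),
}

def realise(name, classification, operation):
    """Generate a partial realisation of the chosen service type."""
    return FACTORIES[classification].substitute(name=name, operation=operation)

print(realise("OrderCapture", "process", "submit"))
```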

Once we have designed our services we want to be able to design a virtual deployment environment for them, based wholly on industrialised infrastructure templates. In this view we can configure and soft-test the resources required to run our services before generating the provisioning information that can be used to create the virtual environment needed to host them.
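
A minimal sketch of this ‘soft test’ step, assuming a simple in-memory model of template ceilings (TEMPLATES, soft_test and the numbers are all hypothetical):

```python
# Ceilings taken from the industrialised infrastructure templates.
TEMPLATES = {"bpel-engine": {"max_instances": 5},
             "java-app-server": {"max_instances": 20}}

def soft_test(design):
    """Return provisioning info, or raise if a template is over-committed."""
    for template, wanted in design.items():
        ceiling = TEMPLATES[template]["max_instances"]
        if wanted > ceiling:
            raise ValueError(f"{template}: {wanted} exceeds ceiling {ceiling}")
    return [{"host_template": t, "instances": n} for t, n in design.items()]

print(soft_test({"bpel-engine": 2, "java-app-server": 4}))
```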

In the service platform the provisioning information can be used to create a number of hosting engines, deploy the services into them, provision the infrastructure to run them and then set up the necessary monitoring before finally publishing them into a catalogue. The service platform therefore consists of a number of specialised infrastructure hosts supporting runtime execution, along with runtime services that provide – for example – provisioning and eventing support.
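
Purely to illustrate the flow, the sketch below walks provisioning information through hosting engine creation, deployment, monitoring set-up and catalogue publication – every name is invented:

```python
def provision(descriptor):
    """Consume provisioning information and publish services to a catalogue."""
    catalogue = []
    for entry in descriptor["services"]:
        host = f"{entry['host_template']}-instance"  # stand-in for creating a hosting engine
        print(f"deploying {entry['name']} onto {host}")
        print(f"monitoring {entry['name']} against SLA {entry['sla']}")
        catalogue.append(entry["name"])              # publish to the catalogue
    return catalogue

descriptor = {"services": [
    {"name": "OrderCapture", "host_template": "bpel-engine", "sla": "99.9%"},
]}
print(provision(descriptor))  # ['OrderCapture']
```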

The final component of the platform is what I call a ‘service wrap’. This is an implementation of the ITSM disciplines tailored for our environment. Here you will find the catalogue, service management, reporting and metering capabilities needed to manage the services at runtime (again a subset, to make a point). The service catalogue brings together service metadata, reports about performance and usage, and subscription and onboarding processes. Most importantly, the catalogue maintains a strong link between the capabilities originally required and the services used to realise them, supporting business performance management. This gives us a feedback loop from the service wrap which enables capability owners to make decisions about effectiveness and rework their capabilities appropriately.
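
To illustrate that feedback loop, here is a small sketch assuming an in-memory catalogue that links a capability to the services realising it, so that usage reports can roll back up to the capability owner:

```python
from collections import defaultdict

# Hypothetical catalogue entry: capability -> services that realise it.
catalogue = {"OrderCapture": ["OrderProcessService", "OrderDomainService"]}
usage = defaultdict(int)

def record_usage(service, calls):
    """Metering: accumulate runtime usage per service."""
    usage[service] += calls

def capability_report(capability):
    """Aggregate service usage back up to the owning capability."""
    return sum(usage[s] for s in catalogue[capability])

record_usage("OrderProcessService", 120)
record_usage("OrderDomainService", 340)
print(capability_report("OrderCapture"))  # 460
```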

Summary

In this second post of three I have demonstrated how we can use the increasing power of abstraction delivered by service-orientation to drive the industrialisation of capability realisation. While current initiatives broadly target the infrastructure space, I have argued that full industrialisation across the infrastructure, application, service and business domains requires the creation and consistent application of known patterns, processes, infrastructures and skills to increase repeatability and reliability. We might sacrifice some flexibility in technology choice or systems design, but the increasing commoditisation of technology makes this far less important than cost effectiveness and reliability. It’s particularly important to realise that when industrialising you need to understand your end-to-end process and then do the nasty bit – bottom up, in excruciating detail.

So in the third and final post on this subject I’m going to look at futures and how the creation of standardised and commoditised service delivery platforms will affect the industry more broadly – essentially, as technology becomes about access rather than ownership, we will see the rise of global service delivery platforms that support capability realisation and execution on behalf of many organisations.
