Archive | SOA RSS feed for this section

Industrialised Service Delivery Redux III

29 Apr

It’s a bit weird editing this more or less complete post 18 months later but this is a follow-on to my previous posts here and here.  In those posts I discussed the need for much greater agility to cope with an increasingly unpredictable world and ran through the ways in which we can industrialise IT provision to focus on tangible business value and rapid realisation of business capability.  This story relied upon the core notion that technology is no longer a differentiator in and of itself and thus we just need workable patterns that meet our needs for particular classes of problem – which in turn reduces the design space we need to consider and allows increasing use of specialised platforms, templates and development tools.

In this final post I will discuss the notion that such standardisation calls into question the need to own such technology at all; essentially as platforms and tools become more standardised and available over the network so the importance of technology moves to access rather than ownership.

Future Consolidation

One of the interesting things from my perspective is that once you start to build out an asset-based business – like a service delivery platform – it quickly becomes subject to economies of scale.

It is rapidly becoming plain, therefore, that game-changing trends such as:

  • Increasing middleware consolidation around traditional ‘mega platform’ providers;
  • Flexible infrastructure enabled by virtualisation technology;
  • Increasingly powerful abstractions such as service-orientation;
  • The growing influence of open source software and collaborating communities; and
  • The massively increased interconnectivity enabled by the web.

are all going to combine to change not just the shape of the IT industry itself but increasingly all industries; essentially as IT moves to service models so organisations will need to reshape themselves to align with these new realities, both in terms of their use of IT but also in terms of finding their distinctive place within their own disaggregating business ecosystems.

From a technology perspective it is therefore clear that these forces are combinatory and lead to accelerating commoditisation.  The implication of this acceleration is that decreasing differentiation should lead to increased consolidation as organisations no longer need to own and operate their own IT when such IT incurs cost and complexity penalties without delivering differentiation.


In a related way such a shift by organisations to shared IT platforms is also likely to be an amplifying trend; as we see greater platform consolidation – and hence decreasing differentiation for organisations owning their own IT – so laggard organisations will become less competitive as a result of their expensive, high-drag IT relative to their low-cost, fleet-of-foot competitors.  Such organisations will then also seek to transition, eventually creating a tipping point at which ownership of IT becomes an anachronism.

From the supply perspective we can also see that as platforms become less differentiating and more commoditised they also become subject to increasing economies of scale – from an overall market perspective, therefore, offering platforms as a service becomes a far more effective use of capital than the creation and ownership of an island of IT, since scale technologies drift naturally towards consolidation.  There are some implications to this for the IT industry given the share of overall IT spend that goes on repeated individual installation and consulting for software and hardware but we shall leave that for another post.

As a result of these trends it is highly likely that we will see platform as a service propositions growing in influence fairly rapidly.  Initially these platforms are likely to be infrastructure-oriented and targeted at new SaaS providers or transitioning ISVs to lower the cost of entry, but I believe that they will eventually expand to deliver the full business enablement support required by all organisations that need to exist in extended value webs (i.e. eventually everyone).  These latter platforms will need to have all of the capabilities I discussed in the previous post and will be far beyond the technology-centric platforms envisaged by the majority of emerging platform providers today.  Essentially as everybody becomes a service provider (or BPU in other terms) in their particular business ecosystem, so they will need to rapidly realise, commercialise, manage and adapt the services they offer to their value webs.  In this latter scenario I believe that organisations will be caught in the jaws of a vice.  On one side, the unbundling of capability to SaaS or other BPU providers – to allow them to specialise and optimise the overall value stream – will see their residual IT costs rocket, as there are fewer capabilities left to share those costs across.  On the other, the economies of scale enjoyed by IT service companies will see the costs of platform as a service offerings plummet and make the transition a no-brainer.

So what would a global SDP look like?


Well remarkably like the one I showed in my previous posts given that I was leading up to this point, lol.  The first difference is that the main bulk of the platform is now explicitly deployed in the cloud – and it’ll obviously need to scale up and down smoothly and at low cost.  In addition all of the patterns that we discussed in my previous post will need to support multi-tenancy and such patterns will need to be built into the tools and factories that we will use to create systems optimised to run on our Service Delivery Platform.

At the same time the service factory becomes a way of enabling the broadest range of stakeholders to rapidly and reliably create services and applications that can be deployed to our platform – in fact it moves from being “just” an interesting set of tools to support industrialised capability realisation to being one of the main battlegrounds for PaaS providers trying to broaden their subscriber base by increasing the fidelity of realisation and reducing the barrier to entry to the lowest level possible.

Together the cloud platform and associated service factory will be the clear option of choice for most organisations, since it will yield the greatest economies of scale to the people using it.

One last element on this diagram that differentiates it from the earlier one is the on-premise ‘customer service platform’. In this context there is still a belief in many quarters that organisations will not want to physically share space and hardware with other people – they may be less mature, they may not trust sufficiently or they may genuinely have reasons why their data and services are so important that they are willing to pay to host them separately.  In the long term I do not subscribe to this view and to me the notion of ‘private clouds’ – outside of perhaps government and military use cases – is oxymoronic and at best a transitional situation as people learn to trust public infrastructures.  On the other hand, whilst this may be playing with semantics, I can see the case for ‘virtual private clouds’ (i.e. logically ring-fenced areas of public clouds) that give the appearance and the majority of the benefits of being private through ‘soft’ partitioning (i.e. through logical security mechanisms) whilst retaining economies of scale through the avoidance of ‘hard’ partitioning (i.e. through separate physical infrastructure).  Indeed I would state that such mechanisms for making platforms appear private (including whitelabelling capabilities) will be necessary to support the branding requirements of resellers, systems integrators and end organisations.  For the sake of completeness, however, I would position transitional ‘private clouds’ as reduced functionality versions of a Service Delivery Platform that simply package up some hardware but leave the majority of the operational and business support – along with things like backup and failover – back at the main data centres of the provider in order to create an acceptable trade-off in cost.


So in this final post I have touched on some of the wider changes that are an implication of technology commoditisation and the industrialisation of service realisation.  For completeness I’ll recap the main messages from the three posts:

  • In post one I discussed how businesses are going to be forced to become much more aware of their business capabilities – and their value – by the increasingly networked and global nature of business ecosystems.  As a result they will be driven to concentrate very hard on realising their differentiating capabilities as quickly, flexibly and cost effectively as possible; in addition they will need to deliver these capabilities with stringent metrics.  This has some serious implications for the IT industry as we will need to shift away from a technology focus (where the client has to discover the value as a hit-and-miss emergent process) to one where we can demonstrate a much more mature, reliable and outcome-based proposition. To do this we’ll need to build the platforms to realise capabilities effectively and in the broadest sense.
  • In post two I discussed how industrialisation is the creation and consistent application of known patterns, processes and infrastructures to increase repeatability and reliability. We might sacrifice some flexibility but increasing commoditisation of technology makes this far less important than cost effectiveness and reliability. When industrialising you need to understand your end to end process and then do the nasty bit – bottom up in excruciating detail.
  • Finally in post three I have discussed my belief that increasing standardisation of technology will lead to accelerating platform consolidation.  Essentially as technology becomes less differentiating and subject to economies of scale it’s likely that IT ownership and management will be less attractive. I believe, therefore, that we will see increasing and accelerating activity in the global Service Delivery Platform arena and that IT organisations and their customers need to have serious, robust and viable strategies to transition their business models.

Industrialised Service Delivery Redux II

22 Sep

In my previous post I discussed the way in which our increasingly sophisticated use of the Web is creating an unstoppable wave of change in the global business environment.  This resulting acceleration of change and expectation will require unprecedented organisational speed and adaptability whilst simultaneously driving globalisation and consumerisation of business.  I discussed my belief that companies will be forced to reform as a portfolio of systematically designed components with clear outcomes and how this kind of thinking changes the relationship between a business capability and its IT support.  In particular I discussed the need to create industrialised Service Delivery Platforms which vastly increase the speed, reliability and cost effectiveness of delivering service realisations. 

In this post I’ll move into the second part of the story, where I’ll look more specifically at how we can realise the industrialisation of service delivery through the creation of an SDP.

Industrialisation 101

There has been a great deal written about industrialisation over the last few years and most of this literature has focused on IT infrastructure (i.e. hardware) where components and techniques are more commoditised.  As an example many of my Japanese colleagues have spent decades working with leaders in the automotive industry and experienced firsthand the techniques and processes used in zero defect manufacturing and the application of lean principles. Sharing this same mindset around reliability, zero defect and technology commoditisation they created a process for delivering reliable and guaranteed outcomes through pre-integration and testing of combinations of hardware and software.  This kind of infrastructure industrialisation enables much higher success rates whilst simultaneously reducing the costs and lead times of implementation. 

In order to explore this a little further and to set some context, let’s just think for a moment about the way in which IT has traditionally served its business customers.  


We can see that generally speaking we are set a problem to solve and we then take a list of products selected by the customer – or often by one of our architects applying personal preference – and we try to integrate them together on the customer’s site, at the customer’s risk and at the customer’s expense. The problem is that we may never have used this particular combination of hardware, operating systems and middleware before – a problem that worsens exponentially as we increase the complexity of the solution, by the way – and so there are often glitches in their integration, it’s unclear how to manage them and there can’t be any guarantees about how they will perform when the whole thing is finally working. As a result projects take longer than they should – because much has to be learned from scratch every time – they cost a lot more than they should – because there are longer lead times to get things integrated, to get them working and then to get them into management – and, most damningly, they are often unreliable as there can be no guarantees that the combination will continue to work and there is learning needed to understand how to keep them up and running.

The idea of infrastructure industrialisation, however, helps us to concentrate on the technical capability required – do you want a Java application server? Well here it is, pre-integrated on known combinations of hardware and software and with manageability built in but – most importantly – tested to destruction with reference applications so that we can place some guarantees around the way this combination will perform in production.  As an example, 60% of the time taken within Fujitsu’s industrialisation process is in testing.  The whole idea of industrialisation is to transfer the risk to the provider – whether an internal IT department or an external provider – so that we are able to produce consistent results with standardised form and function, leading to quicker, more cost effective and reliable solutions for our customers.

Now such industrialisation has slowly been maturing over the last few years but – as I stated at the beginning – has largely concentrated on infrastructure templating – hardware, operating systems and middleware combined and ready to receive applications.  Recent advances in virtualisation are also accelerating the commoditisation and industrialisation of IT infrastructure by making this templating process easier and more flexible than ever before.  Such industrialisation provides us with more reliable technology but does not address the ways in which we can realise higher level business value more rapidly and reliably.  The next (and more complex) challenge, therefore, is to take these same principles and apply them to the broader area of business service realisation and delivery.  The question is how we can do this?

Industrialisation From Top to Bottom

Well the first thing to do is understand how you are going to get from your expression of intent – i.e. the capability definitions I discussed in my previous post that abstract us away from implementation concerns – through to a running set of services that realise this capability on an industrialised Service Delivery Platform. This is a critical concern since if you don’t understand your end to end process then you can’t industrialise it through templating, transformation and automation.


In this context we can look at our capability definitions and map concepts in the business architecture model down to classifications in the service model.  Capabilities map to concrete services, macro processes map to orchestrations, people tasks map to workflows, top level metrics become SLAs to be managed etc. The service model essentially bridges the gap between the expression of intent described by the target business architecture and the physical reality of assets needed to execute within the technology environment.
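To make the mapping above concrete, a minimal sketch might express the correspondence between business-architecture concepts and service-model classifications as a simple lookup – all of the names here are hypothetical and purely illustrative:

```python
# Hypothetical sketch of the business-architecture-to-service-model mapping
# discussed above; concept and classification names are illustrative only.

CONCEPT_TO_SERVICE_MODEL = {
    "capability":       "concrete service",
    "macro process":    "orchestration",
    "people task":      "workflow",
    "top-level metric": "SLA",
}

def map_concept(concept: str) -> str:
    """Translate a business-architecture concept into its service-model counterpart."""
    try:
        return CONCEPT_TO_SERVICE_MODEL[concept]
    except KeyError:
        raise ValueError(f"No service-model mapping defined for: {concept}")
```

In a real service model each mapping would of course carry far more metadata (contracts, messages, service levels), but the table captures the essential bridging role described above.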

From here we broadly need to understand how each of our service types will be realised in the physical environment – so for instance we need a physical host to receive and execute each type of service, we need to understand how SLAs are provisioned so that we can monitor them etc. etc.

Basically the concern at this stage is to understand the end to end process through which we will transform the data that we capture at each stage of the process into ever more concrete terms – all the way from logical expressions of intent through greater information about the messages, service levels and type of implementation required, through to a whole set of assets that are physically deployed and executing on the physical service platform, thus realising the intent.

The core aim of this process must be to maximise both standardisation of approach and automation at each stage to ensure repeatability and reliability of outcome – essentially our aim in this process is to give business capability owners much greater reliability and rapidity of outcome as they look to realise business value.  We essentially want to give guarantees that we can not only realise functionality rapidly but also that these realisations will execute reliably and at low cost.  In addition we must also ensure that the linkage between each level of abstraction remains in place so that information about running physical services can be used to judge the performance of the capability that they realise, maximising the levers of change available to the organisation by putting them in control of the facts and allowing them to ‘know sooner’ what is actually happening.

Having an end to end view of this process essentially creates the rough outline of the production line that needs to be created to realise value – it gives us a feel for the overall requirements.  Unfortunately, however, that’s the nice bit, the kind of bit that I like to do. Whilst we need to understand broadly how we envisage an end to end capability realisation process working, the real work is in the nasty bit – when it comes to industrialisation work has to start at the bottom.

Industrialisation from Bottom to Top

If you imagine the creation of a production line for any kind of physical good, it obviously has to be designed to optimise the creation of the end product. Every little conveyor belt or twisty robot arm has to be calibrated to nudge or weld the item in exactly the same spot to achieve repeatability of outcome. In the same way any attempt to industrialise the process of capability realisation has to start at the bottom with a consideration of the environment within which the final physical assets will execute and of how to create assets optimised for this environment as efficiently as possible. I use a simple ‘industrialisation pyramid’ to visualise this concept, since increasingly specialised and high value automation and industrialisation needs to be built on broader and more generic industrialised foundations. In reality the process is actually highly iterative as you need to continually recalibrate both up and down the hierarchy to ensure that the process is both efficient and realises the expressed intent, but for the sake of simplicity you can assume that we just build this up from the bottom.


So let’s start at the bottom with the core infrastructure technologies – what are the physical hosts that are required to support service execution? What physical assets will services need to create in order to execute on top of them? How does each host combine together to provide the necessary broad infrastructure and what quality of service guarantees can we put around each kind of host? Slightly more broadly, how will we manage each of the infrastructure assets? This stage requires a broad range of activity not just to standardise and templatise the hosts themselves but also to aggregate them into a platform and to create all of the information standards and process that deployed services will need to conform to so that we can find, provision, run and manage them successfully.

Moving up the pyramid we can now start to think in more conceptual terms about the reference architecture that we want to impose – the service classifications we want to use, the patterns and practices we want to impose on the realisation of each type, and more specifically the development practices.  Importantly we need to be clear about how these service classifications map seamlessly onto the infrastructure hosting templates and lower level management standards to ensure that our patterns and practices are optimised – it’s only in this way that we can guarantee outcomes by streamlining the realisation and asset creation process. Gradually through this definition activity we begin to build up a metamodel of the types of assets that need to be created as we move from the conceptual to the physical and the links and transformations between them. This is absolutely key as it enables us to move to the next level – which I call automating the “means of production”.
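A tiny sketch of what such a metamodel might look like – asset type names and levels are invented for illustration – is a chain of asset types, each recording the more concrete types it transforms into as realisation proceeds:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an asset metamodel: each asset type records which
# more-concrete asset types it transforms into as realisation proceeds.

@dataclass
class AssetType:
    name: str
    level: str                          # "conceptual", "logical" or "physical"
    transforms_to: list = field(default_factory=list)

capability = AssetType("CapabilityDefinition", "conceptual")
service    = AssetType("ServiceContract", "logical")
deployment = AssetType("DeployedService", "physical")
capability.transforms_to.append(service)
service.transforms_to.append(deployment)

def realisation_path(asset: AssetType) -> list:
    """Walk the chain of transformations from expression of intent to physical asset."""
    path = [asset.name]
    while asset.transforms_to:
        asset = asset.transforms_to[0]
        path.append(asset.name)
    return path
```

The point is simply that the links between levels are first-class data, which is what makes the automation in the next tier possible.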

This level becomes the production line that pushes us reliably and repeatably from capability definition through to physical realisation. The metamodel we built up in the previous tier helps us to define domain specific languages that simplify the process of generating the final output, allowing the capture of data about each asset and the background generation of code that conforms to our preferred classification structure, architectural patterns and development practices. These DSLs can then be pulled together into “factories” specialised to the realisation of each type of asset, with each DSL representing a different viewpoint for the particular capability in hand.  Individual factories can then be aggregated into a ‘capability realisation factory’ that drives the end to end process.  As I stated in my previous post the whole factory and DSL space is mildly controversial at the moment with Microsoft advocating explicit DSL and factory technologies and others continuing to work towards MDA or flexible open source alternatives.  Suffice it to say that the approaches I’m advocating are possible via either model – a subject I might actually return to with some examples of each (for an excellent consideration of this whole area consult Martin Fowler’s great coverage, btw).
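As a purely illustrative flavour of the “background generation” step – the template, field names and SLA default below are all invented – a factory might capture a handful of facts about a service through its DSL and emit a skeleton that conforms to house patterns:

```python
# Minimal sketch of one factory step: data captured via a DSL is used to
# background-generate a service skeleton conforming to agreed patterns.
# Template shape, field names and the SLA default are invented for illustration.

SERVICE_TEMPLATE = '''class {name}Service:
    """Auto-generated {classification} service stub."""
    SLA_RESPONSE_MS = {sla_ms}

    def {operation}(self, message):
        raise NotImplementedError("fill in realisation")
'''

def generate_service(name, classification, operation, sla_ms=500):
    """Emit a partially populated service realisation from captured metadata."""
    return SERVICE_TEMPLATE.format(name=name, classification=classification,
                                   operation=operation, sla_ms=sla_ms)
```

Whether this generation happens via explicit DSL tooling, MDA transformations or open source templating is exactly the controversy mentioned above; the sketch is agnostic on that point.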

The final level of this pyramid is to actually start taking the capability realisation factories and tailoring them for the creation of industry specific offerings – perhaps a whole set of ‘factories’ around banking, retail or travel capabilities. From my perspective this is the furthest out and may actually not come to pass; despite Jack Greenfield’s compelling arguments I feel that the rise of SOA and SaaS will obviate the need to generate the same application many times by allowing the composing of solutions from shared utilities.  I feel that the idea of an application or service specific factory assumes a continuation of IT oversupply through many deployments; as a result I feel that the key issue at stake in the industrialisation arena is actually that of democratising access to the means of capability production by giving people the tools to create new value rapidly and reliably.  As a result I feel that improving the reliability and repeatability of capability realisation across the board is more critical than a focus on any particular industry. (This may change in future with demand, however, and one potential area of interest is industry specific composition factories rather than industry specific application generation factories). 

Delivering Industrialised Services

So we come at last to a picture that demonstrates how the various components of our approach come together from a high level process perspective.


Across the top we have our service factory. We start on the left hand side with capability modelling, capturing the metadata that describes the capability and what it is meant to do. In this context we can use a domain specific language that allows us to model capabilities explicitly within the tooling. Our aim is then to use the metadata captured about a capability to realise it as one or more services. In this context information from the metamodel is transformed into an initial version of the service before we use a service domain language to add further detail about contracts, messages and service levels. It is important to note that at this point, however, the service is still abstract – we have not bound it to any particular realisation strategy. Once we have designed the service in the abstract we can then choose an implementation strategy – example classifications could be interaction services for UIs, workflow services for people tasks, process services for service orchestrations, domain services for services that manage and manipulate data and integration services that allow adaptation and integration with legacy or external systems.
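The binding of an abstract service to a realisation strategy could be sketched as below, using the example classifications just given – the data shape and function names are hypothetical:

```python
# Illustrative sketch of binding an abstract service to a realisation strategy,
# using the example classifications from the text. Names are hypothetical.

CLASSIFICATIONS = {
    "interaction": "user interfaces",
    "workflow":    "people tasks",
    "process":     "service orchestrations",
    "domain":      "data management and manipulation",
    "integration": "adaptation to legacy or external systems",
}

def choose_strategy(abstract_service: dict) -> str:
    """Bind a still-abstract service to one of the known realisation strategies."""
    kind = abstract_service.get("kind")
    if kind not in CLASSIFICATIONS:
        raise ValueError(f"unknown classification: {kind}")
    return f"{kind} service ({CLASSIFICATIONS[kind]})"
```

The key property the sketch preserves is that the contract, messages and service levels are captured first, and the classification is a late binding on top of them.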

Once we have chosen a realisation strategy all of the metadata captured about the service is used to generate a partially populated realisation of the chosen type – in this context we anticipate having a factory for each kind of service that will control the patterns and practices used and provide guidance in context to the developer.

Once we have designed our services we now want to be able to design a virtual deployment environment for them based wholly on industrialised infrastructure templates. In this view we can configure and soft test the resources required to run our services before generating provisioning information that can be used to create the virtual environment needed to host the services.
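A minimal sketch of that “configure and soft test” step might validate a deployment design against the capacity limits of industrialised host templates before emitting provisioning information – template names and limits here are invented:

```python
# Illustrative sketch: a virtual deployment design is "soft tested" against
# industrialised template capacity limits before provisioning information is
# generated. Template names and limits are invented for illustration.

HOST_TEMPLATES = {
    "process-host": {"max_services": 10},
    "domain-host":  {"max_services": 20},
}

def provision_plan(design):
    """design: list of (service_name, host_template) pairs -> provisioning info."""
    plan = {}
    for service, template in design:
        if template not in HOST_TEMPLATES:
            raise ValueError(f"no industrialised template for {template}")
        plan.setdefault(template, []).append(service)
    for template, services in plan.items():           # the "soft test"
        if len(services) > HOST_TEMPLATES[template]["max_services"]:
            raise ValueError(f"{template} over capacity")
    return plan
```

The output is the provisioning information handed to the platform in the next step.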

In the service platform the provisioning information can be used to create a number of hosting engines, deploy the services into them, provision the infrastructure to run them and then set up the necessary monitoring before finally publishing them into a catalogue. The Service Platform therefore consists of a number of specialised infrastructure hosts supporting runtime execution, along with runtime services that provide – for example – provisioning and eventing support.
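The pipeline just described – create hosting engines, deploy services into them, wire up monitoring, then publish into a catalogue – can be sketched as a simple sequence; everything here is an invented stand-in for real platform machinery:

```python
# Hypothetical sketch of the platform provisioning pipeline described above.
# The dict-based "engines" and "catalogue" stand in for real platform services.

def provision(plan, catalogue):
    """plan: {host_template: [service, ...]} -> catalogue of published services."""
    for host_template, services in plan.items():
        engine = {"template": host_template, "services": [], "monitored": False}
        for service in services:
            engine["services"].append(service)    # deploy into the hosting engine
        engine["monitored"] = True                # set up the necessary monitoring
        for service in services:
            catalogue[service] = host_template    # finally publish into the catalogue
    return catalogue
```

The ordering matters: nothing reaches the catalogue until it is deployed and monitored, which is what lets the catalogue act as the trusted record for the service wrap below.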

The final component of the platform is what I call a ‘service wrap’. This is an implementation of the ITSM disciplines tailored for our environment. In this context you will find the catalogue, service management, reporting and metering capabilities that are needed to manage the services at runtime (this is again a subset to make a point). In this space the service catalogue will bring together service metadata, reports about performance and usage plus subscription and onboarding processes.  Most importantly there is a strong link between the capabilities originally required and the services used to realise them, since both are linked in the catalogue to support business performance management. In this context we can see a feedback loop from the service wrap which enables capability owners to make decisions about effectiveness and rework their capabilities appropriately.
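The feedback loop at the heart of the service wrap could be sketched as follows – the catalogue links each capability to the services realising it, so runtime measurements roll up to a judgement on the capability itself; capability names, service names and the health rule are all invented:

```python
# Illustrative sketch of the service wrap feedback loop: the catalogue links a
# capability to its realising services so runtime SLA data can be rolled up.
# Capability/service names and the health rule are invented for illustration.

CATALOGUE = {
    "order-fulfilment": ["order-capture", "stock-check", "despatch"],
}

def capability_health(capability, sla_breaches):
    """Roll per-service SLA breach counts up to the owning capability."""
    services = CATALOGUE[capability]
    total = sum(sla_breaches.get(s, 0) for s in services)
    return "rework needed" if total > 0 else "healthy"
```

This is the "know sooner" loop from earlier in the post: because capability and services are linked in the catalogue, capability owners can judge effectiveness from live data rather than anecdote.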


In this second post of three I have demonstrated how we can use the increasing power of abstraction delivered by service-orientation to drive the industrialisation of capability realisation.  Despite current initiatives broadly targeting the infrastructure space I have discussed my belief that full industrialisation across the infrastructure, applications, service and business domains requires the creation and consistent application of known patterns, processes, infrastructures and skills to increase repeatability and reliability. We might sacrifice some flexibility in technology choice or systems design but increasing commoditisation of technology makes this far less important than cost effectiveness and reliability. It’s particularly important to realise that when industrialising you need to understand your end to end process and then do the nasty bit – bottom up in excruciating detail.

So in the third and final post on this subject I’m going to look a little bit at futures and how the creation of standardised and commoditised service delivery platforms will affect the industry more broadly – essentially as technology becomes about access rather than ownership so we will see the rise of global service delivery platforms that support capability realisation and execution on behalf of many organisations.

Industrialised Service Delivery Redux I

23 Jul

I’ve been terribly lax with my posting of late due to pressures of work but thought I had best try and put something up just to keep my blog (barely) alive, lol.  Following on from my previous posts on Cloud Computing and Service Delivery Platforms I thought I would go the extra step and actually talk about my views on Industrialisation in the platform and service delivery spaces.  I made this grand decision since my last post included a reference to a presentation that I did in Redmond last year and so I thought it would be useful to actually tell the story as well as just punt up the slides (which can be pretty meaningless without a description).  In addition there’s also been a huge amount of coverage of both cloud computing, platform as a service and industrialisation lately and so it seemed like revisiting the content of that particular presentation would be a good idea.  If I’m honest I also have to admit that I can largely just rip the notes out of the slideset for a quick post, but I don’t feel too guilty given that it’s a hot topic, lol.  I’ll essentially split this story across three posts: part I will cover why I believe Industrialisation is critical to supporting agility and reliability in the new business environment, part II will cover my feelings on how we can approach the industrialisation of business service delivery and part III will look at the way in which industrialisation accelerates the shift to shared Service Delivery Platforms (or PaaS or utility computing or cloud computing – take your pick).

The Industrialisation Imperative

So why do I feel that IT industrialisation is so important?  Well essentially I believe that we’re on the verge of some huge changes in the IT industry and that we’re only just seeing the very earliest signs of these through the emergence of SOA, Web 2.0 and SaaS/PaaS. I believe that organisations are going to be forced to reform and disaggregate and that technology will become increasingly commoditised. Essentially I believe that we all need to recognise these trends and learn the lessons of industrialisation from other more mature industries – if we can’t begin to deliver IT that is rapid, reliable, cost effective and – most importantly – guaranteed to work then what hope is there?  IT has consistently failed to deliver expected value time and time again through an obsession with technology for its own sake and the years of cost overruns, late delivery and unreliability are well documented; too often projects seem to ignore the lessons of history and start from ground zero. This has got to change. Service orientation is allowing us to express IT in ways that are closer to the business than ever before, reducing the conceptual gap that has allowed IT to hide from censure behind complexity. Software as a Service is starting to prove that there are models that allow us to deliver the same function to many people with lower costs born of economies of scale and we’re all going to have to finally recognise that everyone is not special, that they don’t need that customisation or tailoring for 80% of what they do and that SOA’s assistance in refocusing on business value will draw out the lunacy of many IT investments.

In this three part post I therefore want to share some of my ideas around how we can industrialise IT. Firstly, I’m going to talk about the forces acting on organisations that will drive increasing specialisation and disaggregation, and go on to discuss business capabilities and how they accelerate the commoditisation of IT.  Secondly, I’m going to discuss approaches to the industrialisation of service delivery and look at the different levels of industrialisation that need to be considered.  Finally I’ll talk about how the increasing commoditisation and standardisation of IT will accelerate the process of platform consolidation and the resulting shift towards models that recognise the essentially scale-based economics of IT platform provision.

The Componentisation of Business

Over the last 100 years we’ve seen a gradual shift towards concentration on smaller levels of business organisation due to the decreasing costs of executing transactions with 3rd parties. Continuing discontinuities around the web are sending these transaction costs into free fall, however, and we believe that this is going to trigger yet another reduction in business aggregation and cause us to focus on a smaller unit of business granularity – the capability (for an early post on this subject see here).


Essentially I believe that there are three major forces that will drive organisations to transform in this way:

1) Accelerating change;

2) Increasing commoditisation; and

3) Rapidly decreasing transaction costs due to the emergence of the web as a viable global business network.

I’ll consider each in turn.

Accelerating Change

As the rate of change increases, so adaptability becomes a key requirement for survival. Most organisations are currently not well suited for this challenge, however, as they have structures carried over from a different age based on forward planning and command and control – they essentially focus inwards rather than outwards. The lack of systematic design in most organisations means that they rarely understand clearly how value is delivered and so cannot change effectively in response to external demand shifts. In order to become adaptable, however, organisations need to systematically understand what capabilities they need to satisfy demand and how these capabilities combine to deliver value – a systematic view enables us to understand the impact of change and to reconfigure our capabilities in response to shifts in external demand.

Increasing Commoditisation

This capability-based view is also extremely important in addressing the shrinking commoditisation cycle. Consumers are now able to instantly compare our goods and services with those of other companies – and switch just as quickly. A capability-based view enables us to remove repetition and waste across organisational silos and replace them with shared capabilities to maximise our returns, both while the going is good and when price sensitivity begins to bite.

Decreasing Transaction Costs

The final shift is to use our clearer view of the capabilities we need to begin thinking about those that are truly differentiating. The market will put such pressure on us to excel that we will be driven to take advantage of falling transaction costs and the global nature of the web to replace our non-differentiating capabilities with those of specialised partners – simultaneously increasing our focus, improving our overall proposition and reducing costs.

As a result of these drivers we view business capabilities as a key concept in the way in which we need to approach the industrialisation of services.

Componentisation Through Business Capabilities

So I’ve talked a lot about capabilities – how do they enable us to react to the discontinuities that I’ve discussed? Well to address the issues of adaptability and understand which things we want to do and which we want to unbundle we really need a way of understanding what the component parts of our organisation are and what they do.

Traditionally organisations use business processes, organisational structures or IT architectures as a way of expressing organisational design – perhaps all three if they use an enterprise architecture method. The big problem with these views, however, is that they tell us very little about what the combined output actually is – what is the thing that is being done, the essential business component that is being realised? Yes I understand that there are some people doing stuff using IT but what does it all amount to? Even worse, these views of the business are all inherently unstable, since they are expressions of how things get done at a point in time; as a result they change regularly and at different rates, making trying to understand the organisation a bit like catching jelly – you might get lucky and hold it for a second but it’ll shift and slip out of your grasp. This means that leaders within the organisation lack a consistent decision making framework and see instead a constantly shifting mass of incomplete and inconsistent detail that makes it impossible to make well-reasoned strategic decisions.

Capabilities bring another level of abstraction to the table; they allow us to look at the stable, component parts of the organisation without worrying about how they work. This gives us the opportunity to concentrate systematically on what things the organisation needs to do – in terms of outputs and commitments – without concerning ourselves with the details of how these commitments will be realised. This enables enterprise leaders to concentrate on what is required whilst delegating implementation to managers or partners. Essentially they are an expression of intent and express strategy as structure. Capabilities are then realised by their owners using a combination of organisational structures, role design, business processes and technology – all of which come together to deliver to the necessary commitments.


In this particular example we see the capability from both the external and internal perspectives – from the perspective of the business designer and the consumer the capability is a discrete component that has a purpose – in this case enabling us to check credit histories – and a set of metrics – for simplicity we’ve included just service level, cost and channels. From the perspective of the capability owner, however, the capability consists of all of the different elements needed to realise the external commitments.
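The external view of a capability described above can be sketched as a simple data structure – purely a hypothetical illustration of the idea (the names, metric choices and values here are my own assumptions, not anything from the slides):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a capability seen from the outside: a purpose plus a
# small set of commitments (service level, cost, channels). The internal
# realisation (processes, roles, technology) is deliberately hidden.
@dataclass
class Capability:
    name: str
    purpose: str
    service_level: float        # e.g. target availability as a fraction
    cost_per_use: float         # illustrative unit cost
    channels: list = field(default_factory=list)

    def commitments(self):
        """External view only: what is promised, not how it is delivered."""
        return {
            "purpose": self.purpose,
            "service_level": self.service_level,
            "cost": self.cost_per_use,
            "channels": self.channels,
        }

credit_check = Capability(
    name="CreditHistoryCheck",
    purpose="Check the credit history of a prospective customer",
    service_level=0.995,
    cost_per_use=0.50,
    channels=["web", "batch"],
)
print(credit_check.commitments()["service_level"])  # 0.995
```

The point of the sketch is simply that consumers and business designers only ever see `commitments()`; how the owner meets them is invisible at this level.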

So how does a shift to capabilities affect the relationship between the organisation and its IT provision?

IT Follows Move from “How” to “What”

One of the big issues for us all is that a concentration on capabilities will begin to push technology to the bottom of the stack – essentially it becomes much more commoditised.

Capability owners will now have a much tighter scope in the form of a well-defined purpose and set of metrics; this gives them greater clarity and leaves them able to look for rapid and cost-effective realisation rather than a mishmash of hardware, software or packages that they then need to turn into something that might eventually approximate their need.  Furthermore, the codification of their services will expose them far more clearly to the harsh reality of having to deliver well-defined value to the rest of the organisation, and they will no longer be able to ‘lose’ the time and cost of messing about with IT in the general noise of a less focused organisation.

As a result capability owners will be looking for two different things:

1) Is there anyone who can provide this capability to me externally at the level of performance that I need – for instance a SaaS or BPU offering available on a usage or subscription basis; or

2) Failing that, who can help me to realise my capability as rapidly, reliably and cost-effectively as possible?
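The two options above amount to a simple sourcing decision, which could be sketched roughly as follows – a toy illustration under my own assumptions (the offer format and the buy-before-build policy are invented for the example):

```python
# Hypothetical sketch of the capability owner's sourcing decision: prefer any
# external provider that meets the required commitment level; otherwise fall
# back to realising the capability on a delivery platform.
def source_capability(required_level, external_offers):
    """Return ('external', provider) if an offer meets the bar, else ('build', None)."""
    for offer in external_offers:
        if offer["service_level"] >= required_level:
            return ("external", offer["provider"])
    return ("build", None)

offers = [
    {"provider": "saas-vendor-a", "service_level": 0.99},
    {"provider": "bpu-vendor-b", "service_level": 0.999},
]
print(source_capability(0.995, offers))  # ('external', 'bpu-vendor-b')
```

A real decision would of course weigh cost, risk and contractual terms as well as service level; the sketch just captures the buy-over-build ordering described above.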

The competition is therefore increasingly going to move away from point technologies – which become increasingly irrelevant – and move towards the delivery of outcomes using a broad range of disciplines tightly integrated into a rapid capability realisation platform.


Such realisation platforms – which I have been calling Service Delivery Platforms to denote their holistic nature – require us to bring infrastructure, application, business and service management disciplines into an integrated, reliable and scalable platform for capability realisation, reflecting the fact that service delivery is actually an holistic discipline and not a technology issue. Most critically – at least from our perspective – this platform needs to be highly industrialised; built from repeatable, reliable and guaranteed components in the infrastructure, application, business and service dimensions to guarantee successful outcomes to our customers.

So what would a Service Delivery Platform actually look like?

A Service Delivery Platform


In this picture I’ve surfaced a subset of the capabilities that I believe are required in the creation of a service delivery platform suitable for enterprise use – I’m not being secretive, I just ran out of room and so had to jettison some stuff.

If we start at the bottom we can see that we need to have highly scalable and templatised infrastructure that allows us to provide capacity on demand to ensure that we can meet the scaling needs of capability owners as they start to offer their services both inside and outside the organisation.

Above this we have a host of runtime capabilities needed to manage services running within the environment: identity management; provisioning; monitoring, to ensure that delivered services meet their service levels; metering, to support various monetisation strategies from both our perspective and the capability owner’s; audit and non-repudiation; and brokering to external services in order to keep tabs on their performance for contractual purposes.
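To make the metering idea concrete, a minimal sketch might look like this – the class and rate scheme are my own illustrative assumptions, not a description of any real platform component:

```python
from collections import defaultdict

# Hedged sketch of runtime metering: record each service invocation per
# consumer so that usage- or subscription-based monetisation can be settled
# later against agreed unit rates.
class Meter:
    def __init__(self):
        self.usage = defaultdict(int)

    def record(self, consumer, service):
        self.usage[(consumer, service)] += 1

    def bill(self, consumer, rates):
        """Sum usage * unit rate for one consumer across all metered services."""
        return sum(count * rates[svc]
                   for (c, svc), count in self.usage.items()
                   if c == consumer)

meter = Meter()
for _ in range(3):
    meter.record("marketing", "credit-check")
meter.record("marketing", "address-lookup")
print(round(meter.bill("marketing",
                       {"credit-check": 0.50, "address-lookup": 0.10}), 2))  # 1.6
```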

Moving up we have a number of templatised hosting engines – we need to break the service space down using a classification to ensure that we are able to address different kinds of services effectively. These templates are essentially virtual machines that have services deployed into them and are then delivered on the virtualised hardware; the infrastructure becomes part of the service, decoupling services both from each other and from physical space.

The top level in the centre area is what we call service enablement. In this tier we have a whole host of services that make the environment workable – examples that we were able to fit in here are the service catalogue, performance reporting and subscription management – the whole higher-level structure that brings services into the wider environment in a consistent and consumable way.
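Two of the enablement services mentioned – the service catalogue and subscription management – can be sketched together in a few lines. This is an illustrative toy, with invented method names, rather than a description of any actual product:

```python
# Illustrative sketch of service enablement: a catalogue that makes services
# discoverable and a simple subscription record tying consumers to entries.
class ServiceCatalogue:
    def __init__(self):
        self._entries = {}

    def publish(self, name, owner, description):
        self._entries[name] = {"owner": owner,
                               "description": description,
                               "subscribers": set()}

    def subscribe(self, consumer, name):
        if name not in self._entries:
            raise KeyError(f"unknown service: {name}")
        self._entries[name]["subscribers"].add(consumer)

    def lookup(self, name):
        return self._entries[name]

catalogue = ServiceCatalogue()
catalogue.publish("credit-check", owner="risk-team",
                  description="Credit history checks for new customers")
catalogue.subscribe("sales-portal", "credit-check")
print(catalogue.lookup("credit-check")["subscribers"])  # {'sales-portal'}
```

The design point is that consumption always goes through the catalogue, so the platform retains a consistent view of who depends on what.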

Moving across to the left we can see that in order to deliver services, developers will need standardised and templatised shared development support environments to support collaboration, process enablement and asset management.

Across on the right we have operational support – this is where we place our ITIL/ISO 20000 service management processes and personnel to ensure that all services are treated as assets: tracked, managed, reported upon, capacity managed and so on.

On the far right we have a set of business support capabilities that handle customer queries about services and charges, and where we also manage partners, perform billing and carry out any certification activity needed to bring new templates into the overall platform.

Finally across the top we have what I call the ‘service factory’ – a highly templatised modelling and development environment that drives people from a conceptual view of the capabilities to be realised down through a process of service design, realisation and deployment against a set of architectural and development patterns represented in DSLs.  These DSLs could be combinations of UML profiles, little languages or full DSLs implemented specifically for the service domain.
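To give a flavour of the ‘little language’ end of that DSL spectrum, here is a deliberately tiny sketch: a declarative service model from which scaffolding is generated against a fixed pattern. The model format and generator are entirely my own invention for illustration – real service factories might use UML profiles or full DSLs instead:

```python
# A toy 'little language' in the spirit of the service factory: a declarative
# service model from which a code skeleton is generated against a fixed
# architectural pattern.
service_model = {
    "name": "CreditCheck",
    "operations": {
        "check_history": ["customer_id"],
        "get_score": ["customer_id", "bureau"],
    },
}

def generate_stub(model):
    """Generate a Python class skeleton from the declarative model."""
    lines = [f"class {model['name']}Service:"]
    for op, params in model["operations"].items():
        args = ", ".join(["self"] + params)
        lines.append(f"    def {op}({args}):")
        lines.append(f"        raise NotImplementedError('{op}')")
    return "\n".join(lines)

stub = generate_stub(service_model)
print(stub.splitlines()[0])  # class CreditCheckService:
```

The value of even a toy like this is the industrialisation point itself: the designer works at the level of the model, and the pattern is applied identically every time.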


In this post I have discussed my views on the way in which businesses will be forced to componentise and specialise, and how this kind of thinking changes the relationship between a business capability and its IT support.  I’ve also briefly highlighted some of the key features that would need to be present within an holistic and industrialised Service Delivery Platform in order to increase the speed, reliability and cost-effectiveness of delivering service realisations.  In the next post I’ll move on to the second part of the story, where I’ll look more specifically at realising the industrialisation of service delivery through the creation of an SDP.

SOA vs Efficient & Agile vs Differentiation

7 Apr

One of my friends recently sent me an article from CIO connect that stated “IT Agility Trumps IT Efficiency”.  Essentially the drift of the argument was that agility was becoming a much greater driver than efficiency, citing a survey in which executives placed IT agility at the top of their wishlist.  Taking this point a little further the article suggested that many people now saw SOA as the best route to achieving greater IT agility and that this was now becoming the main motivator for adoption.

I obviously found this interesting but unsurprising – I’ve always believed that SOA was the quickest route to increasing agility and would only extend this to cover the use of service-orientation to organise the business around the delivery of services and not just the IT.

More broadly, however, I had some thoughts about sustainable differentiation and where it comes from; in this context I had some specific comments around efficiency and agility – the two options from the article.

Now the big problem with efficiency is that it often becomes an end in itself.  Anything you can make more efficient is codifiable – whether through the use of SOA or not – and can therefore be copied by others, so efficiency is unlikely to be a long-term differentiator.  Worse than that, people who spend all of their time trying to be efficient gradually turn inwards and lose sight of why they were doing things in the first place (i.e. to give their customers what they want at best value).

Interestingly, using SOA to realise greater agility is also not really a long-term differentiator: having access to technology that allows you to change your codified processes more rapidly is fine, but such technology is available to everyone, and so agility as an end in itself ceases to be a reliable source of differentiation.  Whilst not moving to service-oriented business models in order to become adaptable will undoubtedly be bad for your business, it cannot be considered a sustainable source of differentiation given that the option is there for everyone.  In this context it is more important to identify those services within your business that capture differentiating IP or talent and which would benefit from agility to keep you ahead of your competitors.

In both of these instances the real advantage comes from recognising those services within the organisation that are key assets – and leveraging them mercilessly – whilst also identifying those things that are non-differentiating – and getting rid of them.  There is no point trying to invest in the efficiency or adaptability of non-core services since in that context you are just competing with other people in areas that provide no long term differentiation.

The only real and true differentiation is increasingly going to be in the 20% of services that encapsulate IP or amplify talent – true agility will therefore require us to codify non-differentiating capabilities and – where possible – get rid of them to specialised providers whilst maximising the leverage of the tacit knowledge we have in the form of our most talented people or the IP that they generate.  Ironically a search for control and efficiency in the new connected world will often disenfranchise the very people that we should give the greatest freedom to, stifling their judgement, talent and ability to create valuable relationships and in the process destroying a potent source of competitive advantage.

So is a search for efficiency and agility necessary?  The answer of course is yes, but they are now just the way we win the right to stay in the game and not a source of competitive advantage in themselves.  A search for efficiency is often best realised through the leverage of partner services in place of endless tinkering with non-differentiating services of our own, whilst agility comes far more from our ability to leverage a web of partners who really care about service than from our own mediocre and uncaring supporting functions.  At the end of the day it’s how we deliver our service that matters.  For this you need your best people in the front lines, supported by technology that enables them to amplify their talent or maximise the leverage of IP – not driven by technology through highly restrictive processes in the pursuit of efficiency, or wasting their valuable attention implementing and supporting agility in the wrong places.

BPM vs SOA – Why so hard?

7 Jan

Just wended my way through the blogosphere from Joe McKendrick’s discussion of EDA and SOA to a transcript of a conversation on EBizQ about the relationship between BPM and SOA.  I’m still struggling to come to terms with the issues that people have in this space given that I believe that service-orientation is all about structure – i.e. what gets done – whilst processes are all about implementation – i.e. how things get done – but a few things stood out in the conversation that I really disagreed with.

Firstly there was a general consensus that SOA was a technical ‘thing’ whilst BPM was a business ‘thing’.  A representative comment here would be this statement from JT Taylor:

“And the reason is, is because SOA, first off, as I’ve think we’ve agreed is a very technical approach to delivering a collection of services.”

Any regular reader of this blog will know that I wholly reject this kind of argument – at the end of the day SOA – or service orientation as I prefer to call it – is a conceptual model that allows us to attack complexity and increase adaptability through componentisation.  I strongly believe that this conceptual model has equal applicability in a business context – lowering transaction costs will open the door to greater specialisation, but such opportunities will rely on an ability to componentise the business in order to understand what services make up a total offering to customers.  In this context service-orientation – as a discipline – has much to offer.

The other major thrust that I took issue with was the assumption that you start with business processes and then try and find services from these.  Another representative set of comments from JT Taylor (who was supported by the other participants, I must stress – his comments were just the most quotable in this context):

“I guess my opinion is that the business processes should govern the organization behavior and systems should support that. So I would actually start from the process angle first and then take the SOA approach as an implementation approach.”

I’ve discussed before why I think that this is a poor model but basically I believe that we need to first separate what gets done before we start to get into the detail of how to do it.  As a result I believe that the appropriate place to start is with higher level abstractions – call them business capabilities, business services or whatever – which capture metadata and information about the structural properties of an organisation and their required outcomes – before you start looking at how you implement them.

More broadly, however, I believe that the relationship between SOA and BPM is essentially fractal based on the ‘what’ and ‘how’ dimensions I’ve discussed.  Essentially the questions are:

  • What do organisations offer their consumers? um services.
  • But what implements those services? um. business processes.
  • And what do business processes consume? um services.
  • And.. um where was I again?

Again to stress the point: services describe what the organisation does and to what level (in terms of metrics and commitments) whilst business processes describe how each individual service is implemented and the commitments met (and the model is fractal).  That’s why I believe that the notion of starting with BPM (which is a technology-oriented view of business processes, btw) is upside down – what’s the service you’re implementing again….?
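The fractal what/how relationship argued above can be sketched in a few lines of code – a hypothetical model of my own, just to show the recursion, not anything from the EBizQ discussion:

```python
# Sketch of the fractal what/how relationship: a Service says *what* is
# offered; a Process says *how* it is implemented; and a process step may
# itself consume further Services - so the structure recurses.
class Service:
    def __init__(self, name, process=None):
        self.name = name          # the 'what'
        self.process = process    # the 'how' (None for a leaf service)

    def depth(self):
        """Count the what/how layers beneath this service."""
        if self.process is None:
            return 1
        return 1 + max(s.depth() for s in self.process.consumed)

class Process:
    def __init__(self, name, consumed):
        self.name = name
        self.consumed = consumed  # services this process calls in turn

scoring = Service("CreditScoring")
checking = Service("CreditCheck",
                   Process("check-history-process", [scoring]))
offering = Service("LoanOffer",
                   Process("offer-process", [checking]))
print(offering.depth())  # 3
```

Starting from the process (BPM-first) means starting in the middle of this structure, which is exactly the inversion complained about above.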

Happy New Year, btw.

The Departing CIO

30 Oct
Everyone is different… aren’t they?

Just read a post by Nicholas Carr about the diminishing role of CIOs.  I’m not sure whether I’m ready to believe that CIOs are unnecessary yet, but I certainly believe that they need to get their asses out of the dead end of operating and managing IT and much more into the role of trusted adviser to the board around innovation and access to the right services at the right cost.  Slightly tangentially, though, one comment really caught my attention.  Buried amongst a load of stuff about how CIOs were effectively being marginalised by increasing technology maturity and commoditisation was the following comment answering a point from the post (the original post content being commented on is quoted first):

“…Hopper, who predicted that IT would come to be thought of more like electricity or the telephone network than as a decisive source of organizational advantage.”

“From a software perspective, I have my doubts that it will ever happen. The goal of IT is business process automation. But businesses processes are inherently different across businesses. For example, no two business does finance the same way. No two real estate company processes loans or property inspections the same way. Therefore, they each need their own specialized applications.”

Umm… why?

Why do no two businesses do finance in the same way?  I accept that this may be the current state, but I see no reason why it should be accepted as the right way.  What competitive differentiation does doing 80% of non-differentiating things in different ways provide to the organisations concerned?  Given that I tend to believe strongly in capability-driven organisations, I can see a case for co-sourcing these functions with specialised providers – in that context no two finance providers will necessarily have the same processes, but all other kinds of companies will increasingly just integrate these standardised offerings from third parties and will therefore absolutely share the same services – why wouldn’t they?  In this context the IT becomes just a component part of the shared capability being offered, and thus the need for consuming organisations to each have their own applications in non-differentiating areas is gradually reduced over time.  This will increasingly occur across all the capabilities that organisations have – they will outsource, co-source or share non-differentiating capabilities in order to concentrate on those that are truly differentiating, leveraging these outstanding capabilities into wider value networks and multiplying their value by combining them with others’ best-in-class capabilities.  As I’ve discussed before, the capabilities around which an individual company wishes to differentiate itself may still need IT support to realise effectively, but this will increasingly not require the ownership of that IT.

The bald truth

I’m getting fairly tired of having these conversations with people so let’s put it down in bullet form:

  • You are not special – 80% of what you do is not special and you’re just wasting money and attention competing with other people who are equally dumb.  Understand your capabilities, understand where you are special and unbundle the rest, for pity’s sake – find specialised providers, talk to IT service companies about offloading the stuff, or work with your competitors to pool resources.  Under no circumstances be tempted to customise packages or implement applications or processes in non-differentiating capabilities – either deploy or (better still) gain access to standard offerings;
  • IT is not differentiating – Owning and operating IT gives you no competitive advantage; in fact it drains your resources, locks you into processes that are simultaneously uncompetitive and non-differentiating, is ridiculously expensive to scale or right-size and gives you your own islands of function that stagnate outside the rapid learning possible for IT service providers.  What you do with IT – in terms of process enablement for differentiating capabilities – may be important depending on your industry but you don’t need to own and operate the IT platforms required to do it any more than you need to own electricity infrastructure to work the photocopier – it’s increasingly going to become a utility and should be treated as such.

Different types of value

If we look at the types of value chains that organisations typically bundle together – and which we fully expect to be broken up over the next few years to recognise the differences in economics and culture needed to be successful in their provision – these points can perhaps be made a little clearer:

  • Physical value chain:  In this context certain capabilities require the ownership and leverage of physical assets. The economics of these capabilities respond strongly to economies of scale and therefore tend towards shared usage rather than differentiation for each organisation.  IT platforms are increasingly tending towards economies of scale and are therefore likely to be increasingly consolidated over the coming years – essentially building service platforms that enable services to be created, deployed and delivered scalably are capital intensive activities that drive them towards being shared.  In this context anyone who aims to own and operate their own bespoke IT platforms and operations in the longer term is probably not concentrating effectively. 
  • Transactional value chain:  In this value chain we have information resources and business processes that implement capabilities.  These capabilities again tend towards economies of scale, as 80% of capabilities are non-differentiating to individual organisations – as a result we would expect to see the emergence of specialised providers who leverage economies of scale by offering their capabilities for integration into the wider value web of their customers and partners.  In that context, again, trying to build and operate applications that you customise to allow you to execute non-differentiating capabilities – such as the finance example given originally – in ways which are different to others is a waste of time, money and focus.  For your own specialisation, however, you would absolutely want to create bespoke applications and services to support you in your endeavours – but, given the previous discussion on physical value chains, you would want to ensure that you don’t take on the ownership and management of the platforms needed to do this; rather you would reduce risk and capital exposure by leveraging partners that can offer such service platforms to you on demand.
  • Knowledge value chain:  Knowledge value chains absolutely represent differentiation for your organisation as by definition you are dealing in capabilities that have knowledge and IPR not available to others.  Perversely, however, this does not mean that you necessarily need any particularly differentiating IT as software that enables you to capture, develop and leverage knowledge will be available in the transactional value chain.  Again in this context IT is not a differentiator and you need to focus on using standard applications and services to support you in your IP creation and leverage.

So just stop

In breaking down the typical organisation we can see that IT provides very little differentiation in the majority of the work that gets done.  Platforms will increasingly be consolidated and shared, removing a large part of the IT-related work that most organisations currently spend time and energy getting stressed about.  There will be some applications and services that represent your differentiation, but in this context you will increasingly be using shared platforms to compose and deliver these services in ways that are different from the IT delivery of the past.  The challenge for CIOs is to remove themselves from daily battles trying to keep their heads above water in the physical value chain, and to concentrate instead on helping their business colleagues to unbundle non-differentiating capabilities from the transactional value chain – improving focus and performance – and on finding the best platform partner to rapidly compose differentiating capabilities that represent valuable IPR for their company.

Enterprise Architecture is yesterday’s news

19 Oct

Amazed to find this post that suggests that Enterprise Architecture is the next big thing.  I thought that EA was a late 90s, early 00s silver bullet and that we were now all moving on to a concentration on fractal business architectures supported by federated IT.  Now I realise that judicious EA can encompass this view – indeed that it should – but realistically EA as a term was hijacked by geeks who used it to enforce technical standards and who gibbered at their business colleagues in a threatening way about governance and compliance whilst creating impenetrable mountains of useless documentation that was out of date as soon as it was created (and made no sense to business people or developers). 

Worse than that most EA practitioners genuinely believe that they can document the whole organisation top down in a complex chain of dependencies – a horrible fallacy in a world of federation – and then micro-govern people by putting cumbersome processes in the way of getting anything done (and given the difficulty (nay – impossibility) of doing this they end up obsessing on forcing shared and inappropriate IT on people instead of supporting business improvement). 

Taking it all a bit further it’s also unfortunate that most EA efforts top out at business processes (rather than capabilities) and so are a pure expression of how things get done rather than a sensible framework for enabling decisions about what should be done (I enjoyed Steve Jones’s post about this very issue).

Before I get flamed to the extent that I am consumed by a terrible conflagration I have to say a couple of other things; are the aims of EA crazy (i.e. to understand and govern the organisation)?  Umm, no.  Should we still be seeking to do this?  Umm, yes.  Does the kind of top down, all encompassing approach taken by most EA efforts actually work?  Umm, no.  By saying that EA is no longer useful I’m basically saying that the concept is absolutely right, it is more necessary than ever in a world of services – indeed services could be the missing abstraction that finally fill the gap between intent and execution – but that initial efforts in this direction didn’t take sufficient notice (indeed why would they have) of unbundling and federation.  So I believe that EA needs to evolve to help govern the portfolio of capabilities required by an organisation to function and to include some light touch policies that specify how they should work together.  Below this level we start again, essentially with a smaller EA for each capability, recognising federation and unbundling will remove our ability to control every aspect of the way that an organisation works centrally and from top to bottom – EA essentially needs to become explicitly fractal.

Either way I’m not sure that EA is the next big thing, but then again maybe I’ve just dreamt all of this and the future is bright etc.