
Is Social Media Rubbish?

8 Jul

I’ve read a few interesting posts recently relating to Social Media and ‘Enterprise 2.0’.  First up was Peter Evans-Greenwood talking about the myth of social organisations given their incompatibility with current structures and the lack of business cases for many efforts.  From there I followed links out to Martin Linssen and Dennis Howlett – both of whom commented on the current state of Enterprise 2.0 and social business, in particular their lack of clarity (i.e. are they primarily about tools, people or marketing efforts), the often ironic lack of focus on people in favour of technology and the paucity of compelling business cases.  They also highlighted the continued migration of traditional vendors from one hot topic to another (e.g. from ECM to Enterprise 2.0 to Social Business) in order to support updated positioning for products, creating confusion and distraction by suggesting that success comes from owning specific tools rather than from particular ways of working.

Most damningly of all, I found a link (courtesy of @adamson) to some strong commentary from David Chalke of Quantum Market Research suggesting that:

Social media: ‘Oversold, misused and in decline’

All of these discussions made me think a bit about my own feelings about these topics at the moment.

The first thing to state is that it seems clear to me that in the broadest sense businesses will increasingly exist in extended value webs of customers and partners.  From that perspective ‘business sociability’ – i.e. the ability to take up a specialised position within a complex value web of complementary partners and to collaborate across organisational and geographical boundaries – will be critical.  The strength of an organisation’s network will increasingly define the strength of their capabilities.  Social tools that support people in building useful networks and in collaborating across boundaries – like social networks, micro-blogs, blogs, wikis, forums etc – will be coupled with new architectures and approaches – like SOA, open APIs and cloud computing – as the necessary technical foundations for “opening up” a business and allowing it to participate in wider value creation networks.  As I’ve discussed before, however, tooling will only exist to support talented people undertaking creative processes within the context of broader networks of codified and automated processes.

So whilst these tools have the potential to support increasing participation in extended value webs, to develop knowledge and to support the work of our most talented people, it’s clear that throwing random combinations of tools at the majority of existing business models without significant analysis of this broader picture is not only pointless but also extremely distracting and potentially very damaging (as failed, ill-thought-through initiatives give entrenched interests an opportunity to ignore the broader change for longer).

Most of the organisations I have worked with are failing to see the bigger picture outlined above, however.  For them ‘social tools’ are either all about the way in which they make themselves ‘cooler’ or ‘more relevant’ by ‘engaging’ in social media platforms for marketing or customer support (looking externally) or something vaguely threatening and of marginal interest that undermines organisational structures and leads to staff wasting time outside the restrictions of their job role (looking internally).  To date they seem to be less interested in how these tools relate to a wider transformation to more ‘social’ (i.e.  specialised and interconnected) business models.  As with the SOA inertia I discussed in a previous blog post there is no heartfelt internal urgency for the business model reconfiguration required to really take social thinking to the heart of the organisation.  Like SOA, social tools drive componentisation and specialisation along with networked collaboration and hence the changes required for one are pretty similar to the changes required for the other.  As with SOA it may take the emergence of superior external service providers built from the ground up to be open, social and designed for composition to really start to trigger internal change.

In lieu of reflecting on the deeper and more meaningful trends towards ‘business model sociability’ that are eroding the effectiveness of their existing organisation, then, many are currently trying to bolt ‘sociability’ onto the edge of their current model as simply another channel for PR activity.  Whilst this often goes wrong it can also add terrific value if done honestly or with a clear business purpose.  Mostly it is done with little or no business case – it is after all an imperative to be more social, isn’t it? – and for each accidental success that occurs because a company’s unarticulated business model happens to be right for such channels there are also many failures (because it isn’t).

The reality is that the value of social tools will depend on the primary business model you follow (and increasingly the business model of each individual business capability in your value web, both internal and external – something I discussed in more detail here).

I think my current feeling is therefore that we have a set of circumstances that go kind of like this:

  1. There is an emerging business disruption that will drive organisational specialisation around a set of ‘business model types’ but which isn’t yet broadly understood or seen by the majority of people who are busy doing real work;
  2. We have a broad set of useful tools that can be used to create enormous value by fostering collaboration amongst groups of people across departmental, organisational and geographic boundaries; and
  3. There are a small number of organisations who – often through serendipity – have happened to make a success of using a subset of these tools with particular consumer groups due to the accidental fit of their primary business model with the project and tools selected.

As a result although most people’s reptilian brain instinctively feels that ‘something’ big is happening, instead of:

  • focusing on understanding their future business model (1) before
  • selecting useful tools to amplify this business model (2) and then
  • using them to engage with appropriate groups in a culturally appropriate way (3)

People are actually:

  • trying to blindly replicate others’ serendipitous success (3)
  • with whatever tools seem ‘coolest’ or most in use (2) and
  • with no hope of fundamentally addressing the disruptions to their business model (1)

Effectively most people are therefore coming at the problem from entirely the wrong direction and wasting time, money and – potentially – the good opinion of their customers.

Put more clearly: rather than looking at their business as a collection of different business models and trying to work out how social tools can help in each different context, companies are all trying to use a single approach based largely on herd behaviour, even when their business model often has nothing directly to do with the target audience.  Until we separate out the kinds of capabilities that require the application of creative or networking talent, understand the business models that underpin them and then analyse the resulting ‘types’ of work (and hence outcomes) to be enabled by tooling, we will never gain significant value or leverage from the whole Enterprise 2.0 / social business / whatever field.
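One way to picture this argument is tool selection derived from the type of work a capability performs rather than from herd behaviour. The categories and mappings below are entirely my own toy assumptions, not taxonomy from the post:

```python
# Toy decision table: match social tools to a capability's type of work,
# not to whatever is currently fashionable. Categories are illustrative.
WORK_TYPE_TOOLS = {
    "creative": ["wiki", "blog"],                    # codifying and sharing new knowledge
    "networking": ["social network", "micro-blog"],  # building boundary-spanning ties
    "transactional": [],                             # codified processes: automate, don't socialise
}

def suggest_tools(capability: dict) -> list:
    """Pick candidate tools from the capability's work type, if any fit."""
    return WORK_TYPE_TOOLS.get(capability["work_type"], [])

print(suggest_tools({"name": "R&D", "work_type": "creative"}))          # ['wiki', 'blog']
print(suggest_tools({"name": "payroll", "work_type": "transactional"})) # []
```

The point of the sketch is simply that a transactional capability gets no social tooling at all, however ‘cool’ the tools are, because the analysis starts from the work rather than the technology.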

Will CIOs Fail on Cloud?

4 Jul

I’ve been reading a lot of content lately that covers three topics:

  1. What’s the future of enterprise architecture;
  2. How we govern businesses who are increasingly bypassing IT and going directly to the cloud; and
  3. Public vs Private clouds and the IT department’s role in creating FUD.

I think that these issues are deeply related and sadly speak of a lack of leadership and business-centricity in many IT departments.  All three areas give CIOs the opportunity to embrace their businesses and move to the heart of strategic thinking but in each case they have not grasped (and are not grasping) these opportunities.  All share two important dimensions – answering fundamental questions about the way in which a business should be shaped and – as an element of that – how IT is supplied.  Of these two dimensions, many CIOs seem unable to recognise which is truly important.  Whilst I want to write a longer piece on the implications of these changes for the future of IT, in this post I just wanted to look at the question of whether CIOs will succeed or fail in finding a future within our organisations.

Enterprise Architecture as IT Architecture

Enterprise Architecture was supposed to give us a view of how the business worked.  Executed correctly it was meant to give us the context required to understand the strategic options available to our business and then understand the potential impact of each across various dimensions.  Most EA efforts originated – not unreasonably – within the IT department, however, because as a horizontal function used to thinking systematically they understood the potential first.  Unfortunately many IT departments have failed to address the business context purpose of EA and have become wholly inwardly focused.  Such groups use technology standards and governance as a proxy for really understanding and shaping the business and its supporting systems, leading them to simplified views of their purpose based on technology ‘standardisation’.  Many of the technology standards they adopt are inappropriate for large areas of the business, where business capabilities have business models different to those that drove the adoption of the ‘standard’ solution.  The limited scope of their ambition and understanding leads them to push such technologies forward in any case as the ‘strategic solution’ to every problem that looks similar.  In drifting into this role most EA efforts have therefore become a problem rather than an enabler; they have become detached from business realities, focused on internal IT issues and taken on the operation of governance processes that mostly result in delays, cost overruns and inappropriate solutions.  Most tragically, in doing this they have spurned a tremendous opportunity to investigate and codify the structure and purpose of the enterprise and thereby find a place at the heart of the strategic processes of the business.

As a result of missing this opportunity many CIOs have become confirmed in the role of an operational supplier.  Worse still they are increasingly being seen as a costly and obstructive operational supplier and are therefore constantly under pressure to increase efficiency and reduce cost.  This forces them into a reactive, inward-looking position – always cutting costs, standardising or begging for investment resources – while their services are still considered decoupled from business value as well as slow, expensive and cumbersome.  Whilst in many ways being in the best position to see opportunities – because of the horizontal role of both themselves and their EA team – they signally fail to take advantage of it because they’re trapped in the wrong conversations by their operational responsibilities.

Enter the Cloud to Cheers from CIOs… or not.

Despite the failure of IT departments to use the opportunities of EA to help the business gain strategic insights, CIOs have now been offered a golden opportunity to once again take the lead in their organisations.  Cloud computing offers CIOs the opportunity to remove themselves from the operational treadmill and place themselves firmly in the centre of strategic conversations about the future shape of their business.

Cloud is not a technology trend but rather a disruptive change in the way we communicate and consume services.  It will completely reshape organisations and the industries they operate in.  That may sound like hyperbole to some but I genuinely believe it.  History has shown that falling transaction costs make it more cost effective to consume services from partners than to operate them yourself and the pressures of the market will also ensure that these services are much better than those you could build yourself with limited scope.  Furthermore cloud services represent concrete business outcomes that can be aggregated into overall value-webs, moving conversations out of the realm of the bespoke, abstract and technical and into the realm of direct, consumable value outcomes.  Over the coming years every aspect of a business’s operations will be examined, categorised and in many cases outsourced to specialised third parties.  Cloud is the driving force behind these changes by making it inexpensive to connect to other people whilst simultaneously reducing their cost of entry to the market and allowing them to scale at low cost as their business grows.  I repeat – cloud may be viewed as an IT phenomenon currently but the fallout will disrupt every single industry as new specialised companies come rapidly to market with cost and agility profiles that cannot be matched by incumbents.

Many businesses don’t get this yet, however, and while they see the attractiveness of services like Salesforce (indeed are often purchasing them in spite of the CIO) they haven’t yet understood the profound consequences for their organisations in the years ahead.  For CIOs you would think that this is a huge opportunity to take the lead and help their businesses firstly understand and then transform to meet the demands of the new order; essentially someone needs to codify the concrete outcomes required by the organisation (business architecture), source and integrate them together (development and integration) and manage the integrity of the overall value web (business service management).  There is nobody better placed to grasp this opportunity than the CIO, who has an opportunity to lead their companies through a fundamental shift in the purpose and structure of not just IT but also of businesses and their operations.

But. But. But.

The issue is that many CIOs aren’t thinking like this at all.  Many CIOs seem to have come to believe that their job really is operational and therefore see cloud as a threat.  Many CIOs listen to their technologists who fear a loss of control over the way IT is designed and run even though they can’t explicitly relate it to business value.

Enter “private cloud”.  So now the CIO can have their cake and eat it.  They can tell the business that yes cloud is important and – darn it – they’re on top of the whole thing.  But it’s a big commitment, requires the recruitment of the absolute best technologists in the global industry, will take years to roll out (if it ever gets finished with limited budgets and everything else going on) and will never deliver the instant-on, pay-as-you-go model given the retention of a bunch of expensive capital assets and people that can’t be shared.  More importantly it’ll only operate at the – effectively worthless – infrastructure level and won’t provide the business with the opportunity to specialise by consuming world class, multi-tenant services from partners.

It’s Ultimately About Direct Value to the Business, Not Technology

So the business gets fed up with the expense, delay and excuses; they see explicit business value, lower costs, better capability and greater agility on offer externally – and they’re losing ground rapidly against their competitors – and so they go around the CIO and purchase their services directly from cloud suppliers.  Again the CIO has lost the opportunity to lead and has merely been cornered by business and economic reality.  The plain facts are that you can no longer work in isolation from demonstrable business value or put your finger in the cloud dyke to protect your own little private cloud bubble – economically it just won’t work out.  You have to face the fact that you’re not good enough, focused enough or well funded enough to build and operate a large scale cloud platform and that your real value is as a trusted advisor and integrator of services aligned to the business of your organisation.  Worst of all, the CIOs who are currently focused on technology in place of business architecture and sourcing will bring to fruition their own worst fears about losing control and influence – as the business increasingly flows around them they will end up as the dumb guy who missed the – by now obvious – signs about the way in which the cloud was going to affect the business and who showed no leadership.  Most importantly for this discussion the CIO will also be “the guy who runs all that expensive stuff nobody really wants any more with those weird people who talk about the way they used to control things in the old days.  Let’s just keep him out of the way while an external company comes in and helps us to transform our business.” (ironically perhaps the very same consultants and systems integrators who led him down the private cloud route in the first place – and who have been forced to accept their place as advisors and integrators of specialised services from the global market rather than providers and operators of uncompetitive, per-customer technology).

It’s a Combination of Enterprise Architecture and The Cloud That Will Save Those Who Deserve it

Looking at this track record it’s unfortunate that the CIO’s route to salvation requires him to fully embrace enterprise architecture and the cloud.

Essentially every enterprise consists of a number of business capabilities with divergent business models and the first role of the CIO should be to help to visualise these discrete capabilities in order to support higher level thinking about the purpose of the organisation and the best way of delivering each outcome.  Many peripheral business capabilities exist within an organisation merely to support the execution of more business critical core capabilities – such ‘supporting’ capabilities can be outsourced to cloud providers to enable greater specialisation.  It may be that much of the low hanging fruit during the earliest phases of this transformation will be centred around IT applications and services but over time the CIO can facilitate a change in thinking to open the business to the idea of sourcing any kind of business service from external providers in order to integrate successful services and increase the ‘fitness’ of the overall organisation.  Establishing the right to do this first requires the CIO to take a leadership position in early cloud implementations by helping the business deliver in an integrated and compliant way rather than fighting them, losing and further confirming their position outside the strategic tent.  Such an approach can lead to increasing momentum:

  1. On the back of early wins and increased standing CIOs can use the story of the coming disruption to help their businesses understand the exciting wider opportunities and consolidate their strategic leadership role.  Positioning the IT department as the ‘integrator’ and ‘manager’ of a business service portfolio spanning internal and external services provides a sustainable context for the future role of the CIO;
  2. As part of this role the CIO must take on the documentation of the business architecture and use this as a key strategic asset to provide decision support capabilities to the organisation around business models, specialisation and partnerships;
  3. At the same time the CIO should create a process of ‘certification’ based on appropriate criteria to provide an open marketplace of services for capability owners to use.  Informed curation (based on the industry of the company) along with feedback and requests from capability owners for additional applications and services will be a key part of this process and the result should be a portfolio that is open (i.e. not ‘standardised’ and ‘restricted’) but at the same time ‘approved’ to support governance responsibilities;
  4. In going through this transition CIOs have the opportunity to become ever more embedded in the strategic processes of the business – working on business architecture, rapid capability realisation and losing low level operational concerns as they move to cloud providers; and
  5. Most importantly, all of this can be achieved without spending huge amounts of money on non-differentiating technology or becoming more mired in the operational tar pit.  Indeed, simply yielding to the changing role of the IT department leads to a virtuous circle of increasing relevance to business value, lower costs, better service and burgeoning innovation.
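The ‘certification’ process in step 3 can be made concrete with a small sketch. Everything here – the class names, the service names, the boolean certification flag – is my own illustrative assumption rather than anything from the post; the point is simply a portfolio that is open to registration but curated for approval:

```python
from dataclasses import dataclass

@dataclass
class BusinessService:
    name: str
    provider: str            # internal team or external cloud vendor
    capability: str          # the business capability it supports
    certified: bool = False  # has it passed the certification criteria?

class ServicePortfolio:
    """An open but curated marketplace of internal and external services."""

    def __init__(self):
        self._services = []

    def register(self, service):
        # Anyone can propose a service; registration does not imply approval.
        self._services.append(service)

    def certify(self, name):
        # The CIO's team applies industry-appropriate certification criteria.
        for s in self._services:
            if s.name == name:
                s.certified = True

    def approved(self, capability=None):
        """Services capability owners may adopt without further governance."""
        return [s for s in self._services
                if s.certified and (capability is None or s.capability == capability)]

portfolio = ServicePortfolio()
portfolio.register(BusinessService("crm-saas", "external vendor", "customer relationships"))
portfolio.register(BusinessService("payroll-saas", "external vendor", "hr"))
portfolio.certify("crm-saas")
print([s.name for s in portfolio.approved()])  # ['crm-saas']
```

The design choice worth noting is that the catalogue stays open (anything can be registered and requested) while governance is expressed as a filter over it, rather than as a restricted ‘standard’ list.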

The reality is that specialised cloud services are increasingly going to be more competitive than those available within an organisation.  Even more critically, accessing such services allows us to specialise within our own organisations, providing us with the focus required to excel in our chosen areas of expertise.  To unlock the benefits of these synergies, however, enterprises need someone who can help them view their organisation more systematically as a portfolio of business capabilities and facilitate access to external services that can be used to enhance or replace them.  My feeling is that this will either be the CIO – or that the CIO will cease to exist.

Enterprise Architecture Top to Bottom

2 Dec

JP Morgenthal published an interesting post on his blog recently relating to the futility of trying to map out every facet of an enterprise architecture.  I wholeheartedly agree with his sentiments and have spoken on this issue in the past – albeit in a slightly different context (and also in discussing evolution and IT, actually).  I also feel strongly that EA practitioners should be focused far more on enabling a deeper understanding of the purpose and capabilities of the enterprises they work in – to facilitate greater clarity of reasoning about strategic options and appropriate action – rather than taking on an often obstructive and disconnected IT strategy and governance role (something that was covered nicely by Neil Ward-Dutton last week).  For all of these reasons I totally agreed with JP’s assertion that we should only pursue absolute detail in those areas that we are currently focused on.  This is certainly the route we took in my previous role in putting together an integration architecture for two financial services companies.

The one area where I think we can add to JP’s thoughtful consideration of the issues is that of developing useful abstractions of the business architecture as pivotal reasoning assets.  In pursuing the work I allude to we developed a business capability map of the enterprise that allowed us to divide it up into a portfolio of ‘business components’.  These capabilities allowed us to reason at a higher level and make an initial loose allocation of the underlying implementation assets and people to each (and given that both I and EA were new to the organisation when we started I even had to ‘crowdsource’ a view of the assets and their allocation to capabilities from across the organisation to kick start the process).  In this sense there was no need at the outset to understand the details of how everything linked together (either across the organisation or within individual capabilities) but rather just the purpose and broad outcomes of each capability.  This is an important consideration as it allowed us to focus clearly on understanding which capabilities needed to be addressed to respond to particular issues and also to reason about and action these changes at a more abstract level (i.e. without becoming distracted by – and lost in – the details of the required implementation).  In this sense we could concentrate not on understanding the detail of every ‘horizontal’ area as a discrete thing – so everything about every process, infrastructure, data or reward systems along with the connections across them all – but rather on building a single critical horizontal asset (i.e. the business capability view) that allowed us to reason about outcomes at an enterprise level whilst only loosely aligning implementation information to these capabilities until such a time as we wanted to make some changes.  
At that stage specific programmes could work with the EA team to look much more specifically at actual relationships along with the implementation resources, roles and assets required to deliver the outcomes.  Furthermore the loosely bounded nature of the capabilities meant that we could gradually increase the degree of federation from a design and implementation perspective without losing overall context.
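The ‘loose allocation’ idea above can be sketched in a few lines. All the capability names, outcomes and asset names here are hypothetical stand-ins of my own; the point is that capabilities are described only by purpose and outcomes, with implementation assets attached approximately (e.g. via crowdsourced submissions) rather than exhaustively mapped:

```python
# Capabilities described by purpose and outcomes only; assets are attached
# loosely, not modelled in detail. All names below are illustrative.
capabilities = {
    "claims handling": {
        "purpose": "settle customer claims quickly and fairly",
        "outcomes": ["claim settled", "fraud flagged"],
        "assets": [],  # filled in loosely, e.g. by crowdsourcing
    },
    "policy administration": {
        "purpose": "maintain accurate policy records",
        "outcomes": ["policy issued", "policy amended"],
        "assets": [],
    },
}

# Crowdsourced submissions: staff propose which capability an asset serves.
submissions = [
    ("claims handling", "ClaimsWorkflowApp"),
    ("claims handling", "FraudScoringService"),
    ("policy administration", "PolicyDB"),
]
for capability, asset in submissions:
    capabilities[capability]["assets"].append(asset)

# Reasoning happens at the capability level, without implementation detail:
# which capabilities must change to improve a given outcome?
impacted = [name for name, c in capabilities.items()
            if "claim settled" in c["outcomes"]]
print(impacted)  # ['claims handling']
```

Only when a change is actually actioned would a programme team drill into the detailed relationships behind the loosely attached assets, exactly as described above.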

Overall this approach meant that we did not try to maintain a constant and consistent view of the entire enterprise within and across the traditional horizontal views – along with the way in which they all linked together from top to bottom – but only a loose view of the overall portfolio of each with specific contextualisation provided by an organising asset (i.e. the capability model).  In this context we needed to confirm the detailed as-is and to-be state of each capability whenever we wanted to action changes to its outcomes – as we expended little effort to create and maintain detailed central views – but this could be largely undertaken by the staff embedded within the capability with support and loose oversight from the central EA team.  In reality we kept an approximate portfolio view of the assets in the organisation (so for example processes, number of people, roles, applications, infrastructures and data) as horizontal assets along with the fact that there was some kind of relationship but these were only sufficient to allow reasoning about individual capabilities, broad systemic issues or the scale of impact of potential changes and were not particularly detailed (I even insisted on keeping them in spreadsheets and SharePoint – eek – to limit sophistication rather than get sucked into a heavy EA tool with its voracious appetite for models, links and dependencies).

I guess the point I wanted to make is that my own epiphany a few years ago related to the fact that most people don’t need to know how most things work most of the time (if ever) and that trying to enable them to do so is a waste of time and a source of confusion and inaction.  It is essentially impossible to create and then manage a fixed and central model of how an entire enterprise works top to bottom, particularly by looking at horizontal implementation facets like processes, people or technology which change rapidly, independently and for different reasons in different capabilities.  In addition the business models of capabilities are going to be very diverse and ‘horizontal’ views often encourage overly simplistic policies and standards for the sake of ‘standardisation’ that negatively impact large areas of the business.  Throw in an increasing move towards the cloud and the consumption of specialised external services and this only becomes more of an issue.  In this context it is far more critical to have a set of business architecture assets at different levels of abstraction that allow reasoning about the purpose, direction and execution strategy of the business, its capabilities and their implementation assets (this latter only for those capabilities you retain yourself in future).  These assets need to be explicitly targeted at different levels of abstraction, produced in a contextually appropriate way and – importantly – facilitate far greater federation in decision making and implementation to improve outcomes.  Effectively a framework for understanding and actionable insight is far more valuable than a mass of – mostly out of date – data that causes information overload, confusion and inaction.
An old picture from a few years ago that I put together to illustrate some of these ideas is included below (although in reality I’m not sure that I see an “IT department” continuing to exist as a separate entity in the long term but rather a migration of appropriate staff into the enterprise and capability spaces with platforms and non-core business capabilities moving to the cloud).

[Image: ‘guerilla’ diagram illustrating these ideas]

In terms of relinquishing central control in this way it is possible that for transitional business architectures – where capabilities remain largely within the control of a single enterprise as today – greater federation coupled with a refined form of internal crowdsourcing could enable each independent model to be internally consistent and for each to be consistent with the broader picture of enterprise value creation.  I decided to do something else before getting to the point of testing this as a long term proposition, however, lol (although perhaps my former (business) partner in crime @pcgoodie, who’s just started blogging, will talk more about this given that he has more staying power than me and continues the work we started together).  Stepping back, however, part of the value in moving to this way of thinking is letting go and viewing things from a systems perspective and so the value of having access to all the detail from the centre will diminish over time.

In the broader sense, though, whilst I first had a low grade ‘business services as organisation’ epiphany whilst working at a financial services company in 2001 most of this thinking and these ways of working were inspired not by being inside an enterprise but rather subsequently spending time outside of one.  Spending time researching and reflecting on the architectures, patterns, technologies and – more importantly – business models related to the cloud made me think more seriously about the place of an enterprise in its wider ecosystem of value creation and the need to concentrate completely on those aspects of the ecosystem that really deliver its value.  In the longer term whilst there are many pressures forcing an internal realignment to become more customer-centric, valuable or cost effective, the real pressure is going to start building from the outside; once you realise that the enterprise works within a broader system you also start to see how the enterprise itself is a system, with most of its components being pretty poor or misaligned to the needs of the wider ecosystem and its consumers.  At this point you begin to realise that you have to separate the different capabilities in your organisation and use greater design thinking, abstraction and federation, giving up control of the detail outside of very specific (and different) contexts depending on your purview.  At that stage you can really question your need to be executing many capabilities yourself at all, since the real promise of the cloud is not merely to provide computing power externally but rather to enable businesses to realise their specialised capabilities in a way that is open, collaborative and net native and to connect these specialisations across boundaries to form new kinds of loose but powerful value webs.  
Such an end game will be totally impossible for organisations who continue to run centralised, detail-oriented EA programmes and thus do not learn to let go, federate and use abstraction to reason, plan and execute at different levels simultaneously.

What Does it Mean to Think of Your Business as a Service?

10 Nov

Just read a really interesting post from Henry Chesbrough about what it means to think about your business as a service.  It touches on something that has always seemed obvious to me but which seems not to be well understood.  It’s important as it’s both subtle and ultimately highly disruptive.

In order to set some context about how businesses typically think of services, Henry first points to an illustration of the value chain model and the place of ‘services’ within this illustration:

[Image: value chain model illustration]

He points out that services are often thought of as a second-class citizen in this view of the world, merely being tacked onto the end of the process to assist customers in adopting the ‘real’ value – i.e. that which has been designed to be pushed at them through the tightly integrated value chain.

He then goes on to suggest that this isn’t the best view of what services should be in reality and that there is immense value in thinking about – and delivering the value of – a business as a service.

I have been arguing on my blog for a long time that the challenge facing most organisations is to reimagine themselves as a set of ‘business services’ (or business capabilities) that are organised around value rather than customer segments, functional disciplines or physical assets.  Such a move can make them more adaptable, help them to specialise by disaggregating non-core capabilities to partners and unleash innovation on a scale not possible in today’s internally focused and tightly coupled organisations.  Looking at different kinds of value can also help us to sustainably disaggregate and then re-aggregate the organisation based on cultural and economic differences (so based around relationship management business models, infrastructural business models, IP development business models or portfolio management business models).

90% of people I talk to still equate services with the value-chain definition highlighted above, however, and miss the core point.  A move to a ‘services based world’ doesn’t mean that the small area of the traditional value chain called ‘services’ becomes the most economically attractive (i.e. consulting is better than product development and so we should concentrate more there); rather, every participant in the traditional value chain has to realign themselves to take responsibility for ‘hiding’ the assets needed to deliver their outcome.  In doing this they simplify consumption for their customers and create an ability to work with far more value web participants outside the boundaries of a single organisation.  Equally importantly, such a realignment sets the scene for them to participate in pull-oriented value webs rather than merely being a dumb participant in a pre-set, push-oriented value chain.  This does not mean that they specialise only on the traditional ‘services’ part of the value chain and source all the non-services parts from partners; rather, every organisation has to identify the correct business model for each component and then increase the scope of each to wrap up whatever physical, human or information assets are required to deliver it as a specialised service.  As an example, manufacturers (an infrastructural business with a heavy dependency on physical assets, and hence far from the definition of services we started with) will still need manufacturing capability, but they will ‘expose’ the whole capability (i.e. people, processes and technologies) as a service to others (who follow different business models related to IP development or relationships).
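The idea of ‘exposing a whole capability as a service’ can be sketched in code.  This is a minimal, hypothetical illustration (all names and figures are invented, not from the post): the consumer sees only an outcome contract, never the people, plants or processes behind it.

```python
# Hedged sketch: a capability exposed as a service. The consumer deals in
# outcomes (a committed price and lead time), while the assets needed to
# deliver them stay hidden behind the service boundary.

from dataclasses import dataclass

@dataclass
class Order:
    sku: str
    quantity: int

@dataclass
class Commitment:
    order: Order
    unit_price: float
    lead_time_days: int

class ManufacturingService:
    """Wraps people, processes and technologies behind one outcome contract."""

    def quote(self, order: Order) -> Commitment:
        # Internally this would schedule plants, staff and tooling;
        # the consumer only ever sees the committed outcome.
        return Commitment(order=order, unit_price=4.20, lead_time_days=14)

svc = ManufacturingService()
commitment = svc.quote(Order(sku="widget-9", quantity=1000))
```

The design point is that the interface names the outcome, not the assets – a consumer could swap this provider for any other that honours the same contract.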

Such a shift to greater specialisation around the delivered value, whilst simultaneously extending the scope of expertise required to deliver that value as a service, is an important point; more often than not such realignments will cut across settled business boundaries and drive ‘mini-vertical-integration*’ within the context of a particular business type and outcome.

We could therefore consider a reorganisation of businesses for a service economy as a move away from the value chain model we started with to one in which:

  • ‘services’ become core offerings rather than merely a value add and represent both the external boundary and a definition of the specialised outcome delivered.  Internally the service will be implemented by a ‘mini internal value chain’ tightly optimised to deliver its differentiating IP through the appropriate combination of physical, information and human resources; and
  • a ‘value web’ coordinates services into broader networks by aggregating value via the coordination of outcomes from many specialised service providers.

Effectively you could say that the ‘value chain’ (i.e. explicit, known implementation) becomes internal to the service provider whilst the ‘value web’ (i.e. external coordination of outcomes) becomes the external expression of how value is aggregated.
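The split between an internal value chain and an external value web can be sketched as follows.  This is an illustrative toy (provider names and outcomes are invented): each provider hides its own implementation, while the web coordinates only their outcomes.

```python
# Hedged sketch: a 'value web' aggregating outcomes from specialised
# providers. Each provider's internal value chain is private; the web
# coordinates what is delivered, not how.

class Provider:
    def __init__(self, name: str, outcome: str):
        self.name = name
        self._outcome = outcome  # stands in for a hidden internal value chain

    def deliver(self) -> str:
        return self._outcome

class ValueWeb:
    def __init__(self, providers):
        self.providers = providers

    def aggregate(self) -> dict:
        # External coordination of outcomes, not of implementations.
        return {p.name: p.deliver() for p in self.providers}

web = ValueWeb([Provider("design", "spec"), Provider("manufacture", "goods")])
result = web.aggregate()
```

Note that `ValueWeb` never touches `_outcome` directly – it only calls `deliver()`, mirroring the point that the value chain becomes internal to each provider.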

Either way there is an important mind shift to be made here – moving to a model in which you make your business available as a service has profound implications for what does or does not constitute a specialisation for your organisation and for how you organise.  You may find that many things you have traditionally done internally actually have no intrinsic value and can be ceded to specialised partners, whereas subsets of many of the things that have over-simplistically been considered ‘horizontal’ (and thus easily outsourced – for example HR, Marketing or IT) come to represent significant value when you look to optimise against outcomes.  Only by re-orienting around value will we gain the insights necessary to understand the nature of the services we wish to offer, the optimum business model to adopt for each and the skills and assets required by the cross-functional teams who will deliver them.

P.S.  As an example – I briefly discussed how moves to specialise around value might affect IT departments last week.

*I should also state that when talking about ‘vertical integration’ in this context I mean within a particular business type (i.e. relationship management, infrastructure, IP development or portfolio management) rather than _across_ business types – such horrific ‘vertical integration’ across the whole value chain of different kinds of value (as beloved by traditional telecoms incumbents and, it seems, Apple) creates walled gardens that restrict consumer freedom, create asymmetrical power relationships and inhibit innovation.  As a result I believe it is something to be strictly avoided – and increasingly prevented by regulation if necessary – if we want open and competitive markets.

What’s the Future of SOA?

9 Nov

EbizQ asked last week for views on the improvements people believe are required to make SOA a greater success.  I think that if we step back we can see some hope – in fact increasing necessity – for SOA and the cloud is going to be the major factor in this.

If we think about the history of SOA to date it was easy to talk about the need for better integration across the organisation, clearer views of what was going on or the abstract notion of agility. Making it concrete and urgent was more of an issue, however. Whilst we can discuss the ‘failure’ of SOA by pointing to a lack of any application of service principles at a business level (i.e. organisationally through some kind of EA) this is really only a symptom and not the underlying cause. In reality the cause of SOA failure to date has been business inertia – organisations were already set up to do what they did, they did it well enough in a push economy and the (understandable) incentives for wholesale consideration of the way the business worked were few.

The cloud changes all of this, however. The increasing availability of cloud computing platforms and services acts as a key accelerator to specialisation and pull business models, since it allows new entrants to join the market quickly, cheaply and scalably, and to be more specialised than ever before. As a result many organisational capabilities that were economically unviable as market offerings are now becoming increasingly viable because of the global nature of cloud services. All of these new service providers need to make their capabilities easy to consume, however, and as a result are making good use of what people are now calling ‘APIs’ in a web 2.0 context but which are really just services; this is important, as one of the direct consequences of specialisation is the need to be hooked into the maximum number of appropriate value web participants as easily as possible.
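The claim that ‘APIs are really just services’ can be made concrete with a tiny example.  This is a hedged sketch, not anything from the post: the endpoint name (`credit-check`) and payload are invented, and the point is only that a consumer sees an outcome over HTTP, not an implementation.

```python
# Hedged sketch: a business capability exposed as a minimal HTTP service.
# The consumer gets a JSON outcome; everything behind it is hidden.

import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class CapabilityHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The response describes an outcome, not how it was produced.
        body = json.dumps({"capability": "credit-check", "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), CapabilityHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
response = json.load(urlopen(f"http://127.0.0.1:{port}/credit-check"))
server.shutdown()
```

A real provider would add authentication, versioning and error contracts, but the shape is the same: specialisation plus an easy-to-consume interface.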

On the demand side, as more and more external options become available in the marketplace that offer the potential to replace those capabilities that enterprises have traditionally executed in house, so leaders will start to rethink the purpose of their organisations and leverage the capabilities of external service providers in place of their own.

As a result cloud and SOA are indivisible if we are to realise the potential of either; cloud enables a much broader and more specialised set of business service providers to enter a global market with cost and capability profiles far better than those which an enterprise can deliver internally. Equally importantly, however, they will be implicitly (but concretely) creating a ‘business SOA catalogue’ within the marketplace, removing the need for organisations to undertake a difficult internal slog to re-implement or re-configure outdated capabilities for reuse in service models. Organisations need to use this insight now to trigger the use of business architecture techniques to understand their future selves as service-based organisations – both by using external services as archetypes to help them understand the ways in which they need to change and offer their own specialised services, and by working with potential partners to co-develop and then disaggregate those services in which they don’t wish to specialise in future.
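The ‘business SOA catalogue’ idea can be sketched as a simple capability lookup.  All providers and entries below are invented examples; the sketch only shows how external services could act as archetypes when deciding what to keep in house.

```python
# Hedged sketch: a marketplace 'business SOA catalogue' queried by
# capability, the way the post suggests external services can serve as
# archetypes for in-house capabilities. Entries are fictional.

catalogue = [
    {"provider": "PayCo",    "capability": "payments",  "model": "infrastructure"},
    {"provider": "IdentIQ",  "capability": "identity",  "model": "infrastructure"},
    {"provider": "BrandLab", "capability": "campaigns", "model": "relationship"},
]

def find_providers(capability: str) -> list:
    """Which market services could replace an in-house capability?"""
    return [entry["provider"] for entry in catalogue
            if entry["capability"] == capability]

candidates = find_providers("payments")
```

An empty result is informative too: a capability with no credible market archetype is a stronger candidate for remaining an internal specialisation.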

Having said all that to set the scene for my answer(!), I believe that SOA research needs to focus on raising the concepts of IT-mediated service provision to a business level.  That includes concrete modelling of business capabilities and value webs (along with complex service levels, contracts, pricing and composition); new cloud development platforms, tooling and management approaches linked more explicitly to business outcomes and giving specialised support to different kinds of work; and the emergence of new third parties who will mediate, monitor and monetise such relationships on behalf of participants in order to provide the required trust.

All in all I guess there’s still plenty to do.

This is Not Your Daddy’s IT Department

3 Nov

Whilst noodling around the net looking at stuff for a longer post I’m writing I came across an excellent Peter Evans-Greenwood piece from a few months ago on a related theme – namely the future of the IT department.  I found it so interesting I decided to forgo my other post for now and jot down some thoughts.

After an interesting discussion about the way in which IT organisations have traditionally been managed and the ways in which outsourcing has evolved Peter turns to a discussion of the future shape of IT as a result of the need for businesses to focus more tightly, change more rapidly and deal with globalisation.  He posits that the ideal future shape of provision looks like that below (most strategic IT work at peak):

pyramid

Firstly I agree with the general shape of this graphic – it seems clear to me that much of what goes on in existing enterprises will be ceded to specialised third parties.  My only change would be to substitute ‘replace with software’ with ‘replace with external capability’ as I believe that businesses will outsource more than just software.  Given that this diagram was meant to look at the work of the IT department, however, its scope is understandable.

The second observation is that I believe that the IT “function” will disaggregate and be spread around both the residual business and the new external providers.  I believe that this split will happen based on cultural and economic factors.

Firstly all ‘platform’ technologies will be outsourced to the public cloud to gain economies of scale as the technology matures.  There may be a residual internal IT estate for quite some time but it is essentially something that gets run down rather than invested in for new capability.  It is probable that this legacy estate would go to one kind of outsourcer in the ‘waist’ of the triangle.

Secondly many business capabilities currently performed in house will be outsourced to specialised service providers – this is reflected in the triangle by the ‘replace with software’ bulge (although as I stated I would suggest ‘replace with external capability’ in this post to cover the fact that I’m also talking about business capabilities rather than just SaaS).

Thirdly – business capabilities that remain in house due to their differentiating or strategic nature will each absorb a subset of enterprise architects, managers and developers to enable a leaner process – essentially these people will be embedded with the rest of their business peers to support continual improvement based on aligned outcomes.  The developers producing these services will use cloud platforms to minimise infrastructural concerns and focus on software-based encoding of the specialised IP encapsulated by their business capability.  It is probable that the enterprise architects, managers and developers in this context will also be supplemented by external resources from the ‘waist’ as the need arises.

Finally a residual ‘portfolio and strategy’ group will sit with the executive and manage the enterprise as a collection of business capabilities sourced internally and externally against defined outcomes.  This is where the CIO and portfolio level EA people will sit and where traditional consulting suppliers would sell their services.

As a result my less elegant (i.e. pig ugly :)) diagram updated to reflect the disaggregation of the IT department and the different kinds of outsourcing capabilities they require would look something like:

future_it_department

In terms of whether the IT ‘department’ continues to exist as an identifiable capability after this disaggregation I suspect not – once the legacy platform has been replaced by a portfolio of public cloud platforms and the ‘IT staff’ merged with other cross-functional peers behind the delivery of outcomes I guess IT becomes part of the ‘fabric’ of the organisation rather than a separate capability.  I don’t believe that this means that IT becomes ‘only’ about procurement and vendor management, however, since those business capabilities that remain in house will still use IT literate staff to design and build new IT driven processes in partnership with their peers.

I did write a number of draft papers about all these issues a few years ago but they all got stuck down the gap between two jobs.  I should probably think about putting them up here one day and then updating them.

Private Clouds “Surge” for Wrong Reasons?

14 Jul

I read a post by David Linthicum today on an apparent surge in demand for Private Clouds.  This was in turn spurred by thoughts from Steve Rosenbush on increasing demand for Private Cloud infrastructures.

To me this whole debate is slightly tragic, as I believe most people are framing the wrong issues when considering public vs private cloud.  (Frankly, for me it is a ridiculous debate: in my mind ‘the cloud’ can only exist ‘out there, somewhere’ and thus be shared.  A ‘private’ cloud can only be a logically separate area of a shared infrastructure, not an organisation-specific infrastructure that merely shares some of the technologies and approaches – which is business as usual and not a cloud.  For that reason, when I talk about public clouds I also include such logically private clouds running on shared infrastructures.)  As David points out there are a whole host of reasons that people push back against the use of cloud infrastructures, mostly to do with retaining control in one way or another.  In essence there is a list of IT issues that people raise as absolute blockers requiring private infrastructure to solve – particularly control, service levels and security – whilst they ignore the business benefits of specialisation, flexibility and choice.  Often “solving” the IT issues – and propagating a model of ownership and mediocrity in IT delivery when it’s not really necessary – merely denies the business the opportunity to solve its issues and transformationally improve its operations (and surely optimising the business is more important than undermining it in order to optimise the IT, right?).  That’s why for me the discussion should be about the business opportunities presented by the cloud and not simply a childish public vs private debate at the – pretty worthless – technology level.

Let’s have a look at a couple of issues:

  1. The degree of truth in the control, service and security concerns most often cited about public cloud adoption and whether they represent serious blockers to progress;
  2. Whether public and private clouds are logically equivalent or completely different.

IT issues and the Major Fallacies

Control

Everyone wants to be in control.  I do.  I want to feel as if I’m moving towards my goals, doing a good job – on top of things.  In order to be able to be on top of things, however, there are certain things I need to take for granted.  I don’t grow my own food, I don’t run my own bank, I don’t make my own clothes.  In order for me to concentrate on my purpose in life and deliver the higher level services that I provide to my customers there are a whole bunch of things that I just need to be available to me at a cost that fits into my parameters.  And to avoid being overly facetious I’ll also extend this into the IT services that I use to do my job – I don’t build my own blogging software or create my own email application but rather consume all of these as services over the web from people like WordPress.com and Google. 

By not taking personal responsibility for the design, manufacture and delivery of these items, however (i.e. by not maintaining ‘control’ of how they are delivered to me), I gain the more useful ability to be in control of which services I consume to give me the greatest chance of delivering the things that are important to me (mostly, lol).  In essence I would have little chance of sitting here writing about cloud computing if I also had to cater to all my basic needs (from both a personal as well as IT perspective).  I don’t want to dive off into economics but simplistically I’m taking advantage of the transformational improvements that come from division of labour and specialisation – by relying on products and services from other people who can produce them better and at lower cost I can concentrate on the things that add value for me.

Now let’s come back to the issue of private infrastructure.  Let’s be harsh.  Businesses simply need IT that performs some useful service.  In an ideal world they would simply pay a small amount for the applications they need, as they need them.  For 80% of IT there is absolutely no purpose in owning it – it provides no differentiation and is merely an infrastructural capability that is required to get on with value-adding work (like my blog software).  In a totally optimised world businesses wouldn’t even use software for many of their activities but rather consume business services offered by partners that make IT irrelevant. 

So far then we can argue that for 80% of IT we don’t actually need to own it (i.e. we don’t need to physically control how it is delivered) as long as we have access to it.  For this category we could easily consume software as a service from the “public” cloud and doing so gives us far greater choice, flexibility and agility.

In order to deliver some of the applications and services that a business requires to deliver its own specialised and differentiated capabilities, however, they still need to create some bespoke software.  To do this they need a development platform.  We can therefore argue that the lowest level of computing required by a business in future is a Platform as a Service (PaaS) capability; businesses never need to be aware of the underlying hardware as it has – quite literally – no value.  Even in terms of the required PaaS capability the business doesn’t have any interest in the way in which it supports software development as long as it enables them to deliver the required solutions quickly, cheaply and with the right quality.  As a result the internals of the PaaS (in terms of development tooling, middleware and process support) have no intrinsic value to a business beyond the quality of outcome delivered by the whole.  In this context we also do not care about control since as long as we get the outcomes we require (i.e. rapid, cost effective and reliable applications delivery and operation) we do not care about the internals of the platform (i.e. we don’t need to have any control over how it is internally designed, the technology choices to realise the design or how it is operated).  More broadly a business can leverage the economies of scale provided by PaaS providers – plus interoperability standards – to use multiple platforms for different purposes, increasing the ‘fitness’ of their overall IT landscape without the traditional penalties of heterogeneity (since traditionally they would be ‘bound’ to one platform by the inability of their internal IT department to cost-effectively support more than one technology).

Thinking more deeply about control in the context of this discussion we can see that for the majority of IT required by an organisation concentrating on access gives greater control than ownership due to increased choice, flexibility and agility (and the ability to leverage economies of scale through sharing).  In this sense the appropriate meaning of ‘control’ is that businesses have flexibility in choosing the IT services that best optimise their individual business capabilities and not that the IT department has ‘control’ of the way in which these services are built and delivered.  I don’t need to control how my clothes manufacturer puts my t-shirt together but I do want to control which t-shirts I wear.  Control in the new economy is empowerment of businesses to choose the most appropriate services and not of the IT department to play with technology and specify how they should be built.  Allowing IT departments to maintain control – and meddle in the way in which services are delivered – actually destroys value by creating a burden of ownership for absolutely zero value to the business.  As a result giving ‘control’ to the IT department results in the destruction of an equal and opposite amount of ‘control’ in the business and is something to be feared rather than embraced.

So the need to maintain control – in the way in which many IT groups are positioning it – is the first major and dangerous fallacy. 

Service levels

It is currently pretty difficult to get a guaranteed service level from cloud service providers.  On the other hand, most providers consistently deliver uptime in excess of 99%, so the actual service levels are pretty good.  The lack of a piece of paper with this actual, experienced service level written down as a guarantee, however, is currently perceived as a major blocker to adoption.  Essentially IT departments use it as a way of demonstrating the superiority of their services (“look, our service level says five nines – guaranteed!”) whilst the stock they put in these service levels creates FUD in the minds of business owners who want to avoid major risks.

So let’s lay this out.  People compare the current lack of service level guarantees from cloud service providers with the ability to agree ‘cast-iron’ service levels with internal IT departments.  Every project I’ve ever been involved in has had a set of service levels but very few ever get delivered in practice.  Sometimes they end up being twisted into worthless measures for simplicity of delivery – like whether a machine is running irrespective of whether the business service it supports is available – and sometimes they are just unachievable given the level of investment and resources available to internal IT departments (whose function, after all, is merely that of a barely-tolerated but traditionally necessary drain on the core purpose of the business). 

So to find out whether I’m right – and whether service level guarantees have any meaning – I will wait until every IT department in the world puts their actual achieved service levels up on the web like, for instance, Salesforce.  I’m keen to compare practice rather than promises.  Irrespective of guarantees, my suspicion is that most organisations’ actual service levels are woeful in comparison to those delivered by cloud providers, but I’m willing to be convinced.  Despite the illusion of SLA guarantees and enforcement, the majority of internal IT departments (and the managed service providers who take over all of those legacy systems, for that matter) get nowhere near the actual service levels of cloud providers, irrespective of what internal documents might say.  It is a false comfort.  Businesses therefore need to wise up and consider real data and actual risks – in conjunction with the transformational business benefits that can be gained by offloading capabilities and specialising – rather than let such meaningless nonsense take them down the old path to ownership; in doing so they are potentially sacrificing a move to cloud services and therefore their best chance of transforming their relationship with IT and optimising their business.  This is essentially the ‘promise’ of buying into updated private infrastructures (aka ‘private cloud’).
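Comparing practice rather than promises is simple arithmetic once you have downtime data.  The figures below are invented purely for illustration: a platform that promises ‘five nines’ on paper but loses 260 minutes in a month versus a provider publishing 22 minutes of downtime on a status page.

```python
# Hedged sketch: achieved availability from raw downtime data, so that
# promised SLAs can be checked against practice. All figures are made up.

def achieved_availability(total_minutes: int, downtime_minutes: int) -> float:
    """Percentage of the period the service was actually up."""
    return 100 * (total_minutes - downtime_minutes) / total_minutes

month = 30 * 24 * 60  # minutes in a 30-day month

internal = achieved_availability(month, downtime_minutes=260)  # 'five nines' on paper
cloud = achieved_availability(month, downtime_minutes=22)      # published status page
```

260 minutes of downtime in a month is roughly 99.4% availability – nowhere near the 99.999% the paper guarantee claims, which is exactly the gap between promises and practice the post describes.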

A lot of it comes down to specialisation again and the incentives for delivering high service levels.  Think about it – a cloud provider (literally) lives and dies by whether the services they offer are up; without them they make no money, their stock falls and customers move to other providers.  That’s some incentive to maintain excellence.  Internally – well, what you gonna do?  You own the systems and all of the people so are you really going to penalise yourself?  Realistically you just grit your teeth and live with the mediocrity even though it is driving rampant sub-optimisation of your business.  Traditionally there has been no other option and IT has been a long process of trying to have less bad capability than your competitors, to be able to stagger forward slightly faster or spend a few pence less.  Even outsourcing your IT doesn’t address this since whilst you have the fleeting pleasure of kicking someone else at the end of the day it’s still your IT and you’ve got nowhere to go from there.  Cloud services provide you with another option, however, one which takes advantage of the fact that other people are specialising on providing the services and that they will live and die by their quality.  Whilst we might not get service levels – at this point in their evolution at least – we do get transparency of historical performance and actual excellence; stepping back it is critical to realise that deeds are more important than words, particularly in the new reputation-driven economy. 

So the perceived need for service levels as a justification for private infrastructures is the second major and dangerous fallacy.  Businesses may well get better service levels from cloud providers than they would internally and any suggestion to the contrary will need to be backed up by thorough historical analysis of the actual service levels experienced for the equivalent capability.  Simply stating that you get a guarantee is no longer acceptable. 

Security

It’s worth stating from the beginning that there is nothing inherently less secure about cloud infrastructures.  Let’s just get that out there.  Also, to get infrastructure as a service out of the way – given that we’re taking the position in this post that PaaS is the first level of actual value to a business – we can say that it’s just infrastructure; your data and applications will be no more or less secure than your own procedures make them, but the data centre is likely to be at least as secure as your own, and probably much more so due to the level of capability required of a true service provider.

So, starting from ground zero with things that actually deliver something (i.e. PaaS and SaaS), a cloud provider can build a service that uses any of the technologies you use in your organisation to secure your applications and data – only they’ll have more use cases and hence will consider more threats than you will.  And that’s just the start.  From that point the cloud provider will also have to consider how they manage different tenants to ensure that their data remains secure, and they will also have to protect customers’ data from their own (i.e. the cloud service provider’s) employees.  This is a level of security that is rarely considered by internal IT departments and results in more – and more deeply considered – data separation and encryption than would be possible within a single company.

Looking at the cloud service from the outside we can see that providers will be more obvious targets for security attacks than individual enterprises but counter-intuitively this will make them more secure.  They will need to be secured against a broader range of attacks, they will learn more rapidly and the capabilities they learn through this process could never be created within an internal IT organisation.  Frankly, however, the need to make security of IT a core competency is one of the things that will push us towards consolidation of computing platforms into large providers – it is a complex subject that will be more safely handled by specialised platforms rather than each cloud service provider or enterprise individually. 

All of these changes are part of the more general shift to new models of computing; to date the paradigm for security has largely been that we hide our applications and data from each other within firewalled islands.  Increasing collaboration across organisations – and the cost, flexibility and scale benefits of sharing – mean that we need to find a way of making our services available outside our organisational boundaries, however.  Again, in doing this we need to consider who is best placed to ensure the secure operation of applications supporting multiple clients – is it specialised cloud providers who have created a security model specifically to cope with secure open access and multi-tenancy for many customer organisations, or a group of keen “amateurs” with the limited experience that comes from the small number of use cases they have discovered within the bounds of a single organisation?  Furthermore, as more and more companies migrate onto cloud services – and such services become ever more secure – the isolated islands will become prime targets for attack, since the likelihood that they can maintain top levels of security cut off from the rest of the industry – and with far less investment in security than specialised platform providers can make – becomes ever smaller.  Slowly, isolationism becomes a threat rather than a protection.  We really are stronger together.

A final key issue that falls under the ‘security’ tag is that of data location (basically the perceived requirement to keep data in the country of the customer’s operating business).  Often this starts out as the major, major barrier to adoption, but slowly you discover that people are willing to trade off where their data are stored when the costs of implementing such location policies are huge for little value.  Again, in an increasingly global world businesses need to think more openly about the implications of storing data outside their country – for instance a UK company (perhaps even government) may have no practical issues in storing most data within the EU.  In many cases, however, businesses apply old rules or ways of thinking rather than challenging themselves in order to gain the benefits involved.  This is often tied into political processes – particularly between the business and IT – and leads to organisations not sufficiently examining the real legal issues and possible solutions in a truly open way.  It can often become an excuse to build a private infrastructure, fulfilling the IT department’s desire to maintain control over the assets but in doing so loading unnecessary costs and inflexibility onto the business itself – ironically as a direct result of the business’s unwillingness to challenge its own thinking.

Does this mean that I believe that people should immediately begin throwing applications into the cloud without due care and attention?  Of course not.  Any potential provider of applications or platforms will need to demonstrate appropriate certifications and undergo some kind of due diligence.  Where data resides is a real issue that needs to be considered but increasingly this is regional rather than country specific.   Overall, however, the reality is that credible providers will likely have better, more up to date and broader security measures than those in place within a single organisation. 

So finally – at least for me – weak cloud security is the third major and dangerous fallacy.

Comparing Public and Private

Private and Public are Not Equivalent

The real discussion here needs to be less about public vs private clouds – as if they were equivalent but just delivered differently – and more about how businesses can leverage the seismic change in model occurring in IT delivery and economics.  Concentrating on the small-minded issues of whether technology should be deployed internally or externally as a result of often inconsequential concerns – as we have discussed – belittles the business opportunities presented by a shift to the cloud by dragging the discussion out of the business realm and back into the sphere of techno-babble.

The reality is that public and private clouds and services are not remotely equivalent; private clouds (i.e. internal infrastructure) are a vote to retain the current expensive, inflexible and one-size-fits-all model of IT that forces a business to sub-optimise a large proportion of its capabilities just to make its IT costs even slightly tolerable.  It is a vote to restrict choice, reduce flexibility, suffer uncompetitive service levels and continue to be distracted – and poorly served – by activities that have absolutely no differentiating value to the business. 

Public clouds and services on the other hand are about letting go of non-differentiating services and embracing specialisation in order to focus limited attention and money on the key mission of the business.  The key point in this whole debate is therefore specialisation; organisations need to treat IT as an enabler and not an asset, they need to  concentrate on delivering their services and not on how their clothes get made. 

Summary

If there is currently a ‘surge’ in interest in private clouds it is deeply confusing (and disturbing) to me given that the basis for focusing attention on private infrastructures appears to be deeply flawed thinking around control, service and security.  As we have discussed not only are cloud services the best opportunity that businesses have ever had to improve these factors to their own gain but a misplaced desire to retain the IT models of today also undermines the huge business optimisations available through specialisation and condemns businesses to limited choice, high costs and poor service levels.  The very concerns that are expressed as reasons not to move to cloud models – due to a concentration on FUD around a small number of technical issues – are actually the things that businesses have most to gain from should they be bold and start a managed transition to new models.  Cloud models will give them control over their IT by allowing them to choose from different providers to optimise different areas of their business without sacrificing scale and management benefits; service levels of cloud providers – whilst not currently guaranteed – are often better than they’ve ever experienced and entrusting security to focused third parties is probably smarter than leaving it as one of many diverse concerns for stretched IT departments. 

Fundamentally, though, there is no equivalence between the concept of public (including logically private but shared) and truly private clouds; public services enable specialisation, focus and all of the benefits we’ve outlined whereas private clouds are just a vote to continue with the old way.  Yes virtualisation might reduce some costs, yes consolidation might help but at the end of the day the choice is not the simple hosting decision it’s often made out to be but one of business strategy and outlook.  It boils down to a choice between being specialised, outward looking, networked and able to accelerate capability building by taking advantage of other people’s scale and expertise or rejecting these transformational benefits and living within the scale and capability constraints of your existing business – even as other companies transform and build new and powerful value networks without you.

Evolution and IT

3 Jun

This is a subject that has been on my mind a lot lately as I recently read an astounding book by Eric D. Beinhocker called “The Origin of Wealth”.  It was astounding to me for the way in which Beinhocker imperiously swept across traditional economic theories based on equilibrium systems, critiqued the inherent weaknesses of such theories when faced with real world scenarios and then hypothesised the use of the evolutionary algorithm as a basis for a fundamental shift to what he called ‘complexity economics’.  I’m going to return to discuss some of the points from this book – and the way in which they resonated with my own thoughts around business design, economic patterns and technology change – but for today I just wanted to comment on a post by Steve Jones where he raises the issue of evolution in the context of IT systems.

Steve’s question was whether we should “reject evolution and instead take up arms with the Intelligent design mob”.  His thoughts have been influenced by the writing of Richard Dawkins, in particular the oft-drawn contrast between the apparent elegance of the external appearance of an animal (including its fitness for its environment) and the messy internals that give it life.  Steve suggests that he sees parallels in the IT world and brings this around to issues with the way in which a shift to service-based models often creates unfounded expectations on internal agility:

“The point is that actually we shouldn’t sell SOA from the perspective of evolution of the INSIDE at all we should sell it as an intelligent design approach based on the outside of the service. Its interfaces and its contracts. By claiming internal elements as benefits we are actually undermining the whole benefits that SOA can actually deliver.”

In the rest of the post and into the comments Steve then extends this argument to call for intelligent design (of externals) in place of evolution:

“The point I’m making is that Evolution is a bad way to design a system the whole point of evolution is that of selection, selection of an individual against others. In IT we have just one individual (our IT estate) and so selection doesn’t apply.”

My own feeling is that there isn’t a direct 1:1 relationship in thinking about evolution and the difficulties of changing the internals of a service in the way that Steve suggests.  I believe that evolution is a fractal algorithm whose principles apply equally to the design of business capabilities, service contracts and code.  To think about this more specifically I’d like to consider a number of his points after first considering evolution and how we frame it more broadly from a market and enterprise context.

What is evolution?

Evolution is an algorithm that allows us to explore large design spaces without knowing everything in advance.  It allows us to try out random designs, apply some selection criteria and then amplify those characteristics of a design that are judged as ‘fit’ by the environment (i.e. the selection criteria).  In the natural world evolution throws up organisms that have many component traits and success is judged – often brutally – by how well these traits enable an animal to survive in the environment in which it exists.  Within an individual species there will be a particular subset of traits that define that species (so traits that govern size, speed or vision for instance).  Individuals within a species who have the most desirable instances of these traits will be better equipped to survive, the mating of these individuals will merge their desirable traits and over time the preponderance of the most effective traits will therefore increase in the population overall.  As a result evolution creates a number of designs and uses a selection algorithm to more rapidly arrive at designs that are ‘good enough’ to thrive within the context of the environment in which they exist.  It is a much more rapid method of exploring large design spaces than trying to think about every possible design, work out the best combination of traits and then create the ‘perfect’ design from scratch (i.e. “intelligently” design something without a full understanding of the complexities of the selection criteria and hence what will be successful).
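The algorithm described in the paragraph above can be sketched in a few lines of Python.  This is a minimal, illustrative genetic algorithm of my own devising (not from any particular library) in which the ‘environment’ simply rewards designs with the most 1-bits – random designs, a selection criterion, and amplification of successful traits through mating and mutation:

```python
import random

def evolve(fitness, genome_len=20, pop_size=30, generations=60,
           mutation_rate=0.02, seed=42):
    """Explore a design space by selecting and amplifying 'fit' traits."""
    rng = random.Random(seed)
    # Start from random designs -- no knowledge of the 'perfect' answer.
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the environment judges designs via the fitness function.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        # Amplification: mate survivors, merging their desirable traits.
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]
            # Mutation keeps the algorithm exploring the design space.
            child = [g if rng.random() > mutation_rate else 1 - g
                     for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# The 'environment' here judges fitness as the number of 1-bits in a design.
best = evolve(fitness=sum)
```

Note that nothing in the code knows what the best design looks like; repeated selection and amplification arrive at a ‘good enough’ design far faster than enumerating every possible combination of traits.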

Enterprises and evolution

In Beinhocker’s book he uses a ‘business’ as the unit of selection that operates within the evolutionary context of the market.  Those businesses with successful traits are chosen by consumers and thus excel.  These traits – whether they be talent strategies, process strategies or technology strategies – are then copied by other businesses, replicating and amplifying successful traits within the economic system. 

I believe that this is the best approximation that we can use in the – rather unsystematic – businesses that exist today but that we can use systematic business architecture to do better.  I have often written about my belief in the need for companies to become more adaptable by identifying and then reforming around the discrete business capabilities they require to realise value.  Such capabilities would form a portfolio of discrete components with defined outcomes which could then be combined and recombined as necessary to realise systematic value streams. 

Such a shift to business capabilities will allow an enterprise to adapt its organisation through the optimisation and recombination of components; whilst at this stage of maturity Beinhocker’s hypothesis of the ‘business’ as the element of selection remains sound (since capabilities are still internal and not individually selectable as desirable traits) we can at least begin to talk about capabilities and the way in which we combine them as the primary traits that need to be ‘amplified’ to increase the fitness of the design of our business. 

Inside Out 

Whilst realigning internal capabilities is a worthwhile exercise in its own right, evolutionary systems also tend to exhibit long periods of relative stability punctuated by rapid change as something happens to alter the selection criteria for ‘fitness’.  The Web and related techniques for decomposition – such as service-orientation – have made it possible to consume external services as easily as internal services.  Business capabilities can thus be made available by specialised providers from anywhere in the world in such a way that they can be easily integrated with internal capabilities to form collaborative value webs.  We can therefore view the current convergence of business architecture, technology and a mass shift to service models as a point of ‘punctuated equilibrium’. 

In this environment continuing to execute capabilities that are non-differentiating will cease to be an attractive option as working with specialised providers will deliver both better outcomes and more opportunities for innovation.  From an evolutionary perspective our algorithm will continue to select those organisations that are most fit (as judged by the market) and those organisations will be those with the strongest combination of traits (i.e. capabilities).  Specialised, external capabilities can be considered to be more attractive ‘traits’ due to their sharp focus, shorter feedback loops and market outlook; they will thus be amplified as more organisations close down their own internal capabilities and integrate them instead, a kind of organisational mutation caused by the combination of the best capabilities available to increase overall fitness.    Enterprises working with limited, expensive and non-differentiating internal capabilities will risk extinction.

Once this shift reaches a tipping point we discover that business capabilities become the unit of market selection since they are now visible as businesses in their own right.  Whilst this could be considered pedantry – as a ‘business’ is still the unit of selection even though what we consider a business has become smaller – there is an important shift that happens at this point.  Essentially as business capabilities become units of selection in their own right the ‘traits’ for selection and amplification of their services become a combination of their own internal people, process and technology capabilities plus the quality of the external capabilities they integrate.  Equally importantly they have to act as businesses – rather than internal, supporting organisations – and support the needs of many customers – and hence support mass customisation.  This will mean that they will have many more consumers than internal support functions would ever have had and the needs of these consumers could be both very different and impossible to guess in advance; there will be new opportunities to rapidly improve their services based on insight from different industries, orthogonal areas and new collaborations.  An ability to respond to these new opportunities by changing their own capabilities or finding new partners to work with will be a significant factor in whether these capabilities thrive and are thus judged as ‘fit’ by the selection criteria of the market.  An ability to evolve externally to provide the ‘right’ services will thus be a core competency required in the new world. 

What has this got to do with services?

The basic points I’m making here are that evolution acts at the scale of markets and is a process that we participate in rather than a way of designing.  We design our offers using the best knowledge that we have available but the market will decide whether the ‘traits’ we exhibit fit the selection criteria of the environment.  Business capabilities can become the ‘traits’ that make particular market offers (or businesses) fit for selection or not by having a huge influence over the overall cost, quality and desirability of a particular offer.  From a technology perspective such capabilities will in reality need to offer their services as services in order for them to be easily integrated into the overall value webs of their customers and partners; in many cases there may be a 1:1 mapping between the business capability and the service interface used to consume it.  In that sense services are just as much a driver of fitness in the overall ecosystem and their interface and purpose will inevitably need to change as the overall ecosystem evolves.  Hence it is not simply a question of ‘fixing’ interfaces and ‘evolving’ internals; the reality is that the whole market is an evolutionary system and businesses – plus the services they offer for consumption – will need to continually evolve in order to remain fit against changing selection criteria.

Intelligent design or evolution

The core question raised by Steve is whether ‘evolution’ has any place in our notion of service design.  In particular:

“The point I’m making is that Evolution is a bad way to design a system the whole point of evolution is that of selection, selection of an individual against others. In IT we have just one individual (our IT estate) and so selection doesn’t apply.”

Is evolution a ‘bad’ designer?

I do not believe that evolution is either a good or a bad designer but it is a very successful one.  Evolution is an algorithm that takes external selection criteria, applies them and amplifies those traits that are most successful in meeting the criteria.  It is brilliant at evaluating near-infinite design spaces (such as living organisms or markets) and continually refining designs to make them fit for the environmental selection criteria in play.

If I read Steve’s post correctly, however, he actually isn’t objecting to the notion of evolution per se – since at the macro level it is a market process in which we are all involved and not a conscious way of designing our services – but rather to a lack of design being labelled an ‘evolutionary’ approach. 

What is ‘evolutionary’ design?

In the majority of cases when people talk about ‘evolution’ in the context of systems they really mean that they want to implement as quickly and cheaply as possible and then ‘evolve’ the system.  Often vendors encourage this behaviour by promising that new technologies are so rapid as to make changes easy and inexpensive.  Such approaches often eschew any attempt at formal design, choosing instead to implement in isolation and then retro-integrate with anything else on a case by case basis.  I have often seen Steve talk about the evils of generating WSDL from code and I imagine that this is the sort of behaviour that he is classifying as ‘evolutionary’ changes to internals.

Is this a good or a bad approach?  From an evolutionary perspective we can say that we do not care.  Given that we are talking about evolution in its true sense the algorithm would merely continue to churn through its evaluation of services, amplifying successful traits.  It is just that such behaviour might have some unrecognised issues: firstly evolution would have to work for longer to bring a service to a point at which it is ‘fit’, secondly the combination of all of these unfit services means that there is a multiplier effect to evolving the ecosystem of services to a point at which it is fit overall and thirdly whilst all of this goes on at a micro level the fitness of the enterprise against the selection criteria of the market might be poor due to the unfitness of some of its major ‘traits’.

Intelligence in design

Whilst a lack of design might extend the evolutionary process to a point at which it is unlikely that a business could ever become fit before it became extinct, an assumption that we can design service interfaces that are fixed also ignores the reality of operating in a complex evolutionary system (like a business). 

Creating a ‘perfect’ service from scratch is a very difficult thing to do as even within the bounds of a single organisation we cannot know all of the potential uses that might come to pass.  We can however use the best available data to create an approximation of the business capabilities and resulting services required in order to try and speed up the evolutionary process by reducing the design space it has to search.  Hence the notion that we use an evolutionary process of service design (a bit like I discussed here) is an important one; often people will not know what good looks like until they see something.  Whilst I therefore accept that we can start with an approximation of the capabilities (and services) we believe we will need we have to accept that these will evolve as we gain experience and exposure to new use cases.  

From this perspective I don’t agree with the literal statement that Steve has made; it is not about intelligent design vs evolution but rather about intelligence of design to support the evolutionary process.  As I stated previously markets are fundamentally evolutionary systems and therefore our businesses – and the business capabilities and services that represent their traits – are assessed by the evolutionary algorithm for fitness against market selection criteria.  We are not dumb observers in this process, however, and must fight to create offers that are attractive to the market along with supporting organisations that enable us to do it at the right price and service levels.  We can apply our intelligence to this process to increase our chances of success but a key element will be to understand that our enterprises will increasingly become a value web of individual capabilities, that it is the combination of our capabilities that is judged and that we must therefore design our organisations to evolve by adopting successful traits to improve our overall fitness.  As a result we should not expect the evolutionary process to do our work for us – by choosing not to apply any intelligence in design – but we should also not assume that evolution has no place in design given that meeting its demands is becoming the primary requirement of business architecture.

Macro evolution in the economy

Stepping back and taking an external perspective leads us to realise that it is also untrue to say that we only have one individual (in terms of a single IT estate) and that there is nothing to select against; in reality even today we are competing for selection against businesses with other IT estates and thus our ‘traits’ (in the form of our IT) are already a major factor in deciding our fitness (and thus our ability to be ‘selected’ by the evolutionary algorithm of the market).  If we factor in the emerging discontinuities we see as part of the ‘punctuated equilibrium’ process it only makes things worse; the specific IT we have within specific business capabilities will have a large impact on the fitness of these capabilities to survive.  In that context continually evolving our business capabilities (and with them the IT and software services that enable them) is the only way to ensure future success.

More importantly, as we look at the wider picture of the position of our business capabilities within the market as a whole, so our unknowns become more acute and we can rely only on selection and amplification (i.e. evolution) to guide us in shaping them.  Looking beyond the boundaries of our single organisation we have to consider the fact that all of our services will exist in a market ecosystem, many of whose needs and usages we are even less equipped to know in advance.  There will often be new and novel ways in which we can change our services to meet the emerging needs of customers and partners, and in this way the overall ecosystem itself will evolve.  As a result selection is the only way in which design can occur in an ecosystem as complex as a market, where there are many participants whose needs are diverse.  Nobody can ‘intelligently design’ a whole market from top to bottom.  Furthermore the market – as an evolutionary system – will be subject to a process of ‘punctuated equilibrium’, meaning that sudden changes in the criteria used to judge fitness can occur.  From an IT perspective the shift towards service models such as cloud computing could be considered one such change, since it moves the economics of IT from differentiation through ownership to universal access.  Such changes could be considered ‘revolutionary’ as the carefully crafted and scaled business models created during a period of relative stability cease to be appropriate and new capabilities have to be developed or integrated to be successful.  This is one area where I disagreed with Steve’s comment about the relationship between revolution and evolution:

“The point of revolutionary change is that it may require a drop back to the beginning and starting again. This isn’t possible in an evolutionary model.”

Essentially revolutionary change often happens in evolutionary systems – evolution is always exploring the design space and changes in the environment can lead to previously uninteresting traits becoming key selection criteria.  In this case ‘revolutionary change’ is a side-effect of the way in which evolution starts to amplify different traits due to the changes in selection criteria.  In the natural world such changes can lead to catastrophic outcomes for whole species whose specialisations are too far removed from the new selection criteria and this can also happen to businesses (it will be interesting to see how many IT companies survive the shift to new models, for instance).  Evolution also allows the development of new ‘traits’ that make us sustainable, however, and therefore can support us in surviving ‘revolutionary’ changes if we have sufficient desirable ‘traits’ to prevent total collapse.  The trick is to understand how you can evolve at the macro level to incorporate the changes that have occurred in the selection criteria of your market and to realign your capabilities as appropriate.  Often the safest way to do this is to have different services on offer that try different combinations of traits, hence keeping sensors within the environment to warn you of impending changes. 

Summary

As a result there is no question that both evolution and intelligence in design have a place in the creation of sustainable architectures (whether macro business architectures or micro service architectures).  We have to be precise in the ways in which we use this language, however; it is not sufficient to label a lack of design as ‘evolution’ (which I believe was Steve’s core point).  Evolution is a larger, exogenous force that shapes systems by highlighting and amplifying desirable traits and not something that we can rely on to reliably fix our design issues without an infinite amount of time and change capability.  We therefore need to apply intelligence to the process of design – even when there is great uncertainty – to try and narrow down the design space to minimise the amount we have to rely on evolution to arrive at a viable ‘system’; even once we get to this point, however, we need to be aware of the fact that evolution is an ongoing process of selection and amplification and design our business architectures with the flexibility necessary to recognise this fact.

More broadly I believe that we can also look at the application of ‘intelligence’ and ‘evolution’ as a matter of scale;  we can design individual services with a fair degree of intelligence, we can design our business capabilities with some fair approximations and then rely on evolution to improve them but we can only rely on evolution to shape the market itself and thus the selection criteria that define our participation.  For this reason strategies that stress adaptability (i.e. an ability to evolve in response to changing selection criteria) have to take precedence over strategies that stress certainty and efficiency.

Industrialised Service Delivery Redux II

22 Sep

In my previous post I discussed the way in which our increasingly sophisticated use of the Web is creating an unstoppable wave of change in the global business environment.  This resulting acceleration of change and expectation will require unprecedented organisational speed and adaptability whilst simultaneously driving globalisation and consumerisation of business.  I discussed my belief that companies will be forced to reform as a portfolio of systematically designed components with clear outcomes and how this kind of thinking changes the relationship between a business capability and its IT support.  In particular I discussed the need to create industrialised Service Delivery Platforms which vastly increase the speed, reliability and cost effectiveness of delivering service realisations. 

In this post I’ll move into the second part of the story, where I’ll look more specifically at how we can realise the industrialisation of service delivery through the creation of an SDP.

Industrialisation 101

There has been a great deal written about industrialisation over the last few years and most of this literature has focused on IT infrastructure (i.e. hardware), where components and techniques are more commoditised.  As an example, many of my Japanese colleagues have spent decades working with leaders in the automotive industry and experienced firsthand the techniques and processes used in zero-defect manufacturing and the application of lean principles.  Sharing this same mindset around reliability, zero defects and technology commoditisation, they created a process for delivering reliable and guaranteed outcomes through pre-integration and testing of combinations of hardware and software.  This kind of infrastructure industrialisation enables much higher success rates whilst simultaneously reducing the costs and lead times of implementation. 

In order to explore this a little further and to set some context, let’s just think for a moment about the way in which IT has traditionally served its business customers.  

[Figure: non-industrialised vs industrialised service delivery]

We can see that generally speaking we are set a problem to solve and we then take a list of products selected by the customer – or often by one of our architects applying personal preference – and we try to integrate them together on the customer’s site, at the customer’s risk and at the customer’s expense.  The problem is that we may never have used this particular combination of hardware, operating systems and middleware before – a problem that worsens exponentially as we increase the complexity of the solution, by the way – and so there are often glitches in their integration, it’s unclear how to manage them and there can be no guarantees about how they will perform when the whole thing is finally working.  As a result projects take longer than they should – because much has to be learned from scratch every time – cost a lot more than they should – because there are longer lead times to get things integrated, working and into management – and, most damningly, they are often unreliable, since there can be no guarantees that the combination will continue to work and learning is needed to understand how to keep it up and running.

The idea of infrastructure industrialisation, however, helps us to concentrate on the technical capability required – do you want a Java application server? Well here it is, pre-integrated on known combinations of hardware and software and with manageability built in but – most importantly – tested to destruction with reference applications so that we can place some guarantees around the way this combination will perform in production.  As an example, 60% of the time taken within Fujitsu’s industrialisation process is in testing.  The whole idea of industrialisation is to transfer the risk to the provider – whether an internal IT department or an external provider – so that we are able to produce consistent results with standardised form and function, leading to quicker, more cost effective and reliable solutions for our customers.
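As a sketch of this idea, the contract of an industrialised infrastructure template might look something like the following.  The names, fields and the notion of a ‘guaranteed uptime’ are purely illustrative, not a real product catalogue; the point is that only tested, known combinations may be provisioned, which is what lets the provider carry the risk:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InfrastructureTemplate:
    """A pre-integrated combination of hardware, OS and middleware.

    Illustrative only: models a template that has (or has not) been
    tested to destruction with reference applications.
    """
    name: str
    hardware: str
    os: str
    middleware: str
    tested: bool = False            # tested to destruction with reference apps
    guaranteed_uptime: float = 0.0  # guarantee the provider can now offer

def provision(template: InfrastructureTemplate) -> InfrastructureTemplate:
    # Industrialisation transfers risk to the provider: untested,
    # ad hoc combinations are refused rather than assembled on site.
    if not template.tested:
        raise ValueError(f"{template.name}: untested combination refused")
    return template

# "Do you want a Java application server? Well here it is" -- a known,
# pre-integrated combination with guarantees attached.
java_app_server = InfrastructureTemplate(
    name="java-app-server-v1",
    hardware="x86 blade", os="Linux",
    middleware="Java application server",
    tested=True, guaranteed_uptime=0.999,
)
```

In this model an ad hoc product list chosen by preference would simply fail the `provision` check, which is the behavioural difference between the two columns of the figure above.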

Now such industrialisation has slowly been maturing over the last few years but – as I stated at the beginning – has largely concentrated on infrastructure templating – hardware, operating systems and middleware combined and ready to receive applications.  Recent advances in virtualisation are also accelerating the commoditisation and industrialisation of IT infrastructure by making this templating process easier and more flexible than ever before.  Such industrialisation provides us with more reliable technology but does not address the ways in which we can realise higher level business value more rapidly and reliably.  The next (and more complex) challenge, therefore, is to take these same principles and apply them to the broader area of business service realisation and delivery.  The question is how we can do this?

Industrialisation From Top to Bottom

Well the first thing to do is understand how you are going to get from your expression of intent – i.e. the capability definitions I discussed in my previous post that abstract us away from implementation concerns – through to a running set of services that realise this capability on an industrialised Service Delivery Platform.  This is a critical concern since if you don’t understand your end to end process then you can’t industrialise it through templating, transformation and automation.

[Figure: end-to-end service realisation]

In this context we can look at our capability definitions and map concepts in the business architecture model down to classifications in the service model.  Capabilities map to concrete services, macro processes map to orchestrations, people tasks map to workflows, top level metrics become SLAs to be managed etc. The service model essentially bridges the gap between the expression of intent described by the target business architecture and the physical reality of assets needed to execute within the technology environment.
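To make the mapping concrete, here’s a minimal sketch in Python of how concepts in the business architecture model might be translated into service-model classifications. All names and structures are illustrative assumptions, not part of any real toolchain:

```python
from dataclasses import dataclass

# Hypothetical mapping from business-architecture concepts to
# service-model constructs, following the correspondences above.
CONCEPT_MAP = {
    "capability": "concrete_service",
    "macro_process": "orchestration",
    "people_task": "workflow",
    "top_level_metric": "sla",
}

@dataclass
class BusinessConcept:
    name: str
    kind: str  # one of CONCEPT_MAP's keys

def to_service_model(concepts):
    """Translate business-architecture concepts into service-model elements."""
    return [{"name": c.name, "type": CONCEPT_MAP[c.kind]} for c in concepts]

architecture = [
    BusinessConcept("Check Credit History", "capability"),
    BusinessConcept("Order-to-Cash", "macro_process"),
    BusinessConcept("Approve Exception", "people_task"),
    BusinessConcept("95% responses < 2s", "top_level_metric"),
]
service_model = to_service_model(architecture)
```

The point of the sketch is simply that the translation is mechanical: once the business architecture is captured in a structured form, the bridge to the service model can be automated rather than redrawn by hand each time.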

From here we broadly need to understand how each of our service types will be realised in the physical environment – so, for instance, we need a physical host to receive and execute each type of service, we need to understand how SLAs are provisioned so that we can monitor them, and so on.

Basically the concern at this stage is to understand the end to end process through which we transform the data captured at each stage into ever more concrete terms – all the way from logical expressions of intent, through greater detail about the messages, service levels and type of implementation required, to a whole set of assets that are physically deployed and executing on the physical service platform, thus realising the intent.

The core aim of this process must be to maximise both standardisation of approach and automation at each stage to ensure repeatability and reliability of outcome.  Essentially our aim is to give business capability owners much greater reliability and rapidity of outcome as they look to realise business value: guarantees not only that we can realise functionality rapidly but also that these realisations will execute reliably and at low cost.  In addition we must ensure that the linkage between each level of abstraction remains in place, so that information about running physical services can be used to judge the performance of the capability they realise – maximising the levers of change available to the organisation by putting it in control of the facts and allowing it to ‘know sooner’ what is actually happening.

Having an end to end view of this process creates the rough outline of the production line needed to realise value – it gives us a feel for the overall requirements.  Unfortunately, however, that’s the nice bit, the kind of bit that I like to do. Whilst we need to understand broadly how we envisage an end to end capability realisation process working, the real work is in the nasty bit: when it comes to industrialisation, work has to start at the bottom.

Industrialisation from Bottom to Top

If you imagine the creation of a production line for any kind of physical good, it obviously has to be designed to optimise the creation of the end product. Every little conveyor belt or twisty robot arm has to be calibrated to nudge or weld the item in exactly the same spot to achieve repeatability of outcome. In the same way, any attempt to industrialise the process of capability realisation has to start at the bottom, with a consideration of the environment within which the final physical assets will execute and of how to create assets optimised for this environment as efficiently as possible. I use a simple ‘industrialisation pyramid’ to visualise this concept, since increasingly specialised and high value automation and industrialisation needs to be built on broader and more generic industrialised foundations. In reality the process is highly iterative – you need to recalibrate continually both up and down the hierarchy to ensure that the process is efficient and realises the expressed intent – but for the sake of simplicity you can assume that we just build it up from the bottom.

[Figure: industrialisation pyramid]

So let’s start at the bottom with the core infrastructure technologies – what are the physical hosts required to support service execution? What physical assets will services need to create in order to execute on top of them? How do the hosts combine to provide the necessary broad infrastructure, and what quality of service guarantees can we put around each kind of host? Slightly more broadly, how will we manage each of the infrastructure assets? This stage requires a broad range of activity: not just standardising and templatising the hosts themselves but also aggregating them into a platform and creating all of the information standards and processes that deployed services will need to conform to so that we can find, provision, run and manage them successfully.
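As a rough illustration of what this templating might look like in practice, here’s a hypothetical registry of pre-tested host templates, each carrying the quality of service guarantees we could offer; the template names and figures are invented for the example:

```python
# Illustrative sketch only: a registry of industrialised, pre-tested
# infrastructure host templates with their guarantees attached.
HOST_TEMPLATES = {
    "java_app_server": {"cpus": 4, "ram_gb": 16, "availability": 0.999},
    "workflow_engine": {"cpus": 2, "ram_gb": 8,  "availability": 0.995},
    "database_host":   {"cpus": 8, "ram_gb": 64, "availability": 0.9999},
}

def provision(template_name, instances=1):
    """Build a provisioning request from a known, tested template.

    Refusing unknown templates is the point: only standardised,
    pre-guaranteed hosts are allowed onto the platform.
    """
    if template_name not in HOST_TEMPLATES:
        raise ValueError(f"No industrialised template for {template_name!r}")
    spec = HOST_TEMPLATES[template_name]
    return {"template": template_name, "instances": instances, **spec}

request = provision("java_app_server", instances=3)
```

Note that the quality of service figure travels with the host in the provisioning request – the guarantee is part of the template, not something negotiated per deployment.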

Moving up the pyramid we can now start to think in more conceptual terms about the reference architecture we want to impose – the service classifications we want to use, the patterns and practices we want to apply to the realisation of each type, and more specifically the development practices.  Importantly, we need to be clear about how these service classifications map seamlessly onto the infrastructure hosting templates and lower level management standards to ensure that our patterns and practices are optimised – it’s only in this way that we can guarantee outcomes by streamlining the realisation and asset creation process. Gradually, through this definition activity, we build up a metamodel of the types of assets that need to be created as we move from the conceptual to the physical, and of the links and transformations between them. This is absolutely key, as it enables us to move to the next level – which I call automating the ‘means of production’.

This level becomes the production line that pushes us reliably and repeatably from capability definition through to physical realisation. The metamodel we built up in the previous tier helps us to define domain specific languages that simplify the process of generating the final output, allowing the capture of data about each asset and the background generation of code that conforms to our preferred classification structure, architectural patterns and development practices. These DSLs can then be pulled together into “factories” specialised for the realisation of each type of asset, with each DSL representing a different viewpoint on the particular capability in hand.  Individual factories can then be aggregated into a ‘capability realisation factory’ that drives the end to end process.  As I stated in my previous post, the whole factory and DSL space is mildly controversial at the moment, with Microsoft advocating explicit DSL and factory technologies and others continuing to work towards MDA or flexible open source alternatives.  Suffice it to say that the approaches I’m advocating are possible via either model – a subject I may return to with some examples of each (for an excellent consideration of this whole area, consult Martin Fowler’s coverage).
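To give a flavour of what ‘automating the means of production’ might mean, here’s a toy sketch in which metadata captured through a DSL-like structure drives background generation of skeleton code conforming to a preferred pattern. The skeleton template, classification name and asset structure are purely hypothetical:

```python
# Toy "factory": DSL-captured asset metadata drives generation of code
# skeletons that conform to the house classification and patterns.
SERVICE_SKELETONS = {
    "domain": (
        "class {name}Service:\n"
        '    """Domain service: manages and manipulates {entity} data."""\n'
        "    def get_{entity}(self, key): ...\n"
        "    def save_{entity}(self, obj): ...\n"
    ),
}

def generate(asset):
    """Generate skeleton code for one asset described by DSL metadata."""
    template = SERVICE_SKELETONS[asset["classification"]]
    return template.format(name=asset["name"], entity=asset["entity"])

code = generate({"classification": "domain",
                 "name": "Customer",
                 "entity": "customer"})
```

A real factory would of course generate far richer artefacts (contracts, tests, deployment descriptors), but the principle is the same: the developer supplies intent through the DSL and the patterns arrive for free.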

The final level of this pyramid is to start taking the capability realisation factories and tailoring them for the creation of industry specific offerings – perhaps a whole set of ‘factories’ around banking, retail or travel capabilities. From my perspective this is the furthest out and may actually not come to pass; despite Jack Greenfield’s compelling arguments, I feel that the rise of SOA and SaaS will obviate the need to generate the same application many times by allowing solutions to be composed from shared utilities.  The idea of an application or service specific factory assumes a continuation of IT oversupply through many deployments; the key issue at stake in the industrialisation arena is therefore that of democratising access to the means of capability production by giving people the tools to create new value rapidly and reliably.  As a result I feel that improving the reliability and repeatability of capability realisation across the board is more critical than a focus on any particular industry. (This may change in future with demand, however, and one potential area of interest is industry specific composition factories rather than industry specific application generation factories.)

Delivering Industrialised Services

So we come at last to a picture that demonstrates how the various components of our approach come together from a high level process perspective.

[Figure: service factory process]

Across the top we have our service factory. We start on the left hand side with capability modelling, capturing the metadata that describes the capability and what it is meant to do. In this context we can use a domain specific language that allows us to model capabilities explicitly within the tooling. Our aim is then to use the metadata captured about a capability to realise it as one or more services. Information from the metamodel is transformed into an initial version of the service before we use a service domain language to add further detail about contracts, messages and service levels. It is important to note that at this point the service is still abstract – we have not bound it to any particular realisation strategy. Once we have designed the service in the abstract we can then choose an implementation strategy – example classifications could be interaction services for UIs, workflow services for people tasks, process services for service orchestrations, domain services that manage and manipulate data, and integration services that allow adaptation and integration with legacy or external systems.
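The flow from abstract service to bound realisation strategy could be sketched as follows; the classification names come from the paragraph above, but the types and fields are illustrative assumptions rather than any real tooling:

```python
from dataclasses import dataclass

# The five example realisation classifications discussed above.
STRATEGIES = {
    "interaction",   # UIs
    "workflow",      # people tasks
    "process",       # service orchestrations
    "domain",        # data management and manipulation
    "integration",   # adaptation to legacy or external systems
}

@dataclass
class AbstractService:
    name: str
    contract: dict
    sla: dict
    strategy: str = None  # stays unbound until a strategy is chosen

    def bind(self, strategy):
        """Bind the abstract service to one realisation classification."""
        if strategy not in STRATEGIES:
            raise ValueError(f"Unknown realisation strategy: {strategy}")
        self.strategy = strategy
        return self

svc = AbstractService("CheckCreditHistory",
                      contract={"in": "CustomerId", "out": "CreditReport"},
                      sla={"response_ms": 2000})
svc.bind("domain")
```

The design point is that everything above `bind` – contract, messages, service levels – is strategy-neutral, so the same abstract design could later be rebound to a different realisation without rework.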

Once we have chosen a realisation strategy all of the metadata captured about the service is used to generate a partially populated realisation of the chosen type – in this context we anticipate having a factory for each kind of service that will control the patterns and practices used and provide guidance in context to the developer.

Once we have designed our services we now want to be able to design a virtual deployment environment for them based wholly on industrialised infrastructure templates. In this view we can configure and soft test the resources required to run our services before generating provisioning information that can be used to create the virtual environment needed to host the services.
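A minimal sketch of generating provisioning information from a virtual deployment design might look like this (an assumed structure, not a real API; the template names and resource figures are invented):

```python
# A virtual deployment design: each service is assigned an
# industrialised infrastructure template plus resource needs.
deployment_design = [
    {"service": "CheckCreditHistory", "template": "domain_host",
     "cpus": 2, "ram_gb": 8},
    {"service": "CreditReportUI", "template": "interaction_host",
     "cpus": 1, "ram_gb": 4},
]

def provisioning_info(design):
    """Aggregate the design into a request the service platform can execute."""
    return {
        "hosts": [d["template"] for d in design],
        "total_cpus": sum(d["cpus"] for d in design),
        "total_ram_gb": sum(d["ram_gb"] for d in design),
    }

info = provisioning_info(deployment_design)
print(info)
# → {'hosts': ['domain_host', 'interaction_host'], 'total_cpus': 3, 'total_ram_gb': 12}
```

This is the ‘soft test’ step: because the templates carry known resource and quality of service characteristics, the environment can be checked and costed on paper before any virtual machine is actually created.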

In the service platform the provisioning information can be used to create a number of hosting engines, deploy the services into them, provision the infrastructure to run them and then set up the necessary monitoring before finally publishing them into a catalogue. The Service Platform therefore consists of a number of specialised infrastructure hosts supporting runtime execution, along with runtime services that provide – for example – provisioning and eventing support.

The final component of the platform is what I call a ‘service wrap’. This is an implementation of the ITSM disciplines tailored for our environment. In this context you will find the catalogue, service management, reporting and metering capabilities that are needed to manage the services at runtime (this is again a subset to make a point). In this space the service catalogue will bring together service metadata, reports about performance and usage plus subscription and onboarding processes.  Most importantly there is a strong link between the capabilities originally required and the services used to realise them, since both are linked in the catalogue to support business performance management. In this context we can see a feedback loop from the service wrap which enables capability owners to make decisions about effectiveness and rework their capabilities appropriately.
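The feedback loop from the service wrap could be sketched as a simple comparison of the capability’s original commitments against runtime measurements from the catalogue; the metric names and figures here are invented for illustration:

```python
# Illustrative feedback loop: the catalogue links each capability's
# original commitments to runtime measurements from the service wrap,
# so capability owners can judge effectiveness.
capability_targets = {
    "CheckCreditHistory": {"response_ms": 2000, "cost_per_call": 0.05},
}
runtime_metrics = {
    "CheckCreditHistory": {"response_ms": 2350, "cost_per_call": 0.04},
}

def performance_gaps(targets, observed):
    """Report, per capability, which commitments are currently being missed."""
    gaps = {}
    for cap, target in targets.items():
        missed = {metric: observed[cap][metric]
                  for metric, limit in target.items()
                  if observed[cap][metric] > limit}
        gaps[cap] = missed
    return gaps

print(performance_gaps(capability_targets, runtime_metrics))
# → {'CheckCreditHistory': {'response_ms': 2350}}
```

Here the response time commitment is being missed while cost is within target – exactly the kind of fact a capability owner needs in order to decide whether and where to rework the realisation.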

Summary

In this second post of three I have demonstrated how we can use the increasing power of abstraction delivered by service-orientation to drive the industrialisation of capability realisation.  Although current initiatives broadly target the infrastructure space, I have argued that full industrialisation across the infrastructure, application, service and business domains requires the creation and consistent application of known patterns, processes, infrastructures and skills to increase repeatability and reliability. We might sacrifice some flexibility in technology choice or systems design, but the increasing commoditisation of technology makes this far less important than cost effectiveness and reliability. It’s particularly important to realise that when industrialising you need to understand your end to end process and then do the nasty bit – bottom up, in excruciating detail.

So in the third and final post on this subject I’m going to look a little bit at futures and how the creation of standardised and commoditised service delivery platforms will affect the industry more broadly – essentially as technology becomes about access rather than ownership so we will see the rise of global service delivery platforms that support capability realisation and execution on behalf of many organisations.

Industrialised Service Delivery Redux I

23 Jul

I’ve been terribly lax with my posting of late due to pressures of work, but thought I had best put something up just to keep my blog (barely) alive, lol.  Following on from my previous posts on Cloud Computing and Service Delivery Platforms I thought I would go the extra step and talk about my views on industrialisation in the platform and service delivery spaces.  I made this grand decision since my last post included a reference to a presentation I did in Redmond last year, and I thought it would be useful to actually tell the story rather than just punt up the slides (which can be pretty meaningless without a description).  In addition there’s been a huge amount of coverage of cloud computing, platform as a service and industrialisation lately, so revisiting the content of that particular presentation seemed like a good idea.  If I’m honest I also have to admit that I can largely just rip the notes out of the slideset for a quick post, but I don’t feel too guilty given that it’s a hot topic, lol.  I’ll split this story across three posts: part I will cover why I believe industrialisation is critical to supporting agility and reliability in the new business environment, part II will cover my feelings on how we can approach the industrialisation of business service delivery, and part III will look at the way in which industrialisation accelerates the shift to shared Service Delivery Platforms (or PaaS, or utility computing, or cloud computing – take your pick).

The Industrialisation Imperative

So why do I feel that IT industrialisation is so important?  Well essentially I believe that we’re on the verge of some huge changes in the IT industry and that we’re only just seeing the very earliest signs of these through the emergence of SOA, Web 2.0 and SaaS/PaaS. I believe that organisations are going to be forced to reform and disaggregate, and that technology will become increasingly commoditised. Essentially we all need to recognise these trends and learn the lessons of industrialisation from other, more mature industries – if we can’t begin to deliver IT that is rapid, reliable, cost effective and – most importantly – guaranteed to work, then what hope is there?  IT has consistently failed to deliver expected value through an obsession with technology for its own sake, and the years of cost overruns, late delivery and unreliability are well documented; too often projects seem to ignore the lessons of history and start from ground zero. This has got to change. Service orientation is allowing us to express IT in ways that are closer to the business than ever before, reducing the conceptual gap that has allowed IT to hide from censure behind complexity. Software as a Service is starting to prove that there are models that allow us to deliver the same function to many people at lower costs born of economies of scale. And we’re all going to have to finally recognise that not everyone is special, that most organisations don’t need customisation or tailoring for 80% of what they do, and that SOA’s help in refocusing on business value will draw out the lunacy of many IT investments.

In this three part post I therefore want to share some of my ideas around how we can industrialise IT. Firstly, I’m going to talk about the forces acting on organisations that will drive increasing specialisation and disaggregation, and go on to discuss business capabilities and how they accelerate the commoditisation of IT.  Secondly, I’m going to discuss approaches to the industrialisation of service delivery and look at the different levels of industrialisation that need to be considered.  Finally I’ll talk about how the increasing commoditisation and standardisation of IT will accelerate the process of platform consolidation and the resulting shift towards models that recognise the essentially scale based economics of IT platform provision.

The Componentisation of Business

Over the last 100 years we’ve seen a gradual shift towards concentration on smaller levels of business organisation due to the decreasing costs of executing transactions with third parties. Continuing discontinuities around the web are sending these transaction costs into free fall, however, and I believe that this is going to trigger yet another reduction in business aggregation and cause us to focus on a smaller unit of business granularity – the capability (for an early post on this subject see here).

[Figure: Kearney capabilities]

Essentially I believe that there are three major forces that will drive organisations to transform in this way:

1) Accelerating change;

2) Increasing commoditisation; and

3) Rapidly decreasing collaboration costs due to the emergence of the web as a viable global business network.

I’ll consider each in turn.

Accelerating Change

As the rate of change increases, so adaptability becomes a key requirement for survival. Most organisations are currently not well suited for this challenge, however, as they have structures carried over from a different age based on forward planning and command and control – they essentially focus inwards rather than outwards. The lack of systematic design in most organisations means that they rarely understand clearly how value is delivered and so cannot change effectively in response to external demand shifts. In order to become adaptable, however, organisations need to systematically understand what capabilities they need to satisfy demand and how these capabilities combine to deliver value – a systematic view enables us to understand the impact of change and to reconfigure our capabilities in response to shifts in external demand.

Increasing Commoditisation

This capability-based view is also extremely important in addressing the shrinking commoditisation cycle. Essentially consumers are now able to instantly compare our goods and services with those from other companies – and switch just as quickly. A capability-based view enables us to remove repetition and waste across organisational silos and replace them with shared capabilities to maximise our returns, both while the going is good and when price sensitivity begins to bite.

Decreasing Transaction Costs

The final shift is to use our clearer view of the capabilities we need to begin thinking about those that are truly differentiating. The market will put such pressure on us to excel that we will be driven to take advantage of falling transaction costs and the global nature of the web to replace our non-differentiating capabilities with those of specialised partners – simultaneously increasing our focus, improving our overall proposition and reducing costs.

As a result of these drivers we view business capabilities as a key concept in the way in which we need to approach the industrialisation of services.

Componentisation Through Business Capabilities

So I’ve talked a lot about capabilities – how do they enable us to react to the discontinuities that I’ve discussed? Well to address the issues of adaptability and understand which things we want to do and which we want to unbundle we really need a way of understanding what the component parts of our organisation are and what they do.

Traditionally organisations use business processes, organisational structures or IT architectures as ways of expressing organisational design – perhaps all three if they use an enterprise architecture method. The big problem with these views, however, is that they tell us very little about what the combined output actually is – what is the thing that is being done, the essential business component that is being realised? Yes, I understand that there are some people doing stuff using IT, but what does it all amount to? Even worse, these views of the business are all inherently unstable, since they are expressions of how things get done at a point in time; they change regularly and at different rates, and therefore make trying to understand the organisation a bit like catching jelly – you might get lucky and hold it for a second but it’ll shift and slip out of your grasp. This means that leaders within the organisation lack a consistent decision making framework and see instead a constantly shifting mass of incomplete and inconsistent detail that makes it impossible to take well reasoned strategic decisions.

Capabilities bring another level of abstraction to the table; they allow us to look at the stable, component parts of the organisation without worrying about how they work. This gives us the opportunity to concentrate systematically on what things the organisation needs to do – in terms of outputs and commitments – without concerning ourselves with the details of how these commitments will be realised. This enables enterprise leaders to concentrate on what is required whilst delegating implementation to managers or partners. Essentially they are an expression of intent and express strategy as structure. Capabilities are then realised by their owners using a combination of organisational structures, role design, business processes and technology – all of which come together to deliver to the necessary commitments.

[Figure: component anatomy]

In this particular example we see the capability from both the external and internal perspectives – from the perspective of the business designer and the consumer the capability is a discrete component that has a purpose – in this case enabling us to check credit histories – and a set of metrics – for simplicity we’ve included just service level, cost and channels. From the perspective of the capability owner, however, the capability consists of all of the different elements needed to realise the external commitments.
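A minimal sketch of the external view of such a capability – a purpose plus service level, cost and channel metrics, with the internal realisation deliberately hidden – might look like this (all values are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    """The external view: commitments only, no implementation detail."""
    purpose: str
    service_level: str
    cost: str
    channels: tuple

check_credit = Capability(
    purpose="Check credit histories",
    service_level="95% of checks completed within 2 seconds",
    cost="0.05 per check",
    channels=("web", "branch", "partner API"),
)

# Consumers and business designers see only the commitments;
# how they are met is entirely the capability owner's concern.
print(check_credit.purpose)  # prints "Check credit histories"
```

Making the type frozen reflects the stability argument above: the external contract of a capability should change slowly, even while the organisational structures, processes and technology behind it churn.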

So how does a shift to capabilities affect the relationship between the organisation and its IT provision?

IT Follows Move from “How” to “What”

One of the big issues for us all is that a concentration on capabilities will begin to push technology to the bottom of the stack – essentially it becomes much more commoditised.

Capability owners will now have a much tighter scope in the form of a well defined purpose and set of metrics; this gives them greater clarity and leaves them able to look for rapid and cost effective realisation rather than a mishmash of hardware, software or packages that they then need to turn into something that might eventually approximate to their need.  Furthermore, the codification of their services will expose them far more clearly to the harsh realities of having to deliver well defined value to the rest of the organisation; they will no longer be able to ‘lose’ the time and cost of messing about with IT in the general noise of a less focused organisation.

As a result capability owners will be looking for two different things:

1) Is there anyone who can provide this capability to me externally at the level of performance that I need – for instance a SaaS or BPU offering available on a usage or subscription basis; or

2) Failing that, who can help me to realise my capability as rapidly, reliably and cost effectively as possible?

The competition is therefore increasingly going to move away from point technologies – which become increasingly irrelevant – and move towards the delivery of outcomes using a broad range of disciplines tightly integrated into a rapid capability realisation platform.

[Figure: from “how” to “what”]

Such realisation platforms – which I have been calling Service Delivery Platforms to denote their holistic nature – require us to bring infrastructure, application, business and service management disciplines into an integrated, reliable and scalable platform for capability realisation, reflecting the fact that service delivery is actually an holistic discipline and not a technology issue. Most critically – at least from our perspective – this platform needs to be highly industrialised: built from repeatable, reliable and guaranteed components in the infrastructure, application, business and service dimensions to guarantee successful outcomes to our customers.

So what would a Service Delivery Platform actually look like?

A Service Delivery Platform

[Figure: service delivery platform]

In this picture I’ve surfaced a subset of the capabilities that I believe are required in the creation of a service delivery platform suitable for enterprise use – I’m not being secretive, I just ran out of room and so had to jettison some stuff.

If we start at the bottom we can see that we need to have highly scalable and templatised infrastructure that allows us to provide capacity on demand to ensure that we can meet the scaling needs of capability owners as they start to offer their services both inside and outside the organisation.

Above this we have a host of runtime capabilities that are needed to manage services running within the environment – identity management, provisioning, monitoring to ensure that delivered services meet their service levels, metering to support various monetisation strategies both from our perspective and from the capability owners perspective, audit and non-repudiation and brokering to external services in order to keep tabs on their performance for contractual purposes.

Moving up we have a number of templatised hosting engines – essentially we need to break the service space down using a classification to ensure that we are able to address different kinds of services effectively. These templates are essentially virtual machines that have services deployed into them and which are then delivered on the virtualised hardware; the infrastructure becomes part of the service, decoupling services both from each other and from the physical environment.

The top level in the centre area is what we call service enablement. In this tier we essentially have a whole host of services that make the environment workable – examples that we were able to fit in here are service catalogue, performance reporting, subscription management – the whole higher level structure that brings services into the wider environment in a consistent and consumable way.

Moving across the left we can see that in order to deliver services developers will need to have standardised and templatised shared development support environments to support collaboration, process enablement and asset management.

Across on the right we have operational support – this is where we place our ITIL/ISO 20000 service management processes and personnel to ensure that all services are treated as assets – tracked, managed, reported upon, capacity managed and so on.

On the far right we have a business support set of capabilities that support customer queries about services, how much they’ve been charged and where we also manage partners, perform billing or carry out any certification activity if we want to create new templates for inclusion in the overall platform.

Finally across the top we have what I call the ‘service factory’ – a highly templatised modelling and development environment that drives people from a conceptual view of the capabilities to be realised down through a process of service design, realisation and deployment against a set of architectural and development patterns represented in DSLs.  These DSLs could be combinations of UML profiles, little languages or full DSLs implemented specifically for the service domain.

Summary

In this post I have discussed my views on the way in which businesses will be forced to componentise and specialise, and how this kind of thinking changes the relationship between a business capability and its IT support.  I’ve also briefly highlighted some of the key features that would need to be present within an holistic and industrialised Service Delivery Platform in order to increase the speed, reliability and cost effectiveness of delivering service realisations.  In the next post I’ll move on to the second part of the story, where I’ll look more specifically at realising the industrialisation of service delivery through the creation of an SDP.
