
Is Social Media Rubbish?

8 Jul

I’ve read a few interesting posts recently relating to Social Media and ‘Enterprise 2.0’.  First up was Peter Evans-Greenwood on the myth of social organisations, given their incompatibility with current structures and the lack of business cases for many efforts.  From there I followed links out to Martin Linssen and Dennis Howlett, both of whom commented on the current state of Enterprise 2.0 and social business: in particular their lack of clarity (i.e. are they primarily about tools, people or marketing efforts?), the often ironic lack of focus on people in favour of technology, and the paucity of compelling business cases.  They also highlighted the continued migration of traditional vendors from one hot topic to another (e.g. from ECM to Enterprise 2.0 to Social Business) to support updated positioning for their products, creating confusion and distraction by suggesting that success comes from owning specific tools rather than from particular ways of working.

Most damningly of all, I found a link (courtesy of @adamson) to some strong commentary from David Chalke of Quantum Market Research suggesting that:

Social media: ‘Oversold, misused and in decline’

All of these discussions made me think a bit about my own feelings about these topics at the moment.

The first thing to state is that it seems clear to me that in the broadest sense businesses will increasingly exist in extended value webs of customers and partners.  From that perspective ‘business sociability’ – i.e. the ability to take up a specialised position within a complex value web of complementary partners and to collaborate across organisational and geographical boundaries – will be critical.  The strength of an organisation’s network will increasingly define the strength of their capabilities.  Social tools that support people in building useful networks and in collaborating across boundaries – like social networks, micro-blogs, blogs, wikis, forums etc – will be coupled with new architectures and approaches – like SOA, open APIs and cloud computing – as the necessary technical foundations for “opening up” a business and allowing it to participate in wider value creation networks.  As I’ve discussed before, however, tooling will only exist to support talented people undertaking creative processes within the context of broader networks of codified and automated processes.

So whilst these tools have the potential to support increasing participation in extended value webs, develop knowledge and support the work of our most talented people, it’s clear that throwing random combinations of them at the majority of existing business models without significant analysis of this broader picture is not only pointless but also extremely distracting and potentially very damaging (since failed, ill-thought-through initiatives can give entrenched interests an excuse to ignore the broader change for longer).

Most of the organisations I have worked with are failing to see the bigger picture outlined above, however.  For them ‘social tools’ are either all about the way in which they make themselves ‘cooler’ or ‘more relevant’ by ‘engaging’ in social media platforms for marketing or customer support (looking externally) or something vaguely threatening and of marginal interest that undermines organisational structures and leads to staff wasting time outside the restrictions of their job role (looking internally).  To date they seem to be less interested in how these tools relate to a wider transformation to more ‘social’ (i.e.  specialised and interconnected) business models.  As with the SOA inertia I discussed in a previous blog post there is no heartfelt internal urgency for the business model reconfiguration required to really take social thinking to the heart of the organisation.  Like SOA, social tools drive componentisation and specialisation along with networked collaboration and hence the changes required for one are pretty similar to the changes required for the other.  As with SOA it may take the emergence of superior external service providers built from the ground up to be open, social and designed for composition to really start to trigger internal change.

In lieu of reflecting on the deeper and more meaningful trends towards ‘business model sociability’ that are eroding the effectiveness of their existing organisation, then, many are currently trying to bolt ‘sociability’ onto the edge of their current model as simply another channel for PR activity.  Whilst this often goes wrong it can also add terrific value if done honestly or with a clear business purpose.  Mostly it is done with little or no business case – it is after all an imperative to be more social, isn’t it? – and for each accidental success that occurs because a company’s unarticulated business model happens to be right for such channels there are also many failures (because it isn’t).

The reality is that the value of social tools will depend on the primary business model you follow (and increasingly the business model of each individual business capability in your value web, both internal and external – something I discussed in more detail here).

I think my current feeling is therefore that we have a set of circumstances that go kind of like this:

  1. There is an emerging business disruption that will drive organisational specialisation around a set of ‘business model types’ but which isn’t yet broadly understood or seen by the majority of people who are busy doing real work;
  2. We have a broad set of useful tools that can be used to create enormous value by fostering collaboration amongst groups of people across departmental, organisational and geographic boundaries; and
  3. There are a small number of organisations who – often through serendipity – have happened to make a success of using a subset of these tools with particular consumer groups due to the accidental fit of their primary business model with the project and tools selected.

As a result, although most people’s reptilian brains instinctively feel that ‘something’ big is happening, instead of:

  • focusing on understanding their future business model (1) before
  • selecting useful tools to amplify this business model (2) and then
  • using them to engage with appropriate groups in a culturally appropriate way (3)

People are actually:

  • trying to blindly replicate others’ serendipitous success (3)
  • with whatever tools seem ‘coolest’ or most in use (2) and
  • with no hope of fundamentally addressing the disruptions to their business model (1)

Effectively most people are therefore coming at the problem from entirely the wrong direction and wasting time, money and – potentially – the good opinion of their customers.

More clearly – rather than looking at their business as a collection of different business models and trying to work out how social tools can help in each different context, companies are all trying to use a single approach based largely on herd behaviour when their business model often has nothing directly to do with the target audience.  Until we separate the kinds of capabilities that require the application of creative or networking talent, understand the business models that underpin them and then analyse the resulting ‘types’ of work (and hence outcomes) to be enabled by tooling we will never gain significant value or leverage from the whole Enterprise 2.0 / social business / whatever field.

What’s the Future of SOA?

9 Nov

EbizQ asked last week for views on the improvements people believe are required to make SOA a greater success.  I think that if we step back we can see some hope – in fact increasing necessity – for SOA, and the cloud is going to be the major factor in this.

If we think about the history of SOA to date it was easy to talk about the need for better integration across the organisation, clearer views of what was going on or the abstract notion of agility. Making it concrete and urgent was more of an issue, however. Whilst we can discuss the ‘failure’ of SOA by pointing to a lack of any application of service principles at a business level (i.e. organisationally through some kind of EA) this is really only a symptom and not the underlying cause. In reality the cause of SOA failure to date has been business inertia – organisations were already set up to do what they did, they did it well enough in a push economy and the (understandable) incentives for wholesale consideration of the way the business worked were few.

The cloud changes all of this, however. The increasing availability of cloud computing platforms and services acts as a key accelerator of specialisation and pull business models, since it allows new entrants to join the market quickly, cheaply and scalably and to be more specialised than ever before. As a result many organisational capabilities that were previously uneconomic as market offerings are becoming increasingly viable because of the global nature of cloud services. All of these new service providers need to make their capabilities easy to consume, however, and as a result are making good use of what people now call ‘APIs’ in a Web 2.0 context but which are really just services; this is important because one of the direct consequences of specialisation is the need to be hooked into the maximum number of appropriate value web participants as easily as possible.
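
To make the ‘an API is really just a service’ point concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the contract, the credit_check operation, the field names); it simply illustrates that what a specialised provider publishes is a contract plus an endpoint that any value web participant can consume without caring how it is implemented.

```python
# Hypothetical illustration: a provider's "API" treated as a service contract.
import json

# The provider publishes a contract: operation, expected inputs, promised outputs.
CREDIT_CHECK_CONTRACT = {
    "operation": "credit_check",
    "inputs": ["company_id"],
    "outputs": ["rating", "limit"],
}

def credit_check(request: dict) -> dict:
    """Stub standing in for the provider's hosted endpoint."""
    return {"rating": "A", "limit": 250_000}

def call_service(contract: dict, implementation, payload: dict) -> dict:
    """Consumer-side call: validate against the contract, then invoke."""
    missing = [f for f in contract["inputs"] if f not in payload]
    if missing:
        raise ValueError(f"payload missing required fields: {missing}")
    response = implementation(payload)
    return {k: response[k] for k in contract["outputs"]}

if __name__ == "__main__":
    print(json.dumps(call_service(CREDIT_CHECK_CONTRACT, credit_check,
                                  {"company_id": "acme-123"})))
```

The consumer only depends on the published contract, which is exactly what makes it easy to plug such a provider into a wider value web.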

On the demand side, as more and more external options become available in the marketplace offering the potential to replace capabilities that enterprises have traditionally executed in house, so leaders will start to rethink the purpose of their organisations and leverage the capabilities of external service providers in place of their own.

As a result cloud and SOA are indivisible if we are to realise the potential of either; cloud enables a much broader and more specialised set of business service providers to enter a global market with cost and capability profiles far better than those an enterprise can deliver internally. Equally importantly, however, these providers will implicitly (but concretely) create a ‘business SOA catalogue’ within the marketplace, removing the need for organisations to undertake a difficult internal slog to re-implement or re-configure outdated capabilities for reuse in service models. Organisations need to use this insight now to trigger the use of business architecture techniques to understand their future selves as service-based organisations – both by using external services as archetypes to help them understand the ways in which they need to change and offer their own specialised services, and by working with potential partners to co-develop and then disaggregate those services in which they don’t wish to specialise in future.

Having said all that to set the scene for my answer(!), I believe that SOA research needs to focus on raising the concepts of IT-mediated service provision to a business level: concrete modelling of business capabilities and value webs, along with complex service levels, contracts, pricing and composition; new cloud development platforms, tooling and management approaches linked more explicitly to business outcomes and giving specialised support to different kinds of work; and the emergence of new third parties who will mediate, monitor and monetise such relationships on behalf of participants in order to provide the required trust.

All in all I guess there’s still plenty to do.

Cloud vs Mainframes

19 Oct

David Linthicum highlights some interesting research about mainframes and their continuation in a cloud era.

I think David is right that mainframes may be one of the last internal components to be switched off and that in 5 years most of them will still be around.  I also think, however, that the shift to cloud models may have a better chance of achieving the eventual decommissioning of mainframes than any previous technological advance.  Hear me out for a second.

All previous new generations of technology looking to supplant the mainframe have essentially been slightly better ways of doing the same thing.  Whilst we’ve had massive improvements in the cost and productivity of hardware, middleware and development languages, essentially we’ve continued to be stuck with the purchase and ownership of costly and complex IT assets. As a result, whilst most new development has moved to other platforms, the case for shifting away from the mainframe has never seriously held water. Redevelopment would generate huge expense and risk yet deliver no fundamental business shift. Essentially you would still own and pay for a load of technology ‘stuff’ and the people to support it, even if you successfully navigated the huge organisational and technical challenges required to move ‘that stuff’ to ‘this stuff’. In addition the costs already sunk into the assets and the technology cost barriers to other people entering a market (due to the capital required for large-scale IT ownership) also added to the general inertia.

At its heart cloud is not a shift to a new technology but – for once – genuinely a shift to a new paradigm. It means capabilities are packaged and ready to be accessed on demand.  You no longer need to make big investments in new hardware, software and skills before you can even get started. In addition suddenly everyone has access to the best IT, and so your competitors (and new entrants) can immediately start building better capabilities than you without the traditional technology-based barriers to entry. This leads to four important considerations that might eventually spell the end of the mainframe:

  1. Should an organisation decide to develop its way off the mainframe they can start immediately without the traditional need to incur the huge expense and risk of buying hardware, software, development and systems integration capability before they can even start to redevelop code.  This removes a lot of the cost-based risks and allows a more incremental approach;
  2. Many of the applications implemented on mainframes will increasingly be in competition with external SaaS applications that offer broadly equivalent functionality.  In this context moving away from the mainframe is even less costly and risky (whilst still a serious undertaking) since we do not even need to redevelop the functionality required;
  3. The nature of the work that mainframe applications were set up to support (i.e. internal transaction processing across a tight internal value chain) is changing rapidly as we move towards much more collaborative and social working styles that extend across organisational boundaries.  The changing nature of work is likely to eat away further at the tightly integrated functionality at the heart of most legacy applications and leave fewer core transactional components running on the mainframe; and
  4. Most disruptive of all, as organisations increasingly take advantage of falling collaboration costs to outsource whole business capabilities to specialised partners, so much of the functionality on the mainframe (and other systems) becomes redundant since that work is no longer performed in house.

I think that the four threads outlined here have the possibility to lead to a serious decline in mainframe usage over the next ten years.

But then again they are like terminators – perhaps they will simply be acquired gradually by managed service providers offering to squeeze the cost of maintenance, morph into something else and survive in a low-grade capacity for some time.

Private Clouds “Surge” for Wrong Reasons?

14 Jul

I read a post by David Linthicum today on an apparent surge in demand for Private Clouds.  This was in turn spurred by thoughts from Steve Rosenbush on increasing demand for Private Cloud infrastructures.

To me this whole debate is slightly tragic, as I believe most people are framing the wrong issues when considering public vs private cloud (and frankly it is a ridiculous debate: in my mind ‘the cloud’ can only exist ‘out there, somewhere’ and thus be shared; a ‘private’ cloud can only be a logically separate area of a shared infrastructure, not an organisation-specific infrastructure that merely shares some of the technologies and approaches – which is business as usual and not a cloud.  For that reason when I talk about public clouds I also include such logically private clouds running on shared infrastructures).  As David points out there are a whole host of reasons that people push back against the use of cloud infrastructures, mostly to do with retaining control in one way or another.  In essence there is a list of IT issues that people raise as absolute blockers requiring private infrastructure to solve – particularly control, service levels and security – whilst they ignore the business benefits of specialisation, flexibility and choice.  Often “solving” the IT issues – and propagating a model of ownership and mediocrity in IT delivery when it’s not really necessary – merely denies the business the opportunity to solve its issues and transformationally improve its operations (and surely optimising the business is more important than undermining it in order to optimise the IT, right?).  That’s why for me the discussion should be about the business opportunities presented by the cloud and not simply a childish public vs private debate at the – pretty worthless – technology level.

Let’s have a look at a couple of issues:

  1. The degree of truth in the control, service and security concerns most often cited about public cloud adoption and whether they represent serious blockers to progress;
  2. Whether public and private clouds are logically equivalent or completely different.

IT issues and the Major Fallacies

Control

Everyone wants to be in control.  I do.  I want to feel as if I’m moving towards my goals, doing a good job – on top of things.  In order to be able to be on top of things, however, there are certain things I need to take for granted.  I don’t grow my own food, I don’t run my own bank, I don’t make my own clothes.  In order for me to concentrate on my purpose in life and deliver the higher level services that I provide to my customers there are a whole bunch of things that I just need to be available to me at a cost that fits into my parameters.  And to avoid being overly facetious I’ll also extend this into the IT services that I use to do my job – I don’t build my own blogging software or create my own email application but rather consume all of these as services over the web from people like WordPress.com and Google. 

By not taking personal responsibility for the design, manufacture and delivery of these items, however (i.e. by not maintaining ‘control’ of how they are delivered to me), I gain the more useful ability to be in control of which services I consume to give me the greatest chance of delivering the things that are important to me (mostly, lol).  In essence I would have little chance of sitting here writing about cloud computing if I also had to cater to all my basic needs (from both a personal as well as IT perspective).  I don’t want to dive off into economics but simplistically I’m taking advantage of the transformational improvements that come from division of labour and specialisation – by relying on products and services from other people who can produce them better and at lower cost I can concentrate on the things that add value for me.

Now let’s come back to the issue of private infrastructure.  Let’s be harsh.  Businesses simply need IT that performs some useful service.  In an ideal world they would simply pay a small amount for the applications they need, as they need them.  For 80% of IT there is absolutely no purpose in owning it – it provides no differentiation and is merely an infrastructural capability that is required to get on with value-adding work (like my blog software).  In a totally optimised world businesses wouldn’t even use software for many of their activities but rather consume business services offered by partners that make IT irrelevant. 

So far then we can argue that for 80% of IT we don’t actually need to own it (i.e. we don’t need to physically control how it is delivered) as long as we have access to it.  For this category we could easily consume software as a service from the “public” cloud and doing so gives us far greater choice, flexibility and agility.

In order to deliver some of the applications and services that a business requires to deliver its own specialised and differentiated capabilities, however, they still need to create some bespoke software.  To do this they need a development platform.  We can therefore argue that the lowest level of computing required by a business in future is a Platform as a Service (PaaS) capability; businesses never need to be aware of the underlying hardware as it has – quite literally – no value.  Even in terms of the required PaaS capability the business doesn’t have any interest in the way in which it supports software development as long as it enables them to deliver the required solutions quickly, cheaply and with the right quality.  As a result the internals of the PaaS (in terms of development tooling, middleware and process support) have no intrinsic value to a business beyond the quality of outcome delivered by the whole.  In this context we also do not care about control since as long as we get the outcomes we require (i.e. rapid, cost effective and reliable applications delivery and operation) we do not care about the internals of the platform (i.e. we don’t need to have any control over how it is internally designed, the technology choices to realise the design or how it is operated).  More broadly a business can leverage the economies of scale provided by PaaS providers – plus interoperability standards – to use multiple platforms for different purposes, increasing the ‘fitness’ of their overall IT landscape without the traditional penalties of heterogeneity (since traditionally they would be ‘bound’ to one platform by the inability of their internal IT department to cost-effectively support more than one technology).

Thinking more deeply about control in the context of this discussion we can see that for the majority of IT required by an organisation concentrating on access gives greater control than ownership due to increased choice, flexibility and agility (and the ability to leverage economies of scale through sharing).  In this sense the appropriate meaning of ‘control’ is that businesses have flexibility in choosing the IT services that best optimise their individual business capabilities and not that the IT department has ‘control’ of the way in which these services are built and delivered.  I don’t need to control how my clothes manufacturer puts my t-shirt together but I do want to control which t-shirts I wear.  Control in the new economy is empowerment of businesses to choose the most appropriate services and not of the IT department to play with technology and specify how they should be built.  Allowing IT departments to maintain control – and meddle in the way in which services are delivered – actually destroys value by creating a burden of ownership for absolutely zero value to the business.  As a result giving ‘control’ to the IT department results in the destruction of an equal and opposite amount of ‘control’ in the business and is something to be feared rather than embraced.

So the need to maintain control – in the way in which many IT groups are positioning it – is the first major and dangerous fallacy. 

Service levels

It is currently pretty difficult to get a guaranteed service level from cloud service providers.  On the other hand, most providers consistently deliver actual availability above 99% and so the service levels experienced are pretty good.  The lack of a piece of paper with this actual, experienced service level written down as a guarantee, however, is currently perceived as a major blocker to adoption.  Essentially IT departments use it as a way of demonstrating the superiority of their services (“look, our service level says five nines – guaranteed!”) whilst the level of stock they put in these guarantees creates FUD in the minds of business owners who want to avoid major risks.

So let’s lay this out.  People compare the current lack of service level guarantees from cloud service providers with the ability to agree ‘cast-iron’ service levels with internal IT departments.  Every project I’ve ever been involved in has had a set of service levels but very few ever get delivered in practice.  Sometimes they end up being twisted into worthless measures for simplicity of delivery – like whether a machine is running irrespective of whether the business service it supports is available – and sometimes they are just unachievable given the level of investment and resources available to internal IT departments (whose function, after all, is merely that of a barely-tolerated but traditionally necessary drain on the core purpose of the business). 

So to find out whether I’m right and whether service level guarantees have any meaning I will wait until every IT department in the world puts their actual achieved service levels up on the web like – for instance – Salesforce does.  I’m keen to compare practice rather than promises.  Irrespective of guarantees my suspicion is that most organisations’ actual service levels are woeful in comparison to those delivered by cloud providers, but I’m willing to be convinced.  Despite the illusion of SLA guarantees and enforcement, the majority of internal IT departments (and the managed service providers who take over all of those legacy systems, for that matter) get nowhere near the actual service levels of cloud providers, irrespective of what internal documents might say.  It is a false comfort.  Businesses therefore need to wise up and consider real data and actual risks – in conjunction with the transformational business benefits that can be gained by offloading capabilities and specialising – rather than let such meaningless nonsense take them down the old path to ownership; in doing so they are potentially sacrificing a move to cloud services and therefore their best chance of transforming their relationship with their IT and optimising their business.  This is essentially the ‘promise’ of buying into updated private infrastructures (aka ‘private cloud’).
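
To illustrate the ‘practice rather than promises’ point, here is a back-of-the-envelope sketch. The outage figures are entirely made up; the point is simply that achieved availability, computed from real downtime, is the number worth comparing, not the figure written into an SLA document.

```python
# Illustrative comparison of achieved availability vs a promised guarantee.
def achieved_availability(outage_minutes: list[int], period_days: int = 365) -> float:
    """Availability actually delivered over the period, as a percentage."""
    total_minutes = period_days * 24 * 60
    downtime = sum(outage_minutes)
    return 100.0 * (total_minutes - downtime) / total_minutes

# Hypothetical year of incidents for an internal system vs a cloud service.
internal_outages = [240, 90, 360, 45, 600]   # minutes per incident
cloud_outages = [12, 5, 20]

print(f"internal: {achieved_availability(internal_outages):.3f}% achieved")
print(f"cloud:    {achieved_availability(cloud_outages):.3f}% achieved")
print("promised on paper: 99.999%")
```

Run against real incident logs rather than these invented numbers, this is the comparison that would settle the argument either way.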

A lot of it comes down to specialisation again and the incentives for delivering high service levels.  Think about it – a cloud provider (literally) lives and dies by whether the services they offer are up; without them they make no money, their stock falls and customers move to other providers.  That’s some incentive to maintain excellence.  Internally – well, what you gonna do?  You own the systems and all of the people so are you really going to penalise yourself?  Realistically you just grit your teeth and live with the mediocrity even though it is driving rampant sub-optimisation of your business.  Traditionally there has been no other option and IT has been a long process of trying to have less bad capability than your competitors, to be able to stagger forward slightly faster or spend a few pence less.  Even outsourcing your IT doesn’t address this since whilst you have the fleeting pleasure of kicking someone else at the end of the day it’s still your IT and you’ve got nowhere to go from there.  Cloud services provide you with another option, however, one which takes advantage of the fact that other people are specialising on providing the services and that they will live and die by their quality.  Whilst we might not get service level guarantees – at this point in their evolution at least – we do get transparency of historical performance and actual excellence; stepping back it is critical to realise that deeds are more important than words, particularly in the new reputation-driven economy.

So the perceived need for service levels as a justification for private infrastructures is the second major and dangerous fallacy.  Businesses may well get better service levels from cloud providers than they would internally and any suggestion to the contrary will need to be backed up by thorough historical analysis of the actual service levels experienced for the equivalent capability.  Simply stating that you get a guarantee is no longer acceptable. 

Security

It’s worth stating from the beginning that there is nothing inherently less secure about cloud infrastructures.  Let’s just get that out there to begin with.  Also, to get infrastructure as a service out of the way – given that we’re taking the position in this post that PaaS is the first level of actual value to a business – we can say that it’s just infrastructure; your data and applications will be no more or less secure than your own procedures make them, but the data centre is likely to be at least as secure as your own and probably much more so, given the level of capability required of a true service provider.

So starting from ground zero with things that actually deliver something (i.e. PaaS and SaaS), a cloud provider can build a service that uses any of the technologies that you use in your organisation to secure your applications and data – only they’ll have more use cases and hence will consider more threats than you will.  And that’s just the start.  From that point the cloud provider will also have to consider how they manage different tenants to ensure that their data remains secure, and they will also have to protect customers’ data from their own (i.e. the cloud service provider’s) employees.  This is a level of security that is rarely considered by internal IT departments and results in more – and more deeply considered – data separation and encryption than would be possible within a single company.
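
A minimal sketch of what that tenant separation means in practice follows. It is a hypothetical design, not any particular provider’s: every record is stored under a tenant id, every read requires a tenant-scoped context, and provider staff get no implicit access to tenant data.

```python
# Hypothetical multi-tenant store illustrating tenant-scoped data access.
class TenantScopedStore:
    def __init__(self):
        self._data = {}  # {tenant_id: {key: value}}

    def put(self, tenant_id: str, key: str, value: str) -> None:
        self._data.setdefault(tenant_id, {})[key] = value

    def get(self, context: dict, key: str) -> str:
        # Access is only possible through a context naming the tenant;
        # an operator context without a tenant is rejected outright.
        tenant_id = context.get("tenant_id")
        if tenant_id is None:
            raise PermissionError("provider staff have no implicit access to tenant data")
        return self._data[tenant_id][key]

store = TenantScopedStore()
store.put("tenant-a", "invoice-42", "GBP 1,200")
print(store.get({"tenant_id": "tenant-a"}, "invoice-42"))  # tenant sees its own data
try:
    store.get({"role": "operator"}, "invoice-42")           # provider employee, no tenant scope
except PermissionError as exc:
    print("refused:", exc)
```

Real providers layer per-tenant encryption and auditing on top of this, but the basic discipline of refusing any access that is not tenant-scoped is the part most internal IT departments never have to build.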

Looking at the cloud service from the outside we can see that providers will be more obvious targets for security attacks than individual enterprises but counter-intuitively this will make them more secure.  They will need to be secured against a broader range of attacks, they will learn more rapidly and the capabilities they learn through this process could never be created within an internal IT organisation.  Frankly, however, the need to make security of IT a core competency is one of the things that will push us towards consolidation of computing platforms into large providers – it is a complex subject that will be more safely handled by specialised platforms rather than each cloud service provider or enterprise individually. 

All of these changes are part of the more general shift to new models of computing; to date the paradigm for security has largely been that we hide our applications and data from each other within firewalled islands.  Increasing collaboration across organisations and the cost, flexibility and scale benefits of sharing mean that we need to find a way of making our services available outside our organisational boundaries, however.  Again in doing this we need to consider who is best placed to ensure the secure operation of applications that support multiple clients – is it specialised cloud providers who have created a security model specifically to cope with secure open access and multi-tenancy for many customer organisations, or is it a group of keen “amateurs” with the limited experience that comes from the small number of use cases they have discovered within the bounds of a single organisation?  Furthermore, as more and more companies migrate onto cloud services – and such services become ever more secure – so the isolated islands will become prime targets for security attacks, since the likelihood that they can maintain top levels of security cut off from the rest of the industry – and with far less investment in security than can be made by specialised platform providers – becomes ever smaller.  Slowly isolationism becomes a threat rather than a protection.  We really are stronger together.

A final key issue that falls under the ‘security’ tag is that of data location (basically the perceived requirement to keep data in the country of the customer’s operating business).  Often this starts out as the major, major barrier to adoption, but you often discover that people are willing to trade off where their data are stored when the costs of implementing such location policies are huge for little value.  Again, in an increasingly global world businesses need to think more openly about the implications of storing data outside their country – for instance a UK company (perhaps even government) may have no practical issues in storing most data within the EU.  In many cases, however, businesses apply old rules and ways of thinking rather than challenging themselves in order to gain the benefits involved.  This is often tied into political processes – particularly between the business and IT – and leads to organisations not examining the real legal issues and possible solutions in a truly open way.  It can then become an excuse to build a private infrastructure, fulfilling the IT department’s desire to maintain control over the assets but in doing so loading unnecessary costs and inflexibility onto the business itself – ironically as a direct result of the business’s unwillingness to challenge its own thinking.

Does this mean that I believe that people should immediately begin throwing applications into the cloud without due care and attention?  Of course not.  Any potential provider of applications or platforms will need to demonstrate appropriate certifications and undergo some kind of due diligence.  Where data resides is a real issue that needs to be considered but increasingly this is regional rather than country specific.   Overall, however, the reality is that credible providers will likely have better, more up to date and broader security measures than those in place within a single organisation. 

So finally – at least for me – weak cloud security is the third major and dangerous fallacy.

Comparing Public and Private

Private and Public are Not Equivalent

The real discussion here needs to be less about public vs private clouds – as if they are equivalent but just delivered differently – and more about how businesses can leverage the seismic change in model occurring in IT delivery and economics.  Concentrating on the small minded issues of whether technology should be deployed internally or externally as a result of often inconsequential concerns – as we have discussed – belittles the business opportunities presented by a shift to the cloud by dragging the discussion out of the business realm and back into the sphere of techno-babble.

The reality is that public and private clouds and services are not remotely equivalent; private clouds (i.e. internal infrastructure) are a vote to retain the current expensive, inflexible and one-size-fits-all model of IT that forces a business to sub-optimise a large proportion of its capabilities to make their IT costs even slightly tolerable.  It is a vote to restrict choice, reduce flexibility, suffer uncompetitive service levels and to continue to be distracted – and poorly served – by activities that have absolutely no differentiating value to the business. 

Public clouds and services on the other hand are about letting go of non-differentiating services and embracing specialisation in order to focus limited attention and money on the key mission of the business.  The key point in this whole debate is therefore specialisation; organisations need to treat IT as an enabler and not an asset, they need to  concentrate on delivering their services and not on how their clothes get made. 

Summary

If there is currently a ‘surge’ in interest in private clouds it is deeply confusing (and disturbing) to me given that the basis for focusing attention on private infrastructures appears to be deeply flawed thinking around control, service and security.  As we have discussed not only are cloud services the best opportunity that businesses have ever had to improve these factors to their own gain but a misplaced desire to retain the IT models of today also undermines the huge business optimisations available through specialisation and condemns businesses to limited choice, high costs and poor service levels.  The very concerns that are expressed as reasons not to move to cloud models – due to a concentration on FUD around a small number of technical issues – are actually the things that businesses have most to gain from should they be bold and start a managed transition to new models.  Cloud models will give them control over their IT by allowing them to choose from different providers to optimise different areas of their business without sacrificing scale and management benefits; service levels of cloud providers – whilst not currently guaranteed – are often better than they’ve ever experienced and entrusting security to focused third parties is probably smarter than leaving it as one of many diverse concerns for stretched IT departments. 

Fundamentally, though, there is no equivalence between the concept of public (including logically private but shared) and truly private clouds; public services enable specialisation, focus and all of the benefits we’ve outlined whereas private clouds are just a vote to continue with the old way.  Yes virtualisation might reduce some costs, yes consolidation might help but at the end of the day the choice is not the simple hosting decision it’s often made out to be but one of business strategy and outlook.  It boils down to a choice between being specialised, outward looking, networked and able to accelerate capability building by taking advantage of other people’s scale and expertise or rejecting these transformational benefits and living within the scale and capability constraints of your existing business – even as other companies transform and build new and powerful value networks without you.

iPad not harbinger of PC doom according to Steve Jobs

9 Jun

After having put some time into thinking about people’s discomfort with the iPad a couple of weeks ago I was interested in this brief article in AppleInsider where Steve Jobs admits that the notion of a post-PC era is ‘uncomfortable’ for many people – a subject that I touched on in my post.  Jobs’ comments appear to support my own impressions that this is really just a maturation of the industry, a democratisation of access to computing for the masses and that it won’t undermine traditional computing for those with the necessary skills.  This should be a relief to people who worry that such devices will replace computers and thereby destroy the ability of individuals to be technically “generative”.   I also basically agree with his summary of tablets as a new form factor that replaces the need for a PC for many people, that PCs will continue to exist and that more choice is good (and although I still don’t agree with the Apple business model – and feel that it will suffer as other people replicate their innovations in more open ecosystems – only one of us is obscenely rich :-)).

More broadly my gut feel is that as the interfaces and capabilities of tablets increase in sophistication so we will be able to encourage more ‘vertical’ and ‘individual’ creativity and “generativity” in the population as a whole.  These people won’t be using the same tools as those we’ve had to learn to create through PC use but then they also won’t need that lower level, general-purpose control over raw computing that many people have had to learn merely to pursue higher level interests.  There will still be plenty of IT work – in fact more than ever – implementing applications and services to help these newly liberated consumers ignore the underlying computer and be creative within their own domains.

Business Enablement as a Key Cloud Element

30 Apr

After finally posting my last update about ‘Industrialised Service Delivery’ yesterday I have been happily catching up with the intervening output of some of my favourite bloggers.

One post that caught my eye was a reference from Phil Wainwright – whilst he was talking about the VMForce announcement – to a post he had written earlier in the year about Microsoft’s partnership with Intuit.  Essentially one of his central statements was related directly to the series of posts I completed yesterday (so part 1, part 2 and part 3):

“the breadth of infrastructure <required for SaaS> extends beyond the development functionality to embrace the entirely new element of service delivery capabilities. This is a platform’s support for all the components that go with the as-a-service business model, including provisioning, pay-as-you-go pricing and billing, service level monitoring and so on. Conventional software platforms have no conception of these types of capability but they’re absolutely fundamental to delivering cloud services and SaaS applications”.

This is one of the key points that I think is still – inexplicably – lost on many people (particularly people who believe that cloud computing is primarily about providing infrastructure as a service).  In reality the whole world is moving to service models because they are simpler to consume, deliver clearer value for more transparent costs and can be shared across organisations to generate economies of scale.  In fact ‘as a service’ models are increasingly not going to be an IT phenomenon but also going to extend to the way in which businesses deal with each other across organisational boundaries.  For the sale and consumption of such services to work, however, we need to be able to ‘deliver’ them; in this context we need to be able to market them, make them easy to subscribe to, manage billing and service levels transparently for both the supplier and consumer and enable rapid change and development over time to meet the evolving needs of service consumers.  As a result anyone who wants to deliver business capabilities in the future – whether these are applications or business process utilities – will need to be able to ensure that their offering exhibits all of these characteristics. 

Interestingly these ‘business enablement’ functions are pretty generic across all kinds of software and services since they essentially cover account management, subscription, business model definition, rating and billing, security, marketplaces etc etc (i.e. all of the capabilities that I defined as being required in a ‘Service Delivery Platform’).  In this context the use of the term ‘Service Delivery Platform’ in place of cloud or PaaS was deliberate; what next generation infrastructures need to do is enable people to deliver business services as quickly and as robustly as possible, with the platforms themselves also helping to ensure trust by brokering between the interests of consumers and suppliers through transparent billing and service management mechanisms.
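
As a rough illustration of how generic this business enablement plumbing is, here is a minimal sketch of the metering-and-rating step that sits behind pay-as-you-go billing. The price book, metric names and subscribers are all invented; the shape of the problem (usage events in, a bill per subscriber out) is the point.

```python
# Hypothetical usage metering and rating, the kind of generic capability an SDP provides.
from collections import defaultdict

PRICE_BOOK = {"api_call": 0.001, "storage_gb_day": 0.02, "report": 0.50}  # GBP, illustrative

def rate(usage_events):
    """Turn raw usage events into a charge per subscriber."""
    bills = defaultdict(float)
    for subscriber, metric, quantity in usage_events:
        bills[subscriber] += PRICE_BOOK[metric] * quantity
    return dict(bills)

usage = [
    ("acme-ltd", "api_call", 120_000),
    ("acme-ltd", "storage_gb_day", 300),
    ("globex", "api_call", 4_000),
    ("globex", "report", 12),
]

for subscriber, amount in rate(usage).items():
    print(f"{subscriber}: GBP {amount:.2f} this period")
```

Exactly the same machinery works whether the thing being metered is an application, a platform or a whole business process utility, which is why it belongs in the platform rather than being rebuilt by every service provider.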

This belief in service delivery is one of the reasons I believe that the notion of ‘private clouds’ is an oxymoron – I found this hoary subject raised again on a Joe McKendrick post after a discussion on ebizQ – even without the central point about the obvious loss of economies of scale.  Essentially the requirement to provide a whole business enablement fabric to facilitate cross-organisational service ecosystems – initially for SaaS but increasingly for organisational collaboration and specialisation – is just one of the reasons I believe that ‘private clouds’ are really just evolutions of on-premise architecture patterns – with all of the costs and complexity retained – and thus pure marketecture.  When decreasing transaction costs are enabling much greater cross-organisational value chains, the benefits of a public service delivery platform are immense, enabling organisations to both scale and evolve their operations more easily whilst also providing all of the business support they need to offer and consume business services in extended value chains.  Whilst some people may think that this is a pretty future-oriented reason not to like the notion of private clouds, for completeness I will also say that to me – in the sense of customer-owned infrastructures – they are an anachronism; again this is just an extension of existing models (for good or ill) and nothing to do with ‘cloud’.  It is only the fact that most protagonists of such models are vendors with very low-maturity offerings like packaged infrastructure and/or middleware solutions that makes it viable, since the complexity of delivering a true private SDP offering would be too great (not to mention ridiculously wasteful).  In my view a ‘private cloud’ in the sense of end-organisation deployment is just a new internal infrastructure (whether self-managed or via a service company), rather like the one you already have but with a whole bunch of expensive new hardware and software (so 90% of the expense but only 10% of the benefits).

To temper this stance I do believe that there is a more subtle, viable version of ‘privacy’ that will be supported by ‘real’ service delivery platforms over time – that of having a logically private area of a public SDP to support an organisational context (so a cohesive collection of branded services, information and partner integrations – or what I’ve always called ‘virtual private platforms’).  This differs greatly from the ‘literally’ private clouds that many organisations are positioning as a mechanism to extend the life of traditional hardware, middleware or managed service offerings – the ability of service delivery platforms to rapidly instantiate ‘virtual’ private platforms will be a core competency and give the appearance and benefits of privacy whilst also maintaining the transformational benefits of leveraging the cloud in the first place.  To me, literally ‘private clouds’ on an organisation’s own infrastructure – with all of their capital expense, complexity of operation, high running costs and ongoing drag on agility – only exist in the minds of software and service companies looking to extend their traditional businesses for as long as possible.

Industrialised Service Delivery Redux II

22 Sep

In my previous post I discussed the way in which our increasingly sophisticated use of the Web is creating an unstoppable wave of change in the global business environment.  This resulting acceleration of change and expectation will require unprecedented organisational speed and adaptability whilst simultaneously driving globalisation and consumerisation of business.  I discussed my belief that companies will be forced to reform as a portfolio of systematically designed components with clear outcomes and how this kind of thinking changes the relationship between a business capability and its IT support.  In particular I discussed the need to create industrialised Service Delivery Platforms which vastly increase the speed, reliability and cost effectiveness of delivering service realisations. 

In this post I’ll move into the second part of the story, where I’ll look more specifically at how we can realise the industrialisation of service delivery through the creation of an SDP.

Industrialisation 101

There has been a great deal written about industrialisation over the last few years and most of this literature has focused on IT infrastructure (i.e. hardware), where components and techniques are more commoditised.  As an example, many of my Japanese colleagues have spent decades working with leaders in the automotive industry and experienced firsthand the techniques and processes used in zero-defect manufacturing and the application of lean principles. Sharing this same mindset around reliability, zero defects and technology commoditisation, they created a process for delivering reliable and guaranteed outcomes through pre-integration and testing of combinations of hardware and software.  This kind of infrastructure industrialisation enables much higher success rates whilst simultaneously reducing the costs and lead times of implementation.

In order to explore this a little further and to set some context, let’s just think for a moment about the way in which IT has traditionally served its business customers.

[Figure: non-industrialised vs industrialised delivery]

We can see that generally speaking we are set a problem to solve and we then take a list of products selected by the customer – or often by one of our architects applying personal preference – and we try to integrate them together on the customer’s site, at the customer’s risk and at the customer’s expense. The problem is that we may never have used this particular combination of hardware, operating systems and middleware before – a problem that worsens exponentially as we increase the complexity of the solution, by the way – and so there are often glitches in their integration, it’s unclear how to manage them and there can’t be any guarantees about how they will perform when the whole thing is finally working. As a result projects take longer than they should – because much has to be learned from scratch every time – they cost a lot more than they should – because there are longer lead times to get things integrated, to get them working and then to get them into management – and, most damningly, they are often unreliable as there can be no guarantees that the combination will continue to work and there is learning needed to understand how to keep it up and running.

The idea of infrastructure industrialisation, however, helps us to concentrate on the technical capability required – do you want a Java application server? Well here it is, pre-integrated on known combinations of hardware and software and with manageability built in but – most importantly – tested to destruction with reference applications so that we can place some guarantees around the way this combination will perform in production.  As an example, 60% of the time taken within Fujitsu’s industrialisation process is in testing.  The whole idea of industrialisation is to transfer the risk to the provider – whether an internal IT department or an external provider – so that we are able to produce consistent results with standardised form and function, leading to quicker, more cost effective and reliable solutions for our customers.

Now such industrialisation has slowly been maturing over the last few years but – as I stated at the beginning – has largely concentrated on infrastructure templating: hardware, operating systems and middleware combined and ready to receive applications.  Recent advances in virtualisation are also accelerating the commoditisation and industrialisation of IT infrastructure by making this templating process easier and more flexible than ever before.  Such industrialisation provides us with more reliable technology but does not address the ways in which we can realise higher-level business value more rapidly and reliably.  The next (and more complex) challenge, therefore, is to take these same principles and apply them to the broader area of business service realisation and delivery.  The question is how we can do this.

Industrialisation From Top to Bottom

Well the first thing to do is understand how you are going to get from your expression of intent – i.e. the capability definitions I discussed in my previous post that abstract us away from implementation concerns – through to a running set of services that realise this capability on an industrialised Service Delivery Platform. This is a critical concern since if you don’t understand your end-to-end process then you can’t industrialise it through templating, transformation and automation.

[Figure: end-to-end service realisation]

In this context we can look at our capability definitions and map concepts in the business architecture model down to classifications in the service model.  Capabilities map to concrete services, macro processes map to orchestrations, people tasks map to workflows, top level metrics become SLAs to be managed etc. The service model essentially bridges the gap between the expression of intent described by the target business architecture and the physical reality of assets needed to execute within the technology environment.
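
A minimal sketch of that mapping follows, with hypothetical names throughout: concepts in the business architecture model are carried down, one for one, into classifications in the service model.

```python
# Hypothetical mapping of business architecture concepts to service model classifications.
BUSINESS_TO_SERVICE_MODEL = {
    "capability":    "service",        # capabilities map to concrete services
    "macro_process": "orchestration",  # macro processes map to orchestrations
    "people_task":   "workflow",       # people tasks map to workflows
    "metric":        "sla",            # top-level metrics become managed SLAs
}

def to_service_model(business_elements):
    """Translate business architecture elements into service-model elements."""
    return [
        {"name": element["name"], "kind": BUSINESS_TO_SERVICE_MODEL[element["kind"]]}
        for element in business_elements
    ]

capability_definition = [
    {"kind": "capability",    "name": "Customer Onboarding"},
    {"kind": "macro_process", "name": "Verify and Activate"},
    {"kind": "people_task",   "name": "Manual Identity Review"},
    {"kind": "metric",        "name": "Onboarding cycle time < 2 days"},
]

for element in to_service_model(capability_definition):
    print(f'{element["kind"]:>13}: {element["name"]}')
```

The value of making the mapping explicit like this is that it can be applied mechanically, which is exactly what the later industrialisation steps depend on.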

From here we broadly need to understand how each of our service types will be realised in the physical environment – so for instance we need a physical host to receive and execute each type of service, we need to understand how SLAs are provisioned so that we can monitor them etc. etc.

Basically the concern at this stage is to understand the end to end process through which we will transform the data that we capture at each stage of the process into ever more concrete terms – all the way from logical expressions of intent through greater information about the messages, service levels and type of implementation required, through to a whole set of assets that are physically deployed and executing on the physical service platform, thus realising the intent.

The core aim of this process must be to maximise both standardisation of approach and automation at each stage to ensure repeatability and reliability of outcome – essentially our aim in this process is to give business capability owners much greater reliability and rapidity of outcome as they look to realise business value.  We essentially want to give guarantees that we can not only realise functionality rapidly but also that these realisations will execute reliably and at low cost.  In addition we must also ensure that the linkage between each level of abstraction remains in place so that information about running physical services can be used to judge the performance of the capability that they realise, maximising the levers of change available to the organisation by putting them in control of the facts and allowing them to ‘know sooner’ what is actually happening.

Having an end to end view of this process essentially creates the rough outline of the production line that needs to be created to realise value – it gives us a feel for the overall requirements.  Unfortunately, however, that’s the nice bit, the kind of bit that I like to do. Whilst we need to understand broadly how we envisage an end to end capability realisation process working, the real work is in the nasty bit – when it comes to industrialisation work has to start at the bottom.

Industrialisation from Bottom to Top

If you imagine the creation of a production line for any kind of physical good, it obviously has to be designed to optimise the creation of the end product. Every little conveyor belt or twisty robot arm has to be calibrated to nudge or weld the item in exactly the same spot to achieve repeatability of outcome. In the same way, any attempt to industrialise the process of capability realisation has to start at the bottom with a consideration of the environment within which the final physical assets will execute and of how to create assets optimised for this environment as efficiently as possible. I use a simple ‘industrialisation pyramid’ to visualise this concept, since increasingly specialised and high-value automation and industrialisation needs to be built on broader and more generic industrialised foundations. In reality the process is highly iterative, as you need to continually recalibrate both up and down the hierarchy to ensure that the process is both efficient and realises the expressed intent, but for the sake of simplicity you can assume that we just build this up from the bottom.

[Figure: industrialisation pyramid]

So let’s start at the bottom with the core infrastructure technologies – what are the physical hosts that are required to support service execution? What physical assets will services need to create in order to execute on top of them? How does each host combine together to provide the necessary broad infrastructure and what quality of service guarantees can we put around each kind of host? Slightly more broadly, how will we manage each of the infrastructure assets? This stage requires a broad range of activity not just to standardise and templatise the hosts themselves but also to aggregate them into a platform and to create all of the information standards and process that deployed services will need to conform to so that we can find, provision, run and manage them successfully.
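
To make the bottom layer a little more concrete, here is a minimal sketch of pre-integrated host templates with declared quality-of-service characteristics, aggregated into a small catalogue that deployed services must target. The template names, stacks and figures are all assumptions for illustration only.

```python
# Hypothetical catalogue of pre-integrated, pre-tested host templates.
HOST_TEMPLATES = {
    "java-app-host": {
        "stack": ["linux", "jvm-runtime", "app-server"],
        "qos": {"availability": 99.9, "max_latency_ms": 200},
        "management": {"monitoring": True, "patching": "platform-managed"},
    },
    "workflow-host": {
        "stack": ["linux", "workflow-engine"],
        "qos": {"availability": 99.5, "max_latency_ms": 500},
        "management": {"monitoring": True, "patching": "platform-managed"},
    },
}

def select_host(required_availability: float) -> str:
    """Pick the first template whose declared QoS meets the requirement."""
    for name, template in HOST_TEMPLATES.items():
        if template["qos"]["availability"] >= required_availability:
            return name
    raise LookupError("no template satisfies the requested quality of service")

print(select_host(99.9))  # -> java-app-host
```

The point of expressing hosts declaratively like this is that the platform, not each project, carries the burden of proving the combination works and of keeping the quality-of-service promises attached to it.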

Moving up the pyramid we can now start to think in more conceptual terms about the reference architecture that we want to impose – the service classifications we want to use, the patterns and practices we want to impose on the realisation of each type, and more specifically the development practices.  Importantly we need to be clear about how these service classifications map seamlessly onto the infrastructure hosting templates and lower-level management standards to ensure that our patterns and practices are optimised – it’s only in this way that we can guarantee outcomes by streamlining the realisation and asset creation process. Gradually through this definition activity we begin to build up a metamodel of the types of assets that need to be created as we move from the conceptual to the physical, and the links and transformations between them. This is absolutely key as it enables us to move to the next level – which I call automating the “means of production”.

This level becomes the production line that pushes us reliably and repeatably from capability definition through to physical realisation.  The metamodel we built up in the previous tier helps us to define domain specific languages that simplify the process of generating the final output, allowing the capture of data about each asset and the background generation of code that conforms to our preferred classification structure, architectural patterns and development practices.  These DSLs can then be pulled together into “factories” specialised to the realisation of each type of asset, with each DSL representing a different viewpoint on the particular capability in hand.  Individual factories can then be aggregated into a ‘capability realisation factory’ that drives the end to end process.  As I stated in my previous post, the whole factory and DSL space is mildly controversial at the moment, with Microsoft advocating explicit DSL and factory technologies and others continuing to work towards MDA or flexible open source alternatives.  Suffice it to say that the approaches I’m advocating are possible via either model – a subject I might actually return to with some examples of each (for an excellent consideration of this whole area consult Martin Fowler’s great coverage, btw).
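
A toy example of the “means of production” idea, assuming a very simple data-driven DSL: the SERVICE_SPEC structure and the generated scaffold are purely illustrative, but they show how metadata captured about an asset can be turned into code that already conforms to agreed conventions.

```python
from string import Template

# A toy "domain specific language": service intent is captured as data rather
# than code, and the factory turns it into a scaffold that already follows
# the agreed patterns and practices.
SERVICE_SPEC = {
    "name": "CustomerProfile",
    "kind": "domain",
    "operations": ["get_profile", "update_profile"],
}

SCAFFOLD = Template('''class ${name}Service:
    """Generated ${kind} service scaffold -- patterns applied by the factory."""
${methods}
''')

def generate_scaffold(spec: dict) -> str:
    """Background generation of code from captured metadata."""
    methods = "\n".join(
        f"    def {op}(self, request):\n        raise NotImplementedError"
        for op in spec["operations"]
    )
    return SCAFFOLD.substitute(name=spec["name"], kind=spec["kind"], methods=methods)

print(generate_scaffold(SERVICE_SPEC))
```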

The final level of this pyramid is to start taking the capability realisation factories and tailoring them for the creation of industry specific offerings – perhaps a whole set of ‘factories’ around banking, retail or travel capabilities.  From my perspective this is the furthest out and may actually not come to pass; despite Jack Greenfield’s compelling arguments I feel that the rise of SOA and SaaS will obviate the need to generate the same application many times by allowing solutions to be composed from shared utilities.  The idea of an application or service specific factory assumes a continuation of IT oversupply through many deployments; as a result I believe the key issue at stake in the industrialisation arena is actually that of democratising access to the means of capability production by giving people the tools to create new value rapidly and reliably.  Improving the reliability and repeatability of capability realisation across the board is therefore more critical than a focus on any particular industry.  (This may change in future with demand, however, and one potential area of interest is industry specific composition factories rather than industry specific application generation factories.)

Delivering Industrialised Services

So we come at last to a picture that demonstrates how the various components of our approach come together from a high level process perspective.

[Image: service factory process]

Across the top we have our service factory.  We start on the left hand side with capability modelling, capturing the metadata that describes the capability and what it is meant to do.  In this context we can use a domain specific language that allows us to model capabilities explicitly within the tooling.  Our aim is then to use the metadata captured about a capability to realise it as one or more services.  Here information from the metamodel is transformed into an initial version of the service before we use a service domain language to add further detail about contracts, messages and service levels.  It is important to note, however, that at this point the service is still abstract – we have not bound it to any particular realisation strategy.  Once we have designed the service in the abstract we can then choose an implementation strategy – example classifications could be interaction services for UIs, workflow services for people tasks, process services for service orchestrations, domain services for services that manage and manipulate data, and integration services that allow adaptation and integration with legacy or external systems.

Once we have chosen a realisation strategy all of the metadata captured about the service is used to generate a partially populated realisation of the chosen type – in this context we anticipate having a factory for each kind of service that will control the patterns and practices used and provide guidance in context to the developer.
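
The following sketch illustrates the late-binding idea described in the last two paragraphs: a service designed in the abstract is handed to one of several specialised factories only once a realisation strategy has been chosen.  The AbstractService type and the per-classification factory functions are invented for illustration; a real factory would generate far more than a label.

```python
from dataclasses import dataclass

@dataclass
class AbstractService:
    """A service designed in the abstract: contract and service levels are
    known, but no realisation strategy has been chosen yet."""
    name: str
    operations: list
    availability_target: float

def interaction_factory(svc): return f"UI project scaffold for {svc.name}"
def workflow_factory(svc):    return f"Human task definitions for {svc.name}"
def process_factory(svc):     return f"Orchestration skeleton for {svc.name}"
def domain_factory(svc):      return f"Data service scaffold for {svc.name}"
def integration_factory(svc): return f"Adapter scaffold for {svc.name}"

# One factory per service classification, each encapsulating its own
# patterns, practices and in-context guidance.
FACTORIES = {
    "interaction": interaction_factory,
    "workflow": workflow_factory,
    "process": process_factory,
    "domain": domain_factory,
    "integration": integration_factory,
}

def realise(service: AbstractService, strategy: str) -> str:
    """Binding happens late: the same abstract design can be handed to any
    of the specialised factories."""
    return FACTORIES[strategy](service)

orders = AbstractService("OrderCapture", ["submit", "amend"], 0.999)
print(realise(orders, "process"))
```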

Once we have designed our services we now want to be able to design a virtual deployment environment for them based wholly on industrialised infrastructure templates. In this view we can configure and soft test the resources required to run our services before generating provisioning information that can be used to create the virtual environment needed to host the services.
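
A minimal sketch of what the generated provisioning information might look like, reusing the hypothetical host template names from the earlier sketch; the field names are illustrative only.

```python
def provisioning_spec(services: list) -> list:
    """Turn a designed (virtual) deployment environment into provisioning
    information: one entry per host to create, with the service to deploy
    onto it, monitoring wired up and catalogue publication requested."""
    spec = []
    for svc in services:
        spec.append({
            "host_template": svc["host"],   # e.g. "web-small"
            "deploy": svc["name"],
            "monitor": True,
            "publish_to_catalogue": True,
        })
    return spec

print(provisioning_spec([
    {"name": "CustomerProfile", "host": "web-small"},
    {"name": "OrderCapture",    "host": "worker-batch"},
]))
```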

In the service platform the provisioning information can be used to create a number of hosting engines, deploy the services into them, provision the infrastructure to run them and then set up the necessary monitoring before finally publishing them into a catalogue. The Service Platform therefore consists of a number of specialised infrastructure hosts supporting runtime execution, along with runtime services that provide – for example – provisioning and eventing support.
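
Continuing the provisioning sketch above, the platform side might walk that information to create hosting engines, deploy and monitor the services, and publish them into a catalogue.  This is purely illustrative – the print statements stand in for real provisioning and monitoring calls.

```python
SERVICE_CATALOGUE = {}

def apply_spec(spec: list) -> None:
    """Walk the provisioning information in order: create the hosting engine,
    deploy the service, attach monitoring, then publish into the catalogue."""
    for entry in spec:
        host = f"{entry['host_template']}-instance-1"   # stand-in for real provisioning
        print(f"created {host} and deployed {entry['deploy']}")
        if entry.get("monitor"):
            print(f"monitoring enabled for {entry['deploy']}")
        if entry.get("publish_to_catalogue"):
            SERVICE_CATALOGUE[entry["deploy"]] = {"host": host, "status": "published"}

apply_spec([{"host_template": "web-small", "deploy": "CustomerProfile",
             "monitor": True, "publish_to_catalogue": True}])
```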

The final component of the platform is what I call a ‘service wrap’.  This is an implementation of the ITSM disciplines tailored for our environment.  In this context you will find the catalogue, service management, reporting and metering capabilities that are needed to manage the services at runtime (again a subset, to make a point).  In this space the service catalogue brings together service metadata, reports about performance and usage, plus subscription and onboarding processes.  Most importantly, the capabilities originally required and the services used to realise them are linked in the catalogue to support business performance management.  We can therefore see a feedback loop from the service wrap which enables capability owners to make decisions about effectiveness and rework their capabilities appropriately.
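
As a rough illustration of that feedback loop, the sketch below (with an invented catalogue structure and metric names) shows how runtime reports and the original service levels could be rolled up to the capability level for its owner.

```python
from statistics import mean

# Catalogue entries keep the originating capability, the service levels and
# runtime reports side by side, so business performance management can work
# at the capability level rather than the deployment level.
CATALOGUE = {
    "OrderCapture": {
        "capability": "Order Management",
        "service_levels": {"availability_target": 0.999},
        "reports": [{"availability": 0.997, "avg_latency_ms": 240},
                    {"availability": 0.999, "avg_latency_ms": 210}],
    },
}

def capability_feedback(capability: str) -> dict:
    """The feedback loop: summarise how the services realising a capability
    are actually performing against their targets."""
    entries = [e for e in CATALOGUE.values() if e["capability"] == capability]
    achieved = mean(r["availability"] for e in entries for r in e["reports"])
    target = min(e["service_levels"]["availability_target"] for e in entries)
    return {"capability": capability, "target": target,
            "achieved": round(achieved, 4), "meets_target": achieved >= target}

print(capability_feedback("Order Management"))
```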

Summary

In this second post of three I have demonstrated how we can use the increasing power of abstraction delivered by service-orientation to drive the industrialisation of capability realisation.  Although current initiatives broadly target the infrastructure space, I have argued that full industrialisation across the infrastructure, application, service and business domains requires the creation and consistent application of known patterns, processes, infrastructures and skills to increase repeatability and reliability.  We might sacrifice some flexibility in technology choice or systems design, but the increasing commoditisation of technology makes this far less important than cost effectiveness and reliability.  It’s particularly important to realise that when industrialising you need to understand your end to end process and then do the nasty bit – bottom up, in excruciating detail.

So in the third and final post on this subject I’m going to look a little bit at futures and how the creation of standardised and commoditised service delivery platforms will affect the industry more broadly – essentially as technology becomes about access rather than ownership so we will see the rise of global service delivery platforms that support capability realisation and execution on behalf of many organisations.

Service Delivery Platforms peek from behind the Cloud

17 Apr

A quick note having just read Dion Hinchcliffe’s article on Google App Engine and Amazon Web Services (their respective cloud computing infrastructures).  These platforms are early and very basic instances of the ‘service delivery platforms’ I’ve talked about for the past few years (for an example see a presentation I gave at the MS SOA & BPM conference last year).  As I discussed during that talk, I use the term ‘service delivery platform’ in place of ‘platform as a service’ or ‘cloud computing’ since in order to really support viable and fully rounded service delivery you need to provide far more than a ‘platform’ or ‘infrastructure’ – a topic I’m going to return to now that this area has been popularised by Google and I won’t seem like (such) a lunatic, lol.  More broadly, I’ve discussed a number of times why I feel that economics and technology commoditisation will eventually drive people down this route, and I’m really excited to see a number of competitors emerging in this space – Amazon, Google and Salesforce now competing with a number of smaller startups, with more to follow.  I know I’ve said this before but people are really going to have to start deciding what business they are in.  If you work in an end user organisation then you need to recognise which business that is and start treating IT like a utility rather than a differentiator; if you’re an IT service company, however, you’d better work out whether you want to focus on relationship brokering and consulting, utility computing platform provision or SaaS/BPU service offers, since the economics of each are very different (you may in fact want to play in all three, but if so you’d best disaggregate your company and run them autonomously under your umbrella brand – or you’ll make a complete hash of them all).

One of the interesting things for me is whether a middle-ground model will emerge that enables enterprises to take advantage of the industrialisation and economies of scale available to service delivery platform vendors whilst also enabling the deployment of ‘edge’ infrastructures into customer-specific environments.  In this context we may see locally deployed ‘chunks’ of the central service created for those organisations that are large enough or that have specific privacy or trust concerns (still coordinated with and managed from the centre, however).  Whether such a model has a long term future is an interesting (but as yet undecided) question to my mind, but in the short to medium term those SDP vendors able to deliver such a transitional solution would provide a compelling migration path for enterprises that are nervous about the implications of a wholesale shift to the cloud.

Relationships vs Transactions

27 Mar
Background

Got a comment from one of my colleagues – a chap called Martin Abbot – by email a couple of days ago with respect to my last post about institutional innovation.  Whilst I’ve been planning to find some time to write a follow-up that links these ideas through to SOA and SaaS, I thought it might be worth just posting the question and my email response.  It’s not a ‘proper’ blog post – and so is a bit messy – but the question and my rushed answer are probably worth sharing.

The Question

"Is the idea of moving away from transactional relationships to ones that are mutually beneficial not a little at odds with SaaS and SOA generally? Both of these seem to support the commoditization of resources and are therefore inherently transactional in nature."

The Answer

Well I guess that there are a few things:

Lots of stuff tends towards commoditisation and so in many ways the argument Martin makes is understandable.  Even based purely on the use of ‘commodity’ services, however, I guess you could make the following arguments:

  • Commodity doesn’t necessarily mean wholly standardised.  In the future we’re going to need to support mass customisation of services to support our customers’ businesses.  In a SaaS/SOA environment this basically requires customisation points to be built into the software that support both multi-tenancy and per-customer variation (see the sketch after this list).  The key is to build the software for customisation in a few key dimensions from the ground up rather than rely on your ability to bastardise it later.  To understand the 20% of dimensions that really need to be customisable, however, you need a pretty deep understanding of your consumers (I realise that this point is only tenuously related but I need to establish it first…)
  • The second point is that even if you are only delivering commodity services there is always the opportunity to a) understand your customers’ needs better and b) deliver services that are more appropriate to them through your improved contextual understanding.  This requires collaborating partners to share information much more freely in order to get the best service.  When you combine this with the ability to mass-customise services you can see that both the provider and the consumer get the best advantage when each fully understands the capabilities and aspirations of the other.  This depth of relationship requires a degree of trust that goes above and beyond a traditional zero-sum approach, however.
  • From a provider perspective you also need to work closely with your partners to understand the effectiveness (or not) of your services in order to shorten your feedback loop and accelerate the improvement of your capabilities.  Again this is potentially a win-win scenario as both parties get greater value.
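
Here is the sketch referred to in the first bullet: a minimal, hypothetical illustration of designed-in customisation points in a multi-tenant service, where each tenant overrides only a small number of deliberately exposed dimensions.  The setting names and defaults are invented for the example.

```python
from dataclasses import dataclass, field

# Default behaviour is shared by every tenant; each tenant may override only
# the few dimensions deliberately designed to be customisable.
DEFAULTS = {"invoice_terms_days": 30, "currency": "GBP", "approval_required": False}

@dataclass
class TenantConfig:
    tenant_id: str
    overrides: dict = field(default_factory=dict)

    def setting(self, key: str):
        """Customisation point: tenant-specific value if present, shared default otherwise."""
        return self.overrides.get(key, DEFAULTS[key])

acme = TenantConfig("acme", {"currency": "EUR", "approval_required": True})
print(acme.setting("currency"), acme.setting("invoice_terms_days"))
```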

All of this is fine from the perspective of thinking of two parties as ‘customer’ and ‘provider’ when the main goal is the consumption of services on a transactional basis – i.e. there are still some low level advantages to both parties in creating a closer relationship.

Beyond this, however, is the real value.  If you build trust across organisations that each offer some part of the value chain then the more important question becomes how you can leverage your capabilities together in order to improve existing offerings or – perhaps more importantly – create new services or offerings for existing or new markets.  The key here is that each service provider may have a fairly conventional view of what services they offer and who they serve, but when they come together to look at how they collaborate to deliver value and how each other’s services could be used to approach their customers, a new perspective opens up.  Once this perspective is opened they can begin to consider how each would need to optimise their services in order to create these new markets, and this in turn can help them to consider how these ideas impact (and potentially improve) their ‘traditional’ offerings.  When you broaden this from two parties to everyone who performs any value adding activity within and across value webs, you can see that the opportunities for collaboration and new value creation rise exponentially.

This is also applicable when you think of ‘customers’ in the traditional sense rather than groups of suppliers working together; customers and their providers exist in a value web, and by forging strong relationships and inviting suppliers to help improve the end product the overall value for everyone is greatly enhanced.  This is a two-way dialogue too, since providers can use Web 2.0 techniques to bring their consumers into the service creation process, making them co-creators of the products and services that the company offers – an ideal way to balance push and pull models.

Even for products and services that tend towards commoditisation – and many do – the reality is that such commoditisation is inevitable and accelerating; sharing information with trusted partners may hasten the commoditisation of certain services, but this is more than offset by the far grander opportunities it opens up to utilise those services in new ways.  Furthermore, if your services are subject to commoditisation then they are also subject to economies of scale, and leveraging partners to find new ways of using these commoditised services to build new markets is a much smarter move than withdrawing into yourself and becoming obsessed with efficiency and cost reduction.

To paraphrase Bill Joy, there are far more smart people outside your organisation than in it and maximising your ability to innovate by mutually leveraging this smartness is increasingly going to be a necessary capability.  As I stated in my original post, I agree that there are many things to be worked out in order to move people into this new way of working but I believe that those who make the transition will create and reap significantly greater value than those who do not.

Conclusion

My anticipated next post was to look at how service delivery platforms and SOA could help to accelerate the types of relationships I discussed.  I’m still aiming to write that post but decided there was value in sharing this exchange in the interim as a) it is related and adds to the discussion and b) given my excessive commitments at the moment it was easier than actually finding the time to write the post I need to ;-)

Stability, Relationships and Institutional Innovation as Pillars of Success

5 Feb

Just read a few pieces from John Hagel – one of my favourite writers – whilst catching up on my blog backlog.  Essentially two posts caught my attention, one describing some areas of concentration for the coming year and another exploring the concept of institutional innovation.  Both of these posts resonated strongly with me and I felt that elements of both were strongly – even inextricably – linked together.

The two main points from the first post that caught my attention were the value of stability in a changing world and the decreasing value of transactional behaviour.  These ideas were reinforced by the second post about institutional innovation, as I believe that such transformational innovation requires us to recognise the need for stability whilst simultaneously downplaying transactional behaviour in favour of mutually beneficial relationships.

I’ll explore this a little further.

The importance of internal stability

I’ve discussed many times how I believe that in order to be successful in a rapidly changing world we need to look at the structural properties of our organisations before considering how they work.  In this context, understanding the capabilities that we need to be successful – along with the metrics that they need to support – is a key lever for greater organisational adaptability.  This is because these abstractions allow us to concentrate on what we need to achieve whilst burying the detail of how we achieve it, preventing ourselves from being overwhelmed and therefore powerless to act decisively.  Equally importantly, however, a concentration on structural components and their relationships allows us to escape from tightly coupled process-oriented thinking and move to an organisational style that is more loosely coupled and output-driven.

In this discussion, however, the key characteristic of interest in a structural view is that of stability – essentially a structural, capability based view of an organisation is far more stable than a dynamic, process-oriented one.  This is because the things that we do broadly remain static over time whilst the way in which we do them can be impacted hugely by influences such as technology advances, economic pressures or the talent pool we have at our disposal.

As increasingly rapid change impacts our organisations and forces us to rethink how we do things, the value of understanding explicitly what we do cannot be overstated.  Chaos ensues when people have no fixed reference points around which to manage change; a stable view of intent, however, gives us the best of both worlds – a stable understanding of what needs to be achieved and therefore a clear field of operations in which to implement change in a decisive and systematic fashion.

The importance of external stability

Taking this thinking to the next level, mounting pressures will increasingly force organisations to look at the set of capabilities they have and decide which represent their core areas of expertise (it is worth noting that this is another, related reason to take a systematic view of the capabilities an organisation needs, but that is assumed in this post).  This choice will increasingly be driven by economic forces and so – to use another of John’s broad ideas – organisations will need to decide whether they major on customer relationships, infrastructure or innovation.  Making this decision is a further expression of stability at the macro level, since our organisation will now be signalling to its wider ecosystem that it has capabilities of a particular nature that can be leveraged and that it will be looking for partners to provide other elements of the value chain on its behalf.  At the top level, well positioned service providers can therefore become points of stability in the changing landscape of an industry, enhancing their own positions whilst also providing reference points for new entrants trying to excel in a different section of the value chain.

The futility of transactional behaviour

One of the key issues that emerges as we begin to seek complementary services from partners is the nature of the relationships we wish to build.  Current transaction-based thinking would tend to treat partners as ‘suppliers’ held at arm’s length and squeezed for the minimum sustainable cost.  This approach ignores the far greater benefits to be had from building and leveraging these relationships to achieve profitable and sustainable growth on both sides, and represents a zero-sum game in place of efforts to mutually grow the available opportunities.  Many kinds of emerging relationships – whether those lauded by the social networking advocates John mentions or those enabled by increasing B2B information exchange – are merely transactional in nature and not relationships in the true sense of the word.  If we are to maximise our opportunities and accelerate the improvement of our chosen capabilities then we must build deeper relationships in place of impersonal transactions in order to leverage complementary expertise and expand our influence and opportunities into new markets.

The increasing importance of deepening relationships

Increasingly we are seeing a shift away from consumer-supplier relationships where one side owns the power towards a situation in which different organisations compose value for customers out of their complementary capabilities.  In this context relationships are of far greater value than the lowest possible cost because we now need to leverage our combined talents to improve all of our services and, by extension, the overall result as experienced by the end customer.  We are therefore going to be looking to leverage our different perspectives and expertise in order to look at the overall value being created and to realign our individual services to improve the output of our joint activities.

Two points in particular stood out for me in John’s post in the context of relationships:

  • Deep relationships become increasingly valuable in times of widespread change;
  • in fact, deep relationships are essential to capture the full value of weak ties.

Again, one of the broader questions around the need for stability is what and who we can count on to help us make sense of the changes happening around us.  In this context the deeper our relationships with our partners, the more we can consider them a point of stability and support as we deal with change, and the broader the range of perspectives and insight we have in making sense of that change and deciding how to react most effectively.

In the second case it is worth highlighting the fact that deep relationships do not automatically imply tight integration.  In building relationships with partners who will offer their complementary capabilities to us we need to ensure that we deliver the loosest possible coupling between our respective services.  Counter-intuitively, perhaps, this is not because we wish to return to a view of our partner as a supplier to be easily swapped out but rather because we wish to give them the maximum scope for innovation in the services they deliver to us.  In this context – again counter-intuitively perhaps – we require much deeper relationships and trust to be in place before we will consent to the loose coupling required to maximise innovation; the natural tendency when we unbundle a capability to another party is to want to understand and control the way in which things are done on our behalf.  This is limiting behaviour since we fail to take advantage of the complementary expertise of the specialised partner and instead continue to project our inadequacies onto their delivery, constraining their ability to deliver the best possible service on our behalf.  Breaking this habit requires deep and trusting relationships, however, since the more interdependent we become the more we each depend upon the other for mutual success and therefore the more we tend to want to feel in control.

Leveraging stability and deep relationships to realise institutional innovation

These ideas around the importance of stability and the inadequacy of transactional thinking lead on to a consideration of John’s other post around institutional innovation.  Developing a stable view of your organisation, zeroing in on your key capabilities and then concentrating on changing the nature of your relationships with partners can open the door to the benefits of institutional innovation:

Stability:  We know what we do and what our partners do and so can successfully build relationships around these points of stability.  We have a context for innovation across organisational boundaries.

Modularity:  A shift to a stable view also drives a more modular approach, allowing us to minimise coupling and maximise innovation opportunities.  We therefore have a context for distributed and deconflicted innovation implementation.

Relationships:  Developing deep, positive sum game relationships enables us to jointly seek mutual advantage with our partners through leveraging our diversity.  We therefore have a context for longer term capability development and organisational growth.

Essentially, as we and our partners become more dependent on each other to realise value, so we also become dependent on each other to realise that value in the most effective way possible.  In this context innovation is no longer something that happens within the bounds of our own organisations but rather is most potent at the intersection of our capabilities with those of our partners.

As John points out, such innovation transcends current practices around ‘open innovation’, moving from point attempts to leverage occasional third party expertise to inform our own innovation towards a sustained and systematic examination of innovation opportunities across our partner ecosystem.  This broader approach requires us to continually leverage the diverse experience, expertise and perspective of our wider ecosystem to look for mutual advantage – a subtle but important difference.  Essentially such innovation recognises the increasingly symbiotic nature of partnership and the huge opportunities to create breakthrough innovation through diversity.  Such innovation may come in the form of improvements to individual capabilities as a result of partner feedback, in improvements to the functioning of the overall value web or in insights regarding new joint offerings or market opportunities.

Summary

This is an emerging area that requires us to rethink the way in which we deal with partners, the way in which we locate our people and the tools and technology that we can use to support distributed co-creation and innovation.  It may be that some of the tools are already here – I’ve written about using Web 2.0 techniques to leverage talent, for example – but our understanding of the practices and processes that will enable us to really exploit these opportunities is still limited.  Given the rich seam of benefit to be mined through broader, more collaborative innovation, however, we all have a duty to promote and develop these ideas as quickly as possible.
