Archive | Web 2.0

Is Social Media Rubbish?

8 Jul

I’ve read a few interesting posts recently relating to Social Media and ‘Enterprise 2.0’.  First up was Peter Evans-Greenwood talking about the myth of social organisations, given their incompatibility with current structures and the lack of business cases for many efforts.  From there I followed links out to Martin Linssen and Dennis Howlett – both of whom commented on the current state of Enterprise 2.0 and social business, in particular their lack of clarity (i.e. are they primarily about tools, people or marketing efforts?), the often ironic lack of focus on people in favour of technology and the paucity of compelling business cases.  They also highlighted the continued migration of traditional vendors from one hot topic to another (e.g. from ECM to Enterprise 2.0 to Social Business) in order to support updated positioning for products, creating confusion and distraction by suggesting that success comes from owning specific tools rather than from particular ways of working.

Most damningly of all I found a link (courtesy of @adamson) to some strong commentary from David Chalke of Quantum Market Research suggesting that:

Social media: ‘Oversold, misused and in decline’

All of these discussions made me think a bit about my own feelings about these topics at the moment.

The first thing to state is that it seems clear to me that in the broadest sense businesses will increasingly exist in extended value webs of customers and partners.  From that perspective ‘business sociability’ – i.e. the ability to take up a specialised position within a complex value web of complementary partners and to collaborate across organisational and geographical boundaries – will be critical.  The strength of an organisation’s network will increasingly define the strength of their capabilities.  Social tools that support people in building useful networks and in collaborating across boundaries – like social networks, micro-blogs, blogs, wikis, forums etc – will be coupled with new architectures and approaches – like SOA, open APIs and cloud computing – as the necessary technical foundations for “opening up” a business and allowing it to participate in wider value creation networks.  As I’ve discussed before, however, tooling will only exist to support talented people undertaking creative processes within the context of broader networks of codified and automated processes.

Whilst these tools therefore have the potential to support increasing participation in extended value webs, to develop knowledge and to support the work of our most talented people, it’s clear that throwing random combinations of them at the majority of existing business models without significant analysis of this broader picture is not only pointless but also extremely distracting and potentially very damaging (since failed, ill-thought-through initiatives give entrenched interests an opportunity to ignore the broader change for longer).

Most of the organisations I have worked with are failing to see the bigger picture outlined above, however.  For them ‘social tools’ are either all about the way in which they make themselves ‘cooler’ or ‘more relevant’ by ‘engaging’ in social media platforms for marketing or customer support (looking externally) or something vaguely threatening and of marginal interest that undermines organisational structures and leads to staff wasting time outside the restrictions of their job role (looking internally).  To date they seem to be less interested in how these tools relate to a wider transformation to more ‘social’ (i.e.  specialised and interconnected) business models.  As with the SOA inertia I discussed in a previous blog post there is no heartfelt internal urgency for the business model reconfiguration required to really take social thinking to the heart of the organisation.  Like SOA, social tools drive componentisation and specialisation along with networked collaboration and hence the changes required for one are pretty similar to the changes required for the other.  As with SOA it may take the emergence of superior external service providers built from the ground up to be open, social and designed for composition to really start to trigger internal change.

In lieu of reflecting on the deeper and more meaningful trends towards ‘business model sociability’ that are eroding the effectiveness of their existing organisation, then, many are currently trying to bolt ‘sociability’ onto the edge of their current model as simply another channel for PR activity.  Whilst this often goes wrong it can also add terrific value if done honestly or with a clear business purpose.  Mostly it is done with little or no business case – it is after all an imperative to be more social, isn’t it? – and for each accidental success that occurs because a company’s unarticulated business model happens to be right for such channels there are also many failures (because it isn’t).

The reality is that the value of social tools will depend on the primary business model you follow (and increasingly the business model of each individual business capability in your value web, both internal and external – something I discussed in more detail here).

I think my current feeling is therefore that we have a set of circumstances that go kind of like this:

  1. There is an emerging business disruption that will drive organisational specialisation around a set of ‘business model types’ but which isn’t yet broadly understood or seen by the majority of people who are busy doing real work;
  2. We have a broad set of useful tools that can be used to create enormous value by fostering collaboration amongst groups of people across departmental, organisational and geographic boundaries; and
  3. There are a small number of organisations who – often through serendipity – have happened to make a success of using a subset of these tools with particular consumer groups due to the accidental fit of their primary business model with the project and tools selected.

As a result, although most people’s reptilian brains instinctively feel that ‘something’ big is happening, instead of:

  • focusing on understanding their future business model (1) before
  • selecting useful tools to amplify this business model (2) and then
  • using them to engage with appropriate groups in a culturally appropriate way (3)

People are actually:

  • trying to blindly replicate others’ serendipitous success (3)
  • with whatever tools seem ‘coolest’ or most in use (2) and
  • with no hope of fundamentally addressing the disruptions to their business model (1)

Effectively most people are therefore coming at the problem from entirely the wrong direction and wasting time, money and – potentially – the good opinion of their customers.

To put it more clearly: rather than looking at their business as a collection of different business models and trying to work out how social tools can help in each different context, companies are all trying to use a single approach based largely on herd behaviour, even when their business model often has nothing directly to do with the target audience.  Until we separate out the kinds of capabilities that require the application of creative or networking talent, understand the business models that underpin them and then analyse the resulting ‘types’ of work (and hence outcomes) to be enabled by tooling, we will never gain significant value or leverage from the whole Enterprise 2.0 / social business / whatever field.

What’s the Future of SOA?

9 Nov

EbizQ asked last week for views on the improvements people believe are required to make SOA a greater success.  I think that if we step back we can see some hope – in fact increasing necessity – for SOA, and the cloud is going to be the major factor in this.

If we think about the history of SOA to date it was easy to talk about the need for better integration across the organisation, clearer views of what was going on or the abstract notion of agility. Making it concrete and urgent was more of an issue, however. Whilst we can discuss the ‘failure’ of SOA by pointing to a lack of any application of service principles at a business level (i.e. organisationally through some kind of EA) this is really only a symptom and not the underlying cause. In reality the cause of SOA failure to date has been business inertia – organisations were already set up to do what they did, they did it well enough in a push economy and the (understandable) incentives for wholesale consideration of the way the business worked were few.

The cloud changes all of this, however. The increasing availability of cloud computing platforms and services acts as a key accelerator to specialisation and pull business models since it allows new entrants to join the market quickly, cheaply and scalably and to be more specialised than ever before. As a result many organisational capabilities that were economically unviable as market offerings are now becoming increasingly viable because of the global nature of cloud services. All of these new service providers need to make their capabilities easy to consume, however, and as a result are making good use of what people are now calling ‘APIs’ in a Web 2.0 context but which are really just services; this is important as one of the direct consequences of specialisation is the need to be hooked into the maximum number of appropriate value web participants as easily as possible.

On the demand side, as more and more external options become available in the marketplace that offer the potential to replace those capabilities that enterprises have traditionally executed in house, so leaders will start to rethink the purpose of their organisations and leverage the capabilities of external service providers in place of their own.

As a result cloud and SOA are indivisible if we are to realise the potential of either; cloud enables a much broader and more specialised set of business service providers to enter a global market with cost and capability profiles far better than those which an enterprise can deliver internally. Equally importantly, however, these providers will be implicitly (but concretely) creating a ‘business SOA catalogue’ within the marketplace, removing the need for organisations to undertake a difficult internal slog to re-implement or re-configure outdated capabilities for reuse in service models. Organisations need to use this insight now to trigger the use of business architecture techniques to understand their future selves as service-based organisations – both by using external services as archetypes to help them understand the ways in which they need to change and offer their own specialised services, and by working with potential partners to co-develop and then disaggregate those services in which they don’t wish to specialise in future.

Having said all that to set the scene for my answer(!), I believe that SOA research needs to focus on raising the concepts of IT-mediated service provision to a business level.  That means concrete modelling of business capabilities and value webs, along with complex service levels, contracts, pricing and composition; new cloud development platforms, tooling and management approaches linked more explicitly to business outcomes and giving specialised support to different kinds of work; and the emergence of new third parties who will mediate, monitor and monetise such relationships on behalf of participants in order to provide the required trust.

All in all I guess there’s still plenty to do.

Cloud vs Mainframes

19 Oct

David Linthicum highlights some interesting research about mainframes and their continuation in a cloud era.

I think David is right that mainframes may be one of the last internal components to be switched off and that in 5 years most of them will still be around.  I also think, however, that the shift to cloud models may have a better chance of achieving the eventual decommissioning of mainframes than any previous technological advance.  Hear me out for a second.

All previous new generations of technology looking to supplant the mainframe have essentially been slightly better ways of doing the same thing.  Whilst we’ve had massive improvements in the cost and productivity of hardware, middleware and development languages, essentially we’ve continued to be stuck with purchase and ownership of costly and complex IT assets. As a result, whilst most new development has moved to other platforms, the case for shifting away from the mainframe has never seriously held water. Redevelopment would generate huge expense and risk yet result in no fundamental business shift. Essentially you still owned and paid for a load of technology ‘stuff’ and the people to support it, even if you successfully navigated the huge organisational and technical challenges required to move ‘that stuff’ to ‘this stuff’. In addition the costs already sunk into the assets and the technology cost barriers to other people entering a market (due to the capital required for large scale IT ownership) also added to the general inertia.

At its heart cloud is not a shift to a new technology but – for once – genuinely a shift to a new paradigm. It means capabilities are packaged and ready to be accessed on demand.  You no longer need to make big investments in new hardware, software and skills before you can even get started. In addition suddenly everyone has access to the best IT, and so your competitors (and new entrants) can immediately start building better capabilities than you without the traditional technology-based barriers to entry. This introduces four important considerations that might eventually lead to the end of the mainframe:

  1. Should an organisation decide to develop its way off the mainframe they can start immediately without the traditional need to incur the huge expense and risk of buying hardware, software, development and systems integration capability before they can even start to redevelop code.  This removes a lot of the cost-based risks and allows a more incremental approach;
  2. Many of the applications implemented on mainframes will increasingly be in competition with external SaaS applications that offer broadly equivalent functionality.  In this context moving away from the mainframe is even less costly and risky (whilst still a serious undertaking) since we do not need to even redevelop the functionality required;
  3. The nature of the work that mainframe applications were set up to support (i.e. internal transaction processing across a tight internal value chain) is changing rapidly as we move towards much more collaborative and social working styles that extend across organisational boundaries.  The changing nature of work is likely to eat away further at the tightly integrated functionality at the heart of most legacy applications and leave fewer core transactional components running on the mainframe; and
  4. Most disruptive of all, as organisations increasingly take advantage of falling collaboration costs to outsource whole business capabilities to specialised partners, so much of the functionality on the mainframe (and other systems) becomes redundant since that work is no longer performed in house.

I think that the four threads outlined here have the possibility to lead to a serious decline in mainframe usage over the next ten years.

But then again they are like Terminators – perhaps they will simply be acquired gradually by managed service providers offering to squeeze the cost of maintenance, morph into something else and survive in a low grade capacity for some time.


Private Clouds “Surge” for Wrong Reasons?

14 Jul

I read a post by David Linthicum today on an apparent surge in demand for Private Clouds.  This was in turn spurred by thoughts from Steve Rosenbush on increasing demand for Private Cloud infrastructures.

To me this whole debate is slightly tragic, as I believe that most people are framing the wrong issues when considering the public vs private cloud debate.  Frankly, for me it is a ridiculous debate: in my mind ‘the cloud’ can only exist ‘out there, somewhere’ and thus be shared, so a ‘private’ cloud can only be a logically separate area of a shared infrastructure and not an organisation specific infrastructure which merely shares some of the technologies and approaches – which is business as usual and not a cloud.  For that reason when I talk about public clouds I also include such logically private clouds running on shared infrastructures.  As David points out there are a whole host of reasons that people push back against the use of cloud infrastructures, mostly to do with retaining control in one way or another.  In essence there is a list of IT issues that people raise as absolute blockers that require private infrastructure to solve – particularly control, service levels and security – whilst they ignore the business benefits of specialisation, flexibility and choice.  Often “solving” the IT issues and propagating a model of ownership and mediocrity in IT delivery when it’s not really necessary merely denies the business the opportunity to solve their issues and transformationally improve their operations (and surely optimising the business is more important than undermining it in order to optimise the IT, right?).  That’s why for me the discussion should be about the business opportunities presented by the cloud and not simply a childish public vs private debate at the – pretty worthless – technology level.

Let’s have a look at a couple of issues:

  1. The degree of truth in the control, service and security concerns most often cited about public cloud adoption and whether they represent serious blockers to progress;
  2. Whether public and private clouds are logically equivalent or completely different.

IT issues and the Major Fallacies

Control

Everyone wants to be in control.  I do.  I want to feel as if I’m moving towards my goals, doing a good job – on top of things.  In order to be able to be on top of things, however, there are certain things I need to take for granted.  I don’t grow my own food, I don’t run my own bank, I don’t make my own clothes.  In order for me to concentrate on my purpose in life and deliver the higher level services that I provide to my customers there are a whole bunch of things that I just need to be available to me at a cost that fits into my parameters.  And to avoid being overly facetious I’ll also extend this into the IT services that I use to do my job – I don’t build my own blogging software or create my own email application but rather consume all of these as services over the web from people like WordPress.com and Google. 

By not taking personal responsibility for the design, manufacture and delivery of these items, however (i.e. by not maintaining ‘control’ of how they are delivered to me), I gain the more useful ability to be in control of which services I consume to give me the greatest chance of delivering the things that are important to me (mostly, lol).  In essence I would have little chance of sitting here writing about cloud computing if I also had to cater to all my basic needs (from both a personal as well as IT perspective).  I don’t want to dive off into economics but simplistically I’m taking advantage of the transformational improvements that come from division of labour and specialisation – by relying on products and services from other people who can produce them better and at lower cost I can concentrate on the things that add value for me.

Now let’s come back to the issue of private infrastructure.  Let’s be harsh.  Businesses simply need IT that performs some useful service.  In an ideal world they would simply pay a small amount for the applications they need, as they need them.  For 80% of IT there is absolutely no purpose in owning it – it provides no differentiation and is merely an infrastructural capability that is required to get on with value-adding work (like my blog software).  In a totally optimised world businesses wouldn’t even use software for many of their activities but rather consume business services offered by partners that make IT irrelevant. 

So far then we can argue that for 80% of IT we don’t actually need to own it (i.e. we don’t need to physically control how it is delivered) as long as we have access to it.  For this category we could easily consume software as a service from the “public” cloud and doing so gives us far greater choice, flexibility and agility.

In order to deliver some of the applications and services that a business requires to deliver its own specialised and differentiated capabilities, however, they still need to create some bespoke software.  To do this they need a development platform.  We can therefore argue that the lowest level of computing required by a business in future is a Platform as a Service (PaaS) capability; businesses never need to be aware of the underlying hardware as it has – quite literally – no value.  Even in terms of the required PaaS capability the business doesn’t have any interest in the way in which it supports software development as long as it enables them to deliver the required solutions quickly, cheaply and with the right quality.  As a result the internals of the PaaS (in terms of development tooling, middleware and process support) have no intrinsic value to a business beyond the quality of outcome delivered by the whole.  In this context we also do not care about control since as long as we get the outcomes we require (i.e. rapid, cost effective and reliable applications delivery and operation) we do not care about the internals of the platform (i.e. we don’t need to have any control over how it is internally designed, the technology choices to realise the design or how it is operated).  More broadly a business can leverage the economies of scale provided by PaaS providers – plus interoperability standards – to use multiple platforms for different purposes, increasing the ‘fitness’ of their overall IT landscape without the traditional penalties of heterogeneity (since traditionally they would be ‘bound’ to one platform by the inability of their internal IT department to cost-effectively support more than one technology).

Thinking more deeply about control in the context of this discussion we can see that for the majority of IT required by an organisation concentrating on access gives greater control than ownership due to increased choice, flexibility and agility (and the ability to leverage economies of scale through sharing).  In this sense the appropriate meaning of ‘control’ is that businesses have flexibility in choosing the IT services that best optimise their individual business capabilities and not that the IT department has ‘control’ of the way in which these services are built and delivered.  I don’t need to control how my clothes manufacturer puts my t-shirt together but I do want to control which t-shirts I wear.  Control in the new economy is empowerment of businesses to choose the most appropriate services and not of the IT department to play with technology and specify how they should be built.  Allowing IT departments to maintain control – and meddle in the way in which services are delivered – actually destroys value by creating a burden of ownership for absolutely zero value to the business.  As a result giving ‘control’ to the IT department results in the destruction of an equal and opposite amount of ‘control’ in the business and is something to be feared rather than embraced.

So the need to maintain control – in the way in which many IT groups are positioning it – is the first major and dangerous fallacy. 

Service levels

It is currently pretty difficult to get a guaranteed service level with cloud service providers.  On the other hand, most providers consistently deliver availability well above 99%, and so the actual service levels are pretty good.  The lack of a piece of paper with this actual, experienced service level written down as a guarantee, however, is currently perceived as a major blocker to adoption.  Essentially IT departments use it as a way of demonstrating the superiority of their services (“look, our service level says 5 nines – guaranteed!”) whilst the level of stock they put in these service levels creates FUD in the minds of business owners who want to avoid major risks.
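As a rough aside – my own back-of-envelope arithmetic, not anything published by the providers – it’s worth remembering what those availability figures actually translate into over a year:

    # Back-of-envelope downtime arithmetic for common availability targets.
    # Purely illustrative; real SLAs define their own measurement windows and exclusions.
    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for availability in (0.99, 0.999, 0.9999, 0.99999):  # two to five 'nines'
        downtime_minutes = (1 - availability) * MINUTES_PER_YEAR
        print(f"{availability:.3%} availability allows ~{downtime_minutes:,.0f} minutes of downtime per year")

The difference between ‘five nines’ and 99% is the difference between roughly five minutes and more than three and a half days of downtime a year – which is exactly why published, achieved figures matter more than the number written in the guarantee.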

So let’s lay this out.  People compare the current lack of service level guarantees from cloud service providers with the ability to agree ‘cast-iron’ service levels with internal IT departments.  Every project I’ve ever been involved in has had a set of service levels but very few ever get delivered in practice.  Sometimes they end up being twisted into worthless measures for simplicity of delivery – like whether a machine is running irrespective of whether the business service it supports is available – and sometimes they are just unachievable given the level of investment and resources available to internal IT departments (whose function, after all, is merely that of a barely-tolerated but traditionally necessary drain on the core purpose of the business). 

So to find out whether I’m right or not, and whether service level guarantees have any meaning, I will wait until every IT department in the world puts their actual achieved service levels up on the web like – for instance – Salesforce.  I’m keen to compare practice rather than promises.  Irrespective of guarantees my suspicion is that most organisations’ actual service levels are woeful in comparison to those delivered by cloud providers, but I’m willing to be convinced.  Despite the illusion of SLA guarantees and enforcement, the majority of internal IT departments (and the managed service providers who take over all of those legacy systems, for that matter) get nowhere near the actual service levels of cloud providers, irrespective of what internal documents might say.  It is a false comfort.  Businesses therefore need to wise up and consider real data and actual risks – in conjunction with the transformational business benefits that can be gained by offloading capabilities and specialising – rather than let such meaningless nonsense take them down the old path to ownership; in doing so they are potentially sacrificing a move to cloud services and therefore their best chance of transforming their relationship with their IT and optimising their business.  This is essentially the ‘promise’ of buying into updated private infrastructures (aka ‘private cloud’).

A lot of it comes down to specialisation again and the incentives for delivering high service levels.  Think about it – a cloud provider (literally) lives and dies by whether the services they offer are up; without them they make no money, their stock falls and customers move to other providers.  That’s some incentive to maintain excellence.  Internally – well, what you gonna do?  You own the systems and all of the people so are you really going to penalise yourself?  Realistically you just grit your teeth and live with the mediocrity even though it is driving rampant sub-optimisation of your business.  Traditionally there has been no other option and IT has been a long process of trying to have less bad capability than your competitors, to be able to stagger forward slightly faster or spend a few pence less.  Even outsourcing your IT doesn’t address this since whilst you have the fleeting pleasure of kicking someone else at the end of the day it’s still your IT and you’ve got nowhere to go from there.  Cloud services provide you with another option, however, one which takes advantage of the fact that other people are specialising on providing the services and that they will live and die by their quality.  Whilst we might not get service levels – at this point in their evolution at least – we do get transparency of historical performance and actual excellence; stepping back it is critical to realise that deeds are more important than words, particularly in the new reputation-driven economy. 

So the perceived need for service levels as a justification for private infrastructures is the second major and dangerous fallacy.  Businesses may well get better service levels from cloud providers than they would internally and any suggestion to the contrary will need to be backed up by thorough historical analysis of the actual service levels experienced for the equivalent capability.  Simply stating that you get a guarantee is no longer acceptable. 

Security

It’s worth stating from the beginning that there is nothing inherently less secure about cloud infrastructures.  Let’s just get that out there to begin with.  Also, to get infrastructure as a service out of the way – given that we’re taking the position in this post that PaaS is the first level of actual value to a business – we can say that it’s just infrastructure; your data and applications will be no more or less secure than your own procedures make them, but the data centre is likely to be at least as secure as your own and probably much more so due to the level of capability required by a true service provider.

So starting from ground zero with things that actually deliver something (i.e. PaaS and SaaS), a cloud provider can build a service that uses any of the technologies that you use in your organisation to secure your applications and data, only they’ll have more use cases and hence will consider more threats than you will.  And that’s just the start.  From that point the cloud provider will also have to consider how they manage different tenants to ensure that their data remains secure, and they will also have to protect customers’ data from their own (i.e. the cloud service provider’s) employees.  This is a level of security that is rarely considered by internal IT departments and results in more – and more deeply considered – data separation and encryption than would be possible within a single company.
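To make the multi-tenancy point slightly more concrete, here is a minimal sketch – entirely illustrative, and not any particular provider’s design – of the kind of per-tenant separation a provider has to treat as table stakes: each tenant’s records encrypted under that tenant’s own key, so that neither other tenants nor the provider’s own staff can casually read them.

    # Illustrative per-tenant data separation: every tenant's records are encrypted
    # under that tenant's own key, even though the underlying storage is shared.
    # Requires the third-party 'cryptography' package (pip install cryptography).
    from cryptography.fernet import Fernet

    class TenantStore:
        def __init__(self):
            self._keys = {}   # tenant id -> that tenant's encryption key
            self._data = {}   # (tenant id, record id) -> ciphertext

        def register_tenant(self, tenant_id: str) -> None:
            self._keys[tenant_id] = Fernet.generate_key()

        def put(self, tenant_id: str, record_id: str, plaintext: bytes) -> None:
            cipher = Fernet(self._keys[tenant_id])
            self._data[(tenant_id, record_id)] = cipher.encrypt(plaintext)

        def get(self, tenant_id: str, record_id: str) -> bytes:
            # A record can only be decrypted with the key of the tenant that owns it.
            cipher = Fernet(self._keys[tenant_id])
            return cipher.decrypt(self._data[(tenant_id, record_id)])

    store = TenantStore()
    store.register_tenant("acme")
    store.put("acme", "invoice-1", b"confidential invoice data")
    print(store.get("acme", "invoice-1"))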

Looking at the cloud service from the outside we can see that providers will be more obvious targets for security attacks than individual enterprises but counter-intuitively this will make them more secure.  They will need to be secured against a broader range of attacks, they will learn more rapidly and the capabilities they learn through this process could never be created within an internal IT organisation.  Frankly, however, the need to make security of IT a core competency is one of the things that will push us towards consolidation of computing platforms into large providers – it is a complex subject that will be more safely handled by specialised platforms rather than each cloud service provider or enterprise individually. 

All of these changes are part of the more general shift to new models of computing; to date the paradigm for security has largely been that we hide our applications and data from each other within firewalled islands.  Increasing collaboration across organisations and the cost, flexibility and scale benefits of sharing mean that we need to find a way of making our services available outside our organisational boundaries, however.  Again in doing this we need to consider who is best placed to ensure the secure operation of applications that are supporting multiple clients – is it specialised cloud providers who have created a security model specifically to cope with secure open access and multi-tenancy for many customer organisations, or is it a group of keen “amateurs” with the limited experience that comes from the small number of use cases they have discovered within the bounds of a single organisation?  Furthermore as more and more companies migrate onto cloud services – and such services become ever more secure – so the isolated islands will become prime targets for security attacks, since the likelihood that they can maintain top levels of security cut off from the rest of the industry – and with far less investment in security than can be made by specialised platform providers – becomes ever less.  Slowly isolationism becomes a threat rather than a protection.  We really are stronger together.

A final key issue that falls under the ‘security’ tag is that of data location (basically the perceived requirement to keep data in the country in which the customer’s business operates).  Often this starts out as the major, major barrier to adoption but slowly you often discover that people are willing to trade off where their data are stored when the costs of implementing such location policies can be huge for little value.  Again, in an increasingly global world businesses need to think more openly about the implications of storing data outside their country – for instance a UK company (perhaps even government) may have no practical issues in storing most data within the EU.  In many cases, however, businesses apply old rules or ways of thinking rather than challenging themselves in order to gain the benefits involved.  This is often tied into political processes – particularly between the business and IT – and leads to organisations not sufficiently examining the real legal issues and possible solutions in a truly open way.  This can often become an excuse to build a private infrastructure, fulfilling the IT department’s desire to maintain control over the assets but in doing so loading unnecessary costs and inflexibility on the business itself – ironically as a direct result of the business’s unwillingness to challenge its own thinking.

Does this mean that I believe that people should immediately begin throwing applications into the cloud without due care and attention?  Of course not.  Any potential provider of applications or platforms will need to demonstrate appropriate certifications and undergo some kind of due diligence.  Where data resides is a real issue that needs to be considered but increasingly this is regional rather than country specific.   Overall, however, the reality is that credible providers will likely have better, more up to date and broader security measures than those in place within a single organisation. 

So finally – at least for me – weak cloud security is the third major and dangerous fallacy.

Comparing Public and Private

Private and Public are Not Equivalent

The real discussion here needs to be less about public vs private clouds – as if they are equivalent but just delivered differently – and more about how businesses can leverage the seismic change in model occurring in IT delivery and economics.  Concentrating on the small minded issues of whether technology should be deployed internally or externally as a result of often inconsequential concerns – as we have discussed – belittles the business opportunities presented by a shift to the cloud by dragging the discussion out of the business realm and back into the sphere of techno-babble.

The reality is that public and private clouds and services are not remotely equivalent; private clouds (i.e. internal infrastructure) are a vote to retain the current expensive, inflexible and one-size-fits-all model of IT that forces a business to sub-optimise a large proportion of its capabilities to make their IT costs even slightly tolerable.  It is a vote to restrict choice, reduce flexibility, suffer uncompetitive service levels and to continue to be distracted – and poorly served – by activities that have absolutely no differentiating value to the business. 

Public clouds and services on the other hand are about letting go of non-differentiating services and embracing specialisation in order to focus limited attention and money on the key mission of the business.  The key point in this whole debate is therefore specialisation; organisations need to treat IT as an enabler and not an asset, they need to  concentrate on delivering their services and not on how their clothes get made. 

Summary

If there is currently a ‘surge’ in interest in private clouds it is deeply confusing (and disturbing) to me given that the basis for focusing attention on private infrastructures appears to be deeply flawed thinking around control, service and security.  As we have discussed not only are cloud services the best opportunity that businesses have ever had to improve these factors to their own gain but a misplaced desire to retain the IT models of today also undermines the huge business optimisations available through specialisation and condemns businesses to limited choice, high costs and poor service levels.  The very concerns that are expressed as reasons not to move to cloud models – due to a concentration on FUD around a small number of technical issues – are actually the things that businesses have most to gain from should they be bold and start a managed transition to new models.  Cloud models will give them control over their IT by allowing them to choose from different providers to optimise different areas of their business without sacrificing scale and management benefits; service levels of cloud providers – whilst not currently guaranteed – are often better than they’ve ever experienced and entrusting security to focused third parties is probably smarter than leaving it as one of many diverse concerns for stretched IT departments. 

Fundamentally, though, there is no equivalence between the concept of public (including logically private but shared) and truly private clouds; public services enable specialisation, focus and all of the benefits we’ve outlined whereas private clouds are just a vote to continue with the old way.  Yes virtualisation might reduce some costs, yes consolidation might help but at the end of the day the choice is not the simple hosting decision it’s often made out to be but one of business strategy and outlook.  It boils down to a choice between being specialised, outward looking, networked and able to accelerate capability building by taking advantage of other people’s scale and expertise or rejecting these transformational benefits and living within the scale and capability constraints of your existing business – even as other companies transform and build new and powerful value networks without you.

iPad not harbinger of PC doom according to Steve Jobs

9 Jun

After having put some time into thinking about people’s discomfort with the iPad a couple of weeks ago I was interested in this brief article in AppleInsider where Steve Jobs admits that the notion of a post-PC era is ‘uncomfortable’ for many people – a subject that I touched on in my post.  Jobs’ comments appear to support my own impressions that this is really just a maturation of the industry, a democratisation of access to computing for the masses and that it won’t undermine traditional computing for those with the necessary skills.  This should be a relief to people who worry that such devices will replace computers and thereby destroy the ability of individuals to be technically “generative”.   I also basically agree with his summary of tablets as a new form factor that replaces the need for a PC for many people, that PCs will continue to exist and that more choice is good (and although I still don’t agree with the Apple business model – and feel that it will suffer as other people replicate their innovations in more open ecosystems – only one of us is obscenely rich :-)).

More broadly my gut feel is that as the interfaces and capabilities of tablets increase in sophistication so we will be able to encourage more ‘vertical’ and ‘individual’ creativity and “generativity” in the population as a whole.  These people won’t be using the same tools as those we’ve had to learn to create through PC use but then they also won’t need that lower level, general-purpose control over raw computing that many people have had to learn merely to pursue higher level interests.  There will still be plenty of IT work – in fact more than ever – implementing applications and services to help these newly liberated consumers ignore the underlying computer and be creative within their own domains.

Business Enablement as a Key Cloud Element

30 Apr

After finally posting my last update about ‘Industrialised Service Delivery’ yesterday I have been happily catching up with the intervening output of some of my favourite bloggers.

One post that caught my eye was a reference from Phil Wainwright – whilst he was talking about the VMForce announcement – to a post he had written earlier in the year about Microsoft’s partnership with Intuit.  Essentially one of his central statements was related directly to the series of posts I completed yesterday (so part 1, part 2 and part 3):

“the breadth of infrastructure <required for SaaS> extends beyond the development functionality to embrace the entirely new element of service delivery capabilities. This is a platform’s support for all the components that go with the as-a-service business model, including provisioning, pay-as-you-go pricing and billing, service level monitoring and so on. Conventional software platforms have no conception of these types of capability but they’re absolutely fundamental to delivering cloud services and SaaS applications”.

This is one of the key points that I think is still – inexplicably – lost on many people (particularly people who believe that cloud computing is primarily about providing infrastructure as a service).  In reality the whole world is moving to service models because they are simpler to consume, deliver clearer value for more transparent costs and can be shared across organisations to generate economies of scale.  In fact ‘as a service’ models are increasingly not just going to be an IT phenomenon but are also going to extend to the way in which businesses deal with each other across organisational boundaries.  For the sale and consumption of such services to work, however, we need to be able to ‘deliver’ them; in this context we need to be able to market them, make them easy to subscribe to, manage billing and service levels transparently for both the supplier and consumer and enable rapid change and development over time to meet the evolving needs of service consumers.  As a result anyone who wants to deliver business capabilities in the future – whether these are applications or business process utilities – will need to be able to ensure that their offering exhibits all of these characteristics.

Interestingly these ‘business enablement’ functions are pretty generic across all kinds of software and services since they essentially cover account management, subscription, business model definition, rating and billing, security, marketplaces etc etc (i.e. all of the capabilities that I defined as being required in a ‘Service Delivery Platform’).  In this context the use of the term ‘Service Delivery Platform’ in place of cloud or PaaS was deliberate; what next generation infrastructures need to do is enable people to deliver business services as quickly and as robustly as possible, with the platforms themselves also helping to ensure trust by brokering between the interests of consumers and suppliers through transparent billing and service management mechanisms.
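As a minimal sketch of what I mean – the names below are my own hypothetical shorthand rather than any real platform’s API – these generic business enablement functions might surface as a single platform interface sitting alongside the services being delivered:

    # Hypothetical sketch of the generic 'business enablement' capabilities a
    # Service Delivery Platform would expose; the names are illustrative only.
    from dataclasses import dataclass
    from typing import Protocol

    @dataclass
    class Subscription:
        account_id: str
        service_id: str
        pricing_model: str   # e.g. "pay-as-you-go", "per-seat"

    class ServiceDeliveryPlatform(Protocol):
        def create_account(self, organisation: str) -> str: ...
        def publish_to_marketplace(self, service_id: str, description: str) -> None: ...
        def subscribe(self, account_id: str, service_id: str, pricing_model: str) -> Subscription: ...
        def record_usage(self, subscription: Subscription, units: float) -> None: ...  # rating input
        def bill(self, account_id: str) -> float: ...                                  # transparent billing
        def report_service_levels(self, service_id: str) -> dict: ...                  # SLA transparency for both sides

The point is simply that these functions sit alongside, not inside, the services being delivered, and they look much the same whatever the service actually does.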

This belief in service delivery is one of the reasons I believe that the notion of ‘private clouds’ is an oxymoron – I found this hoary subject raised again on a Joe McKendrick post after a discussion on ebizQ – even without the central point about the obvious loss of economies of scale.  Essentially the requirement to provide a whole business enablement fabric to facilitate cross-organisational service ecosystems – initially for SaaS but increasingly for organisational collaboration and specialisation – is just one of the reasons I believe that ‘Private Clouds’ are really just evolutions of on-premise architecture patterns – with all of the costs and complexity retained – and thus purely marketecture.  When decreasing transaction costs are enabling much greater cross-organisational value chains, the benefits of a public service delivery platform are immense, enabling organisations to both scale and evolve their operations more easily whilst also providing all of the business support they need to offer and consume business services in extended value chains.  Whilst some people may think that this is a pretty future-oriented reason to not like the notion of private clouds, for completeness I will also say that to me – in the sense of customer-owned infrastructures – they are an anachronism; this is just an extension of existing models (for good or ill) and nothing to do with ‘cloud’.  It is only the fact that most protagonists of such models are vendors with very low maturity offerings like packaged infrastructure and/or middleware solutions that makes it viable, since the complexity of delivering true private SDP offerings would be too great (not to mention ridiculously wasteful).  In my view ‘private clouds’ in the sense of end organisation deployment is just building a new internal infrastructure (whether self managed or via a service company) sort of like the one you already have, but with a whole bunch of expensive new hardware and software (so 90% of the expense but only 10% of the benefits).

To temper this stance I do believe that there is a more subtle, viable version of ‘privacy’ that will be supported by ‘real’ service delivery platforms over time – that of having a logically private area of a public SDP to support an organisational context (so a cohesive collection of branded services, information and partner integrations – or what I’ve always called ‘virtual private platforms’).  This differs greatly from the ‘literally’ private clouds that many organisations are positioning as a mechanism to extend the life of traditional hardware, middleware or managed service offerings – the ability of service delivery platforms to rapidly instantiate ‘virtual’ private platforms will be a core competency and give the appearance and benefits of privacy whilst also maintaining the transformational benefits of leveraging the cloud in the first place.  To me, literally ‘private clouds’ on an organisation’s own infrastructure – with all of their capital expense, complexity of operation, high running costs and ongoing drag on agility – only exist in the minds of software and service companies looking to extend out their traditional businesses for as long as possible.

Industrialised Service Delivery Redux II

22 Sep

In my previous post I discussed the way in which our increasingly sophisticated use of the Web is creating an unstoppable wave of change in the global business environment.  This resulting acceleration of change and expectation will require unprecedented organisational speed and adaptability whilst simultaneously driving globalisation and consumerisation of business.  I discussed my belief that companies will be forced to reform as a portfolio of systematically designed components with clear outcomes and how this kind of thinking changes the relationship between a business capability and its IT support.  In particular I discussed the need to create industrialised Service Delivery Platforms which vastly increase the speed, reliability and cost effectiveness of delivering service realisations. 

In this post I’ll move on to the second part of the story, where I’ll look more specifically at how we can realise the industrialisation of service delivery through the creation of an SDP.

Industrialisation 101

There has been a great deal written about industrialisation over the last few years and most of this literature has focused on IT infrastructure (i.e. hardware) where components and techniques are more commoditised.  As an example many of my Japanese colleagues have spent decades working with leaders in the automotive industry and experienced firsthand the techniques and processes used in zero defect manufacturing and the application of lean principles. Sharing this same mindset around reliability, zero defect and technology commoditisation they created a process for delivering reliable and guaranteed outcomes through pre-integration and testing of combinations of hardware and software.  This kind of infrastructure industrialisation enables much higher success rates whilst simultaneously reducing the costs and lead times of implementation.

In order to explore this a little further and to set some context, let’s just think for a moment about the way in which IT has traditionally served its business customers.

[Figure: non-industrialised vs industrialised delivery]

We can see that generally speaking we are set a problem to solve and we then take a list of products selected by the customer – or often by one of our architects applying personal preference – and we try to integrate them together on the customer’s site, at the customer’s risk and at the customer’s expense. The problem is that we may never have used this particular combination of hardware, operating systems and middleware before – a problem that worsens exponentially as we increase the complexity of the solution, by the way – and so there are often glitches in their integration, it’s unclear how to manage them and there can’t be any guarantees about how they will perform when the whole thing is finally working. As a result projects take longer than they should – because much has to be learned from scratch every time – they cost a lot more than they should – because there are longer lead times to get things integrated, to get them working and then to get them into management – and, most damningly, they are often unreliable as there can be no guarantees that the combination will continue to work and there is learning needed to understand how to keep them up and running.

The idea of infrastructure industrialisation, however, helps us to concentrate on the technical capability required – do you want a Java application server? Well here it is, pre-integrated on known combinations of hardware and software and with manageability built in but – most importantly – tested to destruction with reference applications so that we can place some guarantees around the way this combination will perform in production.  As an example, 60% of the time taken within Fujitsu’s industrialisation process is in testing.  The whole idea of industrialisation is to transfer the risk to the provider – whether an internal IT department or an external provider – so that we are able to produce consistent results with standardised form and function, leading to quicker, more cost effective and reliable solutions for our customers.

Now such industrialisation has slowly been maturing over the last few years but – as I stated at the beginning – has largely concentrated on infrastructure templating – hardware, operating systems and middleware combined and ready to receive applications.  Recent advances in virtualisation are also accelerating the commoditisation and industrialisation of IT infrastructure by making this templating process easier and more flexible than ever before.  Such industrialisation provides us with more reliable technology but does not address the ways in which we can realise higher level business value more rapidly and reliably.  The next (and more complex) challenge, therefore, is to take these same principles and apply them to the broader area of business service realisation and delivery.  The question is: how can we do this?
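As a purely illustrative sketch – my own shorthand rather than anyone’s actual template catalogue – an infrastructure template of the kind described above is really just a pre-tested combination of layers with quality-of-service guarantees attached:

    # Illustrative infrastructure template: a pre-integrated, pre-tested combination
    # of hardware, operating system and middleware with guarantees attached.
    from dataclasses import dataclass, field

    @dataclass
    class InfrastructureTemplate:
        name: str
        hardware: str
        operating_system: str
        middleware: list = field(default_factory=list)
        tested_reference_apps: list = field(default_factory=list)
        guarantees: dict = field(default_factory=dict)  # e.g. availability, provisioning lead time

    java_app_server = InfrastructureTemplate(
        name="java-app-server-small",
        hardware="2 x commodity x86 nodes",
        operating_system="Enterprise Linux",
        middleware=["JVM", "Java application server"],
        tested_reference_apps=["reference web application", "reference batch load"],
        guarantees={"availability": "99.9%", "provisioning": "hours rather than weeks"},
    )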

Industrialisation From Top to Bottom

Well the first thing to do is understand how you are going to get from your expression of intent – i.e. the capability definitions I discussed in my previous post that abstract us away from implementation concerns – through to a running set of services that realise this capability on an industrialised Service Delivery Platform. This is a critical concern since if you don’t understand your end to end process then you can’t industrialise it through templating, transformation and automation.

[Figure: end-to-end service realisation]

In this context we can look at our capability definitions and map concepts in the business architecture model down to classifications in the service model.  Capabilities map to concrete services, macro processes map to orchestrations, people tasks map to workflows, top level metrics become SLAs to be managed etc. The service model essentially bridges the gap between the expression of intent described by the target business architecture and the physical reality of assets needed to execute within the technology environment.
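Written down as a simple lookup (illustrative only), the mapping is roughly:

    # Illustrative mapping from business architecture concepts to service model
    # classifications, as described above.
    BUSINESS_TO_SERVICE_MODEL = {
        "capability":       "concrete service",
        "macro process":    "service orchestration",
        "people task":      "workflow",
        "top-level metric": "service level agreement (SLA)",
    }

    def classify(business_concept: str) -> str:
        """Map a business architecture concept to its service model counterpart."""
        return BUSINESS_TO_SERVICE_MODEL[business_concept]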

From here we broadly need to understand how each of our service types will be realised in the physical environment – so for instance we need a physical host to receive and execute each type of service, we need to understand how SLAs are provisioned so that we can monitor them etc. etc.

Basically the concern at this stage is to understand the end to end process through which we will transform the data that we capture at each stage of the process into ever more concrete terms – all the way from logical expressions of intent through greater information about the messages, service levels and type of implementation required, through to a whole set of assets that are physically deployed and executing on the physical service platform, thus realising the intent.

The core aim of this process must be to maximise both standardisation of approach and automation at each stage to ensure repeatability and reliability of outcome – essentially our aim in this process is to give business capability owners much greater reliability and rapidity of outcome as they look to realise business value.  We essentially want to give guarantees that we can not only realise functionality rapidly but also that these realisations will execute reliably and at low cost.  In addition we must also ensure that the linkage between each level of abstraction remains in place so that information about running physical services can be used to judge the performance of the capability that they realise, maximising the levers of change available to the organisation by putting them in control of the facts and allowing them to ‘know sooner’ what is actually happening.

Having an end to end view of this process essentially creates the rough outline of the production line that needs to be created to realise value – it gives us a feel for the overall requirements.  Unfortunately, however, that’s the nice bit, the kind of bit that I like to do. Whilst we need to understand broadly how we envisage an end to end capability realisation process working, the real work is in the nasty bit – when it comes to industrialisation work has to start at the bottom.

Industrialisation from Bottom to Top

If you imagine the creation of a production line for any kind of physical good, it obviously has to be designed to optimise the creation of the end product. Every little conveyor belt or twisty robot arm has to be calibrated to nudge or weld the item in exactly the same spot to achieve repeatability of outcome. In the same way any attempt to industrialise the process of capability realisation has to start at the bottom with a consideration of the environment within which the final physical assets will execute and of how to create assets optimised for this environment as efficiently as possible. I use a simple ‘industrialisation pyramid’ to visualise this concept, since increasingly specialised and high value automation and industrialisation needs to be built on broader and more generic industrialised foundations. In reality the process is actually highly iterative as you need to continually recalibrate both up and down the hierarchy to ensure that the process is both efficient and realises the expressed intent, but for the sake of simplicity you can assume that we just build this up from the bottom.

[Figure: industrialisation pyramid]

So let’s start at the bottom with the core infrastructure technologies – what are the physical hosts that are required to support service execution? What physical assets will services need to create in order to execute on top of them? How does each host combine together to provide the necessary broad infrastructure and what quality of service guarantees can we put around each kind of host? Slightly more broadly, how will we manage each of the infrastructure assets? This stage requires a broad range of activity not just to standardise and templatise the hosts themselves but also to aggregate them into a platform and to create all of the information standards and process that deployed services will need to conform to so that we can find, provision, run and manage them successfully.

Moving up the pyramid we can now start to think in more conceptual terms about the reference architecture that we want to impose – the service classifications we want to use, the patterns and practices we want to impose on the realisation of each type, and more specifically the development practices.  Importantly we need to be clear about how these service classifications map seamlessly onto the infrastructure hosting templates and lower level management standards to ensure that our patterns and practices are optimised – it’s only in this way that we can guarantee outcomes by streamlining the realisation and asset creation process. Gradually through this definition activity we begin to build up a metamodel of the types of assets that need to be created as we move from the conceptual to the physical and the links and transformations between them. This is absolutely key as it enables us to move to the next level – which I call automating the “means of production”.

This level becomes the production line that pushes us reliably and repeatably from capability definition through to physical realisation. The metamodel we built up in the previous tier helps us to define domain specific languages that simplify the process of generating the final output, allowing the capture of data about each asset and the background generation of code that conforms to our preferred classification structure, architectural patterns and development practices. These DSLs can then be pulled together into "factories" specialised to the realisation of each type of asset, with each DSL representing a different viewpoint for the particular capability in hand.  Individual factories can then be aggregated into a 'capability realisation factory' that drives the end to end process.  As I stated in my previous post the whole factory and DSL space is mildly controversial at the moment, with Microsoft advocating explicit DSL and factory technologies and others continuing to work towards MDA or flexible open source alternatives.  Suffice it to say in this context that the approaches I'm advocating are possible via either model – a subject I might actually return to with some examples of each (for an excellent consideration of this whole area consult Martin Fowler's great coverage, btw).
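Purely to give a flavour of this, and without assuming either the Microsoft or the MDA tooling, the fragment below is a toy internal DSL: a fluent model of a service that can emit a skeleton conforming to a house pattern. A real factory would generate far richer assets, but the principle of capturing data once and generating conformant code in the background is the same.

```python
class ServiceModel:
    """A tiny internal 'DSL': a fluent builder that captures one viewpoint
    of a service and can emit a skeleton that follows the house patterns."""

    def __init__(self, name):
        self.name = name
        self.operations = []

    def operation(self, op_name, request, response):
        self.operations.append((op_name, request, response))
        return self  # fluent style keeps the model readable

    def emit_skeleton(self):
        lines = [f"class {self.name}Service:  # generated - do not hand-edit"]
        for op, req, resp in self.operations:
            lines.append(f"    def {op}(self, request: '{req}') -> '{resp}':")
            lines.append("        raise NotImplementedError  # developer fills this in")
        return "\n".join(lines)

# Usage: describe the service once, generate the conformant scaffold.
quote = (ServiceModel("Quotation")
         .operation("request_quote", "QuoteRequest", "QuoteResponse"))
print(quote.emit_skeleton())
```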

The final level of this pyramid is to actually start taking the capability realisation factories and tailoring them for the creation of industry specific offerings – perhaps a whole set of 'factories' around banking, retail or travel capabilities. From my perspective this is the furthest out and may actually not come to pass; despite Jack Greenfield's compelling arguments I feel that the rise of SOA and SaaS will obviate the need to generate the same application many times by allowing solutions to be composed from shared utilities.  The idea of an application or service specific factory assumes a continuation of IT oversupply through many deployments; as a result I feel that the key issue at stake in the industrialisation arena is actually that of democratising access to the means of capability production by giving people the tools to create new value rapidly and reliably.  Improving the reliability and repeatability of capability realisation across the board is therefore more critical than a focus on any particular industry. (This may change in future with demand, however, and one potential area of interest is industry specific composition factories rather than industry specific application generation factories.)

Delivering Industrialised Services

So we come at last to a picture that demonstrates how the various components of our approach come together from a high level process perspective.

[Figure: the service factory process]

Across the top we have our service factory. We start on the left hand side with capability modelling, capturing the metadata that describes the capability and what it is meant to do; here we can use a domain specific language that allows us to model capabilities explicitly within the tooling. Our aim is then to use the metadata captured about a capability to realise it as one or more services: information from the metamodel is transformed into an initial version of the service before we use a service domain language to add further detail about contracts, messages and service levels. It is important to note that at this point the service is still abstract – we have not bound it to any particular realisation strategy. Once we have designed the service in the abstract we can then choose an implementation strategy – example classifications could be interaction services for UIs, workflow services for people tasks, process services for service orchestrations, domain services that manage and manipulate data, and integration services that allow adaptation and integration with legacy or external systems.

Once we have chosen a realisation strategy, all of the metadata captured about the service is used to generate a partially populated realisation of the chosen type – we anticipate having a factory for each kind of service that controls the patterns and practices used and provides in-context guidance to the developer.
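Pulling the last two steps together, the sketch below walks the same path under deliberately simplified assumptions: capability metadata is captured, an abstract service is designed against it, a realisation strategy is chosen, and a per-type factory produces a partially populated realisation, all while preserving the capability-to-service link. Every name here is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Capability:
    """Design-time metadata captured by the capability modelling DSL."""
    name: str
    intent: str
    kpis: list

@dataclass
class AbstractService:
    """A service designed in the abstract - no realisation strategy yet."""
    name: str
    operations: list              # the contract: operation names
    service_level: str            # e.g. "gold", "silver"
    realised_capability: str      # keeps the capability-to-service link intact

def classify(service: AbstractService) -> str:
    """Stand-in for the architect's choice of realisation strategy."""
    return "process" if len(service.operations) > 3 else "domain"

FACTORIES = {
    # each factory would generate a partially populated realisation of its type
    "process": lambda s: f"// orchestration stub for {s.name}",
    "domain":  lambda s: f"// entity-service stub for {s.name}",
}

pricing = Capability("Pricing", "Provide accurate quotes", ["quote turnaround"])
svc = AbstractService("Quotation", ["request_quote", "revise_quote"],
                      "gold", pricing.name)
print(FACTORIES[classify(svc)](svc))
```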

Once we have designed our services we now want to be able to design a virtual deployment environment for them based wholly on industrialised infrastructure templates. In this view we can configure and soft test the resources required to run our services before generating provisioning information that can be used to create the virtual environment needed to host the services.
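A minimal sketch of what that provisioning information might look like, assuming a simple dictionary format and illustrative field names:

```python
def provisioning_descriptor(service_name, host_template, instances, service_level):
    """Turn a soft-tested virtual environment design into the provisioning
    information the service platform will consume (fields are illustrative)."""
    return {
        "service": service_name,
        "host_template": host_template,   # must exist in the platform catalogue
        "instances": instances,           # sized during the soft test
        "service_level": service_level,
        "monitoring": {"heartbeat_secs": 30, "alert_channel": "ops-dashboard"},
    }

descriptor = provisioning_descriptor("Quotation", "web-host-small", 2, "gold")
```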

In the service platform the provisioning information can be used to create a number of hosting engines, deploy the services into them, provision the infrastructure to run them, and then set up the necessary monitoring before finally publishing them into a catalogue. The service platform therefore consists of a number of specialised infrastructure hosts supporting runtime execution, along with runtime services that provide – for example – provisioning and eventing support.
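Continuing the sketch, a toy service platform might consume that provisioning information as follows: create the hosting engines, deploy the service, start monitoring and publish the result into a catalogue. All names and behaviours are illustrative rather than a description of any real product.

```python
class ServicePlatform:
    """Toy runtime platform: provision hosts, deploy, monitor, publish."""

    def __init__(self):
        self.catalogue = {}   # published services, keyed by service name

    def deploy(self, descriptor):
        # create the hosting engines described by the provisioning information
        hosts = [f"{descriptor['host_template']}-{i + 1}"
                 for i in range(descriptor["instances"])]
        for host in hosts:
            print(f"deploying {descriptor['service']} to {host}")
        # set up the necessary monitoring
        print(f"monitoring {descriptor['service']} every "
              f"{descriptor['monitoring']['heartbeat_secs']}s across {len(hosts)} hosts")
        # publishing into the catalogue is what makes the service discoverable
        self.catalogue[descriptor["service"]] = {"hosts": hosts,
                                                 "descriptor": descriptor}

platform = ServicePlatform()
platform.deploy({
    "service": "Quotation",
    "host_template": "web-host-small",
    "instances": 2,
    "service_level": "gold",
    "monitoring": {"heartbeat_secs": 30, "alert_channel": "ops-dashboard"},
})
```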

The final component of the platform is what I call a 'service wrap'. This is an implementation of the ITSM disciplines tailored for our environment. In this context you will find the catalogue, service management, reporting and metering capabilities needed to manage the services at runtime (again a subset, to make the point). In this space the service catalogue will bring together service metadata, reports about performance and usage, plus subscription and onboarding processes.  Most importantly, the capabilities originally required and the services used to realise them are linked in the catalogue to support business performance management. This creates a feedback loop from the service wrap which enables capability owners to make decisions about effectiveness and rework their capabilities appropriately.
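To illustrate that feedback loop, the sketch below links catalogue entries back to the capabilities they realise and rolls observed runtime figures up into a simple report a capability owner could act on. The data, figures and thresholds are invented for the example.

```python
from statistics import mean

# Catalogue entries keep the capability-to-service link so runtime facts can be
# rolled back up to the capability owner.
CATALOGUE = {
    "Quotation": {"capability": "Pricing", "sla_ms": 500},
}
RUNTIME_METRICS = {
    "Quotation": [420, 480, 610, 390],   # observed response times in ms
}

def capability_feedback(capability):
    """The feedback loop: how well are a capability's services performing?"""
    report = {}
    for svc, entry in CATALOGUE.items():
        if entry["capability"] == capability:
            observed = mean(RUNTIME_METRICS.get(svc, [0]))
            report[svc] = {
                "sla_ms": entry["sla_ms"],
                "observed_ms": round(observed, 1),
                "within_sla": observed <= entry["sla_ms"],
            }
    return report

print(capability_feedback("Pricing"))
```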

Summary

In this second post of three I have demonstrated how we can use the increasing power of abstraction delivered by service-orientation to drive the industrialisation of capability realisation.  Despite current initiatives broadly targeting the infrastructure space, I have discussed my belief that full industrialisation across the infrastructure, application, service and business domains requires the creation and consistent application of known patterns, processes, infrastructures and skills to increase repeatability and reliability. We might sacrifice some flexibility in technology choice or systems design, but the increasing commoditisation of technology makes this far less important than cost effectiveness and reliability. It's particularly important to realise that when industrialising you need to understand your end to end process and then do the nasty bit – bottom up, in excruciating detail.

So in the third and final post on this subject I'm going to look a little at futures and at how the creation of standardised and commoditised service delivery platforms will affect the industry more broadly: as technology becomes about access rather than ownership, we will see the rise of global service delivery platforms that support capability realisation and execution on behalf of many organisations.
