Archive | Cloud Computing

This is Not Your Daddy’s IT Department

3 Nov

Whilst noodling around the net looking at stuff for a longer post I’m writing, I came across an excellent Peter Evans-Greenwood piece from a few months ago on a related theme – namely the future of the IT department.  I found it so interesting that I decided to forgo my other post for now and jot down some thoughts.

After an interesting discussion about the way in which IT organisations have traditionally been managed and the ways in which outsourcing has evolved, Peter turns to the future shape of IT as a result of the need for businesses to focus more tightly, change more rapidly and deal with globalisation.  He posits that the ideal future shape of provision looks like that below (with the most strategic IT work at the peak):

[Figure: pyramid of future IT provision, with the most strategic work at the peak]

Firstly I agree with the general shape of this graphic – it seems clear to me that much of what goes on in existing enterprises will be ceded to specialised third parties.  My only change would be to substitute ‘replace with software’ with ‘replace with external capability’ as I believe that businesses will outsource more than just software.  Given that this diagram was meant to look at the work of the IT department, however, its scope is understandable.

The second observation is that I believe that the IT “function” will disaggregate and be spread around both the residual business and the new external providers.  I believe that this split will happen based on cultural and economic factors.

Firstly all ‘platform’ technologies will be outsourced to the public cloud to gain economies of scale as the technology matures.  There may be a residual internal IT estate for quite some time but it is essentially something that gets run down rather than invested in for new capability.  It is probable that this legacy estate would go to one kind of outsourcer in the ‘waist’ of the triangle.

Secondly many business capabilities currently performed in house will be outsourced to specialised service providers – this is reflected in the triangle by the ‘replace with software’ bulge (although as I stated I would suggest ‘replace with external capability’ in this post to cover the fact that I’m also talking about business capabilities rather than just SaaS).

Thirdly – business capabilities that remain in house due to their differentiating or strategic nature will each absorb a subset of enterprise architects, managers and developers to enable a leaner process – essentially these people will be embedded with the rest of their business peers to support continual improvement based on aligned outcomes.  The developers producing these services will use cloud platforms to minimise infrastructural concerns and focus on software-based encoding of the specialised IP encapsulated by their business capability.  It is probable that enterprise architects, managers and developers in this context will also be supplemented by external resources from the ‘waist’ as need arises.

Finally a residual ‘portfolio and strategy’ group will sit with the executive and manage the enterprise as a collection of business capabilities sourced internally and externally against defined outcomes.  This is where the CIO and portfolio level EA people will sit and where traditional consulting suppliers would sell their services.

As a result my less elegant (i.e. pig ugly :)) diagram, updated to reflect the disaggregation of the IT department and the different kinds of outsourcing capabilities they require, would look something like:

[Figure: the disaggregated future IT department and the outsourcing capabilities it requires]

In terms of whether the IT ‘department’ continues to exist as an identifiable capability after this disaggregation I suspect not – once the legacy platform has been replaced by a portfolio of public cloud platforms and the ‘IT staff’ merged with other cross-functional peers behind the delivery of outcomes I guess IT becomes part of the ‘fabric’ of the organisation rather than a separate capability.  I don’t believe that this means that IT becomes ‘only’ about procurement and vendor management, however, since those business capabilities that remain in house will still use IT literate staff to design and build new IT driven processes in partnership with their peers.

I did write a number of draft papers about all these issues a few years ago but they all got stuck down the gap between two jobs.  I should probably think about putting them up here one day and then updating them.

Cloud vs Mainframes

19 Oct

David Linthicum highlights some interesting research about mainframes and their continuation in a cloud era.

I think David is right that mainframes may be one of the last internal components to be switched off and that in 5 years most of them will still be around.  I also think, however, that the shift to cloud models may have a better chance of achieving the eventual decommissioning of mainframes than any previous technological advance.  Hear me out for a second.

All previous generations of technology looking to supplant the mainframe have essentially been slightly better ways of doing the same thing.  Whilst we’ve had massive improvements in the cost and productivity of hardware, middleware and development languages, essentially we’ve continued to be stuck with purchase and ownership of costly and complex IT assets.  As a result, whilst most new development has moved to other platforms, the case for shifting away from the mainframe has never seriously held water: redevelopment would generate huge expense and risk yet result in no fundamental business shift.  Essentially you still owned and paid for a load of technology ‘stuff’ and the people to support it even if you successfully navigated the huge organisational and technical challenges required to move ‘that stuff’ to ‘this stuff’.  In addition the costs already sunk into the assets and the technology cost barriers to other people entering a market (due to the capital required for large scale IT ownership) also added to the general inertia.

At its heart cloud is not a shift to a new technology but – for once – genuinely a shift to a new paradigm.  It means capabilities are packaged and ready to be accessed on demand.  You no longer need to make big investments in new hardware, software and skills before you can even get started.  In addition suddenly everyone has access to the best IT, and so your competitors (and new entrants) can immediately start building better capabilities than you without the traditional technology-based barriers to entry.  This suggests four important considerations that might eventually spell the end of the mainframe:

  1. Should an organisation decide to develop its way off the mainframe they can start immediately without the traditional need to incur the huge expense and risk of buying hardware, software, development and systems integration capability before they can even start to redevelop code.  This removes a lot of the cost-based risks and allows a more incremental approach;
  2. Many of the applications implemented on mainframes will increasingly be in competition with external SaaS applications that offer broadly equivalent functionality.  In this context moving away from the mainframe is even less costly and risky (whilst still a serious undertaking) since we do not even need to redevelop the functionality required;
  3. The nature of the work that mainframe applications were set up to support (i.e. internal transaction processing across a tight internal value chain) is changing rapidly as we move towards much more collaborative and social working styles that extend across organisational boundaries.  The changing nature of work is likely to eat away further at the tightly integrated functionality at the heart of most legacy applications and leave fewer core transactional components running on the mainframe; and
  4. Most disruptive of all, as organisations increasingly take advantage of falling collaboration costs to outsource whole business capabilities to specialised partners, so much of the functionality on the mainframe (and other systems) becomes redundant since that work is no longer performed in house.

I think that the four threads outlined here could lead to a serious decline in mainframe usage over the next ten years.

But then again they are like terminators – perhaps they will simply be acquired gradually by managed service providers offering to squeeze the cost of maintenance, morph into something else and survive in a low grade capacity for some time.


Is Your Business Right For The Cloud?

15 Oct

I left a short comment on the ebizq website a couple of days ago in response to the question ‘is the cloud right for my business?’

I thought I’d also post an extended response here as I strongly believe that this is the wrong question.  Basically I see questions like this all the time and they are always framed and answered at the wrong level, generating a lot of heat – as people argue about the merits of public vs private infrastructures etc – but little insight.  Essentially there are a number of technology offerings available which may or may not meet the specific IT requirements of a business at a particular point in time.  Framed in the context of traditional business and IT models the issues raised often focus on the potentially limited benefits of a one to one replacement of internal with external capability in the context of a static business.  It’s usually just presented as a question of whether I provide equivalent IT from somewhere else (usually somewhere dark and scary) or continue to run it in house (warm, cuddly and with tea and biscuits thrown in).  The business is always represented as static and unaffected by the cloud other than in the degree to which its supporting IT is (marginally) better or (significantly) worse.

If the cloud was truly just about taking a traditional IT managed service (with some marginal cost benefit) vs running it in house – as is usually positioned – then I wouldn’t see the point either and would remain in front of the heater in my carpet slippers with everyone else in IT.  Unfortunately for people stuck in this way of thinking  – and the businesses that employ them – the cloud is a much, much bigger deal.

Essentially people are thinking too narrowly in terms of what the cloud represents.  It’s not about having IT infrastructure somewhere else or sourcing ‘commodity applications’ differently.  These may be the low hanging fruit visible to IT folks currently but they are a symptom of the impact of cloud and not the whole story.

The cloud is all about the falling transaction costs of collaboration and the current impact on IT business models is really just a continuation of the disruptions of the broader Internet.  As a result whilst we’re currently seeing this disruption playing out in the IT industry (through the commoditisation of technology and a move towards shared computing of all kinds) it is inevitable that other industry disruptions will follow as the costs of consuming services from world-class partners plummets and the enabling technology becomes cheaper, more configurable, more social and more scalable as a result of the reformation of the IT industry.

Essentially all businesses need to become more adaptive, more connected and more specialised to succeed in the next ten years and the cloud will both force this and support it.  Getting your business to understand and plan for these opportunities – and having a strong cloud strategy to support them – is probably the single most important thing a CIO can do at the moment.  Not building your own ‘private cloud’ with no expertise or prior practice to package or concentrating on trying to stop business colleagues with an inkling of the truth from sourcing cloud services more appropriate to their needs.  Making best use of new IT delivery models to deliver truly competitive and world-class business capabilities for the emerging market is the single biggest strategic issue facing CIOs and the long term health of the businesses they serve.  There is both huge untapped value and terrific waste languishing inside existing business structures and both can be tackled head on with the help of the cloud.  Optimising the limited number of business capabilities that remain in a business’s direct control – as opposed to those increasingly consumed from partners – will be a key part of making reformed organisations fit for the new business ecosystem.

As a result the question isn’t whether the cloud is or will be ‘right’ for your business but rather how ‘right’ your business will be for the cloud. Those organisations that fail to take a broader view and move their business and technical models to be ‘right’ for the cloud will face a tough struggle to survive in a marketplace that has evolved far beyond their capabilities.

Reporting of “Cloud” Failures

12 Oct

I’ve been reading an article from Michael Krigsman today related to Virgin Blue’s “cloud” failure in Australia along with a response from Bob Warfield.  These articles raised the question in passing of whether such offerings can really be called cloud offerings and also brought back the whole issue of ‘private clouds’ and their potentially improper use as a source of FUD and protectionism.

Navitaire essentially seem to have been hosting an instance of their single-tenancy system in what appears to be positioned as a ‘private cloud’.  As other people have pointed out, if this was a true multi-tenant cloud offering then everyone would have been affected and not just a single customer.  Presumably then – as a private cloud offering – this is more secure, more reliable, has service levels you can bet the business on and won’t go down.  Although looking at these reports it seems like it does, sometimes.

Now I have no doubt that Navitaire are a competent, professional and committed organisation who are proud of the service they offer.  As a result I’m not really holding them up particularly as an example of bad operational practice but rather to highlight widespread current practices of repositioning ‘legacy’ offerings as ‘private cloud’ and the way in which this affects customers and the reporting of failures.

Many providers whose software or platform is not multi-tenant are aggressively positioning their offering as ‘private cloud’ both as an attempt to maintain revenues for their legacy systems and a slightly cynical way to press on companies’ worries about sharing.  Such providers are usually traditional software or managed service providers who have no multi-tenant expertise or assets; as a result they try to brand things cloud whilst really just delivering old software in an old hosted model.  Whilst there is still potentially a viable market in this space – i.e. moving single-tenant legacy applications from on-premise to off-premise as a way of reducing the costs of what you already have and increasing focus on core business – such offerings are really just managed services and not cloud offerings.  The ‘private’ positioning is a sweet spot for these people, however, as it simultaneously allows them to avoid the significant investment required to recreate their offerings as true cloud services, prolongs their existing business models and plays on customers’ uncertainty about security and other issues.  Whilst I understand the need to protect revenue at companies involved in such ‘cloud washing’ – and thus would stop short of calling these practices cynical – it illustrates that customers do need to be aware of the underlying architecture of offerings (as Phil Wainwright correctly argued).  In reality most current ‘private cloud’ offerings are not going to deliver the levels of reliability, configurability and scale that customers associate with the promise of the cloud.  And that’s before we even get to the more business transformational issues of connectivity and specialisation.

Looking at these kinds of offerings we can see why single-tenant software and private infrastructure provided separately for each customer (or indeed internally) is more likely to suffer a large scale failure of the kind experienced by Virgin Blue.  Essentially developing truly resilient and failure optimised solutions for the cloud needs to address every level of the offering stack and realistically requires a complete re-write of software, deep integration with the underlying infrastructure and expert operations who understand the whole service intimately.  This is obviously cost prohibitive without the ability to share a solution across multiple customers (remember that cloud != infrastructure and that you must design an integrated infrastructure, software and operations platform that inherently understands the structure of systems and deals with failures across all levels in an intelligent way).  Furthermore even if cost was not a consideration, without re-development the individual parts that make up such ‘private’ solutions (i.e. infrastructure, software and operations) were not optimised from the beginning to operate seamlessly together in a cloud environment and can be difficult to keep aligned and manage as a whole.  As a result it’s really just putting lipstick on a pig and making the best of an architecture that combines components that were never meant to be consumed in this way.
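
To make the multi-tenancy point a little more concrete, here is a minimal sketch – in Python, with entirely hypothetical names and a toy in-memory database – of what tenant-awareness means at just the software layer of the stack.  The platform scopes every operation to a tenant, which is what lets one shared, deeply integrated system serve many isolated customers:

    import sqlite3
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TenantContext:
        tenant_id: str  # every request carries the tenant it acts for

    def fetch_open_orders(conn, ctx):
        # The tenant filter is applied centrally by the platform rather than
        # left to each application query, so no code path can read another
        # tenant's data even though all tenants share one system.
        return conn.execute(
            "SELECT id, status FROM orders WHERE tenant_id = ? AND status = 'open'",
            (ctx.tenant_id,),
        ).fetchall()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, tenant_id TEXT, status TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                     [(1, "tenant-a", "open"), (2, "tenant-b", "open")])
    print(fetch_open_orders(conn, TenantContext("tenant-a")))  # tenant-a's rows only

Real cloud platforms repeat this discipline at the infrastructure and operations layers as well – which is precisely the integrated engineering that a repainted single-tenant system lacks.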

However much positioning companies try to do, it’s plain that you can’t get away from the fact that ultimately multi-tenancy at every level of a completely integrated technology stack will be a pre-requisite for operating reliable, scalable, configurable and cost effective cloud solutions.  As a result – and in defiance of the claims – the lack of multi-tenant architectures at the heart of most offerings currently positioned as ‘private cloud’ (both hardware and software related, internal and external) probably makes them less secure, less reliable, less cost effective and less configurable (i.e. able to meet a business need) than their ‘public’ (i.e. new) counterparts.

In defiance of the current mass of positioning and marketing to the contrary, then, it could be suggested that companies like Virgin Blue would be less likely to suffer catastrophic failures in future if they seek out real, multi-tenant cloud services that share resources and thus have far greater resilience than those that have to accommodate the cost profiles of serving individual tenants using repainted legacy technologies.  This whole episode thus appears to be a failure of the notion that you can rebrand managed services as ‘private cloud’ rather than a failure of an actual cloud service.

Most ironically of all, the headlines incorrectly proclaiming such episodes as failures of cloud systems will fuel fear within many organisations and make them even more likely to fall victim to the FUD from disingenuous vendors and IT departments around ‘private cloud’.  In reality failures such as the case discussed may just prove that ‘private cloud’ offerings create exposure to far greater risk than adopting real cloud services due to the incompatibility of architecting for high scale and failure tolerance across a complete stack at the same time as architecting for the cost constraints of a single tenant.

Private Clouds “Surge” for Wrong Reasons?

14 Jul

I read a post by David Linthicum today on an apparent surge in demand for Private Clouds.  This was in turn spurred by thoughts from Steve Rosenbush on increasing demand for Private Cloud infrastructures.

To me this whole debate is slightly tragic as I believe that most people are framing the wrong issues when considering the public vs private cloud debate (and frankly for me it is a ridiculous debate as in my mind ‘the cloud’ can only exist ‘out there, somewhere’ and thus be shared; to me a ‘private’ cloud can only be a logically separate area of a shared infrastructure and not an organisation specific infrastructure which merely shares some of the technologies and approaches – which, frankly, is business as usual and not a cloud.  For that reason when I talk about public clouds I also include such logically private clouds running on shared infrastructures).  As David points out there are a whole host of reasons that people push back against the use of cloud infrastructures, mostly to do with retaining control in one way or another.  In essence there are a list of IT issues that people raise as absolute blockers that require private infrastructure to solve – particularly control, service levels and security – whilst they ignore the business benefits of specialisation, flexibility and choice.  Often “solving” the IT issues and propagating a model of ownership and mediocrity in IT delivery when it’s not really necessary merely denies the business the opportunity to solve their issues and transformationally improve their operations (and surely optimising the business is more important than undermining it in order to optimise the IT, right?).  That’s why for me the discussion should be about the business opportunities presented by the cloud and not simply a childish public vs private debate at the – pretty worthless – technology level.

Let’s have a look at a couple of issues:

  1. The degree of truth in the control, service and security concerns most often cited about public cloud adoption and whether they represent serious blockers to progress;
  2. Whether public and private clouds are logically equivalent or completely different.

IT Issues and the Major Fallacies

Control

Everyone wants to be in control.  I do.  I want to feel as if I’m moving towards my goals, doing a good job – on top of things.  In order to be able to be on top of things, however, there are certain things I need to take for granted.  I don’t grow my own food, I don’t run my own bank, I don’t make my own clothes.  In order for me to concentrate on my purpose in life and deliver the higher level services that I provide to my customers there are a whole bunch of things that I just need to be available to me at a cost that fits into my parameters.  And to avoid being overly facetious I’ll also extend this into the IT services that I use to do my job – I don’t build my own blogging software or create my own email application but rather consume all of these as services over the web from people like WordPress.com and Google. 

By not taking personal responsibility for the design, manufacture and delivery of these items, however (i.e. by not maintaining ‘control’ of how they are delivered to me), I gain the more useful ability to be in control of which services I consume to give me the greatest chance of delivering the things that are important to me (mostly, lol).  In essence I would have little chance of sitting here writing about cloud computing if I also had to cater to all my basic needs (from both a personal as well as IT perspective).  I don’t want to dive off into economics but simplistically I’m taking advantage of the transformational improvements that come from division of labour and specialisation – by relying on products and services from other people who can produce them better and at lower cost I can concentrate on the things that add value for me.

Now let’s come back to the issue of private infrastructure.  Let’s be harsh.  Businesses simply need IT that performs some useful service.  In an ideal world they would simply pay a small amount for the applications they need, as they need them.  For 80% of IT there is absolutely no purpose in owning it – it provides no differentiation and is merely an infrastructural capability that is required to get on with value-adding work (like my blog software).  In a totally optimised world businesses wouldn’t even use software for many of their activities but rather consume business services offered by partners that make IT irrelevant. 

So far then we can argue that for 80% of IT we don’t actually need to own it (i.e. we don’t need to physically control how it is delivered) as long as we have access to it.  For this category we could easily consume software as a service from the “public” cloud and doing so gives us far greater choice, flexibility and agility.

In order to deliver some of the applications and services required for its own specialised and differentiated capabilities, however, a business still needs to create some bespoke software.  To do this they need a development platform.  We can therefore argue that the lowest level of computing required by a business in future is a Platform as a Service (PaaS) capability; businesses never need to be aware of the underlying hardware as it has – quite literally – no value.  Even in terms of the required PaaS capability the business doesn’t have any interest in the way in which it supports software development as long as it enables them to deliver the required solutions quickly, cheaply and with the right quality.  As a result the internals of the PaaS (in terms of development tooling, middleware and process support) have no intrinsic value to a business beyond the quality of outcome delivered by the whole.  In this context we also do not care about control since as long as we get the outcomes we require (i.e. rapid, cost effective and reliable applications delivery and operation) we do not care about the internals of the platform (i.e. we don’t need to have any control over how it is internally designed, the technology choices to realise the design or how it is operated).  More broadly a business can leverage the economies of scale provided by PaaS providers – plus interoperability standards – to use multiple platforms for different purposes, increasing the ‘fitness’ of their overall IT landscape without the traditional penalties of heterogeneity (since traditionally they would be ‘bound’ to one platform by the inability of their internal IT department to cost-effectively support more than one technology).
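
As a purely illustrative sketch of that last point – none of the provider names or APIs below are real – the contract a business actually has with a PaaS can be reduced to ‘give me an outcome’, which is exactly what makes platforms swappable per capability:

    from abc import ABC, abstractmethod

    class Platform(ABC):
        """What the business depends on: an outcome, not internals."""
        @abstractmethod
        def deploy(self, app_name: str, artifact: bytes) -> str:
            """Run the application and return the URL it is served from."""

    class AlphaPaaS(Platform):  # hypothetical provider A
        def deploy(self, app_name, artifact):
            return f"https://{app_name}.alpha-paas.example.com"

    class BetaPaaS(Platform):  # hypothetical provider B
        def deploy(self, app_name, artifact):
            return f"https://{app_name}.beta-paas.example.net"

    def release(platform: Platform, app_name: str, artifact: bytes) -> str:
        # Each business capability can choose the platform that best fits it
        # without changing the release process at all.
        return platform.deploy(app_name, artifact)

    print(release(AlphaPaaS(), "claims-engine", b"<build artifact>"))

The internals of either platform are invisible here, which is the point: as long as interoperability standards hold at the seams, heterogeneity stops being a penalty.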

Thinking more deeply about control in the context of this discussion we can see that for the majority of IT required by an organisation concentrating on access gives greater control than ownership due to increased choice, flexibility and agility (and the ability to leverage economies of scale through sharing).  In this sense the appropriate meaning of ‘control’ is that businesses have flexibility in choosing the IT services that best optimise their individual business capabilities and not that the IT department has ‘control’ of the way in which these services are built and delivered.  I don’t need to control how my clothes manufacturer puts my t-shirt together but I do want to control which t-shirts I wear.  Control in the new economy is empowerment of businesses to choose the most appropriate services and not of the IT department to play with technology and specify how they should be built.  Allowing IT departments to maintain control – and meddle in the way in which services are delivered – actually destroys value by creating a burden of ownership for absolutely zero value to the business.  As a result giving ‘control’ to the IT department results in the destruction of an equal and opposite amount of ‘control’ in the business and is something to be feared rather than embraced.

So the need to maintain control – in the way in which many IT groups are positioning it – is the first major and dangerous fallacy. 

Service levels

It is currently pretty difficult to get a guaranteed service level with cloud service providers.  On the other hand, most providers’ measured availability is consistently above 99% and so the actual service levels are pretty good.  The lack of a piece of paper with this actual, experienced service level written down as a guarantee, however, is currently perceived as a major blocker to adoption.  Essentially IT departments use it as a way of demonstrating the superiority of their services (“look, our service level says 5 nines – guaranteed!”) whilst the level of stock they put in these service levels creates FUD in the minds of business owners who want to avoid major risks.
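
It’s worth pausing on what those numbers actually mean in practice.  The arithmetic below (plain Python, assuming nothing beyond the availability figures themselves) converts availability targets into a yearly downtime budget – the gap between ‘five nines’ on paper and 99% in practice is the difference between about five minutes and about three and a half days:

    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for availability in (0.99, 0.999, 0.9999, 0.99999):  # two to five nines
        downtime = MINUTES_PER_YEAR * (1 - availability)
        print(f"{availability:.3%} availability allows "
              f"{downtime:,.1f} minutes of downtime per year")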

So let’s lay this out.  People compare the current lack of service level guarantees from cloud service providers with the ability to agree ‘cast-iron’ service levels with internal IT departments.  Every project I’ve ever been involved in has had a set of service levels but very few ever get delivered in practice.  Sometimes they end up being twisted into worthless measures for simplicity of delivery – like whether a machine is running irrespective of whether the business service it supports is available – and sometimes they are just unachievable given the level of investment and resources available to internal IT departments (whose function, after all, is merely that of a barely-tolerated but traditionally necessary drain on the core purpose of the business). 

So to find out whether I’m right or not and whether service level guarantees have any meaning I will wait until every IT department in the world puts their actual achieved service levels up on the web like – for instance – Salesforce.  I’m keen to compare practice rather than promises.  Irrespective of guarantees my suspicion is that most organisations’ actual service levels are woeful in comparison to the actual service levels delivered by cloud providers but I’m willing to be convinced.  Despite the illusion of SLA guarantees and enforcement the majority of internal IT departments (and managed service providers who take over all of those legacy systems for that matter) get nowhere near the actual service levels of cloud providers irrespective of what internal documents might say.  It is a false comfort.  Businesses therefore need to wise up, consider real data and actual risks – in conjunction with the transformational business benefits that can be gained by offloading capabilities and specialising – rather than let such meaningless nonsense take them down the old path to ownership; in doing so they are potentially sacrificing a move to cloud services and therefore their best chance of transforming their relationship with their IT and optimising their business.  This is essentially the ‘promise’ of buying into updated private infrastructures (aka ‘private cloud’).

A lot of it comes down to specialisation again and the incentives for delivering high service levels.  Think about it – a cloud provider (literally) lives and dies by whether the services they offer are up; without them they make no money, their stock falls and customers move to other providers.  That’s some incentive to maintain excellence.  Internally – well, what you gonna do?  You own the systems and all of the people so are you really going to penalise yourself?  Realistically you just grit your teeth and live with the mediocrity even though it is driving rampant sub-optimisation of your business.  Traditionally there has been no other option and IT has been a long process of trying to have less bad capability than your competitors, to be able to stagger forward slightly faster or spend a few pence less.  Even outsourcing your IT doesn’t address this since whilst you have the fleeting pleasure of kicking someone else at the end of the day it’s still your IT and you’ve got nowhere to go from there.  Cloud services provide you with another option, however, one which takes advantage of the fact that other people are specialising on providing the services and that they will live and die by their quality.  Whilst we might not get service levels – at this point in their evolution at least – we do get transparency of historical performance and actual excellence; stepping back it is critical to realise that deeds are more important than words, particularly in the new reputation-driven economy. 

So the perceived need for service levels as a justification for private infrastructures is the second major and dangerous fallacy.  Businesses may well get better service levels from cloud providers than they would internally and any suggestion to the contrary will need to be backed up by thorough historical analysis of the actual service levels experienced for the equivalent capability.  Simply stating that you get a guarantee is no longer acceptable. 

Security

It’s worth stating from the beginning that there is nothing inherently less secure about cloud infrastructures.  Let’s just get that out there to begin with.  Also in getting infrastructure as a service out of the way – given that we’re taking the position in this post that PaaS is the first level of actual value to a business – we can say that it’s just infrastructure; your data and applications will be no more or less secure than your own procedures make them, but the data centre is likely to be at least as secure as your own and probably much more so due to the level of capability required by a true service provider.

So starting from ground zero with things that actually deliver something (i.e. PaaS and SaaS), a cloud provider can build a service that uses any of the technologies that you use in your organisation to secure your applications and data, only they’ll have more use cases and hence will consider more threats than you will.  And that’s just the start.  From that point the cloud provider will also have to consider how they manage different tenants to ensure that their data remains secure and they will also have to protect customers’ data from their own (i.e. the cloud service provider’s) employees.  This is a level of security that is rarely considered by internal IT departments and results in more – and more deeply considered – data separation and encryption than would be possible within a single company.
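
As a small, illustrative sketch of what that extra layer of separation can look like in software – the names and parameters here are examples only, and real providers use managed key hierarchies and hardware security modules rather than a constant in code – each tenant’s data can be encrypted under its own derived key:

    import hashlib

    MASTER_SECRET = b"held in the provider's key management service"

    def tenant_key(tenant_id: str) -> bytes:
        # Deriving a distinct key per tenant means compromising one tenant's
        # key reveals nothing about any other tenant's data -- a protection
        # that also applies against the provider's own staff.
        return hashlib.pbkdf2_hmac(
            "sha256", MASTER_SECRET, tenant_id.encode(), 100_000
        )

    assert tenant_key("tenant-a") != tenant_key("tenant-b")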

Looking at the cloud service from the outside we can see that providers will be more obvious targets for security attacks than individual enterprises but counter-intuitively this will make them more secure.  They will need to be secured against a broader range of attacks, they will learn more rapidly and the capabilities they learn through this process could never be created within an internal IT organisation.  Frankly, however, the need to make security of IT a core competency is one of the things that will push us towards consolidation of computing platforms into large providers – it is a complex subject that will be more safely handled by specialised platforms rather than each cloud service provider or enterprise individually. 

All of these changes are part of the more general shift to new models of computing; to date the paradigm for security has largely been that we hide our applications and data from each other within firewalled islands.  Increasing collaboration across organisations and the cost, flexibility and scale benefits of sharing mean that we need to find a way of making our services available outside our organisational boundaries, however.  Again, in doing this we need to consider who is best placed to ensure the secure operation of applications that are supporting multiple clients – is it specialised cloud providers who have created a security model specifically to cope with secure open access and multi-tenancy for many customer organisations, or is it a group of keen “amateurs” with the limited experience that comes from the small number of use cases they have discovered within the bounds of a single organisation?  Furthermore as more and more companies migrate onto cloud services – and such services become ever more secure – so the isolated islands will become prime targets for security attacks, since the likelihood that they can maintain top levels of security cut off from the rest of the industry – and with far less investment in security than can be made by specialised platform providers – becomes ever less.  Slowly isolationism becomes a threat rather than a protection.  We really are stronger together.

A final key issue that falls under the ‘security’ tag is that of data location (basically the perceived requirement to keep data in the country of the customer’s operating business).  Often this starts out as the major, major barrier to adoption but slowly you often discover that people are willing to trade off where their data are stored when the costs of implementing such location policies can be huge for little value.  Again, in an increasingly global world businesses need to think more openly about the implications of storing data outside their country – for instance a UK company (perhaps even government) may have no practical issues in storing most data within the EU.  Again, however, in many cases businesses apply old rules or ways of thinking rather than challenging themselves in order to gain the benefits involved.  This is often tied into political processes – particularly between the business and IT – and leads to organisations not sufficiently examining the real legal issues and possible solutions in a truly open way.  This can often become an excuse to build a private infrastructure, fulfilling the IT department’s desire to maintain control over the assets but in doing so loading unnecessary costs and inflexibility on the business itself – ironically as a direct result of the business’s unwillingness to challenge its own thinking.

Does this mean that I believe that people should immediately begin throwing applications into the cloud without due care and attention?  Of course not.  Any potential provider of applications or platforms will need to demonstrate appropriate certifications and undergo some kind of due diligence.  Where data resides is a real issue that needs to be considered but increasingly this is regional rather than country specific.   Overall, however, the reality is that credible providers will likely have better, more up to date and broader security measures than those in place within a single organisation. 

So finally – at least for me – weak cloud security is the third major and dangerous fallacy.

Comparing Public and Private

Private and Public are Not Equivalent

The real discussion here needs to be less about public vs private clouds – as if they are equivalent but just delivered differently – and more about how businesses can leverage the seismic change in model occurring in IT delivery and economics.  Concentrating on the small minded issues of whether technology should be deployed internally or externally as a result of often inconsequential concerns – as we have discussed – belittles the business opportunities presented by a shift to the cloud by dragging the discussion out of the business realm and back into the sphere of techno-babble.

The reality is that public and private clouds and services are not remotely equivalent; private clouds (i.e. internal infrastructure) are a vote to retain the current expensive, inflexible and one-size-fits-all model of IT that forces a business to sub-optimise a large proportion of its capabilities to make their IT costs even slightly tolerable.  It is a vote to restrict choice, reduce flexibility, suffer uncompetitive service levels and to continue to be distracted – and poorly served – by activities that have absolutely no differentiating value to the business. 

Public clouds and services on the other hand are about letting go of non-differentiating services and embracing specialisation in order to focus limited attention and money on the key mission of the business.  The key point in this whole debate is therefore specialisation; organisations need to treat IT as an enabler and not an asset, they need to  concentrate on delivering their services and not on how their clothes get made. 

Summary

If there is currently a ‘surge’ in interest in private clouds it is deeply confusing (and disturbing) to me given that the basis for focusing attention on private infrastructures appears to be deeply flawed thinking around control, service and security.  As we have discussed not only are cloud services the best opportunity that businesses have ever had to improve these factors to their own gain but a misplaced desire to retain the IT models of today also undermines the huge business optimisations available through specialisation and condemns businesses to limited choice, high costs and poor service levels.  The very concerns that are expressed as reasons not to move to cloud models – due to a concentration on FUD around a small number of technical issues – are actually the things that businesses have most to gain from should they be bold and start a managed transition to new models.  Cloud models will give them control over their IT by allowing them to choose from different providers to optimise different areas of their business without sacrificing scale and management benefits; service levels of cloud providers – whilst not currently guaranteed – are often better than they’ve ever experienced and entrusting security to focused third parties is probably smarter than leaving it as one of many diverse concerns for stretched IT departments. 

Fundamentally, though, there is no equivalence between the concept of public (including logically private but shared) and truly private clouds; public services enable specialisation, focus and all of the benefits we’ve outlined whereas private clouds are just a vote to continue with the old way.  Yes virtualisation might reduce some costs, yes consolidation might help but at the end of the day the choice is not the simple hosting decision it’s often made out to be but one of business strategy and outlook.  It boils down to a choice between being specialised, outward looking, networked and able to accelerate capability building by taking advantage of other people’s scale and expertise or rejecting these transformational benefits and living within the scale and capability constraints of your existing business – even as other companies transform and build new and powerful value networks without you.

Differentiation vs Integration (Addenda)

22 Jun

After completing my post on different kinds of differentiation the other day I still had a number of points left over that didn’t really fit neatly into the flow of the arguments I presented.  I still think some of them are interesting, though, and so thought I’d add them as addenda to my previous post!

Addendum 1

The first point was a general feeling that ‘standardisation’ is a good thing from an IT perspective.  This stemmed from one of Richard’s explicit statements that:

“Many people in the IT world take for granted that standardization (reduction in variety) is a good thing”

Interestingly it is true to say that from an IT perspective standardisation is generally a good thing (since IT is an infrastructural capability).  Such standardisation, however, must allow for key variances that allow people to configure and consume the standardised applications and systems in a way that enables them to reach their goals (so they must support configuration for each ‘tenant’).  Echoing my other post on evolution – in order to consider this at both an organisational and a market level – we can see that a shift to cloud computing (and ultimately consumption of specialised business capabilities across organisational boundaries) opens up a wider vista than is traditionally available within a single company.
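
A quick sketch of what ‘standardised but configurable per tenant’ can look like in practice (the names are purely illustrative): one standard codebase serves every tenant, while each tenant sees its own effective settings:

    DEFAULTS = {"currency": "USD", "approval_levels": 1, "locale": "en_US"}

    TENANT_OVERRIDES = {
        "tenant-a": {"currency": "GBP", "approval_levels": 2},
        "tenant-b": {"locale": "de_DE"},
    }

    def config_for(tenant_id: str) -> dict:
        # One shared, standardised application; per-tenant variance is
        # confined to configuration rather than forked code.
        return {**DEFAULTS, **TENANT_OVERRIDES.get(tenant_id, {})}

    print(config_for("tenant-a"))
    # {'currency': 'GBP', 'approval_levels': 2, 'locale': 'en_US'}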

In the traditional way of thinking about IT, people within a single organisation are looking to increase standardisation as a valid way of reducing costs and increasing reliability within the bounds of a single organisation’s IT estate.  The issue with this is that such IT standardisation often forces inappropriate standardisation – both in terms of technology support and change processes – on capabilities within the business (something I talked about a while ago).  Essentially the need to standardise for operational IT efficiency tries to override the often genuine cost and capability differences required by each business area.  In addition on-premise solutions have rarely been created with simple mass-configuration in mind, requiring expensive IT customisation and integration to create a single ‘standard’ solution that cannot be varied by tenant (tenant in this case being a business capability with different needs).  Such tensions result in a constant war between IT, with the single ‘standard’ solution it can afford to support, and individual business capabilities, with their differing cost and capability requirements (which often results in departmental or ‘shadow’ IT implemented by end users outside the control of the IT department).

The interesting point about this, however, is that cloud computing allows organisations to make use of many platforms and applications without a) the upfront expenditure usually required for hardware, training and operational setup and b) the ongoing operational management costs.  In this instance the valid reasons that IT departments try to drive towards standardisation – i.e. reducing the number of heterogeneous technologies they must deploy, manage and upgrade – largely disappear.  If we also accept that IT is essentially infrastructural in nature – and hence provides no differentiation – then we can easily rely on external technology platforms to provide standardisation and economies of scale on our behalf without having to mandate a single platform or application to gain these efficiencies.  At this point we can turn the traditional model on its head – we can choose different platforms and applications for each capability dependent on its needs without sacrificing any of the benefits of standardisation (subject to the applications and platforms supporting interoperability standards to facilitate integration).  Significant and transformational improvements enabled by capability-specific optimisation of the business is therefore (almost tragically) dependent on freeing ourselves from the drag of internal IT.

Addendum 2

Richard also highlighted the fact that there is still a strong belief in many quarters that ‘business architecture’ should be an IT discipline (largely I guess from people who can’t read?).  I believe that ‘business’ architecture is fundamentally about propositions, structure and culture before anything else and that IT is simply one element of a lower level set of implementation decisions.  Whilst IT people may have a leg up on the ‘structured thinking’ necessary to think about a business’s architecture, any suggestion that business owners are too stupid to design their own organisations – especially using abstraction methods like capabilities – seems outrageous to me.  IT people have an increasingly strong role to play in ‘fusing’ with business colleagues to more rapidly implement differentiating capabilities but they don’t own the business.  Additionally, continued IT ownership of business architecture and EA causes two additional issues: 1) IT architecture techniques are still a long way in advance of business architecture techniques and this means it is faster, easier and more natural for IT people to concentrate on this; and 2) the lack of business people working in the field – since they don’t know IT – limits the rate at which the harder questions about propositions and organisational fitness are being asked and tackled.  As a result – at least from my perspective – ‘business architecture’ owned by IT delivers a potential double whammy against progress; on the one hand it leads to a profusion of IT-centric EA efforts targeted at low interest areas like IT efficiency or cost reduction whilst on the other it allows people to avoid studying, codifying and tackling the real business architecture issues that could be major strategic levers.

Addendum 3

As a final quick aside the model that I discussed for viewing an organisation as a set of business capabilities gives rise to the need for different ‘kinds’ of business architects with many levels of responsibility.  Essentially you can be a business architect helping the overall enterprise to understand what capabilities are needed to realise value streams (so having an enterprise and market view of ‘what’ is required) through to a business architect responsible for how a given capability is actually implemented in terms of process, people and technology (so having an implementation view of ‘how’ to realise a specific ‘what’).  In this latter case – for capabilities that are infrastructural in nature and thus require high standardisation – it may still be appropriate to use detailed, scientific management approaches.

iPad and Cloud Platforms

24 May

As part of my ongoing odyssey within the blogosphere I spent some time catching up on Nick Carr’s blog – someone whose ideas always fascinate me. The most active article over the last few weeks appeared to be one entitled "The iPad Luddites" about the wide range of emotions that this seemingly simple device has evoked.  I wanted to comment on this from three perspectives really: firstly about the notion of computing devices like the iPad in general, secondly about the issues of ‘generativity’ that have been sparked by the release of such a device and then lastly about what the emotions around the iPad might tell us about cloud platforms (or service delivery platforms) in general.

The iPad and The (New) Third Way

At a macro level the amount of discomfort felt by technologists seems to depend directly on whether you view the iPad as a new kind of window into the web (i.e. primarily a highly specialised, beautifully packaged and superbly usable content delivery mechanism for the mass market) or as a replacement for general purpose computing devices.  To clear that point initially – as the iPad is not the point of this post per se – I am in the former camp and therefore see such devices as a ‘democratisation’ of computing in much the same way as other mobile computing devices.  I think it is interesting and good that people have more choice in how they access content and applications and I feel that such devices – in a general sense – create opportunities for both new consumers (so people who would previously not have been able to confidently access computing services or the web) and producers (so people who can now create content and applications that deliver new services to this newly empowered group).  In this sense such devices can lower the barrier of entry and therefore reach wider groups and this in turn can enable long tail business models to flourish by enabling broad access to hitherto inaccessible niches.  Even for technology literate people like me I can see a perfectly reasonable desire to not be ‘on duty’ 100% of the time and to feel the relief of falling back into a consumer role for a while.  I don’t see tablets as a replacement for general purpose computing devices, therefore, but rather as an alternative way of accessing services and content.

As a concrete – and personal – illustration of this I have an 85-year-old grandfather who – about 10 years ago – decided that he should become ‘computer literate’.  He now has three PCs in his house – one connected to an amateur radio set, another that he uses for ‘tinkering and learning’ (so mild "generative" activities) and one that he ring-fences from his mildly ham-fisted experimentation (he once cut a screen off a laptop and connected the base to a monitor in order to use it purely as a desktop).  This third PC is left alone so that he always has simple and secure access to the web and his email.  I can see that this third PC could easily be replaced by a device like the iPad and that this would be both a more pleasurable experience for him and potentially open up more opportunities on the web by simplifying the experience and making it more easily consumed.  In no way would this reduce his desire to dismember and otherwise experiment on more general purpose computing devices, however.  So far so what.

The ‘what’ at this point is the fact that Apple has created not just a new device – which in and of itself is pretty standards compliant from an interoperability perspective – but that they have also created a platform to service it.  The nature of this platform – and its tight packaging with Apple products – leads to “generativity” concerns due to a perceived lack of ‘openness’.

The iPad and "Generativity"

One of the key worries that people have about the iPad relates to  its closed nature – both from a hardware and software perspective – and Apple’s desire to control access to the consumer base in order to ‘manage’ the experience.  There is a perception that controlled platforms like this can have a  negative impact on both "generativity" – through heavy-handed governance – and consumer freedom. 

In general I support open hardware, platforms and business models and each of these perspectives has different ‘contexts’ for "generativity" that need to be separated if we are to get to the bottom of the issue.

Hardware

Whilst many people mourn a past where machines could be opened, modified and enhanced in order to tinker and see what happens (i.e. one form of "generativity") I believe that the increasing internal complexity of devices coupled with their decreasing external complexity – i.e. ensuring simple operation for the majority – is an inevitable marker of the increasing maturity of a product category for three synergistic reasons:

  1. As devices increase in complexity so it becomes both undesirable – and indeed practically impossible – to allow even competent people to open up a device and work on the internals without compromising it;
  2. This same cohesion is also a key enabler for mass market adoption given that the majority of people just want devices that work; as a result those companies who can attractively package technology for consumption are rewarded financially by the broadest market.  This is also understandable from the perspective that once things work well enough and are mature enough messing around with them is more trouble than it’s worth; and
  3. As a device becomes increasingly consumerised in pursuit of this mass market so the need to produce them at scale also kicks in; this in turn promotes tight integration of components for manufacturing optimisation and also reduces the cost of the device – as a result opening up a device and replacing individual components becomes increasingly difficult and decreasingly cost effective at the same time (rapidly to a point at which it starts to become cheaper to replace the whole device from a consumer electronics perspective).

Whilst it is therefore sad that a tradition of hobbyist hardware tinkering has become obsolete, this aspect of ‘generativity’ is probably now the least valuable in any case, as hardware has long since become a way of delivering software rather than an end in itself (i.e. most hardware works sufficiently well and is so complex that the value of “generative” activities has diminished almost to the point of zero; at the same time what can be done with software has increased hugely and hence become the new sweet spot for “generative” activity).  There are emerging ‘open hardware’ models that reflect the work that has been done in the software community but this is still a pretty niche activity; for those who really want to experiment with hardware design the complexity of devices now means that most of their work must be done in software anyway.

What is still a critical aspect of ‘openness’ from a device perspective, however – much like in software – is interoperability (so use of standard ports and connectors to enable the integration of your various gadgets).  Given that most devices these days are Bluetooth, wifi, 3G and USB enabled, however, interoperability to allow different devices to play nice together is now generally a hygiene factor.

“Generativity” and Hardware

Looking more broadly at the question of whether closed hardware decreases “generativity” within society overall we can consider the question from two perspectives.  From a strictly IT-centric view we may feel that the increasing penetration of consumer devices means fewer people who are ‘pc-literate’ and thus able to ‘create’ IT-centric systems (i.e. fewer people tinkering with hardware or general purpose programming languages to create value through IT).  This is to miss the point in my view.  In reality the majority of people who will pick up devices like the iPad would never have been “generative” through traditional IT in any form at all – if we give them access to new tools and software that are easily consumable, however, they will be able to use these tools to become vastly more “generative” within their own spheres of expertise.  As a result we would have to say that putting better packaged, more accessible and user friendly devices into the hands of the largest number of people increases the overall ability of society to be “generative” in the broadest sense.

Platforms

Given the maturity of the IT industry as a whole, platforms – from mainframes to Windows to cloud – are still a major area of contention and competition despite their essentially infrastructural nature.  In much the same way as hardware went through a golden age of tinkering and “generative” activities, so the last decade has been the golden age of platforms, with everyone wanting to develop a different way of implementing software and controlling hardware; as a result I guess the next level of discomfort people feel with the iPad is the closed nature of the ecosystem in which it exists.

In considering the “openness” of platforms, however, I would look at two key points: how innovation often leads to proprietary platforms in immature markets and whether we should even care about proprietary implementation platforms in a web age.

Innovation and proprietary platforms

From my perspective the rise of proprietary platforms is all part of the natural cycle of any product category that takes a sudden leap into consumerisation and mass market adoption.  For a long time people tinker with various apparently disconnected concepts, but then at some point someone decides to take a whole bunch of these innovations and collapse them together into highly specific offerings that have only the 20% of functionality that is really required to enable 80% of the value (i.e. they create a new and much simplified way of doing things within the context of the problem domain – in this case touch-enabled web applications).  In order to do this they often have to create proprietary data formats, protocols and APIs – or some subset of these to fill gaps in existing standards – and then orchestrate them all together in a consumer centric way; such combinations become a new platform.  At that point they have recognised the needs of their consumers, delivered something ‘new’ that enables them to do things they haven’t been able to do in the past and thereby created a new market via a proprietary platform.  All sorts of other people then decide that they can develop a better platform and join in.  Apple (and others) are still at this stage since their platform is the proprietary integration of the formats, APIs and standards required to build and deliver applications through their ecosystem.  Successful first movers in these spaces often appear ‘magical’ as, for the consumer, the experience of controlling all that power through simple tools is nothing short of revelatory.

Looking at this process, however, there is nothing specific that impacts “generativity” in the broadest sense; whilst the reach of things that result from “generative” activities might be smaller because applications and services are tied to a specific platform, proprietary platforms do not stifle “generativity” per se.

What is openness in a web age?

Taking this further, in the new business ecosystem do we really care that there are proprietary platforms?  That may sound like a strange question, but if we step away from technology and concentrate on outcomes for a moment then we can think about the key points at which openness is really important rather than just using it as a mantra.

At the end of the day we’re all just trying to get something done and IT is mostly a pain in the ass that gets in the way.  With increasingly sophisticated ways of describing the outcomes we need, however, the key test of any platform becomes how effectively it can support us in realising our intent rather than how it works internally.  Two often-used measures of ‘openness’ are interoperability and portability; whilst they are not mutually exclusive, a preference for interoperability tends to favour implementation diversity and specialisation as a goal whereas a preference for portability tends to favour lowest-common-denominator single implementations and general purpose use cases.

One of the implications of a wholesale shift to web based platforms is that technology is no longer a) monolithic in its delivery and b) an all or nothing costly purchase decision.  Web architectures encourage the creation of lightweight components and integration, whilst cloud platforms allow easy access and exit.  Given that different kinds of components will be best optimised on different kinds of platforms, and with different processes and tooling, we can also start to think about the best way to realise them individually; cloud platforms basically allow us to choose the best implementation vehicle whilst still allowing us to integrate the parts back together.

If a proprietary platform – because of the degree to which it is specialised and optimised to realise a specific task as effectively as possible – allows us to deliver value far more quickly and cost effectively than an ‘open’ platform (i.e. one which is optimised for portability and hence more general purpose) then which would we pragmatically choose?  This is especially true given that the entry and exit costs for delivering on a platform are much lower than in previous models of IT, and so we could realistically redevelop a component rapidly somewhere else in future if necessary.  In any other industry we see people building around the components of partners whose design and manufacture is specialised (and hence whose internals can be considered proprietary) and all we care about is a) how well those components work within our broader use case and b) whether they fit together with the other components we have.

Coming at ‘openness’ from this perspective we see that from a consumer’s viewpoint (whether that is an end consumer or a partner in a value chain) all that truly matters is that the component in question performs as required, can be integrated, and has robust processes in use in its creation and maintenance; as a result ‘interoperability’ and measures of process quality become far more important than the technology used to implement the component (and hence portability).
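
To make this concrete, here is a minimal Python sketch – the provider URLs and the payload shape are entirely hypothetical – of what choosing interoperability over portability looks like from the consumer’s side: the consumer depends only on an agreed contract (HTTP plus an expected JSON shape), never on how a provider implements it internally, so a replacement component can be swapped in without the consumer changing.

    import json
    from urllib.request import urlopen

    # The consumer depends only on a contract (HTTP + a JSON object
    # containing 'price' and 'currency'), never on how the provider is
    # implemented; both URLs are hypothetical, interchangeable providers.
    PROVIDERS = [
        "https://proprietary-platform.example.com/api/quote",
        "https://open-platform.example.com/api/quote",
    ]

    def get_quote(provider_url):
        """Fetch a quote from whichever provider we currently use."""
        with urlopen(provider_url) as response:
            payload = json.load(response)
        return payload["price"], payload["currency"]

    # Swapping providers is a configuration change, not a rewrite,
    # because all that matters is that the contract is honoured.
    price, currency = get_quote(PROVIDERS[0])
    print(price, currency)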

To bring this back to the discussion of the iPad and whether the lack of portability in the underlying platform impacts “generativity” or “open access”, however, we must step back and consider the platform in the light of the macro level definition of “openness” explored above:

  1. Apple enables anyone to develop an application to execute on their platform subject to certain quality criteria; as a result application providers have access to a huge market of consumers via Apple’s marketplace;
  2. Application providers can integrate with external services not provided by Apple or any of its other partners in order to enrich the experience delivered by the application on the end device;
  3. Consumers have access to many different application providers – as well as web access via a browser – and could hence be argued to have complete freedom of choice; and
  4. Consumers can configure many of the applications to integrate data from other web services that they use and hence are not locked into having all of their data held by Apple or its chosen providers.

As a result of these observations it is fair to say that Apple’s platform is fairly open at a macro level – anyone can consume services, provided by anyone, using data and services that are part of other ecosystems.

“Generativity” and Platform

So looking at the question purely from a platform perspective we would have to say that the Apple platform does not impact “generativity”, only the ability of the application implementer to define the platform they wish to use according to their preference (so to perform tinkering and “generative” activities in the platform space).  For the vast majority of people who just want to deliver services quickly and cheaply to make money, or for those who just wish to have a pleasant experience in consuming such services, the platform is only of interest in so far as it helps or hinders them (as long as they can connect their various services together – so interoperability remains a key requirement).  If we relate platforms, therefore – as a necessary but essentially hygiene-related infrastructural capability – to hardware, then we could consider that the discomfort people feel with proprietary platforms is not that they limit “generativity” overall (both Microsoft and Apple platforms have unleashed a huge amount of higher level “generative” activity by removing complexity at the platform level) but that they limit “generativity” within a technical space that technology-literate people have been used to controlling and tinkering within.  In this context one could propose that decreasing choice in how we implement software that actually does useful stuff is merely the consumerisation of software platforms (in this case the consumer being software developers who have good ideas and just want to get stuff done, or people who want to easily consume their good work) and hence the next logical step in the commoditisation of technology markets.

Does this mean that we are doomed to a future of competing proprietary platforms?  It is one possible outcome (i.e. the dominance of mega-platform providers) but not the only one.  Given that platforms are infrastructural in nature – and hence tend towards commoditisation – it is feasible that there will be open source alternatives which can compete against the big companies.  Furthermore it is likely that all platforms will tend towards broad functional equivalence over time – as people adopt useful ideas from competitors – and thus the question will be whether open source or proprietary models deliver the ‘killer’ offering that scales into a de facto standard.  As always the market will decide; the key message, however, is that how we implement the internals of things hardly matters anymore – beyond a few interoperability caveats; what matters is how well we can realise higher level outcomes for people (both economic and social).

Business

The final level at which we need to examine the impact of the iPad is at the level of business models.  Whilst we have examined the implications for hardware and software platforms as individual elements, we also have to consider the broader question of how these components are integrated into a business proposition.  This is probably the area where most concern justifiably exists: whilst technologies in the hardware and platform space are neither inherently good nor bad, Apple’s current business model is a top-to-bottom walled garden of hardware, platform and delivery channel within a single brand experience.  This model delivers a tight integration between all elements of the offer that prevents other companies competing against Apple for specific components within the overall ecosystem of business types they have assembled.  To know whether this is a problem – and for whom – we need to consider the different kinds of businesses at play and the relationship between them.

The Apple Business Stovepipe

Currently the whole Apple ecosystem is a tightly coupled, single business model stovepipe that you take or leave as a package. 

[image: the Apple business stovepipe]

In this context all of Apple’s components live and die as a package and cannot be independently optimised.  On the reverse side, consumers cannot choose to take only those elements of the Apple product set that they really want; they have to take all elements together (they cannot have an iPad but source applications from elsewhere, for instance, or make use of Apple’s platform on another manufacturer’s device).

These properties of the Apple business model are far more troubling than the individual technologies themselves – which as we have seen are as open as necessary and clearly supportive of “generativity” – because they limit competition within the layers of the ecosystem and prevent “generativity” in the business model space.  The iron control exerted by Apple on the people who want to live within their top-to-bottom closed ecosystem means that there are no opportunities for “generative tinkering” from a business perspective (different devices, different stores, different payment models, different brand experiences etc. etc.). 

This stifling of competition has some serious consequences both for the consumer – as it limits choice and innovation, allowing Apple to drip feed innovation at their own pace to maximise their revenue whilst simultaneously keeping prices artificially high – and, perhaps more surprisingly, for Apple themselves.

Apple’s Business Components

In my previous post about divergent forms of differentiation I touched on the fact that there are four broad kinds of businesses – culturally and economically – that need to be optimised in different ways.  To recap these were:

  • Relationship businesses:  These businesses are essentially business capabilities that leverage trust relationships to bring together different parties for mutual gain;
  • Innovation and commercialisation businesses: These businesses are essentially small, innovation focused capabilities who specialise in IP generation and product and service commercialisation;
  • Infrastructure businesses: These businesses are essentially business capabilities that respond to economies of scale; and
  • Portfolio businesses:  These businesses own and manage brands and invest their capital across a range of other business types to maximise asset growth. 

If we look at Apple then we could propose a neat division of the elements of their ecosystem into this categorisation:

  • Relationship business:  App store.  In this business Apple wants to own the relationship with the end customer and ‘mediate’ access to other providers, leveraging their brand loyalty and the resulting trust people place in their offers and recommendations.  This is one of the reasons Apple are so keen to ensure the quality of applications distributed through this channel;
  • Innovation and commercialisation business:  e.g. iPad.  Apple does a great job of bringing innovative and beautifully designed products to market and both the iPhone and the iPad are great examples of this.  In this context Apple hold the IP for these products but they do not carry out any of the actual manufacturing etc. themselves.  As a result in the device space Apple are concentrating on IP generation;
  • Infrastructure business:  iPhone OS.  In this context Apple are absolutely a platform business and such platforms are infrastructural in nature.  The value of a platform is in the breadth and depth of its ecosystem and hence the economies of scale it can generate;
  • Portfolio business:  The Apple brand.  Apple itself is clearly a strong global brand which is loved by its many fans and equally obsessed over by its detractors.  This element of the business needs to have a balanced strategy and set of resulting investments across the other business types to deliver growth and ensure the long term health of Apple.

Optimise Apple, Unleash Business “Generativity”, Treat the Consumer Fairly

If we take this view of the components that make up Apple’s ecosystem then we can see that the major elements fall into different business types and hence need to be optimised differently (as per my longer post on this subject generally).  As a result, rather than sub-optimising all of the capabilities it holds to maintain end-to-end control, Apple could alternatively hold these components as part of a brand portfolio that gets optimised in different ways.

[image: Apple’s business components optimised as a portfolio]

In this instance the optimisation would be (starting from the bottom):

  • Infrastructure business (iPhone OS):  Allow other manufacturers to license and use the iPhone OS.  Platform businesses are all about economies of scale, and restricting the penetration of your platform is a long-term bad bet.  Opening up the platform to the broadest market would allow Apple to monetise their IP, create more choice for the consumer and facilitate far greater “generativity” in the business space.  Apple would benefit from this “generativity” at the macro level as the larger its ecosystem the greater the penetration of the platform and hence the greater the revenue for Apple.  There is a direct parallel here to what happened between Macs and PCs in the last major platform wars; Apple tried to retain soup-to-nuts control whilst Microsoft concentrated on getting their platform into as many computers as possible.  Only one of these companies is the largest software vendor on the planet.
  • Innovation and Commercialisation business (e.g. iPad):  There are three separate ways that Apple could monetise the iPad (or other hardware devices); firstly it could take a decision to retain sole control of the iPad as its own brand device for accessing the broader market that it has enabled through its platform.  This would be a sensible position for them given the quality and desirability of their devices, enabling them to be a top-tier producer of devices for the iPhone OS platform that they have created.  Secondly it could take the IP that it develops and license it to other manufacturers – it is possible that it could make more money from licensing IP to a broader range of device producers than from selling hardware in isolation.  The third way is a combination of both of these approaches, licensing technical IP but also continuing to use this IP themselves to manufacture design-led hardware as they do today.  All of these options would help to enable broader business “generativity” by creating competition in the device market around the core iPhone OS platform; from Apple’s perspective, the additional devices resulting from such “generativity” would legitimise the platform as a de facto standard, encourage people to buy their high-end devices (as they won’t feel ‘trapped’) and enable Apple to gain revenue from its other business lines (e.g. platform and apps sales) from a broader base of users;
  • Relationship business (App Store):  Although Apple would now have to allow other people to deliver applications to devices – as a result of losing absolute control over the platform – an own brand App Store proposition could still hold massive appeal due to the Apple brand and the trust it has amongst consumers (especially those who value simplicity and function).  Apple could continue to position itself as a ‘high end’ App Store that has strong quality constraints for offered applications and which ‘guarantees’ successful execution (as today).  This opening up of the applications ecosystem to other sellers would enable business “generativity” within the relationship space as consumers could choose the level of cost, support and quality they were happy to live with.  Apple could benefit however the market evolved, firstly by having a differentiating proposition from a relationship (i.e. app store) perspective based around both their brand and their ease of use (including guarantees).  From a broader perspective more competition in the app store space would encourage greater adoption of the iPhone OS platform and the devices required to access it.
  • Portfolio business (Apple):  There would be a number of implications overall for Apple if they decided to follow an ‘open’ business model, split their capabilities along economic optimisation lines and allow competition into their ecosystem.  The primary implication would be that Apple would now be able to optimise all of their business types independently, creating more value overall for the Apple brand.  An obvious example of such an optimisation would be the licensing of their platform to other companies.  More subtly, however, there would be nothing to stop Apple simultaneously pursuing a strategy of openness and individual business optimisation whilst also delivering an integrated end-to-end experience for their consumers as they do today.  Each of their ‘open’ business components could still be wrapped in the next layer of business in order to deliver the same simple and integrated customer experience with the same level of guarantees that they do now; this in itself could be the overall brand proposition even as the individual elements are available to other people to maximise Apple revenue, business “generativity” (and hence secondary revenue) and consumer choice (and hence trust).  As a result the irony is that an open strategy would benefit Apple as much as the other participants.

“Generativity” and Business

Overall, then, we would have to say that the current Apple business model prevents “generativity” in the business spaces around which they have chosen to build their ecosystem (i.e. hardware, platform and service distribution).  This position is probably equally bad for the consumer, for potential Apple partners and – most strangely – for Apple itself.  Google’s foray into the mobile market is far more dynamic at the moment, primarily due to the openness of its business model and the resulting number of companies involved; it does, however, suffer from the kinds of fragmentation and lack of trust that Apple has managed to avoid through its tight control over its ecosystem.  As with platforms, however, it is likely that such a strategy – whilst probably unavoidable initially – will only work for a short period of time, essentially until the market matures sufficiently for people to understand its basic operation; at that point commoditisation of function – coupled with providers who will take on the role of integrators and trusted brands for consumers in the way that Apple has in its closed system – will require companies to optimise the component parts of the value chain as described above in order to remain competitive.

The iPad and Lessons for Cloud Computing

Given that this post has already extended way beyond my original intention I will keep this last set of observations to a set of bullet points:

  • Cloud platforms are increasingly driving people’s understanding that infrastructure is pointless and (mostly) worthless.  In much the same way as devices are becoming commoditised and hence packaged, so traditional infrastructure is disappearing from our purview as it too becomes commoditised.  Strangely, however, this doesn’t stop many IT departments and architects continuing to think that infrastructure design and support is a primary and important activity.  Such work represents continued “generative” activity in the hardware space, however, and thus provides minimal value for maximal effort given that software is now the primary medium for value-adding, “generative” activity.  Even infrastructure-as-a-service offerings are only postponing the point at which infrastructure disappears completely behind software platforms;
  • Cloud platforms support “generativity” in much the same way as the Apple platform.  They enable much greater and more rapid “generative” activities in delivering applications and services to fulfil higher level functions, but you have to accept that you can no longer perform “generative” activities in the platform space itself – these are now inherently the domain of platform providers.  Experience from the angst of the iPad launch suggests that this will not be an easy transition for the hundreds of thousands of geeks whose primary function is to re-create platforms and frameworks over and over again.  In terms of openness, interoperability is now far more important than portability as systems will be constructed from services that potentially span many platforms.  The risks inherent in these services being implemented on proprietary platforms diminish both as a result of the broader portfolio of platforms in play and as a result of the speed at which services can be redeveloped elsewhere if necessary.  Interoperability also enables “generativity” for platform providers, enabling them to innovate far more rapidly and find ways of helping us to realise our own services much more cheaply and quickly than ever.
  • Cloud platforms will operate within the same basic business models as discussed above in the context of Apple; as a result organisations that wish to operate within the IT market of the future will need to decide what ‘type’ of business they are and optimise accordingly.  IT companies may hold a portfolio of individually optimised relationship, platform and innovation businesses, but the idea of an ‘integrated’ (i.e. stovepiped) IT service provider is a dangerous fallacy.  Looking specifically at internal IT departments, they will increasingly need to play the role of ‘relationship manager’ and integrate many external services together to realise business aims; in this context they should be looking for exit strategies for the majority of the IT systems and platforms they operate today (the sketch after this list illustrates one way of keeping such exits cheap).
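
To make the ‘exit strategy’ point above a little more concrete, here is a deliberately simple Python sketch – the capability, class names and providers are all hypothetical – of keeping an externally sourced capability behind a thin internal contract so that moving it elsewhere is a configuration change rather than a re-integration project:

    from abc import ABC, abstractmethod

    class InvoicingService(ABC):
        """Internal contract for an invoicing capability; business code
        depends on this, never on a specific provider's API."""

        @abstractmethod
        def raise_invoice(self, customer_id: str, amount: float) -> str:
            ...

    class ProviderA(InvoicingService):
        # Hypothetical adapter wrapping one external provider.
        def raise_invoice(self, customer_id, amount):
            return f"A-{customer_id}-{amount}"

    class ProviderB(InvoicingService):
        # A second adapter; moving here is the 'exit strategy' in action.
        def raise_invoice(self, customer_id, amount):
            return f"B-{customer_id}-{amount}"

    def month_end_run(invoicing: InvoicingService) -> str:
        # Business logic sees only the contract.
        return invoicing.raise_invoice("cust-42", 99.0)

    print(month_end_run(ProviderA()))  # swap in ProviderB() to exit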

Cloud Platforms and Future Middleware

6 May

I’m going to try and break the habit of a lifetime in this ‘second life’ of my blog and post the odd ‘peppy’ comment on things I’ve seen as well as getting sucked into long analyses :-p

In that spirit I thought I’d just comment on a post I saw today by John Rymer at Forrester; essentially John was expressing some mild disappointment at a discussion about future app servers he was involved in and suggesting that the future of these products needs to be radically different in a connected, cloud environment.  I completely agreed with his points about more lightweight, specialised and virtualised ‘containers’, and this reflected the work I discussed in one of my older posts, where I talked about the need to use virtual templates, lightweight product and framework configurations, specific patterns and metadata plus domain specific languages and factories in pursuit of IT industrialisation.  Such lightweight and specialised containers for service realisation help to make developers more productive but also enable much greater agility and efficiency in resource usage by allowing each such service to change and scale according to its purpose and needs independent of the others.  In this sense I understand the feeling of one commenter who described such platforms in terms of a fabric; this is probably an apt description given that you will have independent, specialised services bound to specific lightweight containers, ‘floating’ on a virtual infrastructure and collaborating with others to realise wider intent.  At heart a lot of John’s post was about simplifying, downsizing and specialising containers for different kinds of services and so I heartily agreed with his sentiments on the matter.
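
As a rough illustration of quite how small such a specialised ‘container’ can be – this is a sketch only, using nothing but the Python standard library and an invented pricing service – the idea is that each service carries just enough runtime to do its one job, and many of these, independently deployed and scaled on virtual infrastructure, form the ‘fabric’:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # A deliberately tiny, single-purpose 'container': one service, one
    # endpoint, no general purpose app server wrapped around it.
    class PricingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps({"service": "pricing", "price": 9.99}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Each such service scales independently of all the others.
        HTTPServer(("0.0.0.0", 8080), PricingHandler).serve_forever()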

Business Enablement as a Key Cloud Element

30 Apr

After finally posting my last update about ‘Industrialised Service Delivery’ yesterday I have been happily catching up with the intervening output of some of my favourite bloggers.

One post that caught my eye was a reference from Phil Wainwright – whilst he was talking about the VMForce announcement – to a post he had written earlier in the year about Microsoft’s partnership with Intuit.  Essentially one of his central statements was related directly to the series of posts I completed yesterday (so part 1, part 2 and part 3):

“the breadth of infrastructure <required for SaaS> extends beyond the development functionality to embrace the entirely new element of service delivery capabilities. This is a platform’s support for all the components that go with the as-a-service business model, including provisioning, pay-as-you-go pricing and billing, service level monitoring and so on. Conventional software platforms have no conception of these types of capability but they’re absolutely fundamental to delivering cloud services and SaaS applications”.

This is one of the key points that I think is still – inexplicably – lost on many people (particularly people who believe that cloud computing is primarily about providing infrastructure as a service).  In reality the whole world is moving to service models because they are simpler to consume, deliver clearer value for more transparent costs and can be shared across organisations to generate economies of scale.  In fact ‘as a service’ models are increasingly not just an IT phenomenon but will also extend to the way in which businesses deal with each other across organisational boundaries.  For the sale and consumption of such services to work, however, we need to be able to ‘deliver’ them; in this context we need to be able to market them, make them easy to subscribe to, manage billing and service levels transparently for both the supplier and consumer, and enable rapid change and development over time to meet the evolving needs of service consumers.  As a result anyone who wants to deliver business capabilities in the future – whether these are applications or business process utilities – will need to be able to ensure that their offering exhibits all of these characteristics.

Interestingly these ‘business enablement’ functions are pretty generic across all kinds of software and services since they essentially cover account management, subscription, business model definition, rating and billing, security, marketplaces etc etc (i.e. all of the capabilities that I defined as being required in a ‘Service Delivery Platform’).  In this context the use of the term ‘Service Delivery Platform’ in place of cloud or PaaS was deliberate; what next generation infrastructures need to do is enable people to deliver business services as quickly and as robustly as possible, with the platforms themselves also helping to ensure trust by brokering between the interests of consumers and suppliers through transparent billing and service management mechanisms.
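
To give a feel for what sits beneath such business enablement, here is a minimal Python sketch of metering and rating – the rate card, event names and tenant are invented for illustration, and a real Service Delivery Platform would of course capture these events transparently for both supplier and consumer:

    from collections import defaultdict

    # An invented pay-as-you-go rate card: price per unit of each
    # metered event type.
    RATE_CARD = {"api_call": 0.001, "gb_stored": 0.10, "report_run": 0.05}

    usage = defaultdict(float)  # tenant -> accumulated charge

    def meter(tenant: str, event: str, quantity: float = 1.0) -> None:
        """Record a usage event and rate it immediately against the card."""
        usage[tenant] += RATE_CARD[event] * quantity

    meter("acme", "api_call", 12_000)
    meter("acme", "gb_stored", 25)
    meter("acme", "report_run", 3)

    print(f"acme bill this period: ${usage['acme']:.2f}")  # $14.65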

This belief in service delivery is one of the reasons I believe that the notion of ‘private clouds’ is an oxymoron – I found this hoary subject raised again on a Joe McKendrick post after a discussion on ebizQ – even without the central point about the obvious loss of economies of scale.  Essentially the requirement to provide a whole business enablement fabric to facilitate cross organisational service ecosystems – initially for SaaS but increasingly for organisational collaboration and specialisation – is just one of the reasons I believe that ‘private clouds’ are really just evolutions of on-premise architecture patterns – with all of the costs and complexity retained – and thus purely marketecture.  When decreasing transaction costs are enabling much greater cross organisational value chains, the benefits of a public service delivery platform are immense, enabling organisations to both scale and evolve their operations more easily whilst also providing all of the business support they need to offer and consume business services in extended value chains.  Whilst some people may think that this is a pretty future-oriented reason to dislike the notion of private clouds, for completeness I will also say that to me – in the sense of customer owned infrastructures – they are an anachronism; again this is just an extension of existing models (for good or ill) and nothing to do with ‘cloud’.  It is only the fact that most protagonists of such models are vendors with very low maturity offerings like packaged infrastructure and/or middleware solutions that makes it viable, since the complexity of delivering true private SDP offerings would be too great (not to mention ridiculously wasteful).  In my view ‘private clouds’ in the sense of end organisation deployment just mean building a new internal infrastructure (whether self managed or via a service company) rather like the one you already have, but with a whole bunch of expensive new hardware and software (so 90% of the expense but only 10% of the benefits).

To temper this stance I do believe that there is a more subtle, viable version of ‘privacy’ that will be supported by ‘real’ service delivery platforms over time – that of having a logically private area of a public SDP to support an organisational context (so a cohesive collection of branded services, information and partner integrations – or what I’ve always called ‘virtual private platforms’).  This differs greatly from the ‘literally’ private clouds that many organisations are positioning as a mechanism to extend the life of traditional hardware, middleware or managed service offerings – the ability of service delivery platforms to rapidly instantiate ‘virtual’ private platforms will be a core competency and give the appearance and benefits of privacy whilst also maintaining the transformational benefits of leveraging the cloud in the first place.  To me, literally ‘private clouds’ on an organisation’s own infrastructure – with all of their capital expense, complexity of operation, high running costs and ongoing drag on agility – only exist in the minds of software and service companies looking to extend out their traditional businesses for as long as possible.

Industrialised Service Delivery Redux III

29 Apr

It’s a bit weird editing this more or less complete post 18 months later, but this is a follow-on to my previous posts here and here.  In those posts I discussed the need for much greater agility to cope with an increasingly unpredictable world and ran through the ways in which we can industrialise IT provision to focus on tangible business value and rapid realisation of business capability.  This story relied upon the core notion that technology is no longer a differentiator in and of itself and thus we just need workable patterns that meet our needs for particular classes of problem – which in turn reduces the design space we need to consider and allows increasing use of specialised platforms, templates and development tools.

In this final post I will discuss the notion that such standardisation calls into question the need to own such technology at all; essentially as platforms and tools become more standardised and available over the network so the importance of technology moves to access rather than ownership.

Future Consolidation

One of the interesting things from my perspective is that once you start to build out an asset-based business – like a service delivery platform – it quickly becomes subject to economies of scale.

It is rapidly becoming plain, therefore, that game-changing trends such as:

  • Increasing middleware consolidation around traditional ‘mega platform’ providers;
  • Flexible infrastructure enabled by virtualisation technology;
  • Increasingly powerful abstractions such as service-orientation;
  • The growing influence of open source software and collaborating communities; and
  • The massively increased interconnectivity enabled by the web.

are all going to combine to change not just the shape of the IT industry itself but increasingly all industries; essentially as IT moves to service models so organisations will need to reshape themselves to align with these new realities, both in terms of their use of IT and in terms of finding their distinctive place within their own disaggregating business ecosystems.

From a technology perspective it is therefore clear that these forces are combinatory and lead to accelerating commoditisation.  The implication of this acceleration is that decreasing differentiation should lead to increased consolidation as organisations no longer need to own and operate their own IT when such IT incurs cost and complexity penalties without delivering differentiation.

[image: accelerating commoditisation leading to platform consolidation]

In a related way, such a shift by organisations to shared IT platforms is also likely to be an amplifying trend; as we see greater platform consolidation – and hence decreasing differentiation for organisations owning their own IT – so laggard organisations will become less competitive as a result of their expensive, high-drag IT relative to their low-cost, fleet-of-foot competitors.  Such organisations will then also seek to transition, eventually creating a tipping point at which ownership of IT becomes an anachronism.

From the supply perspective we can also see that as platforms become less differentiating and more commoditised they also become subject to increasing economies of scale – from an overall market perspective, therefore, offering platforms as a service becomes a far more effective use of capital than the creation and ownership of an island of IT, since scale technologies drift naturally towards consolidation.  There are some implications to this for the IT industry given the share of overall IT spend that goes on repeated individual installation and consulting for software and hardware but we shall leave that for another post.

As a result of these trends it is highly likely that we will see platform as a service propositions growing in influence fairly rapidly.  Initially these platforms are likely to be infrastructure-oriented and targeted at new SaaS providers or transitioning ISVs to lower the cost of entry, but I believe that they will eventually expand to deliver the full business enablement support required by all organisations that need to exist in extended value webs (i.e. eventually everyone).  These latter platforms will need to have all of the capabilities I discussed in the previous post and will be far beyond the technology-centric platforms envisaged by the majority of emerging platform providers today.  Essentially, as everybody becomes a service provider (or BPU in other terms) in their particular business ecosystem, so they will need to rapidly realise, commercialise, manage and adapt the services they offer to their value webs.  In this latter scenario I believe that organisations will be caught in the jaws of a vice – the unbundling of capability to SaaS or other BPU providers to allow them to specialise and optimise the overall value stream will see their residual IT costs rocket as there are fewer capabilities to share the cost across; at the same time economies of scale produced by IT service companies will see the costs of platform as a service offerings plummet and make the transition a no brainer.

So what would a global SDP look like?

[image: a global Service Delivery Platform]

Well, remarkably like the one I showed in my previous posts, given that I was leading up to this point, lol.  The first difference is that the main bulk of the platform is now explicitly deployed in the cloud – and it’ll obviously need to scale up and down smoothly and at low cost.  In addition all of the patterns that we discussed in my previous post will need to support multi-tenancy, and such patterns will need to be built into the tools and factories that we will use to create systems optimised to run on our Service Delivery Platform.
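
As a hedged sketch of what ‘multi-tenancy built into the patterns’ might mean – the schema and names below are invented – the point is that every data access in a generated service is forced through a tenant-scoped pattern, so that isolation is structural rather than left to individual developers:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (tenant_id TEXT, item TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [("acme", "widget"), ("acme", "sprocket"),
                      ("globex", "gizmo")])

    class TenantScopedStore:
        """The kind of pattern a service factory would bake into every
        generated service: all queries are filtered by tenant."""

        def __init__(self, conn, tenant_id):
            self.conn, self.tenant_id = conn, tenant_id

        def orders(self):
            # The tenant filter is structural, not optional.
            rows = self.conn.execute(
                "SELECT item FROM orders WHERE tenant_id = ?",
                (self.tenant_id,))
            return [row[0] for row in rows]

    print(TenantScopedStore(conn, "acme").orders())    # ['widget', 'sprocket']
    print(TenantScopedStore(conn, "globex").orders())  # ['gizmo']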

At the same time the service factory becomes a way of enabling the broadest range of stakeholders to rapidly and reliably create services and applications that can be deployed to our platform – in fact it moves from being “just” an interesting set of tools to support industrialised capability realisation to being one of the main battlegrounds for PaaS providers trying to broaden their subscriber base by increasing the fidelity of realisation and reducing the barrier of entry to the lowest level possible.

Together the cloud platform and associated service factory will be the clear option of choice for most organisations, since it will yield the greatest economies of scale to the people using it.

One last element on this diagram that differentiates it from the earlier one is the on-premise ‘customer service platform’.  In this context there is still a belief in many quarters that organisations will not want to physically share space and hardware with other people – they may be less mature, they may not trust sufficiently or they may genuinely have reasons why their data and services are so important that they are willing to pay to host them separately.  In the long term I do not subscribe to this view and to me the notion of ‘private clouds’ – outside of perhaps government and military use cases – is oxymoronic and at best a transitional situation as people learn to trust public infrastructures.  On the other hand, whilst this may be playing with semantics, I can see the case for ‘virtual private clouds’ (i.e. logically ring fenced areas of public clouds) that give the appearance and the majority of the benefits of being private through ‘soft’ partitioning (i.e. through logical security mechanisms) whilst retaining economies of scale through the avoidance of ‘hard’ partitioning (i.e. separate physical infrastructure).  Indeed I would state that such mechanisms for making platforms appear private (including whitelabelling capabilities) will be necessary to support the branding requirements of resellers, systems integrators and end organisations (a small sketch of the idea follows).  For the sake of completeness, however, I would position transitional ‘private clouds’ as reduced functionality versions of a Service Delivery Platform that simply package up some hardware but leave the majority of the operational and business support – along with things like backup and failover – back at the main data centres of the provider in order to create an acceptable trade-off in cost.
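
And as promised, a small sketch of the ‘virtual private platform’ idea – every field and naming convention below is an assumption for illustration – showing privacy as a logical provisioning construct on shared infrastructure rather than as separate hardware:

    from dataclasses import dataclass, field

    @dataclass
    class VirtualPrivatePlatform:
        """A logically ring-fenced slice of a shared platform: 'soft'
        partitioning via naming, policy and branding, not hardware."""
        org: str
        namespace: str
        brand_domain: str
        allowed_partners: set = field(default_factory=set)

    def provision(org: str) -> VirtualPrivatePlatform:
        # One shared physical platform underneath; the privacy is
        # logical, so economies of scale are retained.
        key = org.lower()
        return VirtualPrivatePlatform(
            org=org,
            namespace=f"tenant-{key}",
            brand_domain=f"services.{key}.example.com",  # whitelabel
        )

    vpp = provision("Acme")
    print(vpp.namespace, vpp.brand_domain)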

Summary

So in this final post I have touched on some of the wider changes that are an implication of technology commoditisation and the industrialisation of service realisation.  For completeness I’ll recap the main messages from the three posts:

  • In post one I discussed how businesses are going to be forced to become much more aware of their business capabilities – and their value – by the increasingly networked and global nature of business ecosystems.  As a result they will be driven to concentrate very hard on realising their differentiating capabilities as quickly, flexibly and cost effectively as possible; in addition they will need to deliver these capabilities with stringent metrics.  This has some serious implications for the IT industry as we will need to shift away from a technology focus (where the client has to discover the value as a hit-and-miss emergent process) to one where we can demonstrate a much more mature, reliable and outcome-based proposition.  To do this we’ll need to build the platforms to realise capabilities effectively and in the broadest sense.
  • In post two I discussed how industrialisation is the creation and consistent application of known patterns, processes and infrastructures to increase repeatability and reliability.  We might sacrifice some flexibility, but increasing commoditisation of technology makes this far less important than cost effectiveness and reliability.  When industrialising you need to understand your end-to-end process and then do the nasty bit – bottom up in excruciating detail.
  • Finally in post three I have discussed my belief that increasing standardisation of technology will lead to accelerating platform consolidation.  Essentially as technology becomes less differentiating and subject to economies of scale it’s likely that IT ownership and management will be less attractive. I believe, therefore, that we will see increasing and accelerating activity in the global Service Delivery Platform arena and that IT organisations and their customers need to have serious, robust and viable strategies to transition their business models.