
Evolution and the IT Industry – Part I

25 Oct

(I’m cross-posting this from the Fujitsu RunMyProcess blog where I am now a regular contributor).

A few years ago I wrote a (rather long) post about evolution in the context of business and in particular the use of emerging business architecture techniques to increase the chances of successfully navigating its influence.

Prompted by two recent posts on this blog, however – ‘Software Darwinism’ by Malcolm Haslam and ‘The Death and Rebirth of Outsourcing’ by Massimo Cappato – I thought I would simplify my original piece into a much shorter and more IT-centric two-part set of observations on this theme.  I basically wanted to pick up on the concept of evolution raised by Malcolm and use it as a vehicle to explore the potential impact on businesses and IT of the disruption described by Massimo: how have we arrived at the landscape of today, and what can we learn from evolutionary processes about the likely impact of that disruption on the businesses paying large amounts of money for ‘artificially alive’ systems?

In part 1 I will introduce some ideas about evolution and discuss the current state of businesses in this context.  In part 2 I will continue the theme to discuss the way in which current disruptions represent a ‘punctuated equilibrium’ that demands rapid business evolution – or creates a high likelihood of extinction.

Evolution as an Algorithm

A fascinating book I once read about ‘complexity economics’ described evolution as an algorithm for exploring very large design spaces.  In this interpretation the ‘evolutionary algorithm’  allows the evaluation of a potentially infinite number of random designs against the selection criteria of a given environment. Those characteristics that are judged as ‘fit’ are amplified – through propagation and combination – while those which are not die out.

In the natural world evolution throws up organisms that have many component traits, and success is judged – often brutally – by how well the combination of traits enables an animal to survive in its environment.  For instance, individuals of a particular colour or camouflage may survive due to their relative invisibility while others are eaten.  Furthermore, this is an ongoing process – individuals with desirable traits will be better equipped to survive, and the mating of such individuals will combine – and hence amplify – their desirable traits within their offspring.  Over time the propagation and combination of the most effective traits will increase in the population overall, and where this happens quickly enough a species will evolve successfully for its environment.
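The loop described above – generate variation, judge fitness against the environment, amplify successful traits through combination – can be sketched as a toy genetic algorithm.  Everything here (the bit-string ‘traits’, the fitness function, the population sizes) is invented for illustration; it is a minimal sketch of the general idea, not anything from the original discussion.

```python
import random

random.seed(42)

TRAITS = 20       # number of binary 'traits' per individual
POP_SIZE = 30
GENERATIONS = 40

def fitness(individual):
    # The 'environment' judges fitness; here it is simply the count of 1-bits.
    return sum(individual)

def select(population):
    # Tournament selection: fitter individuals are more likely to propagate.
    contenders = random.sample(population, 3)
    return max(contenders, key=fitness)

def combine(a, b):
    # Mating combines traits from two fit parents (single-point crossover).
    point = random.randrange(1, TRAITS)
    return a[:point] + b[point:]

def mutate(individual, rate=0.01):
    # Random variation keeps the search exploring the design space.
    return [bit ^ 1 if random.random() < rate else bit for bit in individual]

# Start from random designs, then repeatedly select, combine and mutate.
population = [[random.randint(0, 1) for _ in range(TRAITS)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(combine(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(fitness(best))  # converges towards the optimum (TRAITS) without ever enumerating the design space
```

The point of the sketch is the one made in the text: no design is evaluated exhaustively or planned in advance, yet the population converges on fit combinations of traits purely through selection and amplification.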

Punctuated Equilibrium

Another interesting aspect of evolutionary systems is that they often exhibit long periods of relative stability until some set of external changes creates a ‘punctuated equilibrium’; that is a change to the environment which brings new selection criteria to the fore.  Such changes can have a devastating effect on species which have evolved successfully within the previous environment and lead to new periods of dominance or success for new or previously less successful species whose traits make them better adapted to the new selection criteria that result from the change.  Such species then continue to evolve towards mastery of their environment while others which are too specialised to adapt simply die out.

A particularly dramatic example of this process was the extinction of the dinosaurs, where a change in the environment lowered temperatures and destroyed the lush foliage they depended upon.  This led them from masters of the world to extinction in a relatively short period – the combination of traits that previously made them highly successful was no longer well aligned to the selection criteria of the new environment.

Markets as Evolutionary Systems

It has been argued that the complexity of markets (in terms of their scale, their breadth of participation and the differing intents of the participants) means that they can effectively be viewed as evolutionary systems.  Markets are essentially an environment in which we participate rather than something that can be clearly understood or designed in advance.  They are effectively a very large design space where the characteristics for success are often not known in advance and must be discovered through experimentation and adaptation.

When we look at businesses in an evolutionary context we can therefore hypothesize that those which converge over time  towards successful combinations of traits – as judged by their stakeholders through a process of interaction and adaptation – will be the ones best adapted  to market needs and thus chosen by consumers.  These traits – whether they are talent strategies, process strategies or technology strategies – are then copied by other businesses, replicating and amplifying successful traits within the economic system.

The Influence of IT

If we focus specifically on IT, we can see that even today IT systems have a large influence on the quality of the business capabilities that underpin a company’s offerings.  Every business competes for selection against competitors running their own applications – and software is increasingly moving to the core as business becomes ‘digital’ – so it is clear that IT is a major (and increasing) factor in deciding the ‘fitness’ of any particular business versus another.  In this context we can see that the degree to which IT helps or hinders a business makes a huge difference to the quality of its ‘traits’ – both individually and in aggregation.  IT can therefore be a significant influence on whether a business’s offerings are ‘fit’ when judged by the evolutionary algorithm of the market.

Competition in an Age of Universally Bad IT

Despite the illusion of change over the last 30 years, at the macro level things have actually been relatively static from  a technology perspective.  While we have moved from mainframes to client-server and from client-server to the Web the fundamental roles of business and IT have remained unchanged (i.e. firms exist to minimise the transaction costs of doing business by building scale and such businesses spend a lot of money on owning and operating IT in pursuit of efficiencies and consistency across their large scale operations).  In reality most IT investment has therefore been inward facing and viewed as a cost of doing business (a ‘tax’ as Massimo would describe it) rather than a platform for the delivery of innovation and differentiation from an external perspective.

Under this model we have seen large businesses use their scale to pay for IT products and services that are inaccessible to smaller organisations.  Over time – because the focus has often been on efficiencies and standardisation – many IT estates have tended to converge around similar packaged applications and technology.  This convergence has all but wiped out the flexibility required for business differentiation while simultaneously placing organisations functionally and temporally in lockstep (as a concrete example, it is no surprise that all companies are facing huge challenges as a result of mobility, or that their challenges are more or less the same).  Together these developments have led to a broadly static business environment in which a small number of large companies dominate each market segment, providing mediocre levels of innovation and service while dictating both the shape of industries and the kinds of offerings consumers can expect from each.

As a result, while IT has enabled large scale efficiencies, it has led to the situation outlined by Massimo – a situation in which businesses have huge investment responsibilities, a crushing burden from bloated support and delivery organisations and a limited ability to evolve quickly (if at all).  The irony is that it has done this equally to all organisations that could afford it, while simultaneously acting as a competitive barrier by limiting the economies of scale achievable by organisations that could not.  As a result the costs, complexity, inflexibility and balkanisation around industry boundaries – along with a lack of innovation and customer-centricity – have become part of the settled fabric of business.

While this has not been a significant issue for large organisations during an extended period of relative stability, it threatens to create significant challenges in the face of any disruption to the status quo.  It is perhaps interesting to think of today’s businesses as the dinosaurs of the modern age – large and perfectly adapted to the warm and plant-rich environment in which they exist unchallenged.

A Punctuated Equilibrium for Business?

Over the last few years, however, we have seen the genesis of a major disruption – a disruption that is going to change the evaluation criteria of the market and require the development of wholly different traits to succeed.  As cloud delivery models, large scale mobility and the mass sharing of content in social graphs converge I believe that they herald a ‘punctuated equilibrium’ whose effects on business will be profound.  These are not just technology changes but rather a change to the fundamental environment in which we all work, play and socialise – and a signal that business models and even industry boundaries are up for radical change.

The possibilities that these advances create in tandem are akin to an emerging ice age for large businesses and their technology providers – an age in which businesses must fight for every customer and must mutate their organisations, business models and technology to attain a new definition of ‘fitness’.  The easy days of domination through mass and an abundance of low hanging cash to be grazed are passing.

In part 2 of this post I will therefore talk more about the nature of this punctuated equilibrium and my personal views on the shifts in business and technology models that will be required to survive it.

Cloud is not Virtualization

12 Jan

In the interests of keeping a better record of my online activity I’ve recently decided to cross-post opinions and thoughts I inflict on people in forums and on other technology sites to my blog as well (at least where they are related to my subject and have any level of coherence, lol).  In this context I replied to an ebizq question yesterday that asked “Is it better to use virtualization for some business apps than the cloud?“.  The question was essentially prompted by a survey finding that some companies are more likely to use virtualisation technologies than to move to the cloud.

Whilst I was only vaguely interested in the facts presented per se, I often find that talking about cloud and virtualisation together invites people to draw a false equivalence between two things that – at least in my mind – are entirely different in their impact and importance.

Virtualisation is a technology that can (possibly) increase efficiency in your existing data centre and which might be leveraged by some cloud providers as well.  That’s nice, and it can reduce the costs of hosting all your old cack in the short term.  Cloud, on the other hand, is a disruptive shift in the value proposition of IT and the start of a prolonged disruption in the nature and purpose of businesses.

In essence cloud will enable organisations to share multi-tenant business capabilities over the network in order to specialise on their core value. Whilst virtualisation can help you improve your legacy mess (or make it worse if done badly) it does nothing significant to help you take advantage of the larger disruption as it just reduces the costs of hosting applications that are going to increasingly be unfit for purpose due to their architecture rather than their infrastructure.

In this context I guess it’s up to people to decide what’s best to do with their legacy apps – it may indeed make sense in the short term to move them onto virtualised platforms for efficiency’s sake (should it cost out) in order to clean up their mess during the transition stage.

In the longer term, however, people are going to have to codify their business architecture, make decisions about their core purpose and then build new cloud services for key capabilities whilst integrating 3rd party cloud services for non-differentiating capabilities. In this scenario you need to throw away your legacy and develop cloud-native, multi-tenant services on higher level PaaS platforms to survive – in which case VMs have no place as a unit of value, and the single-tenant legacy applications deployed within them will cease to be necessary. The discussion therefore becomes a strategic one: how aggressively will you adopt cloud platforms, what does this mean for the life span of your applications, and how will it impact the case for building a virtualised infrastructure?  (I was assuming a question of internal virtualisation rather than IaaS, given the nature of the original question.)  If it doesn’t pay back, or you’re left with fairly stable applications already covered by existing kit, then don’t do it.

Either way – don’t build new systems using old architectures and think that running them in a virtualised environment ‘future-proofs’ you; the future is about addressing a set of higher level architectural issues related to delivering flexible, multi-tenant and mass-customisable business capabilities to partners in specialised value webs. Such architectural issues will increasingly be addressed by higher level platform offerings that industrialise and consumerise IT, reducing the burden of managing the complex list of components required to deliver business systems (also mentioned as an increasing issue in the survey).  As a result your route to safety doesn’t lie in simply using less physical – but equally dumb – infrastructure.

Cloud vs Mainframes

19 Oct

David Linthicum highlights some interesting research about mainframes and their continuation in a cloud era.

I think David is right that mainframes may be one of the last internal components to be switched off and that in 5 years most of them will still be around.  I also think, however, that the shift to cloud models may have a better chance of achieving the eventual decommissioning of mainframes than any previous technological advance.  Hear me out for a second.

All previous generations of technology looking to supplant the mainframe have essentially been slightly better ways of doing the same thing.  Whilst we’ve had massive improvements in the cost and productivity of hardware, middleware and development languages, essentially we’ve remained stuck with the purchase and ownership of costly and complex IT assets.  As a result, whilst most new development has moved to other platforms, the case for shifting away from the mainframe has never seriously held water: redevelopment would generate huge expense and risk but result in no fundamental business shift.  Essentially you still owned and paid for a load of technology ‘stuff’ and the people to support it, even if you successfully navigated the huge organisational and technical challenges required to move ‘that stuff’ to ‘this stuff’.  In addition, the costs already sunk into the assets and the technology cost barriers to other people entering a market (due to the capital required for large scale IT ownership) added to the general inertia.

At its heart cloud is not a shift to a new technology but – for once – genuinely a shift to a new paradigm.  It means capabilities are packaged and ready to be accessed on demand.  You no longer need to make big investments in new hardware, software and skills before you can even get started.  In addition, suddenly everyone has access to the best IT, and so your competitors (and new entrants) can immediately start building better capabilities than you without the traditional technology-based barriers to entry.  This creates four important considerations that might eventually lead to the end of the mainframe:

  1. Should an organisation decide to develop its way off the mainframe they can start immediately without the traditional need to incur the huge expense and risk of buying hardware, software, development and systems integration capability before they can even start to redevelop code.  This removes a lot of the cost-based risks and allows a more incremental approach;
  2. Many of the applications implemented on mainframes will increasingly be in competition with external SaaS applications that offer broadly equivalent functionality.  In this context moving away from the mainframe is even less costly and risky (whilst still a serious undertaking) since we do not even need to redevelop the functionality required;
  3. The nature of the work that mainframe applications were set up to support (i.e. internal transaction processing across a tight internal value chain) is changing rapidly as we move towards much more collaborative and social working styles that extend across organisational boundaries.  The changing nature of work is likely to eat away further at the tightly integrated functionality at the heart of most legacy applications and leave fewer core transactional components running on the mainframe; and
  4. Most disruptive of all, as organisations increasingly take advantage of falling collaboration costs to outsource whole business capabilities to specialised partners, so much of the functionality on the mainframe (and other systems) becomes redundant since that work is no longer performed in house.

I think that the four threads outlined here could lead to a serious decline in mainframe usage over the next ten years.

But then again, they are like terminators – perhaps they will simply be acquired gradually by managed service providers offering to squeeze the cost of maintenance, morph into something else and survive in a low grade capacity for some time.


Differentiation vs Integration (Addenda)

22 Jun

After completing my post on different kinds of differentiation the other day I still had a number of points left over that didn’t really fit neatly into the flow of the arguments I presented.  I still think some of them are interesting, though, and so thought I’d add them as addenda to my previous post!

Addendum 1

The first point was a general feeling that ‘standardisation’ is a good thing from an IT perspective.  This stemmed from one of Richard’s explicit statements that:

“Many people in the IT world take for granted that standardization (reduction in variety) is a good thing”

It is true that from an IT perspective standardisation is generally a good thing (since IT is an infrastructural capability).  Such standardisation, however, must allow for key variances that let people configure and consume the standardised applications and systems in a way that enables them to reach their goals (so they must support configuration for each ‘tenant’).  Echoing my other post on evolution – in order to consider this at both an organisational and a market level – we can see that a shift to cloud computing (and ultimately the consumption of specialised business capabilities across organisational boundaries) opens up a wider vista than is traditionally available within a single company.

In the traditional way of thinking about IT, people look to increase standardisation as a valid way of reducing costs and increasing reliability within the bounds of a single organisation’s IT estate.  The issue is that such IT standardisation often forces inappropriate standardisation – both in terms of technology support and change processes – on capabilities within the business (something I talked about a while ago).  Essentially the need to standardise for operational IT efficiency tries to override the often genuine cost and capability differences required by each business area.  In addition, on-premise solutions have rarely been created with simple mass-configuration in mind, requiring expensive IT customisation and integration to create a single ‘standard’ solution that cannot be varied by tenant (tenant in this case being a business capability with different needs).  Such tensions result in a constant war between IT – with the single ‘standard’ solution they can afford to support – and individual business capabilities, with their differing cost and capability requirements (a war which often results in departmental or ‘shadow’ IT implemented by end users outside the control of the IT department).

The interesting point about this, however, is that cloud computing allows organisations to make use of many platforms and applications without a) the upfront expenditure usually required for hardware, training and operational setup and b) the ongoing operational management costs.  In this instance the valid reasons that IT departments drive towards standardisation – i.e. reducing the number of heterogeneous technologies they must deploy, manage and upgrade – largely disappear.  If we also accept that IT is essentially infrastructural in nature – and hence provides no differentiation – then we can easily rely on external technology platforms to provide standardisation and economies of scale on our behalf without having to mandate a single platform or application to gain these efficiencies.  At this point we can turn the traditional model on its head – we can choose different platforms and applications for each capability depending on its needs without sacrificing any of the benefits of standardisation (subject to the applications and platforms supporting interoperability standards to facilitate integration).  Significant and transformational improvements enabled by capability-specific optimisation of the business is therefore (almost tragically) dependent on freeing ourselves from the drag of internal IT.
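The per-tenant variance argument above can be illustrated with a small sketch: one standardised service whose behaviour is layered with tenant-specific overrides, where each ‘tenant’ is a business capability with genuinely different needs.  All names and settings here are invented purely for illustration; it is a sketch of the pattern, not any particular product’s configuration model.

```python
from dataclasses import dataclass, field

# One standardised 'platform' configuration shared by every tenant.
STANDARD_DEFAULTS = {
    "sla_hours": 24,         # how quickly requests must be handled
    "approval_steps": 1,     # process rigour
    "data_retention_days": 90,
}

@dataclass
class TenantConfig:
    """Per-tenant overrides layered over the standard defaults.

    The platform stays standard underneath; each business capability
    varies only the settings it genuinely needs to vary.
    """
    name: str
    overrides: dict = field(default_factory=dict)

    def setting(self, key):
        # Tenant-specific value if present, otherwise the standard one.
        return self.overrides.get(key, STANDARD_DEFAULTS[key])

# Finance needs more rigour; marketing needs faster turnaround.
finance = TenantConfig("finance", {"approval_steps": 3, "data_retention_days": 2555})
marketing = TenantConfig("marketing", {"sla_hours": 4})

print(finance.setting("approval_steps"))    # 3  (tenant-specific)
print(marketing.setting("sla_hours"))       # 4  (tenant-specific)
print(marketing.setting("approval_steps"))  # 1  (inherited standard)
```

The design point mirrors the text: standardisation and economies of scale live in the shared defaults, while the variance each capability needs is cheap and local rather than a war over a single rigid ‘standard’ solution.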

Addendum 2

Richard also highlighted the fact that there is still a strong belief in many quarters that ‘business architecture’ should be an IT discipline (largely, I guess, from people who can’t read?).  I believe that ‘business’ architecture is fundamentally about propositions, structure and culture before anything else, and that IT is simply one element of a lower level set of implementation decisions.  Whilst IT people may have a leg up on the ‘structured thinking’ necessary to reason about a business’s architecture, any suggestion that business owners are too stupid to design their own organisations – especially using abstraction methods like capabilities – seems outrageous to me.  IT people have an increasingly strong role to play in ‘fusing’ with business colleagues to more rapidly implement differentiating capabilities, but they don’t own the business.  Additionally, continued IT ownership of business architecture and EA causes two further issues: 1) IT architecture techniques are still a long way in advance of business architecture techniques, and this means it is faster, easier and more natural for IT people to concentrate on them; and 2) the lack of business people working in the field – since they don’t know IT – limits the rate at which the harder questions about propositions and organisational fitness are being asked and tackled.  As a result – at least from my perspective – ‘business architecture’ owned by IT delivers a potential double whammy against progress: on the one hand it leads to a profusion of IT-centric EA efforts targeted at low interest areas like IT efficiency or cost reduction, whilst on the other it allows people to avoid studying, codifying and tackling the real business architecture issues that could be major strategic levers.

Addendum 3

As a final quick aside the model that I discussed for viewing an organisation as a set of business capabilities gives rise to the need for different ‘kinds’ of business architects with many levels of responsibility.  Essentially you can be a business architect helping the overall enterprise to understand what capabilities are needed to realise value streams (so having an enterprise and market view of ‘what’ is required) through to a business architect responsible for how a given capability is actually implemented in terms of process, people and technology (so having an implementation view of ‘how’ to realise a specific ‘what’).  In this latter case – for capabilities that are infrastructural in nature and thus require high standardisation – it may still be appropriate to use detailed, scientific management approaches.

Evolution and IT

3 Jun

This is a subject that has been on my mind a lot lately as I recently read an astounding book by Eric D. Beinhocker called “The Origin of Wealth”.  It was astounding to me for the way in which Beinhocker imperiously swept across traditional economic theories based on equilibrium systems, critiqued the inherent weaknesses of such theories when faced with real world scenarios and then hypothesised the use of the evolutionary algorithm as a basis for a fundamental shift to what he called ‘complexity economics’.  I’m going to return to discuss some of the points from this book – and the way in which they resonated with my own thoughts around business design, economic patterns and technology change – but for today I just wanted to comment on a post by Steve Jones in which he raises the issue of evolution in the context of IT systems.

Steve’s question was whether we should “reject evolution and instead take up arms with the Intelligent design mob”.  His thoughts have been influenced by the writing of Richard Dawkins, in particular the oft-times contrast between the apparent elegance of the external appearance of an animal (including its fitness for its environment) with the messy internals that give it life.  Steve suggests that he sees parallels in the IT world and brings this around to issues with the way in which a shift to service-based models often creates unfounded expectations on internal agility:

“The point is that actually we shouldn’t sell SOA from the perspective of evolution of the INSIDE at all we should sell it as an intelligent design approach based on the outside of the service. Its interfaces and its contracts. By claiming internal elements as benefits we are actually undermining the whole benefits that SOA can actually deliver.”

In the rest of the post and into the comments Steve then extends this argument to call for intelligent design (of externals) in place of evolution:

“The point I’m making is that Evolution is a bad way to design a system the whole point of evolution is that of selection, selection of an individual against others. In IT we have just one individual (our IT estate) and so selection doesn’t apply.”

My own feeling is that there isn’t a direct 1:1 relationship in thinking about evolution and the difficulties of changing the internals of a service in the way that Steve suggests.  I believe that evolution is a fractal algorithm whose principles apply equally to the design of business capabilities, service contracts and code.  To think about this more specifically I’d like to consider a number of his points after first considering evolution and how we frame it more broadly from a market and enterprise context.

What is evolution?

Evolution is an algorithm that allows us to explore large design spaces without knowing everything in advance.  It allows us to try out random designs, apply some selection criteria and then amplify those characteristics of a design that are judged as ‘fit’ by the environment (i.e. the selection criteria).  In the natural world evolution throws up organisms that have many component traits and success is judged – often brutally – by how well these traits enable an animal to survive in the environment in which it exists.  Within an individual species there will be a particular subset of traits that define that species (so traits that govern size, speed or vision for instance).  Individuals within a species who have the most desirable instances of these traits will be better equipped to survive, the mating of these individuals will merge their desirable traits and over time the preponderance of the most effective traits will therefore increase in the population overall.  As a result evolution creates a number of designs and uses a selection algorithm to more rapidly arrive at designs that are ‘good enough’ to thrive within the context of the environment in which they exist.  It is a much more rapid method of exploring large design spaces than trying to think about every possible design, work out the best combination of traits and then create the ‘perfect’ design from scratch (i.e. “intelligently” design something without a full understanding of the complexities of the selection criteria and hence what will be successful).

Enterprises and evolution

In Beinhocker’s book he uses a ‘business’ as the unit of selection that operates within the evolutionary context of the market.  Those businesses with successful traits are chosen by consumers and thus excel.  These traits – whether they be talent strategies, process strategies or technology strategies – are then copied by other businesses, replicating and amplifying successful traits within the economic system.

I believe that this is the best approximation that we can use in the – rather unsystematic – businesses that exist today but that we can use systematic business architecture to do better.  I have often written about my belief in the need for companies to become more adaptable by identifying and then reforming around the discrete business capabilities they require to realise value.  Such capabilities would form a portfolio of discrete components with defined outcomes which could then be combined and recombined as necessary to realise systematic value streams. 

Such a shift to business capabilities will allow an enterprise to adapt its organisation through the optimisation and recombination of components; whilst at this stage of maturity Beinhocker’s hypothesis of the ‘business’ as the element of selection remains sound (since capabilities are still internal and not individually selectable as desirable traits) we can at least begin to talk about capabilities and the way in which we combine them as the primary traits that need to be ‘amplified’ to increase the fitness of the design of our business. 

Inside Out 

Whilst realigning internal capabilities is a worthwhile exercise in its own right, evolutionary systems also tend to exhibit long periods of relative stability punctuated by rapid change as something happens to alter the selection criteria for ‘fitness’.  The Web and related techniques for decomposition – such as service-orientation – have made it possible to consume external services as easily as internal services.  Business capabilities can thus be made available by specialised providers from anywhere in the world in such a way that they can be easily integrated with internal capabilities to form collaborative value webs.  We can therefore view the current convergence of business architecture, technology and a mass shift to service models as a point of ‘punctuated equilibrium’. 

In this environment continuing to execute capabilities that are non-differentiating will cease to be an attractive option, as working with specialised providers will deliver both better outcomes and more opportunities for innovation.  From an evolutionary perspective our algorithm will continue to select those organisations that are most fit (as judged by the market), and those organisations will be those with the strongest combination of traits (i.e. capabilities).  Specialised, external capabilities can be considered more attractive ‘traits’ due to their sharp focus, shorter feedback loops and market outlook; they will thus be amplified as more organisations close down their own internal capabilities and integrate them instead – a kind of organisational mutation caused by the combination of the best capabilities available to increase overall fitness.  Enterprises working with limited, expensive and non-differentiating internal capabilities will risk extinction.

Once this shift reaches a tipping point, business capabilities themselves become the unit of market selection, since they are now visible as businesses in their own right.  Whilst this could be dismissed as pedantry (a ‘business’ is still the unit of selection, even though what we consider a business has become smaller), an important shift happens at this point.  As business capabilities become units of selection in their own right, the ‘traits’ selected and amplified become a combination of their own internal people, process and technology capabilities plus the quality of the external capabilities they integrate.  Equally importantly, they have to act as businesses rather than internal support organisations, serving the needs of many customers and hence supporting mass customisation.  They will have many more consumers than an internal support function would ever have had, and the needs of those consumers could be both very different and impossible to guess in advance; there will be new opportunities to rapidly improve their services based on insight from different industries, orthogonal areas and new collaborations.  The ability to respond to these opportunities, by changing their own capabilities or by finding new partners to work with, will be a significant factor in whether these capabilities thrive and are thus judged ‘fit’ by the selection criteria of the market.  The ability to evolve externally to provide the ‘right’ services will thus be a core competency in the new world.

What has this got to do with services?

The basic points I’m making here are that evolution acts at the scale of markets and is a process we participate in rather than a way of designing.  We design our offers using the best knowledge available, but the market decides whether the ‘traits’ we exhibit fit the selection criteria of the environment.  Business capabilities can become the ‘traits’ that make particular market offers (or businesses) fit for selection or not, since they have a huge influence over the overall cost, quality and desirability of a particular offer.  From a technology perspective such capabilities will in reality need to expose their functionality as software services in order to be easily integrated into the overall value webs of their customers and partners; in many cases there may be a 1:1 mapping between the business capability and the service interface used to consume it.  In that sense services are just as much a driver of fitness in the overall ecosystem, and their interface and purpose will inevitably need to change as that ecosystem evolves.  Hence it is not simply a question of ‘fixing’ interfaces and ‘evolving’ internals; the reality is that the whole market is an evolutionary system, and businesses, plus the services they offer for consumption, will need to evolve continually in order to remain fit against changing selection criteria.
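The 1:1 mapping between a business capability and the service interface used to consume it can be illustrated with a toy sketch (all names here are invented for illustration, not taken from the original post): consumers depend only on the capability’s interface, so the enterprise can swap an internal implementation for a specialised external provider, the kind of trait substitution discussed above, without its value web changing.

```python
from abc import ABC, abstractmethod

class InvoicingCapability(ABC):
    """Hypothetical service interface for an 'invoicing' business capability."""
    @abstractmethod
    def issue_invoice(self, customer: str, amount: float) -> str: ...

class InternalInvoicing(InvoicingCapability):
    # The capability executed by the enterprise's own people, process and IT.
    def issue_invoice(self, customer: str, amount: float) -> str:
        return f"INT-{customer}-{amount:.2f}"

class ExternalInvoicing(InvoicingCapability):
    # The same capability consumed from a specialised external provider.
    def issue_invoice(self, customer: str, amount: float) -> str:
        return f"EXT-{customer}-{amount:.2f}"

def month_end_billing(invoicing: InvoicingCapability) -> str:
    # Consumers integrate against the interface, never a specific provider.
    return invoicing.issue_invoice("acme", 100.0)

print(month_end_billing(InternalInvoicing()))  # prints "INT-acme-100.00"
print(month_end_billing(ExternalInvoicing()))  # prints "EXT-acme-100.00"
```

Of course, as the post argues, even this interface is not fixed forever; it too must evolve as the ecosystem’s selection criteria change.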

Intelligent design or evolution

The core question raised by Steve is whether ‘evolution’ has any place in our notion of service design.  In particular:

“The point I’m making is that Evolution is a bad way to design a system the whole point of evolution is that of selection, selection of an individual against others. In IT we have just one individual (our IT estate) and so selection doesn’t apply.”

Is evolution a ‘bad’ designer?

I do not believe that evolution is either a good or a bad designer but it is a very successful one.  Evolution is an algorithm that takes external selection criteria, applies them and amplifies those traits that are most successful in meeting the criteria.  It is brilliant at evaluating near-infinite design spaces (such as living organisms or markets) and continually refining designs to make them fit for the environmental selection criteria in play.
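The algorithm described above, evaluate candidate designs against external selection criteria, then amplify successful traits through propagation and combination, can be sketched as a minimal genetic algorithm. This is purely illustrative (the trait encoding, population size and fitness function are all assumptions of mine, not anything from the post):

```python
import random

TRAITS = 16          # traits per candidate design, encoded as bits
POP_SIZE = 30        # designs evaluated each generation
GENERATIONS = 40

def fitness(design):
    # The 'environment' judges a design; here simply the count of
    # desirable traits (1s). Real selection criteria are far richer.
    return sum(design)

def evolve(rng):
    population = [[rng.randint(0, 1) for _ in range(TRAITS)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Selection: the fitter half of the population survives.
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]
        offspring = []
        while len(parents) + len(offspring) < POP_SIZE:
            # Combination: offspring mix traits from two fit parents,
            # amplifying trait combinations that scored well.
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, TRAITS)
            child = a[:cut] + b[cut:]
            # Mutation: rare random variation keeps exploring the space.
            if rng.random() < 0.1:
                i = rng.randrange(TRAITS)
                child[i] = 1 - child[i]
            offspring.append(child)
        population = parents + offspring
    return max(fitness(d) for d in population)

print(evolve(random.Random(42)))
```

Even this toy version shows the point made above: evolution is not ‘good’ or ‘bad’ at design, it is simply relentless, converging on fit combinations of traits given enough generations and a stable set of selection criteria.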

If I read Steve’s post correctly, however, he actually isn’t objecting to the notion of evolution per se – since at the macro level it is a market process in which we are all involved and not a conscious way of designing our services – but rather to a lack of design being labelled an ‘evolutionary’ approach. 

What is ‘evolutionary’ design?

In the majority of cases when people talk about ‘evolution’ in the context of systems they really mean that they want to implement as quickly and cheaply as possible and then ‘evolve’ the system.  Often vendors encourage this behaviour by promising that new technologies are so rapid as to make changes easy and inexpensive.  Such approaches often eschew any attempt at formal design, choosing instead to implement in isolation and then retro-integrate with anything else on a case by case basis.  I have often seen Steve talk about the evils of generating WSDL from code and I imagine that this is the sort of behaviour that he is classifying as ‘evolutionary’ changes to internals.

Is this a good or a bad approach?  From an evolutionary perspective we do not care.  Given that we are talking about evolution in its true sense, the algorithm would merely continue to churn through its evaluation of services, amplifying successful traits.  Such behaviour does, however, have some unrecognised costs: first, evolution has to work for longer to bring a service to a point at which it is ‘fit’; second, the combination of all of these unfit services creates a multiplier effect in evolving the ecosystem of services to a point at which it is fit overall; and third, whilst all of this goes on at the micro level, the fitness of the enterprise against the selection criteria of the market might be poor due to the unfitness of some of its major ‘traits’.

Intelligence in design

Whilst a lack of design might extend the evolutionary process to a point at which it is unlikely that a business could ever become fit before it became extinct, an assumption that we can design service interfaces that are fixed also ignores the reality of operating in a complex evolutionary system (like a business). 

Creating a ‘perfect’ service from scratch is very difficult, as even within the bounds of a single organisation we cannot know all of the potential uses that might come to pass.  We can, however, use the best available data to create an approximation of the business capabilities and resulting services required, in order to speed up the evolutionary process by reducing the design space it has to search.  Hence the notion of an evolutionary process of service design (a bit like I discussed here) is an important one; often people will not know what good looks like until they see something.  Whilst we can therefore start with an approximation of the capabilities (and services) we believe we will need, these will inevitably evolve as we gain experience and exposure to new use cases.

From this perspective I don’t agree with the literal statement that Steve has made; it is not about intelligent design vs evolution but rather about intelligence of design to support the evolutionary process.  As I stated previously markets are fundamentally evolutionary systems and therefore our businesses – and the business capabilities and services that represent their traits – are assessed by the evolutionary algorithm for fitness against market selection criteria.  We are not dumb observers in this process, however, and must fight to create offers that are attractive to the market along with supporting organisations that enable us to do it at the right price and service levels.  We can apply our intelligence to this process to increase our chances of success but a key element will be to understand that our enterprises will increasingly become a value web of individual capabilities, that it is the combination of our capabilities that is judged and that we must therefore design our organisations to evolve by adopting successful traits to improve our overall fitness.  As a result we should not expect the evolutionary process to do our work for us – by choosing not to apply any intelligence in design – but we should also not assume that evolution has no place in design given that meeting its demands is becoming the primary requirement of business architecture.

Macro evolution in the economy

Stepping back and taking an external perspective leads us to realise that it is also untrue to say that we only have one individual (in terms of a single IT estate) and that there is nothing to select against; in reality even today we are competing for selection against businesses with other IT estates and thus our ‘traits’ (in the form of our IT) are already a major factor in deciding our fitness (and thus our ability to be ‘selected’ by the evolutionary algorithm of the market).  If we factor in the emerging discontinuities we see as part of the ‘punctuated equilibrium’ process it only makes things worse; the specific IT we have within specific business capabilities will have a large impact on the fitness of these capabilities to survive.  In that context continually evolving our business capabilities (and with them the IT and software services that enable them) is the only way to ensure future success.

More importantly, as we look at the wider picture of the position of our business capabilities within the market as a whole, the more acute our unknowns become and the more we can only rely on selection and amplification (i.e. evolution) to guide us in shaping them.  Looking beyond the boundaries of our single organisation, we have to consider that all of our services will exist in a market ecosystem whose needs and usage we are even less equipped to know in advance.  There will often be new and novel ways in which we can change our services to meet the emerging needs of customers and partners, and in this way the overall ecosystem itself will evolve.  Selection is thus the only way in which design can occur in an ecosystem as complex as a market, where there are many participants whose needs are diverse; nobody can ‘intelligently design’ a whole market from top to bottom.  Furthermore the market, as an evolutionary system, will be subject to a process of ‘punctuated equilibrium’, meaning that sudden changes in the criteria used to judge fitness can occur.  From an IT perspective the shift towards service models such as cloud computing could be considered one such change, since it moves the economics of IT from differentiation through ownership to universal access.  Such changes could be considered ‘revolutionary’, as the carefully crafted and scaled business models created during a period of relative stability cease to be appropriate and new capabilities have to be developed or integrated to be successful.  This is one area where I disagreed with Steve’s comment about the relationship between revolution and evolution:

“The point of revolutionary change is that it may require a drop back to the beginning and starting again. This isn’t possible in an evolutionary model.”

Essentially, revolutionary change often happens in evolutionary systems; evolution is always exploring the design space, and changes in the environment can lead to previously uninteresting traits becoming key selection criteria.  In this case ‘revolutionary change’ is a side effect of the way in which evolution starts to amplify different traits as the selection criteria change.  In the natural world such changes can lead to catastrophic outcomes for whole species whose specialisations are too far removed from the new selection criteria, and this can also happen to businesses (it will be interesting to see how many IT companies survive the shift to new models, for instance).  Evolution also allows the development of new ‘traits’ that make us sustainable, however, and can therefore support us in surviving ‘revolutionary’ changes if we have sufficient desirable ‘traits’ to prevent total collapse.  The trick is to understand how you can evolve at the macro level to incorporate the changes that have occurred in the selection criteria of your market, and to realign your capabilities as appropriate.  Often the safest way to do this is to offer different services that try different combinations of traits, thereby keeping sensors within the environment to warn you of impending changes.

Summary

As a result there is no question that both evolution and intelligence in design have a place in the creation of sustainable architectures (whether macro business architectures or micro service architectures).  We have to be precise in the ways in which we use this language, however; it is not sufficient to label a lack of design as ‘evolution’ (which I believe was Steve’s core point).  Evolution is a larger, exogenous force that shapes systems by highlighting and amplifying desirable traits and not something that we can rely on to reliably fix our design issues without an infinite amount of time and change capability.  We therefore need to apply intelligence to the process of design – even when there is great uncertainty – to try and narrow down the design space to minimise the amount we have to rely on evolution to arrive at a viable ‘system’; even once we get to this point, however, we need to be aware of the fact that evolution is an ongoing process of selection and amplification and design our business architectures with the flexibility necessary to recognise this fact.

More broadly I believe that we can also look at the application of ‘intelligence’ and ‘evolution’ as a matter of scale;  we can design individual services with a fair degree of intelligence, we can design our business capabilities with some fair approximations and then rely on evolution to improve them but we can only rely on evolution to shape the market itself and thus the selection criteria that define our participation.  For this reason strategies that stress adaptability (i.e. an ability to evolve in response to changing selection criteria) have to take precedence over strategies that stress certainty and efficiency.

You work for the man

22 Nov

Interesting quotes from Gartner on the ZDNet website today that I felt like cheering to the rafters – in short people like me (and maybe you) are in business not IT.

“To remain relevant, IT managers need to wake up and admit they work in business, not IT, Gartner’s leading analysts said in the keynote address at the Gartner Symposium in Sydney.

“None of you are in IT; all of you are in business,” said Andy Kyte, vice president and Gartner fellow.

IT has become invisible to end users and should rather be called “OT” or “operational technology”, according to Kyte. At the same time, the way businesses procure technology is changing, for example software as a service is allowing technology to seep into the business beyond the IT department’s control.

“So where does that leave IT professionals?” Kyte asked. “We need to ask what we do, what to achieve, what is expected of us, and how we are perceived by our peers in business,” he said.

IT managers who fail to ask these questions risk becoming irrelevant to the business. In fact, the vertical market an IT manager works in — government, pharmaceuticals, manufacturing, retail — is more important than the person’s department, said Kyte.”

Tragically it reminded me of a meeting I attended a few years ago when I worked at a financial services company.  A senior manager briefing the IT department said, “put up your hand if you work in financial services”.  I, and a few others, tentatively did.  “Now put up your hand if you work in the IT industry”.  Everyone else raised their hand.  “That’s a big problem”, the man who paid them said ruefully.

And this gets to the nub of one of the issues in the increasing commoditisation of technology and IT: you have a whole army of people whose loyalty is to the technology and not to the business.  Things are going to have to change soon, however; as I wrote here, the stratification of business types around economic forces will drive us all to admit, finally, that we either work in IT (and so need to be in an IT company) or we work in financial services (and so IT is just a utility we are forced to use to achieve our goals).  We increasingly can’t be both, as funding our play habit will become too rich for the majority of organisations being exposed to the increasing chill of global competition.

So get your ass to the business side of the fence and look for value before it’s too late; try your hardest to get shot of operational management and execution, giving yourself the attention space to prove to your colleagues that you’re really on their side.

Does ERP Suck?

4 May

I’ve been wondering a lot lately to what extent service-orientation and capability unbundling will affect ERP vendors.  Whilst idly musing on this subject I came across Shai Agassi’s post on the same subject, following on from an article at Infoworld.  As a result I thought I’d put down some of the questions that I’ve been asking myself about this subject.

  • ERP – by definition – codifies commoditised capabilities as a way of supporting the implementation of standard, tightly coupled business functions within many consuming organisations.  Essentially they allow enterprises to standardise the processes that their people use in executing non-differentiating capabilities.  As I’ve discussed previously, the main reason that most organisations choose to execute these non-differentiating capabilities is to minimise the transaction costs of collaboration.  If new technologies and methods enable us to collaborate with other partners and make specialisation more attractive, however, then we’re likely to see commoditised processes consolidated into specialised providers who rely on economies of scale in their execution.  Essentially the mode of delivering commoditised practices moves away from the dissemination of best practice to many organisations in the form of applications to underpin business capabilities and into a few large scale service providers who execute the whole capability (software and services) on your behalf;    
  • Another issue that I’ve been thinking about concerns the tight integration of capabilities within ERP packages.  As Richard Veryard points out, ERP has taken a set of uncontrolled processes and tightly coupled them; such command-and-control models of management are increasingly unsustainable in the face of increasing levels of change, however.  Organisations are now looking for increased adaptability, and this will require systematically defined business capabilities that can be ‘pulled’ into dynamic value-chains in place of static processes within a monolithic software system.  Such drivers at best lead to a disaggregation of the ERP system into modular software services that support the execution of such capabilities but at this point you again begin to wonder why – if I have defined and metricised these capabilities (and therefore understand the results that I need) – I would purchase and run IT to support my people in executing these capabilities internally rather than go to a specialised provider to do it all for me?
  • The next question that occurs to me is this: if enterprises no longer buy systems that implement processes they want to unbundle, who will replace them as buyers of ERP?  I doubt that the resulting consolidated service providers will want to offer all of the capabilities codified within such systems either.  They are more likely to specialise in just a subset of the capabilities traditionally offered within big applications, in order to a) focus and b) support inclusion within the adaptable value chains that their partners require.  Irrespective of service scope, is it likely that providers will choose ‘standard’ implementations of practices, given that they have chosen to specialise, and potentially differentiate, in one of the capabilities previously provided to enterprises through such standard applications?  Currently enterprises want standard outputs, and they want them produced in a well understood, standard way, because they don’t care about being better than others in these commoditised capabilities; if my whole purpose is to deliver these capabilities, however, then I may well be more interested in how I can deliver the standard outputs in a way that differentiates me from the competition;
  • Even if providers do purchase disaggregated packages to underpin their provision of commodity services, such packages are still consolidated into far fewer, higher scale providers.  In this instance is the software still viable given the drastically reduced market?  Do service providers in this context not just absorb the software needed to deliver services, offering the results of using software to deliver real value to customers rather than just the tools needed to get results for yourself?

Contrary to all of this, however, Shai contends that ERP systems are more necessary than ever for a number of reasons (I’m paraphrasing):

  • Semantic consistency;
  • Consistent ‘APIs’;
  • Compliance; and
  • External partnering.

In many ways I think that these issues tend more towards unbundling than they do towards consolidation within the organisation:

  • In my view consistency and the ability to build propositions on top of known ‘APIs’ will come from rigorous business architecture that systematically designs and metricises the capabilities needed by the organisation rather than from software packages.  Such organisational practices will need to become common in any case as enterprises seek to understand where value is created and destroyed and to make themselves more adaptable.  Mapping the differences in semantics between the architecture of the organisation and the services of any provider would not then appear to be an insurmountable task;
  • In terms of compliance there are many laws and regulations being passed on a regular basis that impact particular capabilities within the business and even particular geographies (e.g. employment laws).  Where commodity processes remain within the organisation the onus is on the people delivering the capabilities to ensure that they are compliant (which software alone won’t deliver).  Where the capability has been outsourced to a specialised provider, however, the onus is on that organisation to remain current.  Additionally – given their focus and expertise – they are far more likely to remain so than non-experts working within a non-core capability internal to an enterprise.  As a result it may be that replacing your ERP supported internal capabilities is a more robust way of remaining compliant to the myriad of regulations within which an organisation operates; and
  • In terms of the last point, Shai more specifically contends that “…the coming wave of implementations will be driven by the need for consolidation, networked value chains and increased speed of process change”.  I completely agree with this but feel that consolidation will be delivered from a business perspective (i.e. increasing specialisation) rather than from a software perspective (i.e. more ERP) and that the networked value chains will be complementary partners orchestrated using technologies that enable rapid process change.  I feel that these things are orthogonal to the concept of an integrated ERP, however, with horizontal process based technologies pulling together a network of specialised providers who will have taken over many of the tasks that I would previously have needed an ERP to control.

The one area that I did agree with both Shai and Richard was around the fact that none of this unbundling and specialisation can take place just on the basis of granular web services that have been thrown together; we need to get to a level where we can express the capabilities of an organisation as a set of services that represent access to significant business functionality.  Only service providers who have been through this exercise will be mature and stable enough for us to place our trust in them.  This is not an easy endeavour but I feel that the drivers towards specialisation will compel us to address these issues and that the resulting ability to procure commoditised services from consolidated providers will undermine the value of ERP software.