Saturday, September 26, 2009

Overinvestment in Information Technology

One of the questions quite often asked during the IT budget process is: what is the minimum "maintenance" level of capital expense on IT? There is no commonly accepted definition or vocabulary for the "maintenance" level of IT capex. Generally, "maintenance" capex is the capex needed to meet the minimum requirements of "business as usual." This includes replacement of discontinued IT assets, software enhancements to meet ongoing business needs, enhancements requested by customers and a certain level of capital to stay competitive in the market. The last item is hard to measure. "Maintenance" capital does not include, for instance, the release of new products, entry into new markets or new marketing campaigns for existing products.
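
As a toy illustration of the split (the line items and dollar amounts below are entirely hypothetical, not drawn from any real budget), a capital plan can be tagged line by line and rolled up into the two buckets:

    # Toy sketch: splitting an IT capital plan into "maintenance" and "growth" buckets.
    # Line items and amounts are hypothetical, purely for illustration.
    capex_plan = [
        {"item": "Replace discontinued servers",            "bucket": "maintenance", "amount": 400_000},
        {"item": "Enhancements requested by key customers", "bucket": "maintenance", "amount": 250_000},
        {"item": "Stay-competitive platform upgrades",      "bucket": "maintenance", "amount": 150_000},
        {"item": "New product launch",                      "bucket": "growth",      "amount": 700_000},
        {"item": "Entry into a new market",                 "bucket": "growth",      "amount": 500_000},
    ]

    totals = {}
    for line in capex_plan:
        totals[line["bucket"]] = totals.get(line["bucket"], 0) + line["amount"]

    for bucket, amount in sorted(totals.items()):
        print(f"{bucket:12s} ${amount:,}")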

There is a caveat, though. This "maintenance" capex model does not work in high-growth or emerging industries, where more than 90% of the capital plan is directed towards building a strong position in the market.

I have seen several variations of this model used in the finance, health care and insurance industries. The capital requirements are variously labelled as "core" versus "growth", "maintenance" versus "growth", or "keep the lights on" versus "new business."

I do sense a problem with this method of capital planning. Invariably, this results in overinvestment in IT, which is a bad thing for any business. The reasons are not hard to discern:

  1. Vendor-driven technology refresh cycles, which result in loss of support and discontinuation of critical IT infrastructure every year. This pushes up the "core" capex.
  2. Customer-requested product enhancements for which customers are unwilling to pay, particularly if one of those customers is a critical stakeholder.
  3. Inability to accurately estimate the capital level required to "keep up with the Joneses." This is also called an "arms race." It does not lead to new business or growth, but you have to do it, because everyone else is doing it and if you don't, you will lose business.
  4. Government and regulatory changes.
  5. IT expense driven by sudden or unexpected shifts in the market or industry structure, such as changes in consumer tastes or industry consolidation.
  6. The widespread belief that technology can solve critical business problems. This is the myth of the silver bullet.

We know that the marginal efficiency of capital, or IRR, declines as the level of IT investment increases. At a certain point the IRR falls below the cost of capital. That is the point at which investment should stop, but it often does not, resulting in overinvestment that destroys value.
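
A minimal numerical sketch of that stopping rule (the marginal IRR figures and the 10% hurdle rate are invented for illustration, not taken from any real plan):

    # Hypothetical marginal IRR for each successive $1M tranche of IT capex,
    # reflecting the declining marginal efficiency of capital.
    marginal_irr = [0.30, 0.22, 0.16, 0.11, 0.07, 0.05]
    cost_of_capital = 0.10  # assumed hurdle rate

    funded = 0
    for tranche, irr in enumerate(marginal_irr, start=1):
        if irr < cost_of_capital:
            break  # rational stopping point: marginal IRR has fallen below the hurdle
        funded = tranche

    print(f"Fund {funded} tranches; tranche {funded + 1} onward destroys value.")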

Thursday, September 17, 2009

Backward Compatibility

Is backward compatibility really necessary? This is again a question of choice and trade-offs. From a rational economic perspective, if the value you expect to create by giving up backward compatibility exceeds the value you would retain by keeping it, then go ahead and drop backward compatibility like a hot potato.

Now, I'm going to diverge a little and take a moment to talk about value. What is value? If a customer is willing to pay an extra dollar for a feature, then the feature has value. Therefore, value is in the mind of the customer and is somewhat related to the concept of utility in economics.

Backward compatibility is a big deal. It would be a pain in the neck if operating systems did not have backward compatibility. Testing and reconfiguration of hundreds of applications would kill you without backward compatibility at the OS level. If hardware did not have backward compatibility, the upgrade path would be strewn with problems. This is the reason most vendors follow a product life cycle approach, describe an upgrade path through various versions of a product and then finally sunset the product. What sunset means is that the next version of the product won't be backward compatible and you will have to go through the painful process of reconfiguration and testing. The key is that this pain should result in long-term value. This is what is promised when you go from the Alpha architecture to the Itanium architecture, from 32-bit to 64-bit, from Windows to Linux or Apple's Mac OS X. This is also called burning the bridge. You make a bold statement that you are not going back. You are going to start a new life. You are going to replace all your applications, along with all the problems that came with them, and configure new software around new business processes. This is very expensive. For a $50 million organization the cost could be $1 million, for a $500 million organization the cost won't be $10 million but more like $15 million, and for a $5 billion organization the cost would be in the range of $250 million. The problem is that the expense grows faster than the size of the organization.
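
A back-of-the-envelope fit to the three figures above (my own arithmetic, not a formal model) suggests the migration cost grows roughly as the 1.2 power of organization size, so cost as a share of revenue climbs from about 2% to 3% to 5%:

    import math

    # (organization size, migration cost) pairs from the paragraph above, in millions of dollars.
    points = [(50, 1), (500, 15), (5_000, 250)]

    # Assume cost is roughly a * size**b and estimate the exponent b from consecutive points.
    for (s1, c1), (s2, c2) in zip(points, points[1:]):
        b = math.log(c2 / c1) / math.log(s2 / s1)
        print(f"${s1}M -> ${s2}M: implied exponent b of about {b:.2f}")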

Why do we have this accelerated increase in expense as the size of the organization grows? The problem is related to the cost of coordination, synchronization, business process optimization and testing. These are the diseconomies of scale. As you grow, you have to spend more. Clearly, our large organizations have many offsetting economies of scale, otherwise they would not be able to sustain themselves. In fact, this is not true in all cases. I've seen several large organizations teetering on the brink of bankruptcy due to problems in upgrading their core systems.
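
One plausible driver of those diseconomies, sketched under the assumption that every pair of integrated applications must be retested together after a core-system change (the portfolio sizes are hypothetical):

    # If every pair of integrated applications has to be retested after a core-system
    # change, testing effort grows quadratically with the size of the application portfolio.
    def pairwise_tests(n_apps: int) -> int:
        return n_apps * (n_apps - 1) // 2

    for n_apps in (50, 200, 800):
        print(f"{n_apps:4d} applications -> {pairwise_tests(n_apps):7,d} integration pairs to retest")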

From a purely software point of view, backward compatibility issues limit designers in pursuing the best course of action. So should you burn the bridges and the boats, or not? It is as much a question of philosophy as of the economics of risk and reward.