B2B Analysts
Short Takes

Cambridge MA
7/15/2004

Slow Markets

Spending on applications seems to have slowed once again. Maybe there are underlying reasons for the slowdown.

The Software Value Chain

For the past year or so, I've been arguing that a good metric for a software product is its time to value, that is, the amount of time it takes before the value the software has returned to all its users exceeds the total investment in the software.

For most software, the time to value is infinite; it never returns its total investment. What we're all interested in is the applications that have some chance of going positive.
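To make the notion concrete, here is a minimal sketch, in Python, of how a time-to-value calculation might run. Every figure in it is invented for illustration; the point is the shape of the calculation, not the numbers.

    def time_to_value(investments, values):
        """Return the first period in which cumulative value returned
        to all users exceeds cumulative investment, or None if the
        software never goes positive (infinite time to value)."""
        cum_invest = cum_value = 0.0
        for period, (invest, value) in enumerate(zip(investments, values)):
            cum_invest += invest
            cum_value += value
            if cum_value > cum_invest:
                return period
        return None  # never pays back: time to value is infinite

    # Hypothetical example: heavy up-front license and implementation
    # costs, ongoing maintenance, and value that starts only after go-live.
    investments = [500, 300, 50, 50, 50, 50, 50, 50]        # per quarter
    values      = [  0,   0, 150, 200, 250, 250, 250, 250]  # per quarter

    print(time_to_value(investments, values))  # -> 6 (the seventh quarter)

On these invented numbers the answer is six quarters; cut the value stream in half and the function returns None, which, as I say, is most software.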

I've explained this notion in some detail to smart people at some of the major software companies, and I have to say, it's not one of those ideas that they get the first time around. They tend to think that software is good if the revenue it returns to the software company exceeds the cost of development and marketing. They don't look beyond that metric.

In desperation, I've tried to explain my idea by invoking the notion of a software value chain. In other industries, the term "value chain" refers to the sequence of events that occur between the time a raw material is in the ground and a product is consumed. Theoretically, each of these events (mining borax, shipping it in rail cars, packaging it, sponsoring Death Valley Days, putting it on the shelves) adds value to the product until, when the Boraxo actually cleans something up, the value is realized. A software value chain, by analogy, is all the steps that take place before the value is realized.

In a consumer value chain, there is always a risk that there will be no ultimate return on the value that any intermediate link adds. If you mine a lot of borax, and one of those Death Valley floods washes away the rail line, the borax will lie there in piles, valueless, until the train line is repaired. So, too, in a software value chain. If you add a feature, but it's in a release that nobody upgrades to, the value of the feature is not realized.

In the short run, of course, people at intermediate stages in a value chain don't care what happens further down the chain. But in the long run, if what you're doing doesn't generate value, you'll be out of business. A miner may collect for having dug out that borax, but if the rail line washes out, he won't have a job until those piles get moved. Thus, when you are at an intermediate stage, it's always in your interest to figure out what's happening farther along the chain, and it's usually very good business to try to get as much control over the chain as possible.

The time-to-value metric implicitly takes into account what happens along the whole software value chain. And using that as a metric, instead of revenue generated, would force the companies to take into account what happens farther down the chain.

The people I've talked to at software companies tend not to understand this because they think of themselves as very far down the chain indeed. "Yes, somebody has to install it and configure it," they think, "but these days, that's a matter of weeks. After it's installed, assuming that was done correctly, the application generates value."

But what if that were not true, or even close to true? What if, even after the software was installed, problems completely out of the software company's or installer's control could (and do) emerge, problems that break the value chain and render the software installation as valueless as that pile of borax out in the desert?

Doesn't happen, right?

Wrong. Recent totally credible, impartial studies show definitively that generating value from an application and maintaining that level of value require disciplines that are simply beyond even quite sophisticated users of completely standard modern application software.

Post-Installation Disciplines

Exhibit A in this group of studies is a 2001 study of a retailer that used a merchandising system of precisely the kind that Retek, JDA, or SAP sells. The study, which was done at Harvard Business School by Ananth Raman and Nicole DeHoratius, asks two simple but related questions. One, how accurate is the information in the retailer's merchandising system? And two, what is the cause of any inaccuracy that is there?

The study found that 65% of the inventory records at a name-brand chain (which the authors do not identify) were inaccurate. And 16% of the time, at a different (also unidentified) name-brand chain, experienced salespeople could not find an item, even though the system showed the item was on the premises.

The short version of the article is available on the web at http://www.ecr-academics.org/journal/archive/pdf/03016263.pdf.

Raman assures me (and I have independent evidence that confirms this) that these are not terrible, failing companies with antiquated computer systems. On the contrary. They are household name companies that are acknowledged leaders in their categories, whose systems were state of the art. Sixty-five percent is not even astoundingly terrible, relative to competitors; it may actually be pretty good. Says Raman, "I was contacted after the article came out by several companies who wanted to know what they could do to improve their accuracy to that level."

Operational issues are at the heart of the problem. Stuff gets put in the wrong place and, just as when a library book is misshelved, the stuff is effectively lost. Or stuff is stolen. Or stuff is mis-scanned. Or mislabeled. Or whatever.

The error rate can't be accounted for by actual malfeasance. Raman looks at the magnitude of the errors as well as their frequency, and the variation is far greater than normal retail shrinkage or sabotage would account for. Nor can it be accounted for just by saying that retailers exploit employees. Among stores in the same chain, it turns out, there is wide variation in record inaccuracy. Employees at the same pay scale do well in one store and poorly in another.

Indeed, in at least one case, it wasn't the employees at all; at least, they weren't the precipitating cause. A store at one chain showed a huge increase in inaccuracy from one year to the next. The senior executive knew why: they'd allowed the store to increase the size of its back room.

The Consequences of Record Inaccuracy

So what happens when data is inaccurate? Well, imagine that one of those modern merchandising systems people pay a boodle for shows there are 14 boxes of Boraxo on the shelves. In fact there are no boxes. At that store, there won't be any Boraxo sold until somebody orders some more. Unfortunately, the whole point of the modern system is that the system does the replenishment ordering. So there won't be any Boraxo sold until somebody corrects the record.

It gets worse. If that figure is not corrected within a certain period of time, that same system will show that the product is a slow mover--it never sells!--and will recommend that it no longer be stocked. Fantasy? Not at all. Buyers in these chains are assigned far more products (called SKUs, or stockkeeping units) than they can handle. They act on the numbers they're given; they don't have time to second-guess.

Raman does not quantify the cost of this artificial hiatus in sales of the product, but there are numerous studies on stockouts and the cost of stockouts, all of which show that stockouts are a bad, expensive thing. Raman also does not study whether the inaccuracies (and hence, stockouts) are concentrated in any inventory area. But he does not need to. Anybody with any supply chain experience knows that the problem will be worst among the products that are most popular.

It's the old pens-in-the-supply-room problem. To understand this problem, just go into your supply room right now. The only pens on the shelves will be the ones that nobody wants. The popular pens were snapped up when they came in, and they won't be replenished until the next cycle. But if your stockroom is using an automatic system with the wrong data (odds: 65% or greater), those pens will never be replaced, because there will never be another cycle.

In effect, assuming errors are random, what this means is that a store run by one of these systems will gradually fill up with more and more stuff that nobody wants and will empty itself of all the things that people do want.
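A back-of-the-envelope simulation, sketched below in Python, makes the mechanism vivid. Every parameter is invented; the point is the failure mode, not the numbers. With an accurate record, the system replenishes and the item sells steadily. With 14 phantom boxes on the record, the reorder point is never reached, real sales stop when the shelf empties, and after a couple of months of recorded zero sales the item looks like a slow mover.

    REORDER_POINT, ORDER_QTY, WEEKS = 5, 20, 26  # all assumed

    def simulate(weekly_demand, record_error):
        actual = recorded = 20                  # shelf and record start in sync
        recorded += record_error                # e.g., 14 phantom boxes
        sales = []
        for _ in range(WEEKS):
            sold = min(weekly_demand, actual)   # you can only sell real stock
            actual -= sold
            recorded = max(recorded - sold, 0)  # the system decrements its record
            sales.append(sold)
            if recorded <= REORDER_POINT:       # the system, not a person, reorders
                actual += ORDER_QTY
                recorded += ORDER_QTY
        flagged_slow_mover = sum(sales[-8:]) == 0  # no sales in two months
        return sum(sales), flagged_slow_mover

    print(simulate(weekly_demand=6, record_error=0))   # (156, False)
    print(simulate(weekly_demand=6, record_error=14))  # (20, True)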

The Value of the Software

For merchandising systems, the benefits that are usually touted include the following: greater inventory accuracy (!), less total inventory (because you don't need to overstock in order to make sure enough will be there), fewer stockouts, faster replenishment when there are stockouts, and less employee time spent on managing replenishment processes.

You don't need a degree in higher mathematics to see that these benefits are decreased when the records are inaccurate. Clearly, the greater the inaccuracy, the fewer the benefits. You might think that it's linear (35% accuracy, hence 35% of the benefits). But it's not. The existence of inaccuracy in the system means that you have to set up manual processes to correct the inaccuracy, and this may reduce the benefit all the way down to zero or below.
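A toy model, again with invented figures, shows why. Suppose gross benefit scales with record accuracy, but coping with any appreciable inaccuracy requires manual correction processes (cycle counts, walk-arounds) whose cost is largely fixed:

    GROSS_BENEFIT = 1_000_000   # annual benefit at 100% accuracy (assumed)
    CORRECTION_COST = 400_000   # annual cost of counting and walk-arounds (assumed)

    def net_benefit(accuracy):
        # Below an (assumed) 99% accuracy, the manual processes kick in.
        manual_cost = 0 if accuracy >= 0.99 else CORRECTION_COST
        return GROSS_BENEFIT * accuracy - manual_cost

    for acc in (1.00, 0.95, 0.65, 0.35):
        print(f"accuracy {acc:.0%}: net benefit ${net_benefit(acc):,.0f}")
    # accuracy 100%: net benefit $1,000,000
    # accuracy 95%: net benefit $550,000
    # accuracy 65%: net benefit $250,000
    # accuracy 35%: net benefit $-50,000

At 35% accuracy, on these assumptions, the net benefit isn't 35% of the promised million; it's below zero.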

Indeed, if you're the sort of business that is hit hard by the pens-in-the-supply-room problem, the overall effect may be significantly negative. With older, less automatic methods, you might actually have ended up replenishing more popular items more reliably than you are now.

Then Why Buy?

Longtime readers know that I've always been puzzled by how slow the market for retail merchandising systems is. I've generally attributed it to orneriness, ignorance, and cheapness on the part of retailers. (When I'm being polite, it's called "lack of leadership.")

So this research (and other research I'll cover in a later piece) came as a bolt from the blue to me. "So that's why they don't do it. The benefits are nowhere near what I thought they were. To get the benefits I expect, these retailers would have to have far more operational competence than they seem to have." They're not ornery; they're just not buying the Ferrari because they know they can't drive it competently.

But isn't getting that competence relatively simple? Well, you would think that if anyone had figured out how to get that competence, it would be one of those retail chains that Raman studied. So I wandered over to the local branch of what I suspect was that chain. (Benchmarking Partners had done some work with them about the same time.) And I asked a manager, who happens to be a former technology analyst at a competitor, about inventory accuracy.

It's better, no question. But if you think it's anywhere close to the perfection that sellers of merchandising software assume when they tout benefits, you've got another think coming. Accuracy is still highly variable by store. ("Ours is the best in the area.") To get the accuracy that is achieved, they spend serious employee time counting. They count what's on the shelves, what's on the top shelves, and what's in the back room, over and over again. The computer system, by the way, only shows total inventory in the store, so it can't address the misplaced inventory problem at all. How do they address it? They walk around the store.

Does This Apply Elsewhere?

Raman's study covers only inaccuracy in retail stores using retail systems. What about other industries? There, inventory accuracy shouldn't be as bad as it is in retail: retail has more records than anyone else, and the records are maintained by people who are, on average, paid less than in other industries.

But if you think that accuracy is the issue, in a sense, you've missed the point. Raman is telling us that in a good retailer, the key operational disciplines required to make a modern system work effectively are so deficient that the value of the system is problematic. The question in other industries is whether there are similar failures of post-implementation operational disciplines.

Every bit of anecdotal evidence I have and every bit of experience I have suggests that there are.

I've seen it myself in supply chain software installations. There, it isn't just the discipline of maintaining inventory accuracy (which is by no means done well). It's things like allowing "adjustments" to replenishment plans whenever a person doesn't think they are right. Or, in forecasting, using aggregation and disaggregation at the wrong levels or for the wrong groups of data.

I remember one installation of a perfectly reputable application where forecasts were passed from person to person, aggregated, disaggregated, analyzed, modified, and acted on. The only trouble was that the first person who touched the forecasts had turned them into valueless mush. And if, by some chance, that person had come up with something of some value, the next person in line would have leached all of that value out.
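One way to see how easily a single pair of hands can do that: top-down disaggregation with the wrong weights throws away whatever item-level signal the forecast contained. A minimal sketch, with invented numbers:

    # Someone's careful item-level forecast.
    item_forecasts = {"sku_a": 120, "sku_b": 30, "sku_c": 50}

    # Step 1: an intermediate hand aggregates to the category level.
    category_total = sum(item_forecasts.values())  # 200

    # Step 2: another hand disaggregates back down, using last year's
    # (stale) sales mix instead of the forecast's own mix.
    stale_mix = {"sku_a": 0.33, "sku_b": 0.34, "sku_c": 0.33}
    reconstructed = {sku: round(category_total * w, 1)
                     for sku, w in stale_mix.items()}

    print(reconstructed)  # {'sku_a': 66.0, 'sku_b': 68.0, 'sku_c': 66.0}

The category total survives, but the information that sku_a sells four times as fast as sku_b is gone, and every downstream step inherits the mush.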

Am I stretching Raman's study too far? Think about it. It's clear that any application software of any kind will require some level of rigor from the people using the system. What Raman is saying is that even good companies can be so deficient in the rigor required by their system that it may actually work worse than no system at all. Why should other companies in other industries be any different?

The ROI of Applications

A lot of companies that sell software like to use ROI calculators. They look at the cost of the software and the implementation, then estimate the value that will accrue over time, demonstrating (miraculously enough) that the ROI or EVP or whatever will entirely justify the investment in the software.

Not one of these ROI calculators takes the problem I am talking about into account. They assume, as the companies assume, that once the software is live, it will begin returning the promised value, requiring no more than normal maintenance (including, of course, the 15-23% of list that goes to the software company).

If the experience that Raman reports is at all typical, that isn't the pattern at all. Unless operational disciplines are correctly put in place from the moment the software goes live (disciplines that are not mentioned in the calculation, that have a problematic probability of succeeding, and that cost an unknown amount to maintain at the appropriate level), the value goes steadily down over time.
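To illustrate the difference, here is a sketch of the same five-year calculation done two ways: the calculator's way, where value is flat after go-live, and a version in which value decays each year that the post-installation disciplines lapse. Every figure, including the decay rate, is hypothetical.

    LICENSE = 1_000_000
    MAINTENANCE_RATE = 0.20   # in the 15-23%-of-list range mentioned above
    ANNUAL_VALUE = 600_000    # value promised for year one (assumed)
    YEARS = 5

    def net_return(decay_per_year):
        cash = -LICENSE
        value = ANNUAL_VALUE
        for _ in range(YEARS):
            cash += value - LICENSE * MAINTENANCE_RATE
            value *= 1 - discipline_decay if False else 1 - decay_per_year
        return cash

    print(f"calculator's view (no decay): ${net_return(0.00):,.0f}")
    print(f"with 25%/year value decay:    ${net_return(0.25):,.0f}")
    # calculator's view (no decay): $1,000,000
    # with 25%/year value decay:    $-169,531

On the assumed numbers, the same purchase that the calculator scores as a million-dollar winner is a net loss once a 25% annual decay in value is allowed for.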

I don't think there's anything malicious about this failure to state the probable value accurately. I think the software companies may be making an honest error. They don't understand themselves how operationally problematic their software can be. If they did understand this, they would build software that helps ameliorate the operational problems. It wouldn't be that hard (off the top of my head, I can think of six or seven things they could do). But I've never seen or heard of any software company doing so.

The Slow Market

Imagine what happens in any marketplace for a new product when suppliers simply ignore the fragility of the downstream links in the value chain. For a while, there will be tremendous excitement about the product, which seems to promise untold benefits. As the product moves down the chain there will be initial reports of success, and also reports of failure. Then, as the supplier moves to correct problems that are affecting the nearby links, success rates seem to go up.

Even among the successes, though, you wouldn't see the product making the huge difference that was promised. You certainly wouldn't see a consistent, clear correlation between use of the product and clear end benefits. In the case of software systems, you wouldn't, for instance, ever be able to demonstrate a correlation between improving margins and the implementation of the systems.

And, over time, you wouldn't see a general consensus, of the kind you saw with PCs or word processing systems, that this technology is something you can't get along without. You'd just see a long string of people who have put the technology in and made it work, but who, in their heart of hearts, don't think it made much of a difference.

As time goes on, such a market would begin to fade. There'd be too little enthusiasm, too much doubt about whether the stuff works, too many other things to do. So many systems would have landed with a thud that even those who survived just wouldn't want to line up for another parachute.

Isn't that what we're seeing now in this long, slow summer of little innovation?

In the past, when I've tried to explain the lack of enthusiasm for applications, bad software and exaggerated promises have always loomed large. But I also had to say that the people who ascribe the failures to bad buyers, bad project management, and that old standby, lack of top management support, have always been able to make a persuasive case.

Raman's study shows us that all of our explanations may have seized on only part of the problem. Even after an implementation of more or less bad software is done more or less well, his work suggests, people must also put in place and execute strategies for ensuring that the software continues to deliver value, and as yet, many of them haven't.

To me, this discovery is heartening. If the software companies and consulting companies recognize that there is a software value chain and that their efforts so far have stopped before the end of the chain, they can do something about it. Simple self-interest suggests that they should figure out what those operational disciplines are and help people to develop and maintain them.

But is that really their job? Isn't it up to the user to maintain their technology, just as it's up to car owners to keep their oil changed? Well, from a moral point of view, maybe. But from a practical point of view, if cars have gotten more reliable over the last 20 years, it's not because people have gotten better about changing their oil. It's because the automobile companies and oil companies have made cars much, much less burdensome to maintain.

But software companies won't do that until they start understanding, and trying to take more control of, the entire software value chain, not just the link that they're responsible for.
