B2B Analysts
Short Takes

Moscone Center
September 18, 2005

"That Makes Ambition Virtue"

Oracle lays out the plans for the Fusion Application product. Part One of three parts.

Playing the Hand You're Dealt

Oracle's plans for its application product suite are now clear. They are breathtakingly ambitious. Here are the highlights.

  • Oracle will rewrite the entire core (non-retail) product line. "Make the best product you can," said John Wookey, when asked what design principles he gave the team.
  • Migration will be a data migration; preserving existing processes, customizations, user interfaces, or APIs will not be a primary goal. Competitors will characterize this migration as "requiring a re-implementation," though that overstates the case.
  • The Fusion Applications Suite (AS) will be an "information age" application, one that is far more flexible, easier to install, cheaper to maintain, and more functionally powerful than any business application that has yet come to market.
  • Fusion AS will rely heavily on Fusion Middleware; most importantly, all process management will be provided by BPEL (Business Process Execution Language).
  • Release date for a version of Fusion AS that customers can look at and kick the tires on is 2008, though some functionality will appear earlier. The 2008 version will have core ERP functionality, but won't necessarily contain all the functionality that is eventually planned.

If these plans work brilliantly, Oracle will find itself in 2009 or so with a product so compelling that it will make SAP look obsolete. If they work as well as the plans for 11i worked, Oracle will have squandered what it correctly perceives as a wonderful opportunity.

It's a big, ambitious gamble, the best way by far to play what is now a losing hand.

Skepticism is surely in order, but I think skepticism may be too easy a reaction. There is a really good idea here; it's worthwhile, therefore, to understand the power of this idea before letting the skepticism loose.

For that reason, I'm going to take an approach that is unusual for these Short Takes. Ordinarily, I think it's my job to give as complete, objective, and readable an assessment as an outside observer can come to, do so in one (longish) Short Take, and go on.

In this case, "completeness" competes pretty directly with "readability." And I don't think either will be achieved if I can't get you to understand why Oracle is putting so much into this project. So I'm dividing my commentary on the Fusion AS into three parts. In this, the first part, I'm going to hold off on the skepticism and pretty much take Oracle's side. My job here is to explain why it makes a lot of sense for Oracle to play out its hand in this way.

In the second part, I'll talk about how the Oracle strategy plays out for Oracle's customers. Clearly, skepticism needs to return; at the same time, one also needs to be clear about the (not insignificant) upside. Oracle has spent a lot of time in the last year allaying fears and listening; if they continue with this customer-friendly stance, they will deserve and get a hearing for a case that has some merits.

In the third part, I'll express the doubts that any of us who have followed Oracle for many years are likely to feel. At the very least, the more ambitious a project, the more likely it is to be delayed or diminished, and Oracle Fusion AS is very ambitious indeed. But again, I think that kind of blanket dismissal is inappropriate. So I'm going to try to go one level deeper, analyze where the sticking points are likely to be, and look at the experiences others have had doing the same thing. At least that will give those of us in the stands a program that we can look at.

So let's begin. Let's look at why Oracle thinks going through all this difficulty may be worth the candle.

Why Fusion?

Midway through Othello, the Moor mourns for his days as a soldier, for "the big wars that make ambition virtue." In a war, he implies, bold strokes, strong words, and superhuman effort are called for, and singleness of mind and clarity of purpose directed toward a rich goal are expected of a leader.

Oracle's struggle with SAP may not be a war, but it shares with wars one salient characteristic: both sides want to colonize the same territory. Think of this territory as a management platform for corporate information, something that would let them occupy the same role for corporate operations that Microsoft occupies for the corporate desktop.

From both SAP's and Oracle's point of view, this territory is something like what Oklahoma was in the 1880s, virgin territory that has just been opened up, not by legislation, but by a new technology.

Those readers who haven't left skepticism behind might wonder whether there aren't Indians already in that territory, corporate information platforms called ERP systems. Some might also call to mind the fact that the history of the past dozen years is littered with "new and revolutionary" technologies that were going to support a new platform for enterprise applications, a platform that rendered the old platform obsolete. Remember client/server? Object technologies? 100% Internet?

So why do Oracle and SAP think that this time, the situation is different, so different that they are restructuring their companies in order to go after the opportunity that SOA creates? Well, to understand that, you have to take a hard, cold look at the old information platforms. This is something that SAP and Oracle can't really afford to do, so I will take on the task, with some pleasure.

The Walnut Shell

These old applications are nasty things. They're nasty to work with if you're a user, nasty to futz with if you're in IT, nasty to buy, nasty to maintain. They are nasty because they are complicated, nasty because they are difficult to understand, nasty because they are underdesigned.

The worst of it is, this nastiness is a feature. Customers are actually paying good money for it.

Indeed, in the early nineties, that nastiness may have looked pretty good. Back then, most corporate information was kept in functionally organized silos, most often in custom-built apps. These custom apps had given corporations automation benefits, but the walls between the apps were a pain. So, as soon as the technology could justify it (or perhaps sooner), people tried to break the walls down and put all corporate information into a single repository--no walls.

Unfortunately, this was a very complicated and expensive proposition, well beyond the capabilities of individual IT departments. So application companies arose, who promised to build these repositories. Because the application companies were building for many customers, they promised their prospects economies of scale and sold the apps for less than it would cost a customer to build the app themselves, assuming the customers could.

These repositories held information about four basic things: people, material, demand, and money. At their core, they provided a snapshot of a company's accounts, its orders, its material assets, and its personnel.

Of course, that snapshot changes constantly. So an effective repository can't just provide a place to store information. It also has to provide a means of updating the information. Updates were (and are) complex things, for they usually involve multiple related changes in the core information. When you ship material to a customer, you decrement inventory, of course, but you also relieve demand and create an invoice. For each of these three "transactions," different information is affected (material, demand, and money), so you need to create three different operations, each of which is auditable, and each of which is tied to the other two.
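
The coupling described above can be sketched in a few lines. This is purely illustrative (the table and function names are invented, not drawn from any real ERP product), using a local SQLite database to stand in for the repository; the point is that the three operations succeed or fail as one unit:

```python
# Hypothetical sketch of "one shipment, three tied transactions."
# Table and function names are invented for illustration.

import sqlite3

def ship_order(conn, order_id, sku, qty, price):
    """Record a shipment as three tied operations in one unit of work."""
    cur = conn.cursor()
    try:
        # 1. Material: decrement inventory
        cur.execute("UPDATE inventory SET on_hand = on_hand - ? WHERE sku = ?",
                    (qty, sku))
        # 2. Demand: relieve the open order line
        cur.execute("UPDATE orders SET open_qty = open_qty - ? WHERE id = ?",
                    (qty, order_id))
        # 3. Money: create the invoice
        cur.execute("INSERT INTO invoices (order_id, amount) VALUES (?, ?)",
                    (order_id, qty * price))
        conn.commit()    # all three changes land together...
    except Exception:
        conn.rollback()  # ...or none of them do
        raise
```

The closed system exists precisely to enforce this commit-or-rollback discipline: if any old program could touch the tables directly, nothing would guarantee that all three changes happen together.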

The repositories used two different underlying technologies to accomplish this. To store the current state and each change to the state, the repository used a database. The capabilities of that database were set by its so-called data model, the list of different things that could be stored.

To provide ways of making the transactions happen, the storehouses used a suite of programs, which do everything from presenting a screen where a person can register that an order has shipped to making sure that all the associated information changes (relieving demand, creating an invoice) are done and that the transactions are tied together. The programs are, in effect, a process model for the business, analogous to the data model.

The two technologies necessarily work together to form a completely closed system. When you ship an order, the integrity of the system requires that all the background transactions occur. If any old program could access the database, you'd never be sure of that. So only the programs provided by the application company (or written using the same environment that the application company provides) can be allowed access to that database.

The amount of work required to build one of these closed systems is breathtaking. The data model is really complicated, and if you screw it up, it can take years to fix. The process model is, if anything, even harder. Since you, the application company, are writing the programs and managing the database, you're also (unfortunately) responsible for all the infrastructure that in a less closed system you'd let the operating system take care of, such as letting people through the gate, setting up their environment (interface), ensuring that they do only what they are allowed to do, and so on.

It's not exactly news that every single company that has tried to build one of these systems has had limited resources (even with those economies of scale). Every single one has responded to the limitations by simplifying and generalizing and leaving things out, even things that many of its customers really want. Oddly enough, because the infrastructure is an absolute requirement, much of the scanting went on in the process model.

The compromises were necessarily subtle. For sure, in an ERP system, you're going to let people update the general ledger. But you might weaken your process model a bit when it comes to things that people might ordinarily expect from a GL system, such as the ability to correct GL transactions once the year is closed (a notorious weakness in several of the mid-nineties systems).

If the system weren't closed, no one would care that much about these limitations. But the combination of a closed system and limited process model makes for a lot of the aforementioned nastiness.

You see, when the closed system doesn't do what you want (allow you, for instance, to correct errors in the previous year), all your choices are nasty. You can just not do what you want to do, do it outside the system, or customize the closed system (at great cost and with great difficulty) and hope you haven't damaged integrity.

Wouldn't it be great, every customer who has faced this nastiness has thought, if you could crack the system open like a walnut, pick out the meat you need, but be able to close it back up again and keep its structural integrity?

Well, with the existing technologies, you just can't. The best you can do is keep the shell of the walnut intact and drill holes through the husk, providing some access, and hoping there aren't so many holes that the integrity is compromised.

The holes are called APIs. They work OK if you want to get information out or push information in. But if what you'd really like to do is use them to improve the process model, either of your choices is nasty.

One, you keep the walnut shell. You build your new process inside the walnut (nasty). And you use APIs to reach out of the walnut and get the information you don't have. Or two, you go ahead and write a new process outside the walnut, using none of the walnut's facilities (including security, user management, interface, etc.), and then hope that the integrity, timing, and coordination of all the processes work the way they're supposed to.

For years, the application companies recommended the first, for some reason. It sounded good. But it turned out to be fairly horrible.

Let's look at an example. Say that you'd like to create an order for a customer and check the customer's credit. Let's make it very simple. Your walnut is Siebel, where you manage interactions with the customer, and the credit information is in SAP. This is not uncommon.

How do you do it? Well, create the order, use APIs to go ask SAP what the credit allowance is, and see whether the new order amount would send the customer over the limit.

Believe me, it sounds nice, but it ain't so easy. You have to make sure that the customer's name in Siebel matches the customer's name in SAP, so you also have to build APIs and a translator to synchronize that information. You have to make sure that the order items, amounts, and prices are transferred later to SAP (otherwise the credit extended in SAP will be wrong). Oh, yes, that requires several other prior synchronizations (prices, products, etc.), plus the actual transfer is difficult.

And you're still going to have to do some outside-the-walnut process management. You see, if you ask SAP about the credit, and it fails to give you an appropriate response, what do you do? Wait? Try again? Somebody's going to have to write a program that monitors this and decides what to do.
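
The "somebody's going to have to write a program" part can be sketched as follows. Everything here is hypothetical (the function names and the retry policy are invented for illustration); it just shows the kind of outside-the-walnut decision logic in question:

```python
# Illustrative outside-the-walnut plumbing: ask a remote system for a
# credit limit, and decide what to do when no timely answer comes back.
# All names and the retry policy are invented, not from Siebel or SAP.

import time

def check_credit(fetch_limit, customer, order_amount, retries=3, delay=0.1):
    """Return True if the order fits under the customer's credit limit.

    fetch_limit: callable that returns the limit or raises on failure.
    Raises RuntimeError if the remote system never answers.
    """
    for attempt in range(retries):
        try:
            limit = fetch_limit(customer)
            return order_amount <= limit
        except ConnectionError:
            time.sleep(delay)  # wait and retry -- somebody has to pick this policy
    raise RuntimeError(f"no credit answer for {customer} after {retries} tries")
```

Even this toy version forces the policy questions raised above: how many times do you retry, how long do you wait, and what do you do when the remote system never answers at all.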

In recent years, the nastiness of all this inside-the-walnut process management has been a little depressing. So the idea of doing the business process management outside the walnut has gained a certain amount of respectability.

When people inside the apps companies started looking at this possibility, opinions split. Talented people like Rick Bergquist (former CTO at PeopleSoft) said categorically that it's a bad idea to have apps on apps (that is, a layer of process control outside the walnut). Talented people like Shai Agassi, who developed an apps-on-apps layer for his Top Tier portal and whose xApps are built on the idea, said that you can.

Which brings us, finally, to SOA, service-oriented architecture. For people like Agassi or Larry Ellison, what's interesting about SOA is not the services themselves, which are simply a different way of drilling holes into the walnut. What's interesting is the opportunity to build an apps-on-apps or "orchestration" layer outside the walnut, one that really works. It gives you a far better and more flexible process model without the nastiness. Working at the orchestration layer, you can use the services to reach into multiple applications at the appropriate times and in the appropriate ways.

Two Apps-on-Apps Strategies

"Interesting," of course, may be an overly kind word. It's one thing to seize on a new technical capability and set the hype machine going. It is quite another to solve the technical and conceptual and practical issues associated with building a new, unproven, potentially far less secure, and clearly duplicative process model on top of the old apps. (Oops, let the skepticism back in, sorry.)

How can one go about doing this in a sensible way? Well, at some level, that's like asking, "What material should you use to build buildings?" There are a zillion ways of doing it, and a lot of them are really bad.

But, at the risk of really ticking off the people at Oracle and SAP who are working on the problem, let me say that there are two broad approaches that make some kind of sense.

The first is to keep the old walnut, drill very, very good, robust holes, and build another orchestration layer, very carefully, on top of the old walnut. This is the conservative, reasonable approach, the one that preserves your customers' investment in the good things that were in the old, nasty walnut. Think Netweaver.

The second is to crack open the old walnut model entirely, discard the shell, and build a replacement process model at the services level. In the short term, it may seem that you're risking the investment of your customers. But if it works, you've given them a way of permanently getting rid of much of the nastiness that the old model imposed. Think Fusion AS.

This second approach has a lot of appeal. As a builder, you get to give up on the old, cruddy apps, which were designed with decade-old technical constraints in mind, and rewrite them using the distributed services model that the web has brought to the fore. (This, by the way, also allows you a new crack at your data model, always a good thing.) You get to rip out a lot of the infrastructure components that the old model forced you to build (interface, context, security, etc.) and replace them with something of far greater generality and utility. (All the infrastructure you used to build, like sign-ons and roles, will now work not only with your apps layer, but with all your other apps.)

All good things. But there's more. You also get to tear down many of the barriers that the old walnut shell created, so that users can more easily plug in the really necessary legacy apps that they have. And, if you do it right, you get to reduce the cost of distribution, bug fixes, and maintenance services.

From a marketing point of view, it's pretty good, too. When you're done, you can go to market with an enterprise application that is fundamentally different from what has gone before. In Oracle's case, when customers of PeopleSoft, JD Edwards, or Oracle 11i say, "But, but, but what about my old system?" you can tell them they're getting the following in exchange:

  • Apps that work better and have better functionality than ever before.
  • Much lower costs of managing, installing, testing, and upgrading the new application.
  • "Free" infrastructure services, such as identity management, portal, security, and integration, services that can work across many applications, not just across the old proprietary storehouse.
  • Entirely new app capabilities, such as the ability to embed non-core apps in the interface in a seamless way. My favorite example is stupidly simple and doesn't even come from Oracle. If you're looking at an invoice and you have a question, wouldn't it be nice to IM the person who sent you the invoice, sending them the context? Well, with a process model built at the services level, that's absurdly simple to do. You can even show whether the person you'd like to IM is present.
  • Much better utilization of non-traditional application paradigms, such as lights-out or exception based processing.
  • Better run-time management (e.g., of computing performance), because services that monitor and control application performance are embedded in the system.

How is this going to sound to a PeopleSoft, JD Edwards, or Oracle customer? It ought to sound pretty good. Yes, there will be change. But a lot of that change will amount simply and only to getting rid of nastiness that they've been living with for way too long.

It Gets Better

In a way, of course, if you're buying this vision and you're a JDE or PeopleSoft customer, you shouldn't think of this as new apps for old.

The notion of the old-style app is defined by that walnut shell. An intrinsic part of this strategy is to blow the shell apart. Oracle won't just be allowing you to trade up to a Fusion Application Suite. They'll be giving you a platform without the walnut shell, one that is theoretically capable of managing all structured, core information that "belongs" to a corporation's operations, whether the information is contained in apps, legacy apps, custom apps or spreadsheets. You can use the same portal, the same messaging, the same database (!), the same security, the same workflow and approval system, who knows, even the same time-stamp system for any application you have--legacy, custom-built, desktop, whatever.

If there is a walnut shell at all in this new system, its boundaries are not being set by the technology and development capabilities of the application company, they are set by the needs of the customer.

Will all the nastiness be gone? By no means; in many ways, because the new system is bigger and more heterogeneous, it might be nastier. But the artificial nastiness, the nineties relics of certain technologies, certain economic relationships, and certain resource constraints, will be eliminated.

So what will this new platform look like? I have no special access to Larry Ellison's or John Wookey's private vision, but let me offer a picture, just so you can see why they like the idea so much.

Imagine companies where all master information (lists of products, customers, employees, etc.) is stored and managed in central repositories, called hubs, which use Oracle databases and Oracle management tools. Corporate processes--workflow, transactions, or orchestration--are invoked and executed by calling a general-purpose process engine, called Oracle BPEL (Business Process Execution Language).
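
To make the orchestration idea concrete, here is a toy sketch (in Python, not actual BPEL, with invented step names) of what it means for a process to be data executed by a general-purpose engine, rather than logic hard-wired inside one application:

```python
# Toy illustration (not real BPEL): the business process is data -- an
# ordered list of named steps -- and a generic engine executes it.
# Step names and logic are invented for illustration.

def run_process(steps, context):
    """Execute each step in order, threading a shared context dict through."""
    for name, step in steps:
        context = step(context)
        context.setdefault("trace", []).append(name)  # audit trail of what ran
    return context

# A hypothetical order-to-invoice process assembled from service-like steps.
process = [
    ("check_credit",  lambda ctx: {**ctx, "approved": ctx["amount"] <= ctx["limit"]}),
    ("reserve_stock", lambda ctx: {**ctx, "reserved": ctx["approved"]}),
    ("invoice",       lambda ctx: {**ctx, "invoiced": ctx["reserved"]}),
]
```

In real BPEL the process definition is an XML document and the steps are web service invocations, but the division of labor is the same: the process model lives outside any single application, where it can be inspected and changed.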

The core commercial application functions are performed by a completely rewritten, best-of-the-best set of applications that use the PeopleSoft, JD Edwards, Siebel, and Oracle IP, but were written without the technical or practical constraints that limited those systems. The portals that provide individual access to the information will be mini-control centers, constantly fed real-time information and analysis, enabling every kind of action from IM with context to complex modeling.

Special-purpose applications will no longer pose a platform or process management problem to harried corporate IT managers. The applications will be purchased from commercial companies and integrated using BPEL, or else built by the company itself using BPEL, at a fraction of the cost of previous customizations. Bug fixes and upgrades for commercial applications are no longer uploaded in patch groups and tested endlessly. They are managed and uploaded continuously, in much the same way Microsoft sends its upgrades out. And everything runs on a grid.

In this vision, a huge amount of what's wrong with ERP systems today simply disappears, because things are finally being done the right way, not the wrong way. Hubs are the right way of storing master files. Best-of-the-best is the right way to develop commercial process and data models. Portals with real-time business intelligence (and context-sensitive IM presence sensing and a host of other capabilities) are the right way for individuals to interact with these systems.

And because things are finally being done the right way rather than the wrong way, the end result will have an overwhelming advantage over SAP, which is keeping its walnut intact.

Fabulous. Fabulous. Fabulous.

If It Works

Well, there are a few little tiny things that have to happen before any of us should get too excited.

  • Oracle needs to develop an infrastructure layer (Fusion Middleware) that is so flexible and so general-purpose that you can run applications with the same integrity as before and at the same time, use the layer for managing non-application information and applications.
  • Oracle needs to provide customers with a way of getting from where they are today to where Oracle wants them to be tomorrow.
  • The new applications need to be compellingly better than the old applications.
  • Everything needs to have rock-solid reliability from the get-go.

And of course they need to do all this before the market has passed them by.

Possible? Well, it's not impossible. And it's certainly worth a try.
