Monday, February 13, 2006

Technology Planning

I spent much of my career in technology planning, both in Exxon and Shell. The two organizations have quite contrasting cultures despite being in the same industry: Exxon is the uber-centralist, with everything driven from head office and very little room for variation allowed in its subsidiaries. Shell has changed over the years but has traditionally been extremely de-centralized, allowing its subsidiary operating companies a great deal of autonomy; from the mid to late 1990s Shell tried to become more centralized in its approach, though still nothing like as much as Exxon.

What worked best in technology planning in these different companies? Exxon's strength was its standardization at the infrastructure level. There were rigid standards for operating systems, databases, email, telecoms protocols and even desktop applications, long before most companies had even dreamed up their "standard desktop" initiatives. This avoided a lot of problems that many companies had, e.g. with incompatible email systems or differing database applications, and critically it meant that skills were highly transferable between jobs and between Exxon subsidiaries (important in a company where most people move jobs every two to three years). By contrast Shell was fairly anarchic when I joined, with different email systems, desktops running anything from Windows to UNIX, and every database, TP monitor, 4GL etc. that was on the market, though often just one of each. In 1992 I did a survey and recorded 27 different BI/reporting tools, and that's just the ones I could track down. It was perhaps not surprising that Shell spent twice as much on IT as Exxon in the mid 1990s, despite the companies being about the same size (Exxon is now a lot bigger due to the Mobil acquisition). On the other hand Shell had excellent technology research groups, and many collaborative projects between subsidiaries, which helped spread best practice. Also, since operating companies had a lot of autonomy, central decisions that proved unsuitable in certain markets simply never got implemented rather than being rammed down subsidiaries' throats.

It was certainly a lot more fun being a technology planner in Shell, as there were so many more technologies to tinker with, but it was also like herding cats in terms of trying to get a decision on a new technology recommendation, let alone getting it implemented. In Exxon it was extremely hard, and probably career-limiting, for someone in a subsidiary to go against a central product recommendation; in Shell it was almost a badge of honor to do so. Shell's central technology planners did, though, make some excellent decisions, e.g. they were very early into Windows when the rest of the world was plugging OS/2, and they avoided any significant forays into the object database world at a time when analyst firms were all writing the obituaries of relational databases.

Having worked in both cultures, I believe that the optimum approach for technology planners is to standardize as rigidly as your company will let you on things that have essentially become commoditized. For example, who can really tell the difference between Oracle, DB2 and SQL Server any more? For the vast majority of situations it doesn't matter, and it is more important to have one common standard than it is to pick the "right" one. On the other hand, in an emerging area it is simply self-defeating to try to pick a winner at too early a stage. You do not want to stifle innovation, and the farther up the technology stack, the less harm a few bad decisions are likely to do. For example, get the wrong operating system (like OS/2) and it is a major job to rip it out; get the wrong web page design tool and there is far less damage done.
Moreover, at the application level there is likely to be a clearly quantified cost-benefit case for a new technology, since applications touch the business directly, e.g. a new procurement system. At the infrastructure level it is much harder to nail down the benefits case, as the benefits are shared and long-term. If your new application has a nine-month payback period, then it matters less if one day it turns out to be "wrong"; you don't want to make that discovery with your middleware message bus product. There are lots of hobbyists in technology, and few products at the infrastructure level are truly innovative, so standardizing the lower levels of the stack is well worth doing.

Overall, while both companies are clearly highly successful, I think on balance that Exxon's strong centralization at the infrastructure level is more beneficial from a technology planning viewpoint. Quite apart from procurement costs, the skills transfer advantages are huge: if your DBA or programmer moves from the UK to Thailand, he or she can still use their skills rather than starting from scratch. However Shell's greater willingness to try out new technology, especially at the application level, often gave it a very real advantage and rapid payback, even if overall IT costs were higher.

What is perhaps interesting is how technology planning needs to reflect the culture of the company: a company whose decision making is highly decentralized will struggle greatly to impose a top-down technology planning approach, whatever its merits.
