Wednesday, October 26, 2005

Road testing software

Fortune 500 companies have surprisingly varied approaches to procurement of software. Of course the sheer size of the project or deal is an important factor, with the chances of professional procurement people being wheeled in rising as the deal value rises. Having been on both sides of the fence now, I do have some observations.

Some customers use an RFI (request for information) as a way of improving their own understanding of their problem, and this approach can lead to trouble. If you are not quite sure what you need, then you can be certain that a software vendor has even less idea. Moreover, if your needs are vague, you can be sure that every vendor's product will mysteriously fit these vague needs. It is better to sit down with your business customers and get a very firm grasp of the precise business needs, and then plan out how you are going to assess the software, before you speak to a single vendor. You should plan in advance just how you are going to select the product from the "beauty parade" of vendors that you will eventually pick from. It is important that you think about this PRIOR to talking to vendors, or your process will be tainted.

How are you going to pick a provider? Just bringing vendors in and seeing who presents well is unlikely to be optimal, as you are relying too much on impressions and the skill of the individual sales teams. Do you want the best product, or the one with the slickest salesman? Instead you should define up front the list of functional, technical and commercial criteria that will frame your choice, and agree a set of weightings, i.e. which criteria matter most. You then need to think about how you are going to measure each one, e.g. what a score of "8/10" means for a particular criterion. Some things you can simply look up: many commercial criteria can be established from the internet (revenues of public companies) or via things like Dun and Bradstreet risk ratings. Analyst firms can help you short-list options, but be aware that they take money from vendors as well as customers. A key bit of advice here is not to go mad with the criteria: remember that you are going to have to score each one somehow. Moreover, do a light check first to get the "long list" of vendors down to a short list before you delve too deeply. I know of a UK NHS trust with a project going on right now that has literally hundreds of criteria and a "short list" of 22 vendors. How on earth they are planning to score these is a mystery to me. Slowly is presumably the answer. Get it down to three or four vendors via a first pass.
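
To make this concrete, here is a minimal sketch in Python of how a weighted scoring model might be tallied. The criteria, weights and scores are invented purely for illustration; your own agreed list would go here instead.

    # Minimal sketch of a weighted vendor scoring model.
    # Criteria, weights and scores are invented purely for illustration.

    # Weights express relative importance and should sum to 1.
    criteria_weights = {
        "functional_fit": 0.40,
        "technical_fit": 0.25,
        "vendor_viability": 0.20,
        "total_cost": 0.15,
    }

    # Scores out of 10, agreed against a written definition of what each value means.
    vendor_scores = {
        "Vendor A": {"functional_fit": 8, "technical_fit": 6, "vendor_viability": 7, "total_cost": 5},
        "Vendor B": {"functional_fit": 6, "technical_fit": 8, "vendor_viability": 9, "total_cost": 7},
        "Vendor C": {"functional_fit": 7, "technical_fit": 7, "vendor_viability": 5, "total_cost": 8},
    }

    def weighted_score(scores, weights):
        """Weighted average score for one vendor."""
        return sum(scores[c] * w for c, w in weights.items())

    # Rank vendors by weighted score, highest first.
    for vendor in sorted(vendor_scores,
                         key=lambda v: weighted_score(vendor_scores[v], criteria_weights),
                         reverse=True):
        print(f"{vendor}: {weighted_score(vendor_scores[vendor], criteria_weights):.2f} / 10")

In practice this could just as easily live in a spreadsheet; the point is that the weights and the meaning of each score are written down before the first vendor walks through the door.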

Once you have your short-list, a key part of this process is likely to be getting the vendor to actually try the software out on your own data in your own environment. Just because it all works fine in a different industry, on a different platform and at a different scale of company does not mean it will all go smoothly in your environment, so you should conduct a "proof of value" with each of the short-listed vendors. You will learn far more from seeing the software actually operate on your data than from any number of pretty PowerPoint slides and carefully crafted canned demos. Be reasonable here. A vendor selling software for a six-figure sum will be prepared to put in a day or two of pre-sales effort, but if you expect a massive multi-week evaluation then you should expect to pay for some consulting time, either from the vendor or from a consulting firm deeply experienced in the technology. Buying a piece of enterprise software is a major decision, with costs well beyond the basic purchase price of the software, so investing a little up front to be sure you have made the right choice is a good idea. If you choose the proof of value carefully, you can get a head start on the real business problem by tackling a small subset of it, and you may well learn something about the real implementation issues through a well-structured proof of value. The vendors, one of which will be your future partner after all, will also be happy, since they get to understand your requirement better and hopefully uncover any technical horrors at this stage rather than much later on. It is amazing how often you encounter "you want it to run on what database?" type basic issues at this stage. It is in your interest to make sure that the proof of value is realistic, e.g. decent data volumes, run on the actual environment that you plan to deploy on. We have recently had problems with a project where the customer did all the testing on one web app server (Tomcat), then deployed into production on an entirely different one, and was surprised when it didn't work first time ("but both web servers adhere to the same standard so it should work"; yeah, right).
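
As an aside on that last point, even a trivial check along the following lines would have flagged the mismatch before go-live. This is a hypothetical Python sketch with made-up environment details; the idea is simply to compare the proof-of-value environment against the intended production environment and query every difference.

    # Minimal sketch: flag differences between the proof-of-value environment
    # and the intended production environment. All details are hypothetical.

    pov_environment = {
        "app_server": "Tomcat",
        "database": "Oracle 9i",
        "jdk": "1.4",
        "approx_data_rows": 100_000,
    }

    production_environment = {
        "app_server": "WebSphere",      # a different app server, as in the anecdote above
        "database": "Oracle 9i",
        "jdk": "1.4",
        "approx_data_rows": 50_000_000, # far larger volumes than were ever tested
    }

    mismatches = {
        key: (pov_environment[key], production_environment[key])
        for key in pov_environment
        if pov_environment[key] != production_environment[key]
    }

    if mismatches:
        print("Proof of value does not reflect production:")
        for key, (pov, prod) in mismatches.items():
            print(f"  {key}: evaluated on {pov!r}, deploying on {prod!r}")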

Customer references are more important than they may appear. It may be surprising after a slick sales pitch, but small vendors in particular may have very few real customer implementations, and you can learn a lot about what a vendor is really like from customers who have gone past the pitch and actually implemented the product. Of course the vendor is not going to pick his unhappiest customer to do the reference, but most people are fairly honest about their experiences if you ask. Even large vendors may have very few implementations of this PARTICULAR product, so size is not everything, as with so many things in life. I remember, when I was working for a major corporate, a huge computer manufacturer trying to sell me a big-ticket application who could not come up with a single reference customer. That told me all I needed to know about the maturity of the technology.

A well-structured evaluation process and proof of value does cost some effort up front, but it will pay dividends in demonstrating whether a technology is actually likely to do what you want it to and deliver value to your project.
