Evaluating Software Vendors - a framework
I have written previously in general about vendor evaluation processes. Over time I will flesh this topic out further, as I feel it is a much-neglected area. As I have said, it is important to define up front your main functional and technical requirements from the software package that you want to buy. It is then important to have a process to take the "long list" of candidates down to a manageable number of two to four vendors to evaluate properly.
So, you have done this and are down to three vendors. What are the mechanics of the evaluation criteria? I show in the diagram a simplified example to illustrate. It is critical that you decide what is most important to you by selecting weightings for each criterion before you see any products. Ensure that you group the broad criteria into at least two and perhaps three categories: "functional" should list all the things you actually want the product to do, and you may choose to separate "technical", which may include things like support for your particular company's recommended platforms e.g. "Must run on DB2", or whatever. What is sometimes forgotten is the commercial criteria, which are also important. Here you want things like the market share and financial stability of the company, how comprehensive its support is, how good its training is, etc. I would recommend that you exclude price from these criteria. Price can be such a major factor that it can swamp all others, so you may want to consider it as a separate major criterion once you have scored the others. I would recommend that the "functional" weightings total not less than 50%. It is no good buying something from a stable vendor if the thing doesn't do what you need it to.
An important thing about using a weighting system like this one is that the weights must add up to 100. The point here is that it forces you to make trade-offs: you can have an extra functional criterion, but you must reduce the existing weights to make sure that they still add to 100. This gives you the discipline to stop everything being "essential". You assign all the weights before the evaluation begins. You can share this with the vendors if you like. Conveniently, however you assign the weights, the scores will come in out of 1000, so they can easily be expressed as a percentage e.g. vendor B is a 74% match to the criteria in the example, while vendor C is 67%.
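The mechanics can be sketched in a few lines of code. This is purely an illustrative sketch with made-up criteria and scores (not the figures from the diagram): weights total 100, each criterion is scored out of 10, so the maximum total is 1000 and the result reads directly as a percentage match.

```python
# Illustrative weighted-scoring sketch. Criteria, weights and scores are
# made-up examples, not the ones from the diagram in the post.
weights = {
    "functional: reporting": 30,
    "functional: data import": 25,
    "technical: runs on DB2": 20,
    "commercial: vendor stability": 15,
    "commercial: support quality": 10,
}
# The weights must total exactly 100 -- this enforces the trade-off
# discipline: adding a criterion means shaving weight off the others.
assert sum(weights.values()) == 100

def total_score(scores):
    """scores maps each criterion to a 0-10 mark; returns a total out of 1000."""
    return sum(weights[c] * scores[c] for c in weights)

vendor_b = {
    "functional: reporting": 8,
    "functional: data import": 7,
    "technical: runs on DB2": 9,
    "commercial: vendor stability": 6,
    "commercial: support quality": 5,
}
total = total_score(vendor_b)
print(total, f"= {total / 10:.0f}% match")
```

With these illustrative marks the vendor totals 735, i.e. roughly a 74% match against the criteria.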
The final stage is to score the various criteria that you have laid out. You want this to be as objective as possible, which is why you do not want too many criteria - you want to see evidence for each functional one. Just because the salesman says that the product does something is not sufficient to credit a score - you need to see the feature working yourself, preferably against some of your own data rather than a faked-up demo. I recall doing an evaluation of BI tools in 1992 at Shell, where one vendor with quite a new product had made it to the short-list on the strength of a stellar analyst recommendation. When the pre-sales guy turned up and was presented with a file of our data to run the trial on, he went white; their whole product was virtually hard-coded around their demo dataset, and it quickly became clear that even the slightest deviation from the data they were expecting caused the product to break.
Score each criterion out of 10. Commercial criteria can be done off-line and in advance; analyst firms can help you with this, as they tend to be up on things like market share (IDC have the most reliable quantified figures, but rough estimates are probably good enough). Financial stability is a subject all in itself, and I will cover this in another blog.
The evaluation then becomes quite mechanical, as you crank out the scores. You can see that in this simplified example vendor B has won, though not by a huge margin. If it turns out that vendor B's price is twice that of the others, then you may decide that the slightly better scores do not justify the price difference (we will return to this shortly). Again, you could weight price as a factor if you prefer.
However, don't get too hung up on price; as someone who used to do technology procurement, I know it can seem like the be-all and end-all, but it is not. The total cost of a new software package to your company is far greater than the initial license cost. There is maintenance and training over several years, and also the people time and cost of actually implementing the technology, which will usually be several times the cost of the software package. Hence a package that is 20% more productive than the next best is worth a lot more than 20% extra in the license price, as the associated people costs will be multiples of the software cost (people costs of five times the package software cost on a project are common, and ten times is not unusual). It is sensible to try and consider the likely full lifetime costs of the software in this way (assume, say, five years), since you will then have an idea of how important the license cost really is. For example, if you are about to do a 30-country roll-out budgeted at USD 50 million, making sure that the product you select is the most productive one is a lot more important than if you are doing a single project for USD 500k. Here a product that is 10% more productive to implement than the next one may save you USD 5 million, so haggling to the death over that last USD 10k of license discount may not be so critical. This will give you a true bottom-line case for the level of spend you can afford to make.
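To make the arithmetic concrete, here is a rough back-of-the-envelope sketch of the lifetime-cost reasoning above. The multiples and rates used (people costs at five times the license, 20% annual maintenance, a five-year horizon) are illustrative assumptions, not fixed rules.

```python
# Rough lifetime-cost sketch. All multiples and rates are illustrative
# assumptions: people costs at 5x the license fee, 20% annual
# maintenance, and a five-year horizon.

def lifetime_cost(license_cost, people_multiple=5, maintenance_rate=0.20, years=5):
    """Approximate total cost of ownership over the given horizon."""
    maintenance = license_cost * maintenance_rate * years  # support fees over the horizon
    people = license_cost * people_multiple                # implementation effort
    return license_cost + maintenance + people

# On a USD 1M license, the license itself is only a fraction of the total:
print(f"five-year TCO: USD {lifetime_cost(1_000_000):,.0f}")

# A 10% productivity gain on a USD 50M roll-out dwarfs a USD 10k discount:
saving = 0.10 * 50_000_000
print(f"productivity saving: USD {saving:,.0f}")
```

With these assumptions a USD 1M license carries a USD 7M total cost of ownership, which is why a productivity difference between products matters far more than the last sliver of license discount.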
Taking a structured evaluation approach like this has a number of benefits. Firstly, it reduces the amount of gut feel and "did I like the salesman" mentality that too often creeps in. You'll probably never see the salesman again unless you want to buy more software, but you will be stuck with the product that you select for years. Secondly, it gives you a documented case for selection that can, if necessary, be used to back up things internally e.g. in the case of an audit, or just to give comfort to senior management that a sound process has been used.
Moreover, given that salesmen get paid on how much they sell you, you'd be surprised at the tactics they can adopt; they will try and go over your head if they think they are going to lose, and make all sorts of accusations about how the process wasn't fair and how you are about to make a terrible mistake, so having a solid, documented case will make it much easier for your manager to do the right thing and tell the salesmen to take a running jump. I am amazed at how often this tactic was tried when I was procuring software, but I never once had a decision overturned. If you ever find yourself in this situation, remember that revenge is a dish best served cold. After a particularly acrimonious session with one vendor salesman when I was working at Exxon, I was amused to find the same salesman turning up a few years later when I transferred to Shell. He walked into the room, his face fell when he saw me, and he walked back out again. Good professional sales staff know that the world is a small place and that it does not pay to aggravate the customer, but all too few remember this.
In another blog I will return to the subject of assessing the financial stability of vendors.