
4 mistakes you might be making in your RFP evaluations – and how you can avoid them.

December 5, 2019 | Nicole Roberts


An RFP evaluation process goes beyond simple bids and price-only decisions and requires assessing both qualitative and quantitative factors. For many public sector organizations, this is where the largest spending decisions are made. These decisions can have a ripple effect that is felt for years, which is why making the right choice is essential to your organization.

Here are 4 common pitfalls buying organizations fall into during the evaluation process, and how you can avoid them to achieve the best possible outcomes.

Mistake 1: Weighting the price too high

Why it matters
Weighting price heavily is often seen as being “price conscious,” but it can skew the outcomes you are trying to achieve. Organizations that weight price too highly run the risk of buying goods or services that are inexpensive but under-deliver.

*According to Bonfire’s State of the RFP data insights, a 15% increase in price will change the outcome of one in three RFPs.*

Graph showing the impact of price weight on outcome

How to achieve better outcomes
Best practices indicate that weighting price at 20-30% is ideal. Before you begin a project, determine which criteria will truly make the difference between success and failure and assign weights accordingly. If the business unit sponsoring the project pressures you to weight price more heavily, show them how increasing the price weighting would change the outcome so that they understand the implications.
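To make this concrete, here is a minimal sketch of a weighted scoring model. The vendors, scores, and weights below are illustrative assumptions, not figures from Bonfire’s data; the point is simply that the same two bids can produce different winners depending on the price weight.

```python
# Illustrative only: how the price weight can flip an RFP outcome.
# Vendor names and scores are hypothetical, not from Bonfire's dataset.

def total_score(quality_score, price_score, price_weight):
    """Blend a qualitative score and a price score (both 0-100)
    using the given price weight (0.0-1.0)."""
    return (1 - price_weight) * quality_score + price_weight * price_score

# Vendor A: strong proposal, higher price. Vendor B: weaker proposal, cheaper.
vendor_a = {"quality": 90, "price": 60}
vendor_b = {"quality": 70, "price": 95}

for w in (0.25, 0.60):  # 25% follows the 20-30% best practice; 60% over-weights price
    a = total_score(vendor_a["quality"], vendor_a["price"], w)
    b = total_score(vendor_b["quality"], vendor_b["price"], w)
    winner = "A" if a > b else "B"
    print(f"price weight {w:.0%}: A={a:.1f}, B={b:.1f} -> vendor {winner}")
```

At a 25% price weight, the stronger proposal (Vendor A) wins; at 60%, the cheaper but weaker bid (Vendor B) overtakes it, even though neither proposal changed.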

Mistake 2: Unclear evaluation scales

Why it matters
Some teams don’t use a scale at all and allow evaluators to assign their own point value to each component, leading to confusion and too much variation in scores. Others use a three-point scale, which doesn’t offer enough range and makes it difficult to draw meaningful distinctions between proposals.

Scale showing technical proficiency

How to achieve better outcomes
A five- to ten-point scale gives evaluators enough range to draw meaningful distinctions between proposals. By clearly establishing this scale up front, you’ll enable consistency and alignment across stakeholders when running an evaluation.

Mistake 3: Separating price scales

Why it matters
There is a phenomenon called the ‘lower bid bias’: when evaluators know the price while evaluating qualitative factors, a systematic bias toward the lowest bidder occurs. This was demonstrated in a study conducted at the Hebrew University of Jerusalem. Favouring the lowest bid regardless of qualitative factors can produce outcomes as poor as weighting price too highly in an evaluation.

Chart showing why low bid bias occurs

How to achieve better outcomes
To eliminate this bias, the study recommended a two-stage process: the same group of evaluators scores the non-price components of a bid first, and price is revealed to them only after those scores are locked in. Alternatively, you could have a separate group evaluate pricing.
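The two-stage flow can be sketched as follows. The vendors, prices, scores, and the lowest-price-ratio formula used to convert a sealed price into a score are all illustrative assumptions, not details from the study or the article.

```python
# Hypothetical sketch of a two-stage evaluation: qualitative scoring
# happens while prices are sealed; prices are unsealed only afterward.
# All figures and the price-score formula below are illustrative.

PRICE_WEIGHT = 0.25  # within the 20-30% best-practice range

bids = {
    "Vendor A": {"sealed_price": 120_000},
    "Vendor B": {"sealed_price": 95_000},
}

# Stage 1: qualitative scoring, prices still sealed.
quality = {"Vendor A": 88, "Vendor B": 74}  # scored without seeing prices

# Stage 2: unseal prices and fold them into the final ranking.
lowest = min(b["sealed_price"] for b in bids.values())
for vendor, bid in bids.items():
    price_score = lowest / bid["sealed_price"] * 100  # cheapest bid scores 100
    final = (1 - PRICE_WEIGHT) * quality[vendor] + PRICE_WEIGHT * price_score
    print(f"{vendor}: quality={quality[vendor]}, "
          f"price score={price_score:.1f}, final={final:.1f}")
```

Because the qualitative scores are recorded before any price is unsealed, evaluators can’t unconsciously drift toward the cheapest bid while judging proposal quality.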

Mistake 4: Not making decisions by consensus

Why it matters
Many teams attempt to simplify their process by averaging their evaluators’ scores. However, if one evaluator gives a score of 2 out of 5 and another gives a 5, the average of 3.5 isn’t a truly representative conclusion. It could suggest a misunderstanding of either the proposal or the scoring criteria, or even a scoring bias that could impact the final decision.

*37% of RFPs feature a lack of consensus, indicating this is a commonly occurring issue among evaluators.*

Chart showing how often evaluators disagree

How to achieve better outcomes
When there is significant variance in scores, teams should hold consensus meetings to understand the discrepancy and come to an agreement. It can be useful to complete all comments and scoring beforehand so that the facilitator is able to focus the conversation on the areas of disagreement. A well-run meeting can get evaluators to a place of understanding and help outliers come to an agreed-upon decision.
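One simple way to spot where a consensus meeting is needed is to flag criteria whose scores spread too far apart before averaging anything. The threshold, criteria names, and scores below are hypothetical, a sketch of the idea rather than a prescribed rule.

```python
from statistics import mean

# Hypothetical helper: flag any criterion whose scores diverge widely
# enough to warrant a consensus meeting instead of a simple average.
SPREAD_THRESHOLD = 2  # max-min gap on a 5-point scale that triggers review

def needs_consensus(scores, threshold=SPREAD_THRESHOLD):
    return max(scores) - min(scores) >= threshold

criteria_scores = {
    "technical approach": [2, 5, 4],   # wide disagreement -> discuss first
    "implementation plan": [4, 4, 5],  # close agreement -> averaging is fine
}

for criterion, scores in criteria_scores.items():
    if needs_consensus(scores):
        print(f"{criterion}: scores {scores} diverge; hold a consensus meeting")
    else:
        print(f"{criterion}: consensus score {mean(scores):.2f}")
```

Running the flagged criteria through a facilitated discussion, rather than averaging away the 2-versus-5 split, surfaces exactly the misunderstandings the averaging approach hides.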

How technology can help

Using technology to conduct your sourcing events is more efficient than relying on email, Excel, and paper. That efficiency gives your team more time and focus to run effective evaluations and ultimately achieve better outcomes.

Do you want to uncover how Bonfire can help you run seamless evaluations? Attend one of our weekly live product demonstrations to learn more.

About the author


Nicole Roberts | Bonfire Interactive

As the Director of Demand Generation at Bonfire, Nicole ensures public procurement teams are provided with knowledge and information on digital solutions that will help them get the most out of their purchasing decisions. She then works to connect those teams with solution experts who can guide them on their digital journeys.