3 stats about RFP evaluations to help you run more efficient projects

Managing the RFP evaluation is one of the most difficult parts of any RFP process. From coordinating everyone’s schedules for meetings, to facilitating an award decision that everyone can agree on, there’s no shortage of challenges.

In spite of this, procurement professionals have few benchmarks for how the process is handled at other organizations. To shed some light on a process that (by necessity) takes place behind closed doors, we analyzed the data from $4.4 billion in RFP decisions conducted through the Bonfire Strategic Sourcing Platform.

This research, from the State of the RFP Report, provides insight into how 190 organizations across North America manage their most important spending decisions.

Use these benchmarks as a jumping-off point to reflect on how your team can manage this challenging process more effectively.

The average RFP evaluation committee consists of 4.4 members

The success of your RFP decision depends on the expertise of your evaluation committee, which could include subject matter experts, end users, members of peripheral departments, or even community members.

The report shows that the average RFP project includes 4.4 evaluators. However, there is no one-size-fits-all answer for how big your evaluation group should be. Ultimately, it depends on how broadly the decision will impact the organization and whose expertise is essential to making it.

Many teams would like to include more diverse perspectives (e.g. including community members’ feedback in social services decisions), but find themselves hampered by a cumbersome process. To ensure the procurement process is not a limiting factor on your evaluation committee size, consider ways to reduce the administrative workload of including more evaluators. Bringing the evaluation process online is among the most impactful steps you can take to include more evaluators without an exponential increase in photocopying and other manual work.

27% of evaluator scoring happens outside of work hours

Evaluators are typically busy people with many competing priorities — and as a result, they occasionally procrastinate on their evaluation duties.

The State of the RFP Report shows that the average evaluation period lasts 37 days. However, 53% of scores are recorded in the last 7 days before the deadline, and 10% are recorded on the final day. For people with a lot on their plates, it's no surprise that scoring often gets pushed to the last minute. And with 27% of scoring recorded outside of normal business hours, the data suggests that evaluators are struggling to complete their scoring within the workday.

[Chart: Typical scoring patterns during the RFP evaluation period]
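If your sourcing platform can export timestamped evaluator activity, you can track the same timing patterns for your own projects. The sketch below is a hypothetical illustration only; the example timestamps, the deadline, and the 9-to-5 weekday definition of business hours are assumptions, not the report's methodology.

```python
from datetime import datetime, timedelta

# Hypothetical example data: timestamps of recorded scores and the evaluation
# deadline. Replace with an export from your own sourcing platform.
deadline = datetime(2024, 3, 29, 17, 0)
score_times = [
    datetime(2024, 3, 4, 10, 15),
    datetime(2024, 3, 24, 21, 40),  # weekend evening scoring
    datetime(2024, 3, 28, 14, 5),
    datetime(2024, 3, 29, 8, 55),   # last day, before business hours
]

def share(predicate, times):
    """Fraction of timestamps satisfying a predicate."""
    return sum(predicate(t) for t in times) / len(times)

last_week = share(lambda t: deadline - t <= timedelta(days=7), score_times)
last_day = share(lambda t: deadline - t <= timedelta(days=1), score_times)

# Assumption: "business hours" means 9:00-17:00 on weekdays.
outside_hours = share(
    lambda t: t.weekday() >= 5 or not (9 <= t.hour < 17), score_times
)

print(f"Scored in final 7 days: {last_week:.0%}")
print(f"Scored on the last day: {last_day:.0%}")
print(f"Scored outside business hours: {outside_hours:.0%}")
```

Even a rough report like this can show whether your evaluators routinely need after-hours time to finish scoring, which strengthens the case for streamlining the process.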

Given this, procurement teams should seek to make it as easy as possible for evaluators to participate productively in the RFP process.

35% of RFP proposal scores lacked consensus

Once the scores are in, the hard part begins. This is where your skills as a procurement professional really shine, as you work to understand areas of disagreement, build consensus, and ultimately come to a mutually agreeable decision.

The report data shows that a surprising 35% of all proposal scores lacked consensus among the evaluation committee. (Lack of consensus was defined as any instance where two evaluators' scores for a given criterion on a given supplier differed by 30% or more.) A further 41% of scores showed soft consensus, and only 21% showed hard consensus.

[Chart: Breakdown of proposal scores by level of consensus]
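You can apply the same definition to your own scoring data to see which criteria deserve discussion time. The sketch below is a rough illustration, assuming scores sit on a 100-point scale and that "differed by 30% or more" means a gap of at least 30% of the maximum possible score; the example scores and threshold interpretation are assumptions, not the report's exact methodology.

```python
# Flag lack of consensus per the report's definition: any two evaluators'
# scores for the same criterion on the same supplier differ by 30% or more.
# Assumption: "30% or more" is measured against the maximum possible score.

def lacks_consensus(scores, max_score=100, threshold=0.30):
    """Return True if any pair of evaluator scores differs by >= threshold * max_score."""
    spread = max(scores) - min(scores)
    return spread >= threshold * max_score

# Hypothetical scores from four evaluators for two criteria on one supplier.
criterion_scores = {
    "Pricing": [80, 85, 78, 82],               # tight agreement
    "Implementation plan": [90, 55, 70, 85],   # 35-point spread, lacks consensus
}

for criterion, scores in criterion_scores.items():
    status = "lacks consensus" if lacks_consensus(scores) else "in consensus"
    print(f"{criterion}: {status} (spread of {max(scores) - min(scores)} points)")
```

A quick check like this lets you walk into the consensus meeting with a shortlist of criteria to focus on, in line with the advice below.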

Lack of consensus is neither 'good' nor 'bad', but it can certainly slow down an evaluation process. To resolve disagreements effectively, procurement professionals should focus the consensus meeting on outlier scores and areas of significant disagreement, and avoid line-by-line reviews of scores that already agree. The decision should also be thoroughly documented so that it remains well justified and defensible in vendor debriefs, bid protests, or audits.

As with any benchmark, these stats are not a prescriptive target for your team to hit. Rather, they put your process into a larger context and surface ways to make it more effective and more valuable.

For more insights from our study on RFP processes and outcomes, attend the State of the RFP Benchmarking webinar.

