Clarifying the Fall 2017 Project Grant funding results
March 7, 2018
As a follow-up to the Fall 2017 Project Grant results, CIHR would like to clarify the mechanisms by which funding decisions are made within the Project Grant program. We recognize that the Notice of Decision documentation does not adequately explain the complexities of funding decisions, and we are working to redesign this material for future competitions. In the meantime, we would like to offer clarification on the points below.
Please note that CIHR has also made data from this competition, broken down by panel, available on a separate web page. It is critical that these data not be used to inform an applicant's panel selection for future competitions, as they reflect many factors (explained further below) and will change from one competition to the next. The most important factor for an applicant to consider when selecting a panel is the expertise required to review the application.
CIHR encourages members of the community with questions and/or suggestions for further information to get in touch with the Contact Centre at support-soutien@cihr-irsc.gc.ca.
Success rate vs. percent rank
The Fall 2017 Project Grant competition involved more than 3,400 applications reviewed across 65 peer review panels. To make funding decisions, CIHR must be able to compare peer review results across panels. To do this, CIHR calculates each application's percent rank.
The percent rank of an application is calculated using its standing within its assigned panel. For example, an application that was ranked 5/57 within its panel has been ranked higher than all but 4 of the other 56 applications in the panel and therefore has a percent rank of 92.86% (percent rank = 1 - (4/56)). In general, for an application ranked r within a panel of n applications, percent rank = 1 - (r - 1)/(n - 1). Percent rank is used in this exercise because it stretches the scale so that the first-ranked application in every panel receives a percent rank of 100%.
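As an illustration, here is a minimal sketch of this calculation in Python. The function name and inputs are illustrative only and are not CIHR's actual implementation.

```python
# A minimal sketch of the percent rank calculation described above.
# Illustrative only; not CIHR's actual code.

def percent_rank(rank: int, panel_size: int) -> float:
    """Percent rank of an application ranked `rank` (1 = best)
    within a panel of `panel_size` applications."""
    return 1 - (rank - 1) / (panel_size - 1)

# The example from the text: an application ranked 5/57.
print(f"{percent_rank(5, 57):.2%}")   # 92.86%

# The first-ranked application in any panel receives 100%.
print(f"{percent_rank(1, 24):.2%}")   # 100.00%
```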
For the Project Grant program, funding decisions across panels were made in a fashion similar to that used under the Open Operating Grants Program (OOGP). Scores for each application were converted to within-committee rankings, which were then used to calculate each application's percent rank. This allows CIHR to account for scoring differences across the panels, and it also allows us to fund an approximately proportional number of applications in each panel. This proportion is approximate due to several factors that are explained in detail below; however, it is important to note that it is not dictated by pillar or area of research.
First, the number of applications per panel for the Fall 2017 competition ranged from 24 to 80, which accounts for some of the variability of success rate by panel (i.e., the panel success rates are close to the overall success rate but fluctuate around it). Please note that it is not possible to create panels of comparable size without forcing applications into inappropriate panels (i.e., panels that are not the best expertise match for the application).
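To illustrate why panel size alone causes this fluctuation, here is a minimal sketch in Python. The overall success rate used is illustrative, not the actual competition figure: because each panel can only fund a whole number of grants, rounding pushes small panels further from the overall rate than large ones.

```python
# A minimal sketch (illustrative, not CIHR code) of how panel size
# affects success rate granularity: the number of fundable grants per
# panel must be a whole number, and the rounding error is larger in
# small panels.

OVERALL_RATE = 0.143  # illustrative overall success rate (~14.3%)

for panel_size in (24, 44, 57, 80):  # the Fall 2017 range was 24 to 80
    funded = round(panel_size * OVERALL_RATE)
    print(f"panel of {panel_size:2d}: {funded:2d} grants funded "
          f"-> success rate {funded / panel_size:.2%}")

# panel of 24:  3 grants funded -> success rate 12.50%
# panel of 44:  6 grants funded -> success rate 13.64%
# panel of 57:  8 grants funded -> success rate 14.04%
# panel of 80: 11 grants funded -> success rate 13.75%
```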
Large grants, ties, and equalization
In addition to the differences in application pressure, a number of factors may cause the individual panel success rates to fluctuate. These include:
- Large grants: Within the overall competition budget, there is a specific funding envelope for large grants (i.e., those within the top 2% of the total grant amounts requested). For the Fall 2017 competition, any application that requested a total of more than $2,314,400 was considered a large grant. These large grants were reviewed in their assigned panels; however, they were combined and treated as a separate cohort for the purpose of making funding decisions. Under this methodology, it is possible for a large grant to be highly ranked in its panel (i.e., above the panel cutoff) but ultimately not funded (i.e., the large grant budget is exhausted before all of the large grants above their respective panel cutoffs are funded). In this scenario, such an application would receive a bridge grant; however, since bridge grants are not included in the calculation of success rates, the panel would have a lower success rate overall.
- Example: Of the 63 applications reviewed by the NSA panel, 9 would have been funded based on their percent rank (committee success rate of 14.3%). One of these 9 applications (the 9th-ranked application) qualified as a large grant. The funding available for large grants was exhausted by the 8 large grant applications ranked above it in the large grant cohort; therefore, it could not be fully funded and was awarded a bridge grant instead. Because of this, the NSA panel ultimately funded 8 grants (success rate of 12.7%), even though 9 (14.3%) were above the panel cutoff. (The arithmetic for this and the following examples is sketched in the code after this list.)
- Ties: If applications are tied (e.g., two or more applications have the same final score within a panel and are therefore equally ranked), it is CIHR's policy to fund all of them or none of them. Occasionally, a tie will fall right at the panel cutoff. In such a scenario, and with funds allowing, all of the tied applications are funded. This would increase an individual panel's success rate, as the panel gets to fund more grants than it otherwise would have been able to support.
- Example: Of the 44 applications reviewed by the CIB panel, 6 would have been funded based on their percent rank (committee success rate of 13.6%). However, three applications were tied in sixth place in the ranking list (i.e., their final scores were identical). Because of this, the CIB panel ultimately funded 8 grants: the 5 applications ranked above the tie plus all 3 tied applications (success rate of 18.2%).
- Equalization of the early career investigator (ECI) success rate: The success rate of ECIs was equalized to ensure that the proportion of ECIs funded equaled the proportion of ECI applicants to the competition. For the Fall 2017 competition, 21 ECI applications were funded through the equalization process. These applications fell below the percent rank cutoff but were fully funded through the funds allotted for ECIs, and they therefore counted toward the individual panel success rates. ECIs funded through the equalization process were combined and treated as a separate cohort for the purpose of making funding decisions (i.e., they were selected based on their percent rank and were therefore distributed throughout the panels).
- Example: Of the 21 applications reviewed by the HLE panel, 3 would have been funded based on their percent rank (committee success rate of 14.3%). However, one of the ECIs funded through the equalization process was in this panel. Because of this, the HLE panel ultimately funded 4 grants (success rate of 19.0%).
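To make the three adjustments above concrete, here is a minimal sketch in Python that reproduces the success rate arithmetic from the panel examples. It simply restates the figures given in the text and is not CIHR's decision-making code.

```python
# A minimal sketch (illustrative, not CIHR code) reproducing the
# success rate arithmetic in the three panel examples above.

def success_rate(funded: int, reviewed: int) -> str:
    return f"{funded / reviewed:.1%}"

# NSA: 9 of 63 ranked above the cutoff, but the 9th (a large grant)
# received a bridge grant, which does not count toward success rate.
print("NSA by percent rank:", success_rate(9, 63))       # 14.3%
print("NSA after adjustment:", success_rate(8, 63))      # 12.7%

# CIB: three-way tie at 6th place; the 5 applications above the tie
# plus all 3 tied applications were funded.
print("CIB by percent rank:", success_rate(6, 44))       # 13.6%
print("CIB after adjustment:", success_rate(5 + 3, 44))  # 18.2%

# HLE: one extra ECI grant funded through equalization.
print("HLE by percent rank:", success_rate(3, 21))       # 14.3%
print("HLE after adjustment:", success_rate(4, 21))      # 19.0%
```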
Bridge Grants
CIHR aims to strike the right balance between full grants and bridge grants for the Project Grant program, but the numbers may vary by competition because they are not predetermined.
For the Fall 2017 competition, 33 bridge grants were funded. We have received questions about the rationale for not offering 65 bridge grants (i.e., one per panel) to support one extra applicant per panel. It is important to note that, even if there were funds for 65 bridge grants, they would not be distributed equally across the panels due to differences in panel size.
Indigenous Health Research
Applications adjudicated by the Indigenous Health Research (IHR) Panel are reviewed as part of the iterative review process. Further, CIHR has committed 4.6% of the Project Grant budget to support IHR applications as part of its commitment to invest 4.6% of its total budget in Indigenous health research.
Based on the recommendation of the IHR Panel, 22 of 36 Indigenous health research projects were awarded funding. This represents a success rate of 61.11% for this panel. Of these projects:
- 13 were awarded full (immediate) funding
- 9 were awarded full funding conditional on certain requests being fulfilled within one year*
- 1 was awarded a bridge grant
*The IHR Panel determines whether these conditions have been met as part of the iterative review process.