2018 Federal Standard of Excellence
Use of Evidence in Five Largest Competitive Grant Programs
Did the agency use evidence of effectiveness when allocating funds from its five largest competitive grant programs in FY18? (Examples: Tiered-evidence frameworks; evidence-based funding set-asides; priority preference points or other preference scoring; Pay for Success provisions)
Score
7
Administration for Children and Families (HHS)
- In FY18, the five largest competitive grant programs are: (1) Head Start ($9.9 billion); (2) Unaccompanied Children Services ($1.3 billion); (3) Early Head Start-Child Care Partnerships ($755 million, included as part of Head Start total); (4) Transitional and Medical Services ($320 million); and (5) Preschool Development Grants ($25 million).
- ACF’s template (p. 14 in Attachment C) for competitive grant announcements includes two options, requiring grantees to either (1) collect performance management data that contributes to continuous quality improvement and is tied to the project’s logic model, or (2) conduct a rigorous evaluation for which applicants must propose an appropriate design specifying research questions, measurement and analysis. This helps build the evidence base through ACF grantmaking.
- In FY12, ACF significantly expanded its accountability provisions with the establishment of the Head Start Designation Renewal System (DRS). The DRS was designed to determine whether Head Start and Early Head Start programs are providing high quality comprehensive services to the children and families in their communities. Where they are not, grantees are denied automatic renewal of their grant and must apply for funding renewal through an open competition process. Those determinations are based on seven conditions, one of which looks at how Head Start classrooms within programs perform on the Classroom Assessment Scoring System (CLASS), an observation-based measure of the quality of teacher-child interactions. Data from ACF’s Head Start Family and Child Experiences Survey (FACES) and Quality Features, Dosage, Thresholds and Child Outcomes study (Q-DOT) were used to craft the regulations that created the DRS and informed key decisions in its implementation, including where to set minimum thresholds for average CLASS scores, the number of classrooms within programs to be sampled to ensure stable program-level estimates on CLASS, and the number of cycles of CLASS observations to conduct.
- ACF has an ongoing research portfolio building evidence in Head Start. Research sponsored through Head Start funding over the past decade has provided valuable information not only to guide program improvement in Head Start itself, but also to guide the field of early childhood programming and early childhood development. Dozens of Head Start programs have collaborated with researchers in making significant contributions in terms of program innovation and evaluation, as well as the use of systematic data collection, analysis and interpretation in program operations.
- ACF has an ongoing Study of Early Head Start-Child Care Partnerships which includes a review of the literature to summarize the current knowledge base around EHS-child care partnerships; development of a theory of change model to articulate relations among key features, characteristics, and expected outcomes of partnerships; development of approaches to measuring partnerships for existing and new data collection efforts; and the design and implementation of a descriptive study documenting the characteristics and features of EHS-child care partnerships and activities aiming to improve professional development and the quality of services to better meet families’ needs.
- ACF’s Personal Responsibility Education Program ($75 million in FY18) includes three individual discretionary grant programs that support evidence-based competitive grants that teach youth about abstinence and contraception to prevent pregnancy and sexually transmitted infections.
- To receive funds through ACF’s Community Based Child Abuse Prevention (CBCAP) program ($39.8 million in FY18), states must “demonstrate an emphasis on promoting the increased use and high quality implementation of evidence-based and evidence-informed programs and practices.” CBCAP defines evidence-based and evidence-informed programs and practices along a continuum with four categories: Emerging and Evidence-Informed; Promising; Supported; and Well Supported. Programs determined to fall within specific program parameters are considered “evidence-informed” or “evidence-based” practices (EBP), as opposed to programs that have not been evaluated using any set criteria. ACF monitors progress on the percentage of program funds (most recently 61.1% in FY15) directed towards evidence-based and evidence-informed practices.
Score
5
Administration for Community Living
- In FY18, the five largest competitive grant programs are: (1) the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) ($95 million); (2) Independent Living ($78 million); (3) University Centers for Excellence in Developmental Disabilities (UCEDD) ($39 million); (4) Medicare Improvements for Patients and Providers Act Programs (MIPPA) ($38 million); and (5) the Alzheimer’s Disease Program ($18 million).
- ACL grant awards are made, in part, based on the clarity and nature of proposed outcomes and on whether the proposed project evaluation reflects a thoughtful, well-designed approach that can successfully measure whether the project has achieved its proposed outcome(s); includes the qualitative and/or quantitative methods necessary to reliably measure outcomes; and is designed to capture “lessons learned” from the overall effort that might be of use to others, especially those interested in replicating the project. To the extent that grantees have completed similar work in the past, they are asked to demonstrate in their funding applications that those efforts were successful. Further, grantees are required to submit data through the ACL Reporting tool. The data are reviewed and, as needed, technical assistance is provided to grantees.
- At the start of each budget cycle, ACL’s Center for Policy and Evaluation sends ACL centers a one-page document titled “Policy Review of ACL Funding Opportunity Announcements (FOAs).” As part of this review process, OPE staff review Funding Opportunity Announcements to determine “Does the FOA provide enough detailed information to help applicants think critically about the measures they should include in their applications to effectively measure their progress towards meeting the goals outlined in the FOA?” OPE staff make specific recommendations to program staff to improve language around performance measurement and evaluation. The degree to which applications provide detailed information about their expected outcomes and how they will measure those outcomes informs funding decisions.
- The NIDILRR evaluation was conducted by the National Academy of Sciences (NAS) and was released in 2012. A ten-year evaluation plan was developed based on the NAS evaluation. The plan includes a set of research questions aimed at assessing the effectiveness and efficiency of NIDILRR’s operations as well as the quality and impacts of NIDILRR-funded activities and products. Implementation is on-going and helps to guide funding priorities and decisions by ensuring that they are more closely tied to the evidence about promising practices produced by prior NIDILRR grantees.
- In FY18, NIDILRR released a competitive grant with the express purpose of building evidence on the effectiveness of exercise interventions or programs for improving and sustaining health and health-related quality of life (HRQOL) among people with disabilities, and of determining the extent to which improved HRQOL is related to improved community participation outcomes.
- The Long-Range Plan of the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) describes ways in which “NIDILRR proposes to support competitions that build on prior investments that resulted in evidence of efficacy and effectiveness. These competitions will provide funding for further development and testing of practices and interventions in additional settings, or among new populations of people with disabilities. These efforts may support translational research to develop practical strategies for ensuring more widespread use of new evidence-based findings in the area of disability and rehabilitation research and development.”
- Independent Living (IL) programs include a mix of formula and discretionary grants. The Centers for Independent Living (CILs) Program provides 354 discretionary grants to centers that are consumer-controlled, community-based, cross-disability, nonresidential, private nonprofit agencies that provide IL services. To continue receiving CIL program funding, eligible centers must provide evidence that they have previously had an impact on the goals and objectives for this funding, including:
- Promotion of the IL philosophy;
- Provision of IL services on a cross-disability basis;
- Support for the development and achievement of IL goals chosen by the consumer;
- Efforts to increase the availability of quality community options for IL;
- Provision of IL core services and, as appropriate, a combination of any other IL service;
- Building community capacity to meet the needs of individuals with significant disabilities; and
- Resource development activities to secure other funding sources.
- UCEDDs are a nationwide network of independent but interlinked centers, representing an expansive national resource for addressing issues, finding solutions, and advancing research related to the needs of individuals with developmental disabilities and their families. Applications are also reviewed based on their description of current or previous relevant experience and/or the record of the project team in preparing cogent and useful reports, publications, and other products.
- MIPPA funds are awarded to State grantees and to the National Center for Benefits Outreach and Enrollment. To continue funding without restrictions, State grantees must submit state plans that ACL staff review for the specific strategies grantees will employ to enhance outreach through statewide and local coalition building, with intensified outreach activities to help beneficiaries likely to be eligible for the Low Income Subsidy program (LIS), the Medicare Savings Program (MSP), and Medicare Prescription Drug Coverage (Part D), and to assist beneficiaries in applying for benefits. The plans also require that States reflect successes achieved to date and direct their efforts to enhance and expand their MIPPA outreach activities. National Center applicants must describe the rationale for using the particular intervention, including factors such as evidence of intervention effectiveness.
- For Alzheimer’s Disease Programs Initiative (ADPI) funding, “If the applicant has held an ADSSP grant between 2011 and 2017, they must explain the work of their previous dementia systems project.” Applicants must also “describe the rationale for using the particular intervention, including factors such as: “lessons learned” for similar projects previously tested in your community, or in other areas of the country; and factors in the larger environment that have created the “right conditions” for the intervention (e.g., existing social or economic factors that you’ll be able to take advantage of, etc.)” ACL promotes evidence building in this area through activities such as a 2017 research summit on dementia care. The goal of the research summit was to identify what is known and what new knowledge is needed in order to accelerate the development, evaluation, translation, implementation, and scaling up of comprehensive care, services, and supports for persons with dementia, families, and other caregivers. The summit focused on research needed to improve quality of care and outcomes across care settings, including quality of life and the lived experience of persons with dementia and their caregivers. Grant applicants and awardees are encouraged to use such information to inform their programming.
- Although not one of its five largest competitive grants, ACL awarded a cooperative agreement titled “Living Well-Model Approaches for Enhancing the Quality, Effectiveness and Monitoring of Home and Community Based Services for Individuals with Developmental Disabilities” to design, identify, and implement a range of evidence-based practices and/or innovative strategies focused on improving community services that support people with developmental disabilities living in the community or moving to the community from a more restrictive setting. Successful applicants are required to demonstrate expertise and experience in improving quality outcomes for individuals with developmental disabilities.
Score
8
Corporation for National and Community Service
- CNCS is operating two competitive grant programs in FY18: (1) the AmeriCorps State and National program (excluding State formula grant funds) ($244,064,965); and (2) the Senior Corps RSVP program ($49 million). (The Social Innovation Fund (SIF) grants were integrated into the AmeriCorps State and National program. CNCS requested $50 million for SIF in FY17, but Congress eliminated funding for this evidence-based program in FY17 and again in FY18.)
- In FY18, CNCS’s AmeriCorps State and National Grants Program (excluding State formula grant funds) application (pp. 14-17) allocated up to 36 points out of 100 to organizations that submitted applications supported by performance and evaluation data. Specifically, up to 24 points could be assigned to applications with theories of change supported by relevant research literature, program performance data, or program evaluation data, and up to 12 points could be assigned for an applicant’s incoming level of evidence, the quality of that evidence, and the applicant’s evaluation capacity. These categories of evidence are modeled closely on the levels of evidence defined in the Social Innovation Fund (see note above). An additional eight points could be earned by demonstrating a culture of learning (e.g., collecting and using information for learning and decision making). In sum, 44 of 100 points are earned by demonstrating quality data, rigorous evidence, and the use of this information for continuous improvement and decision-making (a hypothetical illustration of this scoring arithmetic follows the chart below). The percentage of grant dollars allocated to the strong, moderate, preliminary, and no evidence categories shifted between FY17 and FY18 (see chart below), such that more FY18 grant dollars were awarded to applicants with strong levels of evidence for proposed interventions, and fewer grant dollars were awarded to applicants with little to no evidence of effectiveness.
Percentage of competitive AmeriCorps grant funds that support evidence-based projects

| | FY16 | FY17 | FY18 |
| --- | --- | --- | --- |
| Strong | 20% | 18% | 26% |
| Moderate | 14% | 11% | 11% |
| Preliminary | 44% | 45% | 34% |
| No Evidence | 22% | 26% | 29% |
| Total | 100% | 100% | 100% |
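To make the scoring arithmetic above concrete, the following is a minimal, hypothetical sketch (in Python) of how the three evidence-related point categories could combine into the 44 evidence-related points. The category caps come from the narrative above; the function, its names, and the sample sub-scores are invented for illustration and are not part of any CNCS review system.

```python
# Hypothetical sketch (not CNCS tooling): combining the evidence-related
# point categories described above. Caps: 24 (theory of change), 12 (incoming
# evidence level/quality and evaluation capacity), 8 (culture of learning).

def evidence_points(theory_of_change: float, incoming_evidence: float, learning_culture: float) -> float:
    """Sum the reviewer-assigned sub-scores, capped at each category's maximum."""
    return (
        min(theory_of_change, 24)
        + min(incoming_evidence, 12)
        + min(learning_culture, 8)
    )  # out of 44 evidence-related points (of 100 total application points)

# Example with invented sub-scores for a hypothetical applicant.
print(evidence_points(theory_of_change=22, incoming_evidence=8, learning_culture=7))  # 37 of 44
```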
- CNCS and the VA are outcome payers for the Veterans Coordinated Approach to Recovery and Employment (CARE) Pay for Success project. The intervention is the Individual Placement and Support program, an evidence-based approach to supported employment. The evidence base consists of 25 published RCTs and is currently under review by the Laura and John Arnold Foundation for inclusion in its Moving the Needle initiative. The project will initiate service delivery in summer 2018, with outcome payments beginning at 18 months and continuing in months 24, 30, and 42 of the project. CNCS funded this project through forward-funded resources from the Social Innovation Fund (see note above).
- CNCS will initiate a process evaluation in FY18 as part of its Scaling Effective National Service Solutions initiative (initiated in FY16). An independent contractor reviewed grantee evaluation reports and scaling plans submitted in FY15, FY16, and FY17 to determine which interventions had sufficient evidence to warrant scaling and to document variation in scaling goals and experiences. Three grantees were selected for a process evaluation to systematically assess their scaling experiences. Findings from the evaluation will be used to inform grantmaking and provide guidance on how and when to scale effective service interventions.
- CNCS published and began implementing its Transformation and Sustainability Plan in FY18. One of the six goals included in this plan is prioritizing evidence-based interventions. Specifically, “CNCS will further refine the intervention models it funds based on evidence and demonstrated success, while maintaining the flexibility to support statutory and Administration priorities. CNCS will also continue to support innovative interventions and approaches based upon preliminary evidence in order to build grantee capacity, foster innovation, and meet evolving community needs, and will evaluate these interventions to learn more about whether they are effective.”
Score
9
Millennium Challenge Corporation
- MCC awards all of its agency funds through two competitive grant windows: the Compact and Threshold programs (whose FY18 budgets were $800 million and $26.6 million, respectively). Both types of grants require demonstrable, objective evidence to support the likelihood of project success in order to be awarded funds. For country partner selection, MCC uses 20 different indicators within the categories of economic freedom, investing in people, and ruling justly to determine country eligibility for program assistance. These indicators (see MCC’s FY2018 Guide to the Indicators) are collected by independent third parties.
- When considering granting a second compact, MCC further considers whether countries have 1) exhibited successful performance on their previous compact; 2) exhibited improved 2018 Scorecard policy performance during the partnership; and 3) exhibited a continued commitment to further their sector reform efforts in any subsequent partnership. As a result, the MCC Board of Directors applies an even higher standard when selecting countries for subsequent compacts. Per MCC’s Compact Development Guidance (p. 6): “As the results of impact evaluations and other assessments of the previous compact program become available, the partner country must use this data to inform project proposal assessment, project design, and implementation approaches.”
- Following country selection, MCC conducts a constraints analysis to identify the most binding constraints to private investment and entrepreneurship that hold back economic growth. Coupled with a subsequent root-cause analysis, the constraints analysis enables the country, in partnership with MCC, to select compact or threshold activities most likely to contribute to sustainable, poverty-reducing growth. In developing project proposals, MCC requires that countries use all available evidence to inform the design and potential impact of a project. Specifically, this evidence should be drawn from evaluations of similar completed projects in the compact country or, if that is not available, from results in another country with similar economic characteristics and conditions that may be applicable. MCC will not approve proposals or parts of proposals without good supporting evidence that the proposal will have a significant impact on economic growth and poverty reduction. Due diligence, including feasibility studies where applicable, is also conducted for each potential investment. MCC then performs a cost-benefit analysis to assess the potential impact of each project and estimates an Economic Rate of Return (ERR). MCC uses a 10% ERR hurdle to more effectively prioritize and fund projects with the greatest opportunity for maximizing impact. MCC then recalculates ERRs at compact closeout, drawing on information from MCC’s monitoring data (among other data and evidence), in order to test original assumptions and assess the cost effectiveness of MCC programs. In connection with the ERR, MCC conducts a Beneficiary Analysis, which seeks to describe precisely which segments of society will realize the project’s benefits. It is most commonly used to assess the impact of projects on the poor, but it has broader applicability that allows for estimating impacts on populations of particular interest, such as women, the aged, children, and regional or ethnic sub-populations. This process is codified in MCC’s Compact Development Guidance. Per the guidance, MCC requires the use of evidence to inform country and project selection by requiring that each project meet certain investment criteria, such as generating high economic returns, including clear metrics for results, and supporting the long-term sustainability of results.
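To illustrate the ERR hurdle described above, here is a minimal, hypothetical sketch (in Python) that treats the ERR as the discount rate at which a project’s stream of net benefits has zero present value and screens it against the 10% threshold. The cash flows, function names, and bisection approach are illustrative assumptions, not MCC’s actual economic models.

```python
# Hypothetical ERR screen (not MCC's model): the ERR is the discount rate r
# at which the present value of a project's annual net benefits equals zero.
# All cash-flow figures below are invented for illustration.

def npv(net_benefits, rate):
    """Present value of annual net benefits (year 0 first) at a given discount rate."""
    return sum(b / (1 + rate) ** t for t, b in enumerate(net_benefits))

def err(net_benefits, lo=0.0, hi=1.0, tol=1e-6):
    """Bisection search for the rate where NPV = 0 (assumes a single sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(net_benefits, mid) > 0:
            lo = mid  # NPV still positive, so the ERR is higher
        else:
            hi = mid
    return (lo + hi) / 2

# Year 0 is the up-front investment cost; years 1-10 are annual net benefits.
project = [-100.0] + [18.0] * 10
rate = err(project)
print(f"Estimated ERR: {rate:.1%}")          # roughly 12-13% for these invented figures
print("Clears a 10% hurdle:", rate >= 0.10)
```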
- In line with MCC’s M&E policy, MCC projects are required to submit quarterly Indicator Tracking Tables showing progress toward projected targets. MCC also requires independent evaluations of every project to assess progress in achieving outputs and outcomes throughout the lifetime of the project and beyond.
Score
7
Substance Abuse and Mental Health Services Administration
- The following are SAMHSA’s five largest competitive grant programs for which funds were appropriated in FY18: (1) Opioid State Targeted Response ($1.5 billion in FY18); (2) Children’s Mental Health Services ($125 million in FY18); (3) Strategic Prevention Framework ($119.5 million in FY18); (4) Targeted Capacity Expansion – General ($95.2 million in FY18); and (5) Substance Abuse Treatment Criminal Justice ($89 million in FY18).
- The President’s Budget request for SAMHSA for FY18 stipulates “that up to 10% of amounts made available to carry out the Children’s Mental Health Initiative may be used to carry out demonstration grants or contracts for early interventions with persons not more than 25 years of age at clinical high risk of developing first episode of psychosis.” Specifically, funds from this set-aside should address whether community-based interventions during the prodrome phase can prevent further development of serious emotional disturbances and eventual serious mental illness, and the extent to which evidence-based early interventions can be used to delay the progression of mental illness, reduce disability, and/or maximize recovery.
- SAMHSA includes universal language about using evidence-based practices (EBPs) in its Funding Opportunity Announcements (FOAs) (entitled Using Evidence-Based Practices (EBPs)). This language acknowledges that “EBPs have not been developed for all populations and/or service settings,” thus encouraging applicants to “provide other forms of evidence” that a proposed practice is appropriate for the intended population. Specifically, the language states that applicants should: (1) document that the EBPs chosen are appropriate for intended outcomes; (2) explain how the practice meets SAMHSA’s goals for the grant program; (3) describe any modifications or adaptations needed for the practice to meet the goals of the project; (4) explain why the EBP was selected; (5) justify the use of multiple EBPs, if applicable; and (6) discuss training needs or plans to ensure successful implementation. Lastly, the language includes resources the applicant can use to understand EBPs. SAMHSA shares evidence-based program and practice language with grantees as they compete for SAMHSA grants and describe the types of program/practice implementation they hope to engage in to address the needs of their particular target populations and communities. The review criteria contained in the FOA make clear that applicants proposing to use programs and practices with a more robust evidence base will receive higher scores and thus greater support for their funding application.
- The President’s Budget for SAMHSA for FY18 plans to implement a tiered evidence approach in the Screening, Brief Intervention, and Referral to Treatment (SBIRT) program, which will allow funding allocations and awards to support both innovative practices or programs and more standard programming. Grant funding will be tied to the particular approach taken by the grantee. At present, SAMHSA does not use preference points to link funds to evidence of effectiveness; however, the 10% set-aside includes language suggesting that the Coordinated Specialty Care model is a first-episode approach of importance to this work.
- Among SAMHSA’s standard terms and conditions for all grant funding is the requirement that grantees collect and report evaluation data to ensure the effectiveness and efficiency of its programs under the Government Performance and Results Modernization Act of 2010 (P.L. 111-352). In addition, grantees must comply with performance goals and expected outcomes described in Funding Opportunity Announcements (FOAs), which may include participation in an evaluation and/or local performance assessment. While exemplar FOAs are not available to be shared publicly at this juncture, SAMHSA is developing the first tiered evidence FOA that will be funded in FY18, a key step to incentivize innovative practice/program models among grantees.
Score
8
U.S. Agency for International Development
- In FY18, USAID’s top five accounts, as appropriated, included: (1) Global Health Programs USAID ($3.02 billion); (2) Development Assistance ($3 billion) (see p. 9 of this Reference Guide); (3) International Disaster Assistance ($2.7 billion); (4) Economic Support Fund ($1.82 billion) (see p. 10 of this Reference Guide); and (5) Assistance to Europe, Eurasia and Central Asia ($750 million) (see p. 10 of this Reference Guide).
- USAID is committed to using evidence of effectiveness in all of its competitive contracts, cooperative agreements, and grants, which comprise the majority of the agency’s work. USAID’s Program Cycle policy ensures evidence from monitoring, evaluation and other sources informs decisions at all levels, including during strategic planning, project and activity design, and implementation. The Program Cycle is USAID’s particular framing and terminology to describe a common set of processes intended to achieve more effective development interventions and maximize impacts. The Program Cycle acknowledges that development is not static and is rarely linear, and therefore stresses the need to assess and reassess through regular monitoring, evaluation, and learning.
- In 2013, USAID reformed its policy for awarding new contracts to elevate past performance to comprise 20 to 30 percent of the non-cost evaluation criteria. For assistance, USAID does a “risk assessment” to review an organization’s ability to meet the goals and objectives outlined by the agency. This can be found in ADS 303, section 303.3.9. Contractor performance is guided by USAID operational policy ADS 302, section 302.3.8.7. As required in FAR Subpart 42.15, USAID must evaluate contractor performance using the Contractor Performance Assessment Reporting System (CPARS). Information in CPARS, while not available to the public, is available for Contracting Officers across the government to use in making determinations of future awards.
- In June 2018, USAID revised its process for engaging senior leadership in the review of proposed high-dollar-value A&A awards (contracts, grants, and cooperative agreements) with a total estimated cost of $20 million or more. The new policy, called the Senior Obligation Alignment Review (SOAR), helps ensure the agency is using innovative approaches to provide long-term sustainable outcomes and provides oversight of the use of mechanisms and proposed results. The review also contributes to more rigorous project design and establishes greater linkages between Washington and field activities. Factors under review include: approach, use of evidence, past activities, innovation, and sustainable results.
- USAID seeks to increase collaboration, co-design, and co-financing approaches that promote innovation and the diversification of the partner base. This will result in a broader evidence base, more empowered partners, results-driven solutions, and stronger host-country capacity and self-reliance, to advance the agency’s overall goal of ending the need for foreign assistance. Based on an assessment of best practices and potential innovations, USAID will develop and implement procurement strategies and methodologies that achieve greater reliance on collaborative approaches and co-creation. In addition, USAID will train staff on co-creation and more-collaborative methods to engage our partners. The agency is measuring progress towards this goal through one of its Agency Priority Goals.
Score
10
U.S. Department of Education
- ED’s five largest competitive grant programs in FY18 are: (1) TRIO ($1.01 billion in FY18); (2) Charter Schools Program ($400 million in FY18); (3) GEAR UP ($350 million in FY18); (4) Teacher and School Leader Incentive Program (TSL) ($200 million in FY18); and (5) Comprehensive Literacy Development Grants ($190 million in FY18).
- ED uses evidence of effectiveness when making awards in all five of these largest competitive grant programs. (1) The vast majority of TRIO funding in FY18 will be used to support continuation awards to grantees that were successful in prior competitions that awarded competitive preference priority points for projects proposing strategies supported by moderate evidence of effectiveness, including over $300 million in Student Support Services, over $150 million in Talent Search, and nearly $400 million in the Upward Bound and Upward Bound Math and Science programs combined. (2) Under the Charter Schools Program, ED generally requires or encourages applicants to support their projects through logic models; however, applicants are not expected to develop their applications based on rigorous evidence. (3) For the 2017 competition for GEAR UP, ED used a competitive preference priority for projects based on moderate evidence of effectiveness for state and partnership grants (approximately $70 million in new awards in FY17). ED is funding continuation awards in 2018 for these evidence-based projects. Additionally, ED is conducting 2018 GEAR UP competitions (nearly $130 million), including an absolute priority for applicants proposing evidence-based strategies to improve STEM outcomes. (4) The TSL statute requires applicants to provide a description of the rationale for their project and describe how the proposed activities are evidence-based, and grantees are held to these standards in the implementation of the program. In 2018, the program is paying out continuation awards from the 2017 competition. (5) The Comprehensive Literacy Development (CLD) statute requires that grantees provide subgrants to local educational agencies that conduct evidence-based literacy interventions. Plans for the 2018 competition are forthcoming, as ED can make awards under the CLD program through September 30, 2019.
- The Evidence Planning Group (EPG) advises program offices on ways to incorporate evidence in grant programs, including by encouraging or requiring applicants to propose projects that are based on research, and by encouraging applicants to design evaluations for their proposed projects that would build new evidence.
Score
8
U.S. Dept. of Housing & Urban Development
- In FY18 HUD’s largest competitive programs are: (1) Continuum of Care ($1.9 billion); (2) CHOICE Neighborhoods Implementation ($145 million); (3) Lead Based Paint Hazard Reduction ($130 million); (4) Section 202 ($105 million); and (5) Section 811 ($82 million).
- Competitive grants in the Continuum of Care program account for most HUD grant resources in FY18, and serve homeless populations by providing permanent supportive housing and rapid rehousing services. The Continuum of Care program awards preference points based on reporting of system performance measures focused on outcomes. The FY17 NOFA allocated $1.6 billion using a 200-point scale. The 200 points were awarded for various features, many of which included evidence of effectiveness:
- Up to 49 points for system performance, including –
- up to 10 points to CoCs that demonstrate an overall reduction of at least 5 percent in the number of individuals and families who experience homelessness;
- up to 3 points to CoCs that demonstrate how they are working to reduce the number of individuals and families who become homeless for the first time, with maximum points awarded to CoCs that demonstrate a reduction in the number of first-time homeless;
- up to 11 points to CoCs that reduce the length of time individuals and families remain homeless and specifically describe how they will reduce the length of time individuals and families remain homeless;
- up to 9 points to CoCs that demonstrate an increase in the rate in which individuals and families move to permanent housing destinations or continue to reside in permanent housing projects;
- up to 6 points to CoCs that reduce the extent to which individuals and families leaving homelessness experience additional spells of homelessness;
- up to 4 points to CoCs that increase program participants’ incomes from employment and non-employment cash sources; and
- Up to 60 points for performance and strategic planning, including –
- up to 15 points to CoCs for demonstrating the extent to which they are ending chronic homelessness;
- up to 3 points to CoCs that demonstrate the total number of homeless households with children and youth has decreased;
- up to 8 points to CoCs that demonstrate a decrease in the total number of homeless veterans in the CoC; and
- up to 4 points to CoCs that demonstrate the total number of homeless veterans has decreased.
- The FY18 CHOICE Neighborhoods Implementation Grants program’s scoring criteria account for how strong the evidence base is for each applicant’s selected strategies. The 102 possible points are awarded for factors of capacity, need, strategy, leverage, and soundness of approach (pp. 54–56). Included in these factors are points for evidence-based criteria, including 3 points for evidence-based public safety approaches (p. 65), 2 points for evidence-based early learning programs (p. 72), and 2 points for high-quality school-based or out-of-school education programs (p. 72).
- The Lead Based Paint Hazard Reduction program scores applicants based on their ability to monitor performance and specifically asks for statistics on elevated blood lead incidence/prevalence. Of 102 points possible, 10 points are awarded for the applicant’s performance history (p. 38), 5 points for effective use of funds (p. 40), and 20 points for evidence of applicant need (pp. 41–43).
- Additionally, all HUD-funded programs require recipients to submit, not less than annually, a report documenting achievement of outcomes under the purpose of the program and the work plan in the award agreement for accountability purposes and to build evidence of effective practices in the field.
- HUD and the U.S. Department of Justice have a partnership to demonstrate the effectiveness of a Pay for Success financing approach. Demonstration grants require implementing Pay for Success financing to reduce homelessness and prisoner recidivism by providing permanent supportive housing using the “housing first” model. HUD is conducting an evaluation of the efficacy and cost effectiveness of the Pay for Success approach.
Score
8
U.S. Department of Labor
- In FY18, the five largest competitive grant programs awarded were: (1) YouthBuild ($85 million); (2) Reentry Projects ($84 million); (3) RETAIN Demonstration Projects ($63 million); (4) Indian and Native American Employment and Training Program ($62 million); and (5) National Health Emergency (NHE) Dislocated Worker Demonstration Grants ($21 million).
- All grantees have been or will be involved in evaluations designed by CEO and the relevant DOL agencies. In each case DOL required or encouraged grantees (through language in the funding announcement and proposal review criteria) to use evidence-based models or strategies in grant interventions and/or to participate in an evaluation, especially to test new interventions that theory or research suggest are promising.
- DOL includes rigorous evaluation requirements in all competitive grant programs, involving either: 1) full participation in a national evaluation as a condition of grant receipt; 2) an independent third-party local or grantee evaluation with priority incentives for rigorous designs (e.g., tiered funding, scoring priorities, bonus scoring for evidence-based interventions, or multi-site rigorous tests); or 3) full participation in an evaluation as well as rigorous grantee (or local) evaluations.
- For example, the YouthBuild funding announcement required applicants to demonstrate how their project design is informed by the existing evidence base on social programs serving disadvantaged youth, and in particular on workforce development programs for disadvantaged youth. The funding announcement also contained language stating that grantees are required to participate in an evaluation as a condition of the grant award, should DOL undertake one. DOL funded an evaluation of YouthBuild using a randomized controlled trial. The evaluation included 75 programs across the country and nearly 4,000 young people who enrolled in the study between 2011 and 2013. The final report, presenting the program’s effects on young people after four years, will be released in 2018.
- The Reentry Projects grant program used a tiered evidence framework, requiring applicants to propose evidence-based or evidence-informed interventions, new interventions that theory or research suggests are promising, or a combination of both, that lead to increased employment outcomes for their target populations. Applicants must frame their goals and objectives to address this issue and are able to select and implement different program services and/or features of program models. The grant funding announcement includes examples of previous studies and evaluations that DOL has conducted on reentry programs, as well as other evidence-based and promising practices, and applicants were encouraged to review these resources prior to designing their interventions. DOL currently has an evaluation of the Reentry Projects grant program underway.
- The Retaining Employment and Talent after Injury/Illness (RETAIN) Demonstration Project will test the impact of early intervention projects on stay-at-work/return-to-work outcomes (see here for the funding announcement). This project builds on current evidence and includes a rigorous evaluation. The demonstration will be structured and funded in two phases. The initial period of performance will be 18 months and will include planning and start-up activities, including the launch of a small pilot demonstration and an evaluability assessment. At the conclusion of the initial period of performance, a subset of awardees will be competitively awarded supplemental funding to implement the demonstration projects. Awardees will be required to participate in an evaluation, which will be designed in Phase 1 and conducted during Phase 2 by an external, independent contractor.
- The funding announcement for the Native American Employment and Training Program required applicants to demonstrate how their project design is informed by the existing evidence base on social programs serving disadvantaged youth, and in particular on workforce development programs for disadvantaged youth, and contained language stating that grantees are required to participate in an evaluation as a condition of the grant award, should DOL undertake one.
- The funding announcement for the National Health Emergency (NHE) Dislocated Worker Demonstration Grants states that a primary goal is to test innovative approaches to address the economic and workforce-related impacts of the opioid crisis, and contains language stating that grantees are required to participate in an evaluation as a condition of grant award. The purpose of these grants is to enable eligible applicants to serve or retrain workers in communities impacted by the health and economic effects of widespread opioid use, addiction, and overdose.