2018 Federal Standard of Excellence


Common Evidence Standards / What Works Designations

Did the agency use a common evidence framework, guidelines, or standards to inform its research and funding decisions and did it disseminate and promote the use of evidence-based interventions through a user-friendly tool in FY18? (Example: What Works Clearinghouses)

Score
9
Administration for Children and Families (HHS)
  • ACF has established a common evidence framework adapted for the human services context from the framework for education research developed by the U.S. Department of Education and the National Science Foundation. The ACF framework, which includes the six types of studies delineated in the ED/NSF framework, aims to (1) inform ACF’s investments in research and evaluation, and (2) clarify for potential grantees and others the expectations for different types of studies.
  • While ACF does not have a common evidence framework spanning all funding decisions, certain programs do use one. For example, the Family First Prevention Services Act (FFPSA) enables states to use funds for certain evidence-based services, and ACF is currently developing an evidence framework that will be the basis for determining the services eligible for funding. The Head Start Designation Renewal System determines whether Head Start and Early Head Start grants are automatically renewed, based in part on how Head Start classrooms within programs perform on the Classroom Assessment Scoring System (CLASS), an observation-based measure of the quality of teacher-child interactions. The Personal Responsibility Education Program Competitive Grants were funded to replicate effective, evidence-based program models or substantially incorporate elements of projects that have been proven to delay sexual activity, increase condom or contraceptive use among sexually active youth, and/or reduce pregnancy among youth, based on a systematic evidence review.
  • ACF sponsors several user-friendly tools that disseminate and promote evidence-based interventions. In particular, several evidence reviews of human services interventions disseminate and promote evidence-based interventions by rating the quality of evaluation studies (using objective standards vetted by technical experts and applied by trained, independent reviewers, similar to those used by other agencies such as the U.S. Department of Education’s What Works Clearinghouse and the U.S. Department of Labor’s CLEAR) and presenting results in a user-friendly, searchable format. Reviews to date have covered teen pregnancy prevention; home visiting; marriage education and responsible fatherhood; and employment and training, and include both ACF-sponsored and other studies.
  • Additionally, ACF is currently working to fulfill two new statutorily required evidence reviews. (1) The Consolidated Appropriations Act of 2017 directed HHS to create a “What Works Clearinghouse of Proven and Promising Projects to Move Welfare Recipients into Work” that includes “projects that used a proven approach or a promising approach in moving welfare recipients into work, based on independent, rigorous evaluations of the projects.” (2) As mentioned above, the Family First Prevention Services Act (FFPSA) enables States to use Federal funds available under parts B and E of Title IV of the Social Security Act to provide enhanced support to children and families and prevent foster care placements through the provision of evidence-based mental health and substance abuse prevention and treatment services, in-home parent skill-based programs, and kinship navigator services, and requires an independent systematic review of evidence to designate programs and services as “promising,” “supported,” and “well-supported” practices.
Score
7
Administration for Community Living
  • The National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) uses a stages of research framework (SORF) to classify and describe its funded grants and the research projects within those grants. The four stages of SORF are: exploration and discovery, intervention development, intervention efficacy, and scale-up evaluation. Using SORF, NIDILRR gains insight into what is known and unknown about a problem; whether it is time to develop interventions to address a particular problem; whether it is time to test the efficacy of interventions; and whether it is time to “scale up” interventions for broader use.
  • The Older Americans Act (OAA) requires the use of evidence-based programming in Title III-D-funded activities, Disease Prevention and Health Promotion Services. In response, ACL developed a definition of the term “evidence-based” and created a website containing links to a range of resources for evidence-based programs. This serves as a common evidence framework for OAA-funded activities under this particular program.
  • While ACL does not endorse specific programs, it does publish intervention summaries of aging and disability evidence-based programs and practices that include: general information about the intervention; a description of the research outcomes reviewed; ratings of the quality of research and readiness for dissemination; a list of studies and materials reviewed; information about the translation of the intervention to additional settings and populations; and contact information to obtain more information about implementation or research. The information provided was modeled after the Substance Abuse and Mental Health Services Administration’s (SAMHSA) National Registry of Evidence-Based Programs and Practices (NREPP).
  • ACL works through its resource centers to help grantees use evidence to drive improvements in outcomes for older adults and individuals with disabilities.
  • For example, with funding from ACL, the National Resource Centers at NCOA, in collaboration with the Evidence-Based Leadership Council, lead an innovative vetting process to increase the number of programs available to ACL’s aging network that meet the Title III-D evidence-based criteria. This process has resulted in adding six new health promotion programs and three new programs for preventing falls. Another round of program reviews is currently underway.
  • To support the use of evidence-based and evidence-informed programming, service providers can learn about evidence-based programs that serve people with dementia and their caregivers by consulting a white paper drafted with funds from ACL – Translating Innovation to Impact: Evidence-based interventions to support people with Alzheimer’s disease and their Caregivers at home and in their Communities.
  • The Alzheimer’s Disease Supportive Services Program (ADSSP) funds competitive grants limited to States to expand the availability of evidence-based services that support persons with Alzheimer’s disease and related dementias (ADRD) and their family caregivers; and to create state-wide, person-centered, dementia-capable home and community-based service (HCBS) systems. Half of the funding must be used to provide direct services.
  • In 2015, ACL’s Aging and Disability Evidence-Based Programs and Practices initiative developed a toolkit, currently available on the ACL website, to help the public learn about evidence-based programs and interventions for implementation at the community-level and to promote their utilization.
  • The Model Systems Knowledge Translation Center (MSKTC) has worked with NIDILRR’s Model Systems grantees to develop and publish a variety of evidence-based factsheets about living with spinal cord injury, traumatic brain injury, or burn injury – written in language that all users can read and understand.
  • ACL’s ADSSP encourages the translation of dementia-specific interventions shown to improve the health and well-being of persons with Alzheimer’s disease and related dementias and/or their caregivers into community settings. Since the introduction of evidence-based programming into the agency’s Alzheimer’s and dementia programs, more than 20 evidence-based interventions have been introduced and continue to be translated to states and communities. ACL’s requirement for the inclusion of dementia-specific evidence-based interventions is demonstrated in the 2018 funding opportunity announcement entitled Alzheimer’s Disease Programs to States and Communities.
  • To build the evidence base, ACL funded cooperative agreements for the development and testing of model approaches toward coordinated and comprehensive systems for enhancing and assuring the independence, integration, safety, health, and well-being of individuals with intellectual and developmental disabilities living in the community. In FFY 2018, ACL implemented an evaluation of these model programs to determine whether the models implemented across the sites improve the quality of life of individuals with developmental disabilities. Information from the evaluation will be used to inform future ACL funding announcements and the training and technical assistance ACL provides to communities serving individuals with intellectual and developmental disabilities, ensuring the integration of evidence into ACL’s programming. While the evaluation is not yet complete, initial findings about what works were integrated into the requirements of the funding announcement for the FY18 award cycle.
Score
8
Corporation for National and Community Service
  • CNCS adapted the evidence framework used by its Social Innovation Fund and ED’s Investing in Innovation Fund and incorporated it into the AmeriCorps State and National program’s FY16, FY17, and FY18 grant competitions.
  • In March 2015, CNCS released Phase I of the CNCS Evidence Exchange, a virtual repository of reports intended to help CNCS grantees and other interested stakeholders find information about evidence- and research-based national service and social innovation programs. Phase I includes a database of single-study reports with some additional descriptive information about each study, as well as a systematic review of the national service evidence base. Phase II, in FY16, added studies as grantees completed their independent evaluations and submitted reports to CNCS. In FY17, CNCS focused on disseminating final reports as studies were completed and on ensuring that the functionality of the site made the information as accessible as possible. In FY18, CNCS focused on enhancing the search function as more reports have been added.
Score
9
Millennium Challenge Corporation
  • MCC uses a rigorous evidence framework to make every decision along the investment chain, from country partner eligibility to sector selection to project choices. MCC uses common, evidence-based selection criteria, generated by independent, objective third parties, to ensure objectivity in country selection for grant awards. To be eligible for selection, countries must first pass the MCC 2018 Scorecard – a collection of 20 independent, third-party indicators that objectively measure a country’s policy performance in the areas of economic freedom, investing in people, and ruling justly. Both the scores for all countries and the criteria for assessing performance based on those scores are reported publicly in a fully transparent process. A new application was recently created to make the indicator scorecards more easily accessible and viewable on mobile devices to further the use of the scorecards’ evidence. The criteria for passing the 2018 Scorecard are applied universally to all low- and lower-middle-income candidate countries. MCC’s Board of Directors then considers three key factors for selecting countries: (1) a country’s performance on the 2018 Scorecard; (2) the opportunity to reduce poverty and generate economic growth; and (3) the availability of funds. In the case of subsequent compacts, MCC also considers the partnership and performance of the first compact, measured in part by M&E, in addition to the above three criteria. (An in-depth description of the country selection procedure can be found in the annual Selection Criteria and Methodology report; a simplified sketch of the scorecard pass rule follows below.)
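  • The following is a minimal sketch of the scorecard pass rule described in MCC’s published Selection Criteria and Methodology reports: a country generally must perform above the threshold on at least half of the 20 indicators, pass the Control of Corruption “hard hurdle,” and pass either the Political Rights or Civil Liberties indicator. The code below is an illustration under those assumptions, not MCC’s official implementation; the data it would consume are hypothetical.

        # Illustrative sketch (Python) of MCC's scorecard pass rule, assuming the
        # criteria summarized above; the annual Selection Criteria and Methodology
        # report is the authoritative statement of the rule.
        def passes_scorecard(scores: dict[str, bool]) -> bool:
            """scores maps each of the 20 indicators to True when the country
            performs above that indicator's pass threshold."""
            passes_half = sum(scores.values()) >= len(scores) / 2
            corruption_hurdle = scores["Control of Corruption"]  # hard hurdle
            democratic_hurdle = scores["Political Rights"] or scores["Civil Liberties"]
            return passes_half and corruption_hurdle and democratic_hurdle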
  • Then, to determine the sector(s) on which an MCC program will focus, MCC’s Economic Analysis (EA) division undertakes a constraints-to-growth diagnostic to identify the binding constraints to economic growth in a country. Finally, to determine the individual projects in which MCC will invest in a given sector, MCC’s EA division combines root cause analysis with cost-benefit analysis to identify the investments that will have the greatest development impact and return on MCC’s investment (see the illustration of that cost-benefit test below).
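  • As a worked illustration of that cost-benefit test: MCC summarizes each investment’s expected return as an economic rate of return (ERR), conventionally defined as the discount rate at which the net present value (NPV) of the project’s benefit and cost streams equals zero. The notation below is a simplified sketch, not MCC’s official formulation; MCC has historically applied a hurdle rate of 10 percent.

        NPV(r) = \sum_{t=0}^{T} \frac{B_t - C_t}{(1 + r)^t},
        \qquad ERR = r^* \text{ such that } NPV(r^*) = 0

    where B_t and C_t are the estimated benefits and costs in year t over the project horizon T. Under this convention, an investment passes the test when its ERR meets or exceeds the hurdle rate.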
  • MCC’s model is based on a set of core principles deemed essential for development assistance to be effective – good governance, country ownership, focus on results, and transparency. In pursuing these, MCC has created a Principles into Practice series, which describes MCC’s experience operationalizing these principles in sector-level investments. Each report details MCC’s implementation experience, the lessons learned, and how the Agency is applying that learning to current and future investments. This portfolio-wide meta-analysis offers instructive learning for both internal and external audiences. MCC made a concerted effort to better disseminate these reports in FY18 through panels, events, and conferences, including the American Evaluation Association.
  • All evaluation designs, data, reports, and summaries are made publicly available on MCC’s Evaluation Catalog. In FY18, MCC is launching a new product to better capture and disseminate the results and findings of its independent evaluation portfolio. New “Evaluation Briefs” will be produced for each evaluation and will offer a user-friendly, systematic format to better capture and share the relevant evidence and learning from MCC’s independent evaluations. These accessible products will take the place of MCC’s Summaries of Findings. Evaluation Briefs will be published on the Evaluation Catalog and will complement the many other products published for each evaluation, including evaluation designs, microdata, survey questionnaires, baseline findings, interim reports, and final reports from the independent evaluator.
  • In FY18, MCC conducted internal research and analysis to understand where and how its published evaluations, datasets, and knowledge products are utilized. This effort underscores MCC’s commitment to transparency and learning as MCC seeks to widen its understanding of the use of the evidence it produces. The results of this analysis will guide future efforts on evidence-based learning, such as which sectors MCC prioritizes for evidence generation and publication and what types of products best communicate MCC’s evidence and learning. The above-described Evaluation Briefs are in part a result of MCC’s findings around evaluation user metrics. MCC finalized baseline metrics around evidence and evaluation utilization in April 2018 and is continuing to track global use of its knowledge products on a quarterly basis, with a goal of expanding the base of users of MCC’s evidence and evaluation products.
  • In FY18, MCC sought to strengthen its outreach and dissemination of results in more innovative ways. Following on MCC’s first evidence workshop in El Salvador in 2016, MCC worked closely with the country-led implementation unit in El Salvador (FOMILENIO II), the President’s Technical Secretariat, and the Abdul Latif Jameel Poverty Action Lab (J-PAL) to organize M&E trainings with targeted content for varying levels of government officials. Over the course of three months, over 100 participants from across the Government of El Salvador learned the foundations of evidence-based policymaking and of designing and implementing evaluations. FOMILENIO II, MCC, and the Technical Secretariat hosted focus groups in June 2018 to develop an action plan for incorporating evaluation work into policy decisions.
  • To further bring attention to MCC’s evaluation and evidence, MCC publishes a quarterly evaluation newsletter called Statistically Speaking. This newsletter highlights recent evidence and learning from MCC’s programs with a special emphasis on how MCC’s evidence can offer practical policy insights for policymakers and development practitioners in the United States and in partner countries. It also seeks to familiarize a wider audience with the evidence and results of MCC’s investments.
Score
6
Substance Abuse and Mental Health Services Administration
  • There is great diversity across SAMHSA programming, ranging from community-level prevention activities to residential programs for pregnant and post-partum women with substance misuse issues. While this diversity allows SAMHSA to be responsive to a wide set of vulnerable populations, it limits the utility of a common evidence framework for the entire agency. Within Centers (the Center for Substance Abuse Prevention, the Center for Substance Abuse Treatment, and the Center for Mental Health Services), consistent evidence frameworks are in use and help to shape the process of grant-making (e.g., Center staff are familiar with the pertinent evidence base for their particular portfolios). At the programmatic level, staff review the state of the art in a particular topic area to facilitate grantee adoption and implementation of evidence-based practices (EBPs). While staff awareness of EBPs varies, a systematic approach to evidence classification remains to be developed. Most Center staff rely on the National Registry of Evidence-based Programs and Practices (NREPP) to identify evidence-based programs for grantee implementation.
  • Until 2018, SAMHSA regarded NREPP as the primary online, user-friendly tool for identifying evidence-based programs for grantee implementation. In January 2018, SAMHSA announced that it was “moving to EBP implementation efforts through targeted technical assistance and training that makes use of local and national experts and that will assist programs with actually implementing services….” At the same time, the Assistant Secretary for Mental Health and Substance Use outlined significant concerns with the rigor and effectiveness of NREPP, and reportedly terminated the contract with the organization running NREPP. It was stated that the Mental Health and Substance Use Policy Lab would now “play a central role in shaping SAMHSA’s efforts to bring more science to the evidence-based practices used in the prevention, treatment, and support services being provided by behavioral health practitioners and other clinicians.”
  • In April 2018, SAMHSA launched the Evidence-Based Practices Resource Center (Resource Center) that aims to provide communities, clinicians, policy-makers and others in the field with the information and tools they need to incorporate evidence-based practices into their communities or clinical settings. The Resource Center contains a collection of science-based resources, including Treatment Improvement Protocols, toolkits, resource guides, and clinical practice guidelines, for a broad range of audiences. Similarly, the Evidence-Based Practices (EBP) Web Guide features research findings and details about EBPs used to prevent and treat mental and substance use disorders. Stakeholders throughout the behavioral health field can use the EBP Web Guide to promote awareness of current intervention research and to increase the implementation and availability of EBPs.
  • In February 2018, SAMHSA published guidance for healthcare professionals and addiction treatment providers on appropriate prescribing practices for FDA-approved medications for opioid use disorder (OUD) and effective strategies for supporting the patients utilizing medication for the treatment of OUD.
  • In January 2018, SAMHSA announced it had released $12 million in funding to the American Academy of Addiction Psychiatry to begin the effort to utilize local expertise to provide technical assistance (TA) and training on scientifically based, evidence-based practices to combat the nation’s opioid crisis. The Opioid State Targeted Response TA program aims to provide TA on evidence-based practices across the spectrum of prevention, treatment, and recovery.
  • SAMHSA’s Funding Opportunity Announcements (FOAs) include universal language about using evidence-based practices, in a section entitled Using Evidence-Based Practices (EBPs). This language acknowledges that “EBPs have not been developed for all populations and/or service settings,” thus encouraging applicants to “provide other forms of evidence” that a proposed practice is appropriate for the intended population. Specifically, the language states that applicants should: (1) document that the EBPs chosen are appropriate for intended outcomes; (2) explain how the practice meets SAMHSA’s goals for the grant program; (3) describe any modifications or adaptations needed for the practice to meet the goals of the project; (4) explain why the EBP was selected; (5) justify the use of multiple EBPs, if applicable; and (6) discuss training needs or plans to ensure successful implementation. Lastly, the language includes resources the applicant can use to understand EBPs. Federal grants officers work in collaboration with the SAMHSA Office of Financial Resources to ensure that grantee funding announcements clearly describe the evidence standard necessary to meet funding requirements.
  • In 2011, based on the model of the National Quality Strategy, SAMHSA developed the National Behavioral Health Quality Framework (NBHQF). With the NBHQF, SAMHSA proposes a set of core measures to be used in a variety of settings and programs, as well as in evaluation and quality assurance efforts. The proposed measures are not intended to be a complete set of measures a payer, system, practitioner, or program may want to use to monitor the quality of its overall system or the care or activities it provides. SAMHSA encourages such entities to utilize these basic measures, as appropriate, as a consistent set of indicators of quality in behavioral health prevention, promotion, treatment, and recovery support efforts across the nation.
Score
9
U.S. Agency for International Development
  • USAID has a scientific research policy that sets out quality standards for research across the agency. USAID’s Program Cycle Policy includes specific evidence standards for decisions related to country strategic planning, project design, monitoring, evaluation, and learning. For example, USAID policy requires evidence and data to assess the development context, challenges, and opportunities in all of USAID’s country strategies. Similarly, all USAID projects must include a detailed analytical phase with findings documented in the Project Appraisal Document.
  • USAID is a member of the International Initiative for Impact Evaluations (3ie), which funds impact evaluations and systematic reviews that generate evidence on what works in development programs and why. Rather than creating a separate “what works” clearinghouse, USAID has chosen to work with 3ie and other development partners to support 3ie’s database of impact evaluations relevant to development topics (including over 4,500 entries to date), knowledge gap maps, and systematic reviews that pull the most rigorous evidence and data from across donors. 3ie also houses a collection of policy briefs that examine findings from its database of impact evaluations on overarching policy questions to help policymakers and development practitioners improve development impact through better evidence. Various USAID bureaus or OUs have funded 3ie to produce evidence gap maps on topics such as science, technology, innovation, and partnership; state-society relations; and productive safety nets.
  • USAID technical bureaus provide guidance to USAID staff based on evidence of what works by sector that applies to all relevant agency programs. USAID’s Bureau for Democracy, Conflict and Humanitarian Assistance (DCHA), for example, includes the Center of Excellence on Democracy, Rights, and Governance (DRG), which publishes evidence-based standards for what works in this field and established the Evaluating Democracy and Governance Effectiveness (EDGE) Initiative, to supply and apply sophisticated tools to measure the impact of democracy, human rights, and governance work, and infuse evidence-based programmatic decision-making throughout the DRG portfolio. In another example, USAID’s Global Health Bureau has a strategic framework that presents details in Annex 1 on specific evidence-based strategies, targets, and approaches for achieving goals within each technical area under the health priorities.
  • Several USAID bureaus synthesize sector-specific evidence from evaluations and other sources to summarize key findings and identify gaps in knowledge that then inform sector learning agendas. For example, the Bureau for Food Security maintains Agrilinks, a collection of evidence related to what works in agricultural development and food security and an interactive community where USAID staff and partners can contribute content. Established in 2011, Agrilinks has become the go-to source for informative discussions on development topics and the latest information furthering resilience, food security, and poverty reduction.
Score
10
U.S. Department of Education
  • ED’s evidence standards for its grant programs, as outlined in the Education Department General Administrative Regulations (EDGAR), build on ED’s What Works Clearinghouse™ (WWC) evidence standards. ED uses these same evidence standards in all of its discretionary grant competitions that use evidence to direct funds to applicants proposing to implement projects that have evidence of effectiveness and/or to build new evidence through evaluation (see Question #8 below for more detail).
  • As noted above, EPG has coordinated the development of revised evidence definitions and related selection criteria for competitive programs that align with the Every Student Succeeds Act (ESSA) and streamline and clarify provisions for grantees. These revised definitions align with ED’s suggested criteria for states’ implementation of ESSA’s four evidence levels, included in ED’s non-regulatory guidance, Using Evidence to Strengthen Education Investments. ED also developed a fact sheet to support internal and external stakeholders in understanding the revised evidence definitions; this document has been shared through multiple channels, including the Office of Elementary and Secondary Education’s ESSA technical assistance page for grantees.
  • The Office of Innovation and Improvement (OII), in coordination with staff from IES and members of EPG, developed Evidence Requirements Checklists for the Education Innovation and Research (EIR) mid-phase and expansion grant competitions. The checklists are intended to help applicants determine what studies to include on the Evidence Form with their application for the purposes of meeting the evidence requirement. Applicants can use the checklist as an informal worksheet to understand the evidence criteria used to review studies and learn about additional evidence-related resources available online. OII also worked with IES to develop two presentations to further support applicants in submitting evidence that meets the established standards: Understanding the Evidence Definitions Used for U.S. Department of Education Programs and Using the What Works Clearinghouse (WWC) to Identify Strong or Moderate Evidence of Positive Effects from Education Interventions.
  • Additionally, in 2013, IES and the National Science Foundation issued a joint report that describes six types of research studies that can generate evidence about how to increase student learning. These principles are based, in part, on the research goal structure and expectations of IES’s National Center for Education Research (NCER) and National Center for Special Education Research (NCSER). NCER and NCSER communicate these expectations through their Requests for Applications and webinars that are archived on the IES website and available to all applicants.
  • ED’s What Works Clearinghouse™ identifies studies that provide valid and statistically significant evidence of effectiveness of a given practice, product, program, or policy (referred to as “interventions”), and disseminates summary information and reports on the WWC website. The WWC has reviewed more than 10,000 studies, which are available in a searchable database, and has committed to reviewing all publicly available evaluation reports generated under i3 grants. The WWC released four new Practice Guides in 2016 and 2017. WWC Practice Guides are based on reviews of research and the experience of practitioners and are designed to address challenges in classrooms and schools.
  • In 2017, the WWC released the version 4.0 Standards and Procedures Handbooks, which take into account improvements in research methodology. More recently, in early 2018, the WWC launched an online training system for the version 4.0 standards and procedures. The online training system is available to anyone, anywhere, and at any time – and is free to everyone. To make information about statistically significant evidence of effectiveness available to the public more quickly, the WWC has improved its suite of online reviewer tools.
  • As noted above, in 2016, ED released non-regulatory guidance, Using Evidence to Strengthen Education Investments, which recommends a five-step decision-making process, built on common evidence definitions, to promote continuous improvement and support better outcomes for students. This guidance has served as a framework for ED’s technical assistance related to implementation of ESSA’s evidence provisions, such as the State Support Network’s community of practice on evidence-based practices, which supports nine states in selecting interventions. ED has conducted outreach to build awareness of the guidance with stakeholder groups. In addition, ED included tailored guidance for these five steps in its guidance on Title II, Part A, and Title IV of the ESEA. These resources supplement ED’s substantial evidence-focused technical assistance efforts.
Score
7
U.S. Dept. of Housing & Urban Development
  • PD&R provides the public, policymakers, and practitioners with evidence of “what works” primarily through HUD USER, a portal and web store for program evaluations, case studies, and policy analysis and research; the Regulatory Barriers Clearinghouse; and through initiatives such as Innovation of the Day, Sustainable Construction Methods in Indian Country, and the Consumer’s Guide to Energy-Efficient and Healthy Homes. This content is designed to provide current policy information, elevate effective practices, and synthesize data and other evidence in accessible formats such as Evidence Matters. Through these resources, researchers and practitioners can see the full breadth of work on a given topic (e.g., rigorous established evidence, case studies of what’s worked in the field, and new innovations currently being explored) to inform their work.
  • In FY17, HUD developed and piloted a new standardized data collection and reporting framework for its discretionary grant programs called Standards for Success. The framework consists of a repository of data elements that participating programs use in their grant reporting. The repository of data elements establishes common definitions and measures across programs for greater analysis and coordination of services. HUD designed and made available an online data collection and reporting tool for grants participating in the pilot. This tool enables grantees to directly enter the applicable data elements as well as upload files. All data are consolidated for recordkeeping and analysis. HUD’s broader adoption of this framework would establish a common basis for determining grant performance, return on investment, and funding decisions.
Score
9
U.S. Department of Labor
  • DOL uses the Cross-agency Federal Evidence Framework for evaluation planning and dissemination. Additionally, DOL collaborates with other agencies (HHS, ED’s IES, NSF, CNCS) on refining cross-agency evidence guidelines and developing technological procedures to link and share reviews across clearinghouses. The framework conveys the categories of evaluations, the quality review of evaluation methodologies and results, and the use of evaluation findings, and is accepted department-wide.
  • DOL’s Clearinghouse for Labor Evaluation and Research (CLEAR) is an internet-based evidence clearinghouse. CLEAR’s goal is to make research on labor topics more accessible to practitioners, policymakers, researchers, and the public more broadly, so that it can inform their decisions about labor policies and programs. CLEAR identifies and summarizes many types of research, including descriptive statistical studies, outcome analyses, implementation studies, and causal impact studies. For causal impact studies, CLEAR assesses the strength of the design and methodology of studies that look at the effectiveness of particular policies and programs.
  • CLEAR reviews causal studies in a number of labor-related topic areas and assigns them a rating indicating the strength of their causal evidence. It provides an objective assessment and rating of the degree to which the research establishes the causal impact of the intervention on the outcomes of interest based on established causal evidence guidelines.
  • DOL uses the CLEAR evidence guidelines and standards to make decisions about discretionary program grants awarded using evidence-informed or evidence-based criteria. The published guidelines and standards are used to identify evidence-based programs and practices, to review studies to assess the strength of their causal evidence, and to conduct structured evidence reviews in a particular topic area or timeframe. Requests for proposals also indicate that CLEAR standards should be applied to all CEO evaluations.
  • In addition to CLEAR, ETA maintains a user-friendly technical assistance tool to promote state and local service providers’ use of evidence-based interventions: Workforce System Strategies, a comprehensive database of over 1,000 profiles that summarize a wide range of findings from reports, studies, technical assistance tools, and guides that support service improvement; program management and operations; education and training; employment, retention, or advancement activities; and other workforce development-related topics.
  • DOL’s Evaluation Policy Statement formalizes the principles that govern all program evaluations in DOL, including methodological rigor, independence, transparency, ethics, and relevance. CLEAR standards provide DOL a tool to understand and make transparent how it defines rigor, and to assess the extent to which DOL studies meet the highest standards of methodological rigor.