2018 Federal Standard of Excellence


Evaluation & Research

Did the agency have an evaluation policy, evaluation plan, and research/learning agenda(s) and did it publicly release the findings of all completed evaluations in FY18?

Score
9
Administration for Children and Families (HHS)
  • ACF’s evaluation policy addresses the principles of rigor, relevance, transparency, independence, and ethics and requires ACF program, evaluation, and research staff to collaborate. For example, the policy states, “ACF program offices will consult with OPRE in developing evaluation activities.” And, “There must be strong partnerships among evaluation staff, program staff, policy-makers and service providers.” ACF established its Evaluation Policy in November 2012, and published it in the Federal Register in August 2014.
  • ACF’s annual portfolio reviews, which are publicly available on the OPRE website, describe key findings from past and recent research and evaluation work, and how ongoing projects are addressing gaps in the knowledge base and answering critical questions in the areas of family self-sufficiency, child and family development, and family strengthening, including work related to child welfare, child care, Head Start, Early Head Start, strengthening families, teen pregnancy prevention and youth development, home visiting, self-sufficiency, welfare, and employment. These portfolio reviews describe how evaluation and evidence-building activities unfold in specific ACF program and topical areas over time and how current research and evaluation initiatives build on past efforts and respond to remaining gaps in knowledge.
  • Building on this assessment of the existing evidence base and the questions being answered by ongoing research, OPRE annually updates its research plans and proposes a research and evaluation spending plan to the Assistant Secretary. This plan covers both longer-term activities that build evidence over time as well as activities to respond to current administration priorities and provide information in the near term. This plan covers areas in which Congress has currently provided authority and funding to conduct research and evaluation.
  • ACF’s evaluation policy requires that “ACF will release evaluation results regardless of findings…Evaluation reports will present comprehensive findings, including favorable, unfavorable, and null findings. ACF will release evaluation results timely – usually within two months of a report’s completion.” ACF has publicly released the findings of all completed evaluations to date. In 2017, OPRE released nearly 100 publications. OPRE publications are publicly available on the OPRE website.
Score
7
Administration for Community Living
  • ACL has an agency-wide evaluation policy that reconfirms ACL’s commitment to conducting rigorous, relevant evaluations and to using evidence from evaluations to inform policy and practice. The evaluation policy addresses how ACL promotes coordination between evaluation staff and policymakers as well as stressing the importance of the involvement of policymakers in the development of evaluation questions. ACL’s evaluation policy stipulates that “ACL will release evaluation results regardless of the findings. Evaluation reports will describe the methods used, including strengths and weaknesses, and discuss the generalizability of the findings. Evaluation reports will present comprehensive results, including favorable, unfavorable, and null findings. ACL will release evaluation results timely – usually within six months of a report’s completion.”
  • All completed evaluation reports are posted on the ACL website (see also NIDILRR External Evaluation). Authorizing legislation for ACL programs also specifies that evaluations be conducted and that the results be made available to the public (e.g., Older Americans Act Title II, Section 206; Developmental Disabilities Assistance and Bill of Rights Act (DD Act) Title II, Section 210; Workforce Innovation and Opportunity Act Chapter III, Subtitle D, Section 169; Elder Justice Act Part II, Section 2044).
  • For an evaluation plan, ACL’s Office of Performance and Evaluation (OPE) submits a concept paper to the Principal Deputy Administrator and Acting Commissioner on Disabilities outlining proposed evaluation activities for each upcoming year. This plan reflects conversations between OPE staff, Agency leadership, and program staff regarding policy priorities. It describes how OPE will allocate its resources to answer identified evaluation questions and to provide sound evidence regarding how well programs are meeting their stated goals, as well as recommendations for program improvement.
  • While ACL has not completed a formal agency-wide learning agenda, ACL has a process for developing Center-specific learning agendas that will form the basis for an eventual agency-wide learning agenda to be completed and released in FY19. The process involves annual reviews with each ACL Center to support the generation and use of evaluation findings to inform agency strategies and decision making. Specifically, a series of interviews with Center Directors is conducted immediately prior to the development of Center funding proposals and includes discussion of the most important questions that need to be answered in order to improve program implementation and performance; ways to strategically prioritize these questions given the level of current understanding, available resources, feasibility, and other considerations; appropriate tools and methods to answer each question; and approaches for information dissemination that are accessible and useful to ACL leadership. ACL anticipates piloting this process in late fall of 2018.
  • The Long-Range Plan of the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR) is a published five-year agenda for advancing its research efforts (i.e., a learning agenda).
Score
8
Corporation for National and Community Service
  • CNCS considers its mission a set of empirical questions to be tested. As such, the agency uses the following core set of questions to organize its evidence building strategy each year:
    1. How do CNCS programs affect the individuals who serve (e.g., national service members and volunteers)?
    2. How do CNCS programs affect the individuals served by grantee and sponsor organizations (e.g., “beneficiaries”)?
    3. How do CNCS programs contribute to the civic health of communities?
    4. How can CNCS programs be used most effectively by communities to solve local challenges?
  • A comprehensive portfolio of research projects has been built to address these questions. As findings emerge, future studies are designed to continuously build the agency’s evidence base. The CNCS Office of Research & Evaluation relies on scholarship in relevant fields of academic study; a variety of research and program evaluation approaches including field, experimental, and survey research; multiple data sources including internal and external administrative data; and different statistical analytic methods.
  • The agency’s evidence-building strategy is updated annually based on input from agency leadership as well as emerging evidence from completed studies. This agenda is reflected in the CNCS Congressional Budget Justifications each year (see FY16 pp. 55-56; FY17 pp. 5-6, 55-56; and FY18 p. 3). CNCS’s Office of Research & Evaluation (R&E) coordinates the agency’s learning agenda, which includes building its evidence base and facilitating the use of evaluation to inform important decisions. To this end, the office conducts research and evaluation on CNCS service programs; helps build the capacity of agency-funded partners to conduct and understand evaluations; and facilitates evidence-based and evidence-informed grant-making.
  • A report synthesizing findings from FY16 and early FY17 research and evaluation studies conducted or sponsored by CNCS may be found here. More generally, CNCS creates four types of reports for public release: research reports produced directly by research and evaluation staff, research conducted by third-party research firms and overseen by research and evaluation staff, reports produced by CNCS-funded research grantees (see research competition for more information), and evaluation reports submitted by CNCS-funded program grantees. All reports completed and cleared internally are posted to the Evidence Exchange, an electronic repository for reports launched in September 2015. Quarterly analytics for new products created, number of reports posted, page views, and users are provided by the agency’s contractor.
  • In FY16, CNCS developed Evaluation Core Curriculum Courses, which are presented to its grantees through a webinar series and are available on the CNCS website along with other evaluation resources. The courses are designed to help grantees and other stakeholders easily access materials to aid in conducting or managing program evaluations. R&E staff supported workshops using these materials for Senior Corps grantees in July 2018 and AmeriCorps grantees in September 2018. In addition, according to an internal evaluation CNCS conducted with State Commissions regarding their use of Commission Investment Fund grants to improve their ability to conduct high-quality performance measurement and evaluation, having these CNCS resources facilitated implementation of the grant. As one commission explained, “One thing that definitely kept things running smoothly is that the resources—the two core curriculum courses, the performance measure and evaluation—having those already developed, ready to go…not having to develop new things from scratch and just being able to go directly to this is what theory of change is…having that ready to go was also really helpful in moving along.”
Score
9
Millennium Challenge Corporation
  • MCC’s Independent Evaluation Portfolio is governed by its publicly available Policy for Monitoring and Evaluation. This Policy requires all programs to develop and follow comprehensive M&E plans that adhere to MCC’s standards. The Policy was revised in March 2017 to ensure alignment with the Foreign Aid Transparency and Accountability Act of 2016. Pursuant to MCC’s M&E policy, every project must undergo an independent evaluation. This aspect of the policy makes MCC unique among US federal agencies and other bilateral donors.
  • Each comprehensive M&E Plan includes two main components. The monitoring component lays out the methodology and process for assessing progress towards the investment’s objectives. It identifies indicators, establishes performance targets, and details the data collection and reporting plan to track progress against targets on a quarterly basis. The evaluation component identifies and describes the evaluations that will be conducted, the key evaluation questions and methodologies, and the data collection strategies that will be employed. Each country’s M&E Plan represents the evaluation plan and learning agenda for that country’s set of investments.
  • To ensure appropriate quality and risk assessment and management of the independent evaluation portfolio, MCC M&E and its evaluation contractors also follow the Evaluation Management and Review Process Guidelines. To ensure timely release of independent evaluation materials, a public evaluation entry is created in the MCC Evaluation Catalog as soon as an Evaluation Design Report (EDR) is cleared by MCC management. This entry is populated with all subsequent evaluation materials as they become available, including questionnaires, a Baseline Report, and other corresponding documentation. Once an independent evaluation’s analytical report – an Interim or Final Report – is drafted, it is sent through MCC’s rigorous review process, which is governed by the Evaluation Management and Review Process Guidelines. At this time, findings and lessons learned are documented in a Summary of Findings, and all independent evaluations and reports are publicly reported on the MCC Evaluation Catalog. As of August 2018, MCC has contracted or is planning 198 independent evaluations. To date, 91 Interim and Final Reports have been finalized and released to the public.
  • For FY18, MCC has pursued a robust agency-wide, multi-year research and learning agenda around better use of its data and evidence for programmatic impact. DPE has prioritized learning around how MCC develops, implements, monitors, and evaluates the policy and institutional reforms (PIR) it undertakes alongside capital investments. The PIR learning agenda focuses on building better evidence and methodological guidance for economists and sector practices, in order to support expanded use of cost-benefit analysis (CBA) across the PIR investments MCC supports. The purpose is to make PIR investments more effective by holding them to the same investment criteria as other interventions MCC considers for investment; to make assumptions and risks more explicit for all investments that depend on improved policies or institutional performance; and to help inform the design of PIR programs so that they achieve a high economic rate of return.
  • MCC produces periodic reports that capture the results of MCC’s learning efforts in specific sectors and translate this learning into actionable evidence for future programming. At the start of FY18, MCC published a Principles into Practice report on its investments in roads; this report demonstrated MCC’s learning around the implementation and evaluation of its roads projects and critically assessed how MCC was changing its practice as a result of this learning. In FY18, MCC began additional Principles into Practice reports on its activities in the education and water, sanitation, and hygiene sectors.
  • In FY18, MCC initiated a new learning effort around its use of Root Cause Analysis (RCA). MCC uses RCA to examine the underlying drivers of binding constraints to growth. The Root Cause Analysis Working Group has been formed to generate evidence and recommendations on how MCC conducts this analysis. The group has reviewed MCC’s experience with root cause analysis across 11 compacts or threshold programs and identified possible areas for improvement. The Working Group is currently drafting guidance to help country teams with the RCA process and the use of various RCA tools. The Working Group is also exploring the need for sector-specific (e.g., power, education) approaches to RCA that reflect insights on examining drivers of constraints.
Score
7
Substance Abuse and Mental Health Services Administration
  • SAMHSA’s Evaluation Policy and Procedure (P&P), revised and approved in May 2017, provides guidance across the agency regarding all program evaluations. Specifically, the Evaluation P&P describes the demand for rigor, compliance with ethical standards, and compliance with privacy requirements for all program evaluations conducted and funded by the agency. The Evaluation P&P serves as the agency’s formal evaluation plan and includes a new process for the public release of final evaluation reports, including findings from evaluations deemed significant. The Evaluation P&P sets the framework for planning, monitoring, and disseminating findings from significant evaluations.
  • The Evaluation P&P requires Centers to identify research questions and appropriately match the type of evaluation to the maturity of the program. A new workgroup was formed in 2017, the Cross-Center Evaluation Review Board (CCERB), composed of Center evaluation experts, who began reviewing significant evaluations at critical milestones in the planning and implementation process, providing specific recommendations to the Center Director having the lead for the evaluation.
  • The CCERB worked with the four centers within SAMHSA (CSAP, CMHS, CSAT, and CBHSQ) to advise on, conduct, collaborate on, and coordinate all evaluation and data collection activities that occur within SAMHSA. CCERB staff provided support for program-specific and administration-wide evaluations. SAMHSA’s CMO also played a key role in reviewing evaluation proposals and clearing final reports.
  • Results from significant evaluations will be available on SAMHSA’s website, a new step SAMHSA took with its newly approved Evaluation P&P in the fall of 2017. As of July 2018, one summary was posted on the website – a process evaluation of the Safe Schools/Healthy Students (SS/HS) State Program. No other evaluation summaries are posted, including for any ongoing evaluation studies. Significant evaluations include those that have been identified by the Center Director as providing compelling information and results that can be used to make data-driven, evidence-based, and informed decisions about behavioral health programs and policy. The following criteria are used to determine whether an evaluation is significant: (1) whether the evaluation was mandated by Congress; (2) whether there are high-priority needs in states and communities; (3) whether the evaluation is for a new or congressionally mandated program; (4) the extent to which the program is linked to key agency initiatives; (5) the level of funding; (6) the level of interest from internal and external stakeholders; and (7) the potential to inform practice, policy, and/or budgetary decision-making.
  • CBHSQ is currently leading agency-wide efforts to build SAMHSA’s learning agenda. Through this process, SAMHSA has developed agency-wide Learning Agenda templates in the critical topic areas of opioids, serious mental illness, serious emotional disturbance, suicide, health economics and financing, and marijuana; learning agendas focused on other key topic areas, such as alcohol, are underway as well. Other topics, such as cross-cutting issues related to vulnerable populations, are interwoven through these research plans. Through this multi-phased process, CBHSQ is systematically collecting information from across the agency regarding research and analytic activities and organizing it into a guiding framework to be used for decision-making related to priorities and resource allocation. SAMHSA began this process in early 2017 and planned to complete it in the winter of 2018. The template for opioid abuse, the first topic the agency tackled in this effort and thus the most complete to date, has been used to determine research questions along with the current activities underway across the agency that are relevant to these areas. The template followed the construct outlined by OMB in the publication entitled Analytical Perspectives; Budget of the U.S. Government; Fiscal Year 2018.
  • SAMHSA’s Data Integrity Statement outlines how CBHSQ adheres to federal guidelines designed to ensure the quality, integrity, and credibility of statistical activities.
  • SAMHSA’s National Behavioral Health Quality Framework, aligned with the U.S. Department of Health and Human Services’ National Quality Strategy, is a framework to help providers, facilities, payers, and communities better track and report the quality of behavioral health care. These metrics are focused primarily on high-rate behavioral health events such as depression, alcohol misuse, and tobacco cessation, all of which impact health and health care management and thus affect a large swath of the U.S. population.
Score
8
U.S. Agency for International Development
  • USAID has an agency-wide Evaluation Policy, published in 2011 and updated in October 2016 to incorporate changes in USAID’s Program Cycle Policy and to ensure compliance with the Foreign Aid Transparency and Accountability Act (FATAA). The 2016 policy updates evaluation requirements to simplify implementation and increase the breadth of evaluation coverage. The updates also seek to strengthen evaluation dissemination and utilization. The agency released a report in 2016 to mark the five-year anniversary of the policy. Over the last three fiscal years, USAID has completed nearly 500 evaluations (188 in FY15, 145 in FY16, and 161 in FY17).
  • LER works with Washington bureaus to develop annual evaluation action plans that review evaluation quality and use within each bureau and identify challenges and priorities for the year ahead. While the plans are optional, most bureaus participate. LER uses these plans to prioritize financial and technical assistance to help bureaus address challenges and as a source for agency-wide learning on improving evaluation quality and use. In addition, all USAID bureaus and missions must report annually on any planned, ongoing, or completed evaluations in what is known as the “Evaluation Registry.”
  • At USAID, learning, monitoring, and evaluation priorities are set by bureaus or OUs for the programs within the bureaus’ areas of responsibility. Many bureaus have a learning agenda for specific priorities within their bureau, with some learning agendas being specific to sectors or topics but shared agency-wide. Priorities are sometimes coordinated with other U.S. agencies when program responsibilities are shared. For example, the Feed the Future initiative, led by USAID with eleven agencies contributing to the effort, has a Handbook of Indicator Definitions to guide cross-agency monitoring efforts. A 2017 snapshot of recent USAID learning agendas is included as an annex in USAID’s Landscape Analysis of Learning Agendas report. PPL is also implementing a Program Cycle Learning Agenda (PCLA) to prioritize questions about how USAID’s program cycle policy is working in practice. PCLA questions include how staff perceive and value PPL capacity building support around the Program Cycle, and whether the Program Cycle incentivizes programs that are based in evidence and managed adaptively through continuous learning.
  • Since September 2016, USAID multi-year CDCSs now require a learning plan that outlines how missions will incorporate learning into their programming — including activities such as regular portfolio reviews, evaluation recommendation tracking and dissemination plans, and other analytic processes — to better understand the dynamics of their programs and their country contexts. In addition to mission strategic plans, all projects and activities are now also required to have integrated monitoring, evaluation, and learning plans.
  • As a part of the USAID Transformation, USAID will prioritize supporting partner countries as they progress along their journey to self-reliance, taking increasing ownership over planning, financing, and implementing their own development agendas. USAID support will focus on building partner countries’ commitment and capacity to assess where countries are on this journey (using USAID’s self-reliance metrics), mobilize resources to finance development, and engage the private sector in collaborating to develop market-based solutions to development challenges. This will entail transforming USAID’s partnerships with developing countries to facilitate locally-led development, and to define the conditions under which countries achieving high degrees of self-reliance transition away from development assistance. In order to learn continuously as it develops this approach, USAID is creating a learning agenda around self-reliance to capture and share knowledge of what works, what does not, and what gaps in policy and practice need to be addressed.
  • USAID has an internal evaluation registry that is updated on an annual basis to provide data on completed, ongoing, and planned evaluations, including evaluations planned to start anytime in the next three fiscal years. All final USAID evaluation reports are available on the Development Experience Clearinghouse, except for a small number of evaluations that are considered Sensitive But Unclassified. For FY15, FY16, and FY17, USAID created infographics that show where evaluations took place, across which sectors, and include short narratives that describe findings from selected evaluations and how that information informed decision-making.
  • Partnerships for Enhanced Engagement in Research (PEER) is an international grants program that funds scientists and engineers in developing countries who partner with U.S. government-funded researchers to address global development challenges. PEER supports the connection of international and American researchers to advance new solutions, innovations, and approaches. The PEER program is designed to leverage federal science agency funding from NASA, NIFA, NIH, NOAA, NSF, the Smithsonian Institution, USFS, USDA, and USGS by directly supporting developing country scientists who work in partnership with current or new colleagues supported by these U.S. government agencies. Technical areas include water resource management, climate change, biodiversity, agriculture, energy, disaster mitigation, nutrition, maternal and child health, and infectious diseases.
Score
8
U.S. Department of Education
  • ED has a scientific integrity policy to ensure that all scientific activities (including research, development, testing, and evaluation) conducted and supported by ED are of the highest quality and integrity, and can be trusted by the public and contribute to sound decision-making. In January 2017, IES published “Evaluation Principles and Practices,” which describes the foundational principles that guide its evaluation studies and the key ways in which the principles are put into practice.
  • In addition, IES works with partners across ED, including through the EPG, to prepare and submit to Congress a biennial, forward-looking evaluation plan covering all mandated and discretionary evaluations of education programs funded under ESSA (see the FY18 plan here). IES and PPSS work with programs to understand their priorities, design appropriate studies to answer the questions being posed, and share results from relevant evaluations to help with program improvement. This serves as a research and learning agenda for ED.
  • ED’s FY 2017 Annual Performance Report and FY 2019 Annual Performance Plan includes a list of ED’s current evaluations in Appendix E, organized by topic. IES also maintains profiles of all its evaluations on its website, which include key findings, publications, and products. IES publicly releases all peer-reviewed publications from its evaluations on the IES website and also in the Education Resources Information Center (ERIC). IES announces all new evaluation findings to the public via a Newsflash and through social media (Twitter, Facebook, and YouTube). IES regularly conducts briefings on its evaluations for ED, the Office of Management and Budget, Congressional staff, and the public.
  • Finally, IES manages the Regional Educational Laboratory (REL) program, which supports districts, states, and boards of education throughout the United States to use research and evaluation in decision making. The research priorities are determined locally, but IES approves the studies and reviews the final products. All REL studies are made publicly available on the IES website.
Score
8
U.S. Dept. of Housing & Urban Development
  • PD&R has published a Program Evaluation policy that establishes core principles and practices of PD&R’s evaluation and research activities. The six core principles are rigor, relevance, transparency, independence, ethics, and technical innovation. In FY18, PD&R undertook an internal review of the principles and compliance with them to assess whether modifications were needed. The review highlighted points of tension in the application of standards, but PD&R found that the standards themselves did not require amendment.
  • PD&R’s evaluation policy guides HUD’s research planning efforts, known as research roadmapping. Key features of research roadmapping include reaching out to internal and external stakeholders through a participatory approach; making research planning systematic, iterative, and transparent; driving a learning agenda by focusing on research questions that are timely, forward-looking, policy-relevant, and leverage HUD’s comparative advantages and partnership opportunities; and aligning research with HUD’s strategic goals and areas of special focus. HUD also employs its role as convener to help establish frameworks for evidence, metrics, and future research. In FY18, PD&R staff is collaborating on an assessment of processes and procedures for identifying and executing in-house research projects at 15 federal agencies. This work will develop an approach to assess practice maturity and identify lessons to strengthen the value of PD&R’s in-house research efforts.
  • HUD’s original “Research Roadmap FY14-18” and “Research Roadmap: 2017 Update” constitute the core of HUD’s learning agenda. The roadmaps are strategic, long-term (five-year) plans for priority program evaluations and research to be pursued given a sufficiently robust level of funding. On the basis of the learning agenda and additional policy questions that emerge, HUD also develops annual evaluation plans that identify specific research priorities. Actual research activities are substantially determined by Congressional funding and guidance.
  • Under PD&R’s Program Evaluation Policy (p. 87950), all evaluations that meet standards of methodological rigor are published and disseminated in a timely fashion. Additionally, PD&R includes language in research and evaluation contracts that allows researchers to independently publish results, even without HUD approval, after not more than six months. PD&R has occasionally declined to publish reports that fell short of standards for methodological rigor. Completed evaluations and research are summarized in HUD’s Annual Performance Report (pp. 123–131) at the end of each fiscal year, and research reports are posted on PD&R’s website, HUDUSER.gov.
Score
10
U.S. Department of Labor
  • DOL has an Evaluation Policy Statement that formalizes the principles that govern all program evaluations in the department, including methodological rigor, independence, transparency, ethics, and relevance.
  • CEO works with each of the 12 operating agencies within DOL to create a learning agenda; these agendas are rolled up into a separate, agency-wide learning agenda (or Department evaluation plan). Learning agendas are updated every year. They highlight priority questions that the operating agencies would like to answer, and they serve as a catalyst for setting priorities, identifying questions, and conceptualizing studies that advance evidence in areas of interest to DOL agencies, the department, and the Administration.
  • CEO develops, implements, and publicly releases an annual DOL evaluation plan. The evaluation plan is based on the agency learning agendas as well as DOL’s Strategic Plan priorities, statutory requirements for evaluations, and Secretarial and Administration priorities. The evaluation plan includes the studies CEO intends to undertake in the next year using the set-aside dollars. Appropriations language requires the Chief Evaluation Officer to submit a plan to the U.S. Senate and House Committees on Appropriations outlining the evaluations that will be carried out by the Office using dollars transferred to CEO; the DOL evaluation plan serves that purpose. The 2017 plan was posted on the CEO website. The 2018 evaluation plan is also publicly available. The evaluation plan outlines evaluations that CEO will use its budget to undertake. CEO also works with agencies to undertake evaluations and evidence-building strategies to answer other questions of interest identified in learning agendas but not undertaken directly by CEO.
  • Once contracts are awarded for new evaluation studies, study descriptions are posted on the Current Studies page of CEO’s website to provide the public with information about studies currently underway including research questions and timelines for study completion and publication of results. All DOL reports and findings are publicly released and posted on the complete reports section of the CEO website. DOL agencies, such as ETA, also post and release their own research and evaluation reports.