2018 Federal Standard of Excellence


U.S. Agency for International Development

Score
8
Leadership

Did the agency have a senior staff member(s) with the authority, staff, and budget to evaluate its major programs and inform policy decisions affecting them in FY18? (Example: Chief Evaluation Officer)

  • The United States Agency for International Development’s (USAID) Office of Learning, Evaluation, and Research (LER) in the Bureau for Policy, Planning, and Learning (PPL) helps the agency build a body of evidence from which to learn and adapt programs. USAID is a decentralized agency, with evaluations commissioned by missions and offices based in countries around the world. LER provides guidance and builds staff capacity in monitoring, evaluating, and learning from the agency’s work. The Director of LER is a senior staff member with the authority, staff, and budget to ensure agency evaluation requirements are met, including that all projects are evaluated at some level and that decision-making is informed by evaluation and evidence. The LER Director oversaw approximately 27 staff and an estimated $8.9 million budget in 2018. At different times in the history of the office, the LER Director has been a senior political appointee, a senior Foreign Service officer, or a career civil servant. The position is a member of USAID’s Senior Leadership Group, which comprises the agency’s highest managerial and technical positions in Washington and overseas.
  • The LER Director and staff inform policy decisions across the agency by ensuring evaluation requirements in legislation are reflected in USAID policy and practice, attending senior-level decision-making meetings when evaluation is on the agenda, providing input into working groups, and reviewing statements, draft memos, and other policy products. For example, LER staff review Country Development Cooperation Strategies (CDCSs) prior to approval to ensure evidence is used to inform country development objectives. In another example, LER manages monitoring and evaluation policy at USAID, which is codified as part of USAID’s Program Cycle Operational Policy, otherwise known as Automated Directives System (ADS) 201. ADS 201 was adjusted after the Foreign Aid Transparency and Accountability Act of 2016 (FATAA) was passed to bring the agency’s monitoring and evaluation requirements into compliance with FATAA. For example, FATAA requires that agencies establish annual monitoring and evaluation objectives and timetables to plan and manage the process of monitoring, evaluating, analyzing progress, and applying learning toward achieving results. ADS 201 requires missions to develop a Performance Management Plan (PMP) that includes both a monitoring plan and an evaluation plan, which must be updated at least once a year. All Operating Units (OUs) must annually complete the Performance Plan and Report (PPR), which includes annual monitoring indicators and planned evaluations.
  • The majority of LER’s work involves providing training, tools, technical assistance, and guidance to staff on how to monitor, evaluate, and learn from USAID programs. LER staff design and manage a few high-priority evaluations at a time, such as the Evaluation of Sustained Outcomes in Basic Education, published in March 2018. LER also coordinates several cross-agency working groups organized to support learning champions and monitoring and evaluation specialists throughout the agency.
Score
8
Evaluation & Research

Did the agency have an evaluation policy, evaluation plan, and research/learning agenda(s) and did it publicly release the findings of all completed evaluations in FY18?

  • USAID has an agency-wide Evaluation Policy, published in 2011 and updated in October 2016 to incorporate changes in USAID’s Program Cycle Policy and to ensure compliance with FATAA. The 2016 policy updates evaluation requirements to simplify implementation and increase the breadth of evaluation coverage. The updates also seek to strengthen evaluation dissemination and utilization. The agency released a report in 2016 to mark the five-year anniversary of the policy. Over the last three fiscal years, USAID has completed nearly 500 evaluations (188 in FY15, 145 in FY16, and 161 in FY17).
  • LER works with Washington bureaus to develop annual evaluation action plans that review evaluation quality and use within each bureau and identify challenges and priorities for the year ahead. While the plans are optional, most bureaus participate. LER uses these plans to prioritize financial and technical assistance to help bureaus address challenges and as a source for agency-wide learning on improving evaluation quality and use. In addition, all USAID bureaus and missions must report annually on any planned, ongoing, or completed evaluations through what is known as the “Evaluation Registry.”
  • At USAID, learning, monitoring, and evaluation priorities are set by bureaus or OUs for the programs within each bureau’s area of responsibility. Many bureaus have a learning agenda for specific priorities within their bureau, with some learning agendas being specific to sectors or topics but shared agency-wide. Sometimes priorities are coordinated with other U.S. agencies when program responsibilities are shared. For example, the Feed the Future initiative, led by USAID with eleven agencies contributing to the effort, has a Handbook of Indicator Definitions to guide cross-agency monitoring efforts. A 2017 snapshot of recent USAID learning agendas is included as an annex in USAID’s Landscape Analysis of Learning Agendas report. PPL is also implementing a Program Cycle Learning Agenda (PCLA) to prioritize questions about how USAID’s Program Cycle policy is working in practice. PCLA questions include how staff perceive and value PPL capacity-building support around the Program Cycle, and whether the Program Cycle incentivizes programs that are based in evidence and managed adaptively through continuous learning.
  • Since September 2016, USAID multi-year CDCSs now require a learning plan that outlines how missions will incorporate learning into their programming — including activities such as regular portfolio reviews, evaluation recommendation tracking and dissemination plans, and other analytic processes — to better understand the dynamics of their programs and their country contexts. In addition to mission strategic plans, all projects and activities are now also required to have integrated monitoring, evaluation, and learning plans.
  • As a part of the USAID Transformation, USAID will prioritize supporting partner countries as they progress along their journey to self-reliance, taking increasing ownership over planning, financing, and implementing their own development agendas. USAID support will focus on building partner countries’ commitment and capacity to assess where countries are on this journey (using USAID’s self-reliance metrics), mobilize resources to finance development, and engage the private sector in collaborating to develop market-based solutions to development challenges. This will entail transforming USAID’s partnerships with developing countries to facilitate locally-led development, and defining the conditions under which countries achieving high degrees of self-reliance transition away from development assistance. In order to learn continuously as it develops this approach, USAID is creating a learning agenda around self-reliance to capture and share knowledge of what works, what doesn’t, and what gaps in policy and practice need to be addressed.
  • USAID has an internal evaluation registry that is updated on an annual basis to provide data on completed, ongoing, and planned evaluations, including evaluations planned to start anytime in the next three fiscal years. All final USAID evaluation reports are available on the Development Experience Clearinghouse, except for a small number of evaluations that are considered Sensitive But Unclassified. For FY15, FY16, and FY17, USAID created infographics that show where evaluations took place, across which sectors, and include short narratives that describe findings from selected evaluations and how that information informed decision-making.
  • Partnerships for Enhanced Engagement in Research (PEER) is an international grants program that funds scientists and engineers in developing countries who partner with U.S. government-funded researchers to address global development challenges. PEER supports the connection of international and American researchers to advance new solutions, innovations and approaches. The PEER program is designed to leverage federal science agency funding from NASA, NIFA, NIH, NOAA, NSF, Smithsonian Institution, USFS, USDA, and USGS by directly supporting developing country scientists who work in partnership with current or new colleagues supported by these U.S. government agencies. Technical areas include water resource management, climate change, biodiversity, agriculture, energy, disaster mitigation, nutrition, maternal and child health, and infectious diseases.
Score
10
Resources

Did the agency invest at least 1% of program funds in evaluations in FY18? (Examples: Impact studies; implementation studies; rapid cycle evaluations; evaluation technical assistance, and capacity-building)

  • In FY17, USAID missions and offices reported completing 161 evaluations with resources totaling approximately $42 million. In addition, they were actively managing another 286 ongoing evaluations, many that span more than one year, with total ongoing evaluation budgets estimated to reach almost $210 million. Overall spending on evaluations completed or ongoing in FY17 ($252 million) represents about 1.4% of USAID’s $17.6 billion FY17 program budget.
  • This amount does not include the budget for the Office of Learning, Evaluation, and Research which primarily focuses on monitoring, evaluation, and learning capacity building and technical assistance ($8.9 million in FY17) or the investment in the Demographic and Health Surveys (DHS) ($189 million total in FY13-FY18) or surveys funded by other sector programs that often make up some of the underlying data used in many evaluations.
Score
8
Performance Management / Continuous Improvement

Did the agency implement a performance management system with clear and prioritized outcome-focused goals and aligned program objectives and measures, and did it frequently collect, analyze, and use data and evidence to improve outcomes, return on investment, and other dimensions of performance in FY18? (Example: Performance stat systems)

  • USAID partners with the U.S. Department of State to jointly develop and implement clear strategic goals, strategic objectives, and performance goals, articulated in the FY 2018 – 2022 U.S. Department of State – USAID Joint Strategic Plan (JSP). Indicators measuring progress on strategic goals, strategic objectives, and performance goals are collected from across the agency, in part, through the annual Performance Plan and Report (PPR), and performance is reported externally through the Annual Performance Plan/Annual Performance Report (APP/APR) and the Agency Financial Report.
  • USAID also measures operations performance management to ensure that the agency achieves its development objectives and aligns resources with priorities. USAID’s Performance Improvement Officer (PIO) and USAID’s Program Management Improvement Officer (PMIO) lead agency efforts to use performance data for decision-making and improve performance and operational efficiency and effectiveness. For example, the PIO and PMIO coordinate tracking of Cross Agency Priority (CAP) Goal and Agency Priority Goal (APG) progress; leverage performance management reviews to conduct deep-dives into evidence; and oversee business process reviews and other program and management assessments to ensure that the agency more efficiently and effectively achieves its mission and goals. USAID reports on APG and CAP goal progress on www.performance.gov.
  • USAID missions develop CDCSs with clear goals and objectives, along with PMPs that identify expected results, performance indicators to measure those results, plans for data collection and analysis, and periodic reviews of performance measures to use data and evidence to adapt programs for improved outcomes.
Score
9
Data

Did the agency collect, analyze, share, and use high-quality administrative and survey data – consistent with strong privacy protections – to improve (or help other entities improve) federal, state, and local programs in FY18? (Examples: Model data-sharing agreements or data-licensing agreements; data tagging and documentation; data standardization; open data policies)

  • USAID has an open data policy which, in addition to setting requirements for how USAID data is tagged, submitted, and updated, also established the Development Data Library (DDL) as the agency’s repository of USAID-funded, machine readable data created or collected by the agency and its implementing partners. The DDL, as a repository of structured and quantitative data, complements the Development Experience Clearinghouse which publishes qualitative reports and information.
  • To improve linkages and break down silos, USAID continues to develop the Development Information Solution (DIS) – an enterprise-wide management information system that will enable USAID to collect, manage, and visualize performance data across units, along with budget and procurement information, to more efficiently manage and execute programming. Initial releases of DIS work streams began in Q3 FY18, on an accelerated timeline that calls for full implementation of core functionality by the end of 2019, followed by enhancements.
  • The United States is a signatory to the International Aid Transparency Initiative (IATI) – a voluntary, multi-stakeholder initiative that created a data standard for publishing foreign assistance spending data and allowing comparison across publishers. USAID continues to improve and add to its published IATI data. Published location data for USAID projects can be viewed and queried on D-Portal for Mali, Lebanon, Colombia, Mozambique, Ethiopia, Bangladesh, the Democratic Republic of the Congo, West Bank/Gaza, Jordan, and Georgia.
  • The USAID GeoCenter uses data and analytics to improve the effectiveness of USAID’s development programs by geographically assessing where resources will maximize impact. The GeoCenter team works directly with field missions and Washington-based bureaus to integrate geographic analysis into the strategic planning, design, monitoring, and evaluation of USAID’s development programs. The GeoCenter also provides important data-centered trainings to USAID staff.
  • The USAID Data Services team is dedicated to improving the usage of USAID data and information, so that the agency continues to ensure its development outcomes are supported by evidence. Through USAID Data Services, the development community has direct access to more than 100 sources of international development data via the International Data and Economic Analysis (IDEA) website and Foreign Aid Explorer, a site that reports comprehensively on U.S. government foreign assistance, from 1946 to the present.
  • USAID uses data and evidence to inform policy formulation, strategic planning, project design, project management and adaptation, program monitoring and evaluation, and learning what works, through a framework called the Program Cycle, which underwent major revisions in September 2016.
  • USAID’s Monitoring Country Progress (MCP) system is an empirical analytical system which tracks and analyzes country progress to facilitate country strategic planning.
  • USAID also publishes spending data alongside program results on the Dollars to Results (D2R) page of the USAID website. D2R provides illustrative information on USAID’s impact around the world by linking annual spending to results. USAID updated D2R in FY17 to include data on all of the countries where USAID works.
  • USAID’s Privacy Program discusses policies and practices for protecting personally identifiable information (PII) and data.
Score
9
Common Evidence Standards / What Works Designations

Did the agency use a common evidence framework, guidelines, or standards to inform its research and funding decisions and did it disseminate and promote the use of evidence-based interventions through a user-friendly tool in FY18? (Example: What Works Clearinghouses)

  • USAID has a scientific research policy that sets out quality standards for research across the agency. USAID’s Program Cycle Policy includes specific evidence standards for decisions related to country strategic planning, project design, monitoring, evaluation, and learning. For example, USAID policy requires evidence and data to assess the development context, challenges, and opportunities in all of USAID’s country strategies. Similarly, all USAID projects must include a detailed analytical phase with findings documented in the Project Appraisal Document.
  • USAID is a member of the International Initiative for Impact Evaluations (3ie), which funds impact evaluations and systematic reviews that generate evidence on what works in development programs and why. Rather than creating a separate “what works” clearinghouse, USAID has chosen to work with 3ie and other development partners to support 3ie’s database of impact evaluations relevant to development topics (including over 4,500 entries to date), knowledge gap maps, and systematic reviews that pull the most rigorous evidence and data from across donors. 3ie also houses a collection of policy briefs that examine findings from its database of impact evaluations on overarching policy questions to help policymakers and development practitioners improve development impact through better evidence. Various USAID bureaus or OUs have funded 3ie to produce evidence gap maps on topics such as: science, technology, innovation, and partnership; state-society relations; and productive safety nets.
  • USAID technical bureaus provide guidance to USAID staff based on evidence of what works by sector that applies to all relevant agency programs. USAID’s Bureau for Democracy, Conflict and Humanitarian Assistance (DCHA), for example, includes the Center of Excellence on Democracy, Rights, and Governance (DRG), which publishes evidence-based standards for what works in this field and established the Evaluating Democracy and Governance Effectiveness (EDGE) Initiative, to supply and apply sophisticated tools to measure the impact of democracy, human rights, and governance work, and infuse evidence-based programmatic decision-making throughout the DRG portfolio. In another example, USAID’s Global Health Bureau has a strategic framework that presents details in Annex 1 on specific evidence-based strategies, targets, and approaches for achieving goals within each technical area under the health priorities.
  • Several USAID bureaus synthesize sector-specific evidence from evaluations and other sources to summarize key findings and identify gaps in knowledge that then inform sector learning agendas. For example, the Bureau for Food Security maintains Agrilinks, a collection of evidence related to what works in agricultural development and food security and an interactive community where USAID staff and partners can contribute content. Established in 2011, Agrilinks has become a go-to source for informative discussions on development topics and the latest information furthering resilience, food security, and poverty reduction.
Score
10
Innovation

Did the agency have staff, policies, and processes in place that encouraged innovation to improve the impact of its programs in FY18? (Examples: Prizes and challenges; behavioral science trials; innovation labs/accelerators; performance partnership pilots; demonstration projects or waivers with strong evaluation requirements)

  • USAID established the U.S. Global Development Lab (the Lab) in 2014 to increase the application of science, technology, innovation, and partnerships to extend the agency’s development impact in helping to end extreme poverty. In FY17, the Lab’s budget was $72 million, and there were 106 direct-hire staff (civil service and Foreign Service personnel) on board. The Lab comprises a diverse and specialized staff of scientists, engineers, technology and partnership experts, former venture capitalists, and program and administrative staff who work closely with colleagues across the agency, bringing together a diverse set of partners to discover, test, and scale breakthrough innovations to solve development challenges faster, cheaper, and more sustainably. The Lab is home to Development Innovation Ventures; the agency’s Grand Challenges for Development; the Higher Education Solutions Network; the Innovation Design and Advisory Team; the Applied Innovation Team; and the Monitoring, Evaluation, Research and Learning Innovations program (MERLIN), which sources, co-designs, implements, and tests solutions that innovate on traditional approaches to monitoring, evaluation, research, and learning. Two relevant innovations being tested are Developmental Evaluation, which aims to provide ongoing feedback to managers on implementation through an embedded evaluator, and Rapid Feedback, which allows implementers to test various methods to reach certain targeted results more quickly than through traditional midterm or final evaluations.
  • In the past six years, through the Global Development Lab, USAID and its partners have launched ten Grand Challenges for Development (GCD): Saving Lives at Birth (2011), All Children Reading (2011), Powering Agriculture: An Energy Grand Challenge for Development (2012), Making All Voices Count (2012), Securing Water for Food (2013), Fighting Ebola (2015), Combating Zika and Future Threats (2016), Scaling Off-Grid Energy (2016), Ensuring Effective Health Supply Chains (2017), and Creating Hope in Conflict: A Humanitarian Grand Challenge (2018). GCDs are robust partnerships that leverage each partner’s strengths to engage new solvers through incentive prizes, challenge grant funding, and crowdsourcing – capturing lessons, accelerating support services, and generating awareness – in order to identify the most promising solutions, test them, and scale those that are proven to work. Across the Grand Challenges for Development portfolio, partners have jointly committed over $508 million ($140 million from USAID) in grants and technical assistance for over 450 innovators in 70 countries. To date, more than $154 million in follow-on funding has been catalyzed from external sources, a key measure of success.
  • USAID is at the forefront of distilling the steps needed to execute open innovation competitions such as challenges and prizes that pay for results. USAID’s process is captured in a series of Tools for Innovation Programming (available only internally), which serves as the backbone of the GSA Challenges and Prizes Toolkit, as well as in Pay for Results in Development: A Primer for Practitioners, which released its third edition in November 2017.
  • Development Innovation Ventures (DIV) is USAID’s tiered, evidence-driven open innovation program. DIV awards grants for innovative solutions to any development challenge, on the basis of rigorous evidence of impact, cost-effectiveness, and a pathway to scale via the public and/or private sectors. DIV awards funding across three stages, ranging from under $200,000 for testing early-stage innovations to up to $15 million for scaling evidence-backed innovations. The DIV model is designed to source breakthrough solutions, to minimize risk and maximize impact by funding according to outcomes and milestones, to rigorously evaluate impact and cost-effectiveness, and to scale proven solutions. Since 2010, DIV has supported over 185 innovations in 45 countries with approximately $104 million. In addition to mobilizing external financing of $589 million, DIV grantees have directly impacted over 30 million beneficiaries with measurable outcomes, such as fewer infant deaths, higher literacy rates, and more households with affordable and reliable energy. Ongoing DIV grants continue to accelerate breakthrough innovation for transformative impact, with a refreshed emphasis on advancing local innovation and entrepreneurship in partnership with USAID missions and the rest of the agency. USAID opened a call for new DIV applications in September 2018.
  • USAID also supports innovation through the external Global Innovation Fund (GIF), a private fund co-founded by USAID and based on the DIV model. Like DIV, GIF invests in social innovations to improve the lives of millions of people in the developing world, but, as a private fund, GIF is also able to provide debt and equity financing.
  • USAID partners with the Australian Department of Foreign Affairs and Trade (Australian Aid), the Korea International Cooperation Agency (KOICA), and the Bill & Melinda Gates Foundation to fund and promote the Global Innovation Exchange (the Exchange), a free, online platform that connects social entrepreneurs with the funding and other resources they need to be impactful. As of June 2018, the Exchange has 28,143 registered users; 7,320 innovations; 2,673 deals; and over $270 million in cumulative funding opportunities. USAID’s investment in the Exchange has provided a key platform not only for USAID’s business intelligence around innovation investment but also for the entire global industry. With the Exchange, USAID and others in development can see in one place all of the development innovations, where they are being tested, and their maturity curves, allowing USAID to make smarter innovation investments in the future. The platform has also tracked what has been funded where and what monies are accessible to domestic and global innovators, more rapidly connecting the right people, resources, and information at the right time. The platform won the 2017 ACT-IAC Government Impact Innovation of the Year award.
  • The Innovation Design and Advisory Team (iDesign) helps advance USAID’s culture of innovation and intrapreneurship through testing, application, and mainstreaming of innovative design and problem-solving processes. USAID’s Applied Innovation team works with programs and implementing partners, including contractors and grantees, to capture learning and accelerate innovations supported by USAID. The Applied Innovation team is working to expand innovation adoption across USAID’s programming, and test the theory that innovations can enhance development impact, save time and resources, and improve programmatic efficiencies.
  • USAID’s Higher Education Solutions Network (HESN) program is a partnership with seven competitively awarded universities working with partners worldwide. Leveraging nearly equal investments from each institution, the universities have established eight Development Labs that have built a global research network of 1,100 partners from 82 countries. Through HESN, USAID has been able to harness the ingenuity of students and faculty to create or test over 500 innovations, which have helped USAID missions reach their development goals and benefitted an estimated 16.7 million individuals in developing countries.
  • Feed the Future Innovation Labs are led by U.S. Title XII universities and are collaborative research programs between U.S. universities and host-country universities or national research institutions. The Feed the Future Innovation Labs are an integral component of USAID’s implementation of the U.S. Government Global Food Security Research Strategy through their leadership and implementation of research and capacity building.
  • LER’s Research and Development (R&D) Hub for Monitoring and Evaluation (M&E) helps agency staff determine the best fit of emerging M&E approaches to specific contexts and programs, and plays the role of connector by linking champions, conducting research on emerging M&E approaches, and documenting those that have been helpful in various circumstances. Approaches include complexity-aware M&E, context monitoring, monitoring without indicators, and M&E for adaptive management.
  • LER’s Global Learning for Adaptive Management (GLAM) project aims to enable adaptive management through access to, use of, and learning from better and more timely monitoring and evaluation evidence. GLAM, which was awarded in early 2018, is co-funded by the UK Department for International Development (DFID) and USAID. GLAM will work with USAID and DFID field offices and partners to identify, facilitate, and support innovative monitoring and evaluation methods and tools for more adaptive programs, while conducting action research to advance learning on adaptive management.
  • USAID’s Acquisition and Assistance Lab (A&A) is an interconnected network of A&A staff advancing the agency’s mission through workforce development and the testing and scaling of innovations in acquisition and assistance. The A&A Lab was developed as a result of feedback from staff, senior leadership, and the partner community on the need to empower and enhance the A&A workforce, and to find innovative ways to do business. Since 2016, USAID has established five A&A Labs throughout the world, which have proven to be effective at fostering new ideas, sharing solutions, identifying workforce challenges, and advancing new approaches to traditional acquisition and assistance practices.
Score
8
Use of Evidence in Five Largest Competitive Grant Programs

Did the agency use evidence of effectiveness when allocating funds from its five largest competitive grant programs in FY18? (Examples: Tiered-evidence frameworks; evidence-based funding set-asides; priority preference points or other preference scoring; Pay for Success provisions)

  • In FY18, USAID’s top five accounts, as appropriated, included: Global Health Programs – USAID ($3.02 billion); Development Assistance ($3 billion) (see p. 9 of this Reference Guide); International Disaster Assistance ($2.7 billion); Economic Support Fund ($1.82 billion) (see p. 10 of this Reference Guide); and Assistance for Europe, Eurasia, and Central Asia ($750 million) (see p. 10 of this Reference Guide).
  • USAID is committed to using evidence of effectiveness in all of its competitive contracts, cooperative agreements, and grants, which comprise the majority of the agency’s work. USAID’s Program Cycle policy ensures evidence from monitoring, evaluation and other sources informs decisions at all levels, including during strategic planning, project and activity design, and implementation. The Program Cycle is USAID’s particular framing and terminology to describe a common set of processes intended to achieve more effective development interventions and maximize impacts. The Program Cycle acknowledges that development is not static and is rarely linear, and therefore stresses the need to assess and reassess through regular monitoring, evaluation, and learning.
  • In 2013, USAID reformed its policy for awarding new contracts to elevate past performance to comprise 20 to 30 percent of the non-cost evaluation criteria. For assistance, USAID does a “risk assessment” to review an organization’s ability to meet the goals and objectives outlined by the agency. This can be found in ADS 303, section 303.3.9. Contractor performance is guided by USAID operational policy ADS 302, section 302.3.8.7. As required in FAR Subpart 42.15, USAID must evaluate contractor performance using the Contractor Performance Assessment Reporting System (CPARS). Information in CPARS, while not available to the public, is available for Contracting Officers across the government to use in making determinations of future awards.
  • In June 2018, USAID revised its process for engaging senior leadership in the review of proposed high-dollar-value A&A awards (contracts, grants, and cooperative agreements) with a total estimated cost of $20 million or more. The new policy, called the Senior Obligation Alignment Review (SOAR) helps to ensure the agency is using innovative approaches to provide long-term sustainable outcomes and provides oversight on the use of mechanisms and proposed results. The review also contributes to more rigorous project design and establishes greater linkages between Washington and field activities. Factors under review include: approach, use of evidence, past activities, innovation, and sustainable results.
  • USAID seeks to increase collaboration, co-design, and co-financing approaches that promote innovation and the diversification of the partner base. This will result in a broader evidence base, more empowered partners, results-driven solutions, and stronger host-country capacity and self-reliance, to advance the agency’s overall goal of ending the need for foreign assistance. Based on an assessment of best practices and potential innovations, USAID will develop and implement procurement strategies and methodologies that achieve greater reliance on collaborative approaches and co-creation. In addition, USAID will train staff on co-creation and more-collaborative methods to engage our partners. The agency is measuring progress towards this goal through one of its Agency Priority Goals.
Score
N/A
Use of Evidence in Five Largest Non-Competitive Grant Programs

Did the agency use evidence of effectiveness when allocating funds from its five largest non-competitive grant programs in FY18? (Examples: Evidence-based funding set-asides; requirements to invest funds in evidence-based activities; Pay for Success provisions)

  • USAID does not administer non-competitive grant programs.
  • USAID does contribute funding to multilateral institutions known as Public International Organizations (PIOs): international organizations composed principally of countries, including the World Bank, the UN, and multi-donor funds such as the Global Fund. In these cases, USAID funds are part of overall U.S. government funding for these partner institutions and become subject to the monitoring and evaluation requirements of the receiving organization. For example, the Global Fund has a performance-based funding system, which bases funding decisions on a transparent assessment of results against time-bound targets. USAID’s ADS Chapter 308 defines PIOs and provides guidance on the due diligence required before awarding grants to them.
Score
7
Repurpose for Results

In FY18, did the agency shift funds away from or within any practice, program, or policy that consistently failed to achieve desired outcomes? (Examples: Requiring low-performing grantees to re-compete for funding; removing ineffective interventions from allowable use of grant funds; proposing the elimination of ineffective programs through annual budget requests)

  • USAID’s updated operational policy for planning and implementing country programs has incorporated a set of tools and practices called Collaborating, Learning, and Adapting (CLA). These include designing adaptable activities that build in feedback loops; using flexible implementing mechanisms; and adopting a management approach that includes consulting with partners about how implementation is evolving and what changes need to be made. Through the Program Cycle, USAID encourages managing projects and activities adaptively, responding to rigorous data and evidence and shifting design and/or implementation accordingly.
  • USAID uses rigorous evaluations to maximize its investments. A recent independent study found that 71 percent of USAID evaluations have been used to modify and/or design USAID projects. Below are a few recent examples where USAID has shifted funds and/or programming decisions based on performance:
    • Serbia | Democracy & Governance – An evaluation completed six years after the end of a community development and civic engagement project found that increasing citizen engagement in public policy and government responsiveness requires simultaneous targeting of citizens, the government, and private and civil sectors. Findings led to the launch of new activities focused on the rule of law, media strengthening, and business competitiveness incorporating the evaluation’s recommended multifaceted approach.
    • El Salvador | Education – An evaluation of an education project revealed the need to improve effectiveness by better matching interventions with individual target school and community needs to better serve out-of-school youth. As a result, the project changed how it is working with the Ministry of Education. It was also modified to address barriers to gender equality and social inclusion, to increase access to and quality of secondary-level education, and improve educational opportunities for school dropouts.
    • Cambodia | Agriculture – An impact evaluation showed significant increases in participating farmers’ gross commercial horticulture income and cropped area, but not in returns to land or economic productivity. Going forward, USAID agricultural programming in Cambodia will help organize farmers into cooperatives and strengthen farmer-buyer linkages to improve production and overall household income.
    • Afghanistan | Economic Growth – A midterm evaluation of a program that enables women to increase their participation in the formal economy found that 98 percent of interns placed in jobs report working in a women-friendly workplace, although it is too soon to show whether the program has helped businesses increase profits or hire more women. Going forward, the program plans to raise the age cap of job placement candidates, hold trade fairs featuring women-to-women networking, and better support microenterprises.
    • Ethiopia | Health – An evaluation of a project to improve the lives of vulnerable children and their families showed a significant improvement in educational attainment, health status, economic status, access to food and nutrition, and safer shelter. Future USAID efforts will further strengthen the capacities of regional and local government and civil society to serve these children and their families even better.
  • BalanceD MERL: One of the MERLIN mechanisms, BalanceD MERL, is ending early because evidence showed it was being used for traditional M&E activities (e.g., survey data collection and M&E system set-up) rather than for anything innovative. The decision to end the program early was made after two MOUs requested traditional support from BalanceD MERL and no further interest in using the mechanism to test innovations was found.
  • USAID’s Securing Water for Food: A Grand Challenge for Development (SWFF) selected the highest-potential water-for-food innovations and is providing grant funds and ongoing assistance to support business development. SWFF starts as a competition, but winners must continually show results to receive a new tranche of funding. To move forward, grantees must achieve technical and financial milestones, such as increased crop yields and total product sales. Of the 38 total awardees, 23 received Year 2 funding; 15 did not, because they did not meet the target number of end-users/customers in a cost-effective way and because their model was not deemed sustainable without USAID funding. By using milestone-based funding, SWFF has helped over 3.6 million farmers and other customers grow more than 4 million metric tons of food and has reduced water consumption in agriculture by more than 11.4 billion liters compared to traditional practices. For every $1,000 spent by the SWFF program, SWFF innovators have reached 156 customers and end users, produced 282 tons of crops, reduced water consumption by more than 832,000 liters, improved water management on 86 hectares of agricultural land, and generated more than $200 in sales. In addition, SWFF innovators have formed more than 300 partnerships and secured more than $16 million in leveraged funding.
Score
N/A
Other Evidence and Evaluation Efforts
  • USAID is exploring ways to expand the agency’s use of “pay for results” models of programming. Innovation is not just about products and services but also about the agency’s own procurement and programming models. Development impact bonds (DIBs) are a results-based financing mechanism in which investors provide upfront capital for an intervention and are repaid as results are achieved. USAID has launched two DIBs, one on maternal and child health in India and another on poverty alleviation in Africa. The 2017 Village Enterprise DIB aims to support the growth of sustainable small businesses in Africa, helping communities increase economic self-sufficiency and transition out of poverty. Through this DIB, USAID’s Development Innovation Ventures (DIV), the UK Department for International Development (DFID), and other funders are committing to pay for specific outcomes: Village Enterprise receives up-front funding in the form of working capital from socially motivated investors, along with the flexibility to adapt the program to maximize impact. Repayments by USAID and the other funders to the investors are conditioned on Village Enterprise delivering verifiable results such as improved income and consumption. The impact bond, valued at a total of $5.28 million, will allow Village Enterprise to scale its successful program, which has already helped 39,000 small businesses get off the ground. Village Enterprise’s approach is to create and sustain microenterprises by providing a small cash grant, business and financial literacy training, mentoring, and access to savings. By partnering with the private sector and maintaining a focus on results, USAID can remain a good steward of U.S. taxpayer dollars and advance agency efforts to reduce poverty and promote economic growth.
Visit Results4America.org