2018 Federal Standard of Excellence


Millennium Challenge Corporation

Score
8
Leadership

Did the agency have a senior staff member(s) with the authority, staff, and budget to evaluate its major programs and inform policy decisions affecting them in FY18? (Example: Chief Evaluation Officer)

  • There are three key touch-points in each Millennium Challenge Corporation (MCC) program’s lifecycle where senior MCC leadership uses evidence to evaluate and inform policy decisions: Development, Decision, and Implementation stages.
    • In the Development stage, selection of projects and policy decisions are informed by the Department for Policy and Evaluation’s (DPE) Economic Analysis (EA) division. EA is headed by the Chief Economist whose role is to oversee and strengthen the economic evidence base used for program development, including economic growth diagnostics, root cause analyses, beneficiary analyses, and cost-benefit analyses. EA has a staff of 19 and an estimated FY18 budget of $707,000 in due diligence (DD) funds. EA’s analytical work provides the evidence base to determine which projects will have a sufficient return on investment so as to be funded by MCC.
    • This analytical work underpins the program logic for MCC’s investments and informs MCC’s Monitoring and Evaluation (M&E) division (also a part of DPE) on the primary outputs and outcomes that should be measured to assess the effects of MCC’s investments. M&E has a staff of 29 and an estimated FY18 budget of $26.2 million in DD funds. (Departments throughout the agency have requested a total of $62.6 million in DD funds for FY18). The M&E Managing Director (MD) is a career civil service position with the authority to execute M&E’s budget. The MD participates in technical reviews of proposed investments, as well as regular monitoring meetings that inform policy and investment decisions.
    • At the Decision stage, the input of both EA and M&E is provided to the Vice President for DPE (VP-DPE), to whom both divisions report. The VP-DPE and Chief Economist sit on MCC’s Investment Management Committee, where they perform the role of ensuring a rigorous evidence base for each investment before the submission of programs to the MCC Board of Directors for final approval. Therefore, both the Vice President (who is equivalent to an Assistant Secretary rank and reports directly to the CEO of MCC) and Chief Economist (who is equivalent to a Deputy Assistant Secretary, and is a competitively selected technical expert hired as a career civil servant) are senior leaders at MCC who have the authority, staff, and budget to evaluate MCC programs and inform policy decisions.
    • Once in Implementation, M&E DD resources are used to procure evaluation services from external independent evaluators to directly measure high-level outcomes to assess the attributable impact of MCC’s programs and activities. MCC sees its independent evaluation portfolio as an integral tool to remain accountable to stakeholders, demonstrate programmatic results, and promote internal and external learning. Through the evidence generated by monitoring and evaluation, the M&E MD, Chief Economist, and VP-DPE are able to continuously update estimates of expected impacts with actual impacts to inform future programmatic and policy decisions. In FY18, MCC began or continued comprehensive, independent evaluations for every Compact or Threshold project at MCC (a requirement found in Section 7.5.1 of MCC’s Policy for Monitoring and Evaluation). To date, MCC has already published more final evaluations this year than in any prior year, increasing its stock of published evaluations by 23% this fiscal year. All evaluation designs, data, reports, and summaries are available on MCC’s Evaluation Catalog.
  • MCC is a member of the Federal Inter-Agency Council on Evaluation Policy (ICEP), which is coordinated by the OMB’s evidence deputies. Given MCC M&E’s strong experience and leadership in areas of rigorous and transparent evaluations, in FY18 MCC was asked to deliver several presentations at ICEP meetings (including one on MCC’s experience and learning in the areas of evaluation microdata dissemination guidelines and practices, and another on transparency and evaluation publication policies), as well as to prepare one or more monthly or semi-monthly workshops for OMB’s evaluation training series in Summer 2018. Examples of MCC expert trainings include: (1) Project Evaluability; (2) Management and dissemination of evaluation microdata; and (3) Integration and dissemination of evaluation findings.
  • To remain abreast of ongoing research and policy dialogues outside of MCC—in academia, multilateral development banks, donors, and nongovernmental organizations—MCC recently established an Economic Advisory Council (EAC). With the aim of bringing thought leaders and experts into MCC to highlight technical advances in the field, innovations, and learning in economic development, the EAC will hold meetings semi-annually to solicit feedback and advice that will be shared across the agency for internal discussion and use. The EAC will be composed of approximately 20 members drawn from diverse backgrounds, balanced across institutions, economic sub-disciplines, and by their region of applied expertise.
Score
9
Evaluation & Research

Did the agency have an evaluation policy, evaluation plan, and research/learning agenda(s) and did it publicly release the findings of all completed evaluations in FY18?

  • MCC’s Independent Evaluation Portfolio is governed by its publicly available Policy for Monitoring and Evaluation. This Policy requires all programs to develop and follow comprehensive M&E plans that adhere to MCC’s standards. The Policy was revised in March 2017 to ensure alignment with the Foreign Aid Transparency and Accountability Act of 2016. Pursuant to MCC’s M&E policy, every project must undergo an independent evaluation. This aspect of the policy makes MCC unique among US federal agencies and other bilateral donors.
  • Each comprehensive M&E Plan includes two main components. The monitoring component lays out the methodology and process for assessing progress towards the investment’s objectives. It identifies indicators, establishes performance targets, and details the data collection and reporting plan to track progress against targets on a quarterly basis. The evaluation component identifies and describes the evaluations that will be conducted, the key evaluation questions and methodologies, and the data collection strategies that will be employed. Each country’s M&E Plan represents the evaluation plan and learning agenda for that country’s set of investments.
  • To ensure appropriate quality and risk assessment and management of the independent evaluation portfolio, MCC M&E and its evaluation contractors also follow the Evaluation Management and Review Process Guidelines. To ensure timely release of independent evaluation materials, a public evaluation entry is created in the MCC Evaluation Catalog as soon as an Evaluation Design Report (EDR) is cleared by MCC management. This entry is populated with all subsequent evaluation materials as they become available, including questionnaires, Baseline Report, and other corresponding documentation. Once an independent evaluation’s analytical report – an Interim or Final Report – is drafted, it is sent through MCC’s rigorous review process which is governed by the Evaluation Management and Review Process Guidelines. At this time, findings and lessons learned are documented in a Summary of Findings, and all independent evaluations and reports are publicly reported on the MCC Evaluation Catalog. As of August 2018, MCC has contracted or is planning 198 independent evaluations. To date, 91 Interim and Final Reports have been finalized and released to the public.
  • For FY18, MCC has pursued a robust agency-wide, multi-year research and learning agenda around better use of its data and evidence for programmatic impact. DPE has prioritized learning around how MCC develops, implements, monitors, and evaluates the policy and institutional reforms (PIR) it undertakes alongside capital investments. The PIR learning agenda focuses on building evidence and methodological guidance, for economists and sector practices alike, to support expanded use of cost-benefit analysis (CBA) across the PIR that MCC supports. The purpose is to make investments in PIR more effective by holding them to the same investment criteria as other interventions MCC considers; to make assumptions and risks more explicit for all investments that depend on improved policies or institutional performance; and to help inform the design of PIR programs to ensure that they have a high economic rate of return.
  • MCC produces periodic reports that capture the results of MCC’s learning efforts in specific sectors and translate this learning into actionable evidence for future programming. At the start of FY18, MCC published a Principles into Practice report on its investments into roads; this report demonstrated MCC learning around the implementation and evaluation of its roads projects, and critically assessed how MCC was changing its practice as a result of this learning. In FY18, MCC began additional Principles into Practice reports on its activities in the education and water, sanitation, and hygiene sectors.
  • In FY18, MCC initiated a new learning effort around its use of Root Cause Analysis (RCA). MCC uses RCA to examine the underlying drivers of binding constraints to growth. The Root Cause Analysis Working Group has been formed to generate evidence and recommendations on how MCC conducts this analysis. The group has reviewed MCC’s experience with root cause analysis across 11 compacts or threshold programs and identified possible areas for improvement. The Working Group is currently drafting guidance to help country teams with process and the use of various RCA tools. The working group is also exploring the need for sector-specific (e.g., power, education) approaches to RCA that reflect insights on examining drivers of constraints.
Score
10
Resources

Did the agency invest at least 1% of program funds in evaluations in FY18? (Examples: Impact studies; implementation studies; rapid cycle evaluations; evaluation technical assistance, and capacity-building)

  • MCC’s FY18 investment in monitoring and evaluation to date is $15.1 million, or 5.1% of Compact spending for FY18 ($293.6 million). This exceeds the FY17 proportion, when MCC invested $21.9 million in M&E, roughly 4.2% of Compact spending for FY17 ($516.1 million).
Score
8
Performance Management / Continuous Improvement

Did the agency implement a performance management system with clear and prioritized outcome-focused goals and aligned program objectives and measures, and did it frequently collect, analyze, and use data and evidence to improve outcomes, return on investment, and other dimensions of performance in FY18? (Example: Performance stat systems)

  • MCC monitors progress towards compact and threshold program results on a quarterly basis using performance indicators that are specified in the M&E Plan for each country’s investments. The M&E Plans specify indicators at all levels (process, output, and outcome) so that progress towards final results can be tracked. Every quarter each partner country submits an Indicator Tracking Table (ITT) that shows actual performance of each indicator relative to the baseline that was established before the activity began and the performance targets that were established in the M&E Plan. Key performance indicators and their accompanying data by country are updated every quarter and published online. MCC management and the relevant country team review these data in a formal Quarterly Performance Review meeting to assess whether results are being achieved and integrate this information into project management and implementation decisions.
  • In an effort to track and aggregate evidence across its entire portfolio, MCC has implemented a common indicators structure across six sectors in which it invests. In all MCC countries, projects in these six sectors – energy, land and property rights, education, WASH, transportation, and agriculture – capture evidence across a common set of indicators to allow MCC to build an agency-wide evidence base around its investments.
  • MCC also supports the creation of multidisciplinary ‘country teams’ to manage the development and implementation of each compact and threshold program. Teams usually include the following members: coordinator, economist, private sector development specialist, social inclusion and gender integration specialist, technical specialists (project specific), M&E specialist, environmental and social performance specialist, legal, and financial management and procurement specialists. From the earliest stages, these teams develop project logics and M&E frameworks supported by data and evidence, and use them to inform the development of the projects within each program. Teams meet frequently to gather evidence, discuss progress, make project design decisions, and solve problems. Prior to moving forward with a program investment, teams are encouraged to use the lessons from completed evaluations to inform their work going forward.
  • Established as a key element for success in the 2016 Open Government Plan, Knowledge Management (KM) continues to be a critical priority for MCC. In FY18, the agency formed a Knowledge Management Core Team led by a newly appointed Knowledge Management Lead. The objective of this KM initiative is to better capture and disseminate intra-agency information and resources. By leveraging accumulated knowledge, MCC will be better positioned to implement country programs and achieve development impact more efficiently and effectively.
  • Throughout FY18, MCC is implementing a new reporting system that will enhance MCC’s credibility around results, transparency, and accountability. The Star Report and its associated business process capture key information quarterly to provide a framework for results and improve the ability to promote and disseminate learning and evidence throughout the compact and threshold program lifecycle. For each compact and threshold program, evidence is collected on performance indicators, evaluation results, partnerships, sustainability efforts, and learning, among other elements; critically, this information will be available in one report after each program ends. Through the Star Report, MCC is able to capture how and why programs achieved certain results and provide better reporting of compact and threshold program performance to public audiences, such as Congress, other development agencies, and the academic community. Each country will have a Star Report published roughly seven months after program completion. MCC’s first Star Report will focus on the recently closed compact in Cabo Verde and will be published in September 2018.
  • MCC reports on its performance in its Agency Financial Report (AFR) which provides the results that enable the President, Congress, and the American people to assess MCC’s performance for the fiscal year. In particular, the AFR provides an overview of MCC’s programs, accomplishments, challenges, and management’s accountability over the resources entrusted to MCC. MCC also prepares an Annual Performance Report (APR) each fiscal year that is included in its Congressional Budget Justification. Together, the AFR and APR provide a comprehensive presentation and disclosure of important financial and programmatic information related to MCC’s operations and results, including a fair assessment of MCC’s leadership and stewardship of the resources entrusted to the agency. MCC provides further information related to its activities in an Annual Report.
Score
9
Data

Did the agency collect, analyze, share, and use high-quality administrative and survey data – consistent with strong privacy protections – to improve (or help other entities improve) federal, state, and local programs in FY18? (Examples: Model data-sharing agreements or data-licensing agreements; data tagging and documentation; data standardization; open data policies)

  • MCC’s M&E Division oversees the publication of anonymized evaluation data to MCC’s public Evaluation Catalog. In the Catalog, partner countries, as well as the general public, can access the microdata and results of independent evaluations for MCC-funded projects, and public use versions of the data used in those evaluations. The M&E plans and tables of key performance indicators are available online by compact and by sector. All evaluation data is meticulously reviewed by MCC’s internal Disclosure Review Board prior to posting to ensure that respondents’ privacy is protected.
  • MCC’s Economic Analysis division publishes constraints analysis reports and interactive, downloadable Economic Rate of Return (ERR) spreadsheets that include the description of the project, including its economic rationale; the expected project impacts, including detailed cost and benefit estimates; and a tool allowing users to modify key assumptions and study the effects of those modifications on the project’s returns.
  • As part of its Data2x commitment, MCC and other donors are increasing the amount of gender data released and helping to improve international data transparency standards.
  • MCC is a founding partner of the Governance Data Alliance, a collaborative effort by governance data producers, consumers, and funders to improve the quality, availability, breadth, and use of governance data.
  • MCC has a partnership with the President’s Emergency Plan for AIDS Relief (PEPFAR) which is helping to increase the availability and quality of development-related data in selected countries. The Data Collaboratives for Local Impact program supports innovative and country-led approaches that promote evidence-based decision-making for programs and policies that address HIV/AIDS, global health, gender equality, and economic growth in sub-Saharan Africa. Data Collaboratives projects are strengthening the availability and use of data to improve lives and empower citizens to hold governments and donors more accountable for results. The program aligns with broader U.S. government efforts to maximize the effectiveness of U.S. foreign assistance and with the Global Data Partnership’s efforts to promote data collaboration to achieve the Sustainable Development Goals (SDGs).
Score
8
Common Evidence Standards / What Works Designations

Did the agency use a common evidence framework, guidelines, or standards to inform its research and funding decisions and did it disseminate and promote the use of evidence-based interventions through a user-friendly tool in FY18? (Example: What Works Clearinghouses)

  • MCC uses a rigorous evidence framework to make every decision along the investment chain, from country partner eligibility to sector selection to project choices. MCC uses common, evidence-based selection criteria, generated by independent, objective third parties, to ensure objectivity in country selection for grant awards. To be eligible for selection, countries must first pass the MCC 2018 Scorecard – a collection of 20 independent, third-party indicators that objectively measure a country’s policy performance in the areas of economic freedom, investing in people, and ruling justly. Both the scores for all countries as well as the criteria for assessing performance based on these scores are reported publicly and in a fully transparent process. A new application was recently created to render the indicator scorecards more easily accessible and viewable on mobile devices to further the use of the scorecards’ evidence. The criteria for passing the 2018 Scorecard are applied universally to all low- and lower-middle-income candidate countries. MCC’s Board of Directors then considers three key factors for selecting countries: (1) a country’s performance on the 2018 Scorecard; (2) the opportunity to reduce poverty and generate economic growth; and (3) the availability of funds. In the case of subsequent compacts, MCC also considers the partnership and performance of the first compact, measured in part by M&E, in addition to the above three criteria. (An in-depth description of the country selection procedure can be found in the annual Selection Criteria and Methodology report).
  • Then, to determine on which sector(s) a MCC program will focus, MCC’s Economic Analysis (EA) division undertakes a constraints-to-growth diagnostic to determine the binding constraints to economic growth in a country. Finally, to determine the individual projects in which MCC will invest in a given sector, MCC’s EA division combines root cause analysis with cost-benefit analysis to determine those investments that will have greatest development impact and return on MCC’s investment.
  • MCC’s model is based on a set of core principles deemed essential for development assistance to be effective – good governance, country ownership, focus on results, and transparency. In pursuing these, MCC has created a Principles into Practice series which describes MCC’s experience in operationalizing these principles in sector-level investments. Each report details MCC’s implementation experience, what is learned in implementation, and how the Agency is applying that learning to current and future investments. This portfolio-wide meta-analysis offers instructive learning for both internal and external audiences. MCC has made a concerted effort to better disseminate these reports in FY18 through panels, events, and conferences, including the American Evaluation Association.
  • All evaluation designs, data, reports, and summaries are made publicly available on MCC’s Evaluation Catalog. In FY18, MCC is launching a new product to better capture and disseminate the results and findings of its independent evaluation portfolio. New “Evaluation Briefs” will be produced for each evaluation and will offer a user-friendly, systematic format to better capture and share the relevant evidence and learning from MCC’s independent evaluations. These accessible products will take the place of MCC’s Summaries of Findings. Evaluation Briefs will be published on the Evaluation Catalog and will complement the many other products published for each evaluation including evaluation designs, microdata, survey questionnaires, baseline findings, interim reports, and final reports from the independent evaluator.
  • In FY18, MCC conducted internal research and analysis to understand where and how its published evaluations, datasets, and knowledge products are utilized. This effort underscores MCC’s commitment to transparency and learning as MCC seeks to widen its understanding of the use of the evidence it produces. The results of this analysis will guide future efforts on evidence-based learning, such as which sectors MCC prioritizes for evidence generation and publication and what types of products best communicate MCC’s evidence and learning. The above-described Evaluation Briefs are in part a result of MCC’s findings around evaluation user metrics. MCC finalized baseline metrics around evidence and evaluation utilization in April 2018 and is continuing to track global use of its knowledge products on a quarterly basis with a goal of expanding the base of users of MCC’s evidence and evaluation products.
  • In FY18, MCC sought to strengthen its outreach and dissemination of results in more innovative ways. Following on MCC’s first evidence workshop in El Salvador in 2016, MCC worked closely with the country-led implementation unit in El Salvador (FOMILENIO II), the President’s Technical Secretariat, and the Abdul Latif Jameel Poverty Action Lab (J-PAL) to organize M&E trainings with targeted content for varying levels of government officials. Over the course of three months, over 100 participants from across the Government of El Salvador learned the foundations of evidence-based policymaking, and designing and implementing evaluations. FOMILENIO II, MCC, and the Technical Secretariat hosted focus groups in June 2018 to develop an action plan for incorporating evaluation work into policy decisions.
  • To further bring attention to MCC’s evaluation and evidence, MCC publishes a quarterly evaluation newsletter called Statistically Speaking. This newsletter highlights recent evidence and learning from MCC’s programs with a special emphasis on how MCC’s evidence can offer practical policy insights for policymakers and development practitioners in the United States and in partner countries. It also seeks to familiarize a wider audience with the evidence and results of MCC’s investments.
Score
9
Innovation

Did the agency have staff, policies, and processes in place that encouraged innovation to improve the impact of its programs in FY18? (Examples: Prizes and challenges; behavioral science trials; innovation labs/accelerators; performance partnership pilots; demonstration projects or waivers with strong evaluation requirements)

  • In September 2014, MCC’s Monitoring and Evaluation division launched the agency’s first Open Data Challenge. This challenge encourages master’s and PhD students working in economics, public policy, international development, or other related fields to leverage the MCC Evaluation Catalog to search publicly available, MCC-financed primary data for policy-relevant analysis. The Open Data Challenge initiative is intended to facilitate broader use of MCC’s US-taxpayer-funded data, encourage innovative ideas, and maximize the use of the data that MCC finances for its independent evaluations.
  • In 2014, MCC developed an internal “Solutions Lab” that was designed to encourage innovation by engaging staff to come up with creative solutions to some of the biggest challenges MCC faces. MCC promotes agency-wide participation in its Solutions Lab through an internal intranet portal. To further encourage staff who pursue innovative ideas throughout the compact lifecycle, MCC launched the annual MCC Innovation Award as a part of the Agency’s Annual Awards Ceremony held each summer. The Innovation Award recognizes individuals who demonstrate “exemplary” leadership integrating innovation in project design, project implementation, and/or systems functionality and efficiency. Selections for the Innovation Award are based on a demonstrated ability to lead and implement innovative strategies from project conception that foster sustained learning and collaboration and add value to MCC and/or country partnerships.
  • MCC recently launched a Millennium Efficiency Challenge (MEC) to encourage innovation specifically in the compact and threshold program development phase. The challenge was designed to tap into the extensive knowledge of MCC’s staff to identify efficiencies and innovative solutions that can shorten the program development timeline while maintaining MCC’s rigorous quality standards and investment criteria. Winning ideas were selected and are being integrated into the compact development process starting in FY18. Senior management selected two MEC proposals around expedited diagnostics and agile management to build out their ideas and move forward in trial implementation.
  • DCLI, the MCC-PEPFAR partnership discussed in criterion five, also hosts an Innovation Challenge that identifies, supports, and involves country-based youth, developers, programmers, and solution providers through targeted competitions that address specific challenges with data. Since mid-2016, it has reached nearly 1,200 innovators and awarded almost 40 small-scale grants in areas related to HIV/AIDS and health, economic empowerment of youth and women, and early childhood development challenges that increase risk and constrain economic potential. A number of these innovations are being considered for scale-up through public-private partnership funding. Another DCLI project, Data Zetu, uses innovative “listening” campaigns to engage over 300 citizens and local leaders in subnational areas to generate over 100,000 hyperlocal data points that can be used to plan development interventions, such as schools, health centers, and specific services. The data collected by this project, including community insights, were “opened” and shared in order to promote transparency and action.
  • MCC regularly engages in implementing test projects as part of its overall Compact programs. A few examples include: 1) in Morocco, an innovative pay for results mechanism to replicate or expand proven programs that provide integrated support including short-term (one to six months) job readiness skills training, technical training, job matching, follow-up to ensure longevity, and other services; 2) a “call-for-ideas” in Benin to interested companies and organizations from around the world to submit information regarding potential projects that would expand access to renewable off-grid electrical power in Benin; 3) a regulatory strengthening project in Sierra Leone that includes funding for a results-based financing system designed to strengthen the regulator’s role, incentivize performance by the utilities, and enhance accountability; and 4) an Innovation Grant Program in Zambia to encourage local innovation in pro-poor service delivery in the water sector through grants to community-based organizations, civil society, and/or private sector entities.
  • MCC has signed a five-year (2017-2022), $450 million grant with the Kingdom of Morocco, called the Morocco Employability and Land Compact. The focus of the Compact is on making improvements toward land productivity and employability to create new economic opportunities, improve workforce skills, and strengthen the business environment. The Labor Market Impact Evaluation Initiative is a key component of the compact aimed at improving labor market outcomes through the use of rigorous quantitative research. The Initiative will finance rigorous impact evaluations and other rigorous empirical studies, as well as policy-research engagements, to build the capacity of the Moroccan government to commission and generate such studies on its own. This is the first time MCC has pursued a country-led Initiative focused on impact evaluations and policy research.
Score
9
Use of Evidence in Five Largest Competitive Grant Programs

Did the agency use evidence of effectiveness when allocating funds from its five largest competitive grant programs in FY18? (Examples: Tiered-evidence frameworks; evidence-based funding set-asides; priority preference points or other preference scoring; Pay for Success provisions)

  • MCC awards all of its agency funds through two competitive grant windows: Compact and Threshold programs (whose FY18 budgets were $800 million and $26.6 million, respectively). Both types of grants require demonstrable, objective evidence to support the likelihood of project success in order to be awarded funds. For country partner selection, MCC uses 20 different indicators within the categories of economic freedom, investing in people, and ruling justly to determine country eligibility for program assistance. These indicators (see MCC’s FY2018 Guide to the Indicators) are collected by independent third parties.
  • When considering granting a second compact, MCC further considers whether countries have: (1) exhibited successful performance on their previous compact; (2) exhibited improved 2018 Scorecard policy performance during the partnership; and (3) exhibited a continued commitment to further their sector reform efforts in any subsequent partnership. As a result, the MCC Board of Directors holds countries to an even higher standard when selecting them for subsequent compacts. Per MCC’s policy for Compact Development Guidance (p. 6): “As the results of impact evaluations and other assessments of the previous compact program become available, the partner country must use this data to inform project proposal assessment, project design, and implementation approaches.”
  • Following country selection, MCC conducts a constraints analysis to identify the most binding constraints to private investment and entrepreneurship that hold back economic growth. Coupled with a subsequent root-cause analysis, the constraints analysis enables the country, in partnership with MCC, to select compact or threshold activities most likely to contribute to sustainable, poverty-reducing growth. In developing the project proposals, MCC requires that countries use all available evidence to inform the design and potential impact of a project. Specifically, this evidence should be drawn from evaluations of similar completed projects in the compact country or, if this is not available, results from another country with similar economic characteristics and conditions that may be applicable. MCC will not approve proposals or parts of proposals without good supporting evidence that the proposal will have a significant impact on economic growth and poverty reduction. Due diligence, including feasibility studies where applicable, is also conducted for each potential investment. MCC then performs a Cost-Benefit Analysis to assess the potential impact of each project and estimates an Economic Rate of Return (ERR). MCC uses a 10% ERR hurdle to prioritize and fund projects with the greatest opportunity for maximizing impact. MCC then recalculates ERRs at compact closeout, drawing on information from MCC’s monitoring data (among other data and evidence), in order to test original assumptions and assess the cost-effectiveness of MCC programs. In connection with the ERR, MCC conducts a Beneficiary Analysis, which seeks to describe precisely which segments of society will realize the project’s benefits. It is most commonly used to assess the impact of projects on the poor, but its broader applicability allows for the estimation of impact on populations of particular interest, such as women, the aged, children, and regional or ethnic sub-populations. This process is codified in MCC’s Compact Development Guidance. Per the guidance, MCC requires the use of evidence to inform country and project selection by requiring that each project meet certain investment criteria, such as generating high economic returns, including clear metrics for results, and supporting the long-term sustainability of results.
  • In line with MCC’s M&E policy, MCC projects are required to submit quarterly Indicator Tracking Tables showing progress toward projected targets. MCC also requires independent evaluations of every project to assess progress in achieving outputs and outcomes throughout the lifetime of the project and beyond.
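The ERR screen described above can be illustrated with a short, self-contained sketch. All figures below are hypothetical (the function names, project cost, and benefit stream are invented for illustration, not drawn from an actual MCC compact): the ERR is the discount rate at which the net present value of a project’s costs and benefits equals zero, and a project clears MCC’s hurdle when that rate is at least 10%.

```python
def npv(rate, cash_flows):
    """Net present value of annual cash flows, with the year-0 flow first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def err(cash_flows, lo=-0.99, hi=10.0, tol=1e-7):
    """Economic Rate of Return: the rate where NPV crosses zero, via bisection.
    Assumes a conventional profile (up-front cost, later benefits), so NPV
    falls as the discount rate rises."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

HURDLE = 0.10  # MCC's 10% ERR hurdle

# Hypothetical project: $10M up-front cost, $2M in annual benefits for 10 years.
flows = [-10.0] + [2.0] * 10
rate = err(flows)
print(f"ERR = {rate:.1%}, clears 10% hurdle: {rate >= HURDLE}")
# prints: ERR = 15.1%, clears 10% hurdle: True
```

Recalculating the ERR at compact closeout, as described above, amounts to re-running the same computation with realized costs and benefits in place of the original projections.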
Score
N/A
Use of Evidence in Five Largest Non-Competitive Grant Programs

Did the agency use evidence of effectiveness when allocating funds from its five largest non-competitive grant programs in FY18? (Examples: Evidence-based funding set-asides; requirements to invest funds in evidence-based activities; Pay for Success provisions)

  • MCC does not administer non-competitive grant programs.
Score
8
Repurpose for Results

In FY18, did the agency shift funds away from or within any practice, program, or policy that consistently failed to achieve desired outcomes? (Examples: Requiring low-performing grantees to re-compete for funding; removing ineffective interventions from allowable use of grant funds; proposing the elimination of ineffective programs through annual budget requests)

  • MCC has an established Policy on Suspension and Termination that lays out the reasons for which MCC may suspend or terminate assistance to partner countries. Assistance may be suspended or terminated, in whole or in part, if a country: (1) engages in activities contrary to the national security interests of the U.S.; (2) engages in a pattern of actions inconsistent with MCC’s eligibility criteria; or (3) fails to adhere to its responsibilities under a compact or threshold grant, or related agreement. Evidence of failing to achieve desired outcomes falls under category two. Such actions may be evidenced by, among other things:
    • A decline in performance on the indicators used to determine eligibility;
    • A decline in performance not yet reflected in the indicators used to determine eligibility; or
    • Actions by the country which are determined to be contrary to sound performance in the areas assessed for eligibility for Assistance, and which together evidence an overall decline in the country’s commitment to the eligibility criteria.
  • MCC has terminated a compact partnership, in part or in full, seven times out of 35 compacts approved to date, and has suspended partner country eligibility (both compact and threshold) four times, most recently with the suspension of Tanzania in March 2016 due to actions contrary to MCC’s eligibility criteria. MCC’s Policy on Suspension and Termination also allows MCC to reinstate eligibility when countries demonstrate a clear policy reversal, a remediation of MCC’s concerns, and an obvious commitment to MCC’s eligibility indicators, including achieving desired results. For example, in early 2012, MCC suspended Malawi’s Compact prior to Entry into Force after MCC determined that the Government of Malawi had engaged in a pattern of actions inconsistent with MCC’s eligibility criteria. Thereafter, the new Government of Malawi took a number of decisive steps to improve the democratic rights environment and reverse the negative economic policy trends of concern to MCC, which led to a reinstatement of eligibility for assistance in mid-2012. MCC’s model uses objective, evidence-based eligibility criteria to determine whether a country is pursuing the policies necessary to support poverty reduction through economic growth. For countries that meet these criteria, MCC provides large investments to capitalize on the country’s policy environment to spur economic growth and reduce poverty. When countries fail to maintain such policies or take actions that undermine them, they can negatively impact the ability of MCC’s programs to achieve their desired results.
  • In a number of cases, MCC has repurposed investments based on real-time evidence. In MCC’s first compact with Morocco, the Government of Morocco proposed the Enterprise Support Project to address two of its critical economic priorities: reduce high unemployment among young graduates and encourage a more entrepreneurial culture. The project was designed to be carried out in two phases, with continuation of the second phase subject to positive results from an impact evaluation of the first phase. The pilot phase was completed in March 2012; although it met its implementation targets and showed promising trends, the impact evaluation did not show statistically significant impacts. The revised economic rate of return did not justify scaling up the project for a second phase. MCC did not continue with a second phase and the project was closed in May 2012. In MCC’s compact with Lesotho, MCC cancelled the Automated Clearing House Sub-Activity within the Private Sector Development Project after monitoring data determined it would not accomplish the economic growth and poverty reduction outcomes envisioned during compact development. The remaining $600,000 in the sub-activity was transferred to the Debit Smart Card Sub-Activity, which targeted expanding financial services to people living in remote areas of Lesotho. In Tanzania, the $32 million Non-Revenue Water Activity was re-scoped after the final design estimates on two of the activity’s infrastructure investments indicated higher costs that would significantly impact their economic rates of return. As a result, $13.2 million was reallocated to the Lower Ruvu Plant Expansion Activity, $9.6 million to the Morogoro Water Supply Activity, and $400,000 for other environmental and social activities. In all of these country examples, the funding is either reallocated to activities with continued evidence of results or returned to MCC for investment in future programming.
  • MCC also consistently monitors the progress of compact programs and their evaluations across sectors, using the learning from this evidence to make changes to MCC’s portfolio. For example, MCC undertook a review of its portfolio investments in roads to better design, implement, and evaluate road investments. Through evidence collected across 16 compacts with road projects, MCC uncovered seven key lessons, including the need to prioritize and select projects based on a road network analysis, to standardize the content and quality of road data collection across road projects, and to consider cost and the potential for learning in determining how road projects are evaluated. This body of evidence and analysis was published in November 2017. The lessons from this analysis are being applied to road projects in compacts in Cote d’Ivoire and Nepal as MCC’s road investments shift toward increased maintenance. Critically, the evidence also prompted MCC to change how it undertakes road evaluations, which led to a new, re-bid request for proposals for MCC’s road evaluations based on new guidelines and principles. These changes in future practice are especially important to highlight because MCC operates in a five-year timeframe, often investing in large infrastructure projects. Given the difficulty of changing implementation mid-stream for the types of projects MCC finances, MCC’s focus on shifting funds away from ineffective programs and policies is often demonstrated in future compact and threshold program investment decisions.

Visit Results4America.org