2018 Federal Standard of Excellence
Leadership
Did the agency have a senior staff member(s) with the authority, staff, and budget to evaluate its major programs and inform policy decisions affecting them in FY18? (Example: Chief Evaluation Officer)
Score
8
Administration for Children and Families (HHS)
- Administration for Children and Families’ (ACF) Deputy Assistant Secretary for Planning, Research, and Evaluation, a Senior Executive Service career official, oversees ACF’s Office of Planning, Research, and Evaluation (OPRE) and supports evaluation and other learning activities across the agency. ACF’s Deputy Assistant Secretary reports directly to the Assistant Secretary for Children and Families. ACF’s evaluation policy, which is published in the Federal Register, gives the OPRE Deputy Assistant Secretary “authority to approve the design of evaluation projects and analysis plans; and…authority to approve, release and disseminate evaluation reports.”
- ACF’s budget for research and evaluation in FY18 is approximately $165 million. OPRE’s staff of 61 federal positions includes experts in research and evaluation methods and data analysis, as well as in ACF programs, policies, and the populations they serve.
- In the past year, ACF released evaluation impact reports on major programs including Healthy Marriage and Responsible Fatherhood programs, Personal Responsibility Education Programs, and Health Profession Opportunity Grants, as well as impact reports on strategies that ACF programs can use, such as subsidized employment and career pathways. OPRE also released many other types of research reports related to ACF programs, including Temporary Assistance for Needy Families, Head Start, Child Care, the Maternal, Infant, and Early Childhood Home Visiting program, Child Welfare, the National Domestic Violence Hotline, the Domestic Victims of Human Trafficking Demonstration Projects, and Refugee Cash Assistance. Research and evaluation efforts are ongoing in other ACF program areas. The OPRE Research Library contains all publications, and the OPRE website also includes a Projects by Topic page.
- OPRE engages in ongoing collaboration with program office staff and leadership to interpret research and evaluation findings and to identify their implications for programmatic and policy decisions. Examples of how research and evaluation findings have influenced ACF regulations and funding opportunity announcements include:
- When ACF’s Office of Head Start significantly revised its Program Performance Standards (PPS), the regulations that define the standards and minimum requirements for Head Start services, the revisions drew from decades of research and the recommendations in the Final Report of the Secretary’s Advisory Committee on Head Start Research and Evaluation. For example, the 2016 PPS includes new and strengthened requirements for Head Start programs to use evidence-based curricula with an organized developmental scope and sequence; to use evidence-based parenting curricula; to support staff to effectively implement curricula and monitor curriculum implementation and fidelity; and to implement a research-based, coordinated coaching strategy for education staff. These requirements drew on a wide array of research efforts that developed, tested, and established an evidence base on effective curricular and professional development/coaching strategies for use in Head Start programs.
- ACF’s Office of Child Care drew on research and evaluation findings related to eligibility redetermination, continuity of subsidy use, the use of quality dollars to improve program quality, and more to inform regulations related to the Child Care and Development Block Grant reauthorization.
- ACF’s Office of Family Assistance (OFA) used lessons learned and emerging findings from research and evaluation on the first round of Health Profession Opportunity Grants (HPOG) to inform the funding opportunity announcement for the second round of grants. Specifically, research findings informed the program components highlighted as important to the HPOG approach in the second round of funding. For example, based on the finding that many participants engaged in short-term training for low-wage, entry-level jobs, OFA more carefully defined the career pathways model, described specific strategies for helping participants progress along a career pathway, and identified and defined key HPOG education and training components. Based on an analysis that indicated limited collaboration with healthcare employers, OFA required second-round applicants to demonstrate use of labor market information and consultation with local employers and to describe their plans for employer engagement.
Score
7
Administration for Community Living
- The Administration for Community Living (ACL) is led by the Administrator who oversees five major units: the Administration on Aging (AoA), Administration on Disabilities (AoD), Center for Integrated Programs (CIP), Center for Policy and Evaluation (CPE), and the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR).
- CPE houses the Office for Performance and Evaluation (OPE), the primary office that oversees evaluation efforts within ACL. OPE is led by a Director, a GS-15 senior career civil servant, who reports to the Director of CPE, who in turn reports directly to ACL’s Administrator. In addition to the Director, five other full-time staff oversee 17 evaluation-related contracts. OPE’s FY18 budget for evaluation and performance management was approximately $10 million. This budget represents dedicated funding authorized through legislation and is administered by the Director of OPE.
- Evaluation and performance information is communicated to leadership to support policy decisions through weekly meetings between the Directors of OPE and CPE, weekly meetings between the Director of CPE and the Principal Deputy Administrator and Acting Commissioner on Disabilities, and quarterly meetings between the Director of OPE and the Principal Deputy Administrator and Acting Commissioner on Disabilities.
- Under an overarching analytic support contract, ACL’s program offices are able to transfer funds to OPE, which allows OPE staff to provide oversight and guidance regarding evaluation design and implementation, performance measurement development, and the interpretation of data for decision-making. This approach promotes coordination and allows close partnerships between programmatic experts and the evaluation experts in OPE to help programmatic staff develop needed evidence to support policy and funding decisions.
- ACL’s evaluation policy describes ACL’s commitment to conducting rigorous, relevant evaluations and to using evidence from evaluations to inform policy decisions and practice across the agency. It indicates ACL’s interest in conducting outcome-focused evaluations for all ACL programs, and in promoting rigor, relevance, transparency, independence, and ethics in the conduct of evaluations. This policy gives the director of OPE the authority to approve the design of evaluation projects and analysis plans; and the authority to approve, release and disseminate evaluation reports. ACL’s evaluation policy “applies to all ACL-sponsored evaluations” and states that “OPE and program offices will work in partnership to inform potential applicants, program providers, administrators, policymakers and funders through disseminating evidence from ACL-sponsored and other good quality evaluations.”
- At the start of each budget cycle, the Director of ACL’s Center for Policy and Evaluation sends ACL center Directors a one-page document titled “Policy Review of ACL Funding Opportunity Announcements (FOAs).” As part of this review process, OPE staff review FOAs to determine: “Does the FOA provide enough detailed information to help applicants think critically about the measures they should include in their applications to effectively measure their progress towards meeting the goals outlined in the FOA?” OPE staff then make specific recommendations to program staff to improve FOA language around performance measurement and evaluation, helping ensure that program staff have evidence to inform future funding and policy decisions.
- While OPE directly oversees most of ACL’s evaluations, ACL’s NIDILRR conducts its own evaluations (NIDILRR External Evaluation and NIDILRR Performance and Evaluation). Coordination between OPE and NIDILRR evaluation staff occurs through NIDILRR evaluation staff participating in bi-weekly OPE staff meetings.
Score
8
Corporation for National and Community Service
- The Corporation for National and Community Service’s (CNCS) Director of the Office of Research & Evaluation (R&E) oversees the development of social science research designed to measure the impact of CNCS programs and shape policy decisions; encourage a culture of performance and accountability in national and community service programs; provide information on volunteering, civic engagement, and volunteer management in nonprofit organizations; and assist in the development and assessment of new initiatives and demonstration projects. The R&E Director, who oversees R&E’s $4 million budget and a staff of nine in FY18, reports directly to the CNCS Chief of Staff and is a member of CNCS’s Leadership and Policy Council. The R&E Director also meets regularly with CNCS Program Directors to identify areas where evidence can be generated and used for various decisions.
- The R&E Director meets annually with all CNCS program offices to identify priorities and negotiate which pools of funds are needed to support the year’s priorities. The FY18 plan was developed through a series of formal and informal conversations. AmeriCorps State and National is prioritizing Evidence-Based Planning Grants and an evaluation of how grantees have scaled evidence-based interventions. AmeriCorps NCCC is prioritizing the development and implementation of a learning framework for more systematically assessing member leadership development and community impact. AmeriCorps VISTA is considering how to evaluate the impact of Team Leaders on the organizations sponsoring them, compared to organizations without Team Leaders. In late FY18/early FY19, Senior Corps will release final findings from its longitudinal survey of volunteers, including a comparison to similar older adults who do not volunteer. Findings on the Senior Companion Program’s impacts on caregivers will also be released. The CNCS research grants competition is also supported by program offices as appropriate.
- CNCS published and began implementing its Transformation and Sustainability Plan in FY18. One of the six goals included in this plan is prioritizing evidence-based interventions. Specifically, “CNCS will further refine the intervention models it funds based on evidence and demonstrated success, while maintaining the flexibility to support statutory and Administration priorities. CNCS will also continue to support innovative interventions and approaches based upon preliminary evidence in order to build grantee capacity, foster innovation, and meet evolving community needs, and will evaluate these interventions to learn more about whether they are effective.”
Score
8
Millennium Challenge Corporation
- There are three key touch points in each Millennium Challenge Corporation (MCC) program’s lifecycle where senior MCC leadership uses evidence to evaluate programs and inform policy decisions: the Development, Decision, and Implementation stages.
- In the Development stage, project selection and policy decisions are informed by the Department for Policy and Evaluation’s (DPE) Economic Analysis (EA) division. EA is headed by the Chief Economist, whose role is to oversee and strengthen the economic evidence base used for program development, including economic growth diagnostics, root cause analyses, beneficiary analyses, and cost-benefit analyses. EA has a staff of 19 and an estimated FY18 budget of $707,000 in due diligence (DD) funds. EA’s analytical work provides the evidence base for determining which projects will have a sufficient return on investment to be funded by MCC.
- This analytical work underpins the program logic for MCC’s investments and informs MCC’s Monitoring and Evaluation (M&E) division (also a part of DPE) on the primary outputs and outcomes that should be measured to assess the effects of MCC’s investments. M&E has a staff of 29 and an estimated FY18 budget of $26.2 million in DD funds. (Departments throughout the agency have requested a total of $62.6 million in DD funds for FY18). The M&E Managing Director (MD) is a career civil service position with the authority to execute M&E’s budget. The MD participates in technical reviews of proposed investments, as well as regular monitoring meetings that inform policy and investment decisions.
- At the Decision stage, the input of both EA and M&E is provided to the Vice President for DPE (VP-DPE), to whom both divisions report. The VP-DPE and Chief Economist sit on MCC’s Investment Management Committee, where they ensure a rigorous evidence base for each investment before programs are submitted to the MCC Board of Directors for final approval. Both the Vice President (equivalent in rank to an Assistant Secretary, reporting directly to the CEO of MCC) and the Chief Economist (equivalent to a Deputy Assistant Secretary, and a competitively selected technical expert hired as a career civil servant) are thus senior leaders at MCC with the authority, staff, and budget to evaluate MCC programs and inform policy decisions.
- Once in Implementation, M&E DD resources are used to procure evaluation services from independent external evaluators to directly measure high-level outcomes and assess the attributable impact of MCC’s programs and activities. MCC sees its independent evaluation portfolio as an integral tool for remaining accountable to stakeholders, demonstrating programmatic results, and promoting internal and external learning. Through the evidence generated by monitoring and evaluation, the M&E MD, Chief Economist, and VP-DPE are able to continuously update estimates of expected impacts with actual impacts to inform future programmatic and policy decisions. In FY18, MCC began or continued comprehensive, independent evaluations for every Compact or Threshold project at MCC (a requirement found in Section 7.5.1 of MCC’s Policy for Monitoring and Evaluation). To date, MCC has published more final evaluations this fiscal year than in any prior year, increasing its stock of published evaluations by 23 percent. All evaluation designs, data, reports, and summaries are available on MCC’s Evaluation Catalog.
- MCC is a member of the Federal Inter-Agency Council on Evaluation Policy (ICEP), which is coordinated by OMB’s evidence deputies. Given MCC M&E’s strong experience and leadership in rigorous and transparent evaluation, in FY18 MCC was asked to deliver several presentations at ICEP meetings (including one on MCC’s experience and learning in the area of evaluation microdata dissemination guidelines and practices, and another on transparency and evaluation publication policies), as well as to prepare one or more monthly or semi-monthly workshops for OMB’s evaluation training series in Summer 2018. Examples of MCC expert trainings include: (1) project evaluability; (2) management and dissemination of evaluation microdata; and (3) integration and dissemination of evaluation findings.
- To remain abreast of ongoing research and policy dialogues outside the agency (in academia, multilateral development banks, donor organizations, and nongovernmental organizations), MCC recently established an Economic Advisory Council (EAC). With the aim of bringing thought leaders and experts into MCC to highlight technical advances, innovations, and learning in economic development, the EAC will meet semi-annually to solicit feedback and advice that will be shared across the agency for internal discussion and use. The EAC will be composed of approximately 20 members drawn from diverse backgrounds, balanced across institutions, economic sub-disciplines, and regions of applied expertise.
Score
7
Substance Abuse and Mental Health Services Administration
- The director of the Substance Abuse and Mental Health Services Administration’s (SAMHSA) Center for Behavioral Health Statistics and Quality (CBHSQ) Division of Evaluation, Analysis and Quality (DEAQ) serves as the agency’s evaluation lead, with key evaluation staff housed in this division. In addition, the agency’s chief medical officer (CMO), as described in the 21st Century Cures Act, plays a key role in addressing evaluation approaches and the use of evidence-based programs and practices among grantees; a collaborative approach between CBHSQ and the Office of the CMO is being established to ensure broad agency evaluation oversight by senior staff. The Office of the CMO is housed within the agency’s emerging policy lab (currently the Office of Policy, Planning and Innovation, or OPPI) and will influence evaluation policy decisions across the agency in a more systematic manner as the new lab stands up. In January 2018, SAMHSA announced the creation of the National Mental Health and Substance Use Policy Lab, which is designed to “play a central role in shaping SAMHSA’s efforts to bring more science to the evidence-based practices used in the prevention, treatment, and support services being provided by behavioral health practitioners and other clinicians.”
- OPPI provides policy perspectives and guidance to raise awareness of SAMHSA’s research and behavioral health agenda. OPPI also facilitates the adoption of data-driven practices among other federal agencies and partners such as the National Institutes of Health, the Centers for Disease Control and Prevention, and the Centers for Medicare and Medicaid Services.
- At this time, evaluation authority, staff, and resources are decentralized throughout the agency. SAMHSA is composed of four Centers: the Center for Mental Health Services (CMHS), the Center for Substance Abuse Treatment (CSAT), the Center for Substance Abuse Prevention (CSAP), and the Center for Behavioral Health Statistics and Quality (CBHSQ). CMHS, CSAT, and CSAP oversee grantee portfolios and evaluations of those portfolios. Evaluation decisions within SAMHSA are made within each Center, specific to its program priorities and resources. Each of the three program Centers uses its program funds to conduct evaluations of varying types. CBHSQ, SAMHSA’s research arm, provides varying levels of oversight and guidance to the Centers for evaluation activities. CBHSQ also provides technical assistance related to data collection and analysis to assist in the development of evaluation tools and clearance packages.
- SAMHSA evaluations are funded from program funds that are used for service grants, technical assistance, and evaluation activities. Evaluations have also been funded with funds recycled from grants or other contract activities. Given the broad landscape of evaluation authority and funding, a variety of evaluation models have been implemented. These include recent evaluations funded and managed by the program Centers (e.g., First Episode Psychosis, or FEP); evaluations funded by the Centers but directed outside of SAMHSA (e.g., Assisted Outpatient Treatment, or AOT); and evaluations that CBHSQ directly funds and executes (e.g., Primary and Behavioral Health Care Integration, or PBHCI, and the Cures-funded Opioid State Targeted Response program). Evaluations require different degrees of independence to ensure objectivity, and these models afford SAMHSA the latitude to enhance evaluation rigor and independence on a customized basis.
Score
8
U.S. Agency for International Development
- The United States Agency for International Development’s (USAID) Office of Learning, Evaluation, and Research (LER) in the Bureau for Policy, Planning, and Learning (PPL) helps the agency build a body of evidence from which to learn and adapt programs. USAID is a decentralized agency, with evaluations commissioned by missions and offices based in countries around the world. LER provides guidance and builds staff capacity in monitoring, evaluating, and learning from the agency’s work. The Director of LER is a senior staff member with the authority, staff, and budget to ensure that agency evaluation requirements are met, including that all projects are evaluated at some level, and that decision-making is informed by evaluation and evidence. The LER Director oversaw approximately 27 staff and an estimated $8.9 million budget in 2018. At different times in the history of the office, the LER Director has been a senior political appointee, a senior Foreign Service officer, or a career civil servant. The position is a member of USAID’s Senior Leadership Group, which comprises the agency’s highest managerial and technical positions in Washington and overseas.
- The LER Director and staff inform policy decisions across the agency by ensuring that evaluation requirements in legislation are reflected in USAID policy and practices, attending senior-level decision-making meetings when evaluation is on the agenda, providing input to working groups, and reviewing statements, draft memos, and other policy products. For example, LER staff review Country Development Cooperation Strategies (CDCSs) prior to approval to ensure evidence is used to inform country development objectives. LER also manages monitoring and evaluation policy at USAID, which is codified as part of USAID’s Program Cycle Operational Policy, otherwise known as Automated Directives System (ADS) 201. ADS 201 was revised after passage of the Foreign Aid Transparency and Accountability Act of 2016 (FATAA) to bring the Agency’s monitoring and evaluation requirements into compliance with the law. For example, FATAA requires agencies to establish annual monitoring and evaluation objectives and timetables to plan and manage the process of monitoring, evaluating, analyzing progress, and applying learning toward achieving results. ADS 201 requires missions to develop a Performance Management Plan (PMP) that includes both monitoring and evaluation plans and must be updated at least once a year. All Operating Units (OUs) must annually complete the Performance Plan and Report (PPR), which includes annual monitoring indicators and planned evaluations.
- The majority of LER’s work involves providing training, tools, technical assistance, and guidance to staff on how to monitor, evaluate, and learn from USAID programs. LER staff design and manage a few high-priority evaluations at a time, such as the Evaluation of Sustained Outcomes in Basic Education, published in March 2018. LER also coordinates several cross-agency working groups organized to support learning champions and monitoring and evaluation specialists throughout the Agency.
Score
8
U.S. Department of Education
- The United States Department of Education’s (USED or ED) Institute of Education Sciences (IES), with a budget of $613.5 million in FY18, has primary responsibility for education research, evaluation, and statistics. The IES Director is appointed by the President and confirmed by the U.S. Senate, and advises the U.S. Education Secretary on research, evaluation, and statistics activities. Four Commissioners support the IES Director, including the Commissioner for the National Center for Education Evaluation and Regional Assistance (NCEE), who is responsible for planning and overseeing ED’s major evaluations. IES employed approximately 170 full-time staff in FY18, including approximately 20 staff in NCEE.
- The Assistant Secretary for the Office of Planning, Evaluation and Policy Development (OPEPD) reports to, and advises, the Secretary on matters relating to policy development and review; program performance measurement and evaluation; and the use of data and evidence to inform decision-making. OPEPD’s Policy and Program Studies Service (PPSS) has a staff of about 20 and serves as the Department’s internal analytics office. PPSS performs data analysis and conducts short-term evaluations to support continuous improvement of program implementation, working closely with program offices and senior leadership to inform policy decisions with data and evidence.
- IES and PPSS staff collaborate closely through ED’s Evidence Planning Group (EPG) with other senior staff from OPEPD, including Budget Service, as well as staff from the Office of Innovation and Improvement (OII), the Performance Improvement Office, and the Office of the General Counsel. EPG currently includes around 25 participants from these offices. EPG advises Department leadership and staff on how evidence can be used to improve Department programs and provides resources and support to staff in the use of evidence. EPG has coordinated, for example, the development of revised evidence definitions and related selection criteria for competitive grant programs that align with the Elementary and Secondary Education Act, as amended by the Every Student Succeeds Act (P.L. 114-95) (ESSA), and provided guidance on the strategic use of those definitions in the Department’s grant competitions. EPG has also facilitated cross-office alignment of investments in evidence-related technical assistance and the pooling of program funds for evaluations.
- Senior officials from IES, OPEPD, and OII are part of ED’s leadership structure. Officials from OPEPD and OII weigh in on major policy decisions. OPEPD plays a leading role in forming the Department’s policy positions as expressed through annual budget requests, grant competition priorities (including evidence priorities), and technical assistance to Congress, ensuring that evidence appropriately informs policy design.
Score
8
U.S. Dept. of Housing & Urban Development
- The United States Department of Housing and Urban Development’s (USHUD or HUD) Office of Policy Development & Research (PD&R) informs HUD’s policy development and implementation by conducting, supporting, and sharing research, surveys, demonstrations, program evaluations, and best practices. PD&R achieves this mission through three interrelated core functions: (1) collecting and analyzing national housing market data (including with the Census Bureau); (2) conducting research, program evaluations, and demonstrations; and (3) providing policy advice and analytic support to the HUD Secretary and program offices.
- PD&R is led by an Assistant Secretary who oversees six offices, about 139 staff (including a team of field economists who work in HUD’s 10 regional offices across the country), and a budget of $89 million in FY18. The Assistant Secretary ensures that evidence informs policy development through frequent personal engagement with other principal staff, the Secretary, and external policy officials (including Congress); speeches to policy audiences; sponsorship of public research briefings; and policy implications memoranda.
- PD&R staff are integral to departmental working groups focused on program-specific or cross-cutting policy development and initiatives. As part of such engagement, PD&R ensures that deliberations are informed by program evaluations and the research literature, and it conducts policy studies, regulatory impact analyses, and special-purpose analyses of administrative or external data to answer specific policy questions that arise. As part of ongoing research in FY18, PD&R continues to undertake evaluations of the programs that account for the vast majority of HUD’s outlays, including evaluations of key outcomes for rental assistance programs (Moving to Work and Moving to Work Expansion, Family Self-Sufficiency, Rental Assistance Demonstration, Rent Reform), homeless assistance programs (Family Unification Program), and the Housing Counseling program.
- PD&R regularly engages with each HUD program office to ensure that metrics, evaluations, and evidence inform program design, budgeting, and implementation. Periodic meetings enable PD&R to inform program offices about evaluation progress, and program offices to share knowledge with PD&R about emerging needs for research, evaluation, and demonstrations to advance program policy. Such collaboration has ensured that major policy changes have developed through rigorously evaluated program demonstrations that include interim reports to help shape program design.
Score
10
U.S. Department of Labor
- The United States Department of Labor’s (DOL) Chief Evaluation Officer is a senior official responsible for all activities of the Chief Evaluation Office (CEO) and for coordinating evaluations department-wide. In 2016, DOL’s Chief Evaluation Officer was converted to a career position, a change that more fully cements the principle of independence and reflects DOL’s commitment to institutionalizing an evidence-based culture at the agency. Evaluation results and products are approved and released by the Chief Evaluation Officer (per the DOL Evaluation Policy) and disseminated in various formats appropriate to practitioners, policymakers, and evaluators.
- The CEO includes 15 full-time staff plus a small number of contractors and one to two detailees at any given time. This staff level is augmented by staff from research and evaluation units in other DOL agencies. For example, the Employment and Training Administration (ETA) has nine FTEs dedicated to research and evaluation activities, with whom CEO coordinates extensively on the development of a learning agenda, management of studies, and dissemination of results. CEO staff have expertise in research and evaluation methods as well as in DOL programs and policies and the populations they serve. CEO also convenes technical working groups, whose members have deep technical and subject-matter expertise, on the majority of evaluation projects. Further, CEO staff engage and collaborate with program office staff and leadership to interpret research and evaluation findings and to identify their implications for programmatic and policy decisions.
- In FY18, the CEO is directly overseeing approximately $21.4 million in evaluation funding (this includes a direct appropriation of $8.04 million for department program evaluation and a set-aside amount of up to 0.75 percent of select department accounts; see the arithmetic sketch after this list). Additionally, many projects are co-funded with DOL agencies using programmatic dollars. CEO also collaborates with DOL program offices and other federal agencies on additional evaluations carried out by other offices and/or supported by funds appropriated to other agencies or programs.
- The CEO’s role is to develop and incorporate evidence and evaluation findings as appropriate, to identify knowledge gaps that might be filled by evaluations, and to convey evidence that can inform policy and program decisions or performance. DOL’s Chief Evaluation Officer and senior staff are part of DOL’s leadership structure and play a role in the formation of DOL agencies’ annual budget requests, recommendations around including evidence in grant competitions, and technical assistance to DOL leadership to ensure that evidence informs policy design. A number of mechanisms facilitate this: CEO has traditionally participated in quarterly performance meetings with DOL leadership and the Performance Management Center (PMC); CEO reviews agency operating plans and works with agencies and the PMC to coordinate performance targets, measures, and evaluation findings; quarterly meetings are held with agency leadership and staff as part of the Learning Agenda process; and meetings are held as needed to strategize around new priorities or legislative requirements.
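A minimal arithmetic sketch of the FY18 CEO funding split described above, assuming (an inference, not stated in the source) that the set-aside portion is simply the remainder of the approximately $21.4 million total after the $8.04 million direct appropriation:

$$\underbrace{\$21.4\text{M}}_{\text{total overseen}} - \underbrace{\$8.04\text{M}}_{\text{direct appropriation}} \approx \underbrace{\$13.4\text{M}}_{\text{set-aside, capped at 0.75\% of select accounts}}$$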