Federal Evidence-Building Activities
Title 5, Chapter 3, Subchapter II is the federal government's core evidence-building chapter. 5 U.S.C. §§ 311-315 were added by the Foundations for Evidence-Based Policymaking Act of 2018 and establish the basic Title 5 framework for agency learning agendas, evaluation plans, Evaluation Officers, statistical officials, and the temporary advisory structure Congress used to push government-wide data-sharing and evidence-building reform.
This subchapter matters because it turned "evidence-based policymaking" from a management slogan into a set of legal roles and planning duties. Instead of leaving evaluation and data strategy to agency discretion, Congress required major agencies to identify the policy questions they need answered, describe the data and methods they will use, designate senior officials responsible for evaluation and statistical expertise, and connect those efforts to the strategic-planning machinery in 5 U.S.C. § 306 and the performance-planning framework in Title 31.
Current Law (2026)
| Parameter | Value |
|---|---|
| Governing law | 5 U.S.C. §§ 311-315 |
| Main focus | Learning agendas, evaluation planning, evaluation leadership, statistical expertise, and federal data-for-evidence governance |
| Covered agencies | Agencies covered by 31 U.S.C. § 901(b), generally the major CFO Act agencies (§311) |
| Evidence-building plan | Must be included in the agency strategic plan and identify policy questions, needed data, methods, challenges, and implementation steps (§312(a)) |
| Evaluation plan | Must be issued with the annual performance plan and describe the evaluations and key data activities planned for the next fiscal year (§312(b)) |
| Evaluation Officer | Each covered agency must designate a senior Evaluation Officer with methodological expertise and coordination duties (§313) |
| Statistical official | Each covered agency must designate the head of its statistical unit, or another senior expert, as statistical official (§314) |
| Temporary advisory body | § 315 created the Advisory Committee on Data for Evidence Building, with annual reporting duties and a statutory sunset two years after its first meeting |
| Key cross-link | The subchapter is designed to work with §306 strategic plans and Title 44 evidence, statistics, and open-data provisions |
| Why it matters | Subchapter II creates the legal architecture for federal learning agendas, evaluation capacity, and the integration of data analysis into agency planning |
Legal Authority
- 5 U.S.C. § 311 — Definitions: defines agency, Director, evaluation, evidence, State, and statistical terms for the subchapter
- 5 U.S.C. § 312 — Agency evidence-building plan: requires each covered agency to include a systematic evidence-building plan in its strategic plan and issue a yearly evaluation plan with its performance plan
- 5 U.S.C. § 313 — Evaluation Officers: requires each covered agency to designate a senior Evaluation Officer and assigns portfolio-assessment, capacity, policy, and plan-coordination functions
- 5 U.S.C. § 314 — Statistical expertise: requires each covered agency to designate a statistical official and links that official to the Interagency Council on Statistical Policy
- 5 U.S.C. § 315 — Advisory Committee on Data for Evidence Building: establishes a temporary advisory committee to recommend ways to facilitate data sharing, linkage, privacy techniques, and evidence-building coordination
What Connects These Sections
They turn evidence-building into an agency-management duty. Congress did not just encourage agencies to study their programs. It required formal plans, designated officials, and recurring coordination.
They link data, evaluation, and strategy. The subchapter assumes policymaking should be informed by data acquisition, analytical methods, evaluation studies, and strategic planning rather than by ad hoc intuition alone.
They combine internal leadership with government-wide coordination. Evaluation Officers, statistical officials, OMB guidance, and interagency councils all appear because evidence-building is supposed to be both agency-specific and cross-government.
Major Components
Definitions that tie Title 5 to Title 31 and Title 44
5 U.S.C. § 311 is short, but it does important connective work. It defines covered agencies by reference to 31 U.S.C. § 901(b), defines evaluation as systematic data collection and analysis meant to assess effectiveness and efficiency, and borrows the definitions of evidence and statistical terms from 44 U.S.C. § 3561.
That drafting choice matters. Congress did not build evidence-building in isolation. It tied Title 5 personnel-and-management law to the federal planning structure in Title 31 and to the open-data, statistics, and information-governance framework in Title 44.
Learning agendas and evidence-building plans
The heart of the subchapter is 5 U.S.C. § 312(a). Each covered agency must include in its strategic plan a systematic plan for identifying and addressing policy questions relevant to the agency's programs, policies, and regulations. In practice, this is what agencies usually call a learning agenda.
The statute requires more than a general commitment to research. Agencies must list the policy-relevant questions they want to answer, the data they intend to collect, use, or acquire, the methods and analytical approaches they may use, the challenges they face in building evidence, and the steps they will take to implement the plan.
This is one of the clearest statutory efforts to force agencies to become explicit about what they do not yet know and how they plan to find out.
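As a hedged illustration (not an official schema or OMB format), the five § 312(a) elements can be modeled as a simple checklist data structure; the field names below are hypothetical labels for the statutory categories:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the five statutory elements of a § 312(a)
# evidence-building plan ("learning agenda"). Field names are
# illustrative, not drawn from the statute or OMB guidance.
@dataclass
class EvidenceBuildingPlan:
    policy_questions: list[str] = field(default_factory=list)      # questions the agency wants answered
    data_to_acquire: list[str] = field(default_factory=list)       # data to collect, use, or acquire
    methods: list[str] = field(default_factory=list)               # analytical approaches
    challenges: list[str] = field(default_factory=list)            # anticipated obstacles
    implementation_steps: list[str] = field(default_factory=list)  # how the plan will be carried out

    def missing_elements(self) -> list[str]:
        """Return the statutory elements a draft plan leaves empty."""
        return [name for name, value in vars(self).items() if not value]

draft = EvidenceBuildingPlan(policy_questions=["Does program X reduce wait times?"])
print(draft.missing_elements())
# ['data_to_acquire', 'methods', 'challenges', 'implementation_steps']
```

The point of the sketch is the statute's structure: a plan that names questions but no data sources, methods, challenges, or implementation steps is incomplete on the statute's own terms.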
Annual evaluation plans
5 U.S.C. § 312(b) requires each covered agency to issue an evaluation plan with the annual performance plan required by 31 U.S.C. § 1115(b). The evaluation plan must describe the significant evaluation studies the agency plans to begin in the next fiscal year and the key information collections or acquisitions it expects to start.
That annual requirement matters because it keeps evidence-building from becoming a one-time strategic-plan exercise. It turns the learning-agenda idea into a recurring operational document tied to the federal performance cycle.
Section 312(c) also requires consultation with outside parties, including the public, other agencies, state and local governments, and nongovernmental researchers. The statute thus expects agencies to build their evidence agendas with outside input, not internal preference alone.
Evaluation Officers
5 U.S.C. § 313 requires each covered agency to designate a senior Evaluation Officer. The officer must be chosen without regard to political affiliation and based on demonstrated expertise in evaluation methodology and the agency's substantive disciplines.
The functions assigned to the Evaluation Officer are broad. The officer must continually assess the agency's portfolio of evaluations and policy research, assess the agency's evaluation capacity, establish and implement an agency evaluation policy, and coordinate the plans required under § 312.
That makes the Evaluation Officer the statute's internal steward of evaluation quality and evaluation governance. Congress was not content to ask agencies to publish plans; it wanted a senior official responsible for whether those plans are methodologically serious and institutionally supported.
Statistical expertise
5 U.S.C. § 314 requires agencies to designate the head of a statistical agency or unit, or another senior official with appropriate expertise, as the agency's statistical official. That official advises on statistical policy, techniques, and procedures and serves on the Interagency Council on Statistical Policy under Title 44.
This matters because not all evidence-building is the same thing as program evaluation. Congress recognized that statistical work has its own standards, privacy issues, and professional norms. The statistical-official requirement is the subchapter's way of ensuring that evidence-building does not flatten statistics into generic management analytics.
The advisory committee on data for evidence building
5 U.S.C. § 315 created the Advisory Committee on Data for Evidence Building, chaired by the Chief Statistician of the United States and populated by federal data leaders plus outside experts from state and local government, privacy, transparency, technology, and research communities.
Its job was to advise on how to facilitate data sharing, data linkage, privacy-enhancing techniques, and cross-agency availability of data for evidence-building. The statute also required an annual public report.
But §315 is also a reminder that not every provision here is permanent operating law in the same way. The section expressly provides that the advisory committee terminates no later than two years after its first meeting. So the provision is best understood as a reform-era catalyst rather than a permanent standing governance body.
How It Works
The subchapter works only in combination with other statutory frameworks. Congress deliberately tied it to § 306 strategic planning, Title 31 performance planning, and Title 44 data governance rather than building it in isolation.

The learning agenda under § 312 is a legally required element of each agency's strategic plan, not a discretionary management practice: agencies must explicitly identify the policy questions they do not yet know the answers to, the data they plan to collect or acquire, the analytical approaches they will use, and the obstacles they face.

The Evaluation Officer designated under § 313 is the institutional steward: a senior official chosen for expertise rather than political affiliation who is responsible for whether evaluation plans are methodologically credible, not just paperwork-compliant.

Section 314's statistical-official requirement reflects Congress's recognition that statistical work has its own professional standards separate from general program evaluation; the two roles are related but distinct, and the statute gives each its own senior champion at covered agencies.
How It Affects You
If you work in agency policy, budget, or performance management at a CFO Act agency: Your agency's learning agenda and annual evaluation plan are legal requirements under §§ 312(a)-(b), not optional management documents. OMB Circular A-11 Section 290 specifies the required format; agencies must submit their evidence-building plan as part of the strategic plan and their annual evaluation plan alongside the annual performance plan. OMB reviews these during the budget formulation process, and a weak or vague learning agenda (one that lists broad research interests without specific policy questions or named data sources) may draw scrutiny. The Evaluation Officer role (§ 313) must be filled by a senior official with demonstrated methodological expertise, not a generic policy director. If your agency has not formally designated an Evaluation Officer, it is not in compliance.
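The designation and planning requirements above can be sketched as a hypothetical agency self-check; the keys and descriptions are illustrative labels mapped to the statutory sections, not official compliance criteria:

```python
# Hypothetical compliance self-check for a covered agency under
# 5 U.S.C. §§ 312-314. Keys and descriptions are illustrative,
# not official OMB terminology.
REQUIREMENTS = {
    "evidence_building_plan": "§ 312(a): learning agenda in the strategic plan",
    "annual_evaluation_plan": "§ 312(b): evaluation plan with the performance plan",
    "evaluation_officer": "§ 313: senior Evaluation Officer designated",
    "statistical_official": "§ 314: statistical official designated",
}

def compliance_gaps(agency_status: dict[str, bool]) -> list[str]:
    """List the statutory requirements the agency has not yet satisfied."""
    return [desc for key, desc in REQUIREMENTS.items()
            if not agency_status.get(key, False)]

status = {"evidence_building_plan": True, "evaluation_officer": True}
for gap in compliance_gaps(status):
    print("Missing:", gap)
```

The design point mirrors the statute: the four requirements are independent, so satisfying the planning duties in § 312 does not cure a missing designation under § 313 or § 314.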
If you are an external researcher, advocacy organization, evaluator, or state or local government official: Section 312(c)'s consultation requirement is an underused opening. Agencies must consult outside researchers, practitioners, and other levels of government when developing their evidence-building plans and annual evaluation plans. This consultation is typically conducted through public comment processes and stakeholder meetings, and it is distinct from notice-and-comment rulemaking. If your organization has policy questions you want a federal agency to prioritize (a program's effects on a specific population, a data gap that prevents you from measuring outcomes), submitting comments during the evidence-plan development cycle is more likely to influence agency research priorities than general advocacy. Check agency performance.gov pages for upcoming consultation opportunities.
If you track federal statistical capacity and are concerned about data cuts: The § 314 statistical official requirement and the interagency coordination structures it connects to depend on maintaining the underlying statistical programs — Census Bureau surveys, Bureau of Labor Statistics data collections, agency-specific program data. The Trump administration's 2025 DOGE review included scrutiny of statistical agency budgets: proposals to eliminate or reduce the Current Population Survey's fertility questions, restrict American Community Survey content, and consolidate BLS data collection. These cuts would undermine the data infrastructure that evidence-building plans depend on. Tracking statistical-capacity cuts is therefore a leading indicator for whether § 312-315 requirements become paper compliance exercises or meaningful evidence-building frameworks.
If you work in federal AI governance, digital services, or evaluation methodology: The Evidence Act framework predates generative AI, but its core requirements — systematic identification of policy questions, named data sources, described analytical methods, and designated methodological expertise — now apply to AI-assisted research just as they apply to traditional program evaluation. OMB guidance issued in 2024 and 2025 began to address how AI-generated analysis interacts with evidence standards, including requirements around reproducibility, bias assessment, and documentation of model limitations. Evaluation Officers at major agencies are being asked to assess whether AI-assisted research meets the Evidence Act's standards for what constitutes credible "evidence." If your agency uses AI models to generate program evaluation findings, the Evaluation Officer (§ 313) is the official responsible for that quality determination.
State Variations
This subchapter is federal law governing major federal agencies. States may have their own performance, evaluation, or evidence statutes, but they differ widely in scope, staffing, and data-sharing structure.
Implementing Regulations
Subchapter II is implemented less through classic notice-and-comment regulations than through OMB guidance, agency strategic plans, annual performance plans, learning agendas, evaluation plans, and related evidence-governance practices. It also depends on the companion Title 44 data-governance framework and interagency policy structures.
Pending Legislation
As of April 9, 2026, there does not appear to be a major standalone 119th Congress bill aimed at overhauling 5 U.S.C. §§ 311-315 as a package. The current policy debate is more about implementation quality: agency data access, evaluation capacity, privacy-preserving linkage, AI-era evidence methods, and whether agencies are using learning agendas as real decision tools rather than compliance documents.
Recent Developments
The most important continuing development is that the subchapter has shifted from startup law to implementation law. In the years immediately after the 2018 Evidence Act, the main question was whether agencies would create the required roles and plans at all. By 2026, the larger question is whether the plans are actually shaping agency decisions, budget proposals, and program redesign.
Another practical development is the growing interaction between this subchapter and newer federal conversations about data governance, privacy, digital service delivery, and AI-assisted analysis. The statute itself predates the current wave of generative-AI policymaking, but its core requirements around methodology, evaluation capacity, and evidence-building informed by outside researchers and practitioners have become more relevant, not less.
Finally, §315's advisory-committee structure shows that some parts of the original reform package were deliberately transitional. The permanent legacy of Subchapter II is less the temporary committee and more the now-routine federal architecture of learning agendas, annual evaluation plans, Evaluation Officers, and statistical officials embedded across major agencies.