Why the Government Historically Has, and Still Should, Pay For University Research Costs

Made by History | TIME
February 25, 2025

On Feb. 7, the National Institutes of Health (NIH) issued guidance limiting how much of the “indirect costs” (IDC) of scientific research it would pay. While the topic is arcane, it has massive implications for the future of scientific research in the U.S.

In the specialized jargon of big science, “direct costs” contribute unambiguously to a specific grant-funded project; the salaries of personnel who work on a project and project-specific equipment are two examples. Indirect costs can cover everything from the ventilation in the lab where the grant-funded project takes place to the staff accountants who help create and monitor budgets.

Before the NIH directive, universities negotiated what percentage of the indirect costs incurred by scientific research the federal government would cover. The negotiated rates reflected differences in the costs confronting institutions. Because lab space in New York City, for example, costs more than in Lincoln, Nebraska, Columbia University has a higher IDC rate (64.5%) than the University of Nebraska (55.5%).

The new NIH guidance justified reducing the IDC it would pay as a commonsense market reform: since private foundations pay a lower percentage of universities’ IDC, the federal rate must be padded. Meanwhile, Project 2025, the Heritage Foundation’s manifesto for a second Trump administration, argued that IDC actually paid for “Diversity, Equity, and Inclusion (DEI) efforts” on university campuses. But both claims ignore the real reasons the federal government embraced this complex method of funding scientific research during the Cold War, as well as how it has evolved since. Paying for IDC was a way to build and maintain a free and uniquely American way of doing science—and it has proved deeply successful for three-quarters of a century.

Before World War II, private donations and industry awards funded most scientific research. Agriculture was the exception. At land-grant universities, a combination of federal, state, and local grants supported the growth of “extension services,” outreach programs that brought new methods and technologies from agricultural researchers to farmers.

The war changed everything. Under electrical engineer Vannevar Bush, the National Defense Research Committee pushed scientific research and development to unprecedented heights to serve the war effort. The Manhattan Project was proof of concept: when the American government leveraged its resources to organize and fund scientists, they could change the world rapidly.

In 1945, Bush wrote Science, the Endless Frontier. Worried that demobilization would fragment the wartime scientific community, he wanted to persuade President Harry Truman that the U.S. could—and should—continue innovating. According to Bush, to maintain momentum the country needed to increase its absolute number of research universities and distribute them more evenly around the country. This would ensure that California researchers could study seismology while their colleagues in Oklahoma specialized in petroleum geology. American interests demanded advancements across all the sciences, and Bush recognized that innovation would unfold differently across the broad and diverse nation.
Universities needed to recruit the best minds, build modern facilities, and decrease the time scientists spent teaching so they could prioritize research. Failure to do so, Bush argued, would push talented researchers into private industry, where they might contribute to the growth of the postwar economy rather than address urgent intellectual questions. Drug discovery, desalinization technology, and the development of artificial intelligence were all innovations that both challenged the limits of the human condition and served the nation’s strategic interests, and they therefore required its best minds.

Bush also hoped to prevent the politicization of science. He argued that while America needed more regional hubs of scientific inquiry, federal funds should support those hubs; otherwise, local and state politics might unduly influence their research agendas. For the same reason, he disliked the idea of using congressional appropriations to underwrite research, fearing that science would become hostage to Washington’s political whims. Instead, he suggested that a national granting agency should award funds through a competitive process overseen by a mix of neutral professional administrators and scientists themselves. “The principle of variety and decentralization of control,” Bush argued, “is nowhere more important than in scientific work, where the fostering of novelty must be the first concern.”

While Bush did not name it in the Endless Frontier, he later concluded that the key to fostering novelty was building up higher education through IDC. The private sector and the military were arguably better positioned to become the chief stewards of American scientific inquiry. But Bush feared they would sacrifice complex and wide-ranging thought to achieve short-term results. Moreover, he believed they were unlikely to conceptualize the national interest in sufficiently ambitious terms. That meant America’s underutilized university system needed to grow to fit the nation’s needs. Federal payment of indirect costs would allow it to do so.

The IDC model acknowledged both that universities took on costs over and above the direct costs of individual research projects and that research required incubation: ongoing investment could be necessary for progress on thorny problems. Leveraging taxpayer dollars to support universities’ infrastructure could help Washington build research capacity across the country. By overseeing a competitive awards process, the federal government would link the nation’s scientists in mutually beneficial networks—while also keeping them sufficiently independent that they would not fall prey to groupthink.

Truman accepted Bush’s recommendations, and Congress created the National Science Foundation in 1950. A push from other scientists who had worked on the war effort led to the expansion of the NIH’s grant-making capacity and the creation of new agencies specifically dedicated to government sponsorship of research, such as the Office of Naval Research and the Atomic Energy Commission. The agencies’ staffs and the scientists they worked with believed the U.S. had a duty to lead the world in scientific research.

As the system came together during the 1950s, paying indirect research costs became standard practice for the government, with agencies generally reimbursing educational institutions’ IDC at a flat rate of 8%. Universities never believed that amount was sufficient, and in 1957 the successful Soviet launch of Sputnik lent their complaints new urgency. Amid a panic that U.S. science lagged its communist counterpart, federal investment in science exploded.
In 1958, the Bureau of the Budget codified a set of principles universities should use to calculate IDC. The flat rate jumped to 15%, then to 20% in 1963. Three years later, the cap was removed in favor of the current negotiated model. Rates continued to rise to reflect the increasing costs and complexity of scientific research, even as the threat of Soviet scientific competition faded.

The Framingham Heart Study, which has collected health and lifestyle data from thousands of residents of Framingham, Mass., since 1946, offers a clear example of the broad and diffuse benefits of grant-funded research and of its escalating costs. In 1945, estimates indicated that cardiovascular disease (CVD) caused around half of the adult deaths in the U.S. each year. Yet the causes of CVD were largely unknown. Accordingly, the Public Health Service and the state of Massachusetts funded Harvard researchers to undertake the Framingham study, which over eight decades has surfaced what we now think of as common-sense insights into the role of diet, exercise, and smoking in heart health.

In the 1970s, Framingham data underwrote clinical practice guidelines to identify at-risk patients and formulate their treatment plans. It also supported the development of medications to lower cholesterol and manage high blood pressure. The study’s current subjects are the adult grandchildren of the original volunteers. In this phase, researchers are examining intergenerational genetic maps with an eye to developing the “personalized medicine” projected to become the clinical gold standard over the next decades.

As research costs and IDC rates have risen, federal agencies have mandated more cost-sharing from universities, tightened rules about how grant funds can be used, and increased the rate of auditing. Given that private funders pay a lower IDC rate, it might appear that universities are gouging the federal government, but only if we ignore the rationale behind the system. Vannevar Bush and his colleagues—and the presidents they served—wanted Washington to pay a higher percentage of universities’ IDC than private funders did. That funding model would maintain federal power over research and ensure that American scientists prioritized the national interest. Congress concurred: a geographically diverse scientific footprint would create concentrations of modern technology and high-wage jobs outside of the Northeast’s corridors of power. Sun Belt states, which used IDC to fund air conditioning in university labs and offices, were perhaps the biggest beneficiaries.

Far from ripping off taxpayers, the current arrangement has paid off handsomely. It underwrites a public-minded, distinctly American system of scientific research. Birthed by the nation’s sense of postwar responsibility and honed in the Cold War fight against communism, that system’s ambitions have set the standard for the world. They are one key reason the U.S. remains uniquely positioned to address the grandest challenges facing the planet today.

Trysh Travis is a historian of behavioral health at the University of Florida, where she served as an Associate Dean in the College of Liberal Arts and Sciences from 2021 to 2024.

Made by History takes readers beyond the headlines with articles written and edited by professional historians. Learn more about Made by History at TIME.
Opinions expressed do not necessarily reflect the views of TIME editors.