NASA - George Mason University Workshop:
Performance Metrics for R&D Organizations
March 3, 1999

"NASA: Measuring the R&D Payoff"

Sylvia K. Kraemer
Director of Policy Development
Office of Policy and Plans
NASA


        While there may be a few new things under the Sun, attempts to measure the "payoff" for R&D at NASA are not among them. In the 1970s, as public interest in space exploration waned after Apollo and the costs of the Vietnam misadventure continued to grow, NASA struggled to justify its budgets by showing significant returns to the economy from its programs. A series of NASA-funded econometric studies appeared to prove that for each tax dollar the Agency spent, it returned from $7 to $14 to the U.S. economy. Seemingly sophisticated cost-benefit studies of particular NASA technologies -- from cardiac pacemakers to zinc-rich coatings -- purported to prove cost-benefit ratios from 4:1 to 340:1.

        Unfortunately, these findings -- early efforts at quantitative measurement of R&D performance -- did not survive close scrutiny. The General Accounting Office could not validate the findings of some of these studies, and noted that the excessive number of variables in the equations made them unreliable. Concerns over the number of variables and the instability of the formulas were echoed by the Government’s statisticians at the Department of Labor, who concluded from their own more conservative estimates that returns on private sector R&D tend to be between 15% and 30%, while returns on Government R&D vary between 0% and 5%.

        This early effort raised an obvious question: does anyone invest in R&D to achieve "macroeconomic" returns? Most of us, whether in the public or private sector, look for a more immediate payoff, e.g., a successful product brought to market, or a critical capability added to the Government’s public policy toolkit. Toward the end of the last decade, the Office of Technology Assessment offered this benediction on the macroeconomic proofs of R&D returns:

        The factors that need to be taken into account in research planning, budgeting, resource allocation, and evaluation are too complex and subjective; the payoffs too diverse and incommensurable; and the institutional barriers too formidable to allow quantitative models to take the place of mature, informed judgement.

        Manipulation of econometric formulas, however, was not the first attempt at quantitative measurement of technological change and its resulting economic value. Prior to World War II, Simon Kuznets and Robert Merton sought to trace the rate and dispersion of technological change through patent data. Their findings included the notion that technological improvements in particular industries, like economic investment, reach points of diminishing returns. After World War II the most important analysis of patents was done by Jacob Schmookler, who, among other things, disproved the notion, popular among university-based scientists, that technological progress is driven by the growth of basic science. Schmookler’s conclusions -- which, to the best of my knowledge, have not been successfully refuted -- were based on his study of patents issued in key capital goods industries since 1874, and of 934 important inventions in producers’ goods industries. He found that, in the minority of instances in which a stimulus to the invention could be identified, "...for almost all of these that stimulus is a technical problem or opportunity conceived by the inventor largely in economic terms.... When the inventions themselves are examined in their historical context, in most instances either the inventions contain no identifiable scientific component, or the science that they embody is at least twenty years old."

        What is interesting for our purposes today about Schmookler’s work is that it combines the quantitative data -- patents -- with qualitative information -- assessments in trade and technological literature, to answer not only the brute question: "How Much?", but the related and, at the end of the day, equally important question, "How and Why?" I will return to this model in a few minutes.

        Which brings us to the present, and the burden the Government Performance and Results Act places on R&D mission agencies to measure the success of their programs. The reactions to GPRA in the research world may turn out to be as revealing of the research enterprise as any of the efforts at compliance.

        At one extreme we have the basic research die-hards, who want us to believe that basic science is so motivationally pure, and so exalted in the talents it requires, that its essence cannot be captured by anything so brute as a simple number. At the other extreme, we have the marketers of simplistic formulas, not the least of which has been -- you guessed it -- counting the number of patents issued to, or articles appearing in the Science Citation Index (SCI) authored by, researchers in a particular organization.

        Unfortunately, neither patents nor science citations have uniform value. Twice the number of patents does not give us twice the number of useful or commercially profitable inventions. Twice the number of science citations does not give us a two-fold increase in our understanding of a natural phenomenon.

        Before we can develop the valid generalizations about causes and effects that we need for informed R&D policy and management, we need to assemble qualitative information. And that qualitative information includes generalizable information about the incentives, circumstances and facilities behind both the appearance of particular technological innovations, as well as their successful commercialization. This kind of information comes from genuinely representative case studies.

        Because comprehensive patent data for any organization gives us a catalogue of its most significant inventions -- a catalogue comparable, let’s say, to a telephone book -- it is possible to randomly sample the organization’s inventions over a significant period of time. A rule of thumb for many is 15 years, which conventional wisdom says is the average time required for a technology to mature from inspiration to demonstration. To be safe, I prefer 20 years. Randomly sampled patented inventions are a mountain of precious metals waiting to be mined. Studies of randomly sampled (rather than anecdotally selected) cases of patented inventions will enable us to form reliable generalizations about who invents, under what circumstances, and with what results. Combined with licensing and commercialization information, we may begin to understand, with much greater certainty than before, where the greatest potential return on R&D investment lies waiting, whether we are investing our 401(k) savings, our firm’s capital, or the Federal budget. Now, let me share with you some of the initial observations from our efforts, at NASA, to do just the kind of project I’ve described.

        Our project begins with a complete database of all patents assigned to NASA during the 20-year period 1976-1996. Aside from bringing us as close to the present as possible, given a few years’ allowance for the patenting process itself, the period 1976-1996 covers the transition between the post-Apollo let-down in NASA’s budget and the ramp-up for the Shuttle program. Secondly, we have assembled, from our patent and license attorneys’ files, a database of the licenses issued by NASA to use patents assigned during this period. An analysis of the patent classifications and technologies represented by the more than xxxxx patents assigned to NASA between 1976 and 1996 enables us to make some interesting observations about what has happened to Federal tax dollars dedicated to aerospace R&D, and about the occasional complaint that "spin-off" from Federal mission agency R&D is an ineffective strategy for stimulating innovative technologies in general.

        One category of data I cannot yet report on, except in the most general terms (though it will be included in the completed study), is the xxxx "patent waivers" NASA has granted pursuant to contracts with private-sector performers. (On average, patent waivers represent xx% of the total number of NASA patents for the period, while licenses issued by NASA represent yyy%.) The Space Act vests in NASA patent rights to all inventions made by its contractors. NASA is also authorized, however, to waive those rights to a contractor. The application for a patent waiver typically reveals the nature of the technology that the contractor suspects may have value -- whether for its own development, or to protect its business position in a particular market segment. Generally, then, if we want to trace NASA’s influence in aerospace technologies, patent waivers, rather than patents or licenses, will be our vehicle for doing so.
