Friday, 9 April 2021

How can Congress create infrastructure for the next pandemic?

By Nicholson Price, Rachel Sachs, Jacob S. Sherkow, and Lisa Larrimore Ouellette

After approximately 200 Infrastructure Weeks, policymakers now appear to be actually talking about passing infrastructure legislation! Congress also seems like it might take action to lay the groundwork for combating the next pandemic; bipartisan efforts are underway. Putting the two together: how should Congress think about creating innovation infrastructure, broadly defined, to help combat the next pandemic?

Even before COVID-19, experts were sounding alarms about insufficient infrastructure to address the foreseeable risk of a global pandemic. In 2019, an expert group convened by the World Bank and WHO concluded that “[t]he world is not prepared” for the “very real threat of a rapidly moving, highly lethal pandemic of a respiratory pathogen,” among other things because “[t]oo many places lack even the most rudimentary health-care infrastructure.” COVID-19 has magnified these global health inequalities. But inadequate infrastructure investment is not just a problem in low-income countries: COVID-19 has also drawn increased attention to long-apparent weaknesses in many U.S. infrastructure sectors. We suggest priorities for three types of infrastructure: physical infrastructure, knowledge infrastructure, and human infrastructure (recognizing that these categories may overlap).

How can Congress build physical infrastructure for future pandemics?

The early stages of the current pandemic showed the challenges, from a public health perspective, of globalized supply chains and just-in-time inventory. Some of our first posts in this series one year ago investigated PPE shortages and responses such as 3D printing. Global supply lines are still struggling to recover. Corporate leaders have some incentives to build more redundant and resilient supply chains going forward, such as by diversifying their supply base, holding more intermediate inventory, and adopting more flexible process innovations. But the government also has a role in encouraging supply chain resilience, including by mapping supply chains for critical goods, investing in transportation infrastructure, diversifying supply routes through investments in developing countries, and clarifying lines of regulatory responsibility before the next emergency.

Reviewing all the physical infrastructure sectors that could benefit from improved pandemic preparedness would take more than a blog post, but one particularly critical investment is in vaccine manufacturing infrastructure. As we have previously explained, scaling vaccine manufacturing and distribution networks is an enormous challenge, and poor planning in these areas has led to a slow rollout even in the United States (including substantial manufacturing problems) and tragically few doses administered in low-income countries. The mRNA platform used by the Moderna and Pfizer-BioNTech vaccines is exciting in part because the vaccines can be produced more quickly than with older technologies, but a key bottleneck has been the small number of facilities that can make lipid nanoparticles for encapsulating the mRNA. Building more of these facilities takes time, but the government could invest in more capacity now—both to help continue expanding COVID-19 vaccine production and to have facilities ready for the next emerging infectious disease.

More broadly, policymakers should recognize the benefits of having idle and underused capacity during non-emergency times. The federal government already spends enormous sums maintaining undeployed military capacity for use in a defense-related emergency, but Americans are also threatened by other kinds of emergencies, ranging from winter storms that overwhelm power grids to pandemics that overwhelm public health systems. Private industry isn’t going to maintain adequate physical infrastructure on its own, so policymakers should consider subsidies, mandates, and more direct public-sector involvement in commercialization.

How can Congress build knowledge infrastructure for future pandemics?

In addition to physical infrastructure for future pandemics, policymakers can focus on building “knowledge infrastructure,” i.e., shared informational resources used for downstream development. Generating private incentives for knowledge infrastructure can be tough, however, because (as we’ve said before) some knowledge goods are nonexcludable even under strong IP laws. Creating adequate levels of such goods typically requires some direct government investment to supplement private efforts. But even for knowledge goods that may be excludable—think privately generated clinical data—we may want such information to be freely shared in a pandemic. This suggests at least three important areas in which, given our experience in this pandemic, the government could do a better job of building a public health-focused knowledge infrastructure for the next one.

First, disease surveillance. The United States did a remarkably poor job of tracking the initial (and later and later) spread of the virus, and continues to be flat-footed in tracking variants of concern. These deficiencies stem from a weak, poorly funded public health infrastructure that is not readily interconnected (at least en masse) with sequencing laboratories. Now is the time for Congress to invest in building up this country’s disease surveillance capacity; we should know about the next outbreak before it turns into a pandemic. Fortunately, the pandemic has produced several models worth replicating and expanding. For example, the University of Illinois, through SHIELD Illinois, has a scalable saliva-based virus surveillance system that it makes available for out-licensing and adoption. That said, sequencing capacity has proven to be a significant bottleneck in COVID-19 surveillance, and it’s not clear whether the next pandemic virus will be detectable in saliva. Policymakers should therefore also think about the development of other types of technologically simple tests, including CRISPR-based systems that are non-perishable, require almost no equipment, cost cents per test, and could, in principle, be used at home.

Second, we need to increase funding for fundamental research on microbiology, including, of course, knowledge of viruses and vaccines. This time around, we all “got lucky” on the successful development of COVID-19 vaccines. But a 2019 joint World Bank–WHO report notes that we still face a continuing pandemic preparedness problem when it comes to fundamental research: “Research infrastructure and level/predictability of funding are weak.” We still have basic questions about viruses, such as the level and role of recombination in positive-stranded RNA viruses. Ensuring, expanding, and diversifying funding for basic microbiology research infrastructure should be one of policymakers’ top priorities.

Third, we need better knowledge infrastructure for manufacturing pharmaceuticals, particularly biologics. Pharmaceutical manufacturing has been, and continues to be, ad hoc; improvements are typically made as manufacturers see fit. (See, e.g., biologics, small-molecule drugs, nucleic acid vectors.) This becomes a knowledge infrastructure problem because much of this information is cloistered away within relatively few companies, which generally do not share it. There is room here for more direct government investment in expanding manufacturing knowledge infrastructure, including basic research on manufacturing processes. And beyond direct spending, there is room for regulatory improvement, too: FDA considers much of the manufacturing information submitted to it “confidential business information,” or CBI. Having the Agency narrow the scope of CBI—and share what is not confidential, publicly and accessibly—could provide a foundation for manufacturing knowledge infrastructure going forward.

How can Congress build human infrastructure for future pandemics?

The need for investments in knowledge infrastructure is closely related to the need for investments in human infrastructure: investing in the training and development of people with skills and expertise that will be needed in advance of the next pandemic, and doing so in an accessible, equitable manner. For instance, not only have we underfunded our public health knowledge infrastructure (as noted above), but we have also underfunded public health professionals, who are critical elements of our human infrastructure. There are many ways to invest in further training for public health professionals, some leveraging federal-state relationships (such as through grant funding to state and local governments), and others operating at the federal level (such as through investments in the Public Health Corps).

Policymakers should also invest in developing and training scientists with expertise in these areas. Although NIH and NSF grants are often thought of as supporting particular substantive projects, much of this funding is used to support the people working on those projects. Indeed, there is some empirical evidence that basing grant selection criteria on people, not projects, results in higher-impact research—though it also creates a potential for biased selection. But whether funding agencies invest in people who seem likely to become experts in virology, in the microfluidics essential to mRNA manufacturing and encapsulation, or in projects related to these technologies (along with the people who happen to be working on them), developing human capital for the next pandemic should be a guiding policy concern.

Policymakers might also think about the ability of existing scientists to reallocate or redirect their research when urgent needs arise, such as in the pandemic context. Extramural researchers are often unable to redirect their lab capacity or expertise without obtaining permission from their funding organizations (if their funding agreements permit it at all). Funding agencies could seek to increase flexibility in reallocating extramural grants in the event of particular declared public health emergencies. Similarly, policymakers might seek to increase intramural research by government-employed scientists at NIH, and particularly at NIAID, which could be reallocated more quickly in the event of a pandemic.

Congress should pay special attention to using funding to ameliorate disparities in medical and scientific education, which are driven by factors including structural racism (as we have discussed in previous posts). The American Rescue Plan offers an important template for this approach, specifically directing $3 billion (of the $40 billion in the Higher Education Emergency Relief Fund) to historically Black colleges and universities and minority-serving institutions, some of which, like Xavier University, serve as sources of training for a disproportionate number of Black health professionals.

Yet human infrastructure is not only about training new scientists and public health professionals. It must also be about building connections among them, enabling them to cooperate, share knowledge, and develop new insights at the intersection of their fields. These efforts might take many forms, such as encouraging interagency coordination or supporting interdisciplinary scientific work. There is also surely an important role for particular individuals or entities to play as hubs of connection, such as a pandemic response team, or even a broader innovation regulator, to strengthen our pandemic preparedness infrastructure.

This post is part of a series on COVID-19 innovation law and policy. Author order is rotated with each post.

Tuesday, 6 April 2021

Google v. Oracle - The Final Shoe Drops

The Supreme Court ruled yesterday in Google v. Oracle that Google did not infringe Oracle's copyright in its APIs because Google's use was a fair use. The vote was 6-2, with Justice Breyer writing for the Court, and Justices Thomas and Alito dissenting.

The opinion was straightforward and went to great lengths to attempt to explain the technology at issue. I thought it did a decent job of it (definitely more Godot than Guffman), even as the opinion continued to struggle for a good analogy. The Court adopted the file cabinet/drawer/folder analogy presented in Google's brief, which I thought was a terrible analogy...so I guess there's no accounting for taste (or winning advocacy). The Court's fair use analysis was influenced by Judge Boudin's concurrence in Lotus v. Borland, though that concurrence didn't actually call it fair use, but instead "privileged use."
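
For readers who want something more concrete than file cabinets, here is a minimal, hypothetical Java sketch (my own illustration, not Oracle's actual code) of the distinction the analogy is reaching for. The declaring code is the organizational labeling that programmers memorize and invoke; the implementing code is what actually does the work. Google copied declarations like these so that existing Java programmers' knowledge would carry over, and wrote its own implementing code.

    // A hypothetical, simplified stand-in for an API class (not Oracle's code).
    // In the Court's analogy: the package is roughly the cabinet/drawer,
    // the class the folder, and the method declaration the label a
    // programmer uses to find the right tool.
    package com.example.lang;            // "drawer"

    public final class SimpleMath {      // "folder"

        // Declaring code: the header that programmers call, e.g.
        // SimpleMath.max(3, 7). Keeping this interface intact is what
        // lets existing skills and programs carry over.
        public static int max(int x, int y) {
            // Implementing code: the instructions that do the actual work.
            // A reimplementer can keep the declaration above while writing
            // this part independently.
            return (x >= y) ? x : y;
        }
    }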

Others have and will surely write about the fair use aspects and what this means for software APIs. Contrary to Oracle's ridiculous and vitriolic press statement yesterday, this case will likely not change the way anyone in the industry behaves in the least. APIs have been used and reused for decades, and will continue to be. And far from being a barrier to entry, reuse of APIs allows for competitive inroads and entry, including by Oracle itself in its mimicry of Amazon's AWS API. (Indeed, the hubris of Oracle's statement in light of its implementation of another company's API is stunning, assuming it was unlicensed; I've been unable to verify one way or the other.)

The opinion also has some nuggets for other fair use questions: discussion of transformation and art, definition of markets for determining harm, another reaffirmation of Campbell v. Acuff-Rose Music, fair use as a mixed question of law and fact (something I discussed in a prior blog post), and so forth.

Instead, I will focus on my hobby horse: whether the APIs are copyrightable and, if so, how we get to non-infringement. The Supreme Court explicitly decided that the copyrightability of APIs is a third rail and did not attempt to touch the issue. There are two ways to read the tea leaves. First, perhaps a majority of the Court thought they were uncopyrightable, but feared the effects of saying so. Second (and my guess), perhaps a majority of the Court (or a 4-4 split) thought that they were copyrightable, but fair use was an acceptable compromise. The second possibility is why I wrote and submitted my amicus brief, which was intended to give a path to non-infringement even if the APIs were copyrightable.

Alas, the Court did not buy into the abstraction/filtration argument I made, which I believe was doctrinally appropriate, nor did the brief get a cite, as many briefs that discussed the importance of APIs did. However, in a sense, the Court adopted the methodology I suggested. From my brief:

But the copyrightability of an entire work does not answer the question of whether any particular portion of it, if used by another, is infringing. That analysis requires determining whether the defendant has taken too much expression and not ideas, systems, methods of operation, or the like. And such a determination cannot be made outside of the infringement analysis. Any functionally required aspects—including any expression necessary to practice the idea—should be removed from the comparison. 

What remains should then be compared. The advantage of this approach is that it recognizes that while entire software programs may be copyrightable in some contexts, their pieces might not be infringed in others. There need be no zero-sum game, but only a recognition that the scope of copyright depends, as it always has, on the accused’s use of the copyrighted work. 

...

[T]he Court need not decide whether any part of Oracle’s code is copyrightable standing alone. It should only determine that the scope of its copyright in the Java source code cannot extend to infringement through the reuse of declaring functions necessary to create a compiler or interpreter that accepts the same commands and parameter names to allow programmers to use the Java programming language.

The primary pushback on this argument that I received from smart colleagues asked this question: why should context matter in the infringement analysis? Justice Thomas's dissent is replete with this same concern. My answer was always the same: because use in a functional context may be a use of the idea or method, whereas use in a different context might not be.

But Justice Breyer has sidestepped this question to reach essentially the same result using fair use. The one place where we can be sure that context matters is fair use; the nature and character of the use is one of the factors, after all. The Court's analysis tracks many of the same issues in my brief: the functionality of the APIs, their use as a de facto standard, switching costs, etc.

And so the Court's final resolution is not that far off from what I had asked. Rather than excluding the APIs from infringement by filtering them out, the Court would instead exclude them from infringement under a fair use analysis that considers many of the same factors. I can live with this solution. Way back in 1999, I published a paper arguing that "courts have been able to determine efficient economic outcomes based on the cases before them, but they have been unable to settle on a rule that definitely determines how much reuse to allow in each case." The article lays out a variety of economic factors that predict how cases come out, and you'll be shocked to find out that they favor Google in this case (e.g., switching costs, de facto standards, lack of slavish copying of the implementation, no breach of an underlying economic duty, public benefits of compatibility). Perhaps that's a reason this case has stuck in my craw for so long: it's about the only one that didn't fit with my 20-plus-year-old model.

Despite my doctrinal sanguinity, the downside of the Court's approach is that it might still lead to framing issues in the future. Litigants might still face juries asked simply to decide whether the APIs were used (for infringement) and then have to hope that fair use holds up as a defense. Then again, a judicial fair use inquiry might keep the question from ever getting to a jury, which is basically the same result I've advocated. But this case went to a jury, by appellate order, and it's unclear that it should have. It was certainly costly. However, the strong language of this opinion may carry weight the next time, as in the control-codes case I recently blogged about.


The end is near for my Oracle and Google blogging, a nine-year expedition. But I do have one more post in me: a more technical one, in a week or so, about the so-called 170 lines of code that are supposedly all that is necessary to implement Java.
