Monday, 30 November 2020

What administrative actions might we expect on COVID-19 in President-elect Biden’s administration?

By Rachel Sachs, Jacob S. Sherkow, Lisa Larrimore Ouellette, and Nicholson Price

When President-elect Joe Biden is inaugurated in January, his administration will face the ongoing public health and economic challenges created by COVID-19. Even if Biden takes office without control of Congress, we can expect his administration to take important actions regarding COVID-19 innovation in response to the pandemic. In this post, we consider three main pillars of his administration’s likely response (as articulated by the Biden-Harris transition website) and explain the legal foundations behind them.

What innovation incentive and access goals has the administration put forth?

The transition website describes plans to encourage both innovation incentives for and access to COVID-19 diagnostics and treatments. Given the substantial progress already made in developing new healthcare technologies to combat COVID-19—including, to date, at least two vaccine candidates that seem likely to be marketed in the US—the new administration’s efforts focus mainly on furthering the development of rapid diagnostics, improving access to them, and ensuring the broad and equitable distribution of personal protective equipment (PPE) and vaccines. These are laudable goals.

The administration plans to further the development of diagnostics, such as “next-generation testing, including at home tests and instant tests, so we can scale up our testing capacity by orders of magnitude.” This includes “[d]oubl[ing] the number of drive-through testing sites” and centralized production of at least some test kits, which the incoming administration likens to the War Production Board during World War II. This is good; as we’ve written previously, at-home testing is a core strategy for controlling the pandemic. The current administration’s efforts focused on encouraging the development of such tests through the NIH and have had some success. The incoming administration is rightly focused on deploying them widely.

By contrast, the large-scale manufacture of PPE remains a persistent problem in the US, even though nearly a year has passed since the current administration was warned of a new respiratory virus that would likely put millions of Americans at risk of illness or death. A national shortage of N95 respirators, for example, emerged as a key policy problem early in the pandemic and has not improved. The new administration recognizes that it needs to “[f]ix personal protective equipment (PPE) problems for good.” To do so, it plans to use the Defense Production Act “to ramp up production of masks, face shields, and other PPE so that the national supply of personal protective equipment exceeds demand and our stores and stockpiles.” The government indeed has supply chain power and expertise that it can lean on to coordinate the production and distribution of such resources.

And with respect to vaccines—arguably the current administration’s most significant success—the new administration states that it will invest “$25 billion in a vaccine manufacturing and distribution plan that will guarantee [a successful vaccine] gets to every American, cost-free.” Whether these are new funds or money already committed to the effort remains unclear; the current administration’s vaccine plan intended for “most patients [to receive the vaccine] for no out-of-pocket costs.”

One noteworthy departure from the current administration’s priorities is the explicit recognition of racial disparities in who has been most severely impacted by COVID-19. This includes “[e]stablish[ing] a COVID-19 Racial and Ethnic Disparities Task Force, as proposed by Vice President-elect Harris, to provide recommendations and oversight on disparities in the public health and economic response.” These efforts extend to “culturally competent approaches to contact tracing and protecting at-risk populations.” And, to the extent that personnel is policy, the new administration’s COVID Task Force is made up of members who have done top-flight research on issues pertaining to COVID and equity, including Drs. Marcella Nunez-Smith and Celine Gounder. Nor do these changes appear to be temporary; the new administration plans for the Task Force to “transition to a permanent Infectious Disease Racial Disparities Task Force” at the pandemic’s end.

What legal tools might the incoming administration use to accomplish these goals? 

Any of these policy goals could be accomplished through new legislation, including appropriation of additional funds for combatting the pandemic. Congress is under pressure to pass a new COVID-19 stimulus package, although negotiations have been stalled for months, and it is unclear whether or when a compromise might be reached. The results of the January 5 Georgia Senate runoff elections will decide whether Democrats control both houses of Congress—making new legislation a more viable priority—or whether any new laws must navigate an even more challenging political landscape. But even without new congressional action, the Biden administration has numerous legal tools for accomplishing its goals concerning COVID-19. Here, we highlight three key options.

First, the Defense Production Act—passed at the start of the Korean War and reauthorized through 2025—can be used to control domestic industries through, for example, government purchasing, requiring firms to prioritize government orders, and restricting exports. A report by the nonpartisan Congressional Research Service noted that the Trump administration’s use of the Defense Production Act has been “sporadic and relatively narrow,” and President-elect Biden has called for much more aggressive use of this authority to address PPE shortages. The Defense Production Act could also be used to increase supplies of diagnostics or of products necessary for the vaccine supply chain, like glass vials and freezers.

Second, the incoming administration can set priorities for the numerous agencies with discretion over innovation-related grantmaking budgets. The NIH—the world’s largest biomedical research funder—has a budget of over $40 billion. And direct public funding for health-related R&D is also provided by other agencies within the Department of Health and Human Services, including the CDC, FDA, and Biomedical Advanced Research and Development Authority (BARDA), as well as the Department of Defense, the Department of Veterans Affairs, and the Department of Energy. For example, we have previously written about the investments by the NIH and BARDA in Moderna’s SARS-CoV-2 vaccine and the NIH’s Rapid Acceleration of Diagnostics (RADx) initiative in collaboration with the FDA, CDC, and BARDA. These agencies can use both traditional ex ante grants as well as ex post or “pull” policies like prizes to support the Biden administration’s R&D goals.

Third, modulating government reimbursement amounts through insurers can also serve as an ex post innovation incentive. Some coverage decisions are out of the administration’s hands, such as whether to expand Medicaid eligibility under the Affordable Care Act in the 12 states that have failed to do so. But the Centers for Medicare and Medicaid Services (CMS) and the Department of Veterans Affairs have substantial authority over reimbursement decisions, such as for the more than 100 million Americans covered by Medicare and Medicaid. We have previously described steps taken by Congress and the Trump administration to expand coverage of COVID-19-related healthcare costs. HHS has repeatedly stated that it plans for COVID-19 vaccines to have no out-of-pocket costs for patients. And in October, CMS reimbursement rules were changed so that even vaccines receiving emergency use authorization (EUA), rather than full approval, are covered with no cost-sharing. In addition, CMS recently moved aggressively to expand access, without cost-sharing, to a monoclonal antibody therapy that had just received an EUA.

The incoming administration can play a crucial role not only in directing actions at individual agencies, but also in coordinating the wide array of innovation-related agencies into an integrated response. In prior posts, we have described how the early Trump administration responses to N95 respirator shortages and diagnostics represented interagency coordination problems. The Trump administration’s Operation Warp Speed, in contrast, seems to reflect a successful collaboration among agencies including the NIH, CDC, BARDA, and the Department of Defense. Coordinating the federal government’s COVID-19 response seems like one of the most important roles the incoming administration can play, whether or not it has support from Congress.

How does the Biden administration plan to communicate with the public?

In addition to more foundational innovation policy issues, communication is central to ongoing efforts to combat the pandemic. The pandemic has been greatly exacerbated by communications failures by the federal government, which has sent profoundly mixed messages on masks, the severity of the pandemic, and social distancing. The Biden administration aims to communicate better with the public in three related ways.

First, the administration plans to put doctors and scientists front and center in the pandemic response. Biden’s already-announced COVID task force shows this commitment. It is replete with scientists and physicians, particularly those with governmental expertise (like former FDA Commissioner David Kessler and Surgeon General Vivek Murthy)—though some have noted a lack of other experts, including social scientists. Biden also plans to return the CDC to the front line, including resuming regular daily briefings led by respected public health experts and scientists—and hopefully restoring some of the CDC’s lost luster and public authority.

Second, the Biden administration plans to provide evidence-based guidance for dealing with the pandemic dynamically. The transition guide suggests a more nuanced version of the policy prescriptions that have become familiar over the last several months. Social distancing, for instance, “is not a light switch. It is a dial. President-elect Biden will direct the CDC to provide specific evidence-based guidance for how to turn the dial up or down…” (Those of you who follow public-health-law-focused Professor Lindsay Wiley on Twitter will be familiar with this theme.)

Third and finally, the administration plans to be substantially more transparent than the Trump administration has been and to promote that transparency throughout the various agencies involved. For instance, the Biden administration plans to “publicly release clinical data for any vaccine the FDA approves, and authorize career staff to write a written report for public review and permit them to appear before Congress.” The FDA is now planning to do this—but the GAO is concerned that the FDA has been insufficiently transparent in its COVID-19 decisions to date, especially emergency use authorizations for therapeutics. Presumably, greater transparency about CDC guidance on public health matters is also likely to follow in a Biden administration. Particularly as scientists discover more about COVID-19 and recommendations change over time, transparency about those recommendations is especially important to maintaining public trust. For therapeutics, non-pharmaceutical interventions, and vaccine distribution alike, public trust is key, and effective, transparent communication is essential to restoring and maintaining that trust.

This post is part of a series on COVID-19 innovation law and policy. Author order is rotated with each post.


Saturday, 21 November 2020

What role is AI playing in the COVID-19 pandemic?

By Nicholson Price, Rachel Sachs, Jacob S. Sherkow, and Lisa Larrimore Ouellette

Promising results for the Pfizer/BioNTech and Moderna vaccines have been the most exciting COVID-19 innovation news in the past few weeks. But while vaccines are a crucial step toward controlling this virus, it is important not to overlook the many other technological developments spurred by the pandemic. In this week’s post, we explore how the COVID-19 pandemic is proving a fertile ground for the use of artificial intelligence and machine learning in medicine. AI offers the tantalizing possibility of solutions and recommendations when scientists don’t understand what’s going on—and that is sometimes exactly what society needs in the pandemic. The lack of oversight and wide deployment without much in the way of validation, however, raise concerns about whether researchers are actually getting it right. 

How is AI being used to combat the COVID-19 pandemic?

The label “artificial intelligence” is sometimes applied to any kind of automation, but in this post we will focus on developments in machine learning, in which computer models are designed to learn from data. Machine learning can be supervised, with models predicting labels based on training data, or unsupervised, with models identifying patterns in unlabeled data. For example, a supervised machine learning algorithm might be given a training dataset of people who have or have not been diagnosed with COVID-19 and tasked with predicting COVID-19 diagnoses in new data; an unsupervised algorithm might be given information about people who have COVID-19 and tasked with identifying latent structures within the dataset. Many of the most exciting developments in machine learning over the past decade have been driven by the deep learning revolution, as increases in computing power have enabled analysis of enormous layered datasets.
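To make the supervised/unsupervised distinction concrete, here is a minimal sketch in pure Python. All data are hypothetical and invented for illustration (they are not drawn from any real COVID-19 dataset): a nearest-centroid classifier stands in for supervised learning on labeled diagnoses, and a naive k-means loop stands in for unsupervised pattern-finding in unlabeled data.

```python
def nearest_centroid_fit(points, labels):
    """Supervised: learn one centroid per label from labeled training data."""
    centroids = {}
    for label in set(labels):
        group = [p for p, l in zip(points, labels) if l == label]
        centroids[label] = tuple(sum(c) / len(group) for c in zip(*group))
    return centroids

def nearest_centroid_predict(centroids, point):
    """Predict the label whose learned centroid is closest to the new point."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda l: dist2(centroids[l], point))

def kmeans(points, k=2, iters=10):
    """Unsupervised: find k clusters in unlabeled data (naive k-means)."""
    centers = [points[i] for i in range(k)]  # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Recompute each center as the mean of its cluster (keep old if empty).
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

# Hypothetical features: (days of cough, temperature); label 1 = diagnosed.
train = [(1, 98.6), (2, 99.0), (7, 102.1), (6, 101.4)]
labels = [0, 0, 1, 1]
model = nearest_centroid_fit(train, labels)
print(nearest_centroid_predict(model, (5, 101.0)))  # → 1
```

Real systems, of course, use far richer models (deep neural networks rather than centroids), but the division of labor is the same: the supervised model needs labeled training examples, while the clustering step is given none.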

The number of applications of machine learning approaches to COVID-19 is staggering—there are already close to fifty thousand works on Google Scholar using the terms “machine learning” and “COVID-19”—so we will focus here on some accessible examples rather than a systematic academic review.

Machine learning is being used for basic research, such as understanding COVID-19 and potential interventions at a biomolecular level. For instance, deep learning algorithms have been used to predict the structure of proteins associated with SARS-CoV-2 and to suggest proteins that might be good targets for vaccines. Deep learning has facilitated drug repurposing efforts involving scanning the literature and public databases for patterns. And AI is being used to help conduct adaptive clinical trials that distinguish as efficiently as possible among potential COVID-19 therapies.

AI is also being used to help diagnose and manage COVID-19 patients. Some AI researchers are focused on helping people self-diagnose through technologies like wearable rings or chatbots. One algorithm predicts whether a patient has COVID-19 based on the sound of their cough. Once a patient reaches the hospital, datasets of lung x-rays are being used both to diagnose COVID-19 and to predict disease severity, and models like Epic’s Deterioration Index have been widely adopted to predict whether and when a patient’s symptoms will worsen.

How do we know that AI is actually helping?

In most cases, we don’t. Many companies and institutions that have developed or repurposed AI tools in the fight against COVID-19 have not published any data demonstrating how well their analytical tools work. In many cases, these data are being gathered—for instance, Epic has repurposed a tool it used to predict critical outcomes in non-COVID-19 patients for the COVID-19 context, and has tested the tool on more than 29,000 COVID-19 hospital admissions—but they have yet to be made public.

In general, high-quality trials of the type we expect in other areas of medical technology are scarce in the AI space, in part because of the relative absence of the FDA in this area. Most of these AI tools are being developed and deployed without agency oversight, in a way that enables them to be used quickly by clinicians but which also raises questions about whether they are actually safe and effective for the intended use. To date, just two companies have received emergency use authorizations from the FDA for their AI models; each aims to predict which patients are at particularly high risk for developing the more severe complications of COVID-19.

To be sure, at least some studies have been published evaluating these models—but the results have been mixed. One study of a model aiming to predict a patient’s likelihood of developing COVID-19 complications found that it performed fairly well in identifying patients who were particularly high-risk or low-risk, but performed less well in predicting results for patients in the vast middle. But the model had already been adopted and used before gathering and publishing this information, due to the urgency of the pandemic. 

These studies are consistent with other observations that many AI tools are not yet delivering on the promise of the technology. Take the example of the AI chatbots that aim to ask you about your symptoms and provide an initial screening for COVID-19—one reporter tested eight of these chatbots with the same inputs and received a widely variable range of answers. One symptom checker found him to be at “low” risk of having COVID-19, another declared him to be at “medium risk,” and a third directed him to “start home isolation immediately.” (To be sure, the chatbots are designed for slightly different purposes, but to the extent that they are designed to deliver helpful advice to patients, the disparities are still concerning.)

How should policymakers treat the use of AI in the COVID-19 context?

Policymakers should encourage the best uses of AI in combatting COVID-19—but should be wary of its serious limitations. Despite its promise, AI doesn’t immediately resolve one of the classic tensions of new technologies—the tension between getting the science right and trying everything to see what works. There is undoubtedly a sense of urgency to develop new tools to treat the disease—especially given its rapid and fatal spread—so a sense of optimistic experimentalism makes sense. But such an approach isn’t helpful if the diagnostics and therapies employed do not ultimately work. Nor should policymakers assume the two approaches are complementary: bad therapies often preclude the use of good ones or diminish our ability to properly test good ones.

AI also presents some major challenges related to racial bias; these can arise without any conscious bias on the part of developers. This is simply explained: AI is only as good as its inputs, and where its inputs contain biases—a sad but true reflection of the world around it—any algorithm developed from those inputs will embody those biases. Even before COVID-19, researchers found that because less money is spent on Black patients, a popular commercial algorithm for guiding healthcare decisions falsely concluded that Black patients are healthier than they are. This algorithmic bias is an example of “proxy discrimination”—the tendency to use proxies to take into account differences between groups, even where the training data omit group identification. As a consequence, overusing AI may contribute to COVID-19 bias or disparities. And algorithmic bias is not just a concern in the clinical context; for example, a new working paper shows that relying on smartphone-based mobility data to inform COVID-19 responses “could disproportionately harm high-risk elderly and minority groups” who are less likely to be represented in such data. To be sure, AI can also be used to combat racial bias; there are some efforts to use AI specifically to figure out the determinants of disproportionate COVID-19 problems by race/ethnicity. But whether AI developers can systematically and legally combat algorithmic discrimination more broadly remains to be seen.
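The spending-as-proxy mechanism can be shown with a toy numerical sketch (all numbers are hypothetical and chosen only for illustration): if a model scores “health need” using past spending as its label, a group with identical true need but half the recorded spending will be scored as half as needy.

```python
# Two hypothetical patient groups with identical true need, recorded as
# (true_need, observed_spending) pairs. Group B's spending is systematically
# lower for the same need -- the situation documented in the healthcare
# algorithm study described above.
group_a = [(need, need * 1.0) for need in (2, 4, 6, 8)]  # spending tracks need
group_b = [(need, need * 0.5) for need in (2, 4, 6, 8)]  # half the spending

def mean_predicted_need(records):
    """A spending-trained model effectively scores 'need' from spending."""
    return sum(spend for _, spend in records) / len(records)

def mean_true_need(records):
    return sum(need for need, _ in records) / len(records)

print(mean_true_need(group_a), mean_predicted_need(group_a))  # 5.0 5.0
print(mean_true_need(group_b), mean_predicted_need(group_b))  # 5.0 2.5
```

Even though no group label ever enters the model, Group B is scored as half as needy as it truly is, purely because the proxy label (spending) correlates with group membership.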

AI in clinical care holds promise but—if used poorly—has the potential to make things worse, not better. As noted by Arti Rai, Isha Sharma, and Christina Silcox, “[t]o avoid unintended harms, actors in the [AI] development and adoption ecosystem must promote accountability . . . that assure careful evaluation of risk and benefit relative to plausible alternatives.” Doing so—intelligently—will best encourage the technological development of AI in clinical settings while avoiding some of its worst excesses. 

This post is part of a series on COVID-19 innovation law and policy. Author order is rotated with each post.


Monday, 2 November 2020

Trade Secrets and Prior Art

I have published an article entitled "The Trade Secrecy Standard for Patent Prior Art," co-authored with Sharon K. Sandeen. 

The article, which is forthcoming in American University Law Review, argues that patent prior art cases can be explained using concepts of publicness and secrecy that match those used in trade secret law. In other words, what counts as prior art against a patent pursuant to 35 U.S.C. § 102 (2011) is informed by the definition of a trade secret pursuant to 18 U.S.C. § 1839(3) (2016).

The paper can be downloaded here. I've posted an excerpt below, applying the trade secrecy standard for patent prior art to the Supreme Court's interpretation of a "public use" in Egbert v. Lippman.
The origin of [the Federal Circuit's] seemingly counterintuitive notion of what makes a use “public” is usually traced to the Supreme Court’s holding in Egbert v. Lippman. In Egbert, the inventor of an improved corset spring (Samuel Barnes) gave two samples of the invention to Frances Lee Egbert (Mr. Barnes’ then-girlfriend and eventual wife). She wore them for more than two years before Mr. Barnes applied for a patent; and they allegedly showed and explained the corset spring to a friend, Joseph Sturgis, who came over to Mr. Barnes’ house for dinner. Even though the spring was sewn into a corset and therefore, by its nature, was not visible to the public, the Court held it was in “public use” because it was given “to another” (to Frances and to Mr. Sturgis) “to be used” by them “without limitation or restriction, or injunction of secrecy.”

This holding baffled the dissenting Justice Miller, who wrote rhetorically that “[i]f the little steel spring inserted in a single pair of corsets, and used by only one woman, covered by her outer-clothing, and in a position always withheld from public observation,” was a “public use,” then he was at “a loss to know the line between a private and a public use.”

But the trade secrecy standard explains how the Egbert Court drew “the line between a private and a public use.” The Court’s concept of a “public use” was, in its own words, any sharing of the invention with another for use, without placing them under an “injunction of secrecy[.]” The Court’s use of this phrase, “injunction of secrecy,” is unlikely to be coincidence. This phrase appeared in contemporary trade secret cases to refer to a duty of confidentiality that might give rise to an injunction.

In trade secret law, a court might have found that—although the corset spring was neither generally known in the industry nor readily ascertainable by proper means—it had not been the subject of reasonable efforts to maintain its secrecy, given how liberally Barnes shared it with his girlfriend and house guests, without placing them under contractual or other restrictions that would have imposed an obligation of confidentiality. 
We might debate whether this was the right decision, from the perspective of trade secret law and modern domestic relationships. In trade secret law, a duty to maintain secrecy does not necessarily have to be written down. It depends upon the circumstances. Even if not expressly stated, a duty to maintain secrecy can be inferred if, among other things, “the trade secret was disclosed to the person under circumstances in which the relationship between the parties to the disclosure” indicates an intent to keep the information confidential. That standard might have been met in Egbert, because the most public part of the disclosure was to Frances, who was at the time the inventor’s domestic partner. ...  

In any case, even if the Egbert Court reached the wrong factual conclusion, it is hard to argue with the fact that the standard being applied was a trade secrecy one. Other cases from the same period indicate courts thought of secrecy in a similar way: deliberate efforts to conceal the invention, beyond the ordinary, were needed to keep something out of the public eye.

For example, just a few years later, in Hall v. Macneale, the Court again denied a patent for an improved safe design that the inventors had used more than two years prior to filing a patent. Citing its opinion in Egbert, the Court held that—even though the inventive design feature was effectively “hidden from view,” since it was inside the safe, and revealing it would have required a “destruction of the safe”—the inventors had not made deliberate efforts at concealment. Like Mr. Barnes, they had simply relied on the inherent nature of the safe designs to maintain the secrecy of the invention. As in a trade secret case, where zero effort to maintain secrecy will disqualify an owner from enforcing their trade secrets, this simply was not enough.

The trade secrecy standard—if not being kept as a trade secret, then patent prior art—sheds still more light when prior art activity arises within an employment setting, where courts must assess whether the company has exercised sufficient secrecy precautions with respect to its own employees. ...

The full paper can be downloaded here.  

 
