Wednesday, 30 June 2021

Why do differences in clinical trial design make it hard to compare COVID-19 vaccines?

By Lisa Larrimore Ouellette, Nicholson Price, Rachel Sachs, and Jacob S. Sherkow

The number of COVID-19 vaccines is growing, with 18 vaccines in use around the world and many others in development. The global vaccination campaign is slowly progressing, with over 3 billion doses administered, although only 0.3% of those doses have been administered in low-income countries. But because the vaccines were tested in differently designed clinical trials, making apples-to-apples comparisons among them is difficult—even just for the three vaccines authorized by the FDA for use in the United States. In this post, we explore the open questions that remain because of these differences in clinical trial design, the FDA’s authority to help standardize clinical trials, and what lessons can be learned for vaccine clinical trials going forward.

What were the key differences in how clinical trials were run for COVID-19 vaccines?

Clinical trials for COVID-19 vaccines differed along a surprising number of dimensions based on manufacturers’ choices: the number of doses, the spacing between multiple doses, the amount of vaccine per dose, the patients studied, and the endpoints tested. Because the trials were conducted in different places and at different times, the prevalence of COVID-19 variants also differed. These differences matter because clinical trials are the best source of rigorous information about vaccine efficacy, and differences in the way those trials were conducted limit the ability to compare the vaccines meaningfully.

Doses administered. Vaccines are given in one or two doses. The J&J vaccine was studied and is administered in one dose (indeed, many people sought out J&J for this reason), but J&J is now studying the effect of a booster shot. When the Pfizer-BioNTech and Moderna vaccines were initially tested, a first dose was followed by a relatively weak immune reaction, and a second dose triggered a strong reaction. The Oxford-AstraZeneca and Novavax vaccines showed the same pattern. Nevertheless, rigorous evidence about the performance of two-dose vaccines after only a single dose is lacking, because that scenario was not tested in the pivotal clinical trials. 

Dose spacing. Two-dose vaccines were administered at significantly different intervals. Moderna was tested at a four-week interval, Pfizer-BioNTech and Novavax at three weeks. Oxford-AstraZeneca was the only prominent vaccine whose trials included different spacing between doses (for a small number of patients), testing four- to twelve-week intervals. Those trials found that longer inter-dose gaps provoked a greater immune response, which was used as support for the UK’s controversial Dec. 30 decision to expand the gap between doses—including for the Pfizer-BioNTech vaccine, for which a change in dose spacing had not been tested. A later study found that an increased gap for Pfizer-BioNTech boosted immune response in the elderly. The United States saw similar proposals to expand the gap between doses to get more people vaccinated (with at least one dose) faster, but the real-world effects remain unknown—because, as the CDC has emphasized, a longer gap was not tested.

Vaccine per dose. Vaccines also differed substantially in the amount of vaccine per dose in clinical trials—and consequently in the amount given to patients. For instance, the Moderna vaccine was tested with 100 micrograms of mRNA per dose, while Pfizer-BioNTech was tested with 30 micrograms per dose. Would Moderna be effective with lower doses? In January the FDA said changes to dosing or schedule were “premature and not rooted solidly in the available evidence.” In February, Moderna published results from a Phase 2 trial showing that half-doses of 50 micrograms were as good as full doses at generating a strong immune response, but experts cautioned against extrapolating immunogenicity data to draw conclusions about real-world performance.

Study populations. Vaccines were studied in different populations, based in part on recruitment efforts and in part on trials being conducted in different countries. Globally, for instance, the J&J trial population was 45% Hispanic and/or Latinx, Pfizer-BioNTech’s 26%, and Moderna’s 20%. Given disparities in COVID-19’s impact on different communities, disparities in vaccine access, and a history of biased clinical trial populations, balanced trial demographics are particularly important. Population age also differed, though less starkly (even setting aside pediatric trials); 25% of Moderna’s participants were 65 or older, but only 21% of Pfizer-BioNTech’s. (Notably, even knowing whether the comparisons are apples to apples is nontrivial: Moderna reported its age breakdown as 18-65 versus older, Pfizer-BioNTech’s was 16-18/16-55/55+/65+/75+, and J&J’s was under versus over 60.)

Endpoints. Manufacturers also chose different endpoints (the outcomes used to measure efficacy) for their clinical trials. Pfizer-BioNTech measured efficacy against any symptomatic infection beginning seven days after the second vaccine dose. Moderna also measured any symptomatic infection, but not until two weeks after the second dose. And J&J measured cases both at two and four weeks after its single dose—but counted only cases of moderate-to-severe COVID-19 (confirmed by a positive test). These differences make it difficult to compare even topline results.
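
To see why these endpoint choices matter for the topline numbers, consider a back-of-the-envelope sketch. Reported vaccine efficacy is essentially one minus the ratio of attack rates between the vaccinated and placebo groups, so the same trial can produce different headline figures depending on which cases are counted. The counts below are hypothetical, chosen only for illustration; they are not data from any of these trials.

    # A minimal sketch with made-up counts (not data from any actual trial)
    # showing how the case definition changes the headline efficacy number.
    # Vaccine efficacy is conventionally reported as
    #   VE = 1 - (attack rate among vaccinated) / (attack rate among placebo).

    def vaccine_efficacy(cases_vaccinated, n_vaccinated, cases_placebo, n_placebo):
        """Efficacy as one minus the ratio of attack rates."""
        return 1 - (cases_vaccinated / n_vaccinated) / (cases_placebo / n_placebo)

    # The same hypothetical trial, tallied under two different case definitions.
    any_symptomatic = vaccine_efficacy(30, 15_000, 150, 15_000)   # any symptomatic case counts
    moderate_severe = vaccine_efficacy(5, 15_000, 40, 15_000)     # only moderate-to-severe cases count

    print(f"VE, any symptomatic infection: {any_symptomatic:.0%}")   # 80%
    print(f"VE, moderate-to-severe only:   {moderate_severe:.0%}")   # 88%

The same arithmetic applies to when case-counting begins (seven days versus two or four weeks after the final dose), which is another reason the manufacturers’ topline figures are not directly comparable.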

Variants. Finally, clinical trials occurred at different times and in different countries—which means that the prevalence of viral variants also differed. Pfizer-BioNTech’s and Moderna’s vaccines were tested before variants of concern were widely circulating, making it harder to know how effective they are against those variants. J&J’s vaccine, on the other hand, was tested in South Africa while the Beta variant was spreading there, and it showed lower efficacy against moderate-to-severe COVID-19 in South Africa (57%) than in the United States (72%), though still strong protection (85%) against severe illness. Three months ago we described the existing data on vaccines and variants, including the lack of clinical trial evidence for most vaccine/variant combinations. Studies from England and Scotland suggest the Pfizer-BioNTech and AstraZeneca vaccines offer somewhat reduced protection against infection by the highly transmissible Delta variant, but similar protection against severe illness. These studies, however, are not randomized controlled clinical trials.

What legal authority does the FDA have to help standardize clinical trials?

As we discussed in our last post, the FDA’s decision whether to grant either emergency use authorization (EUA) or full approval under a Biologics License Application (BLA) is based on standards specified by statute. But the FDA has also published a range of regulations and guidance documents governing the clinical trial process. These requirements do compel some standardization throughout the drug development timeline, though some are more procedural than substantive (such as the submission of clinical data in standard formats).

More substantively, even though the FDA has been pushing clinical trials “in the direction of standardization” since before World War II, many important scientific aspects of the clinical trials process remain within sponsors’ discretion. The pharmaceutical companies themselves choose, in their best scientific judgment, the dosage and route of administration for their new products. In the case of the COVID-19 vaccines, the FDA was unlikely to second-guess the manufacturers’ choice of one- versus two-dose regimens, the spacing of those doses, or the amount of vaccine in each dose.

From the perspective of generating comparative clinical effectiveness information, these decisions (about dosage and spacing) may be less important than decisions about clinical trial design, including enrollment and the selection of appropriate endpoints. The FDA did release helpful guidance in June 2020 explaining what it would be looking for in the development of COVID-19 vaccines. The FDA “strongly encourage[d]” sponsors to enroll “populations most affected by COVID-19, specifically racial and ethnic minorities.” Although the agency did not require sponsors to meet specific diversity benchmarks for their trials, Moderna did briefly slow enrollment in its trial to ensure that the trial population was more representative of the U.S. public, after initially enrolling fewer people of color than anticipated.

The FDA also specified important features of the clinical trial design, including appropriate endpoints. Noting that “[s]tandardization of efficacy endpoints across clinical trials may facilitate comparative evaluation of vaccines,” the FDA recommended that “either the primary endpoint or a secondary endpoint… be defined as virologically confirmed SARS-CoV-2 infection” with one or more specified symptoms, and that sponsors should also evaluate severe COVID-19, defined as confirmed infection plus particular indicators of severity. Despite the FDA’s efforts to standardize these endpoints, though, the manufacturers chose—and were approved to use—endpoints which cannot be compared in a straightforward way.

Another feature of the COVID-19 response may have given the government as a whole more visibility into and oversight of the clinical trials process here: Operation Warp Speed. Although the FDA does work with sponsors throughout the development process, the development of COVID-19 vaccines featured a particularly high level of government funding, involvement, and coordination of clinical trials for companies opting in to that process (as Moderna and other companies did). In theory, Warp Speed could have used that funding to push for a greater degree of standardization across the clinical trials involved here.

What lessons can be learned for vaccine clinical trial design going forward?

Greater standardization of clinical trial design could have made the existing COVID-19 vaccines more comparable. But standardization also has costs. Compelling the standardization of trials on less-than-ideal or ambiguous endpoints, for example, would allow easier comparisons at the cost of less informative studies. Standardization of treatment protocols—say, the number of days between shots for two-dose vaccines—would yield some comparative insights but would diminish opportunities for optimization, that is, for the variations across trials that yield information about how to improve the vaccine candidates being tested. For the COVID-19 vaccines, we still don’t know the optimal number of weeks between doses, and standardizing a given time period from the outset would have done little to elucidate that. Ditto for the optimal dose of the vaccine, be it, for example, 100 or 30 micrograms of mRNA, or something greater, smaller, or in between.

At the same time, compelling each manufacturer to test these variations would have come at the cost of losing statistical power and increasing the time before vaccines could be authorized—a tragedy in a pandemic where more than 10,000 lives are lost each day. Vaccine developers were able to create multiple safe and effective vaccines on an unprecedented timeline in no small part because of the simple trial designs. So, while the lack of standardization is easy to critique, the triumph of studying these vaccines under immense time pressure should be lauded.
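
To put the statistical power point in rough quantitative terms, here is a simulation sketch. All of the risk and efficacy numbers are assumptions chosen for illustration, not estimates from the actual trials; the point is only that a trial sized to show that a vaccine works against placebo has far less ability to distinguish between two similar regimens of the same vaccine.

    # A back-of-the-envelope power simulation (illustrative assumptions only).
    import numpy as np

    rng = np.random.default_rng(0)

    PER_ARM = 15_000          # participants per arm, roughly the scale of the pivotal trials
    BACKGROUND_RISK = 0.01    # assumed infection risk in an unvaccinated arm over follow-up
    SIMULATIONS = 20_000      # number of simulated trials

    def power(risk_a, risk_b):
        """Simulated power of a simple two-proportion z-test at z > 1.96."""
        cases_a = rng.binomial(PER_ARM, risk_a, SIMULATIONS)
        cases_b = rng.binomial(PER_ARM, risk_b, SIMULATIONS)
        p_a, p_b = cases_a / PER_ARM, cases_b / PER_ARM
        pooled = (cases_a + cases_b) / (2 * PER_ARM)
        se = np.sqrt(2 * pooled * (1 - pooled) / PER_ARM)
        z = np.divide(p_a - p_b, se, out=np.zeros_like(se), where=se > 0)
        return float(np.mean(z > 1.96))

    # Vaccine assumed 70% effective vs. placebo: power is essentially 100%.
    print(f"Vaccine vs. placebo:          {power(BACKGROUND_RISK, BACKGROUND_RISK * 0.3):.0%}")
    # Regimen assumed 60% effective vs. regimen assumed 70% effective: power is poor.
    print(f"60% regimen vs. 70% regimen:  {power(BACKGROUND_RISK * 0.4, BACKGROUND_RISK * 0.3):.0%}")

Under these toy assumptions, the head-to-head comparison of two plausible regimens comes in well under 50% power even with 15,000 participants per arm, which is why answering every dosing question inside the pivotal trials would have required far larger enrollments or far longer waits.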

That doesn’t mean there are no future opportunities to improve. Additional trials will be needed to resolve a variety of open scientific questions about the vaccines and the virus, such as whether boosters are needed to combat variants, as well as the safety and effectiveness of the vaccines in pediatric populations. A recent Perspective in Nature makes a similar call, urging the design of post-licensure clinical trials “addressing vaccine effectiveness, including the level of protection of both vaccinated and non-vaccinated individuals in entire targeted populations.” Making the most of these studies—and making the results comparable across vaccines—will require some effort at standardization, balanced with the need to deliver more vaccine to low-income countries without being exploitative.

While the FDA and other regulators shouldn’t necessarily micromanage trials, especially for novel technology, agencies could learn from the successes of a large-scale, umbrella clinical trial: RECOVERY. The initial vaccine trials did not have an adaptive, multi-arm design, but now that several vaccines are available, future trials perhaps should. The WHO’s 2018 plan for designing vaccine trials during a public health emergency noted the advantages of trials with “multiple vaccine candidates and a control comparator arm” and “adaptive strategies to drop poorly performing candidates.” Going forward, especially with testing in pediatric populations, this could include different dosing options for a single vaccine.
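
For readers unfamiliar with the design, here is a toy sketch of the adaptive, drop-the-loser idea the WHO describes. Everything in it is hypothetical: the candidate arms, their assumed efficacies, the background risk, and the futility threshold are placeholders meant only to show the mechanics of sharing a single control arm and dropping weak candidates at an interim look.

    # A toy sketch of an adaptive, multi-arm trial: hypothetical candidate
    # regimens share one control arm, and an interim analysis drops any
    # candidate whose observed efficacy falls below a preset futility bar.
    import numpy as np

    rng = np.random.default_rng(1)

    CONTROL_RISK = 0.01                                  # assumed infection risk without vaccination
    TRUE_EFFICACY = {"A": 0.75, "B": 0.65, "C": 0.30}    # hypothetical candidate regimens
    INTERIM_N = 5_000                                    # per-arm enrollment before the interim look
    FUTILITY_THRESHOLD = 0.50                            # drop arms observed below this efficacy

    control_cases = rng.binomial(INTERIM_N, CONTROL_RISK)
    control_rate = control_cases / INTERIM_N

    surviving = []
    for arm, efficacy in TRUE_EFFICACY.items():
        cases = rng.binomial(INTERIM_N, CONTROL_RISK * (1 - efficacy))
        observed_ve = 1 - (cases / INTERIM_N) / control_rate
        keep = observed_ve >= FUTILITY_THRESHOLD
        if keep:
            surviving.append(arm)
        print(f"Arm {arm}: observed efficacy {observed_ve:5.0%} -> {'continue' if keep else 'drop'}")

    print(f"Arms continuing to full enrollment: {surviving}")

A real design would use prespecified statistical boundaries rather than a bare point-estimate cutoff, but the basic structure (one shared control arm, multiple candidates, and interim decisions) is what makes such trials efficient.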

While these lessons are directed to the COVID-19 vaccines, they should be broadly generalizable, including to non-vaccine interventions. Comparative effectiveness data are lacking for the COVID-19 vaccines precisely because such data are lacking for medical interventions more generally. Thinking through solutions now should help us prepare for the next pandemic. As the saying goes, researchers investigating clinical trial design shouldn’t let a serious crisis go to waste.

This post is part of a series on COVID-19 innovation law and policy. Author order is rotated with each post.


Friday, 4 June 2021

What Does it Mean to Exceed Authorized Access?

After years of debate and prosecutorial overreach, the Supreme Court has now narrowed the Computer Fraud and Abuse Act (CFAA). In Van Buren v. U.S., the Court ruled that obtaining information by "exceed[ing] authorized access" is limited to information on the computer that one is not authorized to access at all, rather than to information simply gathered for an improper purpose.

To explain, consider the facts of Van Buren. Van Buren had rightful access to a database of DMV license plate information. He accessed that database using valid credentials, but looked up information for an improper purpose. He was convicted under the CFAA for exceeding his authorized access. I have blogged about this issue before. The broad reading that sent him to jail is a really scary interpretation of the statute, one under which many ordinary people could go to jail for innocuous use of the internet.

The Court narrowed the meaning, holding that the statutory language ("to access a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter") cannot be read to cover the purpose for which the information is gathered. Instead, "entitled so to obtain" must mean entitled to obtain in the manner previously referenced, which means obtained by access to a computer with authorization. On this reading, Van Buren cannot be guilty because he accessed records that he was already entitled to access. But he might have been guilty if he had looked at personnel files on the same computer.

The Court leaves open the question whether access to other information must be barred by code or merely by policy. In the hypo above, if Van Buren bypasses a password on the computer to which he has access in order to obtain the personnel records, there's no question that such conduct would be covered. But what if the files were there for all to see if they merely looked, and it was simply policy that barred access? The legislative history, which I discuss here, makes clear that a policy-based bar was contemplated when the statute was drafted, because "exceeds authorized access" was left out of some provisions of the CFAA to keep unwary employees from being ensnared: "It is not difficult to envision an employee or other individual who, while authorized to use a particular computer in one department, briefly exceeds his authorized access and peruses data belonging to the department that he is not supposed to look at. This is especially true where the department in question lacks a clear method of delineating which individuals are authorized to access certain of its data." (S. Rep. 99-472)

This brings me to my discomfort with the opinion. I'm thrilled at the outcome. The CFAA is much too broad, and this is one way to narrow its scope. Under the broad reading, the statute made all sorts of innocuous activity illegal. But from a textual standpoint, I've never been convinced that this is the proper reading of the words of the statute.

So long as the Court allows policy-based access restrictions (which is not crazy given the legislative history, even if it's not great policy), my view continues to be that the actual statutory interpretation part of it is not nearly as clear as the Court would have it. 

As noted above, the Court envisions two situations: 

    1. You may access the computer. You may access file A but (by policy) not file B, even though technically your access to the computer allows you to download file B. This exceeds authorized access. 

    2. You may access the computer. You may access file A, but (by policy) only for a particular purpose, even though technically your access to the computer allows you to download file A for any purpose. This does not exceed authorized access. 

For many policy reasons this is a better outcome than saying No. 2 exceeds authorized access. But the Court offers little support for the conceptual (or textual) notion that these two scenarios are distinct. There is nothing in the “entitled so to obtain” discussion that differentiates what one is entitled to by access once given and what one is not. Both scenarios involve information you could get with your access but have no right to get under the terms of your access.

The only difference is that, as a matter of policy, we don’t want to impose a purpose-based limitation on that right. Even if you accept the Court’s reading of the statute wholesale, you do not necessarily arrive at the Court's new rule: “an individual 'exceeds authorized access' when he accesses a computer with authorization but then obtains information located in particular areas of the computer—such as files, folders, or databases—that are off limits to him.” So long as “off-limits” is not code-based, this is a common-law gloss rather than a textual one. I’m fine with that, but would rather the Court say so, or alternatively limit liability for all policy-based breaches.

To illustrate the point that we cannot cleanly differentiate among policy limits, consider a question I raised in this post years ago: what is to stop everyone from rewriting their agreements conditionally? "Your access to this server is expressly conditioned on your intent at the time of access. If your intent is to use the information for nefarious purposes, then your access right is revoked." Problem solved; Van Buren goes to jail. If this seems far-fetched, consider Google's terms of service at the time of the Nosal case: "You may not use the Services and may not accept the Terms if . . . you are not of legal age to form a binding contract with Google . . . .” That sounds like an access restriction to me. I can see everyone rewriting their policies to match, but this shows the folly of it all.

As a final note, the Court's appeal to the civil provisions is unavailing: standard hacking, CAPTCHA breaking, password guessing, and any number of other things that might give unauthorized access to information are illegal yet cause no damage or loss as the Court describes those terms. Further, the Court ignores the ridiculous “we spent money finding the leak, and that’s loss” theory that lower courts have upheld. That type of loss would apply to a broader definition of "exceeds authorized access" as well.

In sum, this is a good outcome even if I'm not entirely convinced it's the technically proper one. I'm good with that.

Thursday, 3 June 2021

What’s the difference between vaccine approval (BLA) and authorization (EUA)?

By Jacob S. Sherkow, Lisa Larrimore Ouellette, Nicholson Price, and Rachel Sachs

Recently, Pfizer-BioNTech and Moderna announced that they are seeking full FDA approval for their mRNA COVID-19 vaccines—filing, in FDA parlance, a Biologics License Application (BLA). Johnson & Johnson plans to file its own BLA later this year. But currently, all three vaccines are being distributed under a different FDA mechanism, the Emergency Use Authorization (EUA). What’s the difference, under the hood, between these two mechanisms? Why would these companies want to go through the BLA process? And what tools can policymakers use to make the EUA-to-BLA shift go better?

What’s the difference between an EUA and a BLA?

A Biologics License Application, or BLA, is FDA’s standard “full approval” mechanism for biological products, including therapeutics and vaccines. A company seeking a BLA for its product must demonstrate that the product is “safe, pure, and potent,” which generally means completing robust, well-controlled clinical trials. A company receiving a BLA for its product can introduce the product into interstate commerce and market it for its approved uses. A BLA also has no defined end date—assuming no significant problems emerge, the product can stay on the market indefinitely.

By contrast, an Emergency Use Authorization, or EUA, is just that—an authorization to distribute an otherwise unapproved product (or an approved product for an unapproved use) during an emergency formally declared by the Secretary of Health & Human Services. Both the substantive and procedural rules surrounding an EUA differ from those surrounding a BLA (or the BLA counterpart for small-molecule drugs, a New Drug Application or NDA). Substantively, the standard for granting an EUA is whether, “based on the totality of scientific evidence available,” “it is reasonable to believe that the product may be effective” and that the “known and potential benefits… outweigh the known and potential risks.” Procedurally, an EUA lasts only as long as the underlying emergency. Further, the FDA may “revise or revoke” an EUA if the substantive evidence for granting it no longer exists.

In the context of prescription drugs intended to treat COVID-19, the EUAs granted by the FDA typically did not require the completion of robust, well-controlled clinical trials. Further, several of these EUAs have subsequently been revised or revoked (such as those for hydroxychloroquine and convalescent plasma). But for COVID-19 vaccines, the FDA has applied a higher standard.

Specifically, the FDA has published multiple guidance documents that describe both what the agency is specifically looking for in vaccines authorized under an EUA and how that differs from the requirements for submitting a full BLA. A June 2020 guidance for COVID-19 vaccines laid out requirements for companies to conduct full-scale clinical trials before submitting EUA paperwork. Each vaccine manufacturer enrolled tens of thousands of participants in randomized clinical trials, similar if not virtually identical to what would have been done for outright license approval. As the agency later explained in more detailed EUA guidance, because these vaccines are “intended to be administered to millions of individuals, including healthy people, to prevent disease,” the FDA planned to apply different standards to the authorization of vaccines than to the authorization of treatments for patients who were already ill with COVID-19.

However, a comparison of these two guidance documents reveals important differences between what is required of companies submitting applications for an EUA and for a subsequent BLA. Two stand out. First, the FDA wants to see longer follow-up of trial participants in a BLA, including at least six months of safety data (compared to the two months required for an EUA submission). Second, the FDA requires more detailed chemistry, manufacturing, and controls data (including facility inspections) in a BLA submission. The agency has indicated that working through this vast amount of data will take “months.”

Why would vaccine manufacturers go through the BLA process?

Pfizer-BioNTech, Moderna, and other vaccine manufacturers have already sold billions of vaccine doses—many even before any vaccines received an EUA—so why would they go through the expense of filing for a BLA? Although a BLA would not increase rewards under existing contracts, it could increase demand for the vaccines in ways that lead to additional government purchases, for at least six reasons.

First, a recent U.S. survey found that 32% of unvaccinated adults say they would be more likely to get a vaccine that had full FDA approval. Reducing vaccine hesitancy would also have a social benefit that exceeds any private rewards to Pfizer or Moderna.

Second, although there is a strong legal argument that employers may mandate vaccines under EUAs, including guidance from the EEOC, some commentators have raised concerns about the practice. Some employers and schools are thus waiting for full approval before mandating the shots. The U.S. military will also consider a vaccine mandate after approval.

Third, because products distributed under EUAs are authorized, not approved, physicians do not appear to have the ability to prescribe them off-label. Full approval, however, allows physicians to do just that, and it’s possible that some physicians and parents could choose to vaccinate children under 12 off-label even before the pediatric clinical trial results come out. Pfizer has said that it hopes to have data from children as young as 2 by September or October, which could be relevant to the agency as it decides whether to grant full approval.

Fourth, an approved BLA likely would make it harder for new vaccines to receive EUAs. The FDA has statutory authority to grant an EUA only if “there is no adequate, approved, and available alternative,” so if Pfizer’s or Moderna’s approvals are granted and the approved vaccines are deemed both “adequate” and sufficiently “available” for the intended populations, new EUAs could not be granted. To be sure, there are strong arguments (for instance) that the existing mRNA vaccines may not be sufficiently “available” for particular populations when compared to a product like J&J’s, a one-dose vaccine without the mRNA vaccines’ particular storage needs. But the agency has already taken steps toward limiting future EUA requests, suggesting that these factors may be more difficult to satisfy in the future.

Fifth, EUAs last only as long as the public health emergency that prompted them. Although COVID-19 continues to devastate countries around the world, the pandemic appears to be winding down in the United States as vaccination becomes more widespread. Approval will allow manufacturers to continue marketing their vaccines even after the officially declared emergency ends. Relatedly, manufacturers that receive full approval of their vaccines will have an easier time receiving approval for post-pandemic boosters to address new variants if COVID-19 becomes endemic. 

Finally, a full stamp of approval from the FDA might help the rollout of the approved vaccines in other countries. The FDA is unusual among health regulators in requiring companies to submit raw data so that the agency can conduct its own statistical analyses, and other organizations like the WHO often rely on the FDA’s expertise—although the FDA is not the WHO’s National Regulatory Authority (NRA) of record for any of the COVID-19 vaccines so far. The Pfizer-BioNTech and Moderna vaccines are still not approved or authorized in many countries; perhaps FDA approval will help.

What can policymakers learn from the experience with EUAs and BLAs for COVID-19 vaccines?

The experience of EUAs and forthcoming BLAs for COVID-19 vaccines has useful lessons for policymakers going forward.

First, resistance to mandates of EUA products, and public reactions more generally, suggest that full FDA approval—in this case, a BLA—remains important in the eyes of the public and those setting corporate and local policy. Whatever the various challenges that have arisen to the public’s trust in the FDA (and however the FDA responds), there is more skepticism of things that haven’t yet gotten the agency’s official stamp of approval. That matters, and it’s worth the effort to maintain that public trust (in addition to public trust more generally). Other companies should be encouraged to seek approval, and as we have noted above, they have substantial incentives to do so. Policymakers can and should continue that encouragement. This includes raising thresholds for or discouraging EUAs once a vaccine is approved through a BLA, as the agency has begun to do.

Second, despite the benefits of BLAs, the experience of COVID-19 has demonstrated exactly why EUAs provide a valuable complementary pathway. Put simply, typical BLAs, especially for vaccines, require much more time to gather, submit, and review important data. Given the timing of clinical trials, no BLA could have been approved until this summer; without EUAs, vaccines would have been delayed for at least several months, at the cost of countless lives. Public hesitancy in the face of EUA products shouldn’t obscure the immense benefits of swift action. It is useful both to have the flexibility to get products on the market quickly when needed and to have the incentives carried by full approval to drive the creation of the high-quality data needed for long-term trust.

Third and finally, the flexibility of EUAs must be managed carefully. Quick authorization should also allow quick deauthorization when products don’t work or have minimal benefits. Knowing when to reverse course depends on the FDA ensuring that EUAs come with careful guidance and rules on generating and analyzing data as products are used in the clinic. Good data prompted the quick revocation of hydroxychloroquine’s EUA, a clear win. It took much longer to gather data showing that convalescent plasma was similarly unhelpful. Ensuring the collection of data, rapid reevaluation when necessary, and appropriate standards in the first place (like the FDA’s helpful guidance on heightened vaccine EUA standards in summer 2020) will help the public know that even though full approval is the gold standard, products authorized under EUAs are still as data-supported as possible given the need for speed.

EUAs and BLAs are both important parts of the FDA’s box of policy tools. Getting the contours of each right, reevaluating their complementary roles when necessary, ensuring the collection of good data throughout the process, and clearly communicating to the public will all help ensure those tools are as effective as possible.

This post is part of a series on COVID-19 innovation law and policy. Author order is rotated with each post.
