Tuesday, 26 March 2019

Trademarking the Seven Dirty Words

With the Supreme Court agreeing to hear the Brunetti case on the registration of scandalous trademarks, one might wonder whether allowing such scandalous marks will open the floodgates of registrations. My former colleague Vicenç Feliú (Nova Southeastern) wondered as well. So he looked at the trademark database to find out. One nice thing about trademarks is that every application shows up in the database, whether or not it was granted or later abandoned. He's posted a draft of his findings, called FUCT® – An Early Empirical Study of Trademark Registration of Scandalous and Immoral Marks Aftermath of the In re Brunetti Decision, on SSRN:
This article seeks to create an early empirical benchmark on registrations of marks that would have failed registration as “scandalous” or “immoral” under Lanham Act Section 2(a) before the Court of Appeals for the Federal Circuit’s In re Brunetti decision of December, 2017. The Brunetti decision followed closely behind the Supreme Court’s Matal v. Tam and put an end to examiners denying registration on the basis of Section 2(a). In Tam, the Supreme Court reasoned that Section 2(a) embodied restrictions on free speech, in the case of “disparaging” marks, which were clearly unconstitutional. The Federal circuit followed that same logic and labeled those same Section 2(a) restrictions as unconstitutional in the case of “scandalous” and “immoral” marks. Before the ink was dry in Brunetti, commentators wondered how lifting the Section 2(a) restrictions would affect the volume of registrations of marks previously made unregistrable by that same section. Predictions ran the gamut from “business as usual” to scenarios where those marks would proliferate to astronomical levels. Eleven months out from Brunetti, it is hard to say with certainty what could happen, but this study has gathered the number of registrations as of October 2018 and the early signs seem to indicate a future not much altered, despite early concerns to the contrary.
The study focuses not on the Supreme Court, but on the Federal Circuit, which already allowed Brunetti to register FUCT. Did this lead to a stampede of scandalous marks? It's hard to define such marks, so he started with a close proxy: George Carlin's Seven Dirty Words. This classic comedy bit (really, truly classic) nailed the dirty words so well that a radio broadcast of the bit drew an FCC sanction and the case wound up in the Supreme Court, which ruled that the FCC could, in fact, restrict these seven words as indecent. So, this study's assumption is that the filings of these words as trademarks are the tip of the spear. That said, his findings about prior registrations of such words (with claimed dual meaning) are interesting, and show some of the problems that the court was trying to avoid in Matal v. Tam.

It turns out, not so much. No huge jump in filings or registrations after Brunetti. More interesting, I thought, was the choice of words. Turns out (thankfully, I think) that some dirty words are way more acceptable than others in terms of popularity in trademark filings. You'll have to read the paper to find out which.
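The heart of the empirical exercise is a simple before-and-after count. As a rough sketch of the idea (the records, field names, and cutoff below are my own placeholders, not the paper's actual dataset or methodology), the tally looks something like this:

```python
# Illustrative before/after tally of filings containing a target word.
# All rows and field names are hypothetical placeholders; the paper's counts
# come from USPTO trademark application data, not this toy list.
from datetime import date

# In re Brunetti was decided in December 2017; the exact day here is only an
# assumed illustrative cutoff.
CUTOFF = date(2017, 12, 15)

filings = [
    {"mark": "EXAMPLE MARK ONE", "filing_date": date(2016, 5, 2)},
    {"mark": "EXAMPLE MARK TWO", "filing_date": date(2018, 3, 9)},
]

before = sum(f["filing_date"] < CUTOFF for f in filings)
after = len(filings) - before

print(f"Filings before Brunetti: {before}")
print(f"Filings after Brunetti:  {after}")
```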


Saturday, 23 March 2019

Jotwell Review of Frakes & Wasserman's Irrational Ignorance at the Patent Office

I've previously recommended subscribing to Jotwell to keep up with interesting recent IP scholarship, but for anyone who doesn't, my latest Jotwell post highlighted a terrific forthcoming article by Michael Frakes and Melissa Wasserman. Here are the first two paragraphs:
How much time should the U.S. Patent & Trademark Office (USPTO) spend evaluating a patent application? Patent examination is a massive business: the USPTO employs about 8,000 utility patent examiners who receive around 600,000 patent applications and approve around 300,000 patents each year. Examiners spend on average only 19 total hours throughout the prosecution of each application, including reading voluminous materials submitted by the applicant, searching for relevant prior art, writing rejections, and responding to multiple rounds of arguments from the applicant. Why not give examiners enough time for a more careful review with less likelihood of making a mistake?
In a highly-cited 2001 article, Rational Ignorance at the Patent Office, Mark Lemley argued that it doesn’t make sense to invest more resources in examination: since only a minority of patents are licensed or litigated, thorough scrutiny should be saved for only those patents that turn out to be valuable. Lemley identified the key tradeoffs, but had only rough guesses for some of the relevant parameters. A fascinating new article suggests that some of those approximations were wrong. In Irrational Ignorance at the Patent Office, Michael Frakes and Melissa Wasserman draw on their extensive empirical research with application-level USPTO data to conclude that giving examiners more time likely would be cost-justified. To allow comparison with Lemley, they focused on doubling examination time. They estimated that this extra effort would cost $660 million per year (paid for by user fees), but would save over $900 million just from reduced patent prosecution and litigation costs.
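The cost-benefit comparison in that second paragraph is easy to check. Here is a back-of-the-envelope sketch using the figures quoted above; the per-hour breakdown is my own illustration, not a calculation from the article:

```python
# Back-of-the-envelope check of the doubling-examination-time comparison.
# Aggregate figures are taken from the excerpt above; the per-hour breakdown
# is my own illustration.
applications_per_year = 600_000
hours_per_application = 19        # average examiner hours over a full prosecution

extra_cost_per_year = 660e6       # estimated annual cost of doubling examination time
estimated_savings = 900e6         # lower bound on savings from avoided prosecution/litigation

added_hours = applications_per_year * hours_per_application  # extra hours if time doubles
implied_cost_per_added_hour = extra_cost_per_year / added_hours

print(f"Added examiner hours per year: {added_hours:,}")
print(f"Implied cost per added hour:   ${implied_cost_per_added_hour:,.2f}")
print(f"Net annual benefit (at least): ${(estimated_savings - extra_cost_per_year) / 1e6:,.0f} million")
```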
Read more at Jotwell.


Tuesday, 19 March 2019

The Rise and Rise of Transformative Use

I'm a big fan of transformative use analysis in fair use law, except when I'm not. I think that it is a helpful guide for determining if the type of use is one that we'd like to allow. But I also think that it can be overused - especially when it is applied to uses that change the message but little else.

The big question is whether transformative use is used too much...or not enough. Clark Asay (BYU) has done the research on this so you don't have to. In his forthcoming Boston College Law Review article, Is Transformative Use Eating the World?, Asay collects and analyzes more than 400 fair use decisions issued since 1991. The draft is on SSRN, and the abstract is here:
Fair use is copyright law’s most important defense to claims of copyright infringement. This defense allows courts to relax copyright law’s application when courts believe doing so will promote creativity more than harm it. As the Supreme Court has said, without the fair use defense, copyright law would often “stifle the very creativity [it] is designed to foster.”
In today’s world, whether use of a copyrighted work is “transformative” has become a central question within the fair use test. The U.S. Supreme Court first endorsed the transformative use term in its 1994 Campbell decision. Since then, lower courts have increasingly made use of the transformative use doctrine in fair use case law. In fact, in response to the transformative use doctrine’s seeming hegemony, commentators and some courts have recently called for a scaling back of the transformative use concept. So far, the Supreme Court has yet to respond. But growing divergences in transformative use approaches may eventually attract its attention.
But what is the actual state of the transformative use doctrine? Some previous scholars have empirically examined the fair use defense, including the transformative use doctrine’s role in fair use case law. But none has focused specifically on empirically assessing the transformative use doctrine in as much depth as is warranted. This Article does so by collecting a number of data from all district and appellate court fair use opinions between 1991, when the transformative use term first made its appearance in the case law, and 2017. These data include how frequently courts apply the doctrine, how often they deem a use transformative, and win rates for transformative users. The data also cover which types of uses courts are most likely to find transformative, what sources courts rely on in defining and applying the doctrine, and how frequently the transformative use doctrine bleeds into and influences other parts of the fair use test. Overall, the data suggest that the transformative use doctrine is, in fact, eating the world of fair use.
The Article concludes by analyzing some possible implications of the findings, including the controversial argument that, going forward, courts should rely even more on the transformative use doctrine in their fair use opinions, not less.
In the last six years of the study, some 90% of the fair use opinions consider transformative use.*  This doesn't mean that the reuser won every time - quite often, courts found the use not to be transformative. Indeed, while a transformativeness finding is not 100% dispositive, it is highly predictive. This supports Asay's finding that transformativeness does indeed seem to be taking over fair use.
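For readers curious about what lies behind numbers like these, the underlying tally is conceptually simple. Here is a minimal sketch of the kind of computation involved; the records and coding scheme are invented placeholders, not Asay's dataset or methodology:

```python
# Minimal sketch of tallying (1) how often opinions address transformative use and
# (2) the fair use win rate when a use is found transformative.
# The records below are invented placeholders, not Asay's data or coding scheme.
opinions = [
    {"year": 2014, "addresses_tu": True,  "found_transformative": True,  "fair_use_found": True},
    {"year": 2015, "addresses_tu": True,  "found_transformative": False, "fair_use_found": False},
    {"year": 2016, "addresses_tu": False, "found_transformative": None,  "fair_use_found": True},
    {"year": 2017, "addresses_tu": True,  "found_transformative": True,  "fair_use_found": True},
]

addressed = [o for o in opinions if o["addresses_tu"]]
share_addressing = len(addressed) / len(opinions)

transformative = [o for o in addressed if o["found_transformative"]]
win_rate = sum(o["fair_use_found"] for o in transformative) / len(transformative)

print(f"Share of opinions addressing transformative use: {share_addressing:.0%}")
print(f"Fair use win rate when found transformative:     {win_rate:.0%}")
```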

And he is fine with that. Based on his findings, Asay recommends that two of the fair use factors be used much less often. He arrives at that conclusion based on the types of works that receive transformative treatment. In short, while there are some cases that seem to go too far, the courts do seem to require more than a simple change in message to support transformativeness.

The paper has a lot of great detail - including transformative use analysis over time, which precedents and articles are cited for support, which circuits see more cases and how they rule, the interaction of each factor with the others, the influence of transformativeness on the other factors, and (as noted above) the types of works and uses that are at issue. Despite all this detail, it's a smooth and easy read. The only information I would have liked in more detail is a time-based analysis of win rates, especially for shifting media.
 
*There is a caveat: the study omits many "incomplete" opinions that leave out discussion of multiple fair use factors, and it is unclear what those opinions look like. While this decision is defensible, especially in light of the literature, given the paper's suggestion that two of the fair use factors be eliminated, I think it would have been interesting to see the role of transformativeness in those cases where the courts actually did eliminate some factors.


Tuesday, 12 March 2019

Cicero Cares What Thomas Jefferson Thought About Patents

One of my favorite article titles (and also an article I like a lot) is Who Cares What Thomas Jefferson Thought About Patents? Reevaluating the Patent 'Privilege' in Historical Context, by Adam Mossoff. The article takes on the view that Jefferson's utilitarian view of patents should somehow reign, when there were plenty of others who held different, natural law views of patenting.

And so I read with great interest Jeremy Sheff's latest article, Jefferson's Taper. This article challenges everyone's understanding of Jefferson. The draft is on SSRN, and the abstract is here:
This Article reports a new discovery concerning the intellectual genealogy of one of American intellectual property law’s most important texts. The text is Thomas Jefferson’s often-cited letter to Isaac McPherson regarding the absence of a natural right of property in inventions, metaphorically illustrated by a “taper” that spreads light from one person to another without diminishing the light at its source. I demonstrate that Thomas Jefferson likely copied this Parable of the Taper from a nearly identical passage in Cicero’s De Officiis, and I show how this borrowing situates Jefferson’s thoughts on intellectual property firmly within a natural law theory that others have cited as inconsistent with Jefferson’s views. I further demonstrate how that natural law theory rests on a pre-Enlightenment Classical Tradition of distributive justice in which distribution of resources is a matter of private judgment guided by a principle of proportionality to the merit of the recipient — a view that is at odds with the post-Enlightenment Modern Tradition of distributive justice as a collective social obligation that proceeds from an initial assumption of human equality. Jefferson’s lifetime correlates with the historical pivot in the intellectual history of the West from the Classical Tradition to the Modern Tradition, but modern readings of the Parable of the Taper, being grounded in the Modern Tradition, ignore this historical context. Such readings cast Jefferson as a proto-utilitarian at odds with his Lockean contemporaries, who supposedly recognized property as a pre-political right. I argue that, to the contrary, Jefferson’s Taper should be read from the viewpoint of the Classical Tradition, in which case it not only fits comfortably within a natural law framework, but points the way toward a novel natural-law-based argument that inventors and other knowledge-creators actually have moral duties to share their knowledge with their fellow human beings.
I don't have much more to say about the article, other than that it is a great and interesting read. I'm a big fan of papers like this, and I think this one is done well.

Tuesday, 5 March 2019

Defining Patent Holdup

There are few patent law topics that are so heatedly debated as patent holdup. Those who believe in it, really believe in it. Those who don't, well, don't. I was at a conference once where a professor on one side of this divide just...couldn't...even, and walked out of a presentation taking the opposite viewpoint.

The debate is simply the following. The patent holdup story is that patent holders can extract more than they otherwise would by asserting patents after the targeted infringer has invested in development and manufacturing. The "classic" holdup story in the economics literature relates to incomplete contracts or other partial relationships that allow one party to take advantage of an investment by the other to extract rents.

You can see the overlap, but the "classic" folks think that the patent holdup story doesn't count, because there's no prior negotiation - the investing party has the opportunity to research patents, negotiate beforehand, plan its affairs, etc.
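To make the competing stories concrete, here is a toy numerical sketch of the ex ante/ex post gap at the heart of the holdup account. The numbers are made up, and this is my own illustration of the standard framing, not the model in the article discussed below (which, among other things, questions the role switching costs play):

```python
# Toy illustration of the standard patent holdup framing, with made-up numbers.
# This is my own sketch, not the authors' model.
incremental_value = 10.0  # per-period value of the patented tech over the best alternative
switching_cost = 40.0     # cost to redesign around the patent after development/manufacturing

# Ex ante, the implementer can still choose the alternative, so the most the
# patentee can credibly demand is the technology's incremental value.
max_royalty_ex_ante = incremental_value

# Ex post, abandoning the patented tech also means re-incurring the switching
# cost, so the patentee's bargaining ceiling rises by that amount.
max_royalty_ex_post = incremental_value + switching_cost

print(f"Ex ante royalty ceiling:  {max_royalty_ex_ante}")
print(f"Ex post royalty ceiling:  {max_royalty_ex_post}")
print(f"Potential holdup premium: {max_royalty_ex_post - max_royalty_ex_ante}")
```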

In their new article, forthcoming in the Washington & Lee Law Review, Tom Cotter (Minnesota), Erik Hovenkamp (Harvard Law post-doc), and Norman Siebrasse (New Brunswick Law) try to resolve this debate. They have posted Demystifying Patent Holdup on SSRN. The abstract is here:
Patent holdup can arise when circumstances enable a patent owner to extract a larger royalty ex post than it could have obtained in an arm's length transaction ex ante. While the concept of patent holdup is familiar to scholars and practitioners—particularly in the context of standard-essential patent (SEP) disputes—the economic details are frequently misunderstood. For example, the popular assumption that switching costs (those required to switch from the infringing technology to an alternative) necessarily contribute to holdup is false in general, and will tend to overstate the potential for extracting excessive royalties. On the other hand, some commentaries mistakenly presume that large fixed costs are an essential ingredient of patent holdup, which understates the scope of the problem.
In this article, we clarify and distinguish the most basic economic factors that contribute to patent holdup. This casts light on various points of confusion arising in many commentaries on the subject. Path dependence—which can act to inflate the value of a technology simply because it was adopted first—is a useful concept for understanding the problem. In particular, patent holdup can be viewed as opportunistic exploitation of path dependence effects serving to inflate the value of a patented technology (relative to the alternatives) after it is adopted. This clarifies that factors contributing to holdup are not static, but rather consist in changes in economic circumstances over time. By breaking down the problem into its most basic parts, our analysis provides a useful blueprint for applying patent holdup theory in complex cases.
The core of their descriptive argument is that both "classic" holdup and patent holdup rest on path dependence: one party incurs sunk costs and is thus at the mercy of the other party. In this sense, they are surely correct (if we don't ask why the party invested). And the payoff from this is nice, because it allows them to build a model that critically examines sunk costs (holdup) versus switching costs (not holdup). The irony, of course, is that it's theoretically irrational to worry about sunk costs when making future decisions.

But I guess I'm not entirely convinced by the normative parallel. The key in all of these cases is transaction costs. So, the question is whether the transaction costs of finding patents are high enough that it makes sense to invest without first incurring them. The authors recognize the problem, and note that when injunctions are not available, parties will refuse to take a license because it is more profitable to hold out. But their answer is that just because there is holdout doesn't mean that holdup isn't real and a problem sometimes. Well, sure, but holdout merely shifts the transaction costs, and if it is cheaper never to make an ex ante agreement (which is typical these days), then it's hard for me to say that being hit with a patent lawsuit after investment is the sort of path dependence we should be worried about.

I think this is an interesting and thoughtful paper. There's a lot more to it than my brief concerns suggest. It attempts to respond to other critiques of patent holdup, and it provides a framework for debating these questions, even if I'm not convinced by the debate.

Monday, 4 March 2019

Recent Advances in Biologics Manufacturing Diminish the Importance of Trade Secrets: A Response to Price and Rai

Guest post by Rebecca Weires, a 2L in the J.D./M.S. Bioengineering program at Stanford

In their 2016 paper, Manufacturing Barriers to Biologics Competition and Innovation, Price and Rai argue the use of trade secrets to protect biologics manufacturing processes is a social detriment. They go on to argue policymakers should demand more enabling disclosure of biologics manufacturing processes, either in patents or biologics license applications (BLAs). The authors premise their arguments on an assessment that (1) variations in the synthesis process can unpredictably affect the structure of a biological product; (2) variations in the structure of a biological product can unpredictably affect the physiological effects of the product, including immunogenicity; and (3) analytical techniques are inadequate to characterize the structure of a biological product. I am more optimistic than Price and Rai that researchers will soon overcome all three challenges. Where private-sector funding may fall short, grant-funded research has already led to tremendous advances in biologics development technology. Rather than requiring more specific disclosure of synthesis processes, as Price and Rai recommend, FDA could and should require more specific disclosure of structure, harmonizing biologics regulation with small molecule regulation. FDA should also incentivize development of industrial scale cell-free protein synthesis processes.

In the past few years, researchers have made rapid progress developing techniques for synthesizing, assessing the physiological effects of, and characterizing the structure of biologics. Researchers have been developing cell-free protein synthesis systems to make biologics synthesis more predictable and less path-dependent. Historically, cell-free synthesis systems have been application-specific and difficult to scale. Cell-based systems have dominated because cells maintain their own internal environments, including necessary components for protein synthesis. But cell-based systems are not perfect. For example, as Price and Rai explain at p. 1035, the pattern of carbohydrates attached to a protein is particularly challenging to replicate across different cell lines and is important for efficacy and immune response. Recently, researchers have created more flexible, generalizable platforms for cell-free synthesis. Some are developing industrial-scale cell-free synthesis processes. Others have demonstrated cell-free production of increasingly complex proteins with attached carbohydrates. These cell-free synthesis techniques are more predictable than current cell-based synthesis, eliminating variations that arise from differences between cell lines.

Researchers have developed improved models of the immune system to strengthen preclinical assessment of biologics. Traditional preclinical toxicity assays and animal models have been insufficient for biologics, which are often not directly cytotoxic but instead trigger species- and patient-specific immune reactions. As the biologics industry has grown, researchers have developed sensitive in silico methods, 2D in vitro assays, and 3D in vitro models of immune response. For example, computer models can now provide good estimations of the ability of immune cells to bind with a biologic, which a sponsor can use to predict whether a product with a slightly different structure than its reference product has the same immunogenicity. If the two products are likely to be biosimilar, the sponsor can validate immunogenicity in vitro before investing in a clinical trial. The sponsor may use 2D assays to measure the response of immune cell cultures directly exposed to the biologic, or the sponsor may introduce the biologic into 3D artificial lymph nodes, which model flow and other mechanical forces that affect immune cell response. With these tools, the variations arising from different synthesis processes become less of an obstacle to biosimilar development.

Technology for characterizing the structure of biologics has come especially far in the past decade, enabling high-resolution characterization of protein folding and glycosylation for increasingly large biologics. Structural characterization has been limited in the past because protein sequencing does not provide folding or glycosylation information, X-ray crystallography requires prohibitively complex sample preparation, and nuclear magnetic resonance (NMR) spectroscopy is ambiguous and computationally expensive for large molecules. In the past few years, though, researchers have developed 2D NMR methods for characterizing products as large as monoclonal antibodies. Cryogenic electron microscopy (CryoEM) is a newer technique suitable for characterizing larger biologics. CryoEM can be used to image large glycosylated structures such as viral coat proteins, and even whole cells, at near-atomic resolution. Though 2D NMR and CryoEM may be too time-consuming or expensive for rapid prototyping, computational methods for predicting protein structure and function are now adequate for prototyping new biologics.

Price and Rai theorize that the private sector underinvests in these three areas of research, but total funding may be sufficient. The above-cited advances were largely grant-funded. Defense Department funding for synthetic biology has skyrocketed in the past decade, accounting for 67% of U.S. public-sector research investments in synthetic biology in 2014. Public sector investment has made technologically feasible what was once nearly impossible: reverse engineering biologics.

Price and Rai argue that the costs of trade secrecy in biologics manufacturing likely outweigh the benefits, but research advances may soon reverse that assessment. As reverse engineering biologics becomes easier, the private value of keeping manufacturing methods as trade secrets will decline, and we can expect biologics makers to reduce their reliance on trade secrets. Furthermore, tools for assessing immunogenicity in silico and in vitro will eliminate some of the expense of failed clinical trials. Thus, the social value of disclosing synthesis processes will also decline.

Overall, these scientific advancements reduce the urgency and importance of Price and Rai’s policy prescriptions but do not render them irrelevant. Policymakers should consider the regulatory levers the paper describes at pages 1050-56 to incentivize full and specific disclosure; however, full disclosure of structure, rather than synthesis process, should be the focus. Biologics sponsors should be required to define their exact formulations. Heightened patent disclosure requirements are an option, but as Price and Rai suggest, the FDA may be in a better position to enforce them. In fact, detailed structural characterization, to the extent it is technologically feasible, is already required to prove biosimilarity. With improved characterization and deterministic, cell-free manufacturing, it will become possible to make true generic biologics. Heightened disclosure requirements could take the form of harmonized generics and biosimilars regulation.

Policymakers should supplement disclosure requirements with incentives for the private sector to further develop cell-free synthesis processes. Reverse engineering requires both structural information and deterministic synthesis processes. Biologics sponsors may not have sufficient incentives to invest in cell-free synthesis because it facilitates biosimilars development. Fortunately, current research provides a basis for FDA to set a reasonable timeline for biologics makers to develop and adopt cell-free synthesis. Now is an appropriate time for the FDA to announce cell-free synthesis requirements, along with immunogenicity assay requirements, for biologics license applications. As escalating fuel efficiency standards have done for the auto industry, escalating application requirements would stimulate private-sector research and development to meet them.

Price and Rai highlight legitimate concerns with the current use of trade secrets to inhibit the development of biosimilars. However, biologics manufacturing technology has advanced enough that an end to these practices is in sight. New scientific developments will enable FDA to treat biosimilars more like generic small-molecule drugs, which would simplify the approval pathway for biosimilars and enable more effective product inspections. Though this course of action would not immediately accommodate new and complex biologics such as whole cell therapies, it does suggest a model for regulating them. For new types of biologics, FDA can start with a flexible regulatory scheme allowing approval based on manufacturing process information. Then, as deterministic synthesis processes, preclinical assays, and structural characterization techniques advance, it can transition to more rigid disclosure requirements.
