Sunday, 30 September 2018

Aman Gebru: Compelling Disclosure of Traditional Knowledge in Patents

Aman Gebru, visiting assistant professor at Cardozo Law, has a new article forthcoming in Denver Law Review about patenting traditional knowledge. Aman is on the teaching market this year in the patents and intellectual property field, but his research and teaching deal with other areas as well like contracts and international law. His proposal, if adopted, could be good for the public and some communities, but might make big pharma a bit angry.

So-called traditional knowledge, often abbreviated "TK," is a term of art. Gebru defines it as the "know-how, skills, innovations, and practices of indigenous peoples and local communities." (5). One of the big issues in the international human rights community and the TK literature is whether it is fair for big U.S. companies to extract information from local communities and then patent and commercialize it in products like pharmaceutical drugs, generally without compensation or attribution.

Gebru taps into the literature on “information-forcing” in contract law to argue, I think quite effectively, that the patent office should compel disclosure of any "substantial" reliance by the patent applicant on traditional knowledge.

He argues the benefits of compelling disclosure would be twofold.

First, compelling disclosure of traditional knowledge in patents would increase the quality of patent disclosures. As with the (dying) best mode requirement of Section 112, forced disclosure of traditional knowledge would provide more information to the public regarding how the invention works. This is particularly important, Gebru argues, for traditional knowledge that is not otherwise documented and may remain to a large degree tacit, trapped in the minds of the people who use it.

Second, he argues that compelling disclosure of traditional knowledge would benefit researchers, the source community, and the public by addressing mistrust between parties and encouraging collaborative research.
The past experiences of researchers accessing TK, developing products, and failing to recognize the contributions of the source community have created significant trust issues. Decades of alleged biopiracy have made source communities hesitant to share their resources. To overcome this mistrust, a robust and clear signal of change from the status quo is needed. (39)

Essentially, compelling disclosure of traditional knowledge would signal to the public, and to potential commercializers, that there may be more information available within the local community from which the traditional knowledge was derived. And it would signal to local communities that they are a part of the system.  Companies seeking information would receive assistance to "transcend the tacit dimension." Local communities would gain the opportunity to engage in consulting or licensing of their knowledge to commercializing companies.

At a broader level, the paper attempts to develop a more welfare-based justification for recognizing the contributions of traditional knowledge to innovation. The justifications predominantly used in the literature are equity and distributive justice. Gebru's argument for compelling disclosure is based on the notion that it could ultimately improve innovation, as well as combat inequality and access problems. In this sense, his arguments parallel some of Madhavi Sunder's work on the potential benefits of IP systems for indigenous communities.

I also saw a third benefit to compelled disclosure of TK. The AIA substantially expanded the prior art category that was once limited to what was "known or used by others in this country" before the invention date. Now prior art that is in "public use" or "otherwise available to the public" anywhere in the world before the filing date can invalidate a patent on novelty and non-obviousness grounds. The patent office has no realistic way of obtaining this new corpus of prior art without more disclosure from applicants, and traditional knowledge would seem to be a huge area where this information asymmetry could occur. So forcing disclosure would solve a novelty-check problem as well.

This third public benefit leads to a potential pitfall: resistance from the patentee community. If prior art now encompasses public uses outside as well as inside this country, they might ask, why should we also be compelled to disclose this information to the patent office? Let the examiner, and eventually infringers in court, find it themselves.

I enjoyed hearing Gebru speak on this paper, and really look forward to more of his work.

Tuesday, 25 September 2018

Questioning Design Patent Bar Restrictions

Every once in a while an article comes along that makes you realize all the things that you just don't realize. Of course, someone else realizes these things, which makes you realize all the things you should be realizing but didn't. But this is what scholarship is about, I think - spreading knowledge. The latest such article for me is The Design Patent Bar: An Occupational Licensing Failure, by Chris Buccafusco and Jeanne Curtis (both of Cardozo Law). A draft is posted on SSRN, and the abstract is here:
Although any attorney can represent clients with complex property, tax, or administrative issues, only a certain class of attorneys can assist with obtaining and challenging patents before the U.S. Patent & Trademark Office (PTO). Only those who are members of the PTO’s patent bar can prosecute patents, and eligibility for the patent bar is only available to people with substantial scientific or engineering credentials. However much sense the eligibility rules make for utility patents—those based on novel scientific or technical inventions—they are completely irrational when applied to design patents—those based on ornamental or aesthetic industrial designs. Yet the PTO applies its eligibility rules to both kinds of patents. While chemical engineers can prosecute both utility patents and design patents (and in any field), industrial designers cannot even prosecute design patents. This Article applies contemporary research in the law and economics of occupational licensing to demonstrate how the PTO’s application of eligibility rules to design patents harms the patent system by increasing the costs of obtaining and challenging design patents. Moreover, we argue that the PTO’s rules produce a substantial disparate impact on women’s access to a lucrative part of the legal profession. By limiting design patent prosecution jobs to those with science and engineering credentials, the majority of whom are men, the PTO’s rules disadvantage women attorneys. We conclude by offering two proposals for addressing the harms caused by the current system.
It never occurred to me to think about the qualifications required for prosecuting design patents. The observation that a different set of skills goes into such work is a good one; it makes no sense that a chemistry grad can prosecute design patents but an industrial design grad cannot. There are plenty of outstanding trademark lawyers who could probably do this work, despite not having a science or engineering degree.

I like that this paper takes the issue beyond this simple observation (which could really be a blog post or op-ed) and applies occupational licensing concepts to it. Furthermore, I like that the paper makes some testable assertions that can drive future scholarship, such as whether these rules have a disparate impact on women. I am skeptical about the claimed negative impact on the design patent system, but I think that's testable as well.

The paper concludes with some relatively mild suggestions on how to open up the field a little bit. I think they should be considered, but I'm happy to hear from folks who disagree.


Monday, 24 September 2018

USPTO Director Iancu Proposes Revised 101 Guidance

In remarks at the annual IPO meeting today, USPTO Director Andrei Iancu said "the USPTO cannot wait" for "uncertain" legislation on patentable subject matter and is "contemplating revised guidance" to help examiners apply this doctrine. Few are likely to object to his general goal of "increased clarity," but the USPTO should be sure that any new guidance is consistent with precedent from the Supreme Court and Federal Circuit.

As most readers of this blog are well aware, the Supreme Court's recent patentable-subject-matter cases—Bilski (2010), Mayo (2012), Myriad (2013), and Alice (2014)—have made it far easier to invalidate patent claims that fall under the "implicit exception" to § 101 for "laws of nature, natural phenomena, and abstract ideas." Since Alice, the Federal Circuit has held patents challenged on patentable-subject-matter grounds to be invalid in over 90% of appeals, and the court has struggled to provide clear guidance on the contours of the doctrine. Proponents of this shift call it a necessary tool in the fight against "patent trolls"; critics claim it creates needless uncertainty in patent rights and makes it too difficult to patent important innovations in areas such as medical diagnostics. In June, Rep. Thomas Massie (R-KY) introduced the Restoring America’s Leadership in Innovation Act of 2018, which would amend § 101 to largely undo these changes—following a joint proposal of the American Intellectual Property Law Association (AIPLA) and Intellectual Property Owners Association (IPO)—but Govtrack gives it a 2% chance of being enacted and Patently-O says 0%.

In the absence of legislation, can the USPTO step in? In his IPO speech today, Director Iancu decries "recent § 101 case law" for "mush[ing]" patentable subject matter with the other patentability criteria under §§ 102, 103, and 112, and he proposes new guidance for patent examiners because this mushing "must end." The problem is that the USPTO cannot overrule recent § 101 case law. It does not have rulemaking authority over substantive patent law criteria, so it must follow Federal Circuit and Supreme Court guidance on this doctrine, mushy though it might be.

Under the Supreme Court's patentable-subject-matter inquiry, as summarized in Alice, once a patent claim is determined to fall within a statutory category of a "process, machine, manufacture, or composition of matter," step 1 is to "determine whether the claims at issue are directed to a patent-ineligible concept," and if so, step 2 is to "examine the elements of the claim to determine whether it contains an inventive concept sufficient to transform the [ineligible concept] into a patent-eligible application"—where "simply appending conventional steps, specified at a high level of generality" is "not enough."

Director Iancu questions whether a claim can be "nonobvious enough to pass 103, yet lack an 'inventive concept' and therefore fail 101," but this inquiry into a claim's "inventive concept" is part of the doctrinal test, and the Supreme Court has explicitly "decline[d] the Government's invitation to substitute §§ 102, 103, and 112 inquiries for the better established inquiry under § 101."

Instead, Iancu proposes a new inquiry into whether an exception to patentable subject matter "is integrated into a practical application"—if so, "the claim passes 101 and the eligibility analysis would conclude." It is unclear how the proposed guidance might define "practical application," but the phrase seems unfortunate: in Gottschalk v. Benson (1972), the Supreme Court held a method for converting binary-coded decimal numerals into pure binary to be patent ineligible, even though the method's only substantial "practical application" was in connection with a digital computer. And it is hard to imagine a definition of "practical application" that excludes all of the claims held to be patent ineligible by the Federal Circuit post-Alice, including the method for detecting cell-free fetal DNA in Ariosa v. Sequenom, the method of assessing CVD risk in Cleveland Clinic v. True Health, the method of screening email in IV v. Symantec, the method of real-time performance monitoring of an electric power grid in Electric Power Group v. Alstom, etc.

I am sympathetic to the difficult position Director Iancu is in, with stakeholders clamoring both for more consistency in how § 101 is applied and for changes to the substantive standard. But the agency must ensure that any revised guidance on patentable subject matter is consistent with the relevant judicial precedent. And despite the outcry from the patent bar, I have seen little evidence that the recent shift in patentable-subject-matter doctrine has in fact created a crisis for U.S. innovation. Perhaps the USPTO's first step should be to focus on this empirical question of how § 101 case law has affected R&D—and then legislative or judicial reform of the doctrine could be targeted at wherever there is an actual problem.


Tuesday, 18 September 2018

No Fair Use for Mu(sic)

It's an open secret that musicians will sometimes borrow portions of music or lyrics from prior works. But how much borrowing is too much? One would think that this is the province of fair use, but it turns out not to be the case - at least not in those cases that reach a decision.  Edward Lee (Chicago-Kent) has gathered up the music infringement cases and shown that fair use (other than parody) is almost never a defense - not just that defendants lose, but that they don't even raise it most of the time. His article Fair Use Avoidance in Music Cases is forthcoming in the Boston College Law Review, and a draft is available on SSRN. Here's the abstract:
This Article provides the first empirical study of fair use in cases involving musical works. The major finding of the study is surprising: despite the relatively high number of music cases decided under the 1976 Copyright Act, no decisions have recognized non-parody fair use of a musical work to create another musical work, except for a 2017 decision involving the copying of a narration that itself contained no music (and therefore might not even constitute a musical work). Thus far, no decision has held that copying musical notes or elements is fair use. Moreover, very few music cases have even considered fair use. This Article attempts to explain this fair use avoidance and to evaluate its costs and benefits. Whether the lack of a clear precedent recognizing music fair use has harmed the creation of music is inconclusive. A potential problem of “copyright clutter” may arise, however, from the buildup of copyrights to older, unutilized, and underutilized musical works. This copyright clutter may subject short combinations of notes contained in older songs to copyright assertions, particularly after the U.S. Supreme Court’s rejection of laches as a defense to copyright infringement. Such a prospect of copyright clutter makes the need for a clear fair use precedent for musical works more pressing.
The results here are pretty interesting, as I discuss below.

The first takeaway is that defendants almost always win. While there are a few high-profile defense losses (the ones we teach in school: My Sweet Lord, Love is a Wonderful Thing, Blurred Lines), most of the time the cases settle (Viva la Vida) or the defendant wins (Stairway to Heaven). So, you would think that in the losing cases, the jury must have ignored a clear-cut fair use defense...but no. Not one opinion in a case the plaintiff won even discusses fair use. Most defendants don't seem to raise it (Thicke and Williams did not in the Blurred Lines case, for example). Only a few of the defense wins mention it, and most of those either involved parody or did not rule on non-parody fair use. Only one case, involving an a cappella rap, rejected a non-parody fair use defense. Obviously, there may have been settlements or unchallenged/unappealed jury verdicts in other cases where fair use would have been a live issue, but that explanation seems unlikely given the inherently debatable nature of fair use.

Why? Why don't musicians raise fair use? Why don't courts rule on it? This is where the "avoidance" part of the title comes in. The article discusses a variety of theories for why litigants and courts may avoid fair use. Just as software companies declined to raise patentable subject matter for many years, musicians (and their labels) may prefer to pay a verdict rather than set a precedent that certain copying is fair use. This is just one suggestion - there are many others in the article, and it is well worth a read to explore them all in detail.

This is an interesting article, and I certainly learned something I didn't know before. Every "yeah, but probably..." skeptical thought I had was answered, and that's pretty rare. That said, my one critique is that the background section, which is supposed to discuss why fair use is the type of thing we should often see in music (see the history of borrowing, above), often conflates a variety of other defenses to copying in the same discussion. For example, the article points to the ubiquitous YouTube video showing how many songs are based on the same four chords. The use of those chords, though, isn't really a fair use; it's more a matter of scènes à faire or some other defense to copying. Those four chords, after all, lead to very different sounding songs, and where the songs do sound the same, the similarity can be traced to a common source, not to each other. An empirical study I would like to see is how many songs that fit the four-chord mold have been accused of and/or held liable for infringement. Perhaps Professor Lee's data has that, for reported decisions at least.

The reason this conflation is problematic leads back to the study results. Perhaps it should not be surprising that so many defendants win outright on non-copying defenses: there are so many ways to win without having to admit copying and rely on fair use. It may be that, despite a history of borrowing, musicians can tell the difference between illicit copying on the one hand and copying from a common source or genuine fair use on the other. After all, only about four cases per year, on average, went to decision.


Wednesday, 12 September 2018

Erie and Intellectual Property Law

When it comes to choice of law, U.S. federal courts hearing intellectual property claims generally do one of two things. They either construe and apply the federal IP statutes governing patents, copyrights, trademarks, and trade secrets (Titles 35, 17, 15, and 18, respectively), remaining as faithful to Congress' meaning as possible; or they construe and apply state law claims brought under supplemental (or diversity) jurisdiction, remaining as faithful as possible to the meaning of the relevant state statutes and state judicial decisions. In the former case, they apply federal law; in the latter case, they apply the law of the state in which they sit.

Simple, right? Or maybe not.

This Friday, the University of Akron School of Law is hosting a conference called Erie at Eighty: Choice of Law Across the Disciplines, exploring the implications of the Erie doctrine across a variety of fields, from civil procedure to constitutional law to evidence to remedies. I will be moderating a special panel: Erie in Intellectual Property Law. Joe Miller (Georgia) will present his paper, "Our IP Federalism: Thoughts on Erie at Eighty"; Sharon Sandeen (Mitchell-Hamline) will present her paper, "The Erie/Sears-Compco Squeeze: Erie's Effects on Unfair Competition and Trade Secret Law"; and Shubha Ghosh (Syracuse) will present his paper, "Jurisdiction Stripping and the Federal Circuit: A Path for Unlocking State Law Claims from Patent."

Other IP scholars in attendance include Brian Frye (Kentucky), whose paper The Ballad of Harry James Tompkins provides a riveting, surprising, and (I think) convincing re-telling of the Erie story, and Megan LaBelle (Catholic University of America), whose paper discusses the crucial issue of whether the Erie line of cases directs federal courts sitting in diversity to apply state privilege law. All papers will be published in the Akron Law Review.

If you have written a paper that touches on the Erie doctrine's implications for intellectual property, I would really appreciate it if you would send it to me: chrdy@uakron.edu or cahrdy@gmail.com. I will link to the papers in a subsequent post in order to provide a resource for future research. Thank you!


Tuesday, 11 September 2018

Bargaining Power and the Hypothetical Negotiation

As I detail in my Boston University Law Review article, (Un)Reasonable Royalties, one of the big problems with using the hypothetical negotiation for calculating damages (aside from the fact that it strains economic rationality and has no basis in the legal history of reasonable royalties) is differences in bargaining power. The more explicit problem is when litigants try to use their bargaining power to argue that the patent owner would have agreed to a lower hypothetical rate. More implicitly, bargaining power can affect royalty rates in pre-existing (that is, comparable) licenses. This gives rise to competing claims in top-14 law reviews about whether royalty damages are spiraling up or down based on the trend of comparable licensing terms.

For what it's worth, my article dodges the spiral question, but suggests that existing licenses only be used if they can be directly tied to the value of the patented technology (and thus settlements should never be used). Patent damages experts who have read my article uniformly hate that part of it, because preexisting licenses (including settlements) are sometimes their best or even only granular source of data.

But much of this is theory. What about the data?  Gaurav Kankanhalli (Cornell Management - finance) and Alan Kwan (U. Hong Kong) have posted An Empirical Analysis of Bargaining Power in Licensing Contract Terms to SSRN. Here is the abstract:
This paper studies a new, large sample of intellectual property licensing agreements, sourced from filings by public corporations, under the lens of a surplus-bargaining framework. This framework motivates several new empirical findings on the determinants of royalty rates. We find that licensors command premium royalty rates for exclusivity (particularly in competitive industries), and for exchange of know-how. Licensors with differentiated technology and high market power charge higher royalty rates, while larger-than-rival licensees pay lower rates. Finally, using this framework, we study how the nature of disclosure by public firms affects transaction value. Firms transact at lower royalty rates when they redact contracts, preserving pricing power for future negotiations. This suggests that practitioners modeling fair value in transfer pricing and litigation contexts based on publicly-known comparables are over-estimating royalties, potentially impacting substantial cumulative transaction value.
The paper uses licenses reported in SEC filings (more on that below), but one clever twist is that the authors obtained redacted terms via FOIA requests, so they could both expand their dataset and see what types of terms are missing. They model transactions as follows: every buyer has a maximum it is willing to pay, and every seller a minimum it is willing to accept. If those two overlap, the parties will agree to some price in the middle that splits the surplus. Where that price lands depends on bargaining power. The authors then hypothesize which characteristics will affect that price, and most of their hypotheses are borne out. A stylized sketch of this surplus-splitting logic appears below.
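
To make the framework concrete, here is a minimal sketch (my own illustration with made-up numbers, not the authors' estimation model) of how a bargaining-power parameter divides the surplus between a licensee's maximum willingness to pay and a licensor's minimum willingness to accept:

```python
# Illustrative only: a stylized surplus-split model with hypothetical
# numbers, not the model the authors estimate from their license data.

def negotiated_royalty(buyer_max, seller_min, seller_power):
    """Return the agreed royalty rate when a bargain is possible.

    buyer_max:    the most the licensee will pay (royalty rate, in %)
    seller_min:   the least the licensor will accept
    seller_power: the licensor's share of the surplus, between 0 and 1
    """
    if buyer_max < seller_min:
        raise ValueError("No zone of agreement: the parties walk away.")
    surplus = buyer_max - seller_min
    return seller_min + seller_power * surplus

# A licensor with a strong outside option captures most of the surplus:
print(negotiated_royalty(buyer_max=6.0, seller_min=2.0, seller_power=0.8))  # 5.2
# A licensor with no credible path to self-commercialization captures little:
print(negotiated_royalty(buyer_max=6.0, seller_min=2.0, seller_power=0.2))  # 2.8
```

On this view, everything the paper tests (exclusivity, know-how, financial condition, and so on) can be read as shifting either the endpoints of the bargaining range or the split parameter.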

They focus on several kinds of bargaining power characteristics: contract characteristics, firm-specific characteristics, technology characteristics, and license characteristics. I'm not sure I would call all of these bargaining power, as they do. I think some relate more to the value of the thing being licensed. Technically this will affect the division of surplus, but it's not really the type of bargaining power I think about. So long as the effect on license value is clear, however, the results are helpful for use in patent cases regardless of the label.

So, for example, universities, non-profits, and individuals receive lower rates because they have no credible BATNA (best alternative to a negotiated agreement) for self-commercialization. The authors argue that this sheds light on the conventional wisdom that individuals produce less valuable inventions. Further, firms in weaker financial condition do worse, and firms with more pricing power relative to their rivals do better.

On the other hand, licenses including know-how or exclusivity command higher royalties, while amendments typically lead to lower royalties (presumably due to underperformance). I don't consider this bargaining power, but rather added value. That said, the authors test exclusivity and find that highly competitive industries show higher royalty premiums for exclusivity than non-competitive industries, which implies a mix of both bargaining power and value in competition.

The authors do look at technological value and find, unsurprisingly, that substitutability leads to lower rates.

The paper points to one interesting wrinkle, though: territorial restrictions. Contracts with territorial restrictions have higher rates. You would think they would have lower rates because the license covers less. But the implication here is that a territorial restriction is imposed where the owner has the leverage to impose it, and that leverage means a higher rate. That could be due to value or bargaining power, I suppose. I wonder, though, how many expert reports say that a royalty rate should be higher because the comparable license covered only one territory. Any readers who want to chime in would be appreciated.

There is a definite selection effect here, though, which further implies that preexisting licenses gathered from SEC filings should be used carefully. First, the authors note a selection effect in the redactions. Not only are lower rates redacted, but the redactions are driven by non-exclusive licenses, because firms want to hide their lowest willingness-to-sell (reservation) price. This finding is as valuable as the rest, in my opinion. It means, as the authors note, that any reliance on reported licenses may over-weight royalty rates. It also means, in terms of my own views, that the hypothetical negotiation is not a useful way to calculate damages, because the value of the patent shouldn't change based on who is buying and selling. A second selection effect concerns what is missing from the data entirely: only material licenses must be reported, and non-material licenses are likely to be smaller, whether due to patent value or bargaining power.

This is a really interesting and useful paper, and worth a look.


Monday, 3 September 2018

Boundary Maintenance on Twitter

Last Saturday was cut-down day in the NFL, when rosters are shaved from 90 players down to 53. For the first time, I decided to follow the action for my team by spending time (too much time, really - the kids were monopolizing the TV with video games) watching a Twitter list solely dedicated to reporters and commentators discussing my team.

I've never used Twitter this way, but from an academic point of view I'm glad I did, because I witnessed first-hand the full microcosm of Twitter journalism. First, there were the reporters, who were all jockeying to be the first to report that someone was cut (and to confirm it with "sources"). Then there were the aggregators, sites with a lot of writers devoted to team analysis and discussion, but who on this day were simply tracking all of the cuts, trades, and other moves. Ironically, the aggregators were better sources of information than the reporters' own sites, because the reporters didn't publish a full list until later in the day, in articles they were too busy to write while they were out gathering facts.

Then there were the professional commentators - journalists and semi-professional social media types who have been doing this a long time or have some experience in the sport, but who were not gathering facts. They mostly commented on transactions. Both the reporters and commentators answered fan questions. And then...there were the fans, commenting on the transactions, commenting on the reporters, commenting on the commentators, etc. This is where it got interesting.

Apparently experienced commentators don't like it when fans tell them they're wrong. They like to make clear that either a) they have been doing this a long time, or b) they have a lot of experience in the league, and therefore their opinion should not be questioned. Indeed, in one case a commentator's statement seemed so ridiculous that the "new reporter" in town made fun of it, and all the other reporters circled the wagons to say that the new guy shouldn't be questioning the other men and women on the beat, all of whom had once held his job but left for better jobs. Youch! It turns out the statement was, in fact, both wrong and ridiculous (and proven so the next morning).

This type of boundary maintenance is not new, but this is the first time I've seen it so clearly, explicitly, and unrelentingly (there is some in legal academia, which I'll discuss below). This is a blog about scholarly works, so I point you to an interesting article called The Tension between Professional Control and Open Participation: Journalism and Its Boundaries, by Seth Lewis, now a professor in the communications department at the University of Oregon. The article is published in Information, Communication & Society. It is behind a paywall, so a prepublication draft is here. Here is the abstract:
Amid growing difficulties for professionals generally, media workers in particular are negotiating the increasingly contested boundary space between producers and users in the digital environment. This article, based on a review of the academic literature, explores that larger tension transforming the creative industries by extrapolating from the case of journalism – namely, the ongoing tension between professional control and open participation in the news process. Firstly, the sociology of professions, with its emphasis on boundary maintenance, is used to examine journalism as boundary work, profession, and ideology – each contributing to the formation of journalism's professional logic of control over content. Secondly, by considering the affordances and cultures of digital technologies, the article articulates open participation and its ideology. Thirdly, and against this backdrop of ideological incompatibility, a review of empirical literature finds that journalists have struggled to reconcile this key tension, caught in the professional impulse toward one-way publishing control even as media become a multi-way network. Yet, emerging research also suggests the possibility of a hybrid logic of adaptability and openness – an ethic of participation – emerging to resolve this tension going forward. The article concludes by pointing to innovations in analytical frameworks and research methods that may shed new light on the producer–user tension in journalism.
The article includes a fascinating literature review on the sociology of journalism, and focuses on what it means to be a journalist in a world where your readers participate with you.

Bringing it back to IP for a moment (and legal academia more generally), I certainly see some of this among bloggers and tweeters. I see very little of it as a producer of content, presumably because I am always right. 😀 But I know that as a consumer I bleed into the boundaries of others, both in legal academia and elsewhere. I can't help myself - my law school classmates surely remember me as a gunner.

Many of my producer colleagues (mostly women, surprise surprise) see it much worse. Practicing lawyers tell them they don't know what they are talking about. Some may be making valid points, some not. Some are nice about it, while others are not. I'm speaking mostly of good faith boundary issues here, not trolling or harassment, which is a different animal in my mind.

I guess the real question is what to do about it. If you are in an "open" area, boundaries will get pushed. Some people welcome this, and some despise it. Some are challenged more fairly than others. I suspect that people have different ways of managing their boundaries, depending heavily on who is commenting and how. Some may ignore it, some may swat back about relative expertise, some engage with everyone, and some disengage selectively or entirely, going so far as to block and mute. I suspect it's a mix.

In any event, I don't have any policy prescriptions here. I know so little about it that I have no clue what the right answer is. I just thought I would make explicit what is usually implicit, point out an interesting article about it, and suggest that readers be mindful of boundaries and Diff'rent Strokes - what might be right for you, may not be right for some.