Tuesday, 23 April 2019

How Does Patent Eligibility Affect Investment?

David Taylor (SMU) was interested in how the Supreme Court's patent eligibility decisions affected venture investment decisions, so he thought he would ask. He put together an ambitious survey of 14,000 investors at 3,000 firms and obtained some grant money to provide incentives. As a result, he got responses from 475 people at 422 firms. The response rate by individual is really low, but by firm it's 12% - not too bad. He performed some analysis of non-responders, and while there's a bit of an oversample of IT and of early-stage funding, the sample appears to be reasonably representative.

The result is a draft on SSRN and forthcoming in Cardozo L. Rev. called Patent Eligibility and Investment. Here is the abstract:
Have the Supreme Court’s recent patent eligibility cases changed the behavior of venture capital and private equity investment firms, and if so how? This Article provides empirical data about investors’ answers to those important questions. Analyzing responses to a survey of 475 investors at firms investing in various industries and at various stages of funding, this Article explores how the Court’s recent cases have influenced these firms’ decisions to invest in companies developing technology. The survey results reveal investors’ overwhelming belief that patent eligibility is an important consideration in investment decisionmaking, and that reduced patent eligibility makes it less likely their firms will invest in companies developing technology. According to investors, however, the impact differs between industries. For example, investors predominantly indicated no impact or only slightly decreased investments in the software and Internet industry, but somewhat or strongly decreased investments in the biotechnology, medical device, and pharmaceutical industries. The data and these findings (as well as others described in the Article) provide critical insight, enabling evidence-based evaluation of competing arguments in the ongoing debate about the need for congressional intervention in the law of patent eligibility. And, in particular, they indicate reform is most crucial to ensure continued robust investment in the development of life science technologies.
The survey has some interesting results. Most interesting to me was that fewer than 40% of respondents were aware of any of the key eligibility decisions, though they may have been vaguely aware of reduced ability to patent. More on this in a minute.

There are several findings on the importance of patents, and these are consistent with the rest of the literature - that patents are important for investment decisions, but not first on the list (or second or third). Further, the survey finds that firms would invest less in areas where there are fewer patents - but this is much more pronounced for biotech and pharma than it is for IT. This, too, seems to comport with anecdotal evidence.

But I've always been skeptical of surveys that ask what people would do - stated preferences are different from revealed preferences. The best way to measure revealed preferences would be through some sort of empirical look at the numbers, for example a difference-in-differences comparison of investment before and after these cases (though having 60% of respondents say they haven't heard of the cases would certainly affect whether they constitute a "shock" - a requirement of such a study).
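
Just to make the idea concrete, here is a minimal sketch of what such a revealed-preference test could look like, assuming one had yearly deal counts by sector. Everything below (variable names, years, counts, the 2014 cutoff) is made up for illustration and is not from the paper:

```python
# Hypothetical difference-in-differences sketch: compare deal counts in a sector
# arguably hit by the eligibility cases (biotech) against a control sector (IT),
# before and after the cases. All numbers here are invented.
import pandas as pd
import statsmodels.formula.api as smf

deals = pd.DataFrame({
    "year":    [2012, 2012, 2016, 2016, 2012, 2012, 2016, 2016],
    "sector":  ["biotech", "it"] * 4,
    "n_deals": [120, 200, 90, 230, 115, 210, 85, 240],
})
deals["treated"] = (deals["sector"] == "biotech").astype(int)  # sector exposed to the cases
deals["post"] = (deals["year"] >= 2014).astype(int)            # after the supposed "shock"

# The coefficient on treated:post is the difference-in-differences estimate.
model = smf.ols("n_deals ~ treated * post", data=deals).fit()
print(model.summary())
```

If the cases were not actually a salient shock to investors, that interaction term would be hard to interpret, which is the worry raised above.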

Another way, which this survey attempts, is to ask not what investors would do but what they have done. This yields the most interesting part of the survey - investors who know about the key court opinions say they have moved out of biotech and pharma, and into IT. So much for Alice destroying IT investment, as some claim (though we might still see a shift in the type of projects and/or the type of protection - such as trade secrets). But more interesting to me was that there was a similar shift among those folks who claimed not to know much about patent eligibility or to think it had anything to do with their investments. In other words, even the group that didn't actively blame the Supreme Court was shifting investments out of biotech and pharma and into IT.

You can, of course, come up with other explanations - perhaps biotech is just less valuable now for other reasons. But this survey is an important first step in teasing out those issues.

There are a lot more questions on the survey and some interesting answers. It's a relatively quick and useful read.




Thursday, 18 April 2019

Beebe and Fromer: Study on the Arbitrariness of 2(a) Immoral or Scandalous Refusals

For those who have not had the pleasure of seeing it, I recommend the fascinating and, honestly, fun, new study by Barton Beebe and Jeanne Fromer on the arbitrariness and unpredictability of the U.S. Patent & Trademark Office's refusals of trademarks that are deemed to be "immoral" or "scandalous."

The study, entitled Immoral or Scandalous Marks: An Empirical Analysis, has been posted on SSRN. This paper served as the basis for Professors Beebe and Fromer's amicus brief in Iancu v. Brunetti.

This study follows up on Megan Carpenter and Mary Garner's 2015 paper, published in the Cardozo Arts & Entertainment Law Journal, and Anne Gilson LaLonde and Jerome Gilson's 2011 article, Trademarks Laid Bare: Marks That May Be Scandalous or Immoral.

All of these studies come to similar conclusions: there are serious inconsistencies in trademark examiners' application of the Section 2(a) "immoral-or-scandalous" rejection. The Beebe/Fromer study is technically 161 pages long, but it's mostly exhibits, and it's very accessible – worth at least a read to see some of the examples they give, and to ogle the bizarre interplay between Section 2(a) "immoral-or-scandalous" refusals and Section 2(d) "likely to confuse with prior registered mark" refusals.

The issue in Brunetti is whether the Section 2(a) "scandalous-or-immoral" refusal is an unconstitutional restriction on free speech under the First Amendment.  The test for determining whether the mark is scandalous or immoral asks
"whether a substantial composite of the general public would find the mark scandalous, defined as shocking to the sense of truth, decency, or propriety; disgraceful; offensive; disreputable; ... giving offense to the conscience or moral feelings; ... or calling out for condemnation."
(6-7) (quoting In re Brunetti, 877 F.3d 1330, 1336 (Fed. Cir. 2017) (citations omitted)).

To assess how the USPTO examiners make this determination (what's "disgraceful," for example?), the Beebe/Fromer study takes advantage of a large amount of empirical data (3.6 million trademark registration applications), and creatively uses the interplay between Section 2(a) "immoral-or-scandalous" refusals and Section 2(d) "likely to confuse with prior registered mark" refusals, to emphasize just how unpredictable and capricious the examiners have been in determining what the general public might find "shocking to the sense of truth, decency," or what not.

In particular, the authors show that
the PTO routinely refuses registration of applied-for marks on the ground that they are immoral or scandalous under § 2(a) and confusingly similar with an already registered mark under § 2(d); in other words, the PTO routinely states that it cannot register a mark because the mark is immoral or scandalous and in any case because it has already allowed someone else to register the mark on similar goods. Furthermore, the PTO arbitrarily allows some applied-for marks to overcome an immoral-or-scandalous refusal while maintaining that refusal against other similar marks. ...
For example, the mark at issue in Brunetti is FUCT for apparel.  Here is what the authors say about this:
In 2009, the PTO refused to register the mark FUK!T in connection with apparel (Class 25) and the operation of an internet website (Class 42) on the bases that the applied-for mark was immoral or scandalous under § 2(a) and confusingly similar under § 2(d) to the recently-registered mark PHUKIT for apparel (Class 25). Similarly, on June 18, 2013, the PTO registered the mark PHUC for apparel (Class 25). Four days before, on June 14, 2013, the PTO sent out an office action refusing to register the mark P.H.U.C. CANCER (PLEASE HELP US CURE CANCER) in connection with apparel (Class 25) on the bases that the mark was immoral or scandalous and confusingly similar to the about-to-be-registered mark PHUC for apparel. At no time during its registration process did the earlier-filed mark PHUC for apparel receive any immoral-or-scandalous refusal....
My brilliant:) Akron Law Trademark Law 2019 students might call this a "Schrödinger’s cat argument" from the USPTO examiners. On the one hand, the mark FUCT is unregisterable because it's 2(a) "scandalous"; but on the other hand, FUCT is unregisterable because we already registered a mark just like it. Doh!

Here is another favorite example, which definitely left me scratching my head and pulling out my Spanish dictionary (the Urban Dictionary was more helpful):
In 2008, the PTO issued an immoral-or-scandalous refusal to an application for the mark CAJONES for dietary supplements (Class 5). It cited evidence from urbandictionary.com, among other sources, in support of the conclusion that: the proposed mark “CAJONES” means “TESTICLES” or “BALLS” and is thus scandalous because it is a commonly used vulgar slang term for a part of the male genitalia. 
... 
Yet in 2008 the PTO registered the mark CAJONES for party games (Class 28) without any immoral-or-scandalous objection, even though, with authorization from the applicant’s attorney, it amended the application record to include the following translation statement: “The foreign wording in the mark translates into English as drawers, and as a slang term for testicles.” Similarly, in 2005 the PTO issued no immoral- or-scandalous refusal to the mark CAJONES for beer (Class 32) and published the mark. In an office action, the PTO had asked the applicant for a translation of the mark, stating: “The following translation statement is suggested: ‘The English translation of CAJONES is drawers.’”
Beebe and Fromer assert in their SSRN paper that these sorts of inconsistencies show that the 2(a) "immoral-or-scandalous" prohibition is being applied in an arbitrary and viewpoint-discriminatory manner that violates the First Amendment. (They have a couple of theories for how this works under First Amendment doctrine; see pp. 27-32.)

These empirical studies by IP professors are likely to be influential on the outcome of the case. It seems clear the work has already been read by some of the Supreme Court Justices or their clerks. For instance, as Megan Carpenter, Professor and Dean of the University of New Hampshire School of Law, noted on SCOTUSblog, at oral arguments on Monday, April 15, Justice Gorsuch was
particularly troubled by inconsistencies in acceptances and rejections in the PTO’s application of this provision over time, and the resultant inability to give adequate notice to trademark owners. ... He added that he himself could not “see a rational line” through the refusals and registrations, and asked “is it a flip of the coin?”  
The full transcript reveals even more that suggests the Justices are reading the Beebe/Fromer amicus or other empirical studies. From Justice Gorsuch:
JUSTICE GORSUCH: But I can come up with several that are granted that ... have phonetics along the lines you've described and a couple that have been denied. And what's the rational line? How is a person -- a person who wants to get a mark supposed to tell what the PTO is going to do? Is it a flip of the coin? (p. 21)
From Justice Kavanaugh:
JUSTICE KAVANAUGH: How ...do you deal with the problem of erratic or inconsistent enforcement, which seems inevitable with a test of the kind you're articulating? (p. 16)
As someone who lacks a strong view on whether this provision of the Lanham Act should be struck down as unconstitutional, I am just enjoying hearing the examples...and seeing the Justices squirm a bit:
JUSTICE GORSUCH: I don't want to -- I don't want to go through the examples. I really don't want to do that.
  (Laughter.)
(p. 21)

Thursday, 11 April 2019

What was the "promise of the patent doctrine"?

What was the "promise of the patent doctrine"?  The short answer is: a controversial doctrine that originated in English law and that, until recently, was applied in Canadian patent law to invalidate patents that made a material false promise about the utility of the invention. A common example would be a claim to therapeutic efficacy in a specification that is not borne out.

Warning: the content of this doctrine may seem bizarre to those familiar with U.S. patent law.

I learned of the "promise of the patent doctrine" at PatCon9 from Norman Siebrasse, Professor of Law at the University of New Brunswick and founder of the Canadian patent blog Sufficient Description. Siebrasse provided an in-depth analysis of what he calls the "promise of the patent doctrine," or just the "Promise Doctrine," in a 2013 article in a Canadian law journal. He discussed it further in several posts on Sufficient Description, leading up to the Supreme Court of Canada's decision to abolish the doctrine in AstraZeneca Canada Inc. v. Apotex Inc. (2017).

According to Professor Siebrasse, pre-abolishment, Canadian utility doctrine effectively had "two branches": (1) the "traditional utility requirement," which is similar to U.S. law's, and requires merely a "scintilla" of utility; and (2) "the Promise Doctrine."

The basic idea of the Promise Doctrine was that
"where the specification does not promise a specific result, no particular level of utility is required; a “mere scintilla” of utility will suffice. However, where the specification sets out an explicit “promise”, utility will be measured against that promise." (quoting Lilly v Novopharm / Olanzapine) 
Starting around 2005, until the Supreme Court of Canada's decision in AstraZeneca, Canadian courts applied the Promise Doctrine in the pharmaceutical context to invalidate patents. The "promise," Siebrasse explained, could be found "anywhere in the specification[.]" If there were  multiple “promises," the patent had to satisfy all of them, or the entire patent would be invalidated.

Here is an example from Siebrasse's article (35-36). In a Canadian case circa 2009, a judge construed a patent as making a "promise" of a certain utility based on the following statements within the patent specification:
The compounds of this invention have useful pharmacological properties. They are useful in the treatment of high blood pressure. The compounds of the present invention can be combined with pharmaceutical carriers and administered in a variety of well-known pharmaceutical forms suitable for oral or parental administration to provide compositions useful in the treatment of cardiovascular disorders and particularly mammalian hypertension. (35) (citing Sanofi v Apotex/ramipril).    
The patent would be invalidated under the Promise Doctrine if the promise of utility turned out to be false—or if the court deemed the promise of utility to be premature and unfounded at the time of filing. This is an important caveat because, Professor Siebrasse explains, in essentially all the Canadian cases invalidating a patent on the basis of the Promise Doctrine, the promise was in fact true. It's just that the heightened promise of utility was speculative at the time of filing; it was only proven to be true later, when validity was challenged in court. So it's not just that the applicant makes a "false" promise; it's that the applicant makes a promise on which s/he may not be able to deliver.

Professor Siebrasse was not happy about courts' use of the Promise Doctrine to invalidate patents.  His view seems to have won out in AstraZeneca, where the Court's language, at least to me, suggests the Doctrine is unambiguously dead:
"...the Promise Doctrine is not the correct method of determining whether the utility requirement under s. 2  of the Patent Act  is met. Given the correct approach, as set out below, the drug for which the ‘653 patent was granted is useful as a PPI; thus, it is an “invention” under s. 2 of the Act. The ‘653 patent is therefore not invalid for want of utility."          
Siebrasse gleefully keeps watch on the Promise Doctrine's fate in posts with titles like "Whack the Zombies Dead Once and for All",  where he discusses unsuccessful attempts by generic drug companies to revive the doctrine.

What I think is really interesting here is the history of the Promise Doctrine. According to Professor Siebrasse, the Promise Doctrine evolved in English law. To paraphrase Siebrasse, in English law "the grant of a patent was an exercise of the royal prerogative, and as such wholly within the discretion of the Crown." Patents thus could be retracted for many reasons, including a false promise of utility, as "measured by the representations made in the patent."  (5-6). This was codified in the English Patent Act until 1977.  (7) ("[T]he English false promise doctrine was codified by a statutory provision that a patent would be void if obtained on the basis of a 'false suggestion.' ”).  

There was an important difference. In the older English cases from which the Canadian Promise Doctrine originated, the elevated promise of utility was actually false or at least misleading. For example, in the 1919 English case, Hatmaker v Joseph Nathan & Co Ltd., the invention claimed a process for producing dried milk. The specification stated that the process would produce milk solids “in a dry but otherwise unaltered condition” and that the reconstituted milk was “of excellent quality.” But it turned out the dried milk was not actually as good as real milk. (8) (citing case). This is why the older English cases referred to a "false" promise. But in the modern Canadian practice, the promise was typically not actually false in hindsight.

Our shared origins in English law mean the Promise Doctrine is a path U.S. law could have taken too. It is interesting to ask, then: how might a "Promise Doctrine" evolve in U.S. law today? There are a few analogues.

(1) Utility

First, obviously, is Section 101's requirement that a patent be "useful," i.e. the utility requirement. But as is well known, U.S. patent law has an intentionally lax utility requirement. As the Federal Circuit has put it, "[t]he threshold of utility is not high: An invention is "useful" under section 101 if it is capable of providing some identifiable benefit." So making a therapeutic claim like "these compounds are useful in treatment of high blood pressure" would not ordinarily raise red flags unless it is verifiably false or completely incredible, of the "cold fusion" variety.

(2) Duty of Candor/Inequitable Conduct

Second, there is a general prohibition on lying to the Patent Office (e.g., the duty of candor and inequitable conduct). If statements about utility are in fact false, and this false claim is "material" in the "but for" sense that the examiner would not have granted the patent unless it believed the assertions, this could potentially make the patent vulnerable to invalidation for inequitable conduct. But a merely premature promise of utility would not be false. And moreover, even a false statement of higher-than-actual utility would not be "material" in most cases, given the currently lax utility standard.

(3) Enablement

Third, closely related to utility is Section 112's "enablement" requirement. Enablement asks: could a person of "ordinary skill in the art" make the invention work in the stated way? But this does not necessarily require assessing the veracity of therapeutic claims. So long as a PHOSITA can practice the invention as claimed without "undue experimentation," it would not strictly matter whether the invention's therapeutic benefits pan out. For example, it would not matter whether the patients that are treated with a claimed drug live or die. That would be the FDA's concern, not patent courts' and examiners'.

That said, cases like Brenner v. Manson have shown how U.S. law's utility requirement might be beefed up to weed out patents that are filed well before claims of efficacy have been verified. For instance, filing a patent that makes "promises" of therapeutic efficacy when testing has not even been performed in mice might be seen as a premature assertion of utility that warrants invalidation. See Brenner v. Manson, 383 U.S. 519, 534-35 (1966) ("The basic quid pro quo contemplated by the Constitution and the Congress for granting a patent monopoly is the benefit derived by the public from an invention with substantial utility. Unless and until a process is refined and developed to this point—where specific benefit exists in currently available form—there is insufficient justification for permitting an applicant to engross what may prove to be a broad field.").

But Professor Siebrasse is quick to point out that Brenner's notion of "substantial utility" corresponds to the "scintilla" branch of Canadian utility law, mentioned above, which he says already requires assessing whether the specific asserted benefit of the invention has been developed to the point where it is currently available. The Promise Doctrine, in contrast, is a separate standard that seeks out "promises" and then imposes a higher standard on them.  

***  

I suspect there is more here to uncover on the history of the so-called "promise of the patent" doctrine. I am curious as to why it was not discussed in the Oil States debates, which centered on the conditions under which patents can be retracted. I didn't catch it mentioned in the amicus briefs that I read. I checked Professor Oren Bracha's thesis (now book) on U.S. patent law history. He does mention some aspects of this issue in his discussion of "working clauses." For example, Bracha states that "working clauses," which required grant holders to practice the invention to which they sought rights,
were a clear manifestation of the two main characteristics of English patents. They expressed the understanding of patents as royal discretionary policy tools, by creating mechanisms for insuring the 'execution' of the specific consideration promised by the patentee as the basis of the patent deal. They reflected the dominant notion of the subject matter of patents as new industries or trades, by focusing on actual putting into practice rather than on mere disclosure of information.   
(Bracha, 20).  But I didn't see anything about invalidations based on a "false promise of the patent" doctrine.

Tuesday, 9 April 2019

Making Sense of Unequal Returns to Copyright

Typically, describing an article as polarizing refers to two different groups having very different views of an article. But I read an article this week that had a polarizing effect within myself. Indeed, it took me so long to get my thoughts together, I couldn't even get a post up last week. That article is Glynn Lunney's draft Copyright's L Curve Problem, which is now on SSRN. The article is a study of user distribution on the video game platform Steam, and the results are really interesting.

The part that has me torn is the takeaway. I agree with Prof. Lunney's view that copyright need not be extended, and that current protection (especially duration) is overkill for what is needed in the industry. I disagree with his view that you could probably dial back copyright protection all the way with little welfare loss. And I'm scratching my head over whether the data in his paper actually supports one argument or the other. Here's the abstract:
No one ever argues for copyright on the grounds that superstar artists and authors need more money, but what if that is all, or mostly all, that copyright does? This article presents newly available data on the distribution of players across the PC videogame market. This data reveals an L-shaped distribution of demand. A relative handful of games are extremely popular. The vast majority are not. In the face of an L curve, copyright overpays superstars, but does very little for the average author and for works at the margins of profitability. This makes copyright difficult to justify on either efficiency or fairness grounds. To remedy this, I propose two approaches. First, we should incorporate cost recoupment into the fourth fair use factor. Once a work has recouped its costs, any further use, whether for follow-on creativity or mere duplication, would be fair and non-infringing. Through such an interpretation of fair use, copyright would ensure every socially valuable work a reasonable opportunity to recoup its costs without lavishing socially costly excess incentives on the most popular. Second and alternatively, Congress can make copyright short, narrow, and relatively ineffective at preventing unauthorized copying. If we refuse to use fair use or other doctrines to tailor copyright’s protection on a work-by-work basis and insist that copyright provide generally uniform protection, then efficiency and fairness both require that that uniform protection be far shorter, much narrower, and generally less effective than it presently is.
The paper is really an extension of Prof. Lunney's book, Copyright's Excess, which is a good read even if you disagree with it. As Chris Sprigman's JOTWELL review noted, you either buy in to his methodology or you don't. I discuss below why I'm a bit troubled.

Lunney exploits a brief data delivery by Steam that allowed simple algebra to calculate the number of users for each game. There are a few problems with the calculations (assumptions, potential errors, etc), but not so many that I'm going to spend time quibbling with them. So, let's start with the L-Curve, which forms the basis for the article. Here is the graph of user count:


It looks like an L, alright, but the scale bothered me. So, I looked at the 1000th most popular game, which appears to be about 0 on this chart. But the game (called Cities in Motion) had, at the time of the dump (about June 2018), 237,970 users. At the 70% take-home rate on $19.99, that's revenues of $3.3 million. And that doesn't count the 9 add-ons that range from $2 to $5 each. Nor does it include the follow-up, Cities in Motion 2, which sits in position 626 with 451,407 users ($6.3 million in revenues plus another several add-ons). Cities in Motion 2 also looks to be nearly zero on this curve.
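
Just as a back-of-the-envelope check on those figures (assuming full-price sales and Steam's roughly 70% developer share, which is my assumption about how the numbers above were computed):

```python
# Rough revenue check for the two Cities in Motion titles discussed above,
# assuming every user paid the full $19.99 and the developer kept ~70%.
take_home = 0.70

for title, users, price in [
    ("Cities in Motion",   237_970, 19.99),
    ("Cities in Motion 2", 451_407, 19.99),
]:
    revenue = users * price * take_home
    print(f"{title}: ~${revenue / 1e6:.1f}M")   # prints ~$3.3M and ~$6.3M
```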

Indeed, the scale is so off, it reminded me of this global warming chart from the National Review:

So, I thought I would be clever and cut off the top outliers to show a more linear progression. But all I got was another L-shaped graph, just as Prof. Lunney did. Indeed, no matter where I cut, it was always L-shaped. The reason for this is that the growth in user counts (and thus revenues) across the ranks is exponential. Lunney notes as much as well, just to be clear. Now, people misuse that word, but it actually applies here. The number of users at each rank is some exponential power higher than the one before it. So I did what people sometimes do with an exponential curve that's difficult to graph - I took a log of it.

Here's the chart of the logged number of players:
 

The relatively straight line shows a fairly constant exponential growth, though there is a large dropoff at the bottom, and some big outliers at the top.
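
For anyone who wants to see the transformation, here is a minimal sketch using synthetic counts (an exponential fall-off standing in for the actual Steam numbers, which I do not have):

```python
# Rank-ordered user counts that decay exponentially look like an "L" on a
# linear scale; plotting the log of the counts turns that decay into a
# roughly straight line against rank.
import numpy as np
import matplotlib.pyplot as plt

rank = np.arange(1, 10_001)
users = 5e7 * np.exp(-0.001 * rank)   # hypothetical decay, not the real data

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(rank, users)
ax1.set_title("Linear scale: the 'L'")
ax2.plot(rank, np.log10(users))
ax2.set_title("Log of user count: roughly straight")
plt.show()
```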

What to make of all this? It is here where we diverge a bit. Prof. Lunney's basic position is that we don't need super-strong copyright to protect the folks at the very top. They would have made those games for a lot less. Therefore, copyright must, if it is to exist, be there for the large middle. And the problem with the L-Curve is that the large middle isn't making any money.

There are two ways to approach his concerns. The first is the theory, and the second is the empirics.

The primary theoretical answer is that the prospect of outlier returns creates incentives for the large middle of developers, each hoping to become one of the outliers. Prof. Lunney calls this the lottery effect, and he poo-poos it as not terribly valid. Let's just say I disagree with him, but I don't want this post to be about that. I frame this question as an expected value question, which means that in a repeated and uncertain game, one must have supracompetitive returns to offset all the losses when things flop. I mathematically illustrated this in an article I wrote nearly 20 years ago, which demonstrated that cutting off copyright protection once some level of profit was reached yielded a lower expected value than the alternative. Ironically, this proposal is exactly Professor Lunney's here today, and I disagree with it as much now as I did then. If you claim that the middle isn't making money, then by cutting off the outliers you're just making your average creator earn even less money, which will push incentives downward.
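
The expected-value point is easy to see numerically: when returns are uncertain, capping the winners at cost recoupment drags down the expected return of entering at all. A toy Monte Carlo, with a payoff distribution invented purely for illustration:

```python
# With uncertain returns, most projects flop and a few pay off hugely. Capping
# the winners at cost recoupment lowers the expected return of creating anything.
import numpy as np

rng = np.random.default_rng(0)
n_projects = 100_000
cost = 1.0

# Hypothetical payoffs: 90% of projects earn ~0.3x cost, 10% earn ~10x cost.
payoffs = np.where(rng.random(n_projects) < 0.9, 0.3 * cost, 10.0 * cost)

uncapped = payoffs - cost                   # ordinary returns
capped = np.minimum(payoffs, cost) - cost   # returns if recoupment caps the winners

print(f"expected return, uncapped: {uncapped.mean():+.2f}")  # roughly +0.27 per project
print(f"expected return, capped:   {capped.mean():+.2f}")    # roughly -0.63 per project
```

With these made-up numbers, entry is worthwhile on average only because the rare winners overshoot their costs; cap the winners and the expected return of entering goes negative.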

Now, this isn't to say that Prof. Lunney doesn't have a point. As noted above, I agree that some supra-competitive rents may be too much. Where we differ in large part is our views of the uncertainty involved and the motives for participating.

But I'll put that aside, and focus on the second question - even without the lottery effect, does the data support a theory that copyright is providing nothing to the vast middle?

It's hard to get a sense from the data, so I thought I would look at the game sitting at every tenth percentile (deciles); a rough sketch of how that sampling might be done appears after the list:
1. Team Fortress 2, 50,191,347 users, released Oct 2007, free to play (since 2011), first person shooter, formerly $20, developer: 41 games
2. Trick and Treat - Visual Novel, 164,544 users, released Dec. 13, 2016, free to play (visual novel), developer: 3 visual novels
3. Hunahpu: way of the Warrior, 48,807 users, released April 10, 2017, $3.99, very simple graphic landscape game (like Mario Bros. or Defender), developer: 8 games
4. Caravan, 19,612 users, released Sep. 30, 2016, $9.99, low-graphic RPG, developer: only game, publisher: 50 games
5. The Fidelio Incident, 8,547 users, released May 23, 2017, $9.99, first-person adventure, developer: only game
6. Rubek, 4,163 users, released Oct. 14, 2016 , $2.99, very simple graphic strategy game, developer: 2 games
7. Soko Match, 1,904 users, released Sep. 16, 2016, $.99, extremely simple graphic strategy game, developer: 3 games
8. Q-YO Blaster, 828 users, released Jan. 15, 2018, $3.99, pixel graphic landscape game, developer: 1 game
9. EquiMagic - Galashow of Horses, 343 users, released Dec. 19, 2017, $9.99, simple graphic horseshow simulation (trotting horses), developer: 15 games
10. Over My Dead Body (For You), 119 users, released Sept. 11, 2017, $9.99, very simple graphic strategy game, developer: 1 game
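
Here is that rough sketch of the decile sampling; the DataFrame below is a synthetic stand-in for the real data dump, and the column names are my own:

```python
# Rank games by user count and pull the title at every tenth percentile of the
# ranked list. Synthetic heavy-tailed data stands in for the Steam dump.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
games = pd.DataFrame({
    "title": [f"game_{i}" for i in range(20_000)],               # hypothetical titles
    "users": rng.pareto(a=0.6, size=20_000).astype(int) + 100,   # heavy-tailed counts
})

ranked = games.sort_values("users", ascending=False).reset_index(drop=True)
for i in range(10):
    row = ranked.iloc[int(i / 10 * (len(ranked) - 1))]
    print(f"{i + 1}. {row['title']}: {row['users']:,} users")
```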

Doing this exercise was interesting, and revealed a few patterns to be explored. First, price seems to matter, but Prof. Lunney's data does not take that into consideration. Free games reign supreme, but cheap games do not. This implies that a) there's a quality tradeoff, b) that in-game revenues are not being counted, and c) that perhaps some low-user games make money because they are cheap to develop.

Also, many firms appear to be repeat players. Some of the follow-ons have more users, and some have fewer. A full study of repeat play and incentives to create better games would be interesting.

Another takeaway is that age seems to matter. A lot. I did a simple regression on the user count (rather, the log of user count) against the steamid (which is smaller for older games), and that single variable explains 40% of the variation in user counts. Older games have more users. Prof. Lunney might consider that for future analysis. That said, there's still remarkable inequality even in what remains after accounting for age - so the question remains whether the vast middle must be linear in order for copyright to make sense.
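
A sketch of that regression on synthetic data (the app ids, slope, and noise below are invented; the only target is an R-squared in the same ballpark as the roughly 40% reported above):

```python
# Regress log user count on the steamid (lower ids correspond to older games).
# Synthetic data: older games get systematically more users, plus noise.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5_000
steam_id = rng.integers(10, 900_000, size=n)                    # hypothetical app ids
log_users = 14 - 8e-6 * steam_id + rng.normal(0, 2.5, size=n)   # smaller id -> more users

X = sm.add_constant(steam_id.astype(float))
fit = sm.OLS(log_users, X).fit()
print(f"R-squared: {fit.rsquared:.2f}")   # lands near 0.4 with these made-up parameters
```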

I'm not so sure. But before I consider some of Prof. Lunney's analysis, I should note that in large part I agree with many of the things he's arguing. For example, the existence of strong copyright won't force the unwilling to pay - they will pirate instead. On the flip side, content management systems, like Steam's, largely eliminate the need for copyright as protection against pure piracy, as they limit copying even in the absence of law. Copyright's added value on platforms like this is much more in avoiding knock-offs, as PUBG (No. 3 on the list) alleged when Fortnite came out with a similar battle royale system.

Now, on to some specifics:
Does the "L-Curve" mean that the average producer can't make a profit? No. I contacted a programmer right around the median, with a $3.99 game. He told me that he worked on it minutes at a time, off and on, for a couple years. He confirmed that the numbers sounded right (he actually had more sales associated with bundles, but the people didn't play), and that he made a very small profit. This was a side business, and the world got a program it wouldn't otherwise have. But he also didn't believe that copy protection helped him make that money.

That's the median creator, still making a profit. How far up or down the line do we go? Prof. Lunney says: "As soon as two games are produced without copyright or with extremely narrow copyright, the welfare losses associated with the excess incentives for these two games likely outweigh the welfare gains from enacting or expanding copyright to ensure the expected profitability of the third game." I think it may be this statement that gives me the most heartburn.

There are a few reasons I'm troubled by this. First, this seems to assume that all consumer welfare can come from owning a couple of games - that there's simply nothing to be gained by having more games (that somehow isn't stifled by limiting copying, at least). But the data belies that. These user counts are not individual - if you add them all up, they come to 1.7 billion or so. And Steam had about 150 million users at that time, which means that every user played 10 games on average. So if we stop at 2 games we have a shortage - players willing to pay for games that may or may not be created.

Second, we don't know what the quality of games would look like. If firms are limited to self help protections, it may be that their games will be of lesser quality, taking less investment, and otherwise not fulfilling demand. Or maybe they won't. We certainly can't know this from the data here.

Third, as noted above, Prof. Lunney believes that there is a lot less uncertainty than there is. At one point, he notes: "But whatever the reason some profit motivated production will occur even without copyright, and under our assumptions, the most popular videogames will be produced first." While it's true that the older games have more users, they are more popular because they are older, not because they were created first. I am highly skeptical that producers know in advance which games will be highly played. Many flop. This ties to my point above about expected value - where one does not know whether a game will be successful, one cannot assume that the first one out will be the best one.

Fourth, Prof. Lunney reaches his conclusions by making assumptions about the welfare loss associated with copyright at the top as compared to the welfare gain through added incentives at the median. Those assumptions may work for patents (I don't know enough to know). They might even work for songs, to the extent that songs are limiting new creation. But I seriously question them in video games for several reasons. First, in a world where content management systems control much, it is unclear what copyright is restricting at the top end. Some copying of actual characters, I suppose, but is it really dollar for dollar? Second, in a world where much gaming is based on underlying engines, such that gameplay is already half-handled, what limitations are there? Again, it seems to be specific artwork rather than free reuse of games (which are already free at the top end anyway). Is the restriction on artwork so great that it's causing loss of welfare? I don't know, but I doubt it. The reason people spend money on free-to-play Fortnite is for the original skins that cost money (as stupid as I think that is). If the Fortnite skins look like everyone else's then why bother? Note, of course, that the copyright incentive at the low end may also be too low. It's the upper middle range, where there's a real investment (which is less than half the games, apparently), that matters to me, not the median.

Fifth, the analysis relies on three assumptions that Prof. Lunney lays out: "For this to be the case, we need: (i) revenue to be correlated with demand, so that a more popular game earns more than a less popular game; (ii) for each game to have a constant cost, and thus higher demand games are more profitable per unit cost; and (iii) expected demand cannot be completely uncertain ex ante." The article admits that item (ii) is difficult for videogames, and my analysis of the ten games above shows that. Item (i) is also difficult to show in practice; the pricing varies so much that some games with 4000 users make $40,000 and some games with 4000 users make $12,000. Item (iii) probably holds - expected demand is not completely uncertain, especially given quality of investment, but it's probably a lot more uncertain than Prof. Lunney gives credit for. In any event, the assumptions of his analysis don't hold up on their own terms.

I suppose the real issue for me comes down to this passage in the article (in which Prof. Lunney suggests that revenues be capped at costs, a point that I disagreed with above):
Alternatively, some might insist that it is neither fair nor efficient that Sheeran should earn the same for Shape of You [the highest-earning song] as someone earns for a marginal song to which hardly anyone listens. But it is entirely fair and efficient. In a competitive market economy, a heart surgeon who saves your life earns the same market reward as a doctor who gives you a vaccine. Neither earns the value of their work, in the sense of the maximum reservation price a patient could be forced to pay to avoid dying. Rather, both earn the cost of the service they provided. To the extent the market prices for the surgery and the vaccine differ, that price difference should reflect an underlying difference in cost. In a competitive market economy, it is not value, but cost that dictates what you earn. If copyright intends to create a market that mimics a competitive market, it should strive to do the same. As a result, if Shape of You cost the same as a marginal song to author and distribute, then that cost is all the market return that fairness and efficiency require each to earn.
I'm troubled by this analysis and analogy in a couple of ways. Primarily, it mixes the apples of price per unit and the oranges of total demand. Sheeran and the noname song both earn the same cost: Spotify pays them exactly the same per play. Sheeran makes more money not because the price of his song is higher, but because more people want it. There are no rents in his individual demand/supply curve; indeed, if he knew the popularity, he probably would have charged more (see Taylor Swift refusing to go on Spotify). Now, Lunney is saying that if too many people want the song, then it's not efficient because there are others who could copy it for free and that's a welfare loss because the gain in incentives to create music is outweighed by the joy we would all get if we could just listen to the music for free once Sheeran got enough to make the music in the first place. But if that's the argument he wants to make, he should own it, rather than claiming that somehow Sheeran is selling something at other than the cost of making it. Because if you take this argument to the extreme, it means that once the heart surgeon has exceeded the cost of running the practice by June (it was an unexpectedly cholesterol-filled year), the incentives for people to become heart surgeons are outweighed by the value we'd all get if we just forced the surgeon to operate for free after July 1. Maybe it's ok for copyright because of "promote the progress" in the constitution and all, but it's a pretty unsettling way to look at the world, in my view.

I'm also troubled because this treats music (and other copyrighted works) as fungible goods, as you would in a market. Sheeran and noname - either one is the same, so if they are priced similarly then they should make similar profits. But that's not how even efficient markets work. In an efficient market, a better product has a larger demand, and thus garners more revenue because more people will buy it. Even with a flat marginal cost (and it's not actually flat, even with music), if the demand curve shifts out, more people will buy the product at the same price. And so cutting off the revenues in fact distorts the market. It's a wealth transfer. It may be justified in the name of welfare maximization (though I'm not convinced), but again, Prof. Lunney should own that. He proposes taking an otherwise efficiently behaving market, in which everyone sells similar but slightly differentiated goods for the same price, and putting a thumb on the scale to cap demand at the competitive price before it is sated, so that consumers can have all of the welfare associated with not having to pay for a product they prefer to another product they could just as easily buy for the exact same cost but do not prefer. This is not an efficient market proposal, in my view. (On a side note, it's not even clear that Spotify is the way to measure this, as customers pay a fixed price, and can consume as much of the product as they want, detaching demand analysis from pricing).

To recap this very long rant post, I agree with Prof. Lunney that we likely don't need more copyright protection to get more video games, or music, or books, or whatever. Indeed, we likely would be fine with a lot less of it. But I just cannot get to that conclusion from the data presented here, except to say that a lot of people seem to make video games on Steam without the expectation of huge profits.
