Divulged and Then Forgotten


You remember the Katharine Gun story, right? The British “Ed Snowden” who leaked a damning National Security Agency email that urged wiretaps and extortion in order to influence the UN vote in favor of invading Iraq, back in 2003? And you remember the British “Neil Sheehan,” Martin Bright, who got hold of the document and published it on the front page of the Observer in early March of that year? Surely Gun went to prison and Bright won a Pulitzer, right? Together they prevented the war in Iraq? No? You don’t remember?

Well, that’s because only the first half of the above scenario actually happened. Gun did leak the document, and the Observer did run Bright’s story on its front page, on March 2, 2003. All hell should have broken loose, and support for the war, already shaky in some quarters, should have ended. Nevertheless, three weeks later George Bush began the Shock and Awe bombing of Iraq. The article, though itself shocking and awful, had little effect, for reasons that are made clear in the movie Official Secrets (and would spoil the experience for you if I revealed them here).

Full disclosure: I wasn’t entirely against the war when it started. I was living in New York when the Towers were hit. I listened as emergency vehicles screamed their way down Broadway that day. Comforted my daughter when she woke up with nightmares that week. Had nightmares myself when the Metro North trains chugging by at night entered my dreams as thundering military planes. Later, I feared the weapons of mass destruction whose existence Colin Powell confirmed to the UN in calm, measured, insistent tones. Yes, as much as I hate war, I was manipulated by the hype, the news, and my fears. And by that gaping hole in downtown Manhattan, that looked like an abscessed cavity among the skyscrapers as I flew into LaGuardia a month after the attack. But mostly by those weapons of mass destruction.

All hell should have broken loose, and support for the war, already shaky in some quarters, should have ended.

I say this as a reminder that public opinion mattered immensely in the runup to the war. Bush did not want to be seen as the aggressor but as the moral defender. Therefore, he needed the support and approval of the media, the world at large, and the UN in particular. The leaked document could have influenced all three. Indeed, the editorial board of the Observer had supported the war, until its members were convinced that the document was real and they decided to publish the article.

Official Secrets tells this story skillfully, suspensefully, and with reasonable accuracy; Gun was a consultant on the film and spent many hours with director and writer Gavin Hood to help him understand the motivation for what she did, and her experience after she was caught. But as in all films, the story is streamlined and enhanced for dramatic effect. In particular, Gun is married to a Turkish Muslim, Yasar Gun (Adam Bakri), which casts some underexplored suspicion on her motivation. Moreover, whenever a film is based on a true story, the filmmaker has to package it for presentation in a two-hour block with a rising conflict and satisfactory resolution. That requires streamlining events and enhancing or creating certain characters to make it work. But Official Secrets feels like an honest presentation, whether or not it is entirely factual.

Two character lines drive the film: that of the whistleblower Kat Gun (Keira Knightley) — do I dare say she pulls the trigger on the NSA? — and that of the reporters Martin Bright (Matt Smith) and Peter Beaumont (Matthew Goode), who investigate and write the story. All face the same dilemma: how to reveal confidential information without landing in jail.

Yes, as much as I hate war, I was manipulated by the hype, the news, and my fears.

As a low-level translator for the British Government Communications Headquarters, Gun is basically hired to spy, eavesdropping on private conversations and alerting her supervisor if something seems “suspicious.” She is bound by the Official Secrets Act of 1989 not to reveal or even talk about anything she sees or hears or experiences at work. (This becomes particularly onerous when she tries to communicate with her lawyer.) But when reminded that she works for the British government, she counters, “I work for the British people.” Good stuff.

Bright and Beaumont are mostly concerned with authenticity: Is the document real? Can they confirm its source without revealing their own sources? Their efforts to verify create a more suspenseful and compelling storyline than Gun’s relationship with her husband and her fears about their personal risks. To me it’s the best part of the film. Ralph Fiennes as her principled attorney also provides some fine libertarian talking points.

When reminded that she works for the British government, Gun counters, “I work for the British people.”

Both of these topics — spying and ethical journalism — are highly relevant today. My inbox is full of speech-chilling articles about Apple and Amazon using Siri and Alexa to listen in on private conversations, and even more chilling articles about the draconian “social credit” system arising in China. And as I write this review, the New York Times is trying to justify its decision to run a front-page story dredging up a sexual abuse allegation against Justice Brett Kavanaugh and “accidentally” leaving out a sentence indicating that the presumed victim says she doesn’t remember the incident. In a particularly dramatic scene, a similar “accident” happens in Official Secrets.

Moreover, the promised “quick” war expanded across Iraq, Afghanistan, and other countries in the Middle East, dragging on for 18 years with no end in sight. Over 5,000 US troops have been killed and tens of thousands have been injured. And American goodwill around the world is at an all-time low.

We really showed them, didn’t we?


Editor's Note: Review of "Official Secrets," directed by Gavin Hood. Entertainment One, 2019, 112 minutes.





Software Patents and Software Copyrights


Some readers will be surprised to learn that as of 2019, despite the rise of Facebook, Google, and Amazon, and despite decades of the explosive growth of Microsoft and Apple, the Supreme Court of the United States has declined to decide whether software is patentable, instead deciding every software patent case on narrower grounds. It is settled and statutory that software is copyrightable, but the test for copyright infringement and the extent of copyright protection remain unsettled.

Here I will explore a Law and Economics approach to the questions of whether and to what extent software can be protected by patent or copyright.

One might ask why it would even be questioned whether software can be patented. After all, patent law protects technology, and software is technology. However, patent law took shape in the 18th and 19th centuries and solidified in the 20th, and its paradigm, the setting in which its legal doctrines make sense, is a physical device with a specific structure, its physical elements arranged a certain way. Patent law protects structures and elements, not functions or features. It has also always held that abstract ideas are not protected. Every invention it dealt with during its formative era was a physical invention; the cotton gin is a good example of an early achievement of patent law. Lastly, patent law evolved for investigations in physics, biology, and mechanical engineering, in which scholarly research led to lab experiments that yielded inventions — all of which was very expensive and hard to duplicate, although productive of discoveries that greatly benefited society.

The Supreme Court of the United States has declined to decide whether software is patentable, instead deciding every software patent case on narrower grounds.

Software does not fit this paradigm. Software is an abstract idea with no physical existence essential to its operation. The same software can usually run on any hardware, so hardware is not necessary to it conceptually. As such, it should not be, and is not, patentable. Software patent attorneys recite hardware in patent claims to try to create a jurisdictional nexus in the physical world, but this is merely a legal fiction. What makes software profitable is usually its features and function, not how the elements of its source code were structured. It is black letter law that you can patent structures, but not features.

Software generally takes preexisting programming language syntax, software frameworks, operating system (OS) features, software development kits (SDKs), and application programming interfaces (APIs, the defined means by which one software system interacts with another), and uses them to do new and useful or extraordinary things. All software essentially uses the same sorts of syntax, such as logical and arithmetic operations, conditional statements, loops, and variables, and the same framework features, such as user authentication and database reads and writes, and merely rearranges these into new and useful features for end users — a calendar, for example, or photo sharing, or music playing. Front-end designs, such as the look of web pages or apps, usually take a set of given elements (colors, shapes, rounded corners, shadows, progress bars) and find new ways to arrange old components. Software engineers usually do not reinvent the wheel.
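Here is what that recombination looks like in practice: a minimal Python sketch of a “photo sharing” feature, with the names and data structures invented for illustration, built entirely from primitives every developer already has.

```python
# A toy "photo sharing" feature assembled from commonplace primitives: a dict
# standing in for a framework's database layer, another for its session-based
# user authentication. Nothing here is a new invention; the feature is a
# rearrangement of parts that every framework already supplies.

DATABASE = {"photos": []}        # stand-in for a framework database
SESSIONS = {"abc123": "alice"}   # stand-in for framework authentication

def share_photo(session_token, photo_bytes, caption):
    user = SESSIONS.get(session_token)   # "user authentication" primitive
    if user is None:                     # ordinary conditional
        raise PermissionError("not logged in")
    if not photo_bytes:                  # ordinary validation
        raise ValueError("empty upload")
    record = {"owner": user, "caption": caption, "data": photo_bytes}
    DATABASE["photos"].append(record)    # "database write" primitive
    return record

share_photo("abc123", b"\x89PNG...", "sunset at the pier")
```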

Software development is relatively cheap, does not require lab research, and does not rely on academic research. Indeed, a person could, in theory, learn how to code from books and, for no more than the cost of a laptop, use a free, open source framework to write software that made millions of dollars. Essentially anyone can learn to code; scientific or mathematical skill of academic caliber is not required. Contrast the knowledge of biology or electrical engineering that is required to patent a drug or a microchip.

The paradigmatic Silicon Valley startup is three 20-year-old computer science majors who hack around one night and make some software and release it — whereupon it snags 2,000 users in a matter of weeks. The inventors raise a million dollars of venture capital, promote the product, get a million users, and get acquired for a billion dollars. That story resembles that of Instagram and many other “unicorns” (the slang term for a software startup valued at over a billion dollars). These young people who know code at a very high level and get very rich from it are called Silicon Valley Geniuses.

Software is an abstract idea with no physical existence essential to its operation. As such, it should not be, and is not, patentable.

In contrast, the paradigm for a patent is a lab that spends a ton of money on Ph.D. researchers who are looking for a cure for cancer. This lab must have the promise of a patent to justify the millions spent on research that may ultimately strike out. For this reason, trying to fit software into patent law is like trying to fit a square peg into a round hole. The round hole did not expect, and was not designed for, a square peg.

Again, the abstract idea doctrine of patent law holds that abstract ideas and scientific principles are not eligible subject matter for patents. On the basis of the abstract idea doctrine alone, software is not patentable, and the argument that hardware gives it physical existence is a legal fiction. The actual statement of the software in a programming language does physically exist, as bit values in computer memory or instructions in a processor, but language expression is copyright subject matter, whereas inventions are patent subject matter. Software, considered as an invention, is simply an idea implemented by a computer or device.

There is a path forward from this impasse: register software patents, but give them special rules. This path is especially attractive because it does not require new legislation, with all the lobbying and hand-wringing that come with a political process, yet it is more honest than a legal fiction. The abstract idea doctrine should simply be retired as a patent doctrine that failed to keep pace with the evolution of technology. It can be replaced by a distinction between theoretical knowledge and practical knowledge, a plausible distillation of the abstract idea case law, under which software that yields practical knowledge, as opposed to merely theoretical knowledge, can be patentable subject matter.

But this leaves the question of what software should be patentable.

There is a basic set of patentability requirements that every patent lawyer knows: novelty, nonobviousness, utility, adequate disclosure, and claims defined by a specification. The key lies in two of these requirements, as interpreted by Law and Economics. First, a patent must be novel. Second, it must be nonobvious.

There is a path forward from this impasse: register software patents, but give them special rules.

From a Law and Economics point of view, monopolies are horrible, because they raise prices and stifle competition. Yet a patent is a 20-year monopoly on an invention. So why grant a patent? A patent is a trade whereby society gives a monopoly to an inventor in return for his disclosing his invention, which then becomes part of the knowledge possessed by society. For this trade to be justified for society, the benefit to society of acquiring the knowledge must exceed the loss from higher prices during the monopoly. Novelty and nonobviousness are two guarantees of this. If the knowledge is not new, society has no need to buy it, because it already possesses it. The underlying intent of the nonobvious requirement is that other inventors would be unlikely to invent the same thing during the 20-year life of the monopoly, because otherwise society could gain that knowledge without paying out a full 20-year monopoly, even if society might wait five years for the invention to be disclosed by other inventors. This analysis replicates the patent law doctrine that combining old parts into a new configuration is not patentable unless the new whole is more than the already known sum of its parts. If A, B, and C are already known, then combining them into ABC might be new, but if there is no new knowledge above what was known from A, B, and C before, then society does not gain any new knowledge and has no reason to award a monopoly.
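The trade can be compressed into a single inequality. The notation is mine, added for illustration, not anything found in patent doctrine: V is the value to society of the disclosed knowledge, and D_t is the deadweight loss from monopoly pricing in year t of the 20-year term T.

```latex
\text{grant the patent} \iff V \;>\; \sum_{t=1}^{T} D_t, \qquad T = 20
% If independent inventors would disclose the same knowledge in year k < T,
% society buys only k years of early access, so V shrinks to the value of
% that head start and the inequality becomes much harder to satisfy.
```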

From this we arrive at a new test for whether a unit of software should be eligible for a patent: the Silicon Valley Genius Test. A patent should be awarded for something that a Silicon Valley Genius does not already know and could not figure out and is unlikely to think up during the next 20 years.

The Silicon Valley Genius knows every element of source code syntax and every feature already available to end users, so a new combination or configuration of those, absent something new that he does not know, will fail the test. The invention must be innovative and creative enough that, even with actual billions of dollars as motivation, the real geniuses of Silicon Valley are not likely to invent it within 20 years. The Silicon Valley Genius is smart, he is a genius, and if something could make a profit and could be cobbled together from the prior art, he will find a way to make it. But to pass the test, the knowledge involved must be new to the SVG. It must look like a lightbulb turning on in his head. After all, he is the representative member of society who actually gains the knowledge disclosed in the patent and is able to monetize it after the monopoly ends.

The Silicon Valley Genius Test is a very high standard to meet, but from the Law and Economics point of view there is otherwise no real reason to grant a patent — because real Silicon Valley Geniuses do exist in large numbers with venture capital funding and low development costs, so they will likely discover whatever the invention is and society will get it at a lower cost than by paying a monopoly. It is conceivable that most software patents would be struck down under the SVG test. Yet that would be the correct result, lowering prices by busting monopolies while only paying for true genius inventions that benefit society most.

From a Law and Economics point of view, monopolies are horrible, because they raise prices and stifle competition.

As with any test, we arrive at this question: how is it passed? Any inquiry using the SVG test must begin by identifying the knowledge that society gains in return for the proposed monopoly. From a Law and Economics point of view, if society does not take a profit on the trade of knowledge for monopoly, a patent should not issue. We ask what new knowledge the disclosure teaches about how to do or accomplish something; without that, it offers merely theoretical knowledge, not practical knowledge. Having identified practical knowledge, we then ask whether an SVG knows it or will soon know it.

This can be demonstrated in several ways. Expert testimony can be taken from real, established SVGs on whether they think the claimed invention would be novel and nonobvious to SVGs. Polls, surveys, and focus groups can present the knowledge to groups of 20 to 100 SVGs drawn from Silicon Valley or the wider talent pool of software developers, who vote on whether they have seen it before or think someone would think of it within the next 20 years. And there can be a factual analysis of the prior art, to see whether the patent is merely a reconfiguration of old, known elements into new features, which is essentially not even novel, let alone nonobvious.

Prior art analysis should not be limited merely to academic scholarship and published patents. It should look to the documentation of every programming language, framework, and API, and to the source code in every repository of free, open source software on hosts such as GitHub.

Now let’s look at copyrights. The Copyright Act treats software as a literary work, and courts have developed a test for software copyright infringement that is essentially the Hand Test applied to source code. The Hand Test, named after Judge Learned Hand, is a popular test for deciding idea-expression dichotomy issues in copyright infringement cases. The idea-expression dichotomy is the copyright doctrine that holds that literary expression is copyrightable subject matter, but ideas (and facts) are not.

It is conceivable that most software patents would be struck down under the Silicon Valley Genius test. Yet that would be the correct result.

Let us be honest: it is a legal fiction that a computer program is a literary work. As with patents, software doesn’t really fit the mold. But let us consider a computer program and take seriously the position that it is a literary work. A computer program, in the copyright sense, is a unit of source code. Source code is composed of words, written in a computer programming language, that tell a computer what to do (leaving aside the detail that it is compiled to machine code, which is a series of ones and zeroes loaded into the processor, and the machine code is what actually tells the processor what to do). So it is text written in a language, and could be protected under copyright like any book or article.

But what makes software special, and what makes people want to protect it, is precisely the thing that normal literary works lack: technological functionality and computer operational execution. If the basis of software copyright is that software is a literary work, then it should be protected as a literary work. Assume that an author writes a short story. Under the idea-expression dichotomy, copyright would protect against literal copying of her words, and would also protect her voice, her style, her idiom, and the details of her plot, story, and characters, though not the abstract ideas of her plot or her character archetypes. Nor would copyright protect the effect her story might have upon her readers.

Now assume that a chef writes a recipe in a cookbook. Literal copying of the cookbook would be copyright infringement, but when chefs use it to bake cakes, copyright would not normally give the author ownership over the cakes or any right to infringement damages. Selling pirated copies of a cookbook could lead to legal remedies, but baking a cake would not. If one chef buys a cookbook and bakes a cake and sells slices which compete against the author's cakes, absent some contractual license as a condition of buying the book, that is not copyright infringement.

I believe that copyright should protect the literary aspect of source code: its word choice, style, and idiom, the voice of the author as a writer of software, and any structural extensions of these. It should not protect functionality, which is properly within the scope of patent, not copyright. If the cake itself, or the reader’s enjoyment, is not copyright infringement, then neither should elements of source code be, to the extent that their significance is the effect they have on computers.

Selling pirated copies of a cookbook could lead to legal remedies, but baking a cake would not.

A doctrinal basis for this position exists within the idea-expression dichotomy: the Merger Doctrine, which holds that where the copyrighted expression is the only possible expression of an idea, or one of a small, finite set of potential expressions, it merges with the idea, and the merger is a defense to infringement. From this we may infer a Functional Merger Test, which holds that features and functionality are facts about what software does, or are simply ideas; that where functionality merges with the expression of the source code under the Merger Doctrine, the functionality is an idea for purposes of the idea-expression dichotomy; and that the software as a functional entity cannot be protected.

Some examples may help to explain this. Assume that a feature requires software source code to take user input ten times in sequence and each time compare the input to a value, outputting a message if and only if the two match. Assume that this must be done in one specific programming language. There is probably a small, finite number of syntactic ways to do it. Declaring a variable, putting user input into it, comparing it to a string literal, and using a loop to do this ten times would suggest certain syntax to an SVG who was experienced in, say, Java or Python. How to code this emerges naturally from the function or feature that is the end goal. The expression has therefore merged with the idea under the Merger Doctrine, because there is either only one expression or a discrete, finite number of expressions capable of correctly expressing the noncopyrightable idea.
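Here is what I mean, in Python. This is a sketch, with the comparison value and prompt invented for illustration; the point is that the feature dictates the code.

```python
# The feature described above, written the way it almost inevitably comes out
# in Python: a variable holding the comparison value, a loop that runs ten
# times, user input on each pass, a comparison against a string literal, and
# a message if and only if they match.

SECRET = "open sesame"   # illustrative value; any string would do

for _ in range(10):
    attempt = input("Enter the passphrase: ")
    if attempt == SECRET:
        print("Match!")
```

An independent developer might rename the variable or reword the prompt, but the variable, the loop, and the comparison are dictated by the feature itself. That convergence is the merger.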

Where a copyright would mimic a patent and grant a de facto monopoly on a technology, which the Merger Doctrine in copyright law explicitly rejects, there should be a defense to copyright infringement. And the same argument applies to most software copyrights, in the absence of actual copying of source code or verbatim use of someone else’s source code without permission. Languages, frameworks, SDKs, and APIs have a finite set of syntax for accomplishing common jobs, and any common feature, or any feature conceivable by combining or reconfiguring known components, will suffer this syndrome of any one solution necessarily resembling the others. Writing source code takes a lot of work and investment, and copyright properly protects against unauthorized use or literal, verbatim copying of source code. But all copyright infringement litigation that concerns or touches upon functionality must fail under the Functional Merger Doctrine: ideas are not copyrightable subject matter, the copyright would grant a monopoly on the idea, and so the copyright is invalid to the extent that it is directed at protecting functionality. A software writer’s voice, style, and idiom could be protected, but these have no financial value, so enforcement would be rare.

To extend the cake hypothetical: assume that the recipe calls for flour, sugar, chocolate, eggs, butter, and honey. Assume that the chef writes the recipe in a cookbook and publishes it. What if it is impossible for another chef to write a recipe for that cake without using those six words: flour, sugar, chocolate, eggs, butter, honey? To judge infringement, one might try to grade the text on the scale from abstract idea to specific expression. But that would not be my approach. If the copyright would grant a monopoly on noncopyrightable subject matter, namely the cake, then there should be no infringement. No deep analysis is needed. The copyright itself should be held invalid to the extent that it would own the cake.

A software writer’s voice, style, and idiom could be protected, but these have no financial value, so enforcement would be rare.

By contrast, the current test for software copyright infringement seeks to remove all unprotected elements, leaving the core structure and elements of the software, which other software can infringe if there is substantial similarity. This is essentially a version of the Hand Test for copyright infringement. The Hand Test posits a spectrum, a continuum of the idea-expression dichotomy, with unprotected idea at one extreme and fully protected expression at the other. The judge applying the Hand Test hears arguments and draws a line at an arbitrary spot on the spectrum, above which is “protected” and below which is “idea.” Judge Hand famously said that no judge can know where he should draw the line, and no judge ever will, and the Hand Test is as much an acknowledgement of an arbitrary decision as it is an actual test applied by a judge. The software copyright infringement test inherits all the flaws and weaknesses of the Hand Test: the decision of what to protect is arbitrary, unpredictable, and without rigorous rational justification. The test removes unprotected elements to find what is protected, but absent some further test on top of it, this is circular reasoning: an element was removed because it was not protected, and it was not protected because it was removed.

In a novel you can have the plot in general (say, forbidden love), plot details (forbidden love between young Italian nobles), and fully defined textual detail (Romeo and Juliet). You can have a spectrum on which you begin with Romeo and Juliet, then move to their story but with different names, then not in Italy, then not in the Renaissance, then the lovers aren’t from warring families but just from groups that hate each other, then they don’t both die at the end, until you arrive at just the abstract idea of forbidden love with a tragic, ironic ending. To analyze whether West Side Story infringed Romeo and Juliet (which could have been a real case, had copyright existed centuries ago and never expired), a judge would need to look at every point on the line from idea to expression, choose an arbitrary point, and then assess whether West Side Story contains enough of the protected elements of Romeo and Juliet to infringe.

In contrast, source code is fully defined, and every item of syntax, every structural organization of the code, and every source code design exists only to achieve a result. There are really two discrete entities, the source code on the one hand and features and functionality on the other, with no spectrum or continuum between them. Current software copyright law applies a Hand Test, Romeo-and-Juliet approach instead of the either-or, all-or-nothing approach that I am recommending.

My position is that the test for software copyright infringement should be this: literal copying, or the actual unauthorized operational use of source code by infringing hardware, creates liability; but any allegation of infringement by non-identical source code fails, to the extent that it would protect (and grant a monopoly on) features or functionality. A Law and Economics analysis of infringement sheds light here.

Judge Hand famously said that no judge can know where he should draw the line, and no judge ever will, and the Hand Test is as much an acknowledgement of an arbitrary decision as it is an actual test.

Writing source code costs money: resources are consumed to pay the salary of the software developers while they write the code, and to hire coders who are technically competent to do so. By granting a copyright, copyright law gives the authors a monopoly on the source code and its usage for the life of the copyright, so that they can recover their investment. If infringers come along and sell copies of the source code, they can make the same revenues as the author but without the cost of authorship, and are hence stealing the money paid to make the source code. Copyright law properly prices the cost of creation into the sale price of the software, so that the purchaser pays the cost of its creation by the manufacturer (plus whatever profit the market will bear).

On the other hand, assume that someone writes source code that has a specific feature, and someone else writes a second computer program in the same programming language, implementing the same feature. The task and the available syntax define certain optimal ways of coding the feature, so the second person’s software ends up substantially similar to the first. The first person sues for copyright infringement. The software is substantially similar, but the damages sought are not to recover the author’s cost of writing source code; instead, he is trying to own the feature. This grants a monopoly, raises prices, and eliminates the second person as a competitor and a substitute in the marketplace. The difference between what the software would cost in a world with price competition between the first and second persons and what it costs under the monopoly is the amount of economic surplus that society loses by granting the monopoly.

It may or may not be economically efficient for society to grant a monopoly in return for technology, but to the extent that it is, patent law is the regime that does the granting. The details of patent law, both eligibility and the duration of the monopoly, have been carefully calibrated by Congress to weigh the costs that society pays against the technology society receives, and copyright law should not be doing this work. In Merger Doctrine cases, judges have explicitly recognized that copyright law should not usurp patent law, and the Functional Merger Doctrine for software copyrights will correctly eliminate infringement claims aimed at features and functionality.

Software is not poetry, and one does not write it for beauty.

The Functional Merger Doctrine is confirmed both by logic and by pursuit of the Law and Economics question: why is the copyright owner suing for infringement?

A lawsuit costs money, and the litigant must expect to get more out of it than he spends, or as a rational economic actor he would not choose to litigate. If literal copying or unauthorized use of source code has occurred, and the investment he made in writing the code exceeds what the litigation costs him, then he can gain something: the protection of his investment. Otherwise, if he sues and only nonfunctional elements are protected, he gains nothing, because purely stylistic elements of source code have no financial value. Software is not poetry, and one does not write it for beauty. He would not sue unless he could monetize his lawsuit, and to do that he would have to take some of the money made by the functionality away from the competitor he sues. From a Law and Economics point of view, something must make the money that pays the damages; otherwise they would be unfunded.

Thus, absent literal copying, no software copyright infringement litigation would ever be initiated unless it was meant to attack competing functionality in different yet similar source code, which is precisely what copyright in a literary work of authorship should not protect, and which is within the scope of patent, not copyright. The Functional Merger Doctrine can eliminate all such lawsuits.

The test for whether software copyright infringement should be found will follow this inquiry: did literal, verbatim copying of source code, or actual unauthorized use of source code by hardware, occur? If yes, there is copyright infringement. If no, there must be a presumption that the plaintiff seeks to protect functionality, and the claim must fail; the presumption can be rebutted by a showing that the asserted protected elements are nonfunctional — for example, voice or style.
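Stated as a decision procedure, the inquiry looks like this. The sketch below encodes my proposal, not existing doctrine, and the boolean inputs stand in for facts a court would find.

```python
# A sketch of the proposed two-step infringement inquiry. The inputs are
# placeholders for judicial findings of fact; the logic is the essay's
# proposed test, not current law.

def claim_succeeds(literal_copying: bool,
                   unauthorized_use_by_hardware: bool,
                   asserted_elements_nonfunctional: bool) -> bool:
    # Step 1: literal verbatim copying, or actual unauthorized use of the
    # source code by hardware, is infringement.
    if literal_copying or unauthorized_use_by_hardware:
        return True
    # Step 2: otherwise, presume the plaintiff seeks to protect functionality
    # and fail the claim, unless the presumption is rebutted by showing the
    # asserted elements are nonfunctional (voice, style, idiom).
    return asserted_elements_nonfunctional

assert claim_succeeds(True, False, False)        # pirated source code
assert not claim_succeeds(False, False, False)   # similar code, same feature
assert claim_succeeds(False, False, True)        # appropriation of pure style
```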

It is a legal fiction that software is not an abstract idea for the purposes of patent law, and it is a legal fiction that software is a literary work.

Return to the example of the chef. Assume that the recipe says, “Mix flour, butter, eggs, chocolate, honey, and sugar; then bake.” If some publisher reprints this text in a rival cookbook, that is theft of the text as a literary work of authorship. But a chef cannot state the recipe without using the six words that name the ingredients, so those words merge with the functionality. Now what if someone copies the structure, listing savory ingredients first and sweet ingredients last? Could the chef sue for infringement then? Scholars and judges may think this is a tough question, but really it is easy. Who cares in what order the ingredients are listed?

No lawsuit will ever get funded unless it is to protect something that makes enough money to pay the legal fees — in software, functionality. So the Functional Merger Doctrine test will apply and answer this question. The chef owns the cookbook but not the cake. No one cares about the recipe; people only care about the cake and what it tastes like. A chef’s recipe is the perfect analogy for software, because source code is a set of instructions that tells a computer what to do, and the functionality, the cake, is the operation of the computer executing that set and series of instructions, which does or accomplishes something useful. (It is interesting to wonder whether a chocolate honey cake would be surprising and unexpected enough to deserve a patent, although as a mere combination of already known elements it probably should not.)

To conclude: it is a legal fiction that software is not an abstract idea for the purposes of patent law, and it is a legal fiction that software is a literary work. But, lacking the political and legislative will to reform and create a new regime for software, we must make patent and copyright law work for software, through tests that are better suited to it. The Silicon Valley Genius Test and the Functional Merger Doctrine Test are clear, bright-line tests that are easy for judges to use and that will clean up and refine the law of intellectual property for computer source code.






Flea to Choose







The Urgency of Climate Change


On June 30, at a climate change meeting in Abu Dhabi, UN Secretary-General Antonio Guterres exclaimed, "Every week brings new climate-related devastation . . . floods, drought, heatwaves, wildfires and super storms." This weekly barrage of unprecedented climate events is believed to be caused by the increasing concentration of atmospheric CO2 — exceeding a hellish 400 ppm in 2017. The world must “act now with ambition and urgency,” he implored.

Journalists, liberals, and frightened children agree, as does every one of the more than 20 Democrat candidates who have entered the 2020 presidential race. They adamantly believe that climate change is an “existential threat” that is already hitting key tipping points. Climatologist Michael Mann (the inventor of the famous Hockey Stick curve) “has urged governments to treat the transition to renewable energy with the equivalent urgency that drove the US industrial mobilization in World War Two.” By some estimates, fossil fuels must be eliminated in 12 years. Sensing a lack of urgency, students in over 100 nations walked out of their classrooms last March, in a global “Student Climate Strike” to protest climate inaction. News anchor Chuck Todd devoted an entire edition of NBC’s Meet the Press to how climate urgency can be explained to the American people.

This weekly barrage of unprecedented climate events is believed to be caused by the increasing concentration of atmospheric CO2 — now exceeding a hellish 400 ppm.

Not long ago, climate havoc was less urgent. “Change” wasn’t expected to become catastrophic until the latter half of the century. As late as December 2015, when the Paris climate accord was signed, few cared that horrendous polluters, such as China and India, promised only trivial emission reductions. There was ample time for journalists to explain the urgency of climate change. Now, however, experts tell us that increases in the frequency and intensity of extreme climate events are already underway, and that only bold, multitrillion-dollar programs such as the Green New Deal (GND) can end the weekly assaults.

But why have the frequency and intensity of “floods, drought, heatwaves, wildfires and super storms” increased so much, and only recently? The Global Circulation Models (GCMs) that predict global temperature as a function of greenhouse gas (GHG) emissions have not changed significantly; they are as flawed as they have always been.

Let’s say that a group of economists created an economic model designed to predict future inflation rates. And let’s say that they insisted that all future US monetary and fiscal policy be based on its predictions. But what if every time the model was tested, its predicted inflation rate was three times the observed rate? After a few years of observed failure (if it took that long), most people would tell the economists where they could stick their model. And those who promoted policies based on its predictions would be ridiculed as clowns and morons.

The Global Circulation Models that predict global temperature as a function of greenhouse gas emissions have not changed significantly; they are as flawed as they have always been.

Not so in climate science world. The denizens of that bizarre kingdom are praised for their shoddy tools. Indeed, they have been encouraged, with profligate research grants, to create more and bigger GCMs. Since 1988, when James Hansen first sounded the catastrophic global warming alarm, climate scientists have relied on such models. Hansen’s initial model predicted a warming rate of 0.35°C per decade. Other climate scientists jumped into the climate modeling business, and over the ensuing decades built a suite of at least 102 models — all of which estimated temperature increases similar to Hansen’s torrid rate.

The growth of climate temperature estimation science gave rise to climate event attribution science — the blaming of fossil-fuel combustion for any event that climate change fretters believe could plausibly result from the implausible temperatures predicted by the GCMs. And for most major news outlets, both of these sciences are settled, and weekly “floods, drought, heatwaves, wildfires and super storms” are the grist for the mill of climate urgency.

Except that empirical evidence for urgency does not exist. The temperature predictions of the GCMs are no more accurate than those of the fictitious economic model above. The only difference is that the latter model would have been discarded decades ago. The GCMs are still in use, heavy use, despite a gaping discrepancy between the theoretical temperatures that they estimate and the empirical temperatures that are observed. The discrepancy has been known for years, and many peer-reviewed studies have identified and measured its magnitude. In his 2019 paper “Falsifying Climate Alarm,” John Christy compared the temperature trends estimated by 102 GCMs to the actual trend observed by satellites and radiosonde balloons. Over the period from 1979 (when satellite temperature measurements first became available) to 2017, the average trend produced by the models was 0.44°C per decade, three times the observed trend of 0.15°C per decade.
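The “three times” is easy to check from the two figures as reported:

```python
# Ratio of modeled to observed warming trends, as given above.
modeled  = 0.44   # °C per decade: average of the 102 GCM trends
observed = 0.15   # °C per decade: satellites and radiosondes, 1979-2017

print(round(modeled / observed, 2))   # 2.93 -- roughly a factor of three
```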

For most major news outlets, both of these sciences are settled, and weekly “floods, drought, heatwaves, wildfires and super storms” are the grist for the mill of climate urgency.

One would think that journalists such as Chuck Todd would welcome climate scientists such as Christy to their newscasts. They might discover that climate urgency is, well, not that urgent. Imagine the scoop: “GCMs Exaggerate Global Warming by Factor of 3, Need Fundamental Revisions.” Unfortunately, climate scientists such as Christy are treated as heretics, who should be given no opportunity to disturb the grist. “The Earth is getting hotter. And human activity is a major cause, period. We're not going to give time to climate deniers,” pontificated Mr. Todd. This is tantamount to discovering that the actual inflation rate is 3%, then writing a front-page story based on the rate predicted by the faulty economic model: “Inflation Soars to 9%, Devastating Consumer Purchasing Power.”

And so it goes at the climate urgency mill. Instead of actual climate-related death and devastation, it is imagined climate-related death and devastation that is reported. It is only the attribution of climate havoc (to fossil-fuel consumption) that has increased in frequency and intensity — a development that dramatically escalated with the 2016 election of Donald Trump, nearly rupturing climate urgentometers with the 2017 US withdrawal from the Paris climate accord.

Climate change enthusiasts around the world cringed at estimates of the additional quantity of CO2 that would spew from the US into earth’s ever-thickening, heat-trapping atmosphere. In a speech at NYU immediately before Mr. Trump’s announcement of the withdrawal, Mr. Guterres warned that the US would suffer “negative economic, security and societal consequences.” Forbes agreed with the assessment, stating, “While the rest of the world moves to invest heavily in renewables, implement carbon reduction technology, and alter consumption habits the United States runs the risk of losing its competitiveness in the global marketplace.” “China, India to Reach Climate Goals Years Early, as U.S. Likely to Fall Far Short,” snarled an Inside Climate News headline. The US became the climate villain. Climate urgency became exponentially more urgent. Climate destruction became weekly.

Instead of actual climate-related death and devastation, it is imagined climate-related death and devastation that is reported.

But is any of this urgent, or even true? Have Chuck Todd and his ilk bothered to check readily available empirical evidence? After all, science can only be confirmed by observation. If, for example, they consulted the National Oceanic and Atmospheric Administration’s detailed list of hurricanes, they would quickly discover that there is no upward trend in frequency or intensity. In her June testimony before a US House committee on climate change, climate scientist Judith Curry noted: “Of the 13 strongest U.S. landfalling hurricanes in the historical record, only three have occurred since 1970 (Andrew, Michael, Charley). Four of these strongest hurricanes occurred in the decade following 1926.” She further stated, “Recent international and national assessment reports acknowledge that there is not yet evidence of changes in the frequency or intensity of hurricanes, droughts, floods or wildfires that can be attributed to manmade global warming.”

And let’s not forget climate-related death, the ultimate measure of catastrophic anthropogenic global warming. According to the International Disaster Database, during the last century, the number of deaths from droughts, extreme temperatures, floods, storms, and wildfires has plummeted by more than 90%, from almost 500,000 per decade to less than 25,000 per decade today. Furthermore, when population growth (the population quadrupled during the period) and more aggressive reporting in recent decades (to receive more disaster-relief aid) are taken into account, this impressive decline appears dramatically steeper. A time series plot would produce a hockey stick curve flipped over, “proving” that rising levels of atmospheric CO2 save lives.
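The arithmetic behind “dramatically steeper,” using the figures just given (the fourfold population multiplier is the paragraph’s own):

```python
# Decade death tolls as given above, adjusted for a quadrupled population.
deaths_early = 500_000   # per decade, early in the century
deaths_today =  25_000   # per decade, today

raw = 1 - deaths_today / deaths_early                # unadjusted decline
per_capita = 1 - (deaths_today / 4) / deaths_early   # population x4 since then

print(f"raw decline:        {raw:.0%}")         # 95%
print(f"per-capita decline: {per_capita:.2%}")  # 98.75%
```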

On September 23, the leaders of the rest of the world will come to the UN Climate Action Summit in New York City, “with concrete, realistic plans to enhance their nationally determined contributions by 2020, in line with reducing greenhouse gas emissions by 45 per cent over the next decade, and to net zero emissions by 2050.” Mr. Trump will no doubt be excoriated and nations such as China and India will be praised for their climate leadership, snatched from a derelict US, with its suffering economy.

The number of deaths from droughts, extreme temperatures, floods, storms, and wildfires has plummeted by more than 90%, from almost 500,000 per decade to less than 25,000 per decade today.

But the US economy has been booming — with rapid GDP growth, rising wage rates, and historically low unemployment. The “heavy investments in renewables” made by the rest of the world are, thus far, a bust. According to the Energy Information Administration (EIA), from 2005 to 2017, global energy-related CO2 emissions rose by 6,040 million metric tons, an increase of 21%. In stark contrast, and to the dismay of journalists and politicians who have been telling us that America has let the rest of the world down, US energy-related CO2 emissions declined by 861 million metric tons, a decrease of 14%. And for climate enthusiasts who are placing their planet salvation hopes on early goal attainment, the report noted that “growth in global energy-related CO2 emissions from 2005 to 2017 was led by China, India, and other countries in Asia.” Perhaps Mr. Todd should explain climate urgency to China and India.

It’s difficult for people other than liberals and schoolchildren to view climate urgency as anything but a hoax. Most people tend to slow down, if not stop, when they sense that they are being deceived — when the stories they are being told do not match what they observe. True, Americans observed the devastation of hurricanes Harvey, Irma, and Maria that struck in 2017; but they also observed the absence of a single major hurricane landfall in the 11 years prior.

If solar panels and windmills were cost-effective, we would see them everywhere. We see them almost nowhere. They supply less than 1% of the world’s energy. They provide this minuscule quantity because, after decades of technological advances (praised and celebrated by the news media) and decades of taxpayer-funded subsidies (currently in the US, $129 billion annually, without which both industries would go out of business tomorrow), they are too costly and inefficient to compete with other forms of energy. The next time a Democrat candidate promotes the GND, he should explain the urgency of replacing our cheapest sources of energy with the most expensive. Or how he expects to get to 100% solar and wind in 12 years, having taken 50 years to get to 1%. When a journalist uses the next flood or drought to explain the urgency of climate change, he should explain how, in those halcyon days of the 1930s, when the atmospheric concentration of CO2 was less than 300 ppm, floods claimed 436,147 lives. Or how the droughts of the 1920s claimed 472,400.

Perhaps Chuck Todd should explain climate urgency to China and India.

Climate change urgency has led to the hasty development of schemes to curb the rise in global temperature — currently predicted to exceed 4°C by 2100. Controlling the earth’s climate, of course, requires an enormous quantity of money. The GND solution would build a near-zero carbon national electricity grid (115 million acres of solar panels and windmills to eliminate electricity generated by fossil fuels), replace air travel with a high-speed rail system and internal combustion vehicles with electric vehicles, retrofit all buildings to meet high energy-efficiency standards, and much, much more. Its total cost has been estimated to be as high as $93 trillion. An exhaustive economic study by Benjamin Zycher of the American Enterprise Institute found that the electricity generation component alone would cost more than “$490.5 billion per year, permanently, or $3,845 per year per household.”

And for the proponents of the GND to believe that it will work, an enormous quantity of conceit and arrogance is also required. But let’s say that the GND succeeds — that it is executed flawlessly and meets all of its emission reduction goals. Then, notes Mr. Zycher, its effect on end-of-century temperature reduction is calculated (by an EPA climate model) to be somewhere between 0.173°C and 0.083°C. That is, $93 trillion of climate urgency will have absolutely no effect. All of that requires an enormous quantity of stupidity.
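Put the cost and the effect side by side and the implied price of a degree emerges. The division below simply combines the two figures quoted above:

```python
# Implied cost per degree Celsius of avoided warming by 2100.
gnd_cost = 93e12            # dollars: high-end GND cost estimate
best, worst = 0.173, 0.083  # °C avoided by 2100 (EPA model, per Zycher)

print(f"${gnd_cost / best / 1e12:,.0f} trillion per degree (best case)")
print(f"${gnd_cost / worst / 1e12:,.0f} trillion per degree (worst case)")
# -> $538 trillion and $1,120 trillion per degree, respectively
```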






Are You Joking?


On July 23, Jeffrey Epstein, the world’s highest-profile prisoner, attempted to commit suicide while in federal custody in New York. Or somebody tried to kill him while he was in federal custody in New York. No one knows. On August 10 Epstein killed himself while in federal custody. Or he didn’t. No one knows.

Likewise, no one knows what happened to President Trump’s several orders, during the past year, for the declassification of all documents bearing on the attempt by our secret police to prevent him from becoming president, or continuing to be president. Or was it all documents? Or was it all documents about the FBI, the CIA, and the DOJ? Or was it . . . ?

This is the behavior of the federal government, at its highest and most visible ranks, regarding matters that are known by all.

In addition, no one knows what is happening with the current innumerable investigations of this and similar events, events that are so well attested as to have become, at this point, crashing bores. When, or if, the investigations are completed, will we hear again that Such and Such Grand Inquisitor “lacked candor” and might be prosecuted, except that he or she will not be prosecuted?

This is the behavior of the federal government, at its highest and most visible ranks, regarding matters that are known by all. Yet leading members of one of our great political parties are demanding that still more power be given to the state — power over healthcare, over incomes, over guns, over history itself — while leading members of the other great party, having promised to drain the swamp, demand that the state take unto itself the role of policing speech on the internet, targeting “unstable” speech with red flags, and so on.

Our descendants, should they still be able to read, and allowed to do so, will marvel at this childlike faith in the great god of government.






Hurricane Ahead!


I live in Orlando. If you’ve been listening to the glamour girls and breathless boys of the curvy couches in the newsrooms of New York, you would think that I live in a Death Zone. A hurricane is coming! A hurricane is coming! Shortages! Mayhem! Break out the plywood. Buy up all the water, bread, and peanut butter within a 500-mile radius. And get the hell out of there!

Well, let me tell you what it’s really like. Yes, the shelves were pretty bare on Wednesday, a full week before the hurricane is supposed to make landfall. (Some have said that preparing for a hurricane is like waiting to be attacked by a turtle.) For one evening, bread was gone, water was gone, and batteries were gone from many stores. But that was the first day of the hype. “Hurry! Hurry! You need seven days of water per person in order to survive the devastation! Get it all before someone else does!”

Shortages! Mayhem! Break out the plywood and get the hell out of there!

So how did I prepare? Well, yesterday I went to the beach. (Hurricane warnings make for perfect beach conditions: blue skies, warm water, strong waves, and nearly empty shorelines.) Meanwhile, my daughter took my grandson to Universal Studios (light breeze and five-minute wait lines).

Today I went shopping. As I expected, based on last year’s hurricane preparations, pallets of bottled water encircled the entire perimeter of my local Publix. An employee stood at the front door, prepared to load a couple of cases into each customer’s cart so the heavy water would be conveniently located at the bottom while the customer continued shopping. So thoughtful! (It was also a gentle reminder that two cases would be plenty — no need to hoard.) The bread shelves were full as well, and stockers were busily replenishing other staples. There will be additional deliveries tomorrow and every day until the storm hits. There is simply no reason to panic about running out of food and water, despite Wednesday’s initially empty shelves.

Hurricane warnings make for perfect beach conditions: blue skies, warm water, strong waves, and nearly empty shorelines.

Home Depot is doing the same thing with plywood, batteries, generators, and flashlights, bringing in more supplies daily. Instead of raising prices to reduce demand, as store managers did in years past, they are planning ahead to satisfy rising demand with rising supply. We don’t need to get into a fight over who saw that last sheet of plywood first — there will be a whole pallet of plywood unloaded from the delivery truck any minute.

How is this possible? As demand quadruples with every frantic news report, why aren’t we experiencing severe shortages?

It’s simple: businesspeople are smart. They can read a weather report, review previous sales trends, anticipate demand, and adjust supply. Trucking companies can respond in advance too, diverting transportation to where it is needed now, not where it might have been scheduled to go a few weeks ago. And because hurricanes move so slowly, businesspeople have a couple of weeks to adjust their orders, assign overtime duties to stockers and checkers, and reassure their customers that the doors will be open and the shelves stocked throughout the run-up to the storm. And they’ll be open for business again just as quickly as they can after the storm. We aren’t going to starve. I promise you.

We don’t need to get into a fight over who saw that last sheet of plywood first — there will be a whole pallet of plywood unloaded from the delivery truck any minute.

Meanwhile, FEMA and the National Guard are on their way to Florida. They might be needed, if damage is severe. Also on their way are hordes of weather reporters, seeking out the highest water, the windiest corner, and the dangliest signs to show us just how desperate we are in Florida. (Remember last year’s phony photos of reporters hunkered down in raincoats and boots while residents strolled by in the background wearing t-shirts, shorts, and flip-flops?)

Some folks may experience severe damage and loss, especially those who live near the coast, and I feel compassion for them. They’ll need emergency help (and should receive it from their insurance companies). But for most of us, the local Publix and Home Depot have us covered. There’s no need to panic, and no need to break our budgets by purchasing more food than we actually need. And that, my friends, is how capitalism makes life better for everyone.




