Mind the Gap


“Capitalism automatically generates arbitrary and unsustainable inequalities that radically undermine democratic societies.” — Thomas Piketty, Capital in the 21st Century

French professor Thomas Piketty’s new book — ranked #1 on Amazon and the New York Times — is a thick volume with the same title as Karl Marx’s 1867 magnum opus, Capital. Many commentators have noted the Marxist tone — the author cites Marx more than any other economist — but that’s a distraction.

The author discusses capital and economic growth, and recommends a levy on capital, but the primary focus of the book is inequality. In mind-numbing minutiae of data from Europe and the United States, Piketty details how inequality of income and wealth has ebbed and flowed over the past 200 years before increasing at an “alarming” rate in the 21st century. Because of his demonstrated expertise, his scholarship and policy recommendations (sharply higher progressive taxes and a universal wealth tax) will be taken seriously by academics and government officials. Critics would be wise to address the issues he raises rather than simply dismiss him as a French polemicist or the “new Marx.”

According to his research, inequality grows naturally under unfettered capitalism except during times of war and depression. “To a large extent, it was the chaos of war, with its attendant economic and political shocks, that reduced inequality in the twentieth century” (p. 275, cf. 471). Otherwise, he contends, there is a natural tendency for market-friendly economies to experience an increasing concentration of wealth. His research shows that, with the exception of 1914–45, the rate of return on property and investments has consistently been higher than the rate of economic growth. He predicts that, barring another war or depression, wealth will continue to concentrate in the top brackets, and inherited wealth will grow faster as populations age and growth rates inevitably slow — a prospect he regards as “potentially terrifying” and socially “destabilizing.”

If market-generated inequality is the price we pay to eliminate poverty, I’m all in favor.

His proposal? Investing in education and technical training will help, but won’t be enough to counter growing inequality. The “right solution” is a progressive income tax up to 80% and a wealth tax up to 10%. He is convinced that these confiscatory rates won’t kill the motor of economic growth.

One of the biggest challenges for egalitarians like Piketty is to define what they mean by an “ideal” distribution of income and wealth. Is there a “natural” equilibrium of income distribution? This is an age-old question that has yet to be resolved. I raised it in a chapter in “Economics on Trial” in 1991, where I quoted Paul Samuelson in his famous textbook, “The most efficient economy in the world may produce a distribution of wages and property that would offend even the staunchest defender of free markets.”

But by what measure does one determine whether a nation’s income distribution is “offensive” or “terrifying”? In the past, the Gini ratio or coefficient has been used. It is a single number that varies between 0 and 1. If it is 0, everyone earns the same amount; if it is 1, one person earns all the income and the rest earn nothing. Neither extreme is ideal. Suppose everyone earns the same wage or salary. Perfect equality sounds wonderful until you realize that no economy could function efficiently that way. How could you hire anyone else to work for you if you had to pay them the same amount you earn?
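For readers who want the arithmetic behind that single number, here is a minimal sketch of the standard textbook definition (not Piketty’s own notation): for incomes $x_1, \dots, x_n$ with mean $\bar{x}$,

$$G = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} \lvert x_i - x_j \rvert}{2 n^{2} \bar{x}}$$

If everyone earns the same amount, every difference in the numerator is zero and $G = 0$; if one person earns everything, $G = (n-1)/n$, which approaches 1 as the population grows.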

A wealth tax destroys a fundamental sacred right of mankind — the right to be left alone.

Even social democrats William Baumol and Alan Blinder warned in their popular economics textbook, “What would happen if we tried to achieve perfect equality by putting a 100% income tax on all workers and then divide the receipts equally among the population? No one would have any incentive to work, to invest, to take risks, or to do anything else to earn money, because the rewards for all such activities would disappear.”

So if a Gini ratio of 0 is bad, why is a movement toward 0 (via a progressive income tax) good? It makes no sense.

Piketty wisely avoids the use of Gini ratios in his work. Instead he divides income earners into three general categories — the wealthy (the top 10% of earners), the middle class (the next 40%), and the rest (the bottom 50%) — and tracks how they fare over the long term.

But what is the ideal income distribution? It’s a chimera. The best Piketty and his egalitarian levelers can do is complain that inequality is getting worse, that the distribution of income is unfair and often unrelated to productivity or merit (pp. 334–5), and therefore should be taxed away. But they can’t point to any ideal or natural distribution, other than perhaps some vague Belle Époque of equality and opportunity (celebrated in France between 1890 and 1914).

Piketty names Simon Kuznets, the 20th century Russian-American economist who invented national income statistics like GDP, as his primary antagonist. He credits Kuznets with the pro-market stance that capitalist development tends to reduce income inequality over time. But actually it was Adam Smith who advocated this concept two centuries earlier. In the Wealth of Nations, Smith contended that his “system of natural liberty” would result in “universal opulence which extends itself to the lowest ranks of the people.”

Not only would the rich get richer under unfettered enterprise, but so would the poor. In fact, according to Smith and his followers, the poor catch up to the rich, and inequality is sharply reduced under a liberal economic system without a progressive tax or welfare state. The empirical work of Stanley Lebergott, and later Michael Cox, demonstrates that through the competitive efforts of entrepreneurs, workers, and capitalists, virtually all American consumers have been able to change an uncertain and often cruel world into a more pleasant and convenient place to live and work. A typical homestead in 1900 had no central heating, electricity, refrigeration, flush toilets, or even running water. But by 1970, before the welfare state really got started, a large majority of poor people benefited from these goods and services. The rich had all these things at first — cars, electricity, indoor plumbing, air conditioning — but now even the poor enjoy these benefits and have thus risen out of poverty.

Piketty and other egalitarians make their case that inequality of income has been growing since the Great Recession, and they may well be correct. But what if goods and services — what money can buy — become the criterion for measuring inequality? The results might be quite different. Today even my poor neighbors in Yonkers have smartphones, just like the rich. Every spring the 1% attend the Milken Institute Conference in LA, which costs $7,000 or more; the 99% can watch the entire proceedings on video on the Internet a few days later — for free. The 1% can go to the Super Bowl for entertainment; the 99% gather around with their buddies and watch it on a widescreen HD television. Who is better entertained?

Contrary to Piketty’s claim, it’s good that capital grows faster than income, because that means people are increasing their savings rate.

Piketty & Co. claim that only the elite can go to the top schools in the country, but ignore the incredible revolution in online education, where anyone from anywhere in the world can take a course in engineering, physics, or literature from Stanford, MIT, or Harvard for a few thousand dollars, or in some cases, for absolutely nothing.

How do income statistics measure that kind of equal access? They can’t. Andrew Carnegie said it best: “Capitalism is about turning luxuries into necessities.” If that’s what capital and capitalism do, we need to tax them less, not more.

A certain amount of inequality is a natural outcome of the marketplace. As John Maynard Keynes himself wrote in the Economic Consequences of the Peace (1920), “In fact, it was precisely the inequality of the distribution of wealth which made possible those vast accumulations of fixed wealth and of capital improvements which distinguished that age [the 19th century] from all others.”

A better measure of well-being is the change in the absolute real level of income for the poor and middle classes. If the average working poor saw their real income (after inflation) double or triple in the United States, that would mean lifting themselves out of poverty. That would mean a lot more to them than the fortunes of the 1%. Even John Kenneth Galbraith recognized that higher real growth for the working class was what really mattered when he said in The Affluent Society (1959), “It is the increase in output in recent decades, not the redistribution of income, which has brought the great material increase in the well-being of the average man.”

Political philosopher John Rawls argued in his Theory of Justice (1971) that the most important measure of social welfare is not the distribution of income but how the lowest 10% perform. James Gwartney and other authors of the annual Economic Freedom Index have shown that the poorest 10% of the world’s population earn more income when their countries adopt institutions favoring economic freedom. Economic freedom also reduces infant mortality, the incidence of child labor, black markets, and corruption by public officials, while increasing adult literacy, life expectancy, and civil liberties. If market-generated inequality is the price we pay to eliminate poverty, I’m all in favor.

I have reservations about Piketty’s claim that “Once a fortune is established, the capital grows according to a dynamic of its own, and it can continue to grow at a rapid pace for decades simply because of its size.” To prove his point, he selects members of the Forbes billionaires list to show that their wealth always grows faster than the income of the average earner. He repeatedly refers to the growing fortunes of Bill Gates in the United States and Liliane Bettencourt, heiress of L’Oreal, the cosmetics firm.

Come again?

I guess he hasn’t heard of the dozens of wealthy people who lost their fortunes, like the Vanderbilts, or, to use a recent example, Eike Batista, the Brazilian businessman who just two years ago was the 7th wealthiest man in the world, worth $30 billion, and now is almost bankrupt.

Piketty conveniently ignores the fact that most high-performing mutual funds eventually stop beating the market and even underperform. Take a look at the Forbes “Honor Roll” of outstanding mutual funds. Today’s list is almost entirely different from the list of 15 or 20 years ago. In our business we call it “reversion to the mean,” and it happens all the time.

Prof. Piketty seems to have forgotten a major theme of Marx and later Joseph Schumpeter, that capitalism is a dynamic model of creative destruction. Today’s winners are often tomorrow’s losers.

IBM used to dominate the computer business; now Apple does. Citibank used to be the country’s largest bank. Now it’s Chase. Sears Roebuck used to be the largest retail store. Now it’s Wal-Mart. GM used to be the biggest car manufacturer. Now it’s Toyota. And the Rockefellers used to be the wealthiest family. Now it’s the Waltons, who a generation ago were dirt poor.

Piketty is no communist and is certainly not as radical as Marx in his predictions or policy recommendations. Many call him “Marx Lite.” He doesn’t advocate abolishing money and the traditional family, confiscating all private property, or nationalizing all the industries. But he’s plenty radical in his soak-the-rich schemes: a punitive 80% tax on incomes above $500,000 or so, and a progressive global tax on capital with an annual levy between 0.1% and 10% on the greatest fortunes.

There are three major drawbacks to Piketty’s proposed tax on wealth or capital.

First, it violates the most fundamental principle of taxation, the benefit principle — also known as the accountability or “user pay” principle — under which taxation is justified as a payment for benefits or services rendered. The basic idea is that if you buy a good or use a service, you should pay for it. This approach encourages efficiency and accountability. In the case of taxes, if you benefit from a government service (police, infrastructure, utilities, defense, etc.), you should pay for it. The more you benefit, the more you pay. In general, most economists agree that wealthier people and big businesses benefit more from government services (protection of their property) and should therefore pay more. A flat personal or corporate income tax would fit the bill. But a tax on capital (or even a progressive income tax) is not necessarily connected to benefits from government services — it’s just a way to forcibly redistribute funds from rich to poor, and in that sense it is an example of legal theft and tyranny of the majority.

Second, a wealth tax destroys a fundamental sacred right of mankind — financial privacy and the right to be left alone. An income tax is bad enough. But a wealth tax is worse. It requires every citizen to list all their assets, which means no secret stash of gold and silver coins, diamonds, art work, or bearer bonds. Suddenly financial privacy as guaranteed by the Fourth Amendment becomes illegal and an underground black market activity.

Third, a wealth tax is a tax on capital, the key to economic growth. The worst crime of Piketty’s vulgar capitalism is his failure to understand the positive role of capital in advancing the standard of living in all the world.

To create new products and services and raise economic performance, a nation needs capital, lots of it. Contrary to Piketty’s claim, it’s good that capital grows faster than income, because that means people are increasing their savings rate. The only time capital declines is during war and depression, when capital is destroyed.

He blames the increase in inequality on low growth rates — when, he says, the economic growth rate falls below the return on capital. The solution isn’t to tax capital, but to increase economic growth via tax cuts, deregulation, better training and education, higher productivity, and free trade.

Even Keynes understood the value of capital investment, and the need to keep it growing. In his Economic Consequences of the Peace, Keynes compared capital to a cake that should never be eaten. “The virtue of the cake was that it was never to be consumed, neither by you nor by your children after you.”

What country has advanced the most since World War II? Hong Kong, which has no tax on interest, dividends, or capital.

 

If the capital “cake” is the source of economic growth and a higher standard of living, we want to do everything we can to encourage capital accumulation. Make the cake bigger and there will be plenty to go around for everyone. This is why increasing corporate profits is good — it means more money to pay workers. Studies show that companies with higher profit margins tend to pay their workers more. Remember the Henry Ford $5 a day story of 1914?

If anything, we should reduce taxes on capital gains, interest, and dividends, and encourage people to save more and thus increase the pool of available capital and entrepreneurial activity. A progressive tax on high-income earners is a tax on capital. An inheritance tax is a tax on capital. A tax on interest, dividends, and capital gains is a tax on capital. By overtaxing capital, estates, and the income of our wealthiest people, including heirs to fortunes, we are selling our nation short. There’s no telling how high our standard of living could be if we adopted a low-tax policy. What country has advanced the most since World War II? Hong Kong, which has no tax on interest, dividends, or capital.

Hopefully Mr. Piketty will see the error of his ways and write a sequel called “The Wealth of Nations for the 21st Century,” quoting Adam Smith instead of Karl Marx. The great Scottish economist once said, “Little else is required to carry a state from the lowest barbarism to the highest degree of opulence but peace, easy taxes, and a tolerable administration of justice.” Or perhaps he will quote this passage: “To prohibit a great people . . . from making all that they can of every part of their own produce, or from employing their stock and industry in the way that they judge most advantageous to themselves, is a manifest violation of the most sacred rights of mankind.”


Editor's Note: Review of "Capital in the Twenty-First Century," by Thomas Piketty, translated by Arthur Goldhammer. Belknap Press, 2014.





They Didn’t Want a War


Margaret MacMillan’s The War that Ended Peace gives a fascinating description of the background, stretching back to around 1900, of what she, like people at the time, calls the “Great War.” She relates how the Bosnian crisis of 1908, the Moroccan crises of 1905 and 1911, the crises arising from wars among the Balkan countries in 1912 and 1913, and various minor incidents were successfully muddled through without war among the great powers. The most general source of tension seems to have been fear of being attacked first and concern to make and maintain alliances.

Leading statesmen optimistically expected that tension between Austria-Hungary and Serbia, exacerbated by the assassination of Archduke Franz Ferdinand on 28 June 1914, would somehow be resolved like the earlier crises. Even after Austria-Hungary rejected Serbia’s compliant but not total acceptance of its ultimatum and declared war, hope lingered of keeping the war contained.

Few policymakers had wanted war (the main exception perhaps being Franz Conrad von Hötzendorf, Austro-Hungarian Chief of Staff). The German Kaiser was no exception, although he was addicted to impulsive speeches and interviews, liked to strut in military uniform, and even enjoyed fiddling with the detailed design of uniforms (as did his fellow emperors Franz Joseph and Nicholas II).

World War I was a momentous and enduring tragedy. Germany, for one, had everything to gain from continuing peace.

As those examples suggest, MacMillan goes into revealing detail not only about demographic, economic, political, diplomatic, and military situations and events but also about people — royalty, politicians, foreign ministers, diplomats, generals and admirals, journalists, and influential or well connected socialites — together with their backgrounds, illnesses, deaths, and strengths or quirks of personality.

Much of this is relevant to the role of sheer and even trivial accident in momentous history. MacMillan herself notes several examples. The Russian monk Rasputin, whatever his faults, strongly advocated peace and had great influence with the Imperial family; but he had been stabbed by a madwoman on the very day of the Austrian Archduke’s assassination and was recovering slowly, far from St. Petersburg. The Archduke himself had long realized that Austria-Hungary was too weak to risk an aggressive foreign policy. Alfred von Kiderlen-Wächter, German Foreign Minister and in MacMillan’s opinion a force for peace, had died in December 1912. Joseph Caillaux, France’s peace-minded Prime Minister, had had to resign in January 1912, partly in connection with his second wife’s shooting of an editor who had threatened to publish some indiscreet love letters that Caillaux had sent to her while she was still married to someone else. Although MacMillan does not explicitly raise the question, I was set to wondering how events would have evolved if Otto von Bismarck, a realist who was satisfied with Germany’s international position achieved by 1871, had been alive and in office in 1914. Or what if Gavrilo Princip’s bullet had missed the Archduke?

MacMillan ends her book, apart from a 13-page epilogue, with the outbreak of war in July–August 1914. That is fine with a reader more interested in the consequences of particular wars, and in how the wars might have been avoided (as many potential wars no doubt were barely avoided), than in the details of the actual fighting. World War I was a momentous and enduring tragedy. Germany, for one, had everything to gain from continuing peace, including its growing leadership in science and industry. MacMillan writes a gripping story. She conveys a feel of the suspense that must have prevailed during the final crisis. My opinion of her book is overwhelmingly favorable.

Or it would be except for one minor but pervasive and annoying defect. The book is erratically punctuated, mainly but not everywhere underpunctuated. Even independent clauses, often even ones with their own internal punctuation, go unseparated by a comma or semicolon. Restrictive and nonrestrictive phrases and clauses are not distinguished, as clarity requires, by absence or presence of punctuation. Such erratic and erroneous punctuation delays understanding, if usually only for a second. Even so, it distracted me from the book’s fascinating story.

Above all, it distracted me with sustained wonder about how so untypically mispunctuated a book could emerge from a major publishing house. Could the copyeditor have given up in the face of a daunting and tedious task? Could an incompetent editor have imposed the damage, which the author then passively left standing? Could the author have committed the errors herself and then, perhaps out of bad experience with previous copyeditors, have insisted on none of their tampering this time? None of these hypotheses seems plausible, but I can’t think of a better one. The author’s including her copyeditor in her long list of Acknowledgments adds to the mystery.

I’d be grateful if someone could relieve my curiosity with the true story.


Editor's Note: Review of "The War that Ended Peace," by Margaret MacMillan. Random House, 2013, 784 pages.





Memories of War


Last month I visited the Kamikaze Peace Museum in Chiran, Japan, a small town characterized by cherry-lined streets and what remains of a centuries-old Samurai village. The museum is a moving tribute to the 1,000 or so young men who were ordered to give their lives for god and country (the emperor was considered divine) by flying their planes directly into American targets in the Pacific during the final months of World War II. Chiran was the departure point for most of those flights.

The museum contains photographs of all the men, along with the letters many of them wrote to their families on the eve of their death. These pilots were little more than boys, most of them aged 17–28, some of them photographed playing with puppies as they posed, smiling, in front of their planes. In their letters they urged their mothers to be proud, their sisters to be comforted, their girlfriends to move on without them, and their children to be brave. One man wrote, “I am sorry that Papa will not be able to play horsey with you any more.” Another’s girlfriend leapt from a bridge to her death after she read his letter, and yet another’s wife drowned herself and her children before his flight so he could die without regret. Several of these young pilots were Koreans conscripted into the service against their will. None felt he had a choice; living with the loss of honor would be much more painful than any fear of death. I felt nothing but sadness for these young boys.

Two weeks later I was in Oahu, where over 200 Japanese planes attacked Pearl Harbor in the early morning of December 7, 1941, killing 2,400 Americans, wounding another thousand, and crippling the American fleet. The attack brought America into war in the Pacific. One cannot visit the Pearl Harbor Memorial without feeling profound sadness for the loss of life that day and in the four years that were to come. Yet, having just visited the Kamikaze Peace Museum, I could not hate the men who flew the bombers into Pearl Harbor. The words of Edwin Starr resonated in my mind: “War: What Is It Good For?”

Perhaps it is good for peace. But at what price? I thought of this as I watched The Railway Man, based on the memoirs of a British soldier, Eric Lomax (Colin Firth and Jeremy Irvine), who was captured by the Japanese during World War II, forced to help build the railway through Thailand that was immortalized by the Oscar-winning film The Bridge on the River Kwai (1957), and tortured by his captors when he built a radio receiver inside their prison. The title of the film has dual meanings: not only does Lomax help build the railroad through Thailand, but from his youth he has had an obsession with trains and has always memorized details about train schedules, train depots, and the towns that surround train stations. In context, the title also suggests a metaphor for the bridges that are eventually built, through arduous effort, between Lomax and others, including his wife Patti.

None felt he had a choice; living with the loss of honor would be much more painful than any fear of death.

As the film opens, Lomax (Firth) is a middle-aged man who meets a pretty nurse, Patti (Nicole Kidman), on a train. He immediately falls in love with her. (The film implies that this is a first marriage for the shy and socially inept Lomax, but the real Eric Lomax was already married at the time he met Patti. He married Agnes just three weeks after returning from the war, and then divorced her just a few months after meeting Patti on the train. This, and the rest of the story, suggests to me that he returned from the war safe, but not sound.) Eric notes morosely, “Wherever there are men, there’s been war,” and Patti replies with a gentle smile, “And wherever there’s been a war, there’s been a nurse like me to put them back together.”

This introduces the key to their relationship. The war officially ended 35 years earlier, but it still rages in Lomax’s mind. He will need the kind and patient wisdom of a nurse to help put him back together again. His struggle with post-traumatic stress disorder is skillfully portrayed when ordinary events trigger painful memories that transport him immediately to his jungle experiences as a POW. For example, the sound of the shower triggers terrifying memories of the water torture he endured at the hands of his brutal captors. The unexpected intrusion of these scenes demonstrates the unending aftermath of war and the difficulty of controlling its horrifying memories.

Wise casting adds to the pathos of this fine film. Much of what I know about World War II has been shaped by the films I’ve seen, and most of those were populated by actors well into their 30s and 40s. But in this film Young Eric (Jeremy Irvine) and his comrades are played by slender boys in their early 20s who can’t even grow a stubble of beard after four days aboard a prison train. They are closer to the tender ages of the soldiers they are portraying, and this increases the pathos of the story and our admiration for the strength and resolve of these boys who are thrust into manhood, much like the kamikaze pilots, before they even know what war is.

The Railway Man is a character-driven film that demonstrates the choices we have, even when it seems we have no choices at all. Jesus demonstrated the power of choice when he said, “If a man requires of you his coat, give him your cloak also” and, “If a man smites you, turn the other cheek.” He wasn’t telling his followers to give up and give in, but to take charge and move on, by invoking the right to choose one’s attitude when it seems that the freedom to choose one’s actions is gone. This film demonstrates that same transformative power of choice.


Editor's Note: Review of "The Railway Man," directed by Jonathan Teplitzky. Weinstein Company, 2014, 116 minutes.





The Apple of Knowledge and the Golden Rule


Russell Hasan is an author who has contributed a good deal to Liberty. Now he makes a contribution to liberty itself, in the form of two extensive monographs: The Apple of Knowledge: Introducing the Philosophical Scientific Method and Pure Empirical Essential Reasoning, and Golden Rule Libertarianism: A Defense of Freedom in Social, Economic, and Legal Policy. Both works are available online, at the addresses that follow at the end of this review. And both are very interesting.

I’ll start with The Apple of Knowledge, which itself starts with an account of the author’s quest for wisdom. He did not find it in the lessons of professional (i.e., academic) philosophers, who venerated the counterintuitive claims of earlier professional philosophers, often echoing their conviction that objective truth could not be found. The author turned to the objectivist philosophy of Ayn Rand, but found that “it was truly a political philosophy, and not a rigorously reasoned system of metaphysics and epistemology. Rand’s ideas seemed clever and useful, but they contained contradictions and holes and gaps.”

So, as an intellectual entrepreneur, Hasan decided to see whether he could solve crucial philosophical problems himself. That’s the spirit of liberty.

He states his agenda in this way:

The six problems that this book will solve are: 1. Induction, 2. Consciousness, 3. Knowledge, 4. The Scientific Method, 5. Objectivity, and 6. Things in Themselves.

Hasan believes that these problems can be solved by his versions of “(1) the philosophical scientific method, and (2) pure empirical essential reasoning.”

What does this mean in practice? It means a rejection of dualism and radical skepticism, a reasoned acceptance of the world as empirically discovered by the healthy consciousness. An example:

When you look at this book and say “I am looking at this book, I am reading this book, I am aware of the experience of this book,” and you wonder about what it means to be conscious and to have an experience and to see this book, the only things in the picture are two physical objects, (1) this book, which physically exists and is the thing that you are experiencing and are conscious of, and (2) your brain, which is the consciousness that experiences and is aware of the book by means of the perceptions and concepts in your brain. Similarly, when you see a red apple, the red apple itself is the red that you see, and your brain is the subject which perceives that object and is aware of that object. Nowhere in this picture is there a need to deny that consciousness exists. We need not deny that you really see a red color. We need not deny that you are aware of an apple. And there is also no need to believe in ghosts or non-physical souls as an explanation for your being aware of an apple and seeing its red color.

As this example suggests, Hasan has an admirably clear style throughout. His clarity may also suggest, erroneously, that the problems he addresses are easy to solve, or that he deems them easy to solve. They aren’t, and he doesn’t. For every statement he makes there are time-honored quibbles, evasions, and yes, real challenges. The enjoyment of reading through this fairly long book comes from following Hasan’s own path among the challenges, assessing his arguments, and finding out how many of those arguments one wants to buy.

To this process, a statement of my own ideas can add no particular enjoyment. For what it’s worth — and it isn’t directly relevant to Hasan’s essential concerns — his grasp of Christian and biblical theology could be much stronger. Here’s where the dualism that he rejects asserts itself despite his efforts; he tends to see Christian ideas (as if there were just one set of them) as dualistically opposite to his own: Christians are against the world, the flesh, and the devil, while he is heartily in favor of the first two, at least. But it’s not as simple as that. “World” and “flesh” can mean a lot of things, as a concordance search through St. Paul’s epistles will illustrate. You don’t need to believe in God to recognize the complexity of Christian thought. (And, to digress a bit further, “666” didn’t come “from ancient confusion between the Latin word ‘sextus’ which means six and the Latin word ‘sexus’ which means sex.” No, it originated in the biblical book of Revelation [13:18], and it’s a code term, probably for “Nero.”)

It makes no difference whether you’re smarter or richer than I am, because it requires the same effort — that is, none — for both of us to leave each other alone.

About the philosophical problems that Hasan treats I can say that he usually appears to make good sense — very good sense. His education in the objectivist tradition is evident; his respect for the real world — which is, after all, the world that all philosophy is trying to explain — is still more evident. Both are valuable, and essential to his project. Indeed, Apple of Knowledge can be viewed as a particularly interesting and valuable addition to the objectivist tradition of philosophy that begins with Ayn Rand.

Golden Rule Libertarianism is an exposition and defense of a variety of radical libertarian ideas — about victimless crimes, war and peace, government intervention in the economy, and so on. Few libertarians will be surprised by the results of Hasan’s inquiries in these areas — but what does “Golden Rule Libertarianism” mean?

This represents what I take to be a new approach, though one that is nascent in the libertarian concept of the great “negative right,” the right to be left alone. From this point of view, it makes no difference whether you’re smarter or richer than I am, because it requires the same effort — that is, none — for both of us to leave each other alone. The Golden Rule, most famously enunciated by Jesus but, as Hasan points out, hardly foreign to other religious and ethical teachers, yields a more “positive” approach. “Do unto others what you would have them do unto you.” Yet nobody wants his neighbor to do certain things — to prohibit him from speaking or publishing his views or sleeping with whomever he wants, even on the pretense of helping him. In this sense, the Golden Rule turns out to be just as “negative” as libertarians could wish. As Hasan says in one exemplification of his theory:

if you let me be free to make economic decisions, including what wage to pay and at what price to buy services from other people, then I will give you the same freedom to make your own choices instead of me making your choices for you.

There is a pragmatic dimension to this. In case you are wondering whether letting everyone be free to make his or her own decisions would leave the poor in the lurch, or, to vary the metaphor, in the clutches of an exploitative capitalism that the poor are not capable of turning to their own advantage, Hasan adds:

The best thing you can do for me is to get the government out of my way and let me be free, because capitalism helps the poor more than socialism.

Libertarians understand this, and Hasan provides plenty of reasons for everyone else to understand it too. His book will be valuable to nonlibertarians, because there is something in it for every interest or problem they may have. As he says, in another exemplary passage:

The liberal concern for civil liberties, e.g. my freedom to write atheist books, and the conservative concern for freedom from regulation, e.g. my freedom to buy and sell what I want on my terms, is really two sides of the same libertarian coin, because if the government claims the right to be the boss of your beliefs then it will soon usurp the power to be the boss of your place in the economy and take total control over you, and if the government is the boss of the economy then it will inevitably take over the realm of ideas in order to suppress dissent and stifle criticism of the economic planners.

I believe that Hasan is right to pay particular attention to what he calls “the coercion argument,” which is one of the strongest ripostes to libertarian thought. It is an attempt to argue against libertarian ideas on libertarian grounds. The notion is that if I leave you alone I permit someone else to coerce you. As Hasan says,

Some version of the coercion argument underscores a great deal of anti-libertarian sentiment: poor people will be coerced into selling their organs and body parts, which justifies denying them the right to do so. Poor people are coerced into accepting dangerous, low-paying jobs such as coal mining, or are coerced into working long hours for wages that are lower than what they want. They are coerced into buying cheap high-fat fast food, or are coerced into buying cheap meat, packed at rat-infested plants, and so on. The coercion argument is a thorn in the side of laissez-faire politics, because socialists argue that poor people aren’t really free in a capitalist system where they face economic coercion.

Hasan’s insight into the legal history and ramifications of the coercion argument is enlightening:

An example of the grave seriousness of the coercion myth is legal scholar Robert Lee Hale’s famous law review article “Coercion and Distribution in a Supposedly Non-Coercive State” (1923). Hale brainwashed generations of law students with his argument that capitalist employers exert coercion upon workers, and socialism would not produce more coercion or less freedom than capitalism.

This is a powerful myth, but Hasan has little trouble refuting it. Others are yet more formidable; I would be surprised, however, if even a hostile reader could emerge from a serious consideration of Hasan’s arguments without admitting serious damage to his or her own assumptions.

For libertarian readers, the fun is in seeing what Hasan will do with various popular topics of libertarian discourse — natural rights versus utilitarianism, racial discrimination, gay marriage, an interventionist versus a non-interventionist foreign policy, privatization of education and banking, disparity of wealth, etc. Even well-informed libertarians will be surprised by, and probably grateful for, many of the arguments that Hasan adduces.

Hasan is one of the few questioning occupational licensing, which exacts immense costs from society, and especially from the poor, who must pay dearly for even the simplest services.

I was such a reader — and for me, the book gained in stature because I did not always agree with it. For me, libertarianism is more a matter of experience and less a matter of moral logic than it is for Hasan; but even within the large area of our philosophical, or at least temperamental, disagreement, I found myself admiring his intrepid and intricate, yet nevertheless clear and cogent progression of thought. I suspect that anyone who shares my feeling for the great chess match of political economy will share my feeling about this book.

Not all of Hasan’s many topics can possibly be of intense interest to everyone, but that’s just another way of saying that the book is rich in topics. My heart rejoiced to see a chapter on the evils of occupational licensing — a practice that virtually no one questions but that exacts immense costs from society, and especially from the poor, who must pay dearly for even the simplest services of licensed individuals. And I was very pleased to see Hasan take on many of the most sacred cows of my fellow academics.

One is game theory. Readers who are familiar with game theory and with the part of it that involves the so-called prisoner’s dilemma know that for more than two decades these things have been the favorite pastime, or waste of time, among thousands of social scientists. (If you ask, How can there be thousands of social scientists? or, Why don’t they find something better to do?, see above, under “occupational licensing.”) The tendency of game theory is to deal with people as objects of power, not subjects of their own agency. Its effect has often been to emphasize the statist implications of human action. Hasan cuts through the haze:

The specific refutation of Game Theory and the “prisoner’s dilemma” is that the solution is not for the group to impose a group-beneficial choice onto each individual, it is for each individual to freely choose the right choice that benefits the group. If the benefits of the supposedly right, good-for-the-group decision are really so great, then each individual can be persuaded to freely choose the right, optimal, efficient choice.

My advice is to get the book — which, like the other book, is available at a scandalously low price — read the introductory discussion, and then proceed to whatever topics interest you most. You may not agree with the arguments you find, but you will certainly be stimulated by the reading.


Editor's Note: Review of "The Apple of Knowledge: Introducing the Philosophical Scientific Method and Pure Empirical Essential Reasoning," and "Golden Rule Libertarianism: A Defense of Freedom in Social, Economic, and Legal Policy," by Russell Hasan. 2014.





It’s Smart, It’s Exciting, It’s Fun


The specific details of a superhero movie plot seldom really matter; all we usually need to know is that an evil superpower, sporting a foreign accent, is out to destroy the world as we know it, and it is up to the superhero not only to protect the community from destruction but also to preserve our way of life. Dozens of superheroes have been created in comic-book land, and all of them have been sharing time on the silver screen for the past decade or more, with half a dozen of their adventures released this year alone. So far audiences are flocking to theaters with the same enthusiasm that kept our grandfathers heading to the local cinema every Saturday afternoon to see the latest installment of Buck Rogers.

These films tend to reflect the fears and values of whatever may be the current culture, which is one of the reasons for their lasting popularity. We see our worst fears in the threats posed by the enemies, and our hopes and fears in the characters of the heroes. But lately those heroes have been somewhat reluctant and unsure of their roles as heroes, and the people they have sworn to protect have been less trusting and appreciative — they complain about collateral damage and even question the heroes’ loyalty. In an era of relativism and situational ethics, a full-on hero with overwhelming power seems hard to support.

The Avengers share conversations praising freedom and choice, and they reject blind obedience in favor of making their own decisions.

This month it’s Captain America’s turn to save the day. Created by Jack Kirby and Joe Simon in 1941, Captain America (alter ego: Steve Rogers) is a WWII soldier who is transformed from a 5’4” wimp to a 6’2” muscle man through a scientific experiment intended to create an army of super warriors. He ends up being cryogenically frozen and is thawed out in modern times. Part of his appeal is his guileless naiveté, especially as he reacts to modern technology and mores. He uses his virtually indestructible shield to fight for truth, justice, and the American way (okay, that’s the other superhero, but their morals are virtually the same). I like Captain America’s shield — it signifies that his stance is defensive, not aggressive.

As The Winter Soldier opens, nothing is going right for the Avenger team led by Nick Fury (Samuel L. Jackson) and Captain America (Chris Evans). Police, government agencies, and even agents of SHIELD (Strategic Homeland Intervention, Enforcement and Logistics Division, the organization that oversees and deploys the superheroes) are attacking them and treating them as national enemies. The Captain and former Russian spy Natasha (Scarlett Johansson), aka the Black Widow, have become Public Enemies number 1 and 2, but they don’t know why. They spend the rest of the movie trying to clear their names and save the world, without any help from the government they have sworn to uphold.

While the specific plot isn’t particularly important in these movies, motivation usually is. Why do the characters do what they do? Meaningful dialogue inserted between the action scenes reveals the values of both good guys and bad guys, and away we go, rooting for the guy who is going to save us once again.

I’m happy to report that Captain America: The Winter Soldier lives up to its potential. As a libertarian, I can agree with most of the values it projects. First, politicians, government agencies, and the military-industrial complex are the untrustworthy bad guys in this film, and for once there isn’t an evil businessperson or industrialist in sight. Additionally, the Avengers share conversations praising freedom and choice, and they reject blind obedience in favor of making their own decisions. For example, The Falcon (Anthony Mackie), aka Sam Wilson, tells Steve about his buddy being shot down in the war, and then says, “I had a real hard time finding a reason for being over there after that.” Captain America admits, “I want to do what’s right, but I’m not sure what that is anymore.” Like Montag in Bradbury’s Fahrenheit 451, he is ready to think for himself and determine his own morality. (Compare that philosophy to Peter Parker [Spider-Man] being told by his wise Uncle Ben that responsibility is more important than individual choice in Spider-Man 2, followed by Uncle Ben’s death when Peter chooses “selfishness” over responsibility.)

Meanwhile, the Secretary of State (Robert Redford — yes, Robert Redford! He said his grandchildren like the franchise, so he wanted to do the film for them) says cynically of a particular problem, “It’s nothing some earmarks can’t fix.”

The mastermind behind the assault on freedom (I won’t tell you who it is, except that it’s someone involved in government) justifies his destructive plan by saying, “To build a better world sometimes means tearing down the old one” and opining that “humanity cannot be trusted with its own freedom. If you take it from them, they will resist, so they have to be given a reason to give it up willingly.” Another one adds, “Humanity is finally ready to sacrifice its freedom for security,” echoing Ben Franklin’s warning. These power-hungry leaders boast of having manufactured crises to create conditions in which people willingly give up freedom. This isn’t new, of course. Such tactics are as old as Machiavelli. Yet nothing could feel more current. I’m happy to see young audiences eating this up.

Captain America first appeared on film in 1944, at the height of WWII. He has never been as popular as Superman, Batman, or Spider-Man. A made-for-TV movie aired in 1979, and a dismal version (with a 3.2 rating) was made in 1990. However, the latest incarnation, with Chris Evans as the wimp-turned-military powerhouse, has been highly successful, with three films released in the past four years: two self-titled films (Captain America: The First Avenger in 2011, and this one) as well as one ensemble outing (The Avengers, 2012).

These power-hungry leaders boast of having manufactured crises to create conditions in which people willingly give up freedom. This isn’t new, of course.

One of the things I like about the Avengers is that they aren’t born with innate super powers à la Superman or X-Men; for the most part their powers come from innovation, technology, and physical training. They’re gritty and real, and they bruise and bleed. Directors Anthony and Joe Russo were determined to make this movie as real as possible too, so they returned to live action stunts whenever they could instead of relying on CGI and green screen projection. Yes, they use stunt doubles when necessary, but, as Anthony Mackie (the Falcon) reported in praise of the Russos, “if they could build it [a set piece], they built it. If we [the actors] could do it [a difficult maneuver], we did it. . . . That’s why the movie looks so great.” Many of the action scenes are beautifully choreographed and often look more like dancing than fighting, especially when Captain America’s shield is ricocheting between him and a gigantic fighter plane.

Of course, the film has its share of corniness too. When you’re a hero named Captain America, you’re expected to be a rah-rah, apple-pie American, and Captain America is. He even drives a Chevy, the all-American car. So does Nick Fury (Samuel L. Jackson), who brags about his SUV with a straight face as though it’s a high-end luxury vehicle. In fact, all the SHIELD operatives drive Chevys, as do many of the ordinary commuters on the street. That’s because another concept that’s as American as apple pie is advertising. Product placement permeates the film, but most of the time it’s subtly and artfully done. Captain America wears an Under Armour t-shirt (which is pretty ironic when you think about it — under armor beneath a super-hero uniform), and the Falcon, whose superpower is a set of mechanized wings that let him fly, sports a small and subtle Nike swoosh on his after-hours attire. (Nike — the winged goddess, get it?)

Captain America is a hit, and for all the right reasons. The dialogue is intelligent, the humor is ironic, the action sequences are exciting, and the heroes are fighting for individual freedom. It even contains a theme of redemption. And for once, the bad guys aren’t businessmen. Ya gotta love it.

Editor's Note: Review of "Captain America: The Winter Soldier," directed by Anthony Russo and Joe Russo. Sony Pictures, 2014, 136 minutes.






The Road to Potential


How I hate the word “potential”! While acknowledging innate abilities with faint praise, it reeks of withering disappointment, talents wasted, opportunities lost.

Transcendence is a film with tremendous potential.

It begins with a talented cast of discriminating actors that includes Johnny Depp, Rebecca Hall, Cillian Murphy, Morgan Freeman, and Paul Bettany. (OK, scratch Morgan Freeman from the list of “discriminating actors.” Freeman is a fine actor, but he has become as ubiquitous as Michael Caine.) Add a postapocalyptic setting, an army of zombified robotic post-humans, a love story that transcends death, and the time-honored “collision between mankind and technology.” Mix in some dialogue about challenging authority and questioning the meaning of life, and create a metaphor suggesting the thirst for power in the form of the World Wide Web. It seems like the perfect formula for an exciting sci-fi thriller.

Yet Transcendence barely gets off the ground. The story, about terrorists who attack the world’s computer hubs simultaneously in order to stop the Internet, should be powerfully engaging, but it lacks any building of tension or suspense. Instead it is a dull, slow-moving behemoth emitting loud, unexpected bursts of explosive violence.

Will Caster (Johnny Depp) is a computer programmer working on ways to heal the planet through nanotechnology. When terrorists attack his computer lab and poison him with deadly radiation, his wife Evelyn (Rebecca Hall) and his research partner Max (Paul Bettany) convince him to let them upload his consciousness onto a hard drive that will allow his sentient self to continue to live within the machine. It’s an interesting concept that ought to cause one to reflect on what makes us human: is it the physical body of flesh and bones that can move and act? Or is it the non-physical collection of memories, thoughts, and personality? Many people have had their heads cryogenically frozen after death, in hopes that someday their minds can be restored within a synthetic body and they can regain life. But that isn’t what this movie is about.

“Transcendence” is a dull, slow-moving behemoth emitting loud, unexpected bursts of explosive violence.

The plan works, and Will speaks from the computer screen after his death. However, Max immediately and inexplicably regrets having uploaded Will to the machine, so he joins forces with the terrorists (who also join forces with the government — it’s hard to keep all the factions and their motivations straight) to stop Will from doing what he was uploaded to do. Meanwhile, Evelyn joins forces with Will and together they build a solar power grid in an isolated Nevada desert to give Will enough juice to mingle with every scintilla of the Internet.

Yes, this makes Will omniscient, omnipresent, and all-powerful. And that’s a definition of God, right? Will is treated like God’s evil twin, set on sucking up all the power in the universe (there’s that metaphor of the power grid). But he doesn’t do anything evil. He doesn’t steal power from others; he creates his own from the sun — and he pays the Nevada residents a good wage to work for him. He doesn’t kill anyone, destroy anything, or even growl menacingly. In fact, he uses his power to refine his work in nanotechnology, and soon he is able to heal the blind, make the lame walk, and restore life to those who have been killed. (In case you hadn’t noticed, this is godlike too.) As they are healed, his new followers become like him — imbued with the Internet and able to communicate with Will through his spirit — that is, the Cloud.

This storyline has the potential for being eerie and scary, à la Invasion of the Body Snatchers; but Will’s followers don’t do anything wrong either, and they aren’t at all zombie-like. They are just humans who once were disabled and now can see, walk, lift, run, hear, and produce. How is that different from living with a pacemaker or a titanium hip, and carrying around a smart phone that keeps one constantly connected to the Internet? Nevertheless, the government is determined to take them out, simply because they are now created in Will’s image and have the ability to communicate worldwide.

All of this has the potential for philosophical discussion, but I had to use all my creativity as a literature professor to reach the potential message I’ve described here. The message is beyond subtle — so subtle, in fact, that I think it went over the director’s own head. I’m not sure what he intended to suggest, except possibly that the collision between mankind and technology is usually good for a studio green light. I doubt that he was even aware of the potential metaphor or deeper meaning of the film he created.

Ah. There’s that word again. “Potential.” A film that had transcendent potential is instead fraught with withering disappointment, wasted talent, and lost opportunities. Insomniacs will love it.


Editor's Note: Review of "Transcendence," directed by Wally Pfister. Alcon Entertainment, 2014, 119 minutes.





Noah Sails Where No Rock Ever Sailed Before


Myth has been defined as “a story, often employing superhuman figures and supernatural events, that attempts to explain something about the world and that expresses some important value or belief of a culture or society” (Howard Canaan, Tales of Magic from around the World). Myths have simple plots with few specific details; their meaning can evolve over time to represent changing cultural values. This is what director Darren Aronofsky has done with his epic new film, Noah — he has created just such a myth.

Audiences who want to see the biblical story of Noah will be disappointed. Judeo-Christian believers, indeed, will be offended and outraged by what are, from the biblical point of view, laughable inaccuracies in this movie. (Believers will know they’re in trouble when they see the film begin with the words, “In the beginning there was nothing.”) Darren Aronofsky has written a new myth, for modern times. It is no longer the story of a prophet who was chosen by God to build an ark and repopulate the earth after everyone else drowns. The conflict is no longer between God and Satan but between humans and Rock People (representing Earth — more on them below). Rock People communicate with a Creator, but humans do not communicate with God. The purpose of religion is not to forge a relationship between God and people but to protect the earth and the animals. “If anything happens to one of these creatures, it will be lost forever,” Noah warns his sons, but he has no similar concern for humanity. As Noah walks among the wicked community that is about to be destroyed by the flood, he observes many gruesome acts, but the pinnacle of their depravity is presented as they cleave animal carcasses for cooking. Methuselah explains, “Men will be punished for what they have done to this world,” not for what they have done to one another. Noah is a modern myth that represents the hegemonic values of today.

Aronofsky fractures the Bible, combining snippets from several biblical stories and pretending that they all happened to Noah.

Maybe it was because I had just seen the new Mr. Peabody and Sherman movie, but Rocky and Bullwinkle’s Fractured Fairy Tales kept coming to mind while I was watching Noah. Aronofsky fractures the Bible, combining snippets from several biblical stories and pretending that they all happened to Noah, including Eve’s attraction to the serpent, Cain’s murder of his brother Abel, Elisha’s army of angels, and Abraham’s near-sacrifice of his son Isaac. It’s the most bizarre concoction, yet I’m sure that many gullible filmgoers went home saying, “Wow! I didn’t know Noah almost killed his granddaughters!” You see, in order to understand Rocky’s Fractured Fairy Tales, you had to know that Sleeping Beauty did not eat a poisoned apple.

And here’s another thing you probably wouldn’t know was in the Bible if you didn’t see this movie: God did not create humans — some crazy giant Rock People did. These Rock People look like piles of boulders until they pull themselves together, Transformer-like, and start stomping around the earth. They have multiple arms and glowing eyes and thundering voices à la Optimus Prime and are a whole lot more exciting than the voice of God. They create an eerie static hum whenever they’re close by, and they strike fear into the hearts of men. Except the hearts of the ones they like.

According to the Book of Aronofsky, these Rock People came from outer space as meteors of light and got stuck in the muck of primordial creation. They made humans out of the dust of the earth, and as the film opens they’re really mad at themselves for doing it because humans really suck. But you already know that, if you’ve been reading the newspapers lately.

See, it turns out that animals are “the innocents” and “man broke the world.” Eve’s real treachery wasn’t curiosity or disobedience or a desire for wisdom; it was that she brought children to the earth and allowed her descendants to wreak havoc there. Now Noah’s wife wants to do the same! But Father Noah Knows Best. He is determined to put all the animal pairs onto the ark and save only his three sons, his post-menopausal wife (Jennifer Connelly), and one barren girl (Emma Watson) so that humans cannot repopulate the earth and ruin it again. His job is simply to keep the animals safe until the flood subsides, and then quietly let humans become extinct.

It probably doesn’t surprise you that in this movie, Noah never communicates with God, or vice versa. So where does he get the idea of building the ark? From the dregs of a psychotropic tea given to him by Methuselah (Anthony Hopkins). Methuselah, by the way, has supernatural powers, but when he uses them to cause a wonderful and necessary miracle, Noah gets pretty ticked off and starts grabbing for daggers. Actually, there is very little to set Noah apart from the wickedness around him. He wields an ax and a bludgeon with the best of them, and he can be pretty heartless.

Despite the assertion that “in the beginning there was nothing,” there are deists in the movie. They just aren’t the prophets. The Rock People talk directly to a Creator, and so does Tubal-cain (Ray Winstone), the leader of the wicked nomads and a descendant of Cain, who killed his brother Abel and was forced to bear a mark on his forehead as the first murderer. Tubal-cain doesn’t actually appear in the biblical story of Noah, but Aronofsky throws him in, probably because he became a blacksmith (Genesis 4:22) and is credited by many scriptorians with inventing weapons of war. While Noah is drinking the psychedelic Kool-Aid, Tubal-cain is calling out to God, “Speak to me! I am a man, made in your image. I am like you — I give life and I take it away. Why will you not converse with me?” Meanwhile, the priesthood that has been passed from Adam to Methuselah to Noah is embodied in the skin of the serpent that tempted Eve in the Garden. This is no mere fracture of the tale; Aronofsky delights in making the godly evil and evil godly.

The Rock People have multiple arms and glowing eyes and thundering voices à la Optimus Prime and are a whole lot more exciting than the voice of God.

Sure, I get it: Hollywood doesn’t like references to God (approving ones, anyway). And yes, I know the difference between Sunday School and the Cineplex; I wasn’t expecting a sermon. But why make a movie about Noah if you are going to leave out the driving force behind the story? It’s like making Clash of the Titans without Zeus or Poseidon. Why not just make a movie about Rock Giants that duke it out with brutal nomads while one family escapes in a boat with a bunch of animals? Let those of us who know how to read leave the theater saying, “Wow, did you catch those references to Noah?” instead of “Man, did he ever get that wrong!”

What drew Aronofsky to the story of Noah in the first place? I suspect it was the same characteristics that have kept myths alive for centuries. Archetypal characters, iconic conflicts, and simple truths about human nature resonate with us. One does not have to be religious to experience the resonance of biblical stories, nor should religious people be offended that I categorize biblical stories as myth. Contrary to popular opinion, “myth” does not mean “a lie,” or “a story that is not true.” Myths express “not historical or factual truth, but inner or spiritual truth. They are the shared stories that express insights about human nature, human society, or the natural world” (Canaan). Myths are so profound that they transcend the need to be factual. In fact, they can even transcend Hollywood’s need to be cynical.

Despite my criticism of the first two hours of this film, I found the conclusion profoundly satisfying. After all the fracturing and twisting and pushing away from humanity, Aronofsky ends with a cathartic moment of transformation and hope. It probably isn’t worth the two-hour journey to get there, and it’s totally out of whack with the source material. But Aronofsky creates a lovely scene of redemption at last.


Editor's Note: Review of "Noah," directed by Darren Aronofsky. Paramount Pictures, 2014, 138 minutes.





TSA Training Film

 | 

Liberty’s readers know that I’m a fan of Liam Neeson’s middle-aged reincarnation as an action hero. He romps through thrillers with such single-word titles as Taken and Unknown, beating up bad guys half his age as he struggles to rescue his family (a common theme in his action films). This is cinematic escapism at its best.

At first this classically trained Shakespearean actor, who earned an Oscar nomination in 1993 for his powerfully moving performance as Oskar Schindler in Schindler’s List, was embarrassed by the success of Taken. He admits that he agreed to take on the role simply for the opportunity to spend three months in Paris (and, I suppose, for the $5,000,000 fee he was reportedly paid), but he expected the film to go straight to video, he says, where no one would see it. Nevertheless, he has embraced his new role as an action hero, and enjoys resurrecting the skills he learned as a professional boxer in Dublin, many years ago, for these beat-’em-up films.

In Non-Stop Neeson again gets to growl menacing lines and land knock-out punches as he chases the bad guys, this time while flying on a jet between New York and London. But it was a lot harder for me to enjoy the ride.

Neeson plays Bill Marks, an air marshal who springs into action when he receives a text message from someone on the plane threatening to kill a passenger every 20 minutes until $150 million is transferred to a numbered bank account. Who is the culprit? And how can he be stopped?

Marks doesn’t know who the bad guy is, so he treats every single passenger as the hijacker and murderer. This is the TSA run amok with self-righteous determination.

Marks storms through the plane, grabbing anyone who looks suspicious and slinging the suspects around the plane. He stops at nothing (get it? non-stop?) in his determination to stop the killer. He snatches cell phones from breast pockets, rummages through carry-on bags, breaks one passenger’s nose and another passenger’s arm and another passenger’s neck. He shoots guns and thrusts knives and shoves food carts.

This is classic Neeson action-hero schtick, and I usually love it. But I have a problem with it in Non-Stop: these passengers aren’t bad guys. Well, one of them is. But Marks doesn’t know who, so he treats every single passenger as the hijacker and murderer. This is the TSA run amok with self-righteous determination. It doesn’t matter who might be hurt or even killed, so long as the air marshal gets his man. I actually cheered when the passengers finally mustered enough gumption to smack Marks in the head with a fire extinguisher, even though he was just “doing his duty” and “protecting” them.

Also uncomfortably along for the ride are Julianne Moore as the air marshal’s seatmate and Lupita Nyong’o as a flight attendant. Both are downright silly in their hand-wringing. I’m sure that if director Jaume Collet-Serra had known Nyong’o would be awarded an Oscar for her role in 12 Years a Slave and would be conducting a media blitz the very week his film was released, he would have given her a few more lines. Instead she is virtually hidden in the background. I rather imagine she is relieved that this film didn’t open in January, while the Academy members were still voting . . .

Even the denouement of Non-Stop is disappointing. I won’t tell you who did it, but it really doesn’t matter. The reaction is more of an “oh . . .” than an “Aha!” That’s because the story is set up like an Agatha Christie mystery in which every last suspect could plausibly be guilty. Whoops — I guess that’s exactly what the TSA wants us to think, isn’t it?

Non-Stop is a disappointment in every way. If this had been Neeson’s first foray into the action thriller genre, it would indeed have ended up going directly to video. I don’t even recommend it on Netflix.


Editor's Note: Review of "Non-Stop," directed by Jaume Collet-Serra. StudioCanal, 2014, 106 minutes.





Two Films: One Right, One Not So Right

 | 

The weakest of this season’s Oscar finalists is Philomena. This film about an Irish woman’s search for the baby she gave up for adoption, more than half a century earlier, has received four Oscar nominations, including Best Picture, Best Actress, and Best Writing (Adapted Screenplay). It is a good film, with moments that are lighthearted and funny and other moments that are deeply emotional and full of anguish. The performances by Judi Dench as Philomena; Steve Coogan as Martin Sixsmith, the down-on-his-luck journalist who helps her; and Sophie Kennedy Clark as the young Philomena are first-rate. But the film is marred by the same characteristic that is probably driving the critics and the Academy to rave about it: it revels in unfair and bitter vitriol against the Catholic Church. Hollywood loves to hate religion.

Philomena is really the story of two souls — the title character and the journalist — who have had their lives pulled asunder by external forces. When the young and unmarried Philomena becomes pregnant, her parents send her to a convent house where unwed mothers are hidden away and cared for until their babies are born and put up for adoption. To earn their keep, the girls do domestic work inside the convent, and they are allowed to see their babies every day until homes are found for them. But the outcome is known from the beginning: the girls have come to the convent to hide their pregnancies, give up their babies, and return to normal life. The nuns are simply doing what they agreed to do.

Philomena’s parents are scarcely mentioned in this film. All the vitriol and venom are reserved for the Catholic Church.

The sad truth, however, is that no one knows until she has experienced it how hard the mothers’ role really is. How can she “return to normal life” once she has had a baby growing inside her? Whether she marries the father, raises the child by herself, gives the child to another family, or terminates the pregnancy, there is no forgetting the child and no going back to what life was like before. Parents of the pregnant girl might mean well in trying to go backward; “six months away and it will be as though it never happened,” they might think. But they don’t know. Certainly the nuns and priests don’t know; they’ve taken a vow never to become parents except indirectly, as Mother Superior or Father to the flock. Only the members of this exclusive club of special mothers can truly know what it’s like, so I won’t pretend to suggest that I know the answers. I only know that it’s hard.

The film turns the nuns and the Church into the villains of the story, and it’s true (or seems to be true) that they were harsh in how they enforced their rules. But it should be remembered that no one in the Church reached out and kidnapped these young unwed mothers; their parents sent them to the convents, and social custom embraced the plan. In a climate in which unwed mothers were treated as outcasts and their children were treated as bastards, these premature grandparents did what they thought was best for their daughters, the babies, and the childless couples who wanted them. And yes, for themselves. But Philomena’s parents are scarcely mentioned in this film. All the vitriol and venom are reserved for the Catholic Church: in the disparaging remarks Sixsmith makes about the Church, and even more in the cruel, heartless way the nuns treat the mothers of the babies and in the convent head nun’s deliberate withholding of information. I’m not Catholic, but I am offended by the anti-Catholic sentiment that permeates the film.

Martin Sixsmith has experienced a frustration of his own: as the film opens, he is a former journalist who has been sacked from his position with the Labour Party over an offense that he did not commit. He is outraged by the unfairness and tries to have his job restored, just as Philomena tries to reclaim her son, but to no avail. After reporting international news for so long, he feels demeaned by accepting this fluffy human-interest story for a magazine. But accept it he does, and the two set off for America to trace the snippets of information available to them about the child’s adoptive parents.

They are an unlikely pair, Martin with his international political interests and Philomena with her game shows and romance novels. She nearly drives him nuts with her never-ending summaries of the latest love story she is reading and her penchant for talking to strangers. These lighthearted scenes provide some of the most enjoyable moments in the movie, and balance the scenes of unbearable anguish portrayed by Young Philomena and the more controlled, but just as real, anguish felt by her older self. This is a lifelong pain that never goes away.

The film is certainly worth seeing, on its artistic and its social merits. But better than Inside Llewyn Davis? Or even Saving Mr. Banks? (Neither of them was nominated for Best Picture.) Not on your life. Philomena was nominated purely for its political correctness in hating on the Catholic Church. And that’s just not a good enough reason in a season of such outstanding films.

No external considerations were necessary to produce admiration for the next film that I want to consider — another nominee for Best Picture: her.

her is a cautionary tale about the love affair with electronic devices and the disconnect it is causing in normal relationships, from simple inattention to internet dating and cybersex. Even the name, “her,” suggests objectification; the title is not She, and it is not even capitalized. “her” is just the objective case of what once was a woman.

In this story of a near-future utopia, the voices that talk to us from our phones and GPS units and have names like “Siri” have developed emotions and personalities that aren’t just almost human; in many ways they’re better than human. But this is not Westworld (1973) run amok, with sentient robots destroying their creators in order to take over the planet. No, “her” is a soft-spoken voice that comes in the night, whispering sweet nothings and taking over the creators’ emotions.

But this isn’t intercourse, and it isn’t real. It’s just mutual masturbation.

Theodore Twombly (Joaquin Phoenix) is an emotionally crippled introvert who writes “heartfelt personal letters” for other people. It’s sort of like being a cross between a Hallmark poet and Cyrano de Bergerac. Theodore is separated from his wife, Catherine (Rooney Mara), whom he has known since childhood, and is very lonely. His days are filled with writing love letters, but he lacks any love in his own life. He turns to what amounts to porn calls in the middle of the night, but that doesn’t satisfy him. He spends his evenings playing holographic video games and becomes so immersed in the adventure that when he’s out on a blind date, he talks about the video character as though he were a friend. And the date gets it. Without thinking it’s weird or nerdy. Just as Ray Bradbury predicted in Fahrenheit 451, the people on the screen have become family.

This scene in which Theodore talks about his video friend reminded me of the time, years ago, when my son completed the final level of the first “Zelda” game. He had been working at it for a few weeks, and I thought he would feel exhilarated. Instead, he was morose and despondent. “You can start the game again,” I told him, thinking that would help him shake the blues. He responded with great sadness, “But she won’t remember me!” That was my first understanding of just how deeply someone can become involved in a cyber relationship, even one that doesn’t have a real person at the other end of the email.

Enter Samantha (Scarlett Johansson), the witty, husky voice inside Theodore’s electronic devices. When Theodore purchases a new operating system to manage his electronic information and Outlook files, he is surprised to find how humanlike the artificial intelligence interface is. Because this software has complete access to all his files, “she” knows him inside out and can evolve into a personality that responds to his emotional as well as organizational needs. And he responds viscerally to this being who knows him so deeply. It is what he has been aching for.

The film’s delicate tone makes it both very special and very disturbing. The sets and costumes contribute a great deal to that tone. The colors are mostly soft oranges and greens, the fabrics natural and touchable. The clothing is only slightly futuristic — the shirts have a different kind of collar, for example, and they are tucked into pants that ride high above the waist, instead of riding low on the hips as they do today. Furniture is sleek and mildly mid-century, with wall hangings and table decorations made of wood or stone. It’s unlike anything I’ve seen before, yet so natural and comfortable that I expect to see it “in reality” next year. The overall effect is rather dreamy and inviting, not unlike Theodore’s relationship with Samantha.

Soon Theodore is spending all of his time talking with Samantha. He takes her on “dates” by putting his phone in his shirt pocket with the camera facing forward, and they have flirtatious conversations together. At a party he leaves the group of human friends to go into an empty side room and chat with Samantha. At night he feels especially close to her. He lies in bed in the dark, watching for his phone to light up with a message from her. There is something so magical and enticing about speaking to her in the dark. He tells others that Samantha is his girlfriend. He becomes goofy with happiness, giddy with the swivet of romance. It leads to a sick isolation from the real people in his life — an isolation many real people create for themselves as they engage in cyber relationships.

Of course, the nighttime conversations eventually lead to cybersex. Despite the giddiness of the growing “relationship,” he still feels morose and disconnected.

He tells her, “Sometimes I think I’ve already felt everything I’m ever going to feel, and from here on out I’m never going to feel anything new.” After a pause he adds, “But you feel real to me, Samantha.”

And then it starts. “I wish I could touch you,” he says. “How would you touch me?” she asks, genuinely curious, since she does not have a body or any experience with touch. “First I would . . .” and he tells her where he would touch her. And touch her.

His imagined touching is gentler and more romantic than his experience with phone porn earlier in the film, before he has “met” (that is, purchased) Samantha. It suggests that their deep intellectual conversations have led to a deeper, more meaningful sexual connection as well.

“Mmmmmm,” she responds. “That’s nice.” And he expresses more places he would touch her if he could.

And then . . . the fireworks. For both of them.

It seems utterly romantic. They’ve been talking for weeks. It feels like real communication. They seem to be connecting on a deep, intimate, personal level. There’s a reason sex is called “intercourse.” But this isn’t intercourse, and it isn’t real. It’s just mutual masturbation. Or in this case, single masturbation, because Samantha exists only in his computer. She’s not real, and what they seem to have is not real, either. He loves the rush he feels when he is talking to her, but it keeps him from having any real relationships with real people. And that, of course, is the danger of cyber “relationships.” They are emotionally stimulating, but socially crippling.

“How do you share your life with someone?” Samantha asks when Theodore tries to tell her about his relationship with Catherine and his grief at their breakup.

“Through influence,” he suggests, thinking about how he and Catherine would talk to each other about their writing and their careers. “Try this, try that,” he explains about their creative influence on one another. “You grow and change together,” he continues, trying to understand the sharing of a life as he explains it to Samantha — who is, of course, his own creation. “But the danger is growing apart.”

Perhaps she is right. Perhaps falling in love — true love, with a real human — is insanity.

He believes that he cannot grow apart from Samantha, because they are so completely in sync and in love. “You’re mine,” he says simply. But there are no guarantees in cyber relationships; there is only what you believe you have created. And that, too, is a danger. It is far too easy in cyber relationships to invent personas that aren’t quite real, to create dialogs that are fresh and funny and exciting, but in the end are just scripts in an evolving melodrama.

Are human relationships any better? “Falling in love is socially acceptable insanity,” Theodore’s friend Amy (Amy Adams) opines at one point. And perhaps she is right. Perhaps falling in love — true love, with a real human — is insanity. Perhaps there isn’t any logic or sense or sanity about human relationships. They’re hard to develop and even harder to maintain, especially in this day when everyone’s head seems to be dipped toward an electronic device. “Falling in friendship” can be just as inexplicable. We seem drawn toward communicating with cyber friends, checking our email and updating our tweets, even while a real, live friend is right there beside us. It’s a serious and growing problem, this love affair with electronics, a problem that is beautifully, disturbingly displayed in this creative and powerful film.


Editor's Note: Review of "Philomena," directed by Stephen Frears. BBC Films, 2013, 98 minutes; and "her," directed by Spike Jonze. Annapurna Pictures, 2013, 126 minutes.





Oscar Shrugs

 | 

Good filmmaking has much in common with good poetry. Filmmakers and poets both employ language and techniques, specific to their art, that allow them to give their works multiple layers of meaning within tightly condensed packages. Poets use metaphor, alliteration, rhythm, tone, symbol, euphony, and rhetorical structure to streamline their communication with their audiences; filmmakers use lighting, music, costumes, setting, and those same metaphors, rhythms, and symbols to create a similar effect.

This is especially true of filmmakers like the Coen Brothers, who have been creating startlingly brilliant films since Blood Simple (1984) and Raising Arizona (1987). Those two freshman films — one a violent crime thriller and the other a quirky, lighthearted romp (OK, its main characters are criminals too, but they have such good hearts!) — demonstrated early on the breadth of their artistic palettes. While many filmmakers have such recognizable styles that they eventually become adjectives (Hitchcockian, Spielbergian, Bergmanesque, etc.), others do something new and inventive each time. The Coens are like that. While they tend to repeat some of the same artistic tools — they have favorite actors, cinematographers, and musicians — each film offers something predictable only in its unpredictability.

Music is one of their most effective artistic tools, so it should not be surprising — and yet it is — that the Coens would make a film that is simply a week in the life of a folk singer in the 1961 Greenwich Village music scene. As the film opens, Llewyn Davis (Oscar Isaac) is finishing a set in a small, dark cabaret. When the theater manager tells him that a friend in a suit is waiting for him outside, he goes out back and is promptly punched in the face. We don’t know why, and we don’t find out why until much later in the film. Nevertheless, this event seems to be the beginning of a long week of unhappy events in the life of a struggling artist.

Many will see him as a Howard Roark who refuses to compromise his art, even if it means not having a career. But Llewyn’s choices are often driven by his instinct for survival.

Llewyn has no money, no gigs, and no real hope of future gigs. He’s trying to make it as a solo artist after beginning his career as half of a duo, and so far it isn’t working. He sleeps on the couches of friends and bums cigarettes and sandwiches whenever he can. He’s a likeable guy, though down on his luck, and he has a gorgeous, haunting voice. The best part of this film is simply listening to the music. As Llewyn says after finishing a song, “If it isn’t new, and it never gets old, then it’s a folk song.” The soundtrack might be based in the ’60s, but the music feels as contemporary as yesterday, with emotion that is deep and painful.

Llewyn makes a lot of unwise decisions that lead to the unfortunate circumstances he encounters, and that’s an important but subtle message in this film. Many will see him as a Howard Roark who refuses to compromise his art, even if it means not having a career. But Llewyn’s choices are often driven by his instinct for survival. When it’s winter in New York and you have no home, no overcoat, no food, and no cigarettes, you make decisions based on short-term needs rather than long-term consequences. For example, you might take the quick hundred bucks for playing a recording session rather than holding out for the lucrative royalties that are due to you as a represented musician, because you need the money right now. (By the way, that studio session in which Llewyn, who doesn’t read music, learns his part by ear and then performs it for the recording is simply magical.)

This aspect of the film reminds me of the interchange between Siddhartha and the merchant Kamaswami in Hermann Hesse’s Siddhartha, in a scene that occurs shortly after Siddhartha leaves the ascetic life of the monks to join the materialistic world of the city:

"Everyone gives what he has. The warrior gives strength, the merchant gives merchandise, the teacher teachings, the farmer rice, the fisher fish," [Siddhartha begins.]
"Yes indeed. And what is it now that you've got to give? What is it that you've learned, what are you able to do?" [Kamaswami responds.]
"I can think. I can wait. I can fast."
"That's everything?"
"I believe, that's everything!"
"And what's the use of that? For example, the fasting — what is it good for?"
"It is very good, sir. When a person has nothing to eat, fasting is the smartest thing he could do. When, for example, Siddhartha hadn't learned to fast, he would have to accept any kind of service before this day is up, whether it may be with you or wherever, because hunger would force him to do so. But like this, Siddhartha can wait calmly, he knows no impatience, he knows no emergency, for a long time he can allow hunger to besiege him and can laugh about it. This, sir, is what fasting is good for."

But Llewyn doesn’t know how to fast, or how to wait, and so he takes the cash in hand now instead of waiting for the more valuable royalties that could be worth much more later. He is like Esau, selling his birthright for a mess of pottage when he was famished from hunting in the forest.

In this film John Goodman portrays the most despicable character of his career, even worse than his shyster Klansman in O Brother, Where Art Thou? (another Coen Brothers film with a sublime musical score and ethereal lighting and cinematography). His character isn’t violent, but he’s vile. Goodman can and will do anything, and good directors know it. He’s having quite a career as a character actor.

Like good poetry, and good art, this is a film to be savored, pondered, and re-viewed in order to understand the richness of its meaning. Several recurring images — a cat, or cats, that show up throughout the film, for example, and the way Llewyn adjusts his coat just before he sings — create a disconcerting yet satisfying sense of ambiguity that adds to the layers of meaning. You’ll want to go with a friend, just to talk about the film afterward. Inside Llewyn Davis is about an aspiring ’60s folk singer, but it’s about so much more. It’s about choice and accountability, about survival in a harsh environment, about the conflict between commercialism and individuality. It’s about the artist in us all, and the price most of us aren’t willing to pay for greatness. It’s one of my favorite films in a season of good films.

A note about recognition: snubbing Inside Llewyn Davis is one of the stupidest mistakes the Academy of Motion Picture Arts and Sciences has made in a long time. Philomena?? Instead of this?? I don’t know what they were thinking. Maybe they just didn’t want to put in the effort it takes to peel back the layers of genuine art.


Editor's Note: Review of "Inside Llewyn Davis," directed by Joel and Ethan Coen. Mike Zoss Productions, 2013, 109 minutes.



