Getting Ready for October 21


For a long time, I’ve been reporting on the apocalyptic prophecies of Family Radio, the group that identified May 21, 2011, as the date for the manifestation of Christ and the rapture of God’s elect. When that date passed without either the Rapture or the great earthquake that Family Radio’s founder and chief, Harold Camping, had predicted, it was a big news story. It got enormous attention around the world. As I’ve been saying, this was actually a significant event, not just a media event, because it provided the best chance we’ll probably ever have of seeing what occurs when prophecy conclusively fails for a large group of people.

What followed May 21 was a process familiar to students of apocalyptic history — the spiritualization of the failed prophecy. Camping, who at first seemed stunned by the complete normality of May 21, soon decided that the earthquake had actually occurred, but it had been a spiritual earthquake, signaling an invisible and wholly spiritual Last Judgment. According to him, the enrollment of the elect had been completed; all that remained was the final elimination of the non-elect, which would take place, as he had previously prophesied, on October 21, 2011, when the physical universe would be totally destroyed. God's activity would thus be visible on October 21 as it should have been on May 21. Camping suggested that the remaining months of Family Radio’s existence would be devoted to quiet cultivation of the spiritual lives of the elect, not the attempted conversion of persons irrevocably condemned.

Already, however, there was strong evidence that many, if not most, of the people at Family Radio’s headquarters in Oakland, California, were dissenters from the official message. Most broadcasts on the worldwide radio network had ignored Camping’s distinctive doctrines and predictions. Many broadcasts were devoted to presentations that contradicted his doomsday prophecies — discussions of health maintenance, provision for old age, long-term strategies for child rearing, care for the environment, and so forth.

Camping’s new emphasis appeared to satisfy both the believers and the nonbelievers within the organization. The former could continue to believe whatever he said; the latter could go about their normal business, unworried about the need to convert anyone to his unusual ideas. Family Radio’s website withdrew all direct mention of Camping’s endtime books and pamphlets, although it continued, and continues, to run a link to his quaint answer to the question, “What Happened on May 21?”


Then, on June 9, Camping, age 89, suffered a stroke. He was hospitalized, and his Monday through Friday live broadcasts ceased. Virtually the only Campingite voice on Family Radio was that of an epigone, one Chris McCann, who kept preaching the party line about May 21 and October 21, though without Camping’s goofy panache. In a recorded talk that FR broadcast on August 12 (one of a series of talks that is still going on), McCann said of the apocalypse of May 21, “In some small degree it didn’t happen.”

In August, Family Radio’s monthly direct-mail fundraising letter quoted listeners who thanked FR for its message, even though May 21 didn’t turn out to be exactly what they had been led to anticipate. “I am not disappointed with anyone at Family Radio,” one listener said. “I believe all intentions were good.” The letter betrayed no visible embarrassment on FR’s part. But the September letter didn’t mention May 21, or October 21, either. It contented itself with an understated request for support. So the stage was set for a full, though gradual, withdrawal from predictions and disconfirmations.

On September 20 came the news, delivered by website, that Camping had returned to his home, followed on September 27 by a recording of Camping’s own voice — firm and clear, only a little slurred, and precisely the same in reasoning and intention as his pre-stroke explanations of what had occurred and will occur in 2011.

In this new message, Camping reasserted the idea that October 21 will see the end of the physical universe. The elect will survive; the non-elect (everyone not saved by May 21) will perish eternally. His one addition came in response to a question of urgent concern among his remaining followers: what will happen to the unsaved members of our families?

Camping had already established the doctrine that only 200,000,000 people, out of the billions who have ever inhabited this planet, are among the elect. Now he offered consolation to people about to be deprived of their families and friends. He said it is likely that there will be no violence on October 21: “Probably there will be no pain. . . . They will quietly die and that will be the end of their stories.”

“The end,” he went on, “is going to come very, very quietly, probably during the next month, probably by October 21.” Lest you mistake “probably” as a concession to uncertainty, he also said, “I am very convinced that all the elect will go to be with the Lord in a very few weeks.” Regrettably, however, from the point of view of his own credibility, he recurred to an idea that he had been preaching before his stroke — his explanation of why God had let him go so wrong about May 21. There were a lot of things, he said, that “we” didn’t understand, but it was good that God had withheld the full truth; it was good that God had let Camping declare, in the most dogmatic terms, that there would be a literal cataclysm on May 21 — because if he hadn’t, the rest of his message wouldn’t have aroused much interest.

Here is the unconscious cynicism that religious and secular prophets so often display. Yes, we got a few details wrong about the second coming, or the total collapse of the financial system, or the destruction of the middle class, or the coming of global warming (which used to be global cooling), but thank the Maker that the Message still got out. So please keep trusting and respecting us, the people uniquely qualified to convey such Messages.

I will continue to report on events at Family Radio. My current, highly fallible prediction is that within a few months after October 21, Mr. McCann will vanish from the broadcast schedule, the greatness of Mr. Camping will be institutionally recalled, but not his teachings, and Family Radio will return to a more or less typical Christianity — unrepentant, unconfessed, and unwilling to remember the great events of 2011. Such is the way of this sinful world.






Thoughts on “Hayekian Insights for Trying Economic Times”


Following a recent panel at the Cato Institute commemorating the publication of a new edition of F.A. Hayek’s The Constitution of Liberty, Arnold Kling brought to my attention a recent essay on Hayek by Bruce Caldwell. On the panel, Caldwell had provided a thorough and concise refutation of George Soros’s blatant misreading of Hayek, so naturally I sought out Caldwell’s essay.

In this work, “Ten (Mostly) Hayekian Insights for Trying Economic Times,” Caldwell seeks to “identify 10 key themes to be found in the writings of Hayek and others in the tradition to which he belonged that may provide some insights into how we might respond to the current dilemmas that we face.” The essay is thought-provoking, and several points are worthy of further discussion.

Theme #1: The business cycle is a necessary and unavoidable concomitant of a free-market money-using economy.

Caldwell cites Austrian business cycle theory to proffer an explanation of the recent financial crisis:

Hayek’s theory offers a pretty good description of at least part of what happened in the latest meltdown, especially in terms of the Federal Reserve’s interest rate policy and its effects on the housing sector. In Hayek’s theory, problems start when the market rate of interest is held too low for too long. This always politically popular policy leads to malinvestment — too many investment projects get started that cannot ultimately be sustained. When people realize what has happened, investment spending collapses and a recession begins. The dangers of a prolonged low-interest-rate regime in distorting how the various factors of production in the economy are allocated — what the Austrians call the structure of production — is something to take away from the theory, especially given the political popularity of such a policy.

My understanding of the Austrian business cycle theory is the same as Caldwell’s, but I characterize it a bit differently. As I understand it, a fiat monetary system creates money and then disperses it via banks — in the U.S., the 12 regional Fed banks — to selected customers and then out to the rest of the economy. When the flow waxes or wanes, the wave crests and troughs move over various segments of the public, leading to higher or lower investment and consumption.

These monetary phenomena may not accord with the underlying realities of consumer demands. That mismatch can beget over- or underinvestment, thus creating a bubble and then the inevitable collapse. Commodity markets have evolved many mechanisms to reduce the severity of such fluctuations — including hedging, stockpiling, and alternative technologies.

Would a competitive money supply reduce the business cycle problem in a similar fashion? I recognize that the idea may be viewed as radical, but then many classical liberal concepts are seen as radical in our illiberal political environment.

Theme #2: The 1970s show why Keynesian economics was rejected.

In the late 1960s and 1970s, Keynesian policies created their own backlash — a consequence of the economic calamities they begat.

When inflation began to appear in the late 1960s due to LBJ’s deficits, a precisely calibrated income tax surcharge designed to tamp down demand was imposed. Yet because it was viewed as temporary, it had no effect, and inflation continued to rise. This was the first signal that the machine metaphor might have been the wrong one.

Things got much worse in the 1970s as inflation turned into stagflation. The main lesson of the 1970s was that once inflation gets started, it is very difficult to get rid of it. To fight it, the government has to tighten up the economy. This in turn induces unemployment, and because the effect on inflation is not immediate, for a time both the unemployment rate and the inflation rate go up together.

Sadly, Keynesian economics was not rejected for long. We’re all Keynesians again — falling prey to government “stimulus” and scientistic fallacies. Government doesn’t need to “prime the pump.” Rather, we need a deregulatory stimulus to free the nation’s creative economic forces. As Wayne Crews, CEI’s vice president for policy, puts it: “You don’t need to teach the grass to grow; simply move the rocks off of it!”

Theme #3: Some regulation is necessary . . .

The comment by Caldwell that I most enjoyed at the Cato forum was his citing of Hayek’s response to Wassily Leontief’s furious attack on him, which essentially boiled down to, “How dare you criticize planning!” Hayek’s answer was that the question was not whether to plan, but who should plan for whom.

In his essay, Caldwell defines this distinction: “The sort of planning that Hayek favored was a general system of rules, one that would best enable individuals to carry out their own plans.” He adds, “For markets to work effectively, they must be embedded in a set of complementary social institutions.” Indeed, the regulatory disciplines of a competitive marketplace are generally far more effective than the regulatory disciplines of a politicized bureaucracy.

Theme #4: . . . but a lot of regulation is fraught with problems and will make matters worse.

Caldwell makes an important point about the speed and wisdom of bureaucrats:

The basic Austrian insight here is that entrepreneurs (including those who realize there is money to be made from devising ways of getting around regulations) are always forward-looking, while regulators and legislators are almost of necessity backward-looking.

While there might be a handful of individuals knowledgeable in the complexities of financial engineering, the likelihood that such individuals will find employment in a federal regulatory agency satisfying is nil.

And even if such wise individuals existed and were willing to toil away in government planning offices, their actions would still be hampered by the fact that no one else would know what they’re up to at any one time, and how long it would be before they would change course.

Regulation also inserts uncertainty. As Hayek put it, “the more the state ‘plans,’ the more difficult planning becomes for the individual.” There was plentiful evidence of this in the recent downturn. In the fall of 2008, each announcement by the Fed and the Department of the Treasury, while meant to reassure the markets, produced more and more panic. It also froze people into inaction. One could imagine the decision-making process that took place in many people’s minds: “Should I hold onto my house that is underwater, in the hopes of a government bailout? Should I buy a car now that the prices are low or wait for some government program that will cause them to fall even lower? A stimulus plan is coming, and I don’t know what it will look like; probably best to delay all decision-making for now, to wait and see.”

Over and over again, we encounter examples of people basing their decisions on trying to guess what the government is going to do. Contrast this with what happens in well-functioning markets, where people make their decisions principally by looking at changes in market prices, prices that reflect underlying scarcities.

Indeed, government bureaucrats have no means to convey information as effectively as prices can.

Theme #5: The economy is an essentially complex phenomenon for which precise forecasting — on which the construction of rational policy depends — is ruled out.

Exactly — and when we put all our eggs in the same basket, the results of errors are magnified. “[T]he things that we actually do know all concern limitations on our knowledge and on our ability to formulate and carry out rational policy,” Caldwell notes, and continues:

This does not mean that policymakers cannot get things right when it comes to managing the economy as a whole. It is just that sometimes stabilization policy stabilizes the economy, and sometimes it destabilizes it, and we usually can’t tell in advance — and sometimes not even in retrospect — which scenario is unfolding or has unfolded.

Theme #6: In any complex social order, any action may have both good and bad unintended consequences.

One reason for optimism is the fact that the term “unintended consequences” has entered the public policy debate. Perhaps the fallacies of central planning are becoming clearer?

The bad side of unintended consequences is that many attempts to impose our will on the complex adaptive system that is the economy cause things to happen that were not part of our original intention. For example, as everyone recognizes, a market system does not satisfy our longings for “social justice.” In response, well-intentioned people — or those with interests who can play on the sentiments of the well-intentioned — naturally seek to make adjustments in a market system so as to produce more desirable results. Unfortunately, time and again, history has demonstrated that . . . all sorts of pernicious effects will occur that were not part of the original intention.

As I’ve argued before, during the Great Depression, people wisely distrusted big business, so they turned to big government — which had never been tried in peacetime — as a more attractive option. Today neither is trusted, which improves the odds for a more realistic comparative assessment of markets vs. government.

Theme #7: Basic economic reasoning captures what we can know and say about the essentially complex phenomenon that we call the economy.

Following Hayek, Caldwell describes the market economy as a mechanism for the efficient allocation of scarce goods. He is pleased that “basic insights about the workings” of the market are now built into economic education:

These tools allow us to talk about the fundamental fact of scarcity, the choices that scarcity makes necessary, the costs of choice, and the ways to push back against scarcity, at which point the notions of the division of labor, specialization, comparative advantage, the productivity of capital, and the gains from trade are introduced. If one adds to these the concepts of elasticity of demand and supply, and some basic intuitions about market structures, one can explain a lot about the world, as anyone who has ever taught an introductory economics course knows.

Here I have some major disagreements. I find the view of economics as a system for efficiently allocating scarce goods, a view that Caldwell seems to favor, overly static. I prefer the Coasian view of the market as a set of institutions for lowering the transaction costs of voluntary exchanges. In this regard, I’m influenced by Joseph Schumpeter, who noted:

A system — any system, economic or other — that at every given point of time fully utilizes its possibilities to the best advantage may yet in the long run be inferior to a system that does so at no given point of time, because the latter’s failure to do so may be a condition for the level or speed of long-run performance.

I recall my own undergraduate one-year course in economics. When presented with the positive-sum nature of exchange driven only by self-interest, I asked, “Why wouldn’t the party that held both items in a given transfer just stop the transaction at that moment?” Most transfers involve a period when one party holds both items, and at that (possibly brief) moment, short-term self-interest would entice that party to abandon the transaction and keep both. My professor didn’t understand my concern, and only years later did I come to understand that markets “work” because both parties hope to engage in future transactions. If they default once, they will face either cultural or legal sanctions and will at least find it hard to identify future transaction partners. That is, before a world of voluntary exchanges can occur, institutions (cultural or legal sanctions, an expectation of future exchanges) must exist — institutions that discipline transfers. (In the terms introduced by Ronald Coase, the transaction costs must already have been reduced to make such transfers mutually advantageous.)

Of course, when I asked my question, economics professors had no interest in or understanding of the role of institutions in lowering transaction costs, of making markets possible. And often they still don't have that interest or understanding. Most people trying to understand why markets exist are unaware of the evolution of the institutions that make voluntary exchange viable. Naiveté about “markets” has led to “market socialism” and “market mechanisms” and other collectivist beliefs that markets can be created from whole cloth by means of top-down political planning. Consider, for example, the various emission trading systems that are now being proposed. The late Warren Nutter aptly noted: “Markets without property rights are a grand illusion.” He was discussing the mechanical attempts in Russia after the fall of the Iron Curtain to replicate markets, but the principle is true elsewhere.

Unfortunately, modern economics is often based on static equilibrium models designed to be solved rather than to resemble reality. Coase became a nonperson in the economics profession (as did most Austrians), in large part because he kept asking embarrassing questions of this sort.

Theme #8: Demands for social justice can be satisfied.

I believe that this was Hayek’s biggest mistake.

Somewhat controversially in the eyes of certain Austrians and libertarians, Hayek argued that in a society that had reached the general level of wealth that Britain or the US had achieved, “there can be no doubt that some minimum of food, shelter, and clothing, sufficient to preserve health and the capacity to work, can be assured to everybody,” and also that the state should “assist the individuals in providing for those common hazards of life against which, because of their uncertainty, few individuals can make adequate provision.”

Granted, Hayek’s concept of a “safety net” was quite minimal in comparison to that of our modern welfare state, but to argue for it in the first place leads inevitably to unsustainable middle-class entitlements. We may not be able to avoid such policies altogether, but to endorse them is to endorse the instability of the welfare-regulatory state. Hayek struggled with this dilemma; I do not think he ever resolved it. (This may be the reason that, in a previous Cato panel, Richard Epstein argued that Hayek never overcame his social democratic roots.)

Theme #9: Freely adjusting market prices help solve the knowledge problem and allow social coordination (the basic Hayekian insight).

Here I agree completely. I consider Hayek’s “The Use of Knowledge in Society” the most important essay in economics. The idea embodied in that work was the essence of the socialist calculation debate between von Mises and Hayek on one side and Lerner, Lange, and Kornai on the other. Caldwell offers a good summation of Hayek’s view.

The question that must be solved in constructing a rational economic order in such a world is: How can we use the knowledge that is dispersed among millions of fallible market agents so as to achieve some level of social coordination and cooperation?

Hayek’s answer was that a market system with freely adjusting market-determined prices is, when embedded within an appropriate institutional structure, a marvelous mechanism for coordinating human action.

Unfortunately, many modern economists — including some self-avowed “free market” economists — have ignored Hayek’s view on this topic, as the vogue for Pigouvian taxes and quotas illustrates.

Theme #10: The basic “public choice” idea is true: more often than not, government cures are not only worse than the disease, but lead to further disease.

I largely agree with Caldwell’s assessment of the value of public choice economics in helping to explain why government grows and rarely recedes.

Public choice theorists believe that politicians, like everyone else, act in their own self-interest. If consumers maximize utility, firms maximize profits, and politicians maximize votes, what do bureaucrats maximize? The answer is troubling: Bureaucrats have an incentive to maximize the size of the bureaucracy under their control.

However, I find that public choice focuses too heavily on economic motivations, without taking other factors into account. Public policy is a two-tier process that includes both economic and ideological interest groups. Public choice thinkers tend to ignore the motivations of the latter, even though their influence is often much greater than that of business people or other economically motivated groups.

Bruce Yandle’s “Baptists and bootleggers” paradigm illustrates how economic and ideological groups often interact to pursue shared agendas. One group — the Baptists — advocates prohibition on moral grounds, while another unrelated group — the bootleggers — profits from the extralegal opportunities created by policies resulting from the former’s moral crusade. There has, however, been too little attention paid to the ideological groups.

Much of public policy is driven by ideological groups crafting narratives that effectively link their favorite policies with core social values. Aaron Wildavsky and Mary Douglas argued that people respond to a policy by deciding quickly whether it advances or threatens their core values. That decision is influenced by the narratives communicated about the policy.

Today’s “Baptists” are often environmental, labor, “consumer,” or “human rights” groups advocating government intervention in the economy to advance some feel-good cause. To date, free market advocates have been far less effective than the left in crafting narratives that persuade a majority of people that classical liberal policies advance core values — whether these be equality, fairness, order, or security — better than statist ones.

In conclusion, I should note two questions that I believe Hayek neglected to explore adequately. The first: why do so many bad policies evolve and survive? (Granted, Hayek’s “The Atavism of Social Justice” deals with that theme to some extent.) The second: how could Hayek’s own ideas be implemented? His focus on “What do we know and how do we know it?” was crucial, but more attention to “How do we change it?” would have been valuable.

Hayek did have a change agenda — one I agree with — but he did not clarify sufficiently what we could do to bring that about. His recommendation to fight the war of ideas is necessary, but not sufficient. Still, few others have done as much as Hayek, whose work I consider critical in the battle for the future of civilization.






All-American Johnny and the Educators


The debate over Johnny and the American educators goes on. Who’s more to blame for the problems in our public schools? The star maverick educator Michelle Rhee said on a talk show the other night that she could see students being paid for good performance. The conservative education professionals simply say that “Johnny,” meaning most public school students, has to do better to make our schools better. Parents say that educators will have to teach him better to make our schools better. Educators are beating up Johnny in the press to make their accusation stick: he needs to get real (which is as strong as their language gets). Ordinary lay folk know things aren’t right in America’s public schools. Some powerful politicians are demanding that our schools be the best in the world.

We don’t need to be in an international contest; we have a lot to do here at home. “High schools are the downfall of American school reform,” said Jack Jennings, president of the Washington, D.C.-based Center on Education Policy. This disclosure pointed the finger: educators knew the problem ran this deep, and they kept the scandal this deeply buried. America’s 27,000 public high schools have millions of students on their rolls, and 7,000 of those students drop out every day. Administrators don’t know what to do about the high schools.

Powerful politicians are also making us accept more than 100,000 foreign students from Latin America and Asia in our schools, students who depress the test scores. They have trouble with tests given in English, and sometimes with discipline. Johnny sits in a classroom of noisy students from many cultures and can’t get serious about paying attention when the teacher is busy keeping order. He doesn’t get rigorous tutoring at home or discipline in his study habits. He doesn’t score well on tests, which is not entirely his fault.

Our high schools can’t do better when they’re like this, forced to be politically correct. Is this why Johnny doesn’t like being taught with “them”? Donal O’Shea, dean of faculty at Mount Holyoke College and author of The Poincaré Conjecture (referring to the formidable topological theorem the Western world had been trying to prove for 100 years), dropped another bomb on American education in his March 2007 Forbes essay, saying that of the million students who received bachelor’s degrees in 1985, only 16,000 majored in math or statistics. More disturbing, of the 1.4 million who received bachelor’s degrees in 2004, only 13,300 majored in math or statistics. More and more of our high schools are letting their students graduate with little or no education in science, math, or the serious humanities. Graduates who go to college anyway are mostly woefully weak.


Secretary of Education Arne Duncan said recently that Johnny should read more. Although no one of his stature should have to say that reading is serious, it’s clear that Johnny has to do much more reading to improve his articulation and language crafts — the kind of reading that doesn’t always register on standardized tests. Harvard-educated actor Matt Damon recently told a reporter, “We’re tying teacher salaries to how well kids are performing on tests; that kind of mechanized thinking has nothing to do with higher order [thinking].” He may be right, not only about tests in English but also about tests in math-based subjects. President Obama and his advisers put education on their plate first thing, in 2008, recognizing that the high schools were shockingly unhealthy, especially in the math and science departments. His team selected the STEM subjects — science, technology, engineering, mathematics — to do the job. The team promised high school students who go for these subjects that they would be richly rewarded upon graduation from college as STEM majors. They’ll enjoy the good life, a professional career, prestige, and security all their lives. The team was romantic. Yet mathematics is the most important subject — and not just for the STEM fields, but for all the other disciplines too. Mathematics is the yeast in all of them. No subject can grow, get strong, or become precise without it. Every subject has to establish its foundation on sturdy logic to survive. Mathematics is logic in its supreme form.

Johnny doesn’t know this about math. He is taught math in school as a set of mechanical exercises, found in a “manual,” a textbook, filled with them. The textbook is dully and poorly written. The author isn’t well-read, and doesn’t have to be. The book is written to sell to the state’s textbook adoption committee. Publishers fight hard to win the contract, which is quite big if the state is big and buys one math textbook to be used by all its high school algebra students. Johnny can’t comprehend this “adopted” textbook; the “whizzes” in class don’t read it either, but understand what the math formulas and the “mechanical writings” are saying. The mechanical math geeks are educators’ darlings. They test brilliantly and are “inventive” and become gadgeteers.


Test-driven educators need to see students less as machines and more, particularly teens, as fragile souls in constant need of anchoring. The horrors they can commit! Think of the 1999 massacre of 12 students and a teacher by two male seniors at Columbine High in Colorado (a front-page story), and even the 2005 brouhaha at Monta Vista High in Cupertino, California, between Asian and white students over who’s best at math. Some white families at the school moved out of town. (Front-page story: “The New White Flight”!) Teens easily crack under pressure. If only they were disciplined to channel their energy to better use, they would make high school a healthier world and also ensure that our pride and joy — the 18 of the world’s 20 best universities that are American — retain their strength. Joseph Nye of the Harvard Kennedy School said there are 750,000 foreign college students in American colleges. But then he said, “We have to do something about our secondary education.”

Remember that the 750,000 foreign students don’t have the cultural wherewithal to create brilliant American writing. That task belongs to Johnny, and he shouldn’t be thrown away. If he's not a math geek, Johnny still may learn how to contribute to American letters, which aren't brilliant enough.

The annual State Regents Exams for New York high school seniors reveal why educators should get real. The exams demand that to be college-ready each senior score at least 80 in math (last year many failed to solve the simplest of quadratic equations) and 75 in English Language Arts (two essays have to be written). The high school graduation rate for 2009 was 77%, but only 41% of the class was prepared for college. For many of the graduates not prepared for a four-year college, a two-year college was the only hope. Poor inner-city Johnny has it the worst — nothing, nobody to help him hope for a good score on the impossible tests; and no hope that the education system will take an interest in him.

Miracles do happen. A New York inner-city Johnny was picked to star in a Walmart ad that takes place in a school library. Johnny glows as he’s helped with his reading by a retired, lawyerly, grandfatherly looking gentleman smiling like one with the patience of Job. The ad runs twice nightly on the PBS “Tavis Smiley Show.” Thanks to Walmart for telling inner-city Johnny across America that “a mind is a terrible thing to waste.” The ’hood does have rich soil worth cultivating. Will other big businesses come in and help other overwhelmed students? Remember that the great English novelist Charles Dickens was born dirt-poor.

By the way, no one at our 18 most hallowed universities proved the Poincaré Conjecture. Last year, a reclusive Russian mathematician, Grigori Perelman, proved it but refused the Clay Institute’s $1,000,000 prize. This caliber of confident mathematician tends to be shy, and to have other baggage, such as being incomprehensible at points in his lectures. One can’t expect the high school geek math teacher to be less handicapped. He mumbles at the blackboard. So Johnny’s best shot is to do a lot of good reading with a dictionary to get verbal competence and confidence in writing. That would be quite an achievement — more of an achievement than a brilliant score on a math test.






The Anti-Drug Argument for Legalization

 | 

In an early 2011 episode of the libertarian TV show “Stossel,” John Stossel debated Ann Coulter about ending the War on Drugs. At one point Coulter exclaimed in a tone of shocked outrage that Stossel could not possibly be serious in saying that legalization would lead to a decrease in drug abuse. Here I want to argue precisely that point.

It is possible for someone to believe that nobody should ever do drugs but also to support the libertarian proposal for ending the Drug War and legalizing all recreational drugs. The two positions are fully consistent, because both legalization and the end to widespread drug addiction will flow naturally from a psychological and philosophical shift toward a culture of more personal responsibility and away from a culture of irresponsibility. The cause of most drug addiction can be traced to irresponsibility, and irresponsibility is the psychological precondition of the welfare state. This explains why the drug subculture is dominated by the Left. We libertarians can silence some of our most vocal opponents if we undermine the alliance between the anti-drugs movement and the statist War on Drugs. This essay is one step toward achieving that goal.

I hate “recreational” drugs, and I do not think that anyone should use them. But I firmly believe that recreational drugs of every type should be legalized. I could argue that drug use is a victimless crime, or that human beings own their own bodies and have the right to do to themselves whatever they wish. I could argue that the War on Drugs is racist because it targets substances commonly used by members of racial minorities. But such arguments have been made many times before. Libertarian think tanks such as the Cato Institute have already produced ample empirical evidence showing that legalization does not correlate with drug abuse. I have no need to repeat this evidence. My argument is different. I am going to argue that legalization, if accompanied by a psychological and philosophical shift towards a culture of personal responsibility, would lead to a long-term widespread decrease in drug abuse.


Legalization might cause a temporary spike in drug use, as curious Americans would be tempted to experiment. Then again, there might not be a major spike, because despite the War on Drugs, most Americans have already experimented. But even if there were a spike it would not last long. The rational, intelligent American public would soon learn, or reaffirm its current conviction, that drug use is self-destructive and stupid. Indeed, if the foes of drug use are so sure that it is an evil, then why are they so afraid of their inability to persuade consenting adults to abstain from drugs? The truth, of course, is that their arguments are too obvious to be necessary for rational people. Human goodness and happiness depend upon reasoning and reason’s ability to perceive reality accurately; mind-altering drugs impede this process.

I have seen firsthand how drugs can ruin lives and how difficult it can be to quit once someone becomes addicted. I will proudly state that within the past two years I have been able to quit drinking alcohol and smoking cigarettes. Without providing any detailed horror-story anecdotes, I think that it is widely known that alcohol makes people stupid and aggressive, that cigarettes are a deadly, lung-destroying poison, that drugs cause people to lose their grip on reality, and that hard drugs are physically self-destructive and can ruin lives in any number of ways. There can be some debate about whether or not moderate, infrequent recreational drug use is a bad thing (although I think that it is), but there is no question that habitual drug abuse, in other words drug addiction, is both physically and psychologically poisonous. Drugs are a mess, and every sane person knows it.

The question, for me and other drug-haters, is: how to get people to stop using drugs? One possible approach is to outlaw them. This policy has undeniably failed, as drug use of every kind is rampant, despite the government’s best efforts to eliminate it. But if you can’t force people not to do drugs, then what can you do?

A more sophisticated and refined approach would look at the reasons why people choose to do drugs, and would fight the choice to use drugs at its source. People become drug addicts because they make a choice to be weak-willed, lazy, and irresponsible. A drug, after all, is a substance that functions by going between you and reality, so that your experience of reality becomes more pleasant than it would have been sober. The drug does not change reality; it merely changes the chemicals in your brain. It is undeniable that sober reality is the reality that objectively exists in the physical world, and drug-experienced reality is a fictional reality which does not actually exist. Therefore, in a sense, drugs are the ultimate subjectivism and solipsism, in which you choose to cope with the problems in your life not by facing reality soberly and seeking to improve it, but by choosing to change your brain so that you will not feel the pain of your problems any more, so that you won’t have to be aware of what is really going on. The tremendous appeal of drugs is their usefulness for escapism.

I suspect that addiction is usually more psychological than physical, because every human being has the power to quit doing drugs at any time if he makes a genuine choice to do so. Although there are many drugs that have withdrawal symptoms of sickness and agony, rare indeed is the drug that will actually kill you if you stop abusing it, and sobriety is beneficial to one’s health. Addiction comes from the mind, not from the body. What, then, is the nature of an addiction?

The cause of most drug addiction is pain and suffering. A drug addiction is merely a manifestation of the sadness inherent in the condition of being human. Pleasure, wealth, friendship, love, romance, and happiness are not given to humans; we have to work for them. When we make mistakes we lose what we want. The fight to be happy is difficult and messy and full of misery and horror. A person can, however, cope with the human condition responsibly by choosing to face and try to improve reality. This means that he assumes responsibility for both success and failure; he accepts the rewards for good choices and the punishments for bad ones. Alternatively, a person can make the irresponsible choice of abandoning reality, not trying to make things better, and trying to hide from or escape from sorrow.

The essence of irresponsibility is seeking to break the causal connection between the choices you make and what happens in your life. Drugs are addictive because they are uniquely useful for living life irresponsibly. They kill your awareness of your life and blind you to the punishments for your choices. Drugs are as popular as they are because everyone experiences the pain of the problems in life. But this pain evolved as nature’s way of motivating people to solve their problems.

The problem with addiction is not merely that you use the drug constantly and it damages your physical health. It is that a human being becomes ethical by thinking and making choices, and drugs make the drug user’s choices for him or her. The essence of personal responsibility is taking responsibility for your choices and not taking easy shortcuts around the work that is necessary in order to be happy. Drug addiction is fundamentally irresponsible, not merely because it is a lazy way to cope with problems, and not merely because it impairs the ability to choose, but because it is easy and tempting for drug users to blame their actions on the drug, shifting causation away from themselves. That is the core of irresponsibility.


The issue of whether a person chooses to live responsibly or irresponsibly is at the heart not only of the issue of drug addiction, but also the issue of which form of government to choose. Drug use is a personal manifestation of irresponsibility, but a political manifestation of irresponsibility is socialism. An irresponsible government will hide from society’s problems and use any quick-fix snake oil it can imagine to make people think that it is doing the right thing, without ever actually addressing the causes of society’s problems and trying to fix them. The irresponsible person blames his problems on something else and looks to external saviors to solve his problems instead of taking responsibility and solving his problems himself. The modern-liberal voter looks to government to make his choices for him and give him wealth instead of creating wealth for himself. Government, in short, acts upon the body politic like a drug, blinding the people to reality. The more we rely upon government to live our lives for us, the more we lose control and the farther we fall from the condition of being able to solve our own problems.

Because drug abuse and big government are two manifestations of the same irresponsible attitude towards life, it is no coincidence that the drug culture is permeated by the modern-liberal or socialist Left. On the other hand, a culture of personal responsibility, such as is embodied by the libertarian political philosophy, would militate against the problem of drug addiction.

Personal responsibility is inconsistent with using government to force people to behave ethically regarding activity that does no violence to others. We libertarians must make a stand for legalization, but we should fight this battle not for the sake of drug addicts, but for freedom as a matter of principle, supported by rational arguments for individual responsibility.

Many drug foes seem incapable of grasping the notion that you can persuade a reasoning mind to choose sobriety freely. Perhaps this is because the anti-drug interest groups have shown not one iota of understanding of how to talk to people about drugs. Instead of running anti-drug ad campaigns that treat people like rational adults, the anti-drug groups (usually in conjunction with government agencies) run ads designed to scare or guilt-trip people into quitting drugs. People who have chosen to use drugs as a way to cope with reality are already more afraid of facing reality than they are of death, and they have chosen to be irresponsible. So appealing to the fear of death and the guilt of letting down your loved ones is a silly strategy. A manipulative emotional trick never has the same impact as persuasive reasoning. The proper anti-drugs approach is to convince people rationally.


It is notable that when a special interest group wants people to behave in a certain way, but lacks any well-reasoned arguments, it petitions government to pass a law to coerce obedience. Some fools actually may believe that people know better than to do drugs but are too weak to resist temptation and therefore need the government to force them to choose sobriety. Only weaklings and cowards would buy this argument. The government has no special knowledge of the dangers of drugs, no knowledge that the American people lack, nor does it possess a magic wand to make drugs any less appealing. The most effective anti-drug strategy is rational persuasion in a free, legalized society.

When the government forces you to do something that you aren’t persuaded you should do, it is treating you like a child — and the condition of being a child is precisely the condition of not assuming responsibility for yourself, the very condition that leads to drug addiction in the first place. Legalization would send a message that we as a people need to take responsibility for our own choices. It is the best thing the government could do to combat drugs. Rampant drug abuse and the War on Drugs would both be killed by a cultural shift towards personal responsibility. Happy people are far more difficult to rule than sad, depressed, miserable people with drug-addled brains. If society changes so that people are happier and more satisfied with their lives, the power of the government will be vastly curtailed.

If the socialists and the anti-drug warriors actually wanted to solve the drug problem, marijuana would be legal today. Marijuana is far less dangerous than alcohol. It is the opposite of a gateway drug; it is merely a convenient means of experimentation for curious people making the transition from child to adult. Over the long term, legalized pot would decrease hard drug use. Unfortunately, we cannot depend on the state to do the rational thing and legalize marijuana.

At this juncture, the libertarian movement should try to have it both ways: we have already gained significant popularity by appealing to drug users who want drugs to be legalized, but we could also gain a loyal following among drug haters. We should preach that our path of social and political self-responsibility is the way best suited to sober, clear-headed, rational adults. We can thereby attract to our ranks many of the people whose lives have been ruined by drugs and who are looking desperately for an escape from the drug-induced carnage. But because responsible adults are more likely to support free market capitalism than people who are irresponsible and immature, I think that libertarianism can only triumph with the support of sober voters. One might wonder why the many voters who abuse illegal drugs do not swarm the polls and vote libertarian politicians into elected office. My explanation is simple: voters with drug-addled brains are too lazy and irresponsible to become political activists, even though they stand to gain the most from legalization.

Right now the anti-drug, anti-legalization lobby is a powerful foe of libertarianism. The anti-drug activists are passionate and fanatical because they understand the evil of drugs and take inspiration from the virtue of sobriety. But so do I, and my hatred of drug abuse does not make me think that the horrors of the Drug War are in any way justified. If we could chip away at the link between the anti-drug movement and the anti-legalization movement, libertarianism would lose some of its most zealous opponents (perhaps including Ann Coulter and conservatives like her). We should try to persuade some of the anti-drug advocates to abandon the prohibitionists and back legalization as the clever solution to America’s drug addiction problem.






Classic Problem, Classic Films

 | 

The topic of this essay is a broad issue in moral philosophy: conflicts of loyalty, specifically, loyalty in war. My “texts” are four classic movies about World War II.

Let us start with some conceptual analysis of the central concept: loyalty. “Loyalty” means devotion to or consistent support of something. Loyalty is correlated with duty: to feel loyalty is to feel you have a duty to support something. But it connotes more than just adherence; it connotes the willingness to sacrifice one’s own good for the sake of the other.

The things to which a person can be said to be loyal of course include other people, either singly or in groups (such as families, friendship circles, gangs, companies, clans, tribes, nations, or ethnic groups). A person can also be loyal to a belief system (such as an idea or concept, a theory, an ideology, a religion, or a cause). My hunch is that when one is loyal to a belief system, it is usually because it is derived from or associated with a person or group toward whom he feels personal loyalty. For example, an Irishman’s loyalty to the cause of Irish independence would, I suspect, derive from his strong identification with Irish family and friends. But I won’t pursue that theory here.


Because a person is typically related to a variety of other people and groups in a variety of ways, loyalties often conflict. My loyalty to my friend may conflict with self-interest (loyalty to oneself, so to speak), or my loyalty to other friends. My loyalty to my family may conflict with my loyalty to the country, or for that matter my loyalty to my lover. The permutations here are endless.

Now, different ethical theories analyze moral phenomena in different ways. Perhaps the best known ethical theories are utilitarianism, ethical egoism, and natural rights ethics.

Both egoism and utilitarianism tie the moral rightness of an act (or anything else, such as an institution or a rule) solely to whether it leads to the best results. They differ about whom those best consequences are intended for: is it just for the person acting (egoism), or for everyone affected (utilitarianism)?

Natural rights theory is one of a variety of theories that tie the rightness of acts to things other than consequences, such as the motives or character of the agent. Specifically, natural rights ethics holds that your act is right if it flows from your rights and doesn’t violate the rights of others.

Each of these moral perspectives has its uses. For analyzing whether the country ought to enact a new law, say, utilitarianism is the obvious tool. For analyzing whether you ought to engage in a business, ethical egoism is a useful tool. To analyze whether a controversial business practice is just, natural rights ethics is probably the best instrument.

But for analyzing situations in which people act from conflicting loyalties, no better tool is at hand than an ethical theory put forward most clearly and compellingly by the subtle and sophisticated moral philosopher W.D. Ross. In the literature Ross’ view has the unwieldy moniker “Multiple-rule deontologism,” though I have always regarded it as simply common sense put forward in a highly abstract way.

In his view, in morally puzzling situations, we are faced with conflicting prima facie duties, and must determine from among them which one is our actual duty in the context. For example, suppose I am trying to decide whether to leave my wife after an unhappy marriage of many years. I have to sort through my obligations to my wife (whom I chose to marry and therefore to whom I have an obligation), to my children (whom we brought into the world and therefore to whom I owe something), and of course myself. In Ross’ perspective, the exact nature of the relationships you have had (and their specific histories) is what is crucial in determining your actual duty, and not (as in egoism) just your duty to promote your own welfare or (as in utilitarianism) your duty to promote the welfare of the human race impartially considered.

An important feature of Ross’ theory is that even when one prima facie duty overrides the others in a given situation, and hence constitutes the actual duty in that situation, the other duties are in truth none the less still duties, so as conditions change, one of them may override the others in turn. This gives his theory a dynamic aspect missing in many other ethical perspectives.

I think that Ross’ theory is a sadly neglected tool in the philosophy of film. I want to use it to analyze conflicts of loyalty in war movies.

More than any other ethically challenging situation, war raises issues about loyalties to others in conflict with the basic human imperative of survival, as well as conflicts between the general obligation to others to do them no harm and the imperative to kill the enemy. I will examine World War II movies, because WWII is generally considered the 20th-century war in which American involvement was most morally justified. This enables us to focus more on the personal than on the political struggles of the characters involved.

Let’s consider first a fine film starring two grossly underrated actors, The Enemy Below. This film tells the story of one particular small naval battle — a battle between an American destroyer and a German submarine — in the South Atlantic Ocean. The battle is shown as a kind of chess match between the two captains, both seasoned veterans. The US ship, the USS Haynes, discovers the U-boat as the sub is trying to make it to a rendezvous with a German merchant raider. The American commander, Capt. Murrell (Robert Mitchum), is trying to gain the full loyalty of his crew, who know about him only that his last ship was sunk. The German commander, Capt. Von Stolberg (Curd, or “Curt,” Jurgens), has had his crew for a long time, and they are fully loyal to him.

An early part of the battle tips us off to Murrell’s (and Von Stolberg’s) capabilities. The American destroyer sights the German U-boat and closes in on it. But the U-boat escapes. Instead of pursuing it further, Murrell breaks off the attack and slows the movement of his own ship. His number one, Lt. Ware (David Hedison), asks him what he is doing. Murrell explains that he is trying to gauge his opponent: he is allowing the German captain so many minutes to reach a safe depth, level out and spot the destroyer, realize it is open to attack, and let loose a volley of torpedoes.


We cut to the sub, and see Von Stolberg, who after diving and leveling, does indeed realize the destroyer is open to attack, and remarks to his own number one, Schwaffer (Theodore Bikel), that the destroyer’s captain is either clever or foolish. Just as Murrell anticipated, Von Stolberg decides to test Murrell, and fires the torpedoes. Up above, Murrell, after waiting an appropriate amount of time, barks out the order to turn the ship sharply and increase the engines to full. The crew watches in amazement as two torpedoes zip by harmlessly. Von Stolberg now realizes that Murrell is clever, as the destroyer goes on the attack. It is clear to both captains they are up against able opponents, and the battle is joined. As Von Stolberg remarks, “This American captain is no amateur. Well, neither am I.”

In the end, after an extended battle of the ships’ crews and the captains’ wills, both captains make fatal errors and both errors are exploited by the opponent. The result is that both see their ships go down — a most un-Hollywood-like ending.

But as interesting as the naval chess match is to watch, the fascination of the movie comes in learning the personalities of the two warriors. In neither case is the motivation for such fierce fighting either some kind of extreme ideological commitment or exaggerated patriotism. In both cases the commitment is to their job and above all to their crews, whose deaths (should they occur) would be on their hands. In neither case do the other prima facie obligations — such as to friends, country, or humanity in general — disappear, and in another context, another prima facie duty will return as the actual one.

We see this repeatedly in various scenes and dialogical exchanges in the film. For instance, when asked by Ware what he thinks his foe is like, Murrell replies, “I have no idea what he is, what he thinks. I don’t want to know the man I am trying to destroy.” One is tempted to shout at the screen, “Just so!” When you are in the standard battle situation, you are under a general obligation to fight for your country. But your specific obligation, the first loyalty, is to those whom you command, and the foe — while understood to still be human — must only be the foe.

In a scene in the sub, after enduring an intense depth-charging run, one that makes the gung-ho and devout Nazi sailor Kunz want to surrender, there is this exchange:

            Von Stolberg: “Mueller, what is the condition of the ship?”

            Mueller: “We have not been hurt.”

            Kunz: “But we cannot escape!”

            Von Stolberg: “It will be your privilege to die for the New Germany.”

Von Stolberg’s sarcasm makes it clear he has contempt for the kind of men who wanted the war but can’t deal with what it entails. The loyalty is with those for whose lives you are responsible, not some abstract ideology.

Or consider the exchange between Doc and Murrell. The exchange occurs after Murrell has explained why he switched from the Merchant Marine to the Navy. In a powerful scene — one to which few actors besides the restrained Robert Mitchum could do justice — Murrell explains that he resolved to fight subs after the ship he was on was torpedoed by one, and he had to watch as the half of the ship — the half upon which was his newly married wife! — sank rapidly to the bottom of the ocean.

Doctor: “Well, in time we’ll all get back to our stuff again. The war will get swallowed up, and seem like it never happened.”

Murrell: “Yes, but it won’t be the same as it was. We won’t have the feeling of permanency that we had before. We’ve learned a hard truth.”

Doctor: “How do you mean?”

Murrell: “That there is no end to misery and destruction. You cut the head off a snake, and it grows another one. You cut that one off, and you find another. You can’t kill it, because it’s something within ourselves. You can call it the enemy if you want to, but it’s part of us; we’re all men.”

This dialog, which Mitchum delivers in a matter-of-fact, unemotional way, is the sort of dialog that would tempt a lesser actor to chew the scenery. Not Mitchum.

This scene brings up an interesting question. The viewer may wonder about the role revenge may be playing in Murrell’s actions. Now, it is one of the strengths of Rossian moral theory that it can explain a pervasive feature of our ordinary moral lives that is not easy to explain by other moral theories. I am referring here to loyalty toward the dead.

For example, let’s suppose that my parents (while alive) treated me with just the normal care and concern that parents typically render towards their children. My loyalty would be expected, even though they are dead, in such matters as giving them an appropriate funeral and carrying out their final wishes as expressed in their wills. These obligations are not explained by future consequences (for me in particular or humanity in general), or by the “natural rights” of the deceased — they’re dead! — but by our past mutual history as a family.


But while seeing his wife die after a torpedo attack may explain Murrell's choice to seek duty on a destroyer, his demeanor and words make it clear that it is in no way impelling him to destroy this sub. He is no Ahab, for he has no history with that sub or its (at this point unknown) commander that would create such a desire for revenge.

It is clear in the film — from this and other scenes — where the loyalties of the protagonists lie. Von Stolberg makes clear his contempt for the Nazis and what they have wrought. He fights as a professional soldier for his country, but his first loyalty is to his crew. This Murrell understands, and respects, as shown in another scene. As the sub is being depth-charged savagely, and the crew is getting disheartened and beginning to panic, Von Stolberg puts a record on the PA system — a rousing song that the German U-boat cadets learn at their academy. He demands that the crew join him in singing it. They do, and begin to recover their courage. Up above, the destroyer’s sonar picks up the sounds, and Ware expresses wonder (for in playing the music, the sub is making it easier for the destroyer to locate it). Murrell immediately understands what the other captain is doing, and expresses admiration even as he returns to the attack.

Another insight into how the key protagonists in the movies view their loyalties comes near the end. Von Stolberg has gotten most of his men off the doomed sub, as Murrell has his. Murrell spots the German captain for the first time, and wonders why he hasn’t abandoned ship. Von Stolberg replies that his friend and number one is badly wounded. It is clear that the German’s duty to his crew is discharged by having them abandon their ship, which will sink at any moment. But he feels he has to risk his life for this man, and doesn’t expect his crew to do the same, precisely because the sailor was his long-time friend, not theirs. In this new context, Von Stolberg’s prima facie loyalty to his friend has become his actual duty. This is a moral calculation that Murrell understands, and he helps in the rescue. In this context, Murrell’s prima facie duty to humanity has become his actual duty.

At the end of the movie, the two captains converse side by side. Their ending dialog is telling:

Von Stolberg: “I should have died many times, Captain, but I continue to survive somehow. This time it was your fault.”

Captain Murrell: “I didn’t know. Next time I won’t throw you the rope.”

Von Stolberg: “I think you will.”

The Enemy Below is a superb action war movie. The director (and himself a fine actor) Dick Powell put an enormous amount of effort into making it look realistic. It is filmed in color, and the display of naval action (such as the maneuvers of the ships, the depth charge firings, and so on) has a palpable realism. The film absolutely rightly won an Oscar for Best Effects. The supporting acting is excellent, especially David Hedison as Lt. Ware, as well as Theodore Bikel as Capt. Von Stolberg’s number one and good friend Heinie Schwaffer. Also excellent is Russell Collins as Doc. But the two leads are just superb. Mitchum at his best (as he is here) was one of the finest actors in film, despite his never having won an Oscar, and Jurgens (who was nominated for a Best Foreign Actor BAFTA award for his performance) was a renowned actor in both Germany and the United States.

The second film is a much-neglected gem, Decision Before Dawn. The movie is set in the last phase of WWII: the Russian Army is about to enter Germany from the east, while the Allied armies are poised to attack across the Rhine. Germany by this time has had its major cities pulverized by Allied bombing, and the country faces massive shortages.

The American military command expects that the Germans will fight bitterly to defend their soil, and has set up an intelligence unit near the border to identify German POWs who are potentially willing to go back into Germany and spy for the Americans. The intelligence unit is headed by Colonel Devlin (Gary Merrill).

Devlin identifies two promising potential agents, given the code names “Tiger” (Hans Christian Blech) and “Happy” (Oskar Werner). We learn that these are quite different people with quite distinct motives.

Tiger is an outright egoist. Before the war he was a circus worker and a petty thief, and was drafted into combat. He is willing to go along with the Americans — or let them think he will — in exchange for better treatment. And, as all the other characters in the movie know, he returned from his last assignment alone — meaning that his partner was either captured or killed.

In contrast, Happy acts out of a sense that the war needs to be ended, for the good of all sides, not least of all for the good of the German people, who are suffering on a massive scale in what is clearly a losing cause — suffering for the stubborn pride of a few high military men. His resolve in this comes from seeing a fellow POW killed by other German prisoners for expressing the thought that Germany was losing the war.

Helping train Happy for his espionage work is a young Frenchwoman, Monique (Dominique Blanchar). Despite her understandable hatred of the Germans, she finds herself attracted to Happy.

The central drama of the film gets started when Devlin is told that a certain German general wants to negotiate the surrender of his entire command. Because of the high stakes in this operation, Devlin selects an American, Lieutenant Rennick (Richard Basehart), to accompany Tiger and Happy on the mission. Tiger is generally suspected by everyone (again because he returned from his last mission alone), but he is chosen because he knows the area. Happy is assigned the task of locating the 11th Panzer Corps, because it may block the defection.

One of the strengths of Rossian moral theory is that it can explain our loyalty toward the dead.

Rennick, like most of the Americans, generally suspects all the Germans in the training facility, since they are all — let’s be blunt — Germans. Worse yet, they are traitors. Even worse yet, they are traitorous German spies. As a narrator intones at the opening of the movie,

Of all the questions left unanswered by the last war, and probably any war, one comes back constantly to my mind. Why does a spy risk his life? For what possible reason . . . If the spy wins, he’s ignored. If he loses, he’s shot.

This brings up a fascinating feature of the flick: the viewer — no matter what his nationality — typically has a visceral, instinctive aversion to the traitor, no matter how well-motivated the treachery. We are instinctively revolted by treachery to the tribe, as we are to incest or touching the dead. This puts us in Rennick’s position of distrusting the sincere Happy.

The three agents are dropped behind enemy lines into Germany, and split up, with Rennick and Tiger making their way to a safe house, while Happy goes in search of the Panzer unit. Along the way, Happy and the viewer meet a variety of Germans. Some are clearly weary of the war, but one — a superficially friendly Waffen SS courier who is still a devout Nazi — poses a major risk to him.

As luck would have it, Happy (who is posing as a medic trying to return to his unit) is commandeered to take care of the colonel who is in charge of the very Panzer unit Happy was assigned to locate.

After treating the colonel, Happy sets off with the information to the safe house. He is by now being sought by the Gestapo, and almost gets captured, but manages to join the others. During this time, Rennick and Tiger discover that the general who was thinking of surrendering is now in a hospital guarded by the SS, so the unit will not be surrendering after all.

The film moves to a tense denouement, as the German spies and their American control — with important information but an inoperable radio — have to try to swim across a river to the American-controlled side. Tiger attempts to flee, and Rennick shoots him. Happy and Rennick swim halfway across the river to an island, but as they move to swim to the other side, the Germans discover them and start shooting. Happy, unable to make the swim, creates a diversion and allows himself to be captured, ensuring that Rennick can swim across to safety.

Happy is shot as a deserter. Rennick is faced with a cognitive conflict, arising from his attitude towards the enemy: his life was saved by one of the very traitorous Germans he despised so profoundly.

A bigger conflict lies in the heart of Happy. In agreeing to spy on the Nazi army, he has to put aside loyalty to the government for which he fought, and embrace a higher loyalty to his country and what is truly good for its people. This seems clear to the viewer from the start, but not to the American soldiers in the film, who condemn the “turncoat” Germans uniformly, with a reflexive loathing of those who would work against their country’s government (wrongly equating a country’s government with the country itself).

In the end, in doing his true duty, Happy pays with his life. His act of self-sacrifice grows out of his feeling of loyalty to the American officer who risked his life to accompany him on his mission.

This film is outstanding on every level. Visually, it has an uncanny verisimilitude. It was filmed on location in Wurzburg, where rubble still clogged the streets at the time of filming. (The German audience must have been especially struck by this film.) The movie was nominated for a Golden Globe, for Best Cinematography (in Black and White).

The dialogue is no less gritty than the setting, and the characters are true to life — no comic-book heroes here. The fighting scenes are quite convincing as well. Indeed, the movie was based loosely on real life: the Allied intelligence services did in fact employ German POWs to re-enter Germany as Allied agents.

The direction by famed European director Anatole Litvak is spot on. He gets fine performances from all the cast. Litvak was nominated by the Directors’ Guild of America for its Outstanding Directorial Achievement in Motion Pictures award, and the film was nominated for Oscars for Best Picture and Best Film Editing.

The viewer — no matter what his nationality — typically has a visceral, instinctive aversion to the traitor, no matter how well-motivated the treachery.

But the acting deserves special praise. Gary Merrill was always an outstanding actor — who can forget his superb supporting roles in the classic war film Twelve O’Clock High and the equally classic “woman’s movie” (as the studio categorized it) All About Eve, both truly great films? He is excellent here as Col. Devlin, the commander in charge of the operation. Also worth noting are Hildegard Knef as Hilde, the desperate German bar girl; Dominique Blanchar as Monique, the French aide to the intelligence unit who finds herself falling in love with Happy; and Wilfred Seyferth as the Waffen SS courier Heinz Scholtz.

The three lead actors are especially fine. Richard Basehart plays the American agent Lt. Dick Rennick, who accompanies the two German volunteer spies. Hans Christian Blech is perfect as the cynical Sgt. Rudolf Barth (“Tiger”). Most outstanding is the lead, a very young Oskar Werner as Cpl. Karl Maurer (“Happy”). Werner was an excellent visual actor, and his gift for conveying facially his character’s thoughts and emotions is superbly used in this film.

The third film is the remarkable recent German movie, John Rabe. The movie is based on the amazing true story of the eponymous hero, a German businessman who was instrumental in saving over 200,000 Chinese civilians during the conquest and occupation of Nanking (now Nanjing) in 1937–38, often and rightly referred to as “The Rape of Nanking.” (The reader may wish to read my earlier review of an outstanding documentary on the subject, Nanking, that appeared in the August 2008 Liberty, pp. 44-45.)

John Rabe was the head of the Nanking factory of the German multinational corporation Siemens. (Siemens was a major player in providing pre-war China with electric power and telecom services.) Rabe, we need to note well, was a committed Nazi. When the Japanese began their invasion of China in 1937, he was of course sympathetic to them, since Imperial Japan and Nazi Germany had grown politically close, especially after Japan’s military attaché in Berlin, Hiroshi Oshima, began cultivating ties with von Ribbentrop in 1934, and after the Anti-Comintern Pact was signed in 1936. (When Japan invaded China — and China signed a military pact with the Soviet Union — in 1937, Hitler finally turned his back on China and sided with Japan completely.)

But as the actual Japanese military — as opposed to whatever idealized military Rabe envisioned — moved in, he saw how horrifyingly vicious they were. The Imperial Japanese Army at the time was governed by the Bushido code of warrior honor, which viewed it as the duty of a true warrior to die in combat rather than to surrender. That perspective had a dark side, however: it virtually guaranteed that whenever the Japanese Army won a battle, the victors would view surrendering soldiers (and the prostrate populace) with contempt, and consider them deserving of whatever cruelty the victors cared to inflict.

At the opening of the film, we meet John Rabe (Ulrich Tukur) and Dora (Dagmar Manzel), his beloved wife. They have lived in Nanking (then China’s capital city) for about 30 years. Rabe has come to love the country, and is reluctant to leave, but retirement looms. However, during his farewell party, the Japanese begin their attack, with their planes indiscriminately bombing the city. Rabe opens the factory gates so the workers and their families can come in and get some protection. In a striking — not to say jarring — scene, the employees stop the Japanese air strikes by spreading a huge Nazi flag above their heads.

The next morning, the most important foreigners remaining in the city get together to discuss what can be done to help the hapless citizenry. Here we meet the other central figures in the story. Dr. Rosen (Daniel Bruhl), a German Jewish diplomat, points out that Shanghai, which faced a similar attack, set up a “safety zone.” Valerie Dupres (Anne Consigny) — a fictional character loosely based on a real person — who is the head of a Chinese women’s college in Nanking, proposes that Rabe lead the committee for setting up the zone, in large part because she sagely realizes that his affiliation with the Nazi party can be useful in dealing with the Japanese. Dr. Robert Wilson (Steve Buscemi), who doesn’t like Rabe precisely because of his Nazi sympathies, is reluctant to agree.

In a striking — not to say jarring — scene, the employees stop the Japanese air strikes by spreading a huge Nazi flag above their heads.

The following day, Rabe is supposed to leave with his wife on the return trip to Germany. Instead, he decides to stay to help the Chinese, and watches his wife’s ship leave without him. As it departs, however, it is attacked by Japanese planes, and Rabe fears that his wife is dead. In the face of his personal sorrow, he commits to setting up the safety zone.

In one key scene, we see the conflicts that Rabe feels mirrored in a Japanese officer. In this scene, the Japanese have captured a large number of Chinese soldiers defending Nanking. Prince Yasuhiko Asaka, the head of a lesser branch of Japanese nobility and a career military officer, orders the mass execution of the Chinese “captives” — a term covering not merely the POWs but any and all civilians. (In fact, it may have been his assistant, Lieutenant Colonel Cho, a political extremist, who actually gave the order, with Asaka only tacitly consenting).

A young Japanese major dissents timidly, but is immediately slapped down, and the massacre commences. (After the war, Asaka was lucky to escape prosecution for the war crimes committed under his command, when General MacArthur decided for political reasons to grant immunity to all of the Imperial family.) It is obvious to the viewer that the young major is torn between his duty to follow orders (imperative in any military organization, but especially so in a viciously authoritarian one) and his more general duty to behave humanely toward POWs and non-combatants. There are universal rules that morally supersede military orders, some codified in the Geneva Conventions. I will return to this point shortly.

As the soldiers and then much of the general populace get murdered by a Japanese army driven mad by power and bloodlust, civilians pour into the safety zone that the Rabe-led committee had managed to set up.

The film vividly portrays a number of horrific events, including one in which Mme. Dupres refuses to allow the Japanese (who have found a group of Chinese soldiers hiding on the grounds of the Girls’ College) to take 20 of the young women along for sexual exploitation, and subsequently has to endure the sound of POWs being machine-gunned in reprisal. In another scene, while Rabe is negotiating with the Japanese commanders, his driver is hauled off and decapitated as part of a killing contest between two Japanese officers.

As the brutal occupation grinds on, an improbable friendship forms between Wilson and Rabe, leading to some lighter scenes of their drinking and singing songs (one of which mocks Hitler). During the committee’s Christmas celebration, Rabe faints — he has received an unmarked package containing his favorite cake, tipping him off to the fact that his wife is alive. Wilson discovers that Rabe is diabetic, and saves his life by procuring some insulin from the Japanese enemy he detests.

In the new year, the situation becomes grave. Rabe uses the last of his savings to help buy rice for the refugees, and discovers that the reason the supplies of rice are being used up so rapidly is that the Girls’ College is hiding some Chinese soldiers. Rabe and the rest of the committee realize that if the Japanese discover this, they will close the zone and likely kill all the people protected there.

This leads to the denouement, in which the Japanese decide to march into the protected zone. But Rabe is tipped off by the young Japanese major, and the Japanese troops who march in find that the committee and the Chinese civilians have formed human shields to protect the POWs. Japanese tanks are brought in, but before shots are fired, the Japanese discover that international journalists and diplomats have just returned to the city, and the Japanese are forced to back down.

Ironically, the film ends more happily than the real-life story of its hero. It closes with Rabe being taken to the harbor for his return to Germany, cheered by the Chinese as he reunites with his wife. In actuality, Rabe did return to Germany with his wife, but was immediately arrested by the Gestapo, precisely for bringing the Japanese atrocities to the world’s attention. He was released after the war, but — unbelievably — his request for “de-Nazification” was initially denied by the British authorities. In 1950, he died poor and unremarked. Recognition came only belatedly: in 1993 the German government finally acknowledged his decency and bravery, and in 1997 the Chinese honored his humane and honorable work by moving his remains to the Nanking Memorial Hall.

As the soldiers and then much of the general populace get murdered by a Japanese army driven mad by power and bloodlust, civilians pour into the safety zone.

Some have noted a resemblance between John Rabe and Oskar Schindler (the subject of Spielberg’s Schindler’s List). There are analogies, to be sure. Both Rabe and Schindler were real businessmen, both were initially drawn to the Nazi movement, and both in time became committed to saving the lives of at least some of the intended victims of the Axis war machine.

But there are some major relevant differences as well, the biggest being the presence of internal conflict of duties in the case of one character but not the other. Schindler was, from what I can tell, an opportunist who came to see the humanity of the victims of the Holocaust, and fought for some of them, but was never conflicted about it. In contrast, Rabe believed — yes, foolishly — that the Nazis were better than the viciously cruel Imperial Army. This absurd belief comes out in Rabe’s letter to Adolf Hitler:

To the Fuehrer of the German people, Chancellor Adolf Hitler: My Fuehrer, as a loyal party member and upstanding German, I turn to you in a time of great need. The Japanese Imperial troops conquered the city of Nanking on December 12, 1937. Since then I have witnessed atrocious crimes against civilians. Please help to end this catastrophe and make an appeal to our Japanese allies in the name of humanity. With a German salute — John Rabe

Here we see the conflict between Rabe's commitment to his country and his allegiance to people who had worked for him and among whom he had lived. Parallel to this is the conflict faced by the young Japanese major who, in spite of tremendous pressure to carry out unquestioningly the war crimes demanded by his commanders, first risked the good opinion of his superior officers by questioning the order to summarily execute unarmed POWs, then risked his life by tipping off Rabe about the imminent Japanese incursion.

In facing their conflicts, both Rabe and the major are rather like Sophocles’ Antigone. In the play, Antigone disobeys King Creon’s order that her dead brother Polynices (who had fought against Creon’s favorite, Eteocles, and lost) be left unburied, for the animals to eat. Antigone buries her brother with her own hands, and when Creon demands an explanation for her breaking of the law, she replies that she is following a law older than that of kings. As she says, "Nor did I deem that thou, a mortal man, / Could’st by a breath annul and override the immutable, unwritten laws of Heaven."

In saving the Chinese, Rabe and the young Japanese officer were obeying a higher and older law, one that demands that we protect the innocent, no matter what state alliances apply, and no matter what prior personal allegiances have been established.

This film is a fine piece of cinematic art. It was highly acclaimed in Germany, garnering seven “Lola” (German Film Award) nominations and winning Lolas for Best Film, Best Actor, Best Production Design, and Best Costume Design. Besides the Lola for Best Actor, Ulrich Tukur won the Bavarian Film Award for Best Actor for his very impressive performance. But as popular as the movie was in Germany, it was, alas, shunned in Japan: not one Japanese distributor could be found to show it. The Japanese, it must be admitted, have to this day not come to grips with their often atrocious behavior in WWII.

As the brutal occupation grinds on, an improbable friendship forms between Wilson and Rabe, leading to some lighter scenes of their drinking and singing a song that mocks Hitler.

Tukur well deserved his awards, giving a powerful performance, at once restrained and revealing. Steve Buscemi is excellent in support (he was nominated for a Lola for Best Supporting Actor, a rare nomination for an American), playing the more emotionally open American doctor who worked with Rabe to save the Chinese. Florian Gallenberger, who wrote the screenplay, also did a superb job in directing the movie. And Jurgen Jurges did an outstanding job on the cinematography. At the time of filming, a lot of 1930s-era housing stock in Shanghai was being demolished for new high-rise buildings, and he was able to use footage of it in portraying the damage done by the Japanese bombing.

The final movie I want to take up is rightly characterized as a classic. It is one of David Lean’s many outstanding contributions to cinematic art: The Bridge on the River Kwai. Like John Rabe, Lean’s movie is based on historical reality, though (as we shall see) it is not as faithful to history as the Rabe movie.

The Bridge on the River Kwai starts with a unit of British POWs captured at the fall of Singapore, marching into a Japanese work camp in western Thailand. They march in whistling the rousing “Colonel Bogey March,” a popular British tune dating to the First World War. They are assembled in front of the camp’s commander, Colonel Saito (Sessue Hayakawa).

As the British march in, we meet another character, US Navy Commander Shears (William Holden), who is helping bury a dead POW. We get a sense of his egoism and general skepticism about the war when he bribes the Japanese captain supervising him and his fellow grave-digger (an Australian named Weaver) with a cigarette lighter taken from one of the corpses, then intones over the grave:

Here lies Corporal Herbert Thompson, serial number 01234567, valiant member of the King’s own, or Queen’s own, or something, who died of beriberi in the year of our Lord 1943. For the greater glory of . . . [pause] what did he die for? . . . I don’t mock the grave or the man. May he rest in peace. He got little enough of it while he was alive.

The British POWs are commanded by Colonel Nicholson (Alec Guinness). Saito informs the British that they are to work on a bridge over the River Kwai for a railway line, and that he will require all POWs, even the officers, to start work in the morning. Nicholson tells Saito that the Geneva Conventions forbid compelling officers to work, but that only makes Saito repeat his orders furiously.

The next morning, when the POWs assemble, the officers refuse to work. Saito at first threatens to shoot them, then backs down, leaving them in the scorching sun, then putting them in a punishment hut. Saito orders Nicholson to be put in “the oven,” a tiny iron hut that is exposed directly to the sun, where Nicholson stays without food or water, to break his will.

The British medical officer, Major Clipton (James Donald) — a reasonable, rational man of science caught between two stubborn career officers, each following what he views as his military code — attempts to negotiate, but Nicholson refuses all compromise. While all this is going on, the British soldiers do their best to resist passively, feigning work and slyly sabotaging the project.

This leads to one of the great scenes in this great film. Saito, the Bushido-bound martinet, faces an even more code-bound martinet and the possible failure of his own project. He feels that such a failure would obligate him to commit ritual suicide — seppuku — in accord with Japanese tradition. At this point, Saito gives in, using the anniversary of Japan’s 1905 victory over Russia as an excuse to release Nicholson and exempt the other officers from the actual construction work.

Nicholson reviews the status of the project, and finds it in shambles. To the surprise of his men, he says that he wants to build a “proper” bridge, i.e., one that will succeed in bearing the weight of railroad traffic. The officers and men clearly wonder aloud if this isn’t outright collaboration. But Nicholson replies that only by working as real soldiers on a real bridge will he be able to restore his men’s discipline, self-respect, and morale — all essential to surviving the harsh conditions imposed on them. He thus subordinates his loyalty toward military goals to loyalty toward his men.

At this point, three of the POWs — including Commander Shears — attempt an escape. Two are killed, but a wounded Shears escapes and, with the help of some locals, makes it to safety. The movie then follows Shears as he recovers at a British hospital in Ceylon, where he makes time with a gorgeous nurse and looks forward to shipping out to the US.

But Shears' plans are upset by the head of the British Special Forces in Ceylon, Major Warden (Jack Hawkins). Warden wants Shears to volunteer to accompany and guide a commando unit back to the POW camp to blow up the bridge. Shears, ever the egoist, informs Warden that in fact he (Shears) is not an officer, but an ordinary seaman who switched uniforms with the real Commander Shears after their ship had been sunk and Shears was killed — because the seaman knew that officers get better treatment in captivity. But Warden has already discovered this, and the US Navy has assigned the egoist to Warden’s command. “Shears” has no choice, so he “volunteers.”

One might suppose that Shears' agreeing to go on Warden’s commando mission would be a case of conflict (between his innate egoism and his loyalty to his country’s cause in the war), but it isn’t, really. His decision is easily explicable on egoistic principles. He has been unmasked; thus he faces return to the States and a mandatory court martial. Depending on how the trial goes — would the judges view him as having deserted? or as having allowed the real Shears to die? or maybe even having killed the real officer? — the seaman faces a long time in military prison, and maybe even execution. So he decides to take his chances on going along with the mission, which may result in his being exonerated, receiving an award, and retaining the simulated rank of Commander that the Navy has allowed Warden to offer the egoist. No, his conflict comes later.

We return to the POW camp and reach another interesting plot twist. In order to get the bridge done on time, Nicholson proposes to Saito that the British officers do physical labor alongside the enlisted men, provided the Japanese officers do the same. This immediately arouses the careful viewer’s attention: wasn’t the protection of his men, including ensuring that they were all treated in accordance with the Geneva Conventions, exactly the matter over which he fought Saito so fiercely? What strange shift in loyalties is going on in the man?

We now rejoin the commando team as it parachutes in near the POW camp. One of the commandos is killed in the jump, but the other three — Shears, Warden, and a Canadian, Lieutenant Joyce (Geoffrey Horne) — make it to the bridge with the help of Thai villagers (almost all women). Warden is wounded along the way and wants the others to leave him, but Shears insists they all push forward, carrying Warden along. Here we begin to realize that the egoist is transcending his egoism and becoming committed to the mission.

The group arrives at the bridge, and Warden sets up a plan. Shears and Joyce plant the explosives at night, and are to set them off in the morning, when a Japanese troop train is scheduled to pass over it.

Here we reach yet another plot turn. In the morning, the wiring to the explosives is visible, because the river level has dropped during the night. As Nicholson and Saito are giving the project its final inspection, Nicholson sees the wires and alerts Saito. As the train approaches, they try to stop the pending explosion. Joyce jumps out and kills Saito, and Nicholson calls out for help, meanwhile trying to prevent Joyce from reaching the detonator. Shears rushes across the river to help Joyce, but Japanese soldiers shoot both of them. It is in these ending moments, as Shears faces death to complete what he has now accepted as his own mission, not just one he went along with, that we realize his loyalty has shifted completely.

Rabe was released after the war, but — unbelievably — his request for “de-Nazification” was initially denied by the British authorities.

Nicholson, recognizing Shears (“You!” he gasps) — and being thus implicitly rebuked by the sight of an egoist now committed to doing the right thing and fighting for the correct cause — at last also recognizes his duty as a British soldier. He cries out “What have I done?” as he tries to reach the detonator. On the cliff above, Warden fires his mortar, killing the two commandos and mortally wounding Nicholson, who manages to stagger over to and collapse upon the detonator. The bridge blows, taking with it the train.

The final scenes are equally compelling. Warden, who has to escape with the only remaining help he has, the Thai women, shouts at them that he had to do what he did and kill the young commandos — presumably, so that they wouldn’t be captured and forced to divulge information. The British doctor Clipton rushes out to see what has happened. As he surveys the carnage, he shakes his head and exclaims, in a voice choking with emotion, "Madness . . . Madness!” Madness, indeed — countless men were killed to build a “monument,” and more were killed to destroy it.

Now, there was some controversy concerning the film’s historical accuracy. The movie follows the book (The Bridge over the River Kwai, by French novelist Pierre Boulle, probably best known for his novel Planet of the Apes) rather closely, though with one important difference that I will explore shortly. Yet the book itself was only loosely based on the real story of the Japanese Imperial Army’s construction of the Thailand-Burma Railway (also grimly named the Railway of Death) in 1942–43. The project — 260 miles of railway line connecting Bangkok and Rangoon, crossing the Mae Klong river — used primarily forced labor (called “romusha” in Japanese). It cost the lives of upwards of 16,000 Allied POWs and 100,000 conscripted Asian laborers. The real bridge over the main river was first built out of wood, then out of steel, and was not destroyed until 1945 — and then by Allied bombers, not commandos.

Moreover, there was a real British officer who worked to save the British POWs — Lieutenant Colonel Philip Toosey. Toosey was a leader in the defense of Singapore, where he won the DSO for heroism. He refused an order to evacuate, choosing instead to remain with his men in captivity. By protesting their mistreatment even at the cost of being beaten, and by negotiating cleverly, he was able to improve their living conditions. After the war he was a devoted advocate for the veteran Far East POWs.

Toosey was apparently little like the fictional Colonel Nicholson, and was certainly not a collaborator. In fact, he encouraged secret sabotage, such as deliberately mixing the cement improperly and infesting the wooden trestles with termites. The novelist Pierre Boulle, who had actually been a POW in Thailand, said that he based the character of Nicholson on his memories of a number of French officers who had collaborated with the Japanese.

Again, there really was a Saito. But the real Saito — Risaburo Saito — was a Sergeant-Major who was only second in command of a POW camp. More importantly, the real Saito was viewed by the POWs as relatively reasonable and humane. In a war crimes trial, Toosey spoke in Saito’s defense, and the two formed a friendship.

However, while the film is not faithful to reality, I would contend that it is better off for not being that way. It is, after all, a fictional feature film, not a documentary. Specifically, the film departs from reality in a way that highlights the conflicts in the protagonist, helping us think about the source and nature of collaboration.

Normally, a person who collaborates in war does so out of simple egoism. By cooperating with the enemy, he typically furthers his self-interest: he gets better food, easier work, a place in the new power structure, or merely money (thirty pieces of silver, perhaps). But Nicholson clearly is not initially acting out of self-interest. His willingness to endure being boxed in “the oven” shows that.

No, Nicholson’s loyalty to his troops and his military code of conduct are what make him want to build a “proper” bridge as a way to keep discipline and morale up, which would help the men survive in a harsh environment. Although some of his officers wonder whether this is collaboration, Nicholson’s decision is reasonable.

Nevertheless, as the work progresses Nicholson loses his moral focus, as his loyalty shifts to the project itself, and his growing — what? friendship? mutual admiration? or simply partnership? — with Saito. The tip-off scene is when he and Saito are inspecting the completed project, and Nicholson starts musing about how one day the war will be over, and this project will be left as a kind of monument.

At this point, Nicholson's loyalty is shifting from his men to the man with whom he is collaborating, and possibly to himself — to concern for his later reputation, i.e., self-aggrandizement. The viewer sees this in the scene where Nicholson suddenly requires that his officers start working alongside the enlisted men — the very thing that, earlier, he had opposed so strongly that he was willing to be put in “the oven.” This morally indefensible shift in perspective leads him, at first, to help expose the plan to destroy the bridge. Only towards the end, when the sight of the egoist Shears fighting for the right military objective shakes him to his senses, does Nicholson recover the proper moral perspective.

This scene was not in the book, which ends with the commandos trying to blow up the bridge but only succeeding in derailing the train; Nicholson has no hand in any of it. Boulle was not in favor of the change in plot — though he liked the movie on the whole — but I am convinced that Lean’s instinct was right. It creates two characters who are internally complex, with loyalties that shift subtly through the film, forcing us to try to understand their motives.

The critical acclaim for this film was unprecedented. It won Oscars for Best Picture (Sam Spiegel), Best Director (Lean), Best Actor (Guinness), Best Cinematography (Jack Hildyard), Best Music Score (Malcolm Arnold), Best Film Editing (Peter Taylor), and Best Writing/Screenplay (Pierre Boulle, with blacklisted writers Carl Foreman and Michael Wilson added in 1984). Sessue Hayakawa was nominated for a Best Supporting Actor Oscar.


Additionally, the film won BAFTA awards for Best British Film, Best British Actor (Guinness), Best British Screenplay (Boulle), and Best Film from Any Source. The film won Golden Globes for Best Motion Picture (Drama), Best Motion Picture Director, and Best Motion Picture Actor (Drama) (Guinness). Again, Hayakawa was nominated for a Golden Globe for Best Supporting Actor. Lean won the Directors Guild of America award for Outstanding Directorial Achievement in Motion Pictures. And the film score copped a Grammy.

All this critical acclaim was well deserved, in my view. The movie is one of only a comparative handful of movies — several of which are works by David Lean — that I would point to as working on all three levels on which a film can work: the philosophic, the literary, and the aesthetic. (I discussed this terminology in my review of The Lost City, in the December 2006 Liberty.) At the philosophic level, the level of ideas, the film is an interesting exploration of codes of military honor and the nature of collaboration with the enemy. At the literary level, the level of plot and character, the movie gives us some unforgettable images of characters, such as an inwardly weak Japanese martinet, an egoist out for survival in a brutal environment, and a morally flawed though strong officer. And at the aesthetic level, the level of the sight and sound of the work, the film offers masterful cinematography and an unforgettable score.

Moreover, the acting just couldn’t get any better. Alec Guinness — always a favorite actor of Lean’s and appearing in most of his flicks — is just perfect as the hidebound Nicholson. Although Guinness was troubled by the project — he thought that the movie had an anti-British flavor — he gave a fine performance. In fact, he considered the scene in which his character is released from the oven and staggers forward the best acting of his career (he modeled the movements on those of his son, who had been afflicted by polio).

And William Holden, who could play the egoist and reluctant warrior well (as shown in his other fine war pictures, Stalag 17 and The Bridges at Toko-Ri), was perfect in this flick.

Also noteworthy is the performance of Jack Hawkins as Major Warden, the commando squad leader. Hawkins portrays Warden as a martinet, but one bereft of any inner conflicts. When he is wounded, he doesn’t want to be carried by the Thai women, because it endangers the mission. Ironically, it is the egoist Shears who forces him to accept help. At the end, he has no compunction about blowing up both Shears and Joyce, as the Thai women — who function in the scene as a kind of Greek chorus — stare at him in horror. He is totally focused on the mission, and feels no conflicts about anything he has to do to accomplish it.

But especially noteworthy in support is the performance by Sessue Hayakawa as the conflicted Saito, a character at once militaristic and vulnerable, even brittle.

A few comments about this historically fascinating actor are in order. Hayakawa was born Kintaro Hayakawa in Japan in 1889. He was the scion of a military man, and was groomed to become an officer in the Japanese Navy. But he ruptured his eardrum in a swimming dare as a teenager. His father was bitterly disappointed in him, and he himself felt such a profound sense of shame that he attempted seppuku, stabbing himself in the abdomen 30 times. Had his father not broken down the door to the room in which he was attempting suicide and taken him to a hospital, he would have died. I suspect that this aspect of his personal background is what lends such credibility to Saito’s contemplation of the act in the movie.

Hayakawa, as a young adult, made his way to America in 1911, studying economics at the University of Chicago, but then getting into acting. He rapidly became a silent screen star. His pictures between the mid-1910s and the late 1920s were hugely popular, both in the U.S. and Europe, putting him in the same class as Charlie Chaplin, Douglas Fairbanks, and Rudolph Valentino. (In fact, Hayakawa often played the exotic lover, a role he explored before Valentino arrived on the scene.) At his peak in the 1920s, Hayakawa was clearing $2 million a year from his film work (helped by the fact that he was one of the first actors to form his own production company). This stellar background in silent cinema made him a fine visual actor, though a restrained one (he attributed his restraint to his Zen training), a talent he used to great effect in this movie.

Hayakawa’s personal and professional life was full of conflict. After moving to America in disgrace, he made a brilliant early career in Hollywood, but with the rise of talkies (along with anti-Japanese sentiment in America) in the 1930s, he went abroad to work in theatre and film. Ironically, he was never really popular in Japan. His early performances as the exotic Asian lover didn’t suit the Japanese audiences, which were very eager to embrace everything American. Later in his career, these audiences — at that point rejecting America — rejected him as too Americanized.

He was in France in the late 1930s, filming a movie, and when the Germans occupied the country in 1940, he was essentially trapped there. Hayakawa didn’t just sit around and dream of past glories: he lived as a professional artist, selling his watercolor paintings, while also working with the Resistance, helping to save downed Allied airmen. He proved his loyalty by his actions.

In the late 1940s, he was offered work again in Hollywood, starting with Humphrey Bogart’s production Tokyo Joe, and then Three Came Home, a film based on the true story of a woman held in a Japanese POW camp (with Hayakawa playing the camp commander).

His performance in Bridge on the River Kwai well deserved an Oscar for Best Supporting Actor, and he considered it his best acting in a career spanning 80 movies.

Let’s close by returning to a comment I made earlier, when I said that the famous All About Eve was of a genre that Hollywood studios called “women’s movies.” But it was a great film — one that transcended a particular genre of entertainment to say things of great interest to people generally. War movies were (and are), like detective and action flicks, generally considered “men’s movies.” They are normally just a genre of entertaining movies aimed at a target audience. But when they work at their best, they too can transcend the genre to arrive at universal interest.

I contend that the four movies I have discussed are great and transcendent in this way. And at the core of them, what makes them fascinating is the way their protagonists sort through conflicting duties, of the kind that W.D. Ross well understood and analyzed. But in identifying these four films, I know I have only scratched the surface. The conflicts I have discussed are central to the storylines of many great films, and many great works of literature as well. This is, indeed, a very broad and deep subject.


Editor's Note: Films discussed: “The Enemy Below.” 20th Century Fox, 1957, 98 minutes. “Decision Before Dawn.” 20th Century Fox, 1951, 119 minutes. “John Rabe.” Hofmann & Voges Entertainment, Majestic Filmproduktion, 2009, 134 minutes. “The Bridge on the River Kwai.” Columbia Pictures, 1957, 161 minutes.





Marshall v. Jefferson


In the September 2009 issue of Liberty (in a book review entitled "Liberty and Literacy"), Stephen Cox — ever the analytical wordsmith — extols the content and form of Thomas Jefferson’s brilliant first sentence, second paragraph of the Declaration of Independence:

“We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable rights, that among these are life, liberty, and the pursuit of happiness.”

Focusing on the form, he paraphrases the passage into run-of-the-mill prose and berates the reader who can’t tell the difference. He says that “language is not just a method of communicating… (It) is a way of creating pleasure,” and that if one doesn’t see that, then one is illiterate, knows nothing about writing and should —  go away.

Braving Cox’s acid pen and his usually faultless reasoning, I took issue with his (and nearly everyone else’s) assessment of Jefferson’s passage. I responded in Liberty in November 2009:

“Lofty words. Pure poetry, perhaps — but devoid of any connection to reality. It is not self-evident that 'all men are created equal,' or 'that they are endowed by their Creator with certain unalienable rights,' or 'that among those are Life, Liberty, and the pursuit of Happiness.' One need only look at the history of the Bill of Rights and its ignored 9th Amendment to realize that the only rights citizens retain — much less 'are endowed with' — are those that they explicitly claw from their government; Life, Liberty and the pursuit of Happiness not included. Perhaps they were too 'self-evident'?

“It took nearly 100 years for those three self-evident rights to be included in the Constitution under the 14th Amendment as 'life, liberty, or property.' And even now they’re not secure. 'Pursuit of Happiness' was an elegant albeit vague and meaningless euphemism for property, which Jefferson was loath to include, fearing it might justify slavery. Unfortunately, the omission later caused such an erosion of property rights that there is now popular clamor for a property rights amendment to the Constitution (in spite of the 14th Amendment).


"The slippery nature of even enumerated rights — much less 'self-evidently endowed' rights — comes to mind in Justice Oliver Wendell Holmes’ dissent in the Lochner v. New York case. His particularly perverse interpretation of the 14th Amendment, using original intent, mind you, found that since the amendment was originally written to protect the rights of freed slaves, it could not apply to workers and management deciding the length of their workday. But then, he was famous for declaring that he could decide any case, any way, using any principle. (He’d later go on to find that eugenics, as government policy, was justified under the Constitution.)

“As populist rabble-rousing, Jefferson’s clause is second to none, and in that sense, it is great writing. However, as a description of reality or a recipe for government, it is a complete failure. Therefore I must counterintuitively conclude, being a firm believer in the dictum that form follows function, that the clause in question is neither effective nor elegant writing.”

Only later, after reading R. Kent Newmyer’s legal biography of John Marshall, the fourth Chief Justice of the United States, John Marshall and the Heroic Age of the Supreme Court, did I realize that I was not alone in being skeptical of "natural rights" and, instead, advocating enumerated rights.

The controversy over enumerated versus self-evident rights began immediately after the Constitutional Convention disbanded and each state was asked to ratify the new document. Contrary to what today’s tea partiers, radical states’ righters, and some libertarians and conservatives believe, the Constitution was created to increase the power of the federal government, both absolutely and over the states. Under the previous arrangement — the Articles of Confederation — the federal government was dependent on the whims of the states — individually, mind you — for voluntary revenues and a host of other items. These constraints made defense of the new country a much tougher proposition — in raising an army and funding a war. In today’s terms, the government established under the Articles of Confederation was about as powerful and effective as today’s United Nations.

To John Marshall, Alexander Hamilton, and a coterie of other patriots who would later coalesce into the Federalist Party, this arrangement spelled ruin for the new country — not only because the United States might not be able to defend itself adequately, but also because it wouldn’t be able to pay its bills dependably, obtain credit, or participate in the foreign exchange mechanisms necessary for international commerce.

Under the old dispensation, individual states were responsible for debts incurred during the Revolutionary War, and some were thinking of defaulting, either from irresponsibility or from spite toward some of the Brits who’d bankrolled them. As Hamilton observed about the debt and its consolidation under federal responsibility, it was "the price of liberty." Additionally, each state (as well as private entities) could issue its own currency. Without the full faith and credit of a central government, the new country would be unable to participate effectively in international trade — a serious impediment under the new, capitalist world order.


Most delegates to the Constitutional Convention appreciated this. Yet, because the new Constitution increased the federal government’s power, some delegates (anti-federalists, later to coalesce around Thomas Jefferson and his Democratic-Republican Party), fearing tyranny, fought for a bill of enumerated rights, limiting the federal government. The idea that such a bill would be forthcoming may have been a make-or-break point for ratification.

Counterintuitively, people opposed to including a Bill of Rights (many of them Federalists) replied that it was impossible to enumerate all the self-evident rights that the people retained; that enumerating a few rights would guarantee only those; and that the unenumerated rights would forever be lost (think of the right "to privacy," later discovered in penumbras and emanations, together with the right "to earn an honest living," "to marry a person of the same sex," "to marry more than one person," "to cohabitate," "to fight for your country in spite of being gay," "to suicide," "to ingest any substance," "to enter into contracts," "to make obscene profits," etc. — you get the point).

As we all know, the Jeffersonian anti-Federalists won the battle for the Bill of Rights, mostly because the Federalists’ arguments — vague and hypothetical — did not have the immediacy of the fear of tyranny. The odd hitch — to a modern audience — was that the Bill of Rights did not apply to state governments, only to the federal government. Massachusetts retained a state religion until 1833, and the Bill of Rights wasn’t interpreted to apply to the states until the passage of the Fourteenth Amendment after the Civil War.

In order to allay Federalists’ concerns, the Ninth Amendment to the Constitution stated: “The enumeration in the Constitution of certain rights shall not be construed to deny or disparage others retained by the people.”

Unfortunately, that language has become meaningless — as well it should, if examined critically.

Robert Bork, who failed to be confirmed to the Supreme Court because some senators thought him too conservative, both politically and judicially, has likened the Ninth Amendment to an inkblot. In The Tempting of America he argued that “while the amendment clearly had some meaning, its meaning is indeterminate; because the language is opaque, its meaning is as irretrievable as it would be had the words been covered by an inkblot.” From the Left, Laurence Tribe has said that “the ninth amendment is not a source of rights as such.” The best defense of the Ninth comes from Randy Barnett, a libertarian constitutional scholar (and contributor to Liberty), who says that it “calls for a presumption of liberty.”

Marshall, in spite of his Federalism, understood the problem with the Ninth Amendment and advocated the enumeration of rights. He did not believe in natural rights endowed by a creator; he believed that we the people endow ourselves with rights based on expediency and tabulated as positive law. He was a nuts-and-bolts lawyer who believed that the best laws were those that required the least interpretation. Ironically (as we shall see), though he eschewed grandiose philosophical visions and paradigms, he is best known for defining the role of the Supreme Court in the new United States and establishing a rock-paper-scissors hierarchy among the three branches of government that remains the modern modus operandi.


Marshall’s concern with rights was particularly personal. An admirer of John Locke, Marshall was obsessed with the sanctity of contracts to a degree that today might be considered excessively libertarian. He believed that the terms of a contract superseded statutory limitations. For example, in a minority opinion, he opined that bankruptcy laws could not relieve a debtor from a previous obligation. Likewise, he would have heartily supported the 1905 Lochner v. New York decision that overthrew a statute limiting the working hours of bakers as an infringement of the rights of employees and employers to negotiate their own contracts. He also believed that legislation was a contractual obligation of government to the citizenry. In Fletcher v. Peck (1810), the Court adopted the position Alexander Hamilton had argued years earlier on behalf of a group of investors — including himself — that a Georgia state law authorizing a land sale was a contract, and that Georgia's Rescinding Act of 1796 invalidating the sale was unconstitutional under the contract clause of the Constitution. Marshall agreed.

These perspectives had the curious effect of another rock-paper-scissors round robin: though Marshall was a strong advocate of the supremacy clause, the phrase in the Constitution stipulating that federal law trumps state law (a big damper on states’ rights), his view of contracts tends to elevate individuals above both state and federal law. But I digress — back to Marshall’s personal concern with rights.

Marshall and his family were speculators in lands west of the Appalachians. Like most libertarians and economists today, Marshall saw nothing wrong with speculation; in fact, he believed that speculators provided services essential to the opening of new tracts for settlement — subdivision, assessment of resources, initial price arbitrage, surveying into lots, market making, etc. The buying and selling of deeds required contracts, the details of which, he believed, were between the parties involved, and should be arrived at with minimal government interference.

But speculators, then as now, had a bad reputation among the substantial portion of the population that didn’t understand their function and thought they were making profits without effort. These folks, Thomas Jefferson prominently among them, believed in the ideal of a republic of small yeoman farmers. Speculators were just an unnecessary — even an evil — obstacle to that ideal.

Jefferson and Marshall despised each other. Though the Democratic-Republican Jefferson managed to restore his friendship with Federalist John Adams, he could not stand fellow Virginian Marshall.

Marbury v. Madison

Though not the beginning of their feud, Marbury v. Madison — one of Marshall’s two most famous decisions — best summarizes their intellectual conflict. It is the decision that established the power of the Supreme Court to overturn congressional legislation through the principle of judicial review, thereby elevating the Supreme Court to coequality with Congress and the Executive.

Judicial review, already a long-standing legal principle in other contexts, was a power not specifically granted to the newly established Supreme Court. Marshall understood that without it, the Supreme Court could never properly function as one third of the triad it was designed to be, since an effective separation of powers required three equally potent branches of government. It is a complex and convoluted decision, tight in reasoning, and difficult to explain. I’ll give it a shot.

The case began amid the bitter political conflicts of the waning days of Adams’ administration. The (barely) peaceful transfer of power to the opposition was a landmark in the new nation’s development. Still, anti-Jefferson riots were expected in the capital.

In a bold attempt to curtail the new administration’s power, Adams nominated Marshall, his Secretary of State, as the new Chief Justice of the United States. The Federalist-controlled lame-duck Congress not only quickly confirmed him, it also passed a law authorizing the appointment of a number of justices of the peace to govern the District of Columbia in case the riots materialized. Adams immediately appointed 42 Federalist judges.

Jefferson was livid. As Newmyer says:

“Unfortunately for historians, there were no cameras to record the deliciously ironic moment on March 4, 1801, when the new chief justice administered the oath of office to the new president. With his hand on the Bible held by Marshall, Jefferson swore to uphold the Constitution Marshall was sure he was about to destroy. . . . It was not coincidental that Marshall turned his back to the president during the ceremony. . . . Jefferson had already concluded that the federal judiciary had to be humbled and ‘the spirit of Marshallism’ eradicated.”


The new appointments were duly signed and sealed but, ominously, not all of them were delivered by the Secretary of State (still John Marshall), whose job it was to finalize the procedure, but who had only had the last week of the Adams administration in which to comply. When James Madison, Jefferson’s newly appointed Secretary of State (and an author of the Constitution), assumed his duties on March 5 he discovered the remaining undelivered appointments.

Jefferson ordered Madison not to deliver the commissions. Enter William Marbury, one of the prospective justices appointed by Adams. He and three other denied appointees petitioned the Supreme Court for a writ of mandamus directed at Madison, in essence ordering him to comply.

Meanwhile, the new Democratic-Republican Congress repealed the legislation that authorized the appointments in the first place and, adding fuel to the fire, cancelled the 1802 Supreme Court term, Marshall’s first. The intervening period permitted Marshall and his colleagues to ponder the constitutionality of events, the dangers of challenging executive authority head-on by issuing the mandamus, and the formulation of strategy.

For two years, the Court’s powers, or lack thereof, had been debated in Congress and in the court of public opinion. The Court had even been a focus of Jefferson’s political agenda. Specifically, was the Supreme Court subservient to Congress or the Executive or both, or was it equal in stature and power? Marshall was looking for an opportunity to settle the debate, and Jefferson gave it to him when he blocked Adams’ judicial appointments.


In February 1803, the Court came out fighting, opening its term with Marbury v. Madison. Immediately, Jefferson — claiming executive privilege — insulted the Court by refusing to permit US counsel to appear or executive witnesses to be heard. And he continued to stonewall, micromanaging executive witnesses even when the Court established, after much technical to-ing and fro-ing, that it did indeed have jurisdiction in the case, and that it could go forward.

In what was to become his typical fashion, Marshall (with a unanimous Court), decided the case on narrow grounds: the rule of law. He stated that Marbury’s office was vested when President Adams signed his commission; that at that point — irrespective of mundane details — the operation of law began. Marshall, to Jefferson’s great irritation, virtually lectured the new president that he was not above the law.

So, where is the judicial review, the Court’s power to overturn congressional legislation, for which Marbury v. Madison is so well known? In the last six pages of the 26-page opinion, in which the Court struck down section 13 of the Judiciary Act of 1789. Marshall's reasoning almost became the proverbial camel passing through the needle's eye.

In the Act, Congress had magnanimously granted the Supreme Court the right to issue mandamus writs, reasoning that, since the power wasn’t specifically granted by the Constitution — and the Court couldn’t very well function without it — it was necessary for the Court to have it.

Marshall disagreed on two major points. First and foremost, he declared that Congress — through simple legislation — could not change the Constitution, and that only the Supreme Court had the power to interpret it. Second, for a variety of reasons, Marshall decided that the Constitution already gave the Court the power to issue mandamus writs.

Confused? There is no doubt that Marshall was out to prove a point and — with some fancy footwork — had to weave a sinuous path to make it. Luckily (and some say, with Marshall’s prodding and collusion) circumstances, timing, allies, and even adversaries all fell into place for him. Almost all aspects of the decision are still debated, even whether it was necessary at all, mainly because many commentators believe the Constitution already implicitly grants the power of judicial review to the Court. In Federalist No. 78, Hamilton opines that not only is judicial review a power of the Court, it is a duty. Not one delegate to the Constitutional Convention argued against the principle.


But critics claimed that the Marshall court had vastly overreached. Jefferson himself believed that the president (and the states) had the power to interpret the constitution, and he forever fulminated against the decision. Congress, however, was not troubled and took it in stride — which leads Newmyer to conclude that, “put simply, it was presidential power, not congressional authority, Marshall targeted.” The Supreme Court’s power of judicial review was extended over the states in 1816 in Martin v. Hunter’s Lessee, another decision of the Marshall Court.

Jefferson versus Marshall

John Marshall was the longest-serving — and arguably the most important — chief justice. Serving from 1801 to 1835, he presided over the most formative decisions the new country faced. He helped to establish a balanced, effective, and more manageable government, and helped set the tone for the future sparring among the three branches of federal power. During his term, the Constitution became much more than a founding document — it became something closer to accepted law.

Today most of us perceive political parties as somewhere within the Left-Right continuum. It is difficult to see things in any other way: that’s how today’s politics play. But the Federalist versus Democratic-Republican divide was an entirely different one.

Although most people today associate Jefferson with individual rights and a fundamentalist view of the Constitution, and the Federalists with the advocacy of a strong central government, the distinctions are not so facile and clear-cut. For one, the Democratic-Republicans supported slavery, while the Federalists generally opposed it. These positions led the Jeffersonian tradition directly to the policies of Jackson, Calhoun, and, finally, Jefferson Davis; while the Federalist tradition led to Lincoln.

The irony here is that those who were most skeptical of the Constitution are the ones referred to as “strict constructionists,” while the Federalists are regarded as free-wheeling interpreters of its provisions.

The Federalists were the first to see and understand the failure of the Articles of Confederation, so they pushed for change. The anti-Federalists thought that the Articles could be tweaked for improvement and were skeptical about the whole constitutional enterprise. In the end, they accepted it reluctantly — and showed it. To them, “strict constructionism” meant that if the Constitution granted one the right to eat, the right to obtain food didn’t automatically follow. Or, if it granted the right to free political speech, the right of media accessibility for broadcasting that speech — since it wasn’t actually spelled out — didn’t exist. Such thinking, often imbued with deep resentment, led to muddled action, ambivalence, and, sometimes, a reversal of roles — with the president himself leading the way.


Jefferson’s cavalier attitude toward the Constitution was shown early in his presidency, with his 1801 attack on the Muslim state of Tripolitania on the Barbary Coast (Tunisia, Algeria, Morocco, and present-day Libya) without a congressional declaration of war, which contemporary opinion believed he was constitutionally obligated to obtain. Several of the Barbary states had demanded tribute from American merchant ships in the Mediterranean. When the Americans declined, the Pasha of Tripoli captured several seamen and held them for ransom. Jefferson, expecting an immediate victory, ordered a squadron of ships to destroy the Muslim navies. The war dragged on for almost 15 years.

But it was the Louisiana Purchase — the constitutionality of which even Jefferson was skeptical about — that was really troublesome. For starters, the Constitution did not empower the federal government to acquire new territory without the consent of every state (as per Andy P. Antippas' view in his History of the Louisiana Purchase). Some of the articles of the Purchase Agreement were also in violation of the Constitution because they gave preferential tax treatment to some US ports over others; they violated citizenship protocol; and they violated the doctrine of the separation of powers between the president, Congress, and the judiciary.

As Antippas recounts:

“Jefferson and his fellow Republicans were 'strict constructionists.' i.e., they allegedly adhered to the letter of the Constitution and were strong proponents of 'state’s rights' and 'limited government;' however, Jefferson and most of his party members chose simply to ignore all the Constitutional issues as merely philosophical for the sake of expediency — Jefferson’s response to his critics was 'what is practicable must often control what is pure theory' — in other words, 'the end justifies the means.'”

Jefferson’s specious argument to his critics was that the federal government's power to purchase territory was inherent in its power to make treaties. The Senate bought that argument and ratified the Louisiana treaty.

In their individual approaches to personal liberty, Jefferson’s and Marshall’s actions speak volumes. As I’ve already mentioned, Marshall was fanatically laissez-faire, while Jefferson favored greater economic regulation for what he thought was the good of society. Specifically, Jefferson favored a society of agrarian smallholders and did not approve of speculators buying up western lands as soon as they were available — he wanted smallholders to get in on the action right away. He did not understand the redeeming socioeconomic value of speculators, abhorred their — in his view — unearned profits, and advocated restricting or eliminating these — again, to him — unnecessary middlemen, prominent among whom were Marshall and his family.

Both were Virginians and slaveholders, but their treatment of slaves differed markedly. Jefferson is known to have beaten his slaves; there is no evidence that Marshall ever did. In his will, Marshall wisely granted more liberty to his slaves than we might intuitively suppose today. He gave them two options upon his death: liberty, with severance pay, so they could set themselves up independently (or emigrate to Liberia); or continued servitude, in case the radical transition to liberty was more than they could handle. In 1781, near the end of the Revolutionary War, 23 of Jefferson’s slaves escaped to the British.

Marshall's and Jefferson's approaches to Native Americans were even more illuminating. Though Jefferson’s words spoke respectfully, even admiringly, of the noble savage, his policies began the trail of tears that would destroy cultures and result in the reservation system.

As soon as Louisiana was purchased, Jefferson embarked on a cold-blooded policy toward Native Americans. In a lengthy letter to William Henry Harrison, military governor of the Northwest Territory, he explained that the nation's policy "is to live in perpetual peace with the Indians, to cultivate an affectionate attachment from them by everything just and liberal which we can do for them within the bounds of reason." But he goes on to explain "our" policy (presumably his own, and that of the United States) on how to get rid of every independent tribe between the Atlantic states and the Mississippi, through assimilation, removal, or — if push came to shove — "shut(ting) our hand to crush them." Finally, in secret messages to his cabinet and Congress, Jefferson outlined a plan for the removal of all Native Americans east of the Mississippi to make sure that the land would never fall to the French or the British, who chronically supported the Indians in their disputes against the US.

Marshall, in contrast, did everything he could to prevent the confiscation of Indian land and the eviction of the Indians from Georgia, in a series of cases collectively known as the Cherokee Indian cases.

As a young man, while serving in the Virginia House of Delegates in 1784, Marshall supported a bill that encouraged intermarriage with Native Americans. Three years later, in the Indian slave case of Hannah v. Davis, he argued successfully that Virginia statute law prohibited the enslavement of Native Americans.

The Cherokee Indian cases, too long and complicated to detail here, came before the court in the early 1830s, when Andrew Jackson was president. The final one, Worcester v. Georgia, turned on the supremacy clause. In a bald-faced land grab, Georgia had declared sovereignty over Cherokee lands. The Cherokees sued. The Marshall court decided that Indian affairs were the province of the federal government alone; therefore the Georgia statutes that claimed control over Indian lands were null and void. But neither Georgia nor Jackson — both strong states’ rights advocates — nor Congress supported Marshall’s decision. Jackson is reputed to have said, “John Marshall has made his decision, now let him enforce it.” Congress, meanwhile, had passed the Indian Removal Act of 1830, which Jackson heartily endorsed. It was a Pyrrhic victory that few Cherokees savored on their forced march along the "trail of tears and death" to Oklahoma.

McCulloch v. Maryland

Besides the fundamental issue of judicial review, another basic issue remained to be addressed in order to make the Constitution something closer to ultimate law, as opposed to simply a guiding, founding document: objective guidelines for practical interpretation. Other than the Federalist Papers, which were not law, firm guidance about constitutional interpretation was lacking.

At one end of the spectrum was the strict fundamentalist approach (see examples already mentioned above), akin to religious fundamentalist interpretations of the Bible: virtually no interpretation. At the other extreme were freewheeling, almost poetic readings — complete with what today are called “penumbras and emanations.” With Holmesian effort, one could interpret anything, anywhere, in any way. This was not only impracticable law, but a recipe for tyranny. In 1819 Marshall got the opportunity — again, with some prodding and collusion on his part — to set the standards.

The Constitution had given Congress the power “to coin money and regulate the value thereof.” Under that clause, the first Congress — with the approval of President Washington — chartered the first Bank of the United States, legitimized by the coinage power. Though Jefferson opposed it, he left the bank in place. He and his political friends referred to it as "Hamilton’s bank." Being strict constructionists, they argued that it was Congress that had been empowered by the Constitution to coin money, not the bank. However, four years without the bank during the War of 1812 spoke eloquently about the value of its services. When a charter for a second Bank of the United States (the first charter ran for only 20 years) was introduced in 1816, it had the support of President Madison, who signed the bill into law. Only Virginia's congressional delegation voted (11 to 10) against the bank.


Though the Constitution had empowered the federal government to coin money, it had not explicitly barred states from doing so. Virginia, a staunch states’ rights advocate, kept its Bank of Virginia, headquartered in Richmond. And there lay one of the rubs: the Richmond branch of the Bank of the United States was too much competition for the alternative, state banking system.

Exacerbating the dispute was the mismanagement of the Second Bank of the United States, which — shades of today’s crisis — had provided easy credit for a land boom in the south and west. When the bank called in its improvident loans to state banks — to cover its own debts, which it had improvidently incurred — the default of banking institutions swept like wildfire across the southern and western states.

Battle lines were drawn. The states moved against the national bank. Ohio, in the most radical reaction, outlawed the Bank of the United States, using the theory of “nullification," according to which states could cherry-pick federal laws, rejecting whichever they chose. Nullification had been around since 1798; it gained legitimacy from the support of none other than Jefferson and Madison. (Though it must be admitted that when pressed about the constitutionality of nullification, Madison hedged and declared that it was an extraconstitutional option.) Taken to extremes, nullification implied the right to secede, with each state being judge of the constitutionality of its own cause; and, as Ohio later tried to do with McCulloch v. Maryland, the right to reject Supreme Court decisions. Though the Civil War and numerous Supreme Court decisions have hacked both the legs and arms from nullification, like the black knight in Monty Python’s Holy Grail it keeps coming back for one more round. Russell Pearce, an Arizona state senator, is sponsoring the latest nullification bill. Libertarians should reflect on the fact that nullification cuts both ways: it is at least as likely to be used to nullify as to uphold individual rights.

To cover its debts, Maryland passed a law taxing — in the most punitive and unconventional manner — the Bank of the United States. James McCulloch, head of the bank's Baltimore branch, refused to pay the tax. Maryland sued.

When the case finally reached the Supreme Court in 1819, Marshall found for McCulloch — less than a week after the conclusion of oral arguments, leading some to wonder whether he’d written the decision before hearing counsels’ arguments. As well he might. Not only was the case “arranged” (both parties sought an expeditious decision), but Marshall apprehended the issues immediately, commenting to a fellow justice that “if the principles which have been advanced on this occasion were to prevail, the constitution would be converted into the old confederation.”

Many principles were in play in Marshall's decision: federal supremacy over the states within a constitutional sphere (Maryland could not punitively tax a federal institution), judicial review (reaffirmed), nullification (denied — state action may not impede valid constitutional exercises of power by the federal government); and, finally, the most important issue: implied powers.

Invoking the necessary and proper clause of the Constitution, Marshall declared:

“Let the end be legitimate, let it be within the scope of the constitution, and all means which are appropriate, which are plainly adapted to that end, which are not prohibited, but consist with the letter and spirit of the constitution, are constitutional.”

Here, finally, was an objective guideline for the interpretation of the Constitution. Accordingly, the constitutionality of the Bank of the United States was established without a doubt. By extension, its heir today, the Federal Reserve Bank, is a valid, constitutional entity empowered by Congress “to coin money and regulate the value thereof” — however much we may disagree with its methods and their effects.

* * *

Much controversy over the Constitution and its meaning continues — witness the Russell Pearce case and the calls for the abolition of the Federal Reserve. Even the Bill of Rights is not fully settled law. During recent arguments before the Court, Justice Elena Kagan sought to minimize the importance of an attorney’s statement, with which she disagreed, by referring to “buzz words”: “heightened scrutiny” and “rational basis.” These words refer to the standards a court should employ in assessing the impact of governmental action that may affect individual rights. As Chip Mellor of the Institute for Justice has stated in Forbes, contemporary judicial activism has “creat[ed] a hierarchy of rights with those at the top (like the First Amendment) receiving relatively strong protection — the heightened scrutiny — and those at the bottom (property rights and economic liberty) receiving very little," since these latter are subject only to "rational basis" review.


The Constitution is only "settled" law in the sense that nearly all Americans accept it as not only our primary founding document, but also as the lawful basis for our government. In many other respects, it is far from "settled" — witness the extremely varying interpretations ascribed to it and the continuing legal battles over exactly what it means and how to apply it. John Marshall, in Marbury v. Madison and McCulloch v. Maryland, set parameters within which that debate should productively take place. Understanding those two cases — and Marshall's perspective — is essential to a knowledgeable understanding of our government's structure and powers.

As libertarians, we can be most effective if we work within the framework of accepted law to protect and extend liberty, rather than making ineffective flanking attacks from the swampy fringes, armed with quixotic arguments. The Constitution must be scrupulously and objectively interpreted, and with due respect for Marshall’s great tradition: first, as Randy Barnett has suggested, according to “original meaning” of the words used at the time; then, according to “original intent” — a less stringent bar, requiring interpretation of documents such as the Federalist Papers. This approach to interpreting the Constitution is a firmer bulwark for liberty than the well-intentioned but murky intellectual musings of Jefferson, which — though noble and intelligent — are no substitute for tight legal reasoning.






Seen and Unseen


Recently, President Obama stumbled through a poorly conceived bus tour of several states in the Midwest. The object of the junket seems to have been to counter media coverage of the GOP presidential candidates who’d gathered in Ames, Iowa, for the first major straw poll of the 2012 election cycle. Instead, Obama made comments about car- and truck-manufacturing that reminded listeners of his central-planning mindset. He treated a group of Tea Party leaders in a haughty and condescending manner. And, most damning, he stammered through the following self-justification:

“We had reversed the recession, avoided a depression, gotten the economy moving again. But over the last six months, we’ve had a run of bad luck.”

The man is not good at improv. And he’s not well-read. Numerous pundits (not all of them right-leaning) noted that the president’s excuses reflected this famous quote from the great Robert Heinlein:

“Throughout history, poverty is the normal condition of man. Advances which permit this norm to be exceeded — here and there, now and then — are the work of an extremely small minority, frequently despised, often condemned, and almost always opposed by all right-thinking people. Whenever this tiny minority is kept from creating, or (as sometimes happens) is driven out of a society, the people then slip back into abject poverty. This is known as ‘bad luck.’”

How could Obama, a man who trades on being seen as smart and articulate, make such a boneheaded gaffe?

I think this has to do with the ignorance and insular nature of American statists. They operate under a simplistic notion of politics — call it “Manichean,” if you feel like being generous about their philosophical grounding, “infantile” if you don’t.

Their adversaries are “enemies”; and their enemies are “terrorists,” “extreme,” and “crazy.” (The quoted terms in that last sentence are from a few recent articles posted on dailykos.com — but Obama and his underlings have used them, too.)

These mutterings reflect a shallow worldview. American collectivists haven’t read the books that define and expand on free-market philosophy; most justify their ignorance by dismissing Hayek, Mises, Rand et al., as “evil.” Instead, they seem to skim some magazines and websites. Mostly, though, they watch TV. And they focus on the personal manners (and lives) of limited-government advocates, rather than the substance of the positions.

Obama’s supporters focus on (and equate themselves with) the weakest and least rational of the president’s critics — a motley crew of bigots and conspiracy mongers. As a result, Obama’s supporters weaken themselves. They can’t understand that there are rational criticisms of a president who has done so much damage to the philosophical and political foundations of the United States.

They just don’t see.

This blindness has rendered Rep. Ron Paul — the most effective advocate of real limited government among the recognized presidential candidates — something of an invisible man.

Because you read Liberty, you know more about Dr. Paul and his latest campaign for the White House than do most Americans. But, for a moment, put yourself in the shoes of an ordinary salt-of-the-earth citizen or even an impassioned Obama supporter. You’d probably have only the vaguest sense of who Ron Paul is. And you wouldn’t understand how many people share Paul’s perspective and beliefs. When Paul finished a razor-close second in the aforementioned Iowa straw poll, you’d have fallen back on your epithets. Or just denied the whole thing.

Rep. Debbie Wasserman Schultz (whose retiree-heavy Florida congressional district gobbles up more than its share of federal benefit dollars) took the first option. Here’s some of the invective she hurled about the Iowa straw poll results, via CNN:

“In previous presidential campaigns, we might have chalked extreme fringe-type candidates like Michele Bachmann and Ron Paul as an anomaly. . . . But we’re looking at the core of the Republican Party now. The heart of the Republican Party is the extreme right wing.”

No surprise. A woman who counts on scaring pensioners to maintain her livelihood is bound to vilify people who talk about benefit cuts. I’d just like, once, to read a quote from the wretched Ms. Wasserman Schultz that didn’t include the word “extreme.”

Most statists, though, have simply chosen to pretend Paul doesn’t exist. He and Rep. Michele Bachmann finished in a near-tie for first place in the Iowa straw poll, separated by less than 1% of all votes cast. Mrs. Bachmann won but, if the vote had been an actual election, many jurisdictions would have called for an automatic recount. The next Sunday, Bachmann appeared on all five of the so-called “major” weekend TV news programs; Paul appeared on none.


Bachmann — whose public persona strikes some as addled — serves as a stand-in during this election cycle for the absent Sarah Palin. Perhaps that’s why the establishment media revels in making Bachmann look ridiculous, as a recent and unflattering cover picture on Newsweek magazine proved. Media outlets that still favor Obama seem to be following a strategy of portraying Bachmann as “crazy” and, therefore, any Republican challenger to the president as crazy by association.

Paul is included in this scheme. But, mostly, he’s simply ignored. And this isn’t just a left-wing phenomenon. Fox News Channel’s top-rated host Bill O’Reilly, a statist of a nominally “conservative” stripe, goes out of his way to ignore Paul. And, when pressed, O’Reilly dismisses Paul’s chances of winning even the GOP nomination as “zero.”

In the days after Bachmann’s media blitz, a slight shaft of light — from an unexpected source — cut through the willful darkness. TV talk show host and topical comedian Jon Stewart ran a humorous segment pointing out the media’s obvious denial of Paul’s presence and popularity. Stewart referred to Paul as “the 13th floor” of the presidential news coverage and took cable TV reporters to task for blatantly ignoring the congressman’s close second-place finish in Iowa.

The New York Times, the Associated Press and U.S. News (yes, it still exists as an online news site . . . but has dropped “and World Report” from its name) followed Stewart’s satire with semi-serious articles that discussed the media’s dismissal of Paul, in Iowa and in general.

The Associated Press piece acknowledged that Paul has raised enough money to stay in the presidential race for a long time. And that his supporters are more dedicated than most. But it concluded:

“Still, Paul finds himself outside the bounds of traditional Republicans. His opposition to the wars in Iraq and Afghanistan defines him as a dove. His skepticism toward the Federal Reserve has spooked Wall Street. And his libertarian views on gay rights draw the ire of social conservatives. He also tweaks Republicans on foreign policy, arguing it isn’t the United States' role to police Iran's nuclear program or to enforce an embargo with Cuba. ‘Iran is not Iceland, Ron,’ former Sen. Rick Santorum told Paul during Thursday's debate.”

An article on presidential politics that quotes Rick Santorum as an authority on anything is suspect, in my opinion.

The U.S. News piece concluded lazily by quoting an establishment media hack to characterize Paul’s candidacy:

“ 'He’s got a very dedicated cadre of people,’ says Larry Sabato, director of the University of Virginia’s Center for Politics. ‘And they're very intense, but they’re relatively few in number . . . It’s ridiculous talking about him getting the nomination.’ ”

Prof. Sabato’s record on political predictions is no more reliable than former Sen. Santorum’s.

I have no idea how well Ron Paul will do in the coming presidential primaries. But I know that he has the money and the organization in place to campaign until the GOP convention. And I know that he’s announced he won’t seek reelection to his seat in Congress, so that he can dedicate himself to this presidential run.

I hope he lasts long enough to force more of the establishment media and the GOP powers-that-be to acknowledge he exists. And that his arguments for limited government are as mainstream as anything the rent-seeking Ms. Wasserman Schultz has to say.

Now, if we can only get them to acknowledge Gary Johnson . . .






The Passing Paradigm


The latest much-ado-about-nothing crisis passed, with a result that should seem familiar. In 2008, Americans were told that if the TARP bill (a $700 billion taxpayer-funded welfare handout to large banking institutions) wasn’t passed, the stock market would crash and massive unemployment would follow. After an unsuccessful first attempt to pass the bill amid angry opposition from constituents, the bill passed on a second vote. Subsequently, there was a stock market crash followed by massive unemployment.

This time, our political-media cabal told us that if Congress was unable to pass a bill to raise the debt ceiling, the government would not be able to meet its short-term obligations, including rolling over short-term bonds with new debt. US debt would be downgraded from its AAA status, and a default would be imminent. After the melodrama, Congress passed the bill raising the debt ceiling. Standard and Poor’s subsequently downgraded US Treasury debt anyway, and deep down everyone knows that a default is coming as well, one way or another.

We are seeing the end of a paradigm. Thomas Kuhn argued in The Structure of Scientific Revolutions (1962) that anomalies eventually lead to revolutions in scientific paradigms. His argument holds equally true for political paradigms.

A paradigm is a framework on which a society bases its beliefs. For example, people at one time believed that the forces of nature were the work of a pantheon of gods. Sunlight came from one god, rain from another. The earth was a god, as was the moon. With nothing to disprove the premises of the paradigm, it persisted. People went on believing that sunlight and rain were the work of sunlight and rain gods because there was no compelling reason for them to believe otherwise.

However, within any paradigm there are anomalies. Anomalies are contradictions — phenomena that cannot be explained within the framework of the paradigm. People have a startling capacity to ignore or rationalize away these anomalies. While it may defy logic to continue to believe that rain comes from a rain god even after evaporation and condensation have been discovered and proven, people would rather ignore the anomalies and cling to the paradigm than face the fact that the paradigm is false.


But once there are too many anomalies, the paradigm fails, and a new one must take its place. This new paradigm renders the old one absurd, even crazy. At some point in the future, people will look back on the political paradigm of the 20th and early 21st centuries. There is at least one thing that will be quite obvious to them: centralized government is insane.

Consider the premises upon which this present paradigm relies: all facets of society must be planned and managed by experts. The judgment of the experts trumps the rights or choices of any individual. The choices made by the experts will result in a more orderly society and greater happiness for the individuals who compose it. There will be better results from one small group of experts controlling everyone than multiple groups of experts controlling smaller subgroups of society.

Of course, libertarians reject every one of these assumptions on its face. A free society does not tolerate “planning” or “management” by anyone. All choices are left to the individual, as any attempt to plan or manage his affairs amounts to either violation of his liberty, looting of his property, or both. However, let’s assume that the first three assumptions of the present paradigm are valid and merely examine the last. Even that does not hold up to scrutiny.

Suppose an entrepreneur starts a business. At first, his market is local. He opens retail outlets that are overseen by store managers. The entrepreneur is the CEO of the company and manages the store managers. Even at this point, the CEO must trust day-to-day decisions to his managers. He has no time to make everyday decisions as he tries to expand his business. The managers do this for him and he concentrates on strategic goals.

His business is successful and soon he begins opening outlets outside of the original market. He now has a need for regional managers to manage the store managers. He manages the regional managers and leaves the details of how they operate within their regions to them.

The business continues to expand. With retail outlets in every state, there are now too many regions for the CEO to manage directly. The CEO appoints executive directors to manage larger regions, each composed of several smaller ones. There is an executive director for the West Coast, another for the Midwest, and another for the East Coast. Of course, the CEO has the assistance of his corporate vice presidents who manage sales, operations, human resources, and other company-wide functions from the corporate office.

Now, suppose that one day the CEO decides to fire the executive directors, the regional managers, and the store managers. He will now have the salespeople, stock clerks, and cashiers for thousands of retail outlets report directly to him and his corporate vice presidents. Would anyone view this decision as anything but insane?

As silly as this proposition sounds, this is a perfect analogy for how we have chosen to organize society for the past century. The paradigm rests on the assumption that every social problem can be better solved if the CEO and his corporate staff manage the cashiers and the salespeople directly. As in all failed paradigms, anomalies are piling up that refute its basic assumptions.

This paradigm assumes that centralized government can provide a comfortable retirement with medical benefits for average Americans, yet Social Security and Medicare are bankrupt. It assumes that a central bank can ensure full employment and a stable currency, yet the value of the dollar is plummeting and unemployment approaches record highs (especially when the same measuring stick is used as when the old records were set). It assumes that the national government’s military establishment can police the world, yet the most powerful military in history cannot even defeat guerrilla fighters in third-world nations. It assumes that the central government can win a war on drugs, yet drug use is higher than at any time in history. It assumes that experts in Washington can regulate commerce, medicine, and industry, yet we get Bernie Madoff, drug recalls, and massive oil spills.

Hundreds of years ago, the prevailing medical science paradigm assumed that illnesses were caused by “bad humors” in the blood. Operating with that assumption, doctors practiced the now-discredited procedure known as “bleeding.” They would cut open a patient’s vein in an attempt to bleed out the bad humors. As we now know, this treatment often killed the patient. Most rational people today view the practice of bleeding as nothing short of lunacy.

Ironically, this is a perfect analogy for the paradigm of centralized government. The very act of a small group of experts attempting to manage all of society drains its lifeblood. It is the uncoerced decisions of millions of individuals that create all the blessings of civilized society. It is the attempt by a small group of people to override those decisions that is killing society before our very eyes. Someday, people will look back on our foolishness and laugh as we do now at the misguided physicians who bled their patients to death. The present paradigm is dying. The revolution has begun.





The Master of the Internet


Federal Communications Commission Chairman Julius Genachowski is a political hack, a personification of statist mendacity. He’s a danger to individual liberties and free markets — not because of any clear intention to do wrong, but because he’s a man of gilded academic credentials yet little evident wisdom or insight.

Like the president he serves, Genachowski was educated and has spent his adult life in an echo chamber of small-minded conformists. And, like the president, Genachowski struggles to describe grand ambitions with the vocabulary of a clerk.

During the 2008 presidential campaign, Barack Obama made a big deal about “net neutrality” — a term that meant different things to different people. To traditional left-wing partisans, it meant government-funded high-speed internet service for the usual laundry list of aggrieved minority groups. To Silicon Valley tech firms, it meant cracking down on big internet service providers (ISPs) who wanted to charge heavy users of bandwidth more than light users.

“Net neutrality” was (and is) poorly defined and is therefore likely to disappoint some or all interested parties however it is implemented. As public policy, it is an inherently cynical proposition. The Federal Communications Commission is the regulatory agency best positioned to give shape and force to the vague term; so Obama needed someone to run the agency who wouldn’t mind facing the inevitable disappointment that would come from fulfilling a cynically made promise. He needed someone with enough career ambition to want the job — but not enough insight to recognize what a bad hand it would be.

Like the president, Genachowski struggles to describe grand ambitions with the vocabulary of a clerk.

Genachowski’s curriculum vitae reads much like the president’s: Columbia undergraduate, Harvard Law School. Of course, a lesser Ivy, followed by Harvard Law, doesn’t mean so much — a few geniuses and many middling mediocrities have followed that path.

According to his official biography, the FCC Chairman has spent his whole career “active at the intersection of social responsibility and the marketplace.” But what does that mean? “Social responsibility” — like “social justice” and “public interest” — is a code word that careerist tools use to camouflage their unmerited self-regard.

After Harvard, Genachowski clerked for Supreme Court Justices David Souter and William Brennan. A promising start. Later, he clerked for Abner Mikva on the D.C. Circuit Court of Appeals. This might seem like a step backward to the uninitiated; but Mikva is a lion among the establishment Left. The step, however, did establish Genachowski as more a politico than a great legal mind. It was followed by stints working for Charles Schumer and a couple of House committees — which confirmed Genachowski's drift into the ranks of partisan political hacks.

When the Democrats lost control of the House in 1994, Schumer found Genachowski a spot working for FCC Chairman Reed Hundt. Hundt did some good during his time at the FCC. He continued his predecessor’s efforts to lower regulatory barriers and, therefore, costs related to international telephone service; and he didn’t stand in the way of ownership consolidation that was going on at the time in the terrestrial radio business. But generally he did the statist bidding of the Clinton administration.

In a 2010 talk at Columbia University, Hundt admitted that “his” FCC had used its oversight powers to pick winners in the telecommunications market. It crafted regulations to “favor the Internet over broadcast” as the common medium of the country because right-wing talk radio “had become a threat to democracy.” And he bragged that he’d made policies “to allow the computers to use the telephone network to connect to the Internet . . . and to do it for free. In other words we stole the value of the telephone network . . . and gave it to society.” According to the Los Angeles Times, he called the highlight of his FCC tenure “state-sanctioned theft.”


Hundt seems to have been an inspiration to the careerist Genachowski — proof that a man with Ivy League credentials but no particular qualities could rise to levels of high esteem in the nation’s capital. The elite among these hacks parlay this esteem into lucrative post-government employment with rent seekers such as McKinsey and Co. and the Blackstone Group. In 1997, Genachowski left the FCC and went to work as a sort of bar-admitted personal valet to Barry Diller at IAC/InterActiveCorp, a New York-based conglomerate of internet commerce companies whose crown jewel is the widely-reviled Ticketmaster. This episode is Genachowski’s main claim to business experience. His resume entries while at IAC — “Chief of Business Operations” and “General Counsel” — sound impressive, until you realize that Diller runs the firm as a sort of personality cult and doesn’t suffer strong (or, some say, even competent) subordinates.

Genachowski’s resume also includes short stops at a couple of minor venture capital funds. But his track record doesn’t suggest any particular vision for technology or business. He seems to have been an access-peddler brought in to provide political contacts. And he didn’t last long in any of those gigs. He was just biding his time until he could pass back through the revolving political door.

Genachowski unintentionally highlights the moral emptiness of the administration he serves. And the moral emptiness of the American “progressive” movement in general. Two speeches that he gave in December 2010 set a framework for examining his circular logic, slipshod ethics, and tired rhetoric.

On December 1, 2010, he gave a talk with the Orwellian title, “Remarks on Preserving Internet Freedom and Openness” to FCC staffers. As we’ll see, his definition of “openness” is perverse — and it may be the key to understanding the Obama brand of knee-jerk statism.

“After months of hard work at the FCC,” he said, “and after receiving more than 100,000 comments from citizens across America, we have reached an important milestone in our effort to protect Internet freedom and openness.”

This theme of “hard work” recurs in Genachowski’s rhetoric. It’s not clear exactly what he thinks is “hard” about the work of dictating how internet service providers can operate. The approach of his FCC has been to meet separately with various “stakeholders” in telecommunications regulatory policy — and then to issue edicts influenced, if not drafted, by a handful of leftwing thinktanks.

In any event, congratulating himself and his staff for doing “hard work” is reminiscent of government bureaucrats who call their offices “shops.” They’re contemptuous of real shops and actual hard work.

“Yesterday, I circulated to my colleagues draft rules of the road to preserve the freedom and openness of the Internet.”

This is almost too easy. Who but a hard-charging statist, blind to simple logic or common sense, would believe that such “rules” preserve “freedom and openness”?

“This framework . . . would advance a set of core goals: It would ensure that the Internet remains a powerful platform for innovation and job creation; it would empower consumers and entrepreneurs; it would protect free expression; it would increase certainty in the marketplace, and spur investment both at the edge and in the core of our broadband networks. . . . The proposed rules of the road are rooted in ideas first articulated by Republican Chairmen Michael Powell and Kevin Martin, and endorsed in a unanimous FCC policy statement in 2005. Similar proposals have been supported in Congress on a bipartisan basis. And they are consistent with President Obama’s commitment to ‘keep the Internet as it should be — open and free.’ ”

There are two points worth noting in the section above.

One, Genachowski often presses the point that his edicts are consistent with those of the GOP appointees who’ve preceded him. In his worldview, that which is “bipartisan” is inherently good. He doesn’t seem to understand that both parties are deluded in believing that their central planning will make the market for communications services operate more efficiently.


Two, Obama and his minions are being cagey when they claim “openness” as a goal of their market rules. “Openness” is a term of little fixed definition in economics circles — or almost anywhere else. It sounds like a free-market value — but statists can use it as a Trojan horse for bringing central-planning policies into effect. The main policy hiding in this horse is “net neutrality,” a regulatory concept that means . . . whatever the Feds want it to mean.

“This openness is a quality — a generative power — that must be preserved and protected. And . . . there are real risks to the Internet’s continued freedom and openness. Broadband providers have natural business incentives to leverage their position as gatekeepers to the Internet. Even after the Commission announced open Internet principles in 2005, we have seen clear deviations from the Internet’s openness — instances when broadband providers have prevented consumers from using the applications of their choice without disclosing what they [the providers] were doing.”

In this opaque discussion of “openness,” Genachowski seems to be referring to the 2010 federal appeals court decision in Comcast Corp. v. FCC. It came as the result of a legal dispute between the FCC and the telecom giant that dates back several years. Comcast — one of the biggest ISPs in the United States — has made a practice of slowing some customers’ internet connections when they use BitTorrent, a bandwidth-hogging file-sharing service used primarily to trade digital versions of TV shows and films. (Many film and TV studios also complain that BitTorrent encourages piracy of copyrighted content.)

The FCC has maintained that this selective tightening of specific uses of the internet pipeline violates its policies of “openness” and “net neutrality,” providing an opportunity for the Feds to dictate corporate action to the likes of Comcast. Comcast has maintained that the FCC’s statements about internet openness are just industry guidelines, not legally enforceable.

The two sides spent a lot of time in federal court debating these points. In 2008, the FCC formally sanctioned Comcast, asserting that it could wrap just about any edict it wished within the policy cloaks of “openness” and “net neutrality.” But in April 2010, a federal appeals court vacated that order, finding that the FCC does not have the authority to sanction Comcast for restricting access to BitTorrent.


During oral arguments, D.C. Circuit Judge Raymond Randolph told the FCC’s lawyers, “You can’t get an unbridled, roving commission to go about doing good.” In its written decision a few months later, the court held: “Policy statements are just that — statements of policy. They are not delegations of regulatory authority.”

The court’s ruling is a big win for liberty — and the prevention of regulatory creep. But it’s a big problem for Genachowski. Playing bureaucratic games with the “net neutrality rulemaking process” accomplishes nothing; it’s all just talk, without any statutory basis. Until Congress passes a law codifying net neutrality, Comcast can tell the FCC to sod off. And keep tightening the pipe for BitTorrent.

Back to Genachowski’s hackery:

“Protecting Internet freedom will drive the Internet job creation engine. . . . [C]onsumers and innovators have a right to know basic information about broadband service, like how networks are being managed. The proposed framework therefore starts with a meaningful transparency obligation, so that consumers and innovators have the information they need to make smart choices about subscribing to or using a broadband network. . . . [C]onsumers and innovators have a right to send and receive lawful Internet traffic — to go where they want and say what they want online, and to use the devices of their choice. Thus, the proposed framework would prohibit the blocking of lawful content, apps, services, and the connection of non-harmful devices to the network. . . . [C]onsumers and innovators have a right to a level playing field. No central authority, public or private, should have the power to pick which ideas or companies win or lose on the Internet; that’s the role of the market and the marketplace of ideas. And so the proposed framework includes a bar on unreasonable discrimination in transmitting lawful network traffic.”

Genachowski must have had trouble in Constitutional Law — even at Harvard. And introductory logic at Columbia — if he ever took it. Reread that last paragraph. The man conflates a “right to a level playing field” with eyewash about a free “marketplace of ideas.” Those two things are mutually exclusive. If government agencies get to define and dictate (or, sometimes, dictate first and define later) things like the levelness of playing fields and the unreasonableness of “discrimination,” there will be no free market of ideas. Or anything else.

Another point: Genachowski’s broad strokes, pitting “consumers and innovators” against evil ISPs, are so crude as to be meaningless. In many cases, the ISPs are the innovators and content-owners on the internet. And, as he proceeds to explain, his notion of “consumer” less resembles any person than it does a grievance mechanism concocted by some leftwing think tank.

“Universal high-speed Internet access is a vital national goal that will require very substantial private sector investment in our 21st century digital infrastructure. For our global competitiveness, and to harness the opportunities of broadband for all Americans, we want world-leading broadband networks in the United States that are both the freest and the fastest in the world.”

The grubby materialism of statist political philosophy usually leads to farcical conclusions. Thus, Genachowski’s talk of “vital national goals” includes the right to a “level playing field” in terms of high-speed internet access. But why stop there? Why not add a right to a level playing field in terms of high-speed cars? High-definition TVs? Brushed-aluminum appliances?

Internet access is not a right; it’s a consumer service. High-speed internet access is a luxury service, and even less a right.

“. . . Accordingly, the [FCC’s current] proposal takes important but measured steps in this area — including transparency and a basic no blocking rule. Under the framework, the FCC would closely monitor the development of the mobile broadband market and be prepared to step in to further address anti-competitive or anti-consumer conduct as appropriate. . . . The Commission itself has a duty and an obligation to fulfill — a duty to address important open proceedings based on the record, and an obligation to be a cop on the beat to protect broadband consumers and foster innovation, investment, and competition.”


This last bit is very important. There’s little foundation in either statute or legal precedent for the FCC to have regulatory authority over the “mobile broadband market.” But mobile devices are the fastest-growing segment of the consumer electronics marketplace, and the Feds want control of what appears and how it appears on all those smartphones and tablets.

And beware: any Beltway bureaucrat who doesn’t carry a gun but talks about being a “cop on the beat” should be summarily thrown in the Anacostia River, especially if he thinks that a “cop on the beat” is supposed to go around fostering things. Genachowski uses the phrase repeatedly — so he’ll require multiple immersions.

On December 15, Officer Genachowski gave another speech — this one at the National Press Club in Washington, entitled “Response to Communications Workers of America’s ‘Speed Matters’ Report.” It finished the work that his earlier comments had begun:

“CWA was one of the very first organizations to question whether America’s broadband networks are where they need to be if we hope to realize the full potential of this transformational technology. . . . Slowly but surely, others have come to recognize the strategic importance of having world-leading broadband networks, but, as today’s report makes clear, we still have a lot of work to do. The FCC has been working hard to address the key challenges CWA has spotlighted in this report.”

This is a typical approach for Obama Administration apparatchiks: they allow labor unions to define regulatory policy. CWA leaders encourage partisan “activism” and compare political opponents to Nazis (need proof? Google “Christopher Sheldon,” vice president of a New Jersey CWA local). That an obsequious FCC chairman allows the leaders of this union to set policy priorities is a disgrace.

“The economy and jobs are at the core of our work. We’re focused on seizing the opportunities of communications technologies to catalyze private investment, foster job creation, compete globally, and create broad opportunity in the United States. . . .

“I agree with CWA that the great infrastructure challenge of our generation is high-speed broadband Internet. Robust broadband networks create all kinds of jobs, all across the country: everything from construction jobs to urban planners and architects, engineers and scientists, sales people and IT professionals.”


Here’s another example of the Obama administration’s folly. A central plank of its political philosophy is that the purpose of business is to create jobs. Of course, that is false. Jobs are a side effect of business, the purpose of which is to make profits. But high-speed internet access as an engine of job creation is something of a fetish for Genachowski. In a separate speech, he said:

“[E]very billion dollars spent on infrastructure will create 20,000 to 40,000 jobs — jobs that can’t be outsourced. . . . These include all kinds of jobs — construction jobs, urban planners and architects, engineers and scientists, sales people and IT professionals.”

In yet another speech — to Jesse Jackson’s corporate shakedown organization, the Rainbow PUSH Coalition — Genachowski blathered: “If we want the United States to be the world’s leading market for the innovative new products and services that drive economic growth and job creation, we need all Americans to be online. . . . [Question: Even convicts? Even four-year-olds?] Broadband is essential to economic opportunity. Job listings are moving exclusively online. Increasingly, if you’re not connected you can’t find a job.”

This is a classic example of a dull-witted statist confusing correlation with causality. Internet access doesn’t make people employable. Employable people tend to be early adopters of technology.

But back to his National Press Club screed:

“[I]f we want the job-creating Internet services and applications of the future developed in America, we are going to have to do better. That’s why our National Broadband Plan sets a goal of 100 megabits per second broadband to 100 million homes. This would make the U.S. the world’s largest market for very high-speed broadband services and applications — unleashing American ingenuity and ensuring that businesses and jobs are created here, and stay here. . . . Because speed matters, we set a goal of at least 1 gigabit-per-second service to at least one anchor institution in every community in the country. These ultra-fast testbeds will help ensure that America has the infrastructure to host the boldest innovations that can be imagined.”

This is old-fashioned, institutionally-decadent pork. And it has the same rotten smell as the various high-speed rail projects Obama likes mentioning. There’s no evidence that Americans want or need the “testbeds” (what a word) that Genachowski describes. But, using nothing more than circular logic, he promises big-government boondoggles designed . . . to generate the CWA dues that allow people like Christopher Sheldon to avoid honest work.

“In September, the Commission approved an order giving schools and libraries the flexibility to buy low-cost fiber through our Universal Service Fund, moving us one step closer to achieving this goal. And, as the National Broadband Plan recommends, we’re also working with the military to make military bases one-gig centers.”

You have to give Genachowski credit for unearthing old bureaucratic technology. The Universal Service Fund is an obsolete program, originally intended to encourage telephone service to poor rural areas. There’s no objective evidence that it needs to be remodeled into some sort of open-ended subsidy for “schools and libraries” (as if libraries, of all things, were cutting-edge, job-creating agents of innovation). The fact that the FCC is trying to do this is yet another example of how government programs never go away.


As far as military tech goes, the whole Bradley Manning-Wikileaks episode suggests that generals need to get better control of the carbon-based links in their computer chain before asking for screaming fast internet connections.

“As CWA’s report states, to spur innovation, the Internet must not only be fast, it must remain open. That’s why the FCC is also moving to preserve the freedom and openness of the Internet. . . . It’s a vital part of what we need to do to unleash innovation and protect free speech, to foster broadband investment and promote a vibrant economy — to create jobs in the United States. And that’s why it’s essential that we move forward next week with our strong and balanced proposal to adopt the first enforceable rules of the road to protect Internet freedom.”

Again, there’s cognitive dissonance among these hacks. Nothing the FCC or any other government agency does creates jobs. Even if the FCC’s budget were increased fivefold and its offices crammed with more bureaucratic inmates, the sovereign debt or tax revenue required to fund such folly would quash actual jobs in the private sector, by removing the money to pay for them.

“As the Speed Matters report emphasizes, two key challenges facing the U.S. are broadband availability and adoption. . . . Up to 24 million Americans couldn’t even get broadband if they wanted it. And even where broadband is available, too many Americans are not adopting. Roughly 1 in 3 Americans has not adopted broadband, nearly 100 million people. The adoption rate is even lower among certain communities — low-income Americans, rural areas, minorities, people with disabilities.”

You can hear the echoes of Obamacare’s coverage mandate. If “too many Americans” don't have something, or don't even want something, then the government should enable them to have it — or force them to have it.

“We’ve got our work cut out for us, but with the help of the organizations here, I’m confident we’ll get the job done.”

What job? Forcing high-speed internet access on people who haven’t asked for it . . . and may not be able to afford it, or want to afford it, instead of other things? Arranging big-ticket boondoggles that will make work for self-interested groups like the CWA? Flouting the appeals court decision and dictating terms of operation to ISPs — while hiding behind anodyne jargon such as “rules of the road”?

Whatever the answer, Genachowski’s “job” seems to make a mockery of his statement in a February 2010 interview with The Wall Street Journal, in which he said, for once in plain language, “We’re not going to regulate the Internet.” Maybe, behind all the bullshit, he doesn’t even understand what “regulate” means.

The most difficult part of reading through the scores of speeches and press releases of Chairman Genachowski is enduring the constant repetition of the same tired rhetoric, the same meaningless clichés. It’s easy to see why statists — and all politicians — become cynical. Repeating the same stupid phrases over and over again must rob any promise, even any concept, of meaning. And this is “work.” This is a “job.”

A quick bit of history. The FCC was created in 1934 to allocate and regulate use of the radio spectrum (then a scarce commodity, though essential to a cutting-edge technology) and to police broadcast signals. That was, arguably, a defensible regulatory role. But since those early days, FCC bureaucrats (including most of the agency’s chairmen) have pushed at every edge to expand their role. And they have usually been itchy to regulate the content that broadcasters send across the airwaves. This constant urge to regulate content negates the more humble, technology-focused purpose that the FCC is supposed to serve.

Just a few days after dishing out gross flattery to the CWA, Genachowski did his master’s bidding and vomited up an Order establishing “basic rules of the road to preserve the open Internet as a platform for innovation, investment, competition, and free expression.”

When he first took the FCC Chair, he had described “net neutrality” as a set of rules that would prohibit ISPs from tightening access for applications — such as BitTorrent — that they found undesirable. And his scheme seemed to apply to both wired and wireless networks. But the Comcast decision threw a wrench into those grand plans, so Genachowski claimed unconvincingly to have reconsidered his position and become a moderate dealmaker with a light regulatory touch.


His story isn’t supported by reality. The FCC’s December 2010 Order prohibits ISPs from blocking content, requires them to disclose how they filter traffic, and bans them from “unreasonable” discrimination against applications and web sites. And the FCC gets to make up what’s reasonable and unreasonable as it goes along. (Wireless internet service providers are largely exempt from the Order.)

So ISPs may own the hardware of the Internet, but the FCC controls how that hardware is used. And the present FCC Chairman favors application suppliers — such as BitTorrent. And Google. And the sites run by IAC/InterActiveCorp. This couldn't give the FCC any power to control free expression or free innovation, could it?

The Order — which, like agency policies before it, does not have the weight of law — passed on a 3-to-2 vote among the FCC commissioners. It was a party-line vote, with the two commissioners appointed by Republican presidents voting against and the three appointed by Democrats for. Commissioner Robert McDowell, who voted against the Order, predicted that it would result in an “era of regulatory arbitrage.”

Other critics said that Genachowski’s Order gives the FCC a tool to regulate content and, echoing the Comcast decision, pointed out that the agency has no legal authority over the internet in the first place. One critic aptly compared Genachowski’s Order to a rule forcing FedEx and UPS to treat all packages in the same way the Postal Service does.

In response to these criticisms, the FCC organized faint praise from leftwing thinktanks that had supplied Genachowski with many of his talking points. Harold Feld, a talking head from a thinktank called Public Knowledge, said:

“[The Order is] hardly more than an incremental step beyond the Internet Policy Statement adopted by the previous Republican FCC. After such an enormous build up and tumultuous process, it is unsurprising that supporters of an open Internet are bitterly disappointed — particularly given the uncertainty over how the rules will be enforced.”

Comments like this were supposed to support Genachowski’s claim that he was acting as an honest broker trying to work out a compromise — just as Obama had tried to position himself regarding the Patient Protection and Affordable Care Act. In both cases, the claims were false; the “compromises” split trivial differences between similar visions of corporate welfare. In the case of net neutrality, Democrats said that Republicans were protecting the interests of the cable and phone companies that are the main providers of broadband internet service to American households. Republicans said that Democrats were protecting application companies such as Google, Netflix, and BitTorrent, which have become successful in an era of unregulated internet and want to raise barriers against potential competitors.

Genachowski’s Order drew the attention of Congress. And not in a good way. In April 2011, the House of Representatives approved House Joint Resolution 37 — which prohibits the FCC from regulating how internet service providers manage their broadband networks. This action was aimed squarely at thwarting Genachowski’s power grab. Rep. Greg Walden — the resolution’s author — told the New York Times:

“Congress has not authorized the Federal Communications Commission to regulate the Internet. [Genachowski’s Order] could open the Internet to regulation from all 50 states.”

Walden went on to say that, in his opinion, the Order was an Obama administration attempt to use the regulatory process “to make an end run around” the Court of Appeals ruling in Comcast.

At about the same time, a separate congressional inquiry forced Genachowski to answer questions about whether White House officials had improperly influenced the net neutrality rules. Rep. Darrell Issa — chairman of the House Oversight Committee — pointed to media reports that suggested “Obama administration officials had knowledge of and potentially contributed to [the] crafting of” the FCC’s rules in this area. Issa also noted that Genachowski and Obama had made suspiciously similar remarks about the rules in separate speeches made during the fall of 2009. And he asked pointedly whether former White House economic adviser Larry Summers had been the conduit with the FCC, planning Genachowski’s net neutrality Order.

Genachowski took a sleazy, legalistic tone in evading Issa’s questions. He whined that the Communications Act of 1934 “does not prohibit communications between commissioners and commission staff and members of the administration.” He said that the FCC’s rules requiring disclosure of such communications did not take effect until the release of a “Notice of Proposed Rulemaking.” Since the Notice of Proposed Rulemaking on net neutrality was issued in October 2009, he claimed that he didn’t have to explain any meetings that had taken place before that date. And finally — sounding like a minor-league version of Bill Clinton playing games with verb tenses — Genachowski said that the FCC’s “Office of General Counsel is not aware of any potential violations of the ex parte rules in connection with the subject matter.”

According to committee staffers, Issa didn’t expect candid or complete answers from Genachowski. The purpose of his questions was to show the FCC that Congress was aware of its attempted power-grab. But Genachowski ignored that message. He’s still grasping at more regulatory power.

Free Press, a leftwing thinktank that has an extremely close and influential relationship with Genachowski’s FCC, has suggested that the agency should try to move broadband service into the same regulatory category as telephone lines. Rather than regulating broadband providers as information services under Title I of the Communications Act, Free Press says the FCC should regulate them under Title II as telecommunications infrastructure.

If the FCC does this, the internet would become in effect a public utility. This is a troubling — and exhausting — proposition. The United States doesn’t need yet another whole category of consumer services wrapped in the obscuring cloak of “public utility.” Public utilities are bad for many reasons, not least the fact that bureaucrats like Julius Genachowski consider them tools of social engineering.

Of course, Genachowski is neither wise enough nor honest enough to acknowledge any of this. And that shouldn’t come as a surprise. His grasping careerism is the reason he was chosen for the job.






Toward Prohibition’s End


Marijuana prohibition is coming to an end. I see it in my neighborhood, as a storefront is vacated by an architect and occupied by a purveyor of medical cannabis. I see it politically. Legalization is coming, though exactly when and how is not yet clear.

Washington, my home state, is one of 16 medical-marijuana states (and one of the five that have allowed it since the 1990s, the others being California, Oregon, Alaska, and Maine). That leaves 34 non-medical-marijuana states. Still, the list of medical-marijuana states keeps growing: Arizona in 2010, the District of Columbia and Delaware in 2011.

The opponents of medical marijuana argue that it is a step toward full legalization, and they are right. Politically it is. But the next step is a tricky one.

The problem is the federal law. When California legalized medical cannabis in 1996, it set up a conflict of federalism. Under the Constitution, particularly the Ninth and Tenth Amendments, there ought not to be any federal law about marijuana, but there is. The Controlled Substances Act exists, and the courts uphold it.

In 2005, the US Supreme Court ruled on the federal claim of power over marijuana as medicine. That was the Raich case. A prominent libertarian legal theorist, Randy Barnett, argued at the high court against the federal position, and he had a fine argument. But he lost. There were sharp dissents by justices Sandra Day O’Connor and Clarence Thomas, but the court sided with the government.

Having pushed aside the Constitution, however, the Bush administration did not press its advantage in the field — at least, not to a decisive victory. Then, in 2009 came the Obama administration. In October of that year, the Justice Department said there would be no federal prosecutions of doctors or patients who were following their state’s medical-cannabis laws.

That was taken as more of a favorable signal than it was. In California, storefront dispensaries were opened with big images of marijuana leaves and green crosses in their windows. But the Justice Department’s memo had made no promises to suppliers of marijuana. By 2011 dispensaries had opened in several states, and US attorneys drew the line. They sent letters warning that any business in marijuana would not be tolerated.

I can report what happened in Washington state. It had one of the earliest medical marijuana laws, but it was a law with holes in it. Some of the holes favored the users. For example, the law allowed a provider to serve only one patient. Dispensaries had opened, some of them serving hundreds of patients, on the bold assertion that they were serving one patient at a time. The state law was just vague enough to make this plausible.


The law had another hole that was dangerous for users. It allowed them to raise a medical defense at trial but said nothing about protection from arrest. There was a case about this: State v. Fry. In Colville, a small town in the state’s rural northeastern corner, the cops had come to the door of one Jason Fry, a man who had been kicked in the head three times by a horse. Fry had anxiety attacks and smoked marijuana to calm himself. The cops had heard about it, and at his doorstep they could smell it. Fry showed them his doctor’s letter giving him permission to use it, but they phoned a judge, got a warrant, searched his home, and busted him for having more plants than the state Department of Health allowed.

At the Washington Supreme Court, the question was whether the judge had probable cause to issue the warrant. Only one justice — libertarian Richard Sanders — sided with Fry, arguing that arrest protection was implicit in the measure passed by voters. The other eight sided with the state.

Under the regime of the past few years, in the liberal parts of Washington, particularly around Seattle, medical users have been mostly OK, and in rural counties they have had to take their chances.

The state senator from my Seattle district, one of the most liberal districts in the state, offered a bill to make sense of all this. It would have set up state licensing and regulation of growers, processors, and dispensers of medical marijuana, bringing them into the open. It also called for a voluntary state registry for medical users, to give them protection from arrest. The Democrat-controlled legislature passed the bill and sent it to Democratic Gov. Christine Gregoire. Then the US attorneys in Seattle and Spokane, both of them long-time Democrats, wrote to the governor, warning that under federal law any state employee who licensed a marijuana business would be liable to federal prosecution.

Nowhere had the federal government prosecuted state employees for following state medical-marijuana law. It was possible, but it would be a direct federal-state confrontation. Was the Obama administration ready for that? The press noted that the governor, who previously was the state’s attorney general, might have a personal motive to comply with the Justice Department’s request: she is in her second term, is set to leave office at the end of 2012, and might like a law-related job in a second Obama administration. Whatever her motive, she cited the threat and vetoed the parts of the bill for licensing of marijuana suppliers.

After her veto, the US attorney in Spokane ordered all dispensaries closed, and joined with Spokane Police to raid the ones that defied him. As I write, he has not yet charged anyone with a crime. In liberal Seattle, where voters in 2003 had made simple possession of marijuana the lowest priority for police, the US attorney has so far stood aside while the Democratic city attorney and the Republican county prosecutor — both of them elected officials — work to keep the dispensaries open.

Parallel to the push for medical cannabis has been a drive for general legalization. It has begun in the early medical-marijuana states, using the same tool that won those states over: the voter initiative.

The voters of California, who were the first to approve medical marijuana by public vote, had another such vote in 2010. It was Proposition 19, a measure to allow people over 21 to cultivate, transport, and possess marijuana for personal use, and to allow cities to license commercial grows and dispensaries. Prop 19 garnered 46.5% of the vote. It failed, but not by much: a switch by fewer than 4% of the voters would have put it over the top.


With any complicated measure such as Prop 19, there are many arguments to convince people to vote no. There was the argument about protecting kids, though marijuana is available on the black market now and the measure wouldn’t have legalized it for them. There is always the argument that the measure is flawed, whether or not the principle is right: in California, Mothers Against Drunk Driving opposed the measure on the ground that it didn’t define an illegal THC threshold for drivers. Several other arguments came from recipients of federal money. A school superintendent argued that legal marijuana would prevent the schools from meeting the requirements for federal grants. Business interests argued that they would lose federal contracts because they could no longer guarantee drug-free workplaces. Thus federal contracts and grants become weapons in political campaigns.

In any case it was close, and in the matter of social change, it is common to fail the first time. If you want to win, you try again.

Washington state was behind California, but not by much.

In 2010, two marijuana defense attorneys wrote a voter initiative that would have repealed all state marijuana law for adults over 18. The measure had no regulations in it. The organizers explained that if it had regulations in it, the federal government could challenge them in the courts under the doctrine of federal preemption, and have the regulations and the repeal thrown out. But a simple repeal would leave nothing for the feds to challenge.

One of the attorneys, Douglas Hiatt, said that was how New York and some other states had undermined liquor prohibition. It had worked, and the smart thing was to do it that way again.

The strategy made sense legally, but politically it didn’t work. The American Civil Liberties Union of Washington, which favors legalization, refused to back it because it included no regulations. The pro-legalization forces split. No prominent politicians stood up for the measure, the backers couldn’t raise any money to pay signature gatherers, and they fell short on signatures.

Also in 2010, a state representative from my district introduced a bill in the legislature to legalize and regulate marijuana. It died without a hearing.

In 2011, the two defense attorneys collected signatures for their initiative again, with the same result: they had too little money and fell short. My representative ran her bill again; this time it was endorsed by the state’s largest newspaper, the Seattle Times, which came out for full legalization. The bill failed once more, but it got a hearing, some respectable people testified in its favor, and they were covered in the press.

At the end of the legislative session, a well-connected group, including ACLU-WA, travel entrepreneur Rick Steves, and the former Republican US attorney in Seattle, John McKay, announced a legalization initiative aimed at the state ballot in November 2012. It has regulations in it, including a tight limit on THC in the bloodstream of drivers. But it is legalization for adults over 21 — and backing it are the names to attract money, and to assure wavering voters that it is OK to vote yes.

So it is in Washington state. According to NORML (National Organization for the Reform of Marijuana Laws), it appears that legalization measures will be on the ballot in 2012 in California and Colorado, and perhaps Oregon, Ohio, and Massachusetts.

These efforts are not welcomed by the Obama government. In the matter of civil liberties Obama has not led a liberal administration, and medical marijuana, or any marijuana, is not an issue he cares about. And no matter what the Obama people privately believe about marijuana, their priority is his reelection, which means not being branded as the Dope Smokers’ President.

So far, major politicians have mostly not supported legalization. California’s two Democratic senators, Dianne Feinstein and Barbara Boxer, opposed legalization, as did Republican Gov. Arnold Schwarzenegger and the Democrat who replaced him, Jerry Brown. Washington state’s two liberal Democratic senators, Maria Cantwell and Patty Murray, have given no support to legalization, nor has its Democratic governor. But what will politicians like this do if their people vote for legalization and the feds oppose them?

The smart ones will support their constituents. And that will start having an effect in Washington, D.C., where the endgame will play out.

That endgame is looking more and more likely. Voters in some state are going to approve a legalization measure. And before that, the fight may come over state licensing of growers and dispensers of medical cannabis. Already the federal government is challenged by the dispensaries, and already it fights back, but cautiously and opportunistically. In the medical-marijuana states it has been reluctant to haul in the proprietor of a storefront clinic, charge him with the federal crime of trafficking in forbidden drugs, and ask a jury to convict him and a judge to imprison him. If it is to win this battle, it will have to do that and make it the rule.

Generally the feds have acted where they have support from local politicians. But in some places, including where I live, they no longer have politicians’ support because they no longer have the public’s support. And where medical cannabis is legalized and used, support for prohibition erodes. It is gone among the young, and cannabis for people with cancer and back pain now erodes it among the old.

Expect fireworks ahead.





© Copyright 2013 Liberty Foundation. All rights reserved.



Opinions expressed in Liberty are those of the authors and not necessarily those of the Liberty Foundation.

All letters to the editor are assumed to be for publication unless otherwise indicated.