The Return of Moktada


On January 5th, radical Shia cleric Moktada al-Sadr returned to Iraq from more than three years of self-imposed exile in Iran. He brought with him the specter of renewed violence in that war-torn country.

For those readers who have done their best to forget America’s Iraq misadventure, here’s a bit of background. Al-Sadr is the son of a revered Shia imam who was murdered by Saddam Hussein in 1999. He became prominent by leading Shia opposition to the American occupation after 2003. In 2004 his militia, the Mahdi Army, twice battled U.S. troops. Though not victorious, the Sadrists lived to fight another day. Al-Sadr also avoided arrest by U.S. forces on a warrant issued against him for the murder of another cleric. America thus failed to nip in the bud the young cleric’s militant movement.

During the civil war of 2006–07, the Sadrists carried out brutal sectarian cleansings in Baghdad and elsewhere. Even the onset of the American surge of ground troops in early 2007 failed to slow the pace of the carnage. At the same time, the Mahdi Army began to slip out of al-Sadr’s control; by the summer of 2007 the frenzy of violence caused even many Shia to turn against the Sadrists. Then the weight of American power began to have an effect; many Sadrist cadres were killed or captured by US troops. At the end of August al-Sadr declared a unilateral ceasefire and took himself off to the Iranian holy city of Qom, where he sought safety and the opportunity to polish the rather rough edges he had displayed as a political and religious leader.

In his absence the government of Prime Minister Nouri al-Maliki, given a breathing space by the apparent success of the Surge, was able to consolidate its hold on power. In early 2008 Iraqi government forces, backed by US and British logistical, intelligence, and air support, defeated the Sadrists first in Basra, Iraq’s second-largest city, and then (though less decisively) in Baghdad itself. The Sadrist movement had reached its low point. Even so, it had once again survived. “We may have wasted an opportunity . . . to kill those that needed to be killed,” an anonymous US official stated at the time. Today that official looks more and more like a prophet.

After the Basra and Baghdad defeats the Sadrists eschewed the gun in favor of the ballot. They scored surprising successes in local elections in 2009. Then, in national elections this past March, they emerged as the second largest Shia bloc, barely trailing al-Maliki’s party. As a result, al-Sadr became a kingmaker; Maliki’s reappointment as prime minister in late 2010 was possible only because the Sadrists supported him. In return they received ministerial posts and at least one provincial governorship. They are in the enviable position of having power without true responsibility: if the government succeeds, they will share in the credit; if it fails, they will blame al-Maliki and bring the government down. The Sadrists have made it clear that al-Maliki has only so much time to restore services, revive the economy, and end what’s left of the American occupation.

An anonymous US official stated that “We may have wasted an opportunity . . . to kill those that needed to be killed.”

The question of a continued American presence is a vexing one for all concerned — except the Sadrists. There are fewer than 50,000 US troops left in the country. Under an agreement negotiated by the Bush administration, all US forces are supposed to leave by the end of 2011. The Obama administration has stated that it would consider an extension of the US military presence only if Iraq requests it. Al-Maliki would very much like to see some US troops remain, as would the Kurds and most of the Sunnis. But al-Maliki risks looking like an American puppet if he asks for an extended troop presence. The Sadrists, on the other hand, are unequivocally opposed to any US troops remaining after the Dec. 31, 2011 deadline. Their attitude is not merely designed to appeal to Iraqi nationalist feeling. At some point in the future the Sadrists could decide to seize power. They probably would have a good chance of succeeding, provided US troops are not available to stop them.

The US State Department is supposed to take over the American role in Iraq’s security after 2011. Its active arm will be thousands of contractors (that is, mercenaries) whom it has been hiring and trying to put in place before the last uniformed Americans depart. While the Wikileaks revelations have shown that US diplomats are an intelligent and dedicated group of professionals, the idea of putting diplomats in charge of security in a place like Iraq seems a dicey proposition indeed. The employment of contractors will undoubtedly lead to incidents in which Iraqi civilians are killed. The reaction of the Iraqi populace, and specifically the remaining militias, is all too easy to predict. Recall the burned bodies of American contractors hanging from a bridge in Fallujah in 2004.

The Sunni insurgency, despite heavy blows inflicted by US and Iraqi forces, remains able to carry out widespread and damaging attacks. It may in fact be on the brink of a resurgence, for many Sunnis who joined the pro-US, pro-government Awakening movement have grown disaffected with a Shia-dominated government that has cut back on cash payments and jobs for Sunnis.

We have then the makings of a new explosion in Iraq, with no prospect of an American “Surge II” should the worst occur. Into this maelstrom steps Moktada, the prophet and redeemer of the Shia masses and of the armed fanatics who thirst to avenge past beatings received at the hands of the Americans and al-Maliki. One is reminded of the situation in St. Petersburg in 1917, with al-Maliki in the role of Kerensky and al-Sadr as the “plague bacillus,” Lenin. Admittedly the two men are, for the present, partners, which Kerensky and Lenin never were. But one cannot help but feel that, given the past, their paths must diverge. It may be one, or two, or four years before the situation plays out. But I can’t help but think that one or the other of these men is going to wind up dead.

 

 






A Cigar


In my youth, I was spoiled for a long time. No one really spoiled me. I took care to spoil myself, again and again. But the bitch Reality often intruded.

I was spending a dream summer on a small Mexican island in the Caribbean. Everyone should have at least one dream summer, I think, and no one should wait for old age. I had several dream summers myself. Anyway, my then-future-ex-wife, or TFEW (pronounced as spelled), and I were renting one of four joined concrete cubes right on the beach, on the seaward side of the island.

There was no running water in the cell but you could clear the indoor toilet with a bucket of seawater. You could also buy a bucket of nearly fresh water for a shower. There was a veranda and the doors locked. We slept in our own hammocks outside in the sea breeze most nights, although there was a cot inside. We also cooked on our butane stove on the veranda. We thought it was all cool. There was a million-dollar ocean view (probably an underestimate).

To feed ourselves, we bought pounds of local oranges and bread baked daily. Mostly, I dived for fish and spiny lobster all day. It got to the point where we grew tired of lobster. I even went to the water's edge slaughterhouse early one morning to compete for some shreds of bleeding turtle meat. Turtle meat, it turns out, looks like beef, but it tastes like old fish. Then I invented new ways of cooking lobster. The TFEW was a good soldier who liked reading. Also, her patience was frequently rewarded (but I am too much the gentleman to expand on this).

One morning, I woke up by myself near dawn and prepared my Nescafé, bent down on the small butane stove set on the tiled floor of the veranda. I looked up to the sea for a second and I was hit by a scene from the great Spanish director Luis Buñuel. Less than one hundred yards from me, bobbing up and down but stationary, was a low wooden boat packed with about 50 or 60 people just standing silently. They were not talking, they were not shouting, and they were not moving. It was like a dream, of course, but I knew I was not dreaming. Quickly, details came into focus. One detail was that one of the people in the boat wore a khaki uniform and the characteristic hat of the Cuban militia. Goddamn, I thought, this is what I have been reading and seeing on television for years! It's the real thing!

Then the practical part of my brain took over. I tried to yell at them that my rocky beach was not a good place to land. One made a gesture indicating they could not hear me because of the small breakers. Bravely, I abandoned my undrunk Nescafé and dived into the waters I knew well, because I had taken a dozen lobsters right there, under the same rocks, in front of my door. I did the short swim in a minute or two, and hanging from the side of the boat I told them how to go around a nearby point past which there was a real harbor. They thanked me in a low voice, like very tired people, in a language that was clearly Spanish but that sounded almost comical to my ears.

An hour later, I walked to the harbor where the main café also was to find out about my refugees. Naturally, I felt a little possessive of them since I had discovered them all by myself. Soon after I arrived, they started coming out of processing by the local Mexican authorities. (Incidentally, I think I witnessed there a model of humane efficiency worth mentioning.) Each walked toward the café, an envelope of Mexican pesos in hand.

A tall, skinny black Cuban recognized me from earlier in the morning, when I was in the water. He walked briskly to me and took me in his arms. It was moving but pretty natural, since I was the first free human being he had laid eyes on in his peril-fraught path to freedom. He spoke very quickly with an accent I was not used to. What perplexed me is that he kept saying “negro,” with great emotion. After a few seconds in his embrace, I realized he was calling me “mi negro.” I wondered for an instant how I had become a Negro's Negro. Then it came back to me, out of some long buried reading, that Cuban men sometimes call their mistress “mi negra,” irrespective of color, the overt color reference serving as a term of endearment, of tenderness.

I took my new buddy to the café to buy him breakfast. He pulled out my chair ceremoniously and took an oblong metallic object out of the breast pocket of his thin synthetic shirt. This he handed to me with tears in his eyes. Inside was a long Cuban cigar. I did not have the heart to tell him I did not like cigars. I smoked the damn thing until my stomach floated in my throat. He watched beatifically, in the lucid understanding that that little act testified to his personal victory against the barbarism of communism.






Latin America: Autumn of the Antipodes?


In response to last November’s flooding, mudslides, and destruction of homes that left tens of thousands of Venezuelans destitute in makeshift shelters, Hugo Chávez went camping to show solidarity with his people. The luxurious tent was a gift from Libyan strongman Muammar Gaddafi. Piling insult atop bad grace, Chávez was photographed inspecting the devastation in a Cuban(!) military uniform. He is not one to dwell on the negative. Instead of looking grave, concerned, and statesmanlike, he pursed his lips and spread his cheeks in a smirk that bordered on the manic, sticking his head out the window of a Jeep like a dog without good sense. Not since Mexican President Antonio López de Santa Anna (he of the Alamo) buried his leg with full military honors has a Latin American leader been this much fun.

But for Chávez, the bigger crisis is his impending loss of power. Though the 2010 parliamentary elections netted his United Socialist Party 95 seats, the opposition, successfully united, won 64 seats, thus depriving Chávez of the two-thirds and three-fifths majorities required to pass organic and enabling legislation or fiddle further with the constitution. But never mind, Hugo has a plan.

He requested that the lame-duck legislators grant him unlimited powers to rule by decree for 18 months, limit legislative sessions to four days a month, turn control of all parliamentary commissions over to the executive branch, limit parliamentary speeches to 15 minutes per member, restrict broadcasts of assembly debates to only government channels, and penalize party-switching by legislators with the loss of their seats. The lame-duck legislators dutifully complied. Vice President Elías Jaua says the powers are necessary to pass laws dealing with vital services after the disaster and with such areas as infrastructure, land use, banking, defense, and the “socio-economic system of the nation.” For good measure, the lame-duck Chavista legislature also passed a law barring non-governmental organizations such as human rights groups from receiving US funding; another law terminating the autonomy of universities; more broadcasting and telecommunications controls; and the creation of “socialist communes” to bypass local governments.

Not since Santa Anna buried his leg with full military honors has a Latin American leader been this much fun.

Opposition newspaper editor Teodoro Petkoff called it a “Christmas ambush,” writing in his daily Tal Cual that Chávez is preparing totalitarian measures that amount to “a brutal attack . . . against democratic life.” Chávez’s end-run around the new National Assembly, which convened on January 5, was blatantly illegal, as such emergency powers can only be granted by the legislature to cover a period within the term of the legislature in office. Chávez demanded powers that extend well beyond the previous legislature’s term and effectively emasculate the new legislature, which would never have given him the two-thirds vote he would need, since 40% of its members are in opposition. Venezuelans have reacted with roadblocks and peaceful but energetic mass resistance. Security forces and government thugs have counter-reacted violently. Many people have been injured, not only physically but economically as well. On New Year's Eve the bolívar was devalued from 2.6 to the dollar to 4.3.

The most infamous precedent for this maneuver was the German Reichstag’s March 1933 enabling law granting Adolf Hitler the right to enact laws by decree for four years, making him dictator of Germany. No doubt the affair will end up in court — decided by Chávez-appointed judges. Still, it’s only a matter of time before the ship of state either turns or crashes.

By contrast, Sebastián Piñera, Chile’s new president, responded to the March 2010 earthquake with grace and alacrity; and six months later rallied the country behind the 33 miners trapped for 70 days in a deep mine cave-in. Unlike President Obama, in his autarkic response to the BP fiasco, Piñera requested and received international assistance. But Piñera is perhaps more notable as the poster boy of a subtle, newly evolving trend throughout Latin America, a trend only now being recognized: the “normalization” of its politics.

Normalization means the peaceful alternation of center-left with center-right governments, which is the status quo in most developed, liberal democracies. Piñera, a center-right candidate, followed two decades of center-left government.

By definition, normalization is dull, boring, and bereft of transformational ideals. But it is nonetheless great news, especially when compared to the radical swings of the past, when left-wing revolutions followed right-wing golpes de estado (or vice versa). The inevitable mayhem, war, and death were always followed by authoritarian regimes.

Chávez demanded powers that extend well beyond the previous legislature’s term and effectively emasculate the new legislature.

In Chile, the Marxist government of Salvador Allende, which had begun to forcibly expropriate property, was overthrown by a military coup after inflation exceeded 140%. To restore order, General Augusto Pinochet brutally imposed a right-wing authoritarian regime. To his credit, however, he laid the groundwork for the political and fiscal stability Chile now enjoys. He invited the so-called Chicago Boys — Chilean economists trained by Milton Friedman and his acolytes — to design a stable and prosperous economic framework, and he relinquished power slowly and honestly by means of a new constitution and open plebiscites. In 1988 Pinochet lost a plebiscite on extending his rule, and power passed to the Concertación, the center-left coalition that would hold it until Piñera’s election.

As Fernando Mires, a Chilean political science professor at the University of Oldenburg, Germany, has observed, “Everything that does not directly deal with war and death, is a game.” Politics is a game, and games require rules. Once war and death enter the scene, the game of politics is over. Latin America is now anteing up to the table. Though not always perfectly correlated, political stability goes hand-in-hand with some degree of fiscal and institutional stability — preconditions for people’s ability to lead healthy, productive lives.

Independence and Chaos

When Father Miguel Hidalgo’s Grito de Dolores declared Mexican independence from Spain on September 16, 1810, it began a protracted independence movement throughout the continent. Two days later, Chile, at the other end of Latin America, instituted de facto home rule. To be sure, Haiti had already defeated Napoleon in 1804 to gain independence, and Cuba would throw off Spanish rule with US help as late as 1898 (and, some would argue, didn’t actually achieve full independence until the Castro regime nullified the Platt Amendment, which had given the United States a veto over Cuban foreign affairs). But the main course of events took place within two decades.

In 1821, Spain recognized Mexico’s independence. By 1823, after it had recognized the independence of much of the rest of Latin America (with Portugal ceding Brazil in 1822), US president James Monroe felt comfortable enough to declare the Americas a European-free zone, in spite of Spanish forces still holding out in what was to become Bolivia.

Latin American independence movements were products of the Enlightenment, influenced by the US Declaration of Independence and subsequent constitution — in the context of the times, left-wing revolutions. But, as Marxist commentators never fail to decry, the American revolutions were not “true,” social revolutions, but rather bourgeois realignments. The original Spanish conquest had left most of the basic indigenous structures of authority intact, replacing Moctezuma and Atahualpa with the throne of Madrid. Latin American independence movements recapitulated that strategy, replacing the Spanish aristocracy with homegrown landed gentry.

Meanwhile, a new model of revolution had emerged: the French Revolution, in which the ideals of the Enlightenment metastasized into a nightmare. The monarchy was decapitated, the ancien régime swept away, an empire founded. Traditional concepts of how societies ought to be organized had been put aside.

In Latin America, would-be liberators, criticized from both Right and Left, became disillusioned and turned away from democracy. Up north, Agustín de Iturbide, the Mexican heir of a wealthy Spanish father, switched sides to fight for Mexican independence and declared himself emperor of Mexico; Santa Anna, another side-switcher, overthrew him and established a republic, then a dictatorship. Santa Anna ended up ruling Mexico on 11 non-consecutive occasions over a period of 22 years. Asked about the loss of his republican ideals, he declared,

It is very true that I threw up my cap for liberty with great ardor, and perfect sincerity, but very soon found the folly of it. A hundred years to come my people will not be fit for liberty. They do not know what it is, unenlightened as they are, and under the influence of a Catholic clergy, a despotism is the proper government for them, but there is no reason why it should not be a wise and virtuous one.

This general sentiment came to be shared by most of Latin America’s liberators.

Meanwhile, Central America (including the Mexican state of Chiapas but excluding Panama), formerly the Captaincy General of Guatemala, seceded from Mexico, becoming “The Federal Republic of Central America” after a short-lived land grab by Mexican Emperor Iturbide, who pictured his empire extending from British Columbia to the other Colombia. The Federal Republic didn’t last. In the 1830s Rafael Carrera led a revolt that sundered it. By 1838, Carrera ruled Guatemala; in the 1860s he briefly controlled El Salvador, Honduras, and Nicaragua as well, though they remained nominally independent.

Not one to be left behind, the Dominican Republic jumped on the bandwagon in 1821, but was quickly invaded by Haiti. Not until 1844 was the eastern half of Santo Domingo able to go its own way. Sandwiched between Cuba and Puerto Rico (both still held by Spain), in 1861 the Dominican Republic — in a move unique in all Latin America — requested recolonization, having found the post-independence chaos untenable. Spain gladly acquiesced. The US protested but, mired in its own civil war, was unable to enforce the Monroe Doctrine. In 1865, the Dominican Republic declared independence for a second time.

Sebastián Piñera, Chile’s new president, represents a subtle trend only now being recognized: the “normalization” of Latin American politics.

South America fared no better. Simón Bolívar, after a series of brilliant campaigns that criss-crossed the continent, created the unstable Gran Colombia, a state encompassing modern Colombia, Panama, Venezuela, and Ecuador, with himself as president — a model Hugo Chávez aspires to emulate. Bolívar then headed to Peru to wrest power from José de San Martín, its liberator. Bolívar was declared dictator, but the Spanish still held what is now Bolivia. He finished San Martín’s job by liberating it and separating it from Peru. The new state was christened with his name. By 1828, Gran Colombia proved unmanageable, so Bolívar declared himself dictator, a move that ended in failure and more chaos.

Southern South America was liberated by San Martín and Bernardo O’Higgins, with Chile and Argentina going their separate ways. In Chile, O’Higgins turned from Supreme Director into dictator, was ousted, and was replaced by another dictator. A disgusted San Martín exiled himself to Europe, abandoning Argentina to a fate of civil war and strongmen. Uruguay and Paraguay carved themselves a niche — but only after Uruguay’s sovereignty had been contested by newly independent Brazil. In Paraguay, José Rodríguez de Francia, Consul of Paraguay (a title unique in Latin America), became in 1816 “El Supremo” for life. An admirer of the French Revolution — and in particular of Rousseau and Robespierre — he imposed an extreme autarky, closing Paraguay’s borders to all trade and travel, abolishing all military ranks above captain, and insisting that he personally officiate at all weddings. He also ordered all dogs in the country to be shot.

Strongmen and Stability

The Latin American wars of independence were succeeded by aborted attempts at unity or secession; wars of conquest, honor, and spite; land grabs, big and little uprisings, civil wars; experiments in democracy, republicanism, federation, dictatorship, monarchy, anarchy, and rule by warlords or filibusters; and even reversion to colonialism; all with radical “left-right” swings — in a word, by every imaginable state of affairs, none long lasting. It all culminated in the era of the caudillo: a populist military strongman, usually eccentric, sentimental, long-ruling, and (roughly speaking) right-wing.

In his novel Autumn of the Patriarch, Colombian author (and confidant of Fidel Castro) Gabriel García Márquez offers a profile of a caudillo that has yet to be surpassed. The stream-of-consciousness, 270-page, six-sentence prose “poem on the solitude of power” was based on Colombia’s Gustavo Rojas Pinilla (1953–57) and Venezuela’s Juan Vicente Gómez (1908–35), with dashes of Franco and Stalin thrown in. But its indeterminate timelessness, stretching from who-knows-when to forever, also evokes Mexico’s Porfirio Díaz (1876–1911), Paraguay’s Alfredo Stroessner (1954–89), the Dominican Republic’s Rafael Trujillo (1930–61), and Nicaragua’s Anastasio Somoza (1936–56). It could also easily include Brazil’s Getúlio Vargas (1930–54), Argentina’s Juan Perón (1946–55 and 1973–76), Haiti’s “Papa Doc” and “Baby Doc” Duvalier (1957–86), and, yes, the longest-ruling military strongman of all — Fidel Castro (1959–201?).

The caudillo period had no specific time frame; it was rather a response to instability (or injustice, in the case of left-caudillos) that varied over time, country, and cultural conditions. Take Mexico for example. Besides the usual post-independence chaos, it also suffered invasions from the US and France. So, in 1846, Porfirio Díaz, an innkeeper’s son and sometime theology student, left his law studies to join the army — first, to fight the US, then to fight Santa Anna in one of the latter’s multiple bids for power, and finally to fight the French-imposed Emperor Maximilian.

Politics is a game, and games require rules. Once war and death enter the scene, the game of politics is over.

In the war against Maximilian, Díaz rose to become a division general under Benito Juárez’s leadership but retired after Mexican forces triumphed and Juárez assumed the presidency in 1868. It didn’t take long for Díaz to become disillusioned. One principle that had developed in Mexican politics — and that, ironically, considering the nearly 35 years in power Díaz would enjoy, became institutionalized — was the one-term presidential limit. So when Juárez announced for a second term in 1870, Díaz opposed him. Losing, he cried fraud and issued a pronunciamento, a formal declaration of insurrection and plan of action accompanied by the pomp and publicity emblematic of Mexican politics. After another pronunciamento, additional revolts, much politicking, and a term in Congress, Díaz succeeded in ousting his adversaries. He was elected president in 1877. Having based his campaign on a platform of “no reelection,” he reluctantly stepped aside after one term and turned over the presidency to an underling, whose incompetence and corruption ensured Díaz’s victory in the 1884 contest.

He set out to establish a pax Porfiriana by (as he termed it) eliminating divisive politics and focusing on administration. The former was achieved by stuffing the legislature, the courts, and high government offices with cronies; making all local jurisdictions answerable to him; instituting a “pan o palo” (bread or a beating) policy, enforced by strong military and police forces; artfully playing the various entrenched interests against each other; and stealing every election. Porfirio Díaz opened Mexico up to foreign investment, built roads and public works, stabilized the currency, and developed the country to such a degree that it was compared economically to Germany and France.

Classifying caudillos as Left or Right is not always easy. Caudillos who focused on economic development, fiscal stability, and monumental public works are generally perceived as right-wing, while those who improved education, fought church privilege, or imposed economic controls are perceived as left-wing. Nearly all were initially motivated by idealism, followed by disillusionment with democracy and addiction to power. Nearly all lined their pockets. Venezuela alone, between 1830 and 1899, experienced nearly 70 years of serial caudillo rule, which, some would argue, continued intermittently to the present.

In Ecuador, 35 right-wing years initiated by a caudillo were followed by 35 left-wing years initiated by another caudillo. General Gabriel García Moreno had saved the country from disintegration in 1859 and established a Conservative regime that wasn’t overthrown until 1895, when Eloy Alfaro led an anti-clerical coup. Alfaro secularized Ecuador, guaranteed freedom of speech, built schools and hospitals, and completed the Trans-Andean Railroad connecting the coast with the highlands. In 1911, his own party overthrew him and further liberalized the regime by opening up the economy. The Liberal Era lasted until 1925. Altogether, Alfaro initiated four coups — two succeeded, and one finally killed him — that made him the idol of Rafael Correa, Ecuador’s present left-wing president.

One right-wing caudillo, the Dominican Republic’s Rafael Trujillo (1930–61), was prematurely Green, restricting deforestation and establishing national parks and conservation areas in response to the ravages in next-door Haiti. His successor (after a five-year, chaotic interregnum that included a civil war and US Marines) was Joaquín Balaguer, an authoritarian who dominated Dominican politics until 2000 and continued Trujillo’s conservation policies.

Some caudillos combined elements from both Left and Right, coming up with ideologies that were internally inconsistent but extremely popular. Argentina’s Perón absorbed fascism, national socialism, and falangism while stationed as a military observer in Italy, Germany, and Spain. Back in Argentina he allied himself with both the socialist and the syndicalist labor movements to create a power base. In 1943, as a colonel, he joined the coup against conservative president Ramón Castillo, who had been elected fraudulently.

In Paraguay, José Rodríguez de Francia closed the borders to all trade and travel, abolished all military ranks above captain, and insisted that he personally officiate at all weddings. He also ordered all dogs in the country to be shot.

When Perón announced his candidacy for the 1945 presidential elections as the Labor Party candidate, the centrist Radical Civic Union, the Socialist Party, the Communist Party, and the conservative National Autonomous Party all united against him — to no effect. As president, his stated goals were social justice and economic independence; in fact, he greatly expanded social programs, gave women the vote, created the largest unionized labor force in Latin America, and went on a spending spree that nearly bankrupted Argentina (it included modernizing the armed forces, paying off most of the nation’s debt, and making Christmas bonuses mandatory). Perón also nationalized the central bank, railways, shipping, universities, utilities, and the wholesale grain market. By 1951, the peso had lost 70% of its purchasing power, and inflation had reached 50%.

During the Cold War, Perón refused to pick either capitalism or communism, instituting instead his “third way,” an attempt to ally Argentina with both the United States and the Soviet Union. Today, Peronism remains a vital force in Argentina, with President Cristina Fernández at its helm.

Sandino Lives!

Not that caudillismo needed any intellectual justification, but the social Darwinism that developed during the late 19th century helped to rationalize many of the abuses committed under its aegis. Then, fast on its heels and in rebuttal to it, Marxism burst on the scene, invigorating the Left by advocating the forcible redistribution of wealth. The Left-Right divide widened, and conflict sharpened.

In 1910, old and ambivalent about retiring, Porfirio Díaz decided to run once more for president of Mexico. When he realized that his opponent, Francisco Madero, was set to win, he jailed him on election day and declared himself the winner by a landslide. But Madero escaped and, from San Antonio, Texas, issued his Plan de San Luis Potosí, a pronunciamento promising land reform. It ignited the Mexican Revolution.

Though not specifically Marxist, the Mexican Revolution has been interpreted as a precursor to the Russian Revolution. Its ideologies — especially “Zapatismo” — were part of the progressive, Fabian, and socialist zeitgeist of the time. In fact, however, the Mexican Revolution — a many-sided civil war that lasted ten years — was so indigenously Mexican as to elude historians’ broader interpretive models. Yet it was the first effective and long-lasting leftist Latin American movement. Its successors include Cuban communism, liberation theology, Bolivarian socialism, and many others. Out of it coalesced Mexico’s Institutional Revolutionary Party (PRI), heir to a coalition of forces and ideologies that were, at last, fed up with fighting. The PRI, a member of the Socialist International, instituted de facto one-party rule and controlled Mexico for over 70 years.

Other radical leftist revolutionary movements followed — some sooner, some later, not all successful — operating either by force or through the ballot. The earliest (1927) was that of the Sandinistas in Nicaragua. Augusto César Sandino identified closely with the Mexican Revolution. Although he was not a Marxist, his movement adopted that ideology after his death. Five years later, in next-door El Salvador, Farabundo Martí (a Communist Party member and former Sandinista) led a peasant revolt; the Farabundo Martí National Liberation Front would later take his name. Guatemala followed in 1944 with the Jacobo Arbenz coup, then Cuba in 1953 with Castro’s insurrection.

With Castro’s accession to power in 1959, the outbreak of Marxist revolts in Latin America intensified. During the 1960s the Tupamaros rose in Uruguay. In Peru, various groups, including the Shining Path, revolted. The FARC, ELN, and M-19 followed in Colombia. In 1967, Fidel’s own Che Guevara met his death while trying to organize a premature revolution in Bolivia. Then, in 1970, Chileans voted in — by only 36%, a plurality in a three-way race — the first elected Marxist regime in the Americas.

Venezuela was next. Hugo Chávez launched his first, unsuccessful coup in 1992. After a stint in jail he was pardoned, ran for president in 1998, and won.

In Bolivia, Evo Morales, a former trade union leader, and his Movement Toward Socialism won the 2005 elections with a majority.

Latin American Marxism, unlike the European sort, has little to do with the industrial revolution or conditions of the working class. Not only is it currently more tolerant of religious belief; it is more relaxed about ideology and — again, currently — lacks gulags and killing fields. It is more about land distribution and “Social Justice” — a term whose words, innocuous and benign in themselves, don bandoliers and carbines and become fighting words when capitalized.

Social Justice is the concept of creating a society based on the principles of equality, human rights, and a “living wage” through progressive taxation, income and property redistribution, and force; and of manufacturing equality of outcome even in cases where incidental inequalities appear in a procedurally just system.

The term and modern concept of “social justice” were created by a Jesuit in 1840 and further elaborated by moral theologians. In 1971 Peruvian priest Gustavo Gutiérrez justified the use of force in achieving Social Justice when he made it a cornerstone of his liberation theology. As a strictly secular concept, Social Justice was adopted and promulgated by philosopher John Rawls.

The Other Path

Mario Vargas Llosa is a Peruvian writer and 2010 Nobel laureate — pointedly awarded the prize for his literary oeuvre, as opposed to his political writings, but this from a committee that awarded Barack Obama a Peace Prize for nothing more than political penumbras and emanations. Vargas Llosa started as a man of the Left. His hegira from admirer of Fidel Castro to radical neoliberal candidate for president of Peru in 1990 is a metaphor for Latin America’s own swing of the pendulum today.

In 1971 he condemned the Castro regime. Five years later, he punched García Márquez (patriarch of Marxist apologists) in the eye. Their rupture has never been fully healed (or explained), but some attribute it to diverging political views. In 1989, when Peruvian economist Hernando de Soto published his libertarian classic, The Other Path (an ironic allusion to the Shining Path guerrilla movement), Vargas Llosa wrote its stirring introduction. He and de Soto advocated individual private property rights as the solution to the property claims of the Latin American poor and Indians. Both the fuzzy squatters’ rights of the urban poor and the traditional subsistence-area claims of indigenous communities were being — literally — bulldozed by corrupt or insensitive governments; the two authors believed that the individual occupiers of the land should own it as their private property. This proposed solution did not sit well with the Social Justice crowd. To them, communal rights trumped individual rights.

But it struck a chord with the poor and dispossessed. So Vargas Llosa declared for the presidency in 1990 on a radical libertarian reform platform (the Liberty Movement). In Peru, the Shining Path guerrillas were terrorizing the country, and the economy was a disaster, having been run into the ground by the left-wing populist president Alan García. In the outside world, Soviet communism and its outliers were disintegrating, both institutionally and ideologically. Between García’s party and Vargas Llosa in the three-way race stood Alberto Fujimori, the centrist outsider. Vargas Llosa took the first round with 34%, nearly the same plurality that had put Allende into office in next-door Chile. But he lost the runoff, handing Peru over to the authoritarianism (as well as the reforms) of the Fujimori regime.

Without skipping a beat, and less than a month later, Vargas Llosa attended a conference in Mexico City entitled “The 20th Century: The Experience of Freedom.” This conference focused on the collapse of communist rule in central and eastern Europe. It was broadcast on Mexican television and reached most of Latin America. There Vargas Llosa condemned the Mexican system of power, the 61-year rule of the Institutional Revolutionary Party, and coined “the phrase that circled the globe”: “Mexico is the perfect dictatorship.” “The perfect dictatorship,” he said, “is not communism, not the USSR, not Fidel Castro; the perfect dictatorship is Mexico. Because it is a camouflaged dictatorship.”

But the “perfect dictatorship” was already loosening its grip. Recent PRI presidents had been well-degreed in economics and public administration, as opposed to politics and law. They had already moved Mexico rightward, to the center-left, by privatizing some industries and liberalizing the economy — especially by joining NAFTA. By the 1994 election, the PRI had opened up the electoral system to outside challengers: the center-right National Action Party (PAN) and the strong-left Party of the Democratic Revolution (PRD). In the 2000 elections the PRI ceded power to the PAN’s Vicente Fox, though not entirely.

The popular but hapless Fox ended Mexico’s last Marxist uprising, Subcomandante Marcos’ Zapatista Army of National Liberation (they now sell t-shirts and trinkets to finance their anti-capitalist jihad). But he was unable to further the rest of his reform agenda through the PRI-controlled legislature. So Mexico reelected the PAN in 2006. Today, in a move emblematic of Latin America’s change to European-style, alternating center-left/center-right administrations, the PAN and the PRD are exploring avenues of cooperation to pass legislation through the PRI-controlled Congress.

But Vargas Llosa wasn’t through yet. During one of his marathon television tirades in 2009, Hugo Chávez challenged Vargas Llosa to a debate on how best to promote Social Justice. When Vargas Llosa accepted, Chávez — in his most humiliating public move to date — declined.

Ho-hum

Peru’s increasingly discredited Fujimori resigned after corruption scandals, a questionable third presidential term, and one exercise of disproportionate force too many. He was followed by Alejandro Toledo, an economist so centrist and dull that he bored his people into not reelecting him. By the 2006 elections, Peru’s centrist politics were entrenched in the most ironic of ways. Alan García, the disastrous, populist left-wing ex-president, ran on a center-right, neoliberal platform — and won. And against all odds, he kept his word. In 2009 Peruvian economic growth was the third highest in the world, after China and India. In 2010 it remained in double figures. The 2011 elections won’t include García, as he can’t succeed himself. They are expected to be contested by the technocratic Toledo and the center-right Keiko Fujimori, Alberto’s daughter and leader of the Fujimorista Party.

It’s much the same — with few exceptions — in the rest of Latin America. Brazil’s wildly popular, fiscally prudent, and social justice-sensitive center-left Lula da Silva administration was reelected, this time led by Dilma Rousseff, Brazil’s first female president. She has promised more of the same. In next-door Paraguay, the exceptionally long-ruling (61 years) Colorado Party ceded power in 2008 to the country’s second-ever left-wing president, Fernando Lugo, an ex-bishop and proponent of liberation theology. But Lugo has moved to the center, distancing himself from Chávez and tempering his social and fiscal promises by seeking broad consensus. GDP growth in 2010 was 8.6%.

In 2009 Uruguay elected as president José Mujica, a former Tupamaro guerrilla. But Mujica, described by some as an “anti-politician,” has moved radically to the center. The tie-eschewing, VW Beetle-driving president has promised to cut Uruguay’s bloated public administration dramatically. He identifies with Brazil’s Lula and Chile’s Bachelet rather than Bolivia’s Morales or Venezuela’s Chávez. After 6% growth in 2010, Uruguay is expected to level off at 4.4% in 2011.

With the unexpected death of her husband and her disastrous left-wing populist policies (inflation is close to 30%), Argentina’s Fernández is not expected to win reelection in 2011. Reading the writing on the wall, she (unlike Chávez) is tiptoeing toward the center.

In Colombia, the feared authoritarian tendencies of Álvaro Uribe turned out to be wildly exaggerated; and his successor, Juan Manuel Santos, has moved even closer to the center. The two — Santos was Minister of Defense under Uribe — brought the FARC insurgency to its knees, reducing the guerrillas to little more than extortionists and drug dealers. With Colombia’s newfound safety, high growth, and low inflation, its tourist industry is booming.

El Salvador, long the archetype of extreme polarization between the now-peaceful Marxist revolutionaries of the FMLN and the right-wing, ex-paramilitary ARENA coalition, elected Mauricio Funes in 2009. Funes, the FMLN’s surprise candidate, ran on a centrist platform and has stuck to it — throwing ARENA into disarray. He enjoys a 79% approval rating, which makes him Latin America’s most popular leader. Neighboring Honduras, after deposing a power-grabbing Chávez clone in 2009, elected the center-right Pepe Lobo, who promised reconciliation and stability. Even Guatemala shows signs of progress. The 2007 elections brought in Álvaro Colom, the first center-left president in 53 years.

Costa Rica, long Latin America’s exemplar of democracy and moderation, is becoming ever more so. The 2010 elections made Laura Chinchilla Costa Rica’s first female president (one even more stunningly beautiful than Argentina’s Fernández). In spite of being socially conservative, she continues Óscar Arias’ vaguely center-left policies. With the traditional center-right and center-left parties always closely vying for power, the libertarian Partido Movimiento Libertario (PML), which retains a 20% popular-vote base (and 10% of the legislature), has emerged as the policy power broker in the Congress.

Latin American politics’ move to the center is even mirrored in its ancillaries. The Cuban American National Foundation, largest of the Cuban diaspora’s political representatives, abjured the use of force after the death of its founder, Mas Canosa, and advocates a more open US policy toward Cuba.

Not all is good news. Cuba is showing only microscopic hints of change (as reported in Liberty’s December issue), Chávez’s power play in Venezuela after his electoral defeat has yet to play out, Bolivia’s Evo Morales holds steady after a barely avoided civil war, and Nicaragua’s anti-capitalist tyrant Daniel Ortega is bound and determined to hold onto power come what may. But their days, too, are numbered.






The Capital Gang


For me “I’m a libertarian and I support the Washington Redskins” is right up there with “I’m from the government and I am here to help.” It makes my shoulders twitch and I feel creepy-crawlies run up and down my spine.

It all started in the run-up to Super Bowl XVIII, played at Tampa Stadium on January 22, 1984. The heavily favored, patrician, ruling-class Redskins faced the underdog, blue-collar, working-class Raiders. Their respective QBs had some history: they had competed for the prestigious Heisman Trophy back in 1970. Redskins QB Joe Theismann (then with Notre Dame) had changed the pronunciation of his name from Thees-man to Thighs-man to make it rhyme with the name of the vaunted trophy and garner more votes. When Raiders QB Jim Plunkett (then at Stanford) convincingly blew away Joe and famous father Archie Manning (2,229 votes to 1,410 to 849), the Thighs-man camp infamously said that Jim had won only because both his parents were blind. Please. What a classless act.

Happily, the Raiders smashed the Redskins, leading 21–3 at the half and scoring on special teams, defense, and offense. The final score was 38–9, and the record books had to be rewritten. Poetic justice?

One additional happy result of that total whipping was that the distinguished MVP scholar Charles Murray renamed his book of the mid-’80s, the book that shot him to stardom. As he recounts on pages xiii and xiv of the tenth anniversary edition, the working title had been F****** Over The Poor — The Missionary Position. Then it became Sliding Backward, but while he was watching the sad sack ‘Skins go nowhere late that Sunday, the title Losing Ground was born. Some TV commentator probably said something such as “the ‘Skins lose yet more ground to the Raiders,” and a light went on in Murray’s head.

Eighteen months later I moved from California to northern Virginia and wall-to-wall, front-to-back, ceiling-to-floor ‘Skins fandom. There was no soccer (DC United) and no baseball (Nationals) yet, and the basketball (Bullets) and ice hockey (Capitals) barely registered on the local sports radar screen. All that these rent-seeking, tax-guzzling federal employees and their hangers-on cared about was the Redskins. Forget the country. They were totally nuts, completely besotted. There was a 30-year wait for season tickets and probably still is. People had to die before you could advance up the list. And it was all so PC that when the gun death rate in DC hit record levels, the Bullets had to be renamed and chose to become the Wizards.

In defense of all the other pro sports teams named Washington or DC, at least they all play there. The so-called Washington Redskins play in Landover, MD, and train in Ashburn, VA. One wonders how many of the players and staff live in DC and how many in the suburbs or even farther out.

I am curious as to why all five major sports leagues have to have a DC area franchise. Surely this cannot be connected to the special status that sports leagues enjoy under federal regulations.

There are large echoes here of the equally despised British soccer team Manchester United (fondly known in Manchester itself as “the scum”), which regularly sits atop the English Premier League. It plays in a town called Stretford, and its players live in the next-door, very tony county of Cheshire rather than more downmarket Lancashire.

Hence the joke: How many soccer clubs are there in Manchester? Two: Manchester City and Manchester City Reserves. And hence the sign at the Manchester city line when Carlos Tevez signed to leave United for City: “Welcome to Manchester.”

Common sense surely dictates that just as Manchester United should be renamed Stretford United so the Washington Redskins should become the Landover Redskins or perhaps the Landover Looters, to reflect the dominant local industry. It is simply dishonest to trade the way they do. They are living a lie.

But why is the team called the Redskins in the first place? What has the swamp of Washington got to do with Native Americans other than as a source of subsidy and special treatment? The answer is that the franchise started in Boston, Mass., as in the place where white patriots dressed up as Native Americans and chucked all that tea overboard. So the name refers to a criminal act of destruction of private property, deception, and sleight of hand; the name commemorates an attempt to point the finger of a crime falsely at a minority, an attempt to unleash the might of the British Army on peaceful natives. It really is disgraceful.

Speaking of minorities, these ‘Skins so beloved by federal bureaucrats were the very last team in the NFL to integrate, and they did so with great reluctance and in a pretty surly, bad-tempered way. The suspicion is that they did so only because the Department of the Interior owned their then stadium (typical) and the Kennedy administration was not impressed at seeing a non-integrated team in the nation’s capital — not really Camelot!

There are sports bars in the DC region with affiliations other than the ‘Skins, but they are nearly as rare as hens’ teeth. I used to frequent a Steelers bar with my friend Father Jack out toward Dulles on fall Sunday afternoons, until the BATF hit it. “Hands on the table — don’t reach for anything, not even your cutlery — don’t make our day.” I am sure the BATF agents were all ‘Skins fans.

The result is a cloying, all-pervading, overarching pro-Redskins atmosphere that is not healthy. I recall taking elder son Miles to pre-K one Monday morning in, say, 1986; he was proudly wearing his brand-new Dallas Cowboys shirt, a gift from Uncle Leonard. A female teacher stopped us in the corridor:

Teacher, somewhat condescendingly and pointing at said shirt: “Mr. Blundell, don’t you know this is Redskins country?”

Blundell in his best posh British accent: “Oh I am terribly sorry. I thought the Cowboys were America’s Team!”

If this were a comic strip, the next panel would show a woman with a screwed-up face looking at the heavens, elbows stuck firmly into her ribs and clenched fists raised by her jaw, with a thought bubble reading “Argh! *&#%@?+^#*&.”

So as the population of the Swamp changes every election cycle, waves of well-meaning (I am being charitable) men and women, true sports fans who support good honest teams that play in privately owned stadiums, sweep in and are corrupted into supporting the Redskins. You can’t chat at the water cooler or over coffee or at lunch unless you are in the Skindom. It is so sad, but then Washington believes in monopolies such as currency issuance, taxation, and regulation.

When good internationally proven liberty-minded folk such as me confront these so-called libertarian Redskins we receive really mealy-mouthed responses, typical of which is “Oh, when I think of Washington I think of the person not the place!” Right! These people are confused and confusing, embarrassed and embarrassing, and not to be trusted until they go through therapy.

There is only one good reason for the continued existence of the Landover Looters, and it is simply this: every single time they lose, which lately is well over half the time, absenteeism within the federal government soars the following Monday. This can only be a good thing.

But there is a solution to the Landover Looters problem. The team should move to Syracuse in upstate New York and become the Syracuse “Washington’s Redskins,” with the nickname of the “Waistcoats.” Let me explain. George Washington signed a treaty with the Oneida Nation in that area to fight the Brits. So to the extent that “Washington’s Redskins” can exist free of deceit, capture, and vainglory, they are in the Syracuse-Finger Lakes region.

Why Waistcoats? Because Washington wore them and it’s probably a better nickname than “the big girls’ blouses,” which is what I call “libertarians” who support the Landover Looters.






Artists in the Movies: The Ten Best Films


I’m not sure why I consider artists so fascinating. Perhaps it is the especially acute way they see the world — vision being for me only a weak sensory modality. Perhaps it is the fact that they use more of the right side of the brain than I typically use in my own work. But whatever the reason, apparently I am not alone in my fascination, since movies about artists are fairly numerous in the history of cinema. In this essay I want to review ten of the best such movies ever made.

I will confine myself to artists in the narrow sense of painters, as opposed to creative writers, photographers, or musicians. I will even put aside sculptors, even though that rules out reviewing such interesting films as Camille Claudel (1988), the good if depressing bioflick about the sculptress who worked with and was the mistress of Auguste Rodin.

I will also confine myself to standard movies, as opposed to documentaries. There are many fine documentaries about individual artists and artistic movements. One particularly worth noting is My Kid Could Paint That (2007), a film that honestly explores the brief career of four-year-old Marla Olmstead, whose abstract paintings caused a sensation when they caught the attention of the media and the public and began selling for many thousands of dollars each. After an exposé on CBS News, the public began to wonder whether she had really produced her own work. That is the fascinating question the film investigates, but in the background is another, equally fascinating question — whether abstract art has any intrinsic quality, or whether it is all a matter of the perception of the critics.

But to return. One other restriction I will adopt is to consider feature films only, as opposed to TV series. This causes me some grief, since one of my favorite portrayals of painters on screen will have to be skipped — the delightful three-part BBC miniseries The Impressionists (2006). This series is TV at its finest: a historically accurate portrayal of the French impressionist school of painters (Manet, Monet, Renoir, Bazille, Degas, and Cézanne), and compelling, entertaining storytelling. It is structured as a series of memory flashbacks that occur to Claude Monet as he is interviewed late in his life by a journalist about the artistic movement he and his circle created.

But what does a good movie about an artist include? Such a film can take many forms. It can be a straight bioflick recounting a person’s life and achievements — as in Lust for Life, The Agony and the Ecstasy, and Seraphine. It can explore a controversy, such as the merit of abstract art (Local Color). It can explore some of the ways artists interact with other artists — competition or romantic involvement (Modigliani, Frida, and Lust for Life again). It can examine the interaction between artists and mentors (Local Color), or patrons or art critics (The Agony and the Ecstasy, Girl with a Pearl Earring), or other intellectuals (Little Ashes). It can dramatize relationships between artists and family members (Lust for Life, Moulin Rouge). It can try to meaningfully convey the inspiration for or the process of artistic creation (The Agony and the Ecstasy, Rembrandt, Girl with a Pearl Earring). Finally, it can analyze the personality of an artist (The Moon and Sixpence, Moulin Rouge, Seraphine).

My criteria for ranking these movies are not much different from those I use to judge any other movies: quality of ideas, story, acting, dialogue, and cinematography. In theory, it shouldn’t be any more difficult to produce a decent movie about a painter than about any other subject, but in practice, there are pitfalls that can ensnare you.

In particular, it seems that many directors, in trying to make a movie about art, try to make the movie artsy. One thinks of the disastrously bad film Klimt (2007), an internationally produced bioflick about the Viennese artist Gustav Klimt (1862–1918), played by John Malkovich. The flick is tedious and hard to follow, with numerous hallucinatory scenes interspersed in the action. Malkovich gives a listless performance, portraying the artist as bereft of any charm. The result is risible.

I expect art, not artsiness. And I will mention one other thing I look for in movies about painters: if it accords with the story line, I like to see the artist’s work displayed. If a person is supposed to be great at doing something, one naturally wants to see the evidence.

To build suspense, I’ll present the movies that made my top ten in reverse order of my judgments of their importance and quality.

Number ten on the list is The Agony and the Ecstasy (1965). This lavishly produced film is based on Irving Stone’s best seller of the same title, though it focuses on just part of the story — Michelangelo (1475–1564, portrayed by Charlton Heston) painting the ceiling of the Sistine Chapel at the prodding of his patron, Pope Julius II (Rex Harrison). The eminent Carol Reed directed the movie, and it was nominated for five Academy Awards, including those for cinematography, art direction, and score. In each of those areas the film is indeed excellent. Especially effective is the scene in which Michelangelo finds the key inspiration for his ceiling mural in the beauty of the clouds. The interesting idea explored in the movie is the way in which the influence of a patron can help even a highly individual artist elevate the artistic level of his work. The pope insisted that Michelangelo do the job, even though the artist initially demurred, viewing himself primarily as a sculptor.

The acting in this film isn’t as good as one would expect of the two leads. Heston and Harrison, both recipients of the Oscar for best actor in other movies, seem somehow miscast in their roles. But the movie transcends this weakness; the glory of Michelangelo’s art is on full display in a beautiful color production.

Number nine is Frida (2002), directed by Julie Taymor and starring Salma Hayek (who also coproduced the movie). This is an unvarnished look at the life of Frida Kahlo (1907–1954), focusing on the accident that made her a semi-invalid and caused her lifelong pain, and on her tempestuous marriage to the painter Diego Rivera. Rivera was already famous when they met, and her career grew alongside his. His numerous adulterous affairs are not hidden, nor are her affairs with other women (as well as with Leon Trotsky). Both Rivera and Kahlo were devout socialists, as the movie emphasizes.

Salma Hayek’s performance is extraordinary — it is obvious she was completely devoted to the project. She convincingly conveys the physical suffering Kahlo endured, along with the mental anguish caused by Rivera’s endless philandering. She was nominated for an Oscar for her performance. Alfred Molina is excellent as Diego Rivera, and Edward Norton gives a nice performance as Nelson Rockefeller (who, ironically, commissioned Rivera to do a mural for him), as does Antonio Banderas (playing the painter David Alfaro Siqueiros). The cinematography is also excellent, and we get to see quite a few of the artist’s paintings. Taymor does a good job of integrating the history of the times with the story line.

Number eight is Local Color (2006). Written and directed by George Gallo, it is a fictionalized account of his friendship with the landscape painter George Cherepov (1909–1987), an artist he met while hoping to pursue art himself, before turning in his twenties to screenwriting and directing. Gallo’s earliest success was writing the screenplay for Midnight Run.

In the movie, the Gallo figure John Talia Jr. (Trevor Morgan) is thinking about what to do after high school. His father (perfectly played by Ray Liotta) hopes he will get a regular job, but young John wants to be a painter. He manages to gain the friendship of a crusty, profane, but gifted older Russian painter, here called Nicoli Seroff (played brilliantly by Armin Mueller-Stahl). Seroff invites John to spend the summer at his house, much to the worry of John’s father, who is concerned that Seroff is gay and will “take advantage” of his son. After some tension between the two, Seroff finally breaks down and shows John how to paint.

Besides being a nice meditation on the role a mentor can play in an artist’s life, the movie has as a subtext an exploration of two related and important questions about contemporary art: is there great artistic merit in abstract art, and should art divide the elites from the ordinary public? This subtext plays out in the exchanges between the prickly Seroff and a pompous local art critic, Curtis Sunday (played exquisitely by Ron Perlman, of Hellboy and Beauty and the Beast). Their dispute culminates in a hilarious scene in which Seroff shows Sunday a painting produced by an emotionally disturbed child with whom Seroff has worked. Seroff shows Sunday the painting without revealing who made it, and asks for Sunday’s opinion about the artist. Sunday then begins to talk earnestly about the virtues of the artist, thinking he must be a contemporary painter. When Seroff tells Sunday the truth, Sunday storms off to the howls of Seroff’s laughter. The movie has excellent cinematography — which gathers interest from the fact that all the oil paintings shown in the film were painted by Gallo himself.

Number seven on my list will be a surprise. It is The Moon and Sixpence (1942). The movie is a superb adaptation of W. Somerset Maugham’s brilliant short novel of the same name. The story is about a fictional painter, Charles Strickland, loosely based on the life of Paul Gauguin (1848–1903). Strickland (well played by the underrated George Sanders, who did cads well) is a stockbroker who suddenly and unexpectedly leaves his wife and family in midlife to pursue his vision of beauty — his painting. He is followed by a family friend, Geoffrey Wolfe (a character I suspect Maugham based on himself, beautifully portrayed by Herbert Marshall), who narrates as he tries to make sense of Strickland’s ethical worldview.

What we see is a man who is an egotist to the core, but we realize that this is an egotism driven by a desire to create. A key scene in this regard is the one in which Strickland explains to Wolfe that he doesn’t choose to paint; he has to paint. Maugham doesn’t make it easy on us by giving Strickland an obvious reason to flee civilization — a bad marriage, for instance, or an unhappy family. In fact, the title seems to indicate that in the end Maugham himself fails to appreciate Strickland’s choice. It comes from a Cockney phrase about somebody so struck by the moon that he steps over the sixpence at his feet: by focusing on something abstract, such as artistic beauty, one misses out on something that may be more worthwhile, such as rich human relationships.

But what makes this film powerful is its exploration of the idea that a person can be an egoist—even immoral by conventional standards — but still be a creative genius. Indeed, I recommend this film in my ethical theory classes as an example of Nietzsche’s brand of egoism.

Number six is Girl with a Pearl Earring (2003), a tale from the historical novel by Tracy Chevalier about the life of Johannes Vermeer (1632–1675). It imagines the story of a young woman, Griet, who comes to the Vermeer household as a maid. Griet’s father was a painter, but went blind, forcing her to support herself by working as a domestic servant. The Vermeer household is dominated by his all-too-fecund and extremely jealous (not to say shrewish) wife Catharina, along with her mother Maria Thins.

Griet is fascinated by Vermeer’s work, the colors and composition. Noticing her interest, Vermeer befriends her, letting her mix his paints in the morning. The viewer suspects that this friendship involves a romantic interest, at least on his part. He is careful to keep the friendship from Catharina’s notice. While shopping with the chief maid, Griet meets the butcher’s son Pieter, who is very attracted to her, and we suspect that the feeling is mutual.

As if this incipient romantic triangle weren’t enough excitement for poor Griet, Vermeer’s concupiscent patron Van Ruijven sees her and pushes Vermeer to let her work in his house. Faced with Vermeer’s refusal, Van Ruijven commissions him to paint her, which Vermeer agrees to do. Van Ruijven, obviously, isn’t motivated by art so much as by lust — he even attempts to rape Griet. All this culminates, however, in her becoming the model for Vermeer’s most famous masterpiece, “Girl with a Pearl Earring.” The earring itself, one of a pair borrowed from Catharina (to her intense jealousy), goes with Griet when she leaves Vermeer’s household, an interesting memento of her adventure.

The art direction is superb. It is executed in colors reminiscent of the painter’s method (dark background with vivid tones in the key objects). Appropriately, the film received Oscar nominations for both best art direction and best cinematography. The acting is almost entirely excellent, with Essie Davis playing a very irascible Catharina, Tom Wilkinson a randy Van Ruijven, Judy Parfitt a practical Maria, and Cillian Murphy a supportive Pieter. Especially outstanding is Scarlett Johansson as a very self-contained Griet. She bears an uncanny resemblance to the girl in the actual painting. The sole disappointment is Colin Firth as Vermeer. He plays the role in a very inexpressive way — more constipated than contemplative, to put it bluntly.

Number five is a film about the life and work of a controversial modern artist, Pollock (2000).

Jackson Pollock (1912–1956) was a major figure in the abstract art scene in post-WWII America. He grew up in Arizona and California, was expelled from a couple of high schools in the 1920s, and studied in the early 1930s at the Art Students League of New York. From 1935 to 1943 he did work for the WPA Federal Art Project. During this period, as throughout his life, he was also battling alcoholism.

He received favorable notice in the early 1940s, and in 1945 he married another abstract artist, Lee Krasner. Using money lent to them by Peggy Guggenheim, they bought what is now called the Pollock-Krasner House in Springs (Long Island), New York. Pollock turned a nearby barn into his studio and started a period of painting that lasted 11 years. It was here he developed his technique of letting paint drip onto the canvas. As he put it, “I continue to get further away from the usual painters’ tools such as easel, palette, brushes, etc. I prefer sticks, trowels, knives, and dripping fluid paint or heavy impasto with sand, broken glass or other foreign matter added.” He would typically lay the canvas on the floor and walk around it, dripping or flicking paint.

In 1950, Pollock allowed the photographer Hans Namuth to photograph him at work. In the same year he was the subject of a four-page article in Life, making him a celebrity. During the peak of his popularity (1950–1955), buyers pressed him for more paintings, making demands that may have intensified his alcoholism.

He stopped painting in 1956, and his marriage broke up (he was running around with a younger girlfriend, Ruth Kligman). On August 11, 1956, he had an accident while driving drunk that killed both him and a friend of Ruth’s, and severely injured Ruth herself. After his death, Krasner managed his estate and worked to promote his art. She and he are buried side by side in Springs.

Critics have been divided over Pollock’s work. Clement Greenberg praised him as the ultimate phase in the evolution of art, moving from painting full of historical content to pure form. But Craig Brown said that he was astonished that “decorative wallpaper” could gain a place in art history. However one might view Pollock’s work, it has commanded high prices. In 2006, one of his paintings sold for $140 million.

The movie tracks the history fairly closely, starting in the early 1940s, when Pollock attracted the attention of Krasner and Guggenheim, and moving through his marriage to Krasner, his pinnacle as the center of the abstract art world, and the unraveling of his personal life. Throughout, we see him angry, sullen, and inarticulate, whether drunk or sober.

Ed Harris, who directed the film and played the lead, is fascinating (if depressing) to watch. He plays a generally narcissistic Pollock, with the problem of alcoholism featured prominently. He was nominated for a best actor award for this role. Especially good is Marcia Gay Harden as Lee Krasner, who won a best supporting actress Oscar for her performance. The main defect of the movie is that it gives us no idea why Pollock was angry and alcoholic. Was it lack of respect for his own work? Did he feel it wasn’t really worthy of the praise it received? We get no clue.

Number four is a piece of classic British cinema, Rembrandt (1936), meaning, of course, Rembrandt van Rijn (1606–1669), who is generally held to be the greatest painter of the Dutch Golden Age (an era that included Vermeer, his younger contemporary). Rembrandt achieved great success fairly early in life with his portrait painting, then expanded to self-portraits, paintings about important contemporaries, and paintings of Biblical stories. In the latter, his work was informed by a profound knowledge of the Bible.

But his mature years were characterized by personal tragedy: a first marriage in which three of his four children died young, followed by the death of his wife Saskia. A second relationship, with his housekeeper Geertje, ended bitterly; and a third, a common-law marriage to his greatest love, Hendrickje Stoffels, ended with her death. Finally, Titus, his only child to have reached adulthood, died. Despite his early success, Rembrandt’s later years were characterized by economic hardship, including a bankruptcy in which he was forced to sell his house and most of his paintings. The cause appears to have been his imprudence in investing in collectibles, including other artists’ work.

Rembrandt’s painting was more lively and his subjects more varied than was common at the time, when it was usual to paint extremely flattering portraits of successful people. One of his pieces proved especially provocative: “The Militia Company of Captain Frans Banning Cocq,” often called “The Night Watch,” an unconventional rendition of a civic militia, showing its members preparing for action rather than standing elegantly in a formal lineup. Later stories had it that the men who commissioned the piece felt themselves to have been pictured disrespectfully, though these stories appear to be apocryphal.

The movie is fairly faithful to historical reality, except that it doesn’t explore Rembrandt’s financial imprudence, attributing his later poverty instead to his painting of “The Night Watch” as an exercise in truth-telling. The movie shows him painting these middle-class poseurs for what they were, and their outrage then leading to a cessation of high-priced commissions from other burghers. The direction is excellent, as one would expect from Alexander Korda, one of the finest directors Britain ever had. The support acting is first rate, especially Elsa Lanchester as Rembrandt’s last wife Hendrickje, Gertrude Lawrence as the scheming housekeeper and lover Geertje, and John Bryning as the son Titus. Charles Laughton’s performance as Rembrandt is masterful. There are great actors, and then there are legends, and he was both. In contrast to his loud and dominating performances in such classics as The Hunchback of Notre Dame and Mutiny on the Bounty, here he plays a wise and decent man, devoted to his art and his family, and he makes the character all the more interesting. The main flaw in the film is that we don’t see much of the artist painting or of his paintings, but that is a comparatively minor flaw in an otherwise great film.

Number three is the great Moulin Rouge (1952), based on the life of Henri Marie de Toulouse-Lautrec-Monfa (1864–1901). Toulouse-Lautrec was born into an aristocratic family. At an early age he lived in Paris with his mother, and showed promise as an artist. But also early in his life he showed an infirmity. At ages 13 and 14 he broke first one leg, then the other, and both failed to heal properly. As an adult, he had the torso of a man and the legs of a boy, and he was barely 5 feet tall. A lonely and deformed adolescent, he threw himself into art.

He spent most of his adult life in the Montmartre area of Paris, during a time when it was a center of bohemian and artistic life. He moved there in the early 1880s to study art, meeting van Gogh and Emile Bernard at about this time. After his studies ended in 1887, he started exhibiting in Paris and elsewhere (with one exhibition featuring his works along with van Gogh’s).

He focused on painting Paris on the wild side, including portraits of prostitutes and cabaret dancers. In the late 1880s, the famous cabaret Moulin Rouge opened (it is still in Montmartre to this day), and commissioned Toulouse-Lautrec to do its posters. These brought him to public attention and notoriety, and won him a reserved table at the cabaret, which also displayed his paintings prominently. Many of his best known paintings are of the entertainers he met there, such as Jane Avril and La Goulue (who created the can-can).

By the 1890s, however, his alcoholism was taking its toll on him, as was, apparently, syphilis (not unknown among artists of the time). He died before his 37th birthday, at his parents’ estate. In a brief 20 years, he had created an enormous amount of art — over seven hundred canvases and five thousand drawings.

The movie, directed (and co-written) by John Huston, was a lavish production fairly true to history. It should not be confused with the grotesque 2001 musical of the same name. The cinematography and art direction are superb, showing us scenes of the Moulin Rouge in particular and Paris in general, as captured by the artist. The film won the Oscar for best art direction and best costume design.

The directing and acting are tremendous. (Huston was nominated for best director, and his film for best picture.) Zsa Zsa Gabor is great as Jane Avril, as are Katherine Kath as La Goulue and Claude Nollier as Toulouse-Lautrec’s mother. And Colette Marchand is perfect as Marie Charlet, the prostitute with whom Toulouse-Lautrec becomes involved. She was nominated for an Academy Award as best supporting actress, and won the Golden Globe, for her performance. But most amazing is the work of the lead, José Ferrer, who plays both Toulouse-Lautrec père and fils. Playing Toulouse-Lautrec the artist required Ferrer (always a compelling actor) to perform on his knees. His was a bravura performance, making the artist both admirable and pitiable. Ferrer was nominated for an award as best actor, but unfortunately did not win.

If there is one flaw in the film, it is an unneeded sentimentality, well illustrated by the final scene. With Henri on his deathbed, his father cries to him that he finally appreciates his art, while figures from Henri’s memory bid him goodbye. Huston, one of the greatest directors in the history of film, especially adept at coldly realistic film noir (e.g., “The Maltese Falcon”), should have toned this down. My suspicion is that the studio wanted something emotionally “epic,” and Huston obliged.

Number two on my list is a small independent flick, Modigliani (2004). The story explores the art scene in Paris just after WWI, with artists such as Pablo Picasso, Amedeo Modigliani, Diego Rivera, Jean Cocteau, Juan Gris, Max Jacob, Chaim Soutine, Henri Matisse, Marie Vorobyev-Stebeslka, and Maurice Utrillo living in the Montparnasse district.

Amedeo Modigliani (1884–1920) was born into a poor Jewish family in Italy. He grew up sickly, contracting tuberculosis when he was 16. He showed interest and talent in art at an early age, and went to art school, first in his hometown of Livorno, then later in Florence and Venice. He was fairly well read, especially in the writings of Nietzsche. He moved to Paris in 1906, settling in Montmartre. Here he met Picasso, and spent a lot of time with Utrillo and Soutine. He also rapidly became an alcoholic and drug addict, especially fond of absinthe and hashish (beloved by many artists then). He adapted rapidly to the Bohemian lifestyle, indulging in numerous affairs and spending many wild nights at local bars. Yet he managed to work a lot, sketching constantly. He was influenced by Toulouse-Lautrec and Cézanne but soon developed his own style (including his distinctive figures with very elongated heads). After a brief return home to Italy in 1909 for rest, he returned to Paris, this time moving to Montparnasse. He is said to have had a brief affair with the Russian poet Anna Akhmatova in 1910, and he worked in sculpture until the outbreak of WWI. He then focused on painting, among other things producing portraits of many other artists.

In 1916, he was introduced to a beautiful young art student, Jeanne Hébuterne. They fell in love, and she moved in with him, much to the anger of her parents, who were conservative Catholics, not fond of the fact that their daughter was involved with a poor, drunken, struggling Jewish artist.

And struggle they did. While Modigliani sold a fair number of pieces, the prices he got were very low. He often traded paintings for meals just to survive. In January 1920, Modigliani was found delirious by a neighbor, clutching the pregnant Jeanne. He died from tubercular meningitis, no doubt exacerbated by alcoholism, overwork, and poor nourishment.

His funeral attracted a gathering of artists from Paris’ two centers of art (Montmartre and Montparnasse). It was all very Nietzschean: brilliant young man does art his way, defies all slave moral conventions, and dies in poverty. The genius is spurned by hoi polloi too addled by slave morality to appreciate the works of the übermensch. Jeanne died two days later by throwing herself out a window at her parents’ house, killing herself and her unborn child. It was only in 1930 that the family allowed her to be reburied by his side.

The movie takes place in the pivotal year 1919. We meet Modigliani as he enters the café Rotonde, stepping from tabletop to tabletop as the patrons applaud. He winds up at Picasso’s table, where he kisses Picasso. This bravura entry invites us to focus where we should — on the relationship between these two artists, both important in a new era of art. The relationship is complex. On the one hand, they are obviously friends — and friends of a sort that Aristotle would have approved: their friendship is based on appreciation of each other’s intellectual virtue, their art. But there is a darker side to it: they are also rivals, competitors for the crown of king of the new artists.

Modigliani and Jeanne are struggling to pay rent, and Jeanne’s father has sent their little girl away to a convent to be raised. Modigliani sees a chance to get his child back, and provide for Jeanne. He will enter a work in the annual Paris art competition, one that, so far, he and his circle have scorned. Picasso, feeling challenged, enters also, with other members of the circle joining him. Modigliani knows that the competition will be tough, so by force of will he works to produce a masterpiece. This challenges Picasso as well, and we see the artists vying to see who will win.

But the denouement is tragic. Modigliani finishes, and asks Picasso to take his piece to the show. He goes to City Hall to marry Jeanne. After leaving late, he stops at a bar for a drink. And that foolish act leads to an ending whose bittersweet drama I don’t want to spoil.

The acting is excellent throughout. Hippolyte Girardot is notable as Maurice Utrillo, and Elsa Zylberstein is superb as Jeanne. But the best support is given by Omid Djalili as a smoldering, intense Picasso, who succeeds where Modigliani fails, but understands that his success did not result from greater genius. The amazing Andy Garcia gives a fabulous performance as Modigliani.

The critics panned this movie mercilessly, and it has some flaws — most importantly, you cannot appreciate the film without a knowledge of Modigliani’s biography, because it focuses only on his last year. (It also takes some liberties with the facts.) But the film powerfully conveys a unique friendship and rivalry, and explores an artist’s self-destructiveness. Very instructive, though not the makings of a box office bonanza.

And now — drum roll please! — coming in as number one on my list is a classic that holds up well, more than a generation after its making: Lust for Life (1956).

Like The Agony and the Ecstasy, this movie was based on a best-selling novel by Irving Stone. The director was the brilliant Vincente Minnelli. The film tells the tragic story of the life of Vincent van Gogh (1853–1890), with lavish attention to the man’s magnificent art. It follows the life of van Gogh from his youth, during which he struggled to find a place as a missionary, to his mature years, during which he struggled to find a place as an artist. (The van Gogh family lineage was full of both artists and ministers.) Van Gogh is usually categorized with Gauguin, Cézanne, and Toulouse-Lautrec as the great post-impressionists.

Van Gogh is played by Kirk Douglas, who was primarily known as an action lead — adept at playing the tough guy outlaw or soldier (helped by his buff physique and chiseled handsome face). This was a casting gamble, but it paid off, with Douglas giving one of the best performances of his career, if not the best performance. He was nominated for a Best Actor Oscar and won the Golden Globe for playing the mentally tormented van Gogh with credibility.

The support acting just doesn’t get any better. Most notable is Anthony Quinn as a young, egoistic, and arrogant Paul Gauguin, who for a brief time was van Gogh’s roommate, but couldn’t handle van Gogh’s emotional intensity and instability. Quinn rightly received the Oscar for best supporting actor. Also excellent is James Donald as van Gogh’s loyal brother Theo.

The story development and dialogue are first rate (the screenwriter Norman Corwin was nominated for an Oscar), as is the art direction (also nominated for an Oscar).

The movie showcases many of van Gogh’s paintings. It also explores the crucial role his brother played in keeping him painting, supporting him financially as well as emotionally. If it were not for Theo van Gogh, the world would likely have never known Vincent. The contrast with Moulin Rouge is stark: Toulouse-Lautrec never got the support of his father until it was too late.

Five films that did not make my list deserve honorable mention. The first must be a picture I have reviewed for Liberty (October 2009), Seraphine (2009). It is a wonderfully filmed, historically accurate bioflick of the French “Naïve” painter Seraphine Louis (1864–1942). She was discovered by an art critic, flourished for a brief period after World War I, but with the Depression her career ended, and she was eventually confined to an asylum. The relatively unknown actress Yolande Moreau is simply wonderful in the lead role.

The second honorable mention is Convicts 4 (1962), which tells the true story of artist John Resko. Resko was condemned to death after he robbed and unintentionally killed a pawnshop owner while attempting to steal a stuffed toy for a Christmas gift for his daughter. He was given a reprieve shortly before his scheduled execution, and with the help of some fellow inmates adapted to prison life. In prison, he learned to paint, and came to the notice of art critic Carl Calmer, who fought for — and eventually, with the help of the family of the man Resko had killed, won — Resko’s release. Ben Gazzara is outstanding as Resko, and Vincent Price (in real life an art critic and a major art collector) is convincing as Calmer.

Third honorable mention goes to the television movie Georgia O’Keeffe (2009). Again, since I recently reviewed this movie for Liberty (August 2010), I will be brief. The film gives a nice account of one of the first American artists to win international acclaim, Georgia O’Keeffe (1887–1986). It focuses on her most important romantic and professional relationship, the one with Alfred Stieglitz, the famous photographer and art impresario. O’Keeffe (played superlatively by Joan Allen) was hurt by Stieglitz’s philandering, but they remained mutually supportive professionally, even after a painful parting. Jeremy Irons is superb as Stieglitz.

As I noted in my review, at the end of the movie one is left to wonder why Stieglitz was so callous in his treatment of O’Keeffe — flaunting his adultery, and in one scene bragging to O’Keeffe that his new paramour was having his child, though he had earlier angrily dismissed the idea of having children with O’Keeffe herself. Was this merely the blindness of narcissism, or was there an undercurrent of profound envy at his wife’s success as an artist — one greater than his?

Fourth honorable mention is Artemisia (1998), based on the life of Artemisia Gentileschi (1593–1656). She was one of the earliest women painters to win widespread acclaim, being the first woman artist accepted into Florence’s Accademia di Arte del Disegno. The film is gorgeously produced, with first-rate performances by Valentina Cervi as Artemisia and Miki Manojlovic as Agostino Tassi. Its major flaw is its historical inaccuracy, portraying Tassi as Artemisia’s chosen lover, while in fact he was her rapist. If the producers had wished to fictionalize the story, they should have done so, and changed the names. Stretching or selectively omitting history in a bioflick can make sense, but a total inversion of a pivotal event is a major flaw.

The fifth film receiving honorable mention is Basquiat (1996), a good movie about the sad life of Jean-Michel Basquiat (1960–1988), who was one of the earliest African-Americans to become an internationally known artist. He was born in Brooklyn, and despite his aptitude for art and evident intelligence (including fluency in several languages and widespread reading in poetry and history), he dropped out of high school. He survived on the street by selling t-shirts and postcards, and got his earliest notice as a graffiti artist using the moniker “SAMO.” In the late 1970s, he was a member of the band Gray. In the early 1980s his paintings began to attract notice, especially when he became part of Andy Warhol’s circle. In the mid-1980s, he became extremely successful, but also got more caught up in drugs, which led to his early demise from a heroin overdose. Jeffrey Wright is superb as Basquiat, as are David Bowie as Andy Warhol and Dennis Hopper as the international art dealer and gallerist Bruno Bischofberger. Also compelling is Gary Oldman as artist Albert Milo, a fictionalized version of the director Julian Schnabel.

Watching a large number of movies about artists over a short period of time can be a recipe for depression, given the amount of tragedy and pain on display. Often this pain was caused by a lack of public and critical recognition or support, leading great painters to experience genuine deprivation and what must have been the torment of self-doubt. Worse, the pain was sometimes self-inflicted or inflicted on others, because of the narcissism or lack of self-control that made such messy lives for so many artists.

But watching these films is intellectually as well as visually rewarding. You see the triumph of creative will over unfavorable conditions and outright opposition — and the beauty that unique individuals have contributed to the world.






Never Play Monopoly with Uncle Sam

 | 

On December 13, 2010, U.S. District Judge Henry Hudson ruled in Virginia v. Sebelius that the individual mandate included in the Patient Protection and Affordable Care Act (PPACA), popularly known as Obamacare, is unconstitutional. Does the government have the power to compel people to buy private health insurance? The answer to that question will soon be in the hands of the Supreme Court.

That Judge Hudson did not refer to United States v. South-Eastern Underwriters Association (S.E.U.A.) in his decision struck me as odd, probably because I have not studied law. It is likely that Judge Hudson did not refer to the 1944 ruling because it had no bearing on his decision. Still, as it was the only Supreme Court case that the drafters of the PPACA cited in the part of the law intended to make the individual mandate seem constitutional, my curiosity was piqued.

What follows will review: (1) the part of the PPACA that addresses the constitutionality of the individual mandate, (2) how the mandate might function if the courts wave it through, and (3) the S.E.U.A. case. Only then will the 1944 ruling and the 2010 law be examined together. The focus will not be on legal precedents, but on an aspect of the matter with which I am more familiar: bittersweet irony.

The PPACA says the mandate is constitutional because “most health insurance is sold by national or regional health insurance companies, health insurance is sold in interstate commerce and claims payments flow through interstate commerce” (PPACA, Subtitle F, Part I, Section 1501, (a)(2)(B)). It goes on to say, “In United States v. South-Eastern Underwriters Association … (1944) the Supreme Court of the United States ruled that insurance is interstate commerce subject to Federal regulation” (PPACA, Subtitle F, Part I, Section 1501, (a)(3)). (The consolidated version of the law, after reconciliation, is at ncsl.org.)

Briefly, then, unless the Supreme Court agrees with Hudson, the PPACA will, beginning on January 1, 2014, compel otherwise uninsured people to buy health insurance policies. Such policies will not be crafted to meet the needs of those being compelled to buy them. Instead, they will be “one size fits all” policies crafted to meet the government’s funding needs. For example, there will be no high deductible, catastrophic care policies, nor will there be policies that exclude such things as mental health care.

Additionally, the premiums for these policies will not be based on the real risk factors of the person being insured, but rather on a modified community rating that will compel low-risk purchasers to pay many times more than what would be justified using an actuarial table. The prices that companies charge for the policies will be regulated by the government as well, ensuring that they are steep enough to pay for the services that the high-risk policy holders require. Planned government subsidies notwithstanding, should someone choose not to purchase the mandated policy at the regulated price, he will be saddled with a hefty fine, attached to his tax bill. Should he refuse to pay the fine, he could, under existing tax law, be jailed. (For a good overview, read Hazards of the Individual Health Care Mandate at cato.org.)

Now, what was the South-Eastern Underwriters Association case about? Justice Hugo Black, writing for the majority, describes a group of insurance companies (the S.E.U.A.) violating the Sherman Antitrust Act by conspiring to “restrain interstate trade and commerce by fixing and maintaining arbitrary and non-competitive premium rates” on fire insurance policies. These companies “not only fixed premium rates and agents’ commissions, but employed boycotts together with other types of coercion and intimidation to force non-member insurance companies into the conspiracies, and to compel persons who needed insurance to buy only from S.E.U.A. members.” In addition, Black tells how the companies saw to it that those who were “not members of S.E.U.A. were cut off from the opportunity to reinsure their risks, and their services and facilities were disparaged; independent sales agencies who defiantly represented non-S.E.U.A. companies were punished by a withdrawal of the right to represent the members of S.E.U.A.; and persons needing insurance who purchased from non-S.E.U.A. companies were threatened with boycotts and withdrawal of all patronage.” (Openjurist.com has the entire ruling.)

So. The only Supreme Court ruling cited in the healthcare law to support the notion that the individual mandate is constitutional upheld the lower court decisions to: (1) break up a conspiracy to monopolize and otherwise control what had been a free insurance market, (2) punish conspirators for fixing premiums instead of allowing free-market competition to determine price and product, and (3) bring an end to the use of coercion and intimidation to force customers to buy the cartel’s insurance policies and to force companies to join in a scheme to control the insurance market.

Of course, those who drafted the PPACA cited the 1944 case because Justice Black was, for the first time, placing insurance companies under the sway of the interstate commerce clause, which gives Congress the power “to regulate Commerce … among the several States” (Article I, Section 8, Clause 3). But ask yourself, Did these drafters read the facts of the S.E.U.A. case? If they did, citing it in the law was an act of legislative chutzpah. After all, Justice Black was on the side of the intimidated, the coerced, not the conspirators doing the intimidating.

Imagine a group of congressional staffers pulling an all-nighter, drafting the healthcare reform legislation. “We can’t use that! Look at the facts!” “Get real! Of course we can! Nobody’s going to read the damn thing!” Only a handful of people did. How many read the 2,700-page bill prior to the vote? Surely it was many more than followed up on passing references to ancient fire insurance disputes.

Forgive my obtuseness, but since it was the drafters themselves who placed the 1944 antitrust ruling in Section 1501(a)(3) of the text of the PPACA, would not the Supreme Court be free to raise the facts of that case even though Judge Hudson made no reference to it? And if the court is free to raise those facts, might there not be an opportunity to ask the defenders of the individual mandate how they see the ruling as supporting their cause, when, practically speaking, Justice Black was condemning exactly the kind of power-grab that they are championing?

After all, now it is the United States government itself that is playing the part of the South-Eastern Underwriters Association, forcing the health insurance industry into a government-controlled cartel, essentially monopolizing what was once a free market, fixing both the product and the price, and using coercion and intimidation to gain the compliance of people and companies alike.

I never thought I’d say this, but this is one trust that is ripe for busting.






Hallowe’en With the Greens


A year ago, on Hallowe’en, I spent a scary day at a Green Party meeting. Some of the attendees dressed up in funny costumes to salute the holiday. Most of the costumes were cute, though the Al Gore mask ended up giving me nightmares.

Yet it soon became apparent that I was actually in fantasyland. And I came to see, more clearly than ever, that the social liberal fantasy of working our will on others needs to die. It is not only as childish as trick-or-treat, but downright counterproductive. It cannot be sustained in reality.

I’ve been a liberal all my life. But I’ve come to the end of my tether when it comes to trusting in fantasies. The pretty dreams held forth by the Obama-bots who have taken over Washington have proven as hollow as those of the neocons they replaced.


To cite one example dear to my heart, there are still more people out there who do not believe that gays should marry than there are those who do. That may well change eventually, and I believe it will. Being gay myself, I also hope so. But it will not change because we force the issue.

Polls clearly show the public opposed to single-payer healthcare. This is something I don’t believe in, but even if I did, I would be part of a tiny minority. Marching down the street and screaming about it isn’t going to change that. Neither will staging “die-ins” in front of insurance company buildings.

Fed up, at long last, with the hypocrisy and ineptitude of the Democratic Party, I investigated the Greens. It was the last gasp of my interest in statist “progressivism.”

I thought that they might understand that the corporate power they feared was caused by big government charity to behemoth companies. My fellow meeting attendees said they did. But then they proceeded to moan about “profits,” as if those were the problem.

I worked in the health insurance industry for well over a decade. All told, I was in the insurance business for 30 years. Most of my coworkers and friends will lose their jobs because of the “progressive” healthcare boondoggle. Most of those at the Green meeting were public servants. They clearly had no link whatsoever to any larger reality.


I happened to remark to them that the Dems had played Lucy and the football with me one too many times. In return for our support for “hope and change” under Obama, gays are being shafted once again. My new Green friends wholeheartedly agreed with that, and presented me with a petition to sign so they could get ballot status in our state.

Up to that point, nine people had joined their party in 2009. That told them nothing. They remained highly hopeful. Happy Hallowe’en.

I told them that the standard of treatment of gays and other controversial minorities was as low as it was not because of where the Republicans had dragged it, but because of where the Democrats had kept it, and that if the Dems wouldn’t raise it, it looked as if no one would.

Even these bold advocates of a “progressive” alternative looked at me with incomprehension. Once intoxicated with the prospect of power, they would, of course, do exactly the same. Happy Hallowe’en again.

This minority within a minority understands nothing except grabbing more and more power. Unlike gays (who really don’t carry a contagious disease, whatever some might think), they do have at least the prospect of multiplying. But they carry a delusion that is not only contagious, but potentially deadly for a free society. They believe that because they’re “right,” they have the right to force anything they want on the populace.

I will invite you, Elvira-like, into the darkness of their crypt. These people claim to believe, very ardently, in “democracy.” They talk a blue streak about it to anyone who will listen, and even to those who won’t. But if they can’t convince most people to see things their way, they are perfectly content to steamroll right over the top of the majority to get it. So much for democracy.


Happy Hallowe’en, again and again. Yet the Greens’ party platform seemed to anticipate Christmas. It read like a wish-list for Santa: equality for all, goodies aplenty for everybody. I can’t say I disagree about the imagined results. I just don’t see how their small, brave army of retro-revolutionaries can possibly bring it about.

I’m not willing, I’m afraid, to wait for Santa Claus, the Great Pumpkin, or anybody else to bring me goodies and save this country. In fact, I’m fairly certain that no one can do both. The Democrats and the Republicans have been able, thus far, to contribute to nothing but the problem.  Perhaps a third party would work (though not, I believe, the Greens). The November election hasn’t changed my view.

As I consider my own role in the scheme, where does my duty as a citizen take me? It is the road to fame and riches, in this country, to tell people all their troubles are somebody else’s fault, but it’s pretty clear to me we all got into this mess together. We’re like fish who must all swim in the same big pond, every splash we make creating a ripple. Or, like dancers, we must be aware that every step we take contributes to the Busby Berkeley production that is life in America. What is my part in the musical number — or which way goes my ripple?

I work for myself, having chosen not to go on struggling in a troubled industry but, instead, to seek out something new. Since I like nothing better than expressing my own opinions, I am determined, now, to make it as a writer. I want to make it on my own in what I hope is still the land of opportunity.

Many of my friends are also jobless, at least as far as having become untethered from corporate America. I hear much complaining, from them, about those evil rich people who rake in the boodle while they drain the last of their savings on Top Ramen. I wish them luck, but I’m doubtful that cursing “the rich” is any answer. The politics of dependency — of always blaming more powerful others and looking to them for answers — leaves me as empty as I suspect it will leave them.

I also have friends who are Libertarians. In Arizona, where I live — Goldwater country — there are a lot of them — even gay ones. Many of them are actually Republicans who can’t stand being associated with what the religious right has done to the GOP. The more they study the literature of liberty, the better sense they make. For years, now, they have been passing what they’ve read on to me.

As I’ve indulged in the guilty pleasure of libertarian reading, I’ve gradually begun to recognize that here, at last, is a concept that sheds real light and gives genuine hope. Libertarians are the grownups: the ones who aren’t wearing costumes and gobbling candy. They’re the ones keeping the kiddies from killing one another as they squabble over the trick-or-treat bags.

More than merely the grownups, these are the sane people. They deal with human beings on planet earth — not with aliens in some galaxy far away. Their attitude is not “wouldn’t it be nice if people were this way . . . let’s pretend they are!”, but rather, “this is the way we are . . . now let’s make the best of it.” I’m tired of the endless trick-or-treat, the Mardi Gras gone mad that statists, Left and Right, have made of American political life. There is a constant nightmare-funhouse atmosphere to it all — the masks we desperately wear to survive in the make-believe world we have made for ourselves and now, seemingly, don’t know how to escape.


There was a bratty-kids-in-the-secret-clubhouse feel to the Green Party meeting I attended. The partygoers were very impressed with how clever they were, each vying to one-up the other with witty putdowns of those benighted Republicans and Democrats. I didn’t recognize real people in their villains at all. They were huddled there in their treehouse, divvying up their candy and plotting how they’d foil the death rays of the evil Doctor Doom.

Nor did I sense they saw me as anything more than the token lesbian (they already had a token gay man). They had found a puppy, and they wanted to make me their pet. I would make a great new mascot for the treehouse. And next year, maybe they could use me to get more candy. They could make sad eyes and shove me in people’s faces, saying, “How ‘bout a Snickers for little Trixie, too?”

I can’t help believing there must be a better way. I don’t want to be anybody’s mascot or pet. I want to be a productive citizen in a land where anybody can succeed. For years I was afraid to believe I could take off my mask and opt out of the battle over the candy. But I’m ready, now, to try the only way I’ve come to know that stands any chance of working in the real world.

What if we worked for an America where, once again, we can keep most of what we earn, and stop weighing down with oppressive regulations the companies that otherwise would hire us? What if we relied upon ourselves for the answers, instead of always waiting with hungry mouths, like baby birds in the nest, for Big Mama to feed us? Far from making us look hopelessly at life, this attitude would empower us.

We’ve gone out to trick or treat in the guise of a helplessly beached fish, or of a dancer with no rhythm. We scare each other with these costumes, each of us seeing our own helplessness reflected in the mask of the other. What would happen if we ripped away the masks and showed the world the faces of people boldly meeting the future?

We can do this if we commit ourselves to doing all that we each can do. We may even be surprised to find that those we’ve demonized — the ones we’ve been sure held all the power — are as scared, and angry, and overburdened by the cares of the whole world as we are. If each of us shouldered only our own, individual share of the burden, we might find the weight of the world much easier to bear. Atlas shrugged, as we may recall, not because he had to hold up the world, but because he had to do it alone, with all of us on it.

Could the Libertarians hold the key? Their platform remains basic, but it makes more sense than all the declarations and promises of the other parties’ platforms put together. Libertarians aren’t pretending to be the Great Pumpkin. But they aren’t Lucy with the football, either.

They come disguised as nothing other than what they are. Instead of all the costumes and the gimmicks, this may well be exactly what we need.






Causes and Consequences of the Great Election


With the Republicans scoring a decisive victory in the Nov. 2 election, the salient questions are: why did it happen, and what effect if any will it have on this country’s governance?

Let me amplify my remark that the Republicans scored a decisive win. As of this writing, the GOP has gained a net of 61 House seats, with the possibility of picking up more (as close races get sorted out). This is the greatest gain in House seats in 60 years. The Republicans have taken a net of six senatorial seats; and they have netted six, possibly seven, governorships. Flying under the mainstream media radar, but hugely consequential, is the net gain of 20 state legislatures and about 700 state legislative seats — consequential, because the state governors and legislators have great redistricting power, and redistricting will necessarily follow the 2010 census. There is just no way to spin away the fact that this was a severe pounding for Obama's party.


So why did the Republicans score such a victory? Several factors are important. To begin with, Obama’s two years in office have revealed him as a narrow-minded leftist ideologue, and a shallow-thinking one at that, who lied about all manner of things. His foreign policy failures have been exceeded only by his domestic policy failures, making him already appear worse than Jimmy Carter, in only a fraction of the time it took Carter to reveal himself as bad. After two years in office, Obama's habit of whining about everything being Bush’s fault rings especially hollow.

For all their mistakes, the Republicans, like hedgehogs, got the one big thing right: they made the election a referendum on Obama and his policies, and the voters responded accordingly.

And there is the undeniable role played by the populist Tea Party movement. This loosely knit group consists mainly of people discontented with the fiscally ruinous policies that the Troika of Obama, Reid, and Pelosi implemented. The tea partiers brought enthusiasm to the election cycle, and they rightly saw the need to get rid of RINOs such as Mike Castle and Lisa Murkowski. For this they deserve praise. My major criticism is that they stink at vetting candidates — they chose some whose backgrounds were shaky at best (such as Christine O’Donnell, Sharron Angle, and Carl Paladino). Angle, for instance (a candidate whom I reluctantly supported financially), proved to be not exactly a polished public speaker. She lost to Reid in what should have been an easy pickup.

I generally support groups that are unafraid to challenge liberal or overly “moderate” Republicans in primary contests. I'm thinking of such organizations as the Club for Growth, which helped to fund Pat Toomey’s primary challenge to Arlen Specter (the challenge that drove Specter from the Republican Party) and Toomey’s victorious run for Specter’s old Senate seat. But going RINO hunting only makes sense when you have done your homework and identified outstanding candidates to replace the RINOs. Notable here was the Club for Growth’s support of the seasoned and powerfully articulate Marco Rubio — a man with a compelling life story. His candidacy was precisely the way to dump an unprincipled “moderate” hack such as Charlie Crist.

The Tea Partiers show the normal drawbacks of populists. I share their dislike of big government, but I don’t think that the traits of ignorance and passion sit well together. The Tea Party won’t go away, and I wouldn’t want it to; but some coherent thought about what is wrong with the government and what can be done to fix it would be useful. Interesting in this regard was a poll of Tea Party members, showing that 62% of them opposed cutting Medicare and Social Security.


I believe that passionate populism was the main reason why the election went the way it did. I also believe that anti-government sentiment will continue to grow, and that the passion we have witnessed so far will reach a public-choice tipping point regarding the welfare state. As the baby boomers age, the expenses of massive entitlement programs will rise inexorably. Ever increasing deficits will wreak havoc with our economy, and we will see repeated outbursts of anti-government populism.

But populism is a two-edged sword. Anti-government populism can get out the vote, but it is an incoherent position, containing within itself the seeds of its own incompetence. The populists hate political pros, and want only neophyte Mr. Smiths going to Washington. But that sets the stage for many more Carl Paladino meltdowns: the populists get charmed by a seemingly likeable outsider (someone who never held any political office, not even a freaking school board seat) and give him the primary victory over more established candidates, only to find numerous defects exposed in the main campaign.

Worse, populists usually profess support for free market economics, but curiously oppose many of the practices that define the system. For example, free market economists from Adam Smith on have stressed the importance of free trade. But populists on both the Left and the Right reject it, espousing a mercantilist philosophy that Smith fought hard to overturn centuries ago. Obama claims that he is creating jobs, but in stoutly opposing free trade, he ensures that job creation will remain lower than it would otherwise be. Many populists would do likewise.

Again, many populists (especially those of the Right) hate the free flow of labor, aka immigration; and the arguments they use make it clear that they are just as opposed to legal as to illegal immigration. They believe that immigrants cost large numbers of jobs, lower wages, and (a charge usually directed at Latinos) refuse to assimilate. Of course, if these ideas are sound — and I do not think that they are — then they argue against all immigration, legal or illegal.

Yet again, many populists (especially those of the Left) love government programs that supposedly help the working class. As I noted earlier, even the majority of Tea Partiers have passionate feelings for Medicare and Social Security. Indeed, Republicans made great hay of pointing out that Obamacare cuts $500 billion from Medicare. But let’s be honest: even without Obama's dramatic expansion of governmental healthcare and the comparatively modest expansion under Bush’s senior drug assistance program, the system of Medicare, Medicaid, and Social Security has been admitted to be unsustainable even by its own trustees.

The Republicans gained from the populist anti-government surge. But the question is what they will be able to do with it, and here I remain skeptical. What are the chances they will actually be able to repeal Obamacare? Rather small. And even if they did repeal it, would that solve the entitlement explosion built into Medicare, Medicaid, and Social Security? Certainly not. The dirty secret is that while people rage against big government, even tea partiers love certain government programs, at least until those programs explode.

And what are the chances the Republican House will be able to get America back on track toward free trade? Again, almost nil. As to the chances of the Republicans getting comprehensive immigration reform, one that ensures a reasonable flow of labor to American business, these are nil as well.

The Republicans will be able to do some modest good, such as stopping the proliferation of bailout and stimulus bills, and the creation of new entitlements. And I suspect they may save Bush’s tax cuts, including those for the wealthy. But the bankruptcy of the nation still looms. It is doubtful that, in the near term at least, Republicans can institute the radical changes that are needed to bring entitlement programs into sustainability, or to expand our free market economic system — slashing regulation, lowering corporate income taxes, reforming immigration, getting more free trade agreements enacted, and expanding free choice in education.






Escaping the Income Tax


Voters in my home state, Washington, have rejected a state income tax. The vote at press time is 65% no.

This is a watershed vote.

Washington is one of only seven states with no personal income tax. The others are Alaska, Florida, Nevada, South Dakota, Texas, and Wyoming. In addition, New Hampshire and Tennessee tax only interest and dividends.

The Tax Foundation ranks states by their tax burden as a percentage of gross state income. Not by accident, eight of these nine are the lowest-tax states in the union, ranking 43rd to 50th on this list.

Washington is the outlier. It ranks 35th.

The eight lowest-tax states are all “red” states. In the presidential election of 2000, all eight voted Republican. Washington voted Democrat, as it has in all presidential elections since 1988.

If it were up to the state’s politically active Democrats, Washington would have had an income tax long ago. But an income tax requires a public vote. And Washington voted 68% no in 1970, 77% no in 1973, and 67% no in 1975.

There was a message in those numbers, and politicians heard it. For the better part of 25 years, an income tax was seen as politically impossible.

The people of the state continued mostly to vote Democrat, many of them out of a fondness for the redistributive state, but also for other reasons. Washington is one of the least religious states. In 1970 it voted for abortion, and in 2008 it voted for assisted suicide. It was not the first state to offer gay civil unions, but in 2009 it was the first state to do so by vote of the people. It was an early medical-marijuana state, following California’s historic vote in 1996 by its own in 1998. In 2010 a group called Sensible Washington circulated petitions for a ballot initiative to repeal the marijuana laws entirely, and though they had no money, they collected more than 200,000 signatures. It wasn’t quite enough — they are trying again — but that it happened at all shows there is a libertarian streak in this “blue” state.


An income tax has long been an electric fence of Washington politics (we have no “third rails” here), and few Democrats running statewide have dared to touch it. One who did was King County (Seattle) Executive Ron Sims, who ran for governor in 2004 supporting an income tax. He lost the Democratic nomination to Attorney General Christine Gregoire, who took no such stand, and was elected. (Sims is now Obama’s deputy secretary of HUD.) Under the previous governor, Gary Locke (now Obama’s secretary of commerce), tax “reform” got no further than a study. In 2002, a majority of the tax-study commission recommended a flat-rate income tax levied on virtually all income. The income tax was envisioned as revenue-neutral to the state, its collections offset by large cuts in the property tax and the sales tax. The study was talked about, but Democrats did not dare act on it.

A liberal group quietly worked for several years on a tax proposal that could be put to Washington voters. In January 2010 the group was encouraged when Oregon voters, facing a large deficit in their state budget, passed Measure 66, which created a new high-earners’ bracket in Oregon’s personal income tax: 11% on individual income above $125,000.

In the 2009–2011 biennium, Washington’s budget had fallen one-fourth short. For the following period, another big deficit was forecast. Income tax proponents sensed it was time to act.

The chairman of the 2002 tax study had been Seattle attorney William H. Gates, Sr. He is the father of Microsoft founder Bill Gates, the richest man in the state. In 2010 the senior Gates, age 84, became the public face of the measure to create an income tax in Washington.

In its final form the measure was called Initiative 1098. It was very different from the tax recommended in the 2002 study. It was not a flat tax. It was steeply progressive, with three rates: 0%, 5%, and 9%, the high rate being one of the highest among the states. The proposed rate was zero on individual income below $200,000 and 9% on income above $500,000. This was a “high-earners tax.”

Initiative 1098 also proposed two tax cuts: an exemption for very small businesses from the state tax on business revenue and a 20% cut in the state tax on property. Together, these cuts offset 22 cents of every dollar the new income tax was expected to collect.

Neither of the two cuts benefited the poor, a group often invoked by supporters of “tax reform.” The cuts were for the middle class. They were there so proponents could say that under Initiative 1098, “the wealthy pay more and the rest of us pay less.”


And what better face to put on it than Bill Gates — even though it was not the famous Gates, but his dad. Gates plunked down $600,000 to pay for signature gathering and ads. (Late in the campaign, his billionaire son announced that he agreed with his father, but contributed nothing.)

I-1098 had a handful of other $100,000-plus contributors: Nick Hanauer, a venture capitalist who had made his grubstake by investing in Amazon.com; Ann Wyckoff, an heiress to the Kenworth and Peterbilt truck fortune; and Bill Clapp, an heir to the Weyerhaeuser fortune. But most of the money raised to sell the state income tax was contributed by public-employee unions. The Service Employees International Union, the largest contributor, put in $2.2 million. In total, the amount raised was nearly $1 for each of the 6.5 million residents of Washington.

Opponents knew they had to match that, and they did. They got some money from corporations, but the biggest checks were from CEOs. These included $425,000 from Steve Ballmer of Microsoft, and $50,000 to $100,000 from former Microsofties Paul Allen, Nathan Myhrvold, and Charles Simonyi, who now have their own companies. Amazon’s Jeff Bezos, who is known as a libertarian, put in $100,000. There were others. CEO contributions were not easy to get, particularly at first. Most CEOs were not people who wanted public attention on an issue like this. But they also did not want a "high-earners" income tax.

Some donated after reading an Aug. 14 editorial in the Wall Street Journal, which said: “Washington would move overnight from one of the nine states with no income tax to having the eighth highest rate in the country. Mr. Gates, a wealthy lawyer whose son is among the richest men on the planet, is pitching the proposal as a chance for 97% of the voters to pay the state's bills by socking it to the richest 3%. What he doesn't say is that Washington's lack of an income tax is among its main comparative advantages in luring those top 3%, along with their businesses and jobs, into the state.”


The Journal was not alone. All the big newspapers in the state opposed Initiative 1098, and for some of the same reasons. The Seattle Times claimed in an editorial headline that the income tax would “Calitaxicate” Washington, a play on an old environmentalist slogan, “Don’t Californicate Washington.”

In presenting its case to the public, the proponents had to make a radical tax sound non-radical. Their supporters could do the dirty work. The Stranger, an alternative Seattle paper famous for its sex columnist, Dan Savage, ran a cover story called, “Tax the Filthy Rich!” But the I-1098 campaign was not going to talk that way. It needed to wrap “soak the rich” in a warm, progressive blanket. And for that, it had the rich man who wanted to be soaked: old Mr. Gates.

He was front and center in the first big TV ad: “Hello. I’m Bill Gates senior. And I love our state. That’s why I helped write Initiative 1098. Middle-class families are struggling. 1098 will cut state property taxes by 20%. And eliminate the B&O tax for small businesses. It will dedicate two billion dollars a year to improve education and health care. And only the wealthiest 1.2% will pay more. Support Initiative 1098. It’s good for Washington.”

Opponents hooted at this. Gates had made Initiative 1098 sound like a tax cut, which it was not. He had been too chicken to use the electric words, “income tax.”

The classic was his next ad, with Gates in blue shirt and khaki pants. He opens: “Some people say Initiative 1098 is about soaking the rich. But it’s really about doing something for the next generation.”

The camera pulls back, and you see Gates sitting above a plastic pool of water, in bare feet. Kids are next to the pool, throwing softballs at a target. Gates says: “You see, state cutbacks have put our kids at risk, and we can’t just sit here and do nothing about it. 1098 will dedicate two billion dollars a year for education and health care, and only the wealthiest 1.2% will pay more.”

A softball hits the target, thwap, and the old man drops into the pool of water. The kids cheer. Gates makes an underwater face through the pool’s plastic wall. He surfaces, streaming with water, and says: “Vote yes on 1098. It’s good for Washington.”

It’s a cute ad with a hard message: soak the rich.

The other side’s ads were also carefully constructed.

Their money had come from CEOs who cared about how a high-earners tax would become a corporate income tax for companies such as privately held Bartell Drug, an “S” corporation that flowed its earnings directly to its shareholders; about how the tax would handicap out-of-state recruitment for technology employers such as Microsoft and Dendreon, a Seattle biotech company; and about how it would simply enable more state spending, making Washington like California.


In the fight in Oregon, Nike founder and CEO Phil Knight had bought a fold-over-page-one ad in the Portland Oregonian to make an economic case against Measure 66. It was a fine pitch, but it hadn’t resulted in a sale. The Washington No-on-1098 campaign was not going to make CEO-type arguments.

The No people did not have rich guys in their advertisements. In their signature ad, they offered a middle-aged white woman, overweight, plainly but nicely dressed, in a modest suburban kitchen, pouring herself a cup of coffee from a $79.95 Cuisinart coffeemaker. No Italian espresso machine. She looks straight at the viewers and says: “Times are tough these days. And sure, Initiative 1098 might sound good, but supporters don’t even tell us what it really is: an INCOME TAX. And the problem is that after two years, Olympia can extend the income tax to everyone, including people like me. Look what’s happened with the sales tax. It keeps going up and up. I just don’t trust Olympia. So I’m voting no on 1098.”

The message is that the people pushing the tax are tricksters. They hide the words “income tax.” They hide the fact that in two years the state legislature can pile the tax on the middle class by a simple majority vote. When the woman says, “I just don’t trust Olympia,” the viewer thinks, “Yeah, and I don’t trust the weasels behind this tax.”

It was an effective ad. Before the two sides’ ads began running, Initiative 1098 had support in the low 40s, with opposition lower than that. As both campaigns’ ads rolled out, the percentage opposed grew but the percentage supporting did not. Sometime between mid-September and early October the two lines crossed. By mid-October the proponents had pulled the Gates ads and were using other ones, but nothing worked for them. It was clear that Initiative 1098 was going to lose.

And it did. It was not a narrow loss. It was a big loss. It was a very big deal for Washington, which remains the only “blue” state with no personal income tax.






A Libertarian Election

 | 

In emails sent on election day to prospective Democratic voters, President Obama said, “Today, the country will make a choice about the direction we take in the years ahead.” We’ll see now whether he respects that choice. I predict he won’t. Yet the Republicans have won an enormous victory.

Of the 435 seats in the House of Representatives, two-thirds are safe preserves for Democrats or Republicans. During this election, the Republicans put two-thirds of the rest of them in play. And of those seats, they won about two-thirds. If America operated with a European parliamentary system, Obama would not be president today. He lost the confidence of a majority of parliamentary districts.

Libertarians should be happy, though perhaps not ecstatic, about the Republican victory.

Why?

Because the Republicans are, on the national level, the only effective barrier to the enormous expansion of government personified by Barack Obama, Harry Reid, and Nancy Pelosi.

Stereotypes? Yes. Amusing targets of ridicule? Right again. Yet until now, these ridiculous figures have been potent encroachers on the freedom of every American.

Despite the gross imperfections of the Republican Party, we have to recognize that it is a party that could not exist without essential libertarian ideas. Just as Obama’s most potent ideas come from European socialism, so the Republicans’ most potent ideas come from American concepts of individual liberty. I refer to default notions of limited government, private property, freedom from unnecessary taxation, ownership of self-protective devices (guns), and unabridged freedom of speech and association. Without these ideas, a libertarian society is impossible. Never mind the rest of it: at this moment, the Republicans are friends of these ideas; the Democrats are not — although even Obama was constrained, in his post-election press conference on Wednesday afternoon, to pay tribute to free enterprise and entrepreneurship as the source of American prosperity.

If America operated with a European parliamentary system, Obama would not be president today.

“Across the country,” says David Harsanyi of the Denver Post, “the electorate laid down a resounding angry vote against activist government. And, mind you, no one had to wrestle with any ambiguity about the objectives of the Republicans. Democrats helpfully hammered home all the finer points of libertarianism, and Republicans typically embraced them. Exit polls showed that this election was a rejection of the progressive agenda of ‘stimulus,’ of Obamacare, of cap and trade. Exit polls show that there was great anger with government — not government that didn’t work, or government that didn’t do enough, but government that didn’t know its place.”

Yet the election wasn’t just about ideas; it was about what can be done with ideas in the electoral marketplace. With this in mind, let’s try to put the events of Nov. 2 into some kind of libertarian perspective.

Many people, such as Neil King, Jr., writing for the Wall Street Journal on Nov. 1, wonder about the volatility of American elections, about the electorate’s movement between, for instance, the 2008 and the 2010 elections. How, King wonders, can the country “solve its long-term problems . . . when voters seem so uncertain which party should lead the charge”? I agree with King’s list of specific problems — deficits, Social Security, healthcare costs: yes, those are real issues. But I disagree with his analysis of the situation.

Even Obama was constrained, in his post-election press conference on Wednesday afternoon, to pay tribute to free enterprise and entrepreneurship as the source of American prosperity.

For one thing, “voters” are not quite “so uncertain.” In American politics, huge results can follow from the shift of only 4.6% of the voters, which was the difference between the returns for the Democratic presidential nominee in 2004 and the returns for the same party’s nominee in 2008. As I’ve often pointed out in Liberty, the two big American parties live by getting as many marginal votes as they can, wherever they can get them. If one party falls beneath its normal margin, it will try to find a group of issues that will allow it to annex some new group of voters, or bring some new inspiration to the formerly faithful. That’s why the Republicans (or the Democrats) can stand for one thing in certain years, and nearly the opposite in others, and why individual candidates within each party can stand for both at the same time.

This year, the Republicans put new life into their dormant libertarian principles, and they won decisively. It is not inconceivable that the Democrats will create some simulacrum of those principles for use in the next election. But the important thing is to reduce the power of politics to “solve” our problems.

When government is perceived as the source of solutions, the problems ordinarily get worse — because, as many voters saw this year, the fundamental problem isn’t the deficit or healthcare or old-fashioned entitlement programs. The fundamental problem is the reach of government. The libertarian idea — originally, the American idea — is to conserve the power of the individual to decide what his indebtedness shall be, what his investments shall be, and what steps he should take to provide for himself in sickness and old age. The problems that King enumerates would not be political if the friends of government hadn’t “led the charge” to extend government’s power and purview.

The two big American parties live by getting as many marginal votes as they can, wherever they can get them.

That’s why the victory of the Republicans is important and interesting, even exciting. In 2010, the Republicans responded to the repudiation of Bush in 2008 by seeking voters everywhere outside the Democratic base. They largely abandoned their appeals to “social issues,” which hadn’t been getting them any crucial amounts of votes, and they appealed instead to the people’s resentment of the Obama regime as arrogant, spendthrift, anti-property, and anti-individual — in short, fanatically expansive and power-seeking. They saw the Democratic regime as the American phalanx of the European nanny state, now in retreat even in Europe.

After World War II, the two big American parties studied the complex art of gerrymandering. In most states, they perfected it. They learned how to ensure that whoever had a seat in Congress would be able to keep it. To maintain their hold on “minority” (i.e., especially, African-American) voters, the Democrats created “urban” districts in which voters would never have a real choice of parties. But often the Republicans cooperated with the Democrats in the great effort to preserve legislators’ individual seats. In this year, however, some of the most gerrymandered districts in the union changed hands: look at the map of Illinois congressional district 17, and notice what happened there, and you’ll see what I mean. The appeal of an essentially libertarian platform inundated many of the carefully fenced-off legislative fiefdoms, and swept their lords away.

Walter Shapiro of AOL’s “Politics Daily” describes the current situation clearly: “At a time when the percentage of voters who call themselves liberal (about 20 percent) has remained constant, the number of self-identified conservatives among voters has risen from 32 percent (2006) to 34 percent (2008) to a whopping 41 percent (2010). In fact, conservatives outnumbered moderates (39 percent) among 2010 voters. Since such ideological markers normally move at a glacial pace, the dramatic increase in conservatives may be the most lasting legacy of the 2010 election.”

The fundamental problem isn’t the deficit or healthcare or old-fashioned entitlements programs. The fundamental problem is the reach of government.

Consider now that conservative “social issues” were not a factor in the current contest, hence did not increase voters’ self-identification as “conservatives.” The new “conservatives” were attracted to that label largely by the libertarian idea of limited government.

In this way, the Republicans found their voters. On the scale in which elections are won and lost in America, they found them in enormous numbers. And this discovery will have enormous effects — if people who believe in the American ideal of individual liberty continue to demonstrate that they will settle for nothing less.





© Copyright 2013 Liberty Foundation. All rights reserved.



Opinions expressed in Liberty are those of the authors and not necessarily those of the Liberty Foundation.

All letters to the editor are assumed to be for publication unless otherwise indicated.