A guide to the occupation, The Occupy Handbook is a talked-about source for understanding why 1 percent of Americans take almost a quarter of the nation's income, and for gauging the long-term effects of a protest movement that even the objects of its attack can find little fault with.
Paperback
Product Details
ISBN-13: 9780316220217
Publisher: Little, Brown and Company
Publication date: 04/17/2012
Pages: 560
Product dimensions: 5.40(w) x 8.20(h) x 1.60(d)
Read an Excerpt
The Occupy Handbook
By
Back Bay Books
ISBN: 9780316220217

PART I
HOW WE GOT HERE
Advice from the 1 Percent: Lever Up, Drop Out
Michael Lewis
Michael Lewis is the bestselling author of Liar’s Poker, Moneyball, The Blind Side, The Big Short, and Boomerang. He lives in Berkeley, California, with his wife and three children.
To: The Upper Ones
From: Strategy Committee
Re: The Counterrevolution
As usual, we have much to celebrate.
The rabble has been driven from the public parks. Our adversaries, now defined by the freaks and criminals among them, have demonstrated only that they have no idea what they are doing. They have failed to identify a single achievable goal.
Just weeks ago, in our first memo, we expressed concern that the big Wall Street banks were vulnerable to a mass financial boycott—more vulnerable even than tobacco companies or apartheid-era South African multinationals. A boycott might raise fears of a bank run; and the fears might create the fact.
Now, we’ll never know: the Lower 99’s notion of an attack on Wall Street is to stand around hollering at the New York Stock Exchange. The stock exchange!
We have won a battle, but this war is far from over.
As our chief quant notes, “No matter how well we do for ourselves, there will always be 99 of them for every one of us.” Disturbingly, his recent polling data reveal that many of us don’t even know who we are: fully half of all Upper Ones believe themselves to belong to the Lower 99. That any human being can earn more than 344 grand a year without having the sense to identify which side in a class war he is on suggests that we should limit membership to actual rich people. But we wish to address this issue in a later memo. For now we remain focused on the problem at hand: how to keep their hands off our money.
We have identified two looming threats: the first is the shifting relationship between ambitious young people and money. There’s a reason the Lower 99 currently lack leadership: anyone with the ability to organize large numbers of unsuccessful people has been diverted into Wall Street jobs, mainly in the analyst programs at Morgan Stanley and Goldman Sachs. Those jobs no longer exist, at least not in the quantities sufficient to distract an entire generation from examining the meaning of their lives. Our Wall Street friends, wounded and weakened, can no longer pick up the tab for sucking the idealism out of America’s youth. But if not them, who? We on the committee are resigned to all elite universities becoming breeding grounds for insurrection, with the possible exception of Princeton.
The second threat is in the unstable mental pictures used by Lower 99ers to understand their economic lives. (We have found that they think in pictures.) For many years the less viable among us have soothed themselves with metaphors of growth and abundance: rising tides, expanding pies, trickling down. A dollar in our pocket they viewed hopefully, as, perhaps, a few pennies in theirs. They appear to have switched this out of their minds for a new picture, of a life raft with shrinking provisions. A dollar in our pockets they now view as a dollar from theirs. Fearing for their lives, the Lower 99 will surely become ever more desperate and troublesome. Complaints from our membership about their personal behavior are already running at post–French Revolutionary highs.
We on the strategy committee see these developments as inexorable historical forces. The Lower 99 is a ticking bomb that can’t be defused. They may be occasionally distracted by, say, a winning lottery ticket. (And we have sent out the word to the hedge fund community to cease their purchases of such tickets.) They may turn their anger on others—immigrants, for instance, or the federal government—and we can encourage them to do so. They may even be frightened into momentary submission. (We’re long pepper spray.)
But in the end we believe that any action we take to prevent them from growing better organized, and more aware of our financial status, will only delay the inevitable: the day when they turn, with far greater effect, on us.
Hence our committee’s conclusion: we must be able to quit American society altogether, and they must know it. For too long we have simply accepted the idea that we and they are all in something together, subject to the same laws and rituals and cares and concerns. This state of social relations between rich and poor isn’t merely unnatural and unsustainable but, in its way, shameful. (Who among us could hold his head high in the presence of Louis XIV or those Russian czars or, for that matter, Croesus?)
The modern Greeks offer the example in the world today that is, the committee has determined, best in class. Ordinary Greeks seldom harass their rich, for the simple reason that they have no idea where to find them. To a member of the Greek Lower 99 a Greek Upper One is as good as invisible. He pays no taxes, lives no place, and bears no relationship to his fellow citizens. As the public expects nothing of him, he always meets, and sometimes even exceeds, their expectations. As a result, the chief concern of the ordinary Greek about the rich Greek is that he will cease to pay the occasional visit.
That is the sort of relationship with the Lower 99 we must cultivate if we are to survive. We must inculcate, in ourselves as much as in them, the understanding that our relationship to each other is provisional, almost accidental, and their claims on us nonexistent.
As a first, small step we propose to bestow, annually, an award to the Upper One who has best exhibited to the wider population his willingness and ability to have nothing at all to do with them. As the recipient of the first Incline Award—so named for the residents of Incline Village, Nevada, many of whom have bravely fled California state taxes—we propose Jeff Bezos.
His private rocket ship may have exploded before it reached outer space. But before it did, it sent back to Earth the message we hope to convey:
We’re outta here!
The Widening Gyre: Inequality, Polarization, and the Crisis
Paul Krugman
and
Robin Wells
Paul Krugman is a professor at the Woodrow Wilson School, Princeton University, and an op-ed columnist for the New York Times. He is the 2008 winner of the Nobel Prize in Economics. He is the author of three New York Times bestsellers, The Great Unraveling (2003), The Return of Depression Economics (1999), and The Conscience of a Liberal (2007), and of End This Depression Now! (2012). Robin Wells is an economist and a coauthor, with Paul Krugman, of the bestselling textbook Economics. She was formerly on the faculty of Princeton University and Stanford University Business School.
America emerged from the Great Depression and the Second World War with a much more equal distribution of income than it had in the 1920s; our society became middle-class in a way it hadn’t been before. This new, more equal society persisted for thirty years. But then we began pulling apart, with huge income gains for those with already high incomes. As the Congressional Budget Office has documented, the 1 percent—the group implicitly singled out in the slogan “We are the 99 percent”—saw its real income nearly quadruple between 1979 and 2007, dwarfing the very modest gains of ordinary Americans. Other evidence shows that within the 1 percent, the richest 0.1 percent and the richest 0.01 percent saw even larger gains.
By 2007, America was about as unequal as it had been on the eve of the Great Depression—and sure enough, just after hitting this milestone, we plunged into the worst slump since the Depression. This probably wasn’t a coincidence, although economists are still working on trying to understand the linkages between inequality and vulnerability to economic crisis.
Here, however, we want to focus on a different question: why has the response to crisis been so inadequate? Before financial crisis struck, we think it’s fair to say that most economists imagined that even if such a crisis were to happen, there would be a quick and effective policy response. In 2003 Robert Lucas, the Nobel laureate and then president of the American Economic Association, urged the profession to turn its attention away from recessions to issues of longer-term growth. Why? Because, he declared, the “central problem of depression-prevention has been solved, for all practical purposes, and has in fact been solved for many decades.”
Yet when a real depression arrived—and what we are experiencing is indeed a depression, although not as bad as the Great Depression—policy failed to rise to the occasion. Yes, the banking system was bailed out. But job-creation efforts were grossly inadequate from the start—and far from responding to the predictable failure of the initial stimulus to produce a dramatic turnaround with further action, our political system turned its back on the unemployed. Between bitterly divisive politics that blocked just about every initiative from President Obama, and a bizarre shift of focus away from unemployment to budget deficits despite record-low borrowing costs, we have ended up repeating many of the mistakes that perpetuated the Great Depression.
Nor, by the way, were economists much help. Instead of offering a clear consensus, they produced a cacophony of views, with many conservative economists, in our view, allowing their political allegiance to dominate their professional competence. Distinguished economists made arguments against effective action that were evident nonsense to anyone who had taken Econ 101 and understood it. Among those behaving badly, by the way, was none other than Robert Lucas, the same economist who had declared just a few years before that the problem of preventing depressions was solved.
So how did we end up in this state? How did America become a nation that could not rise to the biggest economic challenge in three generations, a nation in which scorched-earth politics and politicized economics created policy paralysis?
We suggest it was the inequality that did it. Soaring inequality is at the root of our polarized politics, which made us unable to act together in the face of crisis. And because rising incomes at the top have also brought rising power to the wealthiest, our nation’s intellectual life has been warped, with too many economists co-opted into defending economic doctrines that were convenient for the wealthy despite being indefensible on logical and empirical grounds.
Let’s talk first about the link between inequality and polarization.
Our understanding of American political economy has been strongly influenced by the work of the political scientists Keith Poole, Howard Rosenthal, and Nolan McCarty. Poole, Rosenthal, and McCarty use congressional roll-call votes to produce a sort of “map” of political positions, in which both individual bills and individual politicians are assigned locations in an abstract issues space. The details are a bit complex, but the bottom line is that American politics is pretty much one-dimensional: once you’ve determined where a politician lies on a left–right spectrum, you can predict his or her votes with a high degree of accuracy. You can also see how far apart the two parties’ members are on the left–right spectrum—that is, how polarized congressional politics is.
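The one-dimensional "map" described above rests on a simple spatial-voting idea: place each legislator at a point on a left–right line, and predict that they vote for whichever outcome of a bill lies closer to that point. The sketch below is only a toy illustration of that idea, not the actual NOMINATE procedure (which estimates legislator and bill positions jointly from thousands of roll calls); every name and number in it is hypothetical.

```python
# Toy spatial-voting sketch of the Poole-Rosenthal-McCarty idea:
# a legislator supports whichever outcome is closer to their ideal point.
# All positions below are invented for illustration.

def predict_vote(ideal_point, yea_position, nay_position):
    """Vote 'yea' if the bill's yea outcome is closer to the legislator's ideal point."""
    if abs(ideal_point - yea_position) < abs(ideal_point - nay_position):
        return "yea"
    return "nay"

# Hypothetical legislators on a left (-1) to right (+1) spectrum.
legislators = {"A": -0.8, "B": -0.2, "C": 0.3, "D": 0.9}

# A hypothetical bill: its "yea" outcome sits left of its "nay" outcome,
# so the cutpoint falls at the midpoint, (-0.4 + 0.5) / 2 = 0.05.
yea, nay = -0.4, 0.5

votes = {name: predict_vote(x, yea, nay) for name, x in legislators.items()}
print(votes)  # everyone left of the cutpoint votes yea, everyone right votes nay
```

Once positions like these are estimated from real roll calls, party polarization can be read off directly as the distance between the two parties' centers of gravity on the line.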
It’s not surprising that the parties have moved ever further apart since the 1970s. There used to be substantial overlap: there were moderate and even liberal Republicans, like New York’s Jacob Javits, and there were conservative Democrats. Today the parties are totally disjoint, with the most conservative Democrat to the left of the most liberal Republican, and the two parties’ centers of gravity very far apart.
What’s more surprising is the fact that the relatively nonpolarized politics of the postwar generation is a relatively recent phenomenon—before the war, and especially before the Great Depression, politics was almost as polarized as it is now. And the track of polarization closely follows the track of income inequality, with the degree of polarization closely correlated over time with the share of total income going to the top 1 percent.
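The claim that polarization tracks the top-1-percent income share is, in statistical terms, a high correlation between two time series. As a purely illustrative sketch (the figures below are invented, not CBO or roll-call data), a Pearson correlation over such series can be computed like this:

```python
# Illustrative Pearson correlation between two made-up decade-by-decade
# series: a top-1% income share and a congressional polarization score
# that fall together midcentury and rise together after the 1970s.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical values (NOT real data), one per decade:
top1_share = [0.10, 0.09, 0.08, 0.09, 0.13, 0.17, 0.20]
polarization = [0.55, 0.50, 0.45, 0.50, 0.65, 0.80, 0.90]

r = pearson(top1_share, polarization)
print(round(r, 3))  # close to 1: the two series move together
```

A coefficient near 1, as here, is what "closely correlated over time" means; it establishes co-movement, though not by itself the direction of causation, which is taken up next.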
Why does higher inequality seem to produce greater political polarization? Crucially, the widening gap between the parties has reflected Republicans moving right, not Democrats moving left. This pops out of the Poole-Rosenthal-McCarty numbers, but it’s also obvious from the history of various policy proposals. The Obama health care plan, to take an obvious example, was originally a Republican plan, in fact a plan devised by the Heritage Foundation. Now the GOP denounces it as socialism.
The most likely explanation of the relationship between inequality and polarization is that the increased income and wealth of a small minority has, in effect, bought the allegiance of a major political party. Republicans are encouraged and empowered to take positions far to the right of where they were a generation ago, because the financial power of the beneficiaries of their positions both provides an electoral advantage in terms of campaign funding and provides a sort of safety net for individual politicians, who can count on being supported in various ways even if they lose an election.
Whatever the precise channels of influence, the result is a political environment in which Mitch McConnell, the Republican leader in the Senate, felt it was perfectly okay to declare before the 2010 midterm elections that his main goal, if the GOP won control, would be to incapacitate the president of the United States: “The single most important thing we want to achieve is for President Obama to be a one-term president.”
Needless to say, this is not an environment conducive to effective antidepression policy, especially given the way Senate rules allow a cohesive minority to block much action. We know that the Obama administration expected to win strong bipartisan support for its stimulus plan, and that it also believed that it could go back for more if events proved this necessary. In fact, it took desperate maneuvering to get sixty votes even in the first round, and there was no question of getting more later.
In sum, extreme income inequality led to extreme political polarization, and this greatly hampered the policy response to the crisis. Even if we had entered the crisis in a state of intellectual clarity—with major political players at least grasping the nature of the crisis and the real policy options—the intensity of political conflict would have made it hard to mount an effective response.
In reality, of course, we did not enter the crisis in a state of clarity. To a remarkable extent, politicians—and, sad to say, many well-known economists—reacted to the crisis as if the Great Depression had never happened. Leading politicians gave speeches that could have come straight out of the mouth of Herbert Hoover; famous economists reinvented fallacies that one thought had been refuted in the mid-1930s. Why?
The answer, we would suggest, also runs back to inequality.
It’s clear that the financial crisis of 2008 was made possible in part by the systematic way in which financial regulation had been dismantled over the previous three decades. In retrospect, in fact, the era from the 1970s to 2008 was marked by a series of deregulation-induced crises, including the hugely expensive savings and loan crisis; it’s remarkable that the ideology of deregulation nonetheless went from strength to strength.
It seems likely that this persistence despite repeated disaster had a lot to do with rising inequality, with the causation running in both directions. On one side, the explosive growth of the financial sector was a major source of soaring incomes at the very top of the income distribution. On the other side, the fact that the very rich were the prime beneficiaries of deregulation meant that as this group gained power—simply because of its rising wealth—the push for deregulation intensified.
These impacts of inequality on ideology did not end in 2008. In an important sense, the rightward drift of ideas, both driven by and driving rising income concentration at the top, left us incapacitated in the face of crisis.
In 2008 we suddenly found ourselves living in a Keynesian world—that is, a world that very much had the features John Maynard Keynes focused on in his 1936 magnum opus, The General Theory of Employment, Interest, and Money. By that we mean that we found ourselves in a world in which lack of sufficient demand had become the key economic problem, and in which narrow technocratic solutions, like cuts in the Federal Reserve’s interest rate target, were not adequate to that situation. To deal effectively with the crisis, we needed more activist government policies, in the form both of temporary spending to support employment and efforts to reduce the overhang of mortgage debt.
One might think that these solutions could still be considered technocratic, and separated from the broader question of income distribution. Keynes himself described his theory as “moderately conservative in its implications,” consistent with an economy run on the principles of private enterprise. From the beginning, however, political conservatives—and especially those most concerned with defending the position of the wealthy—have fiercely opposed Keynesian ideas.
And we mean fiercely. Although Paul Samuelson’s textbook Economics: An Introductory Analysis is widely credited with bringing Keynesian economics to American colleges in the 1940s, it was actually the second entry; a previous book, by the Canadian economist Lorie Tarshis, was effectively blackballed by right-wing opposition, including an organized campaign that successfully induced many universities to drop it. Later, in his God and Man at Yale, William F. Buckley Jr. would direct much of his ire at the university for allowing the teaching of Keynesian economics.
The tradition continues through the years. In 2005 the right-wing magazine Human Events listed Keynes’s General Theory among the ten most harmful books of the nineteenth and twentieth centuries, right up there with Mein Kampf and Das Kapital.
Why such animus against a book with a “moderately conservative” message? Part of the answer seems to be that even though the government intervention called for by Keynesian economics is modest and targeted, conservatives have always seen it as the thin edge of the wedge: concede that the government can play a useful role in fighting slumps, and the next thing you know we’ll be living under socialism. The rhetorical amalgamation of Keynesianism with central planning and radical redistribution—although explicitly denied by Keynes himself, who declared that “there are valuable human activities which require the motive of money-making and the environment of private wealth-ownership for their full fruition”—is almost universal on the right.
There is also the motive suggested by Keynes’s contemporary Michał Kalecki in a classic 1943 essay:
We shall deal first with the reluctance of the “captains of industry” to accept government intervention in the matter of employment. Every widening of state activity is looked upon by business with suspicion, but the creation of employment by government spending has a special aspect which makes the opposition particularly intense. Under a laissez-faire system the level of employment depends to a great extent on the so-called state of confidence. If this deteriorates, private investment declines, which results in a fall of output and employment (both directly and through the secondary effect of the fall in incomes upon consumption and investment). This gives the capitalists a powerful indirect control over government policy: everything which may shake the state of confidence must be carefully avoided because it would cause an economic crisis. But once the government learns the trick of increasing employment by its own purchases, this powerful controlling device loses its effectiveness. Hence budget deficits necessary to carry out government intervention must be regarded as perilous. The social function of the doctrine of “sound finance” is to make the level of employment dependent on the state of confidence.
This sounded a bit extreme to us the first time we read it, but it now seems all too plausible. These days you can see the “confidence” argument being deployed all the time. For example, here is how Mort Zuckerman began a 2010 op-ed in the Financial Times, aimed at dissuading President Obama from taking any kind of populist line:
The growing tension between the Obama administration and business is a cause for national concern. The president has lost the confidence of employers, whose worries over taxes and the increased costs of new regulation are holding back investment and growth. The government must appreciate that confidence is an imperative if business is to invest, take risks and put the millions of unemployed back to productive work.
There was and is, in fact, no evidence that “worries over taxes and the increased costs of new regulation” are playing any significant role in holding the economy back. Kalecki’s point, however, was that arguments like this would fall completely flat if there was widespread public acceptance of the notion that Keynesian policies could create jobs. So there is a special animus against direct government job-creation policies, above and beyond the generalized fear that Keynesian ideas might legitimize government intervention in general.
Put these motives together, and you can see why writers and institutions with close ties to the upper tail of the income distribution have been consistently hostile to Keynesian ideas. That has not changed over the seventy-five years since Keynes wrote the General Theory. What has changed, however, is the wealth and hence influence of that upper tail. These days, conservatives have moved far to the right even of Milton Friedman, who at least conceded that monetary policy could be an effective tool for stabilizing the economy. Views that were on the political fringe forty years ago are now part of the received doctrine of one of our two major political parties.
A touchier subject is the extent to which the vested interest of the 1 percent, or better yet the 0.1 percent, has colored the discussion among academic economists. But surely that influence must have been there: if nothing else, the preferences of university donors, the availability of fellowships and lucrative consulting contracts, and so on must have encouraged the profession not just to turn away from Keynesian ideas but to forget much that had been learned in the 1930s and ’40s.
In the debate over responses to the Great Recession and its aftermath, it has been shocking to see so many highly credentialed economists making not just elementary conceptual errors but old elementary conceptual errors—the same errors Keynes took on three generations ago. For example, one thought that nobody in the modern economics profession would repeat the mistakes of the infamous “Treasury view,” under which any increase in government spending necessarily crowds out an equal amount of private spending, no matter what the economic conditions might be. Yet in 2009, exactly that fallacy was expounded by distinguished professors at the University of Chicago.
Again, our point is that the dramatic rise in the incomes of the very affluent left us ill prepared to deal with the current crisis. We arrived at a Keynesian crisis demanding a Keynesian solution—but Keynesian ideas had been driven out of the national discourse, in large part because they were politically inconvenient for the increasingly empowered 1 percent.
In summary, then, the role of rising inequality in creating the economic crisis of 2008 is debatable; it probably did play an important role, if nothing else than by encouraging the financial deregulation that set the stage for crisis. What seems very clear to us, however, is that rising inequality played a central role in causing an ineffective response once crisis hit. Inequality bred a polarized political system, in which the right went all out to block any and all efforts by a modestly liberal president to do something about job creation. And rising inequality also gave rise to what we have called a Dark Age of macroeconomics, in which hard-won insights about how depressions happen and what to do about them were driven out of the national discourse, even in academic circles.
This implies, we believe, that the issue of inequality and the problem of economic recovery are not as separate as a purely economic analysis might suggest. We’re not going to have a good macroeconomic policy again unless inequality, and its distorting effect on policy debate, can be curbed.
Take a Stand: Sit In
Philip Dray
Philip Dray is the author of several books, including There Is Power in a Union: The Epic Story of Labor in America (2010) and At the Hands of Persons Unknown: The Lynching of Black America (2002), which won the Robert F. Kennedy Memorial Book Award.
America has often experienced stirrings of unrest that appear inchoate and lacking in direction but which prove enduring and seminal to the country’s history. During the Great Upheaval of 1877 the nation learned for the first time the depth of resentment among its starving classes—that there existed many living in not so quiet desperation. The aftermath of the Civil War had seen the rapid expansion of national markets and the related growth of the railroads, symbolized by the driving of the golden spike at Promontory, Utah, in 1869. Organized labor had struggled to keep pace, but its attempts at nationwide union building were uneven. Railroad workers, for instance, had strong guildlike brotherhoods—of engineers, firemen, brakemen—yet lacked a coherent voice.
By 1877 the railroads, overextended logistically and financially, were forced to extract as much labor as possible from their employees for the least amount of money. This resulted in longer shifts, missed payrolls, and impositions such as the lengthened freight trains, known as “double-headers,” which demanded twice the work from smaller crews and increased the already unreasonably high risk of on-the-job injury. Early summer found workers on the Baltimore & Ohio once again behind in their pay. On July 16 at Martinsburg, West Virginia, a vital rail junction near the Maryland border, hundreds stopped work. They blocked the tracks, bottling up traffic in all directions, and, using their knowledge of the yard, sabotaged switches and drove locomotives onto sidings. One eastbound train bearing cattle to market was off-loaded, the animals left to graze in a nearby pasture.
Within days the revolt spread to nearby Baltimore, New York, and Pittsburgh, and west to Cleveland, Omaha, and San Francisco. Across the country eighty thousand rail workers walked off the job, stranding freight and passengers and bringing the nation’s rail system to a standstill. One Chicago newspaper, unsure what to call the yet unnamed phenomenon, noted its arrival with the simple headline “It Is Here!”
At Pittsburgh an ad hoc Trainmen’s Union under the guidance of Robert Ammon, a twenty-five-year-old brakeman, attempted to coordinate the strikers, but generally the uprising was spontaneous and unscripted. It also held a powerful appeal. Almost everywhere, the workers were joined at the barricades by sympathizers—men and women from the mills, domestic workers, children, the jobless, blacks, whites. In St. Louis, rail workers were joined by brewery men, black stevedores, and even the town’s newsboys in what was probably America’s first general strike. “We’re with you. We’re in the same boat,” a mill worker assured a rally in Pittsburgh. “I heard a reduction of ten percent hinted at in our mill this morning. I won’t call employers despots, I won’t call them tyrants, but the term ‘capitalists’ is sort of synonymous and will do as well.”
The rail barons, finding no one with whom to negotiate and disinclined to do so anyway, sought to suppress the uprising by force. Telegraphs chattered with urgent requests for troops in governors’ offices from West Virginia to Illinois. Crowds blocking train yards confronted units of militia and, after President Rutherford B. Hayes dispatched them, federal soldiers. The official response itself was disordered. In some towns the militia refused to lay hands on the strikers, who were their neighbors and friends, while in Baltimore the governor and other high-ranking officials were trapped in the train station, surrounded by a mob furious that ten men and boys had been shot down by panicked militia. In Pittsburgh a newspaper declared that “The Lexington of the Labor Conflict Is at Hand” after a militia unit opened fire on a crowd. In response the masses descended on local gun shops, buying or looting most of the contents, vandalized trains and tracks, and destroyed the city’s central train depot. They then surrounded and set fire to the roundhouse, in which the soldiers had sought refuge, forcing them to flee for their lives. In Chicago, lethal violence occurred at the Halsted Viaduct, where a mob trapped an outnumbered group of police and a vicious street battle ensued with fists and clubs, bringing the city’s death toll to thirty.
As troubling as such scenes were, perhaps more important to the nation was the strike’s impact on the flow of goods and commerce. Coal trains were stranded on Pennsylvania mountainsides, and boxcars loaded with rotting produce sat in the sun on sidings just beyond the limits of big-city freight yards. The loading and off-loading of vessels on inland waterways and both the Atlantic and Pacific coasts were interrupted, as was most passenger travel.
Technically, labor failed to win the Great Rail Strike of 1877; the strikers returned to their jobs having secured no formal pact or concessions. Still, rail workers, and all American workers, had gained an invaluable new sense of their collective strength. They had shown that even the vast and powerful railroads were vulnerable. With the weapon of the strike, workers held power; they could shut the railroads—and the country—down. “The Republic had celebrated its Centennial in July, 1876,” one historian has noted. “Exactly a year later, the industrial working class of the nation celebrated its coming of age.”
Nothing would ever be quite the same again; the strike had also opened the country’s eyes to its embarrassingly substantial population of poor people. These were the unemployed, the urchins, the homeless “tramps” and “slum-dwellers” who, in desperate times, became what the Nation’s E. L. Godkin termed “the mob, ready-made.” As events had shown, economic inequality and desperation bred violence, disruption, and radicalism. In the days after the strike, America—from President Hayes to Harper’s Weekly—paused to reconsider the efficacy of the creed of winner take all. “The laissez-faire policy has been knocked out of men’s heads for the next generation,” one newspaper concluded, while at the White House the president wrote in his diary, “The strikes have been put down by force, but now for the real remedy.”
Eighteen seventy-seven was the year the country formally gave up on Reconstruction, withdrawing federal troops from the South and relinquishing the idealistic effort to integrate the more than four million slaves freed by the Civil War into American society. The strike’s violence seemed to validate the shift in regional focus; now the troops could quell urban labor strife. The nation’s best instincts were also redirected toward the amelioration of economic hardship and social ills in the cities, helping to inspire settlement houses, a liberal intellectual Protestant movement known as the Social Gospel, and an oppositional political consciousness that found expression in the nascent Socialist Party.
Each generation of Americans encounters a political, economic, or social dilemma it may choose to confront with thought and activism. In the early twentieth century the challenge was to introduce fairness and public scrutiny into relations between industry and workers, business and consumers. The 1930s brought the Popular Front and the global crusade against Fascism. When, at 4:30 in the afternoon on February 1, 1960, four black college students occupied stools at the whites-only lunch counter in the F. W. Woolworth’s in downtown Greensboro, North Carolina, they secured their generation’s role in the nation’s struggle for racial justice.
The four young men—David Richmond, Ezell Blair Jr., Franklin McCain, and Joseph McNeil—had become friends at all-black North Carolina Agricultural & Technical College, and in late-night bull sessions had often discussed the predicament haunting their futures; even as educated African Americans they would be unable to enter the front door of a movie theater in the segregated South or eat lunch at a Woolworth’s, let alone find meaningful careers. They had planned carefully for that day; all were neatly dressed, and before taking seats at the counter they had purchased school supplies elsewhere in the store and obtained receipts.
Blair cleared his throat and asked the white waitress for a doughnut and a cup of coffee. “I’m sorry, Negroes eat at the other end,” she said, directing him to a stand-up snack bar. When Blair and the others showed no sign of leaving, manager C. L. “Curly” Harris was informed. He had worked hard to maintain his modest corner of the Woolworth’s empire, eschewing the term “five and dime” and insisting his emporium be considered a “junior department store.” He instructed his employees not to make a fuss over the four young men at the counter. They would leave soon enough, he said, if everyone simply ignored them.
News of what was taking place at Woolworth’s spread quickly through the streets of downtown Greensboro, however, and within minutes a crowd had gathered. A lone policeman arrived as well. Although the young men were defying a local segregation law, they were peaceful and had not stolen anything, so he did not arrest them. Finally, Harris announced that the store would close half an hour early. While the crowd exited through the front, the protesters were let out a side door.
Back at the A & T campus, Richmond, Blair, McCain, and McNeil were greeted as heroes. Word of their deed had preceded them, and many of their fellow students wanted to discuss its implications and plan more sit-ins. Greensboro mayor George Roach, meanwhile, had called the school president, Warmoth T. Gibbs Sr. Roach demanded that all A & T students be restricted to campus. Gibbs said that was impossible, as many of them had off-campus part-time jobs. (His first reaction to news of the sit-in at Woolworth’s had been to ask, “Why there? The food’s supposed to be terrible.”)
The next day McCain and McNeil returned to the Greensboro Woolworth’s with sixteen other students. McCain, McNeil, Billy Smith, and Clarence Henderson took seats at the lunch counter and remained all day without being served. On February 8 the protest spread to nearby Durham, where students from North Carolina College were joined at a lunch counter by white students from Duke University. Stores in Nashville, Richmond, and Memphis were next, and soon protests challenging Jim Crow segregation were under way all across the South, not only at five-and-dime chain stores but at swimming pools, beaches, libraries, movie theaters, and churches, while sympathy demonstrations occurred at Woolworth’s, Kresge’s, and W. T. Grant stores in northern cities. A few of the retailers showed signs of capitulation, recognizing that their business relied increasingly on black customers. Many others, however, continued to vow resistance. TV news showed the protesters being jeered at and attacked by hostile whites, and sometimes arrested, but the dominant image was that of nonviolent, decently behaved young people doing something inspiring and good.
“Ella, this is the thing!” Fred Shuttlesworth, a Birmingham, Alabama, minister and a member of the Southern Christian Leadership Conference (SCLC), raved on the phone to Ella Baker, SCLC’s executive secretary, after visiting a North Carolina sit-in. “You must tell Martin that we have to get with this right away. This can really shake up the world.” Martin Luther King Jr. and his colleagues had led the Montgomery bus boycott in 1955 and ’56 and had seen the advances from Brown v. Board of Education (1954) and the integration of Central High in Little Rock, Arkansas, in 1957.
Nothing could match what they were witnessing now, a veritable explosion of civil rights activism, one with broad popularity among young people, black and white, in the North and South. It showed no sign of abating and held the promise of reinvigorating the entire movement. Understandably, there was talk of an existing civil rights organization such as SCLC assuming leadership of the student movement. Baker, however, insisted that the young people be allowed to steer their own course. In mid-April she organized a conference at Shaw University, in Raleigh, that gathered sit-in participants from across the country. There were veterans of the North Carolina and Nashville sit-ins, and activists from as far away as Michigan and New York. As a symbol of pride and defiance some wore clothes torn or bloodied in sit-in battles with police or opposing demonstrators. Baker circulated an essay she had written, “More Than a Hamburger,” which challenged them to look beyond the integration of lunch counters to the demands of the broader civil rights struggle.
One of the first goals pursued by the Student Nonviolent Coordinating Committee (SNCC), which had been founded at the Raleigh conference with Baker and historian Howard Zinn as adult advisers, was the desegregation of retail businesses in Atlanta and Albany, Georgia. The group had also discussed the need for voter registration projects in parts of the Deep South where the black vote had been all but eliminated for decades by intimidation and legislative fiat, and where the federal government had long ago ceased trying to enforce laws designed to protect African American voting rights.
Many in SNCC felt that sit-ins and other forms of nonviolent public protest, or “direct action,” were most needed in the South. Others argued for the urgency of southern blacks regaining the right to vote. The direct action adherents worried that voter registration was too gradual. A central tenet of SCLC’s philosophy, voiced by Martin Luther King and James Lawson, one of the young leaders of the successful Nashville sit-ins, was “the beloved community,” a society of equality and racial justice attained through nonviolent means. Was nonviolence a faith, as Lawson and others believed, or simply a tactic? And could it be sustained in the face of violent reprisals and arrests?
The questions about direct action were answered in 1961, when SNCC entered “Fortress” Mississippi, the South’s most entrenched white supremacist state. “If you went into Mississippi and talked about voter registration, they’re going to hit you on the side of the head,” one SNCC worker quipped, “and that’s as direct as you can get.” There had been lonely voter registration efforts in Mississippi for years. In the Delta town of Cleveland, filling station owner Amzie Moore worked with discreet diligence, trying to make inroads without attracting the hostile attention of local whites. His counterpart in Amite County, in the southern part of the state, was E. W. Steptoe, president of the regional NAACP, who kept a loaded gun in every room of his house.
In August 1961 SNCC workers founded the Pike County Nonviolent Movement, which staged a sit-in by local high school students at the Pike County Library; shortly after, SNCC’s Bob Moses, who had taken local residents to the courthouse in Liberty, the seat of Amite County, to register to vote, was roughed up by police and arrested. “I didn’t recognize Bob at first, he was so bloody,” Steptoe later said. “I just took off his T-shirt and wrung the blood out of it like it had just been washed.” After receiving stitches, Moses appeared before a rally that night, his head wrapped in bandages.
“The law down here is law made by white people, enforced by white people, for the benefit of white people,” Moses said. “It will be that way until the Negroes begin to vote.” He urged his listeners to find the courage to accompany SNCC workers to the voting registrar’s office at the courthouse. His colleague Marion Barry, speaking for the direct action group who had staged the library sit-in, told the audience, “The attitude of a lot of people is ‘Don’t get in trouble.’ Let me tell you, Negroes have been in trouble since 1619. How can you get in trouble when you’re already in trouble? You’re in trouble until you become first-class citizens.”
In less than eighteen months, from February 1960 through the fall of 1961, the young people’s movement in the civil rights cause had gone from lunch counter sit-ins to a voter registration effort in one of the country’s harshest battlegrounds for racial justice. Their example, and the enthusiasm surrounding the sit-in movement, would carry over to the hundreds of young people who came to Mississippi to serve as the movement’s nonviolent foot soldiers in the 1963 Freedom Vote and the legendary Freedom Summer of 1964. In August 1964 the Mississippi Freedom Democratic Party, the first coalition of fairly elected, biracial Mississippi voters since Reconstruction, traveled to Atlantic City to demand the right to be seated at the Democratic National Convention.
The Mississippi campaign broke down the walls of official complicity and silence in the state and brought the scrutiny of the U.S. Justice Department, and of the world, to the toughest bastion of the Jim Crow South. As the railroad strike of 1877 had led eventually to expanded workers’ rights, so the Greensboro sit-in of February 1, 1960, helped pave the way for passage of the Civil Rights Act of 1964 and the Voting Rights Act of 1965. Both movements remind us that not all successful protests are explicit in their message and purpose; they rely instead on the participants’ intuitive sense of justice.
The 5 Percent
Michael Hiltzik
Michael Hiltzik is a Pulitzer Prize–winning business columnist at the Los Angeles Times and the author of The New Deal: A Modern History (2011), among other books.
Masses of like-minded American citizens gathering together for impromptu protests coast to coast, their theme the concentration of wealth in a privileged class and society’s indifference to the neediest, their ultimate goal a nationwide movement. News media awakening slowly to their presence, then blazoning their demands on front pages. A political establishment uncertain about whether to condemn the protesters, embrace them, or co-opt them.
Familiar as these signposts might seem today, the year was 1934. The disaffected segment of society was the 5 percent—seven million Americans aged sixty-five and older, uniquely afflicted by the Great Depression and uniquely underserved by the nascent recovery emerging under Franklin Roosevelt’s New Deal.
This 5 percent’s protests coalesced as the Townsend movement, launched by a sinewy midwestern farmer’s son and farm laborer turned California physician. Francis Townsend was a World War I veteran who had served in the Army Medical Corps. He had an ambitious, and impractical, plan for a federal pension program. Although during its heyday in the 1930s the movement failed to win enactment of its program, it did play a critical role in contemporary politics. Before Townsend, America understood the destitution of its older generations only in abstract terms; Townsend’s movement made it tangible. “It is no small achievement to have opened the eyes of even a few million Americans to these facts,” Bruce Bliven, editor of the New Republic, observed. “If the Townsend Plan were to die tomorrow and be as completely forgotten as miniature golf, mah-jongg, or flinch, it would still have left some sedimented flood marks on the national consciousness.” Indeed, the Townsend movement became the catalyst for the New Deal’s signal achievement, the old-age program of Social Security. The history of its rise offers a lesson for the Occupy movement in how to convert grassroots enthusiasm into a potent political force—and a warning about the limitations of even a nationwide movement.
Although in technical terms the country touched bottom by the end of 1933, the emergent recovery from the Depression only made conditions on the ground seem that much more dire. Few groups were left further behind than the aged. The overall unemployment rate had peaked at an estimated 25 percent of the workforce in 1933; but the rate among those sixty-five and older looking for work was 54 percent. The fraying of the nation’s economic fabric hit the elderly especially hard: having spent their entire lives in the bosom of the American Dream, working hard and saving, they were thrown out of their jobs, deprived of their homes, and robbed of their bank savings just as they neared the end of their careers and at a point in their lives when the hope of rebuilding the nest egg was dim.
Millions who had been entitled to employer pensions discovered that these, too, were an empty promise—the Depression that wiped out their employers took their pension guarantees down with them. As for public pension programs, twenty-nine states had enacted versions by 1934, but four had run out of money, and the stipend paid by the others averaged $14.34 a month. The Roosevelt White House was inundated with appeals for help, including one letter from a Texas widow on behalf of her aged mother, left blind and delirious from diabetes and with “no place to go unless it be to the poor house.”
As the origin narrative of the Townsend movement would have it, one morning in 1933 the sixty-six-year-old physician, himself recently let go from his job in the Long Beach, California, health department, saw from his bathroom window three destitute women rooting for trash in an alleyway. The vision drove him to contrive a program aimed at coaxing workers sixty and older into retirement by granting them a government pension of two hundred dollars a month, financed from a federal “transaction tax.” As set forth in a letter published that September in the Long Beach Press-Telegram, his plan became the foundation stone of the Townsend movement.
The plan aimed both to succor the elderly and to produce near-term economic recovery, largely through a mandate that recipients spend their monthly allowances within thirty days, “thereby assuring a brisk state of business, comparable to that we enjoyed during war times.” Asked how any recipient’s compliance with this requirement could be enforced, Townsend would explain: “The neighbors are going to watch him.”
The Townsend campaign would soon take its place as the most important and politically effective mass movement of its time and the first genuine lobby for old-age security. In short order, Townsend Clubs sprang up across the nation. There were newsletters, a national weekly, and a national organization that brought grassroots organizers together and monitored their activities for departures from orthodoxy. The movement became an exemplar of the transformation of a local protest movement into a potent political force. “On Capitol Hill in Washington the politicians are amazed and terrified by it,” Harper’s Monthly reported. In the 1934 and 1936 elections, the movement achieved that nirvana of grassroots protesters—the election to Congress of candidates carrying its banner.
The Townsend movement was not unprecedented in its ambition or even its reach. The thirties were an era of mass movements. The model had been established by the Bonus army of 1932, which, under the disciplined leadership of an unemployed World War I veteran named Walter W. Waters, had advanced peacefully on foot and by rail east from Portland, Oregon, to Washington, D.C. The marchers’ quest was for accelerated payment of the veterans’ bonus that Congress had enacted in 1924, pegged at $1.25 a day of overseas service but not to be paid until 1945. As the economic slump added urgency to the veterans’ demands, Congress tabled almost every proposal for early disbursement. The lone bill to reach President Herbert Hoover’s desk earned his chilly veto in 1931 as a “wasteful expenditure.”
Reaching the capital in May, the Bonus army camped chiefly in the marshy Anacostia Flats until the afternoon of July 28. Just before 5:00 p.m., army cavalry overran the twenty thousand veterans, firing tear gas, wielding bayonets, and setting the marchers’ shacks aflame with torches, all under the command of General Douglas MacArthur while his staff aide, Major Dwight D. Eisenhower, looked on in dismay. The country was appalled by the spectacle of participants in a peaceable gathering being assaulted by government troops, not to mention by Hoover’s initial endorsement of MacArthur’s attack as a blow against “mob rule.” The political import of MacArthur’s overzealous offensive and Hoover’s stony disdain for Americans seeking help from the government was not lost on the president’s electoral challenger. Listening in Albany to reports from the front, Democratic presidential candidate Franklin D. Roosevelt turned to his adviser Felix Frankfurter. “Well, Felix,” he said, “this elects me.”
The Bonus army’s impetus, like that of the mass movements that followed, up to the Occupy protesters of the modern day, came less from the absolute harshness of contemporary economic conditions than from the unequal way in which particular segments of society were affected. The Bonus marchers and their supporters lived the phenomenon documented by the economists Thomas Piketty and Emmanuel Saez some seven decades later: while the Depression had impoverished most of the country, the share of income commanded by the top 10 percent of earners had scarcely taken a hit in the downturn. It would keep to a range of 43 to 46 percent from 1929 through the mid-1930s.
The protesters who succeeded the Bonus marchers would themselves speak out for discrete segments of society left stranded by the first emergent shoots of recovery. These movements ranged from the nakedly ideological to the openly partisan. The first category was represented by Rev. Charles Coughlin, the “Radio Priest” of Royal Oak, Michigan. With a liquid brogue that perfectly suited the new broadcast medium, the Canadian-born Coughlin had transformed himself by 1932 from pastor of a wood-frame suburban Detroit church into a Sunday fixture on the Columbia Broadcasting System.
As long as he stuck to castigating the “money powers” of Wall Street and preaching the evils of the gold standard and the virtues of inflation, a message that corresponded reasonably enough to the New Deal’s platform, Coughlin was tolerated by the Roosevelt White House—in 1935, at the urging of Joseph P. Kennedy, he was even received by the president at Hyde Park. Coughlin had no specific program to offer. Rather, he was the carrier of undifferentiated alienation among the working class, of anger they aimed equally at bankers and union organizers. By the late thirties, when Coughlin had turned against Roosevelt, formed his own political party, and begun preaching apocalyptic sermons aimed at the most disaffected and leavened by anti-Semitism, his influence was on the wane, never having been translated into a single piece of legislation.
At the other end of the spectrum was Huey Long’s Share Our Wealth movement. The Democratic senator from Louisiana proposed capping any family’s wealth at $5 million and its income at $1 million a year, both figures many hundreds of times those of the average family. The guillotine lopping off the excess was federal taxation, with the resulting revenue applied to giving every family a “homestead” allowance of five thousand dollars and a guaranteed annual income of two thousand dollars—a “hillbilly paradise” of wealth without work, as the historian Arthur Schlesinger Jr. uncharitably called it. To Democratic Party leaders, Long’s organization of state and local Share Our Wealth clubs looked very much like an assemblage of shock troops for a challenge to FDR’s renomination in 1936—a challenge that may have been forestalled only by the assassination of Long in 1935.
Among the other movements that emerged in this period were Howard Scott’s utopian technocracy movement and author Upton Sinclair’s 1934 campaign for California governor under the banner of his EPIC platform, for “End Poverty in California.” His campaign manifesto was a pamphlet entitled I, Governor of California and How I Ended Poverty—A True Story of the Future. (He won the Democratic nomination, only to be trampled in the general election by a Republican candidate running with establishment Democratic support.)
But none approached the influence of Townsend and his program, which fell within the extremes represented by Coughlin and Long. One distinction was the character of the leader himself. Thin, erect, and bespectacled, projecting self-effacement and earnestness, Townsend was plainly ill at ease on the rare occasions he shared a stage with the flamboyant Coughlin, Long, or the latter’s lectern-pounding chief proselytizer, a Shreveport, Louisiana, minister named Gerald L. K. Smith. Unlike Long, Townsend professed no personal political ambitions; unlike Coughlin, he offered a platform devoid of febrile conspiracy-mongering. (He was not devoid of egotism, however, especially when his role in his movement was in question.) Reduced to its essentials, the Townsend movement was a quest for justice for an oppressed and abused segment of the population. From this simplicity it drew its political potency.
Economists and newspaper pundits devoted reams of analysis to puncturing Townsend’s numbers. Social insurance expert Abraham Epstein observed that, given the challenge of spending two hundred dollars a month when national income per capita was five hundred dollars a year and a new car could be bought for six hundred dollars, the program’s guiding principle appeared to be that “everybody wastes his money and everybody gets rich overnight.” (“Think of all the old people running into cabarets… trying to drink champagne to spend the money,” he added. “It would just ruin them.”)
Walter Lippmann, after interviewing the good doctor at length, reported that he had discovered the central financial flaw in the plan: Townsend had calculated his transaction tax on a total value of business transactions he placed at $1.2 trillion; but this was a gross miscalculation, for he did not realize that the sum comprised repeated purchases and sales of a single commodity, as when a farmer sells a bushel of wheat to a miller, who resells it as milled grain to a baker, who resells it as a loaf of bread to a housewife. Taxing every such transaction would bring commerce to a halt, Lippmann reported. “I knew the scheme was fantastic, but in reading about it, it was difficult to find the particular delusion which had possessed Dr. Townsend,” he wrote. “Now that difficulty is cleared up.”
Yet the condescension of Epstein and Lippmann missed the point. Townsend’s followers were concerned less with the plan’s math—except perhaps for the draw of two hundred dollars a month—than with its attention to their welfare when the political establishment seemed to have forgotten them. Indeed, the power of a program that can be simplified into intelligible morsels has been well understood by promoters up to our present day of sound bite–driven politics, as it was by Huey Long himself, who steadfastly turned away press questions about the implausible economics of Share Our Wealth. (“Never explain,” he counseled one acolyte. “First you must come into power—POWER—and then you do things.”)
Even Townsend’s critics recognized the movement’s role of political catalyst. The New Republic’s Bliven condemned the program as “an economic impossibility”; but in terms that prefigured the rallying cry of Occupy Wall Street, he acknowledged that it had “called public attention most vividly to the fact that the country potentially, and to a large degree actually, the richest on earth[,] gives 80 percent of its people an income not much above the starvation level.”
The Townsend movement bolstered the appeal of its leader with effective organizing provided by one Robert E. Clements, who identified himself variously as the movement’s “co-founder” and “national secretary,” and who insisted on keeping movement leadership centralized. Clements had honed his salesman’s instincts as an agent in the Los Angeles real estate market. It would be his talent for organization, abetted by his skill at ballyhoo, that gave the Townsend movement political heft disproportionate to its membership numbers, which were always murky—in the mid-1930s its leaders claimed anywhere from five million to twenty-five million followers. Of the new members of the Seventy-fourth Congress, which convened in January 1935, more than a dozen had run on platforms encompassing the Townsend Plan. But even before the 1934 election, the movement had exerted a gravitational pull on Social Security.
The Committee on Economic Security, created by Roosevelt in mid-June 1934 with Labor Secretary Frances Perkins as its chair, had been given a brief to consider all forms of social insurance. At first, the committee saw as its main goal the creation of a federal system of unemployment insurance, building on a bill that had been introduced in 1933 by two progressive Democrats, Sen. Robert Wagner of New York and Rep. David J. Lewis of Pennsylvania. The Wagner-Lewis bill was a rough draft designed chiefly to soften up Congress to the concept of federal jobless aid—“frankly for educational purposes,” Perkins wrote later.
Yet the committee soon recognized that its program would have to include old-age relief. Politically this was “almost essential,” Perkins observed. As the 1934 election approached, “in some districts the Townsend Plan was the chief political issue…. The pressure from its advocates was intense.” Roosevelt seemed to have bought into the need for a pension program, “telling people he was in favor of adding old-age insurance clauses to the bill and putting it through as one program,” Perkins recalled.
Yet Roosevelt disliked being pressured, and plainly he found the extravagance of the Townsend Plan distasteful. This led to one of his more ill-considered public statements, when he abruptly and publicly pulled the rug out from under his committee’s pension proposal. The occasion was a huge gathering of social insurance experts Perkins had convened in Washington in November 1934 to put the finishing touches on the Social Security bill. Delivering the keynote speech, FDR unexpectedly reversed course on pensions. “I do not know whether this is the time for any Federal legislation on old-age security,” he said. Without naming the Townsend movement, but leaving no doubt about his target, he continued: “Organizations promoting fantastic schemes have aroused hopes which cannot possibly be fulfilled. Through their activities they have increased the difficulties of getting sound legislation.” Security for the aged would remain on his agenda, he said, but would be addressed “in time.” The speech marked “the kiss of death” for the old-age program, a crestfallen attendee told a reporter for the Baltimore Sun.
Yet the expectations aroused by the Townsendites could not be quelled so easily. Startled by the furor his speech had caused, Roosevelt sent Perkins before the press the following morning to assure them that the audience must have misheard him. Old-age pensions were still in the program, she said, and would very much be part of the bill.
That was true, although the haphazardly drafted old-age provisions of the Social Security bill would reflect the hastiness of the Perkins committee’s response to the rising Townsend movement. The unemployment compensation sections, which had an older pedigree and were based on the Wagner-Lewis bill, were much more painstakingly crafted. All the same, when the Social Security bill came to Capitol Hill for hearings beginning in January 1935, it became obvious that the lawmakers were still panicked by the presumed strength of the old-age movement and unsure that the pension provisions in the bill would mollify the Townsendites.
That placed administration officials in a quandary: they had to explain away Townsend’s manifestly impractical economics while defending the principle of government old-age pensions. Perkins and Edwin Witte, the bill’s chief draftsman, were required to walk this tightrope repeatedly during their long hours of testimony. As Witte explained patiently to the House Ways and Means Committee, to award two hundred dollars a month to everybody over sixty years of age, a population then estimated at ten million, would mean paying out $2 billion a month, or $24 billion a year, when the total annual income of all Americans at the time was $40 billion.
“It is not within the structure of our present economic or governmental system,” he said. “I think it is probably not within the structure of any governmental or economic system that is conceivable.” Evoking the image of Weimar-period hyperinflation in Germany, he added: “I presume we could start the printing presses and give the people two hundred dollars a month… but within the present structure it is not within the picture.”
Perkins was equally blunt when she took her seat before the committee. She assured the lawmakers that the Committee on Economic Security had weighed the Townsend Plan carefully during its deliberations, “because it became a popular newspaper subject of discussion this summer, so that it was looked into sufficiently to make an estimate of what it would cost.” She bowed to the “very honest aspiration which is apparently involved in that plan” but observed that the committee’s conclusion was that it was “quite impossible, and that we must give our more serious and thorough attention to methods that seem more practical.” When Republican Rep. Harold Knutson of Minnesota remarked that the monthly benefits contemplated by the Social Security bill, which averaged about twenty-five dollars, would be “rather disappointing to those who were expecting something like two hundred dollars a month,” she snapped, “The government is not responsible for their having assumed that.”
By then, Democratic Rep. John S. McGroarty, who had been elected in 1934 from California on a platform solely devoted to the Townsend Plan, had won the scramble to be the first to introduce it as legislation in the House of Representatives. Fashioned as an amendment to the Social Security bill, McGroarty’s version backed off somewhat from the doctor’s original plan—changing the flat two-hundred-dollar monthly benefit to one “not to exceed” that amount, language that contemporary observers noted could accommodate sums as small as a few pennies a month. Even so, it attracted sixty cosponsors and prompted Congress to invite Townsend to testify on its behalf.
At the witness table, Townsend proved to be less than an entirely confident spokesman for his program, acknowledging that it would be so costly that “several years” would be required to register every senior. “Nobody has been fool enough to expect that we could take 10 millions of old folk and put them immediately on a $200 a month basis,” he conceded to the Ways and Means Committee, prompting Robert Doughton of North Carolina, its chairman, to complain that the people who had been inundating Congress with letters favoring the plan “had it sold to them on the theory that just as soon as this law is enacted they will immediately go on the payroll.” If they realized that they would not get paid for several years, he observed, “the propaganda would cease at once.” McGroarty’s bill eventually failed on an unrecorded vote without a formal roll call, which spared the members the burden of having either its support or opposition on their records.
Contemporary pundits predicted that once the government’s Social Security program was placed fully in operation, the Townsend movement would run out of steam. Yet the movement’s momentum carried well beyond the passage of the Social Security Act in mid-1935. A national Townsend convention in Chicago that October attracted seven thousand delegates and nationwide press coverage. In the 1936 election, another congressional candidate, a Michigan Republican, rode a platform based entirely on the Townsend Plan from obscurity to victory.
That may have been the movement’s high-water mark. A congressional investigation in 1936—whether motivated by sincere concern for the movement’s members or a desire to undermine a strengthening political threat—raised doubts about whether all the money donated by the members was honestly spent, or whether some of it ended up in Clements’s and Townsend’s pockets. Clements resigned from the organization just before the hearings commenced, depriving the movement of his indispensable organizing genius.
Toward the end of the thirties, mass movements of all sorts lost their charm. Long was dead and Coughlin had devolved into a crank with narrow appeal. Sinclair did not run for office again after his trouncing, contenting himself with writing a retrospective on the campaign entitled I, Candidate for Governor—And How I Got Licked.
Yet the Townsend movement managed to retain a good portion of its appeal. Its effectiveness as a pressure group waned, as was predicted, with the passage of Social Security. As that bill was imperfect at best—historian William E. Leuchtenburg, though acknowledging the act as a “landmark,” described it as “an astonishingly inept and conservative piece of legislation”—the Townsend movement’s presence surely played a role in Congress’s refinement of the old-age program in 1939, when it accelerated the start of benefits to 1940 from 1942 and pared back a scheduled increase in the payroll tax. Townsend died in 1960 at the age of ninety-three. His program struggled on for two more decades, the last Townsend Club shutting down in 1980. What may be its real legacies, Social Security and the idea that a grassroots movement can truly make a difference, survive to this day.
Hidden in Plain Sight: The Problem of Silos and Silences in Finance
Gillian Tett
Gillian Tett is the U.S. managing editor of the Financial Times. She has been named Journalist of the Year (2009) and Business Journalist of the Year (2008) by the British Press Awards and Senior Financial Journalist of the Year (2007) by the Wincott Awards, and she is the author of the New York Times bestseller Fool’s Gold: How Unrestrained Greed Corrupted a Dream, Shattered Global Markets and Unleashed a Catastrophe (2009). Before joining the Financial Times in 1993, she was awarded a PhD in social anthropology from Cambridge University.
Late in 2011, Standard & Poor’s issued a rating report on the U.S. investment bank Morgan Stanley that made for sobering reading. Buried toward the end was a paragraph saying, in effect, that the agency had decided to award a “moderate” risk profile to the bank because of the “complexity” of its business. In particular, its “exposure to the more volatile capital markets business and to more opaque financial products” was a “weakness to the risk profile that is not reflected in our risk-adjusted capital framework [and] can lead to unanticipated losses despite improved risk controls”—or so the agency solemnly declared.
For 99 percent of the population—for almost anyone working outside a bank—that sentence was meaningless gobbledygook. But what it essentially meant was that Standard & Poor’s was unsure what was really going on inside Morgan Stanley. Never mind all those clever rocket scientists who have been employed to monitor the bank, or those pages of financial regulations that have emerged as a result of the 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act. When it comes to making sense of the risks attached to Morgan Stanley, or other large banks, a group such as Standard & Poor’s can still only hazard a reasoned guess about the chance of “unanticipated losses.” And for the wider public, it is all but impossible to make sense of that “complexity,” since the issues tend to be buried in jargon (if not at the bottom of a ratings report).
Welcome to one of the big paradoxes of twenty-first-century finance. In many senses, it is unfair to single out Morgan Stanley. I cite this report because it happened to cross my desk, but most of the other large banks are equally complex, and thus equally prone to potential risks that the rating agencies are struggling to understand. Precisely because it is so common, however, this report on Morgan Stanley also points to one of the problems in modern finance: the cultural dangers of gobbledygook, silos, and social silences.
The issue at stake concerns how information travels around the system. Four or five long years after the financial crisis first erupted, it is often tempting for the wider public and politicians to blame it on some nefarious banking plot. After all, the assumption goes that during the credit boom—say, from 2003 to 2007—many bankers got extremely rich, engaging in activities that most people barely even knew existed: just think of all those complex collateralized debt obligations (CDOs) made up of mortgage loans that were concocted before 2007. The bankers who engaged in that mysterious activity also created risks that eventually blew the system up. Thus today it seems almost natural to search for villains—surely this disaster happened because bankers were deliberately hiding what they were doing, or concealing it in a cloak of spin. So the popular theory goes.
I think that the reality is more subtle—and unnerving. In general, I did not have the impression that there was any coordinated, deliberate plot by bankers to conceal their activities or downplay the risks before 2007. Instead, many of the activities were hidden in plain sight. To be sure, bankers did not always want to talk about these activities; many preferred to keep their deals away from the limelight—and the noses of regulators—because that allowed them to boost their margins (and stop rivals from stealing their ideas). But if more people had been willing to wade through rating agency reports, bank filings, and other data, it would have been possible for outsiders to spot that the system was spinning out of control and becoming prone to excess. Anybody willing to confront the gobbledygook would have been alarmed. The question that citizens and politicians alike need to ask is not why did the bankers “hide” their activities before 2007, but why did so few people actually ask hard questions at all. Why, in other words, did Western society allow finance to spin out of control—in plain sight? And what does that mean for how we treat finance today, on Wall Street or anywhere else?
In my view, there are two key issues that need to be discussed. The first is what might be called the silo trap, or the problem of tunnel vision. When I first started writing about complex finance as a journalist back in 2004, I was struck by the degree to which the modern financial system was marked by a pernicious silo mentality. This played out on many levels. Inside the giant bureaucracies of the modern banks, it seemed that different departments existed almost like warring tribes: although the separate desks, or divisions, of banks were theoretically supposed to collaborate, in practice they competed furiously for scarce resources, knowing that whatever desk earned the greatest profits would wield the most power. As a result, desks tended to hug information. The right hand of the bank rarely knew what the left was doing in any detail—nor was the risk department necessarily better informed.
Across the market as a whole, the silo problem was multiplied many times: different banks competed furiously and were often reluctant to tell competitors (or anybody else) too much detail about their activities. In theory, of course, the regulators were supposed to take an overarching view and look at how markets interacted as a whole. In practice, the regulatory infrastructure was fragmented, too, and marked by tribal rivalries that mirrored (and intensified) those private sector splits. In the United States, for example, the regulatory community was split into different bodies: the Office of the Comptroller of the Currency, the Federal Deposit Insurance Corporation, the Federal Reserve, and so on. The euro-zone financial system was fragmented by numerous different national regulators. Even in Britain, where there was supposed to be a single coordinated regulator (namely, the Financial Services Authority, or FSA), the conduct of regulation was weakened by a sense of tunnel vision: though the FSA looked at micro-level financial activity (that is, checked whether individual banks met the narrow regulatory rules), the Bank of England was supposed to look at overall financial and monetary flows (how the banking system as a whole was operating). Communication between the two bodies was patchy.
This fragmented picture made it hard for anyone to connect the dots, and numerous issues fell between the cracks. Inside the banks and regulatory offices, there were certainly people who understood how small pieces of finance worked; outside the financial system, there were some journalists and economists who could vaguely sense how the overall patterns were playing out. But trying to get a clear vision of how finance was developing as an entire system was hard. A sense of tunnel vision permeated the system—hampering bankers as much as anyone else.
The second key problem that dogged the system before 2007—and which also has implications for the future—is an issue that might be described as “social silences.” Before I became a journalist, I trained as a social anthropologist and was influenced by the work of Pierre Bourdieu, a French intellectual who conducted anthropology fieldwork in North Africa. His work has great relevance for finance and many other parts of modern Western society. One of its cornerstones is the idea that societies typically operate with a publicly accepted sense of “discourse” (or doxa), which is shaped by the elite and enables them to maintain power. What matters in terms of that discourse is not what is defined as the culturally acceptable form of dialogue but, more crucially, the question of what is not discussed. Social silences, or the parts of everyday life that are typically ignored, are as important as—if not more important than—the issues that are popularly debated, since it is these silences that help to reproduce a system and power structures over time. Sometimes individual actors are aware of these silences and choose to deliberately conceal information (or not discuss it). More commonly, though, there is simply a tacit, half-conscious recognition that it is better simply to avoid discussing an issue, or that there are cultural disincentives to peering into it—because it is considered either taboo or “boring.” Either way, a pattern of silence or disinterest often plays a useful function in terms of maintaining social structures, even if it is not consciously planned. Or, as Bourdieu says, “The most successful ideological effects are those which have no need of words, and ask no more than complicitous silence.” Upton Sinclair, the novelist, expressed broadly the same thing one hundred years ago when he observed, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it!”
Finance epitomized this pattern before the financial crash. Back in 2005 and 2006, the topics of credit derivatives and collateralized debt obligations, for instance, were considered to be incredibly boring, if not downright arcane. During that period there were few mainstream media outlets that covered such topics, and even when journalists such as myself wrote about them, it was often tough to get these stories on the front page. That was partly because the subject matter was so unfamiliar; after all, who had heard of CDOs before 2007? But the other problem was that these topics seemed to be wrapped up in technical jargon that few people understood or, more importantly, had much appetite to understand. Faced with financial gobbledygook, the general populace found it easier to leave the whole field of finance in the hands of technical experts, particularly since those technical experts were insisting, before 2007, that modern finance was a wonderfully beneficial thing. To put it another way, the single biggest reason finance remained hidden in plain sight was that insiders had very little interest in rocking the boat—and outsiders little incentive to peer in. The topic was widely perceived to be boring, at least within Western culture, and that kept the problems buried in a silo, without the need for any banking plot.
This pattern raises big questions about the future. In some senses, thankfully, many lessons have been learned since 2007 and 2008. Banks and regulators are keenly aware of the silo problem and are making efforts to take a more holistic vision of how finance operates. Since the financial crisis, for example, most banks have overhauled their internal risk management departments and are trying to take a more “joined-up” approach to analyzing their own activities. Regulators are now communicating far more intensively with each other, across departments and across borders. A Basel-based body called the Financial Stability Board is promoting a much higher level of international dialogue. One of its tasks, apart from monitoring global banking trends, is to look at “shadow banks,” or the nonbank financial institutions that used to be ignored before 2007. Some central banks, such as the Bank of England, are embracing a so-called macro-prudential policy framework, which also seeks to promote a more holistic vision of how financial flows and economies interact. In Washington, the Office of Financial Research is trying to improve the level of data that is being gathered about global financial flows; the hope is that this will also enable regulators to take a more collaborative approach to monitoring the system.
In the aftermath of the 2008 crash, it is also widely recognized that the media and politicians alike need to do a better job of monitoring how modern finance works. No longer are politicians willing to leave banking purely in the hands of bankers, and even the more mainstream elements of the media have tried to unpack these issues in recent years. Ideas that were once near-unimaginable have started to be debated: it is no longer taken for granted, for example, that bankers should naturally get vast bonuses, or considered inevitable that finance should grow faster than the rest of the economy. The concept of state ownership for banks, as well, is no longer taboo—nor is the idea that banks should automatically be allowed to combine businesses as they please. Even the idea of capitalism has come up for more debate, as voices have started to challenge the once-dominant idea that free, globalized markets are always good.
While these signs of progress are certainly welcome, the efforts they represent remain limited in some respects. For one thing, the silo problem has certainly not been eradicated; in spite of all the efforts to embrace joined-up risk management systems and regulatory oversight, many parts of finance remain plagued by tunnel vision. There seems little chance of this disappearing soon. On the contrary, it is almost an inevitable consequence of the sheer size and complexity of many banks: the scale of these operations makes them not simply “too big to fail” but too big to manage—at least in a sensible, collaborative way.
Similarly, the problem of social silences has not disappeared. Instead, it keeps resurfacing in all manner of ways. Take the issue of banking reform. In the last two years, a blizzard of new financial regulations has been created or proposed by parliaments and government bureaucracies in Washington, Brussels, Basel, London, and Paris. Taken together, these regulations could have a significant impact on how finance is conducted in the future, yet their complexity makes them tough to comprehend, let alone monitor. To be sure, there are groups of lawyers who are able to comb through the fine print of the documents, but most of them are employed by banks—since the financial sector is the only player within the system that has enough resources to devote to this analysis. To nonbankers, the task looks as alienating today as making sense of the financial flows that caused the crisis five years ago did.
Put another way, while it was products such as the “CDO squared” or “CDO cubed” that defied public comprehension in 2006, these days the problem is one of “regulatory complexity squared” (or cubed): complex financial products are subject to extremely complex new laws—by governments that have very complex reform goals. And once again a wave of gobbledygook makes it almost impossible for anyone outside the banking world to understand what is going on at banks. Pace that report on Morgan Stanley I quoted earlier.
Is there any solution? Some steps might help. One idea would be to make financial reform dramatically more simple, centering on practical, easy-to-understand principles. (It is worth noting, for example, that the Swiss are driving a wave of radical reform with a mere thirteen-page statement of principles; this compares with hundreds of pages now floating around places such as the United States and the euro zone.) Another sensible step would be to force the banks themselves to simplify their operations—to create companies that can be readily understood by regulators, directors, ratings agencies, and investors alike. Perhaps once a year the directors of the largest banks should be forced to appear in front of a committee of politicians, to explain how their banks make money and where their risks lie. If nothing else, that public grilling might help to concentrate minds.
The other area where there needs to be more debate is in the information business itself. In an ideal world, the best way to deal with the problems of silos and social silences would be to have a lively cadre of institutions and professionals who are committed to silo-busting and shedding light on dark or “boring” places in finance. Journalists are one obvious group who could and should play that role (and speaking as a Financial Times journalist, I can say it is a task that we take very seriously). So could academics, political researchers, or even credit rating agencies. Some of these institutions have been trying to fulfill that function in recent years: there has been a host of investigative pieces emanating from the Western media world, and some penetrating reports from academics as well, not to mention some of the credit rating agencies.
Unfortunately, this silo-busting activity is still far too modest and sporadic. Part of the problem is that universities, newspapers, and rating agencies remain riddled with silos themselves. Another related issue is that the resources of silo-busting institutions such as universities and newspapers are being eroded all the time. Most newspapers today, after all, simply do not have enough reporters available to spend days trying to decode Dodd-Frank or comb through the fine details of bank balance sheets—and they are doubly wary of doing this given that many bills, Dodd-Frank included, tend to look distinctly “dull.” Thankfully, some academics have more resources and time (the work done by NYU economists that “explains” Dodd-Frank, for example, shows how universities can play that role). As some cash-strapped newspapers have been forced to step back from investigative reporting, other bodies have sprung up to engage in long-form research in finance and other spheres (ProPublica in New York is an excellent example of this). But today, as before, there is still a great deal in finance that remains hidden in plain sight, ignored by the wider public and by politicians because it seems technical, complex—or just boring gobbledygook. That is not a comforting thought, for bankers, journalists, or anyone else.
What Good Is Wall Street?
John Cassidy
John Cassidy is a staff writer at The New Yorker and a columnist at Fortune. His latest book is How Markets Fail: The Logic of Economic Calamities (2009).
In early fall 2010, I came across an announcement that Citigroup, the parent company of Citibank, was to be honored, along with its chief executive, Vikram Pandit, for “Advancing the Field of Asset Building in America.” This seemed akin to, say, saluting BP for services to the environment or praising Facebook for its commitment to privacy. During the past decade, Citi has become synonymous with financial misjudgment, reckless lending, and gargantuan losses: what might be termed asset denuding rather than asset building. In late 2008, the sprawling firm might well have collapsed but for a government bailout. Even today the U.S. taxpayer is Citigroup’s largest shareholder.
The awards ceremony took place on September 23 in Washington, D.C., where the Corporation for Enterprise Development, a not-for-profit organization dedicated to expanding economic opportunities for low-income families and communities, was holding its biennial conference. A ballroom at the Marriott Wardman Park was full of government officials, lawyers, tax experts, and community workers, two of whom were busy at my table lamenting the impact of budget cuts on financial-education programs in Vermont.
Pandit, a slight, bespectacled fifty-four-year-old native of Nagpur, in western India, was seated near the front of the room. Fred Goldberg, a former commissioner of the Internal Revenue Service who is now a partner at Skadden, Arps, introduced him to the crowd, pointing out that, over the years, Citi has taken many initiatives designed to encourage entrepreneurship and thrift in impoverished areas, setting up lending programs for mom-and-pop stores, for instance, and establishing savings accounts for the children of low-income families. “When the history is written, Citi will be singled out as one of the pioneers of the asset movement,” Goldberg said. “They have demonstrated the capacity, the vision, and the will.”
Pandit, who moved to the United States at sixteen, is rarely described as a communitarian. A former investment banker and hedge fund manager, he sold his investment firm to Citigroup in 2007 for $800 million, earning about $165 million for himself. Eight months later, after Citi announced billions of dollars in write-offs, Pandit became the company’s new CEO. He oversaw its near collapse in 2008 and its moderate recovery since.
Clearly, this wasn’t the occasion for Pandit to dwell on his career, or on the role that Citi’s irresponsible actions played in bringing on the subprime-mortgage crisis. (In early 2007, his predecessor, Charles Prince, was widely condemned for commenting, “As long as the music is playing, you’ve got to get up and dance.”) Instead, Pandit talked about how well-functioning banks are essential to any modern society, adding, “As President Obama has said, ultimately there is no dividing line between Wall Street and Main Street. We will rise or we will fall together as one nation.” In the past couple of years, he went on, Citi had rededicated itself to “responsible finance.” Before he and his colleagues approved any transaction, they now asked themselves three questions: Is it in the best interests of the customer? Is it systemically responsible? And does it create economic value? Pandit indicated that other financial firms were doing the same thing. “Banks have learned how to be banks again,” he said.
About an hour later, I spoke with Pandit in a sparsely furnished hotel room. Citi’s leaders—from Walter Wriston, in the 1970s, to John Reed, in the 1980s, and Sanford Weill, in the late 1990s—have tended to be formidable and forbidding. Pandit affects a down-to-earth demeanor. He offered me a cup of coffee and insisted that I sit on a comfortable upholstered chair while he perched on a cheap plastic one. I asked him if he saw any irony in Citi being commended for asset building. His eyes widened slightly. “Well,” he said, “the award we are receiving is for fifteen years of work. It was work that was pioneered by Citi to get more financial inclusion. And it’s part of a broader reform effort we are involved in under the heading of ‘Responsible Banking.’ ”
Since Pandit took over, this effort has involved selling or closing down some of Citi’s riskier trading businesses, including the hedge fund that he used to run; splitting off the company’s most foul-smelling assets into a separate entity, Citi Holdings; and cutting the pay of some senior executives. For 2009 and 2010, Pandit took an annual salary of one dollar and no bonus. (He didn’t, however, give back any of the money from the sale of his hedge fund.) “This is an apprenticeship industry,” he said to me. “People learn from the people above them, and they copy the actions of the people above them. If you start from the top by acting responsibly, people will see and learn.”
Barely two years after Wall Street’s recklessness brought the global economy to the brink of collapse, the sight of a senior Wall Street figure talking about responsible finance may well strike you as suspicious. But on one point Pandit cannot be challenged. Since the promulgation of Hammurabi’s code, in ancient Babylon, no advanced society has survived without banks and bankers. Banks enable people to borrow money, and, today, by operating electronic-transfer systems, they allow commerce to take place without notes and coins changing hands. They also play a critical role in channeling savings into productive investments. When a depositor places money in a savings account or a CD, the bank lends it out to corporations, small businesses, and families. These days, Bank of America, Citi, JPMorgan Chase, and others also help corporations and municipalities raise money by issuing stocks, bonds, and other securities on their behalf. The business of issuing securities used to be the exclusive preserve of Wall Street firms, such as Morgan Stanley and Goldman Sachs, but during the past twenty years many of the dividing lines between ordinary banks and investment banks have vanished.
When the banking system behaves the way it is supposed to—as Pandit says Citi is now behaving—it is akin to a power utility, distributing money (power) to where it is needed and keeping an account of how it is used. Just like power utilities, the big banks have a commanding position in the market, which they can use for the benefit of their customers and the economy at large. But when banks seek to exploit their position and make a quick killing, they can cause enormous damage. It’s not clear now whether the bankers have really given up their reckless practices, as Pandit claims they have, or whether they are merely lying low. In the past few years, all the surviving big banks have raised more capital and become profitable again. However, the U.S. government was indirectly responsible for much of this turnaround. And in the country at large, where many businesses rely on the banks to fund their day-to-day operations, the power still isn’t flowing properly. Overall bank lending to firms and households remains below the level it reached in 2008.
The other important role of the banking industry, historically, has been to finance the growth of other vital industries, including railroads, pharmaceuticals, automobiles, and entertainment. “Go back and pick any period in time,” John Mack, the chairman of Morgan Stanley, said to me recently. “Let’s go back to the tech boom. I guess it got on its feet in the late eighties, with Apple Computer and Microsoft, and really started to blossom in the nineteen-nineties, with Cisco, Netscape, Amazon.com, and others. These are companies that created a lot of jobs, a lot of intellectual capital, and Wall Street helped finance that. The first investors were angel investors, then venture capitalists, and to really grow and build they needed Wall Street.”
Mack, who is sixty-seven years old, is a plainspoken native of North Carolina. He attended Duke on a football scholarship, and he retains the lean build of an athlete. We were sitting at a conference table in his large, airy office above Times Square, which features floor-to-ceiling windows with views of the Hudson. “Today, it’s not just technology—it’s clean tech,” he went on. “All of these industries need capital—whether it is ethanol, solar, or other alternative-fuel sources. We can give you a list of companies we’ve done, but it’s not just Morgan Stanley. Wall Street has been the source of capital formation.”
There is something in what Mack says. Morgan Stanley has raised money for Tesla Motors, a producer of electric cars, and it has invested in Bloom Energy, an innovator in fuel-cell technology. Morgan Stanley’s principal rivals, Goldman Sachs and JPMorgan, are also canvassing investors for ethanol producers, wind farms, and other alternative-energy firms. Banks, of course, raise money for less environmentally friendly corporations, too, such as Ford, General Electric, and ExxonMobil, which need cash to fund their operations. It was evidently this business of raising capital (and creating employment) that Lloyd Blankfein, Goldman’s chief executive, was referring to last year, when he told an interviewer from a British newspaper that he and his colleagues were “doing God’s work.”
Yet Wall Street’s role in financing new businesses is a small portion of what it does. The market for initial public offerings of stock by U.S. companies never fully recovered from the tech bust. During the third quarter of 2010, just thirty-three U.S. companies went public, and they raised a paltry $5 billion. Most people on Wall Street aren’t finding the next Apple or promoting a green rival to Exxon. They are buying and selling securities that are tied to existing firms and capital projects, or to something less concrete, such as the price of a stock or the level of an exchange rate. During the past two decades, trading volumes have risen exponentially across many markets: stocks, bonds, currencies, commodities, and all manner of derivative securities. In the first nine months of 2010, sales and trading accounted for 36 percent of Morgan Stanley’s revenues and a much higher proportion of profits. Traditional investment banking—the business of raising money for companies and advising them on deals—contributed less than 15 percent of the firm’s revenue. Goldman Sachs is even more reliant on trading. Between July and September 2010, trading accounted for 63 percent of its revenue, and corporate finance just 13 percent.
In effect, many of the big banks have turned themselves from businesses whose profits rose and fell with the capital-raising needs of their clients into immense trading houses whose fortunes depend on their ability to exploit day-to-day movements in the markets. Because trading has become so central to their business, the big banks are forever trying to invent new financial products that they can sell but that their competitors, at least for the moment, cannot. Some recent innovations, such as tradable pollution rights and catastrophe bonds, have provided a public benefit. But it’s easy to point to other innovations that serve little purpose or that blew up and caused a lot of collateral damage, such as auction-rate securities and collateralized debt obligations. Testifying in 2010 before the Financial Crisis Inquiry Commission, Ben Bernanke, the chairman of the Federal Reserve, said that financial innovation “isn’t always a good thing,” adding that some innovations amplify risk and others are used primarily “to take unfair advantage rather than create a more efficient market.”
Other regulators have gone further. Lord Adair Turner, the chairman of Britain’s top financial watchdog, the Financial Services Authority, has described much of what happens on Wall Street and in other financial centers as “socially useless activity”—a comment that suggests it could be eliminated without doing any damage to the economy. In an article titled “What Do Banks Do?,” which appeared in a 2010 collection of essays devoted to the future of finance, Turner pointed out that although certain financial activities were genuinely valuable, others generated revenues and profits without delivering anything of real worth—payments that economists refer to as rents. “It is possible for financial activity to extract rents from the real economy rather than to deliver economic value,” Turner wrote. “Financial innovation… may in some ways and under some circumstances foster economic value creation, but that needs to be illustrated at the level of specific effects: it cannot be asserted a priori.”
Turner’s viewpoint caused consternation in the City of London, the world’s largest financial market. A clear implication of his argument is that many people in the City and on Wall Street are the financial equivalent of slumlords or toll collectors in pinstriped suits. If they retired to their beach houses en masse, the rest of the economy would be fine, or perhaps even healthier.
Since 1980, according to the Bureau of Labor Statistics, the number of people employed in finance, broadly defined, has shot up from roughly five million to more than seven and a half million. During the same period, the profitability of the financial sector has increased greatly relative to other industries. Think of all the profits produced by businesses operating in the United States as if they were a cake. Twenty-five years ago, the slice taken by financial firms was about a seventh of the whole. In 2009, it was more than a quarter. (In 2006, at the peak of the boom, it was about a third.) In other words, during a period in which American companies have created iPhones, Home Depot, and Lipitor, the best place to work has been in an industry that doesn’t design, build, or sell a single tangible thing.
From the end of the Second World War until 1980 or thereabouts, people working in finance earned about the same, on average and taking into account their qualifications, as people in other industries. By 2006, wages in the financial sector were about 60 percent higher than wages elsewhere. And in the richest segment of the financial industry—on Wall Street, that is—compensation has gone up even more dramatically. In 2009, while many people were facing pay freezes or worse, the average pay of employees at Goldman Sachs, Morgan Stanley, and JPMorgan Chase’s investment bank jumped 27 percent, to more than $340,000. This figure includes modestly paid workers at reception desks and in mailrooms, and it thus understates what senior bankers earn. At Goldman, it has been reported, nearly a thousand employees received bonuses of at least a million dollars in 2009.
Not surprisingly, Wall Street has become the preferred destination for the bright young people who used to want to start up their own companies, work for NASA, or join the Peace Corps. At Harvard in spring 2010, about a third of the seniors with secure jobs were heading to work in finance. Ben Friedman, a professor of economics at Harvard, wrote an article in 2010 lamenting “the direction of such a large fraction of our most-skilled, best-educated, and most highly motivated young citizens to the financial sector.”
Most people on Wall Street, not surprisingly, believe that they earn their keep, but at least one influential financier vehemently disagrees: Paul Woolley, a seventy-two-year-old Englishman who has set up an institute at the London School of Economics (LSE) called the Woolley Centre for the Study of Capital Market Dysfunctionality. “Why on earth should finance be the biggest and most highly paid industry when it’s just a utility, like sewage or gas?” Woolley said to me when I met with him in London. “It is like a cancer that is growing to infinite size, until it takes over the entire body.”
From 1987 to 2006, Woolley, who has a doctorate in economics, ran the London affiliate of GMO, a Boston-based investment firm. Before that, he was an executive director at Barings, the venerable British investment bank that collapsed in 1995 after a rogue-trader scandal, and at the International Monetary Fund. Tall, soft-spoken, and courtly, Woolley moves easily between the City of London, academia, and policymaking circles. With a taste for Savile Row suits and a keen interest in antiquarian books, he doesn’t come across as an insurrectionary. But, sitting in an office at LSE, he cheerfully told me that he regarded himself as one. “What we are doing is revolutionary,” he said with a smile. “Nobody has done anything like it before.”
At GMO, Woolley ran several funds that invested in stocks and bonds from many countries. He also helped to set up one of the first “quant” funds, which rely on mathematical algorithms to find profitable investments. From his perch in Angel Court, in the heart of the City, he watched the rapid expansion all around him. Established international players, such as Citi, Goldman, and UBS, were getting bigger; new entrants, especially hedge funds and buyout (private equity) firms, were proliferating. Woolley’s firm did well, too, but a basic economic question niggled at him: Was the financial industry doing what it was supposed to be doing? Was it allocating capital to its most productive uses?
At first, like most economists, he believed that trading drove market prices to levels justified by economic fundamentals. If an energy company struck oil, or an entertainment firm created a new movie franchise, investors would pour money into its stock, but the price would remain tethered to reality. The dot-com bubble of the late 1990s changed his opinion. GMO is a “value investor” that seeks out stocks on the basis of earnings and cash flows. When the Nasdaq took off, Woolley and his colleagues couldn’t justify buying high-priced Internet stocks, and their funds lagged behind rivals that shifted more of their money into tech. Between June 1998 and March 2000, Woolley recalled, the clients of GMO—pension funds and charitable endowments, mostly—withdrew 40 percent of their money. During the ensuing five years, the bubble burst, value stocks fared a lot better than tech stocks, and the clients who had left missed more than a 60 percent gain relative to the market as a whole. After going through that experience, Woolley had an epiphany: financial institutions that react to market incentives in a competitive setting often end up making a mess of things. “I realized we were acting rationally and optimally,” he said. “The clients were acting rationally and optimally. And the outcome was a complete Horlicks.” Financial markets, far from being efficient, as most economists and policymakers at the time believed, were grossly inefficient. “And once you recognize that markets are inefficient, a lot of things change.”
One is the role of financial intermediaries, such as banks. Rather than seeking the most productive outlet for the money that depositors and investors entrust to them, they may follow trends and surf bubbles. These activities shift capital into projects that have little or no long-term value, such as speculative real-estate developments in the swamps of Florida. Rather than acting in their customers’ best interests, financial institutions may peddle opaque investment products, like collateralized debt obligations. Privy to superior information, banks can charge hefty fees and drive up their own profits at the expense of clients who are induced to take on risks they don’t fully understand—a form of rent seeking. “Mispricing gives incorrect signals for resource allocation, and, at worst, causes stock market booms and busts,” Woolley wrote in a 2010 paper. “Rent capture causes the misallocation of labor and capital, transfers substantial wealth to bankers and financiers, and, at worst, induces systemic failure. Both impose social costs on their own, but in combination they create a perfect storm of wealth destruction.”
Woolley originally endowed his institute on dysfunctionality with £4 million. (By British standards, that is a significant sum.) The institute opened in 2007—Mervyn King, the governor of the Bank of England, turned up at its launch party—and has published more than a dozen research papers challenging the benefits that financial markets and financial institutions bring to the economy. Dmitri Vayanos, a professor of finance at LSE who runs the Woolley Centre, has presented some of its research at Stanford, Columbia, the University of Chicago, and other leading universities. Woolley has published a ten-point “manifesto” aimed at the mutual funds, pension funds, and charitable endowments that, through payments of fees and commissions, ultimately help finance the salaries of many people on Wall Street and in the City of London. Among Woolley’s suggestions: investment funds should limit the turnover in their portfolios, refuse to pay performance fees, and avoid putting money into hedge funds and private equity firms.
Before leaving for lunch at his club, the Reform, Woolley pointed me to a study by the research firm Ibbotson Associates, which shows that during the past decade investors in hedge funds, overall, would have done just as well putting their money straight into the S&P 500. “The amount of rent capture has been huge,” Woolley said. “Investment banking, prime broking, mergers and acquisitions, hedge funds, private equity, commodity investment—the whole scale of activity is far too large.” I asked Woolley how big he thought the financial sector should be. “About a half or a third of its current size,” he replied.
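The Ibbotson finding Woolley cites is easy to see as arithmetic. The sketch below uses hypothetical return figures (not from the study) and the common "2 and 20" hedge-fund fee structure to show how fees can absorb a fund's entire edge over an index, leaving the investor no better off while the manager captures the difference as rent:

```python
# Illustrative only: hypothetical numbers showing how a "2 and 20" fee
# structure can leave a hedge-fund investor no better off than an index
# investor, even when the fund's gross return beats the market.

def net_return(gross: float, mgmt_fee: float = 0.02, perf_fee: float = 0.20) -> float:
    """Net annual return after a 2% management fee on assets and a 20%
    performance fee on gains (ignoring hurdles and high-water marks)."""
    after_mgmt = gross - mgmt_fee
    if after_mgmt <= 0:
        return after_mgmt  # the performance fee applies only to gains
    return after_mgmt * (1 - perf_fee)

index_return = 0.07   # hypothetical annual return from holding the S&P 500
fund_gross = 0.11     # hypothetical fund gross return: beats the index by 4 points

print(f"index investor earns {index_return:.1%}")       # 7.0%
print(f"hedge-fund investor nets {net_return(fund_gross):.1%}")  # 7.2%: the edge is mostly gone
```

On these made-up numbers, four percentage points of gross outperformance shrink to two-tenths of a point after fees; the rest is the "rent capture" Woolley describes.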
When I got back from London, I spoke with Ralph Schlosstein, the CEO of Evercore, a smallish investment bank of about six hundred employees that advises corporations on mergers and acquisitions but doesn’t do much in the way of issuing and trading securities. In the 1970s, Schlosstein worked on Capitol Hill as an economist before joining the Carter administration, in which he served at the Treasury and the White House. In the 1980s, he moved to Wall Street and worked for Lehman with Roger Altman, the chairman and founder of Evercore. Eventually, Schlosstein left to cofound the investment firm BlackRock, where he made a fortune. After retiring from BlackRock, in 2007, he could have moved to his house on Martha’s Vineyard, but he likes Wall Street and believes in it. “There will always be a need for funding from businesses and households,” he said. “We saw at the end of 2008 and in early 2009 what happens to an economy when that capital-raising and capital-allocation mechanism breaks down. Part of what has distinguished the U.S. economy from the rest of the world is that we’ve always had large, transparent pools of capital. Ultimately, that drives down the cost of capital in the U.S. relative to our competitors.”
Still, Schlosstein agrees with Woolley that Wall Street has problems, many of which derive from its size. In the early 1980s, Goldman and Morgan Stanley were roughly the size of Evercore today. Now they are many times as large. Big doesn’t necessarily mean bad, but when the Wall Street firms grew beyond a certain point they faced a set of new challenges. In a private partnership, the people who run the firm, rather than outside shareholders, bear the brunt of losses—a structure that discourages reckless risk taking. In addition, small banks don’t employ very much capital, which allows them to make a decent return by acting in the interests of their clients and relying on commissions. Big firms, however, have to take on more risk in order to generate the sorts of profits that their stockholders have come to expect. This inevitably involves building up their trading operations. “The leadership of these firms tends to go toward people who can deploy their vast amounts of capital and earn a decent return on it,” Schlosstein said. “That tends to be people from the trading and capital-markets side.”
Some kinds of trading serve a useful economic function. One is market making, in which banks accumulate large inventories of securities in order to facilitate buying and selling on the part of their clients. Banks also engage in active trading to meet their clients’ wishes either to lay off risk or to take it on. American Airlines might pay Morgan Stanley a fee to guarantee that the price of its jet fuel won’t rise above a certain level for three years. The bank would then make a series of trades in the oil futures markets designed to cover what it would have to pay American if the price of fuel rose. However, the mere fact that a certain trade is client-driven doesn’t mean it is socially useful. Banks often design complicated trading strategies that help a customer, such as a pension fund or a wealthy individual, circumvent regulatory requirements or reduce tax liabilities. From the client’s viewpoint, these types of financial products can create value, but from society’s perspective they merely shift money around. “The usual economists’ argument for financial innovation is that it adds to the size of the pie,” Gerald Epstein, an economist at the University of Massachusetts, said. “But these types of things don’t add to the pie. They redistribute it—often from taxpayers to banks and other financial institutions.”
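The jet-fuel arrangement described above is a price cap: for a fee, the bank pays the airline whatever the fuel price exceeds an agreed level. A minimal sketch, using hypothetical prices and a hypothetical fee (the bank's own offsetting futures trades are not modeled):

```python
# Illustrative sketch of a fuel-price cap like the one described.
# All figures are hypothetical.

CAP = 3.00   # agreed maximum price per gallon
FEE = 0.10   # up-front fee per gallon the airline pays for the cap

def airline_cost_per_gallon(spot: float) -> float:
    """The airline's effective fuel cost: the spot price, minus the
    bank's payout when spot exceeds the cap, plus the fee."""
    payout = max(spot - CAP, 0.0)  # bank covers the excess above the cap
    return spot - payout + FEE

for spot in (2.50, 3.00, 3.75):
    print(f"spot ${spot:.2f} -> airline pays ${airline_cost_per_gallon(spot):.2f}")
```

Whatever happens to the spot price, the airline never pays more than the cap plus the fee; the bank, in turn, lays off its exposure in the oil futures market, which is the client-driven trading the passage describes.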
Meanwhile, big banks also utilize many kinds of trading that aren’t in the service of their traditional clients. One is proprietary trading, in which they bet their own capital on movements in the markets. There’s no social defense for this practice, except the argument that the banks exist to make profits for their shareholders. The so-called Volcker rule, an element of the 2010 Dodd-Frank financial reform bill intended to prevent banks from taking too many risks with their depositors’ money, was supposed to bar banks from proprietary trading. However, it is not yet clear how the rule will be applied or how it will prevent some types of proprietary trading that are difficult to distinguish from market making. If a firm wants to place a bet on falling interest rates, for example, it can simply have its market-making unit build up its inventory of bonds.
The Dodd-Frank bill also didn’t eliminate what Schlosstein describes as “a whole bunch of activities that fell into the category of speculation rather than effectively functioning capital markets.” Leading up to the collapse, the banks became heavily involved in facilitating speculation by other traders, particularly hedge funds, which buy and sell at a frenetic pace, generating big fees and commissions for Wall Street firms. Schlosstein picked out the growth of credit default swaps, a type of derivative often used purely for speculative purposes. When an investor or financial institution buys this kind of swap, it doesn’t purchase a bond itself; it just places a bet on whether the bond will default. At the height of the boom, for every dollar banks issued in bonds, they might issue twenty dollars in swaps. “If they did a hundred-million-dollar bond issue, two billion dollars of swaps would be created and traded,” Schlosstein said. “That’s insane.” From the banks’ perspective, creating this huge market in side bets was very profitable insanity. By late 2007, the notional value of outstanding credit default swaps was about $60 trillion—more than four times the size of the U.S. gross domestic product. Each time a financial institution issued a swap, it charged the customer a commission. But wagers on credit default swaps are zero-sum games. For every winner, there is a loser. In the aggregate, little or no economic value is created.
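Schlosstein's twenty-to-one figure and the zero-sum point can be put in a few lines of arithmetic. A minimal sketch, using the numbers quoted in the passage (the protection-payout model is simplified and ignores premiums):

```python
# Illustrative arithmetic for the ratio Schlosstein describes and for the
# zero-sum nature of credit-default-swap wagers.

bond_issue = 100_000_000     # a $100 million bond issue
swap_multiple = 20           # "twenty dollars in swaps" per dollar of bonds

swaps_notional = bond_issue * swap_multiple
print(f"swaps created on the issue: ${swaps_notional:,}")  # $2,000,000,000

def cds_payoffs(notional: float, defaulted: bool) -> tuple[float, float]:
    """(protection buyer's P&L, protection seller's P&L) on default,
    ignoring premiums: a pure side bet on whether the bond fails."""
    payout = notional if defaulted else 0.0
    return payout, -payout

buyer, seller = cds_payoffs(10_000_000, defaulted=True)
assert buyer + seller == 0.0  # zero-sum: every winner is matched by a loser
```

The only party guaranteed to come out ahead is the bank collecting a commission on each swap it issues, which is why the market could grow to a notional $60 trillion while creating, in the aggregate, little or no economic value.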
Since the market collapsed, far fewer credit default swaps have been issued. But the insidious culture that allowed Wall Street firms to peddle securities of dubious value to pension funds and charitable endowments remains largely in place. “Traditionally, the relationship between Wall Street and its big clients has been based on the ‘big boy’ concept,” Schlosstein explained. “You are dealing with sophisticated investors who can do their own due diligence. For example, if CALPERS”—the California Public Employees Retirement System—“wants to buy something that a major bank is selling short, it’s not the bank’s responsibility to tell them. On Wall Street, this was the accepted way of doing business.” Early in 2010, the Securities and Exchange Commission appeared to challenge the big-boy concept, suing Goldman Sachs for failing to disclose material information about some subprime-mortgage securities that it sold, but the case was resolved without Goldman’s admitting any wrongdoing. “This issue started to get discussed, then fell to the wayside when Goldman settled their case,” Schlosstein said.
The big banks insist that they have to be big in order to provide the services that their corporate clients demand. “We are in one hundred and fifty-nine countries,” Vikram Pandit told me. “Companies need us because they are going global, too. They have cash-management needs all around the world. They have capital-market needs all around the world. We can meet those needs.” More than two-thirds of Citi’s 260,000 employees work outside the United States. In the first nine months of 2010, nearly three-quarters of the firm’s profits emanated from Europe, Asia, and Latin America. In Brazil, Citi helped Petrobras, the state-run oil company, to issue stock to the public; in the United Kingdom, it helped raise money for a leveraged buyout of Tomkins, an engineering company.
Excerpted from The Occupy Handbook by permission.