As we approach the 20th anniversary of the terrorist attacks on September 11, 2001, we might think that we’ve heard all we need to (and then some) about that day. In a nation consumed with mourning more than 650,000 people due to the continuing COVID-19 pandemic, what does it mean to remember the outpouring of unity and grief that followed the murder of close to 3,000 after passenger-filled planes were piloted as missiles?
Innumerable accounts have combed the depths of the grief, shock, anger, and resilience of that day. They chronicle epic heroism and tiny acts of kindness. But “9/11 fatigue” does a disservice to history. Beyond the jingoistic rhetoric, the market-driven spectacle, the bumper-sticker sloganization of memory, the printed tourist guides and Twin Towers tchotchkes for sale around the site perimeter, lie essential, inspiring, and instructive stories about basic human goodness and the power of collective action. Episodes of pragmatism, resourcefulness, and compassion. Moments of grace and solicitude. Lifesaving efforts born of professional honor. The joining of unlikely hands.
Still, remarkably, some of the most affecting of these stories have gone unheard.
Here’s just one. On the morning of September 11, telecom specialist Rich Varela was working a contract gig on the twelfth floor of 1 World Financial Center, directly across the street from the South Tower. A few minutes into his workday, he looked around the windowless, nearly soundproof “comp data” room humming with servers, telephone switchboards, and other electronics, and noticed that things seemed oddly quiet. There must be a late bell today, he reasoned.
About 15 minutes after he’d sat down at his desk, a “crazy, ridiculous rumble” erupted, and the building did a little shimmy. Maybe it was one of those big 18-wheelers, thought Varela, picturing the thunder created when a large truck rolls over metal plates in the street. He gave a passing thought to his surroundings. If that had been an explosion, from a blown gas line or something, nobody would even know I was in here.
A few minutes later, his buddy called from Jersey. “Get outta there! They’re crashing planes into the World Trade Center!”
Because his friend had a history as a prankster, Varela didn’t buy his story right away. “No, I’m serious,” the friend said. And then it clicked. That rumbling sound. Varela gathered his things, neither panicked nor dawdling. He opened the thick door of the comp room that had shielded him from the screeching and strobes of the building’s fire alarm. Down the hall, phone receivers dangled off corkscrewed cords, papers lay strewn across worktables, and chairs loitered at odd angles rather than nestling neatly under their desks. The trading floor was wholly uninhabited. “It was like people just evaporated.”
In the lobby, Varela caught his first glimpse of what looked to him like Armageddon. The whole front of 1 World Financial Center had imploded, leaving the plate glass window in shards. Amid piles of smashed concrete and polished stone, pockets of flame feasted on combustibles. Varela stared at the blazing hole in the side of the South Tower. “You could hear a pin drop. You’re in Lower Manhattan. You could hear sirens in the distance but immediately in the area there was no motion of life. I thought that was so eerie.”
Outside, a series of artillery-like blasts made him duck for cover. The eruptions sounded like “cannon fire or missiles coming into Manhattan.” Boom! Boom! Are there battleships out in the water shooting planes out of the sky? Varela wondered. Turning to face the towers, he realized he was hearing the sounds of bodies hitting the ground. He tried to shake off the images as he headed toward the Hudson.
Halfway to the river, still reeling, Varela heard a rumbling. The South Tower was imploding. He tried to outrun the plume of ash, dust, and smoke that barreled toward him. At the seawall, Varela spotted a boat.
On September 11, 2001, nearly half a million civilians caught in an act of war on Manhattan escaped by water when mariners conducted a spontaneous rescue. This was the largest waterborne evacuation in history—more massive than the famous World War II rescue of troops pinned by Hitler’s armies against the coast in Dunkirk, France. In 1940, hundreds of naval vessels and civilian boats rallied to rescue 338,000 British and Allied soldiers over the course of nine days. But on that Tuesday in 2001, approximately 800 mariners helped evacuate between 400,000 and 500,000 people within nine hours. The speed, spontaneity, and success of this effort was unprecedented.
Somehow, in the sea of reporting that followed the attacks, this fact has garnered remarkably little attention. I wrote Saved at the Seawall: Stories from the September 11 Boat Lift to address that omission.
Within minutes after thick, gray smoke began spilling through the airplane-shaped hole in the World Trade Center’s North Tower, adults and children—some burned and bleeding, some covered with debris—had fled to the water’s edge, running until they ran out of land. Never was it clearer that Manhattan is an island. Mariners raced to meet them, white wakes zigzagging across the harbor. Hours before the Coast Guard’s call for “all available boats” crackled out over marine radios, ferries, tugs, dinner boats, sailing yachts, and other vessels had begun converging along Manhattan’s shores.
So many people ached to contribute something that day and in the days that followed. They donated blood, bagged up stacks of peanut butter and jelly sandwiches, built stretchers that went unused. But, as fate would have it, mariners who had skills and apparatus that could help right away became first responders. Their collaborative efforts that morning saved countless lives. So did other selfless acts by people from all walks of life. Varela is just one shining example.
As the debris cloud rained down on him, Varela bounded over the steel railing separating the water from the land and leapt onto the bow of fireboat John D. McKean. He felt his leg buckle and almost snap when he hit the deck. A mass of people jumped on after him, falling onto the deck, some landing on him, and the boat rocked under the weight of the leaping hordes. Varela worried it might capsize. He stumbled over people on his way to the far side of the deck, away from the avalanche, then curled in on himself, choking as everything went black.
When the air cleared a bit, Varela saw casualties all around him. Somebody was nursing a broken leg. A woman lay splayed out beside the bow pipe. It looked as if she had landed face-first on the steel deck. He hollered to a nearby firefighter that she needed medical attention—that she was unconscious and might already be dead. There was little anyone could do on the boat, so reaching a triage center, quickly, was imperative. But other lives needed saving, and people continued to clamber aboard.
Quickly, Varela made his first choice of many that day to help others. Coughing and gagging, he yanked off his gray-green long-sleeved cotton shirt, tore off a strip, and wet it with water he found dribbling from some leaky hose on deck. He tied the makeshift filter around his face and then tore off more strips for fellow passengers.
Soon, the fireboat McKean would evacuate people to safety at a rundown pier in Jersey City. Before that, though, Varela helped heave up a docking line used to rescue a young woman from the water. While there on the Jersey side, he helped carry a chair to transport a man with a shattered leg off the boat.
Then, just as the fireboat crew prepared to cast off lines, the second tower collapsed. Varela saw the looks on the faces of the fireboat’s crew as 1 World Trade Center pancaked down, burying their fellow firefighters along with the civilians they’d been sent to save.
Their horror prompted Varela to make the decision of a lifetime. “I’m coming with you,” he said. “You guys need help.”
“Let’s go,” came the reply, and Varela jumped back on board. So did an older gentleman, explaining, “My son’s in that building.”
“It really felt like, I might die today,” Varela later said. “And I was okay with it.” These guys need help, he thought. And that was it.
So many people that day made choices to take risky action for the sake of others: firefighters climbing the stairwells; co-workers helping those less mobile escape the burning buildings; office workers in dress shirts hauling equipment alongside rescue workers in the plaza; and mariners who dropped evacuees at safer shores, over and over, then set course straight back to the island on fire to save still more people trapped at the seawall.
Surfacing these long-overlooked stories grants us a window onto who we have been for one another and who we can be again. Remembering that this, too, is part of our heritage can help us reclaim our humanity as we face the perils of today.
National 9/11 Memorial, Manhattan. Photo by author.
The post-9/11 era has decidedly come to an end. Shaped not only by the loss of life on September 11, 2001, but also by all that ensued in its wake, including the long wars in Afghanistan and Iraq, it was defined by fears of foreign terrorism, security culture, and a xenophobic and jingoistic form of patriotism. But the post-9/11 era was also significantly shaped by an under-appreciated force: the culture of memory.
As the 20th anniversary of 9/11 looms, it makes sense to reflect on this preoccupation, if not obsession, with memory. The urge to memorialize two decades ago was swift and strong, rising out of a deep sense of grief and loss. Over one thousand 9/11 memorials were built around the country and around the world. Many of these memorials were prompted by the decision of the Port Authority of New York and New Jersey to hand out, from 2010 until 2016, pieces of steel recovered from the site for use in memorials. In New York, the 9/11 memorial and museum garnered extraordinary attention when they opened in 2011 and 2014, respectively. They also cost, together, almost $1 billion, a significant percentage coming from public funds.
Memorialization has been a nationally affirming enterprise, providing comforting narratives of national unity whose coherence nonetheless required the exclusion of many aspects of the event. With memorials being built well into the 2010s, it seemed increasingly that the surfeit of 9/11 memory was not about 9/11, or even those who died that day. Rather it reflected a desire to return to that post-9/11 moment of national unity, in which, however falsely, the nation seemed to speak with one voice: we are Americans.
What does it mean to remember 9/11 20 years later, when an entire generation has been born since? The memory-focused rebuilding of Ground Zero in lower Manhattan most painfully raises this question. Does anyone still care that One World Trade Center, formerly known as the Freedom Tower, is 1,776 feet tall in a gesture of patriotism? Or, that the $4 billion publicly funded Oculus shopping mall (excuse me, transportation hub) has a skylight that opens on the anniversary of 9/11? The 9/11 museum, which tells a nationalistic story of 9/11 as an exceptional historical event and sells 9/11 hoodies and souvenirs in its gift shop, looks increasingly dated. Both the museum and the memorial are hugely expensive to run, the memorial because of its water features and security costs. The museum’s business model of selling entry tickets to tourists for $26 has been challenged by the pandemic, forcing it to furlough or lay off more than half its staff and cancel its plans for 20th anniversary special exhibitions. It has become the subject of debate about its relevance a mere seven years after its opening.
Despite the proliferation of 9/11 memory, a notable shift in national memorialization was signaled when in 2018 the National Memorial for Peace and Justice and Legacy Museum opened in Montgomery, Alabama. A memorial to over 4,000 victims of lynchings, it demands recognition that terrorism is not a new or foreign aspect of American history, but has long been a part of the US national story as racial terrorism. The Legacy Museum re-narrates the US myth of racial progress to argue that the contemporary mass incarceration of Black Americans is evidence that slavery never completely ended.
National Memorial for Peace and Justice and Legacy Museum, Montgomery, Ala. Photo by author.
Here, memory is being deployed not to uphold a myth of national unity, as 9/11 memory did, but to demand that the national script be revised. It is notable that Bryan Stevenson, who founded the memorial and museum through the Equal Justice Initiative, felt that memorialization was the best strategy to raise public awareness about the legacies of slavery. At the same time, long fought-over Confederate monuments have been toppled and removed, a reckoning with the past that once seemed impossible and now seems inevitable.
We don’t know what the new era we have entered will bring. But the demand that the nation confront its own history of terrorism has been activated. This means remembering terrorism not as a force that comes from outside but as a fundamental aspect of the American project of settler colonialism and slavery that lives on today. The memory of this past must be confronted in the present. To recognize the history of US terrorism is not to demand shame but to open up the opportunity for the nation to move forward from its difficult histories.
My home state of Arkansas has been the subject of many recent news reports due to a low incidence of vaccination against COVID-19 combined with a high incidence of the unvaccinated rushing to feed stores to purchase cattle deworming agents under the belief that these are more effective against a raging respiratory illness than any of that stuff being “pushed” by the “medical establishment.” Many commentators have linked this behavior to the prevalence of conspiracy theories on social media, while others center the increasing failure of average citizens to respect the knowledge and skills experts have accumulated through years of education and experience. The various books touted as providing some kind of means for understanding our present moment fall into these frameworks, from Nancy L. Rosenblum and Russell Muirhead’s A Lot of People Are Saying: The New Conspiracism and the Assault on Democracy (2019) to Tom Nichols’s The Death of Expertise: The Campaign against Established Knowledge and Why It Matters (2017). However, I would argue that the best text for making sense of why people across the country are rushing to eat horse paste during a pandemic was published more than thirty years ago—Nathan O. Hatch’s 1989 The Democratization of American Christianity.
By tracking the evolution of five different religious movements in the early American republic (the Christian movement, Methodists, Baptists, Black churches, and Mormons), Hatch demonstrates how the revolutionary fervor for all things democratic pervaded the spiritual and cultural realms as much as it did the political, resulting in the conscience becoming individualized and all traditionally earned authority being held suspect. “Above all, the Revolution dramatically expanded the circle of people who considered themselves capable of thinking for themselves about issues of freedom, equality, sovereignty, and representation. Respect for authority, tradition, station, and education eroded,” Hatch writes. “It was not merely the winning of battles and the writing of constitutions that excited apocalyptic visions in the minds of ordinary people but the realization that the very structures of society were undergoing a democratic winnowing.”
These popular religious movements “denied the age-old distinction that set the clergy apart as a separate order of men, and they refused to defer to learned theologians and traditional orthodoxies.” In addition, they also empowered “ordinary people by taking their deepest spiritual impulses at face value rather than subjecting them to the scrutiny of orthodox doctrine and the frowns of respectable clergymen.” One early Baptist leader, John Leland, emphasized the right of any layman to read and interpret the Bible for himself, writing, “Did many of the rulers believe in Christ when he was upon earth? Were not the learned clergy (the scribes) his most inveterate enemies?” Some backwoods religious dissenters went even further and emphasized their own illiteracy as making them purer vessels for the Almighty, unable to be corrupted by traditional, elite education.
The craving for democratic equality went beyond the statehouse and beyond the church and infused all spheres of life. In his work, Hatch draws occasional attention to folk like Samuel Thomson, an “uneducated practitioner of natural remedies who learned his botanic medicine in rural New Hampshire at the close of the eighteenth century.” In his autobiographical narrative, Thomson argued that Americans “should in medicine, as in religion and politics, act for themselves.” He and his adherents published an array of pamphlets and journals that “made their case by weaving together powerful democratic themes.” As Hatch summarizes, the cornerstone of their teachings was this: “In the end, each person had the potential to become his or her own physician.” No wonder, then, that such medical practices became adopted by these increasingly democratic religious movements. And medicine was not the only profession undergoing a “democratic winnowing,” for everywhere, common people were determined to throw off the shackles of traditional expertise associated with the old order, in law, economics, and more.
According to legend, as British troops surrendered to General George Washington at the end of the Siege of Yorktown, their band showed a remarkable sense of the historical import of the moment by playing a little tune called, “The World Turned Upside Down.” Most historians hold this tale to be apocryphal, but there is some irony in the legend, for the ballad was first published in the 1640s as a protest against Parliament’s prohibitions of the celebrations of Christmas. In England, the common people defied the elite Parliamentary infringement upon their “lowly” traditions, singing:
Listen to me and you shall hear, news hath not been this thousand year: Since Herod, Caesar, and many more, you never heard the like before. Holy-dayes are despis’d, new fashions are devis’d. Old Christmas is kickt out of Town. Yet let’s be content, and the times lament, you see the world turn’d upside down.
Yet in America, those “new fashions,” associated with Herod and Caesar in this ballad, flew under the banner of Christ. But not only was Christmas “kickt out of Town” in America—so, too, was that more recent tradition of inquiry dubbed the Enlightenment. When I was in grade school, we were taught to regard the American Revolution as the apogee of Enlightenment thinking, the incarnation of those lofty ideas of liberty expounded by the likes of John Locke and Jean-Jacques Rousseau. And certainly, those who formulated the Declaration of Independence and the American Constitution drew from their works. However, the main thrust of American culture ran in the opposite direction.
“This vast transformation, this shift away from the Enlightenment and classical republicanism toward vulgar democracy and materialistic individualism in a matter of decades, was the real American revolution,” writes Hatch. Some writers may lament the so-called “death of expertise,” but we have to ask: when was it ever truly embraced here in the United States? Those empty shelves of cattle dewormer at the local feed store are as much a legacy of the American Revolution as the laws that govern this country. The motivation to self-medicate with horse paste is driven not by some kind of new, mad, conspiratorial thinking but, instead, by a fervent, foundational belief that “all men are created equal.”
Editor’s Note: HNN recently reposted an excerpt of a Medium post authored by Carol Berkin, Richard D. Brown, Jane E. Calvert, Joseph J. Ellis, Jack N. Rakove, and Gordon S. Wood. That post took the form of an open letter of critique of remarks made by Dr. Woody Holton in the Washington Post addressing the significance of Lord Dunmore’s proclamation promising emancipation to enslaved Virginians who took up arms on the side of the Crown in 1775, and of the broader significance of the preservation of slavery as a motive for American independence.
HNN has offered Dr. Holton the opportunity to publish a rejoinder, which he has accepted.
I am flattered that six distinguished professors of the American Revolution have taken an interest in my work—or at least its potential impact. Just one index of these scholars’ significance is that I cite all six of them in my reappraisal of the founding era, Liberty is Sweet: The Hidden History of the American Revolution, due out next month.
But it saddens me that these senior professors have chosen to deny the obvious fact that the informal alliance between enslaved African Americans and British imperial officials infuriated white colonists and helped push them toward independence. Surely the professors know that the Continental Congress chose as the capstone for its twenty-six charges against King George III the claim that the king (actually his representatives in America) had “excited domestic insurrections”—slave revolts—“amongst us.”
Congress’s accusation culminated more than a year’s worth of colonial denunciations of the British for recruiting African Americans as soldiers and even—allegedly—encouraging them to slit their masters’ throats (as writers in Maryland, Virginia, and North Carolina all expressed it). Indeed, the six professors’ timing is perfect. Others having also doubted this claim, especially in reaction to the New York Times’s “1619 Project,” I last month began a project of my own. Every day I tweet out one quotation from a white American of 1774-1776 who denounced Britain’s cooperation with African Americans, along with an image of the quoted document.
“The book version of the #1619 Project appears in 76 days. 1 of its central claims—that colonial whites’ rage at the Anglo-African alliance pushed them toward Independence—has been disputed. So I will tweet 1 piece of evidence every day for the next 76.”
— Woody Holton (@woodyholtonusc), September 1, 2021
I will end the series after seventy-six days, but I have collected sufficient evidence to go on and on.
I am in no position to lecture these distinguished professors, who count three Pulitzer prizes among them, but since they have criticized my work, I have no choice but to speak plain: I think their critique betrays a fundamental misunderstanding of how the Declaration of Independence came about.
It happened in stages. In 1762, most colonial freemen were, all in all, satisfied with their place in the British empire. Indeed, as Prof. Wood’s former student Brendan McConville emphasizes in The King’s Three Faces, they loved their new king. The initiative for changing the imperial relationship came not from the colonies but from Parliament. From 1763 through late 1774, Parliament sought more from the provincials, especially in the areas I like to summarize as the 4 Ts: taxes, territory, trade, and treasury notes (paper money). And all the free colonists wanted was … none of those changes. Until late in 1774, they strenuously resisted Parliament’s initiatives, but most of them would have been perfectly happy to return to the status quo of 1762. They did not seek revolution but (to use another loaded word from English history) restoration.
The grand question then becomes, “What converted the colonists from simply wanting to turn back the clock—their view from 1763 to 1774—to desiring, by spring 1776, to exit the empire?” Many things: the bloodshed at Lexington, Concord, and Bunker Hill; the news that the administration of Lord North was going to send German (“Hessian”) mercenaries against them; the publication in January 1776 of Common Sense; and much, much more.
All I argued in the essay that the professors criticize is that one of these factors that turned these white restorationists into advocates for independence was the mother country’s cooperation with their slaves. It was not the reason, but it was a reason. And that is important, because it means that African Americans, who of course were excluded from the provincial assemblies and Continental Congress, nonetheless had a figurative seat at the table.
Nor was Blacks’ role passive. Congress depicted them as incited to action by the emancipation proclamation issued by Lord Dunmore, the last royal governor of Virginia, and the professors adopt that same formulation. But here again, the timeline is crucial. Whites began recording African American overtures to the British in the fall of 1774. At first British officials turned them away, but they kept coming, right up until Dunmore finally published his emancipation proclamation on November 15, 1775, four score and seven years before Lincoln’s.
The professors claim that white colonists were already headed toward independence in fall 1774, when these African American initiatives began. But in this they indulge in counterfactual history—assuming they know what would have happened. It seems clear to me that, even that late, had Parliament chosen to repeal all of its colonial legislation since 1762, it could have kept its American empire intact. What we are looking for are the bells that could not be unrung. Especially in the south, one of the British aggressions that foreclosed the possibility of reconciliation was the governors’ and naval officers’ decision to cooperate with the colonists’ slaves (as well as with Native Americans—the Declaration of Independence’s “merciless Indian Savages”—but that is another story).
In Liberty is Sweet, I supply much more evidence for my stadial (stages-based) view of the road to independence. I compare it to a mouse’s escape from a maze, since it was the product not of a grand design but of a series of discrete choices at intersections, from none of which the next was visible. Would that the distinguished professors had waited to judge my reinterpretation by my 700-page book rather than the 700-word Washington Post article I wrote to promote it!
The professors may be correct that we would still get independence even if we removed one of its main ingredients, like Dunmore’s Proclamation … or the Battle of Lexington and Concord, which I teach as not only the first battle of the revolution but also, for many, especially in New England, the final argument for independence. But I would never take that remote possibility as a reason to write a history of the American Revolution that omitted Lexington and Concord. And by the same token, I hope the professors would never omit the Anglo-African alliance.
I agree with the professors that it would be a disservice to pretend that enslaved Americans played a significant role in the origins of the American Revolution if there was no evidence that they did. But the evidence is overwhelming, and I invite you to sample it on Twitter at @woodyholtonusc. If we heed the professors’ call to ignore the influence of the enslaved people of the founding era, we will dishonor not only those heroic Americans but our own search for truth.
Good morning, HNN!
I’m pleased to present the first episode of Season 3 of Skipped History, chronicling the Attica Prison uprising of 1971. It’s been 50 years since the stunning rebellion, and still the consequences are unfolding:
Today’s story comes from Blood in the Water by Heather Ann Thompson. Have you read it? I found it truly jaw-dropping.
I hope you enjoy the video. Questions, comments, and suggestions for further reading are welcome!
The controversy over Critical Race Theory has animated teachers, school administrators and state legislators — not to mention parents. The former chancellor of the New York City schools, Richard Carranza, went so far as to proclaim that it was the duty of teachers to combat “toxic whiteness” — a disastrous term that was picked up by the New York Post.
One of the difficulties in discussing Critical Race Theory is that the term has become entwined with the ideas in Robin DiAngelo’s White Fragility. Endless disclaimers that Critical Race Theory (CRT) is about systemic rather than individual racism seem specious to those who conflate the idea with the so-called “anti-racism training” associated with DiAngelo, and the passive-aggressive personal confrontations offered in her training sessions. Educators and others are afraid of undoing the self-esteem of white students, and this is a legitimate concern. I imagine that many race-training sessions at workplaces are intimidating to adults, but the idea is even more of a danger to classroom teaching. No teacher should enter a classroom and announce that “I will be very cautious about this, but you need to understand that you all as individual white people are perpetuating racism in this country.” You cannot have a real discussion after that, no matter how gently you try to approach the subject. As a long-time teacher of American history, I hope to show that it is possible to discuss racism and the years of protests against it without intimidating students of color or white students.
This essay is dedicated to the students and teachers who want to cut through the controversy about teaching race and racism to confront the truths in American history with all its twists and turns, lights and shadows.
I was a teacher of American history for more than 30 years at a high school in Brooklyn, at several of the New York City community colleges, and, as an adjunct instructor, at Hunter College and City College. I taught abolition, slavery, and Civil Rights, which consumed much of my class time from the first day to the last every semester. My students were a glorious mixture of nearly every race and color in New York City.
I treated them, whatever their academic level, as intellectuals-in-training by assigning speeches and documents, long and short, for homework, which they had to bring to class the next day. I would not lecture, give them questions to answer, or suggest ideas to look for as they read. Instead, they were asked to choose sentences they liked or disliked for whatever reason and to make brief comments explaining their choices.
In some classes I had the kids write the first few words of their sentences on the blackboard, and then we would discuss what they had chosen. Each student read their whole sentence out loud while everyone read silently along. These were works by Martin Luther King Jr., Frederick Douglass, Madison Grant (one of the founders of scientific racism), and Barack Obama. I also taught a class in which we read only American speeches. My first question nearly every day we did these longish readings of 10 or 15 pages was, for example, “What do you think about Frederick Douglass’s ‘Fourth of July Oration’?” That would lead to a discussion that served as an introduction to the lesson before we turned to their sentences. We would do shorter documents, sometimes in class, once or twice a week, and the longer ones twice in three weeks. In my high school classes I called this the Tarzan Theory of Reading, because we were swinging through the document by grabbing on to sentence after sentence.
To teach you have to “bring the things before the eyes.” When we discussed the clause “all men are created equal,” the understanding came from the students themselves that it embodies the hopes and dreams of every American and, simultaneously, the nightmares of inequality and violence that people of color have been forced to live with in this country. I laid the groundwork on the first day of every class, when the students suggested events for a timeline from 1492 to 1865. Each student would write down three events; I would ask for volunteers to offer one event apiece, write the events on the board, and then we would discuss them as we went along. The Declaration of Independence and its most famous phrase always came up. When it came to teaching the American Revolution, I spent five days discussing the document and its implications for the Revolution and for history up to the present. Whatever you think of the New York Times’ 1619 Project, with its attempt to de-emphasize the importance of the Declaration, it is still necessary to understand the most famous phrase in American, if not world, history. Here is a brief description of my lessons on the Declaration, concentrating on the last day in the sequence, when we discussed the meanings of “all men are created equal.”
The first assignment for the series of Declaration classes was to determine how many parts the document had, keeping within a limit of five. Then the students were asked to find and underline the references to the Native Americans, the Stamp Act, the Boston Port Act, the Quebec Act, and the Massachusetts Government Act. The last three were parts of the Intolerable Acts, to which the Americans responded by forming the First Continental Congress in 1774. By narrowing the Declaration’s structure down to three parts, the students could see that in the middle part of the document all the sentences began with “He” or “For.” Those were the grievances.
The students pointed out each of the grievances that they had underlined. We concluded that “For imposing Taxes on us without our Consent” was ambiguous: it could refer to the Stamp Act, the Navigation Acts, the Townshend Acts, or the Tea Tax that provoked the Boston Tea Party in 1773. The Intolerable Acts were more straightforward to identify. All of these details had been covered in class in the weeks before our study of the Declaration, which was written in June and July of 1776, more than a year after the first battles of the American Revolution at Lexington and Concord in April 1775. The war began on April 19, a date long celebrated in Boston as Patriots’ Day, with the famous “shot heard ’round the world.”
Then I asked for a volunteer to read the first paragraph of the Declaration itself, which begins with “When in the Course of Human Events….” and ends with “impelled them to the separation.” (The document I used was the version from the Yale Avalon Project’s collection of 18th-century documents.) When I asked what they thought of the opening words, the students concluded that the paragraph offers a theory of history: people make history. This is a description of agency, a key term that historians use to describe how all peoples can take control of their fate. In our case, the Americans became revolutionaries by protesting the Stamp Act and the Tea Act and by forming the First Continental Congress in response to the Intolerable Acts.
The first paragraph also invokes the laws of nature and nature’s God that entitled the Americans to separate from Great Britain. Next, a student read the following sentences, which contain the clause “that all men are created equal” and list the unalienable rights to life, liberty, and the pursuit of happiness. Thus “all men are created equal” stands on the same plane as the rights to life and liberty and, of course, gravity: natural laws, the kind Sir Isaac Newton discovered, that describe how the universe runs. We discussed the most famous clause, “all men are created equal,” by itself on the last day of the week.
At this point I handed out the part of the Declaration, written by Thomas Jefferson, that the Second Continental Congress dropped from the final version: the section that blamed the king for slavery in the 13 American colonies. Here is the beginning:
He has waged cruel war against human nature itself violating its most sacred rights of life and liberty in the persons of a distant people who never offended him, captivating and carrying them into slavery in another hemisphere, or to incur miserable death in their transportation thither. (T)his piratical warfare, the opprobrium of infidel powers is the warfare of the Christian King of Great Britain.
As my students saw immediately, Thomas Jefferson described the slaves as humans with natural rights, and he called slavery “cruel war.” Clearly this omitted section was meant to be part of the grievances, because the paragraph begins with “He.” Jefferson’s phrase “piratical warfare” might be obscure to readers today, but my students knew it referred to man-stealing, an abolitionist term for enslavement. They had read the five-page polemic “African Slavery in America,” by Tom Paine, for the third day of class; man-stealing is one of the key phrases in that abolitionist pamphlet, published in 1775 by the Pennsylvania Journal and Weekly Advertiser. Jefferson also refers to the Middle Passage, which caused the enslaved people to suffer “miserable death” on their journey to North America. And he sarcastically called the slave trade the work of the “Christian King of Great Britain,” who was practicing the “execrable commerce” of the “infidel powers”: the Spanish, Portuguese, and Muslims who had preceded the British in the slave trade. Now the hypocrisy of the future president Jefferson becomes the topic of discussion, especially since his livelihood depended on the labor of hundreds of enslaved persons on two plantations. He blamed the king for foisting the slaves on the Americans and complained that the king was also
exciting those very people to rise in arms among us and to purchase that liberty of which he has deprived them, by murdering the people upon whom he also obtruded them: thus paying off former crimes committed against the liberties of one people with crimes which he urges them to commit against the lives of another.
Such blatant hypocrisy was common among those defending the system of slavery; it is especially striking in view of Jefferson’s words about equality and liberty, for whites and blacks alike, in this very document. The audacity of Jefferson, claiming that “his” slaves were unfairly treated by the king and that the king was to blame for his (Jefferson’s) own ill-gotten gains, reminds me of a drug dealer who claims it is fine to sell drugs to “get over,” though of course he does not take them himself, which would be dangerous to his health. It takes a close reading of the phraseology in the quote above to figure out who were the slaves and who were the Patriots in this convoluted grievance. That is remarkable, because almost all of the Declaration is clearly written; it is a prose poem that drives you on, in the same way the Gettysburg Address does. This omitted section is turgid.
Finally, we must point out why the Second Continental Congress rejected the grievance on slavery and the king. The Congress had agreed that the Declaration had to be unanimous in order to create a united front against the king and his army. But the slaveholders, led by South Carolina, refused to vote for the Declaration if it included the section criticizing slavery. It was left out of the final version.
What makes this intimate bond of Enlightenment idealism and rank racism a grievance is the argument that the king was encouraging the slaves of the Patriots (not those of the Loyalists, truth be told) to kill the revolutionaries in order to obtain their freedom as members of the British Army. The slaves of the Loyalists were not offered that opportunity by Lord Dunmore in his recently famous but misunderstood Proclamation of 1775; after all, the Loyalists were supporting the king, so they could keep their slaves. This idea from the omitted section survives in the final grievance, which says “he has excited domestic (slave) insurrection among us” and, in the same breath, condemns the king for encouraging the “merciless Indian savages” to wage war against the Patriots by murdering people of “all ages, sexes and conditions.” Students are not used to reading the word “savages”: shocking language for a document about equality. It is an assault on modern sensibilities, but of course it was a common way to refer to Native Americans at the time.
So this class began with a discussion of the causes of the turmoil of the 1760s and ’70s as examples of human agency, moved to a description of unalienable rights based on nature or nature’s God, and arrived finally at a justification of the Revolution as part of natural law, with “all men are created equal” included as one of those natural laws. But, as has become apparent in context, all of this is bound up with the deep hypocrisy of holding humans endowed with natural rights in the bonds of what a slaveholding founding father himself called the “cruel” and “piratical” war that we know as chattel slavery.
At this point we have read only about 80 words at the beginning of the final version of the document.
Part II of this essay will appear in the coming weeks on HNN.
FDNY Fireboat John D. McKean celebrates the 125th anniversary of the Brooklyn Bridge, 2008.
Editor’s Note: The retired FDNY fireboat John D. McKean was pulled into emergency service on September 11, 2001, supporting firefighting with its still-functional pumps and transporting evacuees from Manhattan. The McKean and its crew are discussed in this week’s essay by Jessica DuLong on the 9/11 boatlift.
New Yorkers and visitors to the city will have the opportunity this month to visit the retired FDNY Fireboat John D. McKean at the Hudson River Parks Friends’ Pier 25 in lower Manhattan. The McKean will be available to visitors and for public and private rides in New York Harbor.
The McKean was purchased at auction in 2016. In 2018, the Fireboat McKean Preservation Project was formed with the mission of preserving this historic vessel for museum and educational purposes. In 2019, the McKean underwent major repairs to its hull at the North River Shipyard in Upper Nyack, NY, with subsequent restoration of the ship’s above-deck areas. Through the Hudson River Parks Friends, the McKean will be able to dock at Pier 25, close to lower Manhattan, for the 20th anniversary of the attack on the World Trade Center on September 11, 2001.
John D. McKean was a marine engineer aboard the fireboat George B. McClellan when he was fatally burned by a steam explosion in 1953. Despite his injuries, he kept his post to help bring the ship in safely. An already-ordered but yet-to-be-commissioned fireboat was named in his honor the following year. John McKean’s spirit was reflected in the vessel’s service on September 11, supporting firefighting and transporting evacuees. The McKean also helped fight the 1991 Staten Island Ferry Terminal fire and assisted in the rescue of passengers after pilot Chesley “Sully” Sullenberger landed US Airways Flight 1549 on the Hudson River in 2009.
For more information, go to www.fireboatmckean.org.
Secretary of State Henry Kissinger, President Richard Nixon, and Maj. Gen. Alexander Haig discuss the situation in Vietnam at Camp David, November 1972.
There have been any number of pundits comparing the fall of Saigon in 1975 to the horrors we are witnessing as the United States withdraws from Afghanistan. Events in Afghanistan are far worse, some argue; at least in Vietnam there was a “decent interval” between the Peace Accords signed in January 1973 and the collapse of South Vietnam in April 1975.
All of this misses the point. Both situations prove the folly of starting a war in which there is little likelihood of achieving unconditional surrender. History screams this lesson, yet the United States, in its delusional belief in its vast military power, keeps falling into the trap of thinking that somehow this time will be different.
“Vietnamization” did not work any better than attempts by the United States to put the Afghan government and its military on their feet after the American invasion in October 2001. The chaotic end to both long wars was predestined from the first days of the conflicts. Unless an invading nation can impose its terms after an unconditional surrender, as seen at the end of the Second World War with Germany and Japan, the result will be the predictable loss of public support for the war and a withdrawal that leaves the situation in chaos.
Even the end of the First World War illustrates this problem. After four years of brutal war, the parties stopped fighting under the terms of an armistice brokered in large part by the United States. The war had been a draw; there were no clear winners and no clear losers. But the peace treaty put the blame on Germany, and a weak democratic government, known as the Weimar Republic, was installed. After a brief period of intense disorder and fighting in the streets of Germany, the Nazis rose to power, arguing that the peace settlement had been “a stab in the back,” and after a 20-year interregnum a second, even more catastrophic world war followed.
In Vietnam, the key architects of the end of the war, Richard Nixon and Henry Kissinger, knew that the peace they were forcing on South Vietnam was highly unlikely to last and that President Thieu of South Vietnam was doomed by the very terms Kissinger was negotiating in Paris with the North Vietnamese Special Advisor, Le Duc Tho.
Looking back, the terms seemed almost ridiculous: leave the Vietcong in place in the South, remove the Americans, hold supposed free elections to reunite the country, and allow material to continue to pass through the demilitarized zone (DMZ). Worse, the United States had to agree to pay billions of dollars to North Vietnam to help it rebuild after years of war (euphemistically referred to in the peace accords as “healing the wounds of war and postwar reconstruction”).
In one of the more instructive Nixon tapes, Nixon and Kissinger spoke just days into the new year of 1973 in the Oval Office about the inevitability of the fall of South Vietnam. Two options were under consideration: Option One (peace now) and Option Two (continued bombing for return of POWs).
“Look, let’s face it,” Nixon told Kissinger, “I think there is a very good chance that either one could well sink South Vietnam.”
Nixon said he had responsibilities “far beyond” Vietnam alone and that he could not allow “Vietnam to continue to blank out our vision with regard to the rest of the world.” He spoke of his efforts at détente with the Russians and his recent opening of China.
And while the options both carried great risks, Nixon knew the American public wanted out and even a bad result was better than continuing. He had trouble even verbalizing the concept given how painful the reality was to him. “In other words,” he told Kissinger, “there comes a point when, not defeat—because that isn’t really what we are talking about, fortunately, in a sense—but where an end which is not too satisfactory, in fact is unsatisfactory, is a hellava lot better than continuing. That’s really what it comes down to.”
Kissinger responded that if President Thieu’s government fell, it would be “his own fault.”
There is a unique humiliation brought down on a warring superpower like the United States when the enemy combatants know they can outlast public opinion in America. The North Vietnamese correctly read the results of the November 1972 national election in the U.S. Yes, President Nixon won in a landslide; but in Congress Democrats held onto their majorities in the House and the Senate. In fact, the Democrats gained two seats in the Senate—one going to a freshman from Delaware named Joe Biden.
After that election, peace was no longer “at hand,” as Kissinger erroneously predicted. Instead the North Vietnamese dug in. Congress threatened to cut off all aid, military and otherwise, if Nixon did not end the war.
In Paris, Kissinger was rudely greeted with a request that the United States set a unilateral deadline for withdrawal. “North Vietnam’s sole reciprocal duty,” Kissinger sarcastically wrote, “would be not to shoot at our men as they boarded their ships and aircraft to depart.”
It took a massive bombing campaign in December 1972, the infamous “Christmas Bombings,” to finally exert enough pressure to obtain a peace agreement that was really nothing more than a veiled surrender.
We do not know what the future holds for Afghanistan. Predictions range from total disaster to perhaps, in time, normalization. Neither scenario is likely. Vietnam teaches us something. After 50,000 American dead and millions killed across the region, nearly fifty years later, Vietnam is on no one’s “axis of evil” list. Americans vacation there; trade is brisk between the two nations.
Korea, the one outlier to the rule that unconditional surrender is the only sure way to predict an outcome in war, actually proves the point. The United States continues to pay an incredible price to maintain a military presence there. And all the United States and the world have to show for this continuing commitment is a virtual madman in charge of North Korea, who justifies an aggressive nuclear arms program with the bogeyman of a continuing threat of war following an armistice signed nearly seventy years ago.
The lessons of the First World War, Vietnam and Afghanistan are plain: don’t overestimate American military power and don’t expect clean exits where it was predictable from the start that a long-term war was not winnable.
It has now been twenty years since the terrorist attacks of September 11, 2001 plunged the nation into shock, consternation, grief, and fear. Amid the despair over the loss of nearly three thousand lives and the anxieties about further strikes, many questions arose over how such a devastating blow on American soil could have happened. The most important of them was also the most elusive: were the attacks preventable? After two decades of investigation, the answer remains an equivocal “perhaps.”
Presidents Bill Clinton and George W. Bush were well aware that the Islamist militant Osama bin Laden and his Al Qaeda network posed a serious threat to American interests and lives. Clinton compared him to the wealthy, ruthless villains in James Bond movies. To combat the dangers Al Qaeda posed, he and his advisers considered a wide range of military and diplomatic options, from kidnapping bin Laden to U.S. military intervention in Afghanistan. But the use of cruise missiles against Al Qaeda camps in Afghanistan in 1998 produced acutely disappointing results, other military alternatives seemed too risky or too likely to fail, and diplomatic initiatives proved fruitless.
During the transition after the 2000 presidential election, Clinton and other national security officials delivered stark warnings to the incoming Bush administration that bin Laden and his network were a “tremendous threat.” The immediacy of the problem was heightened by Al Qaeda’s bombing of the destroyer USS Cole in the harbor of Aden, Yemen in October 2000, which caused massive damage to the ship and claimed the lives of 17 crew members. Clinton and his advisers strongly recommended prompt consideration of the options they had weighed.
Bush and high-level national security officials were not greatly impressed. They regarded terrorism as an important but not top-priority problem. The president later revealed that he did not feel a “sense of urgency” about bin Laden and that his “blood was not … boiling.”
The Bush administration viewed Clinton’s campaign against Al Qaeda as weak and ineffective, and it was dismissive of the advice it received. Rather than drawing on the experiences of its predecessor, it embarked on the preparation of a “more comprehensive approach” that National Security Adviser Condoleezza Rice believed would be more successful. During the spring and summer of 2001, it worked at an unhurried pace, even in the face of dire warnings from the U.S. intelligence community that Al Qaeda was planning attacks that could be “spectacular” and “inflict mass casualties,” perhaps in the continental United States.
Eight months after he took office, Bush’s White House completed its comprehensive plan to combat Al Qaeda. The steps it included in the form of a National Security Presidential Directive (NSPD) were strikingly similar to the options the administration had inherited from Clinton. The final draft of the NSPD called for greater assistance to anti-Taliban groups in Afghanistan, diplomatic pressure on the Taliban to stop providing bin Laden safe haven, enhanced covert activities in Afghanistan, budget increases for counterterrorism, and, as a last resort, direct military intervention by the United States. This proposal was little different in its essentials from what the Clinton administration had outlined, and it offered no novel suggestions on how to carry out its objectives more successfully. Deputy Secretary of State Richard Armitage later commented that there was “stunning continuity” in the approaches of the two administrations.
The NSPD landed on Bush’s desk for signature on September 10, 2001.
The troubling question that arises is: could the calamities that occurred the following day have been prevented if the NSPD had been approved and issued earlier? There is no way of answering this question definitively; it is unavoidably counterfactual. Yet it needs to be considered. The 9/11 plot was not so foolproof that it could not have been foiled by greater anticipation and modest defensive measures.
The threat that Al Qaeda presented was well known in general terms within the national security apparatus of the federal government, even if specific information about possible attacks was missing. But responsible officials and agencies did not do enough to confront the problem. A presidential statement like the NSPD of September 10, if distributed sooner, could have called attention to the dangers of potential terrorists present in the United States. The CIA and the FBI failed to track the whereabouts or investigate the activities of two known Al Qaeda operatives who lived openly in California for about 20 months, took flying lessons, and participated in the hijackings on 9/11.
On July 5, 2001, high-level officials from seven agencies received a briefing from the National Security Council’s National Coordinator for Counterterrorism, Richard A. Clarke. He cited the dangers that Al Qaeda presented and the possibility that it “might try to hit us at home.” The agencies responsible for homeland security did not react in meaningful ways to the warning, largely because a terrorist strike seemed far less likely in the territorial United States than abroad. Perhaps an earlier NSPD, armed with the weight of presidential authority, would have sharpened the focus on the risks of a terrorist plot within America and galvanized security officials and agencies into effective action. Perhaps, for example, the Federal Aviation Administration would have tightened airline boarding procedures or made terrorists’ access to cockpits more difficult. The FBI instructed its field offices to make certain they were ready to collect evidence in the event of a terrorist assault, but it did not order them to take any special steps to prevent an attack from occurring.
Even if the “what-if” queries surrounding the failures that allowed 9/11 to happen cannot be answered, we can agree with Condoleezza Rice’s heartfelt admission in her memoirs: “I did everything I could. I was convinced of that intellectually. But, given the severity of what occurred, I clearly hadn’t done enough.” Earlier adoption of the NSPD might not have made a difference. But the haunting thought remains that it might have spared America the agony of 9/11.
Mural at Bagram Air Base. Photo by author.
“What we are seeing now is truly shocking and shows we missed something fundamental and systemic in our intel, military and diplomatic service over the decades — deeper than a single (horrible) decision. Something at the very core that unraveled 20 years in only days.” In the emotional week following Kabul’s fall to the Taliban, these were the words of a close colleague who spent years advising senior U.S. military officers and the Afghan government. Countless explanations emerged for the well-trained and well-equipped Afghan Army’s loss to a barbaric insurgent militia: an antagonistically factional, corrupt, and illiterate army plagued by poor morale, lacking any incentive to keep fighting, and long over-reliant on U.S. close air support. Poor governance by the power-hungry Afghan elites in Kabul, the same ones who consistently ignored military and security reforms, came under fresh scrutiny. Finger-pointing abounded in the District, identifying intelligence failures, the lack of a conditions-based withdrawal, the absence of a consistent strategy, and a military culture unable to admit failure.
After the U.S. dedicated two decades and trillions of dollars to defeating the Taliban, one must ask: why were the best efforts and superior military might of the world’s superpower insufficient? During my time as the Political-Military Advisor to U.S. commanders in Eastern Afghanistan, I witnessed the Taliban swiftly defeat the Afghan Army in the provincial capital of Kunduz in 2015. Even then, the problems plaguing the Afghan Army, such as high AWOL numbers and “ghost soldiers,” were evident, as were the Taliban’s capabilities. But there is much, much more to it than simple metrics.
Recent justifications and excuses fail to consider the central flawed assumption underpinning U.S. efforts from day one: that the majority of Afghans were as opposed to Taliban governance as the Coalition. But local anti-Taliban uprisings are no more indicative of an entire nation’s political leanings than a mob storming the U.S. Capitol. We looked through our Western lens, anticipating the population’s embrace of a new government, believing we would be the liberator of a nation from its fundamentalist oppressors.
The West’s perceptions of the Taliban’s human rights atrocities against the Afghan population are substantially graver than those of leading Afghan elders, specifically the elders who negotiated Jalalabad’s handover, which took place hours before Kabul’s fall and met with minimal resistance. How could this happen? Jalalabad is the fifth-largest city in the country and home of the 201st Corps, a top-performing Afghan unit, as well as the Ktah Khas, one of the most elite special forces units in the country, drawing on army, police, and intelligence agency personnel. Just as in 1996, when the Taliban were welcomed as fellow Pashtuns filling a void and quelling the warring warlord factions left by the Soviet departure years earlier, many cities now surrendered to them quickly. This is largely because the same deeply rooted, patriarchal, conservative cultural and social conditions have persisted there over the last quarter century.
Ridding Afghanistan of the Taliban is akin to eradicating components of a 1,700-year-old “code of life.” For decades, the militant group has been intricately woven into the fabric of society, including creating shadow governments where the state structure collapsed and facilitating transport for the country’s booming drug trade, presided over by provincial leaders and esteemed village elders. For many Afghans, rooted in a culture drastically different from ours, Taliban governance was simply not as barbaric as what we saw through our Western lens, or at least not worth sacrificing their lives to prevent.
Long before the Taliban emerged, there were tribal policies of gender segregation (purdah), represented by burkah-clad women. These policies, considered draconian in the West, dominated the countryside of this patriarchal society. Confined to caring for families and working the fields, women gave little thought to pursuing parliamentary office or higher education, where interaction with men might occur. The burkah provides a gender barrier, and purdah safeguards honor while ensuring women remain as intended: protected and invisible. While severity varied by region and socio-economic status, the Taliban’s barring of women from schools was not new but an imposition of conservative village norms onto city women in Kabul or Kandahar. Nor did this suddenly change with the Taliban’s fall in 2001; Afghanistan does not now have an entire generation of educated and liberated women. In reality, only 29% of girls and women age 15 and older are literate today, compared to 55% of males. Honor crimes likewise continued throughout the country over the last two decades; as recently as May 2020, 18-year-old Nazela was murdered in Badakhshan after running away with her boyfriend. As extreme as the militant group’s gender ideology is, it is not solely responsible for the human rights abuses and oppression of women. Accordingly, opposition to the Taliban’s social repression could never serve as the basis for widespread opposition to its rule.
Like most of the Taliban, 40% of Afghans are Pashtun, a fiercely independent people long ruled by Pashtunwali. This ancient tribal code and way of life is similar to the strict interpretation of Islamic law that the insurgency promulgates. The code ensured centuries of tribal survival, and its customary, or tribal, law dominates the country’s informal justice systems. The Taliban have historically represented a unique blend of Deobandi Islam, Saudi Wahhabism, and tribal Pashtun beliefs and values. The punishments they administer have been present for centuries and, at times under Shari’a, were less brutal than tribal justice such as stoning or honor killings. By incorporating their extremist interpretation of Shari’a alongside the tribal law that the rural population was already accustomed to, the Taliban minimized opposition throughout much of the country.
Following the 2021 Kabul takeover, President Biden accurately stated, “You can’t give them the will to fight.” Half a world away, in remote highland towns of the Mexican state of Chiapas like Pantelho, indigenous vigilantes armed with makeshift weapons have reportedly succeeded in driving out powerful drug cartels terrorizing their communities. In the absence of effective state security forces, advanced weapons, or training, they are armed only with the will to free their communities from militant oppression. Yet the Afghan Army, trained and equipped with over 22,000 Humvees, 51,000 tactical vehicles, 600,000 key weapons, and 200 rotary- and fixed-wing aircraft, opted to hand over the country. It is clear that for many in Afghanistan, the Taliban threat was not dire enough to warrant defiance or resistance.
This is not to deny that many Afghans did sacrifice their lives in battle over the years, including those desperately rushing to the Kabul airport to escape feared political retribution. A brilliant commander I served under recently wrote publicly, “I had served with some true Afghan heroes and had too many episodes of Afghan leaders and people who actually were genuine, who didn’t want a return of the Taliban. They wanted prosperity for their family and were humble. They were patriots in their own way. I now know and accept that these honorable, noble Afghans were actually very unrepresentative.” Whether these individuals were inspired to fight for their country, their families, or a better life remains unknown. And pockets of historical resistance remain, such as the Panjshir Valley, led by the son of famed Northern Alliance fighter Ahmad Shah Massoud, who continues his father’s legacy against the Taliban. These, unfortunately, are not the majority of a 332,000-strong military that was provided with the capability and capacity to win.
We wrongly assumed the majority of the Afghan Army, representative of the Afghan people, was as opposed to the Taliban as we in the West are, and especially that a predominantly male army would suddenly fight for freedoms for Afghan women, freedoms many never possessed. We assumed they’d wage war rather than allow rule by an extremist interpretation of a religion they already practice. We assumed their training and capabilities equaled motivation. But no amount of intelligence assets, military strategies, nation-building efforts, or financial assistance could force the Afghans to fight for a Western version of peace and prosperity. This is not a U.S. loss, but an Afghan one.
This real estate advertisement, touting “No Malaria, No Mosquitoes, No Negroes,” was one of many James Loewen discovered in Arkansas while writing an entry on “Sundown Towns” for the Online Encyclopedia of Arkansas, an entry that still informs Arkansans about their state’s history.
One of the things I quickly learned working as an editor at the online Encyclopedia of Arkansas, a project of the Central Arkansas Library System, is that some scholars simply do not consider writing encyclopedia entries a valuable use of their time. Writing encyclopedia entries either does not count sufficiently toward tenure or simply does not pay enough. Granted, there are plenty of exceptions, and numerous examples of generosity out there, but to edit an encyclopedia is to face regular rejection by many leaders in their respective fields. I learned that as a graduate student serving as an editorial assistant on Dr. William M. Clements’s The Greenwood Encyclopedia of World Folklore and Folklife, and the experience carried over into this new position.
So it was a surprise to find, one day in early 2006, a message from the nationally renowned Dr. James W. Loewen in my inbox, and an even greater surprise to see that he was offering to write an entry on sundown towns for the Encyclopedia of Arkansas, which had not yet even gone live. Of course, we eagerly agreed, and we made sure his entry was ready to go by the time we formally launched the site on May 2 of that year. Later that summer, Dr. Loewen actually came to our state to speak at Arkansas Governor’s School, and while he was in the area, he dropped by our library to chat with us and see if our archives contained anything relevant to sundown towns. Sure, he could be a bit brusque, but he was deadly earnest in his desire to find and expose these manifestations of American racism that had, for so long, gone unnoticed and unexplored by most scholars.
As it happened, I had taken a leave of absence from my Ph.D. program in 2005 to assume my job at the Encyclopedia of Arkansas, and I would soon be returning to my classwork and contemplating a dissertation topic. Talking with Jim, as I would come to know him, got the gears turning in my head, and I emailed him a few days later to see if he thought there was enough material left uncovered to warrant additional research on sundown towns in Arkansas. “Of course!” he responded. “I wish you had talked to me about this while I was there!”
Over the coming years, we would regularly correspond as I pursued this subject, and he even gave me his home phone number in case I needed it. The night before I defended my dissertation in 2010, we had a long conversation that was equal parts encouragement and equal parts him putting me through my paces. A few years later, I turned my dissertation into the book Racial Cleansing in Arkansas, 1883–1924: Politics, Land, Labor, and Criminality.
After that, I must admit, I lost touch with Jim for a while as I pursued research into other aspects of Arkansas history. But he got back in touch after I started writing articles for the History News Network (he particularly liked my piece “The Garbage Troop”). We would catch each other up on our latest projects, and he was always complimentary toward our efforts in Arkansas to account for this state’s history of racial cleansings and lynchings.
I am reminded of Jim’s generosity every time I look at the analytics for the Encyclopedia of Arkansas, for his entry tops our most visited webpages by a large margin. Just yesterday, I took a glance at our real-time numbers to find that, of the ninety or so people on the Encyclopedia of Arkansas at that moment, fully one-third of them were reading about sundown towns. I think it would have pleased him mightily to know that.
Rest in Peace, Jim.
Russian director Nikita Mikhalkov with Vladimir Putin on the set of “Burnt By the Sun 2,” 2008.
A recent viewing of Nikita Mikhalkov’s Sunstroke (2014) on Amazon Prime got me reflecting on this famous contemporary Russian director as well as Ivan Bunin (1870-1953), the author of the short story of the same title which inspired the film.
While the director is a great admirer of Russian President Vladimir Putin (a former Communist KGB officer), the earlier writer wrote Cursed Days, a scathing criticism of the Russian Communists during their revolution and civil war, and emigrated from Russia in January 1920. But what Mikhalkov and Bunin before him both possess (and Putin also shares) is a peculiar nostalgia for certain aspects of old tsarist Russia.
All three men are major figures in the last century and a quarter of Russian history. By 1900 Bunin had already written his first stories and poetry and was a friend of some of the most famous writers of his day, for example, Tolstoy, Chekhov, and Gorky. In 1933, he became the first Russian writer (although now an emigré living in France) to receive the Nobel Prize for Literature–first awarded in 1901. Primarily known for such short stories as “The Gentleman from San Francisco” and “Sunstroke,” Bunin also wrote novels (e.g., The Village) and poetry, as well as non-fiction like “About Chekhov.” Mikhalkov is the “most famous living Russian director.” His Burnt by the Sun won a U.S. Academy Award (1995) for Best Foreign Language Film, and he has directed more than 20 films. In addition, he has been active in various other realms–acting (about two dozen films), producing, heading the TriTe film studio and the Russian Filmmakers’ Union, and occasionally speaking out for what he likes to think of as an enlightened Russian conservatism. Putin, of course, has been Russia’s chief political leader since the resignation of Boris Yeltsin at the end of 1999.
Regarding Bunin’s nostalgia, an Italian scholar, D. Possamai, has written, “there is no doubt that the sense of melancholic nostalgia for a Russia that no longer exists is one of the key stylistic features in Bunin’s works.” The scholar writes that this is primarily true in his short fiction and cites his “Antonov Apples” (aka “The Scent of Apples”), which first appeared in 1900, as a prime example. Other scholars such as V. Terras (Bunin “is a nostalgic writer”) and M. Slonim (the emigré Bunin was a “seeker after things past and gone”) also highlight his nostalgia.
We can also sense it in “Sunstroke,” the 1925 story upon which the Mikhalkov film is based. A lieutenant stands on a summer night at the railing of a Volga River paddlewheel steamship; he suggests to a small, attractive woman he has met just hours before that they get off at the next stop. They do so and take a carriage, through a small provincial town, to an inn, where they make love in a large room with white curtains, still “smoldering from the day’s sun.” Their first kiss was so delirious “that for many years they remembered this moment, and neither one experienced anything like it for the rest of their lives.” But the next morning she insists on resuming her river journey while he stays behind. He agrees, but soon realizes “he would never see her again, this thought overwhelmed and astonished him. No, it can’t be! It was too bizarre, unnatural, unbelievable! And he felt such pain and such a sense of how useless the rest of his life would be without her, that he was gripped by horror, despair.” In the evening, he boards another Volga ship. Bunin’s final sentence reads, “The lieutenant sat on the deck under the awning feeling like he’d aged ten years.”
In his film, Mikhalkov interweaves this story, displayed in appealing colors, into another, presented in darker, somber tones, about hundreds of White Army officers who have surrendered to Red (communist) forces near the end of the Russian Civil War in 1920. The lieutenant of Bunin’s story is now one of these White officers held at an internment camp near the Black Sea, and his Volga adventure is displayed in flashbacks. Mikhalkov provides both sympathetic and unsympathetic characters on both sides of the conflict, and suggests what personal defects led the lieutenant and other Whites to lose the war. But his main conclusion seems to be that the civil war was a Russian tragedy. At the end of the film, the following words are flashed on the screen: “From 1918 to 1922, only in the South and the Crimea, Russia lost more than 8 million of its people.”
Regarding Mikhalkov’s nostalgia, Birgit Beumers has written in her Nikita Mikhalkov: Between Nostalgia and Nationalism (2005), “The overall argument of this volume is that Mikhalkov performs a shift from a nostalgia for a past that is openly constructed as a myth to a nostalgia for a past that pretends to be authentic. I contend that this move is a result of the collapse of the Soviet value system … and the director’s inability to face up to the reality of the 1990s, when he turned both past and present into a myth that he himself mistook for real and authentic.”
Beumers chronicles Mikhalkov’s development from his first full-length film, At Home Among Strangers (1974), up until Dark Eyes (1987), starring the Italian actor Marcello Mastroianni. She believes the first typifies his early approach to basing “his work not on the historical facts but on a myth.” But the second moves “away from the reflective or ironic nostalgia of his earlier films towards a restorative nostalgia that tends towards nationalistic revival.”
Mikhalkov’s most acclaimed film, at least outside Russia, was his award-winning Burnt by the Sun (1994). Like his later Sunstroke, it is complex and reflects positive and negative aspects of both tsarist and Soviet Russia. Unlike in Sunstroke, he is also one of the chief actors in this earlier work, portraying the communist officer Kotov. Beumers concludes that Kotov does not destroy “the beauty of Russia” inherited from the tsarist period, and quotes Mikhalkov as saying, “Yes, Bolshevism has not brought happiness to our country. But is it morally correct on the basis of this indisputable fact to put under doubt the life of entire generations only on the grounds that people happened to be born not in the best of times?”
Beumers adds that Mikhalkov “longs for a past when it was possible to believe in ideals,” but that he realizes that on the day portrayed in his film at the beginning of the Great Terror of 1936-38, “they are about to disappear… . By casting himself as a kind Bolshevik commander, who believes in the ideals of the Revolution and, furthermore, is a perfect father to his child, he offers himself in the role of a leader of the people, but one who would return to the roots of socialism … . This confusion of fiction and reality leads to the portrayal of a political Utopia, which Mikhalkov would gradually mistake for an authentic ideal.”
Mikhalkov’s next important film was The Barber of Siberia (1998), which was a box-office hit in Russia. It appeared the same year the remains of the last tsar, Nicholas II, and his family were reburied in St. Petersburg, and it is nostalgic toward the reign (1881-1894) of Nicholas’s father, the reactionary tsar Alexander III, who is played by none other than Mikhalkov himself. Through the character of an American woman, Jane Callaghan (English actress Julia Ormond), the film symbolizes the moral bankruptcy of the West and contrasts it with the moral superiority of Russians like Alexander III.
Although Mikhalkov directed other films between The Barber of Siberia and his 2014 Sunstroke, including 12, a remake of 12 Angry Men (1957), a more significant work of his (at least for this essay) was his 2010 political manifesto, 63 pages that reflect his nostalgic nationalism and conservatism. It praises “law and order,” state power and loyalty to it, and the Russian Orthodox Church. It also considers Russia-Eurasia “the geopolitical and sacred center of the world.” Although it mentions the Stalinist terror, Mikhalkov considers the overall achievements of the Soviet era more important.
Mikhalkov’s views are very close to Putin’s (for the similarities see my two 2015 essays, “Vladimir Putin: History Man?” and “Is Vladimir Putin an Ideologue, Idealist, or Opportunist?”). Since 2015, two further indications of Putin’s nostalgic nationalism have been his comments during the 2017 unveiling of a Crimean monument to Alexander III–the same tsar Mikhalkov played in The Barber of Siberia–and his long 2021 essay “On the Historical Unity of Russians and Ukrainians.”
Speaking on Alexander III in Crimea (annexed by Russia in 2014 after earlier being part of Ukraine), Putin said:
We are unveiling a monument to Alexander III, an outstanding statesman and patriot, a man of stamina, courage and unwavering will.
He always felt a tremendous personal responsibility for the country’s destiny: he fought for Russia in battlefields, and after he became the ruler, he did everything possible for the progress and strengthening of the nation, to protect it from turmoil, internal and external threats… .
The reign of Alexander III was called the age of national revival, a true uplift of Russian art, painting, literature, music, education and science, the time of returning to our roots and historical heritage… .
He believed that a strong, sovereign and independent state should rely not only on its economic and military power but also on traditions; that it is crucial for a great nation to preserve its identity whereas any movement forward is impossible without respect for one’s own history, culture and spiritual values.
In his essay “On the Historical Unity of Russians and Ukrainians,” Putin stressed all in the past that had united the two peoples, and downplayed all that divided them (e.g., “Ukraine’s ruling circles decided to justify their country’s independence through the denial of its past… . They began to mythologize and rewrite history, edit out everything that united us, and refer to the period when Ukraine was part of the Russian Empire and the Soviet Union as an occupation. The common tragedy of collectivization and famine of the early 1930s was portrayed as the genocide of the Ukrainian people.”)
In her The Future of Nostalgia (2001), Svetlana Boym wrote that “the nostalgic desires to obliterate history and turn it into private or collective mythology.” In the case of Mikhalkov and Putin (less so with Bunin) their type of nostalgia does not so much “obliterate history,” but attempts to mythologize it to justify present political goals.
In doing so they are far from unique among modern cultural and political leaders. In his perceptive portrait of Ronald Reagan, Reagan’s America: Innocents at Home (1998), Garry Wills described the former actor who became president as the “sincerest claimant to a heritage that never existed, a perfect blend of an authentic America he grew up in and of that America’s own fables about its past.” Similarly, Donald Trump’s slogan “Make America Great Again” attempted to appeal to a nostalgia grounded more on myth than historical reality. Beyond Russia and the USA, as a 2019 Foreign Policy essay noted, “The problem is that the world is now primarily dealing with a toxic restorative nostalgia… . In many ways, nostalgic nationalism is the political malaise of our time.”
From Hero of Two Worlds: The Marquis de Lafayette in the Age of Revolution by Mike Duncan, copyright © 2021. Reprinted by permission of PublicAffairs, an imprint of Hachette Book Group, Inc.
As soon as he received the letter from President Monroe, Lafayette began arranging his return to America.
In July 1824, Lafayette, his son Georges, and his secretary Levasseur traveled to Le Havre, France, to meet their waiting ship. In France, local leaders couldn’t wait for Lafayette to leave. In America, they couldn’t wait for him to arrive.
Lafayette and the party arrived in New York on August 15, 1824. Newspapers publicized the imminent arrival of the Hero of Two Worlds, so when he reached Manhattan, boats of every shape and size packed the harbor. When he disembarked, an honor guard of aging veterans of the American War of Independence saluted the last surviving major general of the Continental Army. Lafayette had not set foot on American soil for forty years and already he could tell he was going to enjoy himself. It was nice to be loved again.
Here he was a living legend—a pristine icon of the most glorious days of the Revolution. He found himself as celebrated in Philadelphia as New Orleans; Vermont as much as South Carolina; rural hamlets as well as big cities. Lafayette belonged to everyone, and wherever he went he was described as the Nation’s Guest. Whether intended or not, his very presence reminded local and state leaders they were a single nation with a shared past and collective future. Lafayette certainly never let them forget it.
Lafayette’s admirer and friend Fanny Wright, who believed the United States represented the vanguard of human progress, traveled from England to meet up with the party. As feared, the presence of the young woman was socially awkward. Eleanor Parke Custis Lewis, Washington’s step-granddaughter, who also joined Lafayette’s party in New York, did not take kindly to the unaccompanied young woman.
Part of the reason for the tension with Nelly Lewis was Fanny Wright’s staunch abolitionism, while the Washingtons remained committed slave-owners. Lafayette was caught in between his own abolitionist principles and the desire for social harmony. Though he never publicly embarrassed his slave-owning friends in America, Lafayette also never missed an opportunity to demonstrate his own commitment to emancipation. Believing the universal education of the African population of paramount importance to successful emancipation, Lafayette made a point of visiting the African Free School, an academy established by the New York Manumission Society to give equal education to hundreds of black pupils.
Lafayette was greeted by an address from a bright eleven-year-old student named James McCune Smith: “Here, sir, you behold hundreds of the poor children of Africa sharing with those of a lighter hue in the blessings of education; and, while it will be our pleasure to remember the great deeds you have done for America, it will be our delight also to cherish the memory of General La Fayette as a friend to African emancipation, and as a member of this institution.” Young James McCune Smith would grow up to become the first African American to hold a medical degree, a prominent antebellum abolitionist, and mentor of Frederick Douglass.
As they moved into the southern states, Lafayette’s company confronted the unavoidable contradiction of American liberty and American slavery. Levasseur, as much an abolitionist as Lafayette and Wright, was not comfortable with the things he now saw. “When we have examined the truly great and liberal institutions of the United States with some attention,” he wrote, “the soul feels suddenly chilled and the imagination alarmed, in learning that at many points of this vast republic the horrible principle of slavery still reigns with all its sad and monstrous consequences.” Since Levasseur published his journals under the general editorial direction of Lafayette, whose political views they were meant to promote, we can take Levasseur’s observations as bearing Lafayette’s stamp of approval.
Reflecting on his travels, Levasseur remained hopeful emancipation was inevitable, partly because everyone agreed slavery was terrible. “For myself,” he wrote, “who have traversed the 24 states of the union, and in the course of a year have had more than one opportunity of hearing long and keen discussions upon this subject, I declare that I never have found but a single person who seriously defended this principle. This was a young man whose head, sufficiently imperfect in its organization, was filled with confused and ridiculous notions relative to Roman history, and appeared to be completely ignorant of the history of his own country; it would be waste of time to repeat here his crude and ignorant tirade.”
Lafayette and Levasseur shared a concern this racist ignorance threatened the standing of the United States in the world for its on-going violations of fundamental human rights: “If slave owners do not endeavor to instruct the children of the blacks, to prepare them for liberty; if the legislatures of the southern states do not fix upon some period, near or remote, when slavery shall cease, that part of the union will be for a still longer time exposed to the merited reproach of outraging the sacred principle contained in the first article of the Declaration of Rights; that all men are born free and equal.” And as Lafayette had written in his own declaration of rights, violation of this sacred principle always left open the right of the victims of tyranny to exercise another fundamental right: resistance to oppression.
The next stop on the official itinerary was a visit to Thomas Jefferson at his slave plantation, Monticello. Lafayette and Jefferson had not seen each other since 1789, during the hopeful days after the fall of the Bastille. Lafayette’s party spent a week at Monticello, and Jefferson escorted them to Charlottesville to tour his pride and joy: The University of Virginia. James Madison even made an appearance.
Levasseur noted that while Lafayette stayed with his Virginian friends— all of them members of the plantation slave aristocracy—Lafayette did not shy away from bringing up emancipation: “Lafayette, who though perfectly understanding the disagreeable situation of American slaveholders, and respecting generally the motives which prevent them from more rapidly advancing in the definitive emancipation of the blacks, never missed an opportunity to defend the right which all men without exception have to liberty, broached among the friends of Mr. Madison the question of slavery. It was approached and discussed by them frankly. It appears to me, that slavery cannot exist a long time in Virginia, because all enlightened men condemn the principle of it, and when public opinion condemns a principle, its consequences cannot long continue to subsist.” Levasseur, however, was far too optimistic about the noble sentiments of the Virginians and the future prospects of slavery. Condemning something in principle has little bearing on whether it is allowed to persist in reality.
Lafayette made it back to Boston just in time for the Bunker Hill celebrations on June 17, 1825. After being escorted to the site in a carriage drawn by six white horses, he laid the cornerstone of the Bunker Hill monument. After a day of mutually admiring speechmaking, Lafayette requested a bag full of the dirt from the excavation site so he could take it home with him and always keep soil from the birthplace of American liberty.
Before Lafayette departed once and for all, George Washington Parke Custis conceived of sending a present to another liberty-loving American: Simón Bolívar. Bolívar had recently completed a series of campaigns ending Spanish rule in Venezuela and Colombia and was now campaigning in Peru. Citizens of the United States cheered the exploits of the Liberator, and Lafayette agreed Bolívar was the Washington of South America. The gift package included a pair of Washington’s pistols, a portrait of the late president, and a letter from Lafayette. Lafayette offered the President Liberator “personal congratulations from a veteran of the common cause,” and said of the enclosed gifts, “I am happy to think that of all the existing men, and even of all the men of history, General Bolívar is the one to whom my paternal friend would have preferred to offer them. What more can I say to the great citizen whom South America hailed by the name of Liberator, a name confirmed by both worlds, and who, endowed with an influence equal to his disinterestedness, carries in his heart the love of liberty without any exception and the republic without any alloy?” Lafayette departed for home proud the great work of liberty continued its inexorable march through the Americas.
On September 8, 1825, a new naval frigate recently christened the Brandywine in Lafayette’s honor set sail for Europe with Lafayette aboard. He never returned to the United States. And while he sailed away content in the knowledge the legacy of his past glories would live forever in the New World, he hoped a few final future glories still lay ahead in the Old World.
On August 5, 1912, Frieda von Richthofen, a thirty-three-year-old German aristocrat and married mother of three, awoke to the sound of rain. It was four thirty in the morning. Quivering strips of pearly light seeped through the sides of the shutters. She opened her eyes, dimly aware of her young lover strapping up their rucksacks and humming beneath his breath. At last she was about to embark on a real adventure, the sort of escapade she’d dreamt of for the last ten years. It had been a long, dry decade in which her emotionally restrained life in a comfortable suburban house on the edge of industrial Nottingham had almost driven her mad.
Her lover was the fledgling writer D. H. Lawrence, a penniless coal miner’s son whom she’d met four months earlier. The pair of them had been poring over maps and guidebooks for days, plotting a route that would take them through “the Bavarian uplands and foothills,” over the Austrian Tyrol, across the Jaufen Pass to Bolzano, and down to the vast lakes of Northern Italy.
Later, this six-week walk would become much mythologised as their “elopement.” But the evidence suggests this was less an elopement than a feverish bid for freedom and an inarticulate yearning for renewal. On the first misty, sodden step of that six-week walk, Frieda began the process of reinventing herself as a woman without children, scissoring herself free from the restrictions and responsibilities that accompanied being a mother in Edwardian England. Almost overnight she transformed herself from a fashionably dressed and hatted mother and manager of multiple household staff to someone else entirely: a woman who put comfort before fashion, who took responsibility for her own cooking and laundry, who swapped warm, soapy baths for ice-cold pools and the latest flushing lavatory for speedy squats among the bushes.
Frieda’s isolation was exaggerated by her choice of paramour. Lawrence spoke with a Derbyshire accent. He dressed in cheap clothes and came from a rough mining village. He was also six years younger than she was, at a time when women were expected to marry older men. To leave children, a comfortable home, and a successful husband broke every taboo. To leave them for a man like this was unthinkable.
In 1912, this was not how women behaved. Least of all mothers.
Frieda and Lawrence put on their matching Burberry raincoats. Frieda donned a straw hat with a red velvet ribbon round the brim. Lawrence wore a battered panama. They squeezed a spirit stove into a canvas rucksack, planning to cook their supper at the side of the road. They had twenty-three pounds between them, barely enough to reach Italy.
The pair chose a punishing route that would fully occupy them with its steep climbs and its perilous twists and turns. Neither of them had walked or climbed in mountains before, neither was a skilled orienteer, and neither was particularly fit. Lawrence found the mountains bleak and terrifying, seeing there the eternal wrangle between life and death. Later, he made full use of his Alpine terror in Women in Love, sending Gerald Crich to a lonely death in the barren glaciers of the Alps. Frieda, however, thought it was “all very wonderful.”
On this walk, the pair averaged ten miles a day, much of it uphill and strenuous, much of it cold, always with their packs on their backs. On some days they walked farther still. Only when the weather was particularly hostile did they allow themselves the luxury of catching a train to the next town.
Even as Frieda and Lawrence celebrated their new-found freedom—from their past lives and from the passionless provincialism of pious England—they were acutely aware of how hemmed in they were. Frieda’s presence had a profound effect on Lawrence, sparking a creative surge that resulted in dozens of short stories, poems, and essays, as well as his three acknowledged masterpieces: Sons and Lovers, The Rainbow, and Women in Love. But as he led Frieda farther and farther away from her previous life, he began to see how necessary she was to him. Not only for his happiness but for the continued blooming of his genius. Many of his poems are testimony to this feeling of necessity, a feeling that occasionally tips into a terrified dependency:
The burden of self-accomplishment!
The charge of fulfilment!
And God, that she is necessary!
Necessary, and I have no choice!
Frieda discovered that her new-found liberty was similarly compromised. She left her husband, children, and friends to discover her own mind, to be freely herself. But freedom is inﬁnitely more complicated than simply casting oﬀ the things we believe are constraining us. Hurting others in the pursuit of freedom and self-determination brought its own struts and bars, its own weight of guilt. Frieda never shared the great weight of her guilt. She couldn’t. Lawrence wouldn’t allow it. His friends joined forces with him, insisting that she put up or shut up, that her role was to foster his genius. At any price.
After six weeks, Frieda and Lawrence arrived in Riva, then an Austrian garrison town on Lake Garda. Vigorous ascents over steep mountain passes in snow and icy winds followed by nights in lice-ridden Gasthäuser had left them looking like “two tramps with rucksacks.” Within days a trunk of cast-oﬀ clothes from Frieda’s glamorous younger sister had arrived, swiftly followed by an advance of ﬁfty pounds for Sons and Lovers from Lawrence’s publisher. In a big feathered hat and a sequinned Paquin gown, an exuberantly overdressed Frieda and a shabbier Lawrence sauntered round the lake, celebrating their return to civilisation and rubbing shoulders with uniformed army oﬃcers and elegantly dressed women.
So why did Frieda devote less than twenty-ﬁve lines of her memoir to this pivotal time in her life? She writes in the same book: “I wanted to keep it secret, all to myself.”
This journey was so vivid and intense, so personal, that neither Frieda nor Lawrence wanted to enclose it or share it. When Lawrence ﬁctionalized a version of it in Mr. Noon, he never sought publication (unusually for him as they invariably needed the money). Instead, he consigned it to a drawer. Nor, after his death, did Frieda try and have Mr. Noon published—despite publishing other writings Lawrence had chosen to keep private. Mr. Noon stayed unpublished until 1984.
Twenty months after their Alpine hike, and at his insistence, Frieda married an increasingly restive and cantankerous Lawrence, arguably exchanging one form of entrapment for another. It wasn’t until his death in 1930 that she became free to live as, and where, she wanted. In a bold attempt to ﬁnally assert her own identity she used the name “Frieda Lawrence geb. Freiin von Richthofen” on the opening page of her memoir. That, I suspect, was her deﬁnitive moment of freedom.
After Lawrence died, she lived in the same ranch (“wild and far away from everything”) in Taos, New Mexico, for much of her remaining twenty-six years. Here she cultivated a close group of friends, a surrogate for the family she’d sacriﬁced. And she walked. Her memoir is peppered with references to walking: “We were out of doors most of the day,” she says, on “long walks.” Her ﬁrst outing with Lawrence, shortly after they met, had been “a long walk through the early spring woods and ﬁelds” of Derbyshire with her two young daughters. It was on this walk that she discovered she’d fallen in love with Lawrence. Later she wrote of “delicious female walks” with Katherine Mansﬁeld, walks through Italian olive groves, walks into the jungles of Ceylon, walks along the Australian coast, walks through the canyons of New Mexico, or simply strolls among “the early almond blossoms pink and white, the asphodels, the wild narcissi and anemones.” Frieda walked in the countryside for the rest of her life.
But the pivotal walk of her life—the six-week walk she skirted in twenty-five lines—was the most significant. From here, Frieda emerged as herself, as the free woman she had always longed to be—dressing in scarlet pinafores and emerald stockings, swimming naked, making love en plein air, walking as she wished. She had also become the free woman Lawrence needed for his fiction. He made full use of her in his writing, continually remoulding her, most famously as Ursula in Women in Love, and Connie in Lady Chatterley’s Lover. His novels shaped history, but Frieda was the catalyst.
Excerpted from WINDSWEPT: Walking the Paths of Trailblazing Women by Annabel Abbs. Copyright (c) 2021 by Annabel Abbs. Published by Tin House.
Christine Kinealy, professor of history and the founding director of Ireland’s Great Hunger Institute at Quinnipiac University in Connecticut, has created an impressive academic juggernaut for the study of the mid-19th century Great Irish Famine and for bringing the famine to the attention of a broader public. Her more recent published work includes The Bad Times. An Drochshaol (2015), a graphic novel for young people, developed with John Walsh, Private Charity to Ireland during the Great Hunger: The Kindness of Strangers (2013), Daniel O’Connell and Anti-Slavery: The Saddest People the Sun Sees (2011), and War and Peace: Ireland Since the 1960s (2010). She recently released a collection of essays, prepared with Jason King and Gerard Moran, Heroes of Ireland’s Great Hunger (2021, co-published by Quinnipiac University Press and Cork University Press). In Ireland, it is available through Cork University Press. In the United States, it is available in paperback on Amazon.
Co-editor Jason King is the academic coordinator for the Irish Heritage Trust and the National Famine Museum in Strokestown, County Roscommon, Ireland. Co-editor Gerard Moran is a historian and senior researcher based at the National University of Ireland – Galway.
Sections in Heroes of Ireland’s Great Hunger include “The Kindness of Strangers,” with chapters on Quaker philanthropy, an exiled Polish Count who distributed emergency famine relief, and an American sea captain who arranged food shipments to Ireland; “Women’s Agency,” with three chapters, each on a woman who “rolled up her linen sleeves” to aid the poor; “Medical Heroes,” with five doctors who risked their own lives to aid the Irish; and sections on the role of religious orders in providing relief and on Irish leadership. Final reflections include a chapter on “The Choctaw Gift.” The Choctaw were an impoverished Native American tribe who suffered through the Trail of Tears displacement to the Oklahoma Territory. They donated more than they could afford to Irish Famine Relief because they understood the hardship of oppression and going without.
Kinealy, King, and Moran managed to enlist some of the top scholars in the field of Irish Studies from both sides of the Atlantic to document how individuals and groups made famine relief a priority, despite official government reticence and refusal in Great Britain and the United States. In her work, Kinealy continually draws connections between the Great Irish Famine and current issues, using the famine as a starting point for addressing problems in the world today. The introduction to the book opens with a discussion of the global response to the COVID-19 pandemic.
Readers meet powerful individuals who deserve a special place in the history books. James Hack Tuke, a Quaker, not only provided and distributed relief, but attempted to address the underlying issues that left Ireland, a British colony, impoverished. His reports from famine-afflicted areas highlighted pre-existing social conditions caused by poverty, not just famine-related hunger; they also challenged the stereotype popularized in the British press that the Irish were lazy and stressed the compassion the Irish showed for their neighbors. While working with famine refugees in his British hometown of York, Tuke became ill with typhus, also known as “Famine Fever,” a disease that left him with debilitating after-effects for the rest of his life. After the famine subsided in the 1850s, Tuke continued his campaign for Irish independence from the British yoke.
Count Pawl de Strzelecki of Poland was an adopted British citizen who documented the impact of the Great Irish Famine so that British authorities could not ignore what was taking place, and who spoke out against the inadequacy of British relief efforts. Strzelecki was a geographer, geologist, mineralogist, and explorer. As an agent for the British Association for the Relief of Distress in Ireland and the Highlands of Scotland, he submitted reports on conditions in Counties Donegal, Mayo, and Sligo. The reports challenged the British government’s effort to minimize the impact of the famine on the Irish people. On a personal level, Strzelecki provided direct aid to impoverished Irish children and lobbied before Parliamentary committees for increased governmental and institutional attention to their plight. He later worked to provide assistance to women who were emigrating to Australia.
The chapter on Asenath Nicholson was written by my colleague at Hofstra University, Maureen Murphy, Director of the New York State Great Irish Famine Curriculum and author of Compassionate Stranger: Asenath Nicholson and the Great Irish Famine. Nicholson was an American Protestant evangelical who travelled the Irish countryside delivering relief packets and sending reports home to the United States in an effort to raise more money. While she distributed Bibles, she did not make participation in Protestant services a condition of aid, unlike a number of British aid workers. Murphy describes Nicholson as a “woman who was ahead of her time – a vegetarian, a teetotaler, a pacifist, and an outdoor exercise enthusiast” (96). Nicholson’s achievements were largely ignored by a male-dominated world until brought to public attention by Murphy’s work.
Daniel Donovan was a workhouse medical doctor in Skibbereen, perhaps the hardest hit town in County Cork and in all of Ireland. I consider him one of the most significant heroes included in the book. As epidemic diseases devoured the countryside, Dr. Dan, as he was known locally, treated the poor and documented conditions for the outside world. Donovan’s diary reporting on the impact of the famine in Skibbereen was published in 1846 as Distress in West Carberry – Diary of a Dispensary Doctor, and sections were reprinted in a number of newspapers in Ireland and England, including The Illustrated London News. Dr. Dan, who became a major international medical commentator on famine, fever, and cholera, continued to serve the people of Skibbereen until his death in 1877.
I do have one area of disagreement with the editors. I would have included a section on the leaders of Young Ireland and the 1848 rebellion against British rule, including William Smith O’Brien, John Blake Dillon, John Mitchel, and Thomas Meagher. Rebellion, as well as relief, was an important and heroic response to the Great Irish Famine.
At Hofstra University in suburban New York, I teach a course on the history of the Great Irish Famine and its significance for understanding the world today. Too often courses like these focus on horrors and I look forward to using this book in class to shift the focus, at least in part, to heroes.
In the past I have written extensively for History News Network about my extended Vietnamese family, but I have never written about their settlement in America after the war and the new roots they established in the Washington, D.C. Metro area. Because of the politically contentious prospect of resettling large numbers of Afghan refugees in the United States today, here is an essay about The Family, as we in our family call them, and some truths about the trials and tribulations of helping refugees enter and start a new life in America. In retrospect, 1975 was not today. Afghans are not Vietnamese. Though the end of each war was chaotic, the circumstances of each war and how they ended are vastly different. Not everything I have written about the Vietnamese applies to Afghanistan, but there are enough similarities to bridge what took place 46 years ago and what is happening today.
To begin. It seems as if it was only yesterday that I woke up on a weekday morning in the spring of 1975 with 21 Vietnamese refugees living in the large, finished basement of my sprawling house in Rockville Centre, N.Y. In reality, I admit, it was no surprise. They were my wife Josephine’s mother and father. Her three brothers. Wives. Children. Cousins. They were there because I brought them there—they had no other place to go. The war had finally ended in Vietnam. They had become America’s newest refugees.
Without doubt, they missed their home and their former way of life. But as far as I could tell, they bore no permanent scars from having left Saigon in a hurry, and in time they made a success of their new lives. Many years later some of The Family returned to Saigon, by then renamed Ho Chi Minh City, to witness a life they were glad to have missed while they were becoming successful and entrenched in Maryland and Virginia. To help them in America, my mother-in-law had carried with her a half million Vietnamese piasters, which immediately lost their value after the country fell. I still have the bills, as crisp as they were when first obtained.
I helped get them out of Saigon as the country was falling to the North Vietnamese and Viet Cong. My Vietnamese wife, Josephine, and I were able to move them from the American government camps, usually US Army bases, where they had been patiently waiting for the freedom they knew they would have once they walked unimpeded on American soil. They wanted freedom badly enough to flee their homes in Saigon as fast as they could as the country collapsed into near chaos caused by marauding forces led by Hanoi in North Vietnam. Escape was difficult and harrowing, but it was an easy decision for my family to travel thousands of miles into the unknown when the war ended. If, after escaping, they had any doubts about what they had done, they never expressed them to me.
Now that they were free, taking the next step into their new life was living in my basement. I suddenly had the role of the all-knowing big uncle whose job would be to resettle his family into a healthy, changed and meaningful life. But there were challenges to overcome that had no immediate answers or solutions. Though they wore blue jeans and sneakers of every make and brand, they were not us, meaning they were foreigners without a hint of what an American was back then, at least on the surface. My family was a curious mix of Catholic, Buddhist, and I, my wife and children, Jewish. That mixture in one family was usually enough to create problems in a Western household, but it meant nothing to these Vietnamese to have three different religions under one roof. So, what some may have considered a major problem would not get in the way of the family’s resettlement.
My family had lived in well-built houses in central Saigon. But they did not understand the modern conveniences as we knew them in America. They cooked with charcoal on brick stoves and in brick ovens. In my home we had the usual gas stove, a microwave and a tabletop oven for quick meals, toast and bagels or heating a pastry or muffin. We had a huge refrigerator. They knew almost nothing of refrigeration because they had only a mini model bought for them in the PX during my days in Saigon. We had to teach them how to use the stove and how to use the refrigerator. For instance, close the door after using the fridge so the food stays cold and fresh. In Saigon, there was no cold to protect the food. A pot of rice was always out and open in the kitchen all day as family members filled a bowl of rice when hungry. Buying rice in 50-pound bags, we duplicated that practice with great success to give our guests a taste of their former home. To teach them how to use a stove meant telling them to turn off the gas when not in use, how to adjust the burners, and not to use matches to turn on the gas. They loved the washer and dryer we had in the basement and they learned to use them effortlessly and frequently. With 21 people living together, washing and drying the few items they owned was necessary and important. Most did not speak very much English. I spoke almost no Vietnamese. That made communication and instruction much more difficult, but with the help of my wife, who was a good linguist, we managed to muddle through because of the refugees’ desire to get everything right and to fit in as fast as they could.
Although we were new to the community, the people in Rockville Centre were very generous. There were many stories in Newsday about what we were doing. Every morning we found clothing and food on our doorstep in a meaningful effort to help us care for the new arrivals. I am not sure the same attitude would prevail today.
Soon, though, because of my work, we were on our way to Maryland and my new job as Washington producer for the Today Show. The 21 settled into my new home in Potomac, Maryland. My industrious, hard-working and determined wife, Josephine, drawing on a few of my contacts, found them jobs as landscapers and in hotels and offices, where they cleaned rooms and started working in kitchens. They eagerly accepted their new jobs and worked hard. They earned their own money and experienced how people lived outside the confines of my home. With money in their pockets, growing bank accounts, and more of my and Josephine’s continued tutorials, in less than a year they were ready to rent their own homes and truly begin independent lives.
Granted, this is 46 years later and there have been enormous changes in the world. Forty-six years ago, smartphones did not exist. The Internet did not exist. Laptop computers did not exist. Social media did not exist. Movements fighting for freedom and equality were small, often underground and not very influential. Today, all those entities are real, powerful difference makers with which the new refugees are familiar. The differences, however, are still great. Cultures often built around deeply conservative religious beliefs play a bigger role in societies that are very dissimilar to even our diverse way of life. Our customs are not better than those of the average Afghan refugee. They are different. We are not the same, and we must recognize that, adjust, and learn to live with the differences the new refugees bring to our shores, just as they must learn to adjust to us. Experience shows it isn’t easy. But beginning with shared humanity, it is possible.
“Although the Black community of New Guinea has passed into history, its mark on the landscape remains, a reminder that Nantucket was once a place of working-class ingenuity and Black daring.”
The disjointed and haphazard global response to the COVID pandemic bodes poorly for the world’s capacity for coordinated action to face inevitable crises in the near future. The problem isn’t a lack of means but a lack of commitment to collective action.
The UN Refugee Convention does not impose any real obligations on any nation to offer asylum. The United States must lead the way in recognition of the deeply interconnected world created in large part by American power.
“Structural problems need structural solutions. Don’t give charity to Louisiana because it’s unique. Demand that Congress take meaningful action, because Louisiana is not unique, and you may be next.”
White American Christians have embraced aggressive patriarchy as access to social and economic power has become more concentrated in fewer hands.
Widows and surviving children of Black veterans of the Civil War used their status as pensioners to claim belonging in the nation, but authorities frequently allowed notions of respectability rooted in white supremacy to undermine them.
When universities bend to political pressure and adopt “personal responsibility” policies for vaccination, masking, and social distancing they give agency to the irresponsible and take it away from those who are actively working to protect public health.
Buying antiquities without due diligence into their provenance feeds a black market for looted archaeological objects.
American workers, especially women, aren’t being lazy. They’re taking part in an unrecognized general strike against low wages, inadequate childcare, and dangerous workplaces made more dangerous by COVID.
Ed Asner fought for the representation of small-time actors in the Screen Actors Guild and protested American support for right-wing autocrats in Central America.
John Bigler was not a gold miner, but he was part of the forty-niner generation. He was one of many Americans who went to California to make his fortune by doing business with gold miners. In that sense he was also a gold seeker.
Bigler had trained as a lawyer, but when he arrived in Sacramento from Ohio in 1849 there were no law positions. Bigler chopped wood and unloaded freight at river docks and then decided to try his hand at politics. Why not? Everything was new, wide open. Bigler won his first race for state assembly in California Territory’s first general election. He rose quickly in state politics, and in 1851 he received his party’s nomination for governor and won that election.
When Bigler took office in January 1852, California was swirling with debate over how to best harness the energies of the gold rush and develop the state’s economy, especially in agriculture. Some imagined that California would anchor a Pacific coast empire, even one that would stretch from “Alaska to Chili.”
Some boosters, like California’s pro-slavery U.S. senator William Gwin, advocated for the use of enslaved Blacks and native Hawaiians. Others envisioned the use of Chinese labor. Chinese had begun emigrating to California in 1850 as independent gold-mining prospectors and merchants. To some observers, the Chinese appeared to be a potential labor supply for the state. At the time there was no railroad connection over the Rockies, making the transportation of white labor from the east prohibitively expensive. The Presbyterian Reverend William Speer, a former China missionary who established the first Christian mission in San Francisco’s budding Chinese quarter, imagined Chinese labor in California as part of a grand vision of Sino-American unity.
Others advocated for the use of Chinese indentured labor. In February 1852, George Tingley and Archibald Peachy introduced “cooley bills” into the state legislature. Tingley and Peachy were no doubt aware of the “coolie trade” that was supplying indentured contract workers from China and India to Caribbean plantation colonies in the wake of the abolition of slavery. Now that the gold rush had opened transpacific trade, Chinese were also going to Hawai’i as contracted sugar plantation workers, a development that encouraged the ambitions of men like Gwin, Tingley, and Peachy. The bills, aimed principally at agricultural workers, proposed that the state guarantee labor contracts made abroad between Americans and foreign workers for work in the United States. Peachy’s bill set a minimum contract at five years and Tingley’s at ten years, which exceeded anything in the Caribbean or elsewhere, and a minimum wage of fifty dollars a year, a pathetically low amount.
Opponents of the California coolie bills were not necessarily against Chinese immigration in general. The Daily Alta California supported free immigration, which in principle it believed applied to all, regardless of origin. The Alta optimistically predicted that California’s immigrants would in time “vote at the same polls, study at the same schools and bow to the same Altar as our own countrymen”—including the “China Boys,” the “Don from Santa Fe and the Kanaker [sic] from Hawaii.” The Alta’s liberality bespoke the Free Soil politics that came from the antebellum North and that in this case assumed a multiracial Pacific perspective. By the same token, the Alta opposed bringing any system of servitude to California, and therefore it opposed the coolie bills before the legislature.
The coolie bills failed to pass the state legislature. But the distinction made by the Alta between free and indentured immigrants quickly blurred. Bigler himself was largely to blame for the obfuscation. On April 23 Bigler issued a special message to the legislature on the Chinese Question. Bigler raised alarm over the “present wholesale importation to this country of immigrants from the Asiatic quarter of the globe,” in particular that “class of Asiatics known as ‘Coolies.’” He declared that nearly all were being hired by “Chinese masters” to mine for gold at pitiable wages, while their families in China were held hostage for the faithful performance of their contracts. Bigler called upon the legislature to impose heavy taxes on the Chinese to “check the present system of indiscriminate and unlimited Asiatic immigration,” and for a law barring Chinese contract labor from California mines.
What was Bigler’s intention? The coolie bills were “at last dead—very dead, indeed,” according to the Alta. Chinese in California were neither contracted nor indentured labor. But Bigler knew that whites were uneasy about the growing Chinese population, which had nearly doubled in the past year. Bigler saw political potential in the Chinese Question. He had won his first election in 1851 by fewer than five hundred votes. With his reelection in 1853 uncertain, he used the Chinese Question to excite the populous mining districts to his side. By tarring all Chinese miners as “coolies,” Bigler found a racial trope that compared Chinese to black slaves, the antithesis of free labor, and thereby cast them as a threat to white miners’ independence.
Bigler’s message was published in the Alta and also circulated on “small sheets of paper and sent everywhere through the mines.” As he had intended, Bigler roused the white mining population. Miners gathered in local assemblies and passed resolutions banning Chinese from mining in their districts. In May, at a meeting held in Columbia, Tuolumne County, miners echoed Bigler’s charges. They railed against those who would “flood the state with degraded Asiatics, and fasten, without sanction of law, the system of peonage on our social organization,” and they voted to exclude Chinese from mining in their district. Other meetings offered no reasons but simply bade the Chinese to leave, or to “vamose the ranche [sic].” Sometimes they used violence to push Chinese off their claims.
Bigler was the first politician to ride the Chinese Question to elected office. He won reelection in September 1853, with 51 percent of the vote. Bigler’s success in tarring the Chinese as a “coolie race” gave California politicians a convenient trope that could be trotted out whenever conditions called for a racial scapegoat. But more than a political tool, anticoolieism became a kind of protean racism among whites on the Pacific coast.
Tong K. Achick (Tang Tinggui) and Yuan Sheng were not afraid of John Bigler. Like Bigler and other white elites, they were not miners but businessmen and political leaders. The two men hailed from the region that is now Zhongshan county in Guangdong province. They were leaders of the Yeong Wo Association, educated men, successful merchants, and fluent in English.
Within days of Bigler’s special message, Tong K. Achick wrote a letter to the governor. He signed it along with Hab Wa, a merchant who had recently arrived on the Challenge. In their letter, Tong Achick and Hab Wa attested that, contrary to Bigler’s claims, none of the five hundred Chinese passengers on the Challenge were coolies. They explained that Chinese in California included laborers as well as tradesmen, mechanics, gentry, and teachers; “none are ‘Coolies’ if by that word you mean bound men or contract slaves.” They stated emphatically, “The poor Chinaman does not come here as a slave. He comes because of his desire for independence.”
In addition, Hab Wa and Tong Achick astutely argued that there was a positive relationship between migration and trade. Migration begat commerce, which contributed to the “general wealth of the world.” Whereas Bigler saw only hundreds of coolies disembarking from the Challenge and other ships, Hab Wa and Tong Achick saw both people and freight. Knowing Americans’ interest in business, they warned, “If you want to check immigration from Asia, you will have to do it by checking Asiatic commerce.”
Yuan Sheng also spoke out against Bigler’s message. In early May he wrote, under the name Norman Assing, to the governor and sent a copy to the Alta. He announced himself as “a Chinaman, a republican, and a lover of free institutions; [and] much attached to the principles of the Government of the United States.” The letter insisted, “We are not the degraded race you would make us. We came amongst you as mechanics or traders, and following every honorable business of life.” Yuan used his own case as a naturalized citizen to refute Bigler’s claim that no Chinese had ever made the United States his domicile or applied for naturalization.
Bigler did nothing to mitigate the racism that he had stirred up in the gold districts. Tong Achick sent a second letter a month later, filled with pain. “At many places they [the Chinese] have been driven away from their work and deprived of their claims,” he wrote. Some were “suffering even for want of food and … have fallen into utter despair. We are informed that grown men may sometimes be seen, sitting down alone in the wildest places, weeping like children.”
Tong pledged that the Chinese would “cheerfully” and “without complaint” pay the foreign miners tax of three dollars a month, which the legislature had imposed at Bigler’s urging. Tong understood that a license, even more explicitly than a tax, embodied a commitment on the part of the government that “when we have bought this right [to mine] we shall enjoy it.” He asked, “Will you bid [the tax collector] to tell all the people that we are then under your protection, and that they must not disturb us?”
The foreign miners tax of 1852 was California’s second such tax. The first, passed in May 1850, had aimed to punish foreign miners, especially Mexicans and Chileans, who were among the most skilled and hence the most competitive, as well as Europeans. (Chinese were not then the focus, as few had yet arrived on the goldfields.) The tax aroused angry opposition and the legislature repealed it in 1851.
Notwithstanding the disaster of the first tax, the legislature again used its power to tax to attack the Chinese. The new foreign miners tax was set at three dollars a month. Although it did not specify any particular group, it was understood in the legislature that “the bill is directed especially at Chinamen, South Sea Islanders [Hawaiians], etc. and is not intended to apply to Europeans.”
Although Chinese miners dutifully paid the tax, harassment and violence against them continued. At least some whites, it seemed, were not satisfied with punishing Chinese with an onerous tax but wanted their wholesale removal and exclusion from the goldfields and from the state.
In 1853 Tong Achick led a group of Chinese to meet with the legislature’s Committee on Mines and Mining Interests, which was considering amendments to the foreign miners tax. Tong suggested that the revenue from the tax be shared between the state and the various counties where it was collected. Such an arrangement, he explained, might provide an incentive for local communities to tolerate and accept the Chinese. Possibly, over time, they might become friends. In fact, the Chinese would be willing to pay a higher tax to offset reduced funds to the state.
Tong Achick and his fellow leaders displayed a sophisticated grasp of the stakes in the foreign miners tax. Their eagerness to embrace the tax, even an increased one, was not a capitulation to racism. They understood that racist fever against Chinese still ran high, and they sought to counter it by appealing to the economic interests of local communities in the form of shared revenue. The Chinese had already invested as much as $2 million in businesses and property in California. If the Chinese could prevent wholesale exclusion, if they could establish and hold some ground, even if marginal, they could survive and persist, and perhaps eventually prosper, on Gold Mountain.
Amendments to the foreign miners tax passed by the legislature in 1853 increased the license fee to four dollars per month and designated half of the revenue to go to the county where it was collected. The revenue generated by the tax was considerable: $100,000 to the state and $85,000 to the major mining counties in 1854 and as high as $185,000 to the state and nearly as much to the counties in 1856.
As Tong Achick predicted, the income from the foreign miners tax underwrote a grudging toleration of the Chinese on the goldfields. As well, many Chinese who had been chased away simply returned to their claims, and white miners did not have the energy to be constantly fighting them. This is not to say that harassment and violence ended. Unscrupulous whites posed as tax collectors to take money from Chinese; in 1861 Chinese merchant leaders reported that during the 1850s whites had murdered at least eighty-eight Chinese, including eleven killed by tax collectors. Only two men were convicted and hanged.
Still, an uneasy coexistence settled on the goldfields, marked by both suspicion and transaction, and by both occasional violence and occasional cooperation. In Yuba County, for example, white miners were not successful at eliminating the Chinese from their river claims. By 1860 Chinese in Marysville were working as garden farmers, cooks, servants, and laundry operators, including a few located in white neighborhoods. Through experience, Chinese also became wiser in their dealings with white miners. H. B. Lansing, who mined in Calaveras County, frequently sold, or tried to sell, his claims to Chinese. In 1855 he lamented in his diary, “Tried to sell a claim to some Chinese but could not come it. They are getting entirely too sharp for soft snaps.”
If by the 1860s race relations on the goldfields seemed calmer, the Chinese Question took on new life in San Francisco in the 1870s. The completion of the transcontinental railroad in 1869 had not brought unalloyed prosperity to the Pacific coast. Workers in California faced economic competition from new migrants and mass-manufactured goods from the east coast, as well as the long tail of the depression of the mid-1870s. The Chinese Question, born during the gold rush years, offered a ready nativist playbook that could be adapted to new conditions: racist theories that appealed to grievances among sectors of the population and were weaponized by politicians for partisan gain. The Chinese Question tore San Francisco asunder with violent agitation, swept state politics, and relentlessly drove national politics. Congress passed the Chinese Exclusion Act in 1882, thanks to an alliance of the nation’s two bastions of white supremacy, the South and the West.
The Chinese exclusion policy of the United States inspired white supremacists in the British settler colonies. There had been opposition to Chinese gold seekers in Australia, but anti-Chinese sentiment on the goldfields there was much more inchoate than in California. European gold seekers did not theorize Chinese as coolies because there was no proximate association with enslaved Black labor. But anti-coolieism gained force later in the nineteenth century among urban workingmen’s movements, which borrowed freely from the American example. As in the U.S., the Chinese Question was a weapon of emergent racial nationalism. In South Africa it was a political flashpoint that served to mitigate tensions between the British- and Afrikaner-descended populations, toward resolution of the “Native Question” (that is, how whites would rule over the African majority), which loomed over all else. In both Australia and South Africa, the Chinese Question served the movements for federation, in 1901 and 1910 respectively. As British “dominions” they exercised maximum political autonomy within the British Empire, setting exclusionary race policies that were not possible in the metropole. London acquiesced to them in order to protect imperial interests, not least its control over two-thirds of the world’s gold output.
The Chinese Question and Chinese exclusion policies thus circumnavigated the Anglo-American world in the late nineteenth and early twentieth centuries. The Chinese Question did not emerge fully formed, like Athena from Zeus’s head. Rather, it grew in local soils and shifted and evolved as it crossed the Pacific world, supporting the consolidation of British and American power over global emigration and trade.
Members of September Eleventh Families for Peaceful Tomorrows visit Afghanistan, January 2002.
The current debate over the American departure from Afghanistan has focused attention on the mistakes of presidents and national policymakers and the chaos of getting out of that distant land. Warnings made twenty years ago about the pitfalls of attacking Afghanistan remain closeted somewhere in the distant past. Less than one month after al-Qaeda flew planes into the Twin Towers in New York, George W. Bush moved quickly to secure authorization from Congress to take military action against any “nations, organizations, or persons” that were determined to be linked to the 9/11 attacks. Only Barbara Lee, a representative from California, voted against the measure, sensing that the resolution was simply too vague and would allow for attacks on any country or enemy anywhere. Public opinion was all in on Bush’s plans. A Gallup poll in October 2001 reported an approval rate for an assault on Afghanistan of 88%.
Although antiwar sentiment was minimal at the time, it was forceful and often cast within a humanitarian framework that challenged Bush’s own appeals to higher morality. In a speech at the National Cathedral three days after 9/11, the president cast the incipient war on terror as a crusade to “rid the world of evil” and asked for divine guidance as he decided on the actions he would now take. Peace groups such as the American Friends Service Committee quickly challenged the president’s intentions and called for a “higher form of patriotism” more resistant to war and brutality. Pax Christi, a Catholic peace organization, sent a letter to Bush in November 2001, explaining why it stood against the move into Afghanistan. It argued that bombing raids and ground attacks, especially the use of cluster bombs that killed indiscriminately, were creating a humanitarian disaster. The group insisted that war only begets more war and would not address the root causes of terrorism, which it believed lay in despair and exclusion. “We cannot use God’s name to justify the recourse to violence,” the group argued. Several months later the U.S. Catholic Bishops issued a pastoral letter that condemned the government’s “indiscriminate attacks on innocent people.” The prelates offered prayers for Americans asked to risk their lives in Afghanistan but felt “national leaders” should bear a “moral burden” to explore nonviolent responses to the problem of terrorism.
Opposition from peace groups was to be expected. Resistance from Americans who lost loved ones on 9/11 was much more surprising. Upon hearing Bush mention her brother’s name in his speech at the National Cathedral, Rita Lazar decided to write to the New York Times. Her sibling had died when he remained behind in the towers to help a wheelchair-bound friend. Bush had referred to his selflessness in his address. Lazar’s point to the Times was that she hoped that the nation was not so deeply hurt that it would “unleash forces we will not have the power to call back” in “my brother’s name and mine.” David Potorti, who also lost a brother in the terror attacks, expressed regret over hearing radio-talk programs calling upon America to bomb “them.” He recalled that his mother grieved so much over her son’s passing that she did not want anyone else to feel the pain that she did. Darrel Bodley, whose daughter Deora died on Flight 93 when it crashed in Shanksville, Pennsylvania, told a reporter that “we must not retaliate in kind as if our cause allows us to.” He was concerned that the United States would mimic the ideology of the terrorists by unleashing violence because it thought it had God on its side.
Phyllis and Orlando Rodriguez were also devastated by personal loss on 9/11 but still rejected the idea of a violent reprisal. They worried that the death of their son Greg and others would be used as a rationale to start a war in Afghanistan. They did not want anyone anywhere to experience the misery they felt. Just three days after the 9/11 assault on America they circulated an email on the Internet entitled “Not in Our Son’s Name.” They wrote about the experience of sharing their grief with their son’s wife, friends, neighbors, and colleagues. Instead of retribution, they insisted there be no more suffering or dying in the name of their lost child.
The couple even wrote to Bush. They let him know that their son had died at Ground Zero in New York and that they had read about Congress giving him “undefined power” to respond to terror attacks. “Your response to this attack does not make us feel better,” they asserted, “it makes us feel worse.” They made it clear to Bush that they did not want him to take any actions that would bring pain to parents in other lands in Greg’s name. For them this was not the time for America to act as a bully. They pleaded that Bush develop peaceful solutions to terrorism that would not sink us to the inhuman level of terrorists.
Some war resisters went beyond expressing their views and took specific actions. In November 2001, the group September Eleventh Families for Peaceful Tomorrows was formed by people who had suffered the loss of loved ones on 9/11. Convinced that a global war on terror would not eradicate the root causes of terrorism and extremism, they preferred America look for “life-giving” responses to the attacks rather than simply submit to emotions of fear and anger. Speaking for the group, Potorti explained that they drew their title from a 1967 speech by Martin Luther King on the Vietnam War in which the civil rights leader asserted that “wars are poor chisels for carving out peaceful tomorrows.” In 2002 a delegation from Peaceful Tomorrows went to Afghanistan and visited homes of families directly harmed by American bombs. They wanted to raise awareness of the human toll the American attack was taking on guiltless people in Afghanistan, in contrast to the vast public attention directed toward the American victims of 9/11. They met children whose homes were destroyed, who were not able to sleep because of nightmares, and who stopped talking completely. At the end of their tour, they issued a public rebuke of American media outlets that had referred to the bombings as “bloodless” and a “flawless use of sophisticated weapons.”
In a climate of war-fever that permeated the autumn of 2001, many Americans attacked Peaceful Tomorrows and others who called for a more restrained American response to 9/11. One irate citizen wrote to the group calling them traitors or “Jane Fondas for the Millennium.” Another offered condolences to members for their personal losses but castigated their “liberal, victim position” as “pathetic” and felt they should have their citizenship revoked. And another called them “chicken shit morons” who were “killing this country” by refusing to support a forceful response to terrorism.
Today Americans worry over the humanitarian crisis at the Kabul airport. But they mostly forget the one that transpired over a period of two decades and led to the death of some 150,000 Afghans in a war to eliminate evil from the world. The crisis today is real but so was the one they were warned about two decades ago, when most of them were not listening.
The “Viking,” a replica of the recently excavated Gokstad ship, is displayed at the 1893 Columbian Exposition in Chicago. Recent archaeology has deepened understanding of other ship designs made for coastal sailing and frequent portaging.
In the midst of World War II, with the Nazis extolling their Viking heritage, the Swedish writer Frans G. Bengtsson began writing “a story that people could enjoy reading, like The Three Musketeers or the Odyssey.”
Bengtsson had made his literary reputation with the biography of an 18th-century king. But for this story he tried a new genre, the historical novel, and a new period of time. His Vikings are common men, smart, witty, and open-minded. “When encountering a Jew who allies with the Vikings and leads them to treasure beyond their dreams, they are duly grateful,” notes one critic. “Bengtsson in effect throws the Viking heritage back in the Nazis’ face.”
His effect on that Viking heritage, however, was not benign. His story, Röde Orm, is one of the most-read and most-loved books in Swedish, and has been translated into over twenty languages; in English it’s The Long Ships.
Part of the story takes place on the East Way, which the red-haired Orm travels in a lapstrake ship with 24 pairs of oars. Based on the Oseberg ship’s 15 pairs of oars or the Gokstad ship’s 16, such a mighty vessel would stretch nearly 100 feet long and weigh 16 to 18 tons, empty. To cross the many portages between the Baltic Sea and the Black Sea, Red Orm’s “cheerful crew” threw great logs in front of the prow and hauled the boat along these rollers “in exchange for swigs of ‘dragging beer,’” Bengtsson wrote.
This, say experimental archaeologists, is “unproven,” “improbable,” and—after several tries with replica ships—”not possible.”
But Bengtsson’s fiction burned itself into popular memory. Early scholars were convinced, too: A drawing of dozens of men attempting to roll a mighty ship on loose logs illustrates the eastern voyages in the classic compendium The Viking from 1966.
“Seldom has anything been surrounded by so much myth and fantasy” as the Viking ship, notes Gunilla Larsson, whose 2007 Ph.D. thesis, Ship and Society: Maritime Ideology in Late Iron Age Sweden, has completely changed our understanding of the Vikings’ eastern voyages.
Like the myth of the Viking housewife with her keys, the myth of the mighty Viking ship is so common it’s taken to be true. But the facts do not back it up.
In the 1990s, archaeologists attempted several times to take replica Viking ships between rivers or across isthmuses using the log-rolling method. They failed. They scaled down their ships. They still failed. Their ships were a half to a third the length of Red Orm’s mighty ship. They weighed only one to two tons, not 16 tons. Yet they could not be cheerfully hauled by their crews, no matter how much beer was provided. The task was inefficient even when horses—or wheels or winches or wagons—were added.
We think bigger is better, but it’s not.
The beautiful Oseberg ship with its spiral prow and the sleek Gokstad ship, praised as an “ideal form” and “a poem carved in wood,” have been considered the classic Viking ships from the time they were first unearthed. Images of these Norwegian ships grace uncountable books on Viking Age history, uncountable museum exhibitions, uncountable souvenirs in Scandinavian gift shops.
But a third ship of equal importance for understanding the Viking Age was discovered in 1898, after Gokstad (1880) and before Oseberg (1903), by a Swedish farmer digging a ditch to dry out a boggy meadow. He axed through the wreck and laid his drain pipes. The landowner, a bit of an antiquarian, decided to rescue the boat and pulled the pieces of old wood out of the ground. His collection founded a local museum, but the boat pieces lay ignored in the attic—unmarked, unnumbered, with no drawings to say how they had lain in the earth when found—until 1980, when a radiocarbon survey of the museum’s contents dated them to the 11th century. Their great age was confirmed by tree-ring data, which found the wood for the boat had been cut before 1070.
In the 1990s, archaeologist Gunilla Larsson took on the task of puzzling the pieces back into a boat. She had bits of much of the hull: of the keel, the stem and stern and five wide strakes, even some of the wooden rail attached to the gunwale. She had most of the frames, one bite, and two knees. About 2 feet in the middle of the boat was missing: where the ditch went through. The iron rivets had rusted away, but the rivet holes in the wood were easy to see and, since the distance between them varied, the parts could only go together one way. The wood itself had been flattened by time, but it was still sturdy enough to be soaked in hot water and bent into shape—the same technique the original boatbuilder had used.
When she had solved this 3D jigsaw puzzle, she engaged the National Maritime Museum in Stockholm to help her mount the pieces on an iron frame; the Viks Boat went on display in 1996. Then she created a replica, Tälja, and tested it by sailing, rowing, and portaging around Lake Mälaren. Tälja glided up shallow streams, its pliable planks bending and sliding over rocks. With only the power of its crew, it was easily portaged from one watershed to the next, from Lake Mälaren to Lake Vänern in the west, itself draining into the Kattegat.
A second Viks Boat replica, Fornkåre, was built in 2012 and taken on the Vikings’ East Way from Lake Mälaren to Novgorod the first year, then south, by rivers and lakes, some 250 miles through Russia the second year. Concludes Fornkåre’s builder and captain, Lennart Widerberg, “The vessel proved itself capable of traveling this ancient route” from Birka to Byzantium.
The Viks Boat is 31 feet long—shorter than the two earlier replicas that failed the East Way portage test—and about 7 feet wide, comfortable for a crew of 8 to 10. Its replicas passed the portage test for two reasons. First, they were built, like the original, with strakes that were radially split, not sawn. The resulting board is easy to bend and hard to break—at less than half an inch thick. The resulting boat is equally seaworthy at almost half the weight of the same size boat built with the same lapstrake technique, but using sawn boards. Empty, the Viks Boat replicas weigh only half a ton—about as much as a horse.
The second reason the Viks Boat replicas proved adequate for the East Way was that archaeologists had set aside Frans Bengtsson’s fantastical log-rolling technique for crossing from stream to stream.
By studying the ways the Sami had portaged their dugout canoes through the waterways of Sweden and Finland throughout history, the archaeologists began to see signs of similar portage-ways around Lake Mälaren. They built some themselves and had teams race replica ships through an obstacle course of portage types: smooth grassy paths, log-lined roads or ditches (with the logs aligned in the direction of travel), and bogs layered with branches. A team of two adults and seven 17-year-olds finished a winding, half-mile course with Tälja in an hour. When the portage was straight over 4-inch-thick logs sunk into the mud so they didn’t shift, the boat raced at 150 feet a minute.
The beauty of the Gokstad ship, its poetic quality, comes from its curves, the hull swelling out from the gunwale then tightly back in, making a distinctive V-shape down to the deep, straight keel. These concave curves improve the ship’s sailing ability at sea. But the keel cuts too deep to float in the shallow, stony streams that connect the Baltic to the Black Sea.
Over a portage, even the minimal keel of the Viks Boat replicas needed to be protected with an easily replaceable covering of birch, as had been found on the original. The Old Norse name for this false keel was drag. To “set a drag under someone’s pride” was to encourage arrogance.
Historians and archaeologists of the Viking Age have long labored under an ideological false keel. With the Viks Boat taking its rightful place as an exemplar of the Viking ship, it’s time to knock off that damaged drag and replace it. Says Larsson, “We should get used to a completely different picture of the Scandinavian traveling eastward in the Viking Age, one that is far from the traditional image of the male Viking warrior in the prow of a big warship.”
Historian Richard J. Evans is a preeminent scholar on Hitler and Nazi Germany, most notably through his trilogy on the history of the Third Reich. His most recent work, “The Hitler Conspiracies: The Stab in the Back - The Reichstag Fire - Rudolf Hess - The Escape from the Bunker,” takes on the key conspiracy theories generated out of the Hitler era. Aaron J. Leonard recently conducted an interview with him via email to discuss his work, the current invocation of fascism in some quarters, and the contrast between solid historiography and work amplifying and propagating conspiracy theories.
Why do you think Hitler and his murderous regime — which ought to be repellent — loom so large in the popular imagination?
They loom so large in the public imagination precisely because they are so repellent. Hitler has come to stand as a kind of substitute for Satan in an increasingly secular world: he is the epitome of evil. When we think of Hitler, we think of dictatorship, war and genocide, of cultural repression, racism and the looting of art on an unprecedented scale. The more the Holocaust has become part of mainstream public memory, the more it has brought Hitler into the center of public attention as its originator.
Your chapter detailing the “Stab in the Back” myth, which claims the German army was sabotaged from victory in World War I by various anti-patriotic left-wing forces, made me think of a Vietnam veteran I encountered a few years back who was adamant that in that war, the US was forced to fight with ‘one hand tied behind its back.’ It seems one of the features of many of these conspiracy theories or ‘alternative histories’ is to take a loss or weakness, and turn it into something less. Is that accurate, or is there something else going on?
The idea that a war – or an election – wasn’t really lost, but betrayed by a backstairs conspiracy, is an easy and perennially attractive way (to some people at least) of explaining defeat: defeat, after all, is very difficult and painful to admit. It also disqualifies a whole section of society as not really part of it, whether that’s the Jews or the socialists in Germany in 1918, or the Democrats in America in 2020.
In your book Lying About Hitler — which recounts the libel suit brought by David Irving against historian Deborah Lipstadt, a trial in which you testified — you literally had to chase down footnotes to show how he manipulated evidence to minimize and deny the Holocaust. Irving is arguably more insidious than some of those you challenge in your current work because he was seen as a scholar, rather than a crackpot — and yet, his methodology is not far removed from the crassest of conspiracists. How would you contrast the two methods employed between conspiracy-based ones – which are not wholly devoid of evidence — versus those based on the method of honest historians?
Conspiracy theorists very frequently imitate the methods of honest historians: You will find their works weighed down with footnotes and crammed with elaborate, solid-looking detail. Only when you subject them to detailed scrutiny does it become clear that the detail isn’t solid at all – it’s full of deliberate errors, falsifications, manipulations, misquotations, mistranslations, calculated omissions and manufactured connections, speculation, innuendo and supposition. Irving’s Holocaust denial work was full of mistakes, as the judge in the libel action he brought against Deborah Lipstadt in 2000 noted, but they were not honest mistakes, since they all went to support his arguments. Honest mistakes are random in their effect; his were not. Honest historians know they have to abandon their arguments when the evidence turns out to disprove them; dishonest historians and conspiracy theorists bend the evidence to fit the argument.
Quite a few people, particularly on the left, have taken to invoking the word ‘fascism’ or otherwise drawing parallels to the National Socialists of the 1930s & 40s, to describe various current phenomena. What do you see as the limits – and benefits if any — of such historical analogies?
Fascism is one of those concepts that can seem almost infinitely elastic; it’s just too tempting for polemical purposes to accuse any authoritarian politician of being like Hitler, or any populist movement of being fascist. But we have to remember that fascism was a militaristic movement, aiming at war and conflict, territorial expansion and empire. Fascists put every citizen into uniform, drilled the people into uniformity and obedience in training camps, and subordinated private life, business companies, and institutions of all kinds to the state. Fascists were ultimately genocidal, whether it was the Nazis exterminating the Jews, or the Italian Fascists exterminating the Ethiopians (among other things, by using poison gas). Nazism and Fascism also put science at the center of their belief systems, in particular, racial and eugenic ‘science’, and regarded religion as a leftover from medieval times that would soon disappear. In all these respects they differed from 21st-century populism, which is hostile to the state, anti-scientific, and opposed to militarism both within the country and outside it. The classic fascist mass consisted of endless marching columns of identically uniformed men; today’s populist mass, as in the storming of the US Capitol on January 6th, 2021, consists of thousands of informally and in some cases eccentrically attired individuals heaving about in a chaotic heap, violent and aggressive but not organized in any military way. The problem with calling today’s right populism ‘fascist’ is that it’s fighting today’s battles with the weapons of the 1920s and 1930s. Time has moved on since then.
I am constantly astonished, and not a little frustrated, that so much taken for ‘common knowledge’ has already been countered by professional historians and yet it seems we live in a world where too often “alternative history” operates as actual history in the popular imagination. How can that ever change, or at least not command such power?
The Internet and social media are largely though not exclusively responsible for undermining belief in truth and objectivity. Society has become increasingly polarized through the rise of ‘identity politics’ – my truth is not the same as your truth (though in fact there can never be two opposing truths; only one of them can ever be the real truth). The spread of hyper-relativism through the dominance in universities of postmodernist culture has also played its part. The mass media, above all television and the movies, have blurred the boundaries between truth and fiction. Holding social media companies to account for the lies they allow to be spread is a beginning. But more needs to be done.
Although right-wingers like Rudy Giuliani argue that left-wing cancel culture is dangerous to free speech, the ongoing right-wing movement to ban Critical Race Theory (CRT) from school curriculums fits into the right’s long history of attacks on progressives’ free speech. The Texas Senate bill removing Martin Luther King, Jr’s “I Have a Dream” speech, Native American history, and the history of white supremacy from public school curriculums may be blocked from passing right now, but it has made waves throughout the internet. This bill comes amidst nationwide right-wing outrage over CRT, which Fox News reportedly mentioned nearly 1300 times between March and June this year.
This hysteria reached a boiling point last month when a Virginia school board meeting was shut down by right-wing protestors over a curriculum that allegedly promotes CRT, although Loudoun County Schools officials publicly stated that CRT is not part of their curriculum. The ongoing distress over CRT is fueled by a massive, right-wing media-backed movement to control school curriculums. Fox News host Tucker Carlson, for example, recently called for teachers to wear body cameras to monitor CRT teaching, despite previously arguing in favor of free speech on campuses.
The panic over CRT may seem to have come out of nowhere, with media coverage of it skyrocketing in recent months, but progressive movements in academia have caused alarm for decades. This began with conspiracy theories about critical theory (CT), a method of systemic critique which was the predecessor of CRT. These conspiracy theories focus on the developers of CT, the Frankfurt School thinkers, who were mostly Jewish, and claim that they infiltrated American universities with the goal of destroying Western culture and implementing “Cultural Marxism.”
While these theories may seem far-fetched, they are still promoted today by right-wing thinkers like Ben Shapiro and Jordan Peterson. Frankfurt School historian Martin Jay traced these conspiracy theories back to LaRouche movement writer Michael Minnicino’s essay that relies on little to no source material to make false, exaggerated claims. Minnicino claims, without evidence, that “the heirs of Marcuse and Adorno completely dominate the universities” and teach their students “’Politically Correct’ ritual exercises.” The essay reduces the Frankfurt School’s complex “intellectual history into a sound-bite sized package available to be plugged into a paranoid narrative,” according to Jay. Despite the suspicious beginnings of this conspiracy theory, right-wing thinkers like Jeffrey A. Tucker and Mike Gonzalez continue to blame the Frankfurt School thinkers for today’s attacks on free speech, going as far as to suggest executive action to prevent their influence.
While the evidence supporting CT conspiracy theories is dubious, there is historical evidence to suggest that the Frankfurt School thinkers were far too divided to have devised such a world-changing plot. One must only look towards Adorno and Marcuse’s final letters to each other—their correspondence on the German student movement in the 1960s—to see these divisions on full display.
In these letters, the two thinkers debated whether it was justified for Adorno to have called the police on a group of students who occupied his classroom demanding that he engage in self-criticism. While Adorno dismisses the students and their demands as “pure Stalinism,” Marcuse aligns himself with the students and their goals, finding it more helpful to aid the movement than disparage it. These thinkers differ in one key aspect: while Marcuse finds solidarity with the students in their goals, and is less concerned with how they achieve them, Adorno is repulsed by the means. How could a group that could not even agree on which movements were good for society possibly have conducted such a massive societal shift? The historical, fact-based evidence makes it clear—they didn’t.
Some figures on the right have cancelled the Frankfurt School, reducing their complex history into buzzwords, and rendering their ideas meaningless. This is just one example of how right-wing figures cancel things that counter their worldview through misinformation. CT conspiracy theories fit with former Trump advisor Sebastian Gorka’s claim that the Green New Deal will take your hamburgers, despite the proposal making no mention of meat. The theories also fit with the Governor of South Dakota Kristi Noem’s claim that we need to defend the “soul of our nation” against gay rapper Lil Nas X, just because he released Satan-themed shoes. We saw this pattern of regressive fear-mongering at its worst last month when there were two stabbings at a protest at a Los Angeles spa, spurred by a transphobic hoax. There is a pattern of misinformed reactionary cancelling in which even former President Barack Obama has been tied to the recent outrage over CRT. And these cancelling efforts clearly have had a wide-reaching effect, with 26 states recently taking steps against CRT.
Although reactionary cancelling is doing some damage, we can fight it through progressive cancelling. While reactionary cancelling serves oppression, pushing racist, anti-Semitic, or homophobic agendas, progressive cancelling advocates consequences for socially unjust actions and amplifies marginalized voices.
In his essay “Repressive Tolerance”, Marcuse says that to realize universal tolerance, we first need to escape from our repressive society. One part of doing this is, instead of tolerating all opinions equally, to retract tolerance from opinions that perpetuate violence and oppression. Marcuse calls on us to fight the forces that serve oppression. We cannot play into the pocket of the oppressor like the Loudoun School Board meeting protestors. Instead, we must resist oppression like the 1960s student protestors who used progressive cancelling against perceived injustices like the United States’ involvement in Vietnam.
Progressive cancelling is the same form of cancelling that hit J.K. Rowling, Harvey Weinstein, or even Christopher Columbus—one that centers marginalized people and says “enough” to violence and oppression. On a wide-enough scale, we could achieve what Marcuse called a “Great Refusal.” To change our societal trajectory to one towards Marcuse’s “opposite of hell,” we need to fight reactionary cancelling through progressive cancelling.
Members of Women Strike For Peace picket near the United Nations headquarters calling for an international resolution to the Cuban Missile Crisis.
Is it possible to build social solidarity beyond the state?
It’s easy to conclude that it’s not. In 1915, as national governments produced the shocking carnage of World War I, Ralph Chaplin, an activist in the Industrial Workers of the World, wrote his stirring song, “Solidarity Forever.” Taken up by unions around the globe, it proclaimed that there was “no power greater anywhere beneath the sun” than international working class solidarity. But, today, despite Chaplin’s dream of bringing to birth “a new world from the ashes of the old,” the world remains sharply divided by national boundaries—boundaries that are usually quite rigid, policed by armed guards, and ultimately enforced through that traditional national standby, war.
Even so, over the course of modern history, social movements have managed, to a remarkable degree, to form global networks of activists who have transcended nationalism in their ideas and actions. Starting in the late nineteenth century, there was a remarkable efflorescence of these movements: the international aid movement; the labor movement; the socialist movement; the peace movement; and the women’s rights movement, among others. In recent decades, other global movements have emerged, preaching and embodying the same kind of human solidarity—from the environmental movement, to the nuclear disarmament movement, to the campaign against corporate globalization, to the racial justice movement.
Although divided from one another, at times, by their disparate concerns, these transnational humanitarian movements have nevertheless been profoundly subversive of many established ideas and of the established order—an order that has often been devoted to maintenance of special privilege and preservation of the nation state system. Consequently, these movements have usually found a home on the political left and have usually triggered a furious backlash on the political right.
The rise of globally-based social movements appears to have developed out of the growing interconnection of nations, economies, and peoples spawned by increasing world economic, scientific, and technological development, trade, travel, and communications. This interconnection has meant that war, economic collapse, climate disasters, diseases, corporate exploitation, and other problems are no longer local, but global. And the solutions, of course, are also global in nature. Meanwhile, the possibilities for alliances of like-minded people across national boundaries have also grown.
The rise of the worldwide campaign for nuclear disarmament exemplifies these trends. Beginning in 1945, in the aftermath of the Hiroshima bombing, its sense of urgency was driven by breakthroughs in science and technology that revolutionized war and, thereby, threatened the world with unprecedented disaster. Furthermore, the movement had little choice but to develop across the confines of national boundaries. After all, nuclear testing, the nuclear arms race, and the prospect of nuclear annihilation represented global problems that could not be tackled on a national basis. Eventually, a true peoples’ alliance emerged, uniting activists East and West against the catastrophic nuclear war plans of their governments.
Much the same approach is true of other global social movements. Amnesty International and Human Rights Watch, for example, play no favorites among nations when they report on human rights abuses around the world. Individual nations, of course, selectively pick through the findings of these organizations to label their political adversaries (though not their allies) ruthless human rights abusers. But the underlying reality is that participants in these movements have broken free of allegiances to national governments to uphold a single standard and, thereby, act as genuine world citizens. The same can be said of activists in climate organizations like Greenpeace and 350.org, anticorporate campaigns, the women’s rights movement, and most other transnational social movements.
Institutions of global governance also foster human solidarity across national borders. The very existence of such institutions normalizes the idea that people in diverse countries are all part of the human community and, therefore, have a responsibility to one another. Furthermore, UN Secretaries-General have often served as voices of conscience to the world, deploring warfare, economic inequality, runaway climate disaster, and a host of other global ills. Conversely, the ability of global institutions to focus public attention upon such matters has deeply disturbed the political Right, which acts whenever it can to undermine the United Nations, the International Criminal Court, the World Health Organization, and other global institutions.
Social movements and institutions of global governance often have a symbiotic relationship. The United Nations has provided a very useful locus for discussion and action on issues of concern to organizations dealing with women’s rights, environmental protection, human rights, poverty, and other issues, with frequent conferences devoted to these concerns. Frustrated with the failure of the nuclear powers to divest themselves of nuclear weapons, nuclear disarmament organizations deftly used a series of UN conferences to push through the adoption of the 2017 UN Treaty on the Prohibition of Nuclear Weapons, much to the horror of nuclear-armed states.
Admittedly, the United Nations is a confederation of nations, where the “great powers” often use their disproportionate influence—for example, in the Security Council—to block the adoption of popular global measures that they consider against their “interests.” But it remains possible to change the rules of the world body, diminishing great power influence and creating a more democratic, effective world federation of nations. Not surprisingly, there are social movements, such as the World Federalist Movement/Institute for Global Policy and Citizens for Global Solutions, working for these reforms.
Although there are no guarantees that social movements and enhanced global governance will transform our divided, problem-ridden world, we shouldn’t ignore these movements and institutions, either. Indeed, they should provide us with at least a measure of hope that, someday, human solidarity will prevail, thereby bringing to birth “a new world from the ashes of the old.”
[This article is a revised version of a previous article by the author: “The Inspiring Legacy of Global Movements.”]
“Frederick Douglass argued against John Brown’s plan to attack the arsenal at Harpers Ferry,” Jacob Lawrence
“Telling the truth about the past helps cause justice in the present.”
This was the manifesto of sociologist James Loewen, who died last week. As History News Network noted in a recent tribute, Dr. Loewen wrote the famous and bestselling “Lies My Teacher Told Me: Everything Your American History Textbook Got Wrong.” Chapter titles included “Gone With the Wind: The Invisibility of American Racism in American Textbooks” and “The Truth About the First Thanksgiving.”
Whatever the subject, Professor Loewen never stopped insisting on “the truth about …” I saw this for myself when someone made a comment on my History News Network essay “John Brown’s Body” that Brown was “crazy.” Loewen stepped up to mount a spirited counter-attack in the comments section. “No Black person who met him ever called Brown crazy …” he responded. “Read my short discussion of Brown in LIES MY TEACHER TOLD ME. Open yourself up to a larger view.”
When it comes to telling the truth about our history, we need that larger view now more than ever. RIP Professor Loewen.
The best place to contend with James Loewen’s legacy is the website he inspired, History and Social Justice.
A pre-Pearl Harbor protest against U.S. involvement in the Second World War. Still from the 1942 film “Prelude to War” in the Department of War’s “Why We Fight” series
“Anyone who has opposed the policy of the moment has been labeled as an obstructionist and an isolationist.” This wry observation came from “Mr. Republican,” Ohio Senator Robert A. Taft, in reference to his opposition to the Marshall Plan, the post-World War II recovery proposal that aimed to restore European economies, promote liberal commercial integration, and thwart communism’s spread in Europe. Taft led a body of conservative Republicans who joined with progressives to oppose this and other components of the Cold War. They believed that the suite of diplomatic policy changes would eliminate America’s economic advantage, benefit certain exporters, violate U.S. sovereignty, guarantee confrontation with the Soviet Union, and transform the United States from a republic into an empire. For their trouble, this transpartisan band of troublemakers was accused of advocating appeasement and isolationism.
For over 80 years, the word “isolationist” has been used by the U.S. foreign policy establishment to narrow the range of acceptable public opinion on America’s role in the world. Popularized during America’s rise as an international superpower, the label has been used since 1945 to control dissent and limit the window of public discourse on foreign policy matters. As the United States enters another round of agonizing soul searching after another lost war, Americans should learn this word’s fraught political history.
“Isolationist” began its career as a strawman prior to American entry into World War II. The myth of an isolated America was created by foreign policy elites as a rhetorical foil for their own desire for U.S. dominance in the postwar world. “Isolationist” became a pejorative shorthand for narrow-minded Americans who desired economic autarky and political neutrality from the outside world. The label was liberally applied to opponents of U.S. entry into World War II (left and right), none of whom would have used it themselves, and many of whom would have been considered internationalists a few years prior. Additionally, the burgeoning U.S. foreign policy establishment used the myth of isolationism to blame the rise of fascism upon a benighted America which sought to politically disentangle itself from Europe after the Great War. “Isolationism” became the twin of another rhetorical hammer: appeasement. Together they would be used to shape a triumphant, “good war” narrative of World War II, one which would be mobilized to serve an American-dominated postwar order.
During the nascent Cold War, “isolationist” was again mobilized to burn off the vestiges of political opposition which stood athwart a bipartisan Cold War consensus. Despite America’s newfound status as an international superpower, trepidation remained, particularly on the fringes of the political spectrum. While leftwing opponents of the Cold War were primarily silenced via McCarthyism, rightwing dissidents were assailed with the “isolationist” label. In a mirror image of McCarthyite tactics, supporters of the Cold War paradigm accused dissenting Republicans of “playing into the hands of the communists” by opposing the Marshall Plan, NATO, the draft, and a permanent national security state. Their reasoned (and prescient) opposition was labeled by Cold War hawks from both parties as a naïve and imprudent attempt to return America to an idyllic and isolated past…a past which never in fact existed. The isolationist label, coupled with the civil liberties abuses of McCarthyism, helped to freeze relations with the Soviet Union and thereby deepen the Cold War.
In the aftermath of the Vietnam War, the label was again used to constrict the postmortem of that disastrous conflict and maintain the narratives which animated American foreign policy. During the malaise of the 1970s, foreign policy experts fretted that a wave of “neoisolationism” was going to undermine “the cherished lessons of the thirties, [and] the war years.” The specter of isolationism was used to bolster the position of hawks who still felt that the Vietnam War was justified. The result was a foreign policy establishment which was able to castigate the war on tactical or leadership grounds, while keeping the larger international project intact. This narrowing of disagreement took the wind out of the sails of a vibrant antiwar movement and paved the way for President Ronald Reagan’s reescalation of the Cold War.
The “I” word was again marshaled against opponents of the first Gulf War and those who decried President George H.W. Bush’s dreams of a “New World Order.” One of its primary targets was controversial paleoconservative Pat Buchanan. Buchanan and his cadre of antiwar right-wingers represented the largest conservative revolt against U.S. foreign policy in 40 years. As with their forebears, they were met with the dismissive label which stifled debate as America entered an era of global hegemony. Into the 1990s, the organs of the Cold War state would remain intact and repurposed towards buttressing an unrivaled American empire. This transition was made possible because to merely discuss a return to a status quo ante bellum was to be an “isolationist.”
And again, 20 years after 9/11, “isolationism” has been used to obfuscate any substantive discussion of America’s role in the world. As with Vietnam and the early Cold War, debate around the calamitous bipartisan interventions into Afghanistan, Pakistan, Iraq, Somalia, Libya, and Syria is limited by the “isolationist” label. These interventions are often framed as well-meaning if misguided mistakes because their undergirding motivations are rarely challenged in public discourse. Opponents of these conflicts who make deeper critiques are labeled as retrograde “isolationists” who want America to selfishly take its ball and go home. The label has become one of the few bipartisan institutions in Washington. Some of those tarred with the misnomer include progressive politicians, such as former House Representative Tulsi Gabbard and Senator Bernie Sanders. The label has also been applied to a growing body of libertarian-leaning and populist Republicans who question Washington’s foreign policy orthodoxy and threaten decades of Republican uniformity on the issue. As in years prior, this country’s foreign policy establishment, and those who benefit from the status quo, are attempting to limit the range of public debate in the wake of a series of extremely unpopular and costly wars. If Americans truly desire a new foreign policy, then it is time that they roundly reject the labels which seek to control their associations, silence their voices, and narrow their minds.
A Sharpie-drawn mustache embellishes Trump’s face on a poster. A Hitler caricature wears a MAGA hat. Satirists ridicule both of them. Academics ponder strong man analogies. A new Netflix documentary pairs the Donald and the Führer. The similarities between the flamboyant leaders in critics’ crosshairs, however, blind us to a crucial contrast. Hitler, a charismatic leader, was also an astute politician. Trump channels his supporters’ mood, but lacks even the rudimentary ability to govern.
The differences begin with the messages by which each man attracted masses of followers. Hitler muted his rabid racism and promised to combat the Great Depression with massive government investment in social welfare, rearmament, and infrastructure. The Trumpist GOP rejects big government and relies on white nationalism to sustain loyalists’ allegiance.
Adolf Hitler built a mass following by promoting economic and civic revival, not, at first, antisemitism. It’s comforting to associate good governance with liberal democracy. A backward glance, however, alerts us to an inconvenient history.
To us, the virulence of Hitler’s Judeophobia is clear. It was not so obvious, however, to most of the approximately 30 percent of Germans who voted Nazi in the run-up to Hitler’s takeover. A decade earlier, when Hitler was in prison after the ludicrous failure of his “beer hall” coup, he fulminated in Mein Kampf about Jews as “bloodsuckers,” “tapeworms” and “parasites,” who should “be exterminated.” His delusional hatred roused party radicals, but hardly anyone read Mein Kampf, and the Nazi Party remained on the crackpot fringe. But after the 1928 election yielded only three percent of the vote, Hitler rebranded his public self from rebellious upstart to responsible leader.
The “new” Hitler raged against Bolshevism, the victors of World War I, and corrupt politicians whom he blamed for Germany’s ruin. He still celebrated “Aryan” superiority. But he mentioned Jews less often in public, and, when he did, his language resembled Henry Ford’s complaints about “destructive” Jewish influence in the media, the stock market, and the Soviet Union. In Germany, where Jews constituted less than one percent of the population, “the Jewish question” seemed like a side issue. More relevant to most voters were the Nazi Party positions on hot-button issues that were popularized by more than 100 mass-market pamphlets. In addition to these rather humdrum works, other genres, like young adult fiction, campfire poetry, a humor magazine, songbooks, and picture albums, contributed to the party’s mainstream sheen in the early 1930s.
To voters who boarded the Nazi bandwagon after 1928, Hitler presented himself as a capable outsider who pledged to end legislative gridlock, repel Bolshevism, fight joblessness with massive government expenditures, and expand the pensions and universal health care that Germans took for granted. As unemployment climbed to 30 percent between 1928 and 1932, voters went to the polls in five national elections and propelled the Nazi Party from ninth to first place. In January 1933 the German President appointed Hitler chancellor. When arsonists set the Reichstag building ablaze, Josef Goebbels launched a propaganda blitz about a Bolshevik revolution that justified mass arrests of Marxist leaders. In mid-March, after more repression, tighter censorship, and negotiations with non-Nazi conservatives, the Reichstag voted to give Hitler dictatorial power.
Hitler consolidated Nazi power through bureaucratic incursions, media censorship, state-sanctioned concentration camps, banishment of rival parties and labor unions, and, in June 1934, the murder of violence-prone dissidents in his own party. By late 1936 investment in government programs (including rearmament) revived the economy, and diplomatic successes boosted national pride. Only then, when his popularity was secure, did Hitler escalate the so-called “legal” persecution of Jewish Germans through stringent educational and occupational quotas, “Aryan oversight” of Jewish-owned businesses, and prohibitions against “mixed-race” marriages.
In short, Nazi voters mostly got what they wanted – “Aryan” revival and a robust social state. Few of them anticipated the November 1938 pogrom and the exterminatory war against “international Jewry” that gathered force with the invasion of Poland in 1939.
Hitler ended the depression in Germany with the kinds of federal programs the GOP rejects. Although Trump promises “our historic, patriotic, and beautiful movement to Make America Great Again has only just begun,” his performance as president casts doubt on his ability to deliver material support to his core following, white families plagued by low wages, rising rents, and food scarcity.
President Trump defaulted on his economic populism and, instead, delivered a tax bonanza for the super-rich and bungled his response to the COVID-19 pandemic. In thrall to super-wealthy donors, most Republicans in Congress reject Biden’s infrastructure program, which is supported by 83 percent of Americans, as well as higher corporate taxes to pay for it, supported by 66 percent of Americans. Without alternative proposals, Trump can only goad GOP lawmakers, “don’t let the Radical Left play you for weak fools and losers!” After failing to overturn Biden’s election by bullying government officials and inciting mob violence, Trump falls back on promises to protect the status of white Christian Americans.
When Trump tells cheering crowds, “you are the real people, you are the people who built this country,” he endorses the systemic racism that remains in place half a century after civil rights legislation threatened to dismantle it. While denouncing federal regulation and social as well as infrastructure programs, Trump revs up white victimhood and relies on emotional gratification to sustain the loyalty of his aggrieved low-wage and middle-class base.
Biden bets that voters will reward his administration’s effective COVID response, rapid economic recovery, and financial support for low-income families, which disproportionately benefits red state voters. Unlike citizens in mono-ethnic Germany, whose political loyalties were influenced by class, white Americans’ allegiance is increasingly shaped by racial identity. And historians note that white Southern voters tend to value preserving their privilege above federal programs like Medicaid expansion that benefit everyone. The Biden administration delivers effective governance. The Trumpist GOP, increasingly, is rooted in systemic racism dating back 400 years. Given the Constitution’s skewed distribution of power and a Supreme Court likely to uphold voter suppression, elections will be close.
Hitler built a mass following by promoting economic and civic revival, not the anti-Semitism that power would allow him to indulge. Trump, an incompetent would-be Führer, has released white rage. If he were to vanish, his loyalists would remain on high alert.
Getting a COVID vaccine is now totally politicized. The anti-vaxxers, from rural towns to the halls of Congress, gleefully claim that their opposition to vaccination is a principled political position. I think it’s about fear. They are afraid of the shot.
There are good reasons to be afraid. The shot only hurts for seconds, but some people are deeply afraid of all shots. The prospect of getting a needle in the arm genuinely alarms about 1 out of every 20 adults, whose fears have been labeled “blood-injection-injury phobia.” Never mind that the actual experience of getting an injection involves no injury at all, just a pinprick.
Some people are specifically afraid of the COVID vaccine. While American medicine has done wonders for most of the population, with new wonders occurring every year, the medical establishment has been especially unkind, even deadly, to some specific groups of people. Black Americans have good reason to distrust the care they might get from the overwhelmingly white and historically racist medical profession, and that mistrust was documented long before anyone knew about COVID. In surveys, the proportion of Black Americans who said they “would definitely or probably get a coronavirus vaccine if it were available” has hovered below half. That legacy of systemic racism is so strong that it can overcome the knowledge that the COVID pandemic has had twice the deadly impact on African Americans as on whites.
People I know have inquired fearfully about side effects, which nearly everyone experiences. Feeling awful when you still have to perform, losing a day of work or child care that you can’t afford, is an obstacle. Assurances that getting the disease is much worse than any side effects don’t help much to allay this fear.
But the loud voices against the vaccine don’t proclaim their own fears. They say that the rest of us shouldn’t be persuaded to get the vaccine, that we are dupes to be vaccinated, that they are righteous patriots to resist the idea of a vaccination program. They obviously have much greater fears.
The anti-vaxxers say that the whole vaccination discussion is political, because for them the decision can have life-altering political consequences. Accepting the vaccine means rejecting their whole political orientation about fake science and fake news and Trump love. It means realizing that the voices they have trusted, Republican leaders and conservative media heroes, have been lying about the disease since the beginning. Maybe they have been lying about other things, too. Maybe about everything.
That is something to be afraid of.
The political fears of Republicans are being tested in real time as the Delta variant causes rising death rates in red states, and by the confusing argument among prominent Republicans. Mitch McConnell said, “These shots need to get in everybody’s arm as rapidly as possible. I want to encourage everybody to do that and to ignore all of these other voices that are giving demonstrably bad advice.” That bad advice comes from other Republicans, who invite anti-vaxx conspiracy theorists to testify in state houses, or try to ban the military from requiring vaccinations, as part of their new blend of anti-government propaganda and Trump cult membership.
Most Republicans who refuse to get vaccinated even think Trump was lying when he revealed that he and Melania were vaccinated. In this case, cognitive dissonance can be deadly.
Conservative politics revolves around promoting fear of imaginary enemies, who are often real people. They have been remarkably successful in driving the rational faculties of their white Christian supporters into submission to fears of mythical beasts. Q is the letter of fear, whether it is carried like the Olympic torch or hidden behind a veil of gibberish.
Fearful people commit dangerous acts against imagined enemies. The rest of us will play the role of those enemies in the minds of red America for some time.
The COVID pandemic is going to be the future baseline case study for the social impact of pandemics, and is unfortunately likely to be a cautionary tale, says a medical historian.
“As Shuler takes office, union women will look to her to champion their expansive visions and specific concerns; employers will continue to try to pit groups of workers against one another in their crusade to depress conditions for everyone.”
Democrats have not lost elections because of inflation, but because they have imagined austerity politics as the only solution to inflation.
A Texas appeals court decision may lead to a Supreme Court case that will test whether the current justices can accommodate a public respect for precedent with a political preference for outlawing abortion.
Conservative education reformers have hijacked the original charter school movement, stripping away labor protections and turning the management of public resources to private interests.
Haitians’ vulnerability to harm from natural disaster is conditioned by centuries of foreign interference and exploitation.
Failure to teach the ongoing history of Native people in the US validates the credo of the Carlisle Industrial School and other Indian residential schools to “kill the Indian, and save the man,” perpetuating a view that consigns Natives to the past and erases them from the present.
Both Amanda Knox, an American student accused of murder in Italy, and Crystal Lee Sutton, the southern labor organizer portrayed in “Norma Rae,” have challenged the way that Hollywood films have reinterpreted their stories for commercial gain.
“This two-track recovery, where protection against the disease mirrors wealth and power, unfortunately reflects a historical pattern that is several centuries old. The world’s only hope lies in breaking it.”
“In an era when rock drummers were larger-than-life showmen with big kits and egos to match, Charlie Watts remained the quiet man behind a modest drum set. But Watts wasn’t your typical rock drummer.”