January 2026 Archives - First Things
Published by The Institute of Religion and Public Life, First Things is an educational institute aiming to advance a religiously informed public philosophy.

Recovering a Christian World
https://firstthings.com/recovering-a-christian-world/
Fri, 09 Jan 2026
We’ve lost touch with reality. Technology is certainly a factor. A few years ago, people on airplanes began pulling down the window shades. The world outside, alive with light, interferes with the screens that have become the focus of our attention. The darkened airplane cabins epitomize much of our existence these days. We’re cocooned in a shell of technology.

Our flight from reality has other and earlier sources, however. As Henry Vander Goot details in Creation as an Introduction to Christian Thought, a great deal of modern theology has turned its back on God’s creative work. Theologians concede the task of analyzing reality to modern science, purporting to focus on the greater truth of salvation in Christ. The effect, Vander Goot argues, is acquiescence in a practical atheism beneath the veneer of a “personal relationship with Jesus.”

Vander Goot’s nemesis is Karl Barth. Vander Goot argues that the great Swiss theologian overdetermined his thought in reaction to National Socialism. By Barth’s reckoning, German Christians were susceptible to blood and soil ideology, because they had already been seduced by “natural theology.” The notion of a created nomos paves the way for the perversion of a Volksnomos. Barth’s solution was to absorb creation into salvation history. From the very beginning, God was “saving” reality from the dark void of “nothingness.” In Vander Goot’s reading of Barth, redemption in Christ becomes the be-all and end-all of Christian revelation. In practical terms, this “Christomonism” means that theology does not contend with secular academic culture’s statements about everything else, ceding the terrain. Again, a practical atheism reigns.

As Vander Goot recognizes, there were deeper forces at work in twentieth-century theology than the crisis of National Socialism. At the end of the eighteenth century, Immanuel Kant claimed to solve the difficulties in Enlightenment theories of knowledge, which swung uneasily between dogmatic rationalism and despairing skepticism. Kant shifted the quest for certainty away from our perception of things, urging us to place our trust in the ways in which we analyze and synthesize our mental images of things. Put differently, according to Kant, we don’t know things “objectively.” To use his terminology, we can’t know the “thing-in-itself.” Rather, we know things rationally, which is to say in accord with reason’s a priori patterns, which impute cause and effect and other relations.

 Kant developed an elaborate technical vocabulary to describe these forms of understanding, but we need not be detained by the details. More important is the overall effect. After Kant, philosophical emphasis fell on how we think rather than on what the world is like. Very quickly, philosophers questioned Kant’s presumption that reason’s pre-set patterns are universal. Having articulated the project of modern idealism (the mind frames reality), Kant opened the way for the many modern reflections on how our thinking is shaped by historical, social, and psychological factors. 

Karl Barth’s theology had tremendous influence in the middle decades of the twentieth century. He defended the “realism” of God’s revelation in Christ, which was an exciting break from liberal theology. But he did so without challenging Kant’s anti-realism, his strictures against knowledge of things-in-themselves. This accommodation of modernity was as much a source of Barth’s influence as his theological bravado.

Vander Goot offers an especially astute assessment of Jürgen Moltmann, who followed in Barth’s footsteps. Vander Goot wryly notes that “the future” is the topic of Moltmann’s book, The Future of Creation—not creation as we experience it here and now. In this conception of Christianity, the power of reality rests in the future, the coming realization of Christ’s lordship over all things. Theology is thus excused from the task of describing (and defending) the order and distinction of things established “in the beginning” by God’s creative Word. Again, the upshot may be lots of theological talk about Christ, but everything else falls under the authority of science. Practical atheism.

Vander Goot does not wish to gin up a revival of Aristotelian metaphysics. He endorses an aspect of Kant’s skepticism about knowledge of things-in-themselves, noting that we lack firm bases on which to determine which metaphysical schemes are correct and which are misguided. But we are not unmanned. Following Calvin (and the Dutch Reformed tradition), Vander Goot holds that Scripture provides Christians with a divinely authorized construction of reality, as it were, one that avoids metaphysical conundrums and vindicates things as they appear.

The inspired text authorizes us to trust our common-sense perceptions. The matter-of-fact tenor of the creation account in Genesis licenses us to reason about the nature of things without worrying about philosophical foundations. Moreover, the first chapter of Genesis does not encourage speculative efforts to get underneath or above what we experience. Vander Goot quotes Calvin to good effect: “It must be remembered that Moses does not speak with philosophical acuteness on occult mysteries, but related those things which are everywhere observed, even by the uncultivated, and which are in common use.” Moses was the first common-sense realist.

St. John Paul II often spoke of the anthropological crisis of modernity, our confusion about what it means to be human. He regularly cited a passage from the Vatican II document Gaudium et Spes, which teaches that we discover the truth of humanity in the person of Jesus Christ. I don’t think Vander Goot would disagree. But he recognizes that modernity has fomented a metaphysical crisis, a despairing sense that we cannot know the stable, enduring place in which we live out our lives. Modern science offers little consolation, because most people presume that its tacit metaphysics is a mute materialism, epitomized by Richard Dawkins’s metaphor of a blind watchmaker.

Vander Goot warns against too much talk of the doctrine of creation. By his reckoning, Scripture authorizes us simply to talk about things, all things, as Christians formed by the self-same Scripture. He recommends what the Church Fathers called Christian philosophy, by which they did not mean an academic discipline, but the wisdom that arises when the eyes of faith are trained on all things—and report what they perceive.

I would add liturgical worship to Scripture as the divinely authorized means by which our perceptions of reality are trained and sharpened. And I’m inclined to defend the precision found in the Catholic tradition of natural law, which Vander Goot criticizes. But we should take to heart the larger point of Creation as an Introduction to Christian Thought. Vander Goot often returns to an epigram: “Life is religion.” God asks us to live in accord with his creative will, here and now. This vocation will not get us to heaven. But it is religious nonetheless, because it cleaves to the order and purpose God has ordained “in the beginning.” That God has done more, that he offers us fellowship, despite our sin and defilement, is another matter, more important, to be sure, but distinct. And this offer does not efface or supersede the way things are. As St. Thomas teaches, grace does not destroy nature but perfects it.

Goddity
https://firstthings.com/goddity/
Thu, 08 Jan 2026

The Nativity of our Lord—born an infant, laid in a manger. It’s an utterly strange story: The Creator of all things takes the flesh of, and lives as, a newborn human child. Christmas songs often take up this bizarre conception. Consider Charles Wesley’s verse, from his hymn “Let Earth and Heaven Combine”: “Our God contracted to a span, / Incomprehensibly made man.” Or Christina Rossetti’s “In the Bleak Midwinter”: “Our God, Heaven cannot hold Him, nor earth sustain; / Heaven and earth shall flee away / when He comes to reign; / In the bleak midwinter a stable place sufficed / The Lord God Almighty, Jesus Christ.”

In recent times, this hymn tradition of wonder has turned the amazing statement into a principle: God likes “small things.” “God, immortal, invisible, / Love indestructible, / In frailty appears,” writes the popular British Christian musician Graham Kendrick. There is a subtle shift in conception here. Frailty is an attribute, not a person. And the attribute, then, in typical modern fashion, turns into a principle: Littleness—babies and stables—is like the law of gravity: It’s how God works. Indeed, we often hear that God not only works with “small things” but prefers them. God speaks in “the still small voice” (1 Kings 19:12), not just to Elijah, but to all of us and all the time, as so many sermons today tell us. The Universal Small Voice, the Universal Baby.

This is certainly a comforting notion. Most of us are basically nonentities on the stage of human history. It’s a condition of smallness that comes with being human: “What is man that thou art mindful of him?” (Ps. 8:4) Contemporary preachers quite like this principle on several counts. First, it’s an encouragement to the unexceptional or unsuccessful (most of us). God likes you just the way you are. Second, it encourages religious, moral, or political action. You may not be much, but like Gideon or the mustard seed, big things can come from small beginnings. Take heart and get involved! And third, the principle of divine preference for the small underwrites a certain kind of political action, one that labors on behalf of the poor and marginalized, the “small” in worldly economic terms. Now we know how to order our votes, policies, and protests! 

In the 1970s, “small is beautiful” represented claims about how social relations work best. I am myself persuaded by many of these functionalist appreciations of the small. But today’s apotheosis of the small does not concern quotidian judgments about how to organize society on a human scale. It’s about cosmic moral values. Call it “nanotheology.” The term has been used around the edges of religious discussions on nanotechnology. But I think “nanotheology” has a better use, one that signifies a kind of ethical metaphysics in which God orders the whole of reality in a way that values the little over the big. By whatever name, it is a pervasive theology in our time.

The notion that there is a reciprocal patterning between the Baby Jesus and the shape of the world has its origin deep in the recesses of ancient Greek philosophy and pre-modern Christian thought. Drawing on ideas from Plato on the correspondences between the human soul and true reality (found in the Timaeus), Stoics later developed the idea that human beings were “microcosms” of Nature, a miniature pattern of natural processes. That idea captured the Christian imagination, buttressed by scriptural notions of human beings created “in the image of God.” Gregory of Nyssa called man “a small world within the great,” containing within himself all the parts of creation, material, rational, and spiritual. And Maximus the Confessor outlined the specific microcosmic vocation of humanity, fulfilled in Christ as the true Microcosm. All this was intricately elaborated by medieval and Renaissance natural philosophy and alchemy.

I find the microcosm–macrocosm paradigm helpful, even invigorating in its impulse to explore the world’s astounding coherence in God’s creative hands. But it is also potentially misleading when it becomes a grammar, let alone an ethical code for human life: The Baby-in-the-Manger Grand Principle is a human invention. Christmas celebrates this baby (lowercase “b”) in this manger, just one, whose personal name is Jesus. And only he. No other baby, no other manger, no other place. Not “smallness” as a category, but that particular “young child” in this particular “house” (one could say, “with a given address”) into which three men “enter,” who “see” the child with “Mary his mother” (Matt. 2:11).

This is not a microcosm, the cosmos writ small, but instead what one might call “goddity,” expressed in the quip of the British journalist William Norman Ewer from the early 1920s: “How odd of God to choose the Jews.” That “given address” raises an obvious question. Why would God be so peculiarly particular in a universe of infinite possibilities? How can this singular infant move history forward when justice can only function well according to comprehensive and impartial (which is to say impersonal) laws? Yet God does one thing, chooses one thing, orders one thing. Many other things, too! But they are not all the same. Not only that, but some differences, or even one difference, one thing (small though it seems), bears the weight of all the others, big, small, and medium. Here is God—just here, in this one and only Israel, this one and only Jesus, singled out from all other peoples, births, and infants. How odd!

The reality of the true God (versus a god of principles) is not about having Jesus confirm or valorize one’s insignificance. Hence the great and to some extent novel thirteenth-century Franciscan focus on the Nativity and its rustic setting—crèches and the rest—was not aimed at generating the order’s principles of humility and poverty. Rather, it put before us God’s choice for the “poor one,” Jesus. The motive for the Christian ideal of poverty was not about embodying a general divine principle in favor of the small. The embrace of poverty comes simply from following this particular God-Man Jesus of Nazareth, the “elect.” Our goddity, then, is also about following just this Jesus. “Baby Jesus” is Jesus as a baby, not the Baby, who happens to be presented as Jesus.

The difference is profound. Goddity evokes oddness rather than following logic. To be odd is literally “not to fit”—to stand apart from a pattern or reasonable presumptions about cause and effect. The way of the Cross, the call to the lowest seat, the command to forgive enemies and to turn the other cheek do not make sense. They cannot be deduced from one man Jesus, showing how he “fits” into the flow of the universe. They could not be derived from the grand schema of Nature, nor scanned in the panoply of the heavens. Natural theology may well seek to discern some coherence between Golgotha and the empty tomb. After all, the odd choice of God to become a human infant is not arbitrary in any divine fashion: It is who God is. And thus the world that God has made must somehow reflect the same “who.” But the order here is paramount: The world follows the oddity of Jesus, not the other way around. To embrace Smallness as a principle is not to follow, but to leave Jesus behind, to trade the person for a concept. By contrast, if there is a pattern to the world, it is itself always “odd,” the fact of “just this person.”

That is why it is impossible to apply Jesus’s smallness to the world—to the world of politics and economics and architecture and nanotechnology. Rather, the world is applied, as it were, to the Jewish infant of Bethlehem, to his and his parents’ specific prayers and devotions in the Temple of Israel, to his hidden life in Nazareth in a home of workers and family with their savory or banal meals passing down the throat on Shabbat or otherwise, to a journey into the desert for baptism and temptation, to his particular reading of the Scriptures of Israel, to his wandering and teaching in Galilee, to the calling of a few named disciples, each with their own history, and to his explicit end “under Pontius Pilate” and to the counting of three days to his rising—God’s odd choice for his own life is just where the world is going. It is a wrenching conformance. We do not apply Jesus to the world, but the world to him.

“God chose what is foolish in the world to shame the wise; God chose what is weak . . . low and despised” (1 Cor. 1:27–28). How would we know such a thing as this? Only because God did indeed choose to be born in Bethlehem, and to be despised and rejected (Isa. 53:3). And yet all the while to be God, the maker of all things, who “incomprehensibly made man.” Only because we have heard that God-Man’s voice, “Come, follow me!” (Matt. 19:21), and in doing so, we discover a world where everything, large and small alike, is wondrously odd and divinely touched. Even the hairs on our head (Luke 12:7).

Shakespeare and the City
https://firstthings.com/shakespeare-and-the-city/
Wed, 07 Jan 2026

The Dream Factory:
London’s First Playhouse and the Making of William Shakespeare

by Daniel Swift
Farrar, Straus and Giroux, 320 pages, $30

Recently I checked into a pleasant, fairly sterile Marriott in Shoreditch ahead of my London debut as a playwright. Today, the streets are lined with Pret, Starbucks, and McDonald’s; but as I learned from Daniel Swift’s The Dream Factory, they once ran with butcher’s blood and were—as Swift tells it—where the young William Shakespeare apprenticed as a playwright, absorbing the shapes, smells, characterological quirks, devices, games, genres, and moods of the theater. Or rather of “The Theatre,” the playhouse built by the impresario, actor, businessman, and frequent con artist James Burbage, father of the famous actor Richard. The Theater “was Shakespeare’s workshop.”

Swift’s method is squarely in the tradition of Stephen Greenblatt and James Shapiro: a historicism with a suggestive, speculative bent and an appreciation for art and artists that doesn’t reduce them to minor nodes in the grimly quantitative unfolding of history.

For Swift, The Theater is both a physical structure and a metaphorical “dream factory”: a place where material labor, risk, and improvisation generated the conditions for Shakespeare’s genius. The Dream Factory reconstructs the building of The Theater, the many financial difficulties encountered along the way, and the schemes and ruses that Burbage improvised to keep it alive. Shakespeare, who as a young man arrived in London from the countryside for reasons we don’t fully understand, is a peripheral, ghostly, suggested figure. Swift asks us to imagine what it must have been like for the young man from Stratford to land here. And as we meet the carpenters, tenants, landlords, lawyers, scriveners, government officials, clowns, and groundlings who played a role in the building and maintenance and flourishing of The Theater, the young playwright is a variable in this historicist quantum physics—a particle that must be there to explain the rest.

As other critics have noted, The Dream Factory is a strange mixture of hard fact and compelling leaps. There is a case to be made, for instance, that the theatricals in A Midsummer Night’s Dream—Peter Quince, Bottom, and the gang—are drawn from Shakespeare’s youthful memories of The Theater. In this instance, the connection is compelling: The vision of the theater that comes through in Midsummer—along with Hamlet, Shakespeare’s most overtly theatrical play—is hardy, peasant, semi-literate. Theater is a working-class enterprise, and the men who build the stages might also become the men who act on them; there is a fluidity between the types of craft. The blood that ran under the doors of Shoreditch’s slaughterhouses and past The Theater is a good metaphor for the highly visceral language of Elizabethan poets and playwrights. The vivid, visual, earthy words of Titus Andronicus

Hark, villains! I will grind your bones to dust
And with your blood and it I’ll make a paste,
And of the paste a coffin I will rear . . . 

—did not body forth in a clean, sterile environment. The physical context of Shakespeare’s early career was bloody, malodorous, probably foul. You could smell and feel The Theater and its environs.

Swift’s book makes evident, as does Midsummer itself, that the high lyrical art of the theater—Shakespeare’s own art, as well as Marlowe’s and Jonson’s—was conceived of, practiced, and developed in homely, rough, dirty, unpoetic, and sometimes dangerous circumstances. The Polish Shakespeare scholar Jan Kott observed that a prince in Shakespeare always has “a foot in the mud, an eye on the stars, a dagger in his hand.” In Swift’s book, “mud” really means something more like offal, blood, sawdust, and excrement (both animal and human). Dreams emerged from the seething biological humus of early modern London and from the rickety economic engine of The Theater itself.

Early London probably best compares—of course, not in scale, but in density and rate of change—to a city like Mumbai in the early 2000s. Swift writes that the London of the 1580s was a place “of a thousand changes,” and he describes how they were made visible in Shoreditch:

The thoroughfare of Bishopsgate ran a gentle northeast past the City walls and out towards the parish of St Leonard’s, where the Holywell site was. Along the way it passed through Norton Folgate, where a dozen new tenements—three-storey, timbered, tall and narrow—lined the road by 1576. Development accelerated in the closing years of the 1570s. In March 1582 the road was re-covered in new sand. There were small houses on each side of the road, with slightly larger houses behind, and then the fields behind these. This is known as ribbon development, as it flows in long thin lines next to the main roads, but as the 1580s went on development started to creep into the surrounding fields. . . . It was common to build in wood but the big green fields that sat behind Bishopsgate—Lolesworth Field, Spital fields—were filled with brickearth. This was stripped and fired and because bricks were cheap and easily available this spurred the development further and faster.

The world of the High Elizabethan period (the late 1590s) and early Jacobean period (which encompasses Shakespeare’s work from Henry IV through Antony and Cleopatra) is comparatively stable. Swift’s hypothesis, however, is that the ramshackle, comparatively unregulated, and less imperial London of the 1580s made a deep cultural imprint on Shakespeare—and on his closest creative collaborator, the actor Richard Burbage, James Burbage’s son. Shoreditch was on the edges, but it was still a part of London: an unregulated “edge zone” ringing the city, where production and manufacturing were on the rise. There it was, able to attract audiences from the old medieval city without being under the nose of the royal authority.

As a young man, Shakespeare could apprentice at The Theater and be a part of a burgeoning London theater culture without being immediately subject to the oversight of official authorities or influential aristocracy. And though James Burbage did, in fact, eventually run into trouble with the government in 1584, The Theater’s physical distance from medieval London is probably the reason it survived. At a time of broad political instability, its position meant that Burbage could indefinitely forestall unwanted official attention.

Swift’s theory of the “edge zone,” its advantages and consequences, makes immediate sense to me, since I have staged theater underground in illegal squats or semi-legally in industrial lofts at the edge of Greenpoint. Writers like Shakespeare and Marlowe, in the 1580s and perhaps the early 1590s, incubated their theatrical talents in a wild and woolly environment in which there were few to say no to their best and worst ideas. This wouldn’t have been true by 1595, nor certainly by 1600 or 1605. The 1580s might be the best explanation for why we got Shakespeare when we did—and why no comparable writer emerged after 1600.

In highlighting the difficulty and dirtiness, the sweat equity of the environment in which Shakespeare learned his trade, Swift has designs on us: He hopes that we—writers, artists, directors, designers, lost and wandering souls of the very late rather than early modern—might be invigorated by the homology between our time and Shakespeare’s. We might see how relative chaos, uncertainty, and scarcity create opportunities for creative artists to demonstrate their anti-fragility: their capacity to thrive amid uncertainty. As Swift put it in a Reddit “Ask Me Anything” interview, he wanted to examine

the history of how people made a living in the creative arts: and, specifically, one particular author, who as well as being an extraordinary poet and playwright was a very canny businessman. He is, of course, William Shakespeare, and we might see him as the patron saint of freelance writers, or a 16th century gig worker in the creative industries.

As I’ve already suggested, Swift’s suggestiveness and historical extrapolations seem sound to me. Walking to the theater for the opening night of Doomers—again, through well-capitalized, safe, affluent parts of London—with The Dream Factory in mind, I felt acutely aware of the relationship between the materials of life and the materials of imagination. The landscape of the Western world in 2025 is largely bloodless and disembodied, but that sense of unreality is nevertheless productive of dramatic reckoning and critique. Through Swift’s lens, I could see my own choice to write a play about an AI company as a simple and direct response to my own environment and the anxieties of the age. To historicize oneself is to understand why poetry is necessary at all.

The point is not that poets should become absorbed by the material constraints and determinations in their lives, but that there can be too much or too little chaos, and too much or too little institutional structure. We might think of the relative boringness and safeness of theater culture, both in New York and in London in this decade, as a result of the pricing out of edge zones and a general cultural overemphasis on safety and inclusiveness—even safety and predictability in theater education itself. Shakespeare’s theatrical education was weird, dangerous, working-class. Many of its formal qualities would be codified by both Shakespeare the playwright and Shakespeare the entrepreneur later in his life, but not before. Nobody knew better, in other words. So there was room for something new to happen.

Back in Brooklyn, where I run a rather shabby theater company out of an industrial loft beneath a rave space, we contend with noise, neighbors, high rent, bad electrical wiring, bad plumbing, local bums and eccentrics, online trolls and haters. There are many days when I can’t conceptualize why I willingly endure the stress of keeping the Brooklyn Center for Theatre Research running, other than that I don’t see how to stop. Most of my energy, on a daily basis, is consumed by the business, not the art. And yet somehow, as if by magic, the art, the plays come, pushed into existence by the simple mechanism of monthly rent. There must be plays to perform, so plays are written.

I am not, as a playwright, in a position to assess Swift as a historian. It is fair to wager that, as with the works of Shapiro and Greenblatt, there are specious connections and questionable claims in The Dream Factory. But as the new historicism’s philosophical forebear, Michel Foucault, knew, historiographies are a covert form of myth and philosophy. And Swift, I think, in constructing a vision of The Theater beam by beam, is teaching us that the business of building a theater, or any other material space for collaboration among artists, is the greater part of what dreams may come.

Practitioners of Infanticide
https://firstthings.com/practitioners-of-infanticide/
Tue, 06 Jan 2026

A physician declares his dying patient—a seven-pound baby boy—“dangerous as dynamite,” a “menace to society.” A routine medical procedure could save the boy’s life, but he was born deformed. Later reports will find that most of the deformities are cosmetic: He is missing his right ear, and the skin on his shoulder is defective. But, critically, there is a blockage at the end of his intestine.

This last seals the boy’s fate. There will be no lifesaving operation. The crying baby with chubby legs and wide-open blue eyes, facing the flashbulbs of the press, is instead to be starved and dehydrated to death. It is an act of the “kindest mercy” for the child to be “put out of its misery,” the physician has told the parents. For the next decade, in newspaper columns, in public speeches, and even in a feature film that he will write and star in, the physician will present his patient as an exhibit in his argument that compassion and the scientific method compel American medicine to bring about rational ends to “lives of no value.” The editorial board of the New Republic, Helen Keller, and many leading physicians will agree with him.

The Bollinger baby—christened by his relatives Allen after his father, yet unnamed in the press and even in modern accounts of the tragedy—became the first publicized case of a newborn in America forced to die because of his disabilities. The year was 1915. The physician became a celebrity. Decades before Jack Kevorkian, decades before either abortion or assisted suicide was legalized anywhere in the United States, there was Harry Haiselden, the surgeon and showman at the head of the German-American Hospital in Chicago.

No jury would convict Haiselden. He insisted that he treated his “defective” infant patients as he did “because he love[d] them.” He loved them to death. Sometimes he actively accelerated their deaths: He removed the umbilical ligature of one patient, leaving him to bleed to death, and prescribed potentially lethal doses of opiates for another. It was an ambivalent love. “Horrid semihumans drag themselves along all of our streets,” Haiselden warned at the end of his autobiography. “What are you going to do about it?”

It is tempting to dismiss Haiselden’s odious question, precisely because it is odious. That would be a mistake. Today Haiselden is achieving a posthumous conquest of the medical field. His victories are not just in Canada, where the Quebec College of Physicians and many clinical ethicists have urged Parliament to legalize the euthanasia of disabled newborns, or in the Netherlands, which under the infamous Groningen Protocol has been euthanizing “neonates” with terminal illnesses for two decades.

It is in less likely places that Haiselden’s victory is taking shape, pitting parents against the physicians of their disabled children—parents like Krystal VanderBrugghen, who alleges that her child with Down syndrome received inadequate, discriminatory, even life-threatening medical care, in “the best children’s hospital in the world.” Stories like hers have been a century in the making.

The best children’s hospital in the world for 2026, according to Newsweek and Statista, is the Hospital for Sick Children (SickKids) in Toronto. I walked into SickKids in the summer of 2025 to see Krystal, a “mama bear” according to one of her friends, and Mo, who asked me not to use his real name because one of his children is receiving treatment at the hospital, and he fears retaliation. Krystal befriended Mo’s wife in the coffee lounge over the summer, and soon Mo was friends with Krystal, too.

We decided it would be best to speak in the wing of the pediatric unit, whispering whenever a nurse walked past. Mo and Krystal both credit religious faith—Mo is Muslim, while Krystal and her husband Jeremy are Canadian Reformed Christians—with fortifying them to bring children with Down syndrome into this world. Mo said his wife felt guilt-tripped by their healthcare team, who asked her immediately what quality of life she, Mo, and their three other children would have if she gave birth to a child with Down syndrome. “At the end of the day,” Mo told me, “I am not God. I cannot decide who lives, who doesn’t live.” Now, with his child with Down syndrome already five years old, the experience of raising him is “probably . . . the best thing in my life.” Krystal experienced the same pressure and reward. She was advised three times by clinicians that she could “terminate at any point and start again.” She didn’t want to start again. She wanted her child to be born.

On December 4, 2023, eighteen months before I met Krystal and Mo, Veya was born at McMaster University Medical Centre in Hamilton, Ontario. Like many children with Down syndrome, she had a cardiac defect, which in her case meant that she was in active heart failure for the first four months of her life. She needed cardiac surgery, which required her to be transferred to SickKids Hospital in Toronto. It is an hour-plus commute for Krystal on “a good day,” especially since the AC in her car stopped working. It was worth it; the surgery worked. “It’s funny,” said Krystal. “They try so hard to end this life, but the second she’s born, they do everything they absolutely, possibly, humanly can do to preserve her life and get her here to get her heart repaired. But once we started getting involved with GI [the gastrointestinal team] and she started having more problems, that’s when it was like they drew the line.”

A month after her heart was repaired, Veya developed an undiagnosed liver disease, causing her bile to be thick. She underwent liver surgery. This time, the surgery didn’t work. Veya desperately needed a liver transplant, and although the rest of her individualized specialty care team approved her for a liver transplant, the transfer team denied her this lifesaving treatment. Krystal still doesn’t know the reason. Veya needed to stay in the ICU.

Without a liver transplant, Veya’s immune system was vulnerable. I asked how Mo was recruited to help with Veya’s medical journey. “I invited him into my meetings,” Krystal said. Mo continues for her: “Yeah. I hear stories. Krystal tells me what’s happening. She’s gone through a lot, like, mentally. I’ve lived here almost a year. That’s hard. So God knows what she’s going through, right?” Mo’s child was being treated for leukemia in the hospital, and he had no complaints against SickKids. “It’s interesting because I’m seeing two sides, right?” said Mo. “I’m seeing my side and then I’m seeing her side. Two different teams, but from her side, Krystal’s team, and I’ve used this word a lot, I’ve been baffled on what’s happening.”

The quality of care for Veya dropped precipitously, Krystal and Mo believe. Shortly after Veya was denied her liver transplant, while she was unattended, she received a potentially lethal amount of potassium, ten times her usual dosage. Her heartbeat exceeded 350 beats per minute. The hospital told her that the overdose “passed through four pharmacists and two nurses,” Krystal said on a recent podcast. “We’re really sorry but it was around Christmas time,” was the only excuse she received from the hospital about the incident that nearly killed her daughter.

SickKids declined to answer my questions about the incident and about whether any steps were taken to prevent a similar incident in the future. In an emailed statement, a spokesperson commented: “We cannot comment on individual cases due to patient privacy. . . . Decisions about care for each child’s unique case are guided by clinical expertise, ethical standards, multidisciplinary collaboration, and partnership with families.”

At first, Krystal believed the potassium overdose had been an innocent mistake. Now she is not so sure. At several points, the physicians in the ICU have seemed to “[want] to free up a bed spot and rush her out because she’s been here for too long.” Three days into her care, a doctor said that if Veya needed a ventilator, she would not receive one, despite being on full code, because “it would do more harm than good.” (Due to a 2019 court decision in Ontario, physicians need not seek consent for a Do Not Resuscitate order, or even inform patients that one has been placed against them, if their care is deemed “medically futile.”) Krystal had to enlist the patient relations department in order to get her daughter’s DNR lifted.

She felt coerced into giving up. “SickKids is very ableist,” Krystal told me. Another ICU physician put his hand on her shoulder and said, “You know, Mom, it’s been such a long road for you guys. You can admit when enough is enough, and you can let someone die with dignity.” Was the overdose intentional? An effect of neglect? Or a simple accident? Whatever the case, it happened, Krystal and Mo believe, only because Veya is disabled. At one point, when she asked whether Veya was being denied a transplant because of her Down syndrome, a transplant physician answered, “Mom, I think you know the answer to that deep down in your heart.” She heard similar comments from other physicians. “[Another] ICU doctor said, ‘We look at Veya, all that she is and all that she was born with,’” Krystal said. “And I said, ‘What, a head, two arms, two legs?’ I’m like, ‘Yeah, she came with a cardiac defect. That’s fixed. That’s not causing the problem. Or are you isolating her extra chromosome here?’”

The accidents—if accidents they were—continued, always occurring when Veya was by herself. “Every time I step away, something happens,” Krystal said. Mo interrupts: “Twenty-minute lunch break.” Krystal continues: She went on a “twenty-minute lunch break, and they shut off a medication that they knew from a couple days ago she had withdrawal symptoms from.” Another incident occurred when Veya was struggling to breathe. It was a code blue, but the crash team, instead of rushing to help Veya, walked slowly toward her. Krystal had to raise the alarm herself.

Krystal and her husband Jeremy felt that Veya was unsafe at SickKids. The Delta Hospice Society and the Euthanasia Prevention Coalition organized a round-the-clock watch over Veya. But SickKids began to clamp down on the visits. It also banned Veya’s general pediatrics team from visiting unless they first asked for permission. When I spoke with Krystal, she was in the last steps of organizing an ambulance to move Veya to another hospital.

At the same time, Veya was meeting her development milestones. She liked geese and her bravery beads; she played with her brothers and sisters. “That’s the thing,” Krystal told me. “This ICU admission, she’s actually met three milestones, or two—I guess popping [your teeth] is not really a milestone. Maybe it is, but I’m like, you learn to smile. You learn how to coo. You just can’t make noise because she’s got the [respiratory] tube. But then, you popped your first tooth. I’m like, look at this! This isn’t a kid on death’s door. But they’ve been treating her like she’s on her way out and palliative.” When SickKids was handing Veya over to another hospital—at the time, Krystal was considering either McMaster or a hospital in the United States—SickKids said that she was not on palliative care.

As is the case throughout Canada’s healthcare system, it is difficult to find conclusive evidence of neglect or wrongdoing when medical care is subpar. But SickKids Hospital is no stranger to euthanasia. Just two years after Canada legalized medical assistance in dying (MAID)—a euphemism for euthanasia—a panel inside SickKids Hospital, co-chaired by the director of its department of bioethics, envisioned MAID for minors without the need for parental consent, a practice unheard of even in the Netherlands, which permits euthanasia for “mature minors.” (Currently, MAID in Canada is legal only for those over the age of eighteen.) The policy was written to address the need for “MAID-providing institutions to reduce social stigma surrounding this practice.” SickKids declined to answer my questions about this policy, including whether it is in force today.

SickKids has historically been at the forefront of letting children die of their disabilities, especially children with Down syndrome. A study found that between 1952 and 1971, of fifty children with Down syndrome and blocked food passages, twenty-seven were left to die of their obstructions instead of receiving routine medical treatment. In 1979, the institution was lambasted by the Canadian Psychiatric Association, which warned that “this increasingly common act in medical practice is being vigorously promoted by able and influential advocates within our profession and within our society at large,” despite the fact that it was likely illegal without a court order.

Between June 1980 and March 1981, a spree of murders struck SickKids Hospital. Over the course of several nights, thirty-six babies and infants died, many of them due to an overdose of digoxin, a drug used to control heartbeats and often used for assisted suicide in the United States. A judge confirmed that at least five of those deaths were murders (though the defense believed the number was closer to seventeen), and yet the judge at the preliminary hearings absolved the only suspect, a pediatric nurse. No one else was ever charged, despite statistical evidence from the U.S. Centers for Disease Control that tied another nurse to the deaths.

Two years later, with the scandal refusing to die down, a Royal Commission of Inquiry investigated the deaths. Richard Rowe, the chief cardiologist at the hospital, was asked by the commission whether he disapproved of so-called mercy killing. His response: “almost.” He explained that since the thirty-six babies had a “minimal chance of surviving,” the motive behind their deaths might “perhaps be that of mercy-killing.” It was not true: Many of the children had been likely to survive. Some were barely sick. Adrian Hines described his son Jordan: “He entered the hospital a healthy baby with a touch of pneumonia. He didn’t even have a heart condition.” An autopsy revealed that Jordan had died of an overdose of digoxin, a medication he was never prescribed.

The main medical associations in Canada declined to condemn the homicides at SickKids. The president of the Ontario Medical Association claimed, “I don’t know if withholding surgery is legal,” while a spokesman for the Canadian Medical Association emphasized that the CMA had revised its ethics code “to allow patients to die in dignity.” When asked by investigators whether the dramatic increase in the number of deaths in the hospital’s cardiac ward could have been caused by euthanasia, the chief cardiologist was vague. “[Euthanasia] may have come into those discussions. We talked of many things and we didn’t keep notes.”

SickKids declined to answer my questions about its history concerning infanticide and discriminatory treatments, the institutional norms that might have enabled them, and what steps, if any, were taken to prevent similar incidents going forward: “At SickKids, we are deeply committed to upholding our core values of compassion, dignity, respect, and equity in every aspect of patient care. Our staff bring extraordinary skill, judgement, and dedication to their work to ensure that every child and family receives the highest standard of care, regardless of diagnosis or ability.”

SickKids Hospital is not the only institution that has allowed children with disabilities to die of treatable illnesses. The practice accords with the direction of the field of medicine over the past century. The preventable deaths of children with disabilities occur, for the most part, without media interest. To understand why the law and societal outrage have failed to stop this practice, we must trace the history of child murder in North America since 1915.

And we must discard a fiction: that infanticide, being illegal, was not historically practiced by physicians in North America. As the medical historian Martin Pernick stresses, “the history of infanticide by lay people—parents, midwives, and governments, dating back to ancient Greece—was widely discussed in these debates [over selective non-treatment of disabled children]. But the role played by past American physicians in such decisions is now virtually unknown.”

“Therapeutic homicide”—a term used in an editorial in the Canadian Medical Association Journal a decade ago, before Canada legalized euthanasia—is, as a rule, practiced by physicians before receiving legal sanction. Under these conditions, it is uncommon but not rare. Its fatal logic is the starting point for the devaluation and killing of people with disabilities.

Even as Roe v. Wade was being argued before the Supreme Court in 1971, Who Should Survive?, a film produced by the Joseph P. Kennedy Jr. Foundation, dramatized the decision to let a baby with Down syndrome die of a treatable intestinal blockage. The film was based on real deaths at Johns Hopkins University Hospital. Over the course of fifteen days, as the medical team and parents wait, the child dies of starvation.

Near the end of the film, a litany of questions is posed to the audience. “Do all children have a right to life? Who should protect the child’s rights? Do physicians have a duty to preserve life? Does mental retardation diminish the right to life?” The film is quick to note how the excruciatingly slow death of the child affects the nurses and the physician. The child’s interests are not considered. The film merely asks questions, as if doing so were nonpolitical: “The film you are about to see has a beginning and an end—but no conclusion, because it provides no answer to the questions it poses.”

Left unsaid is that a child without Down syndrome, presenting the same physical defects, would never be left to die. A non-disabled baby with a treatable life-threatening illness would receive treatment, even if the parents and medical team disagreed. Anything else would be medical malpractice or child abuse. By contrast, those deemed “profoundly disabled,” whose lives have “no value,” receive no protection.

The slippery slope that Roe’s critics warned of in fact happened in reverse: first infanticide, then mercy killing, and finally eugenic abortion on demand. Legalization followed clinical practice, not the other way around. The logic continues to prevail in the courts, whether in America, Canada, Colombia, or the Netherlands: If passive euthanasia is valid, why not active euthanasia? If prenatal abortion, why not “post-natal abortion”? If disability is a qualifying condition for assisted death, based on the empirical determination of medical experts in view of “medical futility,” then to what extent is consent necessary or even desirable?

Yet the justification for these acts is offered only after the fact. Ronald Reagan’s 1983 “Evil Empire” speech is more often invoked than read, and most people would be surprised to learn that its early paragraphs are not about the USSR, or even communism: They are about evil at home. Every year, publications such as the Washington Post and the New York Times were reporting that thousands of babies were being left to die from hunger or treatable medical conditions—for the sole reason that these babies had disabilities, whether terminal or not, and were deemed “defective.” This reporting sparked the Baby Doe laws after Reagan’s surgeon general, the pediatric surgeon C. Everett Koop, denounced the nontreatment of viable babies as contrary to medical ethics. In March 1983, the Department of Health and Human Services, under the auspices of a federal law that protected people with disabilities from discrimination—a precursor to the Americans with Disabilities Act—issued a regulation to stop selective nontreatment and starvation. Yet the courts overturned the regulation, as Congress had not passed the requisite legal protections. In response, Congress passed a weakened amendment to the Child Abuse Prevention and Treatment Act—which the American Academy of Pediatrics continues to claim is irrelevant to physician or institutional standards. So the legal protections enacted by Reagan lapsed.

The practice of selective nontreatment based on disabilities has continued. Though as a medical option this practice is presented as rational, the people it kills are often those whose conditions the media is simultaneously fear-mongering about—whether trisomy 13 and 18, HIV, or thalidomide poisoning. One senior medical director at a faith-based perinatal center in New York told me, “Back in the early nineties when I started on the faculty, the chairman [of pediatrics] at the time said that although other hospitals were starting to withhold nutrition and hydration from children with terminal illness, that’s something that would never, ever happen at this facility. But you know ten years later, into the early 2000s, it was something that when the parents asked, the ethics committee would often approve it.”

Today, in most facilities across the United States, the ethics committee would not need to be involved. Though neonatologists are split on the ethics of withdrawing food and water from newborns—surprisingly, more so for the terminally ill than for the disabled—a recent survey of neonatal intensive care units (NICUs) published in the Journal of Perinatology found that a majority of NICUs in North America now practice “withdrawal of artificial nutrition and hydration” for newborns. Of those facilities, more than 80 percent reported not requiring an ethics consultation before ceasing all food and water; virtually none had a policy on which diagnoses would qualify a patient for withdrawal of nutrition and hydration. The American Academy of Pediatrics now classifies feeding children as morally optional.

Since this is how disabled children are treated, it is no wonder that the medical field is nonchalant about the fates of children born alive after botched abortions. In January 2019, Ralph Northam, a pediatric neurologist and then-governor of Virginia, caused a furor by describing what happens when infants survive third-trimester abortions:

When we talk about third trimester abortions, these are done with the consent of, obviously the mother, [and] with the consent of the physicians, more than one physician, by the way, and it’s done in cases where there may be severe deformities, there may be a fetus that’s non-viable. So in this particular example, if a mother is in labor I can tell you exactly what would happen. The infant would be delivered; the infant would be kept comfortable; the infant would be resuscitated if that’s what the mother and the family desired, and then a discussion would ensue between the physicians and the mother.

To ask whether this is legal is beside the point. It does not need to be legal to be practiced, since any law is in force only to the extent that it is followed.

There is an irony to the story: In its final summer, the Biden administration, hardly a pro-life administration, quietly reintroduced some of the Reagan administration’s protections. Section 504 of the Rehabilitation Act of 1973 now explicitly prohibits healthcare discrimination based on disability. Thus a newborn with Down syndrome and a heart problem must, by law, receive whatever “medical treatment is provided to other similarly situated children.” But this legal requirement is not enforced. Most hospitals, despite the fears of ethicists, made no changes to their policies, and the media failed to report on the regulation. So the regulation became moot. Last year, a study reported that between 2019 and 2022, adults with both Down syndrome and Covid were more than six times more likely than patients with similar comorbidities to have had a DNR placed on them—a rate far higher than for any other illness or disability, including any terminal illness. Yet there was no reckoning among healthcare clinicians or institutions. The soul of medicine does not easily change after a century of practice.

It is impossible not to wonder what would have happened to medicine if the Bollinger baby—Allen—had not been killed by his primary physician. After all, Allen was nearly saved. The Chicago Tribune reported on the figure whom Harry Haiselden called a “wild eyed, interfering, hysterical woman,” a certain Catherine Walsh of 4345 West End Avenue, who attempted to convince either the mother of the baby or the physician to spare the boy after the medical commissioner of Chicago had failed to do so.

It is an astonishing piece of journalism, made even more so by the fact that Walsh’s voice was uncommon in contemporary debates over the Bollinger baby. Though opposition to Haiselden’s actions was voiced, mostly by Catholics quoted in the press (with opinion more divided among secular, Jewish, and Protestant experts), prominent figures who might have advocated for the child instead condemned him. Helen Keller, otherwise an advocate for the disabled, endorsed Haiselden’s work as a “service to society,” since “no one cares about that pitiful, useless lump of flesh.” The Baltimore Catholic Review, published under James Cardinal Gibbons, claimed that “no one could be blamed if the child was let die according to nature” and supported Haiselden’s actions.

Yet Catherine Walsh, by her own account, nearly succeeded. She sought and received permission to baptize the child, although the child, unbeknownst to her, had already been christened. All we know is that Catherine belonged to a local Catholic church; she was likely the mother’s friend. Her comments to the Tribune were quoted in full:

I went to the hospital to beg that the child be taken to its mother. It was condemned to death, and I knew its mother would be its most merciful judge. I found the baby alone in a bare room, absolutely nude, its cheek numb from lying in one position, not paralyzed. I sent for Dr. Haiselden and pleaded with him not to take the infant’s blood on his head.

It was not a monster—that child. It was a beautiful baby. I saw no deformities. I patted him. Both his eyes were open, he waved his little fists and cried lustily. I kissed his forehead. I knew if its mother got her eyes on it she would love it and never permit it to be left to die.

“If the poor little darling has one chance in 1,000,” I said to Dr. Haiselden, “won’t you operate and save it?” The doctor laughed. “I’m afraid it might get well,” he replied.

As I left the hospital a man said to me, “I guess the doctor is right from a scientific standpoint. But humanly he is wrong.” “Thank God,” I answered, “we are all human.”

It took five days for Allen to die. Anna Bollinger, the boy’s mother, never recovered from the death of her fourth child. She never saw the child; the medical staff would not permit it. Even amid the savagery of the First World War, Anna’s death almost two years later was front-page news across the country. Her husband, Allen Bollinger, told the Associated Press: “After the baby’s death, my wife fell into a settled melancholy and wasted away. If ever a woman died of a broken heart she did.”

It is callous to claim that a moral lesson can be gleaned from this level of suffering. Yet in 2025, in the lobby of SickKids, the best children’s hospital in the world, I found myself walking away from Mo and Krystal repeating Catherine’s words from 1915: “Thank God, we are all human.”

On August 1, five weeks after the last time I spoke with Krystal, Veya, whose middle name was Hope, died after a nineteen-month fight. Just like Allen, Veya was the fourth child. Her parents managed to move her to another hospital, the same hospital in which she was born. “Through the incredible team at McMaster [Hospital], God brought deep healing to our hearts from the trauma we left Sick Kids with,” Krystal wrote on her Instagram account. “Her last days were tender, peaceful, and full of love.”


Image by Lyfhospital, licensed via Creative Commons. Image cropped.

The post Practitioners of Infanticide appeared first on First Things.

The Lessons of Woodrow Wilson https://firstthings.com/the-lessons-of-woodrow-wilson/ Fri, 02 Jan 2026 06:00:00 +0000 https://firstthings.com/?p=118773 In his excellent book about our troubled times, Democracy and Solidarity: On the Cultural Roots of America’s Political Crisis, James Davison Hunter notes that...

The post The Lessons of Woodrow Wilson appeared first on First Things.

In his excellent book about our troubled times, Democracy and Solidarity: On the Cultural Roots of America’s Political Crisis, James Davison Hunter notes that enduring solidarity rests on common affirmations and shared loves: “Solidarity . . . is a richer term than mere consensus.” It arises when people recognize in each other similar sentiments and attitudes, which produce “a sense of ‘we-ness’ or ‘usness.’”

Hunter argues that America’s “we-ness” and our common affirmations and shared loves have diverse sources. He allows that John Locke and other figures from the British Enlightenment influenced the Founders in decisive ways. But he also notes the enduring role of Protestant Christianity, especially dissenting forms such as Puritanism and Methodism, as well as revivalist and populist Christianity. By his reading, these forms of Protestantism lent themselves to political syncretism, which he calls America’s “hybrid-Enlightenment.” Liberal rights were advanced to protect individual freedoms, but their formulation was not accompanied by a rationalist assault on religious faith and traditional mores. Hunter cites Revolution-era luminaries who firmly believed that America would be a vehicle for God’s consummation of his millennial plans for humanity—and “the home of free government, reason, progress, and the ‘rights of man.’”

In Hunter’s telling, America’s hybrid-Enlightenment underwrote an ongoing expansion of solidarity. What began as rights accorded only to property-holding white men came to encompass wage-earners, blacks, and women. An American “we-ness” that was originally limited to Protestants was expanded to include Catholics, Jews, and religious outsiders such as Mormons. These developments were contested. But the rationales for reform came from within the American tradition, which is why we can tell a story of development, not discontinuity.

Democracy and Solidarity offers an American history in which solidarity is expanded and those once excluded are included. But this is only the liberal half of the story. Our history has also been marked by periods during which illiberal methods were employed to renew and buttress solidarity. The political response to the upheavals of the industrial revolution was one such period. In that era, Woodrow Wilson played a central role.

Wilson is a hate-figure among conservatives. Along with Franklin Roosevelt, Wilson is credited with the erection of the dreaded “administrative state,” which is said to betray the great American tradition of freedom. Conservatives accuse Wilson and Roosevelt of favoring the direction of society from above, inaugurating an illiberal tyranny of technocrats.

There’s something to these criticisms. As a young man, Wilson made his reputation with an influential book about America’s constitutional system, Congressional Government (1885). In that volume, Wilson bemoaned the immobility of the committee-driven process of legislation. He urged a more dynamic and energetic form of governance, the better to address the new problems and challenges facing the American nation. In practice, pivoting away from the checks and balances that limit government meant empowering the executive branch, allowing the president to serve as the functional leader of the legislative branch, as the prime minister does in the British parliamentary system.

When Wilson entered politics, first as governor of New Jersey and then as president of the United States (first elected in 1912), he did not attain his goal by altering the Constitution. Instead, Wilson established himself as the undisputed leader of the Democratic Party. He pushed through legislation to establish the Federal Reserve, passed a federal income tax, created the Federal Trade Commission to enforce antitrust laws, regulated child labor, and set an eight-hour workday for railroad workers.

These measures addressed the economic problems of his day. Many were denounced as violations of one or another aspect of the liberal principle of freedom of contract, which is the foundation of a free market unhindered by governmental intervention and regulation. (The Supreme Court’s 1905 landmark decision in Lochner v. New York upheld this liberal principle.) Moreover, Wilson achieved these legislative successes because he often appealed over the heads of legislators to the American people. And as the Founders knew well, direct democracy is not a friend of liberal principles.

Wilson was not alone in his progressive zeal for reform. Had Theodore Roosevelt prevailed in the three-way election of 1912, he would have pursued his own version of muscular executive leadership. Both men were by nature attracted to power. But this does not explain their roles in our political history. The American people were anxious about plutocratic control. In the early twentieth century, trusts such as Standard Oil and U.S. Steel controlled entire industries. Labor unrest roiled the country. Left-wing political radicalism spawned acts of terrorism. Mass migration was transforming the demography of the country. And the extraordinary growth of American industry propelled the nation to the forefront of global affairs, a role difficult to square with older American traditions.

Put simply, the nation was rich but ill at ease, prosperous but at odds with itself. With his energetic and illiberal methods and programs, Wilson sought to stabilize and consolidate the country. And he largely succeeded. None of his signal achievements were reversed when Republicans assumed control of government in 1920. They persisted as central elements of America’s unique approach to regulated capitalism, an approach that renewed the social contract in the twentieth century and thus ensured that a broadly liberal approach to politics and economics would endure. Something similar can be said of Franklin Roosevelt, a demagogue in the mold of Wilson who likewise used illiberal methods to stabilize and unite the country in a time of greater crisis.

Wilson and FDR dominated the political culture of America in the first half of the twentieth century. Unlike the figures surveyed by James Davison Hunter, they do not fit into the liberal story of America. They did not expand the circle of inclusion into the American promise of freedom. (Wilson was a reluctant supporter of the Nineteenth Amendment, which accorded to women the right to vote; Roosevelt did not act to ensure civil rights for blacks.) Their vocation was different. They sought to renew American solidarity, which required taming and restraining certain kinds of freedom, especially freedom of contract. (Roosevelt intimidated the Supreme Court to secure the overturning of Lochner.) In a word, Wilson and FDR administered strong doses of illiberalism.

We are living in a similar period. Immigration, economic vulnerability, globalization—the American people are anxious. Once again, a powerful, energetic executive presses against liberal norms, as did Wilson and FDR. I don’t wish to commend any of the particular measures taken by the present administration, although some strike me as wise and necessary. My point is more fundamental. We do not need to read Carl Schmitt or Charles Maurras to meet today’s challenges. We’ve been here before as a nation, and we have had statesmen who addressed liberalism’s failures so that the American ideals of liberty could be renewed and reshaped for new circumstances. In 2026, we would do well to study the methods of Wilson and FDR and weigh their achievements as well as failures. For we need something of their innovation and daring to navigate our present crisis.

The post The Lessons of Woodrow Wilson appeared first on First Things.

Caravaggio and Us https://firstthings.com/caravaggio-and-us/ Tue, 30 Dec 2025 06:00:00 +0000 https://firstthings.com/?p=118584

Nicolas Poussin, the greatest French artist of the seventeenth century, once said that Caravaggio had come into the world to destroy painting. ­Poussin’s concept of beauty led him to depict whatever he painted in the best possible light. ­Caravaggio, by contrast, sought to represent nature as he saw it. In other words, Caravaggio favored truth over beauty. It was a revolutionary approach in his day. But was it a sensible one? ­Caravaggio was one of the most influential painters of the seventeenth century. Was he a good influence?

Today Caravaggio is the most popular of the Old Masters. We celebrate him for the darkness, turbulence, and occasional violence of his pictures, and we are intrigued by his life story, which reads in parts like a trashy nineteenth-century novel. He exemplifies a twentieth-century neo-Romantic vision of the artist as a loner, rebel, and outsider who respects neither laws nor authorities, shuns restraint, and transgresses boundaries, even to the point of committing serious crimes. Yet there is also a mystical element in Caravaggio's work, one that transcends the sleazy air impossible to ignore even in certain of his altarpieces. Caravaggio is not merely a sinner reveling in excess. Not all the time, anyway.

Even if we think it pompous to moralize about Caravaggio’s behavior, he still confronts us with an approach to painting that is self-evidently iconoclastic. In some ways he turns his back on the entire Renaissance movement, which had developed over the previous few centuries and culminated in the achievements of Raphael and Michelangelo. When Poussin said that Caravaggio had come into the world to destroy painting, he meant that he had set himself against the Renaissance.

We had better clarify what this means. Our narrative of a “Renaissance” in Italian art originates with Giorgio Vasari and his monumental series of biographies, The Lives of the Most Excellent Painters, Sculptors and Architects, first published in 1550. To Vasari we owe the notion of a “rebirth” of art that began in the thirteenth century and progressed and advanced until it reached perfection at some point in the mid-­sixteenth century. Was it possible for art to improve still further after the era of Michelangelo, or was it doomed to stasis, repetition, and decline?

Many historians regard “the Renaissance” as an unhelpful concept, but it may be useful for us as we try to understand what it was that Caravaggio reacted against with such force. For many contemporary historians, “the Renaissance” begins, not with artists, but with the activities of medieval scholars. In this view, the Renaissance should be dated to some point in the thirteenth century and seen first of all as a literary and philological movement to restore the precision, accuracy, and range of the Latin language, as it had been spoken and written in antiquity. This quest to revive the glories of ancient Rome inevitably spread beyond the work of writers and scholars.

The greatest artists of the Renaissance aspired to equal the ancient sculptors, architects, and painters they read about in Pliny the Elder’s Natural ­History (an encyclopedia left incomplete upon its author’s death in October 79, during the eruption of Mount Vesuvius). Architects and sculptors could at least pick through Roman ruins, which gave them hints of what Pliny was talking about. Painters, by contrast, had no examples to imitate. They had to imagine what classical-era painting looked like by extrapolating from Latin ­descriptions of long-lost pictures, then supplementing those words with the evidence of Roman portrait busts, sculpted sarcophagi, and the odd imperial coin.

Somehow, Renaissance artists succeeded in this mad project. Witness the most impressive surviving example of ancient Roman painting, the frescoes from the Empress Livia’s garden room. They were discovered in 1863 and can now be seen in the Palazzo Massimo alle Terme. They are exquisitely beautiful. But compare them to wall paintings by Botticelli, or Ghirlandaio, or the other great painters who were active in Florence during the 1480s at the court of Lorenzo the Magnificent. In technical terms, the Florentines had already exceeded what could be attained by any painter from antiquity. And of course, the development of slow-drying oil paint made it possible for Renaissance artists to create images of a depth, richness, and permanence that no classical Greek or Roman artist could dream of. By the early sixteenth century, ancient painting had already been surpassed—though Renaissance artists did not necessarily realize the fact.

Raphael died in 1520. His final painting, The Transfiguration, now on display in the Vatican, was considered for centuries the summit of artistic perfection. Until the early twentieth century, it was the most famous oil painting in the world. For Poussin, Raphael had established a universal standard for artists in his balance, restraint, and simplicity. When we talk about "classicism" in art, and use the term in opposition to "Romanticism," the idea we have in mind derives from Raphael.

Michelangelo was a more obviously classical artist than Raphael, in the sense that he had a more direct relationship to ancient Roman art—in his sculpture and architecture even more than in his paintings, which contain the seeds of mannerism, another dominant movement in ­sixteenth-century art. Raphael-style classicism aims at idealized depictions of beauty as we see it in nature; mannerism involves a more deliberately artificial approach to reality. A mannerist painter tries to represent a bit more than the eye can see and go beyond what is captured by a still, two-dimensional image. Inevitably, his work is stylized and somewhat ­exaggerated. Mannerist paintings are frequently impressive in their sheer technique, although they often seem to refer to other paintings rather than to reality itself.

When Caravaggio arrived in Rome in the early 1590s, he was surrounded by repetitive mannerism and stale classicism. Art seemed to have stagnated in the Eternal City since the death of ­Michelangelo in 1564. All the interesting, innovative painters appeared to be in Venice or Bologna. Caravaggio observed that Roman painters were not depicting the world around them in any recognizable form. This insight led him to develop a third way of painting, which was known after his death as “­Caravaggismo” and became dominant in Rome for a generation.

Caravaggio could never have been a classicist in the Raphael mold; his work is too obviously an expression of his own personality and temperament. His imagery and preoccupations appear to arise from his experience. This leads to difficulties of interpretation, since we have relatively little information about Caravaggio’s life. Two of his biographers knew him. One, Giulio Mancini, was a physician and art collector who treated him during an illness; the other, Giovanni Baglione, was a ­mediocre painter who sued Caravaggio for libel in 1603. A third ­biographer, Giovan Pietro Bellori, was a failed ­painter who became a noted antiquarian and was personally close to Poussin. Bellori’s 1672 biography is the most thoughtful early account of ­Caravaggio’s life and art; unfortunately, twentieth-century scholars maligned Bellori unjustly due to his classicism. Bellori was more concerned with art than with gossip. His narrative needs to be supplemented with documentary evidence of Caravaggio’s various lawsuits and prison sentences.

We know that Michelangelo Merisi da ­Caravaggio was born in Milan on the Feast of St. Michael the Archangel, September 29, 1571. He is said to have died at Porto Ercole in Tuscany on July 18, 1610. During his short life he may have completed anywhere from eighty to ninety identifiable paintings, though documentation is often incomplete, attributions can be insecure, and many dates are controversial. About two dozen of his works are on display in Rome. Florence has around half a dozen Caravaggios; in other major cities (Berlin, London, Paris, New York, Naples) you might see three or four at most. Here—at the risk of giving an overly narrow account of his achievement—I want to focus on those that we can see in person in Rome.  

Caravaggio's family fled Milan in 1576 to escape the famous Plague of San Carlo, and he was brought up in the small town whose name he later adopted as his own. His father died in 1577, and from 1584 to 1588 Caravaggio returned to Milan to serve as an apprentice to the painter Simone Peterzano, who had been a pupil of Titian's; his mother died in 1590. We know very little about Caravaggio's early life other than that at age twenty he had to flee Milan a second time. Giovan Pietro Bellori, in his marginal notes to Baglione's 1642 biography, records his suspicion that Caravaggio had killed a man. Bellori's published text is more circumspect but suggests that the fugitive's first destination was Venice, where he admired the paintings of Giorgione, whose influence may be visible in Caravaggio's only surviving landscape, Rest on the Flight into Egypt. Records suggest that whatever it was that caused Caravaggio to flee, an officer of the law was wounded in the process.

By autumn 1592, Caravaggio was in Rome, eking out a living as an apprentice in various painters’ workshops. One artist compelled him to paint as many as three heads per day. He seems to have led a miserably unstable existence, spending no more than a few months in any apprenticeship and subsisting largely on disgusting green salads. He was not a model employee. His best-known master was Giuseppe Cesari, known as the ­Cavaliere d’Arpino, at that time the most successful painter in Rome, thanks to the patronage of Pope Sixtus V and Pope Clement VIII. We now know the ­Cavaliere d’Arpino mainly because he put the young Caravaggio to work painting flowers and fruit. Yet his paintings are everywhere in the Eternal City. Perhaps his best-known work is a series of monumental frescoes in the Great Hall of the Palazzo dei Conservatori, now the seat of the Capitoline Museums.

When you consider the work of the Cavaliere d’Arpino, you begin to understand what it was that Caravaggio sought to destroy. The Cavaliere d’Arpino was a mannerist. His paintings are decorative, deferential to tradition, and technically competent, but they go in one eye and out the other. Lifeless and inert, they are constructed according to a series of patterns, symbols, and conventions. Caravaggio knew he was better than this, and he had to prove it.

Despite his day job of mindless drudgery in other people’s workshops, Caravaggio found time for his own work. Roman apprentice painters had the option of producing canvases for the open market, and certain of the art shops near the church of San Luigi dei Francesi were known to be frequented by collectors who were on the lookout for new talent. Caravaggio’s earliest pictures were produced with the aim of attracting these connoisseurs. One of the most interesting is Sick Bacchus, which is now in the collection of the Galleria Borghese.

Giovanni Baglione, the artist who later took ­Caravaggio to court for character assassination in verse, plausibly claims that Sick Bacchus is a self-portrait—not a flattering one, but it expresses exactly who and what Caravaggio thought he was. This painting ­also shows off Caravaggio’s skill at painting realistic fresh fruit, the symbolism of which is tantalizingly ambiguous here. Yet Caravaggio is too honest to ignore the consequences of self-indulgence. This image is at once an enticement to vice and a warning about the morning after. There is a strong element of mockery and provocation in Sick Bacchus: This is not Bacchus the jolly god of wine, but a hungover deity, as dangerous to himself as he is to you.

Sick Bacchus usefully encapsulates Caravaggio’s attitude toward tradition. Giovan Pietro Bellori says, with some shock:

He devoted himself to painting according to his own nature, with no regard whatever for the great marbles of the ancients and the celebrated paintings of Raphael. In fact, he despised all of this, and took nature alone as the subject for his pictures.

Caravaggio had no interest in copying antique sculptures, because he was more interested in color and in living flesh. Bellori describes him roaming the streets of the city looking for subjects and models whom he would represent faithfully, with attention to their imperfections. This was of course the opposite of standard practice. The ­Cavaliere d’Arpino was like most artists of the period in seeking to correct and idealize his subjects instead of depicting exactly what he saw. By emphasizing design and composition, the mannerists often neglected the element of illusion in the images they ­created. The Cavaliere ­d’Arpino, for all his skill as a draftsman, sometimes handled color as though he were painting by numbers.

Caravaggio’s disdain for smoothness and convention was not a matter of technique alone. He also sought subjects who would be instantly recognizable to his contemporaries in Rome. The Fortune Teller, now in the Capitoline Museums, is an excellent example—another amusing image with an edge of danger. Caravaggio painted several fortune tellers and cardsharps over the course of the 1590s, as he sought topics that would set him apart from the other young painters in Rome. His work was often copied by his peers, as you will realize if you spend any time wandering through a collection of seventeenth-­century Roman paintings. The shadowy dens of vice all have their origins in Caravaggio.

Caravaggio’s strategy for attracting patrons soon paid off. In 1595 he was invited to join the household of Cardinal del Monte, who would remain his patron until at least 1600, when Caravaggio was arrested and briefly imprisoned for violently beating one of the cardinal’s houseguests. Art historians speculate continually about why he remained in the cardinal’s household for so long and why the cardinal took an interest in him in the first place. But for all his volatility and lack of self-control, Caravaggio had another side. It is on display in his depiction of the Penitent Magdalene, which has been in the Galleria Doria Pamphilj for centuries.

Caravaggio never celebrates vice uncomplicatedly, as certain of his seventeenth-century followers do. According to Pope St. Gregory the Great, Mary Magdalene was not only a witness to the Crucifixion and Resurrection but also the sinful woman who anointed the feet of Jesus in the Gospels. Since Gregory, she has traditionally been portrayed as a prostitute who repents her past sins. Sixteenth-century depictions of this saint are often provocatively erotic—but Caravaggio breaks with tradition here. His Penitent Magdalene is sick with contrition. It is one of the few genuinely convincing depictions of this saint in existence, precisely because Caravaggio defied artistic convention and simply represented the model he saw in front of him, without reference to any earlier pictures of Mary Magdalene. Unless you know the painting's title, you recognize the woman only as a fellow sinner, not as one of the most famous women in Christian history.

Of course, there are drawbacks to ­Caravaggio’s radical approach to realism. Not even his genius could overcome some of the paradoxes inherent in his position. He was not painting “real life”—he staged this scene in his studio with a model. The pearl necklace and earrings on the floor, and the jar of oil beside them, did not get there by accident. Caravaggio’s images are no less “invented” than the Cavaliere d’Arpino’s are. The difference is really in the originality of his inventions and in his insistence on creating the illusion of everyday life. 

Also, the Penitent Magdalene reveals some of the artist’s more serious deficiencies. Caravaggio could paint flesh more convincingly than almost anyone around him, but he had a weak grasp of human anatomy. He also failed (or refused) to learn the science of perspective, which had developed over previous centuries. He consequently struggled to paint images that had significant depth. Sometimes the shadows in his pictures seem suspiciously like an excuse to avoid painting a background.

Caravaggio was a surprisingly weak draftsman. He rarely, if ever, bothered to make preparatory drawings for his work, but preferred to attack the canvas straight away at high speed. His compositions were often sloppy, as in the Penitent ­Magdalene, which combines emotional sensitivity and painterly bravado with an awkward, almost amateurish depiction of the space behind the saint. Caravaggio could get away with being so flawed only because of his God-given genius.

Did Caravaggio even need to be technically competent? He could get quite far on shock. His painting of Judith Beheading Holofernes was rediscovered in 1951 and has become one of the most celebrated pictures on display at Palazzo Barberini. This picture was commissioned by the Genoese banker Ottavio Costa, one of an increasing number of collectors who admired Caravaggio’s bold approach to narrative scenes, which often took place in closed rooms lit from above and featured intense contrasts between light and dark. Caravaggio’s world featured little in the way of fresh air or sunlight.

A conventional painter of the time would not have depicted Judith beheading Holofernes so explicitly. Just as classical tragedies never enact death onstage, only the buildup and aftermath, Renaissance painters tried to observe a sense of decorum in depicting gruesome events. Caravaggio’s picture makes even jaded modern viewers squeamish—so much so that most fail to notice errors in his depictions of anatomy and drapery. Caravaggio’s fixation on realism was tempered by his cavalier attitude toward mere correctness. Yet the picture remains powerful even after the shock wears off and its technical weaknesses become apparent. There is more to it than mere horror.

Thanks to Cardinal del Monte, ­Caravaggio was awarded a commission to paint the Contarelli Chapel in San Luigi dei Francesi, the French national church in Rome. He beat out the Cavaliere d’Arpino for the job. The contract was signed on July 23, 1599; the first two paintings were installed a little less than a year later. A third was added in 1602. All three remain in their original location. These pictures made ­Caravaggio the most famous painter in Rome.

The first two paintings were supposed to represent the conversion of St. Matthew and his martyrdom. As neither subject was particularly common in Roman churches, there was no obvious set of conventions to follow. Even now, the paintings are startling in context. Before he became an apostle, St. Matthew was a tax collector—and Caravaggio had the insolence to paint him as one, sitting in what looks like a shady tavern but is in fact the “custom house” mentioned in the Gospels. The scene appears like something out of contemporary urban life rather than the remote past.

As for The Martyrdom of Saint Matthew, it seems to depict the sort of chaotic brawl that occasionally led Caravaggio to spend another night in prison. It takes some time to work through the confusion and make out the narrative. From the painting itself, it might not be obvious that St. Matthew was murdered by a soldier while celebrating Mass. The violence of the action distracts the spectator from seeing that this is in fact a moment of victory: St. Matthew is not holding up a hand to beg for mercy but reaching for the martyr’s palm held out to him by an angel.

The original patron of the Contarelli Chapel, Matthieu Cardinal Cointerel, had died in 1585, leaving detailed instructions for how the chapel was to be decorated. Even so, Caravaggio enjoyed relative freedom in his approach to the subject matter. Yet he sometimes took one liberty too many. The third painting in the Contarelli Chapel, featuring St. Matthew writing the Gospel, was installed in 1602. What we now see is a second version, which ­Caravaggio painted in haste to replace an image the clergy found objectionable. Sadly, this original painting was destroyed during the Second World War.

Surviving black-and-white photographs indicate why the priests at San Luigi dei Francesi considered it inappropriate. St. Matthew is shown sitting with his legs crossed, hunching over his manuscript like a nearsighted tailor attending to his sewing, as an angel on his left dictates to him and patiently guides his hand. The scene is touchingly intimate, but its theological implications are controversial.

From one point of view, Caravaggio’s original depiction of St. Matthew might seem less irreverent than Nicolas Poussin’s depiction of Saint John on Patmos, in which the saint sits in a landscape, calmly composing the Apocalypse as though he were writing a letter to his parents. The intimacy of Caravaggio’s conception of how the Gospels were put down on paper is fully in line with some of the guidelines on sacred art that the Church had been developing since the 1560s. But not with all of them. 

Caravaggio’s conception of divine inspiration, and his conflation of inspiration with revelation, contradicted Church teaching, as the French priests saw it. Moreover, the image lacked decorum. Matthew the Evangelist was made to seem barely literate, in need of angelic guidance simply to form letters. According to Catholic tradition, the Gospels were not an act of mindless dictation. God did not inspire the Four Evangelists by intoxicating them like the Delphic oracle, nor did he use them as a sort of wind chime or Aeolian harp. ­Poussin’s Saint John on Patmos might have less emotional impact, but at least Poussin did not treat the saint as a mere instrument. When Caravaggio’s painting of St. Matthew was destroyed in 1945, we lost our most important indication of how ­Caravaggio understood his own inspiration. It seems he had no idea how or why he created the images he did. He did not necessarily think he had a hand in them.

Caravaggio did not understand saints; he was more comfortable with the people who venerated them. This is particularly obvious in his depiction of Our Lady of Loreto. This picture is often known as Pilgrim’s Madonna and can still be seen in the first chapel to the left in the basilica of Sant’Agostino. The Virgin Mary might be any mother in Rome, except for the halo around her head. She is barefoot and wears ordinary clothing, rather than the traditional garments painted with lapis lazuli, or ultramarine, to signify that she is the Queen of Heaven. Her baby is, in a similar way, just a baby. He bears no sign that he is the Savior of the World and Redeemer of Mankind, but is treated simply as an attribute of his mother. She stands on the threshold of a modest building where the plaster is cracking on the exterior walls, revealing the brick beneath. The two pilgrims might as well be begging for food. The most notable feature of this painting is the dirty soles of the male pilgrim’s feet. This is a striking innovation in religious art, perhaps the first pair of unclean feet in the tradition.

We appreciate the realism of this altarpiece if we view the painting from a modern, secular point of view. Today we tend to take a dim view of idealizing subjects, or of omitting dirt and other gritty details. But look at it from a Christian point of view, specifically the Catholic perspective of a believer who has come to venerate Our Lady of Loreto. Is the realism appropriate? Should Caravaggio have emphasized the dirtiness of the pilgrims, rather than the glory of Our Lady? We might identify with the pilgrims in being soiled and unworthy ourselves. Might there be something morbid about that?

Caravaggio frequently clashed with Church patrons because his pictures did not always suit their purposes. An example is the painting now known as the Madonna dei Palafrenieri, originally commissioned for an altar in St. Peter’s Basilica. It was commissioned on December 1, 1605, installed on April 8, 1606, and removed on April 16. Scipione Cardinal Borghese, the well-known art collector and sociopath, snapped it up for himself, paying 100 scudi on July 20 to the Arch-Confraternity of Papal Grooms (palafrenieri). Since then it has been one of the striking religious images in the Borghese collection.

The infant Jesus is represented as an uncircumcised naked baby with no halo around his head. With the help of his mother, he crushes the head of the serpent. But why does the Redeemer of the World need help to crush evil incarnate? And why is the Virgin Mary dressed in red rather than blue? Why does she reveal so much cleavage? Why is St. Anne, mother of the Virgin and grandmother of Jesus, represented as a dour, stiff, joylessly passive grandmother? In the Catholic tradition, St. Anne symbolizes grace. Here she is a background figure who perhaps wishes that the child had been a girl instead. But this painting was originally meant for an altar dedicated to St. Anne.

Even if you admire this painting as a work of art, you see that it might not be fit for its original purpose. Moreover, the representation of Jesus plainly does not fit the Catholic Church’s general guidelines for sacred art. The problem is not ­excessive “realism.” The lack of any special status for the Savior of the World and his mother, the Queen of Heaven, and the strange awkwardness of St. Anne, combine to make the painting singularly ­unsuitable as an altarpiece. A failure as an altarpiece, perhaps, but not as a painting, as Cardinal Borghese understood.

Caravaggio was not famous merely as a painter. We have no space to list his many arrests on charges that included possession of illegal weapons, verbal and physical attacks on policemen, violent assault, hurling rocks through his landlady’s window, and throwing a plate of artichokes in a waiter’s face. His first recorded murder took place on May 29, 1606. The victim was a young man named Ranuccio Tomassoni. The circumstances are unclear, but it may have been a case of a tennis match that turned violent. According to some sources, the two men were competing for the love of a local prostitute named Fillide Melandroni. She is often identified as the model for ­Judith in the beheading picture in Palazzo Barberini. Caravaggio fled Rome, was convicted in absentia, and was sentenced to death by beheading.

As part of his effort to be pardoned for ­Tomassoni’s murder, Caravaggio sent an unusual present to Cardinal Borghese: a picture of David with the head of Goliath. The giant’s severed head is a self-portrait, and the model for David is alleged to be one of Caravaggio’s assistants, Cecco del ­Caravaggio, who may also have been his lover. The picture bears evidence of haste; it may have been completed in hiding. On the sword are the enigmatic initials H-AS OS, which may be an abbreviation of the Latin phrase humilitas occidit superbiam. The precise date of this picture is controversial, as is its exact significance.

We might note here, as an aside, without drawing any conclusions, that Caravaggio had a fixation on cut-off heads even before he was sentenced to die in this manner. This fact might well explain his otherwise inexplicable preoccupation with John the Baptist, whom he painted up to nine times, once in mid-beheading, twice as a severed head on a platter. He painted only three versions of David with the head of Goliath.

On fleeing Rome, Caravaggio went first to Naples, where he was in demand as a painter. But he was impatient for a papal pardon and decided that the best way to get one was to present himself to the Grand Master of the Order of Malta and ask for a knighthood. Incredibly, he received one on July 14, 1608, a year and two days after his arrival on Malta, as a reward for completing a picture of the beheading of St. John the Baptist, which can still be seen in the co-cathedral of St. John in Valletta. Shortly after his induction, he attacked a fellow knight with his sword; he was imprisoned on August 19 and escaped on October 6. From Malta he fled to Sicily. He completed some of his most impressive surviving paintings in Syracuse, Messina, and Palermo before deciding that he was not safe in Sicily, either. In the autumn of 1609, he arrived back in Naples, where, toward the end of October, he was attacked outside a tavern by a gang of armed men who wounded him in the face.

Still Caravaggio dreamed of being pardoned. One of his last paintings depicts a bored, sullen John the Baptist and might have been sent as another gift to Cardinal Borghese. Caravaggio’s earlier depictions of John the Baptist often seem at least vaguely erotic; in one of them, the saint leers like a faun or satyr. The versions of John the Baptist in the Capitoline Museums and Doria Pamphilj gallery may have been depictions of Cecco del Caravaggio, the supposed young lover. There is also a Palazzo Barberini St. John the Baptist, which is a little more brooding. Here, in the Galleria Borghese painting, the saint is disillusioned. We are tempted to read this painting as an autobiographical expression of despair. One might almost conclude that Caravaggio, like John the Baptist, was getting impatient about his imminent beheading. But this might be an over-reading. In any case, Caravaggio did not die on a scaffold, or without his head. He expired of an illness, in mysterious circumstances, on July 18, 1610, while trying to get back to Rome.

After his death, everybody imitated him, often with dark, gloomy “Caravaggesque” paintings featuring shadows, vice, and at least the prospect of violence. But the fashion did not last very long. By 1630, Caravaggio’s approach was no longer in vogue—at least around Rome, where Raphaelite classicism was being revived. Order, beauty, classical ideals, and fresh air returned to Roman painting. Caravaggio’s reputation reached its nadir in the nineteenth century, when the great English art critic John Ruskin dismissed him as a mere ruffian, distinguished only by his preference for candlelight and villainy. Pace Ruskin, it is hard to find candles in Caravaggio’s work, but you get the point. Ruskin thought Caravaggio vulgar and depraved, and nothing more. We who admire ­Caravaggio ought to admit to ourselves that Ruskin was not entirely wrong in some of his detailed criticisms, even if we think he finally missed the point.

In 1951, the art historian Roberto Longhi organized a major exhibition of Caravaggio’s work in Milan. Since then, Caravaggio’s fame has come to surpass that of virtually every other Old Master. He is now spoken of in the same breath as Titian, Rubens, Rembrandt, and Velázquez, even though he lacks the range, skill, and sheer versatility of these apparent peers. His pictures are far more limited than any of theirs, and he never quite shows us anything like a positive ideal of beauty. Yet Rembrandt and Velázquez, at least, are unthinkable without the innovations they consciously adopted from Caravaggio. Was Ruskin’s judgment better than Rembrandt’s?

By contrast, painters who sought to restore a classical ideal, such as Mantegna, Botticelli, Raphael, and Nicolas Poussin, are less and less popular today. Art students tend to dismiss their work as pedantic or merely pretty. To modern eyes, their depictions of ideal beauty often appear ludicrous, even oppressive; their demonstrations of positive virtue seem boring, even repulsive. We moderns prefer artists like Caravaggio who neither teach nor preach, and who sometimes make us feel preemptively forgiven for our weaknesses and indulgences. When we are shown a picture like Raphael’s Transfiguration, our instinct is to shrink from the light. Caravaggio is more comforting. He assures us that it’s okay to have dirty feet. But can we justify our taste for Caravaggio if we think of ourselves as having moral standards, as at least trying to avoid vice and depravity?

Of course we can. Caravaggio smashed the tradition of the Cavaliere d’Arpino to bits. It deserved to be destroyed. That entire set of conventions was finished. As for the Renaissance itself, it had run its course long before. Caravaggio took his notions of realism as far as he could, until they began to show how limited and unsustainable they were. But were his ideas any more absurd than the ideas behind the classicizing project that gave us the glories of the High Renaissance? If Caravaggio destroyed the art of painting as the Cavaliere d’Arpino knew it, we should thank him, because in doing so he cleared the way for the renewal of classicism that was already beginning in the 1590s with Annibale Carracci, and would reach a new height with ­Poussin from the 1630s onwards.

We moderns are too saturated with sordid ­images and ideas to understand how to view a Raphael. He leaves us cold because we want to be shocked, indulged, and gratified, all at once. We are desensitized to his ideals of beauty. By contrast, in our fallen state, we are strongly ­attracted to ­Caravaggio, who seems as low-minded and appetite-­driven as we are. He cannot see beauty directly, or show it to us except by the by. But he can teach us how to look for it. 

The post Caravaggio and Us appeared first on First Things.

Dark Phantoms
First Things, December 29, 2025 (https://firstthings.com/dark-phantoms/)
It happened quickly, so quickly that you’d think it was impossible to retain the image. The Ohio Turnpike in October 2024, 5:30 a.m. and pitch-black, the road straight and level and empty of cars, 80 miles per hour—just right to make it to central Wisconsin by sunset—high beams and no radio, only the hum of the engine, when out of nowhere a buck leapt into my lane, a thick tan body and towering antlers dashing in from the right. I knew instantly what it was. I wasn’t drowsy—I’d just spent five hours sleeping in the back at a rest stop near the state line—but I couldn’t avoid him. I’d barely touched the brake and cut the wheel an inch before we hit. I doubt more than a half-second had passed since he entered my sight. The crunch of metal, plastic, flesh, and bone was just that, a crunch, not sharp or loud. The hood of the car swung up to the windshield, the airbag blew, the car slowed and drifted to the shoulder. Ten seconds passed with no noise or motion. The skin on my face started to burn (from the chemicals in the airbag, I am told). I opened the door, stepped out, and spotted a dark mass a hundred feet back, lying half in the roadway, unmoving. I reached inside and clicked the hazard lights.

A tow truck driver dropped me at the Toledo airport, where I rented a car and made it home that night. A week later I left Wisconsin for good, moving to Boston for a time before settling in Washington, D.C., far from the place where I had seen more Amish carriages than cars passing my front door. Every week or so, the image comes back. The road, the lights, the deer flash in my mind as if I were in a theater at the start of a movie, the black screen suddenly illuminated. The sight of a Ford Flex on Wisconsin Avenue might cause it, or the deer that were nosing around the gardens outside my building last week. Sometimes it has no cause at all—it just happens. Each recurrence is a jolt. The ordinary day is broken. I wasn’t hurt, wasn’t shocked, suffered no trauma physical or mental, no tender feelings for the poor buck (though I loved that car). I felt only astonishment at how suddenly the world had changed. It is not the impact but the moment before it, the split-second awareness that something bad is going to happen, and I can’t prevent it—that is what comes back again and again.

“The study of dreams may be regarded as the most trustworthy approach to the exploration of the deeper psychic processes.” So wrote Freud in Beyond the Pleasure Principle (1920), which addressed nightmares suffered by veterans of war that seemed to contradict the theory he had laid out twenty years earlier in The Interpretation of Dreams. In that earlier book, Freud had characterized “dream work” as a mode of wish-fulfillment. In a dream state, the ego relaxes and repressed desires are given expression, though in distorted form. Those desires are shameful, cowardly, selfish, disturbing, or otherwise contrary to the moral sense, but they are in us and have been since early childhood. We dream because we must, because “the return of the repressed” can’t be checked, for such desires never go away, only simmer in the unconscious. In healthy individuals, the repressed returns by means of sublimation, whereby destructive instincts are channeled into safe habits that meet our psychic needs without endangering social relations (as lust, for instance, is contained by marriage). It is, in Freud’s view, a tragic compromise that leaves us never fully satisfied. But civil society cannot survive without it.

What about the ex-soldier who falls asleep and dreams of the trenches, mad with fear and ­uncertainty as bombs fall for hours, a friend beside him clutching his rifle and soon to die in the mud? It was common in 1919. There were an “immense number of such maladies,” Freud writes, with no “basis of organic injury.” Such dreams reenacted the worst moments, pulling “the patient back to the situation of his disaster, from which he awakens in renewed terror.” The agony put the Freudian model to the test. What wish was granted, what pleasure got its release? What perverse mechanism forced the veteran to relive what he never deserved to experience in the first place? The source of these miseries was like a foreign body lodged deep inside, Freud observed, hidden and malignant. It kept the trauma fresh as a living torment. Psychoanalysis did not help these patients. The analyst couldn’t get the patient to recognize what had been repressed in his waking life and expressed in those nightmares, because the dream content didn’t belong to the patient. Something else, a daemon inside (Freud uses the Greek term), was in control. The patient could not claim and examine the content of the dream, only endure it.

To have an image in your mind, unpleasant or disastrous, which may pop up at any time, with or without a relation to the present, isn’t so different. Many people experience memory-flashes, and though the objects are less intense and lethal than those of Freud’s subjects, the mechanics are the same. It used to be that everyone in America recalled exactly where they were and what they were ­doing when they heard that JFK had been shot. A special announcement, the newscaster’s voice, the look on the face of a person beside you, all rushed into consciousness whether you wanted them or not. The memory has a will of its own. Freud described how hard it is to discuss these visions when they reach traumatic levels: The patient “is obliged rather to ­repeat as a current ­experience what is repressed, instead of, as the physician would prefer to see him do, recollecting it as a fragment of the past” (emphasis in original).

This is correct. When the deer flares up these days, shying back too late, my reflex too slow, I’m not ­remembering. The moment is ­repeated (not by me), and the bus I’m riding or the corner I’m standing on dissolves, and I’m back on the road in the dark, and the impact is coming. I can’t do anything with this apparition, can’t make it stop or start, can’t find a meaning in it. When I recall the moment deliberately, the effect is different, a shiver, not a jolt. It doesn’t help; I can’t be cured. My desires are sometimes shameful, but at least they’re human. This occurrence isn’t human at all. Freud gives it a name, “repetition-compulsion,” and tries to grant the nightmares a purpose when he says they are the mind’s attempt to face danger by staging over and over the traumatic moment as if it were an exercise, so that we can better respond when another threat arrives.

It’s a hollow rationale. Look at the man who underwent a shock and wakes up trembling long after, or the woman who picked up the phone one lazy afternoon and heard the news that a loved one was gone, and the ringing in her head breaks the dead of night for years. Try telling them that a terrible thing has happened, yes, and that they must re-experience it until the edge has softened, ponder and interpret it, step back and get some distance—and they’ll answer with a grimace or a moan. They can barely describe it. Nothing in these echoes is revelatory or forward-looking. What purpose can repetition serve? What moral instruction does it impart, what good comes of it?

Why is human nature like this, self-tormenting? Freud the scientist replies, “It just is.” In Beyond the Pleasure Principle, repetition becomes a rule. The best outcome for a trauma victim is never to think of what happened again, but the ­daemon won’t allow it. When a trauma upsets the placidity of daily life, the sufferer is compelled to repeat the experience in some other mode (a dream, a game), even if it means re-experiencing the pain, until the troubling side falls away. Life goes on. The inscrutable will doesn’t care about his feelings, only its serenity. Quiescence, not happiness, is the goal. In fact, for Freud, the ultimate repetition is death—out of nonexistence we came and to nonexistence we go: “The goal of all life is death.”

But I don’t see any lessening of intensity with each repetition of that early morning in Ohio. I’m no more in control of the image ­many months later than I was of the event at the time. I think millions of Americans are walking around at this very moment with cursed traces in their heads that strike without notice and have lost none of their bite, remnants of a past they’d much rather forget, concentrated into an instant. It might be a ­fender-bender, the loss of a job, a breakup, or much worse. Probably it’s a matter best avoided when meeting a new co-worker, chatting in the yard with a neighbor, falling in love by the third date. What dark phantoms lie within, what bits of life that hit hard and linger but give no relief or insight, only shadow the bearer and teach us sternly that we are not entirely ourselves?

The post Dark Phantoms appeared first on First Things.

What Does “Postliberalism” Mean?
First Things, December 26, 2025 (https://firstthings.com/what-does-postliberalism-mean/)
Many regard “postliberalism” as a political program. In 1993, when the tide of globalized liberalism was at its high-water mark, the contrarian John Gray published Post-Liberalism: Studies in Political Thought, the book that brought the word into currency. But it was not political philosophers who first used the term. In the 1980s, I studied theology at Yale University. Many of my teachers and fellow graduate students called themselves “postliberals,” drawing on the suggestive subtitle of George Lindbeck’s quirky and influential 1984 book, The Nature of Doctrine: Religion and Theology in a Postliberal Age. We used “postliberal” to signal our dissent from the liberal tradition in theology.

Although I have not done enough research to be ­confident in my judgment, I’d venture that “liberal” first entered the lexicon in reference to German theology in the early nineteenth century, just as “­postliberal” was first used in theological circles. The term “liberal” refers to the conditions under which a Christian ­intellectual operates. In the early nineteenth century, government ministers of religion oversaw theological faculties at German universities. Liberals in theology sought freedom from official oversight. Research and reflection were to answer to ­academic standards, not to governmental or ecclesias­tical norms.

Free in this way, liberal theology took up modern historical methods for the study of the Bible. Modern modes of thought, such as romanticism and German idealism, were used to reframe Christian doctrine. These innovations were not undertaken to weaken or undermine faith. According to proponents of liberal theology, the inherited faith was narrow and superficial. The use of modern methods and contemporary idioms would renew and deepen Christian faith and practice. Christian self-understanding would become more historically accurate and intellectually responsible, more contemporary and relevant, more personal and authentic.

My teachers at Yale were trained in the liberal tradition of theology. But they harbored misgivings. They noted that university-based biblical study had shifted its subject matter away from what is written in the Bible toward what is “underneath” or “behind” the text. For example, scholars became experts in “ancient Israelite religion” or the “Johannine community.” My teachers recognized that the ideal of “intellectual responsibility” was suspect. Too often it meant chasing after academic fashions. The quest for “relevance” bleached out the distinctive and supernatural elements of the Christian message. Perhaps most decisively, in the last decades of the twentieth century the mainline Protestant churches that were guided by the liberal tradition in theology became spiritually flat and increasingly moribund. Long before Patrick ­Deneen penned his influential book, my teachers judged that when it came to theology, liberalism had failed.

For these reasons, “postliberal” had an important negative meaning in my years of study at Yale. It signaled a loss of confidence in liberalism as a theological project. And because the central premise of that tradition is freedom from authority, to one degree or another, we were drawn in the opposite direction. Our goal was to be less inventive, less original, and less modern in our thinking, and we pursued this goal by being more docile to tradition. We sought something new to us, and quite radical, given the general spirit of modern culture: “thinking under obedience.” 

After I received my degree, I taught theology for two decades. Although most of my work remained within a theological frame of reference, I reflected more broadly on the cultural prestige of notions such as creativity, originality, open-mindedness, and especially “critical thinking.” These cultural and pedagogical ideals are still around. They make a promise akin to that of liberal theology: Creativity and originality will enliven society and make our lives more meaningful. Open-mindedness will allow divergent views and opinions to enrich our reflection. Critical thinking will protect us from bias and deliver us from the narrowness of our social and historical backgrounds.

But here, too, liberalism has failed, or so it seemed to me during my years as an undergraduate instructor. The ideal of open-mindedness led to superficiality. Creativity was an invitation to navel-gazing. Critical thinking produced the intellectual enervation that comes from the implicit message that all truth is historically or culturally relative.

As the educational failures of liberalism pressed ­upon me, I began to see a larger context for postliberalism. God alone deserves our uncritical love and devotion. But the promise of obedience applies to many aspects of life. If we will but give ourselves to what we love, we will enter more deeply into its life-­giving power. That’s true for marriage, it’s true for our vocations, and it’s true for the life of the mind.

I’m not someone who reliably remembers a great deal of what he reads. For this reason, I can only guess that my reading of Martin Heidegger and John Henry Newman as a young professor in the final years of the twentieth century instilled in me a suspicion that the culture of the West had undertaken a dangerous experiment. To a striking degree, we dismiss the promise of receptive obedience and seek to rely entirely on our capacity for independent invention. We believe only in the ideals and sentiments that have fueled the liberal tradition, not just in theology, but also in education and culture more broadly.

There are many ways to explain the origins of this remarkable turn against obedience. The standard Enlightenment narrative is one of progress. We have left behind stultifying inherited authorities, and now, for the first time in history, we are free to live in accord with nature and reason. There are other, less triumphalist accounts. The great German sociologist Max Weber coined the term Entzauberung, or “disenchantment.” He observed that the sacred authorities that once beckoned us to obey have lost their power. In a godless era, we have no choice but to embrace the implicit nihilism of making our own meaning.

In Return of the Strong Gods, I do not tackle big questions about the origins of modernity and our condition of disenchantment. I focus on the events of the twentieth century, which, I argue, led to the dominance of creativity, innovation, transgression, and “openness.” But the winds of change are blowing. The open society—a creature of the liberal tradition—has failed. Old imperatives are reemerging, those of protection, conservation, and consecration. Old sentiments are reasserting themselves, those of love and devotion, loyalty and obedience.

Forty years ago, I found myself among the rebels against the liberal theological tradition. My teachers did not formulate a programmatic alternative, but they were clear-minded about the failure of liberalism in religion. And they were aware of the ways in which the ­intellectual culture of the West had come to dead ends, also under the influence of liberalism and its rejection of the promise of obedience. In subtle and sometimes exasperatingly indirect ways, they pointed toward a way of being in the world that is different from what liberalism seeks to midwife, a form of life guided by authority and obedience.

Today, thoughtful people wonder whether liberalism has failed in public life as well. That suspicion is sufficient to earn them the label “postliberal.” With suspicion of failure comes reflection on alternatives, however partial, however tentative. As was the case for my theological mentors, those alternatives depend, sometimes explicitly, sometimes implicitly, on the restoration of authority and the rehabilitation of ­obedience—the return of the strong gods. 

Like my teachers, I cannot put this return into programmatic form. And like them, I hope I have the wisdom to recognize that what comes after liberalism will not be liberalism’s antithesis. To be postliberal is not to be anti-liberal. The negation of what has failed does not limn an alternative. For example, my teachers did not reject historical-critical study of the Bible. Rather, they urged me to read Scripture and pay attention to the words. Today’s political task is similar. I see no reason to reject the First Amendment. Rather, we should read the book of nature and pay attention to the depth and scope of our humanity, which is not exhausted by the very real and important role of freedom. And note well, freedom cannot be secured by rights. It, too, finds its power and importance in the perennial call to love, honor, and serve.

The post What Does “Postliberalism” Mean? appeared first on First Things.

Make Me A Lutheran
First Things, December 24, 2025 (https://firstthings.com/make-me-a-lutheran/)
A Defense of Free Will Against Luther:
Assertionis Lutheranae Confutatio, Article 36

by St. John Fisher, translated by Thomas P. Scheck
Catholic University of America, 324 pages, $75

John Fisher, the renowned Bishop of Rochester, is known to history chiefly for having been put to death by King ­Henry VIII in 1535 for the “treason” of denying that Henry was “the only supreme head in earth of the Church of England.” In his own lifetime, however, Fisher had achieved European fame as one of the most influential early opponents of Martin Luther—a role into which he had been led by the example of none other than his king. Thomas P. Scheck has put English-speaking readers in his debt with this translation of two key sections of the most significant book Fisher published, his Assertionis Lutheranae Confutatio (Confutation of Luther’s Assertion).

This text, which first appeared in January 1523, aimed to refute Luther’s defense of his 41 theological statements that Pope Leo X had condemned in the 1520 bull Exsurge Domine. The Confutation was a fairly comprehensive attack on Luther’s system, even though the condemned articles did not include either of ­Luther’s two fundamental doctrinal principles—justification by faith alone and the dogmatic authority of Scripture alone. Fisher addressed this lack by prefacing his work with a couple of essays identifying and engaging with those two principles.

The analysis of Lutheranism as a system founded on the two basic doctrines of “faith alone” and “Scripture alone” is still a commonplace of textbooks, and Fisher was one of the first to present it. He originally offered this analysis in a sermon to the citizens of London at St. Paul’s Cathedral on May 12, 1521, when the papal condemnation of Luther was promulgated in England and when it was announced that the king himself had written a book against Luther. Henry’s decision put the kingdom of England at the forefront of the Catholic response to Luther for almost a decade. The “U-turn” that was to set the king against the papacy by the early 1530s would have been unimaginable at that moment.

Scheck’s translation offers English renderings of the first of ­Fisher’s prefatory essays, the one on the principle of “scripture alone,” together with the longest of his responses to Luther’s individual articles—Article 36, on free will. Fisher’s Confutation was widely read in its time, running to about 20 editions by 1600, many of them published in the 1520s, and it left its mark on the Catholic response to what would come to be called “the Reformation.” It was often cited or invoked at the Council of Trent in the middle of the century, its reputation enhanced by its author’s status as one of the first Catholic martyrs of the English Reformation. Article 36 was especially important. Free will was, famously, the issue over which the great Christian humanist scholar, Desiderius Erasmus, began to sever his ties with the Reformation, despite his initial sympathy with the concerns and goals of the reformers who came to be known as Protestants. Erasmus’s Diatribe on Free Will (1524) showed that he could not accompany them on their new path. Notwithstanding its title, the Diatribe was a measured and polite engagement with Luther’s teaching (the word diatribe, directly imported from the Greek, only acquired the pejorative sense of “rant” centuries later). As Scheck emphasizes in his substantial introduction to this volume, it has long been acknowledged that Erasmus was heavily indebted to Fisher’s treatment of this topic, and it may well have been Fisher’s work that alerted him to the crucial role free will played in ­Luther’s theological system.

Erasmus entered the lists against Luther because he was put under considerable pressure to do so, not least by English friends and patrons who were very important to him. For the most part, the things he wrote were things he freely chose to write, in pursuit of his own intellectual goals and aspirations. Publishing books, together, perhaps, with writing letters, was what defined him as a person. He was a man of letters. By contrast, Fisher wrote against Luther because it was his duty to do so. Everything Fisher published (as well as most of the things he wrote that remained unprinted) was the fulfillment of some very definite duty. Thus his Sermons on the Seven Penitential Psalms (1508) were preached and published at the behest of Lady ­Margaret Beaufort (mother of Henry VII), whom he served as what we would call a spiritual director. His funeral sermon for Henry VII and his memorial sermon for Lady Margaret were evidently court commissions. And his sermon at the burning of Luther’s books in 1521 was presumably delivered at the command of Henry VIII or Cardinal Wolsey, or both, since they had planned the entire occasion. Fisher made clear that his Confutation of Luther’s Assertion, like all his efforts in polemical theology, was written in pursuance of his duty as a bishop to defend the souls entrusted to his care from the snares of heresy. If Erasmus’s identity was essentially authorial, Fisher’s was above all pastoral.

Fisher took great pains, for example, to urge repentance on the ­unknown miscreant who had scrawled some graffiti on a papal bull (the source for this episode says it was an indulgence) that had been posted on the door of Great St. Mary’s, the university church in Cambridge. He addressed the university assembly on three separate occasions in his capacity as its chancellor, but the culprit kept quiet. He was much later identified as a Frenchman, Pierre de Valence, who went on to spend some years in the service of Henry VIII’s chief minister, Thomas Cromwell, and to commit himself wholeheartedly to the Reformation. 

The progress of Lutheran and other heretical doctrines at Fisher’s beloved alma mater caused him increasing grief over the next ten years. In February 1526 he preached another sermon at Paul’s Cross, this time at the first public recantation of the newfangled dissidents in England. Most of those disavowing heretical beliefs on that occasion were German merchants from the “Steelyard” (the Hanseatic trading station in London), whom Thomas More had found in possession of Lutheran books during a spot search. But with them was Dr. Robert Barnes, who had preached a sermon modeled on one of ­Luther’s at the little church of St. Edward’s, just off the market square in Cambridge, on Christmas Eve 1525. This airing of Lutheran ideas by a member of the Cambridge Divinity Faculty must have seemed to Fisher an almost personal betrayal. Publishing his sermon soon afterward, he explained: “My duty is after my poor power to resist these heretics, the which cease not to subvert the church of Christ.”

He went on to invite anyone ­unpersuaded by the arguments of his sermon to come to him confidentially so that they could thrash matters out for as long as it took for the doubter to “make me a Lutheran” or for Fisher to “induce him to be a Catholic.” There is no reason to think anyone ever took up that offer, with its naive confidence in the power of debate to resolve disagreements. 

Barnes himself appeared several times before a panel of bishops (including Fisher), but although he abjured at St. Paul’s, he certainly wasn’t won over. Within a few years he had fled the country to join ­Luther at Wittenberg, and many years later he would die a martyr’s death at Smithfield on account of his outspoken Lutheranism, though in political terms his execution was collateral damage from the fall of Thomas Cromwell in 1540.

Fisher’s arguments against ­Luther were often pithy and at times irrefragable, as in his version of one of the most obvious rebuttals of the exclusion of “works” from the process of justification, presented in his 1526 sermon. Taking up one of Luther’s favorite proof texts for this theory, Jesus’s assurance that “your faith has saved you” (Luke 7:50), Fisher wrote:

Our Saviour saith, not only Fides, but Fides tua. Thy faith (a truth it is) is the gift of God. But it is not made my faith, nor thy faith, nor his faith, as I said before, but by our assent. . . . But our assent is plainly our work. Wherefore at the least one work of ours joineth with faith to our justifying.

Logic, however, usually loses out when passions are engaged, and Kołakowski’s Law of the Infinite Cornucopia reminds us that there is an endless supply of arguments that can be deployed against this elegant rebuttal. But they do not leave it any the worse for wear, and so they only confirm that scriptural interpretation can never be the plain and simple thing that Luther said.

Fisher himself was perhaps too close to the action to decipher the full significance of what was ­unfolding around him. Thomas More saw a little further into it and into the future, warning his son-in-law William Roper even in the 1520s, when England’s Catholicism seemed so secure under royal patronage, that there might come a time—it came far sooner than he imagined—when Catholics would be happy to let the heretics “have their churches quietly to themselves” as long as “they would be content to let us have ours quietly to ourselves.” The essence of what Luther was about was the quest for a personal certainty of salvation (“­assurance,” as it came to be termed by its exponents). And for him, reasonably enough, the intrinsic sinfulness of human nature meant that salvation could not be certain if it depended to any degree on human operation or cooperation. In the words of another of his favorite biblical tags, omnis homo mendax (“all men are liars,” Ps. 116:11). Only God was true. So faith had to justify “alone,” as God’s pure gift, without any operation or even cooperation on the part of the believer. And Scripture had to instruct “alone,” as a totally transparent text, without any interpretation by any human agent or intermediary—whether the believer himself, a priest, or the church as a whole.

To a scholar such as ­Fisher, who had imbibed at Cambridge the theological traditions of the via antiqua—the “old way” of Thomas Aquinas and Duns Scotus—Luther’s cavalier dismissal of most of the sacraments, together with his via moderna (“new way”) emphasis on the sheer will—one might almost say the sheer ­willfulness—of God, was always likely to be rebarbative. On the subject of scholastic theology, by the way, Scheck advances in this book, as he has done elsewhere, a case for seeing Fisher as a Scotist. It remains my own view that Fisher was more an eclectic than a disciple of any specific figure, but that nonetheless he had a particular predilection for Thomas Aquinas, whom he more than once acclaimed as “the flower of theologians.” Yet whatever might be thought of his precise theological affiliation, his theological center of gravity lay firmly within the “realism” of the High Middle Ages rather than the “nominalism” of the Later Middle Ages.

Luther’s doctrine of the transparent text, that alluring fancy, had a profound effect on English and even more on American (that is, U.S.) culture, formed and shaped as it was from the start by Protestantism. The American commitment, or rather fidelity, to the Constitution has not unreasonably been traced to that respect for the Bible that underpinned American culture until within living memory. The analogy, however, is more revealing than is often ­appreciated by those who draw it. The Constitution is, as texts go, coherent, and it was written with clarity in mind. Yet it cannot function alone. You cannot appeal simply to the Constitution. You have to appeal to that other SCOTUS, whose interpretative authority far exceeds any ever ascribed to Scotus. Without those nine supreme pontiffs, even this simplest and most transparent of texts is nothing more than a piece of paper. 

Luther’s specious notion of the self-interpreting or transparent text burst upon a world in which “the Bible” was being made much more real and accessible by the technological miracle of the printing press, in a potent coincidence that greatly enhanced its plausibility. But the limitless doctrinal fragmentation that was already evident by 1530 amply vindicated the observation of early Catholic polemicists such as Fisher and More that “Scripture alone” made every man his own pope. The Protestant world started by decrying this conclusion as a calumny but by 1700 often treated it as a dogma, under the label “private judgment.”

The tragedy was that this early sixteenth-century moment, the moment of print, the moment of the “three languages” (Latin, Greek, and Hebrew), the moment that briefly spoke through Erasmus and, among other effects, inspired John Fisher to diversify the curriculum at Cambridge by introducing both Hebrew and Greek, was poised to herald a new era in scriptural scholarship. As it happened, this new era was one of bitter confessional strife, thanks to Luther’s relentless and egotistical insistence on absolute assent to whatever he happened to decide was the plain meaning of Scripture. The Scottish humanist Florentius Volusenus later told the story of a conversation at Rochester, probably around 1530, in the course of which Fisher admitted to him that he wondered what divine providence meant by making some Lutherans such fruitful commentators on Scripture despite their being heretics. Even Fisher, it seems, was impressed by their scholarship. Even he felt the pull of Luther’s evangelical summons to unquestioning faith in Jesus’s promises, as Volusenus later emphasized in the same book (De Animi Tranquillitate Dialogus). Yet Luther’s evangelical rhetoric did not persuade Fisher, as it persuaded so many, to subscribe to his logical absurdities or theological foibles.

Luther’s theology, notoriously, is heavily based in the Latin tradition and the Vulgate text. For all his grappling with the Hebrew and the Greek, his theological vocabulary is the legacy of late scholasticism, and Erasmus took the measure of him pretty fairly in his Diatribe, identifying him as just one more needlessly dogmatic scholastic. Although the Christian humanist program of Erasmus did nothing so crude as to reduce the Bible to the level of a human text, it did seek deeper understanding of the Scriptures by treating them, in many respects, like any other human text. Luther’s approach was, at the theoretical level, exactly the opposite. It was to treat the Bible as entirely different from any other human text, because of his overwhelming need to deliver human salvation from anything that made it contingent on merely human causes. This emphatically anti-humanist approach explains why the phrase “Word of God” took off so strongly among Luther’s followers and, ultimately, among all Protestants as a name for the Bible. It was about emphasizing the difference. This is not to say that humanist critical approaches were eschewed by Protestant exegetes. Far from it. Many highly skilled humanist scholars were persuaded by the teachings of the Reformers, and they applied their skills to the teaching of the “new learning” that Luther had brought to light and to the understanding of the Scriptures. It was doubtless some of those efforts that so impressed John Fisher.

Thomas P. Scheck’s translation is careful and faithful, and his introduction full, informative, and illuminating, even if there are one or two fine points of interpretation on which I might beg to differ. There is one error, however, that cannot pass without correction, because it goes to the heart of the issue for which John Fisher was to give his life. Scheck rightly notes that, as J. J. Scarisbrick showed nearly forty years ago, Fisher actually entered into discussions with the Emperor Charles V’s ambassador to Henry’s court, Eustace Chapuys, about the possibility of imperial intervention to correct the English king as he became increasingly opposed to the pope. However, Scheck’s elaboration, that “this secretive endeavor was exposed, and Fisher was arrested,” has no basis in the historical record and must be rejected, together with the implication that Fisher’s intervention explains his execution, which took place more than a year after his arrest. Henry’s regime never heard so much as a whisper of Fisher’s discussions with Chapuys, and had they done so Fisher would have been for the chop in a month, not a year. Those who actually conspired against Henry got very short shrift. Fisher’s dealings with Chapuys would indubitably have been deemed treasonous had they come to light at the time. But they did not. To that extent, it seems, the bishop was as adept at secret communication as at most other things he turned his hand to.

Fisher was arrested in April 1534 because he refused to take the Oath of Succession. More than a year later, he was executed because, in the words of the indictment,

on 7 May 1535, in the Tower of London in the County of Middlesex, [he] did, contrary to his due allegiance, falsely, maliciously and traitorously utter and pronounce the following English words, namely, “The king our sovereign lord is not supreme head in earth of the Church of England.”

It all took so long for two reasons. First, the regime had to enact new laws to criminalize Fisher’s position. The Act of Supremacy and the Treason Act were therefore passed during the winter of 1534–35. More importantly, though, the king wanted submission, not sacrifice. He wanted John Fisher, like Thomas More, to agree with him, or at least to acquiesce in his will. So they were given plenty of time to weigh an inevitable punishment against the mere words necessary to save their skins.

Henry would far rather have had their acquiescence than their blood; indeed, there were probably no men whose approval he craved so desperately. Thomas More had been his secretary and companion at court in happier times, with an international reputation as a scholar and Latinist thanks to his Utopia. As chancellor, he had zealously implemented Henry’s determined policy of censorship, propaganda, and repression against English heretics. Fisher had never been a courtier, and he was an austere, somewhat forbidding figure, of whom it was once said, “not only of his equals, but even of his superiors, he was both honoured and feared.” But in the heyday of the campaign against Luther in the early 1520s, he was in the highest favor with the king. On his way back from Canterbury to London in June 1522, Henry had called to see the bishop at Rochester and it was reported that “the king called on my lord as soon as he were come to his lodging, and he talked lovingly with my lord all the way between the palace and his chamber in the abbey.”

Henry’s piety was always oriented to the Book of Psalms, and at some point in the mid-1520s he commissioned from Fisher a full-scale devotional commentary on the Psalms. The commentary still survives, unfinished and fragmentary, in various working drafts, obviously set aside by the bishop when he was drawn into defending the validity of Henry’s first marriage, to Catherine of Aragon, the task that made his former friend an enemy. That the two best-known scholars in his kingdom would neither of them publicly endorse his divorce and his break with Rome was a rebuke the king could not bear. It remains the most revealing judgment on the events that started that very peculiar thing, the English Reformation.

The post Make Me A Lutheran appeared first on First Things.

Tucker and the Right https://firstthings.com/tucker-and-the-right/ Tue, 23 Dec 2025 06:00:00 +0000 https://firstthings.com/?p=118595
Something like a civil war is unfolding within the American conservative movement. It is not merely a dispute about policy agendas, foreign alliances, or the boundaries of political discourse. It is a deeper conflict—a struggle over the meaning of conservatism itself. The recent controversy surrounding Tucker Carlson’s interview of Nick Fuentes revealed a fissure that has been widening for years: a clash between two visions of the right, one grounded in universal moral principle, the other in cultural and civilizational loyalty. What might otherwise have been a marginal media dustup became a moment of revelation about the future of American conservatism.

For figures such as Robert P. George, McCormick Professor of Jurisprudence at Princeton University and perhaps the single most influential moral philosopher within conservative intellectual circles, conservatism begins with the claims of natural law. Its founding premise is the inherent dignity of every human being—an anthropology that descends from classical philosophy, Christian theology, and the Enlightenment. For George, conservatism is first a moral project: It safeguards life, liberty, marriage, family, and religious freedom because these institutions reflect universal truths about the human person. George has spent his career articulating these principles in philosophy, public policy, and constitutional thought. His is an approach to conservatism that emphasizes the primacy of the permanent things, the universals that transcend time and place.

Opposing this universalist strand is the ascendant nationalist wing of the right—a coalition influenced by the populist energies that surged after 2016 and represented by Tucker Carlson, Kevin Roberts of the Heritage Foundation, and polemicists such as John Zmirak. This faction sees conservatism less as an expression of moral philosophy than as a defense of Western civilization: a concrete culture, a historical inheritance, with its own people, faith, memories, and vulnerabilities. This conservatism is particularist rather than universalist. It begins not with abstract principles but with cultural loyalties. Whereas George begins with human dignity, Carlson begins with civilizational survival. Whereas George sees imperatives and violations of the moral law, Carlson sees a beleaguered West beset by global elites, porous borders, and cultural disintegration.

The recent dispute over Carlson’s treatment of Nick Fuentes brought these differences into sharp focus. Carlson’s critics—including George, Ben Shapiro, and others in the moral-universalist camp—argued that he had given a platform to a figure who traffics in anti-Semitic rhetoric and white-nationalist themes. For them, this was not merely a lapse in judgment but a failure of moral responsibility. Carlson’s defenders countered that conversation does not equal endorsement, and that conservatives must not mimic the left’s “cancel culture” by excommunicating those who question dogmas about foreign policy or Israel. They argued that a movement committed to free inquiry must not shrink from difficult conversations.

Beneath this quarrel lies a more fundamental question: Is American conservatism about preserving a moral order or protecting a civilizational identity? Is it grounded in rights and duties that apply to all human beings or in the defense of a particular way of life that belongs to a specific people? One could say that the universalist right worries about moral illegitimacy, whereas the nationalist right worries about cultural extinction.

This tension is not new. It has antecedents in the intellectual history of American conservatism, stretching back to the mid-twentieth century. The original conservative coalition—the so-called “fusionist” project—sought to reconcile libertarians, social traditionalists, anti-Communist hawks, and Catholic natural-law theorists. Buckley’s National Review, Irving Kristol’s neoconservatism, Goldwater’s libertarianism, and Reagan’s evangelical alliance all depended on maintaining a precarious balance between universalist commitment and civilizational loyalty.

That equilibrium was always fragile. The Old Right of the 1930s and 1940s, led by Robert Taft and the America First Committee, had been isolationist, nationalist, and wary of foreign entanglements. It was skeptical of global institutions and suspicious of cosmopolitan elites. The New Right that emerged after World War II reversed these tendencies, embracing international responsibility and moral universalism—especially once the Cold War framed America’s struggle against the Soviet Union as a defense of global democracy.

The rise of neoconservatism in the 1970s and 1980s intensified this universalist impulse. Figures such as Irving Kristol, Norman Podhoretz, and Jeane Kirkpatrick argued that America should use its power to defend democracy and human rights abroad. Many leading neoconservatives were Jewish Americans whose commitment to universalist ethics aligned with the American creed. For decades, this coalition kept nationalist particularism in check.

But the paleoconservative critique—articulated by Pat Buchanan, Sam Francis, and others in the 1990s—never disappeared. Paleoconservatives warned that the conservative establishment had subordinated national interests to abstract ideology and foreign entanglements. They were skeptical of immigration, free trade, and especially America’s close relationship with Israel. Their warnings did not dominate conservative politics—until the populist wave following 2016 revived them.

Carlson, whether he claims the mantle or not, stands squarely in the paleoconservative lineage. His skepticism of U.S. foreign policy, his warnings about demographic change, and his view that elites betray ordinary Americans place him in a tradition that prioritizes civilizational cohesion over universalist doctrine. The controversy over his interview with Fuentes cannot be understood apart from this lineage.

Understanding this history also helps clarify why Zionism has become the main flash point in the conservative civil war. Zionism is, in essence, a communitarian nationalism: the assertion of a people’s right to self-determination in its ancestral homeland. It is a repudiation of cosmopolitan universalism in favor of historical continuity and particular identity. By rights, the nationalist wing of the American right—which champions cultural sovereignty and civilizational rootedness—should admire Zionism. Israel is the very embodiment of the communitarian values that the New Right claims to defend: tradition, identity, faith, resilience.

And yet, the nationalist right has grown increasingly hostile to Israel. Carlson argues that American foreign policy has been excessively shaped by pro-Israel interests. Some of his followers express a deeper suspicion—one that veers into old patterns of anti-Semitism masked as anti-Zionism. Meanwhile, the universalist right sees criticism of Israel as a sign that the nationalist project is incubating bigotries long dormant but never extinguished.

These ironies reveal a conceptual flaw: The nationalist right’s suspicion of Jewish influence and of Israel makes little sense within its own stated values. It is driven less by philosophical coherence than by a populist resentment of perceived elites—elites who, in the nationalist imagination, overlap with Jewish identity. What begins as criticism of foreign policy slides, easily and dangerously, into civilizational suspicion. That suspicion contains a further irony, for what is Western civilization if Judaism is not one of its central pillars? Is it really possible to stand up for the Christian West by treating Jews as aliens?

On the other side, the universalist right, though morally correct in rejecting anti-Semitism, sometimes speaks as if universal principles alone can sustain a society. Their tendency to abstract from culture, tradition, and inherited forms can make them appear insensitive to the anxieties that fuel the nationalist revolt. They underestimate the importance of belonging, memory, and communal cohesion.

The conflict between these two factions would be difficult enough if it concerned only geopolitics or intellectual style. But it also touches on the internal dynamics of American Jewish identity—a subject that must be approached with care.

American Jews inhabit a dual identity that includes a universal moral tradition, rooted in prophetic ethics and the rule of law, and a particular solidarity with the Jewish people, rooted in shared history, ritual, and the existence of the state of Israel. This duality is not contradictory; it is the product of a long history. Jewish Americans have contributed profoundly to American life—in law, medicine, culture, academia, journalism, and politics—often championing the universal ideals that inspired the American founding and shaped the American creed. But global anti-Semitism and recent violence at home have heightened the sense among many Jews that Israel is essential not only as an idea but as a guarantor of survival.

The nationalist right’s skepticism of Israel places Jewish Americans in a difficult position. It implies—sometimes subtly, sometimes explicitly—that Jewish loyalty is divided, that the Jews’ commitment to America is compromised by their attachment to Israel. This accusation has a long and dark pedigree. It is the same charge historically leveled at Jews in Europe: that they are perpetual outsiders, cosmopolitans, disloyal to the nation, agents of foreign influence.

One need not accuse Carlson of anti-Semitism to recognize that the nationalist critique can activate these ancient suspicions. That is why George and others respond so sharply. They understand that criticism of Israeli policy is legitimate, but they also understand how easily such criticism can become a cover for more ominous attitudes.

To make sense of all this, let us turn to the experience of black Americans, not because the histories are equivalent—they are not—but because the structural problem both groups have faced is analogous: how to reconcile a strong subgroup identity with full membership in the American civic nation.

Black Americans have a unique and foundational place in American history. Unlike Jewish Americans or other immigrant groups, we are not a diaspora with an external homeland. Our ancestors’ arrival on these shores was coerced and brutal, but our presence is inseparable from the nation’s founding contradictions. We are not an added population but an integral one—a people forged in America’s crucible.

And yet, for centuries, our identity was viewed as incompatible with American citizenship. We were faced with the perpetual question: Were we Americans? Could we be? Should we be? The black freedom struggle answered those questions with clarity: We are Americans, and our fate is tied to the nation’s fate. But that patriotism was not naive. It did not ignore the injustices inflicted upon us. It was a patriotism grounded in struggle—a love of country that demanded moral redemption.

This posture—what I have called black patriotism—offers a model for resolving the tension between universal ideals and particular identity. Black Americans did not shed their cultural inheritance in order to claim American citizenship. We transformed the nation by insisting that the nation’s universal promise of equality applied to us. Our struggle did not weaken America; it strengthened America by forcing it to live up to its own principles.

In this sense, the black experience reveals the possibility of a civic nationalism that is both universal and particular, both aspirational and rooted. It shows that layered identities—cultural, religious, historical—need not threaten civic belonging. On the contrary, they can enrich and deepen it.

The comparison with Jewish Americans must be handled carefully. The histories differ profoundly. Black Americans are of America in a way that Jewish Americans, with their diasporic ties and in view of the existence of Israel, are not. Black Americans did not choose America; America was imposed on us, and we turned that imposition into a claim of ownership. Jewish Americans are immigrants or descendants of immigrants who have integrated into the American project with extraordinary success.

But both groups confront a similar challenge: how to be fully themselves and fully American. For Jews, it involves balancing solidarity with Israel and adherence to a universal ethical tradition with commitment to an American civic identity. For blacks, it involves reconciling the memory of slavery and segregation with the aspiration of constitutional equality.

The lesson from both histories is that civic nationalism need not require erasing particular identities. Rather, it requires a political framework that is capacious enough to accommodate them. When conservatism becomes too narrow, too suspicious of internal diversity, it risks undermining the civic unity it seeks to preserve. When it becomes too abstract, too detached from the lived experience of particular communities, it loses its cultural grounding.

The conservative movement, in its current turmoil, faces this very choice. It can embrace a cramped vision of America—one that mistrusts layered identities and treats cultural particularity as disloyalty. Or it can embrace a richer conception of the nation—one that honors universal principles while recognizing the importance of inherited traditions and communal attachments.

A conservatism worthy of the name must find room for Jewish particularism and black particularism within a shared civic framework. It must reject anti-Semitism and racism, not only because they are morally abhorrent but because they violate the very foundations of the Western civilization it reveres. At the same time, it must resist the temptation to use universalist rhetoric as a way to ignore the cultural preconditions of liberty. And it must avoid turning particularist suspicion into a politics of resentment.

A mature conservatism recognizes that universal principles require concrete communities to sustain them, and that those communities are enriched rather than threatened by the presence of diverse histories and identities. The American nation is not a tribe but a covenant—a shared project rooted in moral aspiration and historical inheritance. Both black Americans and Jewish Americans have contributed vitally to that project, each in ways shaped by their distinctive histories. We are a more vital nation for that.

The civil war within conservatism will not be resolved by the choice of universalism over nationalism or nationalism over universalism. It will be resolved by an integration of the two: by a vision of America that honors both the dignity of every person and the particular heritage of its people.

The future of our nation depends on whether we can achieve that synthesis. So does the future of conservatism. And if we are willing to learn from the stories of those who have wrestled longest with the tensions of identity and belonging—black Americans and Jewish Americans among them—we may yet find a path that avoids both abstraction and resentment, both moralism and tribalism.

A conservatism that achieves this synthesis will be intellectually coherent, morally serious, and culturally grounded. It will conserve not only the inheritance of the past but the promise of the future.


Image by Gage Skidmore, licensed via Creative Commons. Image cropped.

The post Tucker and the Right appeared first on First Things.

Work Is for the Worker https://firstthings.com/work-is-for-the-worker/ Mon, 22 Dec 2025 06:00:00 +0000 https://firstthings.com/?p=118612
In these early days of his pontificate, Pope Leo XIV has made one thing clear: The responsible use of AI will be one of his central themes. It has me thinking about landscaping.

Ten years ago, I lived with my wife and children in a two-bedroom house with a small yard. My job every weekend was to cut the grass and trim the bushes. Done right, it would take an hour. And though it wasn’t back-breaking work, I usually did it in thick humidity, and there was much sweating. Afterward I would take a shower, put on fresh clothes, and grab a cold beer, and then I would take the first sip while admiring the lawn, low and neat and striped. It would be hard to overstate how satisfying that moment was.

Ours wasn’t the finest yard on the block—there was a lot of crabgrass, and the lines weren’t flawless. But when all was said and done, I could stare at this small patch of manicured land and say, “You know what? I did that.”

Eventually we moved, and our family and our yard grew larger. I was needed for other things on Saturdays. So we outsourced the mowing. It wouldn’t be practical for me to keep doing it, my wife said, and I agreed. Today, I can still look out over the lawn on Saturday evenings with a beer in hand. And to be honest, the lawn looks better than when I was cutting it. But I can’t shake the thought that Saturdays are somehow thinner and smaller and less complete. Something has been lost.

In his 1981 encyclical Laborem Exercens, Pope John Paul II highlighted the two ends of human work: the objective and the subjective. The objective end, the object of work, is to make things that improve the world, like inventing a sewing machine or building a house or teaching double-entry accounting. When I mow the lawn, I produce something of value: a cleaner, more walkable, more aesthetically pleasing patch of land. Work is for others, for society.

The other end of work is its subjective value. As a person works, John Paul wrote, “these actions must all serve to realize his humanity, to fulfill the calling to be a person.” In other words, work is undertaken not just for the sake of the thing produced, but for the sake of the person producing it. The creation of something new doesn’t merely transform raw materials; it changes the person who produces it. When I mow the lawn, it moves something in me. It brings about a sense of learning or accomplishment or humility that makes me more human. Work is for the worker.

Ideally, societies are built and economies are run with both the objective and subjective ends in mind. In practice, the two are often at odds. New machines destroy jobs. They create jobs, too, but the old job, that thing that once existed, is destroyed. There are no more musket manufacturers.

Of course, human life has always been about disruption and its tradeoffs. You have a new sibling (good!), but now you get less undivided attention (sad!). It’s beautiful and sunny outside (good!), but now the beach is crowded (sad!). Your single topped the charts (good!), but now you can’t go to a restaurant in peace (sad!). We always hope that new technologies bring about real progress, that the good outweighs the bad. But that’s not always the case. Electric blankets kept us warm (good!), but they caused house fires and leukemia (bad!).

Our great task, when it comes to markets and the economy, is to weigh the true costs and benefits of things. We gain a more complete and nuanced view as we learn more. This is in the nature of negative externalities—things whose true cost is hidden or not immediately apparent. Dumping a factory’s garbage into the river may boost profit margins in the short term, but it exacts a terrible cost from society over the long term. The idea, then, is that over time people or governments recognize this hidden toll and amend it.

What is striking about the debate over artificial intelligence is how haphazardly we’ve weighed the negatives. The powers of AI are mind-blowing and immediately apparent. In twelve seconds, you can write a press release, code a website, or analyze the use of foreshadowing in Hamlet. Artificial intelligence clearly aids the objective ends of work. It mows a lawn much better than I can.

But as a society, we have overemphasized AI’s progress toward work’s objective goals and underemphasized what it does to work’s subjective ends. Pope Leo stressed this point at the Vatican’s recent AI conference, saying that any judgment of artificial intelligence “entails taking into account the well-being of the human person not only materially, but also intellectually and spiritually. . . . The benefits or risks of AI must be evaluated precisely according to this superior ethical criterion.”

This “superior ethical criterion,” the subjective end of work, is immediately evident to parents. When your daughter is dangling from the monkey bars, if your only concern were the objective end of the work—namely, getting her body from one end of the apparatus to the other—you would just carry her to the other end.

But what a stupid idea! We all know that getting across the monkey bars is worthwhile precisely because of the time and difficulty and failure—the inefficiencies, if you will—involved in accomplishing it. As it turns out, time and difficulty and failure are the only way to achieve the subjective end of work—which is also called character.

Great managers, great businesses, and great economies produce both objects of value and people of character. Artificial intelligence thus far has produced only the former. Consider a recent study by Microsoft and Carnegie Mellon that tracked 319 knowledge workers who used AI tools. It found two things: Generative AI both improves the efficiency of workers and makes them lazier thinkers. A similar MIT study found that prolonged use of ChatGPT produces an “accumulation of cognitive debt”—one of the more creative euphemisms for brain rot. Study after study confirms what many of us already knew: AI makes us both more efficient and worse versions of ourselves.

It’s easy to criticize AI for making us dumber. It’s harder to prescribe how to deal with it. What guidelines should we follow in determining how—and ­whether—we should use AI tools?

One answer is prudential judgment. When it comes to deliberations over whether to use a tool or not, it’s obvious that I should use a knife to cut vegetables and that I shouldn’t use a robot to read my kids’ bedtime stories. In the in-between cases, we have to make judgment calls.

If you need to decide how or whether to use an AI tool—in writing an essay, graphing a chart, analyzing survey data, creating a song, editing a video, writing a thank-you card, or deciding where to live—here are a few questions to aid your judgment call.

Does AI stimulate critical thinking or outsource it? If it generates time savings, what are you doing with the surplus time? If the primary gain is efficiency, how much have you learned in life from doing things inefficiently? Since you’ve begun using AI tools, have you become more fulfilled or less? If you were teaching your son to do this task, would you have him use the tool or not? What do you, the worker, see as the purpose of work? Does this tool help you fulfill that purpose? If you were presenting this work to God, how would he view the process by which you created it?

Henry David Thoreau wrote, “The cost of a thing is the amount of . . . life which is required to be exchanged for it.” The cost of AI must be assessed by a similar question: How much of my humanity must I exchange for the privilege of using this tool?

The post Work Is for the Worker appeared first on First Things.

Just Stop It https://firstthings.com/just-stop-it/ Fri, 19 Dec 2025 06:00:00 +0000 https://firstthings.com/?p=118625 Earlier this summer, Egypt’s Ministry of Religious Endowments launched a new campaign. It is entitled “Correct Your Concepts,” and you need exert little effort to figure out which perceptions...

Earlier this summer, Egypt’s Ministry of Religious Endowments launched a new campaign. It is entitled “Correct Your Concepts,” and you need exert little effort to figure out which perceptions the government deems in need of correcting. Online ads and videos, designed to appeal even to the youngest of viewers, explain it all in the simplest terms imaginable. One ad, for example, shows a young man smiling beatifically while feeding stray cats in the streets, followed by another man frowning menacingly while attempting to pummel a cat with a stick. The first is adorned with a green checkmark; the second with a big, red X. 

Joining animal cruelty in the campaign’s list of discouraged behaviors are smoking, vaping, cheating on tests, consuming online pornography, spending too much time on social media, beating children, and wasting water. It’s an extensive list, and the government informed its subjects that the stakes are high: Whenever you see or do something wrong, one campaign slogan earnestly declares, “You turn off the light in your heart.”

American observers may be forgiven for looking enviously at Cairo. We, too, have perceptions to correct aplenty. According to a recent Siena poll, for example, 22 percent of all Americans and a whopping half of all men aged eighteen to forty-nine regularly indulge in online betting. Two-thirds of the general population—and a heartbreaking three-quarters of all teens—regularly view pornography. We pop pills, post bilious missives online, and increasingly believe that violence is the inevitable path forward; 30 percent of respondents said just that in a recent survey conducted by NPR.  

Is it time for us to follow the Egyptian example and ask Washington to embark on a campaign to snuff out vice? Most Americans are likely to find this proposition absurd. Overall trust in the government is the lowest it’s been in seven decades, which means that most of us barely expect civil servants to pick up our trash, let alone heal our minds and our hearts. 

But while most of us agree that government intervention is unlikely to spur moral revival, the problems we face persist, and they exact an ever-growing toll. We now have decades of research suggesting that pornography (to focus on just one of our social ills) alters the brain, creates dependency, and leaves its habitual consumers far less likely to partake in healthy sexual relationships with other human beings. That may help explain why the Centers for Disease Control and Prevention informed us this year that our national birth rate has reached an all-time low.

An account of how and why we’ve let things get this bad could and should occupy battalions of scholars. But if you’re looking for a quick and easy fix by technocrats, don’t get your hopes up. We’ve become a nation of porn-addled, stoned gamblers because we attempted to solve a mighty spiritual crisis with the feeble tools of politics and policy.

How, for example, might we cure the young of their ruinous dependence on pornography? Whenever this question has been asked in recent decades, our best and brightest—well-intentioned and sincere thinkers and lawmakers—have offered variations on the same theme: Ban it. Which, naturally, has rallied free-speech absolutists to rise and defend the smut flooding our screens as protected expression. Before you knew it, the debate sounded like it belonged more in the law school seminar room than in the public square. The same is true of the scourge of online gambling. The debate often starts—and frequently ends—with a 2018 Supreme Court case that allowed betting behemoths such as FanDuel to soar to a $31 billion valuation.

Don’t get me wrong. Legal niceties and policy-wonkery have their place. But missing from these conversations about legislation has been a simple and profound admission: We shouldn’t gamble on sports or watch strangers fornicate because, well, our Egyptian brothers have it right—because when you do so, you turn off the light in your heart. That light, once extinguished, is difficult to turn back on. And without it, we’re not recognizably human.

Why are we so reluctant to talk forthrightly about right and wrong? Maybe it’s because we now live, as the writer Aaron Renn reminds us, in the “negative world,” an America in which the Judeo-Christian values that once informed the culture are viewed as oppressive, regressive, and bad. This means that parents, teachers, and members of the clergy must defend themselves and their young not only against torrents of temptation, but also against a zeitgeist eager to strip them of all moral authority. 

Some resisted; many, alas, shrugged their shoulders and slunk away. Which, maybe, helps explain how an unlikely figure like Jordan Peterson—a Canadian psychologist who speaks and writes with the academic’s measured tone—became a cultural sensation simply by advising young Americans to make their beds each morning. He speaks plainly about things once taught matter-of-factly by mothers and fathers.

And herein we have the answer to our predicament. How do we get young people to swear off self-destructive behaviors? Easy enough: We tell them to. Make your bed. Stop watching porn. Delete the FanDuel app. Spend less time on your phone and more time in church or synagogue. Do it because you’re a human being, created in God’s image, not some animal governed solely by its appetites. 

This may sound like a laughably simplistic solution. But some things are not complicated, and everything we observe all around us confirms that a direct approach works. Consider the so-called groypers—fans of the online provocateur Nick Fuentes. They revel in his declarations of love for Adolf Hitler and Joseph Stalin, not because they share these demented views, but because they’re struggling to understand the place they occupy in society. This should come as no surprise. Spend decades telling young, white men that not much more than a bedsheet separates them from the Klan and, eventually, they’ll feel abandoned, enraged, and ready to pledge their allegiance to the first charlatan who swings by with tough talk and promises of cultural ascendance.

Rather than bemoaning Fuentes’s influence, we should recognize that he reveals an auspicious opportunity. Because if a spiteful and sordid worldview like the one peddled by Fuentes quickly gains followers, imagine how many more would follow the properly amplified word of God. 

What we have, in other words, is a problem of communication, which is really a problem of finding a stiffer spine. Society’s lost boys (and girls) don’t want a marketer eager to appear relatable, or a scholar citing facts, or a politician offering frothy platitudes. They want real talk, the kind you can only get from a serious grown-up who loves you very much, so much so that he’ll be firm and demanding when strong words are needed. And when they can’t find grown-ups who speak directly and without equivocation, they rush to the next best thing—to petty demagogues who urge them to burn down the house, never telling them that doing so will leave them homeless.

If you doubt that the “right is right” and “wrong is wrong” approach works, you might want to take a stroll down Manhattan’s Fifth Avenue. There, between 50th and 51st Streets, you’ll see the world-famous St. Patrick’s Cathedral, the cornerstone of which was laid down by John Hughes, the archbishop of New York from 1842 until his death in 1864. Known as Dagger John, Hughes recognized that his parishioners, mostly impoverished immigrants newly arrived from famine-stricken Ireland, were in danger of losing all hope. Hughes fought for them—sometimes, legend has it, quite literally, by thrusting himself into scuffles. And as he battled, he preached the rigid gospel of personal responsibility and discipline. Within a generation, a population known primarily for producing prostitutes and petty thieves gave the city its schoolteachers, police officers, and community leaders. Dagger John, as one biographer observed, “re-spiritualized” his flock, not with soapy, feel-good slogans or “meeting people where they are,” but with clear instruction: Do the right thing and shun the wrong. He instilled in his flock a moral purpose that allowed them to see themselves as something greater than the sum of their indignities.

Let us, then, unleash a few dozen more Dagger Johns and Dagger Janes, as loving as they are strict, ready to sacrifice much for those whom they serve, but demanding growth and change in return. Let us parent today’s errant children (some of whom are well into adulthood). Give them clear and unequivocal moral instruction, and they won’t turn their insecurities into political performance pieces. Our politics and our technologies might’ve changed, but human nature never does. All we need is a firm hand, sometimes caressing us lovingly, sometimes giving us a much-needed shove in the right direction.

The post Just Stop It appeared first on First Things.

Hegemon or Empire? https://firstthings.com/hegemon-or-empire/ Thu, 18 Dec 2025 06:00:00 +0000

America’s Fatal Leap: 1991–2016
by Paul W. Schroeder
Verso, 336 pages, $34.95

Was the First Gulf War a mistake? The historian Paul Schroeder thought the answer was “yes.” In his arresting formulation, Operation Desert Storm was “a just, unnecessary war.” He was hardly the only one to think so. Easy triumph silenced the war’s naysayers—except for Schroeder, and therein lies his distinction. He sensed that victory was no sure guide to sound judgment. At the end of the Cold War, the United States was taking a wrong turn. The country’s leaders overestimated the effectiveness of battlefield victory. As a historian of the European balance of power, Schroeder saw that our elites failed to understand that preponderant military and economic power should be used to marshal allies and to marginalize enemies, not defeat and rule them.

America’s Fatal Leap collects Schroeder’s essays on U.S. foreign policy from 1991 to 2016, a few years before his death. In Schroeder’s view, the “fatal leap” involved the transition of the United States from benevolent hegemon to aspiring architect of a global empire.

The writings of such an eminent historian are rich and refreshing. The essays provide an incisive, historically informed critique of American foreign policy. For skeptics of America’s recent military adventures overseas, the collection may seem like a bracing anti-imperialist counterpunch. But on closer inspection, Schroeder’s views appear no less imperialistic than those of the neoconservatives he criticizes. He simply outlines a different kind of imperialism. It’s soft rather than hard, but it partakes of the same idealistic assertions of global responsibility that inspired America’s adventures in the Middle East. We won’t be able to resist the seductions of global empire until we have the courage to look at conflicts in far-off corners of the world and say, “That’s not our problem.”

The First Gulf War was the closest George H. W. Bush ever came to the “vision thing.” With the “New World Order,” Bush dazzled the imagination with the dream of a universally authoritative international system, ensuring global peace and prosperity. Building and defending that system meant punishing states that broke the rules; in 1991, it meant punishing Saddam Hussein’s Iraq for invading Kuwait. Many observers had theoretical and practical misgivings, since neither America nor an American ally had been attacked. Memories of Vietnam reminded some of the risks of defeat. But given the almost surreal speed of success, most agreed afterward that the war had demonstrated America’s power to build a new global system and enforce its rules—and its moral responsibility to do so. Schroeder, by contrast, saw that victory brought its own risks.

America’s actions were paradoxical. To reverse Saddam’s resort to offensive force and deter the future employment of force elsewhere required the United States to resort to offensive force, to act militarily when the nation itself was not threatened. Saddam no doubt had violated international law. But among America’s leadership class, it came to be taken for granted that responsibility fell to a country that had not been attacked—the United States—to match force with force and eject him. It was far from clear why this had to be so. Despite the dazzling victory, the First Gulf War foreshadowed later conundrums. The United States appointed itself the world’s policeman, declaring itself an exceptional nation. Leaders in Washington presumed they could bend and break the rules-based international order in order to preserve it.

This exceptionalism defines America’s post-1989 leadership class, and it fuels vaulting ambitions. The ’89ers imagined they were building a better world, as one of Condoleezza Rice’s books puts it, creating a historically unprecedented “global commonwealth.” Strivers and scholars such as Rice thought they could leap between academia and national security posts, theorizing and implementing the future of international relations, leading the world toward ever more benign destinations. In their minds, American power could be used to replicate the stability of the nineteenth-century balance of power and surpass it, building a system that enjoyed the clarity of universally applicable principles. It’s the vision thing, enhanced by academic expertise.

This conceit makes Schroeder’s critique particularly biting. A specialist in nineteenth-century European diplomatic history, Schroeder shows just how badly America’s leadership class missed its mark at the end of the Cold War. Despite America’s unchallenged supremacy, despite its immense capacity for experimentation and innovation, the country’s leaders never got anywhere near the peace secured by the balance of power orchestrated by the Concert of Europe. The ’89ers produced a more volatile, destructive arrangement in every region of the world to which they turned their attention.

The crux of Schroeder’s critique of America’s post–Cold War foreign policy lies in his distinction between hegemony and empire, and his strong preference for the former. Schroeder sees the responsible use of hegemonic power as the key to maintaining stable international relations in modern circumstances, where functioning states are presumed to have equal rights in international law. A responsible hegemon exercises power in a cooperative way, acting as first among equals and seeking to make decisions through persuasion and consensus. An empire acts by ruling over subordinates, imposing its own decisions by coercion. Hegemony exists in an international system of autonomous, independent, and juridically equal states. Empires are incompatible with that arrangement. True, empires once brought stability. But relations of domination are stable “only in premodern” settings. Today, there exists an international order, formal or informal, that is defined by autonomous and nominally equal states. For this reason, empires are obsolete; attempts to resurrect them are doomed to fail. Schroeder saw the Cold War as a contest between American hegemony in Western Europe and the Soviet empire in Eastern Europe. Because the United States acted as a responsible hegemon, and the Soviet Union as a typical empire, the writing was on the wall.

As Schroeder saw as far back as 1991, after the Soviet Union fell, the United States would have no trouble smacking down dictators or terrorist organizations in the Middle East. But each campaign pulled the country into the region even further. Each victory would “unfailingly commit us to an even more direct and intrusive hegemony than before.” This dynamic carried with it the risk of America’s metamorphosing from hegemon to empire.

Schroeder argues that America failed the test of 9/11, becoming more involved in the Middle East on more ambitious terms, aiming to crush any actor that defied it. The climax came in 2003. Concocting a dire threat from Saddam Hussein—“existential exceptionalism,” in Schroeder’s formulation—the Bush Administration launched a preventive war that would prove one of the most spectacular foreign-policy blunders in American history. According to Schroeder, the invasion of Iraq was a quintessentially imperial action: One country, without being attacked or threatened, invaded and conquered another for the purpose of changing its government.

Schroeder has two lines of criticism against the Iraq war. Both reveal him to be a doppelgänger of those he criticizes. The first line of criticism is rooted in moral universalism. For Schroeder, the United States violated the international laws it purported to uphold. In one of his more creative historical analogies, he compares the Iraq War to France’s Dreyfus Affair, which exposed corrupt institutions, an opportunistic government, and a divided society—in short, “a serious deficit in intellectual and moral integrity.” The Third Republic did not live up to its ideals. Iraq showed the same about America.

One can appreciate the earnestness of Schroeder’s moral critique. Yet it is based more in liberal ideals than in actual events. Breaking international rules, in this theory, is the equivalent of violating the moral law in Kant’s universe: One’s actions cannot be willed as universal law, and so they damage the agent’s moral credibility and legitimacy. But just as the Third Republic lasted long after the Dreyfus Affair, the exposure of corruption doesn’t bring an end to a republican empire. The deeper problem with republican-imperial hybrids is the one the Roman Republic faced. As Rome’s empire expanded, the old form of republican government could not meet the new demands. A system designed for governing a city on the Tiber was not suited to governing the Mediterranean. Schroeder perceives the threat an aggressive empire can pose to a stable international system. He is less clear-sighted about the threat imperialism poses to the republic at home.

In De Officiis, Cicero distinguished the days when Rome was patrocinium of the whole world from the days when it was an imperium. Cicero’s distinction is not unlike Schroeder’s distinction between hegemon and empire. When Rome was a patrocinium, her commanders acted to defend her provinces and allies; as an imperium, they acted to oppress and ruin others. Cicero viewed this change with dismay largely because, as Rome expanded, a despotic cycle emerged that hollowed out the institutions of republican self-government. The dictator Sulla, whose star rose in large part because of his imperial adventures, is a key figure in this shift. We might call his dictatorship the prelude to the Roman imperial security state. It demonstrated that to preserve her empire, Rome needed to become a military despotism, albeit one that would use republican edifices to dissemble this reality. “And so,” lamented Cicero, “only the walls of Rome’s houses remain standing.” 

Schroeder’s second line of criticism is historically determinist. The imperial adventure may have worked in the past; now, in an age of globalization, it will inevitably end in self-destruction. New circumstances and the new supranational institutions (the EU, the UN, and NATO) combine to make “nineteenth-century-style empire structurally impossible and unthinkable in the twenty-first century.”

It’s no accident that Schroeder’s condemnation of certain actions as “nineteenth-century” echoes the Obama administration’s wailings about Russia’s annexation of Crimea. In the twenty-first century, we are told, one does not simply walk into a former Soviet oblast (though somehow it keeps happening). On closer reflection, Schroeder’s conservatism looks like the progressivism of John Kerry.

Both Kerry and Schroeder opposed the First Gulf War, the latter for far more thoughtful reasons. Unlike Kerry, Schroeder had an alternative theory of how the “new era in international relations” should develop. Rightly understood, the “New World Order” should put an end to the old resort to force, repudiating the politics of military brinkmanship. Bush Sr. (like his son) described foreign policy along the lines of a Wild West gunfight. The ultimatum to Saddam was a “line drawn in the desert.” Schroeder imagined another path. The responsible hegemon of the New World Order needed to dispense with the gunfighters and adopt the analogy of judo, whereby the goal is not to unleash maximum force but to throw your opponent off-balance, then use just enough force to disarm him. For Schroeder, military force in the Middle East was effective, but unnecessary. It would have been more efficient to pin Saddam down in Kuwait by means of worldwide economic sanctions, coordinated by the United States. The sanctions would have paralyzed Saddam, isolating him from the international community and teaching the lesson that violent solutions do not work—all without normalizing the use of force in itself.

These are the orthodox presuppositions of 1990s globalization. Economics is preeminent. Given how much states depend on mutual trade for their prosperity, war is useless. Some states, such as Saddam’s Iraq, need to be reminded of this new geopolitical reality. But for Schroeder, the way to go about it is not “compellence-deterrence” [sic]: the coercive, often violent means by which superior states enact their will. The new system should rely on “association-exclusion.” A bad actor is not invaded; he is excommunicated. He loses the benefits of economic association with others, possibly the whole world. No violence is needed: Sanctions teach that the price of exclusion is far too costly to risk violating the international consensus. The neutral and impartial rules of the international system are thereby preserved, equally enforced upon all. No actor is endowed with special emergency police powers that others do not possess.

This system has already been road-tested, Schroeder argues. In 1945, Germany and Japan were not taught to become liberal democracies by force. Instead, they were shown that adopting this regime type was the price of association with the West—markets, security, and respectability. France and Britain were encouraged to decolonize (“sometimes with brutal clarity as in the Suez Crisis of 1956”) because they were taught that they would lose the economic and political benefits of association if they did not. 

This picture of a New World Order is elegant, all the way down to its legitimating appeal to the postwar moment. Closer inspection exposes its blemishes. First, Schroeder’s description of postwar America as a benevolent hegemon fits uncomfortably with the postwar reality of a shattered Europe. Even if we believe that West Germany could really have decided for a different regime after 1945 (when a foreign power was writing its constitution for it), a large part of the postwar picture involved states that were far from equal and far from autonomous. Eisenhower’s 1956 threat to provoke a run on the pound if Britain did not withdraw from Suez was not the action of a hegemonic “first among equals.” It was a credible threat to oppress and ruin the British economy. Eisenhower demonstrated that Britain had lost the capacity for independent sovereign action due to its financial reliance on the United States. The British couldn’t defend what they perceived as their interests without first getting America’s permission. Schroeder is too smart to deny what happened in 1956. But he masks the raw inequality of power. A broken Britain opted to become part of an informal American empire that dared not speak its name.

There’s another problem with Schroeder’s narrative. The American occupation of Iraq, however imperial, didn’t cascade into a great-power conflict or lead to a collapse of American power in its traditional zones of influence. One gets the impression that America’s power over its allies strengthened rather than weakened during the first two decades of the twenty-first century. Appealing to the law of nations, Germany protested that the second Iraq war was illegal, and yet the Germans allowed their country to be used as a base from which to wage it. In a gesture of Gaullist defiance, France’s threat of veto shamed the Americans at the United Nations; several years later, the country reversed de Gaulle’s bid for strategic sovereignty and rejoined NATO’s military command structure. Iraq may have exposed America’s limits in the Middle East; it also showed that Europe was willing to submit to an ever less informal American empire.

And there’s a third problem: Schroeder imagines a largely straightforward demarcation between coercion through “compellence-deterrence” on the one hand, and consensus through “association-exclusion” on the other. The latter presupposes economic and social networks to which one must belong if one is not to be left out. That’s the basis of Schroeder’s distinction between empire and hegemon. But both his hyphenated terms describe deeply ambiguous and potentially coercive relationships. A state may join an economic and social network not because it brings intrinsic benefits, but because the penalty for being left out is too steep to bear. (These coercive aspects of the modern world are outlined in David Singh Grewal’s brilliant book Network Power: The Social Dynamics of Globalization.) Schroeder, like many liberal proponents of globalization, sees economic and institutional associations—network power—as providing the impartial, neutral, and necessary rules for prosperity. Unlike more naive liberals, he sees that these associations are powerful means to enforce one particular vision of the international system. But he doesn’t see just how destabilizing this vision of soft power is.

Is “association-exclusion,” enforced by economic means, really a judo-like method of responsible, restrained statecraft? Judo presupposes a fixed set of tools and tactics that can be brought to bear on one’s opponent: hands, legs, grasps, kicks. That’s not the economic world. Technological innovation introduces new forms of economic competition and new forms of rivalry, with political and strategic implications—so that beating China in AI or shipbuilding is not just a sporting contest over GDP growth. And as long as economic exclusion from the family of nations is taken as a legitimate policy goal, technological change will create new opportunities to strike at recalcitrant actors—as well as new dangers. When one nation attempts to remove another nation’s central bank from the world’s financial system—which would have been inconceivable before the onset of digital banking—it may end up a cobelligerent in a brutal war of attrition.

Moreover, implementing targeted “association-exclusion” changes the laws of the republic. Maintaining sanctions and ensuring that a rogue nation remains excluded requires a whole new institutional, legal, and diplomatic regime, foreign and domestic. It requires clandestine measures to hide the full scope of the nation’s involvement from a skeptical public. Abroad, it is just as intrusive as—if not more intrusive than—the occasional military operation exercised against recalcitrant dictators. Constantly reordering social and economic practice to ensure that an actor remain excluded from the international system will have the effect of expanding state pressure, power, and administrative reach.

One might argue that hegemony is not violent, at least not intentionally. But ostensible anti-imperialists should ponder the way sanctions get used. Take four examples from the Cold War to the present: Rhodesia, Slobodan Milošević’s Serbia, the Islamic Republic of Iran, and the Russian Federation. It is unclear whether sanctions against these actors were designed to punish them for specific actions—or to undertake that quintessentially imperial action, regime change.

And therein lies the greatest danger of “association-exclusion.” The excluded actor is made into an unperson, a non-state. In such an actor, as Woodrow Wilson put it in 1917, “we can never have a friend.” The excluded becomes the outlaw of humanity; as at least one commentator on Wilsonianism realized, such a monster “must not only be defeated but also utterly destroyed.” The principle of “association-exclusion” culminates in annihilation.

From this angle, Schroeder’s New World Order does not seem a workable alternative to neoconservative global imperialism. It endorses different means, suggesting that tools other than military might can work just as well (perhaps better) to marginalize deviants, change their governments, and eliminate monsters from the earth. The endgame is the same.

But, of course, Schroeder’s New World Order is not much different from the policies he imagines himself opposing. “Association-exclusion” began in the Rhodesian Civil War of the 1970s, when Washington decided that the dictates of the Cold War necessitated the use of economic power to force regime change on Rhodesia. Even if George Bush’s wars of the 2000s relied on more direct action, his successor was shrewd enough to recall that the imperial playbook left him many other options of the sort Schroeder endorses. One might argue that the real lesson of the Iraq War was that the Washingtonian imperial security state should never again be caught having to justify its actions at the ballot box. The safest way to run an empire is to hide it. Invisible economic coercion is a great way to do that.

Barack Obama was the last president able to pose as anti-war. He promised that he would not deploy hundreds of thousands of troops to topple governments; military strikes against recalcitrant actors would be “unbelievably small,” in Kerry’s phrase. For anti-imperialists, this side of Obama’s rhetoric brought solace. It masked a deeper shift.

During Obama’s presidency, America cemented itself as the world’s high priest. It pronounced ever more dogmas of exclusion and association and expected everyone around the world to adopt them. It was already enforcing those rainbow dogmas at home. The strategy for enforcing them was exactly what Schroeder prescribed in the 1990s: to use economic power to teach what one could and could not do in the twenty-first century. The pronouncements of exclusion by Obama and others cajoled and compelled haute finance to rain down economic punishments on rogue states and actors, foreign and domestic. This strategy of “association-exclusion” reached its frenzied peak in 2022, the time of truckers debanked, of the FBI’s “Arctic Frost,” and of Biden’s “this man cannot remain in power.”

Schroeder was a sharp thinker, superb historian, and careful student of foreign policy. Yet he embodied the central contradiction of so much of today’s foreign-policy commentariat. On one hand, he opposed America’s imperialist adventures and invasions. On the other, he embraced America’s universal responsibility to adjudicate and enforce “association-exclusion,” and to do so by a variety of ever more coercive tools. If he had had his way, the global American empire would be at once dead—no more forever wars!—and very much alive. It would be orchestrating, directing, and disciplining the global economy to ensure that the rogue and reactionary actors that break the norms set by America’s priestly caste are extirpated from the global system.

In these essays Schroeder, like his peers, arrives at this position because he does not question the most important post-1989 axiom: America’s world-historic, universal responsibility. Questioning that responsibility brings disaster. In his writings, the America First movement of the 1920s and ’30s is the second-greatest villain of the twentieth century, just as it is for the neoconservatives he despises. 

Reading Schroeder, one has the impression of witnessing a vicious family quarrel. He loathes what the neocon cowboy branch of the household has done. But it’s a domestic dispute within the wider Wilsonian clan. None of the disputants repudiates the twenty-eighth president, nor does any repudiate his ends. The quarrel is over whether, in effecting regime change around the world, it’s better for America to act as the world’s policeman, its priest, or its pickpocket—the confiscator of foreign financial assets. It’s a typical uniparty dispute, premised on how best to bear the burden of America’s global responsibility. The family quarrel is resolved once the ’89ers settle on some amalgamation of the three. Then they can go back to bashing the isolationists.

We often hear how postwar American conservatism broke down because of irreconcilable tensions within the “fusionist” camp. The argument over the philosophical coherence of fusionism brackets two more concrete questions. Can one admit that, during the Cold War, America was already a global empire? Can one accept that, even then, Americans didn’t like that very much? Cold War conservatives, including Schroeder, helped reinforce the paradigm that distracted Americans from asking those questions.

Schroeder was a Cold War conservative because he opposed the challenge of the New Left in the 1960s, whereby every American action abroad disclosed the tentacles of vicious imperialism, and every American action at home disclosed an irrational, jingoistic mob. Schroeder protested. There was a sensible, pious patriotism in resisting the New Left. But Schroeder’s motivations for doing so are complicated.

In an early essay in this volume, Schroeder argues that America’s Cold War hegemony rested on a vote of confidence by the American people. America’s presence in Europe and South Korea and its support for Israel reflected “the public’s ability, sensibly led and instructed, to understand the central realities of international politics, gird itself for the long term, and wait patiently for results.” In the 1990s, against those who thought that America needed to withdraw from parts of the world because of public skepticism of further imperial adventures, Schroeder wanted to stay abroad. He argued that the country’s leaders could win another vote of confidence. It sounds like a republican expression of faith in the American people. But “sensibly led and instructed” isn’t that. It’s an expression of faith in their malleability. The American people will vote in the right way—after their leadership class has curated their options.

Americans have never been particularly fond of being told by their leaders how to think. In the twenty-first century, they have gotten increasingly wise to this trick. So it’s no surprise that after America’s fatal leap, Schroeder became much more pessimistic about the country and its people. He describes American history as the drama of a typical empire: a “rent-collecting state living off and profiting from capital accrued from the struggles and achievements of others” and now reckoning with reality. “In that respect, as in others, it is no longer exceptional.” The final essay of the volume reflects the conventional opinions of 2016. Schroeder made the choice of the academic and national-security establishment, preferring to keep the global American empire running rather than disrupt the status quo by supporting a garrulous outsider. Whatever anxieties the public had—about a dying republic, the cost of forever wars, the state-sponsored rainbow flag flying abroad and at home, and the kind of regime America had become—were irrelevant. Schroeder heaped condescension on the candidate and his irrational voters. In America, he wrote, “there is a considerable rabble to rouse.”

Schroeder, like so much of the foreign-policy class, couldn’t really face the possibility that the skeptical instincts of the voters might be right. Running a global empire has kept up the edifices of the old republic but emptied the houses of their treasures. The end of the Cold War was the right time not just to question wars in the Persian Gulf, but to question the presupposition of universal responsibility. It was a time to renovate America’s dying republican homesteads. After the fall of the Berlin Wall, our elites could have redrawn the boundaries of a responsible patrocinium. Yet the ’89ers chose a different road. Their critics, Schroeder among them, were bewitched by the dreams of limitless American responsibility. They ultimately helped the ’89ers to stay the course, in Iraq and elsewhere. And so in America only the walls of her houses remain standing. 

The post Hegemon or Empire? appeared first on First Things.

Tunnel Vision https://firstthings.com/tunnel-vision/ Wed, 17 Dec 2025 06:00:00 +0000 https://firstthings.com/?p=118795
Domination:
The Fall of the Roman Empire and the Rise of Christianity

by alice roberts
simon and schuster, 352 pages, $31.50

Alice Roberts is a familiar face in British media. A skilled archaeologist, she has for years hosted the television show Digging for Britain, which is a superb piece of scholarly popularization. Throughout, she appears unfailingly lively and colorful (her frequent changes of hair color constitute a kind of trademark), and she must be an outstanding teacher. She is particularly good at taking objects that appear dry and dead and presenting them in their vivid and comprehensible reality. She has taken burials and tombs as her specialty, with a major subfield in plague and pandemic. Some of her books on these themes, such as Ancestors, Buried, and Crypt, are quite brilliant.

Roberts’s strengths are manifest, and all are on display in Domination. There is much to learn here for anyone interested in that epic story of the fall of Rome and the rise of Christianity, and she does a fine job of bringing objects and places to life. After all, so much of the evidence for the spread of the new Christian faith comes from her beloved “burial archaeology”—from tombs, catacombs, and memorial stones. In this area she knows the literature very well indeed.


Where she fails, sadly, is in applying anything like the same skills to the reconstruction of minds, ideas, and beliefs. Roberts is a devout atheist and militant secularist who rejects all religious claims. That in itself certainly would not disqualify her from studying Christian origins or histories of spirituality. But in practice, she views these early eras with a tunnel vision that evinces no awareness at all of spiritual motives. When she describes all the saints and scholars, her architects and illuminators, she has not the slightest appreciation of the beliefs that evidently motivated them, and she clearly doubts whether any sane person could think such ludicrous things.

When Roberts is describing the material remains of the eras she is analyzing, her accounts are informative and evocative. But as she approaches the mainstream histories of Christianity, she becomes all too willing to tell the tale in terms of cynical motives and deception, as a centuries-long grab for power and domination. Her title seems intended to challenge and subvert the arguments of Tom Holland’s successful book Dominion (2019), which showed how Christianity had shaped the moral universe of Western civilization. For Roberts, the story has nothing to do with morality, and precious little with religion, in any recognizable sense. It is a story of power, of oppression, or to borrow Orwell’s horrible image, of a boot stamping on a human face—forever.

You could actually write a very worthwhile history of “The Fall of the Roman Empire and the Rise of Christianity” by looking only at the concepts that Roberts either omits or mentions briefly, almost as an afterthought. One spectacular example is healing, which any worthwhile scholar would rank very high among the reasons early believers accepted the new faith—and in the category of healing, I would include exorcism and demon-fighting. To take just one author: If we have learned anything from the writings of Peter Brown over the past half-century, it is the central importance of healing and “wonderworking,” and of the charismatic holy person as a conduit between Earth and Heaven. Closely related is the idea that God intervenes directly to express his satisfaction or dissatisfaction with how such figures are treated. He might express it through the state of the harvests, or the climate, or such natural phenomena as earthquakes. Or, of course, through the plagues and pandemics that are such essential components of Roberts’s own historical work.

Roberts shows no awareness of this (and astonishingly, given her topic, she makes just one reference to Peter Brown in the whole book). She offers a brief description of the sixth-century Welsh saint Samson of Dol, who engaged in “various miracles, including restoring sight to the blind, healing lepers and exorcising demons.” I quote that because, as far as I can see, this is the book’s only reference to the activities of healing and exorcism. 

Although she often mentions shrines, Roberts gives no sense of the healing activities that would have been the main draw for most pilgrims and visitors: She has a few generic references to miracles and visions, most frequently in the British and Celtic context, but staggeringly few, given the book’s theme. Neither “miracle” nor “exorcism” nor “healing” nor “demons” features in the index. 

Scarcely less startling is the almost total absence of the Virgin Mary, who appears chiefly in the context of a couple of church dedications. Yes, we finally have a historian who imagines you can examine the appeal of Late Antique Christianity without exploring the cult of the Virgin, which was overwhelmingly powerful at all social levels. There had to be one.

The book’s geographical emphases are also, well, quirky. By any reasonable estimate, in the period Roberts is examining, the Eastern churches were the heartlands of the faith, the most productive in terms of innovative thought, and the setting for most worthwhile debate. Roberts makes some general comments about the East, about Egypt, Syria, or Mesopotamia, but the treatment is very thin, and accounts of the culture in those regions are close to nonexistent. The word “Coptic” occurs once in the book and “Syriac” not at all. Nor does “Armenia.” This is overwhelmingly a Western European study, of Latin Late Antiquity, in an era when such a focus is, at best, eccentric. 

As a book, Domination uses a somewhat surprising structure, with almost a reverse chronology. Of the five chapters, the first two focus on the last phases of the Roman empire in the West, a time when Christianity was already well established. The opening chapter, “Land of Saints,” offers an evocative survey of post-Roman Britain, in what we sometimes (controversially) call the Dark Ages, the great era of the Celtic saints. This choice makes excellent sense in terms of Roberts’s own enthusiasms, and the material will be quite familiar to any British person who has watched the many archaeological popularizations that have flourished over the past quarter-century. It may be harder going for American consumers. The next chapter, “End of Empire,” focuses on Gaul, Spain, and the West during the barbarian settlements, and the means by which the old Roman order gave birth to the new Christian worlds of bishops and monasteries.

So far, we might think that we are reading a detailed and specialized account of those late-Roman centuries, and the story is well told. But Roberts’s narrative then takes us further back to the time of the “Heart of Empire,” with a focus on the debates that culminated in the Council of Nicaea in 325, then earlier still to the “Business of Empire,” and then finally to the chapter “Ruler of All,” which discusses Christian origins. Nicaea, in fact, becomes a leitmotif in the book, as an exercise in the imperial co-optation of faith, with Constantine (of course) as the archvillain. We have been here very often before, not least in The Da Vinci Code. Nicaea, incidentally, gets dozens of references in the index, against zero for Chalcedon. That is significant, because the sparse early words of the Nicene definition asserted Christ’s incarnation but left immense room for debating just how those divine and human elements might have coexisted within the one flesh. It was Chalcedon that settled those mind-stretching questions, and explained just how a man walking in Galilee could be regarded as fully divine and yet fully human.

Roberts’s book appears to be structured like a detective novel that begins with a mystery and then progressively unveils the circumstances until finally we arrive at the surprising truth—that Christianity was a centuries-long quest for domination. If that was indeed Roberts’s goal, I apologize for offering the spoiler here. In this instance, the initial mystery, the body in the locked library, is the saint-dominated world of the British Isles, with its monasteries and pilgrimage routes. But Roberts’s solution is so crude as to astonish. It is very much like reading a nineteenth-century rationalist who is still furious at his fundamentalist upbringing and who marshals every piece of polemic he can scrounge together, from whatever discreditable source. The attitude to organized Christian faith is a mix of raw anger and chilly contempt.

Much of Domination is so simplistic and ill-informed as to demand quotation at length, to prove that it really is this bad. In this vision, of course, St. Paul is the villain of the story and the real founder of the Christian faith, but Roberts ignores a huge amount of scholarship when she suggests that he was actually little known in the first few Christian centuries. Paul became prominent, she says, only when his views were taken up around a.d. 400 by Saints John Chrysostom and Augustine of Hippo. “Perhaps they recognized the power of Paul’s relatively simple message. Would the name of Jesus have become so well known had it not been for Paul? Would Paul’s name be so well known if it were not for the golden-mouthed John or the bishop of Hippo? I don’t think we’ll ever know.” Well, yes, I actually do know the answer to those silly questions and so does any competent scholar of early church history. As to Roberts’s first point, about spreading the name of Jesus, she herself rightly notes that “none of the early centers of Christianity—in Jerusalem, Antioch, Alexandria and Rome—were actually established by Paul”; and we could expand that geographical list at some length, to Lyon, Carthage, Edessa, and so on and so forth. Does this fact not answer her question fully?

As to just when Paul’s name became so well known, we need only look at the New Testament itself. Certainly, beginning in the second century, all attempts to construct a canon of Christian Scriptures identified Paul’s letters as a major component of that corpus, even if there was not total agreement on every document. That fact certainly suggests, does it not, that Paul’s name might have aroused at least a glimmer of recognition among believers a couple of centuries before Chrysostom and Augustine?

The crude materialism of Roberts’s arguments is amazing for a scholar of her undoubted intelligence, as is her brusque dismissal of any rival approach. Christianity, she says, exercised no obvious appeal to any likely mass constituency. It conquered an empire solely and entirely because it offered a means by which threatened elites could preserve their power and wealth in a situation of social and political collapse. Here is the core of her argument:

Christianity wasn’t a grassroots movement—or at least, not for long—it was very much led by the middle classes and the elites. We’ve seen, in the west, how the Roman Empire fragmented but its structures, its wealthier citizens and its elite families, its ways of doing things, all stayed in place—under the aegis of Christianity. 

With emperors coming and going, barbarians seizing power in the west, and the political landscape of the fourth century looking decidedly unstable, the families who effectively ran the economy—from the middling sort, the merchants, lawyers and doctors, to the social elites—had hit upon an elegant solution. It may have crept up on them, almost inadvertently, but it worked. They’d found a way to protect their interests and to keep everything running, no matter who was, officially, in charge, whether that was a usurping Roman emperor or a new Visigothic king.

That new solution, the Christian empire, the Faith-Imperial Complex, held its supremacy until modern times. “And from then on, it was bishops supporting kings supporting bishops . . . all the way down.” 

Although Roberts does not quote Thomas Hobbes explicitly, every chapter echoes his famous remark that “The Papacy is no other than the ghost of the deceased Roman Empire, sitting crowned upon the grave thereof: for so did the Papacy start up on a sudden out of the ruins of that heathen power.” Her theme, however, is the very direct continuity from old pagan and imperial Rome to the whole Christian church, rather than merely to the papacy. In effect, we are to imagine the tenants of a great estate centered on a Roman villa, who one day awake to find that the villa has become a church or monastery, where the old landlord now serves as bishop or abbot, the same man with a different hat. The names have changed, but the domination continues. “Romanitas became Christianitas.” And repeatedly, Roberts portrays the continuity as a deliberate survival strategy by those old elites. If ordinary people accepted the new faith, Roberts tells us, they did so wearily, in order to obey the stern commands of these landlords-turned-clerics. 

Not a word of this comports with the abundant evidence we possess from multiple sources of the enormous appeal of charismatic figures, of the wonderworkers and prophets, monks and stylites, who drew the ordinary faithful in countless numbers.

Christianity, Roberts says, was from the beginning a corporate enterprise, which like any modern counterpart was focused centrally on preserving and promoting its brand. “Beyond the ideas contained in Christian scripture, there were some great marketing techniques.” This spiritual Ponzi scheme succeeded thanks to its “businesslike worldliness.” Paul excelled at these techniques. “Paul’s version of Christianity was alluringly simple. Modern politicians know the power of a slimmed-down message, a three-word slogan; and that marketing technique already existed two thousand years ago.” Moreover, “He opened the market up. He’d also sown the seeds of the movement as a financially viable operation—collecting donations on a regular basis and funneling those funds back to the center.” She is undoubtedly correct to note that any rising movement needs to secure a solid financial base, but her corporate analogies go far beyond that, and really are close to obsessive. Christianity, she says, succeeded because of its mastery of brand identity, its manipulation of slogans, jingles, and logos.

Roberts scorns anyone who fails to accept her analysis:

I’m sure that apologist historians (including some who claim not to be Christian, but seem to be suffering some kind of Stockholm syndrome) and theologians, and lots of other people who want to believe that organized religion is about something other than money or power, will be queuing up to shoot me down on this.

A reasonable estimate would suggest that the historians who do not agree with that vulgar materialism constitute, what, around 98 percent of the profession? Of course, these people would accept that sound financial foundations are essential for the success of a sect or denomination, and political power can be an inestimable advantage (although it can also be discrediting). But is there nothing more than that? Just test this opinion with reference to any great religious movement in history—the Great Awakenings of the eighteenth century, the Anglo-Catholic and Tractarian upsurges of the nineteenth, the Pentecostal explosion of the early twentieth, the mass Christian expansion through Africa in our own times. All about money and power? Really? Nothing whatever to do with the spiritual worldviews of the constituencies involved, of tensions arising from class, gender, education, race, and demography, as these multiple strains are intensified by perceived heavenly signs and wonders?

I have no wish whatever to shoot Roberts down, but I would very much like to see her confine her writing to areas where she has a vague sense of what the very sizable scholarly consensus actually holds, together with a pinch of self-doubt and an openness to contrary arguments. She could also stick to what she is good at. She should assuredly carry on writing her moving accounts of archaeological sites, as interpreted through the generous sensibilities of a person who is surely far wiser than these rants might suggest.

By all means, read Roberts’s other works, and Ancestors is a fine place to start. Let’s just write off Domination as a horrible mistake.


Image by Michal Osmenda, licensed via Creative Commons. Image cropped.  

The post Tunnel Vision appeared first on First Things.

In Praise of Translation https://firstthings.com/in-praise-of-translation/ Tue, 16 Dec 2025 11:00:00 +0000 https://firstthings.com/?p=118535

This essay was delivered as the 38th Annual Erasmus Lecture.


The circumstances of my life have been such that I have moved, since adolescence, in a borderland between languages. Borderlands tend to be rugged, at least if they are ancient, not the doings of spoil-sharing cartographers drawing absurdly straight lines through living habitats, or cancelling them altogether. Old borders, like that between Norway and Sweden, say, or England and Scotland, tend to wind. They are defined by the terrain, a firth or a mountain range, that permits a no-man’s-land of wildness on either side of the frontier. Travelers, mounted or on foot, are aware of performing a passage when they move from nation to nation. If the passage is fierce, it may amount to an assault, a transgression. Carried out in hope of encounter and barter, mutual enrichment, it can represent a bearing-across of goods, a translation. I have been drawn to excursions in such country always—well, at least since I realized such country exists.

While growing up, I presumed that the whole earth had one language and the same words. I heard foreign speech on television but thought it a crackable code, the mother tongue distorted into alien sounds that were salvageable by subtitlers. My perception reflected a self-sufficient outlook on the world. Such an outlook is childish, but not the prerogative of children. I used to know an old codger who was convinced that tourists who turned up in his Norwegian town were putting on an act with all their gibberish, that they would grasp what he meant as long as he spoke loudly. At some level, we are all prone to this assumption.

When I started learning English at nine, teaching was based on explications of concordance: “This says that.” It is a time-honored method. By it, the Austrian couple who feature at the end of Casablanca have assimilated English to prepare for exile. The fruit of their study is displayed as they drink to America, then fret:

“Liebchen, ach . . . Sweetnessheart, what watch?”
“Ten watch.”
“Such much!”

A degree of sense is pressed through the sieve of language in this way, but one might as well communicate in Morse, or by means of Google Translate. I started German next. It was conveyed through a manual with drawings of a dachshund. The grammar did appeal to me: It was akin to mathematics. But I could not really see the point of acquiring it.

At the same time, though, I read. Lots. I was falling in love with literature, as I loved music. An older relative noticed these interests, which made me stand out as rather odd. He was a bit of an oddball himself, an artist who sensed life keenly, sometimes cripplingly so. He affirmed my awakening to meaningful expression in a way school did not, and perhaps could not. He sent me records for my birthday: Mozart, Rachmaninov, Ravel. Once while I was visiting, he and I, before a patient sunset, listened to Schubert’s Quintet together. I saw how piercing melos can be.

He recommended books also. It was on my Schubert visit, as far as I recall, that he urged me to read Hermann Hesse’s Narcissus and Goldmund. I went straight to the library, sought out a Norwegian version, then thought, “But what in heaven’s name am I learning German for?” and got the original instead. I was fifteen.

What I discovered entranced me.

Narcissus and Goldmund was not my first non-Norwegian book. I had read a few English ones: yarns by Frederick Forsyth, Orwell’s Animal Farm. I had even, very self-importantly, gone out to purchase The Penguin Book of English Verse. Still, constant exposure to English had made me tone-deaf to it. It was too familiar. I heard it like elevator music, read it code-crackingly. German was something else. It transported me. Reading Hesse was like taking a first unaccompanied trip abroad. I found myself in a strange universe full of unfamiliar, fascinating features. I realized, for the first time, how one might perceive and speak differently. Hesse’s very first sentence was a revelation. Covering two-thirds of a page, it is made up of 143 words sinuously interconnected through organic successions of subclauses to describe a gnarled chestnut tree. The sentence is itself arboreal. It not only speaks of but shows the stem and branches sprung from a sapling brought north by a pilgrim come from Rome, subtly reminding us that German’s structure surges from a Latin root, a source to be duly acknowledged even as one savors the tree’s sweet fruits on one’s home turf, like the monk in the novel who roasts choice chestnuts in his cell’s fireplace.

Having glimpsed the scenery and heard the songs of this new land, I could no longer stay quietly at home. I had lost native confidence in my own way of talking. Not that I spurned it; simply, I knew that there were more things to be said, other ways of saying them. An explorer was born in me—and a dissident. I dimly conceived a conviction I was glad to find stated a decade later by the Siberian novelist Andrei Makine: “Monolingualism produces a totalitarian vision of the world. This object is called a book and that’s it. Whereas the bilingual child, faced with one object with two names, will have to grapple with abstract and philosophical ideas early on in life.” I thought myself awfully adult at the time but was still child enough, thank God, for this grappling to be lodged in me as something constitutional. I have since been driven by the need to gauge the resonance and trace the genealogy of words. For words are forms of life; and life, to be spoken of well, requires critical, deliberate, humane articulation.

Such articulation is under threat. Language is being impoverished. The world is turning monoglot. It happens not just through the closing down of language schools or the extinction worldwide, at the rate of one a fortnight, of indigenous tongues. A more insidious form of linguistic totalitarianism encroaches through the dumbing-down and hemming-in of terminology. Words are shorn of their semantic substance the way a pig’s foot is shaved by a butcher who stuffs the meat into a sausage machine, then leaves the bone, at the end, to be ground.

The loss of differentiated vocabulary (the German for which is Wortschatz, “word-treasure”) leads to dull and simplified communication. I have heard it said that you can get by in everyday talk with a lexicon of five hundred words supported by occasional gesticulation. I have no idea what the empirical foundation for this claim is, but I see it peddled often enough, so it must correspond to some sort of received wisdom. Be that as it may. I hazard the guess that, were you to examine government communiqués or your own text messages over the course of, say, the past month, the count of five hundred distinct words, with emojis substituting for gestures where appropriate, would not be far exceeded. That should make us thoughtful.

Then, of course, there is artificial, or inhuman, intelligence. Thanks to it, translation is mechanized. Feed in some verses by, say, Li Bai, a Chinese poet of the Tang Dynasty, and you find them spewed out in the twinkling of an eye in colloquial American, contractions and all. AI does not limit itself to rendering one “foreign” language into another. It offers to translate even the vaguest notions that arise out of our fancy’s pond-fog into cogent speech. As a result, we can increasingly excuse ourselves from coming up with our own words. There are, here and there, pragmatic advantages to this. But they come at the cost of catastrophic loss. For what will man turn into as he, formed in the image of the Word, surrenders poetry to algorithmic patterns, logos to digits, with everyone’s speech, everywhere, sounding the same?

From an aesthetic point of view, there is agreeable symmetry in the fact that hubristic engineering in our post-post-modern world is effectively, by reducing discourse (the way broth is reduced), restoring a status quo that is said to have been lost through hubristic engineering in pre-history. The origin of speech-diversity, the birth of babble, is proverbially traced to the scattering abroad of men after their ill-fated attempt to make gods of themselves by constructing a tower “with its top in the heavens.” Language-lovers such as George Steiner have qualified the notion that Babel’s sanction was all bad. After all, a wealth of sound ensued, and untold potential for nuance and polyphony enriching both our intuitions and our palaver.

I am partial to this view, but the real stakes of Babel may in fact be pitched at a different level. When you think of it, Scripture talks of linguistic multiplicity before anyone had thought of tower-construction. In an account of the offspring of Noah’s son in Genesis 10, we read: “These are the descendants of Japheth in their lands, with their own language, by their families, in their nations.” Scholars embarrassed by this verse have stressed that Scripture’s narratives are not always in strict chronological order. We may, though, responsibly get round a sense of felt contradiction. In Hebrew, the beginning of the next chapter, Genesis 11:1, reads: 

וַיְהִי כָל-הָאָרֶץ שָׂפָה אֶחָת וּדְבָרִים אֲחָדִים

The Revised Standard Version, voicing an established norm, translates: “Now the whole earth had one language and the same words.” This spells out, as I have noted, what is for many of us a natural supposition regarding an original status quo.

The learned Rabbi Joseph Herman Hertz, though, thought it better to translate the Hebrew phrase in this way: “Now the whole earth had one language and few words,” that is to say, a small vocabulary. If this is right, the tower of Babel stands for confederation through a handful of mobilizing slogans. Ideals tend to call for nuanced explication. Slogans are much the same in any age. We might conjecturally think that the investors and workmen of Babel were driven by timelessly attractive thoughts of, say, “greatness,” “progress,” “mastery,” “profit.” Jewish legend, adds Rabbi Hertz, “tells of the godlessness and inhumanity of these tower-builders.” Their language did not extend to the register of philanthropy: “If, in the course of the construction of the Tower, a man fell down and met his death, none paid heed to it; but if a brick fell down and broke into fragments, [then] they were grieved and even shed tears.” This image from a legendary source has, alas, perennial applicability.

The connection drawn between fewness of words and inhumanity is striking. Men and women were created, the Bible would have it, to enunciate creation. Endowed with logos, speaking in time as God spoke “in the beginning,” Adam was graced to give things adequate names. Conversant with his surroundings, he was to be creation’s steward, to “dress it and keep it,” but not to lord over it: Lordship was God’s. The enterprise of Babel sought to turn the tables. Man had grown bored with contemplatively naming things for the sake of wise appreciation. His new design was pragmatic and goal-driven: “Let us make a name for ourselves!”

The tower instantiates humanity’s ambition. Few words were needed to raise it: Graphs and figures would go a long way. The mathematical, calculable nature of the project is shrewdly envisaged by some of those early modern artists—Valckenborch, Brueghel, van Cleve—who painted the subject. On their canvases not a lot of talking goes on. Babel, it appears, brings about the death of conversation. Man is locked, with gritted teeth, in self-aggrandizing pursuit. The one word that touches him is “Me!” now and again transposed, for form’s sake, into a plural “Us!”

If we adopt this exegesis, the subsequent confusion of language and decentralization of mankind are not so much punishment as respite. Relieved of megalomaniac obsession, man is forced to look around. Once again he notices, lo! the mountains and plains, the blue sea and starry skies. He recalls what he, creation’s spokesman, owes them. The breakdown of Babelish, intercourse contracted to figures and a few mantras, provokes the rebirth of conversation. The aftermath of Babel in effect redeems man as a logical being. Even as his expulsion from Eden was a blessing in heavy disguise, effected to prevent fallen man from eating of the tree of life and thus remaining forever in a state of sin-woundedness, so his driving-forth from Babel was beneficial. Recovering word-wealth, he learned again to engage with alterity. Any notion of “thus” is tempered henceforth by knowledge that there is an “otherwise”; one can no longer speak of “here” without considering an “elsewhere.” Any biblical prospect of home is bound up, from this time on, with remembrances of exile. Man learns to refrain from absolutizing—that is, from putting all his trust in—any earthly attachment, any political or technological project, any fixed jargon.

A consciously acknowledged variety of tongues, whether or not it already existed in Japheth’s days, makes of translation a paradigm for getting by after Babel. Terms must be tried, words weighed. Confidence in private concepts is tested by the need to explain them. Sensitized to strangeness, man is asked to think of himself as a pilgrim, a word that does not designate someone backpacking to Lourdes so much as one who comes from foreign parts, a passer-through, a dweller-for-a-while.

Why was Abram, prototype of this new humanity, called out of Haran? To learn to be an alien. This happened first in Canaan, where his proclamation of “the Lord” witnessed to a metaphysical Otherness it would take centuries to name. In Egyptian exile his experience took flesh: Abram found himself talking and acting inexplicably. Israel’s other great patriarchs, too, bearers of words that exceeded them, faced incomprehension. God, it seems, would have it so, forestalling territorial pretension in geographic and semantic terms. Man is the creature that, drawn hither and thither, must ask: “What does this mean?” When Israel at last crossed the Jordan, two types loomed large over the nation’s destiny: that of Moses, the hard-of-speech, born and buried in strange lands, and that of Jacob, the wandering Aramaean.

The give-and-take of converse with the Pharaonic south and Mesopotamian north, then with the eastern tribes, with Gaza, Syria, and Sidon, was inbuilt in the Abrahamic, and later Israelite, project of nationhood. We find, too, that in Israel’s God-given Law, the stranger, designated oxymoronically a “resident alien,” appears intrinsic to the nation, lodged within it, we might say, as a providential irritant.

By having to engage with differently formed foreign-tongued people, Israel is kept in a tension of non-familiarity regarding its ultimate identity and task, reminded that the God behind its social contract says, “My thoughts are not your thoughts,” that the people’s freedom is premised on their memory of having once been slaves abroad. The nation is not an end in itself, but an instrument with a universal purview: “Out of Zion shall go forth instruction” for “many peoples,” so that together they might “beat their swords into ploughshares, their spears into pruning hooks.”

Whenever the chosen people gets too sure of itself and reduces revelation to platitudes, sacred things to talismans (as when, under Eli, the Ark was hauled into battle against Philistines), providence occasions some exclusion or loss by way of re-enacting exile. God will not let faith be instrumentalized for purely territorial ends. To relieve the people of its presumption, he reimmerses it in foreignness. A fresh thirst is born to hear new words, to be given at the opportune time.

“The word of the Lord came to me!” By this acclamation prophets would later legitimate themselves in sacred history. The gift of some new word sheds light in darkness. New words require fresh translation; fresh translations, the rereading of tired words. Settled scenarios may come to seem new. Deadlocks may loosen. A parable can be drawn from this. At times of crisis, when a nation is at odds with itself or its environs, what is needed is not the paring-down of discourse but its enrichment. Hope for restored togetherness lies in engagement with allophony, a linguistic term we may paraphrase as “other-speak.” Once I let go of the demand that my stock of words—the sense I give them, my way of speaking them—must be the norm for exchanges with others, a number of realities may rearticulate themselves.

What, then, is translation? How does it work? In an essay from 1937, Miseria y esplendor de la traducción, José Ortega y Gasset spoke of translation as a utopian exercise. By way of example he asks picturesquely how the resonant syllable of the dense, dark green, fragrant German Wald might hope to have its sense conveyed by the plosive bosque, which to Spanish-speakers summons up a tuft of trees in a parched plain. Both words mean “forest,” but do they speak of the same thing?

The semantics of a language are rooted in specific soil and in local sensibility. Its grammar, sounds, and syntax, too, presuppose a framework that makes us slide, in the way we think, “along preestablished rails prescribed by our verbal destiny.” This image is proffered in Ortega’s text, crafted like a dialogue, by an unnamed “scholar of linguistics.” He maintains that the structure of language forms our perception of the world so deeply that a statement made in one cannot fully be rendered in another. The best we can hope for is approximation. Translators traditionally follow, the linguist says, quoting Schleiermacher, one of two trajectories: “either the author is brought to the language of the reader, or the reader is carried to the language of the author.” The French, with their knack for definition, call the first option cibliste: It favors the target-language. The other, focused on the source, they call sourcier. Ortega’s linguist thinks the target language has to suffer; there is no other way. A reader of any translation must know in advance that “he will not be reading a literarily beautiful book but will be using an annoying apparatus.” It is no translator’s task to make a work easy to read. He is to make it clear, if necessary by piling up footnotes. Schleiermacher’s version of Plato is cited as an instance. It is, the linguist says, unlovely. It gives scant reading pleasure. But it may stand a chance of helping us “transmigrate within poor Plato, who twenty-four centuries ago, in his way, made an effort to stay afloat on the surface of life.”

Ortega does not quarrel with this stern view. He concedes that, in translation, language must be pushed “to the extreme of the intelligible,” as if transfer of sense from one linguistic medium to another were inevitably an imposition. His claim that the task is utopian is softened when he adds that “everything Man does is utopian.” Still, a note of wistful sadness penetrates his often-witty text. One is left with the overall impression that it is just damn hard for people to talk to each other, that translation is at some level force-funneling from person to person. And does not each of us speak, to some degree, a customized language, conditioned by unique experiences and associations? We might remember that Ortega published this piece in a Buenos Aires journal in 1937, when Spain was in the throes of a fratricidal war whose tragedy is evoked in a short story from 1943 by Stefan Andres titled, precisely, “We are Utopia.”

There are times when it seems sense can be neither spoken nor heard, when violence reigns, issuing in muteness, plunging us into incommunicable solitude, as shown in Víctor Erice’s film about the aftermath of the Spanish Civil War, The Spirit of the Beehive, made in 1973, when remembrance was still censored in Spain. Destinies play out in the film within a collective, implicit wound that is unspeakable. The husband and wife under whose roof the drama is enacted never exchange a word short of calling out each other’s names, like freight trains whistling as they pass at dusk. The tensions that have divided a house against itself represent the opposite of what translation, however imperfectly, exemplifies. Rails have been dug up and used as bludgeons.

When such tensions are abroad in society, it is necessary to practice the art of translation assiduously lest conversation be demeaned as railroading. A throwaway comment about Schleiermacher’s Plato in Ortega’s text refers to translation as transmigration. That theme is picked up in an essay, altogether more hopeful, from 2012 by Mireille Gansel, who has rendered the German poetry of Nelly Sachs and Paul Celan into French. Gansel thinks of translation as transhumance. This pastoral term, drawn from humus, Latin for “fruitful soil” (which yields “humility” as well), refers to the seasonal droving of animals practiced in mountainous regions. Whether we spontaneously see before our mind’s eye the image of a Swiss shepherd with his sheep and barking dogs winding their way through a ravine, or that of a snow-scooting Sámi driving reindeer across the Scandinavian tundra, the scene vibrates with a liveliness and energy that stand in sharp, freeing contrast to the image of inflexible railway lines we have considered in Ortega y Gasset.

Transhumance, an Intangible Cultural Heritage of Humanity since 2023, is by definition constant drifting across borders. If you are caught up in transhumance you do not stay behind a wall trying to work, or shut, out what people say and do on the other side; you move back and forth seeking sustenance in changing landscapes for yourself and your flock. Gansel develops the metaphor of transhumance in two ways. On the one hand, languages are pastures. Shepherded from one to the other is significant content—of a poem, novel, treatise, or confessional statement seeking new form. On the other hand, language itself can be thought of as a flock led from winter to summer, grazing, with the translator as its shepherd. Language, in this account, is a nomad reconciled to the transient nature of any “home.”

Gansel’s hypothesis is at once theoretical and deeply felt. She cites an early memory of her father sitting somewhere in France, in rapt silence, reading letters from his sisters in Budapest, promising: “This evening I’ll translate.” Probing revealed treasures waiting to be brought across language boundaries. Gansel remembers her puzzlement as her father, in messages to her from her aunts, kept pausing before a given phrase he would invariably render as “my dear.” “Is it the same word again?” she asked. Her father answered, “Well, it means the same,” but then began to sound subtly different strings of sweetness, drawing out endearments such as “my darling,” “my golden one,” “my little-one-made-of-sugar.” In a flash, says Mireille Gansel, “the diagrammatic clarity of French was ablaze with this rainbow of sensations, each one enriched by a possessive [term] that embraced me tenderly.” Such awakening to the pregnancy of words was one factor that made her resolve to be a translator. Another was the need, simply, to get to know what was left of her family, decimated by the anti-Semitic fury of the Third Reich. “If you wish to talk with them,” her father said, “you have only to learn German,” the lingua franca of the Middle European world in which the family’s roots were buried. German had been the language of intimacy between Gansel’s Slovak grandmother and Hungarian grandfather. Their son, her father, would not speak it, hearing in it ever the echoes of marching boots and curses.

The pasture into which Gansel moved by appropriating German was not one of restful waters. It was a ravaged land, the “world of yesterday,” which had thrived in now burned-out cities like Odessa and Czernowitz. She learnt to pick and gather—in Old German lësan, a verb that now means “to read”—elements of beauty and horror. She got to know a soft, sonorous German enriched by the melody of confluent vernaculars: Hungarian, Czech, or Yiddish. German had been a roving, cosmopolitan language, used by Kafka in Prague, Roth in Paris, Canetti in London. After the war this medium of global culture was corralled within national boundaries. Suppressing other tongues, replacing their speakers’ names with numbers, it had diminished itself from a vehicle of transhumance into the soulless code of what the philologist Victor Klemperer called LTI, lingua tertii imperii. Gansel understood why native German-speakers like Imre Kertész or Aharon Appelfeld would not use German for creative writing after 1945, forswearing it as a form of Babelish. She saw no less why Sachs and Celan resolved to make German speak the Shoah’s awfulness in poetry soaring toward the ineffable, which Gansel decided to shepherd into French. Confronted with the unheard-of, languages have to burst boundaries. To translate is not to domesticate the unknown in known terms. It is to renounce the certainty that I am the knowing insider while everyone else is outside. When one does this, something new can be said. Gansel records her realization “that the stranger is not the other, it is I—I who have everything to learn, to understand from him.” “That,” she adds, “was no doubt my most essential lesson in translation.”

Such a statement may make us think of Martin Buber’s I and Thou, high on the list of the world’s most-cited-and-least-read texts. Published in 1923, the year the anti-Semitic rag Der Stürmer was launched, and turned into English in 1937, when Ortega published his essay on translation, the book is famously a concentrate of Buber’s dialogical thinking. Less well known is the fact that this philosophy was bound up with a labor of translation. Buber had been planning a version of the Hebrew Bible since before World War I. The project began in 1925 in alliance with Franz Rosenzweig, already deprived of speech by the lateral sclerosis that would carry him off in late 1929.

In engagement with Holy Writ, general problems of translation are brought to a head. The chasm between eternity and time is so vast that any attempt to bridge it seems hopeless. Men can try only because God has equipped them with words: the unchangeable ones of revelation, the moldable ones of the land. Rosenzweig and Buber were that for which LTI had no term: Jewish Germanists. Consummate stylists both, they revered German’s architecture, but would suffer no violence done to the Hebrew. They upheld the integrity of the Masoretic Text, understood as an immense, symphonic, sometimes playful commentary on itself. To render this scriptural wholeness, a handful of essential rules of translation were laid down. Since the Bible had speech as its origin, this new version must be orally attuned in phrasing, euphony, and rhythm. As well as the message, there was the music to convey. Hebrew effects interreferentiality by means of similar sounds, so that a distinct motif in one place recalls, as in a Wagner opera, the same motif elsewhere, inciting the reader to connect the two; therefore these features, alliteration for example, must be maintained. The etymological wealth of each Hebrew root must be given attention to keep its substance intact; by extension, a set German root must correspond to each Hebrew root. The word for a burnt-offering, עֹלָה, from עָלָה, “to raise,” would be rendered, for example, as Darhöhung, “lifting-up.”

Would translation on these terms jolt the reader in the way Ortega’s linguist foresaw? No. Neologisms were required. The language of Goethe was stretched. Yet Rosenzweig could observe with relish: “It is remarkably German! Luther by comparison seems almost Yiddish.” The two men complemented each other, growing together in understanding of the text through joint endeavor. Sometimes their “exchange of letters went back and forth for weeks over a single word.” When Rosenzweig died, Buber felt bereft but carried on. He had interiorized his friend’s genius to such an extent that it remained for him a guiding light. The translation was substantially finished before the outbreak of World War II, with one exception: the Book of Job. Buber wrote: “I was simply unable to translate it.” After the war he returned to the task: “Then I was able.” That elliptic phrase is rich in pathos. It tells us that translation is more than a strictly intellectual process; it is existential. Sometimes, to perfect it, we must be changed by pain, compassion, or delight in a dilation of the heart that in turn permits the broadening of our mind and our words’ late blossoming.

A friend of Buber’s, the Catholic exegete Fridolin Stier, set about translating the New Testament by applying the principles of Buber and Rosenzweig. He enriched German literature as they had done. Stier’s paradigm for translation was hospitality. A host, he wrote, must receive guests with grace, as they are. It would be a violation “to pull a turtle-neck jumper over the head of one appearing in Oriental garb.” This snide reference to the mandatory outfit of modish academics (the remark was made in 1970) is a timely reminder that Scripture cannot be forced to wear any uniform or national dress: no ushanka, no baseball cap. Stier’s insight has, though, a wider remit. For someone else’s sense to pierce me, something in me must break:

Whoever translates must dislocate. Sworn to fidelity, he must break it. Languages will not, cannot work otherwise. Each speaks of, sees and experiences, senses and judges, perceives and relates earth and heaven, humanity and stuff variously—languages, like painters painting one mountain, differentiate the same.

The wonder is that they can, in this way, actually let us see the mountain differently.

Variety of perspective belongs to those who are pilgrims on the earth, having here no abiding city. In the Vulgate, Paul exhorts the Philippians: “nostra autem conversatio in caelis est.” Heaven will be conversation conducted in a language we have yet to learn, yet our present search for worthy words prepares us. We must stick to that search, not surrender it to automation. The Logos’s antitype is, in the Bible’s final book, a digital cipher. The Word became flesh to translate the Father’s love—“to exegete him,” says the Fourth Evangelist. Our flesh is in turn illumined by words. According to the RSV, Paul told the Ephesians that we human beings are “God’s workmanship.” What Paul wrote in Greek was: “We are God’s ποίημα.” A ποίημα, true, is anything made: The noun is derived from the verb “to do.” But it makes at least as much sense to translate the noun literally, as “poem.” To postulate that we are poetic creations is to own that the Logos seeks to speak itself in us beautifully. Our life’s task is to find the appropriate words, punctuation, and pauses. That done, we shall be translated as Bottom was in Shakespeare’s A Midsummer Night’s Dream. When the amiable weaver is half-turned into an ass by Puck, Quince exclaims: “Bless thee, Bottom! Thou art translated!” Mirth presages mystery: Our present nature is provisional. Our own life’s poem will be perfected only in translation when “we shall all be changed, in a moment”—when the last trumpet sounds and the dead will be raised imperishable.

Even in this life, translations may excel original designs. I think of the versions of St. John of the Cross done by Roy Campbell, a South African poet who lived in Spain in the 1930s. He picked up the work to process his grief at the killing of his confessor, a Toledo Carmelite shot in the Civil War. The result was, he reckoned, “a miracle,” alight with a brilliance few of his own poems reached. Translation can call forth our richest, most various words. At the last, translation will manifest our true selves configured to God’s Word, apt to resonate as praise forever. We can thrive hospitably on our earthly pilgrimage if we are mindful that the borders that order existence now are not final. Learning to listen and to speak across them, we may yet pursue and envisage a boundless fellowship imbued with humanity, a word that used to signify “intelligent kindness.” God knows we could do with more of that.

The post In Praise of Translation appeared first on First Things.