Aug/Sept 2025 Archives - First Things
Published by The Institute of Religion and Public Life, First Things is an educational institute aiming to advance a religiously informed public philosophy.

The Substance of Our Lives
https://firstthings.com/the-substance-of-our-lives/
Fri, 01 Aug 2025
While I was in college, the local priest got me to come along with him on his nursing home rounds and play old hymns on the broken piano in the community room. I would bang out “Great Is Thy Faithfulness” and “The Old Rugged Cross” and try to sing at the same time. The residents nodded or dozed; some hummed along. Later, when I became a parish priest, communion services at similar way stations for the aged found me jumping from prayer book to keyboard, playing “I Need Thee Every Hour.” But did I need them?

I always had a sense of having slipped into this musical role ignominiously, rather than seizing it with eagerness. My parents had made great emotional and financial investments in my musical training, but at university, with much angst, I chose another direction. It is easy to rue these decisions. I have friends now—we are all at retirement age—who look back with regret at what they chose to do, even though they have pursued remarkably successful careers, becoming doctors, academics, and leaders of organizations.

I sometimes assuage my own sense of guilt and disappointment—out-of-tune hymns for the elderly; how the musically mighty have fallen!—by the morally indulgent rationale that I was doing something “good” with my passable renderings of “How Great Thou Art” and “Make Me a Blessing.” Here, at least, I had chosen a far more righteous path than the commercially self-aggrandizing road of the professional musician that I had left behind. Or so I tell myself.

But is my thinking correct? The elderly and bereft, the poor and disowned, are no different from others in their value. “Rich and poor are both alike,” together “lighter than vanity” (Ps. 49:2; 62:9). The Church’s “preferential option for the poor” (the great phrase from the 1968 Medellín Conference) may be a proper imperative, but not on the basis of intrinsically different worth among classes. What counts is to offer the depth of God’s love to those with whom one lives. The poor are preferred only because they are “there”—or “here”—but are ignored. Put frankly, those who are not poor prefer to avoid living with those beside them. The “preferential option” works against this all-too-human reality. It’s an ascetic calling for the rediscovery of neighbors. Neighbor means, literally, “the dweller nearby.” The preferential option means: Open your eyes and share. We give what we have received to whomever we meet, wherever we find ourselves. That is, in part, the nature of what it means to be a creature of God: riding with the current of God’s outgoing love for what he has made.

The social philosopher Ivan Illich was right when he insisted that the Good Samaritan is about encounter, not social (or even ecclesial) policy. Neighbors, according to Jesus’s telling, are those we meet, in flesh and blood, beside whom God has placed us in whatever odd circumstances, unplanned meanderings, or simple communal proximities. There they are—and here are we: Give of yourself, Jesus says. The grand visions we have unfurled for ourselves—careers, vocations, investments, plans—are at best window dressing to these real rendezvous of destiny, which end up forming the actual soil of God’s field, the foundation of his building (1 Cor. 3:9). Neighbors: the substance of our lives.

The great English composer Edward Elgar, I recently learned, spent five years at the start of his career (1879–1884) playing, conducting, and composing at the Powick lunatic asylum near his home in Worcester. The head of the institution, James Sherlock, believed that music was good therapy for the patients, and he had the asylum staff learn instruments and play for the inmates once a week, accompanying dances. Elgar was hired to write and lead these weekly events. Some of these compositions, from his early twenties, were only recently discovered and recorded.

Elgar was pursuing his own professional hopes, of course. He was of modest upbringing, the son of a musical tradesman who, among other things, tuned pianos. (The two played the violin at Powick, before the young Elgar was hired as leader of the band; an odd familial bond, perhaps, as father and son joined to entertain the mad residents.) The young Elgar worked hard to make his way as a recognized musician and composer. It was not until his forties that he gained any real—and in his case spectacularly sudden—public recognition. For years he cobbled together mostly local gigs, playing violin, composing, conducting. His years at Powick (and then at an institution for the blind) were an early part of this long apprenticeship to fame. His was a hard road known to most musicians today, though mostly without Elgar’s successful endpoint. Critics speak of the wonderful education Elgar received from his composing at Powick: adjusting to the unusual instrumentations and limited capacities of amateur players, and learning basic forms and popular styles that would later color his mostly self-taught blossoming as a renowned composer. Musical historians write as if Powick’s lost souls were most interesting as an occasion for a genius learning counterpoint.

It is unclear what Elgar thought of his years of asylum music-making. In fact, the asylum staff for whom he wrote and with whom he played were his neighbors (as were some of the patients). He would dedicate his quadrilles and polkas to individuals—clerks, nurses whom he both respected and, in the way of the local region, loved. Later, an elderly and revered Elgar would surprise unknown visitors with the introductory comment: “When I was at the lunatic asylum . . .” Proud of his origins? Of his hard work from modest beginnings? A bit. But perhaps his arresting remark mostly reflected the reality of those who had been his neighbors. You have tools—as well as dreams—and you share them with those next door. The greatest beauty unfolds and blossoms in those encounters.

Many of us have heard, even if we cannot identify, Elgar’s early violin piece, the gentle and wistful “Salut d’Amour.” The melody was used as a closing song for a celebrated documentary on the Norwegian artist Edvard Munch, himself plagued by a family history of insanity. One sister was placed in an asylum, and many who knew Munch and his own tortured emotional life dubbed him a “madman.” In 1908, Munch entered a residential psychiatric clinic and emerged the next year a calmer, if not happier, soul. The 1974 film Edvard Munch, directed by Peter Watkins, is considered pioneering in its form and production. Watkins used non-professional actors to dramatize, often improvisationally, documentary accounts, emphasizing in overlapping and often claustrophobic images and sounds the obsessive immediacies of Munch’s psyche and relations.

I went to see the film with my father in 1976, not long after the tragic death of my mother, who had long been in the grip of mental illness. The asylum had, as it were, embraced our home. Or was it the other way around? We heard Elgar’s piece at the end of the film, background to the closing summary of Munch’s nervous breakdown and the playing out of his family’s troubled journey. Elgar’s song embodies the late-nineteenth-century middle-class drawing room culture that the young Munch had moved within and from whose “bourgeois” grip he attempted to escape, perhaps without success. As the beautiful, hymn-like music played over the credits, my father and I both wept silently. Sorrow lodged in the center of our home. But the depth of that sorrow arose only because love had dwelt there first, close by, its preferred place of flourishing.

Elgar’s tune is ripe and sentimental in a classic late-Victorian way, if light in touch. It was a seemingly odd accompaniment to Munch’s story of loss, regret, and extended suffering. In fact, though, Elgar had written and presented it to his wife, Caroline Alice Roberts, as an engagement present. (She in turn had offered Edward a poem, “The Wind at Dawn,” that he later set to music.) The song is about love, and the encounter in which love, at least as we understand its human form, can only arise. “Salut d’Amour”—hello! I greet you. I am in this place, and you are, too.

I don’t know why Watkins chose to use Elgar’s piece to end his film. But what I heard those many decades ago was the greeting of love still whispered in the encounters of the mad. The Powick lunatic asylum was a place of greeting; so, too, the nursing homes down the street with their faltering hymns and the small chapels with their broken singing; so, too, the street itself with its people, whether shuffling, stooping, or striding. So too the homes along the street, however fraught with their tensions, furies, and lassitudes, the households of our first and last encounters. We change neighborhoods, move to new homes, over time. But we cannot cease, wherever we find ourselves, to stand beside someone and make music for and with them. We do need our neighbors, and they are always close by.

Engineers for the Gospel
https://firstthings.com/engineers-for-the-gospel/
Thu, 31 Jul 2025
About twenty years ago, a lecturer in philosophy stopped by my office in the Engineering Faculty. He was preparing to teach a class to a group of engineers, and he wanted my advice. The philosopher outlined his teaching plan, which began with the concept that “facts” about nature represent culturally conditioned mental constructs. I interjected, “Your students will dismiss that right off the bat. They are engineers. They believe that building a bridge requires real concrete and steel. They take for granted that, if you design it badly and it falls down, real people get hurt. And real lawyers sue you to take real money out of your bank account.” Disconcerted, he jumped to a different topic. As he left, I smiled to myself, “Good thing I’ll never have to deal with tenuous stuff like that in my technical work.”

I was wrong. Preferred pronouns are springing up in the email signature lines of many of my colleagues and students, a development that seriously undermines engineering’s grounding in tangible reality. Transgender ideology claims that gender identity is a construct, with no foundation in the physical world. An individual’s gender cannot be known, even in principle, until that individual defines and discloses it. To the degree that its characteristics differ from those of the individual’s natural body, the physical world participates in a matrix of oppression. This ideology militates against the intellectual foundations of my discipline. It also shows that engineering and Christianity may rise and fall together.

Much public controversy has swirled around the question of whether science and engineering are compatible with religion. A great deal of Christian evangelization answers “yes,” but this answer misses two important points. First, the question itself is becoming outmoded in the face of transgender ideology and other new beliefs. Richard Dawkins, who has spent decades disparaging Christian doctrines, now regularly finds himself lumped with Christians in the catch-all category “transphobes,” deemed worthy of censure and cancellation. Second, simply asserting compatibility undersells the Gospel message. Jesus came so that people might have life to the full (John 10:10). Immersion in authentic Christian living should enable believers to practice better science and engineering than a secular approach can sustain.

The Beatitudes point the way. For example, a policymaking scientist who is poor in spirit will dread the temptation of self-righteous, for-your-own-good paternalism, which appeared so prominently during the COVID era. An academic scientist who properly mourns will discard a favored theory to search for a better alternative when contradictory evidence arises, instead of ignoring or disparaging the evidence. A design engineer who is meek will cultivate the humility to seek assistance with a knotty project from colleagues who possess complementary expertise, instead of forging ahead alone with an inferior plan. Such conduct is possible for those of a secular persuasion as well. For the Christian, however, the Beatitudes offer an unmatchable rationale, teleology, and roadmap for the cultivation of such practices, especially in the face of difficulty and opposition.

The engineer who lives a full life formed by the Beatitudes will produce technical explanations with greater originality, data with more trustworthiness, designs with more practicality and beauty, and greater benefits to the common good than the engineer who does not. The many-faceted goals of engineering—solving problems, increasing efficiency and productivity, improving ease and comfort, reducing risk—all ultimately aim to serve one’s neighbor. Christianity specializes in that objective.

This might seem an unrealistic hope: Doesn’t secularization tend to accompany technological progress? Our age has become increasingly irreligious while also witnessing a vast social transformation by means of materials, medicines, devices, and measurement tools that were inconceivable a lifetime ago. The world’s oldest living person was born during the first decade of the twentieth century, a decade that saw the invention of airplanes, radios, air conditioners, and vacuum cleaners. American life expectancy at birth hovered near fifty years. Pollution was on hardly anyone’s radar. Today, hypersonic drones reach speeds near Mach 7. Computers perform two quintillion operations per second. Scanning probe microscopes manipulate individual atoms on surfaces. Antibiotics and other medical advances have boosted life expectancy to nearly eighty years. Advanced spacecraft divert dangerous asteroids 7 million miles away, and chlorofluorocarbon production has been curtailed to protect the ozone layer.

Yet despite these extraordinary achievements, engineers must confront hard questions about their social role. In the first place, engineering can be held partly responsible for weapons of mass destruction, the surveillance state, and environmental degradation. More immediately, there is a crisis of trust. Technical leaders have lost considerable public respect and in some quarters are openly reviled as never before. Why? Top-tier technical journals have formally endorsed political candidates, and they helped to suppress legitimate hypotheses about the origin of the COVID virus. The UK’s Scientific Advisory Group for Emergencies (SAGE) misleadingly advanced worst-case pandemic scenarios to the government as normative. Ceaseless admonitions by policymakers in the U.S. to “follow the science” implied that technical evidence leads to policy decisions in straight lines. Few technical experts spoke up to challenge this absurd canard.

Another emergent crisis, as I have hinted, is philosophical. Engineers who add preferred pronouns to their signature lines lend support, often unwittingly, to an ideology that scorns bedrock assumptions of the technological enterprise. Engineering practice acknowledges that culture and particular experiences validly shape technical goals and approaches. However, engineering also assumes that nature possesses an objective reality that is intelligible through methods of observation, principles of logic, and instinctive ways of learning shared in common by all humanity. This reality must be discerned, not defined according to personal choice. Moreover, apprehension of this reality is available in principle to all people; no aspects remain ontologically restricted to particular individuals or groups. To the extent that such assumptions are abandoned, the scientific foundation of engineering evanesces.

Accordingly, the weakening of these assumptions has led to successful competition from other worldviews. In New Zealand, for example, indigenous Maori beliefs about nature have been slated to be taught in school classrooms with equal standing alongside “Western science.” In recent decades, leading thinkers in the West have embraced various forms of de facto gnosticism driven by emotivism and based on racial, gender, and environmental concerns. The underlying anthropology views the self through a psychological lens as self-defining, with constraints of social convention and the natural world acting mainly to oppress.

One shudders to think where these currents may lead. Fifteen years from now, it’s conceivable that a student may say the following to a professor of chemistry or physics: “Newton’s laws of motion? The guy who cooked them up ran England’s Royal Mint, which financed a huge colonial empire that enslaved millions. Worse, he invented calculus, which oppresses underprivileged students even now because it’s so hard to learn. Calculus should be purged from courses, or at least identified as a tool of oppression every time it’s used.”

We must hope that this scenario proves overwrought. Its very conceivability refutes the notion that engineering and science operate serenely in some objective sanctuary free of anthropological squabbles. The processes of design and discovery unfold according to metaphysical assumptions (often implicit), as well as social, financial, and career incentives. These factors influence which questions are asked, the plans for addressing them, and how the outcomes are interpreted and assessed. Current trends do not conduce to sustaining the technological progress humankind has enjoyed in recent decades.

For example, technical education in the U.S. is deteriorating, especially at earlier educational levels. Statistics compiled by the National Science Board show that mathematics achievement by fourth and eighth graders exhibited no measurable improvement between 2007 and 2019. More recently, the National Center for Education Statistics reported a dramatic decrease in mathematical aptitude for thirteen-year-olds between 2020 and 2023. This decline, combined with falling birthrates, will diminish the technically competent workforce. A Defense Department report in 2021 warned that the U.S. is “fast approaching another Sputnik moment” for this reason. Much human suffering will ensue.

Meanwhile, engineering has a communication problem. Despite the omnipresence of engineering—chemical, mechanical, electrical, biomedical, agricultural, civil, petroleum, nuclear, aeronautical, materials, computer, environmental, and industrial—much of the public has only a hazy idea of what engineers do. Engineering differs from “technology,” which overlaps with it but also includes practices based entirely on crafts, artisanship, and empirical know-how. Modern engineering relies heavily on formal analytical and mathematical methods, and it rests on a foundation of science. And yet it must be distinguished from science. The physicist and aerospace engineer Theodore von Kármán, who developed swept-wing aircraft, suggested that “Scientists study the world as it is; engineers create the world that has never been.” The maxim is helpful, although of course it is too neat a division. One might say, more precisely, that engineering differs from science (including applied science) by habitually incorporating three distinguishing modes of thought: constraints, systems, and design. Even in these modes, the communication problem emerges again. “Systems” is a fuzzy concept, and the public often associates “design” with art.

If engineering is little understood, and if it indeed faces a set of crises, what difference might Christianity make? At least four answers come to mind. First, Christianity insists that nature is fundamentally good, not oppressive. And yet it does not view the Earth as a god-like being, which humans must not affect or alter. Second, Christianity provides engineering with a rightly ordered rationale: To serve others. Benedict XVI and many others have pointed out that technological advances may be judged good or bad according to how they express love of neighbor. Third, the incarnational teleology of Christianity provides engineering with a supremely noble goal: to build for a new creation, which will become fully realized at the end of time. Engineering understood in this way becomes an enterprise for transforming and redeeming nature. Fourth, engineering practice can offer a distinctive window into the spiritual realm for the purpose of prayerful redemption of the individual.

If engineering would benefit from taking Christianity more seriously, it should be obvious to Christians that any evangelization of the West must take engineering seriously as well. High technology pervades our civilization, and many an issue of our age has a technological component: Global environmental changes, for instance, involve large-scale technology manufacture and use, and require large-scale mitigations, which inevitably entail engineering. This century will witness the increasing importance of advanced semiconductor manufacturing, quantum computing, artificial intelligence, the commercialization of space, the ubiquity of security cameras, biometric identification, the manufacture of vaccines, and more.

Evangelization has responded to its social context ever since St. Paul’s earliest preaching. After the Roman empire legalized Christianity, so that martyrdom diminished, the Church needed a new model for all-in commitment to the gospel. Accordingly, the desert fathers arose to show that martyrdom could be spiritual rather than bodily. The Benedictine order adapted and institutionalized that insight in Western monasticism, which pursued a communal form of prayer and work. When urban life in the West revived during the thirteenth century, the Franciscans and Dominicans arose to meet evolving spiritual needs through preaching and evangelical poverty. As the West grew increasingly secular and sophisticated in the sixteenth century, the Jesuits transmitted the gospel message through education and missionary work. At present, much of the West has rejected its spiritual heritage in favor of aggressive materialism or de facto gnosticism. The culture is technocratic, which means that Christian evangelization must develop the habitual capacity to proclaim in a technological idiom.

It is striking, then, that whereas the relationship between Christianity and science has occupied many fine minds since at least Albert the Great, far less thought has gone into Christianity’s relationship with engineering. This is partly due to the relative novelty of engineering: Certain examples of modern engineering date back quite far, but most engineering has appeared only in the past two centuries. Green shoots are appearing—Pope Francis’s Laudato Si’ is the first papal encyclical that refers to engineers explicitly—but much remains to be done. Where, then, should a theology of engineering begin?

There is, admittedly, a respectable theological case that engineering arose only as a result of original sin. Gregory Nazianzen and Maximus the Confessor believed that Adam and Eve were created atechnos, that is, with no need to craft anything because they lived in perfect harmony with God and nature. The fashioning of loincloths after the Fall signaled the first stirrings of engineering; Cain, the first murderer, went a step further by founding cities.

But Maximus himself sounds a more upbeat note by pointing out that technē—which encompasses engineering—flows from God’s gift of the human intellect. When guided by acquired and infused virtues, engineering not only protects and comforts humans living in nature-turned-hostile (Gen. 3:16-19), but also transforms creation in unprecedented ways to mirror truth, goodness, and beauty as God always intended. Moreover, Scripture brims with references to God as purposeful designer: Proverbs 16:4 and Psalm 139:13-14 are obvious examples. And as any engineer would expect, creation is designed to spawn systems: planetary systems, weather systems, nervous systems, circulatory systems, and many more. Even the constraints (“weaknesses” in 2 Cor. 12:9) imposed by nature can serve to perfect the use of human power for creation’s stewardship.

In Science and Creation, Stanley Jaki advances a compelling historical argument that Christianized civilization in the West was uniquely able to incubate modern science. Perhaps less recognized is that modern engineering co-incubated as a fraternal twin. Science and engineering both required widespread faith in a transcendent creator god who is personal and rational. That faith led pivotal thinkers to trust in the comprehensibility of nature, believe in historical progress, and grasp the significance of quantitative methods. However, this thinking did not exhaust the Incarnation’s implications. The Church Fathers recognized that the crucifixion, resurrection, and ascension link the spiritual and physical realms in a profoundly new way. An essential consequence is that redeemed people should seek to build for God’s kingdom on Earth in anticipation of the end of time, when the fullness of transformed creation will be revealed. Engineers thrill to building things, whether bridges, reactor systems, integrated circuits, or software systems. To a faith-filled engineer, it is an exceptionally appealing notion that designing and fabricating systems, artfully optimized in the face of constraints, helps to build, literally, for a new creation.

For Maximus, the tree of knowledge of good and evil (Gen. 2:9, 17) represents the universe, which offers knowledge of good when contemplated through a spiritual lens and knowledge of evil when viewed through an earthly lens. God’s glory shines transparently only through the spiritual lens, which became badly clouded after Adam’s fall. But transparency improves as a result of acquired and infused virtues.

Throughout the patristic age, between Origen and Augustine, nature was conceived metaphorically as a book whose proper reading revealed spiritual truths in a manner analogous to Scripture. Subsequent developments traced a complicated path and added layers of meaning to the metaphor. As the patristic age closed, Maximus expounded the metaphor at length in his Ambigua. In an exegesis of the Transfiguration, Maximus writes that in the Book of Nature, “The Word . . . is rendered legible when He is read by us. For through the reverent combination of multiple impressions gathered from nature, He leads us to a unitary idea of the truth.” More boldly, Maximus claims, “Whoever wishes blamelessly to walk the straight road to God stands in need of both the inherent spiritual knowledge of Scripture and the natural contemplation of beings according to the spirit.” Furthermore, “the two laws—the natural and the written—are of equal value and dignity. Both of them reciprocally teach the same things, and neither is superior or inferior to the other.”

Medieval writers adopted different theological foci and developed the Book-of-Nature theme accordingly. Hugh of St. Victor believed that, before the Fall, the Book of Nature sufficed for disclosing divine wisdom. The Fall obscured our vision. To restore the loss, Bonaventure described a three-stage progression by which people may gradually ascend and better perceive their maker in the Book—beginning with “vestige,” moving through “image,” and culminating in “likeness” for those most conformed to God. However, both Bonaventure and Thomas Aquinas gave priority to the Book of Scripture for revealing divine revelation fully and without error.

The metaphor continued to evolve thereafter but lost its luster due to the slow rise of interpretations suggesting that nature could supplant Scripture as a vehicle of divine revelation. Yet in recent years, the metaphor has regained prominence in orthodox Christian thought. Encyclical letters by the last three popes mention the Book of Nature. Several writings and discourses of Benedict XVI ponder the theme, giving it perhaps the most extended and significant theological treatment of the past century. In a characteristically integrative approach, Benedict’s thought links the Book of Nature to contemplation, faith, natural law, and liturgy—centering all of them on Christ.

Benedict pointed out in a 2006 address that the Book employs mathematical language, and that the correspondence between the mathematical structures deduced by human intelligence and the objective structures in creation suggests a single original intelligence. He portrayed engineering (termed “technology”) as the systematic application of mathematical instruments to harness nature for our service. His 2009 encyclical Caritas in Veritate admonished against using this service for selfish purposes and enjoined using it for greater freedom to worship and contemplate the Creator. Thus, “in technology . . . man recognizes himself and forges his own humanity.” Clearly Benedict’s thought perceives both intrinsic and instrumental purposes for engineering.

The Book of Nature metaphor may be adapted and elaborated fruitfully for a technological era. Engineers and scientists who encounter spiritual realities by reading the Book of Nature may help us to understand how the supernatural can be represented in the natural—as it is in icons—and how natural and supernatural meet in the Eucharist. Conversely, traditional theology provides important safeguards against interpreting the Book of Nature with unwarranted tendencies toward deism or pantheism.

Engineering is also a useful analogy for the Church, the Body of Christ, in all its variety and complexity. The days of individual geniuses doing solitary technical work—Galileo, Einstein—are long gone. “Big science,” as in the Human Genome Project or experimental high-energy physics, requires enormous organizations comprising thousands of people. Discovery-oriented technical work in academic laboratories involves teams of graduate students, postdoctoral associates, faculty, and others. In government and corporate laboratories and in production facilities, teams of mixed disciplinary background are often the norm. Reading the Book of Nature with technical proficiency is often a collective enterprise.

Evangelization also, of course, requires prayer and action. Maximus and other patristic writers insist that proper reading of the Book requires extensive discipline to cultivate virtues that order the soul and conform it to God. The conformed soul transforms the senses so that they may perceive the spiritual through the natural, as happened with the burning bush of Moses and the Transfiguration. Engineers are well positioned to understand all this, since their work requires the discipline of extensive study and practice over many years.

What action might spring from this sort of prayer? Not necessarily the foundation of new organizations. Christian societies, such as Engineering Ministries International and the Christian Engineering Society, already exist, though they deserve to be more widely known among the nontechnical faithful. Meanwhile, the existing array of Christian missionary and educational organizations is vast and deep. If some of those organizations add technological components that accord with their existing charisms, the cause of evangelization will advance considerably.

As C. P. Snow argued in The Two Cultures, “science” (tacitly including engineering) and the humanities have diverged into separate domains with poor intercommunication. Snow advanced this thesis in 1959, and the controversy it generated continues today. However that debate resolves, most engineers don’t regularly use words like “epistemology,” “logoi,” or “eschatology”; their understanding of philosophy or theology is usually rudimentary. The technical proficiency of humanists is likewise typically weak. We need to think about how to bridge this gap.

As I argued above, over time the secular West will increasingly struggle to sustain its technologically based well-being, partly due to ideological factors and partly due to broader social and cultural dysfunctions. The opportunities for evangelization by engineers will be numerous and far-reaching in education at all levels—elementary, secondary, undergraduate, and postgraduate. Education has long been a major venue for evangelization, but giving it a specifically technological orientation will help to bring the incarnational dimensions of Christianity into high relief.

The historian Richard Rex has proposed in these pages that the City of God has undergone three great crises. The first crisis involved the theological question “What is God?” and emerged with the Councils of Nicaea, Constantinople, and Chalcedon. The second crisis involved the ecclesiological question “What is the Church?” and occurred in conjunction with the Council of Trent. We live in the midst of the third crisis, which involves the anthropological question “What is man?” The stakes are high.

Yet there is hope for the future. Many engineers seem congenitally unable to relinquish their grounding in objective reality concerning both material objects and human anthropology. For example, when answering a recent academic survey to gauge attitudes toward transgender ideology among engineering undergraduates, one student wrote, “There are two genders, male and female. If an engineer creates a bolt and a nut but then whimsically labels them, then he’s not that great of an engineer.” In this environment, the coupled incarnational and metaphysical dimensions of Christianity become assets, rather than the liabilities they sometimes were thought to be when atheistic materialism was the challenge. Christianity not only takes the created world very seriously, but also possesses an exquisitely developed metaphysics, which overmatches the inchoate, weakly articulated metaphysics of transgenderism. Moreover, most engineers resonate better with logical argumentation than with emotive diatribes.

The theology of Vatican II may be helpful here. After all, few clergy are engineers; the evangelization of our technological age must be an apostolate mainly of the laity. Fleshing out the theological and practical implications of the Book of Nature will require a ressourcement that draws on the insights of Augustine, Maximus, Bonaventure, and many others. Expressing the fundamental affirmations of Christian faith in an accurate yet technologically savvy vocabulary will be a challenging exercise in aggiornamento.

The evangelization task sketched here may seem daunting, given the social realities of our time. It is always worth remembering John Paul II’s exhortation: “Be not afraid!” The Resurrection impels Christians to build for a new creation, which undoubtedly includes specific devices, technologies, and software that never existed in nature. Yet the engineering process of discovering, designing, and manufacturing with the eyes of faith transforms people. Thus God works through people to redeem his creation, and his faithful people are redeemed in the process. Evil throws up constraints. But in the elegance of this design for our salvation, one can readily discern the hand of the supreme systems engineer.

The post Engineers for the Gospel appeared first on First Things.

War on the Weak and a Bad Bishop https://firstthings.com/war-on-the-weak-and-a-bad-bishop/ Wed, 30 Jul 2025 05:00:00 +0000 https://firstthings.com/?p=93213

A case study in brain-dead deconsolidation: marijuana legalization. Marijuana has been legal in New York State since 2021. In 2025, nearly half of the states in the Union have made weed legal and widely accessible. Ongoing efforts to normalize marijuana use represent a grotesque failure of political leadership. Our elites are determined to erode what remains of moral norms and abandon the weakest and most vulnerable to self-destructive behavior.

New York and other states have not just legalized marijuana. They spend money to promote its production and sale. As Steven Malanga reports in a recent Wall Street Journal opinion essay (“It’s High Times for the State-Subsidized Pot Businesses”), New York State has allocated $5 million “to train community-college students how to grow, market, and retail pot.” Maryland’s idea of reparations is to fund programs for marijuana entrepreneurs at its historically black colleges. “Illinois state colleges offer courses in ‘applied cannabis studies.’”

Pot promotion hits the bottom of society the hardest. Malanga notes: “A national Gallup poll last year found that those earning less than $24,000 annually were twice as likely as those with annual household income above $90,000 to smoke pot.” The young are swept into drug abuse. “The portion of the population 19 to 30 using pot in the past 30 days has increased to 28.7% from 16.6% since 2012,” the year in which Washington and Colorado became the first states to legalize marijuana.

Advocates claimed that legalization would put an end to drug dealing in the shadows. This has not happened. Malanga: “A 2022 study by Whitney Economics estimated that three quarters of pot consumed in the U.S. originated in the black market.” Legalization has removed the stigma, but it has not diminished the criminality.

Another promise was that black men, often black-market purveyors of pot, would no longer be subject to “over-incarceration,” which was said to do grave harm to poor black communities. Legalization has come at a great cost, however. Cannabis-use disorder is a debilitating condition not unlike alcoholism. “According to the 2023 National Survey on Drug Use and Health, more than a fifth of blacks 18 to 25 suffer from the disorder.” According to the New York Office of Addiction Services, black adults constitute 37 percent of admissions to marijuana treatment programs, although blacks represent 18 percent of the state’s population.

Over the last thirty-five years, the leadership class of the United States did nothing to stem the avalanche of internet pornography. They countenanced the expansion of gambling, which is now universally available online. Only with the inauguration of Donald Trump have any serious measures been taken to address the tidal wave of fentanyl.

There are deeper failures. For decades, our elites have waved the rainbow flag. University professors preach about the evils of the “patriarchal family.” The New Yorker publishes gee-whiz articles about polyamory. Meanwhile, homeless encampments metastasize in many cities. I’m confident that very few residents of these encampments are from intact homes, and many have horrifying stories to tell about domestic violence and abuse. Our elites have overseen the destruction of traditional norms, at great cost to those at the bottom of society.


The Catholic diocese of Charlotte erupted in late spring. On May 23, the local ordinary, Bishop Michael Martin, announced a policy, the purpose of which was to “complet[e] the implementation of Traditionis custodes,” the 2021 motu proprio by Pope Francis designed to stamp out the traditional Latin Mass. On Bishop Martin’s order, the old Mass was to be suppressed in all parishes, aside from a chapel to be established in Mooresville, North Carolina.

Like many other dioceses, Charlotte has seen growing interest in the Latin Mass, especially among young Catholics. In the wake of Traditionis custodes, several parishes in the diocese applied to Rome for permission to continue to offer the old Mass, and permission was granted, as it has been in other dioceses. In view of this history, Bishop Martin’s blanket order to shut down traditional communities of worship was not well received. An uproar ensued. Bishop Martin temporized. He has announced a ninety-day delay before implementing his plan of suppression.

Martin is new to Charlotte. He was installed a little over a year ago. But he apparently came with very definite ideas about what the good people of that diocese need. In his first year, he developed a document meant to “reform” liturgical practice. It included prohibiting the use of altar rails, which encourage the apparently dangerous spiritual temptation to kneel while receiving the Eucharist.

It’s not just altar rails and the dreaded prospect of pious young people kneeling. The Pillar reports that “Martin had originally contemplated a far broader swath of liturgical restrictions related to the ordinary form of the Mass, among them a prohibition on Roman style vestments, altar crucifixes and candles . . . , the use of the Latin language, and the recitation of vesting prayers [by priests].” Bishop Martin seems determined to drive out any hint of the sacred.

In the event, the presbyteral council of the diocese and chancery officials convinced Bishop Martin that conducting a jihad against the return to tradition, a trend throughout American Catholicism, was unwise. Though deterred in this instance, Bishop Martin is by all accounts not a man who lacks confidence in his spiritual intuitions. He was dissuaded from his comprehensive enforcement of liturgical banality, but one wonders for how long.

To anti-traditional zeal, this new bishop adds an unfortunate love of his own authority. In January of this year, an anonymous letter circulated in the diocese, outlining concerns including “arbitrary micromanagement” and an “autocratic approach.”

Bishop Martin also has a penchant for placing himself in the spotlight. My sources tell me that in diocesan communications and at public events, the message is: “Bishop Martin! Bishop Martin! Bishop Martin!” Sadly, it seems that Bishop Martin is a spiritual ideologue, the mirror image of the liturgical traditionalists who insinuate (and sometimes openly assert) that the new Mass is illegitimate and Vatican II is suspect. And like many ideologues, he’s willing to pull every lever to smash opposition. It’s a classic case of clerical malpractice.

The post War on the Weak and a Bad Bishop appeared first on First Things.

Waugh Against the Fogeys https://firstthings.com/waugh-against-the-fogeys/ Tue, 29 Jul 2025 05:00:00 +0000 https://firstthings.com/?p=92545

On June 17, 1953, the historian Hugh Trevor-Roper wrote to a friend: “I am now preparing a booklet which I hope (but perhaps it is too much to hope) may cause a paralytic stroke to my old enemy Evelyn Waugh.” The “booklet” in question was a historical study meant to make the Catholic Church look ridiculous. He eventually abandoned the project.

Trevor-Roper loathed Catholics in general but cultivated a special scorn for Waugh, with whom he carried on a feud that began in 1947, when Waugh attacked Trevor-Roper’s The Last Days of Hitler, and ended only with Waugh’s death in 1966. As late as 1986, Waugh was still on Trevor-Roper’s mind. Trevor-Roper told his protégé Alasdair Palmer:

I forgive him a great deal because of his genuine love of our language. His wild fantasy and black humour are aspects of his genius, as well as of his warped character.

Yet his overall assessment was far from favorable:

He was, I believe, utterly cold-hearted: all his emotions were concentrated (apart from his writing) upon his social snobisme and his Catholicism, which was a variant of it, or rather, perhaps the ideological force behind it. He was a true reactionary—not just a troglodyte . . . but a committed, believing, uncompromising, intellectually consistent reactionary like (say) [Joseph] de Maistre.

He picked a quarrel with me in 1947—wrote me, out of the blue, a very nasty letter, attacked me in The Tablet, and then in other papers. I bit back occasionally, and then he became, as it seemed to me, somewhat paranoid. I heard many stories of his wild, and often intoxicated, denunciations, and since his death his published (and unpublished) letters have given further evidence of his hatred of me. He evidently regarded me as a particularly poisonous serpent who had slid into the garden of Brideshead and was corrupting its innocent Catholic inhabitants; which perhaps, to a certain extent, I was—or, as I would prefer to say, was provoked into being. In the end I tried to make peace with him, but my civil letter received only a curt formal acknowledgement.

The “nasty letter” was not in fact “out of the blue”: Trevor-Roper admits that it was provoked by “an admittedly injudicious remark by me about Jesuits.” Perhaps he saw in retrospect how it might have been offensive to claim (in The Last Days of Hitler) that Joseph Goebbels learnt his skills as a propagandist as the “prize pupil of a Jesuit seminary,” especially given that Goebbels had not in fact been educated by the Jesuits. But such details were omitted; Trevor-Roper preferred to fixate on Waugh’s alleged vendetta:

since his death, I have seen letters from him which attacked me well before that publication, so I no longer know the original cause of his hostility. The general background to it was certainly ideological.

No evidence has so far been published to corroborate Trevor-Roper’s claim that Waugh was aware of him before the middle of 1947. But he was right to suggest to Palmer that there was an “ideological background” to all this. As Trevor-Roper fancifully portrayed the situation:

During the war, and throughout the 1950s, a group of very articulate, socially reactionary Roman Catholics— all, or nearly all, converts—pushed themselves forward and evidently thought that they could be the ideologues of the post-war generation. They established themselves, by patronage and infiltration, in certain institutions (the British Council, the Foreign Office) and they wanted to establish themselves in the universities.

Perhaps there really was a modest Catholic resurgence in England prior to the Second Vatican Council. But Trevor-Roper overstates it to the point of paranoia.

Waugh and Trevor-Roper mirrored each other in ways that may have helped sharpen their mutual spite. Both came from professional-class backgrounds and fell in love with Oxford, and there became enamored of the aristocracy; later, both married into the upper classes; they shared a taste for luxury, grandeur, and the high life; but the greatest romance of both their lives was with their old university. Waugh and Trevor-Roper were both famously witty, combative, and flamboyantly reactionary (albeit in crucially different ways). Savage in print, they could be kind and generous in private. At heart, each was as sensitive as a poet.

Nevertheless, Waugh and Trevor-Roper had incompatible visions of reality. Trevor-Roper declared late in life that, though he had some sympathy with religious attitudes, “I also dislike 90% of Christians and Christianity.” Waugh, of course, was received into the Catholic Church (on September 29, 1930). Fr. Martin D’Arcy, in his 1976 essay “The Religion of Evelyn Waugh,” notes how easy it was to instruct Waugh in the run-up to his conversion. He had already made up his mind about the truth of Catholic teaching and taken the trouble to inform himself in some detail about what he believed. On October 20, 1930, a month after he was received into the Church, Waugh published an article in the Daily Express, “Converted to Rome: Why It Has Happened to Me,” outlining the less spiritual elements in his conversion:

The loss of faith in Christianity and the consequent lack of confidence in moral and social standards have become embodied in the ideal of a materialistic, mechanised state, already existent in Russia and rapidly spreading south and west.

It is no longer possible, as it was in the time of Gibbon, to accept the benefits of civilisation and at the same time deny the supernatural basis on which it rests. As the issues become clearer, the polite sceptic and with him that purely fictitious figure, the happy hedonist, will disappear.

The fundamental conflict in the West, he concluded, was between Christianity and chaos, and he rejected chaos.

For Waugh, Edward Gibbon epitomized a kind of glibly smug atheism that attracted clever adolescents. He knew it from personal experience; luckily he had managed to outgrow it. Not everybody did. As a schoolboy, Trevor-Roper became infatuated with Gibbon’s grandly sonorous style and mischievous skepticism. The Decline and Fall of the Roman Empire remains monumental even now as a work of scholarship; throughout his life, Trevor-Roper dreamed of equalling it in literary as well as historical terms.

Trevor-Roper’s unpublished writings of the 1930s record his inner turmoil in some detail. Despite his anti-Christian prejudices, he began his career as a church historian. Part of his research involved studying centuries-old commemorative brass plaques of the sort that could be read only if you laid a sheet of paper upon the surface and then rubbed a crayon over it to make a legible copy. According to his diary (December 5, 1938), he spent a jolly lunchtime laughing with a friend about the sorts of people who go to ancient churches to take brass rubbings of monuments; the next day, the same friend caught him on his knees in a church, taking a brass rubbing.

Trevor-Roper shared Waugh’s compulsion to write, but he dissipated most of these energies in private journals, his voluminous correspondence, and extensive notes for books that never got past the outline stage, on subjects including the Puritan Revolution, the Elizabethan moneylender Thomas Sutton, the succession crisis of the Elizabethan era, seventeenth-century Anglo-Spanish relations, the relationship of Protestantism to capitalism, and Oliver Cromwell. For decades he talked of putting together an ambitious multi-volume study of the English Civil War; this never materialized.

Some of his colleagues became concerned that he would waste his intellect. Wallace Notestein, professor emeritus of English history at Yale, repeatedly reminded him that his vocation was academic history, not trivial gossip: “The trouble with controversies is they will take your mind away from history. Historians need leisure and quiet almost as much as poets.” Xandra, Trevor-Roper’s future wife, had told him in 1954:

I feel you get waylaid by all the other small commitments of your life and pass by the main commitment of your life—I realise you must write—I could take pride in encouraging you to do this and in creating an atmosphere that would nourish your writing.

Only Xandra spelled out the truth:

After all, you must admit that you have not much output to show, so far—Archbishop Laud (which you won’t let me read), The Last Days of Hitler, some essays for The New Statesman and History Today, an unfinished pamphlet and an introduction to some very unedifying letters . . . As soon as your sabbatical year starts, you are to start writing—I am going to force you.

But not even she could find a way to make Trevor-Roper fulfill his promise; he would never equal the achievement of The Last Days of Hitler, his first, and only, real success.

The Last Days of Hitler, published in March 1947, boasts a verve and lightness of touch that are missing from the rest of Trevor-Roper’s published work. It was based partly on his wartime experience as an intelligence officer; this is the secret to its freshness and imaginative breadth. His character sketches of Hitler’s inner circle are often deft and evocative; his account has never been seriously challenged except on points of detail and interpretation. Yet some of Trevor-Roper’s passing claims about Catholics amounted to malicious slander. He drew direct parallels between the Jesuits and the Nazis: In addition to the false claim about Goebbels’s Jesuit education, he compared Heinrich Himmler to St. Robert Bellarmine. The same passage featured a gratuitous sideswipe at Cardinal Newman.

Among the Catholics outraged at Trevor-Roper’s unprovoked attacks was Evelyn Waugh. He wrote to Trevor-Roper privately and then—more bluntly—to the Tablet:

There was not the smallest reason why Mr Trevor-Roper should introduce Catholic theologians into his nasty story. They are dragged in ignorantly, maliciously and irrelevantly. Mr Trevor-Roper had a sensational subject. Apparently he thought it too good an opportunity to be missed for giving wide currency to his prejudices.

Waugh was barely a decade older than Trevor-Roper, but by 1947, when this first clash took place, he had already been famous for two decades as the most stylishly funny writer of his generation.

Waugh is arguably the greatest Anglophone novelist of the twentieth century. His peers include Henry James, Joseph Conrad, James Joyce, and V. S. Naipaul; he is the finest stylist of them all. Among American writers, perhaps only Ernest Hemingway has anything like his ability to evoke a scene vividly with a few choice words; but Waugh has a classical grace and elegance all his own. He mastered all the technical innovations of the Modernists whilst shunning self-conscious literary experimentation. His role as an entertainer is inextricable from his conception of his art.

He stands with T. S. Eliot and Paul Claudel as one of the foremost Christian artists in modern literature; yet his depictions of sin and vice can look suspiciously gleeful. His anarchic sense of humor often seems deployed against the forces of good and innocence; you wonder whether he is positively rooting for the most wicked, vicious characters he has created. Other Catholic novelists agonize about the Problem of Evil; Waugh shows you just how much fun evil can be. This seems true even in less hilarious books like A Handful of Dust (1934), where he makes undeniably clear where sin and vice always lead.

Waugh’s best-known book is Brideshead Revisited (1945), which is usually read as a nostalgic elegy for a Catholic, aristocratic, hierarchical world destroyed by modernity and the Second World War, at least by those who hurry through the last chapters, or radically misread the ending. Yet the novel points ultimately toward hope, albeit of a sort that bewilders or repels those who reject Christian teachings.

On December 5, 1953, Trevor-Roper published a review in The New Statesman that would later be reprinted in his collection Historical Essays (1957) under the title “Sir Thomas More and the English Lay Recusants.” The prose is high-pitched, denouncing “the modern priestly biographers of the recusants,” who lament the sixteenth-century persecution of Catholics without realizing “that a society at war has the right to protect itself not only against traitors, but against their dupes.” Trevor-Roper sounds angrily constipated, as he usually did when discussing Catholicism, all the way to his final insult:

Dead as mutton, the recusants can still serve to bait a priestly trap. Come unto us, say the Roman clergy, come into the Church, says Mr Evelyn Waugh (for in the intellectual emptiness of modern English Catholicism only the snob-appeal is left) . . .

Of course he was expecting a response; but he baited his trap less expertly than he realized. Waugh wrote to his friend Fr. Philip Caraman, S.J., (1911–1998) on December 7:

Have you read Roper in this week’s New Statesman? I spotted four errors in the first three lines and have written about them. I seemed to find a dozen others. I am sure that someone better-educated than I am could find a hundred. Would it not be a good thing to employ one of your learned friends to go through the articles with a fine comb and expose them all in a long article? It is time Roper was called to order and this article seems a happy opportunity.

Waugh’s first letter to the editor of the New Statesman (December 12, 1953) begins sharply:

Why does this contributor write so very often about the Catholic Church, a subject on which he is conspicuously ill-informed? Sometimes one has to read three or four paragraphs before striking the howler which reveals the quality of his scholarship. This week three lines suffice.

After two minor corrections, Waugh supplies a major one:

Among men of education “recusant” has a limited and useful meaning. It has nothing to do with high treason and the denial of the monarch’s spiritual supremacy; it simply means refusing to attend Anglican Church services. But even in Mr Roper’s loose employment, he has got his facts wrong . . .

The ending is harsh:

Later, among the strange jumble of speculations and misstatements, Mr. Roper truly remarks that in the last century the English Catholic bishops were chary of encouraging men of their faith to take University degrees. Can he wonder at this when he himself presents the spectacle of a tutor in Modern History who clumsily and offensively attacks the Catholic religion?

On December 26, Trevor-Roper’s reply to Waugh was published. He conceded two errors, but refused to back down on the term “recusant”:

I am far from my books, but I think that if Mr Waugh will take a little trouble before screaming about my “howlers,” he will find that before 1570 the word “recusant” was generally applied to those who refused the oath of supremacy; it was only later that it acquired its meaning of non-attendance at church.

Waugh responded by quoting the dictionary’s definition of “recusant” as “one, especially a Roman Catholic, [. . .] who refused to attend the services of the Church of England.” Amid other corrections, he also got round to addressing Trevor-Roper’s initial taunt:

Mr Trevor-Roper’s position as a tutor to Christian undergraduates seems to me dubious. In the essay under discussion he wrote: “In the intellectual emptiness of modern English Catholicism only the snob-appeal” (where do young dons pick up their vocabulary?) “is left.”

This is to insult not only Roman Catholicism, but all forms of Christianity. No one can doubt that we possess the Scriptures, the Creeds and the Fathers. We claim much else beside. The sane Christian criticism of Roman Catholicism is that we are too full and have enriched the original deposit of faith with legends and opinions. If we are empty then what does Christendom contain?

By now, as Waugh noted in his diary, “My dispute with Roper in the New Statesman becomes tediously pedantic.” Trevor-Roper issued another lengthy, irritable reply, claiming superior knowledge of the term “recusant” on the grounds that his forebears, unlike Waugh’s, remained Catholic until the mid-eighteenth century. Waugh shot back: “I cannot accept the theory that because Mr Roper’s family apostatised more recently than mine, he has inherited a superior insight into the proper use of language.” He also reiterated that Trevor-Roper misused the term “recusant” more than once, defended himself unconvincingly even by his own logic, and did not, despite his training as a church historian, understand how cardinals were appointed; in addition, he got the date of St. John Fisher’s execution wrong, as he himself grudgingly admitted. Trevor-Roper toothlessly snarled in response: “May I recommend to Mr Waugh a period of silent reading?” The editors allowed him the last word; but he had obviously been defeated.

In 1957, Trevor-Roper was appointed Regius Professor of Modern History at Oxford, at the age of only forty-three, and immersed himself enthusiastically in administrative intrigues and academic catfights that ended only in 1980 when he left Oxford to become Master of Peterhouse, which was then the most traditionalist of Cambridge colleges. Here too his time was consumed with endless petty rows that once again got in the way of serious scholarly work.

Lucrative journalism turned out to be an even bigger problem. In spring 1983, Trevor-Roper was asked by the Sunday Times to authenticate a set of documents that a German magazine claimed were Hitler’s diaries. He might have quite literally written the book on Hitler’s last days; but he spent little time in archives and, as he once confessed, did not “read German with ease or pleasure”; in short, he had no surefire means of evaluating the diaries. After claiming they were authentic, Trevor-Roper belatedly had second thoughts; but by the time he retracted the claim, it was too late; when they were revealed to be obvious fakes, he was publicly, internationally, universally humiliated.

Trevor-Roper carried on at Cambridge for a few more years, showing remarkable kindness to young men who approached him for guidance. One of them, Blair Worden, his literary executor, has worked tirelessly to make Trevor-Roper’s unpublished work available to a wider readership; Richard Davenport-Hines has done a truly exemplary job editing Trevor-Roper’s letters and diaries; Adam Sisman’s 2010 biography of Hugh Trevor-Roper is unexpectedly absorbing. To have men of this quality as advocates, Trevor-Roper must have done something right. And yet the question remains: Does his life deserve to be remembered, except as a cautionary tale?

Trevor-Roper was the quintessential example of a Young Fogey, a subspecies of dandy that emerged in the twentieth century as England began to lose her power, confidence, and grip on her empire. Young Fogeys are rarely members of the British ruling class; they merely dress and speak as though they were. The illusion they cultivate is not always self-conscious (or successful); even so, Young Fogeys can become genuinely dangerous if they are mistaken for the real thing and begin to gain authority on that account.

Young Fogeys aspire to professions—academia, diplomacy, the civil service, the bar—that might in theory allow them to write books in their spare time; but these days the more successful among them spend their twenties as researchers at think tanks, or as “spads” (special advisers) to Tory MPs, if not in some area of the financial services industry that has direct influence over the Conservative Party. A select few relieve their frustrated literary ambitions through journalism (usually on financial or political topics as opposed to anything that might be interesting to read). But most prefer to daydream about “being a writer.”

The Young Fogey is a divided soul. He has a taste for the art, literature, music, architecture, and clothing of the past, particularly when they originate from a hierarchical, aristocratic culture. But the Young Fogey isn’t in a position to trust his instincts, because he is uncomfortable in his own body as well as in the society in which he lives. He feels isolated and alienated from others by virtue of his intellect, even when he isn’t particularly intelligent.

His intellect is the most important thing about him; and yet the Young Fogey is ashamed of it and seeks other means of demonstrating effortless superiority. In the absence of beauty, inherited wealth, or noble birth, the most obvious form of unearned status is age. The nineteen-year-old Young Fogey dresses and talks as though he were forty-five because he has no other obvious means of claiming status. He could show his exam results, but most people don’t care.

Evelyn Waugh was never a Young Fogey. True, from the age of forty-five he had the cardiovascular health of a man at least twenty-five years his senior. A self-consciously reactionary persona gave him some means of coping with the fact that by the time he was fifty, a single flight of stairs could reduce him to a wheezing, sweaty wreck. And in his “Art of Fiction” interview for The Paris Review (autumn 1963), he famously claimed:

An artist must be a reactionary. He has to stand out against the tenor of the age and not go flopping along; he must offer some little opposition. Even the great Victorian artists were all anti-Victorian, despite the pressures to conform.

Yet even here he was not a Fogey. Fogeyism is above all an expression of impotence. There is no such thing as an artistic, creative, or generative Fogey: Fogeyism is morbidly parasitical on the past. Hugh Trevor-Roper represents the archetype of the kind of Young Fogey that has come to dominate the British Conservative Party since the early 1950s: the Reactionary Whig Fogey.

Fogey Whiggery arose after the collapse of the Whigs’ political home, the British Liberal Party, which once had been the Conservative Party’s main rival for power, but tended to win between six and twelve seats in parliament in the decades following the Second World War. Self-described Whigs like Trevor-Roper avoided the Tories’ new opponents, the Labour Party, out of principled resistance to socialism or tribal allegiance (or simple snobbery). Instead, these Whigs held their noses and began to ally themselves with the Tories, whom they continued to despise as their intellectual inferiors.

Whigs who gained influence over the Conservative Party sought to refashion it along progressive, egalitarian, materialist lines that suited their enlightened theories. Most wanted to release it from the influence of the Church of England, whose leadership remained unpalatably conservative as late as the 1970s. But undercover Whigs could not do any of this openly, so they covered up their quiet revolution by pretending to be reactionaries, until they were duped by their own deception.

Reactionary Whig Fogeys dress and talk like old-fashioned defenders of the throne, the altar, the landed classes, and traditional British culture; but they are merely play-acting as Victorian-era grandees. At heart they are no different from private-equity asset-strippers in their fixation on short-term benefits and their sheer incompetence at ensuring the survival of any institution with which they are entrusted.

The old British ruling class maintained their empire with impressive efficiency; their replacements, Reactionary Whig Fogey impostors with degrees in politics, philosophy, and economics from Balliol, can scarcely organize an effective replacement bus service for a broken-down commuter train. Look at what they have achieved, after fourteen years of controlling a Conservative Party–led government. It is difficult to identify positive accomplishments—just consistent disappointments and the occasional administrative catastrophe, alongside a steady decline in quality of life. Reactionary Whig Fogeys have talked tough about immigration policy while consistently failing to control national borders, or even keep an accurate count of who enters or leaves the country. This failure in particular has alienated a high enough proportion of once-loyal Tory voters to be fatal for the entire party. Why would anybody want to be ruled by people so treacherous, inept, and intellectually mediocre?

In Trevor-Roper’s generation, the other major writers among Reactionary Whig Fogeys were the philosopher Sir A. J. Ayer (1910–1989), the historian of ideas Sir Isaiah Berlin (1909–1997), and the historian Richard Cobb (1917–1996). All were professional academics with pretensions to essay-writing. None has left anything of abiding interest or value.

Fogey Whiggery grew from an attempt to reconcile a nostalgic aesthetic stance with a progressive, materialist intellectual position that radically contradicts it; with such an unstable foundation it was inevitably doomed to failure, defeat, and humiliation. In many ways, being doomed is the point. Fogey Whiggery is rooted in a sentimental attraction to aristocratic hierarchy, High Church ritual, ancient traditions, high culture, and civilization itself. None of these elements coheres with the Reactionary Whig Fogey’s oft-stated principles of political economy. Indeed, if put into action, those principles would destroy all of it, for they leave no room for anything that cannot instantly be monetized or reduced to a numerical value: art, music, architecture; the immortal soul; God himself. There is no substantial difference between the beliefs of the Reactionary Whig Fogey and those of the postwar technocrat who reduces people to interchangeable economic vectors whose sole purpose is to help achieve economic growth. At least the technocrat is honest with himself about what he really holds sacred.

And so despite all their pretensions to “culture” and “enlightenment,” the Reactionary Whig Fogeys have left behind no monuments. The closest thing to a memorial for twentieth-century Fogey Whiggery is Wolfson College, Oxford, whose main building was completed in 1974. Here the principles of Fogey Whiggery are embodied in cheap concrete that is not only unsightly and degrading, but also unsuitable for the damp local climate. Wolfson College is too pitiful to be meaningfully ugly.

Waugh had no elaborate vision for the future, just a good idea of what the future would hold. His entire post-conversion oeuvre meditates implicitly on the Four Last Things: Death, Judgment, Heaven, and Hell. If you miss this aspect of his narratives, then you will think of him as cruel, nihilistic, and occasionally sentimental. You might also misinterpret the sanctuary lamps in Brideshead Revisited and the Sword of Honour trilogy (1952–1961), which continue to be lit in the chapels of great houses even after the families who built them are gone: the flames signal that the Body of Christ is present in the tabernacle on the altar. Our traditions, our culture, and our civilization itself depend on that continuing presence, which is not merely a symbol.

In 1928, not long before his conversion, Waugh wrote a biography of the Pre-Raphaelite artist and writer Dante Gabriel Rossetti, who harvested (or plundered) Catholic traditions and symbols to reuse them in his work. Waugh could see that Rossetti was “a Catholic without the discipline or consolation of the Church,” and recognized that

there was fatally lacking in him that essential rectitude that underlies the serenity of all really great art. The sort of unhappiness that beset him was not the sort of unhappiness that does beset a great artist; all his brooding about magic and suicide are symptomatic not so much of genius as mediocrity. There is a spiritual inadequacy, a sense of ill-organisation about all he did.

In other words: It isn’t enough to copy the great artists of the past; you must share the essential elements of their faith. As Waugh grew into his vocation, he began to see how he might need more than talent to succeed. What really matters is a writer’s ability to discern reality and reflect or illuminate it for the reader. Waugh is often praised as a stylist, but style is merely a vehicle for communicating the truth. Waugh saw the truth clearly and had an effective means of sharing it with others. This is why his work will survive for as long as the English language continues to exist: because he told the truth.

The post Waugh Against the Fogeys appeared first on First Things.

The Right to Be Killed https://firstthings.com/the-right-to-be-killed/ Mon, 28 Jul 2025 05:00:00 +0000 https://firstthings.com/?p=92313

In the days surrounding the assassination of Martin Luther King Jr., leaders of the civil rights movement startled their white supporters with a change in direction. Their efforts had been toward desegregation, but then talk shifted to black power. There could be no integration, they argued, unless white people looked on blacks as equally and fully human. And this demand was worth dying for. As James Cone, the theologian of black liberation, put it: “When the black man rebels at the risk of death, he forces white society to look at him, to recognize him, to take his being into account, to admit that he is.” In effect: Either recognize our humanity and treat us accordingly, or deny our humanity and kill us like animals.

The mortal “struggle for recognition” propels social and political change. The argument originates with Hegel; Francis Fukuyama popularized it, observing that, from democracy and universal enfranchisement to the rights of women, gays, the disabled, and others, history is a story of social and political upheaval driven by people’s demand that others recognize their humanity. The desire for recognition is not just a symptom of our therapeutic culture. It is the first of our needs, once our animal needs have been met. The struggle for recognition of our humanity is the struggle to clarify our civil and human rights.

What about assisted suicide, now legal in Canada, the entire U.S. West Coast, and soon, perhaps, New York and additional American states? Must it be numbered among our civil and human rights, consequent upon the recognition of our humanity?

Let us first note that the demand for legal assisted suicide addresses not the legality of killing oneself, but the legality of assisting others to kill themselves. The suicidee (patient? victim?) is secondary. The primary object of the right-to-die movement is the living.

People may kill themselves at any time, without permission or even much pain. Even where it is not legally permitted, suicide, once accomplished, is beyond the reach of legal consequence.

No doubt people want legal assisted suicide for many reasons: the fear of being a burden, enduring poor quality of life, dying alone, dying in excruciating pain. We must nevertheless focus on the desire for someone else to do the killing. Alongside fear of a botched attempt or leaving behind a mess for others, I suggest that the desire for assisted suicide is a perverse expression of the need for recognition. People who wish to kill themselves also want their choice to be socially approved.

The need for social approval is bound up with the need for recognition. Humans are neither the fastest nor the strongest animals, and none of us can survive on our own. Social disapproval feels like a death sentence because, for most of history, it has been one. Beyond the evolutionary and civilizational need for social approval, there is the psychological and the spiritual: The approval we receive tells us who we are and who we ought to be. That our desires or behaviors may make us unlovable feels devastating. The yearning for social approval and authenticity with oneself has driven countless people into countless closets, fearful that they will come to be known and shamed rather than known and loved.

Gay pride parades arose from this emotional and spiritual need, turning a ground for social stigma into a point of pride. The need for social approval was a central driver of the gay rights movement. Even when many states had civil unions that conferred on same-sex couples most of the rights of heterosexual marriage, there remained a felt need for social approval that was not equivocal. “Love is love,” we were told. Gay couples were already living together, already out of the closet. But the point was never that same-sex couples needed legal permission to love one another. The point of same-sex marriage was to secure social approval to the point that disapproval was no longer legally admissible or socially acceptable.

The same motivation drives the right-to-die movement. Suicide remains an object of social stigma, directed first at the person who kills himself and then at those close to him—his parents, his wife, his employer, his classmates. In our culture, suicide is still considered a tragedy. I still see crisis hotline phone numbers posted in bar restrooms, urging people to call someone and talk rather than kill themselves. Obituaries treat suicides with gentle euphemisms, informing us that a young person “died unexpectedly.” Those who wish to kill themselves know that their desire is unacceptable, and though they intend to die and so cut themselves off from society, they nevertheless desire that the living approve of their departure. This quest for approval, like the fight for same-sex marriage, is a core element of the demand for legal assistance for suicide. Legal assistance means unequivocal approval.

The social approval of suicide requires the social management of how people talk about suicide. The eulogy or funeral of one who has died by assisted suicide does not require talk of tragedy, nor even saying to those who survive the suicidee, “I’m sorry for your loss.” That it is a loss at all must be obscured. Grief must be circumspect, kept distinct from disapproval.

Again, there is a precedent. Immediately after same-sex marriage became legal federally, activists directed their attention to transgender rights. “Preferred pronouns” appeared, first in university classroom icebreaker activities, then basically everywhere: email signatures, X bios, LinkedIn profiles, conference registration forms, check-in questionnaires at the doctor’s office—too many places to list.

Tactically speaking, the focus on pronouns was brilliant: Not only is it a “small” thing to ask—a matter of life and death to the transgender person but supposedly of zero import to others; it is also a way to ensure that people remain cooperative at all times. We use third-person pronouns when speaking about someone, typically in that person’s absence. We address people with second-person pronouns: you, your, yours. Preferred pronouns are a way to manage, not just whether a person is treating a transgender-identified person as though he or she were the gender he or she claims to be, but how that person relates to transgender identification as such. To use someone’s preferred pronouns is to be disciplined by that person in his or her absence. I’ve had too many conversations with parents whose adolescent child identifies as transgender, and listened as they torture themselves, using the child’s preferred pronouns and faking a smile, unable to give voice to the grief that expresses disapproval.

We must expect the same social and political outcome with suicide, which we are meant to believe will be transformed by assistance into something other than a moral tragedy. Like the affirmation of transgender identity, to give social approval to assisted suicide is to allow the dead to manage the living, dictating how the living speak and think about suicide. Assisted suicide is the preferred pronouns of nihilism.

What sort of right is the right to assisted suicide? Like earlier movements for recognition, it offers itself as a matter of civil and human rights, of choice and human dignity. The truth is otherwise. Its advocates say they wish to die with dignity, and then they ask to be euthanized like pets. It is a feature of human dignity to be able to face enormous suffering. The rights emerging from the sexual revolution require the non-recognition of the meaning of the human body. Likewise, the “right” to assisted suicide can only be the right not to be recognized as a human being.

When black Americans were struggling for civil and human rights—for the recognition of their humanity—they arrived at the profound conviction that it was dignified to risk death in that struggle. Assisted suicide represents a perverse inversion: a renunciation of dignity, the demand that one’s humanity go unrecognized. A society that honors that demand will not, in the end, recognize the humanity of anyone.

No Chosen, No “Almost Chosen” https://firstthings.com/no-chosen-no-almost-chosen/ Fri, 25 Jul 2025 05:00:00 +0000 https://firstthings.com/?p=92503

Mazal tov! Partisans on the left and on the right, fighting bitterly for a larger swath of America’s increasingly divided political landscape, have now found a goal around which to unite in harmony: The Jews are our misfortune.

Talk to your average committed progressive, and you’ll soon hear one or another rant about Palestine, variations on the theme of how it’s a darn shame that Jewish political influence is directing American weapons to Israel, where they’re being used to facilitate a genocide in Gaza. That this phase of the war itself erupted only after Hamas’s savage attack, or that it can end the moment the terror group returns the civilians it still holds hostage—these are treated as superfluous details. What matters is the raw sentiment that powers the contemporary progressive movement and holds that America is complicit in Israel’s colonial wars. It’s a powerful enough rallying cry to place Zohran Mamdani, a political wannabe with a virtually nonexistent resume, within striking distance of becoming the mayor of New York City. Forget income inequality or social justice or all the other pinko chestnuts of old—all you need to do to excite the left these days is to chant “globalize the Intifada.”

Turn to the right, and you’ll hear a similar tune. America—thus spoke Tucker Carlson and his minions—is fighting Iran at Israel’s behest. “The one silver lining to all this,” wrote Darryl Cooper, a popular self-styled internet historian on the right and a guest on Carlson and Joe Rogan’s shows, “is that everyone, left, right, and center, now knows that Israel controls U.S. foreign policy, and that most politicians and conservative media personalities are foreign agents.” A million people viewed Cooper’s rant on X, and many—including influential pundits like Candace Owens—echoed it for days. The mantra: We oughtn’t to send American boys to die in Israel’s wars.

Call it a Jewish superpower: causing the shrillest voices on the left and the right to converge. We’ve been here before, and it’s not a happy place. When both sides turn on the Jews, we know we’re facing a major and potentially lethal social contagion. And as is usually the case when anti-Semitic spirits run high, the problem has absolutely nothing to do with the Jews themselves.

Anti-Semitism works best when Jews are treated as both anti-matter and matter: They’re the Marxist fiends who plot communist uprisings and also the capitalist pigs who own all the factories; they’re effeminate little creeps who can never achieve true and noble masculinity and also libidinous sexual predators who seduce the women and corrupt the young; they’re pathetic because they’re so powerless and dangerous because they’re all-powerful. These blatant contradictions aren’t a bug—they’re a feature, allowing the haters to cast the Jews as the ultimate shapeshifting villain. It’s a convenient illusion, excusing the Jew-haters from the hard task of fighting their real, and far more formidable, enemies.

And let’s not kid ourselves; the real enemies of our contemporary lunatics, left and right, are not the Jews, but those who think that America is a great, godly, and exceptional nation. Which is to say, most Americans.

Consider the fact that no less a lion of liberalism than Jon Stewart now fawns over Carlson’s cackling propaganda videos, warning that World War III will soon ensue if President Trump takes any action against the murderous mullahs in Tehran. Recall that Stewart effectively ended Carlson’s career at CNN after eviscerating him in a memorable and contentious interview. But the bitter enemies both share a foundational belief that America is first and foremost the sum of all its flaws, and that the best way to mend it is to cut it down to size by making sure it enjoys no undue advantage over any other nation. Who, after all, is to say that any one country is better than any other?

That, of course, is the worldview that guided Barack Obama, which is why he was so fond of repeatedly telling his fellow Americans that “that’s not who we are” as a country. Obama had a vision of America that was radically different from the one endorsed by most Americans, and to sell it to the American people, he often insinuated that anyone who didn’t share his more enlightened worldview was simply a benighted soul erring on the wrong side of history. And Carlson, I believe, is little more than an Obama bro in a tanned MAGA skinsuit, defending Iran—the crown jewel of Obama’s foreign policy—and castigating America for believing it has any moral advantage or right to pursue its interests. It’s no coincidence, then, that Stewart sang an ode to Carlson on his own podcast while hosting Obama’s senior aide, Ben Rhodes.

At its core, the furor over U.S. action in the Middle East (or, for that matter, Ukraine) is not a political position. It’s a theological one, predicated on a rejection of American exceptionalism. And because American exceptionalism is an article of faith shared by most Americans—a recent survey put the number at 73 percent—the new America downgraders must find new ways to sell their wares.

But how? It’s impossible to portray Trump’s policy as needless warmongering. Such claims fall apart upon contact with reality, because no one, least of all the president, is proposing Iraq redux, a large-scale military operation that risks American dollars or lives. Instead, Trump’s approach is as cautious as it is reasonable.

Judicious policies and limited involvement: There’s little here to excite the imagination, which leaves those who reject American exceptionalism in a pickle. Their worldview, to be fair, is a legitimate one, and it does not necessarily bend toward rank anti-Americanism any more than fervently believing in American exceptionalism risks souring into arrogant and ill-advised triumphalism. But if you hold—as does Obama, and as does Carlson—that America is just another nation, neither special nor godly nor chosen, your options are limited when it comes to convincing others that your views are worthy of consideration. Because the facts don’t support any real cause for alarm over impending Armageddon. And because it’s more than a little inconvenient to target a supermajority of Americans as the enemy, the America downgraders turn to—and on—the Jews.

It’s a trick the hard right in France tried about a century ago. Unable to sell their political vision to a nation increasingly enamored with secular liberalism, they dragged a Jewish officer named Alfred Dreyfus to court and accused him—and, by proxy, all Jews—of treason. You hardly need to be a historian to know that the gambit backfired, miserably. And yet here we are: Rather than arguing in good faith about what America’s role in the world ought to be, too many, on the right and the left, are arguing that America’s enemies aren’t really its enemies, that its interests aren’t really its interests, and that those dastardly Jews are once again manipulating the righteous gentiles to do their dirty work for them.

It’s a canard, of course, but a clever one. The Jews represent 2.4 percent of the American population. Seven million or so reside in Israel. But it’s not real people who concern the Jew-hater. In hateful discourse, right and left, the Jews stand as representatives of the foundational belief that America is a covenantal nation—the Hebrew name for the United States is Artzot Ha’Brit, the lands of the covenant—and, as such, occupies a unique place in the Creator’s plan for mankind. Here’s the logic: Reject the original bearers of the covenant, and you negate the whole project. Turn against the root, and you’ve cut off the branch of American exceptionalism as well.

Let us, then, be clear: Those who ululate about Palestine or Iran rarely care about either. Nor are they engaged in an earnest conversation about American policy and its goals. Instead, they’re arguing for an America stripped of its divine mission, seeing us as just another bloated Rome, stumbling toward ruination. America, to them, is not a shining city on a hill, spreading the light of liberty and hope to a world drowned in darkness, but an empire to curb, resist, and, ultimately, overcome. This is why Whoopi Goldberg, to name but one useful idiot, recently argued on The View that women in Iran had it better than black Americans, and why Carlson famously traveled to Moscow to opine on camera about the superiority of the Russian way of life. Both statements are rejections of the belief that America is an “almost chosen” nation (as Abraham Lincoln wisely put it), and if you’re going to reject God’s providential favor, you could hardly do better than rejecting the chosen people themselves, the Jews. No chosen, no “almost chosen.”

Don’t worry about us Jews; we’ll be just fine. Worry mightily about America. We can and should argue about how we ought to respond to Moscow, Tehran, or any other global threat. We should never, though, doubt the role this exceptional nation was destined to play in the course of human events, or undermine it by denying its favor in God’s eyes. It’s time to reject the Jew-haters, reaffirm our love for and commitment to this country, and work together to let America be America again.

Ecumenical Fear and Loathing https://firstthings.com/ecumenical-fear-loathing/ Thu, 24 Jul 2025 05:00:00 +0000 https://firstthings.com/?p=93182

The End of the Schism:
Catholics, Protestants, and the Remaking of Christian Life in Europe, 1880s–1970s
by Udi Greenberg
Harvard University, 368 pages, $39.95

Once upon a time, there was a culture that was split into two bitterly opposed parties. They shared the same history and proclaimed the same fundamental beliefs, but distrust between them was deeply entrenched. Even when they lived alongside one another, their lives were strikingly separate. They inhabited their own partisan media worlds. Both sides saw every issue through the prism of their conflict, and they found ingenious ways to blame each other for anything that was amiss in the world. There was little reason to imagine that the situation might ever change.

And yet, within a single lifetime, it did—almost beyond recognition. As a new series of challenges arose, the two parties found themselves sharing common enemies. And though plenty of their leaders and spokespeople reflexively blamed the new evils on their old enemies, some bold souls on each side began to plunder the other’s ideas and even, cautiously, to form a common front. As new challenges proliferated, so, too, did the temptation to cross party lines in search of collaborators. Soon the two sides began to find principled justifications for these alliances of convenience. Quite suddenly, the kind of cooperation that once had been denounced on all sides as treachery came to seem almost banal: yesterday’s partisanship.

Udi Greenberg’s book is about how western Europe’s division between Catholic and Protestant, which seemed ineradicable, morphed in less than a century into a broad ecumenical consensus. Greenberg is much too careful a historian to draw the dubious parallel I’m suggesting with our own age. But you have to, don’t you?

The conventional, cynical version of the rise of ecumenism holds that nothing heals a quarrel better than a common enemy. In the modern age, Protestants and Catholics discovered that they hated and feared secularism, and in particular communism, more than they hated and feared each other. The losing side in a culture war can’t afford to be picky about its alliances.

Greenberg’s project is to deepen, broaden, and enrich that crude hypothesis. His method is to immerse himself in the best-selling writers who occupied both sides of the divide during the century he considers, with particular attention to those who had international readerships. People whom we now see as theological giants of the age, such as Barth and Bonhoeffer, are hardly mentioned in this book, while now-forgotten figures like Napoleon Roussel and Wilhelm von Ketteler take center stage. Greenberg’s hard-won familiarity with this material, in many languages, is the book’s greatest strength. It gives him the authority to show us this era, not as we choose to remember it, but as it appeared to itself.

The story he weaves has three strands, threaded through this entire period: three themes that consistently drew Protestants and Catholics together despite themselves. The first, as expected, is the fear and loathing of socialism, both in its blandly anticlerical and its fiercely anti-Christian variants. Greenberg tracks the emergence of “an inter-Christian alliance as a tool to maintain Christian hegemony over public life.”

The second strand is more original. Greenberg sees a similar Catholic-Protestant alliance emerging in the late nineteenth century to defend traditional Christian norms on gender roles, sexuality, and the family in the face of feminism and changing sexual mores. This is the book’s most compelling theme. As early as the 1890s, Catholic and Protestant politicians in Germany were cooperating to block proposals to liberalize marriage laws; in the 1950s, there was still a determined, cross-confessional rearguard action against the advance of women’s rights. Recognizing this common interest as a spur to ecumenism is a sharp insight.

Greenberg’s third strand of ecumenical cooperation, which emerges later and rather more tentatively, is global missions. He sees the mission field as being aggressively confessionalized even at the turn of the century, but he reckons that thereafter shared concerns about “civilization” and Islam drew Protestant and Catholic missionaries together. The retreat and collapse of empire only accelerated this convergence, as European Christians discovered common cause against the worldwide threat of socialist-inflected nationalism and discovered the common language of international development aid.

Once he has established these three arenas—socialism, gender, and missions—Greenberg then traces how Catholics and Protestants slowly, warily learned to cooperate in each one. Two episodes are pivotal. First is the rise of Nazism and fascism, which he provocatively claims was “more important” than communism in driving ecumenism. The argument is that a great many Christians came to believe that their stance on fascism—whether for or against—was more important than the old Catholic-versus-Protestant division. And so Christians on both sides scrambled for ecumenical allies; the one thing pro- and anti-fascists could agree on was the need to work across confessional lines.

Something similar happened in the 1960s, when a new split emerged. On one side stood the post–World War II Christian establishments, rooted in the Christian Democratic parties that now dominated the center-right: conservatives and old-fashioned liberals who had made peace with postwar democracy but were fighting a rearguard action against leftist secularism. On the other side were the new Christian radicals of the 1960s, the apostles of Bonhoeffer’s “religionless Christianity,” fiercely opposed not just to imperialism and racism but to the complacent “churchianity” that they believed had betrayed Jesus’s own ethics. Again, the one thing the two parties agreed on was the irrelevance of old confessional scruples. By the 1970s, ecumenism went without saying.

It’s a pretty compelling case, and a story told with verve and formidable learning. But there are a few things to notice along the way.

One is the book’s restricted scope. The title promises “Europe,” but it is actually about Germany, France, Belgium, the Netherlands, and Switzerland, with walk-on parts for other countries. Fair enough: These were the nations in continental Europe that experienced real Catholic-Protestant divisions within their borders. (I’d love to know the Hungarian story, but that’s not a language anyone learns to read lightly.) Greenberg does briefly cite Spain and Ireland as contrasting cases, which show that different paths were possible. These were European countries that maintained assertively confessional identities throughout this period, and, in Ireland’s case, continued to be viciously divided. The point is that Franco-German ecumenism was not an inevitable feature of modernity, but a response to specific circumstances.

The exclusion of two other countries—Britain and the United States—is more of a stretch. Greenberg says that American ecumenism had a quite different trajectory, which is true, but there were enough points of contact that the comparison would be worthwhile. And Britain: Well, I’m British and I don’t want to sound like I am griping at being ignored, but Britain is a European country with a deep history of Catholic-Protestant antipathy and a country in which all three of Greenberg’s arenas were sites of fierce contestation. In many ways, Britain’s journey toward ecumenism was strikingly similar to the Continent’s, but the differences could be illuminating. The war against Hitler did not shatter Britain but rather renewed its national myths. There was no Christian Democrat party on the center-right; instead, the center-left Labour Party famously owed “more to Methodism than to Marx.” And the Church of England scrambles many of Greenberg’s categories, being an ungainly ecumenical entity of its own: a historically Protestant church, many of whose leading members forcefully insisted on their Catholic identity. What would the transformation of British gender roles, the sudden disintegration of the world’s biggest colonial empire, and the surge of 1960s radicalism in the British context look like when seen in light of the European ecumenical movement? I’d love to know what Greenberg would have to say.

Until I hear that fuller version, I’m not convinced that the changes Greenberg describes were as sharp as he claims. During the French Revolution, Jacobin “dechristianization” was already teaching Catholics and Protestants to make common cause. After Napoleon’s defeat, Catholic Austria, Protestant Prussia, and Orthodox Russia formed a “Holy Alliance” against liberalism and revolution. In the mission field, there was rivalry but also cooperation. The Anglo-French Opium Wars against China secured rights for both Catholic and Protestant missionaries, and there as elsewhere they learned to carve out distinct geographic zones. The Protestants’ World Missionary Conference in Edinburgh in 1910—conventionally the launchpad for modern ecumenism but barely mentioned in this book—deliberately chose not to address missions to South America, on the grounds that that Catholic continent was already Christian. Nor did ecumenism carry all before it. Confessional divisions (including divisions among Protestants, a subject that goes unmentioned) are stubbornly persistent. Two hours after I finished the book, a former student emailed me to say that the late Pope Francis was the Antichrist.

One feature of Greenberg’s argument is a troubling parallelism, whereby Protestants and Catholics, and indeed conservatives and radicals, always mirror one another. But these traditions really were starkly different. Their struggle was asymmetric warfare. Nineteenth-century Protestants were indeed intensely paranoid about Catholicism, but Catholics were not nearly so worked up about Protestants. Or again, the pro-Nazi Deutsche Christen did indeed cherish ecumenical hopes, of a sort—but they were Protestant. Pro-Nazi Catholics (a much smaller group) had no such enthusiasms, and neither Catholic nor Protestant anti-Nazis showed much evidence of ecumenism, either. And it may be tasteless to point this out, but the postwar ecumenical flowering was possible only because Rome gave so much ground, embracing principles of liberalism, pluralism, and democracy that it had once denounced as Protestant deceptions. Vatican II’s conservative opponents grumbled that the Council was not a ceasefire in the wars of religion but a surrender. It was easy for 1960s Protestants to be generous to Catholics. The Protestants seemed to have won.

I appreciate why Greenberg might not want to express such a crass view. As he points out, he is neither Christian nor European, and he does not have a dog in this fight. But as he knows all too well, in a world of zero-sum partisanship, nothing and no one is ever neutral.

Greenberg is austerely objective, but, inescapably, he writes from within the twenty-first-century American academy and its fastidious values. The great benefit of this perspective is that he can be merciless in his criticism of European Christianity, which is often richly deserved. You may not agree that ecumenism was at heart “a project to perpetuate disparities” or a “struggle to preserve inequality,” but he makes a serious case. And it is fair to describe ecumenical writers’ “obsessions” with certain issues, to call their gender policies “monstrous” and their propaganda “lurid,” even to note that their convictions are “repeated ad nauseam.” Greenberg has read an awful lot of this stuff and has earned the right to express an opinion. When he is discussing policies like the forced castration of gay men or the exclusion of illegitimate children from public schools, it is hard to disagree with him.

But I missed seeing the other side of the coin. Greenberg is ready to blame but not to praise. At times it feels as though he wanted to. There are figures, such as Yves Congar or Hendrik Kraemer, whose stances he does seem to admire. But although he describes their criticisms of “totalitarianism,” he carefully puts that term in scare quotes, distancing himself from their value-system—while feeling free to criticize ecumenists’ “heteronormativity.” He points out the 1950s Christian Democrats’ “relentless” determination to avoid both socialism and hyper-individualism without countenancing the possibility that this determination may have been wise. His discussion of global “development” is quick to recognize its neocolonial qualities without noticing its countervailing merits. Only the Christian radicals of the 1960s get friendly treatment.

I get it, of course. What self-respecting modern secular academic would be caught being warm about establishment European Christians? And why should he? Well, because he’s a historian. He doesn’t need to like his subjects, but he does need to be able to inhabit them, to see the world through their eyes. Doing so would help him understand, for example, why Christians had to work so hard to impose their social vision despite believing that it was “natural.” This was not, as he implies, some revealing contradiction, but a central tenet of Christian anthropology, in which human beings are not what we were made to be.

And yet, ultimately—and despite the final pages warning of the embrace of Christian nationalism by the modern European far right—I found this a hopeful book. It suggests that even the entrenched divisions out of which Greenberg is writing are not forever. All we need to reunite us is something we can hate and fear more than we hate and fear each other.

The post Ecumenical Fear and Loathing appeared first on First Things.

Goodbye, Saffron https://firstthings.com/goodbye-saffron/ Wed, 23 Jul 2025

Vanishing Landscapes:
The Story of Plants and How We Lost Them
by Bonnie Lander Johnson
Hodder and Stoughton, 320 pages, £22

In A Distant Mirror: The Calamitous 14th Century, Barbara Tuchman wrote that the “people of the Middle Ages existed under mental, moral and physical circumstances so different from our own as to constitute almost a foreign civilization.” Thus, Tuchman went on, the similarities between us and them reveal qualities “permanent in human nature.” From this perspective, a new book by scholar and First Things contributor Bonnie Lander Johnson, Vanishing Landscapes: The Story of Plants and How We Lost Them, offers the intriguing promise of a way back to a life closer to nature. If we know how they did it, maybe we can do it, too.

Lander Johnson is a fellow and associate professor at Downing College, Cambridge, where she teaches the literature and history of the early modern period. She is also the author of Botanical Culture and Popular Belief in Shakespeare’s England and the editor of The Cambridge Handbook of Literature and Plants. Now Vanishing Landscapes offers a general-interest work on similar subject matter. In it, Lander Johnson examines the history of humanity’s relationship with nature through the prism of seven workhorse plants, over a period ranging from the 1530s through the 1750s.

Lander Johnson details the British uses of apples, saffron, woad, reed, oak, grapes, and wheat, during the transition from the premodern (medieval) usage to the modern. Wild crab apples, for instance, have always flourished in the British Isles. But the first variety that was good for eating, the Costard apple, was introduced by the Normans in the eleventh century. After being for a time a fashionable food for aristocrats, it became a staple for the medieval peasantry and remained popular until the 1960s. Fruit trees were particularly significant in medieval times because their bounty was reserved for personal use, instead of being sold as cash crops, as grains were. And apples were “easy to grow and harvest, easy to store over winter and highly nutritious whether they [were] eaten raw, cooked or fermented as cider or vinegar.” Orchards were considered part of the medieval commons, with the result that apples were freely available to everyone, Lander Johnson writes, and thus represented a natural and direct human right to God’s bounty—so fundamental that people did not even conceive of it as a “right.” Apples represented “the ancient expectation that the earth was created for use by all people, ‘in common.’” Until, that is, enclosure wrested this precious resource from the people.

Saffron, the subject of the second chapter, illustrates our former reliance on herbs—and on those around us—for health and healing. In medieval Europe, “simples” were remedies known to most housewives, produced from a single herb or flower and grown in a kitchen garden. Today saffron is primarily a niche seasoning associated with Spain or the Middle East, but in the Middle Ages, it grew in fields and gardens all over Britain and Ireland, “both treasured and ubiquitous.” Medieval people used it to treat melancholy, “pox of the eye,” and “cysts, swellings and cankers.” (Modern scientific research lends at least some support to all three uses. Unfortunately, most saffron commercially available today has been diluted and would not be recognizable as the medieval product.)

In order to explain how our relation to nature was transformed, Lander Johnson offers an eclectic mixture of history, personal reflection, and present-day reporting. She contemplates the apple tree in the garden of her Cambridge home and bakes saffron buns with her children. She also visits modern-day craftspeople who are attempting to resurrect or carry on nearly lost traditions—small-farm cider-makers in Devon; a woman in the Scottish Highlands who makes natural plant dyes; a husband-and-wife reed-cutting team in East Anglia.

The main turning-point is Henry VIII’s dissolution of the monasteries and the sixteenth-century enclosure of land formerly used as commons. Apples lost their centrality due to the widespread privatization of orchards in the period from 1536 to 1550. These reforms transferred land formerly held by the crown, monasteries, or the nobility to smaller landholders, resulting in a social division between landowners and less fortunate tenant farmers. To some extent these changes were imposed from above, but they were also widely taken up because of human impulses we’ll recognize. “Some medieval commoners aspired to a greater degree of separation from their neighbours,” Lander Johnson writes. They wanted to amass wealth of their own, pile up stores for the future, and gain in security and self-determination. Goodbye, free apples.

Around 1550, the new forms of land ownership and the shuttering of the monasteries created a wave of displaced rural youths. These youths set out for London, where the trades were increasingly being centralized. Once there, they no longer had access to simples prepared by female relatives or village herb-women, and they were much more likely to receive an education and become literate. Medicine became literate as well; the College of Physicians, the first body of its kind in Britain, was established in 1518. Henceforth, forms of “expert” knowledge arose, and remedies became impressive, exotic, and rare instead of trusted, domestic, and practical. Goodbye, saffron.

The story of humanity’s premodern relationship with nature and early-modern alienation makes for riveting reading. Lander Johnson’s chapters on local clothing dyes and reed-farming prior to fenland drainage in East Anglia beautifully evoke the mystique of the past. Further developments include the effect on English forestry policy of Oliver Cromwell’s New Model Army and the relationship between government management of grain stores and the foundation of the banking system. But Vanishing Landscapes is less successful in delivering on its most tantalizing promise: explaining what the plants meant to the people of the time and how social structures and plant use were connected. In her introduction, Lander Johnson suggests that the premoderns can teach us how to resist our own alienation from nature—a thrilling proposal, but one left unfulfilled.

The general tone of the book is established early on:

We all once lived in the landscape. We worked and loved, slept and dreamed inside it. . . . We relied on plants for everything but we did not view them as mere objects for our pleasure and use. . . . Wheat was not only food but the breath of God calling us to communion with each other. Flowers fattened bees but they also showed us how to live, their faces turned all day to heaven and inward at night. We were not the authors of nature but part of its fabric. This was our experience of the natural world until we became modern people.

It’s possible that all these things are true—the reader has no information to the contrary—but the exaggerated romantic language gives me pause. It does not seem credible that human beings who were engaged in subsistence farming ever needed to learn about day and night from the flowers. If they thought wheat was “the breath of God,” that’s interesting, but it raises questions and should be sourced. Nor does it seem likely that early modern people understood nature just as we do now, as a landscape full of plants who should be our respected and loving partners, with whom we are more or less equals as part of the fabric of the natural world. This rhetoric is a particular feature of the present—when we have defeated nature, have no fear of it, and are busy exhorting others to treat it better. Peasants in Britain in the 1500s might have indulged in almost exactly the same form of misty nature-worship as today’s urban nostalgists—but it seems unlikely. How did medieval Christians understand “nature” anyway?

Lander Johnson says that medieval people believed “the fruits of the earth were created by a loving God for the sustenance of your body and soul,” that pruning branches was “caring for God’s own body,” and that plants were “creatures with a soul placed in them by God.” This reader is left suspicious. In the religious framework of the Middle Ages, was your soul nourished by plant foods, or only your body? And in what sense does God place the soul in the plants? St. Thomas Aquinas, following Aristotle, taught that though all living creatures have anima, a form of life force or soul, only the human soul is immortal, and God’s personal intervention occurs only in the placement of human souls.

On a higher level, were human beings part of the “fabric” of God’s creation in the same way animals and plants were, or in a different way? I suspect that medievals had ideas of order and hierarchy that undergirded their treatment of “nature” (whatever that meant to them), and that these faithful, Christian belief structures were the key element that has become alien and unimaginable to us. Yet they are precisely what we need to learn about, in all their challenging detail, if we are to reverse course.

Lander Johnson’s story of transition reminds us of Tuchman’s “permanent” qualities: Our rulers’ desire for power and our human desire for advancement severed our connection to the apples; the lure of the foreign and exotic caused saffron and woad to disappear; and so on. It would be fruitful to connect these impulses to the modern-day vices inherent in phenomena such as urban pastoral nostalgia, which has complex effects on the very natural world to which its practitioners wish to return. Lander Johnson acknowledges some of these vices during her visits to the modern-day practitioners of old-fashioned, low-tech, small-business ways of farming and crafting—but not quite sharply enough. She knows that traditional making and crafting businesses, in their modern incarnation, produce luxury goods for elites at such extraordinary expense that the product often has to be subsidized by other income streams, usually some form of tourism.

She also mentions that the elite love of rural areas in which pastoral nostalgia can be indulged is rapidly turning these areas into second-home deserts (in both the U.S. and the UK), thus pushing out lower-income residents and all but the most successful makers and crafters—who in fact are an intensely modern phenomenon. She identifies the complex role nostalgia plays in alienation—in, for example, London of the mid-sixteenth century, when newly educated arrivals from the countryside turned to cataloguing botanicals as a hobby, hastening the dominance of scholarly expertise and displacing the very tradition of herb-wisdom they yearned for. Yet she indulges in this form of utopian dreaming herself. It’s a frustrating tendency in an otherwise thoughtful book. Too much of this, and the landscape—both medieval and modern—vanishes.

The Future of Catholic Theology https://firstthings.com/the-future-of-catholic-theology/ Tue, 22 Jul 2025

About ten years ago I found myself in China teaching a weeklong philosophy seminar on the thought of Thomas Aquinas. Present were forty or so young philosophers from premier Chinese universities. Also present, acting as observers in the back of the room, were members of the Chinese Communist Party. I taught in jacket and tie, but everyone knew that I and one other Dominican professor were priests. The students talked to us more openly at the meals, at crowded tables, where it was not easy to be overheard. Most were non-Christian, but almost all were studying Western philosophy. I will never forget asking one of them why he was present at the seminar, given that the philosopher we were studying was a medieval Western Christian thinker. He said, “Father, the Cultural Revolution in the 1960s severed contemporary Chinese culture from its historical past, its traditional ethical resources. Today we know that communism is a failed system, but what we don’t know is the meaning of life. We wonder whether it might have something to do with Christianity.” I found these words prophetic.

We, too, have been severed from our historical past. It’s all too common to think that nothing can exist beyond the secular order, which represents a kind of stasis, the endpoint of Western history. And this mentality is increasingly attended by discontent, a sense that things aren’t working. This Chinese student, however, emerging from the most intensive attempt in history to stamp out religious belief, was aware of a profound and genuine possibility, a condition of naivete, that of a person seeking meaning, open to a religious proposal. He was envisaging the possibility of a post-secular order and a new religiosity.

He was correct, not only about the spiritual conditions in China, but also about those in our own societies. We live amid global religious conflict, the threat of nuclear extermination, amazing scientific progress, and Western existential malaise. The meaning of life is indeed a twenty-first-century question.

Philosophy and the natural sciences can give us answers, but only to a point. A very good philosopher might provide sound arguments for the existence of God, but he cannot introduce us to God personally. Moreover, there is a nucleus of personality in us, characterized by intelligence and freedom, which demarcates the existence of a soul. But no one knows what happens to the soul after death. And neither philosophy, nor politics, nor technology can deliver us ultimately from the problem of evil, whether moral or physical. Religion and claims about revelation remain always relevant and unavoidable. Thus our challenge—and our opportunity. In our historical moment, Catholic theology should seek to explain the meaning of life in its ultimate registers: with reference to God and the Incarnation.

This task requires being forthright about revealed truth. Catholic theology is not only spiritual, inviting us to union with God by contemplation and love. It is also explanatory. Only because of the mystery of the Trinity and the Incarnation do we fully come to know ourselves and the created order, in a way that protects and elevates the natural goodness of creation and human culture. With the knowledge of the Trinity we know ourselves and our nature most fully. Without it, we become opaque to ourselves. The Trinity and the Incarnation provide us with an ultimate explanation of the world. Allow me to prosecute the argument.

The first aim of twenty-first-century Catholic theology must be to engage intellectually with the creed. We must talk about the intelligibility of the creed in the public square, for all comers. Ancient Christians argued from the first generations about the meaning of Christianity and how to interpret the Old and New Testaments. They resolved their disputes by formulating agreed-upon creeds. The Apostles’ Creed is one of the earliest, perhaps from Rome in the second century. In 325 came the Nicene Creed, which is still recited in the Catholic Mass. Creeds do not merely demarcate tribal affiliation, like a ceremonial tattoo. They articulate claims about reality. They offer first and final explanations. They point toward mysteries, true, but a mystery is not unintelligible. It is super-intelligible, matter for study throughout one’s whole life.

What is theology? Aquinas says it is a peering into God through the medium of the creed. The doctrines of the faith teach us to look into God, the Holy Trinity, and to see all things in light of God: the incarnation of God in our human nature, the life of grace, the mystery of the Church, the sacraments, and of course human nature, explained most fully in light of the Holy Trinity. Theology involves seeing all things together in the light of God’s self-revelation. It is a science, a body of knowledge concerned with explanation, because it explains how all things come from God and return to God and are best understood in light of God. Above all, it tells us how all things reveal to us who God is in himself. The central premise of Christianity is that God wishes to communicate to us a share in his own eternal life, in what he is in himself.

Consider the summit mystery, the Holy Trinity. There is in God, from all eternity, spiritual fatherhood. For God is the eternal origin of his Word, his Logos, the Wisdom of God through whom he has made all things. And God is the origin eternally of the Holy Spirit, the love who comes forth from the Father through his Logos, through whom God made all things.

On this view, all that is comes to be from a higher source of intelligence and goodness, uncreated reason and primordial love, giving us our very being as a gift. For this reason, the world that comes from God is intelligible, and we can make progress in understanding it. Reality reflects his uncreated wisdom. It is also fundamentally good. Despite the presence of evil in our common history, creation is subject to the God of love, the God of atonement and resurrection, the God of reconciliation and mercy. This vision of God allows us to understand ourselves in his image—as beings made for logos and for love, the charity that develops into friendship with God, sacrificial love for others, and communion of persons. On this view, our nature is not a random product of physical accidents and evolution, though we are indeed highly evolved animals. More fundamentally, we are spiritual animals with immaterial souls, persons made for freedom and for love of the truth. We are persons capable of God.

Twenty-first-century theology should have confidence. Christians have something unique to offer the world, because only we can announce the revealed truths that explain the world most profoundly. Point toward the mystery of Christ and you indicate the inner mystery of who God is, and what man truly is. The eclipse of Christ in the world brings with it the eclipse of God and the eclipse of man. As the contemporary West has discovered, a world without theology is a world that knows and accomplishes much. But it is a world in which humans no longer know where they originate from, what they should live for, or who and what they really are. Karl Barth kept a powerful image above his desk. It was a print of an early-sixteenth-century altarpiece by Matthias Grünewald. The altarpiece was originally installed in a monastic hospital. Christ is depicted as a pox victim, dying gruesomely on the Cross, against the backdrop of darkest night.

It has been called the most hope-filled picture in Western European history. Even in the worst moment of history—in the darkness of sin, when we have put God to death in his human nature—God is bringing forth the greatest good, the merciful forgiveness of sins and the resurrection of the dead. Standing next to Christ on the cross is John the Baptist. He points toward the Son of God, who has achieved sublime solidarity with us in death. The Baptist’s hand is extended, his finger enlarged. This figure, Barth tells us, is the theologian, the one who, despite his unworthy stature relative to Christ, looks, stands, and points, indicating the mystery that is the foundational ground of the world, the trinitarian love that has made all things and can remake all things, even amid the disorientation and sin of the human race.

Theology in the twenty-first century needs to contemplate and to point, to indicate God, and with resolve and joy seek to explain the world in light of God. Theology must be creedal.

This brings us to Catholic theology’s second aim: to be against and for elements of secular liberalism. To my mind, global liberalism is not dead and has not failed. It remains the intellectual lingua franca of our national and international life, whether we like it or not. It is a solvent on dogmatic forms of religious belief. For secular cosmopolitan liberalism (represented in its origins by Immanuel Kant and more recently by John Rawls), the Catholic religion is a source of political disorder, because it claims to know too much about God and requires too much of citizens of the regime, asking them to shape their lives in relation to dogmatic commitments. Secular liberalism insists that dogma ought not live so loudly within our lives. Civic polity and nonviolent coexistence cannot be built on dogmatic convictions—those of Catholicism, or those of any other religious tradition—and those who hold dogmatic views are perforce dangerous people. The religious person is a threat to peace and cooperation, because his commitments to supernatural belief threaten the equilibrium of political coexistence and continually threaten violence and coercion.

The secular liberal vision of human nature is not, however, entirely free from theological influences, or from its own implicit doctrinal stances. It retains key aspects of the theological teaching that man is made in the image of God, but this truth is reconceived in reductive ways, for political purposes. Instead of being persons made in the image of the Holy Spirit, capable of agapic love, we are now beings of freedom, made to will and to strive for the fulfillment of our desires. Concord through political compromise becomes the highest expression of our moral union. Instead of communion, we seek constitutional constraints on exaggerated forms of individual liberty. Instead of seeking a common truth in logos, or reason, we construct our own truths. Rather than seek a common mind, we seek political concord through the mutual tolerance of one another’s freedoms. This concord entails suspending any aspiration to a common metaphysical conception of human nature, the cosmos, or the divine nature. Whereas our ancestors sought social unity on the basis of a shared understanding of the transcendentals—being, the one, truth, goodness, beauty—we are told to bracket such questions for the sake of cultural diversity. Belief in the Trinity, and in man made in the image of the triune God, remains as an optional private belief. But it cannot emerge in discussions and disputations as explanatory. It is a subjective viewpoint, a therapy for those who desire it, perhaps for those who delusionally think they need it.

What secular liberalism has lost, then, is an ultimate perspective. We no longer know that we were created by Truth itself to seek the truth about existence, to encounter God in human flesh, to discover God in his eternal life. And we forget that we were made by transcendent love to be free—free to love in and through all things, even to become heroically strong with love, to live the truth of divine love in our human lives.

These losses have been costly. A human freedom without transcendent love is a diminished freedom. A life of logos without contemplation of the transcendent God is a limited life, which may readily turn toward distractions of the senses, of pleasure, or of power and conquest, because it has found no rest in transcendence. Within the horizon of secular liberalism we find ourselves continually on the borderlands of spiritual despair, frustration, rivalry, and acedia. Against all this, Catholic theology should speak confidently about the Christian vocation to contemplate God and to love with heroic charity.

Not all that is affirmed in liberal culture is counterproductive or misguided. The night before he was named a cardinal, John Henry Newman delivered a speech in Rome. He declared:

For thirty, forty, fifty years I have resisted to the best of my powers the spirit of liberalism in religion. . . . It is an error overspreading, as a snare, the whole earth. . . . Liberalism in religion is the doctrine that there is no positive truth in religion, but that one creed is as good as another, and this is the teaching which is gaining substance and force daily. It is inconsistent with any recognition of any religion, as true. It teaches that all are to be tolerated, for all are matters of opinion. Revealed religion is not a truth, but a sentiment and a taste. . . . It is as impertinent to think about a man’s religion as about his sources of income or his management of his family. Religion is in no sense the bond of society.

However, Newman was no enemy of democratic society or of liberal education. In fact, he wrote twin defenses of these ideas.

His Letter to the Duke of Norfolk is one of the great tractates in defense of the role of human conscience in modern democratic society. Written to contest Gladstone’s unhappy claim that British Catholics could not make good citizens, it bears strong likenesses to the teaching of Aquinas on the natural human right to religious freedom, which was restated forcefully at the Second Vatican Council. Likewise, The Idea of a University advocates for liberal education in the arts and sciences, against a servile and utilitarian notion of education. He defends the university’s pursuit of integral knowledge in each discipline in accord with its object, a pursuit unified by philosophy and with ultimate reference to theology.

Newman’s defense of freedom and reason reflects the main lines of the Catholic Church’s engagement with modern liberal critics. The Church has acknowledged the importance of certain modern achievements in human learning: the historical study of the Bible and its human authors in their original contexts; the discoveries of the modern sciences; the role of philosophical reason in mediating arguments about religious claims and ethical quandaries in society; and the place of freedom, tolerance, and civility in a pluralistic society. The Church has developed a modern form of natural ethics out of her ancient and medieval sources, and she has done so fruitfully. Whereas secular society has been riven by dangerous and stupefying ideologies in the past century, the Church has created a common language of natural rights, virtues, and principles of social thought, which are integrated with a rational understanding of the human person and a metaphysical view of nature that is open to philosophical reflection on the transcendent God. Theology in the twenty-first century should strive to maintain and advance this project. Our age is burdened by the diminished rationality of secular liberal culture and its libertarian view of freedom, which in practice is paradoxically weak and unambitious. We should propose a fuller, metaphysically informed approach in ways that are strategic, courageous, and charitable.

More important still, we must address the spiritual poverty of secular liberalism. The Church in the modern world has maintained the reality of the supernatural knowledge of God. Modern people really can discover who God is in himself. This knowledge is available only in light of divine revelation, most fully in the mystery of Christ, by his incarnation, life, death, and bodily resurrection. The Church can show to modern men and women that a higher mystical life exists—that of the saints. We can know supernatural truths with certitude and participate in them sacramentally and numinously by contact with the divine, in faith.

The culture of secular liberalism is not our only conversation partner, however. This fact brings me to the third claim: Theology in the twenty-first century must engage with the major non-Christian religious traditions. Two are of special importance: Hinduism and Islam. Today there are nearly as many Muslims as there are Christians, and well over a billion Hindus; both populations are growing rapidly. Both Hinduism and Islam are culturally and historically vast, internally complex, pluralistic religious traditions. They also both make, in different ways, strong claims that pertain to the truth of Christianity.

The classical religious culture of India is not opposed to the notion of an incarnation of God in history. The central claim of Christianity, that God has become human, is in one sense uncontroversial within mainstream Vedantic Hindu traditions. The problem with Christianity rests rather in the belief that God became incarnate only once and in a place outside of India, with its Vedic traditions and divine-human avatars, which are meant to indicate and govern our relationship with God. In a word, the problem is the Christian scandal of particularity: the notion that God inaugurated his special presence to the world in the people of Israel and fulfilled the process by becoming human in Jesus Christ, a process that flowers in the sacramental life of the Catholic Church. Against this view, Hinduism holds that there are many incarnations and distinct forms of incarnation, and this multiplicity permits a plurality of legitimate religious traditions and rites, schools of thought, and religious disciplines.

Islam teaches almost the opposite, taking up stances on the Incarnation and the Trinity that are not entirely unlike those of secular liberal skeptics. The Qur’an states that the prophetic writings of ancient Israel and of the Church have been altered and contain serious errors. God is not a unity of three persons. There has been no incarnation. There was no atoning death and resurrection of Christ. The Jews are not the chosen people. Jesus was a prophet, in continuity with a line of prophets stemming from Ishmael, foreshadowing one to come after Jesus, to whom the Qur’an was confided. The true mediator between God and humanity is the text of the Qur’an itself. All other mediations are dissolved—Hindu, Jewish, Christian, or otherwise—and their seductive, misleading teachings are superseded. Coeternal with God and dictated to the human race, the Qur’an is the sole vehicle for knowing and serving God.

How twenty-first-century Catholic theology will and should engage with Hindus and Muslims is a largely unexplored question, but I will note three approaches. The first is anthropological. The Church should cultivate a culture of religious logos and agape. Our aim should be rational conversation, advocacy of religious freedom and truth-seeking, respect for life and civic laws, and communion in shared common goods. In this context, Catholic theologians must study Islamic or Hindu culture, much as they would have studied Enlightenment or Marxist philosophy in another age. To talk to others, you have to know who they are and what they believe, and have a sophisticated respect for how they think and act.

Second, we should note that these non-Christian religions can contribute to the remediation of the errors and impoverishments of secular liberalism. Both Hinduism and Islam are marked by profound metaphysical teachings regarding the nature of God and the contemplative life. Both have their mystics, who seek union with God by love, as in the Sufi traditions of Islam or Advaita Vedanta in Hinduism. We can identify elements of logos and agape in Hinduism and Islam, and thus cultivate friendships on the basis of comparative theology.

Third, in our engagement, we should return to the idea of the incarnation in Christianity. In his Theological Investigations, Karl Rahner discusses the nature of revelation as an unveiling of who God is in the most perfect way possible. He argues that the very notion of revelation implies a mediation to us, by God himself, of God’s immanent presence in the world. This mediation fittingly occurs in that which is greatest in the visible order of nature: the human being. In other words, he who speaks of revelation speaks of intimate knowledge of God and the presence of God made possible by God himself, through his own initiative. But there is no more perfect way for God to do this, to teach us of himself inwardly or to be present to us in what he is, than for him to become what we are, to become human. Thus, the very logic of revelation implies incarnation. Our perfect knowledge of the inner life of God is made possible because God communicates to us who he is, in his eternal Logos, within our human nature.

On this understanding, we have something constructive to discuss with members of other religious traditions. The Hindu idea of a multiplicity of incarnations or avatars of the divine may seem generous and inclusive, since it allows for presences in many times and places, but it carries within itself the challenge and disharmony of incompatible messages. The incarnations or manifestations of God risk creating rival truths instead of confirming and fulfilling one another. One thus understands the advantage of the Islamic claim that God has ended the conversation by forsaking all incarnations. He has emphasized his unbridgeable transcendence, and the unapproachable divine has given us a word of revelation proportionate to our finite nature, without the possibility of any access to God in himself. At this point, Catholicism can advance Rahner’s argument: that the unique and solitary incarnation of God in history reveals to us once and for all who God is, in his inner life as the Trinity, and it does so in the form of the God-man, Jesus, so that we truly come to know who God is in himself. It is a bold claim: God can and does make himself utterly present to us in all his transcendence, precisely in what we are, so that we can encounter the transcendent God precisely in what he is, the eternal communion of Father, Son, and Holy Spirit. God deploys his omnipotent goodness and mercy not to withhold himself in his transcendence, but to make himself accessible to us, even in the fragility of human infancy and crucifixion.

This brings me to my fourth aim: Defend the arts and humanities. The notion of the Incarnation clearly affirms that God can take on human flesh, which means that the human form of our nature can become resplendent with the glory of God. If we think about the pre-Christian art of Greek and Roman civilization, we observe that it celebrated the human form. But awareness of the human capacity to embody transcendence sometimes makes for tragedy rather than triumph. The Greek child who died at age twelve is depicted beautifully in stone, commemorated precisely as a being who cannot last, who will perish, but whose transitory form, in all its beauty and splendor, recalls to us an unknown first beauty, primeval and uncaused, for which our souls may long in nostalgia, or yearn in spiritual eros.

Christianity assimilated but also radically transformed ancient culture’s remembrance of and yearning for transcendence. Because the Word has become flesh and dwelt among us, the human form of God brings something of the uncreated into the fabric of our finite human existence. Even in crucifixion we find God. Even human death may be iconic of divine life. These possibilities have ennobled human art. No one is more human than God, and no art is more human than Christian art—poetry, portrait, music, statuary, and architecture. What Michelangelo and others laid claim to was the bold idea that to depict the humanity of God is to depict what is greatest in human nature, creaturely beauty, and to evoke the presence of God, in its perfect manifestation among us. Christian revelation inaugurates an aesthetic revolution: divine beauty in human form, and the human form alive forever in the resurrection.

Literature and the fine arts coincide with our deepest philosophical and theological commitments, though they indicate truths in a different manner. They give us material individuality, particular gestures, distinct acts of history, movements and decisions in time. The sonnet about one person’s love for another, the sound of a piano sonata, the painted image of a child, the crucifix of San Damiano: Great art manifests the particularity of what it means to be human, and it invites us to consider the universality of the human condition in the same particularity.

Ask yourself: How are the arts and humanities doing in elite schools today, apart from their original inspiration and matrix of development? Not so well. They are increasingly eclipsed by STEM and continually subjected to political instrumentalization. Our culture, obsessed with the expansion of free possibilities and with the power of technology, seldom has the patience for the skills needed to preserve the arts and the serious study of humane letters. Attention to technological stimulation consumes us, while attentiveness to the aesthetic and philosophical dimensions of human existence is increasingly difficult. Nevertheless, man is an artistic animal. Good art teaches us that the cosmos is our home and that it exists to be humanized, and that our embodied life in this world can be turned toward what is most ultimate, the beauty of God. True intelligence organizes the external world. Beauty, then, reveals the mind to itself. It reflects the elegance of ordering intelligence back to the perceiver, who in turn intuits the grandeur of our collective nature. The beautiful things built by our ancestors reveal to us still today that we have a spiritual vocation, that we have transcendent aspirations.

During the last two hundred years, the Catholic Church has fought a battle on the fields of truth and of goodness. She has made great advances on behalf of true philosophy, sacred theology, and a sound modern ethics. She has made less progress on the field of beauty, in the domains of the arts and humanities. But these domains are related to the liberal freedom that I am suggesting can be converted into a freedom for love and contemplation. True splendor teaches our hearts what they ought to love. We need a Catholicism that is elegant, and profoundly so. Catholic theology in the twenty-first century should seek to be beautiful. It must reflect on the monuments of beauty we have inherited, sacred and profane, and it must prophesy a beauty yet to come, created in the power of Christ and by the innovation of the human spirit.

I arrive at my final aim: Catholic theology must emphasize the sacramental order and the contemplative, mystical life of the Church. Human art has great nobility, but not all art is human in origin. There is also the divine art of God, given to us in the seven sacraments instituted by Christ: baptism, confirmation, holy orders, matrimony, penance, anointing of the sick, and above all the Eucharist. The sacraments are symbols that indicate the real presence of Christ and convey grace to us, as a way of living contact with him. When God gives everything to us in the Eucharist—Christ’s body, blood, soul, and divinity—we are invited to give everything to God, our whole person, body and soul. The Church thus manifests herself as the mystical body of Christ, living in Christ and with him.

God did not institute the sacraments on a whim. He instituted them because our nature has need of them. We are spiritual beings, yes, but spiritual animals, who live in our bodies and in our senses. We need to feel the presence of God as well as know it, and to express our response to God in ritualistic and habitual ways. The grace of the sacraments allows us to respond to God in stable practices that gradually perfect our interiority. The sacraments provide an embodied, enacted pathway to a spiritual interior life, and they do so in a way that depends primarily not upon us but upon God.

All the mystical reformers of the Church, from Benedict of Nursia to Francis of Assisi, from Bernard of Clairvaux to Teresa of Avila, have depended upon a sacramental life that was deeply eucharistic and aided by regular confession. Grace is interior, but it arrives from the outside, through the signs and words of Christ, which bind us to the Church and to one another.

Catholic theologians in the twentieth century were sometimes ambivalent about the sacramental system, fearing uniformity and concerned about the deadening effect of the external authority of the Church. Fear of exaggerated authority is understandable. But the sacramental economy of grace does not come from human beings. It comes from God, and it is necessary to the mystical life of the Church, the life embodied by the saints. Any theology that seeks a renewal of the life of the Church must aim at the mystical life of union with the Trinity and union with Christ crucified. That same aim must, for the very reason that it is centered on Christ, be undertaken in and through the sacramental life. Theologians must first live the sacramental life in its depths, if they wish to show the way toward that life to others. We cannot love what we do not see. For that reason, theology as an expository and explanatory discipline has an important role. It points toward the mystery of the presence of God, so that the desires of the heart may be rightly oriented, and so that God’s gift of himself may be manifest to our secular world, in the liturgical witness of the Catholic faithful. Theology in the twenty-first century, as in every century, must highlight the contemplative lives of the saints, and do so in the context of the eucharistic presence of God in our world.

Let me close with a story. I know a missionary to southern India who worked for years in a Catholic shelter for people dying of AIDS. Every day, his religious community would expose the Blessed Sacrament in a monstrance on the altar of their chapel, with its doors open toward the street, so that anyone passing by could see the Blessed Sacrament and come inside to pray. For many months he observed a young Hindu man, the victim of a serious stroke. He was homeless and partially paralyzed. He would come in each day to pray. He typically stood at the back of the church, staring at the Eucharist on the altar. One day the priest asked him gently why he came and what he was seeking. The young man replied: “They tell me it is God, and I try to believe it.”

They tell me it is God, and I try to believe it. That is our task: to tell, to indicate, like John the Baptist in Grünewald’s painting, pointing to the truth. “Behold the Lamb of God, who takes away the sin of the world.” Our world is spiritually homeless and half paralyzed, but many hearts are full of desire to see. If we will but tell them where to find God present in our world, the third person of the Holy Trinity will do the rest. It is the Holy Spirit who is the true protagonist of good Catholic teaching, and his work is evident in history, in myriad voices. They sing in polyphony, like a choir of saints and Doctors of the Church. Twenty-first-century theology must hope and toil for the emergence of such living forms. Their unity comes from above, as observed by Teilhard de Chardin in the statement that inspired Flannery O’Connor: “At the summit you will find yourselves united with all those who, from every direction, have made the same ascent. For everything that rises must converge.”

The post The Future of Catholic Theology appeared first on First Things.

Paul’s Ethnic Gospel https://firstthings.com/pauls-ethnic-gospel/ Mon, 21 Jul 2025 05:00:00 +0000

The post Paul’s Ethnic Gospel appeared first on First Things.

“Grace, not race”—so goes the tidy maxim by which many modern interpreters characterize Paul’s gospel. In this reading, Paul severs the covenant community from its ethnic roots and replaces it with a universal spiritual entity transcending the particularities of Israel. Jewish insistence on ethnic identity as necessary for membership in the covenant community is seen as the problem Paul overcame; the gospel, in this telling, is a triumph of post-ethnic inclusion. The gospel moves away from ethnocentrism, we have been told. Paul is the apostle of progressive cosmopolitanism. But what if this narrative has it backward?

In Paul and the Resurrection of Israel: Jews, Former Gentiles, Israelites, Jason Staples offers a somewhat revolutionary framing of the New Testament gospel. The fruit of painstaking exegetical labor and theological attentiveness, the book stands as arguably the most consequential contribution to Pauline studies since John Barclay’s Paul and the Gift. Staples challenges not only popular caricatures of Paul, but longstanding theological assumptions about the nature of Christian identity.

According to Staples, Paul’s gospel represents not an abandonment of Israel’s ethnic identity but its restoration—accomplished paradoxically through the incorporation of Gentiles. The inclusion of Gentiles is not a detour from Israel’s story but a key to its fulfillment. Paul proclaims not salvation from ethnicity, but rather an ethnic salvation. His gospel is steeped in the “restorationist” hopes of Second Temple Judaism. In this historical-theological framework, a distinction emerges: “Jews” designates the descendants of the southern kingdom of Judah, whereas “Israel” refers to the full twelve tribes, especially the lost northern tribes who were exiled and assimilated by Assyria. The northern tribes’ exile constituted an “ethnic death,” as these Israelites were scattered among and absorbed by the Gentiles—a process Staples calls “gentilization.” Their recovery required not merely return but resurrection. And this resurrection, Paul proclaims, occurs through the ingathering of Gentiles—its partial accomplishment during Paul’s life and its future complete fulfillment. Since Israel has been scattered among and assimilated by the nations, her salvation depends on salvation’s coming to Gentiles (Rom. 11:11–26). God, in his providence, uses this dilemma to accomplish his original promise for his chosen people: that the blessing of Abraham should come to all nations (Gal. 3:14; cf. Rom. 11:12). As Staples provocatively puts it, “Where Israel had become gentilized, now Gentiles were effectively being Israelitized, transformed from one ethnicity to another and integrated into the ethnic people of Israel.”

“Israelitization” does not mean that Gentiles become “Jews.” Staples is careful on this point. Paul envisions a reconstitution of Israel’s broader identity, one that includes both Jews and formerly Gentile believers as members of a newly restored people. Israel’s ethnic boundaries, grounded not in biology but in divine covenant, are open to expansion. In Paul’s hands, this expansion does not result in an abstract, deracinated “people of God.” It yields a renewed, particular people: Israel, reborn from among the nations. The inclusion of physically uncircumcised persons is not a rejection or replacement of Israel, argues Staples, but “the means by which God is reaching out and saving more of Israel than anyone anticipated, a process analogous to resurrection from the dead.”

In this vision, Gentiles do not remain Gentiles. Through union with Christ, they undergo what Staples describes as “ethnic conversion.” They become full members of the restored Israel—not metaphorically, but covenantally and ontologically. This, for Paul, is the startling good news: Gentiles are no longer outsiders; they have become Israelites alongside Jews—fulfilling God’s promise to restore “all Israel” (Rom. 11:26).

In his work, Staples focuses on what this restoration means for descendants of the original twelve tribes. For them, it is a message of humility and hope. Those who “gentilized” themselves by assimilating among the nations had negated their distinction from the rest of humanity, and as a result stand under the same judgment. However, the coming of salvation to Gentiles is proof that those who are now unfaithful may yet be saved through the new life of Christ’s Spirit. And Jews need to see that the inclusion of Gentiles is not a threat to their peoplehood, but rather a gift to them.

This reading has profound implications not only for Paul’s Jewish contemporaries but for modern Gentile Christians. It reorients Christian identity around Israel’s story rather than away from it. It challenges the supersessionist temptation to view the Church as a replacement for Israel and calls for a re-engagement with the covenantal particularity that defines the people of God.

Do contemporary Christians understand that our faith incorporates us into Israel’s ethnicity? Do we perceive this ethnic adoption as part of our salvation? Or has this aspect of the good news been abandoned? Paul thinks it’s a big deal that Gentiles-in-Christ stand on equal footing with circumcised Jews as part of Israel. Do we?

If Paul’s gospel entails the ethnic resurrection of Israel through Gentile inclusion, then Christians today must reckon with the theological significance of their adoption into that people. Paul imagines not a generic spiritual collective but a transfigured ethnic community whose identity is rooted in the promises made to Abraham and fulfilled in the Messiah. To be in Christ is to be in Israel.

For Christians, our primary ethnic identity is that of God’s people—inaugurated with Abraham, resurrected in Christ, and continuing in the history of the Church. Accepting Paul’s ethnic gospel means embracing the story of Israel as our ultimate ethnic narrative. Our genealogies are grafted onto Israel’s tree. We are not fleshly children of Abraham or foster children, but adoptees—legitimate legal children and heirs (Rom. 8:15–16; Gal. 3:29). For those in Christ, our “ethnic lineage,” so to speak, runs through the patriarchs, the judges, the kings, the prophets, and the ongoing story of God’s covenant people.

The markers of this new ethnicity, according to Staples, are primarily ethical. This is one reason why converted Gentiles have no need for physical circumcision. The primary identifier of membership in the renewed Israel is the circumcision of the heart by the indwelling Spirit of the Messiah, fulfilling the new covenant promise to write the law on the hearts of God’s people. Gentiles are ethnically changed, becoming full and equal “Israelites” through the Spirit. This sharing in the Spirit should lead to a distinctive way of living, centered on the worship of Israel’s Messiah and a common sacramental life. And the shared ethical commitments flowing from the Spirit’s guidance should result in foundational moral principles received by all. All this creates a tangible, transnational “ethnic” culture distinct from all surrounding cultures—a visible, distinct people among the peoples of the world, who will be peculiar in any context. Otherwise, we fall back into gentilization, abandoning the Israelitization to which we have been called.

Our Israelitization has profound political and ecclesiological consequences. Those incorporated into the renewed Israel cannot locate their deepest allegiance in any other nation, race, or cultural identity. Yet their incorporation does not obliterate those identities; it transfigures them. The cultural particularities of the nations are brought into Israel—not as rivals, but as adornments. As various ethnic groups are grafted onto Israel, they bring the gifts of those backgrounds with them, blessing Israel from “the fullness of the nations” (Rom. 11:25). Thus, though Gentiles are assimilated, they are assimilated in such a way as to enrich the ethnicity. Nonetheless, all must place a premium on unity. The olive tree is enriched by the wild branches, yet there is only one tree. Paul’s vision resists fragmentation and tribalism. The unity of the Church is grounded in this singular, covenantal peoplehood.

Church division, then, is a failure to live out the Israelite identity granted through Christ. It should be interpreted as mirroring the disunity that was part of the plight of God’s people and from which they needed to be redeemed. In such division, we revert to another form of gentilization, short-circuiting the “Israelitization” that is the goal. Paul envisioned one restored Israel, not multiple peoples of God. We must seek unity within this shared ethnicity, rejecting the fragmentation that reflects old covenant failures.

The imperative of unity speaks not only to ecclesial identity but to contemporary social and political crises. Renewed Israel is called to be a strange nation for the healing of the nations. This calling will generate friction with any culture. Thus, the “Jewish Question” could be reframed as the “Israelite Question” (with “Israelite” including Gentiles-in-Christ). Our pre-conversion or extra-Israelite ethnic and national stories must be relativized so as to be integrated into the grander story of God’s faithfulness to all Israel, fulfilled in Christ and in the Spirit’s work among the nations.

This integration is at odds with the modern tendency to prioritize nation-state myths or racial identities in our social outlook and sense of selfhood. Through conversion, the nations come “to” Israel while remaining where they are. As a result, this transnational, transracial ethnicity can serve as an instrument of peace among the nations. By resurrecting Israel through Gentile inclusion, God demonstrates His power to overcome ethnic divisions. This reconstituted Israel, drawn from all peoples but united in one ethnic identity centered on the Messiah, models and mediates God’s peace. Its very existence challenges worldly divisions. All can find a home here among God’s elect people—in continuity with Old Testament Israel, as part of the new covenant people, who are the fulfillment of God’s promise to redeem all Israel, answering the prophetic hope of Israel’s restoration.

Paul’s gospel, as interpreted by Staples, is profoundly ethnic news for Gentiles: Through Christ and the Spirit, they cease being outsiders and become full members of the resurrected, expanded ethnicity of Israel. As Paul says, the mystery revealed in Christ is that Gentiles are “fellow heirs, members of the same body” (Eph. 3:6)—not of a new body, but of Israel’s body, now resurrected. The story of “our people” is written not by modern racial theory or nationalist myth but by the Scriptures, and it is carried forward in the history of the Church—across centuries and nations. Our forebears are the saints of the Old and New Testaments, the martyrs, theological doctors, and philosophical masters throughout the Church’s history. The great creeds, magisterial texts, and authoritative social teachings all are ours. The rich repertoire of liturgical practices, sacred music, architectural styles, artistic achievements, and scientific advancements are ours. The great victories against heresy, error, and injustice are ours. These goods constitute our objective cultural heritage—the materials arising from the providential outworking of history to form our shared identity, linking past, present, and future generations. We should rally around this heritage, drawn from our true native country, which comprises all kinds of countries and cultures. Doing so will help us to honor our legitimate diversity and embrace the God-given ethnic identity of the restored Israel. This peculiar people—drawn from all peoples throughout history, yet united in one ethnic identity—is our people.

Yet Paul’s gospel also confronts the Church’s historical failures. The early Church, as Staples notes, maintained strong continuity with Israel through its Jewish leadership. But as Gentiles came to dominate and synagogue relations deteriorated, the understanding of Gentile incorporation into Israel faded. Supersessionism arose, not merely as a theological misstep, but as a fundamental forgetting of who Gentile Christians are. Paul’s ethnic gospel was gradually replaced by spiritualized universalism. And with the rise of Constantinianism in late antiquity, the rise of Westphalian nationalism in early modernity, and then the rise of modern racial theories, Christians have been tempted to identify the faith with political projects or ethnicities other than Israel. These identities domesticate the gospel.

Staples’s work calls for recovery. Paul proclaims not the erasure of ethnicity, but its transformation. Gentiles, once alienated from the commonwealth of Israel, are now citizens (Eph. 2:11–22): full participants in Israel’s resurrection. This resurrection, Paul insists, is the mystery long hidden, now revealed in Christ (Eph. 3).

To embrace Paul’s gospel is to embrace the ethnic story of Israel as our own. It is to see ourselves not as rootless spiritual individuals but as adopted children in a transnational, historically continuous covenant people. This peculiar people—drawn from all nations, yet sharing one ethnic identity through the Spirit—is our people. This is our ethnic gospel.

The post Paul’s Ethnic Gospel appeared first on First Things.

Forecast https://firstthings.com/forecast/ Sun, 20 Jul 2025 05:00:00 +0000 https://firstthings.com/?p=93197

How long can two people stay together
with this in the news and that in the sky?

A change in love is a change in the weather.
As I have loved you . . . love one another,

said Christ, then disappeared. Why
can’t God and mortals stay together?

His absence so long—it feels like never.
Does to love, then, mean to leave? To die

suggest a lovely change in the weather?
Questions are common and kisses too rare.

Hold me, though the tectonic sheets fly
apart till two can no longer stand together.

The fault between us grows, but where
earth is moved seeds take hold. Let us try
to change our love, to change the weather.

Jim Richards

The post Forecast appeared first on First Things.

The Imperative of Reconsolidation https://firstthings.com/the-imperative-of-reconsolidation/ Fri, 18 Jul 2025 10:00:00 +0000 https://firstthings.com/?p=93208

We often fail to recognize how deeply the traumas of the early twentieth century shaped American political culture. During the Great Depression, capitalism seemed to have failed. The system sputtered and broke down. After his election in 1932, Franklin Roosevelt embarked on measures to take charge of the economy. When global war came in the 1940s, government control and coordination were supercharged. An upsurge in solidarity ran in parallel with the tight integration of economic life. The shared suffering of the years of economic depression had bonded people together, and that sense of unity was intensified by the war, which required the total mobilization of society.

The experiences of depression and war marked the generations that endured them. As American culture entered the 1950s, it was characterized by an intense desire for ongoing consolidation, as if to indemnify society against the perils it had so recently experienced. As a consequence, in the fifties, the United States enjoyed what might be called peak solidarity. It was a time of historically low income inequality, driven in part by extremely high tax rates for the wealthy. It was also a time of unprecedented social unity. The encompassing term “Judeo-Christian” gained currency, and an ecumenical middle-class consensus prevailed. Social institutions were strong. Churches of all sorts experienced an increase in attendance and influence. Rotary Clubs and other mediating associations were powerful. Marriage anchored domestic life.

Twentieth-century history is best understood in terms of movements and counter-movements. The society-wide reaction to economic disintegration and the existential threat of war emphasized consolidation, economic and cultural. But as this tide of solidification and homogeneity surged forward, an opposite current was called into existence, one opposed to the putatively clotted, immobile, and conformist realities of a society characterized by a high degree of solidarity.

In the popular imagination, postwar dissent was led by radicals and progressives on the left. That’s a half-truth, at best. Rebellion against the postwar settlement had a rightwing manifestation. William F. Buckley Jr. was influenced by Albert Jay Nock, a self-declared “philosophical anarchist” and critic of the spiritual mediocrity of mass democracy. In 1960, Buckley facilitated the founding of Young Americans for Freedom, a rightwing student organization designed to challenge the postwar status quo. In the same year, Students for a Democratic Society was established, a leftwing student organization dedicated to the same cause, albeit with a different political tendency.

Our historical accounts of this period are fundamentally flawed, not only because they ignore the rightwing radicalism of the postwar decades, but also because they fix on the clash between Buckley’s emerging rightwing consensus, with its free-market principles, and the rising student radicalism, which protested against the Vietnam War, raged against middle-class conformity, and urged free-love principles. The standard histories fail to recognize the deeper impulse shared by both movements: deconsolidation.

The civil rights movement was a movement of deconsolidation. It aimed to break down the strong social consensus in favor of segregation, which was legally enforced in the South and socially enforced in the North. Second-wave feminism sought to undermine the social consensus that required the segregation of men and women into distinct roles. It is important to recognize that the Goldwater takeover of the Republican Party in 1964 had the same character. It was driven by rightwing activists who wanted to smash the political consensus in favor of the New Deal. Goldwater’s opposition to the Civil Rights Act of 1964 was likewise motivated by the desire for deconsolidation. His support of “states’ rights” (or, to use the less loaded term, “federalism”) grew out of his concerns about the concentration of power in the federal government during the FDR era.

These movements, different in substance but similar in method, aimed to break down concentrations of power and rigid controls. The goal of each was to foster greater fluidity and freedom. The rightwing version of deconsolidation emphasized the harmful effects of economic control. Central planning and regulation not only lead to economic stagnation; they also, and more importantly, stunt human freedom. Conservative authors often spoke of the dangers of the “monster state,” which extends its tendrils of control. The leftwing version fixed on the dehumanizing consequences of social control. Segregation subordinated blacks. Traditional sex roles did the same to women. Middle-class morality encouraged soul-crushing conformity, and it stood in the way of sexual freedom.

There were parallel movements in Christianity. As Matthew Rose has detailed in our pages (“Death of God Fifty Years On,” August/September 2015), in the early 1960s, radical theologians were depicting the disintegration of the metaphysical conception of God as the triumph of the spirit of Christianity. In the 1970s, Catholic radicals sought to make the Church less authoritative, less “rigid,” less “judgmental”—another project of boundary blurring and deconsolidation, which has parallels in “seeker-sensitive” evangelicalism.

This is not the place for a full summation of American politics and culture after 1950. But I hope the main pattern is evident. Yes, there was resistance to the deconsolidating project over the decades. George Wallace did not run just on a racial platform, although that was certainly a central element of his insurgent campaign in 1964. He won 34 percent of the Democratic primary vote in Wisconsin that year because he tapped into a working-class suspicion that the deconsolidation of the old consensus—not only on matters of race—was a raw deal for them.

Within a decade, the Republican Party would recognize the advantages to be gained by playing upon those suspicions as it recast itself as America’s socially conservative party, promising to protect its base against too-rapid cultural deconsolidation, while promoting a more dynamic, more mobile, and more open economy. A similar pattern characterized the Democratic Party, although in mirror image. The Democratic Party promised to defend the economic solidarity that flourished during the New Deal era, while recasting itself as the party of the cultural vanguard, seeking ever-greater social deconsolidation, which is to say more cultural openness, inclusion, and diversity.

By the 1990s, the imperative of deconsolidation became dominant in both parties. The American right and left merged. In July 1990, the U.S. Senate passed the Americans with Disabilities Act on a 91–6 vote. Republican president George H. W. Bush signed the bill with enthusiasm. That legislation was seen as a natural extension of the civil rights revolution, the next step in breaking down old prejudices and opening up American society. Three years later, Missouri congressman and Democratic majority leader Richard Gephardt opposed the North American Free Trade Agreement. But his resistance to a crucial element in the deconsolidation of the American economy was doomed. One hundred of his fellow Democrats in the House voted for the bill. Democratic president Bill Clinton signed it and declared it a victory for America’s long-term interests.

In subsequent years, both parties found ways to avoid addressing the problem of illegal immigration. Leaders of both parties repeated the slogan, “Diversity is our strength.” Both parties clamored to accommodate the wishes of Silicon Valley. Debates over taxation and economic regulation took place within a narrow band. In a word, by 2010, Republican elites had largely reconciled themselves to the agenda of the Human Rights Campaign, while Democrats had grudgingly accepted the background assumptions of the Club for Growth. Deconsolidation was king.

I do not wish to gainsay the imperative of deconsolidation. It has its time and place. Had I been a black man in 1960, I would have wished for a great deal of that strong medicine. Perhaps the same was true for many women. And in the 1970s, it was certainly true that the New Deal economy was sputtering and needed to be deregulated and deconsolidated. A more open society and a freer economy can be good things. But Scripture reminds us that for everything there is a season. There is a time to sow and a time to reap. There’s a time to deconsolidate and a time to reconsolidate.

The time to reconsolidate has come. Over the last decade, economists, journalists, and politicians have pivoted away from singing the praises of the free flow of labor, goods, and capital to bemoaning its consequences. In 2013, MIT economist David Autor published a paper documenting the devastating effects of the “China shock.” Numerous papers have been written about the steep rise in income inequality. More recently, political leaders of both parties have focused on the ways in which America’s deindustrialization harms the middle class and compromises national security.

The elite response has been to take steps to reconsolidate the American economy. In his first term, Donald Trump imposed tariffs on China. They were sustained by the Biden administration, which added export controls of technology deemed important for national security. In 2023, Jake Sullivan, Biden’s national security advisor, gave a speech at the Brookings Institution that itemized the dangers posed by economic globalization and outlined the pressing need for economic solidarity. Republican senators Marco Rubio (now secretary of state) and Josh Hawley have sounded similar notes in recent years.

A consensus is building: The American economy is too open, too fluid, too deconsolidated. The imperatives, now, are reindustrialization, repatriation of core economic functions, and the restoration of middle-class prosperity. In a word, reconsolidation.

Our polarized politics masks this consensus. The Trump administration recently sought to establish a comprehensive tariff regime, targeting China in particular. Think what you wish about the cogency of that effort; it has clearly established the Republican Party as the vehicle for economic reconsolidation. Because the Democratic Party is engaged in an all-out effort to shore up its status as the country’s establishment party, its leaders have refused to cooperate in this project, hoping that the Trump administration will fail catastrophically. This political turbulence will abate. Over the long term, I expect bipartisan cooperation to prevail as the underlying consensus in favor of economic reconsolidation asserts itself. Put simply, we’re almost certain to deglobalize and restructure the American economy to reunite the interests of labor and capital, elites and working stiffs.

I’m less sanguine about bipartisan cooperation in matters of culture and society. For a long time, “diversity” has been a praise word. It taps into the old imperative of deconsolidation, signaling what many still believe is the welcome prospect of breaking down an allegedly overconsolidated, homogeneous, complacent, and perhaps even racist mainstream consensus. The American left remains deeply committed to this project of cultural deconsolidation. That commitment is a major source of today’s polarized political environment.

But we are not living in the 1950s. Voters are increasingly hostile to ongoing deconsolidation. In 2012, Charles Murray published Coming Apart: The State of White America, 1960-2010. Murray documents the almost complete collapse of the old mainstream consensus among those at the bottom of American society.

And not just at the bottom. Today, outside of wealthy enclaves, the stabilizing institution of marriage is in decline. Clubs, associations, and other mediating institutions have decayed or disappeared. Religion plays a more remote role than it once did. To a striking degree, the foundations of middle-class life, derided by critics for decades as engines of “exclusion” and enemies of individual freedom, have eroded to the point of disappearing. Today, if you are born to a woman with a high school diploma, odds are strong that you will grow up without knowing a father at home—or a Father in heaven.

Some years ago, in these pages I argued that we are living in an era characterized by a crisis of solidarity (“Crisis of Solidarity,” November 2015). I’ve made this point several times in different terms. The causes are many—economic, cultural, technological—and we can debate them. But the reality is evident. The male-female dance is broken. Even the children of the rich feel economically vulnerable, when they’re not overwhelmed by mental health problems. Every major institution in our society is mistrusted—media, universities, government, even the churches. The public is disgruntled and angry at its leaders, who often respond in kind (“takers,” “deplorables”).

Just as economic deconsolidation has reached a dead end, even more so has its cultural twin. Another word for deconsolidation is disintegration. The most powerful force in the politics of the contemporary West, including the United States, is the fear of living in a disintegrated society. This fear has become a concrete political issue in the domains of immigration and patriotism.

You do not need a PhD in political science to recognize that rising hostility toward mass migration and calls for the restoration of strong borders amount to a demand for reconsolidation. The same holds when voters thrill to broad affirmations of national greatness and other patriotic themes. Or, for that matter, when they are reassured by the affirmation that men are men and women are women. Here as well, the reestablishment of “borders” works against disintegration—meaning, in this case, the dissolution of any coherent sense of what it means to be human.

At present, the Democratic Party seems unable to formulate a version of cultural and national reconsolidation. On the contrary, it denounces such measures as “fascist.” The Democrats still sing from the hymnal of the church of multiculturalism. For this reason, I predict that the American left is doomed to become a minority faction for the foreseeable future. The fear of disintegration is powerful and growing. The geopolitical threats facing America will intensify that fear, making the party of cultural reconsolidation increasingly attractive to voters, and thus more electorally dominant.

We are living in a time of fundamental reorientation. Major elements of our political culture are turning against the postwar imperative of deconsolidation. The pathways to this much-needed reconsolidation—economic and cultural, indeed, moral and spiritual—are many. Some methods are wise; others are foolish. Some are noble; others are debasing. Some are effective; others do more harm than good. Our job will be to nurture what is wise, champion what is noble, and promote what is effective.

The post The Imperative of Reconsolidation appeared first on First Things.

Out of the Wilderness https://firstthings.com/out-of-the-wilderness/ Thu, 17 Jul 2025 09:00:00 +0000 https://firstthings.com/?p=93118

Don’t Forget We’re Here Forever:
A New Generation’s Search for Religion

by lamorna ash
bloomsbury, 352 pages, $29.99

Godstruck:
Seven Women’s Unexpected Journeys to Religious Conversion
by kelsey osgood
penguin, 352 pages, $30

When the agnostic Swiss playwright and novelist Max Frisch died, he asked that his funeral be conducted in St. Peter’s Church in Zurich. But certain conditions applied: The service would be stripped of any religious trappings. A couple of friends would speak. No priest would bless the mourners. No prayers would be offered. No passages from the Bible or the Book of Common Prayer would be read.

Among the friends in attendance was the philosopher Jürgen Habermas. The contradiction struck him so forcefully that he used it to open his now famous essay The Awareness of What Is Missing. He saw the funeral as the perfect paradoxical embodiment of the modern age: a church acceding to a service with no “Amen”; an irreligious dead man conceding that there was no more suitable building in which to display his coffin.

A new Pew Research study reports that at least a fifth of the adults in many countries across Europe, the Americas, and East Asia are leaving their childhood religions. By and large, this is a movement of disaffiliation, towards no other tradition in particular. Christianity is seeing particularly heavy losses, with a ratio of about six people leaving for every new convert in the States, and as many as nine to one in the UK.

At the same time, a recent Bible Society report brings hope of a “quiet revival” among British Gen Zers, particularly in Catholic and Pentecostal churches. And among the public intellectual class, there has been a distinct sense, if not of revival, of at least a kind of thaw, catalogued hopefully by writers like British Christian broadcaster Justin Brierley in his book The Surprising Rebirth of Belief in God.

Two new books take the reader beyond the pie charts and broad surveys. Instead of statistics, they offer stories. Their subjects have no name recognition, no podcasts or books to sell. They are ordinary young men and women in various states of spiritual rest and unrest. Some have found their rest in Christianity; others have sought it elsewhere. And among those who claim to be Christians, there are chasmic differences on doctrine and ethics. But whether consciously or subconsciously, all share the same hunger.

In Godstruck, Kelsey Osgood offers intimate portraits of six American women who have each embraced a different religious faith in adulthood, mostly abandoning the traditions (if any) they were handed in childhood. She weaves these stories together with the story of her own conversion to Orthodox Judaism. In Don’t Forget We’re Here Forever, Lamorna Ash invites the reader to travel with her across the UK, interviewing young people who identify as conservative Christians, liberal Christians, or some nebulous “other.” She focuses on Christianity because it’s the only faith with “any hold” on her own biography, through her paternal grandmother, the family’s last “true Christian.” As for herself: Well, it’s complicated.

The women profiled by Osgood are Angela, a California rationalist drawn to Quakerism; Sara, once a nominal Catholic, now a passionate charismatic evangelical; Kate, also drawn away from nominal Catholicism and into Mormonism; Hana, an orphan who turned to Islam; Christina, attracted to the radical renunciation undertaken by the Amish; and Orianne, the only subject with some continuity between her Catholic childhood and her adult vocation as a nun. Then there is Osgood herself, claiming no Jewish heritage by blood, yet claiming it by adoption—and not just any form of Judaism, but the most difficult, demanding form she could bear. She observes her fellow spiritual travelers with a shrewd and sympathetic eye, half maternal, half sisterly.

By contrast, Ash writes with the voice of the troubled wanderer, still seeking and not yet finding. The shape of her irreligion is distinctively evangelical: She is frustrated when her subjects believe things that, by her lights, aren’t true. She may be less than certain what she means when she declares her intention to prove “that religion still matters enormously,” but she retains fixed points for what it cannot mean. It can’t mean a rejection of higher-critical assumptions about Scripture. It can’t mean an embrace of Christian doctrine as anything more reasoned than a wild Kierkegaardian leap. And it certainly can’t mean an adherence to traditional sexual ethics.

Ash’s biases have been lamented in a sharp review of the book by one of its subjects, Jack Chisnall, whom Ash shadowed while he was discerning the Anglican priesthood. Despite Ash’s announcement that she is witnessing a new thing, Chisnall perceives that she is just playing the latest variation on the tune sung down the centuries by Schleiermacher, Schweitzer, Spong, and many more. (The final third of her book features millennial deconverts clutching dog-eared copies of Richard Rohr and Brian McLaren.) And yet it was really Jack who launched her quest. They are like estranged siblings, parting ways as the path forks. Young Jack warns young Lamorna that, in the great house of your life, there is not one room on whose door God will not come knocking. But in Lamorna’s house, some doors remain locked.

Osgood, by contrast, points to a study showing that some “nones” were intrigued by an immersive six-month retreat with nuns. The vows, including chastity, exerted an appeal for an anxious generation slowly discovering that life lived without moral boundaries hasn’t made them as free as they thought it would. Even Ash confesses the occasional doubt concerning her bisexual experiments as she encounters serene religious sisters and guileless young missionary couples. She uneasily apprehends that, as Zena Hitz puts it, sex is “more than chewing bubble gum.”

In the end, the bravest heroes of Ash’s story are the most transgressive, especially gender-bending women like “Tom” and “Robert.” Their version of Jesus is far more “worldly” and affirming than that of the conservative evangelist who “likes where the rules are” in Scripture and (shockingly) thinks sin is a worse problem than climate change. In a didactic digression on official Catholic statements about transgenderism, Ash echoes Andrew Sullivan’s complaints about then-Cardinal Ratzinger’s Pastoral Letter On Homosexuality. Both make progressive revisionism a sine qua non of “human dignity.” In their role as self-anointed prophets, they are here to inform the Church that there is a new drumbeat. March to it now, or be left behind.

As for Osgood, casual hookups were never her besetting temptation in the first place. But in a thousand other ways, large and small, she asks not what Judaism can do for her, but what she can do for Judaism. When young Kelsey first begins exploring synagogues, she feels patronized by the “God-optional” choices, by their “kowtowing to presentism,” their conversion processes offering “all the spiritual sustenance of being a product on a conveyor belt.” She will have the real thing, or nothing at all.

Osgood tenderly traces her evolution from pious child rapturously caught up in church and nature—like some cherubic Victorian moppet—to precociously hardened little atheist, deciding at age eight that it was time to put away childish things. There followed “thirteen years of silence,” in which she taunted her believing friends only because she was terrified that she had trapped herself “alone with nothingness.” Today, she finds herself a Jewish wife and mother of two. She has been forced to surrender her wanderlust, her laziness, and the obsessive self-absorption that fueled her anorexia. As she keeps and is kept by Shabbat, as she attends to the rhythms of making her challah and saying her prayers, her gaze is drawn perpetually outward, upward. Her soul delights in the command for the command’s sake, even when it is a mystery.

The necessity of mystery is one point of commonality between Osgood and Ash, as both reject the idea of a faith that is fully comprehensible or reasonable. Though Osgood does emphasize that pragmatism can’t sustain religious conversion without a spark of true belief, she’s not too concerned with the objective justification of any one belief in particular. Her profiles of Sara and Kate, the charismatic evangelical and the Mormon, trace similar arcs. Neither woman seems to have an epistemological framework that can sustain her beyond the experiential. Both women watch and wait for a burning in their bosoms. Both enter communities full of earnest young people who declare their closeness to Jesus with passionate intensity. Osgood can’t exactly relate, as her own personality is more cerebral than emotive. But she believes she and her subjects are all in the same epistemological boat, and she agrees with her rationalist-Quaker friend Angela that the magisteria must not overlap. We all of us must choose our narratives “in the absence of any meaningful data proving or disproving some transcendent force.” Without such data, free to pick, why wouldn’t you “pick the open one, the hopeful one, the magical one”?

The phrase “proving or disproving” requires teasing out. It may be true that God’s existence—where by “God” we mean Hashem, God of the patriarchs and the prophets, God who became God with us, God hanging on a tree and rising bodily on the third day—can’t be mathematically “proved.” Even if one might attain something like logical certainty for the existence of a personal First Cause, further revelations require more than the tools of deduction. This need not mean that the tools of reason are unavailable or ineffective when we consider the divine. Osgood’s Catholic nun, Orianne, offers an intuitive version of the argument from beauty, which she first formulated as a child. Later, comparing Christianity and Islam, she recalls deciding that the hanging God is worthier of worship than the God who scorns crucifixion—an observation that could be sharpened as an argument from maximal goodness. Granted, Orianne never thinks of herself as making arguments, premise-and-conclusion style. But her common sense can be made rigorous.

This is not to deny that intellectual justification has its limits, particularly as the chain of custody from divine Word to recorded word to modern reader becomes difficult to trace. But it seems safe to say that the God of the Bible is a God who was concerned with preserving a record of his earthly dealings, most intimately and particularly his incarnation. If the gospels do not constitute “meaningful data,” then what exactly are they? How are we to receive a verse such as John 19:35, on the piercing of Jesus’s side through the eyes of the beloved disciple: “And he that saw it bare record, and his record is true: and he knoweth that he saith true, that ye might believe”?

Ultimately, we cannot avoid the business of testing, weighing, and judging. Not because we are pursuing a “proof,” in the purest sense, but in that we are gathering evidence, making a case—a process to which we seem to have been positively invited.

On one level, this is the invitation extended in Ross Douthat’s recent book Believe, which energetically argues that God probably exists and that you, dear religiously non-committed reader, should probably get busy with some form of worship. But Douthat’s hopeful prescription becomes fuzzier when it comes to the question of the particular shape that worship should take. As a Catholic, he would prefer that everyone worship the Christian God, and his final chapter homes in unapologetically on the case for Christ. At the same time, he explains that he will still be happy if he can inspire his readers to take up some religion of some kind. Were he to read Osgood’s book, he would see in all her subjects an admirable movement toward the religious traditions that feel most accessible and make the most sense in their particular circumstances. To that extent, he and Osgood, the post–Vatican II Catholic and the Jew, are kindred latitudinarian spirits. I, by contrast, though I don’t qualify strictly as “evangelical,” might in a certain ironic, photo-negative sense share more DNA with Ash. One could say that we are cut from the same Protestant cloth.

Ash’s book finds some of its most poignant moments in reflections on prayer. Certain kinds of prayer she judges hypocritical (such as praying outside abortion clinics), but she’s touched in spite of herself by those charismatic Christians who approach the throne with all the boldness of people who believe the apostolic age never ended. Yet here she also finds the quiet teenager with chronic pain, no longer able to bear having her friends pray over her, grateful at least that God has allowed her to “walk alongside other broken people.” At various points, people offer Ash prayers and “words from the Lord,” which she receives with a mix of emotion and bemusement.

In one sweet vignette, she allows an earnest boy to petition the Lord for a reduction in her tooth pain from a receded gum. “Imagine if it had gone,” she reflects. Would she have fallen off her chair and become a Christian? Would she have believed? My mind went back to a conversation I once had with Douglas Murray and Justin Brierley, in which Brierley asked Murray what, if anything, could nudge him decisively towards Christianity. Douglas answered, “I think I would need to hear a voice.” Had I been expecting this answer, I would have asked him what he wanted the voice to say.

It is near the end that Ash’s story takes its darkest and most unexpected turn, when her mother is diagnosed with Alzheimer’s. And so, irrevocably, the world changes. The sky darkens. Her mind is “in hell.” Only then does she learn to pray. And wherever she finds herself walking now, down whatever unfamiliar street, she cannot pass a church without slipping inside. “Often their doors are open. Mostly their pews are empty.”

Osgood also writes vulnerably about the early days of her struggle with secondary infertility, which is on her mind as she listens to a charismatic pastor preach about the woman with an issue of blood. Her evangelical friend Sara, fighting diabetes, goes to the front for a laying on of hands, wistfully hoping this might be the Sunday she receives healing. Osgood doubts, but she doubts with compassion. For she prays, too, all the time. “What I get, of course, is silence.” This is not a bitter statement. In some sense, it is a profoundly Jewish statement. One thinks of Reb Saunders in Chaim Potok’s The Chosen, raising his son in silence so that one day he will understand the silence of God. Yet, even amid God’s silence, Osgood remembers that she was at peace. God did not heal her infertility, but she dared to believe “something useful was happening nonetheless . . . a great tapping into something universal and bottomless, an acknowledgment that despite the contemporary hubris that we can eradicate human pain, it would always be there, assaulting us, molding us, wearying us, carrying us.”

“Let it get worse, because it will,” Ash prays, “but let that not break us. Let it not hurt her. Let us take it in and let it be.” “I am a person lost in the wilderness,” Osgood prays. “Hear me. Help me.”

And indeed, she declares, “He does.”

The post Out of the Wilderness appeared first on First Things.

The Great Excommunicator
Wed, 16 Jul 2025

Buckley: The Life and the Revolution That Changed America
by Sam Tanenhaus
Random House, 1,040 pages, $40

When Sam Tanenhaus agreed in 1998 to write a biography of William F. Buckley Jr., it would have been hard to name an American who deserved one more. Less than a decade after the defeat of communism, the conservative movement was so firmly in the saddle that even Democratic president Bill Clinton was extolling the virtues of deregulated markets and military strength. There had always been a conservative disposition among the American citizenry. But there had been no conservative movement worthy of the name until 1955, when the twenty-nine-year-old Yale grad “Bill” Buckley founded his weekly magazine, National Review.

Buckley not only started the conservative movement; for a quarter century, he embodied it. He fought campus leftism and upheld anti-communism, defending Senator Joe McCarthy and his purges in a widely read book. Wealthy, preppy, literary, he summoned dozens of less privileged activists to his family’s Connecticut estate in September 1960 to codify the movement’s principles: personal liberty, free markets, states’ rights, anti-communism.

Buckley lampooned liberal pieties in a syndicated column that was rambling, recondite, and allusive. He wrote of Hunter S. Thompson: “What emerges with a most awful vividness from this collection, presented as a chrestomathy by the most highly accredited bard of the period, is a very nearly unrelieved distemper, and this, along with the tintinnabulary drugs, is so markedly the Sign of Thompson that to fail to give it due emphasis would be to fail to remark Jimmy Durante’s nose.” Those who flocked to Buckley’s speeches would hear an outlandish Oxonian drawl, which he claimed to have picked up in a year of English public school before the war. His project was not just ideological but social: Buckley aimed to purge his movement of the losers who cost it prestige and (therefore) adherents, cutting loose the anti-Semites of the late-stage American Mercury and the conspiracy theorists of the John Birch Society. A run for mayor of New York in 1965 brought him to the attention of television producers, who would run his pioneering talk show Firing Line for four decades. Buckley counted Henry Kissinger and Ronald Reagan among his friends—one could even call them protégés. Only with Reagan’s rise to the presidency was Buckley eclipsed as the leader of the whole conservative movement.

It is true that by the time Tanenhaus embarked on his biography, there already existed an excellent one, published in 1988 by John Judis, then as now one of the giants of American political journalism. But Judis was a man of the left. Tanenhaus, soon to become the editor of the New York Times Book Review, had strong anti-communist sympathies and had written an acclaimed biography of Buckley’s friend and mentor Whittaker Chambers. That made him a good candidate to extend the story to Buckley’s death in 2008 at the age of eighty-two. Tanenhaus might take in the decades Judis missed, in which Buckley shifted his focus to spy novels and sailing the world, only occasionally descending to adjudicate intra-conservative spats over identity politics.

Though Tanenhaus enriches our understanding of Buckley’s rise, he has not supplanted Judis. Nor does his thousand-page book, published in Buckley’s centenary year, offer us a significantly more sympathetic account. Indeed, it will be seen as another milestone in an ongoing downgrade of Buckley’s writerly reputation.

Buckley was a man of parts, the scion of a family that was eccentric, peripatetic, and, to all appearances, fabulously rich. His Texas-born father had operated for years as an oil prospector in Mexico and thrilled to its culture: The ten Buckley children all spoke Spanish growing up, Bill especially well; Yale would pay him a salary to teach it when he was an undergraduate. For a while the family lived and traveled in Europe, where the children also learned French. They took riding lessons. Bill played the piano (and eventually the harpsichord). Around the dinner table in northwestern Connecticut, or at the family’s winter home in South Carolina, William F. Buckley, Sr., a charismatic and garrulous reactionary, would dominate the conversation—and his young namesake, Tanenhaus tells us, could “parrot his father’s political opinions with remarkable facility and alarming confidence.”

Chief among their preoccupations in 1940 and 1941, when Buckley was in his early teens, was to keep the United States from being drawn into another war in Europe. The country was torn between memories of Woodrow Wilson’s involvement of the United States in the First World War (widely deplored), and the power of his protégé Franklin Roosevelt in the White House (widely supported). Buckley’s father was a member of the America First movement and an admirer of Charles Lindbergh, the country’s beloved hero, who had become an earnest isolationist. Lindbergh suspected that the Jews and the British had an interest in talking the United States into war, and so did William F. Buckley Sr. There were isolationist families, like the Buckleys, and internationalists, like their neighbors the Spingarns (Jewish leftists) and the Cotters (children of the progressive Episcopalian pastor). Two of these Cotter children would become international screen stars under different names: Audrey and Jayne Meadows. Indeed, the Buckleys always seemed to be blundering into people who were destined for fame. The children competed in equestrian events with Jacqueline Bouvier (later Kennedy) and the future playwright Edward Albee. The doomed poet Sylvia Plath was wowed by the family estate when she visited it for the debut of Bill’s sister Maureen, her Smith College housemate. Betty Friedan was another Smith friend of the Buckley girls. Fats Waller once played the piano at the Buckleys’ house—his cousin was their butler.

Buckley went to Yale. He was enormously popular—tapped for the prestigious Skull & Bones club, star of the debating society, editor of the Daily News. He was so busy he hired a secretary. Under the influence of the political scientist Willmoore Kendall, a sort of right-wing Jacobin, Buckley wrote attacks on Yale’s more popular professors for the socialism and religious agnosticism they seemed to believe in. These weren’t particularly coherent arguments, and Buckley, who revered Yale, was surprised that anyone would take offense at them. The cash-strapped publisher Henry Regnery, an America Firster, was interested in publishing God and Man at Yale, the book Buckley had bundled these writings into. Buckley’s father advanced Regnery $3,000 to get the thing into print in 1951. He would provide another $25,000 when Buckley was co-writing his follow-up book, McCarthy and His Enemies, in 1954. As young Bill came under the influence of two repentant ex-communists—James Burnham, a onetime intimate of Trotsky, and the literary memoirist Whittaker Chambers—communism emerged as Buckley’s preoccupation. Through Burnham, he made contact with the CIA, for which he would work for a year in Mexico, posing as a businessman.

Tanenhaus is justified in calling Buckley’s father “an unsung founder” of modern American conservatism—through the money he poured into Buckley’s books and, later, National Review, not to mention the pre-progressive certitudes he poured into Buckley’s head. Indeed, William F. Buckley Sr. emerges from these pages as more fascinating than his son. He was rich now and then, but never the magnate he appeared to be. Buckley’s brother Jim recalled driving onto the family estate with their father and hearing him say, “If someone sees that house, they might think I have three to five million dollars.” Jim had assumed his father was worth many times that. But the patriarch was juggling, and when he suffered a stroke in the summer of 1955, the children’s $20,000 annual stipends would rapidly dwindle, then disappear altogether.

The fortunes of the senior Buckley’s company rested on connections—to authoritarian politicians in Mexico and Venezuela, to mercenaries in Mexico, and to a variety of irregular financiers in New York. He had been tied up with oilman Edward Doheny and interior secretary A. B. Fall, conspirators in the Harding administration’s Teapot Dome scandal, which was until quite recently the most flagrant example of self-enrichment in the history of the American presidency. He was not implicated in Teapot Dome itself, but he found himself at that time “up to my nose” in a deadly counterrevolutionary insurrection in Venezuela.

The editor of National Review never knew any of this. Nor did he ever find out that his father’s father, a Texas sheriff he assumed had been cut from the same entrepreneurial cloth, had actually been a leftist revolutionary, indicted for helping the radical leader Catarino Garza escape American capture. In his notes, Tanenhaus alludes to an unpublished larger narrative he has written about this period of Buckley’s family history. One would be keen to read it.

This background helps correct a prevalent stereotype about Buckley. Because his father and his name were Irish, it is easy to assume an Irish element about his conservatism . . . something horsey, witty, and high-toned. No. Buckley was the son of a Southern frontiersman who really had more in common with an uncouth Oklahoman like Willmoore Kendall than a Merrion Square dandy like Oscar Wilde. He contrasted his own Mexican background with the Kennedys’ Ireland-focused culture. His mother was a German Catholic from New Orleans who started a chapter of the John Birch Society just as Buckley was purging it from conservatism’s ranks. He spent many of his youthful winters in South Carolina, on the estate of the magnificent Civil War diarist Mary Boykin Chesnut, which his father had bought. Buckley was straightforward, generous, hospitable, and—however proper his English—resolutely disinclined to snobbery.

Tanenhaus makes a case for regarding Bill Buckley as a Southerner; indeed, he overstates it. For three long chapters he goes into the history of segregation and the “massive resistance” to its dismantling in the 1950s. Yet Tanenhaus’s digressions, here and elsewhere, serve a point. The elder Buckley, though he treated his black employees with exemplary decency, was pouring a good deal of his fortune into the Camden News, a newspaper that backed the local Citizens’ Councils that were defending Jim Crow. Tanenhaus notes that, at Yale, Buckley opposed a plan to fund scholarships for black students and refused to sponsor an interracial dance on the grounds that interracial marriage brings “ostracism and broken bones.”

This was the attitude National Review brought to race when Buckley founded it. Tanenhaus finds it striking that it was Buckley’s mentor James Burnham, a northerner, a Europeanist, an ex-Trotskyite, who led the magazine’s opposition to civil rights. It should not surprise. The magazine’s case against desegregation was more constitutional than tribal. This has always been true of most opposition to civil rights. Tanenhaus, with a baby boomer’s tendency to use the American race problem as an all-purpose moral heuristic, calls Buckley’s editorial “Why the South Must Prevail” a statement that “haunts his legacy and the conservative movement he led.”

This was a more convincing view in 1998 than it is today. To be sure, Buckley’s own argument against civil rights was preposterously weak. For him, as long as there was the risk of one black vote tipping an election against “the claims of civilization,” blacks on the whole must be denied the franchise, because any vote could be that vote. That’s absurd: You could say the same about whites or, indeed, anyone. But stronger arguments were beginning to emerge, and in the early 1960s Barry Goldwater announced that he opposed the civil rights bill because it would bring into being “a federal police force of mammoth proportions . . . neighbors spying on neighbors, workers spying on workers, businessmen spying on businessmen.” The woke era has vindicated Goldwater’s view.

You would need to be of Buckley’s generation, a century old, to have a sense of the countervailing pressures operating on National Review in its first decade. Its commitments cannot be deduced from contemporary ones or from Buckley’s later reputation as a reasonable member of the conservative establishment. Its starting premise was that, in drafting the hero of World War II, Gen. Dwight Eisenhower, as its presidential candidate in 1952, the Republican Party had colluded with elite “opinion makers” and betrayed the country’s conservatives. National Review intended, as Buckley put it in a private letter, to “read Dwight Eisenhower out of the conservative movement.” It ran essays by Freda Utley, another ex-communist isolationist of the Regnery circle, who thought the Allies should go easier on Nazi Germany and considered Israel “the first racist state in modern history.” There was Revilo Oliver, the classicist, polemicist, and Holocaust denier, who wrote on race, heedless that the war might have changed anything.

This couldn’t last. Buckley was trying to inspire the generation that fought World War II to build a political movement. They wanted to join a band of liberators, not a ward for cranks. There was a fine line here. Policing it would require of Buckley a Florentine combination of delicacy and ruthlessness. Robert Welch, the North Carolina–born anti-communist who had started the John Birch Society, had admired Buckley for years. He was a man of considerable brilliance who had never shown Buckley anything but kindness. But he saw conspiracies behind almost every event in public life. The two fell out over—of all things—Boris Pasternak’s novel Doctor Zhivago, which enraptured Buckley along with other literary Westerners when it was smuggled out of the Soviet Union in 1957. Not only was it a great novel; it was evidence of something indomitable about the Russian spirit that could survive even communism. For that very reason, Welch smelled a communist plot.

Buckley couldn’t attack the Birchers wholesale. Republicans depended on their votes. He singled out and personally denounced Welch for sins that were, in the final analysis, neither intellectual nor moral but social. “Our movement has got to grow,” Buckley explained to a friend. “It has got to expand by bringing into our ranks the moderate, wishy-washy conservatives: the Nixonites.” And to these swing voters, Welch would make the party look like what Buckley called “Crackpot Alley.” Ronald Reagan, the Great Communicator just emerging into national politics, gratefully took Buckley’s side. Buckley had assumed his own role as the movement’s Great Excommunicator.

In trying to describe what irked Buckley about Ike, Tanenhaus captures a paradox of conservative thought in a progressive world: “The New Deal had been kept intact,” he writes, “. . . through the stealth rhetoric of conservatism.” Governing ideologies are dialectical. The more progressive and planned a society becomes, the more need it has to win over public opinion, which is generally not progressive at all. So rhetorical conservatism bubbles up even in progressive eras, perhaps especially then, because progressives require something to pit against actual conservatism. This creates considerable dissension among conservatives, not to mention a lot of bad intellectual incentives.

When you consider how the establishment tried to freeze him out and how little money Buckley had at his disposal, his record of attracting genuine literary (as opposed to merely ideological) talent to his magazine is extraordinary. Early on there were prize prose stylists from the academy: the literary scholars Hugh Kenner (who had been Marshall McLuhan’s protégé in Toronto) and Guy Davenport. Joan Didion. Garry Wills. The New York Times Book Review editor John Leonard. The columnist George Will. The dance critic Arlene Croce. Almost all would break with Buckley. It was partly that they came to disagree with him politically; partly that, for a literary person, writing for a conservative magazine was the equivalent of entering a monastery; and partly that Buckley, though a generous boss, could abuse his privileges—even claiming a sort of editorial droit du seigneur by cribbing from his writers’ work before it appeared. He infuriated Wills by declaiming, unattributed, whole passages of Wills’s unpublished essay on James Baldwin during a debate with Baldwin himself at the Cambridge Union in 1965. When Wills speculated publicly (and probably baselessly) about National Review’s having been funded by the CIA, the two parted ways. Buckley complained late in life that he had been running a “finishing school for apostates.”

In 1965, Buckley ran half-seriously on the Conservative Party ticket for mayor of New York. It was a publicity stunt that changed his life. He got only 13 percent of the vote. But the race exposed him on television almost nightly, and viewers liked him—some for his frankness about problems (crime, above all), some for his quirky conservatism (he favored legalizing drugs), and some for his wit (he said that if he won, he’d demand a recount). It was here that Buckley really effected his separation from the resentful right, in a way that anticipated Boris Johnson’s campaigns for mayor of London: You cannot be dogmatically conservative when you are begging for votes in the most cosmopolitan city on earth. The public would get to see more of him. On his interview show, Firing Line, which began airing after the election, he became what Tanenhaus calls “a new type of public figure . . . a performing ideologue.” That same year, his wife Pat, daughter of a Vancouver tycoon, came into a mammoth inheritance. The Buckleys were now wealthy Manhattanites, with a maisonette on the Upper East Side, ever larger boats to feed Buckley’s growing sailing obsession, and stays in Gstaad, where every winter Buckley would ski for several weeks and write a book.

He never managed to write the book he intended to be his magnum opus—a conservative summum that he planned to call The Revolt Against the Masses. To look at the Ortega y Gasset–derived title is to see why. Even at Yale, Buckley, when he was not speaking, writing, or otherwise performing, had a tendency to get bored with politics. He had been lukewarm about all the Republican presidential candidates in his lifetime: Eisenhower, Nixon, Goldwater, Nixon again. Buckley’s youthful conservatism—which really had been a conservatism—was coming out of synch with the emerging populist movement that had borrowed the name. Conservatism as Buckley understood it was a preference for the noble against the crude, a defense of the “best that has been thought and said,” an elitist movement. He is alleged to have quipped in 1963 that he would rather be governed by the first two thousand names in the Boston phone book than by the Harvard faculty, but that was a bon mot, not a credo. He never believed any such thing. In the twenty-first century it would become a kind of conservative parlor game to ask which postwar thinkers would have backed Donald Trump’s reshaping of the Republican Party and which would have opposed it. The question can be answered for Buckley more easily than for any other: He would have been a resolute opponent. And sometime after the start of the Nixon administration he snapped awake to discover, perhaps to his private horror, that he had been having a social hallucination, and that the crowd who had been rallying behind his banner for decades, whom he had taken for Optimates, were in fact Populares.

That changed everything. How could you lead the masses in a Revolt Against the Masses? The Republican Party was now pursuing a “Southern Strategy” that focused on suburban transients and poor whites in the sticks. Those were not Buckley’s people. “Even now, the only newspaper Bill read or took seriously was the Times,” Tanenhaus tells us. Buckley was beginning to backpedal from his slashing assertions about civil rights. “I was wrong,” he eventually said of his opposition to racial integration. “Federal intervention was necessary.” Why break one’s mind over the race problem? In the European ski resorts and yacht clubs where he spent so much of the year, it didn’t really come up. Buckley was writing yachting memoirs and spy novels. He was learning to paint with David Niven, Princess Grace, Teddy Kennedy, and John Kenneth Galbraith. He came to feel a “sneaking affection” even for his old liberal-Republican nemesis, Nelson Rockefeller. Forced to choose perfection of the life or of the work, Buckley settled on the former.

Not that everybody in the establishment was willing to let bygones be bygones. Gore Vidal called Buckley a Nazi and took his languorous manner as evidence of closeted homosexuality, and was constantly getting the better of him in debate. The televised conversations the two held during the party conventions in summer 1968 turned into an embarrassment. At the height of the popularity of segregationist George Wallace, Vidal implied that Buckley, absent from the set, was a member of the Ku Klux Klan. “He’s over at that Wallace headquarters, stitching hoods,” Vidal said. Eventually Buckley lost his temper on television, calling Vidal a queer, and he insisted on writing up the incident for Esquire magazine. Vidal again bested him, using the pages of Esquire to rebut Buckley’s account by retailing stories from Audrey and Jayne Meadows—Buckley’s youthful friends and Vidal’s adult ones—about the bigotries of Buckley’s father.

Starting in the 1970s, Buckley was, socially speaking, a grandee in a party with which, sociologically speaking, he had little in common. His two closest friends were Henry Kissinger and Ronald Reagan, both of them outsiders for whose bona fides he had vouched—and whose undying gratitude he now enjoyed. Buckley had never really been a Nixon man, but now he drifted around the administration taking odd jobs. He sat on the board of the U.S. Information Agency. He spent a few weeks scrounging ineffectually for information in Chile in 1971, even traveling to the coastal retreat of Pablo Neruda in Isla Negra, though the poet wouldn’t see him. He became the U.S. representative to the UN’s so-called Third Committee, responsible for human rights—a post he happened to occupy during the Yom Kippur War of 1973.

Tanenhaus is struck at several junctures in Buckley’s life by his poor judgment and inattention to basic life matters. One episode stands out from his time running for mayor. Edgar Smith—rapist, murderer, and National Review subscriber—was on death row, convicted of biting the breast off a fifteen-year-old girl and shattering her head with a boulder. Through sheer force of flattery, he bamboozled Buckley into believing he had been framed. Buckley worked tirelessly and won his release, without—Tanenhaus shows—ever taking the trouble to familiarize himself with the basic facts of the case, including the evidence. Even after Smith plunged a six-inch knife into a woman’s chest, Buckley couldn’t see that he himself had done anything wrong.

Now in the Nixon years, Buckley seemed again at sea. When the New York Times published the Pentagon Papers, it occurred to Buckley that it would be funny to write a parody in the form of a “fake” Pentagon Papers, alleging U.S. and foreign crimes around the world. The problem was that it wasn’t funny at all. It was just a set of false reports that misled government officials and other journalists for days.

Buckley was surprised by the scandal over the Watergate break-in of 1972, which led to Nixon’s resignation two years later. His main preoccupation throughout was to protect his old CIA friend E. Howard Hunt, the organizer of the break-in itself. Buckley hid from his readers the important information he knew about Hunt, and even helped him with legal expenses. Tanenhaus judges Buckley harshly for what he sees as a grievous breach of journalistic ethics. “It was one thing to shield a source as the Washington Post’s Bob Woodward shielded his crucial FBI source, ‘Deep Throat,’ for the purpose of informing the public,” Tanenhaus writes. “It was quite another to protect powerful people from public scrutiny.”

Was it? Who fits better under the rubric “powerful people”? The washed-up, newly widowed persona non grata E. Howard Hunt? Or “Deep Throat,” who turned out to be Mark Felt, long J. Edgar Hoover’s top aide at the FBI? When it mattered, Woodward did not tell readers of Felt’s FBI connections or of his own prior acquaintance with Felt. Buckley’s role seems more, not less, defensible. He seems more like someone trying to live up to E. M. Forster’s old dictum: “If I had to choose between betraying my country and betraying my friend, I hope I should have the guts to betray my country.” It speaks well of him.

In the half decade between the fall of Nixon and the election of Reagan, Buckley found himself, in Tanenhaus’s words, “at odds with others on the right.” Since this was probably the most conservative half decade of the last century, it was an odd time to be on the outs with the movement as a whole. But once his friend Reagan was elected, he gained the relevance that comes from influence. He made recommendations for Reagan hires—including Richard Perle, Jeane Kirkpatrick, and Eugene Rostow—that give Buckley a claim to be one of the founders of what we now call neoconservatism, and celebrated the twenty-fifth anniversary of National Review.

And in this celebratory mood, with a quarter-century of Buckley’s life still ahead, the book comes to a halt. It gives the reader a few epilogue-style pages and simply skips the rest of Buckley’s life, without offering any justification. Is there one? Buckley’s last decades were, as we say, more social than political. His wife, a gatekeeper of Manhattan society in the age of new investment banking fortunes and the lavish charity balls described in Tom Wolfe’s Bonfire of the Vanities, will likely go down as a more significant fin de siècle figure than he was. Buckley was now inhabiting a world more familiar to society columnists than to political biographers.

Nonetheless, it is striking that Tanenhaus’s discussion of Buckley’s political maneuvering in the Reagan years is less detailed than the one Judis gave us a generation ago. And concerning the years after Judis wrote, there remains much to say about the way, once communism was vanquished, the Reagan coalition unraveled. Buckley played a role in that unraveling during the presidency of George H. W. Bush. National Review editor Peter Brimelow tried to reintroduce the case for a restrictive immigration policy. Brimelow would eventually leave the magazine and found the right-wing website VDARE. And two writers, Pat Buchanan and Joseph Sobran—the former a one-time Buckley friend, the latter a Buckley protégé—waged a strident print campaign against the outsized influence of Israel’s interests on U.S. foreign policy. Buckley ousted Sobran (eventually), scolded Buchanan (then, confusingly, endorsed him for the Republican presidential nomination in the New Hampshire primary), and wrote a meandering essay called In Search of Anti-Semitism, from which it was nearly impossible to draw a message. But Tanenhaus is a smart student of ideology and could at least have tried. He gives the episode half a sentence.

The book winds up feeling like one of those three-volume nineteenth-century lives that is missing its final volume. Perhaps the explanation is simple: Tanenhaus didn’t make a curious artistic choice. He just ran out of time. Maybe he got sidetracked by the quantity of material he found on the early part of Buckley’s life, until, a quarter-century into the project, the Buckley centennial loomed, and it was time to fish or cut bait.

An ambivalent picture emerges. When Buckley was finishing McCarthy and His Enemies in 1954, he was puzzled by the mistrust of McCarthy’s protective wife, Jean. Didn’t she see that, in a world of enemies, Buckley was one of the only writers who wished him well? Of course she did. But as Tanenhaus puts it, she also “shrewdly saw what McCarthy merely sensed: that this ringing defense of her husband was littered with fastidious hedgings and muted concessions to his critics.” So it is with Sam Tanenhaus’s biography. It has brought to light Buckley’s wit, his hard work, his gift for friendship, and his extraordinary generosity, monetary and otherwise, to those he loved. Still, every hundred pages or so, Tanenhaus lets drop some damnation-with-faint-praise that gives a lukewarm, and probably accurate, account of how posterity will see him. “Buckley’s function had never been to give theoretical substance to the movement,” he writes. “He was not its best or most serious thinker. He was its most articulate voice.”

The post The Great Excommunicator appeared first on First Things.

The King and the Swarm
Tue, 15 Jul 2025

The printing press did not just change how people shared information. It changed the normative patterns of consciousness itself. After those changes came a period of chaotic upheaval, out of which emerged the worldview and political settlement characteristic of modernity. In England, that upheaval reached a crisis in the seventeenth century, in a series of convulsions that birthed not just a new political settlement, but also a reality-picture that would in turn shape the cultural, scientific, and technological imaginary of the modern era—and, in time, give rise to our modern democratic norms.

This story is generally presented as one of ineluctable progress, sometimes called the “Whig version of history.” Named in 1931 by the historian Herbert Butterfield, after a faction in English seventeenth-century Restoration politics that played a crucial role in the dispatch of absolute monarchy, Whig history conceives these events and all that followed as a one-way ratchet toward modern freedom and progress.

Though many still believe in Whig history, it is already over—a casualty of the post-print counter-Enlightenment. For while believers in Whig history generally recognize the contribution made by the printing press to their story, most assumed the advent of digital culture would continue this trajectory. They were wrong. The digital revolution is profoundly reactionary. The transformations it brings are less revolution, in the laudatory Whig sense, than putsch—one that critically undermines every presupposition underpinning Whig history.

The end of print culture is already upon us. With its end, we are already witnessing the disintegration of modernity’s load-bearing foundations, including the valorization of facts and objectivity, and a conception of the individual subject as a universal model of human personhood. This reality-picture, which crystallized in the seventeenth century, is already well on its way to dissolution in the solvent bath of digital media, a process radically accelerated by the spread of AI.

It is perhaps too neat to say the transition from print to digital forces us to replay the seventeenth century again, but in reverse. And yet much that is happening today makes sense, seen in those terms. And perhaps its most momentous effect is to undermine the cultural norms and habits of thought that form the bedrock of modern liberal democracy.

As this has grown increasingly difficult to deny, a bitter struggle has erupted over how best to shape the post-print (which is to say post-liberal, post-democratic) political order. To date, the incumbency advantage has rested with those seeking to continue print-era democratic desiderata, within a body politic now predominantly formed by and for digital media consumption. This “swarm” model frames its program as radically democratic, and as surfacing organically from the aggregate desires of the people. In practice it withdraws political agency from that people to an expert class of purportedly neutral functionaries. And the program such functionaries implement is often unpopular, its structures viewed as illegitimate by the very people it purports to represent. Its unpopularity is due not to a rupture with democratic principles so much as to continuity with these principles, and particularly with the democratic presumption of a somewhat agonistic relation between rulers and demos. More subtly, the same cognitive shift that, among proponents of swarm “democracy,” justifies the emergence of this model from the husk of the representative kind, has also contributed to stripping it of popular legitimacy.

But of all possible alternative modes of governance, the one that today carries the most potent cultural charge still lurks, mostly shadowed, at the fringes of the contemporary political imaginary. It is not deemed legitimate among respectable people, in respectable countries. And no wonder: It is the political form whose abolition forms the origin story of Whig history as such. It was the most common mode of governance across the West in premodern times, this political form that now seems, at least to many of today’s extremely online young radicals, to call seductively not from the deep past but rather from the future: the king. Even as democracy has become something other than itself under digital conditions, so this potent, enchanted-seeming figure has slipped from the reactionary fringes to animate a substantial minority. When we consider the wider metaphysical transformation that has enabled its re-emergence, it becomes unsettlingly clear that in the aftermath of the digital putsch, the king may be our least bad option.

To see past the Whig history hampering contemporary efforts to understand the digital revolution requires us first to sketch the worldview it displaced. In very reductive terms, this medieval Christian picture understood all of material, political, and social reality as a hierarchical arrangement of the natural order, formed from above by the thoughts of God himself and governed by a system of analogical correspondences. Within this system, kings bore a relation to their polity analogous to that between Christ and his Church, between the head of a household and those within that household, and between the head and the body of a human being.

This set of nested natural hierarchies was underwritten by a metaphysics, derived from Aristotle by way of Thomas Aquinas, that understood all phenomena to be “caused” not just by material stuff and the contingent forces that act on it. Everything was understood also to have a formal cause—the form that gives a thing its characteristic being. Everything also had a final cause—the end to which it is directed. Though the ontological nature of such forms was a matter of lively debate, many simply assumed it to be continuous with divine ideation.

For medieval Christians, this political order typically required a king and was theologically grounded in the self-sacrificial rule of Christ over his Church, a model of leadership understood as perfect service. Only the fallen nature of mankind created the possibility of deviation from the prelapsarian model of authority as service, opening space for coercion, violence, and tyranny. A great deal of medieval political thought deals with how best to avert such tyranny.

This cosmology was radically disrupted by an information revolution: the printing press. The spread of print drove far-reaching material changes, canonically explored by Elizabeth Eisenstein in The Printing Press as an Agent of Change. Print upended European intellectual culture, forged new relations among commerce, faith, and intellectualism, accelerated the fragmentation of Latin Christendom, and drove new interest in scientific study and exploration. Downstream of these changes came every pivotal event in Whig history: the Protestant Reformation, the European exploration of the globe, the rise of modern science—and, in time, those religious and political disagreements that culminated in the English Civil War. In turn, this conflict culminated in the beheading of Charles I, Cromwell’s Protectorate, and finally the establishment of constitutional monarchy in 1688.

This all happened because the printing press did not just multiply copies of the Bible. It multiplied readers. And, as the literary historian Walter J. Ong has argued, literacy is a mind-altering technology: “More than any other single invention, writing has transformed human consciousness.” And as the literacy scholar Maryanne Wolf has shown, the Greek alphabet especially concentrates neurological activity in the logical, systematizing left brain. Thus, one byproduct of the spread of literacy was the normalization of an increasingly linear, analytic mode of thought. Other effects include a sense of inner life, and of time as a linear sequence of events.

The printing press extended the mind-altering information revolution of literacy from a relatively small premodern political and clerical elite to the general population. It transformed the prevailing culture. It spread rationality: Eisenstein, for instance, documents the mania for tabulation, classification, and systematization across every field of knowledge that erupted in medieval Europe with the first spread of print. And it spread individualism, with far-reaching political consequences. Notably, wherever the new inwardness flourished, enabled by literacy, so too did a reluctance to accept authority as a given—especially in matters of faith. Reading the texts of early Protestant radicals, the sense is of a freshly kindled and fiercely independent piety determined to shake itself free from centralized authority, and to assert the right of individuals to cultivate a personal relationship with God. And in turn, this decentralization and interiorization of authority problematized not just the authority of God’s ministers on earth, but also that of temporal rulers.

It is sometimes assumed that the medieval world took for granted that monarchs are both absolute and unmediated, and also that they derive their authority directly from God. But it was faltering public faith in monarchy as a political form, in the early modern era, that inspired the invention of the divine right of kings as a last-ditch effort to salvage the monarchic order. This concept was first set out in 1598 by James VI of Scotland, the future James I of England, in The Trew Law of Free Monarchies. Here, James theorized that the right of a king to govern absolutely stemmed from God and in no way drew upon the consent of the governed. Referencing the medieval analogical cosmology, he argued that “the proper office of a king towards his subjects agrees very well with the office of the head towards the body and all members thereof.”

James argued that the king’s divine anointment by God absolved him of the need to pay any regard to Parliament, which should simply act as a court ready to do his will. But his words, ironically propagated by means of the technology that was undermining his legitimacy, fell on deaf ears. John Milton’s Areopagitica both addresses and describes a new, literate, free-thinking political subject unmoved by such medieval analogies, and whose insistence on free speech and thought would be critical to the ensuing political turbulence. Milton’s political writing reveals not just the early contours of this idealized individual but also those of his corresponding political form: a bottom-up order formed in aggregate from the desires of a people coming together voluntarily as a collective.

By the time James’s successor, Charles I, upped the ante on divine right by seeking to govern England directly, this individualistic mindset had passed the point of no return. Even so, the decapitation of Charles I was a profound moral and spiritual shock to the nation. Symbolically, the separation of Charles’s head from his body represented the decapitation of the medieval political order, with its nested correspondences. The radicals were sure of their cause, though. Not long after the Areopagitica, Milton would write The Tenure of Kings and Magistrates, in which he invoked this bottom-up political ideal as justification for that execution and painted Charles as a vicious tyrant.

But even this was only the end of the beginning. Further convulsions followed, finally coming to a head in 1688 with the deposition of James II and the accession of William of Orange. With this “Glorious Revolution,” too, came a constitutional monarchy, a settlement that limited royal powers to the largely symbolic headship of “the Crown in Parliament,” ceding political agency to elected parties, including the Whigs. The same period saw the most radical of England’s dissenters depart for the New World, where these experiments in mass literacy and disavowal of kings would culminate, a century later, in the American Founding.

Whig history begins, then, with an English head of state losing his head. Or perhaps we might say with the democratization of headship, now understood as a quality whose proper scale is the individual. It is no coincidence that, in the century that followed 1688, the individualistic ideal expressed in politics by the Roundheads, and in theology by the Puritans, would in turn be “discovered” all the way down to the most microscopic components of the universe itself. No longer ordered from the top down by God’s ideas, the stuff of reality came to be understood, in Isaac Newton’s 1704 words, as emergent from “solid, massy, hard, impenetrable, movable particles.”

In the process, the reality-picture enabled by this new print culture discarded Aristotelian metaphysics and the notion of formal and final causes. One leading seventeenth-century critic of such forms was Francis Bacon, who decried formal and final cause as obstacles to inductive research. Concurrently and relatedly, the medieval conception of the world as contained within and formed through God’s ideation was first bracketed and then discarded altogether. From acting as—in Aristotelian terms—the formal cause of all creation, God would first retreat to the remote status of the “divine watchmaker” who set this mechanistic cosmos into motion, before later, as Charles Taylor has shown, withdrawing from creation altogether.

This is a simplistic account. But its characteristic features are the disappearance of formal and final causation and the related weakening and eventual rejection of top-down ordering—even, for some, of the possibility of just hierarchy as such. Today the prevailing culture simply treats as self-evident the assertion that natural phenomena do not possess any such attributes as characteristic form or purpose, except as imaginative projections that must be treated with suspicion lest they obscure our ability to see those phenomena “objectively.” The same reality-picture also embraces the print paradigm’s Newtonian conception of materiality: one of tiny, distinct, hard, particulate atoms orbiting one another and somehow—we know not how—agglomerating into larger units, from which all forms of material then appear as emergent properties, including life and even human thought. It is the most democratic imaginable conceptualization of reality. As we will see, it produced a corresponding political form.

The idealized political subject for this reality-picture is rational, politically engaged, and agentic, able to absorb information, reason clearly, and form sober individual judgments. This subject found an influential voice in Milton’s critique of state censorship and, later, in the argument for such subjects’ right in aggregate, as “the people,” to modulate or even discard monarchic authority. In other words, as the political scientist Adam Garfinkle has observed, this political subject and the “deep reading” practices that enable his emergence would, in time, legitimize the emergence of modern democracy:

Deep reading has in large part informed our development as humans, in ways both physiological and cultural. And it is what ultimately allowed Americans to become “We the People,” capable of self-government.

This political paradigm reached its apogee in the late nineteenth century, with the peak prevalence of long-form reading. In the process, it set the stage for its own dissolution. Its moment of both triumph and disaster was the concurrent arrival in the early twentieth century of the universal franchise, and with it the propagation of the first forms of post-print mass broadcast: radio and television.

In the Areopagitica, Milton was frank about the need to restrict the free circulation of ideas only to those cognitively able to engage with the most challenging material. Provocative arguments, for example, should be made in Latin, thus restricting their circulation to a highly educated minority. But almost four centuries on, the democratic subjectivity Milton imagined as the property of a still relatively small, educated elite had come to be taken for granted. It was seen no longer as a hard-won byproduct of literacy, but as (at least aspirationally) a universal property of all adult humans. As such, it came to seem obviously just and proper to extend to all these rational, self-directing subjects the right to engage in democratic politics. This imperative grew especially compelling once the development of radio and, latterly, TV enabled even those without the time or inclination to read books or newspapers to inform themselves politically and engage in the political process.

Thus mass democracy was born—and, simultaneously, died. Mass broadcast media were embraced by newly mass-democratic governments, less as vehicles for deliberation and debate in the style of print, than as propaganda. Edward Bernays wrote his seminal book on this topic in 1928. Nor was this the only way in which post-print media undermined the idealized, rational political subject of Peak Literacy.

Just as the spread of literacy changed how people thought, so, too, did its supplementation (or replacement) by broadcast media. Marshall McLuhan was among the first to see this: His 1962 book, The Gutenberg Galaxy, postulated the emergence of a “global village” enabled by mass broadcast media. Walter Ong developed McLuhan’s ideas, characterizing the emerging effect of TV and radio as a “secondary orality,” which is “sustained by telephone, radio, television, and other electronic devices that depend for their existence and functioning on writing and print.” Such a culture might, Ong suggested, retrieve the characteristic features of oral cultures, such as a reduced emphasis on analytic thinking or factual accuracy.

As the electorate both expanded and began to discard print for broadcast, supranational and pre-political forms of governance also expanded: colloquially, the “deep state.” The emergence, alongside the universal franchise, of an architecture of extra-democratic, institutional decision-making implies tacit acceptance among leaders that “We the People” do not always know best. Some decisions must be reserved for those with the knowledge and judgment to make them and, once made, may then be popularized and legitimized via mass media. Taken together, this represents a slow reimagining of democracy as a process of mediated acclamation for decisions already taken—what Carl Schmitt in 1970 foresaw as a “TV Democracy.”

Under post-democratic managerialist rule, democratic participation would be replaced by what Schmitt calls a “daily, permanent plebiscite” in the form of TV and radio “representation.” In the event, though, the full realization of this new order would require a new information revolution, on a par with that of print.

In the Whig version of history, an uprising or violent transformation is a “revolution” when it tends toward more freedom and other Whiggish goods. Revolutions that push in the opposite direction receive less complimentary descriptors: terms such as “putsch,” as for example Hitler’s 1923 attempt to overthrow the Weimar government.

The digital transformation was, in its earliest days, mostly received as another step forward in the ongoing revolution of modernity. The internet appeared, at first blush, to represent merely a further democratization of the discursive space, along the lines effected by the printing press. Texts such as Clay Shirky’s Here Comes Everybody celebrated the prospect it seemed to afford, with a new army of amateur experts transforming culture anew, from the bottom up. Meanwhile, digitally mediated forms of public participation would extend and enhance the democratic political process, in accord with the print-era paradigm. Politics would be transformed from the bottom up, in a process now technologically turbocharged, within a reality-picture assumed to be inert, desacralized, and atomistic.

But this account has it backwards. In Whig terms, the digital is not a revolution but a putsch, for digital communication differs from print not just in quantity but in kind. All the evidence suggests that the digital is every bit as potent today as print was after Gutenberg in shaping the consciousness of those who spend a great deal of time doing it. But it is formative in radically different ways, some of which undermine the development of that rational, analytic subjectivity upon which the modernist project is predicated, along with its preferred mode of governance.

Today there exists no shortage of dismayed commentary from academics on the frontline, concerning the by now very obvious cognitive decline in a student body that has swapped reading for scrolling. One professor at a regional public university, who writes as “Hilarius Bookbinder,” describes the typical lecture hall as full of “checked-out, phone-addicted zombies,” some of whom cannot sit through a fifty-minute lecture without leaving to look at their phones. The author describes how “our average graduate literally could not read a serious adult novel cover-to-cover and understand what they read. They just couldn’t do it. They don’t have the desire to try, the vocabulary to grasp what they read, and most certainly not the attention span to finish.” And if the distracting effect of scrolling in the “attention economy” fragments the capacity to absorb and engage with long-form reasoning, others are beginning to suspect that over-reliance on generative AI still more radically atrophies the capacity to think. A recent study from researchers at Microsoft and Carnegie Mellon indicates that higher reliance on generative AI correlates with less critical thinking.

A report in the Financial Times earlier this year showed that across the world, verbal and numerical reasoning and concentration are collapsing, having peaked in the 2010s. The author links the collapse to the rise of a “post-literate” society and the replacement of focused, intentional reading with context-switching and endless scrolling. Only 54 percent of Americans read even one book in the last year. The decline in reading is directly linked to the spread of digital alternatives. Device overuse is, in turn, correlated with poverty, but it is not confined to less-well-off individuals. Anecdotally, on a recent flight from San Francisco to Boston, I counted the number of my fellow passengers (presumably mostly the “coastal Americans” who pride themselves on education and rationality) who were reading books or ebooks: the tally was around 10 percent. The rest were watching videos or scrolling.

The inescapable picture, from every possible indicator, is that deep literacy is rapidly receding. If that anecdotal 10-percent sample of in-flight coastal Americans is any kind of guide, deep literacy will perhaps end up as a minority practice roughly in line with pre-print Western cultural norms. And this development represents a putsch because, as Adam Garfinkle argues, the norms and habits of mind inculcated by widespread long-form reading are also those on which the project of broad-based democratic participation is predicated.

For one thing, such a transition radically undermines the ability of political leaders to rely on the public’s capacity for long-form analytic reasoning. This is already discernible in the protests of every classical liberal over the deterioration of print-culture norms such as scientific rationality and civil public debate. Each such commentator is, in truth, lamenting the defeat of print by digital. To this we might add countless other digitally enabled discursive phenomena: cancel culture, moral panics, polarization, meme wars, and so on. The alarm now being sounded by educators across the developed world underlines how much more pronounced this transformation of the citizenry will become, as successive truly net-native generations reach adulthood.

There is no reversing this transformation: The internet has no “off” switch, and as of July 2025 there are adults of voting age who were born after the launch of the first iPhone. Every culture that has transitioned from print-first to digital-first ceased, in so doing, to form its population for democratic citizenship. Its people are, quite plainly, the wrong kind of subject.

But a declining facility for objective, analytic thinking is not the only consequence of the digital putsch. Digital reading is not “making people dumber” in some absolute sense, just less analytic. And amid the shroud-waving over smartphones and IQ, another consciousness-altering effect has gone relatively unremarked: the re-emergence of modes of thinking that emphasize pattern, image, and symbolism.

The physical form of print literature invites long-form linear reasoning, analytic reflection, and a deepening of felt interiority. By contrast, as the social critic Nicholas Carr has argued, digital reading is filled with distraction and multi-directional links, and characterized by overwhelming volume and variety. To navigate information in this form necessitates a different mode of content consumption—one that responds to information overload by filtering less for linear logic than for latent patterns.

In my own observation, internet content consumption degrades long-form concentration but heightens awareness of patterns of shared meaning, which echo mnemonic communicative registers more characteristic of medieval culture than of modernity. The journalist Tyler Cowen reports corresponding with a teacher who affirms this observation among his students. According to this teacher, “the ability of students to process and work with a text in a standard ‘linear’ fashion has declined,” but at the same time their ability “to find patterns or links between texts has increased substantially.”

Of course a person primed to detect and interpret patterns in this fashion will not necessarily bring that capacity to bear in the offline world. But just as the experience of long-form reading had epistemological effects well beyond the printed page, it is reasonable to expect that this new information technology may do likewise. And this in turn has far-reaching consequences, not just political but metaphysical—above all, in retrieving those aspects of the medieval conception of reality discarded at the inception of modern science: formal and final cause.

More recently, researchers in the interdisciplinary field of biosemiotics have re-evaluated findings in the natural sciences in the light of contemporary philosophy to argue that “meaning” is not a phantasm projected by humans onto mechanistic, atomistic reality but a fundamental component of that reality. Meaning resides not in exceptional “signal” incidents. It resides in the everyday or normative, which is to say in pattern. From this it follows that a resurgent popular facility for discerning patterns will entail a renewed interest in, and capacity to apprehend, meaning as a real feature of the world and not merely a phantasmagoric obstacle to its study. As Wendy Wheeler argues, the study of ecologies as meaning-making systems requires an “ontology of relations,” which is to say one of directedness or purpose.

More plainly, then, the digital putsch concentrates cognitive power. But it also reopens space for the return of meaning and purpose, of formal and final causation, to everyday public awareness. With this return comes a less reflexively hostile appraisal of “natural” or given patterns, as for example in the growing popular rejection of blank-slate dogma on human sex dimorphism. A renewed realism about such givens entails a pragmatic evaluation of ineradicable power asymmetries. And all of this sets the stage for a critical reevaluation of our prevailing political forms.

This is not, however, a cozy story of some marginal “re-enchantment” that leaves our world otherwise intact. The digital putsch raises grave questions, perhaps most urgently concerning political form and just rule. If, in aggregate, “We the People” no longer appear as rational print-era subjects, our electoral contribution to political decision-making is likely to end up still more diluted than in a “TV Democracy.” And the bitterest contemporary meta-political battle retrieves the medieval political problem of how to avoid the decline of absolutism into tyranny. The emerging poles are those who wish to avert tyranny by doubling down on print-era democratic “values,” and those who believe these values have given rise to a new kind of oppression.

The first pole is the model referred to warmly by its supporters, and contemptuously by its opponents, as “Our Democracy.” Briefly ascendant during the COVID crisis, it mobilizes digital “representation” toward a maximally emergent-seeming political order, while evacuating human leadership into purportedly neutral proceduralism wherever possible. Blending governance by NGO, treaty, and “stakeholders” with tightly ring-fenced rituals of democratic acclamation, it shows how print-era democratic norms operate in practice, applied to an increasingly post-print demos.

One of the visionaries of the postliberal Blairite project, Peter Mandelson, saw these implications clearly in 1998. “Representative government,” he observed, “is being complemented by more direct forms of involvement from the Internet to referendums.” But as Mandelson saw it, such parliaments are no longer capable of responding with sufficient flexibility to the political needs of the moment. Implicitly, then, a technocratic elite must fill the gaps. Perhaps the most eloquent contemporary advocate for this model is the futurist Benjamin Bratton, who argued after COVID for a retreat from “over-individuation,” toward an automated, planet-spanning, tech-enabled order capable of supplying to each according to his needs. In Bratton’s view, global economic and political entanglement requires new forms of governance that render obsolete the clumsy and slow-moving capacities of “ceremonial parliaments” and “constrained private interests” in the name of “planetary competence.” But for Bratton, no single individual can be in charge of such an order. To be competent and legitimate means to have more and better technocrats wielding more and better technology.

Perhaps the most complete attempt to date to realize such a regime was the Biden presidency, fronted by a man who, like Charles I, lost his head in office—in this instance, by way of age-related cognitive decline. Unlike Charles, Biden remained on the throne, a phantom POTUS, sustained from below by a swarm of bureaucrats who gatekept access, stage-managed public appearances, ghost-wrote decrees, and otherwise collectively operated an empty presidential shell on behalf of a political program coordinated seemingly spontaneously.

Is this the best we can do now? Absent the mass literacy that made democracy so obvious a choice for the early American republic, a downgrading of the direct role played by “We the People” in decision-making may well be in the people’s best interests. But it does not follow that more and better technocracy is the only possible response. Indeed, the American electoral answer to four years of autopen governance strongly suggests that though people may not read so many books these days, this fact in no way inhibits—indeed, it probably aids—the emerging sense that something is profoundly off about Our Democracy.

The central problem with such technocratic swarms is that the “democratic” swarm governance on offer typically disavows ordering form—sometimes quite literally, in the tacit abolition of borders. It also disavows purpose, now reduced to empty procedural liberalism. But this is to misunderstand the swarm as metaphor. A swarm of bees, for example, is not an effect of pure emergence by individual bees. Something is at work that is more than the sum of random external stimuli plus individual bees’ contingent actions. Bees bee, and the principle that orders their beeing is not an emergent effect of the swarm but pre-exists each individual bee or even the beeing of an entire hive. We can call this ordering principle the nature of bees or, more classically, their formal cause. Taken together, the beeing of bees is not random but intentional: They have a purpose, namely to bee.

To those re-sensitized by digital pattern recognition to this dimension of reality, the swarmist claim to democratic political legitimacy is plainly false. From this perspective, the idea that pure emergence might produce order is absurd, and those who make this case appear as arguing in bad faith. This in turn invites suspicion as to the real ordering principles of this ostensibly emergent governing regime. A people less primed for analytic thought but more attuned to patterns intuits, correctly, that the real formal cause of the purportedly emergent swarm is not “We the People.”

Pattern recognition also reveals with painful clarity the reality that swarm technocracies are typically directed to ends other than the flourishing of their people. This is less a matter of active malice than of inheriting from representative democracy the assumption that rulers and ruled necessarily have an adversarial relation. In a democracy this dynamic is baked in, as in the carefully calibrated checks and balances within the U.S. Constitution, and underwritten by the possibility of voting out an unwanted regime. But once transposed to post-print post-democracy, the agonistic relation between rulers and ruled ceases to be a safeguard and becomes a liability—for power now rests with a technocratic permanent bureaucracy, which views its relation to the people as agonistic but cannot be voted out.

This unhappy situation cannot be averted merely by replacing such a bureaucracy with a strongman leader. When Trump’s opponents march across America under the banner of “No Kings,” the operant fear is that the permanent bureaucracy is about to be replaced by a single absolute ruler—but the principle of agonistic relation between ruler and ruled will remain. This is, in essence, what is meant by “dictatorship.” Averting this alternate form of tyranny means not simply discarding the swarm for an individual, but re-examining the print-era assumption of a necessarily adversarial relation between asymmetric roles in a power hierarchy. A central theme of premodern political thought, from Aristotle to Aquinas, was the crucial role played by friendship between rulers and ruled. According to Aquinas, this friendship is not a matter of sentiment so much as of attuning policy decisions as far as possible to the common good:

Good kings . . . are loved by many when they show that they love their subjects and are studiously intent on the common welfare, and when their subjects can see that they derive many benefits from this zealous care.

The mutinous dissatisfaction currently in evidence across many Western polities is a consequence of the sense, held even by members of the electorate who do not read a book from one end of the year to the next, that their leaders do not love them or have their interests at heart. It is also central to the appeal of Donald Trump: a sense that, however chaotic and flawed, he is not a bureaucrat but an individual human, who feels genuine affection for the people and nation he leads. By contrast, a swarm cannot feel anything at all, least of all friendship.

None of this is to claim that the only possible model for post-print political legitimacy is explicit monarchy. But we have long since ceased to form democratic subjects, while the transposition of democratic desiderata into the post-print era produces not perfect, headless swarm democracy but brittle, technocratic puppet regimes widely experienced as hostile and illegitimate. The people may no longer be subjects who read, but that does not mean they are stupid or incapable of making inferences. Amid the fast-moving, intimate dynamics of the digital public square, the most difficult thing to fake over time is political friendship. The resulting political legitimacy gap, perhaps starkest of all in my native Britain, invites us to re-evaluate whether there is a place in our future, as in our past, for named individuals vested with sacred as well as executive authority.

Effective policy still requires long-form thinking, though. Thus a post-print leader, however obliquely or overtly kingly, can govern both effectively and with popular support only by working in two registers: the rational print one and the symbolic digital one. Perhaps the individual who rules most deliberately in this fashion is El Salvador’s president, Nayib Bukele. Depicted as a tyrant by Western liberals, reportedly popular with his own electorate, and self-described on X as “Philosopher King,” Bukele governs as a “right-wing progressive.” His approach combines strong enforcement of public order, concern for ordinary citizens, techno-optimist elitism, and indifference to proceduralism. Importantly, Bukele uses the internet as a source of legitimacy, publishing carefully crafted iconography, public announcements, and ferocious rebuttals of his critics alongside active engagement in digital discourse, all within the accessible register of “secondary orality.” In this regard, whether by instinct or by design, he mobilizes a memetic enchantment made possible by secondary orality: one that recalls the calculated pageantry of premodern monarchs, such as royal entries and triumphal processions.

Bukele’s approach adumbrates a postliberal future of leaders who will operate in parallel thought-worlds: both the analytic, policy-based register of long-form literacy, whose expressive mode is logic, and the enchanted, monarchic register of secondary orality, whose expressive achievement is friendship. For a ruler or small elite able to code-switch, there need be no choice between the king and the swarm. Such a leader, rather than be subsumed by the swarm, will serve as its head or formal cause.

As AI agents improve to the point of shrinking the administrative class, we may find that what actually has the power to destroy the twentieth-century technocracy is not free markets and personal responsibility, or even anons posting memes, but developments in AI. If so, classical liberals may be disappointed to discover that just as “civil discourse” is not coming back, what comes after the deep state will not be a return to small and limited republican government. It is more likely to be big government mediated by big data, crunched by machine agents in a now almost entirely digital swarm. Should this outcome be realized within the legacy democratic paradigm, it will inevitably result in governance still more impersonal, less accountable, and less capable of friendship toward the ruled than the unaccountable bureaucrats it has rendered obsolete.

If this happens, and I think it will, the return of the king will be not only possible but urgently necessary. Left headless, an algorithmically swarming regime of machinic proceduralism would represent the most monstrous pseudo-democratic tyranny of all. Our best safeguard against this fate is the ordering power of a human ruler, with a human head capable of prudence and justice, and a human heart capable of friendship.
