Science & Technology Archives - First Things
https://firstthings.com/category/science-technology/
Published by the Institute on Religion and Public Life, First Things is an educational institute aiming to advance a religiously informed public philosophy.

AI as Liberation
https://firstthings.com/ai-as-liberation/
Fri, 16 Jan 2026
You can learn everything you need to know about our collective state of mind from the fact that the most talked-about book of the year is titled If Anyone Builds It, Everyone Dies.

The “it” in question is advanced artificial intelligence. Although the book’s authors, Eliezer Yudkowsky and Nate Soares, present a serious and nuanced argument worth contemplating in full, their bottom line is simple enough. The machines, they argue, have reached the point at which, like anything flirting with cognition, they wish to maximize their performance. Sooner or later, this will mean eliminating those obtuse meat puppets that take way too long to complete basic computations and consume way too much electricity—namely, all of us paltry humans.

Judging by the book’s ecstatic blurbs—everyone from Nobel Laureate Ben Bernanke to Mark Ruffalo, best known for portraying the Hulk in the Marvel Cinematic Universe, lined up to sing its praises—we paltry humans seem to buy the argument. Read about AI these days, and the views on offer range from dark to bleak to apocalyptic. The Cassandras predict the total annihilation of the species, while the optimists settle for a cheerier vision in which the machines take over every job and render Homo sapiens obsolete.

I’m sorry to spoil the pity party with a dose of old-fashioned optimism, but I feel compelled to note that everyone is not dying just yet. In fact, there are many reasons to believe that AI will usher in an era not only of great economic flourishing and innovation, but also of faith and spiritual growth.

For more on the former, you can listen to Alex Karp or observe the inspiring success of his company, ­Palantir, which uses AI to make America greater and keep Americans safe. The latter, however, is trickier: What reason have we to believe that an abundance of AI will send us straight to church or the synagogue?

To answer the question, it may be useful to turn to a term nearly three-quarters of a century old. 

In 1958, the British sociologist Michael Young was looking for a word to describe a new phenomenon he was observing. It was so strange, he felt he needed to coin an original word to get it right. He mashed Latin and Greek together to give us one of the most ascendant terms of the last half-century: meritocracy. 

On the surface, he argued, the idea sounds unassailable. Who in their right mind would object to a system that promoted and rewarded the smartest and most skillful? But take a closer look, and you’ll notice a deep and fatal flaw. Meritocracy, Young argued, sanctified success as a stand-alone virtue, which made a society dominated by meritocracy not only empty but also ­pernicious.

Many members of the merit-certified elite, Young mused,

came from homes in which there was no tradition of culture. Their parents, without a good education themselves, were not able to augment the influence exercised by the teacher. These clever people were in a sense only half-­educated, in school but not home. When they graduated they had not the same self-assurance as those who had the support and stimulus of their families from the beginning. They were often driven by this lack of self-confidence to compulsive conformity, thus weakening the power of innovation which it is one of the chief functions of the elite to wield. They were often intolerant, even more competitive in their striving for ascent than was necessary, and yet too cautious to succeed.

Over time, reliance on meritocracy further exacerbates this negative dynamic. The sons and daughters of the highly educated and richly rewarded these days are likely to grow up and assess their self-worth in terms of accomplishment. Made it into Harvard? Snagged that six-figure job? Moved into that corner office? You’re a success. Anything less? Disaster.

Now, imagine these meritocrats in the age of the machines. Think of the young woman who understands herself primarily as a graduate of an Ivy League medical school, say, encountering software that could glance at an MRI scan, compare it to every other similar case on record, and produce an accurate diagnosis in the time it takes her to put on her doctor’s coat. Or consider the young man who takes great pride in his career at a white shoe law firm, only to discover one morning that AI can inhale millions of pages per minute, ingest every existing precedent on the books, and deliver a sophisticated analysis before the budding lawyer can finish his morning espresso. 

It’s very likely that these meritocrats will soon find themselves, if not altogether out of a job, then at least in possession of one that is far less radiant and ­remunerating. 

What happens then? To hear Moshe Koppel tell it, only good things. Writing recently in Tablet Magazine, the Israeli computer scientist posed the question on everyone’s mind: “If automation hollows out jobs, what will people do all day that feels meaningful?”

Simple, he responded: They will do what humans have done since time immemorial, which is look to faith for answers and a sense of purpose.

Religion, Koppel reminded us, works because it offers “scheduled repetitions of doing what you said you’d do even when you don’t feel like it.” It’s a commitment, not a preference, and it offers “a class of goods [that] sits outside the market by design,” like reading Scripture, praying to God, and spending Saturday or Sunday with your friends and neighbors. Your job, in other words, may be disrupted by some clever computer that can do it twice as well in half the time, but your relationship with your maker and with your community is something you alone can navigate, and only by showing up and being fully present. “The boundary,” Koppel concluded, “protects the thing from the optimization pressure that dissolves it.”

To keep things biblical for a moment, think of AI not as the flood but as the dove, informing us that the deluge is over and that it’s now time to rebuild. For decades, we’ve been in a competitive frenzy of work, work, work that has scrubbed our existence of every trace of truth and beauty. We have measured out our lives with coffee spoons, obsessed with having just a little bit more: more money, more power, more respect. We asked only what we could do, rarely what we should. We generated immense wealth and progress, and then wondered why they brought with them so much misery. 

The coming of very smart machines may be just the chance we need to start over. 

True, like all technological upheavals, this one will bring profound changes we can’t even begin to predict, not all of them rosy. But also like all technological upheavals, this one offers us an opportunity to return to first principles and ask ourselves what being human is all about. 

“AI,” Koppel writes, “can fetch sources and summarize moves, but it cannot give you the reflex that keeps moral talk from devolving into sentiment.” The machines, in other words, may take some of our jobs, but they could never satisfy our desire for justice, for compassion, for truth, for transcendence—basic human instincts that have thrust us forward for millennia. We may lose some of the prestige that once came with ­being meritocratic high achievers, but we’ll gain something more valuable in return: the gift of being fully present and realizing, as so many of our ancestors have, that we matter because we were created in God’s image, not because we are on the receiving end of a ­gilded diploma or a padded paycheck. 

Ask any AI agent to sum up the Law of Conservation of Energy, and it will tell you that energy cannot be created or destroyed, only transformed from one form to another. The same is true of emotional and spiritual energy as well. The singular human genius that for too long has lashed itself to spreadsheets is about to be set free to contemplate not just the price of things but, finally, their value. With a little luck, we may be looking at a new great awakening, with artificial intelligence not replacing but liberating our much more precious, irreplaceable, and all-too-human intelligence. 

The Madness in Miami
https://firstthings.com/the-madness-in-miami/
Fri, 16 Jan 2026

The great boxing spectacles of the past—the Thrilla in Manila (1975) and the Rumble in the Jungle (1974)—were never merely athletic contests. They were cultural dramas staged on global terrain. Rumble in the Jungle, in particular, captured a transforming American moment at the height of the civil rights era. For Muhammad Ali, the bout in Zaire was not simply a payday but a moral narrative: a reclamation of identity, geography, and historical meaning. 

Boxing has long carried this symbolic weight. Max Schmeling’s fights with Joe Louis in the 1930s came to embody the struggle between Nazi Germany and democratic America. Geography mattered as much as the fighters themselves; Ali’s journey to Africa was as important as the punches he threw. Boxing’s stages have always amplified its myths.

Against that backdrop, the recent Jake Paul–Anthony Joshua bout, streamed live on Netflix, could be called The Madness in Miami, or, with equal accuracy, The Money in Miami. YouTube sensation Jake Paul was the star attraction. Crossover bouts have become a reliable financial mechanism, and this one was no exception. Netflix’s subscriber ambitions alone justified the investment. Boxing has always been blunt about economics; boxers are “prizefighters.” Unlike athletes in salaried sports, boxers negotiate their livelihoods bout by bout, so money talk never leaves the frame. Gambling promotions saturated the event. Paul gifted himself a custom Ferrari Purosangue before the fight. None of this was peripheral to the spectacle; it was the spectacle.

The athletic stakes were modest. Few doubted Joshua’s victory. The intrigue lay only in how long Paul would last, and whether he could survive the scheduled rounds. The matchup lacked the narrative tension of Ali–Foreman or Holyfield–Tyson. What was striking instead was the muted atmosphere in the city itself. The hype for Paul–Joshua existed less in the streets than online, signaling a transformation in how public events now circulate.

Geography matters here in another sense. Miami is not merely where the Florida Athletic Commission licensed the bout. It is a city that epitomizes flamboyance, glitz, and influencer culture. Over the past decade, Miami has become the unofficial capital of the social-media generation, a place where aspirants curate identities through nightlife, bayfront vistas, and highly Instagrammable neighborhoods like Brickell and Wynwood. Visibility itself functions as currency. Miami fuses entertainment and finance in a distinctly twenty-first-century form of American capitalism.

Jake Paul is the exemplary figure of this new economy of fame. For those outside his generational cohort, his appeal can seem opaque. Juvenile videos, brand partnerships, and energy drinks nonetheless translated into boxing notoriety through victories over faded names and a shrewd grasp of algorithmic attention. Mocked by critics as “Fake Paul,” he embraces the label. “YouTubers run the world,” he insists. “We are the new modern-day A-list celebrities.” Paul’s self-myth also speaks to a changed world of work. When legitimacy is harder to inherit from institutions, it must be performed, narrated, and monetized. For young men facing narrowing pathways, Paul offers a seductive promise: relevance without apprenticeship.

Influencer boxing emerged from this logic. Paul’s brother Logan popularized the form through a rivalry with British streamer KSI, converting digital antagonism into a monetized physical contest. These crossover bouts reveal something enduring about boxing. Even in an era dominated by MMA, boxing remains the symbolic arena for resolving disputes. In a culture saturated with callouts and feuds, it functions as ritualized resolution. Boxing also retains an aesthetic language; Ali’s “float like a butterfly, sting like a bee” still lingers in cultural memory.

Traditional boxers have had to adapt. Self-promotion is not new, but today’s fighters must maintain an entire aesthetic ecosystem. Anthony Joshua, despite emerging from a grittier North London milieu, has been drawn into this economy. 

On fight night, the Kaseya Center revealed how spectacle has changed. Audience members filmed themselves. The event was as much about being seen as seeing. Influencers dominated ringside. Strangers shouted for selfies. The bout itself was anticlimactic. Joshua dismantled Paul with ease, sealing the mismatch with a trademark straight right. The crowd’s response was not triumph but relief. Seriousness briefly returned the moment violence asserted itself, and just as quickly dissipated. Yet the knockout was not the end of Paul’s story. He emerged with a broken jaw, titanium plates, and an X-ray proudly displayed online. Defeat became content.

That, finally, was the madness in Miami: a spectacle in which money does not correct irrationality but fuels it, and where what mattered was not the fight itself but its digital afterlife. The fight was over. The content, it was clear, had only just begun.

Photo credit: Sipa USA via AP

True Humans
https://firstthings.com/true-humans/
Wed, 14 Jan 2026
The Origins of Catholic Evolutionism, 1831–1950
by Kenneth W. Kemp
The Catholic University of America, 540 pages, $85

Darwin and Doctrine:
The Compatibility of Evolution and Catholicism
by Daniel Kuebler
Word on Fire, 304 pages, $29.95

The Catholic Church never condemned the theory of evolution nor came close to doing so. One might have expected otherwise: Many of the factors that had led to the Galileo fiasco two centuries earlier were present again. Darwin, like Galileo, was proposing a radical theory that struck many people as absurd—and that seemed contrary to the “plain meaning” of certain scriptural verses as they had generally been construed. Both theories were at first controversial even among scientists and faced weighty scientific objections, both observational and theoretical, which could not be resolved until decades later. Both theories contradicted aspects of the Aristotelianism that prevailed among Catholic theologians. Finally, both Galileo and Darwin promulgated their theories at times when the Church faced powerful challenges to her credibility and authority, as a result of which her doctrinal defense mechanisms were on high alert. Even if the Vatican’s condemnation of Galileo did not formally and irrevocably commit the Church doctrinally, it put the Church, for a time, on the wrong side of a scientific issue. The same could easily have happened with On the Origin of Species.

Indeed, in some ways, the circumstances facing the theory of evolution were even less auspicious than those Galileo contended with. Galileo’s offending ideas, after all, concerned astronomy, which ­Cardinal Bellarmine admitted at the time pertained to the faith only “incidentally,” whereas the theory of evolution, as applied to human beings, concerned matters of the highest theological importance, such as the nature of man and the doctrine of original sin. And whereas in Galileo’s day no one was using heliocentrism to attack Christianity and virtually all scientists were Christians, by the late nineteenth century religious skepticism and scientific materialism had gained many adherents, and evolution was being used as a cudgel against religion. As a result, many Christians, both Catholic and Protestant, were disposed to be deeply suspicious of the new ideas and the people who advocated them.

And yet, no condemnation by the Catholic Church ever came. In fact, the universal magisterium of the Church said nothing about evolution until 1950, more than ninety years after Darwin published his Origin of Species. Part of the reason for this caution, no doubt, was that Church authorities were keenly aware that science had long since vindicated heliocentrism and they had no desire to repeat past mistakes. Moreover, discoveries in geology and paleontology in the preceding century had shown that both the planet and the life upon it were of much greater antiquity than a literal reading of Genesis would suggest. This had a strong impact, since it was an accepted principle in the Catholic Church, at least since St. Augustine, that Scripture should not be read in a way contrary to what is known with certainty from reason and experience. Even so, the Church’s forbearance with regard to evolution is remarkable.

The full story of how the Church did react to the theory of evolution is told in a fine new book by Kenneth W. Kemp, professor emeritus of philosophy at the University of St. Thomas in Minnesota. The Origins of Catholic Evolutionism, 1831–1950 is a monumental work of scholarship: massively researched, comprehensive, nuanced, restrained in judgment, and clearly written. Its focus is on the ideas of the many Catholic scientists, theologians, and philosophers who either advocated evolutionism in some form or at least defended its ­compatibility with Catholic belief, and on how their ideas were received within the Catholic Church and by her magisterium. Kemp seems intimately familiar with the vast body of primary sources, from the writings of the Catholic evolutionists and compatibilists themselves to discussions of their ideas in ­contemporary Catholic periodicals, encyclopedias, theological textbooks, and internal Vatican deliberations.

If there was one key factor in the Church’s restraint, it would seem to be the prudence of the popes of that era with regard to natural science. The reigning pope when Darwin published On the Origin of Species in 1859 and The Descent of Man in 1871 was Pius IX, who famously denounced eighty errors of the modern world in his 1864 Syllabus Errorum, and yet not one of the eighty concerned evolution. Nor was evolution mentioned, directly or indirectly, in the decrees of the First Vatican Council, which Pius IX convoked in 1869.

The next four popes, including the fiercely anti-Modernist Pius X, likewise made no pronouncements about evolution. In the case of Pius IX’s immediate successor, Leo XIII, the reason can be guessed from a comment made in a private letter in 1892, quoted by Kemp:

There are restless and peevish spirits who press the Roman Congregations to pronounce on matters that are still uncertain. I am opposed to that; I will stop them because it is not necessary to prevent scholars from doing their work. One must give them the time to suspend judgment or even to make a mistake. Religious truth can only gain from that. The Church will always be in time to put them on the right road.

Though there was no shortage of “peevish spirits” in the Church who wanted to see evolution condemned root and branch, they were by no means predominant. There was a wide spectrum of attitudes within the Church at all levels, with regard both to evolution as a scientific ­theory and to the philosophical and theological issues that it raised.

Some (under the influence of Aristotelian biology and metaphysics) thought the evolution of species was impossible. Others thought the scientific evidence for some evolutionary change was strong but doubted whether it could have produced the great qualitative differences that exist between plants and animals or among kinds of animals. Still others were open to the idea of the common ancestry of all living things on earth but drew the line at man himself, either because they found the idea of an animal ancestry for man “repugnant” or because they thought Genesis 2:7 taught that God had formed the first human body directly from the dust of the earth. And finally, there were many who had no such reservations but agreed with the English biologist and Catholic convert St. George Mivart (“St. George” being his given name), who in On the Genesis of Species (1871) defended the idea of a natural evolution of species, all the way up to and including the human body, as “perfectly consistent with the strictest and most orthodox Christian theology.” Here the distinction between the human body and soul is crucial. No Catholics, whether scientists or theologians, were arguing that evolution, or any purely material process, could produce the human spiritual soul, with its powers of intellect and will. All agreed on the metaphysical impossibility of that, and on the Catholic teaching that the spiritual soul is directly created by a supernatural act, not only in the first human beings, but in every human being.

Though many Catholics denied or doubted Darwin’s theory, or aspects of it, on ­scientific, philosophical, or theological grounds, few theologians thought that the idea of evolution of species was per se contrary to Scripture. Writing in the influential Catholic journal the Dublin Review in 1896, Fr. David Fleming, OFM (later to be secretary to the Pontifical Biblical Commission), wrote, “the great majority of Catholic theologians hold . . . that evolution in itself is not excluded by the text of Genesis.” Nor was it considered by most ­theologians to be contrary to the Catholic faith. As early as 1868, we find St. John Henry Newman writing in a letter:

We do not deny or circumscribe the Creator . . . if we hold that He gave matter such laws as by their blind instrumentality moulded and constructed through innumerable ages the world as we see it. If Mr Darwin in this or that point of his theory comes into collision with revealed truth, that is another matter—but I do not see that the principle of development, or what I have called construction, does.

Among theologians and Church authorities, doctrinal concerns were focused primarily on the origin of man, and in particular how the bodies of the first humans came to be. Some theologians argued that it was authoritative Church teaching (even if not de fide) that Adam’s body was created directly and ­immediately from the dust of the earth. For example, in a review of Darwin’s Descent of Man (1871) in the Dublin Review, Fr. John Cuthbert Hedley, OSB (who was later made bishop), wrote:

It is not contrary to Faith to suppose that all living things, up to man exclusively, were evolved by natural law out of minute life-germs primarily created, or even out of inorganic matter. On the other hand, it is heretical to deny the separate and special creation of the human soul; and to question the immediate and instantaneous (or quasi-instantaneous) formation by God of the bodies of Adam and Eve—the former out of inorganic matter, the latter out of the rib of Adam—is, at least, rash, and, perhaps, proximate to heresy.

On the other hand, Newman wrote in a letter to Edward Pusey in 1870:

‘All are dust’—Eccles iii, 20—yet we never were dust—we are from fathers, why may not the same be the case with Adam? I don’t say that it is so but if the sun does not go round the earth and the earth stand still, as Scripture seems to say, I don’t know why Adam needs to be immediately out of dust.

Many theologians were willing to concede that natural processes, including evolution, may have played a large role in the formation of the first human bodies, but some of them thought that direct divine intervention would also have been required to make those bodies capable of receiving a spiritual soul.

The numerous complex issues concerning evolution and the contending views about them were vigorously, but civilly, debated in many Catholic fora, both popular and scholarly, over many decades. Kemp quotes a passage from H. L. Mencken that gives an interesting glimpse of this. In commenting on the Scopes Trial of 1925, Mencken, no friend of any religion, wrote:

The current discussion of the Tennessee buffoonery, in the Catholic and other authoritarian press, is immensely more free and intelligent than it is in the evangelical Protestant press. In such journals as the [Commonweal], the new Catholic weekly, both sides were set forth, and the varying contentions are subjected to frank and untrammeled criticism. Canon de Dorlodot whoops for Evolution; Dr. O’Toole denounces it as nonsense. . . . The [Commonweal] itself takes no sides, but argues that Evolution ought to be taught in the schools—not as an incontrovertible fact but as a hypothesis accepted by the overwhelming majority of enlightened men. The objections to it, theological and evidential, should be noted, but not represented as unanswerable.

Of course, limits on freedom of discussion among Catholics could be placed by the Roman Congregations, specifically the Congregation of the Index of Prohibited Books (which existed in one form or another from 1559 to 1966), the Pontifical Biblical Commission (founded in 1902), and the Holy Office (called the Sacred Roman and Universal Inquisition from 1542 until 1908, and the Congregation for the Doctrine of the Faith after 1965). It was the Congregation of the Index that was most active with regard to evolution, especially in the 1890s, when it leaned decidedly against ­evolutionary ideas. However, unlike the Holy Office, it was not empowered to issue doctrinal pronouncements or condemnations, but only to act against specific books, generally by listing them on the Index without publicly stating its reason for doing so.

Though some books written by advocates of evolution were placed on the Index, it was generally for reasons other than the author’s advocacy of evolution: Some of these books were either explicitly anti-religious or atheistic and ­materialistic in outlook; others were reductive in their philosophical anthropology or gave inadequate accounts of the difference between human beings and lower animals; still others proposed ­unacceptable theories about the inspiration of Scripture.

There were, however, two important cases in which the Congregation of the Index did restrict books for their views on evolution: L’Évolution restreinte aux espèces organiques (second edition, 1891) by Dalmace Leroy, OP, and Evolution and ­Dogma (1896) by John A. Zahm, CSC. What offended the Congregation about these two books was primarily their position on the origin of the human body, which was essentially that of Mivart, namely that a Catholic could hold that the first human bodies (though not the soul) had arisen through evolution without any direct supernatural intervention. (It should be noted that Mivart’s own book was never censured, and ­Mivart was awarded an honorary doctorate in philosophy by Pope Pius IX after its publication.) In a key passage, Leroy wrote:

The human body is composed of matter and form. And the soul, its substantial form, comes directly from God, of course. But the matter, where does it come from? It comes from the slime of the earth, that is also certain, as the Church and tradition clearly teach. But was the human soul infused immediately into this slime, that is to say, without any preparation? And if it underwent preparation, as Genesis indicates, could it not have been evolution which effected it? That is the question that may still be asked.

Unfortunately, the Congregation thought otherwise, persuaded by a few “peevish spirits,” especially a Dominican theologian bitterly opposed to evolution named ­Buonpensiere (which, ironically enough, means “good thought”). Even in the cases of Leroy and Zahm, however, the Congregation decided not to put books on the Index but to command the ­authors—priests under religious obedience—to ­disavow them publicly and do what they could to remove them from circulation.

The issue came to a head in the 1920s and ’30s, when the Holy Office (which had not previously involved itself in evolution cases, but had recently been given responsibility for the Index of Prohibited Books) became concerned about the views on the evolution of the human body of the theologians Henry de Dorlodot, who died in 1929, and Ernest Messenger, whose 1931 book carried on Dorlodot’s work. After consultations that dragged on for several years, the Holy Office on June 10, 1936, voted eight to two that Messenger should be enjoined to withdraw his own book from sale. The next day, however, Pope Pius XI decided to accept the minority’s recommendation that the book be ignored for the time ­being; he also requested “an authoritative account of the scientific data of ­anthropological paleontology.” That report, which he received a year later, concluded that “our best attitude with regard to the question of the descent of man, so far as his bodily form is concerned, must be a patiently expectant one, with an evenly balanced mind, waiting till further discoveries and ­researches give us . . . a decisive result.” In the end no action was taken with regard to Messenger or Dorlodot. At no point did the Holy Office consider issuing condemnations of propositions connected with ­evolution.

The watershed for Catholic evolutionism came on August 12, 1950, when Pope Pius XII issued the encyclical Humani Generis, in which he wrote:

The Teaching Authority of the Church does not forbid that, in conformity with the present state of human sciences and sacred ­theology, research and discussions, on the part of men experienced in both fields, take place with regard to the doctrine of evolution, in as far as it inquires into the origin of the human body as coming from pre-existent and living matter—for the Catholic faith obliges us to hold that souls are immediately created by God.

Clearly implied is the position that Mivart put forward in 1871: that a natural, evolutionary origin of man at the bodily level is not contrary to the Catholic faith.

Humani Generis did not, of course, resolve the numerous important theological and philosophical questions surrounding evolution that have been discussed in the Church from Darwin’s day till now. Indeed, there has been relatively little official guidance for the ordinary Catholic concerning how to navigate these questions. Evolution is not explicitly mentioned, for example, in the Catechism of the Catholic Church. And antagonism to the idea of evolution—even the evolution of plants and animals—has begun to seep into some corners of the Catholic Church.

An excellent new book by Daniel Kuebler, professor of biology at Franciscan University of Steubenville, titled Darwin and Doctrine: The Compatibility of Evolution and Catholicism, is therefore timely. In the first five chapters, Kuebler (who I should note is a fellow officer of the Society of Catholic Scientists) reviews the Church’s understanding of the relation between faith and reason, the history of her engagement with evolution, the ­theology of creation, and the ­science of evolution, helpfully clearing up common misconceptions along the way. Each of the remaining chapters addresses an important theological or philosophical issue raised by ­evolution.

Chapter 6 is about the role of “chance,” which many see as opposed to God’s providence. Kuebler notes that this is hardly a new issue or one that arose only in relation to evolution: “Any cursory glance at our individual histories reveals a staggering number of chance events upon which our existence is predicated.” But, whether in evolution or in everyday life, chance in no way detracts from divine providence, since

a God who sustains creation at every moment, [and] who allows created things to act as causes in their own right, also sustains all the chance encounters that occur among created causes during the evolutionary process. Those chance events that we actually observe in evolution are his plan, although from God’s atemporal perspective, it’s hard to call such events “chance” at all.

Others think that chance undercuts arguments for design or purpose in nature, as if it rendered everything in nature adventitious. Kuebler notes, however, that the role of chance is greatly overemphasized in most discussions of evolution, which in reality is an interplay of chance and order. For biological or evolutionary processes to occur at all requires a great deal of order at the level of physics and chemistry, as Kuebler illustrates with many examples. He discusses the order in the properties of atoms, reflected in the Periodic Table, which allows them readily and spontaneously to form amino acids and the other building blocks of life. He shows how much of the structure of proteins follows from strong physical constraints on how they fold up into “alpha helices” and “beta sheets.” At a deeper level, the fundamental laws of physics appear to have many fortuitous features, called “anthropic coincidences” by physicists, that seem designed to make the existence of living things possible. All of this underlying order powerfully shapes evolutionary outcomes.

Kuebler argues that in evolution “it is the order that exists in nature that is primary, and it is the chance aspects of the process that are secondary. In fact, the chance aspects of evolution, by and large, operate in such a manner as to uncover” all the biological possibilities allowed by this order. This thesis is dramatically illustrated by the ubiquitous phenomenon of “convergent evolution,” in which evolution keeps stumbling upon the same designs and innovations over and over again. “It turns out that there is hardly a structure or behavior that one can find in living organisms that is not convergent.” The camera-like eye, for instance, has evolved independently “at least seven different times, including in vertebrates, cephalopods, marine annelids, gastropods and even jellyfish.” Meanwhile, “Ovoviviparity, in which the egg is retained within the female reproductive tract prior to a live birth, has evolved over one hundred times [independently] in lineages as diverse as amphibians, reptiles, and fish.”

The next chapter deals with objections perennially raised by some Aristotelian-Thomistic philosophers against even the metaphysical possibility of species’ evolving. Such discussions can hardly avoid arcana, but Kuebler does a good job of explaining why these objections are not insuperable, making use of the insights of modern Thomists ranging from Jacques Maritain to Mariusz Tabaczek, OP. Indeed, he shows how evolution can be seen as a process by which the potencies inherent in the material world are actualized.

The next two chapters deal with the many complex questions relating to human origins and original sin. In chapter 8, Kuebler reviews both Catholic theological anthropology and the current state of our rapidly increasing knowledge of extinct hominins and early man. He introduces the crucial distinction between “biological humans,” that is, those who are human according to some physiological criteria, and what some authors have called “theological humans” or “true humans,” that is, those endowed with immortal rational souls. Whereas Homo sapiens as a biological species arose (as all species do) in a gradual way by the spread of new traits within populations, the appearance of beings who were theologically human must have been sudden, logically speaking, as one either has an immortal soul or hasn’t. And though genetic evidence clearly indicates that the ancestral population of biological humans was never less than many thousands, that does not necessarily imply that the first theologically human beings—those who “fell”—had to be more than two in number. Various authors, both Catholic and Protestant, have speculated that God might have conferred a rational soul initially upon just one pair out of an ancestral population of biological humans, as well as upon the descendants of that pair. There could be scientific as well as theological reasons to entertain this possibility. It would dovetail, for instance, with the suggestion of Noam Chomsky and Robert C. Berwick in their 2015 book Why Only Us that the neurological basis for the human language capacity might have appeared at first in just a few individuals.

In other words, the biological polygenism implied by the scientific evidence does not logically preclude the theological monogenism—the one pair of original “true humans”—taught in Humani Generis. It should be noted, however, that many theologians have suggested that Pius XII did not intend definitively to condemn theological polygenism. Rather than saying that Catholics must reject it, he said that they must not “embrace” it, a formulation that allows for suspension of judgment. And he gave as the grounds for not embracing it the fact that “it is in no way apparent” how a multiplicity of first true humans could be reconciled with the Church’s teachings on the fall of man and original sin; but he did not rule out its becoming apparent at some later time. The idea that Pius XII intentionally left the door open to further development is supported by Kemp’s recent study in the Vatican Archives of the preliminary drafts of that encyclical, which have only recently been made available to scholars. The preliminary drafts were more definitive in their rejection of theological polygenism than the final text.

Many Catholic theologians have acted as though the door that Pius XII left slightly ajar were wide open and rushed through it to embrace theological polygenism. Though doing so might help in resolving certain issues, such as whom the children of Adam and Eve could have married, it creates others, such as how to understand St. Paul’s statement that “by one man sin entered into the world.” Some caution seems still to be justified.

In chapter 9, Kuebler masterfully treats the subtle questions evolution raises about original sin and its consequences. He notes that some theologians have suggested that original sin is just the fact that humans have inherited from our hominin forebears the natural drives and impulses that often lead to aggression, lust, and selfishness. However, as Kuebler explains, those drives and impulses are not in themselves faults, nor the result of the fall of man. Rather, what resulted from the fall was the loss of those “preternatural gifts that allowed [the first true humans] to live in a state in which these drives were perfectly ordered [by reason] toward the good.” Similarly, he dispels some misconceptions about the sense in which death is a consequence of the fall. The human bodies that arose through evolution were just as naturally mortal as those of our animal ancestors but were conditionally granted immunity from death as a preternatural gift. Kuebler quotes St. Augustine: “It is one thing . . . not to be able to die, like [the angelic] natures which God created immortal, while it is quite another to be able not to die; and this is the way the first man was created immortal, something to be granted him . . . not by his natural constitution.”

Many have wondered how original sin can be inherited (or, in the words of the Council of Trent, acquired by “propagation, not by imitation”). It is surely not a physiological trait passed on genetically. But if it is a spiritual trait, how can it be propagated, given that the spiritual soul of a child is not produced by the parents but created directly by God? Kuebler helpfully explains that the fallenness we inherit is not a positively existing thing, susceptible of transmission, but a lack. What is passed on to us biologically is an animal nature with all its evolved drives and urges, raised indeed to the level of rationality by God, but without the gift of sanctifying grace and the preternatural gifts that were bestowed on the first humans and forfeited by them. “This is the state in which we find ourselves, saddled with the burden of attempting to order our desires without the aid of [those original gifts].”

At the end of his book, Kuebler presents a number of very interesting theological observations, which include some striking parallels between evolutionary history and salvation history. In neither history, for example, do we find a smooth triumphal progression, but rather vicissitudes, reversals, and even disasters that throw what had seemed to be the divine plan far off course. In evolutionary history, there were dead ends, environmental catastrophes, and mass extinctions. In salvation history, there were the sin of Adam, the Israelites’ lapses into idolatry, the Babylonian captivity, the destruction of the First Temple, and Judas’s betrayal. And yet, from failure, destruction, and death, new life arose.

Many Catholics and other Christians are just as unsure what to make of evolution theologically and how to integrate it into an orthodox Christian view of the world as were their predecessors in the nineteenth century. They will be helped immensely by these two excellent new books and the many fascinating discussions and analyses that they contain.

The post True Humans appeared first on First Things.

How Science Trumped Materialism (ft. Michel-Yves Bolloré) https://firstthings.com/how-science-trumped-materialism-ft-michel-yves-bollore/ Mon, 12 Jan 2026 10:00:00 +0000 https://firstthings.com/?p=123184

The post How Science Trumped Materialism (ft. Michel-Yves Bolloré) appeared first on First Things.

In the latest installment of the ongoing interview series with contributing editor Mark Bauerlein, Michel-Yves Bolloré joins in to discuss his recent book, God, the Science, the Evidence.

The conversation is embedded below. For your long-term convenience, subscribe via Apple Podcasts or Spotify.

The Failure of Bioethics https://firstthings.com/the-failure-of-bioethics/ Mon, 12 Jan 2026 06:00:00 +0000 https://firstthings.com/?p=122596

The post The Failure of Bioethics appeared first on First Things.

When in April of 2025 the Hastings Center for Bioethics (the oldest bioethics think tank in this country) revealed its new five-year strategic plan, one of six “values” specified as central to the center’s work was “inclusiveness and diversity.” “We engage with a wide range of perspectives and values with humility,” the plan said, and “we invest in diversifying the field of bioethics.” Were it actually achieved, this investment in diversity would be a significant contribution. I suspect, though, that the center has a steep hill to climb if it is really to incarnate that value in its work, for this would mean encouraging a bioethics that includes diversity of viewpoint—and that, alas, is seldom what those calling for diversity seem to want.

The center is not alone in emphasizing diversity. More recently still, in late June of 2025, in an editorial published in the American Journal of Bioethics (AJOB), nine bioethicists, some of them well known, contend that (as the editorial is titled) “Bioethicists Must Push Back Against Assaults on Diversity, Equity, and Inclusion.”

To call the editorial disappointing would be an understatement, though for anyone who has spent years in the academy it is surely not surprising. Trite sentences abound. “Will we be on the right side of history?” we are asked, the assumption evidently being that there is such a side and that we can identify it. And, pulling out a very old playbook, the authors exhort us to “speak truth to power.” Evidently, bioethicists must courageously bar the door against the Nazis taking aim at under-represented groups in America, even if this resolution has more to do with political commitments than with medicine or science.

One need hardly be supportive or fond of Donald Trump to wonder whether bioethics as currently practiced can speak authoritatively about the character of the good life. How strange it is that a bioethics that often underscores the importance of autonomy for patients, the importance of having their persons respected, should set itself against the concerns of so many of these same people when they are not patients but simply citizens. More than a decade ago the sociologist John Evans was already observing that many people had lost confidence in the ability or willingness of bioethicists to represent their values and beliefs. The gap seems to have widened since then. One may reasonably wonder whether bioethicists have any real claim to expertise about the character of the good life, and aggressively asserting what are, in large part, political claims only exacerbates the problem.

The editorial in AJOB is, in many respects, only the culmination of a long process of development. When what eventually came to be called bioethics first emerged from medical ethics and began to garner attention in this country, it had no unified voice and was often marked by disagreement. Those disagreements were about the ends or goals to be pursued in medical care and research. Deep, and sometimes perhaps unanswerable, questions about the meaning of our humanity were involved. Both religious and philosophical visions of human nature were in play, needing to be unpacked, debated, and refined. The scholars writing in this newborn field did not think of themselves as bioethicists. They were grounded in many well-established disciplines—religion, philosophy, medicine, psychology, law—out of which they approached the humanistic problems being raised by modern research medicine.

One thinks of Willard Gaylin, inviting us to contemplate the idea of “neomorts”; of Daniel Callahan, asking how long we should want to live and what a peaceful death would be like; of Joseph Fletcher, suggesting that technological reproduction was more in keeping with our human nature than old-fashioned procreation; of Paul Ramsey and Richard McCormick, debating the use of children as research subjects; of Renée Fox and Judith Swazey, offering a thick and (to some extent) dissenting examination of developing transplantation technology; of Hans Jonas, exploring what it means to be a living organism; of Leon Kass, unpacking the difference between procreation and reproduction; of William F. May, reflecting on the meaning of the newly dead body; of Loretta Kopelman, exploring the meaning of freedom and competence for those with psychiatric problems; of H. T. Engelhardt, drawing attention to the effect on medical care of social pluralism; of Jay Katz and Eric Cassell, reflecting on the relation between patients and physicians.

Perhaps that is enough to make the point. This was a bioethics of which it could truly be said, “We engage with a wide range of perspectives and values with humility.” From various angles, grounded in different traditions of thought, and focused on different issues in medicine, those scholars were reflecting on the meaning of our humanity. They did not begin by assuming that we know what it means for us to live well and flourish. They did not have a common political agenda. They probed and examined their questions from various angles of vision. They did not seem to think that a principle such as justice—thin and undeveloped as it is in the recent editorial—could be of much use when abstracted from some deeper understanding of our humanity. In short, they were not just political advocates seeking to speak truth to power. And at least some of them may well have anticipated that they might be on the losing side of history.

Now, however, it seems that we all know what our goals should be and need only become more politically active and effective in order to bring them to fruition. Such activity, especially in the shrill tone one hears in the AJOB editorial, is unlikely to be very helpful in persuading those who hold different views to change course. Nor is the suggestion that opponents of DEI are filled with “racist hatred” or are something like Nazi supporters a very helpful or measured approach (mirroring in its own different register the rhetorical tone adopted too often by Trump). It is political activism, done in a way that is likely to appeal only to those who need no reasons or arguments to persuade them. Only that can explain why these bioethicists write to oppose the administration’s attack on DEI initiatives in general, not just on those that may directly affect the work of bioethicists. One need not think that the sledgehammer the Trump administration’s Department of Health and Human Services has taken to research funding is wise in order to be less than impressed with the program for bioethics put forward in the editorial.

Moreover, there is little reason to suppose that bioethicists are uniquely qualified to unpack and clarify what the moral principle of justice means or requires, even if, as the editorial puts it, “justice continues to be a core commitment of bioethicists.” We cannot know what justice means or requires unless, for example, we first know who counts as a person to whom the principle of justice applies. We cannot thoughtfully pursue equity until we have worked through and clarified the relationship between equality and equity. More generally, what justice requires may be different in different spheres of our life, nor need it always produce what looks like simple equality. Merit may rightly be considered in academic institutions when, for instance, we are training physicians or engineers. Democratic elections result in unequal but legitimate distributions of political power. And surely we should realize that a commitment to restorative justice—such as the AJOB editorial announces—is simply that: a commitment, not an argument. 

“Embedding DEI in our research, teaching, and service,” as the AJOB editorial recommends, is likely only to overlook the complexity of justice and narrow the impact of bioethics. It may be, after all, that just treatment of others would include recognizing and making room for the diverse intellectual perspectives they bring to the study of questions at the heart of bioethics.

There are, to be sure, complex and difficult questions embedded in the nondiscrimination principle enunciated in Title VI of the 1964 Civil Rights Act. Programs or organizations receiving federal funds may not discriminate against individuals on the basis of race, color, or national origin. This does not mean that one may not, for example, seek a diverse pool of applicants for a position. But, again, the AJOB editorial passes over the difficult task of articulating this difference. In focusing only on equitable representation, it loses the commitment to equal treatment that genuine nondiscrimination requires. If this is not a failing of the editorial, we need to see the argument, not just a broadside against those who think differently. Only thus can a diverse range of thinkers and perspectives actually be welcomed within the community of bioethicists.

According to the AJOB editorial, “bioethics emerged in the shadow of World War II.” True as that is chronologically, it is only part of the truth conceptually. Physicians had for a long time understood it to be part of their calling to attend to the thick moral dimensions of care for the ill. And, as David Rothman noted in one of the first accounts of the emergence of bioethics, a wide range of (often secularizing) social changes first made physicians strangers to their patients and then brought new people, other strangers, to the bedside along with physicians. Some of those new strangers became bioethicists, and one might suggest that discerning the meaning of care for those who, in the midst of rapidly changing social circumstances, were facing some of the deepest problems of life, was every bit as integral to the emergence of bioethics as was an underdeveloped principle of justice.

After several decades of growth and development, bioethics as understood today by many who call themselves bioethicists shows little interest in the meaning of our humanity and the ways it is probed from a variety of perspectives and within a variety of disciplines. It has a very foreshortened sense of its own history, missing the ways in which it grew from and expanded upon concerns that had always been internal to the practice of medicine. It assumes that we know where the human good lies and must simply find ways to achieve that good. It assumes that this achievement is largely to be pursued through political activity. And now it even seems to suggest that those who disagree—and certainly those who support the “wrong” political positions—are morally corrupt.

What happens to those who disagree? Some may be put off by the self-righteous tone that is presented as essential to bioethics. Some may simply lose interest in what passes for official bioethics. Some may look elsewhere and turn to other disciplines in search of the kinds of insight once offered by bioethics. But all of us will be the poorer for what we now mostly lack: the sort of probing reflection and civil discourse among a genuinely diverse array of voices from different disciplines and different points of view, grounded not in a political program but in a vision (or visions) of human nature, that once made bioethics both interesting and significant.

Cancer and the Cure of Souls https://firstthings.com/cancer-and-the-cure-of-souls/ Mon, 12 Jan 2026 06:00:00 +0000 https://firstthings.com/?p=122689

The post Cancer and the Cure of Souls appeared first on First Things.

“I have cancer,” the elderly woman announced from her hospital bed high above York Avenue in Manhattan. “But cancer is not the sickness. Cancer is the cure. Because cancer brings you close to God.”

A Catholic priest is ordained to give God. The priest exists to mediate in the name and power of the one Mediator—to be an instrument who gives glory to God in sacrifice and the grace of God to souls. At every ordination, each new priest, just moments after being ordained and clothed in priestly vestments, kneels before the bishop, who slathers the priest’s palms with perfumed chrism and exhorts: “May the Lord Jesus Christ, whom the Father anointed with the Holy Spirit and power, guard and preserve you, that you may sanctify the Christian people and offer sacrifice to God.” Such became the ambit of my activity henceforth. Such became the concerns of my heart.

For every newly ordained priest, hands still fragrant, the first months of ministry are precious. There is the awe of the first confession and the fanfare of the first Mass, and perhaps other Masses of Thanksgiving and celebrations, too. But it is always the deeper mystery at hand that is so striking. For the new priest begins to perform acts he previously could not have performed in any respect. I had prepared for these acts for some seven years but had never actually done them. One cannot simulate instrumentality. So I, the new priest, must act. I must do the priestly things, at once confident in the God who has ordained me and humble before the God who is yet saving me. “Understand what you will do, imitate what you will celebrate, and conform your life to the mystery of the Lord’s Cross,” the bishop also told each of us, passing into our anointed hands a paten and chalice filled with bread and wine, the principal tools of our new trade.

In my Dominican province, the Province of St. Joseph in the Eastern United States, it is common for newly ordained friars to be tasked with two months of intense pastoral ministry before returning to Washington, D.C., for a final year of studies in theology. Usually, one priest is sent to the Dominican Healthcare Ministry, an apostolate and bioethics center based on the Upper East Side of Manhattan that provides spiritual care to patients at several hospitals throughout the city. Such was my lot in the summer of 2024.

For the new priest, hospital ministry is a plunge into the deep. First, compared to the priest’s ordinary orbit, the hospital is strange. The smells and bells of the patient floor are not those of the sanctuary. A white thirteenth-century habit with a fifteen-decade rosary swinging from the side announces itself among scrubs and white coats. It signifies a different kind of physician-ship, ordered to a different though complementary end. The body, after all, exists for the sake of the soul, and the soul for the sake of God. Order and priority: The good of grace in just one soul, St. Thomas Aquinas explains, exceeds the good of nature in the entire universe.

Second, there is no warm-up at the hospital. My orientation meeting at Memorial Sloan Kettering Cancer Center was interrupted by an emergency call for a patient who was actively dying. I was not the priest on duty, but I was the priest most immediately available. So I was instructed to go up—orientation could wait. Later that day, a Spanish-speaking patient returned to the sacraments for the first time in some sixty years. These sorts of things, I would soon learn, occur frequently. God is serious about his desire to go the distance, to save all and bring all to full knowledge of the truth, especially in the final moments. Which actually happened on one emergency call at 2:30 a.m., when a woman expired just as the last drops of baptismal water graced her head, poured from a pill cup no less, in the presence of fourteen members of her family. Heaven enveloping earth—now, at the hour of death, right before our bleary eyes.

Memorial Sloan Kettering, which the Dominican Healthcare Ministry serves day and night all year round, excels in its area of expertise. Everyone who has been to Sloan, whether as patient, family, employee, or visitor, knows this. Go to any floor of the main hospital at 1275 York Ave., and competence abounds. The environment is energetic, hopeful. Doctors conduct research and perform trial treatments that are available nowhere else, and adept and cheerful nurses wear T-shirts that read, “imagine a world without cancer.”

Yet the obvious, uncomfortable truth remains that the human mortality rate is absolute. Modern medicine, with all its marvels, can heal and enhance human life but, in the end, only delays the inevitable. “If we ever do achieve freedom from most of today’s diseases, or even complete freedom from disease,” wrote Lewis Thomas, onetime president of Memorial Sloan Kettering, “we will perhaps terminate by drying out and blowing away on a light breeze, but we will still die.”

Cancer is peculiar among diseases that kill. Viruses, bacterial infections, parasites—these cause death from without. Heart disease, organ failure, and genetically transmitted illnesses are closer to cancer, for in these cases, the body gives way of itself. And then there are the autoimmune diseases, which, like cancer, attack the body from within, but, unlike cancer, do so through an immune system tricked into assailing what is itself healthy.

For all the harm these afflictions wreak, the physical evil at work in cancer is more profound. For cancer is life turning against itself at its core. The DNA of a healthy cell—that most basic ­material principle of life—mutates, and then the cell divides and divides and divides, multiplying with chronic vigor. These new mutant cells, which can begin in almost any tissue or organ, do not buttress the life of the organism of which they are part, as cells are designed to do. No, they mock it. Like parasites produced by one’s own body, cancer cells can evade immuno-detection and shirk the natural death cycle that healthy cells obey. In turn, cancer cells can replicate infinitely, inundating one’s body with what is inimical to one’s body. It is sick irony: By their immortality, cancer cells reap our mortality—from manic mitosis to malignant mass to metastasis to the morgue. Only then does cancer finally die.

Considered in the light of divine revelation, cancer may well be the most primordial lethal curse consequent on Adam’s sin. Though cancer does not figure much in the Scriptures (the Philistines suffer an outbreak of tumors after capturing the Ark of the Covenant in 1 Samuel 5), death figures from the outset: “You shall not eat of the fruit . . . neither shall you touch it, lest you die” (Gen. 3:3). But they touched and ate, and so we who are dust must return to dust. If a man is not killed by another organism, a natural disaster, or a freak accident, or by the failure of his own organism, the very principle of his life will revolt against him. His own cells will destroy him.

When a priest leads an OCIA class or a marriage preparation meeting, death is mostly an abstraction. It is something we need saving from, yes, but something still far off. In the hospital, however, and especially the cancer ward, the Catholic priest must deal with death directly. Usually, there is time to broach the subject with patients and family: the diagnosis, the prognosis, the ultimate implications, both physical and spiritual. And naturally, patients vary in their willingness to face what lies before them. Some are aware and preparing: “I trust in God, Father, and am coming to terms with this.” Others exhibit a stoic indifference: “We all have to die.” Yes, but what of dying well? Still others stand in denial: “I’m fine. I’ll beat this and get back out there. I have more things to do.” And then? Or if not?

No one can avoid ultimate questions. But in our age, so underinformed on spiritual matters, these questions have become all the more intimidating. When the crucible comes, many fumble about, despairing of real answers and trying to “meaning-make” their way to a noble death. Yet the truth is that we are not left to our own devices in our search for answers. God himself has already made the meaning. Moreover, God has commissioned the priest to manifest that meaning—not because the priest of himself has particularly keen insights into the nature of disease or the meaning of life and death, but because the priest bears God’s insights, which impart God’s peace. Because there are, in fact, real answers: God’s providence is infallibly good. No diagnosis, and no response to treatment, is an accident in his loving plan. And God really has appointed death as our ultimate and universal punishment since Adam, though only to unveil a greater glory: that he should redeem us from eternal death in and through the death and resurrection of his Beloved Son, and in and through our own death and resurrection in him. These soaring, saving truths converge in chiaroscuro mystery: Earthly life ends in darkness, even as the light of eternal life dawns. And here we stand together—patient, family, medical staff, priest—at the threshold.

The austerity of this mystery struck me at the beginning of the summer, at that very first orientation-interrupting call. But the impression only intensified over the next two weeks, when, almost daily, I would visit Jack, a gregarious and athletic twenty-three-year-old man from Long Island suffering from neuroblastoma, a rare, typically childhood cancer. I met him on my second day at Sloan, three weeks exactly from the ordination and only about an hour before celebrating a Saturday evening Mass of Thanksgiving across the street at the Dominican Church of St. Catherine of Siena. Jack’s referral sheet read “declining condition,” and when I walked in, he was very weak, and his mother, his older sister, and her fiancé were seated around his bed. The room was heavy. I didn’t know it, but the night before, one of his doctors had reported that Jack’s liver was failing, and all options were exhausted.

Jack had been battling for more than two and a half years, since his senior year of college, but in recent months, his cancer had pulled ahead decisively, though only physically. For after his initial diagnosis, Jack had come completely alive in the Catholic faith of his upbringing. He began, whenever he was able, to join his father at daily Mass. He studied the Scriptures, watched apologetics videos, and read theology and the lives of the saints. And then he discussed these things over long walks with his father, and while hanging out with his sister and brother and friends, and at a men’s prayer group at a nearby parish, and in so many priceless moments with his mother, who, herself a nurse, cared for him indefatigably at home and in the hospital. The result was that Jack began to see and embrace God’s purposes at work in and through his sufferings. And with his humor and charm, Jack spurred others to join him—from his extended family and large circle of friends, to fellow patients in Sloan’s pediatric unit, to his own doctors and nurses, whom he spared neither the daggers of his wit nor the double-edged sword of the gospel. He once dared ask his mother, who was grappling with his worsening state: “Mom, whom do you love more, God or Jack? You know the right answer.” And when there was hesitation, he doubled down and asked again.

Eleven days after I met Jack, God forced our hands on this central question. He had prepared Jack with all the rites of the Church and a speck of Holy Communion for a final meal. And then, early in the fourth watch of the night, as Jack’s family and nurses kept bedside vigil, God consumed Jack in a blazing fire of divine love and human suffering (Heb. 12:29, Song of Sol. 8:6). It was a pleasing offering, a holy death: one beloved son by divine grace, conformed to another Beloved Son by divine nature. Had Jack recovered—an outcome for which so many had prayed and from which much goodness and spiritual fruit surely would have come—Jack’s life would have been too human, too this-worldly, too predictably heroic. God’s ways are more mysterious, more cruciform, more sublime.

The night Jack died, he appeared in near-perfect health in the dreams of two of his closest friends. “Jack, you look great. You beat the cancer!” one friend exclaimed in his dream. “Yeah, I pulled through,” Jack affirmed, but with his eyes down, as if he knew something his friend did not yet know. In the morning, both friends awoke to a group text from Jack’s younger brother delivering the news. Mysterious, cruciform, sublime. Which spoke for itself at his wake and funeral, packed with hundreds of family, friends, peers, even nurses from Sloan, each pondering what Martha and Mary pondered at their loss: “Lord, if you had been here, my brother would not have died” (John 11:21, 32). But the Lord had in fact been there all along, right in the thick of it, and our task was to ponder how—to make sense of what God had just done through Jack, and of what Jack had just done with God.

“Did I not tell you that if you believe, you will see the glory of God?” Jesus said to those gathered outside Lazarus’s tomb before he raised the man from the dead (John 11:40). The responsibility of the Catholic priest is to point to the glory of God—to help others see what God is really doing beneath the surface, at the deepest level of things—and to make this glory really present through the sacraments.

The preeminent virtue that the priest must bring to the hospital, then, is living faith. “We walk by faith and not by sight” (2 Cor. 5:7), and never more intensely than at the hour of our death, the hour of glory (John 17:1–3). Because at death, whether one’s own or that of a loved one, the sheer invisibility of the gospel addresses us in its purest form. The naked eye sees only a living body, racked, about to reach its end. And as death closes in, an entire life’s worth of misdeeds and regrets can return in a frontal assault. The horror. What comes next—for me, or for my loved one?

But the mind illumined by faith beholds an eternal horizon opened up by the mercy of God at work in the cross of Christ. By this faith, the hospital priest understands that he spends his days at the foot of Calvary, “the place of the skull.” He looks death in the face again and again, though never alone. Rather, the priest stares down death together with the Living One, the Conqueror, who died but now holds the keys of death and Hades in victory (Rev. 1:18).

So poised, the priest testifies to what he sees when he looks at death with Christ. With a conviction and compassion that render him credible, he encourages the patient lying before him that death is not the end, that the deathbed is, in truth, an altar, a living crucifix, and that Christ looks back in love from highest heaven, beckoning us to offer ourselves with him and so come to him. And then the priest, as only he can, makes present the full power of Calvary by performing the saving signs thereof: I Absolve You . . . Through This Holy Anointing . . . The Body of Christ. Through these sacred acts, the priest outdoes his own humanity: He forgives what no man can forgive, heals what no doctor can heal, and feeds as no food can feed. “If anyone eats of this bread, he will live forever” (John 6:51).

Inevitably, a priest is affected by his ministry, as also the Incarnate Word, our great high priest, was affected by his ministry. He marveled; he wept; he sweated blood; he died in ministry. And above all, he was perfected in love through his ministry—not that he ever loved imperfectly, but rather that he performed greater and greater acts of love, until he performed the greatest act of love a man ever has performed or ever will.

Every priest is privileged to experience something of Christ’s priestly affections. The holier the priest, it would seem, the more profound his experience, but for the new priest at least, for whom everything is so novel, this deeper love he has begun to share with Christ is to be especially savored. He sees, through the new lens of his instrumentality, that the same redemption God is working in his own soul, God is also working, through his unworthy hands, in the souls of others. He likewise sees how the truths of faith that he has contemplated and studied with great fervor for many years are both real and useful for souls. Because only the truth sets free, only the truth has grace, only the truth perfects in love.

In the hospital, everything is intensified because the finish line is approaching. We cannot flee from the finish line but must run toward it in faith. There, ministering on the homestretch, I found that for the believer who is prepared to exit this life, death is often easier to accept than it is for the loved ones who are looking on. Patients pronounce striking utterances. One man in his late seventies—a retired contractor and heavy-machine operator fondly described by his son as a “hot dog connoisseur”—looked forward blankly after receiving Holy Communion and declared before a full room, “God is present.” A formulation, his son assured, that was not in his father’s register. The man died a few days later. Another time, a rapidly declining forty-two-year-old woman, who had desired to convalidate her seven-year civil union into a sacramental marriage, came to clarity of mind just long enough to exchange vows with her husband during a ceremony in her hospital room. (The nurse even brought up a cart of celebratory treats from the café.) When it was her turn, the bride answered “Yes!” five times over, then added the only two words prescribed: “I do.” Already anointed and no longer able to eat, she died the next morning—a seventeen-hour marriage, the last sacrament she received. And then there was the forty-seven-year-old wife and mother of two who signed her last will and testament, confidently professed the creed, and received full last rites with the apostolic pardon and viaticum, all about an hour before she died.

For the loved ones who look on, seeing God accomplish such works up close has its own impact. Many family members open up to the light of faith through their exposure to suffering. The dying point them to life beyond the veil. And by grace, those who remain can come to see that notwithstanding death’s sting, what has happened is ultimately good, because St. Paul’s assertion to the Philippians is true: “to die is gain” (Phil. 1:21). So much so that when one helps others attain that gain enough times, one mysteriously begins to wish it for oneself. “To depart and be with Christ—it is much better” (Phil. 1:23). But we do need priests, and so “it is more necessary that I remain in the flesh” (Phil. 1:24).

God knows what he is about. In ways that exceed human understanding, he unites us to himself precisely through the scourge of death. “I have cancer,” the elderly woman announced—freshly absolved, anointed, and communicated—from her hospital bed high above York Avenue. “But cancer is not the sickness. Cancer is the cure. Because cancer brings you close to God.”

Such is the logic of the cross, the true logic of our life on this earth. “When I am lifted up,” Jesus promised, “I will draw all men to myself” (John 12:32). For some, God uses cancer unto death to draw them once and for all, as in the case of a daughter of Hungarian gypsies, who was born into Nazi captivity and reared in communist atheism but baptized, confirmed, and anointed on her deathbed in the ICU. For others, God uses cancer unto discharge, sending them back into his vineyard, at least for a time—like the faithful captain of an FDNY ladder company forced into early retirement by cancer from 9/11, his second day on the job, or the tech executive, also with 9/11 cancer (he was working in the towers), who returned to a beautiful practice of the faith after decades away. And down the line, whether they reach remission or return for readmission—as did the captain, who died with such grace last June—God still knows what he is about. For the lifting up only began on Calvary. It was perfected on the third day, and especially forty days after that.

Fourteen months after Jack’s passion and death, God granted a taste of resurrection. Jack’s older sister married her fiancé, and there, for the first time since Jack’s services, we all were together again—Jack’s family and friends, including the two who saw him in their dreams, and even one of his nurses from Sloan. Jack, too, made his presence known. During the homily, which made reference to him, the lights suddenly flickered. They flickered again when the newlyweds recessed: a double sign, we dared to hope, of the family forerunner at the nuptial banquet on high, descending to grace another with his approval. Because God knows what he is about.

This magnificent mystery of salvation through suffering goes widely unknown in our day. Assisted suicide proliferates, and so many slide into serious error about the true meaning of life and death, either unaware or in denial of the divine glory at hand. “Did I not tell you that if you believe, you will see the glory of God?” (John 11:40).

We know there is much work, indeed much praying and preaching and policymaking, to be done. But we also take confidence from the fact that God yet accomplishes his greatest wonders where few, if any, are there to witness—as in the hospital room, through the ministrations of the priest, especially at the end of life. “Having loved his own who were in the world, he loved them to the end” (John 13:1). Salvation, after all, is clinched only at the end, in the final moment before death. The priest wants to be there. The priest exists to be there, to give God’s presence there: “I am with you always, even to the end of the age” (Matt. 28:20).

The post Cancer and the Cure of Souls appeared first on First Things.

Practitioners of Infanticide https://firstthings.com/practitioners-of-infanticide/ Tue, 06 Jan 2026 06:00:00 +0000 https://firstthings.com/?p=118570
A physician declares his dying patient—a seven-pound baby boy—“dangerous as dynamite,” a “menace to society.” A routine medical procedure could save the boy’s life, but he was born deformed. Later reports will find that most of the deformities are cosmetic: He is missing his right ear, and the skin on his shoulder is defective. But, critically, there is a blockage at the end of his intestine.

This last seals the boy’s fate. There will be no lifesaving operation. The crying baby with chubby legs and wide-open blue eyes, facing the flashbulbs of the press, is instead to be starved and dehydrated to death. It is an act of the “kindest mercy” for the child to be “put out of its misery,” the physician has told the parents. For the next decade, in newspaper columns, in public speeches, and even in a feature film that he will write and star in, the physician will present his patient as an exhibit in his argument that compassion and the scientific method compel American medicine to bring about rational ends to “lives of no value.” The editorial board of the New Republic, Helen Keller, and many leading physicians will agree with him.

The Bollinger baby—christened by his relatives Allen after his father, yet unnamed in the press and even in modern accounts of the tragedy—became the first publicized case of a newborn in America forced to die because of his disabilities. The year was 1915. The physician became a celebrity. Decades before Jack Kevorkian, decades before either abortion or assisted suicide was legalized anywhere in the United States, there was Harry Haiselden, the surgeon and showman at the head of the German-American Hospital in Chicago.

No jury would convict Haiselden. He insisted that he treated his “defective” infant patients as he did “because he love[d] them.” He loved them to death. Sometimes he actively accelerated their deaths: He removed the umbilical ligature of one patient, leaving him to bleed to death, and prescribed potentially lethal doses of opiates for another. It was an ambivalent love. “Horrid semihumans drag themselves along all of our streets,” Haiselden warned at the end of his autobiography. “What are you going to do about it?”

It is tempting to dismiss Haiselden’s odious question, precisely because it is odious. That would be a mistake. Today Haiselden is achieving a posthumous conquest of the medical field. His victories are not just in Canada, where the Quebec College of Physicians and many clinical ethicists have urged Parliament to legalize the euthanasia of disabled newborns, or in the Netherlands, which under the infamous Groningen Protocol has been euthanizing “neonates” with terminal illnesses for two decades.

It is in less likely places that Haiselden’s victory is taking shape, pitting parents against the physicians of their disabled children—parents like Krystal VanderBrugghen, who alleges that her child with Down syndrome received inadequate, discriminatory, even life-threatening medical care, in “the best children’s hospital in the world.” Stories like hers have been a century in the making.

The best children’s hospital in the world for 2026, according to Newsweek and Statista, is the Hospital for Sick Children (SickKids) in Toronto. I walked into SickKids in the summer of 2025 to see Krystal, a “mama bear” according to one of her friends, and Mo, who asked me not to use his real name because one of his children is receiving treatment at the hospital, and he fears retaliation. Krystal befriended Mo’s wife in the coffee lounge over the summer, and soon Mo was friends with Krystal, too.

We decided it would be best to speak in the wing of the pediatric unit, whispering whenever a nurse walked past. Mo and Krystal both credit religious faith—Mo is Muslim, while Krystal and her husband Jeremy are Canadian Reformed Christians—with fortifying them to bring children with Down syndrome into this world. Mo said his wife felt guilt-tripped by their healthcare team, who asked her immediately what quality of life she, Mo, and their three other children would have if she gave birth to a child with Down syndrome. “At the end of the day,” Mo told me, “I am not God. I cannot decide who lives, who doesn’t live.” Now, with his child with Down syndrome already five years old, the experience of raising him is “probably . . . the best thing in my life.” Krystal experienced the same pressure and reward. She was advised three times by clinicians that she could “terminate at any point and start again.” She didn’t want to start again. She wanted her child to be born.

On December 4, 2023, eighteen months before I met Krystal and Mo, Veya was born at McMaster University Medical Centre in Hamilton, Ontario. Like many children with Down syndrome, she had a cardiac defect, which in her case meant that she was in active heart failure for the first four months of her life. She needed cardiac surgery, which required her to be transferred to SickKids Hospital in Toronto. It is an hour-plus commute for Krystal on “a good day,” especially since the AC in her car stopped working. It was worth it; the surgery worked. “It’s funny,” said Krystal. “They try so hard to end this life, but the second she’s born, they do everything they absolutely, possibly, humanly can do to preserve her life and get her here to get her heart repaired. But once we started getting involved with GI [the gastrointestinal team] and she started having more problems, that’s when it was like they drew the line.”

A month after her heart was repaired, Veya developed an undiagnosed liver disease, causing her bile to be thick. She underwent liver surgery. This time, the surgery didn’t work. Veya desperately needed a liver transplant, and although the rest of her individualized specialty care team approved her for a liver transplant, the transfer team denied her this lifesaving treatment. Krystal still doesn’t know the reason. Veya needed to stay in the ICU.

Without a liver transplant, Veya’s immune system was vulnerable. I asked how Mo was recruited to help with Veya’s medical journey. “I invited him into my meetings,” Krystal said. Mo continues for her: “Yeah. I hear stories. Krystal tells me what’s happening. She’s gone through a lot, like, mentally. I’ve lived here almost a year. That’s hard. So God knows what she’s going through, right?” Mo’s child was being treated for leukemia in the hospital, and he had no complaints against SickKids. “It’s interesting because I’m seeing two sides, right?” said Mo. “I’m seeing my side and then I’m seeing her side. Two different teams, but from her side, Krystal’s team, and I’ve used this word a lot, I’ve been baffled on what’s happening.”

The quality of care for Veya dropped precipitously, Krystal and Mo believe. Shortly after Veya was denied her liver transplant, while she was unattended, she received a potentially lethal amount of potassium, ten times her usual dosage. Her heartbeat exceeded 350 beats per minute. The hospital told her that the overdose “passed through four pharmacists and two nurses,” Krystal said on a recent podcast. “We’re really sorry but it was around Christmas time,” was the only excuse she received from the hospital about the incident that nearly killed her daughter.

SickKids declined to answer my questions about the incident and about whether any steps were taken to prevent a similar incident in the future. In an emailed statement, a spokesperson commented: “We cannot comment on individual cases due to patient privacy. . . . Decisions about care for each child’s unique case are guided by clinical expertise, ethical standards, multidisciplinary collaboration, and partnership with families.”

At first, Krystal believed the potassium overdose had been an innocent mistake. Now she is not so sure. At several points, the physicians in the ICU have seemed to “[want] to free up a bed spot and rush her out because she’s been here for too long.” Three days into her care, a doctor said that if Veya needed a ventilator, she would not receive one, despite being on full code, because “it would do more harm than good.” (Due to a 2019 court decision in Ontario, physicians need not seek consent for a Do Not Resuscitate order, or even inform patients that one has been placed against them, if their care is deemed “medically futile.”) Krystal had to enlist the patient relations department in order to get her daughter’s DNR lifted.

She felt coerced into giving up. “SickKids is very ableist,” Krystal told me. Another ICU physician put his hand on her shoulder and said, “You know, Mom, it’s been such a long road for you guys. You can admit when enough is enough, and you can let someone die with dignity.” Was the overdose intentional? An effect of neglect? Or a simple accident? Whatever the case, it happened, Krystal and Mo believe, only because Veya is disabled. At one point, when she asked whether Veya was being denied a transplant because of her Down syndrome, a transplant physician answered, “Mom, I think you know the answer to that deep down in your heart.” She heard similar comments from the physicians. “[Another] ICU doctor said, ‘We look at Veya, all that she is and all that she was born with,’” Krystal said. “And I said, ‘What, a head, two arms, two legs?’ I’m like, ‘Yeah, she came with a cardiac defect. That’s fixed. That’s not causing the problem. Or are you isolating her extra chromosome here?’”

The accidents—if accidents they were—continued, always occurring when Veya was by herself. “Every time I step away, something happens,” Krystal said. Mo interrupts: “Twenty-minute lunch break.” Krystal continues: She went on a “twenty-minute lunch break, and they shut off a medication that they knew from a couple days ago she had withdrawal symptoms from.” Another incident occurred when Veya was struggling to breathe. It was a code blue, but the crash team, instead of rushing to help Veya, was slowly walking to her. Krystal had to raise the alarm herself.

Krystal and her husband Jeremy felt that Veya was unsafe at SickKids. The Delta Hospice Society and the Euthanasia Prevention Coalition organized a round-the-clock daily watch over Veya. But SickKids began to clamp down on the visits. It also banned Veya’s general pediatrics team from visiting unless they first asked for permission. When I spoke with Krystal, she was in the last steps of organizing an ambulance to move Veya to another hospital.

At the same time, Veya was meeting her development milestones. She liked geese and her bravery beads; she played with her brothers and sisters. “That’s the thing,” Krystal told me. “This ICU admission, she’s actually met three milestones, or two—I guess popping [your teeth] is not really a milestone. Maybe it is, but I’m like, you learn to smile. You learn how to coo. You just can’t make noise because she’s got the [respiratory] tube. But then, you popped your first tooth. I’m like, look at this! This isn’t a kid on death’s door. But they’ve been treating her like she’s on her way out and palliative.” When SickKids was handing Veya over to another hospital—at the time, Krystal was considering either McMaster or a hospital in the United States—SickKids said that she was not on palliative care.

As is the case throughout Canada’s healthcare system, it is difficult to find conclusive evidence of neglect or wrongdoing when medical care is subpar. But SickKids Hospital is no stranger to euthanasia. Just two years after Canada legalized medical assistance in dying (MAID)—a euphemism for euthanasia—a panel inside SickKids Hospital, co-chaired by the director of its department of bioethics, envisioned MAID for minors without the need for parental consent, a practice unheard of even in the Netherlands, which permits euthanasia for “mature minors.” (Currently, MAID in Canada is legal only for those over the age of eighteen.) The policy was written to address the need for “MAID-providing institutions to reduce social stigma surrounding this practice.” SickKids declined to answer my questions about this policy, including whether it is in force today.

SickKids has historically been at the forefront of letting children die of their disabilities, especially children with Down syndrome. A study found that between 1952 and 1971, of fifty children with Down syndrome and blocked food passages, twenty-seven were left to die of their obstructions instead of receiving routine medical treatment. In 1979, the institution was lambasted by the Canadian Psychiatric Association, which warned that “this increasingly common act in medical practice is being vigorously promoted by able and influential advocates within our profession and within our society at large,” despite the fact that it was likely illegal without a court order.

Between June 1980 and March 1981, a spree of murders struck SickKids hospital. Over the course of several nights, thirty-six babies and infants died, many of them due to an overdose of digoxin, a drug used to control heartbeats and often used for assisted suicide in the United States. A judge confirmed that at least five of those deaths were murders (though the defense believed the number was closer to seventeen), and yet the judge at the preliminary hearings absolved the only suspect, a pediatric nurse. No one else was ever charged, despite statistical evidence from the U.S. Centers for Disease Control that tied another nurse to the deaths.

Two years later, with the scandal refusing to die down, a Royal Commission of Inquiry investigated the deaths. Richard Rowe, the chief cardiologist at the hospital, was asked by the commission whether he disapproved of so-called mercy killing. His response: “almost.” He explained that since the thirty-six babies had a “minimal chance of surviving,” the motive behind their deaths might “perhaps be that of mercy-killing.” It was not true: Many of the children had been likely to survive. Some were barely sick. Adrian Hines described his son Jordan: “He entered the hospital a healthy baby with a touch of pneumonia. He didn’t even have a heart condition.” An autopsy revealed that Jordan had died of digoxin overdose, a medication he was never prescribed.

The main medical associations in Canada declined to condemn the homicides at SickKids. The president of the Ontario Medical Association claimed, “I don’t know if withholding surgery is legal,” while a spokesman for the Canadian Medical Association emphasized that the CMA had revised its ethics code “to allow patients to die in dignity.” When asked by investigators whether the dramatic increase in the number of deaths in the hospital’s cardiac ward could have been caused by euthanasia, the chief cardiologist was vague. “[Euthanasia] may have come into those discussions. We talked of many things and we didn’t keep notes.”

SickKids declined to answer my questions about its history concerning infanticide and discriminatory treatments, the institutional norms that might have enabled them, and what steps, if any, were taken to prevent similar incidents going forward: “At SickKids, we are deeply committed to upholding our core values of compassion, dignity, respect, and equity in every aspect of patient care. Our staff bring extraordinary skill, judgement, and dedication to their work to ensure that every child and family receives the highest standard of care, regardless of diagnosis or ability.”

SickKids Hospital is not the only institution that has allowed children with disabilities to die of treatable illnesses. It accords with the direction of the field of medicine over the past century. The preventable deaths of children with disabilities occur, for the most part, without media interest. To understand why the law and societal outrage have failed to stop this practice, we must trace the history of child murder in North America since 1915.

And we must discard a fiction: that infanticide, being illegal, was not historically practiced by physicians in North America. As the medical historian Martin Pernick stresses, “the history of infanticide by lay people—parents, midwives, and governments, dating back to ancient Greece—was widely discussed in these debates [over selective non-treatment of disabled children]. But the role played by past American physicians in such decisions is now virtually unknown.”

“Therapeutic homicide”—a term used in an editorial of the Canadian Medical Association Journal a decade ago, before Canada legalized euthanasia—is, as a rule, practiced by physicians before receiving legal sanction. Under these conditions, it is uncommon but not rare. Its fatal logic is the starting point for the devaluation and killing of people with disabilities.

Even as Roe v. Wade was being argued before the Supreme Court in 1971, Who Should Survive?, a film produced by the Joseph P. Kennedy Jr. Foundation, dramatized the decision to let a baby with Down syndrome die of a treatable intestinal blockage. The film was based on real deaths at Johns Hopkins University Hospital. Over the course of fifteen days, as the medical team and parents wait, the child dies of starvation.

Near the end of the film, a litany of questions is posed to the audience. “Do all children have a right to life? Who should protect the child’s rights? Do physicians have a duty to preserve life? Does mental retardation diminish the right to life?” The film is quick to note how the excruciatingly slow death of the child affects the nurses and the physician. The child’s interests are not considered. The film merely asks questions, as if doing so were nonpolitical: “The film you are about to see has a beginning and an end—but no conclusion, because it provides no answer to the questions it poses.”

Left unsaid is that a child without Down syndrome, presenting the same physical defects, would never be left to die. A non-disabled baby with a treatable life-threatening illness would receive treatment, even if the parents and medical team disagreed. Anything else would be medical malpractice or child abuse. By contrast, those deemed “profoundly disabled,” whose lives have “no value,” receive no protection.

The slippery slope that Roe’s critics warned of in fact happened in reverse: first infanticide, then mercy killing, and finally eugenic abortion on demand. Legalization followed clinical practice, not the other way around. The logic continues to prevail in the courts, whether in America, Canada, Colombia, or the Netherlands: If passive euthanasia is valid, why not active euthanasia? If prenatal abortion, why not “post-natal abortion”? If disability is a qualifying condition for assisted death, based on the empirical determination of medical experts in view of “medical futility,” then to what extent is consent necessary or even desirable?

Yet the justification for these acts is offered only after the fact. Ronald Reagan’s 1983 “Evil Empire” speech is more often invoked than read, and most people would be surprised to learn that its early paragraphs are not about the USSR, or even communism: They are about evil at home. Every year, publications such as the Washington Post and the New York Times were reporting that thousands of babies were being left to die from hunger or treatable medical conditions—for the sole reason that these babies had disabilities, whether terminal or not, and were deemed “defective.” This reporting sparked the Baby Doe laws after Ronald Reagan’s surgeon general, the pediatric surgeon C. Everett Koop, denounced the nontreatment of viable babies as contrary to medical ethics. In March 1983, the Department of Health and Human Services, under the auspices of a federal law that protected people with disabilities from discrimination—a precursor to the Americans with Disabilities Act—issued a rule to stop selective nontreatment and starvation. Yet the courts overturned the rule, as Congress had not passed the requisite legal protections. In response, Congress passed a weakened amendment to the Child Abuse Prevention and Treatment Act—which the American Academy of Pediatrics continues to claim is irrelevant to physician or institutional standards. So the legal protections enacted by Reagan lapsed.

The practice of selective nontreatment based on disabilities has continued. Though this practice is presented as a rational medical option, the people it kills are often those with conditions that are at the same time being fear-mongered in the media—whether trisomy 13 and 18, HIV, or thalidomide poisoning. One senior medical director at a faith-based perinatal center in New York told me, “Back in the early nineties when I started on the faculty, the chairman [of pediatrics] at the time said that although other hospitals were starting to withhold nutrition and hydration from children with terminal illness, that’s something that would never, ever happen at this facility. But you know ten years later, into the early 2000s, it was something that when the parents asked, the ethics committee would often approve it.”

Today, in most facilities across the United States, the ethics committee would not need to be involved. Though neonatologists are split on the ethics of withdrawing food and water from newborns—surprisingly, more so for the terminally ill than for the disabled—a recent survey of neonatal intensive care units (NICUs) published in the Journal of Perinatology found that a majority of NICUs in North America now practice “withdrawal of artificial nutrition and hydration” for newborns. Of those facilities, more than 80 percent reported not requiring an ethics consultation before ceasing all food and water; virtually none had a policy on which diagnoses would qualify a patient for withdrawal of nutrition and hydration. The American Academy of Pediatrics now classifies feeding children as morally optional.

Since this is how disabled children are treated, it is no wonder that the medical field is nonchalant about the fates of children born alive after botched abortions. In January 2019, Ralph Northam, a pediatric neurologist and then-governor of Virginia, caused a furor by describing what happens when infants survive third-trimester abortions:

When we talk about third trimester abortions, these are done with the consent of, obviously the mother, [and] with the consent of the physicians, more than one physician, by the way, and it’s done in cases where there may be severe deformities, there may be a fetus that’s non-viable. So in this particular example, if a mother is in labor I can tell you exactly what would happen. The infant would be delivered; the infant would be kept comfortable; the infant would be resuscitated if that’s what the mother and the family desired, and then a discussion would ensue between the physicians and the mother.

To ask whether this is legal is beside the point. It does not need to be legal to be practiced, since any law is in force only to the extent that it is followed.

There is an irony to the story: In its final summer, the Biden administration, hardly a pro-life administration, quietly reintroduced some of the Reagan administration’s protections. Section 504 of the Rehabilitation Act of 1973 now explicitly prohibits healthcare discrimination based on disability. Thus a newborn with Down syndrome and a heart problem must, by law, receive whatever “medical treatment is provided to other similarly situated children.” But this legal requirement is not enforced. Most hospitals, despite the fears of ethicists, made no changes to their policies, and the media failed to report on the regulation. So the regulation became moot. Last year, a study reported that between 2019 and 2022, adults with both Down syndrome and Covid were more than six times more likely than patients with similar comorbidities to have had a DNR placed on them—a rate far higher than that for any other illness or disability, including any terminal illness. Yet there was no reckoning among healthcare clinicians or institutions. The soul of medicine does not easily change after a century of practice.

It is impossible not to wonder what would have happened to medicine if the Bollinger ­baby—Allen—had not been killed by his primary physician. After all, Allen was nearly saved. The Chicago Tribune reported on the figure whom Harry Haiselden called a “wild eyed, interfering, hysterical woman,” a certain Catherine Walsh of 4345 West End Avenue, who attempted to convince either the mother of the baby or the physician to spare the boy after the medical commissioner of Chicago had failed to do so.

It is an astonishing piece of journalism, made even more so by the fact that Walsh’s voice was uncommon in contemporary debates over the Bollinger baby. Though opposition to Haiselden’s actions was voiced, mostly by Catholics quoted in the press (with opinion more divided among secular, Jewish, and Protestant experts), prominent figures who might have advocated for the child instead condemned him. Helen Keller, otherwise an advocate for the disabled, endorsed Haiselden’s work as a “service to society,” since “no one cares about that pitiful, useless lump of flesh.” The Baltimore Catholic Review, published under James Cardinal Gibbons, claimed that “no one could be blamed if the child was let die according to nature” and supported Haiselden’s actions.

Yet Catherine Walsh, by her own account, nearly succeeded. She sought and received permission to baptize the child, although the child, unbeknownst to her, had already been christened. All we know is that Catherine belonged to a local Catholic Church; she likely was the mother’s friend. Her comments to the Tribune were quoted in full: 

I went to the hospital to beg that the child be taken to its mother. It was condemned to death, and I knew its mother would be its most merciful judge. I found the baby alone in a bare room, absolutely nude, its cheek numb from lying in one position, not paralyzed. I sent for Dr. Haiselden and pleaded with him not to take the infant’s blood on his head.

It was not a monster—that child. It was a beautiful baby. I saw no deformities. I patted him. Both his eyes were open, he waved his little fists and cried lustily. I kissed his forehead. I knew if its mother got her eyes on it she would love it and never permit it to be left to die.

“If the poor little darling has one chance in 1,000,” I said to Dr. Haiselden, “won’t you operate and save it?” The doctor laughed. “I’m afraid it might get well,” he replied.

As I left the hospital a man said to me, “I guess the doctor is right from a scientific standpoint. But humanly he is wrong.” “Thank God,” I answered, “we are all human.”

It took five days for Allen to die. Anna Bollinger, the boy’s mother, never recovered from the death of her fourth child. She never saw the child; the medical staff would not permit it. Even amid the savagery of the First World War, Anna’s death almost two years later was front-page news across the country. Her husband, Allen Bollinger, told the Associated Press: “After the baby’s death, my wife fell into a settled melancholy and wasted away. If ever a woman died of a broken heart she did.”

It is callous to claim that a moral lesson can be gleaned from this level of suffering. Yet in 2025, in the lobby of SickKids, the best children’s hospital in the world, I found myself walking away from Mo and Krystal repeating Catherine’s words from 1915: “Thank God, we are all human.”

On August 1, five weeks after the last time I spoke with Krystal, Veya, whose middle name was Hope, died after a nineteen-month fight. Just like Allen, Veya was the fourth child. Her parents managed to move her to another hospital, the same hospital in which she was born. “Through the incredible team at McMaster [Hospital], God brought deep healing to our hearts from the trauma we left Sick Kids with,” Krystal wrote on her Instagram account. “Her last days were tender, peaceful, and full of love.”


Image by Lyfhospital, licensed via Creative Commons. Image cropped.

The post Practitioners of Infanticide appeared first on First Things.

John Searle’s Minds and Machines (ft. Edward Feser) https://firstthings.com/john-searles-minds-and-machines-ft-edward-feser/ Fri, 02 Jan 2026 10:00:00 +0000

The post John Searle’s Minds and Machines (ft. Edward Feser) appeared first on First Things.

In this episode, Edward Feser joins R. R. Reno on The Editor’s Desk to talk about his recent essay, “The Common Sense of John Searle,” from the December 2025 issue of the magazine.

The conversation is embedded below. For your long-term convenience, follow us on SoundCloud or subscribe via Apple Podcasts or Spotify.

Dark Phantoms https://firstthings.com/dark-phantoms/ Mon, 29 Dec 2025 06:00:00 +0000

The post Dark Phantoms appeared first on First Things.

It happened quickly, so quickly that you’d think it was impossible to retain the image. The Ohio Turnpike in October 2024, 5:30 a.m. and pitch-black, the road straight and level and empty of cars, 80 miles per hour—just right to make it to central Wisconsin by sunset—high beams and no radio, only the hum of the engine, when out of nowhere a buck leapt into my lane, a thick tan body and towering antlers dashing in from the right. I knew instantly what it was. I wasn’t drowsy—I’d just spent five hours sleeping in the back at a rest stop near the state line—but I couldn’t avoid him. I’d barely touched the brake and cut the wheel an inch before we hit. I doubt more than a half-second had passed since he entered my sight. The crunch of metal, plastic, flesh, and bone was just that, a crunch, not sharp or loud. The hood of the car swung up to the windshield, the airbag blew, the car slowed and drifted to the shoulder. Ten seconds passed with no noise or motion. The skin on my face started to burn (from the chemicals in the airbag, I am told). I opened the door, stepped out, and spotted a dark mass a hundred feet back, lying half in the roadway, unmoving. I reached inside and clicked the hazard lights.

A tow truck driver dropped me at the Toledo airport, where I rented a car and made it home that night. A week later I left Wisconsin for good, moving to Boston for a time before settling in Washington, D.C., far from the place where I had seen more Amish carriages than cars passing my front door. Every week or so, the image comes back. The road, the lights, the deer flash in my mind as if I were in a theater at the start of a movie, the black screen suddenly illuminated. The sight of a Ford Flex on Wisconsin Avenue might cause it, or the deer that were nosing around the gardens outside my building last week. Sometimes it has no cause at all—it just happens. Each recurrence is a jolt. The ordinary day is broken. I wasn’t hurt, wasn’t shocked, suffered no trauma physical or mental, no tender feelings for the poor buck (though I loved that car). I felt only astonishment at how suddenly the world had changed. It is not the impact but the moment before it, the split-second awareness that something bad is going to happen, and I can’t prevent it—that is what comes back again and again.

“The study of dreams may be regarded as the most trustworthy approach to the exploration of the deeper psychic processes.” So wrote Freud in Beyond the Pleasure Principle (1920), which addressed nightmares suffered by veterans of war that seemed to contradict the theory he had laid out twenty years earlier in The Interpretation of Dreams. In that earlier book, Freud had characterized “dream work” as a mode of wish-fulfillment. In a dream state, the ego relaxes and repressed desires are given expression, though in distorted form. Those desires are shameful, cowardly, selfish, disturbing, or otherwise contrary to the moral sense, but they are in us and have been since early childhood. We dream because we must, because “the return of the repressed” can’t be checked, for such desires never go away, only simmer in the unconscious. In healthy individuals, the repressed returns by means of sublimation, whereby destructive instincts are channeled into safe habits that meet our psychic needs without endangering social relations (as lust, for instance, is contained by marriage). It is, in Freud’s view, a tragic compromise that leaves us never fully satisfied. But civil society cannot survive without it.

What about the ex-soldier who falls asleep and dreams of the trenches, mad with fear and uncertainty as bombs fall for hours, a friend beside him clutching his rifle and soon to die in the mud? It was common in 1919. There were an “immense number of such maladies,” Freud writes, with no “basis of organic injury.” Such dreams reenacted the worst moments, pulling “the patient back to the situation of his disaster, from which he awakens in renewed terror.” The agony put the Freudian model to the test. What wish was granted, what pleasure got its release? What perverse mechanism forced the veteran to relive what he never deserved to experience in the first place? The source of these miseries was like a foreign body lodged deep inside, Freud observed, hidden and malignant. It kept the trauma fresh as a living torment. Psychoanalysis did not help these patients. The analyst couldn’t get the patient to recognize what had been repressed in his waking life and expressed in those nightmares, because the dream content didn’t belong to the patient. Something else, a daemon inside (Freud uses the Greek term), was in control. The patient could not claim and examine the content of the dream, only endure it.

To have an image in your mind, unpleasant or disastrous, which may pop up at any time, with or without a relation to the present, isn’t so different. Many people experience memory-flashes, and though the objects are less intense and lethal than those of Freud’s subjects, the mechanics are the same. It used to be that everyone in America recalled exactly where they were and what they were doing when they heard that JFK had been shot. A special announcement, the newscaster’s voice, the look on the face of a person beside you, all rushed into consciousness whether you wanted them or not. The memory has a will of its own. Freud described how hard it is to discuss these visions when they reach traumatic levels: The patient “is obliged rather to repeat as a current experience what is repressed, instead of, as the physician would prefer to see him do, recollecting it as a fragment of the past” (emphasis in original).

This is correct. When the deer flares up these days, shying back too late, my reflex too slow, I’m not remembering. The moment is repeated (not by me), and the bus I’m riding or the corner I’m standing on dissolves, and I’m back on the road in the dark, and the impact is coming. I can’t do anything with this apparition, can’t make it stop or start, can’t find a meaning in it. When I recall the moment deliberately, the effect is different, a shiver, not a jolt. It doesn’t help; I can’t be cured. My desires are sometimes shameful, but at least they’re human. This occurrence isn’t human at all. Freud gives it a name, “repetition-compulsion,” and tries to grant the nightmares a purpose when he says they are the mind’s attempt to face danger by staging over and over the traumatic moment as if it were an exercise, so that we can better respond when another threat arrives.

It’s a hollow rationale. Look at the man who underwent a shock and wakes up trembling long after, or the woman who picked up the phone one lazy afternoon and heard the news that a loved one was gone, and the ringing in her head breaks the dead of night for years. Try telling them that a terrible thing has happened, yes, and that they must re-experience it until the edge has softened, ponder and interpret it, step back and get some distance—and they’ll answer with a grimace or a moan. They can barely describe it. Nothing in these echoes is revelatory or forward-looking. What purpose can repetition serve? What moral instruction does it impart, what good comes of it?

Why is human nature like this, self-tormenting? Freud the scientist replies, “It just is.” In Beyond the Pleasure Principle, repetition becomes a rule. The best outcome for a trauma victim is never to think of what happened again, but the daemon won’t allow it. When a trauma upsets the placidity of daily life, the sufferer is compelled to repeat the experience in some other mode (a dream, a game), even if it means re-experiencing the pain, until the troubling side falls away. Life goes on. The inscrutable will doesn’t care about his feelings, only its serenity. Quiescence, not happiness, is the goal. In fact, for Freud, the ultimate repetition is death—out of nonexistence we came and to nonexistence we go: “The goal of all life is death.”

But I don’t see any lessening of intensity with each repetition of that early morning in Ohio. I’m no more in control of the image many months later than I was of the event at the time. I think millions of Americans are walking around at this very moment with cursed traces in their heads that strike without notice and have lost none of their bite, remnants of a past they’d much rather forget, concentrated into an instant. It might be a fender-bender, the loss of a job, a breakup, or much worse. Probably it’s a matter best avoided when meeting a new co-worker, chatting in the yard with a neighbor, falling in love by the third date. What dark phantoms lie within, what bits of life that hit hard and linger but give no relief or insight, only shadow the bearer and teach us sternly that we are not entirely ourselves?

Work Is for the Worker https://firstthings.com/work-is-for-the-worker/ Mon, 22 Dec 2025 06:00:00 +0000

The post Work Is for the Worker appeared first on First Things.

In these early days of his pontificate, Pope Leo XIV has made one thing clear: The responsible use of AI will be one of his central themes. It has me thinking about landscaping.

Ten years ago, I lived with my wife and children in a two-bedroom house with a small yard. My job every weekend was to cut the grass and trim the bushes. Done right, it would take an hour. And though it wasn’t back-breaking work, I usually did it in thick humidity, and there was much sweating. Afterward I would take a shower, put on fresh clothes, and grab a cold beer, and then I would take the first sip while admiring the lawn, low and neat and striped. It would be hard to overstate how satisfying that moment was.

Ours wasn’t the finest yard on the block—there was a lot of crabgrass, and the lines weren’t flawless. But when all was said and done, I could stare at this small patch of manicured land and say, “You know what? I did that.”

Eventually we moved, and our family and our yard grew larger. I was needed for other things on Saturdays. So we outsourced the mowing. It wouldn’t be practical for me to keep doing it, my wife said, and I agreed. Today, I can still look out over the lawn on Saturday evenings with a beer in hand. And to be honest, the lawn looks better than when I was cutting it. But I can’t shake the thought that Saturdays are somehow thinner and smaller and less complete. Something has been lost.

In his 1981 encyclical Laborem Exercens, Pope John Paul II highlighted the two ends of human work: the objective and the subjective. The objective end, the object of work, is to make things that improve the world, like inventing a sewing machine or building a house or teaching double-entry accounting. When I mow the lawn, I produce something of value: a cleaner, more walkable, more aesthetically pleasing patch of land. Work is for others, for society.

The other end of work is its subjective value. As a person works, John Paul wrote, “these actions must all serve to realize his humanity, to fulfill the calling to be a person.” In other words, work is undertaken not just for the sake of the thing produced, but for the sake of the person producing it. The creation of something new doesn’t merely transform raw materials; it changes the person who produces it. When I mow the lawn, it moves something in me. It brings about a sense of learning or accomplishment or humility that makes me more human. Work is for the worker.

Ideally, societies are built and economies are run with both the objective and subjective ends in mind. In practice, the two are often at odds. New machines destroy jobs. They create jobs, too, but the old job, that thing that once existed, is destroyed. There are no more musket manufacturers.

Of course, human life has always been about disruption and its tradeoffs. You have a new sibling (good!), but now you get less undivided attention (sad!). It’s beautiful and sunny outside (good!), but now the beach is crowded (sad!). Your single topped the charts (good!), but now you can’t go to a restaurant in peace (sad!). We always hope that new technologies bring about real progress, that the good outweighs the bad. But that’s not always the case. Electric blankets kept us warm (good!), but they caused house fires and leukemia (bad!).

Our great task, when it comes to markets and the economy, is to weigh the true costs and benefits of things. We gain a more complete and nuanced view as we learn more. This is in the nature of negative externalities—things whose true cost is hidden or not immediately apparent. Dumping a factory’s garbage into the river may boost profit margins in the short term, but it exacts a terrible cost from society over the long term. The idea, then, is that over time people or governments recognize this hidden toll and amend it.

What is striking about the debate over artificial intelligence is how haphazardly we’ve weighed the negatives. The powers of AI are mind-blowing and immediately apparent. In twelve seconds, you can write a press release, code a website, or analyze the use of foreshadowing in Hamlet. Artificial intelligence clearly aids the objective ends of work. It mows a lawn much better than I can.

But as a society, we have overemphasized AI’s progress toward work’s objective goals and underemphasized what it does to work’s subjective ends. Pope Leo stressed this point at the Vatican’s recent AI conference, saying that any judgment of artificial intelligence “entails taking into account the well-being of the human person not only materially, but also intellectually and spiritually. . . . The benefits or risks of AI must be evaluated precisely according to this superior ethical criterion.”

This “superior ethical criterion,” the subjective end of work, is immediately evident to parents. When your daughter is dangling from the monkey bars, if your only concern were the objective end of the work—namely, getting her body from one end of the apparatus to the other—you would just carry her to the other end.

But what a stupid idea! We all know that getting across the monkey bars is worthwhile precisely because of the time and difficulty and failure—the inefficiencies, if you will—involved in accomplishing it. As it turns out, time and difficulty and failure are the only way to achieve the subjective end of work—which is also called character.

Great managers, great businesses, and great economies produce both objects of value and people of character. Artificial intelligence thus far has produced only the former. Consider a recent study by Microsoft and Carnegie Mellon that tracked 319 knowledge workers who used AI tools. It found two things: Generative AI both improves the efficiency of workers and makes them lazier thinkers. A similar MIT study found that prolonged use of ChatGPT produces an “accumulation of cognitive debt”—one of the more creative euphemisms for brain rot. Study after study confirms what many of us already knew: AI makes us both more efficient and worse versions of ourselves.

It’s easy to criticize AI for making us dumber. It’s harder to prescribe how to deal with it. What guidelines should we follow in determining how—and ­whether—we should use AI tools?

One answer is prudential judgment. When it comes to deliberations over whether to use a tool or not, it’s obvious that I should use a knife to cut vegetables and that I shouldn’t use a robot to read my kids’ bedtime stories. In the in-­between cases, we have to make judgment calls.

If you need to decide how or whether to use an AI tool—in writing an essay, graphing a chart, analyzing survey data, creating a song, editing a video, writing a thank-you card, or deciding where to live—here are a few questions to aid your judgment call.

Does AI stimulate critical thinking or outsource it? If it generates time savings, what are you doing with the surplus time? If the primary gain is efficiency, how much have you learned in life from doing things inefficiently? Since you’ve begun using AI tools, have you become more fulfilled or less? If you were teaching your son to do this task, would you have him use the tool or not? What do you, the worker, see as the purpose of work? Does this tool help you fulfill that purpose? If you were presenting this work to God, how would he view the process by which you created it?

Henry David Thoreau wrote, “The cost of a thing is the amount of . . . life which is required to be exchanged for it.” The cost of AI must be assessed by a similar question: How much of my humanity must I exchange for the privilege of using this tool?

When Life Ends Mid-Sentence https://firstthings.com/when-life-ends-mid-sentence/ Thu, 18 Dec 2025 06:00:00 +0000

The post When Life Ends Mid-Sentence appeared first on First Things.

“It was Gerstäcker’s mother. She held out her trembling hand to K. and had him sit down beside her, she spoke with great difficulty. It was difficult to understand her, but what she said”—thus ends Franz Kafka’s final novel, The Castle. If he intended to finish the novel (a point of debate among Kafka aficionados), then death intervened to ensure that the work literally ended mid-sentence.

The ending of The Castle captures the impact of human mortality. Accidental though it may be, its sheer incoherence makes it a stroke of genius in its presentation of the human condition. I have faced the very real and unexpected prospect of imminent death twice. The first was a few years ago when I suddenly found myself facing the business end of a pistol held by a rather angry and unstable fellow who had stepped out of the shadows to confront me. He then spent fifteen minutes telling me he was going to pull the trigger and terminate my existence. The second was more recent, when I was afflicted with an unforeseen and catastrophic medical incident. Like many, I had always wondered what my feelings in such circumstances would be. Strange to tell, on neither occasion did I feel much panic, perhaps because everything was happening so quickly. Rather, my dominant emotion was melancholy confusion, summarized in the sentence, “What an absurd way for it all to end!” My life, as I saw it both times, had not yet reached its logically coherent and satisfying conclusion. I was going to depart this life too soon.

Perhaps it is the fact that we live in a culture that has pushed death to the margins that makes us resent not just its inevitability but also its absurdity. We no longer have the regular, even routine, exposure to it that our ancestors did. I have never been present at a death. Few in the younger generation have even seen a dead body. Death occurs in faraway places and to other people. We don’t need categories for handling it, and so when it intrudes upon our worlds, we don’t know how to respond. 

Perhaps more significantly, a world saturated with movies and TV shows trains us to think that life must have a plotline and a structure. It must have a beginning, a middle, and an end. These must stand in continuity with each other and make sense as a whole. But in reality, this is not so. Life just ends. Indeed, as with Kafka’s novel, it ends mid-sentence, with business permanently unfinished. There are still words to be spoken, things to be done, conversations to be had, friendships to be enjoyed, kisses to be given and received. But there is no satisfying, coherent resolution to the whole.

That is where the Christmas story comes into play. Death is the enemy that makes all of life ultimately absurd, and the Incarnation is the beginning of the answer, the start of that which brings a coherence to creation. Light shines in darkness and the darkness has not overcome it. If life for me, as for all of us, ends mid-sentence, then that is where grace begins. Yes, there is continuity in the gospel. The New Testament’s genealogies and attention to Old Testament prophecy testify to the historical continuity of the Messiah’s mission, but this does not overwhelm the fact that Christ represents a decisive break in the ages. Light explodes into the bleak, dark night of human existence, contradicting it and putting it to flight, just as the cross will later contradict all human moral and epistemological expectations of how God should act. How many of us will receive cards this Christmas showing the child in the manger as the source of light for Mary, Joseph, and the other witnesses to his birth? There is a serious theological truth being represented there.

Of course, in an obvious play on those scenes, later artists presented science as the source of light. But we live at a time when science is much more morally ambiguous and when attempts to make sense of life by claiming that the darkness is light have run their course. Reality is beginning to dawn for many. Even many intellectuals are now taking the Christian message seriously, at least as something to be considered. There are, of course, those who think that light is a good idea, even though they do not believe it really exists. Even Richard Dawkins has grudgingly gestured toward “cultural Christianity” in a desperate attempt to set limits to a science unencumbered by any sense of final causality.

But that will not do. One cannot reduce Christianity to a matter of merely instrumental importance. Like those TV shows and movies that present life as having a beginning, a middle, and an end, such approaches are insufficient, a form of nihilism that wants the benefits of the faith while denying those truths upon which those benefits necessarily rest. The Christmas message makes claims about life and about God, not least that life makes no ultimate sense without God, doomed as it is to end mid-sentence for all of us. And only in the child lying in the manger do we see how those sentences can be completed.

The Death of Daniel Kahneman https://firstthings.com/the-death-of-daniel-kahneman/ Wed, 03 Dec 2025 06:00:00 +0000

The post The Death of Daniel Kahneman appeared first on First Things.

Daniel Kahneman was a Nobel laureate in economics, the author of the international bestseller Thinking, Fast and Slow, and a giant in the study of decision-making and behavioral economics. On March 27, 2024, he died. Not until a year later did it become known that he had taken his own life.

The revelation has received relatively little attention. It was made by Jason Zweig in an article for the Wall Street Journal on March 14, 2025. A month later, Katarzyna de Lazari-Radek and Peter Singer offered their views in an essay for the New York Times. Little else has been written, and nothing of any length. Kahneman made clear that he did not intend his death as a public act or statement, and yet it raises questions. When an expert on judgment and decision-making decides to take his own life, we can’t help asking whether his final act confirmed his reputation or undermined it. We wonder how we should approach the last years of our lives.

The reasons for Kahneman’s decision are not clear, and caution is warranted. But the articles by Zweig and de Lazari-Radek and Singer offer glimpses of his reasoning. Just before his death, Kahneman contacted several friends to inform them of his decision and say goodbye. In these messages, Kahneman explained that he was acting on his belief that “the miseries and indignities of the last years of life are superfluous.” He confirmed that he was not suffering from any condition that caused pain or disability. He was active, still capable of research and writing and of enjoying many things in life. Just before traveling to an assisted-suicide clinic in Switzerland, he spent several days in Paris, according to Zweig, “walking around the city, going to museums and the ballet, and savoring soufflés and chocolate mousse.” Nonetheless, Kahneman was convinced that his kidneys were “on their last legs” and that “the frequency of [his] mental lapses” was increasing. He was ninety years old. “It is time to go,” he concluded.

It is hard not to view Kahneman’s decision through the lens of his work. In Thinking, Fast and Slow, he considered the way in which the last years of a person’s life might govern the evaluation of that life. His conclusion was that, when we “intuitively” assess a life, its duration means little. Most important are the peaks and ends—in other words, the most intense high points and the manner of death. To describe this phenomenon, he used his “peak-end rule.” Kahneman argued that this rule creates distortions and prevents us from thinking clearly, logically, and well. But it is not clear that he thought we could or should do much about it.

Kahneman cited many experiments in demonstration of the peak-end rule. One involved painful colonoscopies, another holding one’s hand in very cold water. But one experiment explicitly concerned the evaluation of a person’s life. A psychologist and his students developed two versions of the biography of a fictitious woman, “Jen,” who never married and had no children and “died instantly and painlessly in an automobile accident.” In the first version, she was “extremely happy throughout her life (which lasted either thirty or sixty years), enjoying her work, taking vacations, and spending time with her friends and on her hobbies.” In the second version, she lived an additional five years, dying when she was thirty-five or sixty-five. The extra years were described as pleasant, but less so than the earlier thirty or sixty years. Each participant in the study was asked to consider the desirability of a version of Jen’s life and the total happiness she experienced.

According to Kahneman, the results demonstrated that the length of a life meant little and the quality of the last years meant a great deal to the assessment of that life. Even the doubling of the length of Jen’s life had no effect on assessments of its desirability. More strikingly, the addition of five “slightly happy” years to an otherwise extremely happy life “caused a substantial drop in evaluations of the total happiness of that life.” These results confounded Kahneman. He suggested tweaks to the experiment, but these only confirmed that “what truly matters when we intuitively assess” a life—or shorter events, such as a vacation or childbirth—“is the progressive deterioration or improvement of the ongoing experience, and how the person feels at the end.”

To help him understand this result, Kahneman distinguished two types of utility, corresponding to two selves. “Experienced utility” was the amount of pleasure (or pain) a person experienced at each moment over a period of time. In contrast, “decision utility” reflected an assessment of the pleasurableness (or painfulness) of the episode as a whole. The “experiencing self” knows the experienced utility of an episode. It can answer the question, “Does it hurt now?” The “remembering self” assesses decision utility. It can answer the question, “How was it on the whole?”

When participants in the study ignored the length of Jen’s life and focused on its last five years, their remembering selves made judgments on the basis of decision utility. They assessed her life as a whole and did not think about the sum of the experienced utility of each moment her experiencing self would have known. Kahneman likened the remembering self’s indifference to time, its emphasis on peak events and endings, to storytelling. “In storytelling mode,” he observed, “an episode is represented by a few critical moments, especially the beginning, the peak, and the end. Duration is neglected.” The remembering self tells the story of our lives.

It is tempting to believe that, when Kahneman decided to end his life, he relied on the perceptions of his remembering self. From this perspective, his life would be no better if he lived an additional five or more years. In fact, it would be judged worse, because it would be marred by “the miseries and indignities” of his last years. By avoiding these last years, he would give the story of his life a more pleasant ending, and it would be judged better on the whole. But if this was his view, it would have been met with several objections.

Not the least of these was Kahneman’s insistence that the storytelling of the remembering self was wrong. By the standards of rational decision-making, to ignore duration and emphasize peak events and endings was irrational. The sum of Jen’s experienced utility had to be greater if she lived five years longer. Nor was it reasonable to evaluate an entire life by its last years. Kahneman called both these assessments “indefensible.” The “logical” approach was to understand a life as “a series of moments, each with a value,” and the value of a life as “the sum of the values of its moments.” “The remembering self’s neglect of duration, its exaggerated emphasis on peaks and ends, and its susceptibility to hindsight combine to yield distorted reflections of our actual experience.”
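The arithmetic behind the two evaluations can be put in a few lines. A minimal sketch in Python, assuming the common operationalization of the peak-end rule as the average of the most intense moment and the final moment; the happiness values assigned to the two versions of “Jen’s” life are hypothetical:

```python
def total_experienced_utility(moments):
    # The "logical" evaluation: a life is a series of moments, each
    # with a value, and its worth is the sum of those values.
    # Duration counts.
    return sum(moments)

def peak_end_evaluation(moments):
    # The remembering self: the peak-end rule, here taken as the
    # average of the most intense moment and the final moment.
    # Duration is neglected entirely.
    return (max(moments) + moments[-1]) / 2

# Hypothetical values: thirty very happy years, versus the same
# thirty years plus five slightly happy ones.
short_life = [9] * 30
long_life = [9] * 30 + [3] * 5

print(total_experienced_utility(short_life))  # 270
print(total_experienced_utility(long_life))   # 285
print(peak_end_evaluation(short_life))        # 9.0
print(peak_end_evaluation(long_life))         # 6.0
```

On this toy model, the longer life has strictly more experienced utility, yet its peak-end score falls, reproducing the pattern the experiments kept finding: a pleasant but diminished ending drags down the evaluation of the whole.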

And yet, in his own life, he did not dismiss entirely the storytelling of the remembering self. He regarded this contradiction as an artifact of his humanity. Episodes of his life were shaped by recollections of peak events and their endings; they were largely unaffected by their actual duration. Kahneman was even known on occasion to end his vacations a day or two early to ensure that they produced good memories. “I am my remembering self,” he wrote; “the experiencing self, who does my living, is like a stranger to me.” Still, it seems unlikely that he would make a truly momentous decision on the basis of a judgment he considered inaccurate. To end one’s vacation on the basis of the distorted judgment of the remembering self is one thing; to end one’s life on that basis is another.

Another problem with attributing Kahneman’s decision to the judgment of the remembering self is that his remembering self was not going to be around to judge. Unless Kahneman believed in an afterlife, he could not expect to remember his life after it had ended. His remembering self would die with him. Of course, the experiment involving evaluations of Jen’s alternate lives suggests that the work of the “remembering self” is not always a work of memory. The participants did not remember and evaluate their own lives; they imagined and evaluated Jen’s. The label “remembering self” is therefore slightly misleading. The self who ignores duration and focuses on peak experiences and endings is not always engaged in remembering. Rather, it is evaluating an experience—its own or someone else’s. The remembering self must still, however, have a place to stand, a perspective from which to evaluate. In the case of Kahneman’s evaluation of his own life, this perspective is paradoxical. His life was not over; the end was not known. It seems he was imagining his life from the perspective of someone who survived him, someone who already knew the ending.

Comments from de Lazari-Radek and Singer tend to confirm this. Kahneman, while still alive, judged that his life was “complete.” “Kahneman thought that he had completed his life,” wrote de Lazari-Radek and Singer, presumably on the basis of their conversations with him. This is a perplexing statement. Can a life be judged complete before it is over? Kahneman seemed to think so. In his messages to friends, he suggests that anything further he could do or experience would be “superfluous.”

This judgment is especially perplexing given that he purported to believe that his life was meaningless. In the interview with de Lazari-Radek and Singer, Kahneman denied that his work had any objective significance: “Other people happen to respect it and say that this is for the benefit of humanity,” but they were mistaken. “I just like to get up in the morning because I like the work.” When de Lazari-Radek and Singer argued that his work was important, he disagreed: “If there is an objective point of view, then I’m totally irrelevant to it. If you look at the universe and the complexity of the universe, what I do with my day cannot be relevant.”

If Kahneman’s life was meaningless, how could it be complete? Completion assumes a whole: a story with a beginning, middle, and end; a chord, the resolving note of which has been sounded; a picture in which all is in its place and nothing is missing. A meaningless life, a life without significance, can never be complete because it is not whole. Yet Kahneman believed his life was somehow both meaningless and complete. Of course, he made no pretense to objective judgment. His sense of completion was simply “a feeling.” “I feel I’ve lived my life well,” he said, “but it’s a feeling. I’m just reasonably happy with what I’ve done.”

De Lazari-Radek and Singer did not accept Kahneman’s assessment of his work or his life. They thought his work was valuable: “We do not agree that the size and complexity of the universe render irrelevant an individual’s work for the benefit of humanity.” And they thought Kahneman still had more work to do; he “could still enlighten audiences on how to make better decisions.” Nonetheless, they respected Kahneman’s decision to end his life: “If, after careful reflection, you decide that your life is complete and remain firmly of that view for some time, you are the best judge of what is good for you.” They added that a judgment that a person’s life was complete carried special weight “in the case of people who are at an age at which they cannot hope for improvement in their quality of life.”

But Kahneman’s staking his life on the “feeling” that it was complete remains extraordinary, given that he was a behavioral economist, much of whose work consisted of showing us that our feelings are often mistaken and distorted. Zweig suggested that Kahneman’s decision to end his life was unrelated to the principles of decision-making that he promoted in his work. It was motivated “above all” by a desire “to avoid a long decline, to go out on his terms, to own his own death.” Zweig noted that Kahneman was deeply troubled by the death of his wife in 2018, after years of dementia. His mother likewise lost much of her memory before she died. Zweig surmised that Kahneman did not want the same to happen to him. When in his last message to his friends Kahneman stated that “the miseries and indignities of the last years of life are superfluous,” this meant, Zweig thought, that Kahneman believed he faced the same fate.

In short, Kahneman was scared. He was afraid that his cognitive abilities would decline along with his body. It is an understandable fear. For most of us, the ability to move our bodies easily, to hear, see, and think clearly, to live without depending on others—these seem like the minimum requirements for living well. The prospect of losing them is frightening.

We all experience fear. Some of us change our behavior as a result: We stop riding our bikes on crowded streets, pass on that trip to the Himalayas, or avoid having children. But sometimes we don’t change our behavior, even though we are afraid. In these cases, we judge the good to be attained as worth the risk.

The word for willingness to withstand fear in pursuit of a good is “fortitude” or, more commonly, “bravery.” At first glance, to suggest that Kahneman lacked bravery seems silly. This was a man who calmly and methodically faced what many would consider the ultimate and deepest loss, death. Traditionally, a willingness to die on the battlefield or in other difficult situations has been the mark of bravery. But in one of his essays on fortitude, Josef Pieper quotes Thomas Aquinas to remind us that it is not the risking of death that matters, but the realization of the good: “To take death upon oneself is not in itself praiseworthy, but solely because of its subordination to good.” Real bravery requires a correct evaluation of things, of the risks as well as what one hopes to preserve or gain. Kahneman’s willingness to face death was brave only if it was for the sake of something good—good enough to warrant ending his life.

What was the good that Kahneman hoped to gain? Fear points to what we value. Fear arises from the perception that we are in danger of losing something we believe is good. In Kahneman’s case, it appears he feared the deterioration of his mind and body. One might say that he wished to preserve his physical and mental health. But if this were the case, ending his life was not a good solution. A dead man has certainly not succeeded in preserving his health. A more exact description of the good he pursued might be: He wanted to preserve an image of himself, free from the deterioration associated with aging. He wanted to see himself, and to be remembered by others, as he was in his vital years.

In a notorious article for the Atlantic in 2014, the bioethicist Ezekiel Emanuel articulated his desire to live no more than seventy-five years. “How do we want to be remembered by our children and grandchildren?” he asked. “We wish our children to remember us in our prime. Active, vigorous, engaged, animated, astute, enthusiastic, funny, warm, loving. Not stooped and sluggish, forgetful and repetitive, constantly asking, ‘What did she say?’ We want to be remembered as independent, not experienced as burdens.” He admitted that “with effort our children will be able to recall” the good moments. But if we live much past seventy-five, he contended, the later years—the years of disabilities and caregiving arrangements—will inevitably become the salient memories. He concluded, “Leaving them [our children]—and our grandchildren—with memories framed not by our vivacity but by our frailty is the ultimate tragedy.”

The desire to be remembered as healthy and vital is only natural. We regard blossoming flowers as more beautiful than wilted ones. A prowling tiger wins our admiration more than a wounded one. And a new, gleaming building inspires our awe more than one tottering in disrepair. In each case, the stage in which an object is regarded as at its most vital defines what we consider the object to be. A flower is known by its blossoming, the tiger by its prowling, the skyscraper by its proud piercing of the sky. We can still recognize these objects when they are not at their most vigorous, but their lives take their meaning and significance from the stage in which they most fully realized their purposes.

Still, this focus on our vital years is not satisfactory. The vital years are attractive, but they are not a totality. A flower that does not wilt is artificial. A tiger that cannot be wounded is a stuffed animal, and a building that is impervious to gravity is a fantasy. A man whose body never deteriorates is not human. To end one’s life for the sake of perpetuating a memory of oneself as vigorous and successful is a sacrifice of the real to the fake. It is not bravery.

Underlying the desire to avoid the later years of life is a failure to distinguish between living and the image of living. Preserving one’s image as immune to illness and decay might be worth ending one’s life if the image were thought commensurable with those additional years of life—if the fact of existence were essentially the same as an image of existence. In that case, a few years of life could be deemed superfluous, an unnecessary epilogue. That the image is only a story, and those extra years are years of actual existence, wouldn’t matter. Life and the image of life would be equivalent.

Kahneman’s decision to die at ninety and Emanuel’s desire not to live beyond seventy-five suggest that this equivalence has some salience. For those of this mindset, the qualitative distinction between a life and the image of a life tends to dissolve. The experience of living has no greater reality, no firmer foundation, than an image of that experience. The two are commensurable. Both can be placed on a single continuum of pleasure and pain.

In The Brothers Karamazov, Elder Zosima speaks of our connection with “mysterious worlds,” in which are buried the roots of our thoughts and feelings. These mysterious worlds are concealed from us, and yet we have been granted a sense of our living bond with them. Indifference arises when this bond dies. We might fight off indifference with distractions and passing pleasures, but it is a losing battle, especially as we age and the pleasures lose their appeal.

For a social scientist like Kahneman, the risk of losing touch with these mysterious worlds would seem acute. Much of Kahneman’s work was an attempt to describe the ways in which humans were likely to behave irrationally. This work required Kahneman to keep an eye on what was rational, according to a scientific understanding that depended on objective observation and measurement. Such a focus leaves no room for mysterious worlds.

The exclusion of mystery can be seen in Kahneman’s consideration of the study of well-being. As he became familiar with existing work in this area, he realized that almost every study depended on responses to survey questions that measured the remembering self’s assessment of well-being, not that of the experiencing self. Since he was already convinced that the assessments of the remembering self were not reliable, he sought ways to measure well-being from the perspective of the experiencing self. His solution was to assume that every moment experienced by the experiencing self and every episode known to the remembering self could be understood in terms of utility—that is, as either pleasurable or painful.

Kahneman was not naive about this categorization. He conceded in Thinking, Fast and Slow that “the experience of a moment or an episode is not easily represented by a single happiness value.” There were two obvious complications. First, feelings come in many forms. “Positive feelings” include emotions as varied as “love, joy, engagement, hope, [and] amusement.” (Likewise, “negative emotions” could include emotions as different as “anger, shame, depression, and loneliness.”) Second, positive and negative emotions may exist at the same time. An event might be experienced as amusing and shameful. Still, Kahneman insisted that it was “possible to classify most moments of life as ultimately positive or negative.”

This perspective flattens the world. Falling in love and eating ice cream differ not in the kind but only in the degree of pleasure they bestow. The death of a daughter and the traffic jam that makes us late for work differ only in the degree of pain they cause. A hug from a friend and a bite of cake are more or less the same thing. Some events may be more positive or negative than others, but in the end, all events are commensurable. All are on the continuum of utility, which cannot comprehend mystery.

Perhaps Kahneman’s utilitarian calculus was merely a method of simplifying complex issues for the sake of his experiments. But it would be difficult to leave this viewpoint at the office. Anyone who thinks in this way regularly for his work would be likely to acquire the habit in other areas of his life. For Kahneman, all was either pleasurable or painful. All that life offered fit neatly, without remainder, within the parameters of his experiments. The possibility that he might learn something new, something significant enough to change his experience of living, was evidently unthinkable.

Aging has a way of tightening our focus on matters that escaped our attention earlier. By the time we reach our seventies, the urge to make our mark on the world—through amassing wealth or power or praise—may still exist, but it will have faded. In this way, aging can clear away distractions that hid things that were there all along. What seemed a loss is a gift, for it calls attention to the fullness of existence itself.

But not every loss incurred during aging will cause us to recognize the fullness of being. Most will be experienced as losses, nothing more. Much therefore depends on whether we have an expectation of something beyond the loss, however mysterious that something may be. Entertaining this expectation is a habit of thought that determines whether we experience life as a dead end or as an adventure in understanding what it means to live well.

One might expect Kahneman to have developed this habit of thought. But we know that he regarded his work not as a pursuit of the truth about living well, which would redound to “the benefit of humanity,” but rather as a private amusement, like a crossword puzzle.

He may have had a point there. Many of the experiments on which Kahneman’s work relies seem too contrived to reveal much that is profound or helpful to living. One wonders what we can really learn about well-being or human flourishing from our reactions to painful colonoscopies or the amount of time we hold our hands in cold water. Likewise with assessments of the life of a woman who might live five more “slightly happy” years. The experiments are clever, but they are detached from the concrete experience of living. Perhaps after a lifetime of developing experiments of this kind and being handsomely rewarded for it, Kahneman sensed their hollowness.

Was Kahneman’s decision the right one? None of us can know what he faced or the reasons for his actions. Still, we can hope that when our time comes, we will have the courage to withstand the pain and indignities of aging. Elder Zosima says of those who take their own lives, “There can be no one unhappier than they.” Even though he knew the Church considered suicide a grave sin, he confessed that he prayed for them and thought “in the secret of [his] soul” that he was permitted to do so, for “Christ will not be angered by love.” Perhaps this is what we owe Daniel Kahneman: not our condemnation, but our prayers.


Image by nrkbeta, licensed via Creative Commons. Image cropped. 

How to Become a Low-Tech Family
https://firstthings.com/how-to-become-a-low-tech-family/
Wed, 26 Nov 2025
The Tech Exit:
A Practical Guide to Freeing Kids and Teens from Smartphones

by Clare Morell

Penguin Random House, 256 pages, $27

Is there a life beyond the screen? In 2010, Nicholas Carr’s The Shallows described what the internet was doing to our brains. Although still relevant today, Carr’s book came on the scene before two major events: the rapid proliferation of smartphones and the explosion of social media activity. In 2024, Jonathan Haidt’s The Anxious Generation showed that these new developments had fueled dramatic increases in rates of depression, anxiety, and other mental health issues in Gen-Z teens and young adults—and left many unable to conceive of a life that isn’t saturated by screens.

Silicon Valley, we have a problem.

Clare Morell’s The Tech Exit is strong on solutions and strong on hope. Morell begins by laying out the problem, taking aim at two powerful myths in our culture, myths widely repeated because they sound so reasonable. The first is that if technology is harming your child, you can remedy the situation with screen-time limits. Children can use screens less—an hour a day, for example. 


As Morell points out, time limits simply don’t work. Teens crave social acceptance and peer approval, and these cravings are only amplified by screens. Moreover, digital experiences can make the real world feel so unbearably dull that, even after only a little time online, kids will keep longing to return to their devices. The result? Parents who try to enforce screen-time limits “are constantly having to stand between a drug-dispensing machine and an underdeveloped brain. It’s an untenable, exhausting situation.”

The second myth: Parental app controls are effective at limiting kids’ access to harmful online content. Morell demolishes this fanciful idea, pointing out that it’s often easy to find work-arounds and loopholes. She recounts how one boy circumvented a parental monitoring app on his phone by going into the app itself, clicking on the Support button, and then searching for porn from within the browser that opened up. 

And Morell reminds us that kids don’t even need to go looking for explicit material. Social media is a conveyor belt for content so dehumanizing, violent, and grotesque that “the porn children view today makes Playboy look like an American Girl doll catalogue.”

Though The Tech Exit includes some disturbing and tragic stories, refreshingly it focuses not on digital harms, but on reclaiming a life free of screens. Freedom begins with what Morell calls the “fast”: a total screen detox. In a culture such as ours, which operates according to the Rule of Tech Ubiquity—technology anywhere, anytime, for anyone, to do anything—complete withdrawal may be a daunting proposal. There are nuances and exceptions in Morell’s approach, but the initial prolonged period of abstinence is essential if we are to discover who our children are.

The first couple of weeks, Morell concedes, can be “hell,” with kids upset or distressed that they can’t use their screens anymore, and parents having to spend far more time with their children. (Five hours of Monopoly a day, anyone?) But as one mom observed, once tech “gets out of their system and if you hold your guns, you will see these versions of your kid that you’re like, ‘Well, if I had known this, I would have done this forever.’”

The core of Morell’s prescription is not the fast but the FEAST—her acronym for five basic commitments toward reclaiming a low-tech life. Find and connect with likeminded families; get buy-in from kids by explaining and educating them on the harms of digital tech, and exemplifying healthy tech use; adopt alternatives to smartphones; set up accountability and screen rules; and trade screens for real-life responsibilities and pursuits. 

Morell emphasizes that families cannot be islands of tech resistance, but must join with other families in their neighborhoods and schools. This gives parents allies and kids friends who aren’t on screens. Some families sign on to a version of the Postman Pledge (named after Neil Postman, the media critic), formally affirming their commitment to limit their tech use and build a new community ethos. Specific tech-reduction strategies include: having a landline at home; replacing smartphones with “dumb phones,” with only call and text options; giving children not just chores, but “adult” responsibilities such as cooking and shopping (or home repair, we would add); encouraging play in nature, walks, reading, board games, crafting, music, journaling, and tinkering.

All that might sound ambitious, but our impression—having lived as a low-tech family for twenty years—is that Morell’s approach is realistic and her hope well-founded. Some reviews of The Tech Exit have accused Morell of idealism, as if she assumed that life will be perfect once the screens are gone or greatly minimized. This is a misreading. If anything is unrealistic, it is the conventions of contemporary parenting.

Many years ago, long before we had our own children, we attended a christening. It was a solemn liturgical ceremony, as high as high church can get, except at the end, when the priest passed the swaddled infant back to the husband and wife and casually remarked, “Good luck with your project.”

It sent a chuckle through the congregation. It seemed just a quip at the time, but now, as we look back at that event a quarter century ago, we can’t help wondering whether the priest meant something more. Our culture, then, was drifting from a traditional view of family, which emphasized parental responsibility and self-sacrifice, toward an emphasis on parents’ personal fulfillment.

The same attitude was transmitted to children. The result, today, is “acceptance parenting,” whose central mantra, as Mary Harrington has written, is: “I don’t mind what you do. I just want you to be happy.” If so much of what we do, and so much of what makes us happy and fulfilled, emanates from a screen, then the idea of a screen-free or low-tech life remains—for many parents and their children—an unfathomable proposition.

The Tech Exit doesn’t address this foundational change in our culture. It assumes that parents will gladly take on the duty of instructing, guiding, and setting rules—such as “no private tech use” during childhood. This assumption doesn’t negate Morell’s message, but her approach might resonate primarily with parents who are comparatively traditional or authoritative in their parenting styles. 

For these parents, at least, a low-tech existence is not just possible, but fruitful—a life in which children play outside together, a life of music and games and homes full of books, where “screen entertainment is a rare treat, not a daily occurrence, and where parents might decide to equip their teenage son with a pickup truck rather than a smartphone.”

The closing chapters of Morell’s book shift from ground-up FEAST solutions—what we as parents and families can change—to top-down solutions: how our laws and policies need to change. For instance, kids can easily lie about their ages in order to access adult content. Our age-verification law is useless, almost by design. Meanwhile, a law known as Section 230 is so abysmally written that internet companies face “absolutely no consequences for promoting child sex abuse material”—even if it results in the sex trafficking or death of children.

What keeps our laws so impotent? Morell, as a former adviser to Attorney General Bill Barr, writes: “Often as a bill gets close to getting a vote, either in Congress or in a state legislature, Big Tech swoops in with their armies of lobbyists and lawyers, an incredibly organized machine, and mounts a tremendous pressure and intimidation campaign to scare lawmakers away or buy them off.” Nevertheless, this is not a fatalistic book. Its last section, in particular, offers feasible pathways for change that can be implemented by schools, school districts, and entire towns.  

Brad East has observed that when Christians write about technology, they tend to rehearse truisms: “God is the source of all creativity; God made us to be makers; any tool can be bent toward sin or gospel service; what we need are wisdom and virtue and good habits.” This sentiment, when applied to smartphones, is like printing a holy icon onto a pack of Marlboros and expecting teen smoking to serve the good, the beautiful, and the true.

Still, worldview matters. Our ultimate beliefs—those that declare the “first things” of life—shape our values and behaviors. Even the best tech books are often reluctant to acknowledge this point. The illuminating chapter on spirituality in Jonathan Haidt’s Anxious Generation is about psychological feeling and spiritual practice more than existential beliefs. In the final pages of The Tech Exit, Morell discusses Maslow’s self-actualization theory and our need to transcend ourselves by “focusing on things beyond the self like altruism and spiritual awakening,” but again the theme is individual growth. 

So, although Morell demolishes popular beliefs around screen-time limits and parental controls, she sidesteps a conversation about ultimate beliefs. Yet just as her strategies and practices make sense only within a certain model of parenting, the FEAST to which The Tech Exit points us—real relationships and pursuits in the real world—makes sense only within a worldview that gives primacy to these domains.

Could Christianity serve as this model? It might, depending on how we frame the model, and assuming we go beyond vague and watered-down recommendations about the use of our God-given powers of creativity. There are more vital imperatives to consider. We are commanded to be stewards of the primary things God made. Above all, we are commanded to love God and each other. 

Certain corollaries follow. We cannot allow virtual reality to become more important than physical reality; we cannot allow an impulsive or emotional attraction to social media or AI to become more important than our relationships of love and self-sacrifice to real people. And if technological innovations interfere with the primary imperatives, then the innovations must be rejected or radically modified.

Not everyone will agree with this application of the Christian worldview. But only an encompassing worldview can provide a foundation for our tech-reduction strategies. Without a foundation, individual strategies become ideas on a checklist, difficult to sustain amid technological and social pressures.

The Tech Exit speaks to parents who want to save their children from the digital universe. In this way it taps into the love of mothers and fathers for their sons and daughters. A parent’s love is a powerful motivation, but not everybody is a parent or a child. Our whole society would benefit from a more principled approach to technology use. 

When it comes to managing our screens and devices, the only consensus, so far, is that if our tech makes us suffer badly enough, we should stop using it. But why make suffering the motivator? We have just undergone an uncontrolled global experiment in what smartphones and social media can do to the mental health of our kids. AI is now enticing those same kids, and us, into a new rat cage. But we are not obliged to plunge into another reckless experiment. 

Morell concludes her book with these words: “Removing digital tech from childhood is the first step, but the far greater task ahead of us is to reclaim true human flourishing.” Quite so. This ambitious and essential book throws open the door to reality. Do we have the courage to step through?

The post How to Become a Low-Tech Family appeared first on First Things.

The Rest as History https://firstthings.com/the-rest-as-history/ Wed, 19 Nov 2025 06:00:00 +0000 https://firstthings.com/?p=113439
Israel’s Day of Light and Joy:
The Origin, Development, and Enduring Meaning of the Jewish Sabbath


by Jon D. Levenson
Eisenbrauns, 296 pages, $24.95

The Sabbath is making a comeback. Across the West, that most singular and ancient of weekly phenomena—a day marked by the absence of market forces, digital devices, and the manic demands of professional productivity—is enjoying a curious renaissance. The notion that modern individuals desperately need systematic respite from the matrix of expectations and neuroses imposed on them by their world is no longer marginal. Perusing the smorgasbord of self-help gurus, parenting manuals, mindfulness retreats, and decluttering guides, one constantly encounters paeans to the digital detox, frequently termed a “tech Sabbath.”

That highly successful people in the twenty-first century are rediscovering the power, beauty, and necessity of a millennia-old biblical custom will come as a surprise to everyone except those who already observe it. For those of us fortunate enough to live our lives within this propitious rhythm, the only surprising thing about this rediscovery is its belatedness. The blessings of the weekly Sabbath, aptly described by Talmudic rabbis as “one-sixtieth of heaven,” require no elaboration beyond direct experience. The Sabbath’s power has been evident for millennia. Jews across the centuries have undergone every tribulation and oppression dreamt up by humanity. Yet every week, for twenty-four hours, they have returned—liturgically and psychologically—to a state of numinous tranquility. They have rested, they have remembered, and they have affirmed their allegiance to a world in which time itself can be sanctified and the future redeemed. When Ahad Ha’am, a decidedly non-religious Zionist thinker, remarked that “more than the Jews have kept the Sabbath, the Sabbath has kept the Jews,” his exaggeration was, at most, slight.

Given the antiquity and centrality of the Sabbath to both the Jewish and the Christian traditions, it is unsurprising that a number of modern authors have sought to explicate and re-enchant this weekly institution. Perhaps the best-known effort in this vein is The Sabbath (1951), by the neo-Hasidic philosopher Abraham Joshua Heschel. That slim volume, overflowing with fabulously poetic aperçus, invited its readers to taste eternity in the guise of sacred time. As a psychospiritual tour of the Sabbath’s “palaces in time,” it has yet to be surpassed. Honorable mention must also be made of Erich Fromm’s To Have or to Be? (1976), in which the psychoanalyst’s coruscating insights illuminate the Sabbath’s role in counterbalancing the acquisitive instincts of a disenchanted world. Yet our moment, characterized by a profusion of information and an impoverishment of wisdom, demands a reconceptualization of the Sabbath that is both spiritually sensitive and intellectually rigorous, attuned equally to the history and to the phenomenology of this remarkable institution.

Stepping into this role with characteristic erudition and eloquence is my eminent mentor Jon D. Levenson. In an academic world increasingly defined by methodological parochialism, Levenson’s work has always stood apart. He has the rare capacity to harvest from a wide range of academic fields—history, theology, biblical criticism, rabbinics, and philosophy—in service of extensive and penetrating considerations of enduring theological questions. Levenson writes with exquisite religious sensibility, conveying a sense not only of the outer forms of religious praxis but also of the strivings, emotions, and aspirations that accompany them. Israel’s Day of Light and Joy is vintage Levenson, evincing the breadth of scholarship, felicity of articulation, and twinkle-eyed wit with which he has reigned over seminar rooms and lecture halls for many decades.

The early chapters address a set of questions concerning the origins of the Sabbath itself. How and when did this institution arise? Does it appear consistently across the canon of the Hebrew Bible, or are there variants, slowly converging toward coherence? Do analogues exist in other ancient cultures? Levenson leads the reader across the landscape of accepted scholarship, even dipping a toe in the waters of speculation.

His most salient claim is that the šabbāt of the Hebrew Bible may have originated in connection with a Babylonian full moon festival (šabattu), becoming synonymous with the “seventh day” only through a lengthy process of theological convergence and calendrical standardization. He reminds us that the seven-day week itself is a non-natural phenomenon. Unlike the day (solar rotation), month (lunar cycle), and year (earth’s revolution), the seven-day structure appears sui generis. Its only general analogue in the ancient world exists within the Greco-Roman astronomical system, with each day being governed by a celestial body (hence the name of our modern seventh day, derived from “Saturn’s Day”).

Yet for all such similarities, Levenson’s most forceful point is the uniqueness of the theologically freighted biblical Sabbath. Ordained from the start as a moment of sanctity and transcendence, it has no true parallel in any ancient civilization. It is no mere “Day of Rest” (although cessation from work is important), nor is it a tribute to the powers of the planetary spheres that were once invested with deterministic power. Such a pagan cosmology, in Bertrand Russell’s arresting formulation, views humankind as “a small thing in comparison with the forces of Nature,” a pitiable slave “doomed to worship Time and Fate and Death, because they are greater than anything he finds in himself.” The Sabbath stands as a ritualized repudiation of inexorable temporality. The Sabbatical observer declares his faith in a vision of the cosmos in which humanity is not an isolated speck adrift in indifference, but a covenantal being of irreducible significance, bound from inception to God, community, and creation. This paradigm shift, foundational to the biblical revolution, is ratified every week by the imbrication of a non-natural unit of sacred time within an otherwise cosmological calendar. This jarring break functions as a subtle yet transformative simulacrum of the Bible’s insistence on mankind’s unique dual status: as a being confronted at once by both the majesty of a powerful universe and the sovereignty of its all-powerful author.

Levenson devotes much time to comparing Jewish and Christian approaches to the Sabbath, particularly around the question of legal regulation. Rabbinic tradition, from its earliest texts, surrounds the Sabbath with a latticework of prohibitions, customs, and finely wrought distinctions. The act of ceasing from labor, it turns out, requires extensive and exhaustive attention. For many Christian interpreters, this has seemed paradoxical, if not absurd. For how can a day of spiritual liberation be reduced to a list of technicalities? At its worst, this rabbinic normativity is caricatured as a monument to desiccated Pharisaism, overcome by the liberating spiritualization of Christian grace.

Levenson rejects this caricature. Following a venerable line of halakhic thought, he argues that it is through law—precise, enforceable, and shared—that the Sabbath achieves its character. To observe the Sabbath is not merely to embrace a state of mind, but to enter into a communal choreography of rest. The laws governing the Sabbath, correctly conceived, cannot be dismissed as mere crabbed legalism. Far from hindering spiritual praxis, they underwrite it. This tightly guarded and defined form of rest also safeguards the socio-ethical component of this institution, compelling as it does kings and paupers, seigneurs and peasants, humans and animals, to return to a prelapsarian state of freedom and fellowship. This radically egalitarian state becomes possible only within a matrix of normative constraint. If Heschel rhapsodized about the Sabbath’s “palace in time,” Levenson reminds us that these marvels of spiritual architectonics require floor plans.

As elsewhere, Levenson’s work here demonstrates a deep sympathy with rabbinic interpretations of the biblical texts, as well as with their traditionalist heirs in the medieval and modern canons of Jewish scholarship. Levenson’s competence in these frequently difficult textual traditions, and his sensitivity to their subterranean theological subtleties, are uncommon for a biblical scholar and, indeed, are lacking in some modern theologians. This sympathy leads him not only to oppose the classic Pauline approach to the Sabbath, but also to point out the lamentable failure of various reformist denominations of Judaism to preserve the “essence” of the Sabbath while eviscerating its legal frameworks. Some may view these commitments as a flaw in his analytical approach. Others will count them as a strength and a welcome counterbalance to the pervasive misapprehension that rabbinic hermeneutics are inimical to sound scholarship and reasoning.

The chapter with the greatest contemporary relevance—and the chapter this reviewer wishes could have been more extensive—is this book’s final one, which details the challenges posed by the Sabbath to modernity, and vice versa. Levenson notes that, for Orthodox Jews in particular, the Sabbath now functions as a weekly act of defiance against the instrumentalization of human life. The refusal to use technology, to conduct commerce, or to attend to digital devices forms a profoundly countercultural posture, a theological protest against the mechanization of existence. Highlighting and entrenching Heschel’s observations, Levenson notes that various forms of the “secular Sabbath” bear only the palest semblance to the genuine article. True Sabbath is not a tool for a more efficient Monday. It stands as a reminder that human life and dignity are ends in themselves, imbued with the eternity of the divine image and the attendant obligation to tend to those parts of our lives and personhood that cannot be priced on the market, yet have worth beyond number. In an age of overstimulation, the Sabbath is a rare opportunity to step off the treadmill that claims so much of our time and attention, and dedicate ourselves to restoring our tranquility, dignity, and, ultimately, our humanity.

To be sure, Levenson’s work is hardly the final word on the Sabbath. Some of his historical claims invite further scrutiny, and his alignment with rabbinic traditionalism will perhaps alienate some of his readers. The extent to which the Jewish Sabbath is truly equipped to function as a counterweight to the excesses of twenty-first-century life is a subject that demands more extensive reflection. Yet this book’s great strength is in its aspiration: It dares to treat the Sabbath as neither a museum artifact nor an ethereal phantasm, but as a vibrant historical, theological, and moral institution, with the power to alter individual and communal rhythms of life.

To understand the Sabbath is to grasp something elemental about Jewish history, biblical anthropology, and the metaphysics of time itself. It is to encounter a vision of life in which the world is not merely a field for toil but a garden of repose, to be received in humility and joy. Levenson’s luminous work offers us the best starting point yet for such an encounter. Israel’s Day of Light and Joy is a book worthy of the day it honors.

The post The Rest as History appeared first on First Things.

The Common Sense of John Searle https://firstthings.com/the-common-sense-of-john-searle/ Fri, 14 Nov 2025 06:00:00 +0000 https://firstthings.com/?p=113205
The twentieth-century philosopher Wilfrid Sellars drew an influential distinction between “the manifest image,” which is the way the world is presented to us in everyday experience and common sense, and “the scientific image,” which is the description of the world offered by scientific theory. For some thinkers in the Western tradition, there is a sharp conflict between these images—think of Zeno’s view that motion is an illusion, idealists who deny the reality of matter, or those who insist that science has disproved the existence of free will. But other philosophers argue that, rightly understood, the two images are in harmony. Aristotle and Thomas Aquinas are examples.

Another is John Searle, who died on September 17 at the age of ninety-three. Searle taught at the University of California at Berkeley for sixty years. He was active in the campus’s famous Free Speech Movement in the 1960s, though he came to criticize the excesses of student protesters. He made several first-rate contributions to academic philosophy, while also attaining an unusual degree of influence outside the field. The latter was facilitated by his crystal clarity as a writer and public speaker, and a personal style that was often as humorous as it was self-confident and pugnacious.

If it’s true that a man can be known by his enemies, then it tells us much about Searle that he had famous public disputes with the deconstructionist Jacques Derrida and the materialist Daniel Dennett. Both thinkers, in Searle’s view, peddled nonsense in the guise of sophisticated theory. In Derrida’s case, the sophistry involved putting forward the bold but absurd thesis that nothing exists outside of texts and then, when challenged, retreating into the perfectly reasonable but banal observation that nothing exists except in some context. Dennett’s sleight of hand was more subtle. He would pretend to be giving a materialist explanation of consciousness, but on closer inspection, Searle argued, he was actually denying that consciousness existed.

The controversy for which Searle was best known, however, concerned Artificial Intelligence. According to a criterion for intelligence proposed by the mathematician Alan Turing, if a machine could, in response to questions, produce answers that were indistinguishable from those a human being might give, then we would have every reason to judge that it was literally intelligent. Searle rebutted this claim in his Chinese Room Argument.

Imagine that Searle, who knows no Chinese, sits in a room with a set of Chinese symbols and a rulebook in English telling him which combinations of symbols to give out in response to written questions slipped to him through a slot in the door. The rulebook does not tell him what the symbols mean; it simply allows him to mimic the behavior of a person who does. What Searle would be doing in this scenario, he argued, is essentially what a computer does: manipulating symbols according to the rules of an algorithm. The resulting mimicry, no matter how convincing, would not yield genuine understanding of Chinese. Likewise, what computers do can never amount to the operations of true intelligence, but only a simulation of it.

This much-debated argument is one of several by which Searle resisted the reductionist tendencies of contemporary philosophy, which are often fallaciously promoted in the name of science. In the middle of the twentieth century, logical positivists attempted to reduce all meaningful discourse to the descriptive language of formal logic and empirical science. Searle’s first book, Speech Acts, which built on the work of his teacher J. L. Austin, was among several key texts that led Anglo-American philosophers to take a more nuanced approach to language and its multifarious uses.

Just as Searle rejected the thesis that computers might exhibit genuine intelligence, so too did he criticize the popular view that the human mind is a kind of software implemented on the hardware of the brain. For one thing, the brain cannot, in Searle’s view, properly be characterized as hardware. Computers, he argues, are not naturally occurring objects, as stones, trees, and bacteria are. They are a human artifact, just as chairs, can openers, and airplanes are. Nothing is intrinsically a chair. An object counts as a chair only relative to an interpretation assigned to it by human observers. The same is true of computers. It makes no sense to explain the human mind by reference to the idea that the brain is computer hardware, since the brain counts as “computer hardware” only relative to the interpretation imposed by a human mind. The software model of the mind thus puts the cart before the horse.

Searle was highly critical of other versions of materialism as well, such as the extreme “eliminativist” thesis that if beliefs, desires, and other mental states cannot be explained in neurobiological terms, then they do not exist at all. Searle’s 1992 book The Rediscovery of the Mind is a tour de force, a sustained demolition of what had by then become a dogmatic reductionist orthodoxy in contemporary philosophy of mind.

In his books Rationality in Action and Freedom and Neurobiology, which appeared in the 2000s, Searle turned to the topic of free will. He argued against the idea that our actions are the necessitated effects of mental events of which we are merely the passive observers—as if everything we think and do simply happened to us. Choice, of its very nature, involves an irreducible and persisting self, which actively brings things about as a result of deliberation. There is a causal gap between our beliefs and desires on one hand and our behavior on the other, and only the self can fill that gap, by way of its agency. Searle’s view is that whether or not we can strictly prove the reality of freedom, we have no intelligible model of human action without it.

In his later work, Searle’s central interest was the nature of social facts in general and of social institutions in particular. His account of the way social facts are grounded in language, and language in turn is grounded in the mind, imparted a unity to what might otherwise seem to be disparate themes in his work. Searle was typically thought of as a philosopher of language and of mind. But by the end of his career, he had produced what would more traditionally have been described as a systematic philosophical anthropology.

The main weakness of Searle’s work is that he never developed a metaphysics that was as carefully worked out as his anthropology. Searle’s materialist critics often accused him of being a dualist in the Cartesian mold, a charge he always denied. He was as committed as the materialists were to the thesis that the natural world is all there is. He simply thought their way of fitting human beings into that world was simplistic. The trouble is that the conception of nature he shared with them, which he never seriously questioned, makes his position unstable. When he emphasized the continuity of human beings with the larger natural world, he sometimes sounded like he was offering just another riff on materialism. But when (as was more often the case) he emphasized how radically different the human mind is from everything else in nature, the charge of dualism was hard to rebut.

Like his materialist critics, Searle also had a tin ear for religion; he once suggested that only “bores” could think religion was still intellectually credible. Fortunately, his attitude did not deter him from the occasional friendly engagement with the Dominicans at Berkeley’s Dominican School of Philosophy and Theology.

Searle’s last years were very difficult. Charges of sexual harassment did enormous harm to his reputation and career, and he was stripped of his emeritus title at UC Berkeley. He essentially disappeared from public life. Speaking of the specific allegations that led to his downfall, his longtime secretary Jennifer Hudin has said (in an email that was published online after his death) that “after an extensive and intrusive investigation, these allegations were never found to be true.”

Whatever the facts and whatever Searle’s personal views about religion—or rather, all the more so in view of these things—many of us who admired him and benefited from his work pray earnestly that the reality of God was one further bit of common sense that he came to appreciate in his final days.


Image by Sascia Pavan, licensed via Creative Commons. Image cropped. 

The post The Common Sense of John Searle appeared first on First Things.
